VMS Help
MACRO, VAX MACRO Assembler, Vector Instructions
The assembler notation uses a format that is different from the
operand specifiers for the vector instructions. The number and
order of operands is not the same as the instruction-stream format.
For example, vector-to-vector addition is written in the assembler
as "VVADDL V1, V2, V3" rather than as a single control-word
operand. The assembler always generates immediate addressing mode
(I^#constant) for vector
control word operands. The assembler notation for vector
instructions uses opcode qualifiers to select whether vector
processor exception conditions are enabled or disabled, and to
select the value of cntrl<MTF> in masked, VMERGE, and IOTA
operations. The appropriate opcode is followed by a slash (/). The
following qualifiers are supported:
o The qualifier U enables floating underflow. The qualifier V
enables integer overflow. Both qualifiers set cntrl<EXC>. By
default, no vector processor exception conditions are enabled.
o The qualifier 0 denotes masked operation on elements for which
the Vector Mask Register (VMR) bit is 0. The qualifier 1 denotes
masked operation on elements for which the VMR bit is 1. Both
qualifiers set cntrl<MOE>. By default, operations are not masked.
o For the VMERGE and IOTA instructions only, the qualifier 0
denotes cntrl<MTF> is 0. The qualifier 1 denotes cntrl<MTF> is 1.
Cntrl<MTF> is 1 by default. Cntrl<MOE> is not set in this case.
o For the VLD and VGATH instructions only, the qualifier M
indicates modify intent (cntrl<MI> is 1). The default is no modify
intent (cntrl<MI> is 0).
The following examples use several of these qualifiers:
VVADDF/1 V0, V1, V2 ;Operates on elements with mask bit set
VVMULD/0 V0, V1, V2 ;Operates on elements with mask bit clear
VVADDL/V V0, V1, V2 ;Enables exception conditions
                    ;(integer overflow here)
VVSUBG/U0 V0, V1, V2 ;Enables floating underflow and
;Operates on elements with mask bit clear
VLDL/M base,#4,V1 ;Indicates Modify Intent
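The qualifiers above all map onto bits of the 16-bit vector control word whose layout appears in the diagrams throughout this section. A minimal Python sketch of that packing (the helper name is illustrative, not part of the architecture):

```python
def vector_control_word(va=0, vb=0, vc=0, moe=0, mtf=0, exc=0, mi=0):
    """Pack a 16-bit vector control word per the bit diagrams:
    bit 15 = MOE, bit 14 = MTF, bit 13 = EXC (or MI for VLD/VGATH),
    bit 12 = 0, bits 11:8 = Va (or 0), 7:4 = Vb (or 0), 3:0 = Vc."""
    word = (moe << 15) | (mtf << 14) | ((exc | mi) << 13)
    word |= (va & 0xF) << 8 | (vb & 0xF) << 4 | (vc & 0xF)
    return word

# VVADDL/V V0, V1, V2: EXC set by /V; Va=0, Vb=1, Vc=2
print(hex(vector_control_word(va=0, vb=1, vc=2, exc=1)))  # 0x2012
```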
Generate Compressed Iota Vector
Format:
IOTA [/0|1] stride, Vc
Architecture
Format
opcode cntrl.rw, stride.rl
opcodes
EDFD IOTA Generate Compressed Iota Vector
vector_control_word
1 1 1 1 1
5 4 3 2 1 8 7 4 3 0
+-+-+-+-+-------+-------+-------+
| |M| | | | | |
|0|T|0|0| 0 | 0 | Vc |
| |F| | | | | |
+-+-+-+-+-------+-------+-------+
exceptions
None.
operation
j <- 0
tmp <- 0
FOR i <- 0 TO VLR-1
BEGIN
IF {VMR<i> EQL MTF} THEN
BEGIN
Vc[j]<31:0> <- tmp<31:0>
j <- j + 1
END
tmp <- tmp + stride
END
VCR <- j !return vector count
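The operation above can be sketched in Python; this is an illustrative model of the pseudocode, not an implementation of the instruction:

```python
def iota(stride, vlr, vmr, mtf=1):
    """IOTA sketch: compress multiples of `stride` for elements whose
    VMR bit equals MTF; returns (Vc contents, VCR element count)."""
    vc = []
    tmp = 0
    for i in range(vlr):
        if vmr[i] == mtf:
            vc.append(tmp & 0xFFFFFFFF)   # Vc[j]<31:0> <- tmp<31:0>
        tmp += stride                      # tmp advances every iteration
    return vc, len(vc)

# stride 4, VLR 6, mask 1,0,1,1,0,1: offsets of the selected elements
print(iota(4, 6, [1, 0, 1, 1, 0, 1]))  # ([0, 8, 12, 20], 4)
```

Note that `tmp` advances on every iteration, including masked-off elements, which is what makes the result a compressed offset vector.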
Move from Vector Processor
Format:
{ MFVCR }
{ MFVLR }
{ MFVMRLO }
{ } dst
{ MFVMRHI }
{ SYNCH }
{ MSYNCH }
{ }
Architecture
Format
opcode regnum.rw, dst.wl
opcodes
31FD MFVP Move from Vector Processor
vector_control_word
None.
exceptions
None.
operation
CASE regnum OF
0: dst <- ZEXT{VCR}
1: dst <- ZEXT{VLR}
2: dst <- VMR<31:0>
3: dst <- VMR<63:32>
4: SYNC
dst <- UNPREDICTABLE
5: MSYNC
dst <- UNPREDICTABLE
>5: Reserved
END
MFVP instructions that specify reserved values of the regnum
operand produce UNPREDICTABLE results.
Move to Vector Processor
Format:
{ MTVCR }
{ MTVLR }
{ MTVMRLO }
{ } src
{ MTVMRHI }
Architecture
Format
opcode regnum.rw, src.rl
opcodes
A9FD MTVP Move to Vector Processor
vector_control_word
None.
exceptions
None.
operation
CASE regnum OF
0: VCR <- src
1: VLR <- src
2: VMR<31:0> <- src
3: VMR<63:32> <- src
>3: Reserved
END
Move to Vector Processor instructions that specify reserved values
of the regnum operand produce UNPREDICTABLE results.
Vector Floating Add
Format:
vector + vector:
{ VVADDF }
{ VVADDD } [/U[0|1]] Va, Vb, Vc
{ VVADDG }
{ }
scalar + vector:
{ VSADDF }
{ VSADDD } [/U[0|1]] scalar, Vb, Vc
{ VSADDG }
{ }
Architecture
Format
vector + vector:
opcode cntrl.rw
scalar + vector (F_floating):
opcode cntrl.rw, addend.rl
scalar + vector (D_ and G_floating):
opcode cntrl.rw, addend.rq
opcodes
84FD VVADDF Vector Vector Add F_Floating
85FD VSADDF Vector Scalar Add F_Floating
86FD VVADDD Vector Vector Add D_Floating
87FD VSADDD Vector Scalar Add D_Floating
82FD VVADDG Vector Vector Add G_Floating
83FD VSADDG Vector Scalar Add G_Floating
vector_control_word
1 1 1 1 1
5 4 3 2 1 8 7 4 3 0
+-+-+-+-+-------+-------+-------+
|M|M|E| | Va | | |
|O|T|X|0| or | Vb | Vc |
|E|F|C| | 0 | | |
+-+-+-+-+-------+-------+-------+
exceptions
floating overflow
floating reserved operand
floating underflow
operation
FOR i <- 0 TO VLR-1
IF {{MOE EQL 0} OR {{MOE EQL 1} AND {VMR<i> EQL MTF}}} THEN
BEGIN
IF VVADDF THEN
Vc[i]<31:0> <- Va[i]<31:0> + Vb[i]<31:0>
IF VSADDF THEN
Vc[i]<31:0> <- addend + Vb[i]<31:0>
IF VVADDD OR VVADDG THEN
Vc[i] <- Va[i] + Vb[i]
IF VSADDD OR VSADDG THEN
Vc[i] <- addend + Vb[i]
END
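The masked-operation test in the pseudocode above is common to most vector instructions in this section. A Python sketch of its effect (illustrative names only):

```python
def vvadd(va, vb, vlr, vmr=None, moe=0, mtf=0):
    """Masked element-wise add following the operation above: with MOE
    set, only elements whose VMR bit equals MTF are written; other
    destination elements keep their previous contents (None here)."""
    vc = [None] * vlr
    for i in range(vlr):
        if moe == 0 or vmr[i] == mtf:
            vc[i] = va[i] + vb[i]
    return vc

print(vvadd([1.0, 2.0, 3.0], [10.0, 10.0, 10.0], 3,
            vmr=[1, 0, 1], moe=1, mtf=1))  # [11.0, None, 13.0]
```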
Vector Integer Add
Format:
vector + vector:
VVADDL [/0|1] Va, Vb, Vc
scalar + vector:
VSADDL [/0|1] scalar, Vb, Vc
Architecture
Format
vector + vector: opcode cntrl.rw
scalar + vector: opcode cntrl.rw, addend.rl
opcodes
80FD VVADDL Vector Vector Add Longword
81FD VSADDL Vector Scalar Add Longword
vector_control_word
1 1 1 1 1
5 4 3 2 1 8 7 4 3 0
+-+-+-+-+-------+-------+-------+
|M|M|E| | Va | | |
|O|T|X|0| or | Vb | Vc |
|E|F|C| | 0 | | |
+-+-+-+-+-------+-------+-------+
exceptions
integer overflow
operation
FOR i <- 0 TO VLR-1
IF {{MOE EQL 0} OR {{MOE EQL 1} AND {VMR<i> EQL MTF}}} THEN
BEGIN
IF VVADDL THEN
Vc[i]<31:0> <- Va[i]<31:0> + Vb[i]<31:0>
IF VSADDL THEN
Vc[i]<31:0> <- addend + Vb[i]<31:0>
END
Vector Logical Functions
Format:
vector op vector:
{ VVBISL }
{ VVXORL } [/V[0|1]] Va, Vb, Vc
{ VVBICL }
{ }
vector op scalar:
{ VSBISL }
{ VSXORL } [/V[0|1]] scalar, Vb, Vc
{ VSBICL }
{ }
Architecture
Format
vector op vector: opcode cntrl.rw
vector op scalar: opcode cntrl.rw, src.rl
opcodes
C8FD VVBISL Vector Vector Bit Set Longword
E8FD VVXORL Vector Vector Exclusive-OR Longword
CCFD VVBICL Vector Vector Bit Clear Longword
C9FD VSBISL Vector Scalar Bit Set Longword
E9FD VSXORL Vector Scalar Exclusive-OR Longword
CDFD VSBICL Vector Scalar Bit Clear Longword
vector_control_word
1 1 1 1 1
5 4 3 2 1 8 7 4 3 0
+-+-+-+-+-------+-------+-------+
|M|M| | | Va | | |
|O|T|0|0| or | Vb | Vc |
|E|F| | | 0 | | |
+-+-+-+-+-------+-------+-------+
exceptions
None.
operation
FOR i <- 0 TO VLR-1
IF {{MOE EQL 0} OR {{MOE EQL 1} AND {VMR<i> EQL MTF}}} THEN
BEGIN
IF VVBISL THEN
Vc[i]<31:0> <- Va[i]<31:0> OR Vb[i]<31:0>
IF VSBISL THEN
Vc[i]<31:0> <- src OR Vb[i]<31:0>
IF VVXORL THEN
Vc[i]<31:0> <- Va[i]<31:0> XOR Vb[i]<31:0>
IF VSXORL THEN
Vc[i]<31:0> <- src XOR Vb[i]<31:0>
IF VVBICL THEN
Vc[i]<31:0> <- {NOT Va[i]<31:0>} AND Vb[i]<31:0>
IF VSBICL THEN
Vc[i]<31:0> <- {NOT src} AND Vb[i]<31:0>
Vc[i]<63:32> <- Vb[i]<63:32>
END
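Note the operand order in the Bit Clear forms above: the first operand (Va or src) supplies the mask that is complemented, and Vb is the value being cleared. A quick Python sketch of that quirk (illustrative only):

```python
def vvbicl(va, vb):
    """VVBICL sketch: Vc[i] <- {NOT Va[i]} AND Vb[i] -- the first
    operand is the complemented mask, the second is the data."""
    return [(~a & b) & 0xFFFFFFFF for a, b in zip(va, vb)]

print([hex(x) for x in vvbicl([0x0F], [0xFF])])  # ['0xf0']
```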
Vector Floating Compare
Format:
{ VVGTRF }
{ VVGTRD }
{ VVGTRG }
{ }
{ VVEQLF }
{ VVEQLD }
{ VVEQLG }
{ }
{ VVLSSF }
{ VVLSSD }
{ VVLSSG }
vector - vector: { } [/U[0|1]] Va, Vb
{ VVLEQF }
{ VVLEQD }
{ VVLEQG }
{ }
{ VVNEQF }
{ VVNEQD }
{ VVNEQG }
{ }
{ VVGEQF }
{ VVGEQD }
{ VVGEQG }
{ }
{ VSGTRF }
{ VSGTRD }
{ VSGTRG }
{ }
{ VSEQLF }
{ VSEQLD }
{ VSEQLG }
{ }
{ VSLSSF }
{ VSLSSD }
{ VSLSSG }
scalar - vector: { } [/U[0|1]] src, Vb
{ VSLEQF }
{ VSLEQD }
{ VSLEQG }
{ }
{ VSNEQF }
{ VSNEQD }
{ VSNEQG }
{ }
{ VSGEQF }
{ VSGEQD }
{ VSGEQG }
{ }
Architecture
Format
vector - vector:
opcode cntrl.rw
scalar - vector (F_floating):
opcode cntrl.rw, src.rl
scalar - vector (D_ and G_floating):
opcode cntrl.rw, src.rq
opcodes
C4FD VVCMPF Vector Vector Compare F_floating
C5FD VSCMPF Vector Scalar Compare F_floating
C6FD VVCMPD Vector Vector Compare D_floating
C7FD VSCMPD Vector Scalar Compare D_floating
C2FD VVCMPG Vector Vector Compare G_floating
C3FD VSCMPG Vector Scalar Compare G_floating
vector_control_word
1 1 1 1 1
5 4 3 2 1 8 7 4 3 0
+-+-+-+-+-------+-------+-------+
|M|M| | | Va | | cmp |
|O|T|0|0| or | Vb | func |
|E|F| | | 0 | | |
+-+-+-+-+-------+-------+-------+
The condition being tested is determined by cntrl<2:0>, as
follows:
Value of
cntrl<2:0> Meaning
0 Greater than
1 Equal
2 Less than
3 Reserved
4 Less than or equal
5 Not equal
6 Greater than or equal
7 Reserved
NOTE
Cntrl<3> should be zero; if it is set, the results of the
instruction are UNPREDICTABLE.
exceptions
floating reserved operand
operation
FOR i <- 0 TO VLR-1
IF {{MOE EQL 0} OR {{MOE EQL 1} AND {VMR<i> EQL MTF}}} THEN
BEGIN
IF VVCMPF THEN
IF Va[i]<31:0> SIGNED_RELATION Vb[i]<31:0> THEN
VMR<i> <- 1
ELSE
VMR<i> <- 0
IF VVCMPD OR VVCMPG THEN
IF Va[i] SIGNED_RELATION Vb[i] THEN
VMR<i> <- 1
ELSE
VMR<i> <- 0
IF VSCMPF THEN
IF src SIGNED_RELATION Vb[i]<31:0> THEN
VMR<i> <- 1
ELSE
VMR<i> <- 0
IF VSCMPD OR VSCMPG THEN
IF src SIGNED_RELATION Vb[i] THEN
VMR<i> <- 1
ELSE
VMR<i> <- 0
END
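Unlike the arithmetic instructions, compare writes no Vc register: each element test sets or clears the corresponding VMR bit. A Python sketch using the cntrl<2:0> encoding from the table above (codes 3 and 7 are reserved; names are illustrative):

```python
import operator

# cntrl<2:0> compare functions from the table above
CMP = {0: operator.gt, 1: operator.eq, 2: operator.lt,
       4: operator.le, 5: operator.ne, 6: operator.ge}

def vvcmp(func, va, vb, vlr):
    """Compare sketch: returns the VMR bits produced for elements
    0..VLR-1 (masking via MOE/MTF omitted for brevity)."""
    return [1 if CMP[func](va[i], vb[i]) else 0 for i in range(vlr)]

# func 0 = "greater than"
print(vvcmp(0, [3.0, 1.0, 2.0], [2.0, 2.0, 2.0], 3))  # [1, 0, 0]
```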
Vector Integer Compare
Format:
vector - vector:
{ VVGTRL }
{ VVEQLL }
{ VVLSSL }
{ } [/0|1] Va, Vb
{ VVLEQL }
{ VVNEQL }
{ VVGEQL }
{ }
scalar - vector:
{ VSGTRL }
{ VSEQLL }
{ VSLSSL }
{ } [/0|1] src, Vb
{ VSLEQL }
{ VSNEQL }
{ VSGEQL }
{ }
Architecture
Format
vector - vector: opcode cntrl.rw
scalar - vector: opcode cntrl.rw, src.rl
opcodes
C0FD VVCMPL Vector Vector Compare Longword
C1FD VSCMPL Vector Scalar Compare Longword
vector_control_word
1 1 1 1 1
5 4 3 2 1 8 7 4 3 0
+-+-+-+-+-------+-------+-------+
|M|M| | | Va | | cmp |
|O|T|0|0| or | Vb | func |
|E|F| | | 0 | | |
+-+-+-+-+-------+-------+-------+
The condition being tested is determined by cntrl<2:0>, using the
same encoding as Vector Floating Compare; cntrl<3> should be zero.
exceptions
None.
operation
FOR i <- 0 TO VLR-1
IF {{MOE EQL 0} OR {{MOE EQL 1} AND {VMR<i> EQL MTF}}} THEN
BEGIN
IF VVCMPL THEN
IF Va[i]<31:0> SIGNED_RELATION Vb[i]<31:0> THEN
VMR<i> <- 1
ELSE
VMR<i> <- 0
IF VSCMPL THEN
IF src SIGNED_RELATION Vb[i]<31:0> THEN
VMR<i> <- 1
ELSE
VMR<i> <- 0
END
Vector Floating Divide
Format:
vector/vector:
{ VVDIVF }
{ VVDIVD } [/U[0|1]] Va, Vb, Vc
{ VVDIVG }
{ }
scalar/vector:
{ VSDIVF }
{ VSDIVD } [/U[0|1]] scalar, Vb, Vc
{ VSDIVG }
{ }
Architecture
Format
vector/vector:
opcode cntrl.rw
scalar/vector (F_floating):
opcode cntrl.rw, divd.rl
scalar/vector (D_ and G_floating):
opcode cntrl.rw, divd.rq
opcodes
ACFD VVDIVF Vector Vector Divide F_floating
ADFD VSDIVF Vector Scalar Divide F_floating
AEFD VVDIVD Vector Vector Divide D_floating
AFFD VSDIVD Vector Scalar Divide D_floating
AAFD VVDIVG Vector Vector Divide G_floating
ABFD VSDIVG Vector Scalar Divide G_floating
vector_control_word
1 1 1 1 1
5 4 3 2 1 8 7 4 3 0
+-+-+-+-+-------+-------+-------+
|M|M|E| | Va | | |
|O|T|X|0| or | Vb | Vc |
|E|F|C| | 0 | | |
+-+-+-+-+-------+-------+-------+
exceptions
floating divide by zero
floating overflow
floating reserved operand
floating underflow
operation
FOR i <- 0 TO VLR-1
IF {{MOE EQL 0} OR {{MOE EQL 1} AND {VMR<i> EQL MTF}}} THEN
BEGIN
IF VVDIVF THEN
Vc[i]<31:0> <- Va[i]<31:0> / Vb[i]<31:0>
IF VSDIVF THEN
Vc[i]<31:0> <- divd / Vb[i]<31:0>
IF VVDIVD OR VVDIVG THEN
Vc[i] <- Va[i] / Vb[i]
IF VSDIVD OR VSDIVG THEN
Vc[i] <- divd / Vb[i]
END
Gather Memory Data into Vector Register
Format:
VGATHL [/M[0|1]] base, Vb, Vc
VGATHQ [/M[0|1]] base, Vb, Vc
Architecture
Format
opcode cntrl.rw, base.ab
opcodes
35FD VGATHL Gather Longword Vector from Memory to Vector
Register
37FD VGATHQ Gather Quadword Vector from Memory to Vector
Register
vector_control_word
1 1 1 1 1
5 4 3 2 1 8 7 4 3 0
+-+-+-+-+-------+-------+-------+
|M|M|M| | | | |
|O|T|I|0| 0 | Vb | Vc |
|E|F| | | | | |
+-+-+-+-+-------+-------+-------+
exceptions
access control violation
translation not valid
vector alignment
operation
FOR i <- 0 TO VLR-1
BEGIN
addr <- base + Vb[i]<31:0>
IF {{MOE EQL 0} OR {{MOE EQL 1} AND {VMR<i> EQL MTF}}} THEN
BEGIN
IF {addr unaligned} THEN
{Vector Alignment Exception}
IF VGATHL THEN
Vc[i] <- (addr)<31:0>
IF VGATHQ THEN
Vc[i] <- (addr)<63:0>
END
END
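The gather address computation above (base plus the longword offset in Vb[i]) can be sketched in Python, modeling memory as a dict keyed by byte address (illustrative only):

```python
def vgathl(memory, base, vb, vlr):
    """VGATHL sketch: Vc[i] <- (base + Vb[i]); alignment checks and
    MOE/MTF masking omitted for brevity."""
    return [memory[base + vb[i]] for i in range(vlr)]

mem = {0x1000: 7, 0x1008: 8, 0x1010: 9}
print(vgathl(mem, 0x1000, [0, 16, 8], 3))  # [7, 9, 8]
```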
Load Memory Data into Vector Register
Format:
VLDL [/M[0|1]] base, stride, Vc
VLDQ [/M[0|1]] base, stride, Vc
Architecture
Format
opcode cntrl.rw, base.ab, stride.rl
opcodes
34FD VLDL Load Longword Vector from Memory to Vector
Register
36FD VLDQ Load Quadword Vector from Memory to Vector
Register
vector_control_word
1 1 1 1 1
5 4 3 2 1 8 7 4 3 0
+-+-+-+-+-------+-------+-------+
|M|M|M| | | | |
|O|T|I|0| 0 | 0 | Vc |
|E|F| | | | | |
+-+-+-+-+-------+-------+-------+
exceptions
access control violation
translation not valid
vector alignment
operation
addr <- base
FOR i <- 0 TO VLR-1
BEGIN
IF {{MOE EQL 0} OR {{MOE EQL 1} AND {VMR<i> EQL MTF}}} THEN
BEGIN
IF {addr unaligned} THEN
{Vector Alignment Exception}
IF VLDL THEN
Vc[i] <- (addr)<31:0>
IF VLDQ THEN
Vc[i] <- (addr)<63:0>
END
addr <- addr + stride !Increment by stride
END
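In the strided load above, the address increment sits outside the mask test, so the address advances for every element, masked or not. A Python sketch of the unmasked case (memory modeled as a dict keyed by byte address; names are illustrative):

```python
def vldl(memory, base, stride, vlr):
    """VLDL sketch: element i is loaded from base + i*stride bytes;
    alignment checks and MOE/MTF masking omitted for brevity."""
    out = []
    addr = base
    for _ in range(vlr):
        out.append(memory[addr])
        addr += stride          # increments even for masked elements
    return out

mem = {0x2000: 1, 0x2008: 2, 0x2010: 3}
print(vldl(mem, 0x2000, 8, 3))  # [1, 2, 3]
```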
Vector Merge
Format:
vector vector merge:
VVMERGE [/0|1] Va, Vb, Vc
vector scalar merge:
{ VSMERGE }
{ VSMERGEF }
{ VSMERGED } [/0|1] src, Vb, Vc
{ VSMERGEG }
{ }
Architecture
Format
vector-vector: opcode cntrl.rw
vector-scalar: opcode cntrl.rw,src.rq
opcodes
EEFD VVMERGE Vector Vector Merge
EFFD VSMERGE Vector Scalar Merge
vector_control_word
1 1 1 1 1
5 4 3 2 1 8 7 4 3 0
+-+-+-+-+-------+-------+-------+
| |M| | | Va | | |
|0|T|0|0| or | Vb | Vc |
| |F| | | 0 | | |
+-+-+-+-+-------+-------+-------+
exceptions
None.
operation
FOR i <- 0 TO VLR-1
BEGIN
IF VVMERGE THEN
IF {VMR<i> EQL MTF} THEN
Vc[i] <- Va[i]
ELSE
Vc[i] <- Vb[i]
IF VSMERGE THEN
IF {VMR<i> EQL MTF} THEN
Vc[i] <- src
ELSE
Vc[i] <- Vb[i]
END
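Merge differs from masked arithmetic: every destination element is written, with the VMR bit and MTF merely selecting the source. A Python sketch (illustrative names):

```python
def vvmerge(va, vb, vmr, mtf=1):
    """VVMERGE sketch: element i comes from Va when VMR<i> equals MTF,
    otherwise from Vb; every destination element is written."""
    return [a if m == mtf else b for a, b, m in zip(va, vb, vmr)]

print(vvmerge([1, 2, 3], [9, 9, 9], [1, 0, 1]))  # [1, 9, 3]
```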
Vector Floating Multiply
Format:
vector * vector:
{ VVMULF }
{ VVMULD } [/U[0|1]] Va, Vb, Vc
{ VVMULG }
{ }
scalar * vector:
{ VSMULF }
{ VSMULD } [/U[0|1]] scalar, Vb, Vc
{ VSMULG }
{ }
Architecture
Format
vector * vector:
opcode cntrl.rw
scalar * vector (F_floating):
opcode cntrl.rw, mulr.rl
scalar * vector (D_ and G_floating):
opcode cntrl.rw, mulr.rq
opcodes
A4FD VVMULF Vector Vector Multiply F_floating
A5FD VSMULF Vector Scalar Multiply F_floating
A6FD VVMULD Vector Vector Multiply D_floating
A7FD VSMULD Vector Scalar Multiply D_floating
A2FD VVMULG Vector Vector Multiply G_floating
A3FD VSMULG Vector Scalar Multiply G_floating
vector_control_word
1 1 1 1 1
5 4 3 2 1 8 7 4 3 0
+-+-+-+-+-------+-------+-------+
|M|M|E| | Va | | |
|O|T|X|0| or | Vb | Vc |
|E|F|C| | 0 | | |
+-+-+-+-+-------+-------+-------+
exceptions
floating overflow
floating reserved operand
floating underflow
operation
FOR i <- 0 TO VLR-1
IF {{MOE EQL 0} OR {{MOE EQL 1} AND {VMR<i> EQL MTF}}} THEN
BEGIN
IF VVMULF THEN
Vc[i]<31:0> <- Va[i]<31:0> * Vb[i]<31:0>
IF VSMULF THEN
Vc[i]<31:0> <- mulr * Vb[i]<31:0>
IF VVMULD OR VVMULG THEN
Vc[i] <- Va[i] * Vb[i]
IF VSMULD OR VSMULG THEN
Vc[i] <- mulr * Vb[i]
END
Vector Integer Multiply
Format:
vector * vector:
VVMULL [/V[0|1]] Va, Vb, Vc
scalar * vector:
VSMULL [/V[0|1]] scalar, Vb, Vc
Architecture
Format
vector * vector: opcode cntrl.rw
scalar * vector: opcode cntrl.rw, mulr.rl
opcodes
A0FD VVMULL Vector Vector Multiply Longword
A1FD VSMULL Vector Scalar Multiply Longword
vector_control_word
1 1 1 1 1
5 4 3 2 1 8 7 4 3 0
+-+-+-+-+-------+-------+-------+
|M|M|E| | Va | | |
|O|T|X|0| or | Vb | Vc |
|E|F|C| | 0 | | |
+-+-+-+-+-------+-------+-------+
exceptions
integer overflow
operation
FOR i <- 0 TO VLR-1
IF {{MOE EQL 0} OR {{MOE EQL 1} AND {VMR<i> EQL MTF}}} THEN
BEGIN
IF VVMULL THEN
Vc[i]<31:0> <- Va[i]<31:0> * Vb[i]<31:0>
IF VSMULL THEN
Vc[i]<31:0> <- mulr * Vb[i]<31:0>
END
Scatter Vector Register Data into Memory
Format:
VSCATL [/0|1] Vc, base, Vb
VSCATQ [/0|1] Vc, base, Vb
Architecture
Format
opcode cntrl.rw, base.ab
opcodes
9DFD VSCATL Scatter Longword Vector from Vector Register to
Memory
9FFD VSCATQ Scatter Quadword Vector from Vector Register to
Memory
vector_control_word
1 1 1 1 1
5 4 3 2 1 8 7 4 3 0
+-+-+-+-+-------+-------+-------+
|M|M| | | | | |
|O|T|0|0| 0 | Vb | Vc |
|E|F| | | | | |
+-+-+-+-+-------+-------+-------+
exceptions
access control violation
translation not valid
vector alignment
modify
operation
FOR i <- 0 TO VLR-1
BEGIN
addr <- base + Vb[i]<31:0>
IF {{MOE EQL 0} OR {{MOE EQL 1} AND {VMR<i> EQL MTF}}} THEN
BEGIN
IF {addr unaligned} THEN
{Vector Alignment Exception}
IF VSCATL THEN
(addr)<31:0> <- Vc[i]<31:0>
IF VSCATQ THEN
(addr)<63:0> <- Vc[i]
END
END
Vector Shift Logical
Format:
vector shift count:
{ VVSRLL }
{ VVSLLL } [/V[0|1]] Va, Vb, Vc
{ }
scalar shift count:
{ VSSRLL }
{ VSSLLL } [/V[0|1]] cnt, Vb, Vc
{ }
Architecture
Format
vector shift count: opcode cntrl.rw
scalar shift count: opcode cntrl.rw, cnt.rl
opcodes
E0FD VVSRLL Vector Vector Shift Right Logical Longword
E4FD VVSLLL Vector Vector Shift Left Logical Longword
E1FD VSSRLL Vector Scalar Shift Right Logical Longword
E5FD VSSLLL Vector Scalar Shift Left Logical Longword
vector_control_word
1 1 1 1 1
5 4 3 2 1 8 7 4 3 0
+-+-+-+-+-------+-------+-------+
|M|M| | | Va | | |
|O|T|0|0| or | Vb | Vc |
|E|F| | | 0 | | |
+-+-+-+-+-------+-------+-------+
exceptions
None.
operation
FOR i <- 0 TO VLR-1
IF {{MOE EQL 0} OR {{MOE EQL 1} AND {VMR<i> EQL MTF}}} THEN
BEGIN
IF VVSRLL THEN
Vc[i]<31:0> <- RIGHT_SHIFT(Vb[i]<31:0>, Va[i]<4:0>)
IF VVSLLL THEN
Vc[i]<31:0> <- LEFT_SHIFT(Vb[i]<31:0>, Va[i]<4:0>)
IF VSSRLL THEN
Vc[i]<31:0> <- RIGHT_SHIFT(Vb[i]<31:0>, cnt<4:0>)
IF VSSLLL THEN
Vc[i]<31:0> <- LEFT_SHIFT(Vb[i]<31:0>, cnt<4:0>)
END
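As the pseudocode above shows, only the low five bits of the shift count (cnt<4:0> or Va[i]<4:0>) are used, so a count of 33 shifts by 1. A Python sketch of the scalar-count right shift (illustrative only):

```python
def vssrll(cnt, vb):
    """VSSRLL sketch: logical right shift of each 32-bit element by
    the low five bits of the count."""
    c = cnt & 0x1F
    return [(b >> c) & 0xFFFFFFFF for b in vb]

print(vssrll(33, [8, 16]))  # [4, 8] -- 33 & 0x1F == 1
```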
Store Vector Register Data into Memory
Format:
VSTL [/0|1] Vc, base, stride
VSTQ [/0|1] Vc, base, stride
Architecture
Format
opcode cntrl.rw, base.ab, stride.rl
opcodes
9CFD VSTL Store Longword Vector from Vector Register to
Memory
9EFD VSTQ Store Quadword Vector from Vector Register to
Memory
vector_control_word
1 1 1 1 1
5 4 3 2 1 8 7 4 3 0
+-+-+-+-+-------+-------+-------+
|M|M| | | | | |
|O|T|0|0| 0 | 0 | Vc |
|E|F| | | | | |
+-+-+-+-+-------+-------+-------+
exceptions
access control violation
translation not valid
vector alignment
modify
operation
addr <- base
FOR i <- 0 TO VLR-1
BEGIN
IF {{MOE EQL 0} OR {{MOE EQL 1} AND {VMR<i> EQL MTF}}} THEN
BEGIN
IF {addr unaligned} THEN
{Vector Alignment Exception}
IF VSTL THEN
(addr)<31:0> <- Vc[i]<31:0>
IF VSTQ THEN
(addr)<63:0> <- Vc[i]
END
addr <- addr + stride !Increment by stride
END
Vector Floating Subtract
Format:
vector - vector:
{ VVSUBF }
{ VVSUBD } [/U[0|1]] Va, Vb, Vc
{ VVSUBG }
{ }
scalar - vector:
{ VSSUBF }
{ VSSUBD } [/U[0|1]] scalar, Vb, Vc
{ VSSUBG }
{ }
Architecture
Format
vector - vector:
opcode cntrl.rw
scalar - vector (F_floating):
opcode cntrl.rw, min.rl
scalar - vector (D_ and G_floating):
opcode cntrl.rw, min.rq
opcodes
8CFD VVSUBF Vector Vector Subtract F_floating
8DFD VSSUBF Vector Scalar Subtract F_floating
8EFD VVSUBD Vector Vector Subtract D_floating
8FFD VSSUBD Vector Scalar Subtract D_floating
8AFD VVSUBG Vector Vector Subtract G_floating
8BFD VSSUBG Vector Scalar Subtract G_floating
vector_control_word
1 1 1 1 1
5 4 3 2 1 8 7 4 3 0
+-+-+-+-+-------+-------+-------+
|M|M|E| | Va | | |
|O|T|X|0| or | Vb | Vc |
|E|F|C| | 0 | | |
+-+-+-+-+-------+-------+-------+
exceptions
floating overflow
floating reserved operand
floating underflow
operation
FOR i <- 0 TO VLR-1
IF {{MOE EQL 0} OR {{MOE EQL 1} AND {VMR<i> EQL MTF}}} THEN
BEGIN
IF VVSUBF THEN
Vc[i]<31:0> <- Va[i]<31:0> - Vb[i]<31:0>
IF VSSUBF THEN
Vc[i]<31:0> <- min - Vb[i]<31:0>
IF VVSUBD OR VVSUBG THEN
Vc[i] <- Va[i] - Vb[i]
IF VSSUBD OR VSSUBG THEN
Vc[i] <- min - Vb[i]
END
Vector Integer Subtract
Format:
vector - vector:
VVSUBL [/V[0|1]] Va, Vb, Vc
scalar - vector:
VSSUBL [/V[0|1]] scalar, Vb, Vc
Architecture
Format
vector - vector: opcode cntrl.rw
scalar - vector: opcode cntrl.rw, min.rl
opcodes
88FD VVSUBL Vector Vector Subtract Longword
89FD VSSUBL Vector Scalar Subtract Longword
vector_control_word
1 1 1 1 1
5 4 3 2 1 8 7 4 3 0
+-+-+-+-+-------+-------+-------+
|M|M|E| | Va | | |
|O|T|X|0| or | Vb | Vc |
|E|F|C| | 0 | | |
+-+-+-+-+-------+-------+-------+
exceptions
integer overflow
operation
FOR i <- 0 TO VLR-1
IF {{MOE EQL 0} OR {{MOE EQL 1} AND {VMR<i> EQL MTF}}} THEN
BEGIN
IF VVSUBL THEN
Vc[i]<31:0> <- Va[i]<31:0> - Vb[i]<31:0>
IF VSSUBL THEN
Vc[i]<31:0> <- min - Vb[i]<31:0>
END
Synchronize Vector Memory Access
Format:
VSYNC
Architecture
Format
opcode regnum.rw
opcodes
A8FD VSYNC Synchronize Vector Memory Access
vector_control_word
None.
exceptions
None.
operation
CASE regnum OF
0: VSYNC
>0: Reserved
END
Synchronize Vector Memory Access instructions that specify
reserved values of the regnum operand produce UNPREDICTABLE
results.
Vector Convert
Format:
{ VVCVTLF }
{ VVCVTLD }
{ VVCVTLG }
{ }
{ VVCVTFL }
{ VVCVTRFL }
{ VVCVTFD }
{ }
{ VVCVTFG } [/U[0|1]] Vb, Vc
{ VVCVTDL }
{ VVCVTDF }
{ }
{ VVCVTRDL }
{ VVCVTGL }
{ VVCVTGF }
{ }
{ VVCVTRGL }
Architecture
Format
opcode cntrl.rw
opcodes
ECFD VVCVT Vector Convert
vector_control_word
1 1 1 1 1
5 4 3 2 1 8 7 4 3 0
+-+-+-+-+-------+-------+-------+
|M|M|E| | cvt | | |
|O|T|X|0| func | Vb | Vc |
|E|F|C| | | | |
+-+-+-+-+-------+-------+-------+
Cntrl<11:8> specifies the conversion to be performed, as follows:
cntrl<11:8> Meaning
1 1 1 1 CVTRGL (Convert Rounded G_Floating to Longword)
1 1 1 0 Reserved
1 1 0 1 CVTGF (Convert Rounded G_Floating to F_Floating)
1 1 0 0 CVTGL (Convert Truncated G_Floating to Longword)
1 0 1 1 Reserved
1 0 1 0 CVTRDL (Convert Rounded D_Floating to Longword)
1 0 0 1 CVTDF (Convert Rounded D_Floating to F_Floating)
1 0 0 0 CVTDL (Convert Truncated D_Floating to Longword)
0 1 1 1 CVTFG (Convert F_Floating to G_Floating (exact))
0 1 1 0 CVTFD (Convert F_Floating to D_Floating (exact))
0 1 0 1 CVTRFL (Convert Rounded F_Floating to Longword)
0 1 0 0 CVTFL (Convert Truncated F_Floating to Longword)
0 0 1 1 CVTLG (Convert Longword to G_Floating (exact))
0 0 1 0 CVTLD (Convert Longword to D_Floating (exact))
0 0 0 1 CVTLF (Convert Rounded Longword to F_Floating)
0 0 0 0 Reserved
exceptions
floating overflow
floating reserved operand
floating underflow
integer overflow
operation
FOR i <- 0 TO VLR-1
IF {{MOE EQL 0} OR {{MOE EQL 1} AND {VMR<i> EQL MTF}}} THEN
Vc[i] <- {conversion of Vb[i]}