Tacit programming, also known as point-free programming, is a key feature of Dyalog APL, but it is not unique to Dyalog. Amongst others, J and Haskell also support this programming style. It has a somewhat undeserved reputation of being hard to learn and read, and the purpose of this tutorial is to help dispel this notion. Tacit is for everyone.
This is an intermediate-level tutorial. To make the most of it, you should already know a bit of Dyalog APL: how to write dfns and tradfns, how to operate the IDE (or Ride), and how to execute APL expressions in the session. This tutorial will help you take the next steps in terms of tacit programming.
Explicit code references function arguments explicitly. For example, here's an expression that calculates the difference between the max and min values of a vector:
(⌈/N) - ⌊/N ← 3 1 4 1 5 9 2 6 5
8
We're naming the input vector N
and referencing it explicitly later. Similarly, we name the arguments of a tradfn explicitly:
∇ R ← Range Y
R ← (⌈/Y) - ⌊/Y
∇
Range 3 1 4 1 5 9 2 6 5
8
In dfns, the left and right arguments already have the names ⍺
and ⍵
respectively, and we can reference those explicitly:
{(⌈/⍵) - ⌊/⍵} 3 1 4 1 5 9 2 6 5
8
In contrast, in tacit code, arguments are implied:
(⌈/ - ⌊/) 3 1 4 1 5 9 2 6 5
8
Note how this actually reads nicely in English: the max (⌈/) minus the min (⌊/).
Here's the thing -- even if you're new to tacit, you have most likely been using it unknowingly already. For example, all of the following expressions are examples of tacit programming you may already be using:
f/
f¨
∘.g
f\
A∘g
f⌸
f⍠B
In other words, operator application is a form of tacit programming.
What are the advantages of tacit programming? Among them: tacit expressions are concise and free of explicit argument plumbing, and many read as idioms -- compare ≠⊆⊢ and +⌿÷≢, ×- and ∨/⍷, F⍥⎕C, and ≡⍥⍴.

One way to think about tacit programming is as function composition. Dyalog has several ways in which to compose functions into new functions, including the operators Over (⍥), Beside (∘), Atop (⍤), and Commute (⍨).
As it turns out, function composition is "just" a matter of plumbing -- let's work through these compositions in turn to discover how they work.
The shape of an outer product ⍺ ∘.f ⍵
is
(⍴⍺) , (⍴⍵)
We can look at this as the application of the catenate (,) function after pre-processing both the right and left arguments with the shape (⍴) function.
Using the Over operator, ⍥
, we can write this as
⍺ ,⍥⍴ ⍵
We can model this as a diagram. In this case, the f
function is catenate, and g
is shape. We read ,⍥⍴
as "catenate over shape", but we think "catenate the arguments, pre-processed by shape".
This is a common pattern in APL code. Perhaps you want to see if two vectors have the same tally? Think "equal over tally":
a ← 1 2 3 4 5 6 7 8 9 10 11
b ← 'hello world'
a =⍥≢ b
1
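For readers who think more readily in mainstream languages, Over is easy to model. The following Python sketch captures the pattern; the names `over` and `same_tally` are illustrative inventions, not Dyalog's:

```python
def over(f, g):
    """Model of APL's Over: x f⍥g y  ->  (g x) f (g y)."""
    return lambda x, y: f(g(x), g(y))

# "equal over tally": do two sequences have the same length?
same_tally = over(lambda a, b: a == b, len)
print(same_tally([1, 2, 3], "abc"))   # True
print(same_tally([1, 2], "abc"))      # False
```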
Location of the ⍺
th 1 in each element of ⍵
is
⍺ ⊃¨ ⍸¨ ⍵
Using the Beside (∘
) operator, we can write this more succinctly as
⍺ ⊃∘⍸¨ ⍵
In an expression ⍺ f∘g ⍵
, we can think of beside as pre-processing the right argument with the monadic g
function, and then applying the dyadic function f
. Read ⊃∘⍸
as "pick beside where".
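Beside can be sketched the same way in Python (names illustrative, not part of any library): it pre-processes only the right argument.

```python
def beside(f, g):
    """Model of APL's Beside: x f∘g y  ->  x f (g y)."""
    return lambda x, y: f(x, g(y))

# Exponentiation after reciprocal gives the ⍵th root: ⍺*÷⍵
root = beside(lambda a, b: a ** b, lambda y: 1 / y)
print(round(root(64, 3), 10))   # 4.0
```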
Any-presence of ⍺
in ⍵
is
∨/ ⍺ ⍷ ⍵
We can write this as
⍺ ∨/⍤⍷ ⍵
using the Atop operator, which reads as "or-reduce atop find". Atop means "post-process the result", which in our case is "post-process the result of find with an or-reduction". Once you learn to spot the Atop pattern, you'll see it's also common in APL code.
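A Python sketch of the same pattern (the helper names `atop`, `find` and `contains` are made up for illustration): Atop post-processes the result of the inner dyadic function.

```python
def atop(f, g):
    """Model of APL's Atop: x f⍤g y  ->  f (x g y)."""
    return lambda x, y: f(g(x, y))

def find(needle, haystack):
    # Rough string analogue of ⍺⍷⍵: flag each position where needle starts
    return [haystack[i:i + len(needle)] == needle
            for i in range(len(haystack) - len(needle) + 1)]

contains = atop(any, find)   # "or-reduce atop find": ∨/⍤⍷
print(contains("ell", "hello"))   # True
print(contains("xyz", "hello"))   # False
```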
A multiplication table of ⍵ elements can be written as
(⍳⍵) ∘.× (⍳⍵)
Using the Commute operator, we can avoid the repetition of ⍳⍵
:
∘.×⍨ ⍳⍵
Applying the derived function from the Commute operator monadically, we turn a dyadic function into a monadic function, by applying the argument on both sides. This is sometimes colloquially referred to as a "selfie".
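Commute's monadic "selfie" use is equally easy to model in Python (a sketch; `commute` and `double` are illustrative names):

```python
def commute(f):
    """Model of monadic f⍨: feed the same argument to both sides of f."""
    return lambda y: f(y, y)

double = commute(lambda a, b: a + b)   # +⍨ doubles its argument
print(double(7))   # 14
```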
Let's put these composition operators to work by rewriting some explicit expressions to tacit.
Our first example does a case-insensitive equality check:
'Hello' {(⎕C ⍺)≡(⎕C ⍵)} 'HELLO'
1
Here we should be able to spot the fact that both arguments are pre-processed with ⎕C
to downcase before we match the results. This is a classic Over pattern -- match over case-fold:
'Hello' ≡⍥⎕C 'HELLO'
1
Here's another. Check if all elements to the left argument are members of the right argument:
'ab' 'cd' {∧/⍺∊⍵}¨ 'abba' 'dad'
1 0
This can be rewritten neatly as an atop -- and-reduction atop member:
'ab' 'cd' ∧/⍤∊¨ 'abba' 'dad'
1 0
The next example calculates the ⍵
th root of ⍺
by raising the left argument to the reciprocal power of the right:
64 1000 0 {⍺*÷⍵} 3
4 10 0
Putting this into words, we exponentiate (dyadic *
) after pre-processing the right argument with the reciprocal (monadic ÷
). We should recognise this as the beside pattern:
64 1000 0 *∘÷ 3
4 10 0
Here's a function that multiplies the left argument with the signum of the right.
10 4 1 0 {⍺××⍵} ¯3 2 0 ¯1
¯10 4 0 0
This is also a beside: multiply, after pre-processing the right argument with the signum function (monadic ×
):
10 4 1 0 ×∘× ¯3 2 0 ¯1
¯10 4 0 0
Trains are also a type of function composition. A train is a sequence of functions in isolation. If used in-line, a train must be parenthesised:
(+⌿÷≢) 3 1 4 1 5 ⍝ Find the average
2.8
A train can also be assigned, in which case we can skip the parentheses:
Avg ← +⌿÷≢
Avg 3 1 4 1 5
2.8
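Before dissecting the rules, it may help to see the plumbing of a fork sketched in Python (the names `fork` and `avg` are mine, not Dyalog's):

```python
def fork(f, g, h):
    """Model of a monadic fgh fork: (f g h) Y  ->  (f Y) g (h Y)."""
    return lambda y: g(f(y), h(y))

# Avg ← +⌿÷≢  —  "sum divided by tally"
avg = fork(sum, lambda a, b: a / b, len)
print(avg([3, 1, 4, 1, 5]))   # 2.8
```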
fgh Forks

Writing, and indeed reading, trains takes a bit of practice. As with the compositional operators we explored above, it's a matter of recognising a set of common patterns. The Avg function we just looked at is an example of a monadic fork:
(f Y) + (h Y) → (f + h) Y
This pattern is sometimes referred to as an fgh fork, which is a common compositional technique.
What happens if we don't have an h
function? Well, we can apply a neat trick using the same function (monadic ⊢
), which just returns its argument:
(f Y) + (⊢ Y) → (f + ⊢) Y
If we only have the g
and the h
functions, the monadic fgh
fork is the same as the corresponding explicit formulation (which is usually written as fg
, not gh
):
f (g Y) → (f g) Y
In fact, this is another way to think of an atop.
If f
or h
are dyadic functions, we get a similar fork pattern. In this case, the fork is applied dyadically:
(X f Y) + (X h Y) → X (f + h) Y
or, if we don't have an f
, we use the left function (dyadic ⊣
) as the filler:
(X ⊣ Y) + (X h Y) → X (⊣ + h) Y
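The dyadic case can be sketched the same way in Python (illustrative names, not a Dyalog API):

```python
def fork2(f, g, h):
    """Model of a dyadic fgh fork: X (f g h) Y  ->  (X f Y) g (X h Y)."""
    return lambda x, y: g(f(x, y), h(x, y))

left = lambda x, y: x                      # dyadic ⊣: return the left argument
# X (⊣ + h) Y with h = multiplication: x + x*y
add_prod = fork2(left, lambda a, b: a + b, lambda x, y: x * y)
print(add_prod(2, 5))   # 12
```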
Agh Forks and Longer Trains

Not every component of a train has to be a function; components can also be arrays. These are sometimes called Agh forks. Their behaviour follows from the earlier fgh rules, with the addition that an array A is treated as the constant function {A}.
Trains can be made longer by combining forks (fgh and Agh) and atops (fg). For example:

(e f g h) is treated as (e (f g h)) -- a 4-train
(d e f g h) is treated as (d e (f g h)) -- a 5-train

Explicit expressions can be converted to tacit step by step. Here's a dfn that checks if the elements in a vector are in ascending order. We have three functions:
∧/
⌈\
=
and one array, ⍵
.
{∧/ (⌈\⍵) = ⍵} 1 3 5 6 7
1
Let's start with rewriting the "naked" array to the right of the equal sign using same
, monadic ⊢
. Now we have four functions:
{∧/ (⌈\⍵) = (⊢⍵)} 1 3 5 6 7
1
Now we have a efgh
pattern. Let's gather the three right-most functions into the fgh
form:
{∧/ (⌈\ = ⊢) ⍵} 1 3 5 6 7
1
The inner fork doesn't need its parentheses; instead, we can group the whole train:
{(∧/ ⌈\ = ⊢) ⍵} 1 3 5 6 7
1
and, finally, remove the remaining ⍵
reference and the curly braces:
(∧/ ⌈\ = ⊢) 1 3 5 6 7
1
Rewrite the following explicit expressions to tacit.
Example 1
Here's a dfn that multiplies its argument by 2:
{2×⍵} 2 7 1 8
4 14 2 16
We can do this tacitly in a couple of ways, either as an Agh
train,
(2×⊢)2 7 1 8
4 14 2 16
or as a bind
2∘× 2 7 1 8
4 14 2 16
We can take a completely different tack by observing that multiplying something by two is the same as adding something to itself:
+⍨2 7 1 8
4 14 2 16
Example 2
The following expression computes the symmetric set difference between ⍺
and ⍵
as the union without the intersection:
3 1 4 {(⍺∪⍵)~(⍺∩⍵)} 1 6 1
3 4 6
This is the dyadic fgh
pattern:
3 1 4 (∪~∩) 1 6 1
3 4 6
Example 3
This function finds the divisors of the argument:
{∪⍵∨⍳⍵} 10
1 2 5 10
We have one "naked" ⍵
. Rewrite that first:
{∪(⊢⍵)∨⍳⍵} 10
1 2 5 10
Now, hopefully, we can see the efgh
pattern more clearly:
(∪⊢∨⍳) 10
1 2 5 10
We can also express this with composition operators, but it's not quite as nice:
(∪⍤∨∘⍳⍨) 10
1 2 5 10
Example 4
The following expression returns the ⍺
th largest element of ⍵
:
2 {(⍺⊃⍒⍵)⊃⍵} 3 1 4 1 5
4
Rewrite the right-most ⍵
as ⍺⊢⍵
to give us the dyadic fgh
pattern:
2 {(⍺⊃⍒⍵)⊃(⍺⊢⍵)} 3 1 4 1 5
4
We have the fgh
fork f ⊃ ⊢
, where the f
part is pick (⊃
), pre-process right with grade down (⍒
):
2 (⊃∘⍒⊃⊢) 3 1 4 1 5
4
Note: we can't write this as
2((⊣⊃⍒)⊃⊢)3 1 4 1 5 ⍝ RANK ERROR
RANK ERROR
      2((⊣⊃⍒)⊃⊢)3 1 4 1 5 ⍝ RANK ERROR
      ∧
as the ⍒
is now interpreted as dyadic. For that approach to work, we'd need:
2((⊣⊃⍒⍤⊢)⊃⊢)3 1 4 1 5
4
There are several tools available that can be helpful when reading or writing tacit code. Let's look at the ]box
user command.
You're most likely aware already of ]box
-- a time-honoured mechanism for visual presentation of array structure in APL. Boxing is also helpful in investigating the structure of trains. You can specify the presentation style in several ways:
]box on -trains=box
Was ON -trains=box
|⊢÷+/⍣≡
┌─┬─────────────────┐
│|│┌─┬─┬───────────┐│
│ ││⊢│÷│┌─────┬─┬─┐││
│ ││ │ ││┌─┬─┐│⍣│≡│││
│ ││ │ │││+│/││ │ │││
│ ││ │ ││└─┴─┘│ │ │││
│ ││ │ │└─────┴─┴─┘││
│ │└─┴─┴───────────┘│
└─┴─────────────────┘
]box on -trains=tree
Was ON -trains=box
|⊢÷+/⍣≡
┌─┴─┐
|  ┌─┼─┐
   ⊢ ÷ ⍣
      ┌┴┐
      / ≡
     ┌─┘
     +
]box on -trains=parens
Was ON -trains=tree
|⊢÷+/⍣≡
|(⊢÷((+/)⍣≡))
Using ]box on -trains=…
shows the execution order of a train more explicitly.
The website tacit.help is an excellent tool for converting a tacit expression into an explicit form. Type your tacit expression into the text box, and corresponding dfns will be generated, both monadic, and dyadic forms.
Note: it cannot go the other way (explicit to tacit). Writing a tool to do this is left as an exercise for the interested reader.
Practice makes perfect. Here is a batch of slightly more complex dfns for you to practice your tacit skills.
Example 1
This function calculates the number of leading 1s in a Boolean vector:
{(⊖⍵)⊥⊖⍵} 1 1 1 0 1 1 0
3
This looks like a straightforward fgh fork, and of course it can be, but it's unnecessarily costly:
fork, and of course it can be, but it's unnecessarily costly:
(⊖⊥⊖) 1 1 1 0 1 1 0 0 ⍝ Inefficient!
This approach calculates the reverse twice. We can resolve this with a commute (⍨
) if we instead use the compositional operator over (⍥
), either at the end:
(⊥⍥⊖⍨) 1 1 1 0 1 1 0 0
3
or directly on the decode (⊥
) itself:
(⊥⍨⍥⊖) 1 1 1 0 1 1 0 0
3
There are alternative ways of formulating the solution, of course, such as summing the and-scan:
(+⌿∧⍀)1 1 1 0 1 1 0 0
3
The and-scan 'turns off' all 1s following the first 0.
Example 2
This expression splits a right argument vector on every occurrence of a character in the left argument:
',;' {(~⍵∊⍺)⊆⍵} 'ab,de;fgh'
┌──┬──┬───┐
│ab│de│fgh│
└──┴──┴───┘
First we address the 'naked' ⍵
:
',;' {(~⍵∊⍺)⊆(⍺⊢⍵)} 'ab,de;fgh'
┌──┬──┬───┐
│ab│de│fgh│
└──┴──┴───┘
Almost a dyadic fgh
fork, but the left 'tine' has the arguments the wrong way around. We can fix that with a commute (⍨
):
',;' {(~⍺∊⍨⍵)⊆(⍺⊢⍵)} 'ab,de;fgh'
┌──┬──┬───┐
│ab│de│fgh│
└──┴──┴───┘
The left tine can be simplified further by noting that we're post-processing (a.k.a. atop) the result with not (monadic ~
):
',;' {(⍺~⍤∊⍨⍵)⊆(⍺⊢⍵)} 'ab,de;fgh'
┌──┬──┬───┐
│ab│de│fgh│
└──┴──┴───┘
Now we have a clean dyadic fgh
:
',;' (~⍤∊⍨⊆⊢) 'ab,de;fgh'
┌──┬──┬───┐
│ab│de│fgh│
└──┴──┴───┘
Example 3
Calculate the windowed averages of a numeric array, where ⍺
is the window size, and ⍵
the array.
4 {(⍺+⌿⍵)÷⍺} 3 1 4 1 5
2.25 2.75
We have a "naked" ⍺
this time. Let's get rid of that with a left (dyadic ⊣
):
4 {(⍺+⌿⍵)÷(⍺⊣⍵)} 3 1 4 1 5
2.25 2.75
and the fork follows:
4 (+⌿÷⊣) 3 1 4 1 5
2.25 2.75
Example 4
Calculate the average value of a numeric array.
{(+⌿⍵)÷≢⍵} 3 1 4 1 5
2.8
Not much to do here! This is a common example used to highlight how trains often track closely to the "English" description of the algorithm: sum (+⌿) divided (÷) by the number of elements (≢):
(+⌿÷≢) 3 1 4 1 5
2.8
Example 5
The two previous functions are related -- we can think of Example 4 as a special case of Example 3, where the window size is equal to the length of the array. Can we combine these into a single, ambivalent function? Yes, we can!
4 (+⌿÷⊣∘≢) 3 1 4 1 5
2.25 2.75
(+⌿÷⊣∘≢) 3 1 4 1 5
2.8
This is, in fact, the default expression given on tacit.help. How does it work? The right tine of the fork (⊣∘≢) evaluates to ⍺ if called dyadically, and to ≢⍵ if called monadically.
In the monadic case, both left and right tacks are the same function, returning their argument:

⊣3 1 4 1 5
3 1 4 1 5
⊢3 1 4 1 5
3 1 4 1 5
so ⊣∘≢
is applying the same function (monadic ⊣
), after pre-processing the argument by tally (monadic ≢
), which just becomes tally, which is what we wanted.
In the dyadic case, ⊣∘≢
again becomes pre-processing the right argument with the monadic tally function, and then applying the dyadic function left (dyadic ⊣
), which returns the left argument verbatim.
There are a few situations where tacit either can't be used, should be avoided, or requires extra care. Let's examine them in detail.
If you're tacifying expressions where some function operands themselves take arguments, it can be difficult to convert the whole expression to tacit. Here's an example where we reverse all elements greater than the left argument:
10 {x←⍺ ⋄ ⌽@{x<⍵}⍵} 1 2 13 14 5 16 7 18
1 2 18 16 5 14 7 13
We can't rewrite this completely to tacit, as the two instances of ⍵
refer to different scopes. We can, however, still benefit a bit from the techniques we've introduced above, by rewriting the operand function to @
as tacit, thus no longer needing the temporary variable x
:
10 {⌽@(⍺<⊢)⍵} 1 2 13 14 5 16 7 18
1 2 18 16 5 14 7 13
Following the same line of reasoning, here's a function that rotates an array around its centre, ⍺
times a quarter turn, clockwise:
2 {(⌽⍤⍉⍣⍺)⍵} 3 3⍴⍳9
9 8 7
6 5 4
3 2 1
We can't make this completely tacit.
Where you have sequences primarily consisting of monadic function applications, tacit formulation, whilst not impossible, quickly becomes longer and uglier than the obvious explicit formulation. For example, consider
{⌽+⌿↑⌽¨⍵}
possible tacit formulations are:
⌽⍤(+⌿)⍤↑⍤(⌽¨)
⌽(+⌿(↑⌽¨))
⌽+⌿⍤↑⍤(⌽¨)
⌽(+⌿⍤↑⌽¨)
none of which particularly improves on the original.
A tacit expression cannot include assignment (←), dotted references such as a.b, or the self-reference ∇.

Selection is problematic. Consider compress/replicate:
{(3>⍵)⌿⍵} 3 1 4 1 5
1 1
Tacit conversion looks trivial, right? Wrong.
(3∘>⌿⊢) 3 1 4 1 5 ⍝ SYNTAX ERROR
SYNTAX ERROR: The function does not take a left argument
      (3∘>⌿⊢)3 1 4 1 5 ⍝ SYNTAX ERROR
      ∧
This is because ⌿
is a hybrid -- it can either be compress/replicate as a function, or reduce-first as an operator. Hybrids get interpreted as operators first, if at all possible. This means that we have to employ a non-obvious, ugly hoop-jump:
(3∘> ⊢⍤⌿ ⊢)3 1 4 1 5
1 1
By having an operator to the left of ⌿
, it must be treated as a function. Thus, "same atop compress-first" can only be interpreted as the function compress-first.
How do we fix this? The plan is to introduce a new operator, behind/reverse-compose (⍛
), in v20.0 of Dyalog APL. In the monadic case, we can model that as
∆←{(⍵⍵⍨∘⍺⍺⍨)⍵} ⍝ Behind/reverse-compose
which enables us to write the much nicer
(3∘>∆⌿) 3 1 4 1 5
1 1
Bracket indexing doesn't work in trains, as it doesn't follow the normal function call conventions. Unfortunately, functional indexing, squad (dyadic ⌷
) is a bit un-ergonomic. Let's say we want to sort a character vector based on a custom collating sequence:
'aeiou' {⍵[⍺⍋⍵]} 'hello world'
eoohll wrld
It would be nice if we could write
'aeiou' {⍵⌷⍨⍺⍋⍵} 'hello world' ⍝ LENGTH ERROR
LENGTH ERROR
      'aeiou'{⍵⌷⍨⍺⍋⍵}'hello world' ⍝ LENGTH ERROR
      ∧
instead of the clumsier
'aeiou' {⍵⌷⍨⊂⍺⍋⍵} 'hello world'
eoohll wrld
which, unfortunately, means that the tacit version has to become
'aeiou' (⊂⍤⍋⌷⊢) 'hello world'
eoohll wrld
In v20.0, we're introducing a new primitive select (dyadic ⊇
), sometimes half-jokingly referred to as 'sane indexing'. It's modelled as
I←⌷⍨∘⊃⍨⍤0 99
meaning that we'd get the rather nice tacit formulation
'aeiou' (⍋I⊢) 'hello world' ⍝ I is select, ⊇
eoohll wrld
So far, we've focused on rewriting explicit expressions into tacit form. Sometimes you might want to do the reverse, so let's practice that on the following set of examples. The way to approach these is to remember that every train consists of 3-trains (forks) and 2-trains (atops), going right to left. Or simply copy and paste into tacit.help.
Example 1
=∘⌊⍨ ⍝ monadic
This function checks which numbers in an array are integers. Equality, after the right argument is pre-processed by floor (monadic ⌊
), and the same argument to the left and right. This is:
{⍵=⌊⍵}
Example 2
××⌊⍤| ⍝ monadic
This is a fork. Starting from the right, we have "floor atop absolute-value" (⌊⍤|
) as the right tine. The middle function is multiply (dyadic ×
) and the left tine is signum (monadic ×
).
{(×⍵) × ⌊|⍵}
Example 3
⌊∘≢↑⊢ ⍝ dyadic
Again a fork. The left tine is "min (dyadic ⌊), pre-process right with tally (monadic ≢)". The middle function is take (dyadic ↑) and the right tine is right (dyadic ⊢). This function caps the length of ⍵ to ⍺, if ⍺ is smaller than ≢⍵.
{(⍺⌊≢⍵) ↑ ⍵}
Example 4
≡⍥(⎕C~∘' ') ⍝ dyadic
This function does a case-insensitive comparison, after first removing all spaces.
Match (dyadic ≡
), pre-process both with case-convert (monadic ⎕C
) without (dyadic ~
) space. We'll factor out the pre-process function as a local dfn:
{f ← {⎕C⍵~' '} ⋄ (f⍺)≡f⍵}
Example 5
+⌿⊢>+⌿÷≢ ⍝ monadic
Wow. Deep breath. From the right, the first three functions make up a fork: +⌿÷≢, which we by now probably recognise as the average. Let's temporarily name that F.
F ← {(+⌿⍵)÷≢⍵}
+⌿ ⊢>F ⍝ monadic
Again, take the three right-most functions and form the next fork: same greater-than Avg. Same in this case just means ⍵
, so we can add this comparison to our F
function:
F ← {⍵>(+⌿⍵)÷≢⍵}
+⌿ F ⍝ monadic
and what remains is to inline the final atop:
F ← {+⌿ ⍵>(+⌿⍵)÷≢⍵}
Example 6
+∘÷⍣≡ ⍝ dyadic
This calculates the golden ratio. Not much to do here; only the left operand of the power (⍣
) operator, which is a beside -- sum after pre-processing the right argument with reciprocal (monadic ÷
):
{⍺+÷⍵}⍣≡
1 (+∘÷⍣≡) 1
1.618033989
A neat, but perhaps lesser-known, feature of Dyalog APL is that under certain circumstances, a subset of primitives will maintain a hash table for subsequent lookups that can boost performance. One way to inform the interpreter that you want to do multiple lookups -- and so would benefit from a persistent hash table -- is to bind an array to a lookup primitive, and tacit formulations make this convenient.
The following primitives support this style:
| Not hashed | Hashed |
|---|---|
| P ⍋ s | P∘⍋ s |
| P ⍒ s | P∘⍒ s |
| P ⍳ s | P∘⍳ s |
| P ∪ s | P∘∪ s |
| s ∊ P | (∊∘P) s |
| s ~ P | (~∘P) s |
| s ∩ P | (∩∘P) s |
Creating and maintaining a hash table has a cost associated with it, so it's not always clear-cut if it's worthwhile. We can compare the performance of hashed and unhashed versions of dyadic iota and dyadic grade if we pre-populate the hash before the comparison itself:
'cmpx'⎕CY'dfns'
s←'Hello, World!'
AVi←⎕AV∘⍳ ⋄ {}AVi s
cmpx '⎕AV⍳s' 'AVi s'
⎕AV⍳s → 3.8E¯7 |   0% ⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕
AVi s → 2.8E¯7 | -27% ⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕
AVg←⎕AV∘⍋ ⋄ {}AVg s
cmpx '⎕AV⍋s' 'AVg s'
⎕AV⍋s → 1.2E¯6 |   0% ⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕
AVg s → 2.4E¯7 | -81% ⎕⎕⎕⎕⎕⎕⎕⎕
However, if we take the creation of the hash table itself into account, the results look rather different:
cmpx '⎕AV⍳s' '⎕AV∘⍳ s'
⎕AV⍳s   → 4.3E¯7 |   0% ⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕
⎕AV∘⍳ s → 8.1E¯7 | +87% ⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕
cmpx '⎕AV⍋s' '⎕AV∘⍋ s'
⎕AV⍋s   → 1.3E¯6 |     0% ⎕⎕⎕
⎕AV∘⍋ s → 1.9E¯5 | +1400% ⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕
The cost of creating the hash is likely only worth it if you intend to do many lookups.
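The idea translates directly to other languages: pay the cost of building a hash once, then reuse it for many lookups. A Python sketch of the P∘⍳ pattern (the names `index` and `index_of` are illustrative):

```python
P = ['a', 'b', 'c']

# Build the hash once — the analogue of binding the array P to ⍳ with P∘⍳
index = {v: i for i, v in enumerate(P)}

def index_of(item):
    # Not-found yields len(P), mirroring ⍳'s "one past the end" convention (⎕IO=0)
    return index.get(item, len(P))

print([index_of(c) for c in 'cabz'])   # [2, 0, 1, 3]
```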
Here is a handy little cheat-sheet:
Compositional operators:
⍥ Pre-process both
∘ Pre-process right
⍤ Post-process result
⍨ Selfie
Operators: long left scope
Trains: odd-even from right
Tools:
]box on -t=…
Watch out for these:
Don't try: ← a.b ∇
Selection issues: ⊢⍤⌿ and ⊂⍤… ⌷ …