A direct function (dfn, pronounced "dee fun") is an alternative way to define a function or operator (a higher-order function) in the programming language APL. A direct operator can also be called a dop (pronounced "dee op"). They were invented by John Scholes in 1996.[1] They are a unique combination of array programming, higher-order functions, and functional programming, and are a major distinguishing advance of early 21st-century APL over prior versions.
A dfn is a sequence of possibly guarded expressions (or just a guard) between { and }, separated by ⋄ or new-lines, wherein ⍺ denotes the left argument and ⍵ the right, and ∇ denotes recursion (function self-reference). For example, the function PT tests whether each row of ⍵ is a Pythagorean triplet (by testing whether the sum of squares equals twice the square of the maximum).
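One possible formulation of PT consistent with that description (a sketch; the definition in the cited source may differ in detail):

PT←{(+/⍵*2)=2×(⌈/⍵)*2}      ⍝ 1 if the sum of squares equals twice the square of the maximum
   PT 3 4 5
1
   PT 3 3⍴ 3 4 5  4 5 6  5 12 13     ⍝ one row per candidate triplet
1 0 1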
The factorial function as a dfn:

fact←{0=⍵:1 ⋄ ⍵×∇⍵-1}
   fact 5
120
   fact¨⍳10          ⍝ fact applied to each element of 0 to 9
1 1 2 6 24 120 720 5040 40320 362880
Description
The rules for dfns are summarized by the following "reference card":[2]
{⍺ function ⍵}                 {⍺⍺ operator ⍵⍵}               :   guard
⍺   left argument              ⍺⍺  left operand               ::  error-guard
⍵   right argument             ⍵⍵  right operand              ⍺←  default left argument
∇   self-reference (function)  ∇∇  self-reference (operator)  s←  shy result
A dfn is a sequence of possibly guarded expressions (or just a guard) between { and }, separated by ⋄ or new-lines.
expression
guard: expression
guard:
The expressions and/or guards are evaluated in sequence. A guard must evaluate to a 0 or 1; its associated expression is evaluated if the value is 1. A dfn terminates after the first unguarded expression which does not end in assignment, or after the first guarded expression whose guard evaluates to 1, or if there are no more expressions. The result of a dfn is that of the last evaluated expression. If that last evaluated expression ends in assignment, the result is "shy"—not automatically displayed in the session.
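A minimal sketch (hypothetical examples, not from the cited references) of guarded expressions and of a shy result:

sign←{⍵<0:¯1 ⋄ ⍵=0:0 ⋄ 1}    ⍝ guards are tried in order; the first true guard supplies the result
   sign¨ ¯3 0 7
¯1 0 1
add←{sum←⍺+⍵}                ⍝ the last expression ends in assignment, so the result is shy
   2 add 3                   ⍝ nothing is displayed ...
   ⎕←2 add 3                 ⍝ ... but the result is available
5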
⍺ denotes the left function argument and ⍵ the right; ⍺⍺ denotes the left operand and ⍵⍵ the right. If ⍵⍵ occurs in the definition, then the dfn is a dyadic operator; if only ⍺⍺ occurs but not ⍵⍵, then it is a monadic operator; if neither ⍺⍺ nor ⍵⍵ occurs, then the dfn is a function.
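For example, a dfn mentioning ⍺⍺ but not ⍵⍵ is a monadic operator; a minimal sketch (hypothetical, not from the cited references):

twice←{⍺⍺ ⍺⍺ ⍵}      ⍝ ⍺⍺ occurs but ⍵⍵ does not: a monadic operator
   {⍵+1}twice 5      ⍝ the derived function applies its operand two times
7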
The special syntax ⍺←expression is used to give a default value to the left argument if a dfn is called monadically, that is, called with no left argument. The ⍺←expression is not evaluated otherwise.
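For example, a sketch with the hypothetical name root:

root←{⍺←2 ⋄ ⍵*÷⍺}    ⍝ ⍺ defaults to 2 when root is called monadically
   root 64           ⍝ square root
8
   3 root 64         ⍝ cube root
4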
∇ denotes recursion or self-reference by the function, and ∇∇ denotes self-reference by the operator. Such denotation permits anonymous recursion.
Error trapping is provided through error-guards, errnums::expression. When an error is generated, the system searches dynamically through the calling functions for an error-guard that matches the error. If one is found, the execution environment is unwound to its state immediately prior to the error-guard's execution and the associated expression of the error-guard is evaluated as the result of the dfn.
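A minimal sketch of an error-guard (hypothetical example, not from the cited references); error number 11 is DOMAIN ERROR:

recip←{11::0 ⋄ ÷⍵}   ⍝ if a DOMAIN ERROR occurs later in the dfn, the result is 0
   recip 4
0.25
   recip 0            ⍝ ÷0 signals DOMAIN ERROR, which the guard catches
0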
Additional descriptions, explanations, and tutorials on dfns are available in the cited articles.[3][4][5][6][7]
Examples
The examples here illustrate different aspects of dfns. Additional examples are found in the cited articles.[8][9][10]
Default left argument
The function {⍺+0j1×⍵} adds ⍺ to 0j1 (i, the square root of −1) times ⍵.
The significance of this function can be seen as follows:
Complex numbers can be constructed as ordered pairs of real numbers, similar to how integers can be constructed as ordered pairs of natural numbers and rational numbers as ordered pairs of integers. For complex numbers, {⍺+0j1×⍵} plays the same role as - for integers and ÷ for rational numbers.[11]: §8
Moreover, analogous to how monadic -⍵ ⇔ 0-⍵ (negate) and monadic ÷⍵ ⇔ 1÷⍵ (reciprocal), a monadic definition of the function is useful, effected by specifying a default value of 0 for ⍺: if j←{⍺←0 ⋄ ⍺+0j1×⍵}, then j ⍵ ⇔ 0 j ⍵ ⇔ 0+0j1×⍵.
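For example (session output as in Dyalog APL):

j←{⍺←0 ⋄ ⍺+0j1×⍵}
   j 3                ⍝ monadic: ⍺ defaults to 0
0J3
   3 j 4              ⍝ dyadic: 3 + i×4
3J4
   j 3 4 5
0J3 0J4 0J5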
The last sequence, the number of primes less than powers of 10, is an initial segment of OEIS A006880. The last number, 50847534, is the number of primes less than 10⁹. It is called Bertelsen's number, memorably described by MathWorld as "an erroneous name erroneously given the erroneous value of π(10⁹) = 50847478".[12]
sieve uses two different methods to mark composites with 0s, both effected using local anonymous dfns: The first uses the sieve of Eratosthenes on an initial mask of 1 and a prefix of the primes 2 3...43, using the insert operator ⌿ (right fold). (The length of the prefix is obtained by comparison with the primorial function ×⍀p.) The second finds the smallest new prime q remaining in b (q←b⍳1), and sets to 0 bit q itself and bits at q times the numbers at remaining 1 bits in an initial segment of b (⍸b↑⍨⌈n÷q). This second dfn uses tail recursion.
Tail recursion
Typically, the factorial function is defined recursively (as above), but it can be coded to exploit tail recursion by using an accumulator left argument.[13]
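One such accumulator formulation (a sketch; the version in the cited reference may differ in detail):

fact←{⍺←1 ⋄ 0=⍵:⍺ ⋄ (⍺×⍵)∇⍵-1}    ⍝ ⍺ accumulates the product; the recursive call is in tail position
   fact 5
120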
The following function computes the determinant of a square complex matrix, also using tail recursion:

det←{                 ⍝ determinant of a square complex matrix
  ⍺←1                 ⍝ product of co-factor coefficients so far
  0=≢⍵:⍺              ⍝ result for 0-by-0
  (i j)←(⍴⍵)⊤⊃⍒|,⍵    ⍝ row and column index of the maximal element
  k←⍳≢⍵
  (⍺×⍵[i;j]ׯ1*i+j)∇⍵[k~i;k~j]-⍵[k~i;j]∘.×⍵[i;k~j]÷⍵[i;j]
}
Multiple recursion
A partition of a non-negative integer n is a vector v of positive integers such that n=+⌿v, where the order of the elements of v is not significant. For example, 2 2 and 2 1 1 are partitions of 4, and 2 1 1 and 1 2 1 and 1 1 2 are considered to be the same partition.
The basis step 1≥⍵:0≤⍵ states that for 1≥⍵, the result of the function is 0≤⍵, that is, 1 if ⍵ is 0 or 1 and 0 otherwise. The recursive step is multiply recursive: for example, pn 200 results in the function being applied to each element of rec 200,
and pn 200 requires longer than the age of the universe to compute, owing to the number of times the function calls itself.[10]: §16 The compute time can be reduced by memoization, here implemented as the direct operator (higher-order function) M:
M←{
  f←⍺⍺                            ⍝ the operand function
  i←2+'⋄'⍳⍨t←2↓,⎕cr'f'            ⍝ split the source text of f after its first ⋄
  ⍎'{T←(1+⍵)⍴¯1 ⋄ ',(i↑t),'¯1≢T[⍵]:⊃T[⍵] ⋄ ⊃T[⍵]←⊂',(i↓t),'⍵}⍵'
}
   pn M 200
3.973E12
   0⍕ pn M 200      ⍝ format to 0 decimal places
3972999029388
This value of pn M 200 agrees with that computed by Hardy and Ramanujan in 1918.[16]
The memo operator M defines a variant of its operand function ⍺⍺ that uses a cache T, and then evaluates that variant. With the operand pn, the variant caches the result for each argument in T and reuses it whenever the same argument recurs, so that each value of pn is computed only once.
Quicksort on an array ⍵ works by choosing a "pivot" at random among its major cells, then catenating the sorted major cells which strictly precede the pivot, the major cells equal to the pivot, and the sorted major cells which strictly follow the pivot, as determined by a comparison function ⍺⍺. It is defined as a direct operator (dop) Q.
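A minimal sketch of such an operator, restricted to vector arguments and assuming the comparison operand returns negative, zero, or positive values (as (×-) does); the definition in the cited reference may differ:

Q←{
  1≥≢⍵:⍵                              ⍝ 0 or 1 items: already sorted
  p←⍵⌷⍨?≢⍵                            ⍝ choose a pivot at random
  c←⍵ ⍺⍺ p                            ⍝ compare every item against the pivot
  (∇⍵⌿⍨c<0),(⍵⌿⍨c=0),∇⍵⌿⍨c>0          ⍝ sort those before, keep equals, sort those after
}
   (×-)Q 3 1 4 1 5 9 2 6
1 1 2 3 4 5 6 9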
Q3 is a variant that catenates the three parts enclosed by the function ⊂ instead of the parts per se. The three parts generated at each recursive step are apparent in the structure of the final result. Applying the function derived from Q3 to the same argument multiple times gives different results because the pivots are chosen at random. In-order traversal of the results does yield the same sorted array.
The above formulation is not new; see for example Figure 3.7 of the classic The Design and Analysis of Computer Algorithms.[17] However, unlike the pidgin ALGOL program in Figure 3.7, Q is executable, and the partial order used in the sorting is an operand, the (×-) in the examples above.[9]
Dfns with operators and trains
Dfns, especially anonymous dfns, work well with operators and trains. The following snippet solves a "Programming Pearls" puzzle:[18] given a dictionary of English words, here represented as the character matrix a, find all sets of anagrams.
The algorithm works by sorting the rows individually ({⍵[⍋⍵]}⍤1⊢a), and these sorted rows are used as keys ("signature" in the Programming Pearls description) to the key operator ⌸ to group the rows of the matrix.[9]: §3.3 The grouping expression ({⍵[⍋⍵]}⍤1) {⊂⍵}⌸ ⊢ is a train, a syntactic form employed by APL to achieve tacit programming. Here, it is an isolated sequence of three functions such that (f g h)⍵ ⇔ (f⍵)g(h⍵), whence applying it to a is equivalent to ({⍵[⍋⍵]}⍤1⊢a){⊂⍵}⌸a.
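For illustration, with a small hypothetical matrix of words (display abbreviated to the group sizes):

a←7 5⍴'earthheartquiethaterthreetherestare'   ⍝ 7 five-letter words, one per row
   ≢¨ ({⍵[⍋⍵]}⍤1⊢a) {⊂⍵}⌸ a                    ⍝ group the rows by sorted-letter key; count each group
3 1 2 1                                         ⍝ earth heart hater / quiet / three there / stare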
Lexical scope
When an inner (nested) dfn refers to a name, it is sought by looking outward through enclosing dfns rather than down the call stack. This regime is said to employ lexical scope instead of APL's usual dynamic scope. The distinction becomes apparent only if a call is made to a function defined at an outer level. For the more usual inward calls, the two regimes are indistinguishable.[19]: p.137
For example, in the following function which, the variable ty is defined both in which itself and in the inner function f1. When f1 calls outward to f2 and f2 refers to ty, it finds the outer one (with value 'lexical') rather than the one defined in f1 (with value 'dynamic'):
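A sketch consistent with that description (the exact code in the cited reference may differ):

which←{
  ty←'lexical'
  f1←{ty←'dynamic' ⋄ f2 ⍵}    ⍝ f1 defines its own ty, then calls outward to f2
  f2←{ty,⍵}                   ⍝ f2 finds ty by looking outward through enclosing dfns
  f1 ⍵
}
   which ' scope'
lexical scope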
The following function illustrates use of error guards:[19]: p.139
plus←{
  tx←'catch all' ⋄ 0::tx
  tx←'domain' ⋄ 11::tx
  tx←'length' ⋄ 5::tx
  ⍺+⍵
}
   2 plus 3                  ⍝ no errors
5
   2 3 4 5 plus 'three'      ⍝ argument lengths don't match
length
   2 3 4 5 plus 'four'       ⍝ can't add characters
domain
   2 3 plus 3 4⍴5            ⍝ can't add vector to matrix
catch all
In APL, error number 5 is "length error"; error number 11 is "domain error"; and error number 0 is a "catch all" for error numbers 1 to 999.
The example shows the unwinding of the local environment before an error-guard's expression is evaluated. The local name tx is set to describe the purview of its following error-guard. When an error occurs, the environment is unwound to expose tx's statically correct value.
Dfns versus tradfns
Since direct functions are dfns, APL functions defined in the traditional manner are referred to as tradfns, pronounced "trad funs". Here, dfns and tradfns are compared by consideration of the function sieve: On the left is a dfn (as defined above); in the middle is a tradfn using control structures; on the right is a tradfn using gotos (→) and line labels.
A dfn is named by assignment (←); a tradfn is named by embedding the name in the representation of the function and applying ⎕fx (a system function) to that representation (a short sketch follows this list).
A dfn is handier than a tradfn as an operand (see preceding items: a tradfn must be named; a tradfn is named by embedding ...).
Names assigned in a dfn are local by default; names assigned in a tradfn are global unless specified in a locals list.
Locals in a dfn have lexical scope; locals in a tradfn have dynamic scope, visible in called functions unless shadowed by their locals list.
The arguments of a dfn are named ⍺ and ⍵ and the operands of a dop are named ⍺⍺ and ⍵⍵; the arguments and operands of a tradfn can have any name, specified on its leading line.
The result (if any) of a dfn is unnamed; the result (if any) of a tradfn is named in its header.
A default value for ⍺ is specified more neatly than for the left argument of a tradfn.
Recursion in a dfn is effected by invoking ∇ or ∇∇ or its name; recursion in a tradfn is effected by invoking its name.
Flow control in a dfn is effected by guards and function calls; that in a tradfn is by control structures and → (goto) and line labels.
Evaluating an expression in a dfn not ending in assignment causes return from the dfn; evaluating a line in a tradfn not ending in assignment or goto displays the result of the line.
A dfn returns on evaluating an expression not ending in assignment, on evaluating a guarded expression, or after the last expression; a tradfn returns on → (goto) line 0 or a non-existing line, or on evaluating a :Return control structure, or after the last line.
The simpler flow control in a dfn makes it easier to detect and implement tail recursion than in a tradfn.
A dfn may call a tradfn and vice versa; a dfn may be defined in a tradfn, and vice versa.
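As an illustration of the first point above, a sketch with hypothetical names:

mean←{(+⌿⍵)÷≢⍵}                    ⍝ a dfn, named by assignment
   ⎕fx 'r←mean2 x' 'r←(+⌿x)÷≢x'    ⍝ a tradfn, fixed from its character representation
mean2
   (mean 1 2 3 4)(mean2 1 2 3 4)
2.5 2.5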
History
Kenneth E. Iverson, the inventor of APL, was dissatisfied with the way user functions (tradfns) were defined. In 1974, he devised "formal function definition" or "direct definition" for use in exposition.[20] A direct definition has two or four parts, separated by colons:

name : expression
name : expression0 : proposition : expression1
Within a direct definition, ⍺ denotes the left argument and ⍵ the right argument. In the first instance, the result of expression is the result of the function; in the second instance, the result of the function is that of expression0 if proposition evaluates to 0, or expression1 if it evaluates to 1. Assignments within a direct definition are dynamically local. Examples of using direct definition are found in the 1979 Turing Award Lecture[21] and in books and application papers.[22][23][24][25][9]
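For example, using the four-part form, an absolute-value function might be written (a hypothetical illustration):

abs : ⍵ : ⍵<0 : -⍵

so that the result is ⍵ when the proposition ⍵<0 evaluates to 0, and -⍵ when it evaluates to 1.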
Direct definition was too limited for use in larger systems. The ideas were further developed by multiple authors in multiple works,[26]: §8 [27][28]: §4.17 [29][30][31][32] but the results were unwieldy. Of these, the "alternative APL function definition" of Bunda in 1987[31] came closest to current facilities, but was flawed by conflicts with existing symbols and by error handling that would have caused practical difficulties, and it was never implemented. The main distillates from the different proposals were that (a) the function being defined is anonymous, with subsequent naming (if required) effected by assignment, and (b) the function is denoted by a symbol, thereby enabling anonymous recursion.[9]
In 1996, John Scholes of Dyalog Limited invented direct functions (dfns).[1][6][7] The ideas originated in 1989 when he read a special issue of The Computer Journal on functional programming.[33] He then proceeded to study functional programming and became strongly motivated ("sick with desire", like Yeats) to bring these ideas to APL.[6][7] He initially worked in stealth because he was concerned that the changes might be judged too radical and an unnecessary complication of the language; other observers say that he worked in stealth because Dyalog colleagues were not so enamored of the ideas, thinking he was wasting his time and causing trouble for people. Dfns were first presented in the Dyalog Vendor Forum at the APL '96 Conference and released in Dyalog APL in early 1997.[1] Acceptance and recognition were slow in coming: as late as 2008, in Dyalog at 25,[34] a publication celebrating the 25th anniversary of Dyalog Limited, dfns were barely mentioned (mentioned twice as "dynamic functions" and without elaboration). As of 2019, dfns are implemented in Dyalog APL,[19] NARS2000,[35] and ngn/apl.[36] They also play a key role in efforts to exploit the computing abilities of a graphics processing unit (GPU).[37][9]