Introduction to the Julia programming language
11 Multiple Dispatch
You may have noticed that we seem to write functions in Julia much as we do in Python: without any requirement for explicit types.
We can specify types if we want to:
function multiply(x::Int64, y::Int64)
    x*y
end
multiply(5,6)
30
multiply(9.0,1)
MethodError: no method matching multiply(::Float64, ::Int64)

Closest candidates are:
  multiply(!Matched::Int64, ::Int64)
   @ Main ~/uni/Vorlesungen/Julia-Course-Uni-HD/julia-11-multiple-dispatch.ipynb:1

Stacktrace:
 [1] top-level scope
   @ ~/uni/Vorlesungen/Julia-Course-Uni-HD/julia-11-multiple-dispatch.ipynb:1
As you can see, unlike Python's type hints (which are not enforced at runtime), type annotations in a Julia function declaration are taken seriously - Julia will not provide the function for arguments of other types.
However, this also reveals a difference between how Julia and Python think about functions in the first place: Julia always generates different versions of a function for different types - it's just that it waits until a particular set of argument types is required before doing the work.
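We can peek at this specialisation machinery ourselves. As a minimal sketch (the square function below is purely illustrative, and InteractiveUtils is loaded automatically in the REPL and in notebooks), @code_typed shows the type-specialised code Julia generates for each combination of argument types:

using InteractiveUtils   # provides @code_typed

square(x) = x * x        # a single, untyped definition...

@code_typed square(3)    # ...compiled separately for Int64 (integer multiply)
@code_typed square(3.0)  # ...and for Float64 (floating-point multiply)

The two calls print different lowered code: the first uses an integer multiplication instruction, the second a floating-point one, even though we only wrote square once.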
We can declare additional versions of multiply for different arguments explicitly:
function multiply(x::Float64, y::Int64)
    x*y + 1   # the +1 is only here so we can see which method ran in the examples below
end
multiply (generic function with 2 methods)
Notice that when we do so, Julia reports that multiply is now a "generic function with 2 methods" - those methods being "multiply a Float64 and an Int64" and "multiply two Int64s".
Julia will always pick the version of our function that matches the types of its arguments - this is called Multiple Dispatch and is the basis for how Julia implements Object Orientation without classes. Because the "method" of the function used depends on the types of all of its arguments, Julia does not treat methods as being "owned" by the first argument's type - you can't type "myobject.method()" - instead, you simply use the function "as is":
multiply(5.5, 2) #Float64, Int64
12.0
multiply(6,5) #Int64, Int64
30
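If you want to check which method a given call will dispatch to, the @which macro (from the InteractiveUtils standard library, loaded automatically in the REPL and in notebooks) reports the selected signature - a small sketch:

using InteractiveUtils   # provides @which

@which multiply(5.5, 2)  # reports the (Float64, Int64) method
@which multiply(6, 5)    # reports the (Int64, Int64) method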
We can also provide a generic version that will be used for any other combination of types we haven't explicitly covered:
function multiply(x, y)
    x*y - 1   # again, the -1 just marks which method was called
end
multiply (generic function with 3 methods)
multiply(2,6.5) #Int64, Float64 - uses our generic method since we didn't explicitly cover this
12.0
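We can list every method of a generic function with the built-in methods function; a quick check (output abbreviated here):

methods(multiply)   # lists all three signatures:
                    #   multiply(x::Int64, y::Int64)
                    #   multiply(x::Float64, y::Int64)
                    #   multiply(x, y)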
Julia's type system is hierarchical, so we can also specify the domain of a method via abstract types - for example, that a function works only on Numbers (of which Ints and Floats are both subtypes):
function myfunc(x::Number)
    x^2
end
print(myfunc(2))
print("\n")
print(myfunc(2.4))
4
5.76
myfunc("Banana")
MethodError: no method matching myfunc(::String)

Closest candidates are:
  myfunc(!Matched::Number)
   @ Main ~/uni/Vorlesungen/Julia-Course-Uni-HD/julia-11-multiple-dispatch.ipynb:1

Stacktrace:
 [1] top-level scope
   @ ~/uni/Vorlesungen/Julia-Course-Uni-HD/julia-11-multiple-dispatch.ipynb:1
A function without any type specifiers is equivalent to one where all the type specifiers are ::Any - the "Any" type matches all types in the system.
We can also, as in C++ etc., use parameterised types to restrict the relationship between argument types - for example, to require that all arguments have the same type (and, in this case, that this type is a kind of Number):
function myfunc2(x::T, y::T) where {T<:Number}
    x + y
end
myfunc2(2,3)
5
myfunc2(2.0, 3)
MethodError: no method matching myfunc2(::Float64, ::Int64)

Closest candidates are:
  myfunc2(::T, !Matched::T) where T<:Number
   @ Main ~/uni/Vorlesungen/Julia-Course-Uni-HD/julia-11-multiple-dispatch.ipynb:1

Stacktrace:
 [1] top-level scope
   @ ~/uni/Vorlesungen/Julia-Course-Uni-HD/julia-11-multiple-dispatch.ipynb:1
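If we did want mixed Number arguments to work, one common pattern is a fallback method that promotes both arguments to a shared type and then re-dispatches. A hypothetical sketch (this mirrors the promotion pattern used throughout Julia's Base, but is not part of the course's code):

# Less specific than the (T, T) method, so it only catches mixed-type calls;
# it promotes both arguments to a common type and re-dispatches.
function myfunc2(x::Number, y::Number)
    myfunc2(promote(x, y)...)
end

myfunc2(2.0, 3)   # promote gives (2.0, 3.0), so the (T, T) method returns 5.0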
Because all Julia functions are "open", even those provided by other packages, we can define additional methods for existing (built-in) functions:
struct RelativisticSpeed
    v::Float64   # speed as a fraction of the speed of light, c
end

import Base.+   # necessary to let us extend this built-in function

# relativistic velocity addition, (v1 + v2)/(1 + v1*v2/c^2), with c = 1
function +(x::RelativisticSpeed, y::RelativisticSpeed)
    RelativisticSpeed( (x.v + y.v) / (1 + x.v*y.v) )
end
RelativisticSpeed(0.9)+RelativisticSpeed(0.9)
RelativisticSpeed(0.994475138121547)
Whilst this risks the same issues as operator overloading in C++ - breaking natural assumptions about the behaviour of operations - it can also be very useful, especially if particular algorithms have efficient representations for a given type or types.
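The same openness applies beyond operators. As a final sketch (an assumption for illustration, not part of the course's code), we could give RelativisticSpeed a friendlier printed form by adding a method to Base.show, the function Julia's display system calls:

# Customise how RelativisticSpeed values are displayed.
# Defining the method via its qualified name needs no import statement.
function Base.show(io::IO, s::RelativisticSpeed)
    print(io, s.v, " c")   # e.g. "0.9 c" - the speed as a fraction of c
end

RelativisticSpeed(0.9) + RelativisticSpeed(0.9)   # now displays as: 0.994475138121547 c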