Replies: 3 comments 4 replies
-
The arrays must be Float32; with that change it works.
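A minimal sketch of that fix, assuming the Float64 limitation reported elsewhere in this thread (the name `W` and the array size are taken from the verified example below, not prescribed by Metal.jl):

```julia
using Metal  # assumes an Apple-silicon machine with Metal.jl installed

# Metal GPUs have no Float64 support, so convert to Float32 before uploading.
W = mtl(Float32.(rand(2, 5)))
eltype(W)  # should be Float32
```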
3 replies
1 reply
-
Verified that this works on an M2 MacBook Air, macOS 13.1.1, Julia 1.9.0-rc2:

```julia
using Metal, Flux

W = mtl(rand(2, 5))
b = mtl(rand(2))

predict(x) = W*x .+ b
loss(x, y) = sum((predict(x) .- y).^2)

x, y = mtl(rand(5)), mtl(rand(2))
loss(x, y)
```

Output:
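A likely reason the snippet above avoids the Float64 error is that `mtl`, like CUDA.jl's `cu`, converts element types to Float32 when uploading (an assumption worth checking against the Metal.jl docs). This is easy to verify directly:

```julia
using Metal

A = mtl(rand(2, 5))  # rand(2, 5) is Float64 on the CPU
eltype(A)            # expected to be Float32 if mtl converts on upload
```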
0 replies
-
1. From the Flux documentation demo:

   ERROR: LoadError: InvalidIRError: compiling kernel #63#64(Metal.mtlKernelContext, MtlDeviceVector{Float64, 1}, MtlDeviceMatrix{Float64, 1}, MtlDeviceVector{Float64, 1}) resulted in invalid LLVM IR
   Reason: unsupported use of double floating-point value

   Just replacing the array constructor's name with MtlArray works fine, but the loss function still seems to have a problem, and I don't know how to solve it.

2. Is there a built-in gpu function like the one in CUDA.jl? The model's layers can be defined one by one, but fmap is not working, like this:
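The layer-by-layer approach mentioned above might look something like this sketch (the `Chain`/`Dense` model is illustrative, not from the thread, and the `to_metal` helper is hypothetical — the point is moving each layer's arrays with `mtl` individually instead of a single `fmap` over the whole model):

```julia
using Flux, Metal

# A small illustrative model (not from the thread).
model = Chain(Dense(5 => 3, relu), Dense(3 => 2))

# Hypothetical helper: rebuild one Dense layer with its arrays on the GPU.
to_metal(d::Dense) = Dense(mtl(d.weight), mtl(d.bias), d.σ)

# Move the layers one by one, as described above.
gpu_model = Chain(map(to_metal, model.layers)...)

x = mtl(Float32.(rand(5)))
gpu_model(x)  # forward pass on the GPU
```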