Error when using @Const macro with Julia --check-bounds=no startup option #498

Open

0samuraiE commented Jul 29, 2024

Hello, I've encountered the error described in the title and would like to report it.
To reproduce, run the following code with `julia --check-bounds=no`:

using CUDA
using KernelAbstractions

N = 16
x = CUDA.ones(N)
y = CUDA.zeros(N)

# Simple copy kernel; x is declared read-only via @Const
@kernel function test!(y, @Const(x))
    I = @index(Global, Cartesian)
    y[I] = x[I]
end

test!(CUDABackend(), N)(y, x; ndrange=N)
KernelAbstractions.synchronize(CUDABackend())
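
For comparison, here is the same kernel without @Const (a hypothetical variant, included only to illustrate that the problem appears tied to the read-only annotation; I have not verified it in this exact environment):

@kernel function test_noconst!(y, x)
    I = @index(Global, Cartesian)
    y[I] = x[I]
end

test_noconst!(CUDABackend(), N)(y, x; ndrange=N)
KernelAbstractions.synchronize(CUDABackend())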

The error message is:

Reason: unsupported dynamic function invocation (call to convert)
Stacktrace:
 [1] setindex!
   @ ~/.julia/packages/CUDA/Tl08O/src/device/array.jl:166
 [2] setindex!
   @ ~/.julia/packages/CUDA/Tl08O/src/device/array.jl:178
 [3] macro expansion
   @ ./REPL[6]:3
 [4] gpu_test!
   @ ~/.julia/packages/KernelAbstractions/MAxUm/src/macros.jl:95
 [5] gpu_test!
   @ ./none:0
Reason: unsupported call to an unknown function (call to jl_f__svec_ref)
Stacktrace:
  [1] getindex
    @ ./essentials.jl:769
  [2] macro expansion
    @ ~/.julia/packages/LLVM/5DlHM/src/interop/pointer.jl:342
  [3] pointerref_ldg
    @ ~/.julia/packages/CUDA/Tl08O/src/device/pointer.jl:40
  [4] unsafe_cached_load
    @ ~/.julia/packages/CUDA/Tl08O/src/device/pointer.jl:69
  [5] #const_arrayref
    @ ~/.julia/packages/CUDA/Tl08O/src/device/array.jl:156
  [6] getindex
    @ ~/.julia/packages/CUDA/Tl08O/src/device/array.jl:204
  [7] _getindex
    @ ./abstractarray.jl:1319
  [8] getindex
    @ ./abstractarray.jl:1291
  [9] macro expansion
    @ ./REPL[6]:3
 [10] gpu_test!
    @ ~/.julia/packages/KernelAbstractions/MAxUm/src/macros.jl:95
 [11] gpu_test!
    @ ./none:0
Reason: unsupported dynamic function invocation (call to cconvert)
Stacktrace:
  [1] macro expansion
    @ ~/.julia/packages/LLVM/5DlHM/src/interop/pointer.jl:342
  [2] pointerref_ldg
    @ ~/.julia/packages/CUDA/Tl08O/src/device/pointer.jl:40
  [3] unsafe_cached_load
    @ ~/.julia/packages/CUDA/Tl08O/src/device/pointer.jl:69
  [4] #const_arrayref
    @ ~/.julia/packages/CUDA/Tl08O/src/device/array.jl:156
  [5] getindex
    @ ~/.julia/packages/CUDA/Tl08O/src/device/array.jl:204
  [6] _getindex
    @ ./abstractarray.jl:1319
  [7] getindex
    @ ./abstractarray.jl:1291
  [8] macro expansion
    @ ./REPL[6]:3
  [9] gpu_test!
    @ ~/.julia/packages/KernelAbstractions/MAxUm/src/macros.jl:95
 [10] gpu_test!
    @ ./none:0
Reason: unsupported dynamic function invocation (call to unsafe_convert)
Stacktrace:
  [1] macro expansion
    @ ~/.julia/packages/LLVM/5DlHM/src/interop/pointer.jl:342
  [2] pointerref_ldg
    @ ~/.julia/packages/CUDA/Tl08O/src/device/pointer.jl:40
  [3] unsafe_cached_load
    @ ~/.julia/packages/CUDA/Tl08O/src/device/pointer.jl:69
  [4] #const_arrayref
    @ ~/.julia/packages/CUDA/Tl08O/src/device/array.jl:156
  [5] getindex
    @ ~/.julia/packages/CUDA/Tl08O/src/device/array.jl:204
  [6] _getindex
    @ ./abstractarray.jl:1319
  [7] getindex
    @ ./abstractarray.jl:1291
  [8] macro expansion
    @ ./REPL[6]:3
  [9] gpu_test!
    @ ~/.julia/packages/KernelAbstractions/MAxUm/src/macros.jl:95
 [10] gpu_test!
    @ ./none:0
Reason: unsupported call to an unknown function (call to jl_f_apply_type)
Stacktrace:
  [1] Val
    @ ./essentials.jl:874
  [2] macro expansion
    @ ~/.julia/packages/LLVM/5DlHM/src/interop/pointer.jl:342
  [3] pointerref_ldg
    @ ~/.julia/packages/CUDA/Tl08O/src/device/pointer.jl:40
  [4] unsafe_cached_load
    @ ~/.julia/packages/CUDA/Tl08O/src/device/pointer.jl:69
  [5] #const_arrayref
    @ ~/.julia/packages/CUDA/Tl08O/src/device/array.jl:156
  [6] getindex
    @ ~/.julia/packages/CUDA/Tl08O/src/device/array.jl:204
  [7] _getindex
    @ ./abstractarray.jl:1319
  [8] getindex
    @ ./abstractarray.jl:1291
  [9] macro expansion
    @ ./REPL[6]:3
 [10] gpu_test!
    @ ~/.julia/packages/KernelAbstractions/MAxUm/src/macros.jl:95
 [11] gpu_test!
    @ ./none:0
Reason: unsupported call to an unknown function (call to ijl_new_structv)
Stacktrace:
  [1] Val
    @ ./essentials.jl:872
  [2] Val
    @ ./essentials.jl:874
  [3] macro expansion
    @ ~/.julia/packages/LLVM/5DlHM/src/interop/pointer.jl:342
  [4] pointerref_ldg
    @ ~/.julia/packages/CUDA/Tl08O/src/device/pointer.jl:40
  [5] unsafe_cached_load
    @ ~/.julia/packages/CUDA/Tl08O/src/device/pointer.jl:69
  [6] #const_arrayref
    @ ~/.julia/packages/CUDA/Tl08O/src/device/array.jl:156
  [7] getindex
    @ ~/.julia/packages/CUDA/Tl08O/src/device/array.jl:204
  [8] _getindex
    @ ./abstractarray.jl:1319
  [9] getindex
    @ ./abstractarray.jl:1291
 [10] macro expansion
    @ ./REPL[6]:3
 [11] gpu_test!
    @ ~/.julia/packages/KernelAbstractions/MAxUm/src/macros.jl:95
 [12] gpu_test!
    @ ./none:0
Reason: unsupported dynamic function invocation (call to _typed_llvmcall)
Stacktrace:
  [1] macro expansion
    @ ~/.julia/packages/LLVM/5DlHM/src/interop/pointer.jl:344
  [2] pointerref_ldg
    @ ~/.julia/packages/CUDA/Tl08O/src/device/pointer.jl:40
  [3] unsafe_cached_load
    @ ~/.julia/packages/CUDA/Tl08O/src/device/pointer.jl:69
  [4] #const_arrayref
    @ ~/.julia/packages/CUDA/Tl08O/src/device/array.jl:156
  [5] getindex
    @ ~/.julia/packages/CUDA/Tl08O/src/device/array.jl:204
  [6] _getindex
    @ ./abstractarray.jl:1319
  [7] getindex
    @ ./abstractarray.jl:1291
  [8] macro expansion
    @ ./REPL[6]:3
  [9] gpu_test!
    @ ~/.julia/packages/KernelAbstractions/MAxUm/src/macros.jl:95
 [10] gpu_test!
    @ ./none:0
Hint: catch this exception as `err` and call `code_typed(err; interactive = true)` to introspect the erronous code with Cthulhu.jl
Stacktrace:
  [1] check_ir(job::GPUCompiler.CompilerJob{GPUCompiler.PTXCompilerTarget, CUDA.CUDACompilerParams}, args::LLVM.Module)
    @ GPUCompiler ~/.julia/packages/GPUCompiler/Y4hSX/src/validation.jl:147
  [2] macro expansion
    @ ~/.julia/packages/GPUCompiler/Y4hSX/src/driver.jl:458 [inlined]
  [3] macro expansion
    @ ~/.julia/packages/TimerOutputs/Lw5SP/src/TimerOutput.jl:253 [inlined]
  [4] macro expansion
    @ ~/.julia/packages/GPUCompiler/Y4hSX/src/driver.jl:457 [inlined]
  [5] emit_llvm(job::GPUCompiler.CompilerJob; libraries::Bool, toplevel::Bool, optimize::Bool, cleanup::Bool, only_entry::Bool, validate::Bool)
    @ GPUCompiler ~/.julia/packages/GPUCompiler/Y4hSX/src/utils.jl:103
  [6] emit_llvm
    @ ~/.julia/packages/GPUCompiler/Y4hSX/src/utils.jl:97 [inlined]
  [7] codegen(output::Symbol, job::GPUCompiler.CompilerJob; libraries::Bool, toplevel::Bool, optimize::Bool, cleanup::Bool, strip::Bool, validate::Bool, only_entry::Bool, parent_job::Nothing)
    @ GPUCompiler ~/.julia/packages/GPUCompiler/Y4hSX/src/driver.jl:136
  [8] codegen
    @ ~/.julia/packages/GPUCompiler/Y4hSX/src/driver.jl:115 [inlined]
  [9] compile(target::Symbol, job::GPUCompiler.CompilerJob; libraries::Bool, toplevel::Bool, optimize::Bool, cleanup::Bool, strip::Bool, validate::Bool, only_entry::Bool)
    @ GPUCompiler ~/.julia/packages/GPUCompiler/Y4hSX/src/driver.jl:111
 [10] compile
    @ ~/.julia/packages/GPUCompiler/Y4hSX/src/driver.jl:103 [inlined]
 [11] #1145
    @ ~/.julia/packages/CUDA/Tl08O/src/compiler/compilation.jl:254 [inlined]
 [12] JuliaContext(f::CUDA.var"#1145#1148"{GPUCompiler.CompilerJob{GPUCompiler.PTXCompilerTarget, CUDA.CUDACompilerParams}}; kwargs::@Kwargs{})
    @ GPUCompiler ~/.julia/packages/GPUCompiler/Y4hSX/src/driver.jl:52
 [13] JuliaContext(f::Function)
    @ GPUCompiler ~/.julia/packages/GPUCompiler/Y4hSX/src/driver.jl:42
 [14] compile(job::GPUCompiler.CompilerJob)
    @ CUDA ~/.julia/packages/CUDA/Tl08O/src/compiler/compilation.jl:253
 [15] actual_compilation(cache::Dict{…}, src::Core.MethodInstance, world::UInt64, cfg::GPUCompiler.CompilerConfig{…}, compiler::typeof(CUDA.compile), linker::typeof(CUDA.link))
    @ GPUCompiler ~/.julia/packages/GPUCompiler/Y4hSX/src/execution.jl:237
 [16] cached_compilation(cache::Dict{Any, CuFunction}, src::Core.MethodInstance, cfg::GPUCompiler.CompilerConfig{GPUCompiler.PTXCompilerTarget, CUDA.CUDACompilerParams}, compiler::Function, linker::Function)
    @ GPUCompiler ~/.julia/packages/GPUCompiler/Y4hSX/src/execution.jl:151
 [17] macro expansion
    @ ~/.julia/packages/CUDA/Tl08O/src/compiler/execution.jl:369 [inlined]
 [18] macro expansion
    @ ./lock.jl:267 [inlined]
 [19] cufunction(f::typeof(gpu_test!), tt::Type{Tuple{KernelAbstractions.CompilerMetadata{…}, CuDeviceVector{…}, CuDeviceVector{…}}}; kwargs::@Kwargs{always_inline::Bool, maxthreads::Int64})
    @ CUDA ~/.julia/packages/CUDA/Tl08O/src/compiler/execution.jl:364
 [20] macro expansion
    @ ~/.julia/packages/CUDA/Tl08O/src/compiler/execution.jl:112 [inlined]
 [21] (::KernelAbstractions.Kernel{…})(::CuArray{…}, ::Vararg{…}; ndrange::Int64, workgroupsize::Nothing)
    @ CUDA.CUDAKernels ~/.julia/packages/CUDA/Tl08O/src/CUDAKernels.jl:103
 [22] top-level scope
    @ REPL[7]:1
Some type information was truncated. Use `show(err)` to see complete types.
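
Following the hint printed above, the failing launch can be wrapped so the compilation error is captured and inspected with Cthulhu.jl (a minimal sketch, assuming Cthulhu.jl is installed in the environment):

using Cthulhu  # assumed to be required for the interactive introspection below

try
    test!(CUDABackend(), N)(y, x; ndrange=N)
    KernelAbstractions.synchronize(CUDABackend())
catch err
    # Opens an interactive session on the code that failed GPU compilation,
    # as suggested by the hint in the error output
    code_typed(err; interactive = true)
end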

Output of `versioninfo()`:

Julia Version 1.10.4
Commit 48d4fd48430 (2024-06-04 10:41 UTC)
Build Info:
  Official https://julialang.org/ release
Platform Info:
  OS: Linux (x86_64-linux-gnu)
  CPU: 24 × 13th Gen Intel(R) Core(TM) i7-13700K
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-15.0.7 (ORCJIT, goldmont)
Threads: 1 default, 0 interactive, 1 GC (on 24 virtual cores)
Environment:
  JULIA_PKG_USE_CLI_GIT = true

Output of `CUDA.versioninfo()`:

CUDA runtime 12.5, artifact installation
CUDA driver 12.5
NVIDIA driver 555.58.2

CUDA libraries: 
- CUBLAS: 12.5.3
- CURAND: 10.3.6
- CUFFT: 11.2.3
- CUSOLVER: 11.6.3
- CUSPARSE: 12.5.1
- CUPTI: 2024.2.1 (API 23.0.0)
- NVML: 12.0.0+555.58.2

Julia packages: 
- CUDA: 5.4.3
- CUDA_Driver_jll: 0.9.1+1
- CUDA_Runtime_jll: 0.14.1+0

Toolchain:
- Julia: 1.10.4
- LLVM: 15.0.7

1 device:
  0: NVIDIA GeForce RTX 4070 Ti (sm_89, 6.613 GiB / 11.994 GiB available)