********************************************************************************
conan test cci-711f9b8b/recipes/llama-cpp/all/test_package/conanfile.py llama-cpp/b2038@#b3086bc88288c598660641595004ff87 -pr /home/conan/w/prod-v1/bsr/102487/ddace/profile_linux_11_libstdcpp11_gcc_debug_64.llama-cpp-shared-False.txt -c tools.system.package_manager:mode=install -c tools.system.package_manager:sudo=True
********************************************************************************
Auto detecting your dev setup to initialize the default profile (/home/conan/w/prod-v1/bsr/102487/efeeb/.conan/profiles/default)
Found gcc 11.1
gcc>=5, using the major as version

************************* WARNING: GCC OLD ABI COMPATIBILITY ***********************
Conan detected a GCC version > 5 but has adjusted the 'compiler.libcxx' setting to
'libstdc++' for backwards compatibility. Your compiler is likely using the new
CXX11 ABI by default (libstdc++11).

If you want Conan to use the new ABI for the default profile, run:

    $ conan profile update settings.compiler.libcxx=libstdc++11 default

Or edit '/home/conan/w/prod-v1/bsr/102487/efeeb/.conan/profiles/default' and set compiler.libcxx=libstdc++11
************************************************************************************

Default settings
    os=Linux
    os_build=Linux
    arch=x86_64
    arch_build=x86_64
    compiler=gcc
    compiler.version=11
    compiler.libcxx=libstdc++
    build_type=Release
*** You can change them in /home/conan/w/prod-v1/bsr/102487/efeeb/.conan/profiles/default ***
*** Or override with -s compiler='other' -s ... ***

Configuration:
[settings]
arch=x86_64
build_type=Debug
compiler=gcc
compiler.libcxx=libstdc++11
compiler.version=11
os=Linux
[options]
llama-cpp:shared=False
[build_requires]
[env]
[conf]
tools.system.package_manager:mode=install
tools.system.package_manager:sudo=True

llama-cpp/b2038 (test package): Installing package
Requirements
    llama-cpp/b2038 from local cache - Cache
Packages
    llama-cpp/b2038:f66ca460a3d8d71154a322a3e79d8fd96d2e0129 - Download

Installing (downloading, building) binaries...
llama-cpp/b2038: Retrieving package f66ca460a3d8d71154a322a3e79d8fd96d2e0129 from remote 'c3i_PR-22621'
Downloading conanmanifest.txt
Downloading conaninfo.txt
Downloading conan_package.tgz
llama-cpp/b2038: Package installed f66ca460a3d8d71154a322a3e79d8fd96d2e0129
llama-cpp/b2038: Downloaded package revision 058137cbfeeb3424712b176a358d9a51
llama-cpp/b2038 (test package): Generator 'CMakeToolchain' calling 'generate()'
llama-cpp/b2038 (test package): Preset 'debug' added to CMakePresets.json. Invoke it manually using 'cmake --preset debug'
llama-cpp/b2038 (test package): If your CMake version is not compatible with CMakePresets (<3.19) call cmake like: 'cmake -G "Unix Makefiles" -DCMAKE_TOOLCHAIN_FILE=/home/conan/w/prod-v1/bsr/cci-711f9b8b/recipes/llama-cpp/all/test_package/build/Debug/generators/conan_toolchain.cmake -DCMAKE_POLICY_DEFAULT_CMP0091=NEW -DCMAKE_BUILD_TYPE=Debug'
llama-cpp/b2038 (test package): Generator 'VirtualRunEnv' calling 'generate()'
llama-cpp/b2038 (test package): Generator txt created conanbuildinfo.txt
llama-cpp/b2038 (test package): Generator 'CMakeDeps' calling 'generate()'
llama-cpp/b2038 (test package): Aggregating env generators
llama-cpp/b2038 (test package): Generated conaninfo.txt
llama-cpp/b2038 (test package): Generated graphinfo
Using lockfile: '/home/conan/w/prod-v1/bsr/cci-711f9b8b/recipes/llama-cpp/all/test_package/build/Debug/generators/conan.lock'
Using cached profile from lockfile
[HOOK - conan-center.py] pre_build(): [FPIC MANAGEMENT (KB-H007)] 'fPIC' option not found
[HOOK - conan-center.py] pre_build(): [FPIC MANAGEMENT (KB-H007)] OK
llama-cpp/b2038 (test package): Calling build()
llama-cpp/b2038 (test package): CMake command: cmake -G "Unix Makefiles" -DCMAKE_TOOLCHAIN_FILE="/home/conan/w/prod-v1/bsr/cci-711f9b8b/recipes/llama-cpp/all/test_package/build/Debug/generators/conan_toolchain.cmake"
-DCMAKE_POLICY_DEFAULT_CMP0091="NEW" -DCMAKE_BUILD_TYPE="Debug" "/home/conan/w/prod-v1/bsr/cci-711f9b8b/recipes/llama-cpp/all/test_package/."
----Running------
> cmake -G "Unix Makefiles" -DCMAKE_TOOLCHAIN_FILE="/home/conan/w/prod-v1/bsr/cci-711f9b8b/recipes/llama-cpp/all/test_package/build/Debug/generators/conan_toolchain.cmake" -DCMAKE_POLICY_DEFAULT_CMP0091="NEW" -DCMAKE_BUILD_TYPE="Debug" "/home/conan/w/prod-v1/bsr/cci-711f9b8b/recipes/llama-cpp/all/test_package/."
-----------------
-- Using Conan toolchain: /home/conan/w/prod-v1/bsr/cci-711f9b8b/recipes/llama-cpp/all/test_package/build/Debug/generators/conan_toolchain.cmake
-- The CXX compiler identification is GNU 11.1.0
-- Check for working CXX compiler: /usr/local/bin/c++
-- Check for working CXX compiler: /usr/local/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Conan: Component target declared 'llama-cpp::llama'
-- Conan: Component target declared 'llama-cpp::common'
-- Conan: Target declared 'llama-cpp::llama-cpp'
-- Configuring done
-- Generating done
-- Build files have been written to: /home/conan/w/prod-v1/bsr/cci-711f9b8b/recipes/llama-cpp/all/test_package/build/Debug
llama-cpp/b2038 (test package): CMake command: cmake --build "/home/conan/w/prod-v1/bsr/cci-711f9b8b/recipes/llama-cpp/all/test_package/build/Debug" '--' '-j3'
----Running------
> cmake --build "/home/conan/w/prod-v1/bsr/cci-711f9b8b/recipes/llama-cpp/all/test_package/build/Debug" '--' '-j3'
-----------------
Scanning dependencies of target test_package
[ 50%] Building CXX object CMakeFiles/test_package.dir/test_package.cpp.o
[100%] Linking CXX executable test_package
[100%] Built target test_package
llama-cpp/b2038 (test package): Running test()
----Running------
> .
"/home/conan/w/prod-v1/bsr/cci-711f9b8b/recipes/llama-cpp/all/test_package/build/Debug/generators/conanrun.sh" && ./test_package ./models/ggml-vocab-llama.gguf 'Hello World'
-----------------
WARNING: Behavior may be unexpected when allocating 0 bytes for ggml_malloc!
 1 -> ''
 15043 -> ' Hello'
 2787 -> ' World'
CMake Warning:
  Manually-specified variables were not used by the project:

    CMAKE_POLICY_DEFAULT_CMP0091

llama_model_loader: loaded meta data with 17 key-value pairs and 0 tensors from ./models/ggml-vocab-llama.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0: general.architecture                    str              = llama
llama_model_loader: - kv   1: general.name                            str              = LLaMA v2
llama_model_loader: - kv   2: llama.context_length                    u32              = 4096
llama_model_loader: - kv   3: llama.embedding_length                  u32              = 4096
llama_model_loader: - kv   4: llama.block_count                       u32              = 32
llama_model_loader: - kv   5: llama.feed_forward_length               u32              = 11008
llama_model_loader: - kv   6: llama.rope.dimension_count              u32              = 128
llama_model_loader: - kv   7: llama.attention.head_count              u32              = 32
llama_model_loader: - kv   8: llama.attention.head_count_kv           u32              = 32
llama_model_loader: - kv   9: llama.attention.layer_norm_rms_epsilon  f32              = 0.000010
llama_model_loader: - kv  10: tokenizer.ggml.model                    str              = llama
llama_model_loader: - kv  11: tokenizer.ggml.tokens                   arr[str,32000]   = ["", "", "", "<0x00>", "<...
llama_model_loader: - kv  12: tokenizer.ggml.scores                   arr[f32,32000]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  13: tokenizer.ggml.token_type               arr[i32,32000]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv  14: tokenizer.ggml.bos_token_id             u32              = 1
llama_model_loader: - kv  15: tokenizer.ggml.eos_token_id             u32              = 2
llama_model_loader: - kv  16: tokenizer.ggml.unknown_token_id         u32              = 0
llm_load_vocab: special tokens definition check successful ( 259/32000 ).
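The loader line above reports "17 key-value pairs and 0 tensors ... (version GGUF V3)". Those three numbers come straight from the fixed-size header at the start of every GGUF file. As a standalone illustration (not part of the test package; the function name `read_gguf_header` is made up here), the header layout per the GGUF spec is a 4-byte magic `GGUF`, then little-endian uint32 version, uint64 tensor count, and uint64 metadata KV count:

```python
import struct

def read_gguf_header(path):
    """Parse the fixed GGUF header: magic, version, tensor count, KV count."""
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != b"GGUF":
            raise ValueError("not a GGUF file")
        # uint32 version, uint64 n_tensors, uint64 n_kv, all little-endian (20 bytes)
        version, n_tensors, n_kv = struct.unpack("<IQQ", f.read(20))
    return {"version": version, "n_tensors": n_tensors, "n_kv": n_kv}
```

Run against a vocab-only file like ggml-vocab-llama.gguf, this should report version 3, 0 tensors, and 17 KV pairs, matching the llama_model_loader output above.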
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 32000
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: n_ctx_train      = 4096
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 32
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 1
llm_load_print_meta: n_embd_k_gqa     = 4096
llm_load_print_meta: n_embd_v_gqa     = 4096
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff             = 11008
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 4096
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: model type       = 7B
llm_load_print_meta: model ftype      = all F32 (guessed)
llm_load_print_meta: model params     = 0.00 K
llm_load_print_meta: model size       = 0.00 MiB (-nan BPW)
llm_load_print_meta: general.name     = LLaMA v2
llm_load_print_meta: BOS token        = 1 ''
llm_load_print_meta: EOS token        = 2 ''
llm_load_print_meta: UNK token        = 0 ''
llm_load_print_meta: LF token         = 13 '<0x0A>'
llama_model_load: vocab only - skipping tensors
llama_new_context_with_model: n_ctx      = 512
llama_new_context_with_model: freq_base  = 10000.0
llama_new_context_with_model: freq_scale = 1
llama-cpp/b2038 (test package): WARN: Using the new toolchains and generators without specifying a build profile (e.g: -pr:b=default) is discouraged and might cause failures and unexpected behavior
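The WARN above is Conan telling us the run used only a host profile; passing a build profile as well (e.g. `-pr:b=default`, as the message suggests) silences it. For reference, a host profile matching the Configuration block printed earlier might look like the sketch below. This is a reconstruction from the printed settings, not the actual contents of profile_linux_11_libstdcpp11_gcc_debug_64.llama-cpp-shared-False.txt; note the two `tools.system.package_manager` values were supplied on the command line via `-c`, so they need not be in the profile at all:

```ini
[settings]
arch=x86_64
build_type=Debug
compiler=gcc
compiler.libcxx=libstdc++11
compiler.version=11
os=Linux

[options]
llama-cpp:shared=False
```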