********************************************************************************
conan test cci-d37eb4b8\recipes\llama-cpp\all\test_package\conanfile.py llama-cpp/b2038@#b3086bc88288c598660641595004ff87 -pr C:/J2/w/prod-v1/bsr/101792/deabb/profile_windows_16_md_vs_release_64.llama-cpp-shared-False.txt -c tools.system.package_manager:mode=install -c tools.system.package_manager:sudo=True
********************************************************************************
Configuration:
[settings]
arch=x86_64
build_type=Release
compiler=Visual Studio
compiler.runtime=MD
compiler.version=16
os=Windows
[options]
llama-cpp:shared=False
[build_requires]
[env]
[conf]
tools.system.package_manager:mode=install
tools.system.package_manager:sudo=True

llama-cpp/b2038 (test package): Installing package
Requirements
    llama-cpp/b2038 from local cache - Cache
Packages
    llama-cpp/b2038:8845d0a5ca053144c992a1176214ab3f43651c9b - Cache

Installing (downloading, building) binaries...
llama-cpp/b2038: Already installed!
llama-cpp/b2038 (test package): Generator 'CMakeToolchain' calling 'generate()'
llama-cpp/b2038 (test package): Preset 'default' added to CMakePresets.json.
    Invoke it manually using 'cmake --preset default'
llama-cpp/b2038 (test package): If your CMake version is not compatible with CMakePresets (<3.19) call cmake like: 'cmake -G "Visual Studio 16 2019" -DCMAKE_TOOLCHAIN_FILE=C:\J2\w\prod-v1\bsr\cci-d37eb4b8\recipes\llama-cpp\all\test_package\build\generators\conan_toolchain.cmake -DCMAKE_POLICY_DEFAULT_CMP0091=NEW'
llama-cpp/b2038 (test package): Generator 'VirtualRunEnv' calling 'generate()'
llama-cpp/b2038 (test package): Generator txt created conanbuildinfo.txt
llama-cpp/b2038 (test package): Generator 'CMakeDeps' calling 'generate()'
llama-cpp/b2038 (test package): Aggregating env generators
llama-cpp/b2038 (test package): Generated conaninfo.txt
llama-cpp/b2038 (test package): Generated graphinfo
Using lockfile: 'C:\J2\w\prod-v1\bsr\cci-d37eb4b8\recipes\llama-cpp\all\test_package\build\generators/conan.lock'
Using cached profile from lockfile
[HOOK - conan-center.py] pre_build(): [FPIC MANAGEMENT (KB-H007)] 'fPIC' option not found
[HOOK - conan-center.py] pre_build(): [FPIC MANAGEMENT (KB-H007)] OK
llama-cpp/b2038 (test package): Calling build()
llama-cpp/b2038 (test package): CMake command: cmake -G "Visual Studio 16 2019" -DCMAKE_TOOLCHAIN_FILE="C:/J2/w/prod-v1/bsr/cci-d37eb4b8/recipes/llama-cpp/all/test_package/build/generators/conan_toolchain.cmake" -DCMAKE_POLICY_DEFAULT_CMP0091="NEW" "C:\J2\w\prod-v1\bsr\cci-d37eb4b8\recipes\llama-cpp\all\test_package\."
----Running------
> cmake -G "Visual Studio 16 2019" -DCMAKE_TOOLCHAIN_FILE="C:/J2/w/prod-v1/bsr/cci-d37eb4b8/recipes/llama-cpp/all/test_package/build/generators/conan_toolchain.cmake" -DCMAKE_POLICY_DEFAULT_CMP0091="NEW" "C:\J2\w\prod-v1\bsr\cci-d37eb4b8\recipes\llama-cpp\all\test_package\."
-----------------
-- Using Conan toolchain: C:/J2/w/prod-v1/bsr/cci-d37eb4b8/recipes/llama-cpp/all/test_package/build/generators/conan_toolchain.cmake
-- The CXX compiler identification is MSVC 19.29.30148.0
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: C:/Program Files (x86)/Microsoft Visual Studio/2019/Community/VC/Tools/MSVC/14.29.30133/bin/Hostx64/x64/cl.exe - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Conan: Component target declared 'llama-cpp::llama'
-- Conan: Component target declared 'llama-cpp::common'
-- Conan: Target declared 'llama-cpp::llama-cpp'
-- Configuring done
-- Generating done
-- Build files have been written to: C:/J2/w/prod-v1/bsr/cci-d37eb4b8/recipes/llama-cpp/all/test_package/build
llama-cpp/b2038 (test package): CMake command: cmake --build "C:\J2\w\prod-v1\bsr\cci-d37eb4b8\recipes\llama-cpp\all\test_package\build" --config Release
----Running------
> cmake --build "C:\J2\w\prod-v1\bsr\cci-d37eb4b8\recipes\llama-cpp\all\test_package\build" --config Release
-----------------
Microsoft (R) Build Engine version 16.11.2+f32259642 for .NET Framework
Copyright (C) Microsoft Corporation. All rights reserved.

  Checking Build System
  Building Custom Rule C:/J2/w/prod-v1/bsr/cci-d37eb4b8/recipes/llama-cpp/all/test_package/CMakeLists.txt
  test_package.cpp
  test_package.vcxproj -> C:\J2\w\prod-v1\bsr\cci-d37eb4b8\recipes\llama-cpp\all\test_package\build\Release\test_package.exe
  Building Custom Rule C:/J2/w/prod-v1/bsr/cci-d37eb4b8/recipes/llama-cpp/all/test_package/CMakeLists.txt
llama-cpp/b2038 (test package): Running test()
----Running------
> "C:\J2\w\prod-v1\bsr\cci-d37eb4b8\recipes\llama-cpp\all\test_package\build\generators\conanrun.bat" && Release\test_package ./models/ggml-vocab-llama.gguf 'Hello World'
-----------------
WARNING: Behavior may be unexpected when allocating 0 bytes for ggml_malloc!
1 -> ''
525 -> ' ''
10994 -> 'Hello'
CMake Warning:
  Manually-specified variables were not used by the project:

    CMAKE_POLICY_DEFAULT_CMP0091

llama_model_loader: loaded meta data with 17 key-value pairs and 0 tensors from ./models/ggml-vocab-llama.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0: general.architecture                     str = llama
llama_model_loader: - kv   1: general.name                             str = LLaMA v2
llama_model_loader: - kv   2: llama.context_length                     u32 = 4096
llama_model_loader: - kv   3: llama.embedding_length                   u32 = 4096
llama_model_loader: - kv   4: llama.block_count                        u32 = 32
llama_model_loader: - kv   5: llama.feed_forward_length                u32 = 11008
llama_model_loader: - kv   6: llama.rope.dimension_count               u32 = 128
llama_model_loader: - kv   7: llama.attention.head_count               u32 = 32
llama_model_loader: - kv   8: llama.attention.head_count_kv            u32 = 32
llama_model_loader: - kv   9: llama.attention.layer_norm_rms_epsilon   f32 = 0.000010
llama_model_loader: - kv  10: tokenizer.ggml.model                     str = llama
llama_model_loader: - kv  11: tokenizer.ggml.tokens                    arr[str,32000] = ["", "", "", "<0x00>", "<...
llama_model_loader: - kv  12: tokenizer.ggml.scores                    arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  13: tokenizer.ggml.token_type                arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv  14: tokenizer.ggml.bos_token_id              u32 = 1
llama_model_loader: - kv  15: tokenizer.ggml.eos_token_id              u32 = 2
llama_model_loader: - kv  16: tokenizer.ggml.unknown_token_id          u32 = 0
llm_load_vocab: special tokens definition check successful ( 259/32000 ).
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 32000
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: n_ctx_train      = 4096
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 32
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 1
llm_load_print_meta: n_embd_k_gqa     = 4096
llm_load_print_meta: n_embd_v_gqa     = 4096
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff             = 11008
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 4096
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: model type       = 7B
llm_load_print_meta: model ftype      = all F32 (guessed)
llm_load_print_meta: model params     = 0.00 K
llm_load_print_meta: model size       = 0.00 MiB (-nan(ind) BPW)
llm_load_print_meta: general.name     = LLaMA v2
llm_load_print_meta: BOS token        = 1 ''
llm_load_print_meta: EOS token        = 2 ''
llm_load_print_meta: UNK token        = 0 ''
llm_load_print_meta: LF token         = 13 '<0x0A>'
llama_model_load: vocab only - skipping tensors
llama_new_context_with_model: n_ctx      = 512
llama_new_context_with_model: freq_base  = 10000.0
llama_new_context_with_model: freq_scale = 1
llama-cpp/b2038 (test package): WARN: Using the new toolchains and generators without specifying a build profile (e.g: -pr:b=default) is discouraged and might cause failures and unexpected behavior