********************************************************************************
conan install llama-cpp/b2038@#c5d8251b84afa3f7d5c06ad29b467c11 --build=llama-cpp -pr /home/conan/w/prod-v1/bsr/102538/efead/profile_linux_13_libstdcpp_clang_release_64.llama-cpp-shared-False.txt -c tools.system.package_manager:mode=install -c tools.system.package_manager:sudo=True
********************************************************************************
Conan 1 is on a deprecation path, please consider migrating to Conan 2
Auto detecting your dev setup to initialize the default profile (/home/conan/w/prod-v1/bsr/102538/ddbdd/.conan/profiles/default)
CC and CXX: clang, clang++
Found clang 13.0
clang>=8, using the major as version
Default settings
        os=Linux
        os_build=Linux
        arch=x86_64
        arch_build=x86_64
        compiler=clang
        compiler.version=13
        compiler.libcxx=libstdc++
        build_type=Release
*** You can change them in /home/conan/w/prod-v1/bsr/102538/ddbdd/.conan/profiles/default ***
*** Or override with -s compiler='other' -s ...s***
Configuration:
[settings]
arch=x86_64
build_type=Release
compiler=clang
compiler.libcxx=libstdc++
compiler.version=13
os=Linux
[options]
llama-cpp:shared=False
[build_requires]
[env]
[conf]
tools.system.package_manager:mode=install
tools.system.package_manager:sudo=True
llama-cpp/b2038: Forced build from source
Installing package: llama-cpp/b2038
Requirements
    llama-cpp/b2038 from local cache - Cache
Packages
    llama-cpp/b2038:80d3c8d077516ce6e7dc913ef92912e9172eae4c - Build
Installing (downloading, building) binaries...
[HOOK - conan-center.py] pre_source(): [IMMUTABLE SOURCES (KB-H010)] OK
llama-cpp/b2038: Configuring sources in /home/conan/w/prod-v1/bsr/102538/ddbdd/.conan/data/llama-cpp/b2038/_/_/source/src
llama-cpp/b2038: [HOOK - conan-center.py] post_source(): [LIBCXX MANAGEMENT (KB-H011)] OK
[HOOK - conan-center.py] post_source(): [CPPSTD MANAGEMENT (KB-H022)] OK
[HOOK - conan-center.py] post_source(): [SHORT_PATHS USAGE (KB-H066)] OK
llama-cpp/b2038: Copying sources to build folder
llama-cpp/b2038: Building your package in /home/conan/w/prod-v1/bsr/102538/ddbdd/.conan/data/llama-cpp/b2038/_/_/build/80d3c8d077516ce6e7dc913ef92912e9172eae4c
llama-cpp/b2038: Generator txt created conanbuildinfo.txt
llama-cpp/b2038: Calling generate()
llama-cpp/b2038: Preset 'release' added to CMakePresets.json. Invoke it manually using 'cmake --preset release'
llama-cpp/b2038: If your CMake version is not compatible with CMakePresets (<3.19) call cmake like: 'cmake -G "Unix Makefiles" -DCMAKE_TOOLCHAIN_FILE=/home/conan/w/prod-v1/bsr/102538/ddbdd/.conan/data/llama-cpp/b2038/_/_/build/80d3c8d077516ce6e7dc913ef92912e9172eae4c/build/Release/generators/conan_toolchain.cmake -DCMAKE_POLICY_DEFAULT_CMP0091=NEW -DCMAKE_BUILD_TYPE=Release'
llama-cpp/b2038: Aggregating env generators
[HOOK - conan-center.py] pre_build(): [FPIC MANAGEMENT (KB-H007)] OK. 'fPIC' option found and apparently well managed
[HOOK - conan-center.py] pre_build(): [FPIC MANAGEMENT (KB-H007)] OK
llama-cpp/b2038: Calling build()
llama-cpp/b2038: CMake command: cmake -G "Unix Makefiles" -DCMAKE_TOOLCHAIN_FILE="/home/conan/w/prod-v1/bsr/102538/ddbdd/.conan/data/llama-cpp/b2038/_/_/build/80d3c8d077516ce6e7dc913ef92912e9172eae4c/build/Release/generators/conan_toolchain.cmake" -DCMAKE_INSTALL_PREFIX="/home/conan/w/prod-v1/bsr/102538/ddbdd/.conan/data/llama-cpp/b2038/_/_/package/80d3c8d077516ce6e7dc913ef92912e9172eae4c" -DCMAKE_POLICY_DEFAULT_CMP0091="NEW" -DCMAKE_BUILD_TYPE="Release" "/home/conan/w/prod-v1/bsr/102538/ddbdd/.conan/data/llama-cpp/b2038/_/_/build/80d3c8d077516ce6e7dc913ef92912e9172eae4c/src"
----Running------
> cmake -G "Unix Makefiles" -DCMAKE_TOOLCHAIN_FILE="/home/conan/w/prod-v1/bsr/102538/ddbdd/.conan/data/llama-cpp/b2038/_/_/build/80d3c8d077516ce6e7dc913ef92912e9172eae4c/build/Release/generators/conan_toolchain.cmake" -DCMAKE_INSTALL_PREFIX="/home/conan/w/prod-v1/bsr/102538/ddbdd/.conan/data/llama-cpp/b2038/_/_/package/80d3c8d077516ce6e7dc913ef92912e9172eae4c" -DCMAKE_POLICY_DEFAULT_CMP0091="NEW" -DCMAKE_BUILD_TYPE="Release" "/home/conan/w/prod-v1/bsr/102538/ddbdd/.conan/data/llama-cpp/b2038/_/_/build/80d3c8d077516ce6e7dc913ef92912e9172eae4c/src"
-----------------
-- Using Conan toolchain: /home/conan/w/prod-v1/bsr/102538/ddbdd/.conan/data/llama-cpp/b2038/_/_/build/80d3c8d077516ce6e7dc913ef92912e9172eae4c/build/Release/generators/conan_toolchain.cmake
-- Conan toolchain: Setting CMAKE_POSITION_INDEPENDENT_CODE=ON (options.fPIC)
-- Conan toolchain: Setting BUILD_SHARED_LIBS = OFF
-- The C compiler identification is Clang 13.0.0
-- The CXX compiler identification is Clang 13.0.0
-- Check for working C compiler: /usr/local/bin/clang
-- Check for working C compiler: /usr/local/bin/clang -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/local/bin/clang++
-- Check for working CXX compiler: /usr/local/bin/clang++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found Git: /usr/bin/git (found version "2.43.0")
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
-- Check if compiler accepts -pthread
-- Check if compiler accepts -pthread - yes
-- Found Threads: TRUE
-- Warning: ccache not found - consider installing it or use LLAMA_CCACHE=OFF
-- CMAKE_SYSTEM_PROCESSOR: x86_64
-- x86 detected
-- Configuring done
-- Generating done
-- Build files have been written to: /home/conan/w/prod-v1/bsr/102538/ddbdd/.conan/data/llama-cpp/b2038/_/_/build/80d3c8d077516ce6e7dc913ef92912e9172eae4c/build/Release
llama-cpp/b2038: CMake command: cmake --build "/home/conan/w/prod-v1/bsr/102538/ddbdd/.conan/data/llama-cpp/b2038/_/_/build/80d3c8d077516ce6e7dc913ef92912e9172eae4c/build/Release" '--' '-j3'
----Running------
> cmake --build "/home/conan/w/prod-v1/bsr/102538/ddbdd/.conan/data/llama-cpp/b2038/_/_/build/80d3c8d077516ce6e7dc913ef92912e9172eae4c/build/Release" '--' '-j3'
-----------------
[  6%] Generating build details from Git
-- Found Git: /usr/bin/git (found version "2.43.0")
Scanning dependencies of target ggml
[ 13%] Building C object CMakeFiles/ggml.dir/ggml.c.o
[ 20%] Building C object CMakeFiles/ggml.dir/ggml-alloc.c.o
Scanning dependencies of target build_info
[ 26%] Building CXX object common/CMakeFiles/build_info.dir/build-info.cpp.o
[ 26%] Built target build_info
[ 33%] Building C object CMakeFiles/ggml.dir/ggml-backend.c.o
[ 40%] Building C object CMakeFiles/ggml.dir/ggml-quants.c.o
[ 40%] Built target ggml
Scanning dependencies of target ggml_static
[ 46%] Linking C static library libggml_static.a
Scanning dependencies of target llama
[ 53%] Building CXX object CMakeFiles/llama.dir/llama.cpp.o
[ 53%] Built target ggml_static
[ 60%] Linking CXX static library libllama.a
[ 60%] Built target llama
Scanning dependencies of target common
[ 66%] Building CXX object common/CMakeFiles/common.dir/common.cpp.o
[ 73%] Building CXX object common/CMakeFiles/common.dir/sampling.cpp.o
[ 80%] Building CXX object common/CMakeFiles/common.dir/console.cpp.o
[ 86%] Building CXX object common/CMakeFiles/common.dir/grammar-parser.cpp.o
[ 93%] Building CXX object common/CMakeFiles/common.dir/train.cpp.o
[100%] Linking CXX static library libcommon.a
[100%] Built target common
llama-cpp/b2038: Package '80d3c8d077516ce6e7dc913ef92912e9172eae4c' built
llama-cpp/b2038: Build folder /home/conan/w/prod-v1/bsr/102538/ddbdd/.conan/data/llama-cpp/b2038/_/_/build/80d3c8d077516ce6e7dc913ef92912e9172eae4c/build/Release
llama-cpp/b2038: Generated conaninfo.txt
llama-cpp/b2038: Generated conanbuildinfo.txt
llama-cpp/b2038: Generating the package
llama-cpp/b2038: Package folder /home/conan/w/prod-v1/bsr/102538/ddbdd/.conan/data/llama-cpp/b2038/_/_/package/80d3c8d077516ce6e7dc913ef92912e9172eae4c
llama-cpp/b2038: Calling package()
llama-cpp/b2038: Copied 1 file: LICENSE
llama-cpp/b2038: CMake command: cmake --install "/home/conan/w/prod-v1/bsr/102538/ddbdd/.conan/data/llama-cpp/b2038/_/_/build/80d3c8d077516ce6e7dc913ef92912e9172eae4c/build/Release" --prefix "/home/conan/w/prod-v1/bsr/102538/ddbdd/.conan/data/llama-cpp/b2038/_/_/package/80d3c8d077516ce6e7dc913ef92912e9172eae4c"
----Running------
> cmake --install "/home/conan/w/prod-v1/bsr/102538/ddbdd/.conan/data/llama-cpp/b2038/_/_/build/80d3c8d077516ce6e7dc913ef92912e9172eae4c/build/Release" --prefix "/home/conan/w/prod-v1/bsr/102538/ddbdd/.conan/data/llama-cpp/b2038/_/_/package/80d3c8d077516ce6e7dc913ef92912e9172eae4c"
-----------------
-- Install configuration: "Release"
-- Installing: /home/conan/w/prod-v1/bsr/102538/ddbdd/.conan/data/llama-cpp/b2038/_/_/package/80d3c8d077516ce6e7dc913ef92912e9172eae4c/lib/cmake/Llama/LlamaConfig.cmake
-- Installing: /home/conan/w/prod-v1/bsr/102538/ddbdd/.conan/data/llama-cpp/b2038/_/_/package/80d3c8d077516ce6e7dc913ef92912e9172eae4c/lib/cmake/Llama/LlamaConfigVersion.cmake
-- Installing: /home/conan/w/prod-v1/bsr/102538/ddbdd/.conan/data/llama-cpp/b2038/_/_/package/80d3c8d077516ce6e7dc913ef92912e9172eae4c/include/ggml.h
-- Installing: /home/conan/w/prod-v1/bsr/102538/ddbdd/.conan/data/llama-cpp/b2038/_/_/package/80d3c8d077516ce6e7dc913ef92912e9172eae4c/include/ggml-alloc.h
-- Installing: /home/conan/w/prod-v1/bsr/102538/ddbdd/.conan/data/llama-cpp/b2038/_/_/package/80d3c8d077516ce6e7dc913ef92912e9172eae4c/include/ggml-backend.h
-- Installing: /home/conan/w/prod-v1/bsr/102538/ddbdd/.conan/data/llama-cpp/b2038/_/_/package/80d3c8d077516ce6e7dc913ef92912e9172eae4c/lib/libllama.a
-- Installing: /home/conan/w/prod-v1/bsr/102538/ddbdd/.conan/data/llama-cpp/b2038/_/_/package/80d3c8d077516ce6e7dc913ef92912e9172eae4c/include/llama.h
-- Installing: /home/conan/w/prod-v1/bsr/102538/ddbdd/.conan/data/llama-cpp/b2038/_/_/package/80d3c8d077516ce6e7dc913ef92912e9172eae4c/bin/convert.py
-- Installing: /home/conan/w/prod-v1/bsr/102538/ddbdd/.conan/data/llama-cpp/b2038/_/_/package/80d3c8d077516ce6e7dc913ef92912e9172eae4c/bin/convert-lora-to-ggml.py
llama-cpp/b2038: Copied 10 '.gguf' files
llama-cpp/b2038: Copied 1 file: .editorconfig
llama-cpp/b2038: Copied 1 '.hpp' file: base64.hpp
llama-cpp/b2038: Copied 7 '.h' files
llama-cpp/b2038: Copied 1 '.a' file: libcommon.a
[HOOK - conan-center.py] post_package(): [PACKAGE LICENSE (KB-H012)] OK
[HOOK - conan-center.py] post_package(): [DEFAULT PACKAGE LAYOUT (KB-H013)] OK
[HOOK - conan-center.py] post_package(): [MATCHING CONFIGURATION (KB-H014)] OK
[HOOK - conan-center.py] post_package(): [SHARED ARTIFACTS (KB-H015)] OK
[HOOK - conan-center.py] post_package(): [STATIC ARTIFACTS (KB-H074)] OK
[HOOK - conan-center.py] post_package(): [EITHER STATIC OR SHARED OF EACH LIB (KB-H076)] OK
[HOOK - conan-center.py] post_package(): [PC-FILES (KB-H020)] OK
[HOOK - conan-center.py] post_package(): [CMAKE-MODULES-CONFIG-FILES (KB-H016)] OK
[HOOK - conan-center.py] post_package(): [PDB FILES NOT ALLOWED (KB-H017)] OK
[HOOK - conan-center.py] post_package(): [LIBTOOL FILES PRESENCE (KB-H018)] OK
[HOOK - conan-center.py] post_package(): [MS RUNTIME FILES (KB-H021)] OK
[HOOK - conan-center.py] post_package(): [SHORT_PATHS USAGE (KB-H066)] OK
[HOOK - conan-center.py] post_package(): [MISSING SYSTEM LIBS (KB-H043)] OK
[HOOK - conan-center.py] post_package(): [APPLE RELOCATABLE SHARED LIBS (KB-H077)] OK
llama-cpp/b2038 package(): Packaged 10 '.gguf' files
llama-cpp/b2038 package(): Packaged 2 files: .editorconfig, LICENSE
llama-cpp/b2038 package(): Packaged 11 '.h' files
llama-cpp/b2038 package(): Packaged 1 '.hpp' file: base64.hpp
llama-cpp/b2038 package(): Packaged 2 '.py' files: convert.py, convert-lora-to-ggml.py
llama-cpp/b2038 package(): Packaged 2 '.a' files: libllama.a, libcommon.a
llama-cpp/b2038: Package '80d3c8d077516ce6e7dc913ef92912e9172eae4c' created
llama-cpp/b2038: Created package revision dec480b3c71f803f414fa7a3287a5780
[HOOK - conan-center.py] post_package_info(): [CMAKE FILE NOT IN BUILD FOLDERS (KB-H019)] OK
[HOOK - conan-center.py] post_package_info(): [LIBRARY DOES NOT EXIST (KB-H054)] OK
[HOOK - conan-center.py] post_package_info(): [INCLUDE PATH DOES NOT EXIST (KB-H071)] OK
Aggregating env generators
fatal: not a git repository (or any parent up to mount point /home)
Stopping at filesystem boundary (GIT_DISCOVERY_ACROSS_FILESYSTEM not set).
fatal: not a git repository (or any parent up to mount point /home)
Stopping at filesystem boundary (GIT_DISCOVERY_ACROSS_FILESYSTEM not set).
CMake Warning at common/CMakeLists.txt:24 (message):
  Git repository not found; to enable automatic generation of build info,
  make sure Git is installed and the project is a Git repository.
fatal: not a git repository (or any parent up to mount point /home)
Stopping at filesystem boundary (GIT_DISCOVERY_ACROSS_FILESYSTEM not set).
fatal: not a git repository (or any parent up to mount point /home)
Stopping at filesystem boundary (GIT_DISCOVERY_ACROSS_FILESYSTEM not set).
/home/conan/w/prod-v1/bsr/102538/ddbdd/.conan/data/llama-cpp/b2038/_/_/build/80d3c8d077516ce6e7dc913ef92912e9172eae4c/src/ggml.c:1273:5: warning: implicit conversion increases floating-point precision: 'float' to 'ggml_float' (aka 'double') [-Wdouble-promotion]
    GGML_F16_VEC_REDUCE(sumf, sum);
    ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/conan/w/prod-v1/bsr/102538/ddbdd/.conan/data/llama-cpp/b2038/_/_/build/80d3c8d077516ce6e7dc913ef92912e9172eae4c/src/ggml.c:905:37: note: expanded from macro 'GGML_F16_VEC_REDUCE'
#define GGML_F16_VEC_REDUCE GGML_F32Cx8_REDUCE
                                    ^
/home/conan/w/prod-v1/bsr/102538/ddbdd/.conan/data/llama-cpp/b2038/_/_/build/80d3c8d077516ce6e7dc913ef92912e9172eae4c/src/ggml.c:895:33: note: expanded from macro 'GGML_F32Cx8_REDUCE'
#define GGML_F32Cx8_REDUCE GGML_F32x8_REDUCE
                                ^
/home/conan/w/prod-v1/bsr/102538/ddbdd/.conan/data/llama-cpp/b2038/_/_/build/80d3c8d077516ce6e7dc913ef92912e9172eae4c/src/ggml.c:841:11: note: expanded from macro 'GGML_F32x8_REDUCE'
    res = _mm_cvtss_f32(_mm_hadd_ps(t1, t1)); \
    ~     ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/conan/w/prod-v1/bsr/102538/ddbdd/.conan/data/llama-cpp/b2038/_/_/build/80d3c8d077516ce6e7dc913ef92912e9172eae4c/src/ggml.c:1321:9: warning: implicit conversion increases floating-point precision: 'float' to 'ggml_float' (aka 'double') [-Wdouble-promotion]
        GGML_F16_VEC_REDUCE(sumf[k], sum[k]);
        ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/conan/w/prod-v1/bsr/102538/ddbdd/.conan/data/llama-cpp/b2038/_/_/build/80d3c8d077516ce6e7dc913ef92912e9172eae4c/src/ggml.c:905:37: note: expanded from macro 'GGML_F16_VEC_REDUCE'
#define GGML_F16_VEC_REDUCE GGML_F32Cx8_REDUCE
                                    ^
/home/conan/w/prod-v1/bsr/102538/ddbdd/.conan/data/llama-cpp/b2038/_/_/build/80d3c8d077516ce6e7dc913ef92912e9172eae4c/src/ggml.c:895:33: note: expanded from macro 'GGML_F32Cx8_REDUCE'
#define GGML_F32Cx8_REDUCE GGML_F32x8_REDUCE
                                ^
/home/conan/w/prod-v1/bsr/102538/ddbdd/.conan/data/llama-cpp/b2038/_/_/build/80d3c8d077516ce6e7dc913ef92912e9172eae4c/src/ggml.c:841:11: note: expanded from macro 'GGML_F32x8_REDUCE'
    res = _mm_cvtss_f32(_mm_hadd_ps(t1, t1)); \
    ~     ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2 warnings generated.
llama-cpp/b2038: WARN: Using the new toolchains and generators without specifying a build profile (e.g: -pr:b=default) is discouraged and might cause failures and unexpected behavior
llama-cpp/b2038: WARN: Using the new toolchains and generators without specifying a build profile (e.g: -pr:b=default) is discouraged and might cause failures and unexpected behavior
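The log above builds llama-cpp as a static package (`llama-cpp:shared=False`) with the CMakeToolchain-based flow. For context, a downstream project would normally consume the resulting `libllama.a`/`libcommon.a` through Conan rather than pointing CMake at the package folder directly. A minimal sketch of such a consumer, assuming Conan 1 with the CMakeDeps and CMakeToolchain generators (the generator choice and this consumer file are illustrative; the log shows only the producer side):

```
# conanfile.txt -- hypothetical consumer of the package built in this log
[requires]
llama-cpp/b2038

[generators]
CMakeDeps
CMakeToolchain

[options]
llama-cpp:shared=False
```

With these generators, `conan install .` against the same profile would emit a `conan_toolchain.cmake` analogous to the one passed via `-DCMAKE_TOOLCHAIN_FILE` in the configure step above, plus CMake config files for `find_package`.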
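The two trailing WARN lines note that the new toolchain/generator integration was invoked without a build profile. Per the warning's own suggestion, a sketch of the adjusted invocation: the host profile is the one from the top of the log, and `-pr:b=default` (supported in Conan 1.24+) supplies the build profile; whether `default` is the appropriate build profile for this CI environment is an assumption:

```
conan install llama-cpp/b2038@#c5d8251b84afa3f7d5c06ad29b467c11 --build=llama-cpp \
    -pr:h /home/conan/w/prod-v1/bsr/102538/efead/profile_linux_13_libstdcpp_clang_release_64.llama-cpp-shared-False.txt \
    -pr:b default \
    -c tools.system.package_manager:mode=install \
    -c tools.system.package_manager:sudo=True
```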