********************************************************************************
conan install llama-cpp/b3542@#1d40bd238142cdda7a446e45a014a509 --build=llama-cpp -pr /Users/jenkins/workspace/prod-v1/bsr/84630/cbafa/profile_osx_130_libcpp_apple-clang_release_armv8.llama-cpp-shared-False.txt -c tools.system.package_manager:mode=install -c tools.system.package_manager:sudo=True -c tools.apple:sdk_path=/Applications/conan/xcode/13.0/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX11.3.sdk -s:b arch=armv8
********************************************************************************
Auto detecting your dev setup to initialize the default profile (/Users/jenkins/workspace/prod-v1/bsr/84630/eadda/.conan/profiles/default)
Found apple-clang 13.0
apple-clang>=13, using the major as version
Default settings
    os=Macos
    os_build=Macos
    arch=armv8
    arch_build=armv8
    compiler=apple-clang
    compiler.version=13
    compiler.libcxx=libc++
    build_type=Release
*** You can change them in /Users/jenkins/workspace/prod-v1/bsr/84630/eadda/.conan/profiles/default ***
*** Or override with -s compiler='other' -s ...s***

Configuration (profile_host):
[settings]
arch=armv8
build_type=Release
compiler=apple-clang
compiler.libcxx=libc++
compiler.version=13.0
os=Macos
[options]
llama-cpp:shared=False
[build_requires]
[env]
[conf]
tools.system.package_manager:mode=install
tools.system.package_manager:sudo=True
tools.apple:sdk_path=/Applications/conan/xcode/13.0/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX11.3.sdk

Configuration (profile_build):
[settings]
arch=armv8
arch_build=armv8
build_type=Release
compiler=apple-clang
compiler.libcxx=libc++
compiler.version=13
os=Macos
os_build=Macos
[options]
[build_requires]
[env]

llama-cpp/b3542: Forced build from source
Installing package: llama-cpp/b3542
Requirements
    llama-cpp/b3542 from local cache - Cache
Packages
    llama-cpp/b3542:f1a36a2aea2da1148eb4bddd758d8db183c6db83 - Build

Installing (downloading, building) binaries...
[HOOK - conan-center.py] pre_source(): [IMMUTABLE SOURCES (KB-H010)] OK
llama-cpp/b3542: Configuring sources in /Users/jenkins/workspace/prod-v1/bsr/84630/eadda/.conan/data/llama-cpp/b3542/_/_/source/src
llama-cpp/b3542: [HOOK - conan-center.py] post_source(): [LIBCXX MANAGEMENT (KB-H011)] OK
[HOOK - conan-center.py] post_source(): [CPPSTD MANAGEMENT (KB-H022)] OK
[HOOK - conan-center.py] post_source(): [SHORT_PATHS USAGE (KB-H066)] OK
llama-cpp/b3542: Copying sources to build folder
llama-cpp/b3542: Building your package in /Users/jenkins/workspace/prod-v1/bsr/84630/eadda/.conan/data/llama-cpp/b3542/_/_/build/f1a36a2aea2da1148eb4bddd758d8db183c6db83
llama-cpp/b3542: Generator txt created conanbuildinfo.txt
llama-cpp/b3542: Calling generate()
llama-cpp/b3542: Preset 'release' added to CMakePresets.json. Invoke it manually using 'cmake --preset release'
llama-cpp/b3542: If your CMake version is not compatible with CMakePresets (<3.19) call cmake like: 'cmake -G "Unix Makefiles" -DCMAKE_TOOLCHAIN_FILE=/Users/jenkins/workspace/prod-v1/bsr/84630/eadda/.conan/data/llama-cpp/b3542/_/_/build/f1a36a2aea2da1148eb4bddd758d8db183c6db83/build/Release/generators/conan_toolchain.cmake -DCMAKE_POLICY_DEFAULT_CMP0091=NEW -DCMAKE_BUILD_TYPE=Release'
llama-cpp/b3542: Aggregating env generators
[HOOK - conan-center.py] pre_build(): [FPIC MANAGEMENT (KB-H007)] OK. 'fPIC' option found and apparently well managed
[HOOK - conan-center.py] pre_build(): [FPIC MANAGEMENT (KB-H007)] OK
llama-cpp/b3542: Calling build()
llama-cpp/b3542: CMake command: cmake -G "Unix Makefiles" -DCMAKE_TOOLCHAIN_FILE="/Users/jenkins/workspace/prod-v1/bsr/84630/eadda/.conan/data/llama-cpp/b3542/_/_/build/f1a36a2aea2da1148eb4bddd758d8db183c6db83/build/Release/generators/conan_toolchain.cmake" -DCMAKE_INSTALL_PREFIX="/Users/jenkins/workspace/prod-v1/bsr/84630/eadda/.conan/data/llama-cpp/b3542/_/_/package/f1a36a2aea2da1148eb4bddd758d8db183c6db83" -DCMAKE_POLICY_DEFAULT_CMP0091="NEW" -DCMAKE_BUILD_TYPE="Release" "/Users/jenkins/workspace/prod-v1/bsr/84630/eadda/.conan/data/llama-cpp/b3542/_/_/build/f1a36a2aea2da1148eb4bddd758d8db183c6db83/src"
----Running------
> cmake -G "Unix Makefiles" -DCMAKE_TOOLCHAIN_FILE="/Users/jenkins/workspace/prod-v1/bsr/84630/eadda/.conan/data/llama-cpp/b3542/_/_/build/f1a36a2aea2da1148eb4bddd758d8db183c6db83/build/Release/generators/conan_toolchain.cmake" -DCMAKE_INSTALL_PREFIX="/Users/jenkins/workspace/prod-v1/bsr/84630/eadda/.conan/data/llama-cpp/b3542/_/_/package/f1a36a2aea2da1148eb4bddd758d8db183c6db83" -DCMAKE_POLICY_DEFAULT_CMP0091="NEW" -DCMAKE_BUILD_TYPE="Release" "/Users/jenkins/workspace/prod-v1/bsr/84630/eadda/.conan/data/llama-cpp/b3542/_/_/build/f1a36a2aea2da1148eb4bddd758d8db183c6db83/src"
-----------------
-- Using Conan toolchain: /Users/jenkins/workspace/prod-v1/bsr/84630/eadda/.conan/data/llama-cpp/b3542/_/_/build/f1a36a2aea2da1148eb4bddd758d8db183c6db83/build/Release/generators/conan_toolchain.cmake
-- Conan toolchain: Setting CMAKE_POSITION_INDEPENDENT_CODE=ON (options.fPIC)
-- Conan toolchain: Setting BUILD_SHARED_LIBS = OFF
-- The C compiler identification is AppleClang 13.0.0.13000029
-- The CXX compiler identification is AppleClang 13.0.0.13000029
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /Applications/conan/xcode/13.0/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /Applications/conan/xcode/13.0/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found Git: /usr/bin/git (found version "2.30.1 (Apple Git-130)")
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE
-- Accelerate framework found
-- Metal framework found
-- The ASM compiler identification is Clang
-- Found assembler: /Applications/conan/xcode/13.0/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc
-- Could NOT find OpenMP_C (missing: OpenMP_C_FLAGS OpenMP_C_LIB_NAMES)
-- Could NOT find OpenMP_CXX (missing: OpenMP_CXX_FLAGS OpenMP_CXX_LIB_NAMES)
-- Could NOT find OpenMP (missing: OpenMP_C_FOUND OpenMP_CXX_FOUND)
-- Looking for dgemm_
-- Looking for dgemm_ - found
-- Found BLAS: /Applications/conan/xcode/13.0/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX11.3.sdk/System/Library/Frameworks/Accelerate.framework
-- BLAS found, Libraries: /Applications/conan/xcode/13.0/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX11.3.sdk/System/Library/Frameworks/Accelerate.framework
-- BLAS found, Includes:
-- Using llamafile
-- Warning: ccache not found - consider installing it for faster compilation or disable this warning with GGML_CCACHE=OFF
-- CMAKE_SYSTEM_PROCESSOR: arm64
-- ARM detected
-- Performing Test COMPILER_SUPPORTS_FP16_FORMAT_I3E
-- Performing Test COMPILER_SUPPORTS_FP16_FORMAT_I3E - Failed
-- Configuring done
-- Generating done
-- Build files have been written to: /Users/jenkins/workspace/prod-v1/bsr/84630/eadda/.conan/data/llama-cpp/b3542/_/_/build/f1a36a2aea2da1148eb4bddd758d8db183c6db83/build/Release
llama-cpp/b3542: CMake command: cmake --build "/Users/jenkins/workspace/prod-v1/bsr/84630/eadda/.conan/data/llama-cpp/b3542/_/_/build/f1a36a2aea2da1148eb4bddd758d8db183c6db83/build/Release" '--' '-j8'
----Running------
> cmake --build "/Users/jenkins/workspace/prod-v1/bsr/84630/eadda/.conan/data/llama-cpp/b3542/_/_/build/f1a36a2aea2da1148eb4bddd758d8db183c6db83/build/Release" '--' '-j8'
-----------------
[  7%] Generate assembly for embedded Metal library
[  7%] Generating build details from Git
Embedding Metal library
-- Found Git: /usr/bin/git (found version "2.30.1 (Apple Git-130)")
Scanning dependencies of target ggml
[ 14%] Building C object ggml/src/CMakeFiles/ggml.dir/ggml.c.o
[ 14%] Building C object ggml/src/CMakeFiles/ggml.dir/ggml-quants.c.o
[ 17%] Building C object ggml/src/CMakeFiles/ggml.dir/ggml-backend.c.o
[ 21%] Building C object ggml/src/CMakeFiles/ggml.dir/ggml-alloc.c.o
[ 25%] Building C object ggml/src/CMakeFiles/ggml.dir/ggml-metal.m.o
[ 32%] Building CXX object ggml/src/CMakeFiles/ggml.dir/ggml-blas.cpp.o
[ 32%] Building ASM object ggml/src/CMakeFiles/ggml.dir/__/__/autogenerated/ggml-metal-embed.s.o
[ 35%] Building CXX object common/CMakeFiles/build_info.dir/build-info.cpp.o
[ 39%] Building CXX object ggml/src/CMakeFiles/ggml.dir/llamafile/sgemm.cpp.o
[ 39%] Built target build_info
[ 42%] Building C object ggml/src/CMakeFiles/ggml.dir/ggml-aarch64.c.o
[ 46%] Linking CXX static library libggml.a
[ 46%] Built target ggml
[ 50%] Building CXX object src/CMakeFiles/llama.dir/llama-grammar.cpp.o
[ 57%] Building CXX object src/CMakeFiles/llama.dir/unicode.cpp.o
[ 57%] Building CXX object src/CMakeFiles/llama.dir/unicode-data.cpp.o
[ 60%] Building CXX object src/CMakeFiles/llama.dir/llama-sampling.cpp.o
[ 64%] Building CXX object src/CMakeFiles/llama.dir/llama-vocab.cpp.o
[ 67%] Building CXX object src/CMakeFiles/llama.dir/llama.cpp.o
[ 71%] Linking CXX static library libllama.a
[ 71%] Built target llama
[ 75%] Building CXX object common/CMakeFiles/common.dir/json-schema-to-grammar.cpp.o
[ 78%] Building CXX object common/CMakeFiles/common.dir/train.cpp.o
[ 82%] Building CXX object common/CMakeFiles/common.dir/console.cpp.o
[ 85%] Building CXX object common/CMakeFiles/common.dir/sampling.cpp.o
[ 89%] Building CXX object common/CMakeFiles/common.dir/common.cpp.o
[ 92%] Building CXX object common/CMakeFiles/common.dir/grammar-parser.cpp.o
[ 96%] Building CXX object common/CMakeFiles/common.dir/ngram-cache.cpp.o
[100%] Linking CXX static library libcommon.a
[100%] Built target common
llama-cpp/b3542: Package 'f1a36a2aea2da1148eb4bddd758d8db183c6db83' built
llama-cpp/b3542: Build folder /Users/jenkins/workspace/prod-v1/bsr/84630/eadda/.conan/data/llama-cpp/b3542/_/_/build/f1a36a2aea2da1148eb4bddd758d8db183c6db83/build/Release
llama-cpp/b3542: Generated conaninfo.txt
llama-cpp/b3542: Generated conanbuildinfo.txt
llama-cpp/b3542: Generating the package
llama-cpp/b3542: Package folder /Users/jenkins/workspace/prod-v1/bsr/84630/eadda/.conan/data/llama-cpp/b3542/_/_/package/f1a36a2aea2da1148eb4bddd758d8db183c6db83
llama-cpp/b3542: Calling package()
llama-cpp/b3542: Copied 1 file: LICENSE
llama-cpp/b3542: CMake command: cmake --install "/Users/jenkins/workspace/prod-v1/bsr/84630/eadda/.conan/data/llama-cpp/b3542/_/_/build/f1a36a2aea2da1148eb4bddd758d8db183c6db83/build/Release" --prefix "/Users/jenkins/workspace/prod-v1/bsr/84630/eadda/.conan/data/llama-cpp/b3542/_/_/package/f1a36a2aea2da1148eb4bddd758d8db183c6db83"
----Running------
> cmake --install "/Users/jenkins/workspace/prod-v1/bsr/84630/eadda/.conan/data/llama-cpp/b3542/_/_/build/f1a36a2aea2da1148eb4bddd758d8db183c6db83/build/Release" --prefix "/Users/jenkins/workspace/prod-v1/bsr/84630/eadda/.conan/data/llama-cpp/b3542/_/_/package/f1a36a2aea2da1148eb4bddd758d8db183c6db83"
-----------------
-- Install configuration: "Release"
-- Installing: /Users/jenkins/workspace/prod-v1/bsr/84630/eadda/.conan/data/llama-cpp/b3542/_/_/package/f1a36a2aea2da1148eb4bddd758d8db183c6db83/lib/libggml.a
-- Installing: /Users/jenkins/workspace/prod-v1/bsr/84630/eadda/.conan/data/llama-cpp/b3542/_/_/package/f1a36a2aea2da1148eb4bddd758d8db183c6db83/include/ggml.h
-- Installing: /Users/jenkins/workspace/prod-v1/bsr/84630/eadda/.conan/data/llama-cpp/b3542/_/_/package/f1a36a2aea2da1148eb4bddd758d8db183c6db83/include/ggml-alloc.h
-- Installing: /Users/jenkins/workspace/prod-v1/bsr/84630/eadda/.conan/data/llama-cpp/b3542/_/_/package/f1a36a2aea2da1148eb4bddd758d8db183c6db83/include/ggml-backend.h
-- Installing: /Users/jenkins/workspace/prod-v1/bsr/84630/eadda/.conan/data/llama-cpp/b3542/_/_/package/f1a36a2aea2da1148eb4bddd758d8db183c6db83/include/ggml-blas.h
-- Installing: /Users/jenkins/workspace/prod-v1/bsr/84630/eadda/.conan/data/llama-cpp/b3542/_/_/package/f1a36a2aea2da1148eb4bddd758d8db183c6db83/include/ggml-cann.h
-- Installing: /Users/jenkins/workspace/prod-v1/bsr/84630/eadda/.conan/data/llama-cpp/b3542/_/_/package/f1a36a2aea2da1148eb4bddd758d8db183c6db83/include/ggml-cuda.h
-- Up-to-date: /Users/jenkins/workspace/prod-v1/bsr/84630/eadda/.conan/data/llama-cpp/b3542/_/_/package/f1a36a2aea2da1148eb4bddd758d8db183c6db83/include/ggml.h
-- Installing: /Users/jenkins/workspace/prod-v1/bsr/84630/eadda/.conan/data/llama-cpp/b3542/_/_/package/f1a36a2aea2da1148eb4bddd758d8db183c6db83/include/ggml-kompute.h
-- Installing: /Users/jenkins/workspace/prod-v1/bsr/84630/eadda/.conan/data/llama-cpp/b3542/_/_/package/f1a36a2aea2da1148eb4bddd758d8db183c6db83/include/ggml-metal.h
-- Installing: /Users/jenkins/workspace/prod-v1/bsr/84630/eadda/.conan/data/llama-cpp/b3542/_/_/package/f1a36a2aea2da1148eb4bddd758d8db183c6db83/include/ggml-rpc.h
-- Installing: /Users/jenkins/workspace/prod-v1/bsr/84630/eadda/.conan/data/llama-cpp/b3542/_/_/package/f1a36a2aea2da1148eb4bddd758d8db183c6db83/include/ggml-sycl.h
-- Installing: /Users/jenkins/workspace/prod-v1/bsr/84630/eadda/.conan/data/llama-cpp/b3542/_/_/package/f1a36a2aea2da1148eb4bddd758d8db183c6db83/include/ggml-vulkan.h
-- Installing: /Users/jenkins/workspace/prod-v1/bsr/84630/eadda/.conan/data/llama-cpp/b3542/_/_/package/f1a36a2aea2da1148eb4bddd758d8db183c6db83/bin/ggml-metal.metal
-- Installing: /Users/jenkins/workspace/prod-v1/bsr/84630/eadda/.conan/data/llama-cpp/b3542/_/_/package/f1a36a2aea2da1148eb4bddd758d8db183c6db83/lib/libllama.a
-- Installing: /Users/jenkins/workspace/prod-v1/bsr/84630/eadda/.conan/data/llama-cpp/b3542/_/_/package/f1a36a2aea2da1148eb4bddd758d8db183c6db83/include/llama.h
-- Installing: /Users/jenkins/workspace/prod-v1/bsr/84630/eadda/.conan/data/llama-cpp/b3542/_/_/package/f1a36a2aea2da1148eb4bddd758d8db183c6db83/lib/cmake/llama/llama-config.cmake
-- Installing: /Users/jenkins/workspace/prod-v1/bsr/84630/eadda/.conan/data/llama-cpp/b3542/_/_/package/f1a36a2aea2da1148eb4bddd758d8db183c6db83/lib/cmake/llama/llama-version.cmake
-- Installing: /Users/jenkins/workspace/prod-v1/bsr/84630/eadda/.conan/data/llama-cpp/b3542/_/_/package/f1a36a2aea2da1148eb4bddd758d8db183c6db83/bin/convert_hf_to_gguf.py
-- Installing: /Users/jenkins/workspace/prod-v1/bsr/84630/eadda/.conan/data/llama-cpp/b3542/_/_/package/f1a36a2aea2da1148eb4bddd758d8db183c6db83/lib/pkgconfig/llama.pc
llama-cpp/b3542: Copied 16 '.gguf' files
llama-cpp/b3542: Copied 13 '.out' files
llama-cpp/b3542: Copied 13 '.inp' files
llama-cpp/b3542: Copied 1 file: .editorconfig
llama-cpp/b3542: Copied 9 '.h' files
llama-cpp/b3542: Copied 2 '.hpp' files: json.hpp, base64.hpp
llama-cpp/b3542: Copied 1 '.a' file: libcommon.a
llama-cpp/b3542: Copied 1 '.cmake' file: llama-cpp-cuda-static.cmake
[HOOK - conan-center.py] post_package(): [PACKAGE LICENSE (KB-H012)] OK
[HOOK - conan-center.py] post_package(): [DEFAULT PACKAGE LAYOUT (KB-H013)] OK
[HOOK - conan-center.py] post_package(): [MATCHING CONFIGURATION (KB-H014)] OK
[HOOK - conan-center.py] post_package(): [SHARED ARTIFACTS (KB-H015)] OK
[HOOK - conan-center.py] post_package(): [STATIC ARTIFACTS (KB-H074)] OK
[HOOK - conan-center.py] post_package(): [EITHER STATIC OR SHARED OF EACH LIB (KB-H076)] OK
[HOOK - conan-center.py] post_package(): [PC-FILES (KB-H020)] OK
[HOOK - conan-center.py] post_package(): [CMAKE-MODULES-CONFIG-FILES (KB-H016)] OK
[HOOK - conan-center.py] post_package(): [PDB FILES NOT ALLOWED (KB-H017)] OK
[HOOK - conan-center.py] post_package(): [LIBTOOL FILES PRESENCE (KB-H018)] OK
[HOOK - conan-center.py] post_package(): [MS RUNTIME FILES (KB-H021)] OK
[HOOK - conan-center.py] post_package(): [SHORT_PATHS USAGE (KB-H066)] OK
[HOOK - conan-center.py] post_package(): [MISSING SYSTEM LIBS (KB-H043)] OK
[HOOK - conan-center.py] post_package(): [APPLE RELOCATABLE SHARED LIBS (KB-H077)] OK
llama-cpp/b3542 package(): Packaged 16 '.gguf' files
llama-cpp/b3542 package(): Packaged 13 '.out' files
llama-cpp/b3542 package(): Packaged 13 '.inp' files
llama-cpp/b3542 package(): Packaged 2 files: .editorconfig, LICENSE
llama-cpp/b3542 package(): Packaged 1 '.metal' file: ggml-metal.metal
llama-cpp/b3542 package(): Packaged 1 '.py' file: convert_hf_to_gguf.py
llama-cpp/b3542 package(): Packaged 21 '.h' files
llama-cpp/b3542 package(): Packaged 2 '.hpp' files: json.hpp, base64.hpp
llama-cpp/b3542 package(): Packaged 3 '.a' files: libggml.a, libllama.a, libcommon.a
llama-cpp/b3542 package(): Packaged 1 '.cmake' file: llama-cpp-cuda-static.cmake
llama-cpp/b3542: Package 'f1a36a2aea2da1148eb4bddd758d8db183c6db83' created
llama-cpp/b3542: Created package revision 109a621b24e5a4b5b23b942668dd04cd
[HOOK - conan-center.py] post_package_info(): [CMAKE FILE NOT IN BUILD FOLDERS (KB-H019)] OK
[HOOK - conan-center.py] post_package_info(): [LIBRARY DOES NOT EXIST (KB-H054)] OK
[HOOK - conan-center.py] post_package_info(): [INCLUDE PATH DOES NOT EXIST (KB-H071)] OK
Aggregating env generators
fatal: not a git repository (or any of the parent directories): .git
fatal: not a git repository (or any of the parent directories): .git

CMake Warning at ggml/src/CMakeLists.txt:167 (message):
  OpenMP not found

CMake Warning at common/CMakeLists.txt:30 (message):
  Git repository not found; to enable automatic generation of build info, make sure Git is installed and the project is a Git repository.

fatal: not a git repository (or any of the parent directories): .git
fatal: not a git repository (or any of the parent directories): .git
WARN: *** Conan 1 is legacy and on a deprecation path ***
WARN: *** Please upgrade to Conan 2 ***
[HOOK - conan-center.py] post_package_info(): WARN: [CMAKE FILE NOT IN BUILD FOLDERS (KB-H019)] The *.cmake files have to be placed in a folder declared as `cpp_info.builddirs`. Currently folders declared: {'/Users/jenkins/workspace/prod-v1/bsr/84630/eadda/.conan/data/llama-cpp/b3542/_/_/package/f1a36a2aea2da1148eb4bddd758d8db183c6db83/'}
[HOOK - conan-center.py] post_package_info(): WARN: [CMAKE FILE NOT IN BUILD FOLDERS (KB-H019)] Found files: ./lib/cmake/llama-cpp-cuda-static.cmake
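Note on the final KB-H019 warning: the hook reports that `llama-cpp-cuda-static.cmake` is packaged under `./lib/cmake`, but the recipe only declares the package root in `cpp_info.builddirs`. A minimal sketch of the kind of `package_info()` change that would address it is below; this is an assumption about the recipe (its class name and surrounding code are hypothetical), not the actual recipe used in this build, and the `"lib/cmake"` value is taken from the path the hook reports.

```
# Hypothetical Conan 1.x recipe fragment, sketching a fix for KB-H019:
# declare the folder that actually contains the packaged *.cmake file.
from conans import ConanFile

class LlamaCppConan(ConanFile):
    name = "llama-cpp"

    def package_info(self):
        # Matches the file the hook found: ./lib/cmake/llama-cpp-cuda-static.cmake
        self.cpp_info.builddirs = ["lib/cmake"]
```

Alternatively, moving or dropping the stray `llama-cpp-cuda-static.cmake` in `package()` would silence the warning without widening `builddirs`.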