ABACUS Installation Tutorial - Toolchain (2-Intel)

Overview

  • Tutorial author: 量子御坂
  • Contributors:
  • Last updated: 2025-05-23
  • Keywords: ABACUS installation, Toolchain, Intel

01 Introduction

ABACUS has released its 3.10-LTS stable version and is still under active development. Many users would like to deploy ABACUS on their own machines and benefit from the computational efficiency it brings. However, building ABACUS in different server and workstation environments, and reaching the best possible performance in each specific environment, has always posed challenges.

The ABACUS Toolchain is a collection of bash scripts shipped inside the ABACUS repository. It helps users build and install the software dependencies required by ABACUS, either online or offline, automatically handles the environment variables of each dependency library, and then quickly completes the ABACUS source build on top of those dependencies, providing an efficient, high-performance, easily modifiable and portable automated build workflow for ABACUS.

This tutorial is written against the 2025-02 version of the ABACUS Toolchain. Currently, the ABACUS Toolchain supports the following build modes:

  • GNU Toolchain: starting from a sufficiently recent GNU compiler suite (gcc, g++, gfortran, collectively GCC), build the ABACUS dependency libraries and ABACUS itself from scratch.
  • Intel Toolchain: build the ABACUS dependency libraries and ABACUS itself on top of Intel compilers, math libraries and MPI libraries (usually bundled in Intel oneAPI or Intel Parallel Studio XE).
  • AMD Toolchain: build ABACUS using AMD compilers and math libraries, further divided into the GCC-AOCL Toolchain and the AOCC-AOCL Toolchain.

In addition, the ABACUS Toolchain supports a series of advanced features, including optional feature plugins and packaging for offline installation.

Overall, the vision of the ABACUS Toolchain is to:

  • Let users efficiently build, from source, the ABACUS binary best suited to their current server environment, and quickly benchmark ABACUS binaries produced by Toolchains with different dependency stacks.
  • Establish a standard workflow for building ABACUS from source, so that ABACUS developers can control the version and build options of every dependency directly in the Toolchain, instead of compiling each dependency by hand and manually passing all the build flags.

A previous tutorial covered how to build ABACUS from scratch in a simple, direct way with the GNU Toolchain: ABACUS Installation Tutorial - Toolchain (1-GNU). That approach has the best compatibility, but the resulting ABACUS binary is not necessarily the fastest, especially on Intel-CPU servers that already have the corresponding Intel oneAPI suite installed. This tutorial therefore focuses on using the Intel Toolchain to obtain a higher-performance ABACUS build.

1.1 Building and installing ABACUS with the Intel Toolchain

Below we take the Intel Toolchain as an example and walk through installing the 3.10.0 LTS version of ABACUS. Depending on the characteristics of your server, the Intel Toolchain scripts need to be adjusted accordingly. The author habitually edits with vim and this tutorial is written around vim, but any method of editing files on a Linux server works just as well. The tutorial assumes the Intel dependencies are provided by Intel oneAPI; if you use the parallel_xe_studio package instead, you only need to set up its environment variables yourself.

Note: before building from source with the Toolchain, make sure you have correctly loaded the corresponding Intel oneAPI environment or the equivalent environment variables!

# load intel-oneapi environment via source
source /path/to/intel/oneapi/setvars.sh
# load intel-oneapi environment via module-env if exists
module load mkl mpi compiler
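
A quick sanity check that the environment is actually loaded (a minimal sketch; the reported paths will differ on your machine):

# the Intel compilers, MPI launcher and MKL should now be visible
which icx icpx ifx mpiexec
echo "MKLROOT=$MKLROOT"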

1.2 Installing ABACUS directly with Intel oneAPI 2024+

If the Intel oneAPI on your server is version 2024.0 or later, you can generally build and install ABACUS directly through the steps below. (The 2025-02 Toolchain also supports oneAPI 2025+ by pulling the latest versions of the dependency libraries.)

1. Download the ABACUS repository

You can download it via git clone, which lets you quickly update or switch versions through git. Alternatively, you can fetch a tarball of the ABACUS repository with wget; the resulting tree carries no git metadata and takes up less disk space.


# via github
git clone https://github.com/deepmodeling/abacus-develop.git -b LTS
# via wget from github by codeload: can be used under CN Internet
wget https://codeload.github.com/deepmodeling/abacus-develop/tar.gz/LTS -O abacus-LTS.tar.gz
tar -zxvf abacus-LTS.tar.gz

2. Enter the toolchain directory

cd abacus-develop/toolchain
# if you download abacus via wget from codeload
cd abacus-develop-LTS/toolchain

3. Run the toolchain_intel.sh script

sh ./toolchain_intel.sh

This starts the Toolchain-based build of the ABACUS dependency libraries. With the default Intel Toolchain settings, it will:

  • Check the GNU compiler version on your system (new in the 2025-02 Toolchain) and link the Intel compilers; the output looks like:
MPI is detected and it appears to be Intel MPI
Checking system GCC version for gcc, intel and amd toolchain
Your System gcc/g++/gfortran version should be consistent
Minimum required version: 5
Your gcc version: 11.3.0
Your g++ version: 11.3.0
Your gfortran version: 11.3.0
Your GCC version seems to be enough for ABACUS installation.
Using MKL, so openblas is disabled.
Compiling with 16 processes for target native.
Step gcc took 0.00 seconds.
==================== Finding Intel compiler from system paths ====================
path to icx is /opt/intel/oneapi/compiler/2025.1/bin/icx
path to icpx is /opt/intel/oneapi/compiler/2025.1/bin/icpx
path to ifx is /opt/intel/oneapi/compiler/2025.1/bin/ifx
CC is /opt/intel/oneapi/compiler/2025.1/bin/icx
CXX is /opt/intel/oneapi/compiler/2025.1/bin/icpx
FC is /opt/intel/oneapi/compiler/2025.1/bin/ifx
Step intel took 0.00 seconds.
Step amd took 0.00 seconds.

Note: building ABACUS and its dependencies requires a minimum system C++ compiler version; GCC must be at least version 5, even when the build uses Intel or AMD libraries and compilers. Some aging clusters, however, still run old CentOS kernels with GCC 4.8.5. If strange errors occur during an Intel Toolchain build, one likely cause is that the system GCC is too old for the Intel compilers to build the ABACUS-related programs correctly; in that case, please contact your server administrator.
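
You can verify this yourself before launching the Toolchain; a minimal check (the reported versions will of course differ per system):

# all three GNU compilers should exist and report version >= 5
gcc --version | head -n1
g++ --version | head -n1
gfortran --version | head -n1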

  • Download OpenBLAS (the Intel Toolchain does this as well) and detect the system instruction set and architecture with the arch-detection tools bundled with OpenBLAS; the output differs from server to server, for example:
# Run toolchain on Intel-CPU Machine
==================== Getting proc arch info using OpenBLAS tools ====================
wget https://codeload.github.com/OpenMathLib/OpenBLAS/tar.gz/v0.3.29 -O OpenBLAS-0.3.29.tar.gz --no-check-certificate
--2025-05-06 16:58:40-- https://codeload.github.com/OpenMathLib/OpenBLAS/tar.gz/v0.3.29
Resolving codeload.github.com (codeload.github.com)... 20.205.243.165
Connecting to codeload.github.com (codeload.github.com)|20.205.243.165|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [application/x-gzip]
Saving to: ‘OpenBLAS-0.3.29.tar.gz’

OpenBLAS-0.3.29.tar.gz [ <=> ] 23.53M 6.75MB/s in 3.5s

2025-05-06 16:58:45 (6.75 MB/s) - ‘OpenBLAS-0.3.29.tar.gz’ saved [24671913]

OpenBLAS-0.3.29.tar.gz: OK
Checksum of OpenBLAS-0.3.29.tar.gz Ok
OpenBLAS detected LIBCORE = skylakex
OpenBLAS detected ARCH = x86_64

If you see the message ./f_check: 100: [: Illegal number: here, be aware that it is just a harmless glitch that can occur when running the OpenBLAS arch-detection scripts; it does not affect the Toolchain build.

  • Automatically download and install the CMake build tool
  • Locate and link the Intel MPI parallel library and the Intel MKL math library from the environment variables:
==================== Finding Intel MPI from system paths ====================
path to mpiexec is /opt/intel/oneapi/mpi/2021.15/bin/mpiexec
path to mpiicx is /opt/intel/oneapi/mpi/2021.15/bin/mpiicx
path to mpiicpx is /opt/intel/oneapi/mpi/2021.15/bin/mpiicpx
path to mpiifx is /opt/intel/oneapi/mpi/2021.15/bin/mpiifx
Found lib directory /opt/intel/oneapi/mpi/2021.15/lib
libmpi is found in ld search path
libmpicxx is found in ld search path
I_MPI_CXX is icpx
I_MPI_CC is icx
I_MPI_FC is ifx
MPICXX is /opt/intel/oneapi/mpi/2021.15/bin/mpiicpx
MPICC is /opt/intel/oneapi/mpi/2021.15/bin/mpiicx
MPIFC is /opt/intel/oneapi/mpi/2021.15/bin/mpiifx
Step intelmpi took 0.00 seconds.
==================== Finding MKL from system paths ====================
MKLROOT is found to be /opt/intel/oneapi/mkl/2025.1
libm is found in ld search path
libdl is found in ld search path
Step mkl took 0.00 seconds.
  • Automatically download, build and install:
    • math library: ELPA
    • exchange-correlation functional library: LibXC
    • libraries for other features: CEREAL, RapidJSON (optional: LibRI, LibComm, LibTorch, Libnpy)
      The full output is as follows (this example was run in a Bohrium container):
MPI is detected and it appears to be Intel MPI
Checking system GCC version for gcc, intel and amd toolchain
Your System gcc/g++/gfortran version should be consistent
Minimum required version: 5
Your gcc version: 11.3.0
Your g++ version: 11.3.0
Your gfortran version: 11.3.0
Your GCC version seems to be enough for ABACUS installation.
Using MKL, so openblas is disabled.
Compiling with 16 processes for target native.
Step gcc took 0.00 seconds.
==================== Finding Intel compiler from system paths ====================
path to icx is /opt/intel/oneapi/compiler/2025.1/bin/icx
path to icpx is /opt/intel/oneapi/compiler/2025.1/bin/icpx
path to ifx is /opt/intel/oneapi/compiler/2025.1/bin/ifx
CC is /opt/intel/oneapi/compiler/2025.1/bin/icx
CXX is /opt/intel/oneapi/compiler/2025.1/bin/icpx
FC is /opt/intel/oneapi/compiler/2025.1/bin/ifx
Step intel took 0.00 seconds.
Step amd took 0.00 seconds.
==================== Getting proc arch info using OpenBLAS tools ====================
wget https://codeload.github.com/OpenMathLib/OpenBLAS/tar.gz/v0.3.29 -O OpenBLAS-0.3.29.tar.gz --no-check-certificate
--2025-05-09 17:15:04-- https://codeload.github.com/OpenMathLib/OpenBLAS/tar.gz/v0.3.29
Resolving ga.dp.tech (ga.dp.tech)... 10.255.254.18, 10.255.254.37, 10.255.254.7
Connecting to ga.dp.tech (ga.dp.tech)|10.255.254.18|:8118... connected.
Proxy request sent, awaiting response... 200 OK
Length: unspecified [application/x-gzip]
Saving to: 'OpenBLAS-0.3.29.tar.gz'

OpenBLAS-0.3.29.tar.gz [ <=> ] 23.53M 4.64MB/s in 5.5s

2025-05-09 17:15:11 (4.30 MB/s) - 'OpenBLAS-0.3.29.tar.gz' saved [24671913]

OpenBLAS-0.3.29.tar.gz: OK
Checksum of OpenBLAS-0.3.29.tar.gz Ok
./f_check: 100: [: Illegal number:
OpenBLAS detected LIBCORE = skylakex
OpenBLAS detected ARCH = x86_64
==================== Installing CMake ====================
wget https://cmake.org/files/v3.31/cmake-3.31.7-linux-x86_64.sh -O cmake-3.31.7-linux-x86_64.sh --no-check-certificate
--2025-05-09 17:15:16-- https://cmake.org/files/v3.31/cmake-3.31.7-linux-x86_64.sh
Resolving ga.dp.tech (ga.dp.tech)... 10.255.254.37, 10.255.254.7, 10.255.254.18
Connecting to ga.dp.tech (ga.dp.tech)|10.255.254.37|:8118... connected.
Proxy request sent, awaiting response... 200 OK
Length: 55005854 (52M) [text/x-sh]
Saving to: 'cmake-3.31.7-linux-x86_64.sh'

cmake-3.31.7-linux-x86_64. 100%[=====================================>] 52.46M 7.10MB/s in 8.7s

2025-05-09 17:15:25 (6.04 MB/s) - 'cmake-3.31.7-linux-x86_64.sh' saved [55005854/55005854]

cmake-3.31.7-linux-x86_64.sh: OK
Checksum of cmake-3.31.7-linux-x86_64.sh Ok
Installing from scratch into /opt/abacus-develop-LTS/toolchain/install/cmake-3.31.7
Step cmake took 11.00 seconds.
==================== Finding Intel MPI from system paths ====================
path to mpiexec is /opt/intel/oneapi/mpi/2021.15/bin/mpiexec
path to mpiicx is /opt/intel/oneapi/mpi/2021.15/bin/mpiicx
path to mpiicpx is /opt/intel/oneapi/mpi/2021.15/bin/mpiicpx
path to mpiifx is /opt/intel/oneapi/mpi/2021.15/bin/mpiifx
Found lib directory /opt/intel/oneapi/mpi/2021.15/lib
libmpi is found in ld search path
libmpicxx is found in ld search path
I_MPI_CXX is icpx
I_MPI_CC is icx
I_MPI_FC is ifx
MPICXX is /opt/intel/oneapi/mpi/2021.15/bin/mpiicpx
MPICC is /opt/intel/oneapi/mpi/2021.15/bin/mpiicx
MPIFC is /opt/intel/oneapi/mpi/2021.15/bin/mpiifx
Step intelmpi took 0.00 seconds.
==================== Finding MKL from system paths ====================
MKLROOT is found to be /opt/intel/oneapi/mkl/2025.1
libm is found in ld search path
libdl is found in ld search path
Step mkl took 0.00 seconds.
==================== Installing LIBXC ====================
wget https://gitlab.com/libxc/libxc/-/archive/7.0.0/libxc-7.0.0.tar.bz2 -O libxc-7.0.0.tar.bz2 --no-check-certificate
--2025-05-09 17:15:27-- https://gitlab.com/libxc/libxc/-/archive/7.0.0/libxc-7.0.0.tar.bz2
Resolving ga.dp.tech (ga.dp.tech)... 10.255.254.18, 10.255.254.7, 10.255.254.37
Connecting to ga.dp.tech (ga.dp.tech)|10.255.254.18|:8118... connected.
Proxy request sent, awaiting response... 200 OK
Length: unspecified [application/octet-stream]
Saving to: 'libxc-7.0.0.tar.bz2'

libxc-7.0.0.tar.bz2 [ <=> ] 49.98M 7.63MB/s in 7.5s

2025-05-09 17:15:36 (6.68 MB/s) - 'libxc-7.0.0.tar.bz2' saved [52408700]

libxc-7.0.0.tar.bz2: OK
Checksum of libxc-7.0.0.tar.bz2 Ok
Installing from scratch into /opt/abacus-develop-LTS/toolchain/install/libxc-7.0.0
Step libxc took 42.00 seconds.
Step fftw took 0.00 seconds.
Step scalapack took 0.00 seconds.
==================== Installing ELPA ====================
wget https://elpa.mpcdf.mpg.de/software/tarball-archive/Releases/2025.01.001/elpa-2025.01.001.tar.gz -O elpa-2025.01.001.tar.gz --no-check-certificate
--2025-05-09 17:16:09-- https://elpa.mpcdf.mpg.de/software/tarball-archive/Releases/2025.01.001/elpa-2025.01.001.tar.gz
Resolving ga.dp.tech (ga.dp.tech)... 10.255.254.18, 10.255.254.7, 10.255.254.37
Connecting to ga.dp.tech (ga.dp.tech)|10.255.254.18|:8118... connected.
Proxy request sent, awaiting response... 200 OK
Length: 2169795 (2.1M) [application/gzip]
Saving to: 'elpa-2025.01.001.tar.gz'

elpa-2025.01.001.tar.gz 100%[=====================================>] 2.07M 7.64KB/s in 4m 4s

2025-05-09 17:20:15 (8.69 KB/s) - 'elpa-2025.01.001.tar.gz' saved [2169795/2169795]

elpa-2025.01.001.tar.gz: OK
Checksum of elpa-2025.01.001.tar.gz Ok
Installing from scratch into /opt/abacus-develop-LTS/toolchain/install/elpa-2025.01.001/cpu
Step elpa took 535.00 seconds.
==================== Installing CEREAL ====================
===> Notice: This version of CEREAL is downloaded in GitHub master repository <===
wget https://codeload.github.com/USCiLab/cereal/tar.gz/master -O cereal-master.tar.gz --no-check-certificate
--2025-05-09 17:25:04-- https://codeload.github.com/USCiLab/cereal/tar.gz/master
Resolving ga.dp.tech (ga.dp.tech)... 10.255.254.18, 10.255.254.7, 10.255.254.37
Connecting to ga.dp.tech (ga.dp.tech)|10.255.254.18|:8118... connected.
Proxy request sent, awaiting response... 200 OK
Length: unspecified [application/x-gzip]
Saving to: 'cereal-master.tar.gz'

cereal-master.tar.gz [ <=> ] 377.35K --.-KB/s in 0.1s

2025-05-09 17:25:05 (2.73 MB/s) - 'cereal-master.tar.gz' saved [386409]

Installing from scratch into /opt/abacus-develop-LTS/toolchain/install/cereal-master
Step cereal took 1.00 seconds.
==================== Installing RAPIDJSON ====================
===> Notice: This version of rapidjson is downloaded in GitHub master repository <===
wget https://codeload.github.com/Tencent/rapidjson/tar.gz/master -O rapidjson-master.tar.gz --no-check-certificate
--2025-05-09 17:25:05-- https://codeload.github.com/Tencent/rapidjson/tar.gz/master
Resolving ga.dp.tech (ga.dp.tech)... 10.255.254.18, 10.255.254.7, 10.255.254.37
Connecting to ga.dp.tech (ga.dp.tech)|10.255.254.18|:8118... connected.
Proxy request sent, awaiting response... 200 OK
Length: unspecified [application/x-gzip]
Saving to: 'rapidjson-master.tar.gz'

rapidjson-master.tar.gz [ <=> ] 1.06M 1.94MB/s in 0.6s

2025-05-09 17:25:06 (1.94 MB/s) - 'rapidjson-master.tar.gz' saved [1116059]

Installing from scratch into /opt/abacus-develop-LTS/toolchain/install/rapidjson-master
Step rapidjson took 1.00 seconds.
Step libtorch took 0.00 seconds.
Step libnpy took 0.00 seconds.
Step libri took 0.00 seconds.
Step libcomm took 0.00 seconds.
========================== usage =========================
Done!
To use the installed tools and libraries and ABACUS version
compiled with it you will first need to execute at the prompt:
source /opt/abacus-develop-LTS/toolchain/install/setup
To build ABACUS by gnu-toolchain, just use:
./build_abacus_gnu.sh
To build ABACUS by intel-toolchain, just use:
./build_abacus_intel.sh
To build ABACUS by amd-toolchain in gcc-aocl, just use:
./build_abacus_gcc-aocl.sh
To build ABACUS by amd-toolchain in aocc-aocl, just use:
./build_abacus_aocc-aocl.sh
or you can modify the builder scripts to suit your needs.

After the dependency build succeeds, follow the printed usage instructions and run build_abacus_intel.sh:

sh ./build_abacus_intel.sh

This completes the build and installation of ABACUS itself, with output roughly as follows:

-- The CXX compiler identification is IntelLLVM 2025.1.1
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /opt/intel/oneapi/compiler/2025.1/bin/icpx - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- RapidJSON found. Headers:
-- Found Git: /usr/bin/git (found version "2.34.1")
-- Found git: attempting to get commit info...
fatal: not a git repository (or any of the parent directories): .git
fatal: not a git repository (or any of the parent directories): .git
CMake Warning at CMakeLists.txt:104 (message):
Failed to get git commit info


-- Found Cereal: /opt/abacus-develop-LTS/toolchain/install/cereal-master/include/cereal
-- Could NOT find PkgConfig (missing: PKG_CONFIG_EXECUTABLE)
-- ELPA : We need pkg-config to get all information about the elpa library
-- Found ELPA: /opt/abacus-develop-LTS/toolchain/install/elpa-2025.01.001/cpu/lib/libelpa_openmp.so
-- Performing Test ELPA_VERSION_SATISFIES
-- Performing Test ELPA_VERSION_SATISFIES - Success
-- Found MPI_CXX: /opt/intel/oneapi/mpi/2021.15/lib/libmpicxx.so (found version "3.1")
-- Found MPI: TRUE (found version "3.1")
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE
-- Found OpenMP_CXX: -fiopenmp (found version "5.1")
-- Found OpenMP: TRUE (found version "5.1")
-- Looking for a CUDA compiler
-- Looking for a CUDA compiler - NOTFOUND
-- MKL_VERSION: 2025.1.0
-- MKL_ROOT: /opt/intel/oneapi/mkl/2025.1
-- MKL_ARCH: intel64
-- MKL_SYCL_LINK: None, set to ` dynamic` by default
-- MKL_LINK: None, set to ` dynamic` by default
-- MKL_SYCL_INTERFACE_FULL: intel_lp64
-- MKL_INTERFACE_FULL: intel_lp64
-- MKL_SYCL_THREADING: None, set to ` tbb_thread` by default
-- MKL_THREADING: None, set to ` intel_thread` by default
-- MKL_MPI: None, set to ` intelmpi` by default
-- Experimental oneMKL Data Fitting SYCL API does not support LP64 on CPU
-- Found /opt/intel/oneapi/mkl/2025.1/lib/libmkl_scalapack_lp64.so
-- Found /opt/intel/oneapi/mkl/2025.1/lib/libmkl_cdft_core.so
-- Found /opt/intel/oneapi/mkl/2025.1/lib/libmkl_intel_lp64.so
-- Found /opt/intel/oneapi/mkl/2025.1/lib/libmkl_intel_thread.so
-- Found /opt/intel/oneapi/mkl/2025.1/lib/libmkl_core.so
-- Found /opt/intel/oneapi/mkl/2025.1/lib/libmkl_blacs_intelmpi_lp64.so
-- Found /opt/intel/oneapi/mkl/2025.1/lib/libmkl_sycl_blas.so
-- Found /opt/intel/oneapi/mkl/2025.1/lib/libmkl_sycl_lapack.so
-- Found /opt/intel/oneapi/mkl/2025.1/lib/libmkl_sycl_dft.so
-- Found /opt/intel/oneapi/mkl/2025.1/lib/libmkl_sycl_sparse.so
-- Found /opt/intel/oneapi/mkl/2025.1/lib/libmkl_sycl_data_fitting.so
-- Found /opt/intel/oneapi/mkl/2025.1/lib/libmkl_sycl_rng.so
-- Found /opt/intel/oneapi/mkl/2025.1/lib/libmkl_sycl_stats.so
-- Found /opt/intel/oneapi/mkl/2025.1/lib/libmkl_sycl_vm.so
-- Found /opt/intel/oneapi/mkl/2025.1/lib/libmkl_tbb_thread.so
-- Found /opt/intel/oneapi/compiler/2025.1/lib/libiomp5.so
-- Could NOT find PkgConfig (missing: PKG_CONFIG_EXECUTABLE)
-- Found Libxc: version 7.0.0
-- Configuring done (1.3s)
-- Generating done (0.2s)
-- Build files have been written to: /opt/abacus-develop-LTS/build_abacus_intel
[ 0%] Building CXX object source/CMakeFiles/driver.dir/driver.cpp.o
...
[100%] Built target io_basic
[100%] Building CXX object CMakeFiles/abacus.dir/source/main.cpp.o
[100%] Linking CXX executable abacus
[100%] Built target abacus
-- Install configuration: ""
-- Installing: /opt/abacus-develop-LTS/bin/abacus
========================== usage =========================
Done!
To use the installed ABACUS version
You need to source /opt/abacus-develop-LTS/toolchain/abacus_env.sh first !
"""

The output above indicates that the ABACUS build and installation is complete. Note that environment variables read from the system are not written into the install/setup file that abacus_env.sh refers to, so when actually using this abacus binary you must load the corresponding Intel oneAPI environment in addition to sourcing abacus_env.sh.

# after load intel-OneAPI env via source or module-load
source abacus_env.sh

This loads all of the ABACUS dependency environments together with ABACUS itself. You can now confirm that ABACUS is correctly installed and loaded with the abacus --version command:

root@bohrium-13504-1313660:/opt/abacus-develop-LTS/toolchain# source abacus_env.sh 
root@bohrium-13504-1313660:/opt/abacus-develop-LTS/toolchain# abacus --version
ABACUS version v3.10.0
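
Putting this together, a batch script for running the freshly built ABACUS might look like the following sketch (the module names, paths, and process count are placeholders for your own environment):

#!/bin/bash
#SBATCH -J abacus-run
#SBATCH -N 1
#SBATCH -n 16

# load the same Intel oneAPI environment that was used at build time
module load mkl mpi compiler   # or: source /path/to/intel/oneapi/setvars.sh
# load the ABACUS dependencies and the abacus binary
source /path/to/abacus-develop-LTS/toolchain/abacus_env.sh

mpirun -np 16 abacus | tee running.log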

1.3 Installing ABACUS with an older Intel oneAPI

In many situations we are forced to use an older oneAPI, for example:

  • Building a GPU-enabled ABACUS with the Intel stack (in this case ABACUS can only be built with the icpc compiler)
  • Building ABACUS with Intel oneAPI on an AMD-CPU server (in this case ELPA can only be built with the icpc compiler)
  • The server only provides an older oneAPI (ideally still 2023 or later) or parallel_xe_studio

Note: an ABACUS built with Intel oneAPI on an AMD-CPU server will deliver lower computational performance than the other options.

In such cases we can still complete the Toolchain-based ABACUS build by editing the key scripts. The scripts operated at the user level are toolchain_*.sh and build_abacus_*.sh; for the Intel Toolchain they are toolchain_intel.sh and build_abacus_intel.sh. Open toolchain_intel.sh with vim:

#!/bin/bash
#SBATCH -J install
#SBATCH -N 1
#SBATCH -n 16
#SBATCH -o compile.log
#SBATCH -e compile.err

# JamesMisaka in 2025-05-05
# install abacus dependency by intel-toolchain
# use mkl and intelmpi
# but mpich and openmpi can also be tried
# libtorch and libnpy are for deepks support, which can be =no
# gpu-lcao supporting modify: CUDA_PATH and --enable-cuda
# export CUDA_PATH=/usr/local/cuda

# module load mkl mpi compiler

./install_abacus_toolchain.sh \
--with-intel=system \
--math-mode=mkl \
--with-gcc=no \
--with-intelmpi=system \
--with-cmake=install \
--with-scalapack=no \
--with-libxc=install \
--with-fftw=no \
--with-elpa=install \
--with-cereal=install \
--with-rapidjson=install \
--with-libtorch=no \
--with-libnpy=no \
--with-libri=no \
--with-libcomm=no \
--with-intel-classic=no \
| tee compile.log
# for using AMD-CPU or GPU-version: set --with-intel-classic=yes
# to enable gpu-lcao, add the following lines:
# --enable-cuda \
# --gpu-ver=75 \
# one should check your gpu compute capability number

Type :34 to jump to line 34 of the script, press i or a to enter insert mode (curious readers can compare the difference between the two), and change --with-intel-classic=no to --with-intel-classic=yes, so that the invocation reads:

./install_abacus_toolchain.sh \
--with-intel=system \
--math-mode=mkl \
--with-gcc=no \
--with-intelmpi=system \
--with-cmake=install \
--with-scalapack=no \
--with-libxc=install \
--with-fftw=no \
--with-elpa=install \
--with-cereal=install \
--with-rapidjson=install \
--with-libtorch=no \
--with-libnpy=no \
--with-libri=no \
--with-libcomm=no \
--with-intel-classic=yes \
| tee compile.log
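
If you prefer a non-interactive edit to vim, the same change can also be made with a one-line sed command (a sketch that assumes the option string appears exactly as shown above):

# flip --with-intel-classic from no to yes in place
sed -i 's/--with-intel-classic=no/--with-intel-classic=yes/' toolchain_intel.sh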

Note: as you can see, the toolchain_*.sh scripts actually do their work by calling the main script install_abacus_toolchain.sh; they merely preset different build options for each toolchain, giving users an out-of-the-box, easily editable entry point. Be careful: in a bash command split across lines like this, all options form one single command line, which means nothing (no space or any other character) may follow the line-continuation character "\", and no comment lines may appear between the continued lines.

Press ESC to leave insert mode, then type :wq to save and quit. From now on, the Toolchain will detect the classic Intel compilers (icc, icpc, ifort) and their MPI wrappers at runtime and build the dependencies with them. Running toolchain_intel.sh now produces output that differs from before:

==================== Finding Intel compiler from system paths ====================
path to icc is /mnt/sg001/opt/intel/oneapi/compiler/2022.2.1/linux/bin/intel64/icc
path to icpc is /mnt/sg001/opt/intel/oneapi/compiler/2022.2.1/linux/bin/intel64/icpc
path to ifort is /mnt/sg001/opt/intel/oneapi/compiler/2022.2.1/linux/bin/intel64/ifort
CC is /mnt/sg001/opt/intel/oneapi/compiler/2022.2.1/linux/bin/intel64/icc
CXX is /mnt/sg001/opt/intel/oneapi/compiler/2022.2.1/linux/bin/intel64/icpc
FC is /mnt/sg001/opt/intel/oneapi/compiler/2022.2.1/linux/bin/intel64/ifort
Step intel took 0.00 seconds.
Step amd took 0.00 seconds.
==================== Getting proc arch info using OpenBLAS tools ====================
OpenBLAS detected LIBCORE = skylakex
OpenBLAS detected ARCH = x86_64
==================== Finding CMake from system paths ====================
path to cmake is /mnt/sg001/home/fz_pku_jh/software/cmake/3.31.7/bin/cmake
Step cmake took 0.00 seconds.
==================== Finding Intel MPI from system paths ====================
path to mpiexec is /mnt/sg001/opt/intel/oneapi/mpi/2021.7.1/bin/mpiexec
path to mpiicc is /mnt/sg001/opt/intel/oneapi/mpi/2021.7.1/bin/mpiicc
path to mpiicpc is /mnt/sg001/opt/intel/oneapi/mpi/2021.7.1/bin/mpiicpc
path to mpiifort is /mnt/sg001/opt/intel/oneapi/mpi/2021.7.1/bin/mpiifort
Found lib directory /mnt/sg001/opt/intel/oneapi/mpi/2021.7.1/lib/release
libmpi is found in ld search path
libmpicxx is found in ld search path
I_MPI_CXX is icpc
I_MPI_CC is icc
I_MPI_FC is ifort
MPICXX is /mnt/sg001/opt/intel/oneapi/mpi/2021.7.1/bin/mpiicpc
MPICC is /mnt/sg001/opt/intel/oneapi/mpi/2021.7.1/bin/mpiicc
MPIFC is /mnt/sg001/opt/intel/oneapi/mpi/2021.7.1/bin/mpiifort
Step intelmpi took 1.00 seconds.
==================== Finding MKL from system paths ====================
MKLROOT is found to be /mnt/sg001/opt/intel/oneapi/mkl/2022.2.1
libm is found in ld search path
libdl is found in ld search path
Step mkl took 0.00 seconds.

Food for thought: why was cmake picked up from the system in the build output above? Which other build option did I modify? The answer is revealed in the next part.

After editing and running toolchain_intel.sh as described, open build_abacus_intel.sh with vim:

#!/bin/bash
#SBATCH -J build
#SBATCH -N 1
#SBATCH -n 16
#SBATCH -o install.log
#SBATCH -e install.err
# JamesMisaka in 2025.03.09

# Build ABACUS by intel-toolchain

# module load mkl compiler mpi
# source path/to/setvars.sh

ABACUS_DIR=..
TOOL=$(pwd)
INSTALL_DIR=$TOOL/install
source $INSTALL_DIR/setup
cd $ABACUS_DIR
ABACUS_DIR=$(pwd)

BUILD_DIR=build_abacus_intel
rm -rf $BUILD_DIR

PREFIX=$ABACUS_DIR
ELPA=$INSTALL_DIR/elpa-2025.01.001/cpu
# ELPA=$INSTALL_DIR/elpa-2025.01.001/nvidia # for gpu-lcao
CEREAL=$INSTALL_DIR/cereal-master/include/cereal
LIBXC=$INSTALL_DIR/libxc-7.0.0
RAPIDJSON=$INSTALL_DIR/rapidjson-master/
# LIBTORCH=$INSTALL_DIR/libtorch-2.1.2/share/cmake/Torch
# LIBNPY=$INSTALL_DIR/libnpy-1.0.1/include
# LIBRI=$INSTALL_DIR/LibRI-0.2.1.0
# LIBCOMM=$INSTALL_DIR/LibComm-master
# DEEPMD=$HOME/apps/anaconda3/envs/deepmd # v3.0 might have problem

# Notice: if you are compiling with AMD-CPU or GPU-version ABACUS, then `icpc` and `mpiicpc` compilers are recommended
cmake -B $BUILD_DIR -DCMAKE_INSTALL_PREFIX=$PREFIX \
-DCMAKE_CXX_COMPILER=icpx \
-DMPI_CXX_COMPILER=mpiicpx \
-DMKLROOT=$MKLROOT \
-DELPA_DIR=$ELPA \
-DCEREAL_INCLUDE_DIR=$CEREAL \
-DLibxc_DIR=$LIBXC \
-DENABLE_LCAO=ON \
-DENABLE_LIBXC=ON \
-DUSE_OPENMP=ON \
-DUSE_ELPA=ON \
-DENABLE_RAPIDJSON=ON \
-DRapidJSON_DIR=$RAPIDJSON \
# -DENABLE_DEEPKS=1 \
# -DTorch_DIR=$LIBTORCH \
# -Dlibnpy_INCLUDE_DIR=$LIBNPY \
# -DENABLE_LIBRI=ON \
# -DLIBRI_DIR=$LIBRI \
# -DLIBCOMM_DIR=$LIBCOMM \
# -DDeePMD_DIR=$DEEPMD \
# -DUSE_CUDA=ON \
# -DENABLE_CUSOLVERMP=ON \
# -D CAL_CUSOLVERMP_PATH=/opt/nvidia/hpc_sdk/Linux_x86_64/2x.xx/math_libs/1x.x/targets/x86_64-linux/lib

cmake --build $BUILD_DIR -j `nproc`
cmake --install $BUILD_DIR 2>/dev/null

# if one want's to include deepmd, your system gcc version should be >= 11.3.0 for glibc requirements

# generate abacus_env.sh
cat << EOF > "${TOOL}/abacus_env.sh"
#!/bin/bash
source $INSTALL_DIR/setup
export PATH="${PREFIX}/bin":\${PATH}
EOF

# generate information
cat << EOF
========================== usage =========================
Done!
To use the installed ABACUS version
You need to source ${TOOL}/abacus_env.sh first !
"""
EOF

To change the compilers used for building ABACUS, modify lines 38 and 39 of the script as follows:

cmake -B $BUILD_DIR -DCMAKE_INSTALL_PREFIX=$PREFIX \
-DCMAKE_CXX_COMPILER=icpc \
-DMPI_CXX_COMPILER=mpiicpc \

After saving and quitting, simply run build_abacus_intel.sh. The compilation may emit a large number of ICC-related warnings; these merely remind you that the classic Intel compilers such as icpc will no longer be supported in newer Intel oneAPI releases, and can be safely ignored. With this approach you can build ABACUS on top of an older Intel oneAPI.

Some Intel oneAPI releases are transitional (e.g. Intel oneAPI 2023.2): they may provide both icpx and mpiicpc but lack mpiicpx. For this case there is another build option, --with-intel-mpi-clas=yes. Edit toolchain_intel.sh so that the install_abacus_toolchain.sh invocation reads:

./install_abacus_toolchain.sh \
--with-intel=system \
--math-mode=mkl \
--with-gcc=no \
--with-intelmpi=system \
--with-cmake=install \
--with-scalapack=no \
--with-libxc=install \
--with-fftw=no \
--with-elpa=install \
--with-cereal=install \
--with-rapidjson=install \
--with-libtorch=no \
--with-libnpy=no \
--with-libri=no \
--with-libcomm=no \
--with-intel-classic=no \
--with-intel-mpi-clas=yes \
| tee compile.log

This builds the ABACUS dependencies with icpx and mpiicpc. To build ABACUS itself, change the compilers in build_abacus_intel.sh in the same way as above:

cmake -B $BUILD_DIR -DCMAKE_INSTALL_PREFIX=$PREFIX \
-DCMAKE_CXX_COMPILER=icpx \
-DMPI_CXX_COMPILER=mpiicpc \

Well done! You have now learned how to customize the toolchain by editing toolchain_intel.sh and build_abacus_intel.sh, so that an Intel-based ABACUS can be built under different machine conditions. The same kind of customization is also used to bring in optional ABACUS feature plugins, such as the hybrid-functional library LibRI or the Torch dependency required for DeePKS support.

02 Enabling ABACUS feature plugins with the Intel Toolchain

In the preceding sections we inspected and edited the two core toolchain scripts, toolchain_*.sh and build_abacus_*.sh. By editing these two scripts, the Toolchain lets us conveniently build ABACUS for a variety of needs.

Among them, toolchain_*.sh does its work by invoking the main script install_abacus_toolchain.sh with a number of build options. These options and their effects fall into a few categories:

  • --with-PKG=[install, system, no, [abspath]]: for a given dependency, download and install it from the fetched package / read it from the system environment variables / do not use it / [advanced] locate it under an absolute path.
  • --math-mode=[mkl, openblas, aocl]: selects the math library the Toolchain will use. The default is openblas, but if the environment variable $MKLROOT is detected it switches to MKL (which is why the oneAPI environment must not be loaded when using the GNU Toolchain).
  • Other options of the form --with-option=[yes,no], used to select a particular variant of certain dependencies, such as --with-intel-classic=[yes,no] in the Intel Toolchain or --with-openmpi4=[yes,no] in the GNU Toolchain. More information is available in the README or by running install_abacus_toolchain.sh --help. As for the question left open above: the answer is simply that I used the --with-cmake=system option for that particular build. A usage sketch follows this list.
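
As an illustration of mixing these option styles, a minimal invocation might look like this (a sketch only; the absolute path is a placeholder, and options not listed keep their defaults):

./install_abacus_toolchain.sh \
--with-intel=system \
--math-mode=mkl \
--with-cmake=system \
--with-elpa=install \
--with-libxc=/path/to/your/own/libxc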

With this knowledge, we can easily build ABACUS feature plugins through the Toolchain. Taking the Intel Toolchain as the example, the following briefly shows how to build ABACUS with hybrid-functional and DeePKS support.

2.1 Building ABACUS with hybrid-functional support

In toolchain_intel.sh, set the following two options to install:

--with-libri=install \
--with-libcomm=install \

After adjusting any other options you need, run toolchain_intel.sh. The Toolchain will automatically download and install LibRI and LibComm and register them in the environment management file install/setup. When it finishes, edit build_abacus_intel.sh: uncomment the relevant lines (remove the # signs) and join them back into the continued command:

PREFIX=$ABACUS_DIR
ELPA=$INSTALL_DIR/elpa-2025.01.001/cpu
# ELPA=$INSTALL_DIR/elpa-2025.01.001/nvidia # for gpu-lcao
CEREAL=$INSTALL_DIR/cereal-master/include/cereal
LIBXC=$INSTALL_DIR/libxc-7.0.0
RAPIDJSON=$INSTALL_DIR/rapidjson-master/
# LIBTORCH=$INSTALL_DIR/libtorch-2.1.2/share/cmake/Torch
# LIBNPY=$INSTALL_DIR/libnpy-1.0.1/include
LIBRI=$INSTALL_DIR/LibRI-0.2.1.0
LIBCOMM=$INSTALL_DIR/LibComm-master
# DEEPMD=$HOME/apps/anaconda3/envs/deepmd # v3.0 might have problem

# Notice: if you are compiling with AMD-CPU or GPU-version ABACUS, then `icpc` and `mpiicpc` compilers are recommended
cmake -B $BUILD_DIR -DCMAKE_INSTALL_PREFIX=$PREFIX \
-DCMAKE_CXX_COMPILER=icpx \
-DMPI_CXX_COMPILER=mpiicpx \
-DMKLROOT=$MKLROOT \
-DELPA_DIR=$ELPA \
-DCEREAL_INCLUDE_DIR=$CEREAL \
-DLibxc_DIR=$LIBXC \
-DENABLE_LCAO=ON \
-DENABLE_LIBXC=ON \
-DUSE_OPENMP=ON \
-DUSE_ELPA=ON \
-DENABLE_RAPIDJSON=ON \
-DRapidJSON_DIR=$RAPIDJSON \
-DENABLE_LIBRI=ON \
-DLIBRI_DIR=$LIBRI \
-DLIBCOMM_DIR=$LIBCOMM \
# -DENABLE_DEEPKS=1 \
# -DTorch_DIR=$LIBTORCH \
# -Dlibnpy_INCLUDE_DIR=$LIBNPY \

Then run the build_abacus_intel.sh script. To stress it once more: nothing (no space or any other character) may follow the line-continuation character "\", and no comment lines may appear between the continued lines. Note: ABACUS binaries built with Intel oneAPI usually show stronger OpenMP parallel performance, so you can run with many OpenMP threads to accelerate the calculation; this tends to be particularly efficient for workloads involving the EXX part, such as hybrid-functional calculations.
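
As a sketch of such a hybrid MPI/OpenMP launch (the process and thread counts are purely illustrative and should be tuned to your node):

# e.g. on a 32-core node: 4 MPI processes x 8 OpenMP threads each
export OMP_NUM_THREADS=8
mpirun -np 4 abacus | tee running.log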

2.2 Building ABACUS with DeePKS support

In toolchain_intel.sh, set the following two options to install:

--with-libtorch=install \
--with-libnpy=install \

After toolchain_intel.sh finishes, uncomment and adjust the relevant lines in build_abacus_intel.sh, for example:

PREFIX=$ABACUS_DIR
ELPA=$INSTALL_DIR/elpa-2025.01.001/cpu
# ELPA=$INSTALL_DIR/elpa-2025.01.001/nvidia # for gpu-lcao
CEREAL=$INSTALL_DIR/cereal-master/include/cereal
LIBXC=$INSTALL_DIR/libxc-7.0.0
RAPIDJSON=$INSTALL_DIR/rapidjson-master/
LIBTORCH=$INSTALL_DIR/libtorch-2.1.2/share/cmake/Torch
LIBNPY=$INSTALL_DIR/libnpy-1.0.1/include
LIBRI=$INSTALL_DIR/LibRI-0.2.1.0
LIBCOMM=$INSTALL_DIR/LibComm-master
# DEEPMD=$HOME/apps/anaconda3/envs/deepmd # v3.0 might have problem

# Notice: if you are compiling with AMD-CPU or GPU-version ABACUS, then `icpc` and `mpiicpc` compilers are recommended
cmake -B $BUILD_DIR -DCMAKE_INSTALL_PREFIX=$PREFIX \
-DCMAKE_CXX_COMPILER=icpx \
-DMPI_CXX_COMPILER=mpiicpx \
-DMKLROOT=$MKLROOT \
-DELPA_DIR=$ELPA \
-DCEREAL_INCLUDE_DIR=$CEREAL \
-DLibxc_DIR=$LIBXC \
-DENABLE_LCAO=ON \
-DENABLE_LIBXC=ON \
-DUSE_OPENMP=ON \
-DUSE_ELPA=ON \
-DENABLE_RAPIDJSON=ON \
-DRapidJSON_DIR=$RAPIDJSON \
-DENABLE_LIBRI=ON \
-DLIBRI_DIR=$LIBRI \
-DLIBCOMM_DIR=$LIBCOMM \
-DENABLE_DEEPKS=1 \
-DTorch_DIR=$LIBTORCH \
-Dlibnpy_INCLUDE_DIR=$LIBNPY \

Run build_abacus_intel.sh modified along these lines, and you obtain an ABACUS build with both LibRI and DeePKS support.

Summary

This tutorial is part of the ABACUS user tutorial series. Taking the Intel Toolchain installation of ABACUS as the example, it explained how to use the Toolchain, including how to edit its key scripts to switch between different build dependencies or to enable specific feature plugins.

Regarding installation methods, later tutorials will further cover:

  • Installing ABACUS optimized for AMD CPUs with the AMD Toolchain
  • Building and installing the GPU version of ABACUS with the Toolchain
  • Quickly deploying ABACUS via Conda and Docker

Thanks for reading! If you have any questions, feel free to contact us!
