CI: sccache s3 batch mode #3521

Draft · wants to merge 3 commits into stable-23.10
75 changes: 65 additions & 10 deletions .drone.jsonnet
@@ -685,7 +685,7 @@ local Pipeline(branch, platform, event, arch='amd64', server='10.6-enterprise')
platform: { arch: arch },
// [if arch == 'arm64' then 'node']: { arch: 'arm64' },
clone: { depth: 10 },
steps: [
steps: std.filter(function(step) step != null, [
{
name: 'submodules',
image: 'alpine/git',
@@ -717,9 +717,41 @@ local Pipeline(branch, platform, event, arch='amd64', server='10.6-enterprise')
'cp -r /drone/src /mdb/' + builddir + '/storage/columnstore/columnstore',
],
},
{
// Get a copy of the compile cache from the nightly build on S3
name: 'get-local-sccache',
depends_on: ['submodules'],
image: 'amazon/aws-cli',
volumes: [pipeline._volumes.mdb],
environment: {
AWS_ACCESS_KEY_ID: {
from_secret: 'aws_access_key_id',
},
AWS_SECRET_ACCESS_KEY: {
from_secret: 'aws_secret_access_key',
},
},
commands: [
'yum install -y tar zstd',
'mkdir -p /mdb/sccache',
'echo "Attempting to download SCCache archive from S3..."',
'if aws s3 cp s3://cspkg/nightly-sccache/stable-23.10-' + server + '-' + arch + '-' + std.strReplace(platform, ":", "-") + '/sccache.tar.zst /tmp/sccache.tar.zst; then',
' echo "SCCache archive found. Extracting..."',
' tar -I zstd -xf /tmp/sccache.tar.zst -C /mdb/sccache',

// TODO remove when all correct caches are uploaded
' ls /mdb/sccache/; if [ -f /mdb/sccache/mdb ]; then mv /mdb/sccache/mdb/sccache /mdb/sccache_inner && rm -rf /mdb/sccache/ && mv /mdb/sccache_inner /mdb/sccache; fi',

' echo "SCCache successfully downloaded and extracted."',
' ls /mdb/sccache',
'else',
' echo "SCCache archive not found on S3. Proceeding without it."',
'fi',
],
},
{
name: 'build',
depends_on: ['clone-mdb'],
depends_on: ['clone-mdb', 'get-local-sccache'],
image: img,
volumes: [pipeline._volumes.mdb],
environment: {
@@ -733,16 +765,15 @@ local Pipeline(branch, platform, event, arch='amd64', server='10.6-enterprise')
AWS_SECRET_ACCESS_KEY: {
from_secret: 'aws_secret_access_key',
},
SCCACHE_BUCKET: 'cs-sccache',
SCCACHE_REGION: 'us-east-1',
SCCACHE_S3_USE_SSL: 'true',
SCCACHE_S3_KEY_PREFIX: result + branch + server + arch + '${DRONE_PULL_REQUEST}',
//SCCACHE_ERROR_LOG: '/tmp/sccache_log.txt',
//SCCACHE_LOG: 'debug',
SCCACHE_DIR: '/mdb/sccache', // local cache for sccache
SCCACHE_CACHE_SIZE: '20G',
// SCCACHE_ERROR_LOG: '/tmp/sccache_log.txt',
// SCCACHE_LOG: 'sccache_debug.txt',
},
commands: [
'export CLICOLOR_FORCE=1',
get_sccache,
'mkdir -p /mdb/sccache',
'mkdir /mdb/' + builddir + '/' + result,

'bash -c "set -o pipefail && bash /mdb/' + builddir + '/storage/columnstore/columnstore/build/bootstrap_mcs.sh ' +
@@ -752,7 +783,7 @@ local Pipeline(branch, platform, event, arch='amd64', server='10.6-enterprise')
'--server-version ' + server + ' | ' +
'/mdb/' + builddir + '/storage/columnstore/columnstore/build/ansi2txt.sh ' +
'/mdb/' + builddir + '/' + result + '/build.log"' ,
'sccache --show-stats',
'sccache --show-adv-stats',

// move engine and cmapi packages to one dir and make a repo
if (pkg_format == 'rpm') then "mv -v -t ./" + result + "/ /mdb/" + builddir + "/*.rpm /drone/src/cmapi/" + result + "/*.rpm && createrepo ./" + result
@@ -762,6 +793,30 @@ local Pipeline(branch, platform, event, arch='amd64', server='10.6-enterprise')
'ls -la /mdb/' + builddir + '/storage/columnstore/columnstore/storage-manager',
],
},
// (if (event == 'cron') then { // Publish cache only in the nightly builds)
(if (1 == 1) then {
name: 'publish-sccache',
depends_on: ['build'],
image: 'amazon/aws-cli',
volumes: [pipeline._volumes.mdb],
environment: {
AWS_ACCESS_KEY_ID: {
from_secret: 'aws_access_key_id',
},
AWS_SECRET_ACCESS_KEY: {
from_secret: 'aws_secret_access_key',
},
},
commands: [
'yum install -y tar zstd',
'du -hs /mdb/sccache',
'cd /mdb/sccache',
'time tar -I pzstd -cf ../sccache.tar.zst .',
'cd ..',
'time aws s3 cp sccache.tar.zst s3://cspkg/nightly-sccache/stable-23.10-' + server + '-' + arch + '-' + std.strReplace(platform, ':', '-') + '/sccache.tar.zst',
'echo "Nightly build cache uploaded to: s3://cspkg/nightly-sccache/stable-23.10-' + server + '-' + arch + '-' + std.strReplace(platform, ':', '-') + '/sccache.tar.zst"',
],
} else null),
{
name: 'unittests',
depends_on: ['build'],
@@ -804,7 +859,7 @@ local Pipeline(branch, platform, event, arch='amd64', server='10.6-enterprise')
'ls -l /drone/src/%s | grep columnstore' % result,
],
},
] +
]) +
[pipeline.cmapipython] + [pipeline.cmapibuild] +
[pipeline.publish('cmapi build')] +
[pipeline.publish()] +
5 changes: 5 additions & 0 deletions build/bootstrap_mcs.sh
@@ -317,6 +317,11 @@ construct_cmake_flags() {

if [[ $SCCACHE = true ]]; then
warn "Use sccache"
# If sccache works well, the build becomes an IO-bound task instead of a CPU-bound one.
# This happens because most of the time is spent hashing the input files,
# so we can run more jobs in parallel.
CPUS=$((CPUS * 3))
message "Adjusted the number of CPUs for parallel build to $CPUS (multiplied by 3 to reduce IO wait)"
MDB_CMAKE_FLAGS="${MDB_CMAKE_FLAGS} -DCMAKE_C_COMPILER_LAUNCHER=sccache -DCMAKE_CXX_COMPILER_LAUNCHER=sccache"
fi

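
Taken together, the three new pipeline steps form a restore → build → publish cycle around a local sccache directory. A minimal standalone sketch of that cycle is below; SERVER, ARCH and PLATFORM stand in for the values the Jsonnet interpolates, and a plain cmake/make invocation stands in for bootstrap_mcs.sh, so this is illustrative rather than the literal CI commands.

#!/usr/bin/env bash
# Sketch of the nightly sccache cache round-trip (assumed variable names).
set -euo pipefail

CACHE_DIR=/mdb/sccache
CACHE_URL="s3://cspkg/nightly-sccache/stable-23.10-${SERVER}-${ARCH}-${PLATFORM}/sccache.tar.zst"

# 1. Restore: pull the previous nightly cache if it exists; a miss is not fatal.
mkdir -p "$CACHE_DIR"
if aws s3 cp "$CACHE_URL" /tmp/sccache.tar.zst; then
    tar -I zstd -xf /tmp/sccache.tar.zst -C "$CACHE_DIR"
fi

# 2. Build: point sccache at the restored local cache directory.
export SCCACHE_DIR="$CACHE_DIR"
export SCCACHE_CACHE_SIZE=20G
cmake -DCMAKE_C_COMPILER_LAUNCHER=sccache -DCMAKE_CXX_COMPILER_LAUNCHER=sccache ..
make -j"$(nproc)"
sccache --show-adv-stats

# 3. Publish: re-archive the warmed cache and upload it for the next run.
tar -C "$CACHE_DIR" -I pzstd -cf /tmp/sccache.tar.zst .
aws s3 cp /tmp/sccache.tar.zst "$CACHE_URL"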