DBS3DualUpload

sfoulkes edited this page Apr 29, 2013 · 2 revisions

The changes to DBSBuffer and DBS3Upload to support the dual upload were added in 0.9.36: https://github.com/dmwm/WMCore/commit/e6988496fd2be7410c32eaa8a14a22055e1686b7#src/python/WMComponent/DBS3Buffer/MySQL/Create.py

To enable the dual upload I deploy a separate WMAgent alongside the production instance. I've been using version 0.9.51 with the following patch: https://github.com/dmwm/WMCore/commit/c0b92bd6768dcc9b2ad532bae6d98f92a8e2380e#src/python/WMComponent/DBS3Buffer

Versions of WMCore newer than 0.9.51 have schema changes to DBSBuffer to support per-workflow block sizes, and I didn't want to have to update the schemas of the older deployed agents. I did a deployment but didn't use the WMAgent manage script to create the configuration files: the new agent only runs a single component and needs to use the database from the other agent. If you let the manage script set everything up in the new deployment, you risk blowing away the production database.

I created a config file for the DBS3 Uploader:

from WMCore.Configuration import Configuration
config = Configuration()
config.section_('Agent')
config.Agent.useMsgService = False
config.Agent.hostName = 'vocms201.cern.ch'
config.Agent.teamName = 'mc'
config.Agent.useHeartbeat = True
config.Agent.contact = '[email protected]'
config.Agent.useTrigger = False
config.Agent.agentName = 'WMAgentCommissioning'
config.Agent.agentNumber = 1
config.section_('General')
config.General.workDir = '/data/srv/dbs3upload/current/install/wmagent'
config.section_('CoreDatabase')
config.CoreDatabase.socket = '/storage/local/data1/cmsdataops/srv/wmagent/v0.9.33/install/mysql/logs/mysql.sock'
config.CoreDatabase.connectUrl = 'mysql://username:password@localhost/wmagent'
config.component_('DBSUpload')
config.DBSUpload.workerThreads = 1
config.DBSUpload.componentDir = '/data/srv/dbs3upload/current/install/wmagent/DBSUpload'
config.DBSUpload.logLevel = 'INFO'
config.DBSUpload.namespace = 'WMComponent.DBS3Buffer.DBSUpload'
config.DBSUpload.pollInterval = 100
config.DBSUpload.DBSBlockMaxFiles = 500
config.DBSUpload.DBSBlockMaxTime = 66400
config.DBSUpload.DBSBlockMaxSize = 5000000000000
config.DBSUpload.dbsUrl = "https://dbs3-testbed.cern.ch/dbs/prod/global/DBSWriter"
config.DBSUpload.dbs3UploadOnly = True

Note that the database connection information is different for each machine and will have to be changed; I copied it from the existing agent's config. Once this is in place you can start the single component and it will upload to DBS3. The DBS2 uploader in the production agent sorts files into blocks and uploads them to DBS2. The DBS3 uploader, when configured with the "dbs3UploadOnly" flag, does not create any blocks and only looks for blocks that are already in DBS2 to upload to DBS3. It uses a "status3" column in the dbsbuffer_block table to keep track of which blocks are in DBS3.
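The bookkeeping described above can be pictured with a small self-contained sketch. Only the table name dbsbuffer_block and the status3 column are taken from the text; the other column names and the status values used here are illustrative assumptions, not the real DBSBuffer schema.

```python
import sqlite3

# Illustrative stand-in for the agent's dbsbuffer_block table.
# Only "dbsbuffer_block" and "status3" are named in the text above;
# the other columns and status values are assumptions for the sketch.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE dbsbuffer_block
                (blockname TEXT, status TEXT, status3 TEXT)""")
conn.executemany(
    "INSERT INTO dbsbuffer_block VALUES (?, ?, ?)",
    [("/A/B/RAW#1", "Closed", "Pending"),   # in DBS2, not yet in DBS3
     ("/A/B/RAW#2", "Closed", "Uploaded"),  # already in both
     ("/A/B/RAW#3", "Open",   "Pending")])  # still being filled

# With dbs3UploadOnly set, the uploader creates no blocks itself and
# only picks up blocks the DBS2 side has already finished with.
rows = conn.execute(
    """SELECT blockname FROM dbsbuffer_block
       WHERE status = 'Closed' AND status3 != 'Uploaded'""").fetchall()
print(rows)  # → [('/A/B/RAW#1',)]
```

The point is simply that the two status columns are independent, so DBS2 injection can run ahead while the DBS3-only component catches up.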

The easy way

If the agents are reasonably current (>0.9.36) then the DBS3Upload component can simply be added to the config. Use the config snippet posted above.
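In that case only the component section needs appending to the existing agent config, since the Agent, General, and CoreDatabase sections are already there. A sketch, reusing the parameter values from the config above; the component name DBS3Upload is an assumption here (chosen to avoid clashing with the production DBS2 DBSUpload component), and paths and the dbsUrl would need per-machine adjustment:

```python
# Appended to the existing agent configuration object "config".
# Component name is an assumption; parameter values mirror the
# standalone config above.
config.component_('DBS3Upload')
config.DBS3Upload.workerThreads = 1
config.DBS3Upload.componentDir = config.General.workDir + '/DBS3Upload'
config.DBS3Upload.logLevel = 'INFO'
config.DBS3Upload.namespace = 'WMComponent.DBS3Buffer.DBSUpload'
config.DBS3Upload.pollInterval = 100
config.DBS3Upload.DBSBlockMaxFiles = 500
config.DBS3Upload.DBSBlockMaxTime = 66400
config.DBS3Upload.DBSBlockMaxSize = 5000000000000
config.DBS3Upload.dbsUrl = "https://dbs3-testbed.cern.ch/dbs/prod/global/DBSWriter"
config.DBS3Upload.dbs3UploadOnly = True
```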
