ATLAS Offline Software
python.trfValidation.athenaLogFileReport Class Reference

Log file report class for scanning logfiles with an athena flavour, i.e., lines of the form "SERVICE LOGLEVEL MESSAGE". More...

Inheritance diagram for python.trfValidation.athenaLogFileReport:
Collaboration diagram for python.trfValidation.athenaLogFileReport:

Public Member Functions

def __init__ (self, logfile, substepName=None, msgLimit=10, msgDetailLevel=stdLogLevels['ERROR'], ignoreList=None)
 Class constructor. More...
 
def python (self)
 Produce a python dictionary summary of the log file report for inclusion in the executor report. More...
 
def resetReport (self)
 
def knowledgeFileHandler (self, knowledgefile)
 Generally, a knowledge file consists of non-standard error/abnormal logging lines that are excluded from the standard log scan and can help diagnose job failures. More...
 
def scanLogFile (self, resetReport=False)
 
def dbMonitor (self)
 Return the data volume and time spent retrieving information from the database. More...
 
def worstError (self)
 Return the worst error found in the logfile (the first error of the most serious type). More...
 
def firstError (self, floor='ERROR')
 Return the first error found in the logfile above a certain loglevel. More...
 
def moreDetails (self, log, firstline, firstLineCount, knowledgeFile, offset=0)
 
def coreDumpSvcParser (self, log, lineGenerator, firstline, firstLineCount)
 Attempt to extract a core dump report from the current logfile. This function scans the log in two directions: 1) downwards, to extract information after CoreDumpSvc; and 2) upwards, to find abnormal lines. More...
 
def g494ExceptionParser (self, lineGenerator, firstline, firstLineCount)
 
def g4ExceptionParser (self, lineGenerator, firstline, firstLineCount, g4ExceptionLineDepth)
 
def pythonExceptionParser (self, log, lineGenerator, firstline, firstLineCount)
 
def badAllocExceptionParser (self, lineGenerator, firstline, firstLineCount)
 
def rootSysErrorParser (self, lineGenerator, firstline, firstLineCount)
 
def __str__ (self)
 
def scanLogFile (self)
 
def firstError (self)
 

Private Attributes

 _ignoreList
 
 _regExp
 
 _metaPat
 
 _metaData
 
 _substepName
 
 _msgLimit
 
 _levelCounter
 
 _errorDetails
 
 _dbbytes
 
 _dbtime
 
 _logfile
 
 _msgDetails
 
 _re
 

Detailed Description

Log file report class for scanning logfiles with an athena flavour, i.e., lines of the form "SERVICE LOGLEVEL MESSAGE".

Definition at line 211 of file trfValidation.py.

Constructor & Destructor Documentation

◆ __init__()

def python.trfValidation.athenaLogFileReport.__init__ (   self,
  logfile,
  substepName = None,
  msgLimit = 10,
  msgDetailLevel = stdLogLevels['ERROR'],
  ignoreList = None 
)

Class constructor.

Parameters
logfile: Logfile (or list of logfiles) to scan
substepName: Name of the substep executor that requested this log scan
msgLimit: The number of messages in each category on which a

Definition at line 216 of file trfValidation.py.

216  def __init__(self, logfile, substepName=None, msgLimit=10, msgDetailLevel=stdLogLevels['ERROR'], ignoreList=None):
217  if ignoreList:
218  self._ignoreList = ignoreList
219  else:
220  self._ignoreList = ignorePatterns()
221 
222 
227  self._regExp = re.compile(r'(?P<service>[^\s]+\w)(.*)\s+(?P<level>' + '|'.join(stdLogLevels) + r')\s+(?P<message>.*)')
228 
229  self._metaPat = re.compile(r"MetaData:\s+(.*?)\s*=\s*(.*)$")
230  self._metaData = {}
231  self._substepName = substepName
232  self._msgLimit = msgLimit
233 
234  self.resetReport()
235 
236  super(athenaLogFileReport, self).__init__(logfile, msgLimit, msgDetailLevel)
237 
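As a minimal sketch of the expression built in the constructor, the following reconstructs the service/level/message match with a reduced stdLogLevels mapping (the real mapping lives in the transform logging module, so its exact contents here are an assumption):

```python
import re

# Assumed, reduced stand-in for the real stdLogLevels dictionary
stdLogLevels = {'VERBOSE': 10, 'DEBUG': 20, 'INFO': 30, 'WARNING': 40,
                'ERROR': 50, 'FATAL': 60, 'CATASTROPHE': 70}

# Same construction as in __init__: service token, filler, level, message
regExp = re.compile(r'(?P<service>[^\s]+\w)(.*)\s+(?P<level>' +
                    '|'.join(stdLogLevels) + r')\s+(?P<message>.*)')

# A typical athena-flavour log line: "SERVICE LOGLEVEL MESSAGE"
m = regExp.match('AthenaEventLoopMgr   INFO   ===>>>  start of run 267599  <<<===')
```

Because the middle `(.*)` group is greedy, the level that gets captured is the last recognised level word on the line; the named groups then carry the service name and the message tail.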

Member Function Documentation

◆ __str__()

def python.trfValidation.athenaLogFileReport.__str__ (   self)

Reimplemented from python.trfValidation.logFileReport.

Definition at line 673 of file trfValidation.py.

673  def __str__(self):
674  return str(self._levelCounter) + str(self._errorDetails)
675 
676 

◆ badAllocExceptionParser()

def python.trfValidation.athenaLogFileReport.badAllocExceptionParser (   self,
  lineGenerator,
  firstline,
  firstLineCount 
)

Definition at line 661 of file trfValidation.py.

661  def badAllocExceptionParser(self, lineGenerator, firstline, firstLineCount):
662  badAllocExceptionReport = 'terminate after \'std::bad_alloc\'.'
663 
664  msg.debug('Identified bad_alloc - adding to error detail report')
665  self._levelCounter['CATASTROPHE'] += 1
666  self._errorDetails['CATASTROPHE'].append({'message': badAllocExceptionReport, 'firstLine': firstLineCount, 'count': 1})
667 

◆ coreDumpSvcParser()

def python.trfValidation.athenaLogFileReport.coreDumpSvcParser (   self,
  log,
  lineGenerator,
  firstline,
  firstLineCount 
)

Attempt to extract a core dump report from the current logfile. This function scans the log in two directions: 1) downwards, to extract information after CoreDumpSvc; and 2) upwards, to find abnormal lines.

Note
The current downwards scan simply consumes lines until a 'normal' line is seen. There is a slight problem here in that the line that ends the core dump will not get parsed. TODO: fix this (on the other hand, a core dump is usually the very last thing in the log, and fatal!)

Definition at line 507 of file trfValidation.py.

507  def coreDumpSvcParser(self, log, lineGenerator, firstline, firstLineCount):
508  _eventCounter = _run = _event = _currentAlgorithm = _functionLine = _currentFunction = None
509  coreDumpReport = 'Core dump from CoreDumpSvc'
510  # Number of lines to ignore above 'core dump' when looking for abnormal lines
511  offset = 1
512  coreDumpDetailsReport = {}
513 
514  for line, linecounter in lineGenerator:
515  m = self._regExp.match(line)
516  if m is None:
517  if 'Caught signal 11(Segmentation fault)' in line:
518  coreDumpReport = 'Segmentation fault'
519  if 'Event counter' in line:
520  _eventCounter = line
521 
522  #Lookup: 'EventID: [Run,Evt,Lumi,Time,BunchCross,DetMask] = [267599,7146597,1,1434123751:0,0,0x0,0x0,0x0]'
523  if 'EventID' in line:
524  match = re.findall(r'\[.*?\]', line)
525  if match and match.__len__() >= 2: # Assuming the line contains at least one key-value pair.
526  brackets = "[]"
527  commaDelimer = ','
528  keys = (match[0].strip(brackets)).split(commaDelimer)
529  values = (match[1].strip(brackets)).split(commaDelimer)
530 
531  if 'Run' in keys:
532  _run = 'Run: ' + values[keys.index('Run')]
533 
534  if 'Evt' in keys:
535  _event = 'Evt: ' + values[keys.index('Evt')]
536 
537  if 'Current algorithm' in line:
538  _currentAlgorithm = line
539  if '<signal handler called>' in line:
540  _functionLine = linecounter+1
541  if _functionLine and linecounter is _functionLine:
542  if ' in ' in line:
543  _currentFunction = 'Current Function: ' + line.split(' in ')[1].split()[0]
544  else:
545  _currentFunction = 'Current Function: ' + line.split()[1]
546  else:
547  # Can this be done - we want to push the line back into the generator to be
548  # reparsed in the normal way (might need to make the generator a class with the
549  # __exec__ method supported (to get the line), so that we can then add a
550  # pushback onto an internal FIFO stack
551  # lineGenerator.pushback(line)
552  break
553  _eventCounter = 'Event counter: unknown' if not _eventCounter else _eventCounter
554  _run = 'Run: unknown' if not _run else _run
555  _event = 'Evt: unknown' if not _event else _event
556  _currentAlgorithm = 'Current algorithm: unknown' if not _currentAlgorithm else _currentAlgorithm
557  _currentFunction = 'Current Function: unknown' if not _currentFunction else _currentFunction
558  coreDumpReport = '{0}: {1}; {2}; {3}; {4}; {5}'.format(coreDumpReport, _eventCounter, _run, _event, _currentAlgorithm, _currentFunction)
559 
560  coreDumpDetailsReport = self.moreDetails(log, firstline, firstLineCount, 'knowledgeFile.db', offset)
561  abnormalLines = coreDumpDetailsReport['abnormalLines']
562 
563  # concatenate an extract of first seen abnormal line to the core dump message
564  if 'message0' in abnormalLines.keys():
565  coreDumpReport += '; Abnormal line seen just before core dump: ' + abnormalLines['message0'][0:30] + '...[truncated] ' + '(see the jobReport)'
566 
567  # Core dumps are always fatal...
568  msg.debug('Identified core dump - adding to error detail report')
569  self._levelCounter['FATAL'] += 1
570  self._errorDetails['FATAL'].append({'moreDetails': coreDumpDetailsReport, 'message': coreDumpReport, 'firstLine': firstLineCount, 'count': 1})
571 
572 
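The EventID handling above can be sketched in isolation, using the example line quoted in the code's own comment:

```python
import re

# Example line taken from the comment in coreDumpSvcParser
line = ('EventID: [Run,Evt,Lumi,Time,BunchCross,DetMask] = '
        '[267599,7146597,1,1434123751:0,0,0x0,0x0,0x0]')

# Grab the two bracketed groups: the key list and the value list
match = re.findall(r'\[.*?\]', line)
keys = match[0].strip('[]').split(',')
values = match[1].strip('[]').split(',')

run = values[keys.index('Run')]   # '267599'
evt = values[keys.index('Evt')]   # '7146597'
```

Note that the value list has more comma-separated entries than the key list (the DetMask value itself appears to contain commas), so only the early positional fields like 'Run' and 'Evt' line up reliably, which is all the parser uses.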

◆ dbMonitor()

def python.trfValidation.athenaLogFileReport.dbMonitor (   self)

Return the data volume and time spent retrieving information from the database.

Definition at line 420 of file trfValidation.py.

420  def dbMonitor(self):
421  return {'bytes' : self._dbbytes, 'time' : self._dbtime} if self._dbbytes > 0 or self._dbtime > 0 else None
422 

◆ firstError() [1/2]

def python.trfValidation.logFileReport.firstError (   self)
inherited

Definition at line 201 of file trfValidation.py.

201  def firstError(self):
202  pass
203 

◆ firstError() [2/2]

def python.trfValidation.athenaLogFileReport.firstError (   self,
  floor = 'ERROR' 
)

Return the first error found in the logfile above a certain loglevel.

Definition at line 440 of file trfValidation.py.

440  def firstError(self, floor='ERROR'):
441  firstLine = firstError = None
442  firstLevel = stdLogLevels[floor]
443  firstName = floor
444  for lvl, count in self._levelCounter.items():
445  if (count > 0 and stdLogLevels.get(lvl, 0) >= stdLogLevels[floor] and
446  (firstError is None or self._errorDetails[lvl][0]['firstLine'] < firstLine)):
447  firstLine = self._errorDetails[lvl][0]['firstLine']
448  firstLevel = stdLogLevels[lvl]
449  firstName = lvl
450  firstError = self._errorDetails[lvl][0]
451 
452  return {'level': firstName, 'nLevel': firstLevel, 'firstError': firstError}
453 
454 

◆ g494ExceptionParser()

def python.trfValidation.athenaLogFileReport.g494ExceptionParser (   self,
  lineGenerator,
  firstline,
  firstLineCount 
)

Definition at line 573 of file trfValidation.py.

573  def g494ExceptionParser(self, lineGenerator, firstline, firstLineCount):
574  g4Report = firstline
575  g4lines = 1
576  if 'Aborting execution' not in g4Report:
577  for line, linecounter in lineGenerator:
578  g4Report += os.linesep + line
579  g4lines += 1
580  # Test for the closing string
581  if '*** ' in line:
582  break
583  if g4lines >= 25:
584  msg.warning('G4 exception closing string not found within {0} log lines of line {1}'.format(g4lines, firstLineCount))
585  break
586 
587  # G4 exceptions can be fatal or they can be warnings...
588  msg.debug('Identified G4 exception - adding to error detail report')
589  if "just a warning" in g4Report:
590  if self._levelCounter['WARNING'] <= self._msgLimit:
591  self._levelCounter['WARNING'] += 1
592  self._errorDetails['WARNING'].append({'message': g4Report, 'firstLine': firstLineCount, 'count': 1})
593  elif self._levelCounter['WARNING'] == self._msgLimit + 1:
594  msg.warning("Found message number {0} at level WARNING - this and further messages will be suppressed from the report".format(self._levelCounter['WARNING']))
595  else:
596  self._levelCounter['FATAL'] += 1
597  self._errorDetails['FATAL'].append({'message': g4Report, 'firstLine': firstLineCount, 'count': 1})
598 

◆ g4ExceptionParser()

def python.trfValidation.athenaLogFileReport.g4ExceptionParser (   self,
  lineGenerator,
  firstline,
  firstLineCount,
  g4ExceptionLineDepth 
)

Definition at line 599 of file trfValidation.py.

599  def g4ExceptionParser(self, lineGenerator, firstline, firstLineCount, g4ExceptionLineDepth):
600  g4Report = firstline
601  g4lines = 1
602  for line, linecounter in lineGenerator:
603  g4Report += os.linesep + line
604  g4lines += 1
605  # Test for the closing string
606  if 'G4Exception-END' in line:
607  break
608  if g4lines >= g4ExceptionLineDepth:
609  msg.warning('G4 exception closing string not found within {0} log lines of line {1}'.format(g4lines, firstLineCount))
610  break
611 
612  # G4 exceptions can be fatal or they can be warnings...
613  msg.debug('Identified G4 exception - adding to error detail report')
614  if "-------- WWWW -------" in g4Report:
615  if self._levelCounter['WARNING'] <= self._msgLimit:
616  self._levelCounter['WARNING'] += 1
617  self._errorDetails['WARNING'].append({'message': g4Report, 'firstLine': firstLineCount, 'count': 1})
618  elif self._levelCounter['WARNING'] == self._msgLimit + 1:
619  msg.warning("Found message number {0} at level WARNING - this and further messages will be suppressed from the report".format(self._levelCounter['WARNING']))
620  else:
621  self._levelCounter['FATAL'] += 1
622  self._errorDetails['FATAL'].append({'message': g4Report, 'firstLine': firstLineCount, 'count': 1})
623 
624 

◆ knowledgeFileHandler()

def python.trfValidation.athenaLogFileReport.knowledgeFileHandler (   self,
  knowledgefile 
)

Generally, a knowledge file consists of non-standard error/abnormal logging lines that are excluded from the standard log scan and can help diagnose job failures.

Definition at line 266 of file trfValidation.py.

266  def knowledgeFileHandler(self, knowledgefile):
267  # load abnormal/error line(s) from the knowledge file(s)
268  linesList = []
269  fullName = trfUtils.findFile(os.environ['DATAPATH'], knowledgefile)
270  if not fullName:
271  msg.warning('Knowledge file {0} could not be found in DATAPATH'.format(knowledgefile))
272  try:
273  with open(fullName) as knowledgeFileHandle:
274  msg.debug('Opened knowledge file {0} from here: {1}'.format(knowledgefile, fullName))
275 
276  for line in knowledgeFileHandle:
277  if line.startswith('#') or line == '' or line =='\n':
278  continue
279  line = line.rstrip('\n')
280  linesList.append(line)
281  except OSError as e:
282  msg.warning('Failed to open knowledge file {0}: {1}'.format(fullName, e))
283  return linesList
284 
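The knowledge-file format the handler expects can be sketched as follows; the file name and its contents here are invented for illustration (the real files are resolved via DATAPATH):

```python
import os
import tempfile

# Invented knowledge-file contents: one pattern per line,
# '#' comment lines and blank lines are skipped
content = (
    '# nonStandardErrors.db - known abnormal lines\n'
    '\n'
    'Caught signal 11(Segmentation fault)\n'
    'terminate called after throwing an instance\n'
)
path = os.path.join(tempfile.mkdtemp(), 'nonStandardErrors.db')
with open(path, 'w') as f:
    f.write(content)

# Same filtering as knowledgeFileHandler
linesList = []
with open(path) as fh:
    for line in fh:
        if line.startswith('#') or line == '' or line == '\n':
            continue
        linesList.append(line.rstrip('\n'))
```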

◆ moreDetails()

def python.trfValidation.athenaLogFileReport.moreDetails (   self,
  log,
  firstline,
  firstLineCount,
  knowledgeFile,
  offset = 0 
)

Definition at line 455 of file trfValidation.py.

455  def moreDetails(self, log, firstline, firstLineCount, knowledgeFile, offset=0):
456  # Look for "abnormal" and "last normal" line(s)
457  # Make a list of last e.g. 50 lines before core dump
458  abnormalLinesList = self.knowledgeFileHandler(knowledgeFile)
459  linesToBeScanned = 50
460  seenAbnormalLines = []
461  abnormalLinesReport = {}
462  lastNormalLineReport = {}
463 
464  linesList = []
465  myGen = trfUtils.lineByLine(log)
466  for line, linecounter in myGen:
467  if linecounter in range(firstLineCount - linesToBeScanned, firstLineCount-offset):
468  linesList.append([linecounter, line])
469  elif linecounter == firstLineCount:
470  break
471 
472  for linecounter, line in reversed(linesList):
473  if re.findall(r'|'.join(abnormalLinesList), line):
474  seenLine = False
475  for dic in seenAbnormalLines:
476  # count repetitions or similar (e.g. first 15 char) abnormal lines
477  if dic['message'] == line or dic['message'][0:15] == line[0:15]:
478  dic['count'] += 1
479  seenLine = True
480  break
481  if seenLine is False:
482  seenAbnormalLines.append({'message': line, 'firstLine': linecounter, 'count': 1})
483  else:
484  if line != '':
485  lastNormalLineReport = {'message': line, 'firstLine': linecounter, 'count': 1}
486  break
487  else:
488  continue
489 
490  # Write the list of abnormal lines into the abnormalLinesReport dictionary
491  # The keys of each abnormal line have a number suffix starting with 0
492  # e.g., the first abnormal line's keys are: {'message0', 'firstLine0', 'count0'}
493 
494  for a in range(len(seenAbnormalLines)):
495  abnormalLinesReport.update({'message{0}'.format(a): seenAbnormalLines[a]['message'], 'firstLine{0}'.format(a): seenAbnormalLines[a]['firstLine'],
496  'count{0}'.format(a): seenAbnormalLines[a]['count']})
497 
498  return {'abnormalLines': abnormalLinesReport, 'lastNormalLine': lastNormalLineReport}
499 
500 
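The grouping of repeated abnormal lines can be sketched on its own; the (linecounter, line) pairs here are invented:

```python
# Two lines that share their first 15 characters count as one entry;
# anything else starts a new entry
lines = [
    (101, 'free(): invalid pointer 0x0000abcd'),
    (102, 'free(): invalid pointer 0x0000ef12'),
    (105, 'unrelated abnormal output'),
]

seenAbnormalLines = []
for linecounter, line in lines:
    seenLine = False
    for dic in seenAbnormalLines:
        # count repetitions or similar (first 15 chars) abnormal lines
        if dic['message'] == line or dic['message'][0:15] == line[0:15]:
            dic['count'] += 1
            seenLine = True
            break
    if not seenLine:
        seenAbnormalLines.append({'message': line, 'firstLine': linecounter, 'count': 1})
```

Here the two `free()` lines collapse into a single entry with `count` 2, while the unrelated line gets its own entry; the real method additionally walks the window in reverse from the crash point.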

◆ python()

def python.trfValidation.athenaLogFileReport.python (   self)

Produce a python dictionary summary of the log file report for inclusion in the executor report.

Definition at line 241 of file trfValidation.py.

241  def python(self):
242  errorDict = {'countSummary': {}, 'details': {}}
243  for level, count in self._levelCounter.items():
244  errorDict['countSummary'][level] = count
245  if self._levelCounter[level] > 0 and len(self._errorDetails[level]) > 0:
246  errorDict['details'][level] = []
247  for error in self._errorDetails[level]:
248  errorDict['details'][level].append(error)
249  return errorDict
250 
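The shape of the returned summary can be sketched with invented counter and detail structures mirroring _levelCounter and _errorDetails:

```python
# Invented inputs shaped like the instance attributes
levelCounter = {'INFO': 42, 'ERROR': 1, 'FATAL': 0}
errorDetails = {
    'INFO': [],
    'ERROR': [{'message': 'GeoModelSvc ERROR no geometry', 'firstLine': 120, 'count': 1}],
    'FATAL': [],
}

# Same construction as python(): every level is counted,
# but details appear only for populated levels
errorDict = {'countSummary': {}, 'details': {}}
for level, count in levelCounter.items():
    errorDict['countSummary'][level] = count
    if count > 0 and len(errorDetails[level]) > 0:
        errorDict['details'][level] = list(errorDetails[level])
```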

◆ pythonExceptionParser()

def python.trfValidation.athenaLogFileReport.pythonExceptionParser (   self,
  log,
  lineGenerator,
  firstline,
  firstLineCount 
)

Definition at line 625 of file trfValidation.py.

625  def pythonExceptionParser(self, log, lineGenerator, firstline, firstLineCount):
626  pythonExceptionReport = ""
627  lastLine = firstline
628  lastLine2 = firstline
629  pythonErrorLine = firstLineCount
630  pyLines = 1
631  for line, linecounter in lineGenerator:
632  if 'Py:Athena' in line and 'INFO leaving with code' in line:
633  if len(lastLine)> 0:
634  pythonExceptionReport = lastLine
635  pythonErrorLine = linecounter-1
636  else: # Sometimes there is a blank line after the exception
637  pythonExceptionReport = lastLine2
638  pythonErrorLine = linecounter-2
639  break
640  if pyLines >= 25:
641  msg.warning('Could not identify python exception correctly scanning {0} log lines after line {1}'.format(pyLines, firstLineCount))
642  pythonExceptionReport = "Unable to identify specific exception"
643  pythonErrorLine = firstLineCount
644  break
645  lastLine2 = lastLine
646  lastLine = line
647  pyLines += 1
648 
649  pythonExceptionDetailsReport = self.moreDetails(log, firstline, firstLineCount, 'knowledgeFile.db')
650  abnormalLines = pythonExceptionDetailsReport['abnormalLines']
651 
652  # concatenate an extract of first seen abnormal line to pythonExceptionReport
653  if 'message0' in abnormalLines.keys():
654  pythonExceptionReport += '; Abnormal line seen just before python exception: ' + abnormalLines['message0'][0:30] + '...[truncated] ' + '(see the jobReport)'
655 
656  msg.debug('Identified python exception - adding to error detail report')
657  self._levelCounter['FATAL'] += 1
658  self._errorDetails['FATAL'].append({'moreDetails': pythonExceptionDetailsReport, 'message': pythonExceptionReport, 'firstLine': pythonErrorLine, 'count': 1})
659 
660 

◆ resetReport()

def python.trfValidation.athenaLogFileReport.resetReport (   self)

Reimplemented from python.trfValidation.logFileReport.

Definition at line 251 of file trfValidation.py.

251  def resetReport(self):
252  self._levelCounter = {}
253  for level in list(stdLogLevels) + ['UNKNOWN', 'IGNORED']:
254  self._levelCounter[level] = 0
255 
256  self._errorDetails = {}
257  for level in self._levelCounter:
258  self._errorDetails[level] = []
259  # Format:
260  # List of dicts {'message': errMsg, 'firstLine': lineNo, 'count': N}
261  self._dbbytes = 0
262  self._dbtime = 0.0
263 

◆ rootSysErrorParser()

def python.trfValidation.athenaLogFileReport.rootSysErrorParser (   self,
  lineGenerator,
  firstline,
  firstLineCount 
)

Definition at line 668 of file trfValidation.py.

668  def rootSysErrorParser(self, lineGenerator, firstline, firstLineCount):
669  msg.debug('Identified ROOT IO problem - adding to error detail report')
670  self._levelCounter['FATAL'] += 1
671  self._errorDetails['FATAL'].append({'message': firstline, 'firstLine': firstLineCount, 'count': 1})
672 

◆ scanLogFile() [1/2]

def python.trfValidation.logFileReport.scanLogFile (   self)
inherited

Definition at line 195 of file trfValidation.py.

195  def scanLogFile(self):
196  pass
197 

◆ scanLogFile() [2/2]

def python.trfValidation.athenaLogFileReport.scanLogFile (   self,
  resetReport = False 
)

Definition at line 285 of file trfValidation.py.

285  def scanLogFile(self, resetReport=False):
286  nonStandardErrorsList = self.knowledgeFileHandler('nonStandardErrors.db')
287 
288  if resetReport:
289  self.resetReport()
290 
291  for log in self._logfile:
292  msg.debug('Now scanning logfile {0}'.format(log))
293  seenNonStandardError = ''
294  # N.B. Use the generator so that lines can be grabbed by subroutines, e.g., core dump svc reporter
295  try:
296  myGen = trfUtils.lineByLine(log, substepName=self._substepName)
297  except IOError as e:
298  msg.error('Failed to open transform logfile {0}: {1:s}'.format(log, e))
299  # Return this as a small report
300  self._levelCounter['ERROR'] = 1
301  self._errorDetails['ERROR'] = {'message': str(e), 'firstLine': 0, 'count': 1}
302  return
303  for line, lineCounter in myGen:
304  m = self._metaPat.search(line)
305  if m is not None:
306  key, value = m.groups()
307  self._metaData[key] = value
308 
309  m = self._regExp.match(line)
310  if m is None:
311  # We didn't manage to get a recognised standard line from the file
312  # But we can check for certain other interesting things, like core dumps
313  if 'Core dump from CoreDumpSvc' in line:
314  msg.warning('Detected CoreDumpSvc report - activating core dump svc grabber')
315  self.coreDumpSvcParser(log, myGen, line, lineCounter)
316  continue
317  # Add the G4 exception parsers
318  if 'G4Exception-START' in line:
319  msg.warning('Detected G4 exception report - activating G4 exception grabber')
320  self.g4ExceptionParser(myGen, line, lineCounter, 40)
321  continue
322  if '*** G4Exception' in line:
323  msg.warning('Detected G4 9.4 exception report - activating G4 exception grabber')
324  self.g494ExceptionParser(myGen, line, lineCounter)
325  continue
326  # Add the python exception parser
327  if 'Shortened traceback (most recent user call last)' in line:
328  msg.warning('Detected python exception - activating python exception grabber')
329  self.pythonExceptionParser(log, myGen, line, lineCounter)
330  continue
331  # Add parser for missed bad_alloc
332  if 'terminate called after throwing an instance of \'std::bad_alloc\'' in line:
333  msg.warning('Detected bad_alloc!')
334  self.badAllocExceptionParser(myGen, line, lineCounter)
335  continue
336  # Parser for ROOT reporting a stale file handle (see ATLASG-448)
337  # Amendment: Generalize the search (see ATLASRECTS-7121)
338  if 'Error in <TFile::ReadBuffer>' in line:
339  self.rootSysErrorParser(myGen, line, lineCounter)
340  continue
341 
342  if 'Error in <TFile::WriteBuffer>' in line:
343  self.rootSysErrorParser(myGen, line, lineCounter)
344  continue
345  # Check if the line is among the non-standard logging errors from the knowledge file
346  if any(line in l for l in nonStandardErrorsList):
347  seenNonStandardError = line
348  continue
349 
350  msg.debug('Non-standard line in %s: %s', log, line)
351  self._levelCounter['UNKNOWN'] += 1
352  continue
353 
354  # Line was matched successfully
355  fields = {}
356  for matchKey in ('service', 'level', 'message'):
357  fields[matchKey] = m.group(matchKey)
358  msg.debug('Line parsed as: {0}'.format(fields))
359 
360  # Check this is not in our ignore list
361  ignoreFlag = False
362  for ignorePat in self._ignoreList.structuredPatterns:
363  serviceMatch = ignorePat['service'].match(fields['service'])
364  levelMatch = (ignorePat['level'] == "" or ignorePat['level'] == fields['level'])
365  messageMatch = ignorePat['message'].match(fields['message'])
366  if serviceMatch and levelMatch and messageMatch:
367  msg.info('Error message "{0}" was ignored at line {1} (structured match)'.format(line, lineCounter))
368  ignoreFlag = True
369  break
370  if ignoreFlag is False:
371  for searchPat in self._ignoreList.searchPatterns:
372  if searchPat.search(line):
373  msg.info('Error message "{0}" was ignored at line {1} (search match)'.format(line, lineCounter))
374  ignoreFlag = True
375  break
376  if ignoreFlag:
377  # Got an ignore - message this to a special IGNORED error
378  fields['level'] = 'IGNORED'
379  else:
380  # Some special handling for specific errors (maybe generalise this if
381  # there end up being too many special cases)
382  # Upgrade bad_alloc to CATASTROPHE to allow for better automated handling of
383  # jobs that run out of memory
384  if 'std::bad_alloc' in fields['message']:
385  fields['level'] = 'CATASTROPHE'
386 
387  # concatenate the seen non-standard logging error to the FATAL
388  if fields['level'] == 'FATAL':
389  if seenNonStandardError:
390  line += '; ' + seenNonStandardError
391 
392  # Count this error
393  self._levelCounter[fields['level']] += 1
394 
395  # Record some error details
396  # N.B. We record 'IGNORED' errors as these really should be flagged for fixing
397  if fields['level'] == 'IGNORED' or stdLogLevels[fields['level']] >= self._msgDetails:
398  if self._levelCounter[fields['level']] <= self._msgLimit:
399  detailsHandled = False
400  for seenError in self._errorDetails[fields['level']]:
401  if seenError['message'] == line:
402  seenError['count'] += 1
403  detailsHandled = True
404  break
405  if detailsHandled is False:
406  self._errorDetails[fields['level']].append({'message': line, 'firstLine': lineCounter, 'count': 1})
407  elif self._levelCounter[fields['level']] == self._msgLimit + 1:
408  msg.warning("Found message number {0} at level {1} - this and further messages will be suppressed from the report".format(self._levelCounter[fields['level']], fields['level']))
409  else:
410  # Overcounted
411  pass
412  if 'Total payload read from COOL' in fields['message']:
413  msg.debug("Found COOL payload information at line {0}".format(line))
414  a = re.match(r'(\D+)(?P<bytes>\d+)(\D+)(?P<time>\d+[.]?\d*)(\D+)', fields['message'])
415  self._dbbytes += int(a.group('bytes'))
416  self._dbtime += float(a.group('time'))
417 
418 
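The COOL payload bookkeeping at the end of the scan can be sketched on a sample message (the exact wording of the real COOL message is an assumption here; the regex is the one used above):

```python
import re

# Assumed example of a 'Total payload read from COOL' message body
message = 'Total payload read from COOL: 3456 bytes in ((      0.02 ))s'

# First digit run -> bytes, second (possibly fractional) run -> seconds
a = re.match(r'(\D+)(?P<bytes>\d+)(\D+)(?P<time>\d+[.]?\d*)(\D+)', message)
dbbytes = int(a.group('bytes'))
dbtime = float(a.group('time'))
```

These two values accumulate into _dbbytes and _dbtime across the scan, which is what dbMonitor() later reports.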

◆ worstError()

def python.trfValidation.athenaLogFileReport.worstError (   self)

Return the worst error found in the logfile (the first error of the most serious type).

Reimplemented from python.trfValidation.logFileReport.

Definition at line 424 of file trfValidation.py.

424  def worstError(self):
425  worst = stdLogLevels['DEBUG']
426  worstName = 'DEBUG'
427  for lvl, count in self._levelCounter.items():
428  if count > 0 and stdLogLevels.get(lvl, 0) > worst:
429  worstName = lvl
430  worst = stdLogLevels[lvl]
431  if len(self._errorDetails[worstName]) > 0:
432  firstError = self._errorDetails[worstName][0]
433  else:
434  firstError = None
435 
436  return {'level': worstName, 'nLevel': worst, 'firstError': firstError}
437 
438 
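The selection logic can be sketched standalone, with assumed stdLogLevels values and invented counters: the highest populated level wins, and its first recorded error is returned.

```python
# Assumed numeric severities and invented per-level state
stdLogLevels = {'DEBUG': 20, 'INFO': 30, 'WARNING': 40, 'ERROR': 50, 'FATAL': 60}
levelCounter = {'INFO': 12, 'WARNING': 3, 'ERROR': 1, 'FATAL': 0}
errorDetails = {
    'INFO': [], 'WARNING': [],
    'ERROR': [{'message': 'bad strip', 'firstLine': 77, 'count': 1}],
    'FATAL': [],
}

# Same walk as worstError(): keep the most severe level with a nonzero count
worst, worstName = stdLogLevels['DEBUG'], 'DEBUG'
for lvl, count in levelCounter.items():
    if count > 0 and stdLogLevels.get(lvl, 0) > worst:
        worstName, worst = lvl, stdLogLevels[lvl]
firstError = errorDetails[worstName][0] if errorDetails[worstName] else None
result = {'level': worstName, 'nLevel': worst, 'firstError': firstError}
```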

Member Data Documentation

◆ _dbbytes

python.trfValidation.athenaLogFileReport._dbbytes
private

Definition at line 261 of file trfValidation.py.

◆ _dbtime

python.trfValidation.athenaLogFileReport._dbtime
private

Definition at line 262 of file trfValidation.py.

◆ _errorDetails

python.trfValidation.athenaLogFileReport._errorDetails
private

Definition at line 256 of file trfValidation.py.

◆ _ignoreList

python.trfValidation.athenaLogFileReport._ignoreList
private

Definition at line 218 of file trfValidation.py.

◆ _levelCounter

python.trfValidation.athenaLogFileReport._levelCounter
private

Definition at line 252 of file trfValidation.py.

◆ _logfile

python.trfValidation.logFileReport._logfile
privateinherited

Definition at line 181 of file trfValidation.py.

◆ _metaData

python.trfValidation.athenaLogFileReport._metaData
private

Definition at line 230 of file trfValidation.py.

◆ _metaPat

python.trfValidation.athenaLogFileReport._metaPat
private

Definition at line 229 of file trfValidation.py.

◆ _msgDetails

python.trfValidation.logFileReport._msgDetails
privateinherited

Definition at line 186 of file trfValidation.py.

◆ _msgLimit

python.trfValidation.athenaLogFileReport._msgLimit
private

Definition at line 232 of file trfValidation.py.

◆ _re

python.trfValidation.logFileReport._re
privateinherited

Definition at line 187 of file trfValidation.py.

◆ _regExp

python.trfValidation.athenaLogFileReport._regExp
private
Note
This is the regular expression used to match athena logfile lines. The match first strips off any HH:MM:SS prefix the transform has added, then takes the next group of non-whitespace characters as the service name, then matches one of the known log levels, and finally ignores any remaining whitespace and takes the rest of the line as the message.

Definition at line 227 of file trfValidation.py.

◆ _substepName

python.trfValidation.athenaLogFileReport._substepName
private

Definition at line 231 of file trfValidation.py.


The documentation for this class was generated from the following file:
trfValidation.py