ATLAS Offline Software
python.trfValidation.athenaLogFileReport Class Reference

Log file report class suitable for scanning logfiles with an athena flavour, i.e., lines of the form "SERVICE LOGLEVEL MESSAGE". More...

Inheritance diagram for python.trfValidation.athenaLogFileReport:
Collaboration diagram for python.trfValidation.athenaLogFileReport:

Public Member Functions

def __init__ (self, logfile, substepName=None, msgLimit=10, msgDetailLevel=stdLogLevels['ERROR'], ignoreList=None)
 Class constructor. More...
 
def python (self)
 Produce a python dictionary summary of the log file report for inclusion in the executor report. More...
 
def resetReport (self)
 
def knowledgeFileHandler (self, knowledgefile)
 Generally, a knowledge file consists of non-standard error/abnormal logging lines that are excluded from the standard log scan and can help diagnose job failures. More...
 
def scanLogFile (self, resetReport=False)
 
def dbMonitor (self)
 Return data volume and time spent retrieving information from the database. More...
 
def worstError (self)
 Return the worst error found in the logfile (first error of the most serious type) More...
 
def firstError (self, floor='ERROR')
 Return the first error found in the logfile above a certain loglevel. More...
 
def moreDetails (self, log, firstline, firstLineCount, knowledgeFile, offset=0)
 
def coreDumpSvcParser (self, log, lineGenerator, firstline, firstLineCount)
 Attempt to extract a core dump report from the current logfile. This function scans logs in two different directions: 1) downwards, to extract information after CoreDumpSvc; and 2) upwards, to find abnormal lines. More...
 
def g494ExceptionParser (self, lineGenerator, firstline, firstLineCount)
 
def g4ExceptionParser (self, lineGenerator, firstline, firstLineCount, g4ExceptionLineDepth)
 
def pythonExceptionParser (self, log, lineGenerator, firstline, firstLineCount)
 
def badAllocExceptionParser (self, lineGenerator, firstline, firstLineCount)
 
def rootSysErrorParser (self, lineGenerator, firstline, firstLineCount)
 
def __str__ (self)
 
def scanLogFile (self)
 
def firstError (self)
 

Private Attributes

 _ignoreList
 
 _regExp
 
 _metaPat
 
 _metaData
 
 _substepName
 
 _msgLimit
 
 _levelCounter
 
 _errorDetails
 
 _dbbytes
 
 _dbtime
 
 _logfile
 
 _msgDetails
 
 _re
 

Detailed Description

Log file report class suitable for scanning logfiles with an athena flavour, i.e., lines of the form "SERVICE LOGLEVEL MESSAGE".

Definition at line 211 of file trfValidation.py.

Constructor & Destructor Documentation

◆ __init__()

def python.trfValidation.athenaLogFileReport.__init__ (   self,
  logfile,
  substepName = None,
  msgLimit = 10,
  msgDetailLevel = stdLogLevels['ERROR'],
  ignoreList = None 
)

Class constructor.

Parameters
logfile: Logfile (or list of logfiles) to scan
substepName: Name of the substep executor that has requested this log scan
msgLimit: The number of messages in each category on which a

Definition at line 216 of file trfValidation.py.

216  def __init__(self, logfile, substepName=None, msgLimit=10, msgDetailLevel=stdLogLevels['ERROR'], ignoreList=None):
217  if ignoreList:
218  self._ignoreList = ignoreList
219  else:
220  self._ignoreList = ignorePatterns()
221 
222 
227  self._regExp = re.compile(r'(?P<service>[^\s]+\w)(.*)\s+(?P<level>' + '|'.join(stdLogLevels) + r')\s+(?P<message>.*)')
228 
229  self._metaPat = re.compile(r"MetaData:\s+(.*?)\s*=\s*(.*)$")
230  self._metaData = {}
231  self._substepName = substepName
232  self._msgLimit = msgLimit
233 
234  self.resetReport()
235 
236  super(athenaLogFileReport, self).__init__(logfile, msgLimit, msgDetailLevel)
237 
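The two regular expressions compiled in the constructor can be exercised on their own. The sketch below uses a reduced stand-in for `stdLogLevels` (the real dictionary in `trfLogger` has more levels) and made-up log lines; the pattern strings are copied from the listing above.

```python
import re

# Reduced stand-in for trfLogger.stdLogLevels (assumption: the real dict has more levels)
stdLogLevels = {'DEBUG': 2, 'INFO': 3, 'WARNING': 4, 'ERROR': 5, 'FATAL': 6}

# Same patterns as compiled in __init__ above
regExp = re.compile(r'(?P<service>[^\s]+\w)(.*)\s+(?P<level>' + '|'.join(stdLogLevels) + r')\s+(?P<message>.*)')
metaPat = re.compile(r"MetaData:\s+(.*?)\s*=\s*(.*)$")

# A typical athena-flavour line: SERVICE LOGLEVEL MESSAGE
m = regExp.match('AthenaEventLoopMgr   INFO   start processing event #1')
print(m.group('service'), m.group('level'), m.group('message'))

# A metadata line as matched by _metaPat (sample value is made up)
meta = metaPat.search('MetaData: geometryVersion = ATLAS-R2-2016-01-00-01')
print(meta.groups())
```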

Member Function Documentation

◆ __str__()

def python.trfValidation.athenaLogFileReport.__str__ (   self)

Reimplemented from python.trfValidation.logFileReport.

Definition at line 674 of file trfValidation.py.

674  def __str__(self):
675  return str(self._levelCounter) + str(self._errorDetails)
676 
677 

◆ badAllocExceptionParser()

def python.trfValidation.athenaLogFileReport.badAllocExceptionParser (   self,
  lineGenerator,
  firstline,
  firstLineCount 
)

Definition at line 662 of file trfValidation.py.

662  def badAllocExceptionParser(self, lineGenerator, firstline, firstLineCount):
663  badAllocExceptionReport = 'terminate after \'std::bad_alloc\'.'
664 
665  msg.debug('Identified bad_alloc - adding to error detail report')
666  self._levelCounter['CATASTROPHE'] += 1
667  self._errorDetails['CATASTROPHE'].append({'message': badAllocExceptionReport, 'firstLine': firstLineCount, 'count': 1})
668 

◆ coreDumpSvcParser()

def python.trfValidation.athenaLogFileReport.coreDumpSvcParser (   self,
  log,
  lineGenerator,
  firstline,
  firstLineCount 
)

Attempt to extract a core dump report from the current logfile. This function scans logs in two different directions: 1) downwards, to extract information after CoreDumpSvc; and 2) upwards, to find abnormal lines.

Note
: The current downwards scan just eats lines until a 'normal' line is seen. There is a slight problem here in that the end-of-core-dump trigger line will not get parsed. TODO: fix this (on the other hand, a core dump is usually the very last thing and fatal!)

Definition at line 508 of file trfValidation.py.

508  def coreDumpSvcParser(self, log, lineGenerator, firstline, firstLineCount):
509  _eventCounter = _run = _event = _currentAlgorithm = _functionLine = _currentFunction = None
510  coreDumpReport = 'Core dump from CoreDumpSvc'
511  # Number of lines to ignore above 'core dump' when looking for abnormal lines
512  offset = 1
513  coreDumpDetailsReport = {}
514 
515  for line, linecounter in lineGenerator:
516  m = self._regExp.match(line)
517  if m is None:
518  if 'Caught signal 11(Segmentation fault)' in line:
519  coreDumpReport = 'Segmentation fault'
520  if 'Event counter' in line:
521  _eventCounter = line
522 
523  #Lookup: 'EventID: [Run,Evt,Lumi,Time,BunchCross,DetMask] = [267599,7146597,1,1434123751:0,0,0x0,0x0,0x0]'
524  if 'EventID' in line:
525  match = re.findall(r'\[.*?\]', line)
526  if match and match.__len__() >= 2: # Assuming the line contains at-least one key-value pair.
527  brackets = "[]"
528  commaDelimer = ','
529  keys = (match[0].strip(brackets)).split(commaDelimer)
530  values = (match[1].strip(brackets)).split(commaDelimer)
531 
532  if 'Run' in keys:
533  _run = 'Run: ' + values[keys.index('Run')]
534 
535  if 'Evt' in keys:
536  _event = 'Evt: ' + values[keys.index('Evt')]
537 
538  if 'Current algorithm' in line:
539  _currentAlgorithm = line
540  if '<signal handler called>' in line:
541  _functionLine = linecounter+1
542  if _functionLine and linecounter is _functionLine:
543  if ' in ' in line:
544  _currentFunction = 'Current Function: ' + line.split(' in ')[1].split()[0]
545  else:
546  _currentFunction = 'Current Function: ' + line.split()[1]
547  else:
548  # Can this be done - we want to push the line back into the generator to be
549  # reparsed in the normal way (might need to make the generator a class with the
550  # __exec__ method supported (to get the line), so that we can then add a
551  # pushback onto an internal FIFO stack
552  # lineGenerator.pushback(line)
553  break
554  _eventCounter = 'Event counter: unknown' if not _eventCounter else _eventCounter
555  _run = 'Run: unknown' if not _run else _run
556  _event = 'Evt: unknown' if not _event else _event
557  _currentAlgorithm = 'Current algorithm: unknown' if not _currentAlgorithm else _currentAlgorithm
558  _currentFunction = 'Current Function: unknown' if not _currentFunction else _currentFunction
559  coreDumpReport = '{0}: {1}; {2}; {3}; {4}; {5}'.format(coreDumpReport, _eventCounter, _run, _event, _currentAlgorithm, _currentFunction)
560 
561  coreDumpDetailsReport = self.moreDetails(log, firstline, firstLineCount, 'knowledgeFile.db', offset)
562  abnormalLines = coreDumpDetailsReport['abnormalLines']
563 
564  # concatenate an extract of first seen abnormal line to the core dump message
565  if 'message0' in abnormalLines.keys():
566  coreDumpReport += '; Abnormal line seen just before core dump: ' + abnormalLines['message0'][0:30] + '...[truncated] ' + '(see the jobReport)'
567 
568  # Core dumps are always fatal...
569  msg.debug('Identified core dump - adding to error detail report')
570  self._levelCounter['FATAL'] += 1
571  self._errorDetails['FATAL'].append({'moreDetails': coreDumpDetailsReport, 'message': coreDumpReport, 'firstLine': firstLineCount, 'count': 1})
572 
573 
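The EventID bracket parsing in the listing above (lines 524-536) can be reproduced standalone. The sample line comes from the lookup comment in the listing itself; `run` and `evt` are illustrative local names.

```python
import re

# Sample line taken from the lookup comment in the listing above
line = 'EventID: [Run,Evt,Lumi,Time,BunchCross,DetMask] = [267599,7146597,1,1434123751:0,0,0x0,0x0,0x0]'

# Grab the two bracketed groups: a key list and a value list
match = re.findall(r'\[.*?\]', line)
if match and len(match) >= 2:
    keys = match[0].strip('[]').split(',')
    values = match[1].strip('[]').split(',')
    run = values[keys.index('Run')]    # '267599'
    evt = values[keys.index('Evt')]    # '7146597'
    print(run, evt)
```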

◆ dbMonitor()

def python.trfValidation.athenaLogFileReport.dbMonitor (   self)

Return data volume and time spent retrieving information from the database.

Definition at line 421 of file trfValidation.py.

421  def dbMonitor(self):
422  return {'bytes' : self._dbbytes, 'time' : self._dbtime} if self._dbbytes > 0 or self._dbtime > 0 else None
423 

◆ firstError() [1/2]

def python.trfValidation.logFileReport.firstError (   self)
inherited

Definition at line 201 of file trfValidation.py.

201  def firstError(self):
202  pass
203 

◆ firstError() [2/2]

def python.trfValidation.athenaLogFileReport.firstError (   self,
  floor = 'ERROR' 
)

Return the first error found in the logfile above a certain loglevel.

Definition at line 441 of file trfValidation.py.

441  def firstError(self, floor='ERROR'):
442  firstLine = firstError = None
443  firstLevel = stdLogLevels[floor]
444  firstName = floor
445  for lvl, count in self._levelCounter.items():
446  if (count > 0 and stdLogLevels.get(lvl, 0) >= stdLogLevels[floor] and
447  (firstError is None or self._errorDetails[lvl][0]['firstLine'] < firstLine)):
448  firstLine = self._errorDetails[lvl][0]['firstLine']
449  firstLevel = stdLogLevels[lvl]
450  firstName = lvl
451  firstError = self._errorDetails[lvl][0]
452 
453  return {'level': firstName, 'nLevel': firstLevel, 'firstError': firstError}
454 
455 
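The selection logic above (earliest `firstLine` among levels at or above the floor) can be run on toy data. Everything below is made up for illustration; the control flow mirrors the listing.

```python
# Toy reimplementation of the first-error selection above, with made-up counters
stdLogLevels = {'DEBUG': 2, 'INFO': 3, 'WARNING': 4, 'ERROR': 5, 'FATAL': 6}
levelCounter = {'WARNING': 3, 'ERROR': 1, 'FATAL': 1}
errorDetails = {
    'WARNING': [{'message': 'slow DB', 'firstLine': 10, 'count': 3}],
    'ERROR': [{'message': 'bad calib', 'firstLine': 120, 'count': 1}],
    'FATAL': [{'message': 'segfault', 'firstLine': 40, 'count': 1}],
}

def firstError(floor='ERROR'):
    firstLine = firstErr = None
    firstLevel, firstName = stdLogLevels[floor], floor
    for lvl, count in levelCounter.items():
        if (count > 0 and stdLogLevels.get(lvl, 0) >= stdLogLevels[floor] and
                (firstErr is None or errorDetails[lvl][0]['firstLine'] < firstLine)):
            firstLine = errorDetails[lvl][0]['firstLine']
            firstLevel, firstName = stdLogLevels[lvl], lvl
            firstErr = errorDetails[lvl][0]
    return {'level': firstName, 'nLevel': firstLevel, 'firstError': firstErr}

# FATAL at line 40 precedes ERROR at line 120; WARNING is below the floor
print(firstError())
```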

◆ g494ExceptionParser()

def python.trfValidation.athenaLogFileReport.g494ExceptionParser (   self,
  lineGenerator,
  firstline,
  firstLineCount 
)

Definition at line 574 of file trfValidation.py.

574  def g494ExceptionParser(self, lineGenerator, firstline, firstLineCount):
575  g4Report = firstline
576  g4lines = 1
577  if 'Aborting execution' not in g4Report:
578  for line, linecounter in lineGenerator:
579  g4Report += os.linesep + line
580  g4lines += 1
581  # Test for the closing string
582  if '*** ' in line:
583  break
584  if g4lines >= 25:
585  msg.warning('G4 exception closing string not found within {0} log lines of line {1}'.format(g4lines, firstLineCount))
586  break
587 
588  # G4 exceptions can be fatal or they can be warnings...
589  msg.debug('Identified G4 exception - adding to error detail report')
590  if "just a warning" in g4Report:
591  if self._levelCounter['WARNING'] <= self._msgLimit:
592  self._levelCounter['WARNING'] += 1
593  self._errorDetails['WARNING'].append({'message': g4Report, 'firstLine': firstLineCount, 'count': 1})
594  elif self._levelCounter['WARNING'] == self._msgLimit + 1:
595  msg.warning("Found message number {0} at level WARNING - this and further messages will be supressed from the report".format(self._levelCounter['WARNING']))
596  else:
597  self._levelCounter['FATAL'] += 1
598  self._errorDetails['FATAL'].append({'message': g4Report, 'firstLine': firstLineCount, 'count': 1})
599 

◆ g4ExceptionParser()

def python.trfValidation.athenaLogFileReport.g4ExceptionParser (   self,
  lineGenerator,
  firstline,
  firstLineCount,
  g4ExceptionLineDepth 
)

Definition at line 600 of file trfValidation.py.

600  def g4ExceptionParser(self, lineGenerator, firstline, firstLineCount, g4ExceptionLineDepth):
601  g4Report = firstline
602  g4lines = 1
603  for line, linecounter in lineGenerator:
604  g4Report += os.linesep + line
605  g4lines += 1
606  # Test for the closing string
607  if 'G4Exception-END' in line:
608  break
609  if g4lines >= g4ExceptionLineDepth:
610  msg.warning('G4 exception closing string not found within {0} log lines of line {1}'.format(g4lines, firstLineCount))
611  break
612 
613  # G4 exceptions can be fatal or they can be warnings...
614  msg.debug('Identified G4 exception - adding to error detail report')
615  if "-------- WWWW -------" in g4Report:
616  if self._levelCounter['WARNING'] <= self._msgLimit:
617  self._levelCounter['WARNING'] += 1
618  self._errorDetails['WARNING'].append({'message': g4Report, 'firstLine': firstLineCount, 'count': 1})
619  elif self._levelCounter['WARNING'] == self._msgLimit + 1:
620  msg.warning("Found message number {0} at level WARNING - this and further messages will be supressed from the report".format(self._levelCounter['WARNING']))
621  else:
622  self._levelCounter['FATAL'] += 1
623  self._errorDetails['FATAL'].append({'message': g4Report, 'firstLine': firstLineCount, 'count': 1})
624 
625 
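Both G4 parsers share the same pattern: accumulate lines into a report until a closing marker appears or a line-depth limit is hit. A generic sketch of that pattern, with made-up log lines:

```python
import os

# Toy version of the depth-limited block capture used by the G4 parsers above
def captureBlock(lineGenerator, firstline, endMarker, maxDepth):
    report, nLines = firstline, 1
    for line, linecounter in lineGenerator:
        report += os.linesep + line
        nLines += 1
        if endMarker in line:
            break               # found the closing string
        if nLines >= maxDepth:
            break               # closing string not found within maxDepth lines
    return report

lines = ['*** G4Exception : something bad', 'detail 1', 'detail 2',
         'G4Exception-END', 'next message']
gen = ((l, i) for i, l in enumerate(lines[1:], start=2))
block = captureBlock(gen, lines[0], 'G4Exception-END', 40)
print(block.count(os.linesep) + 1)   # 4 lines captured, 'next message' untouched
```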

◆ knowledgeFileHandler()

def python.trfValidation.athenaLogFileReport.knowledgeFileHandler (   self,
  knowledgefile 
)

Generally, a knowledge file consists of non-standard error/abnormal logging lines that are excluded from the standard log scan and can help diagnose job failures.

Definition at line 266 of file trfValidation.py.

266  def knowledgeFileHandler(self, knowledgefile):
267  # load abnormal/error line(s) from the knowledge file(s)
268  linesList = []
269  fullName = trfUtils.findFile(os.environ['DATAPATH'], knowledgefile)
270  if not fullName:
271  msg.warning('Knowledge file {0} could not be found in DATAPATH'.format(knowledgefile))
272  else:
273  try:
274  with open(fullName) as knowledgeFileHandle:
275  msg.debug('Opened knowledge file {0} from here: {1}'.format(knowledgefile, fullName))
276 
277  for line in knowledgeFileHandle:
278  if line.startswith('#') or line == '' or line =='\n':
279  continue
280  line = line.rstrip('\n')
281  linesList.append(line)
282  except OSError as e:
283  msg.warning('Failed to open knowledge file {0}: {1}'.format(fullName, e))
284  return linesList
285 
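The file-reading core of the handler (skip comments and blank lines, strip newlines) can be tried with a throwaway file. The DATAPATH lookup and the `.db` contents below are stand-ins:

```python
import os
import tempfile

# Minimal stand-in for knowledgeFileHandler, without the DATAPATH lookup
def loadKnowledgeFile(path):
    linesList = []
    with open(path) as fh:
        for line in fh:
            if line.startswith('#') or line == '' or line == '\n':
                continue
            linesList.append(line.rstrip('\n'))
    return linesList

# Made-up knowledge file content: one comment, one blank, two patterns
with tempfile.NamedTemporaryFile('w', suffix='.db', delete=False) as f:
    f.write('# comment\n\nCaught signal\nterminate called\n')
    name = f.name
result = loadKnowledgeFile(name)
print(result)   # ['Caught signal', 'terminate called']
os.unlink(name)
```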

◆ moreDetails()

def python.trfValidation.athenaLogFileReport.moreDetails (   self,
  log,
  firstline,
  firstLineCount,
  knowledgeFile,
  offset = 0 
)

Definition at line 456 of file trfValidation.py.

456  def moreDetails(self, log, firstline, firstLineCount, knowledgeFile, offset=0):
457  # Look for "abnormal" and "last normal" line(s)
458  # Make a list of last e.g. 50 lines before core dump
459  abnormalLinesList = self.knowledgeFileHandler(knowledgeFile)
460  linesToBeScanned = 50
461  seenAbnormalLines = []
462  abnormalLinesReport = {}
463  lastNormalLineReport = {}
464 
465  linesList = []
466  myGen = trfUtils.lineByLine(log)
467  for line, linecounter in myGen:
468  if linecounter in range(firstLineCount - linesToBeScanned, firstLineCount-offset):
469  linesList.append([linecounter, line])
470  elif linecounter == firstLineCount:
471  break
472 
473  for linecounter, line in reversed(linesList):
474  if re.findall(r'|'.join(abnormalLinesList), line):
475  seenLine = False
476  for dic in seenAbnormalLines:
477  # count repetitions or similar (e.g. first 15 char) abnormal lines
478  if dic['message'] == line or dic['message'][0:15] == line[0:15]:
479  dic['count'] += 1
480  seenLine = True
481  break
482  if seenLine is False:
483  seenAbnormalLines.append({'message': line, 'firstLine': linecounter, 'count': 1})
484  else:
485  if line != '':
486  lastNormalLineReport = {'message': line, 'firstLine': linecounter, 'count': 1}
487  break
488  else:
489  continue
490 
491  # Write the list of abnormal lines into the abnormalLinesReport dictionary
492  # The keys of each abnormal line have a number suffix starting with 0
493  # e.g., first abnormal line's keys are :{'mesage0', 'firstLine0', 'count0'}
494 
495  for a in range(len(seenAbnormalLines)):
496  abnormalLinesReport.update({'message{0}'.format(a): seenAbnormalLines[a]['message'], 'firstLine{0}'.format(a): seenAbnormalLines[a]['firstLine'],
497  'count{0}'.format(a): seenAbnormalLines[a]['count']})
498 
499  return {'abnormalLines': abnormalLinesReport, 'lastNormalLine': lastNormalLineReport}
500 
501 
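The abnormal-line de-duplication above (a repeated line, or one sharing its first 15 characters with a seen line, only bumps a count) can be checked in isolation. The scanned lines are invented; a `for`/`else` replaces the `seenLine` flag of the listing:

```python
# Toy run of the abnormal-line de-duplication in moreDetails above
seenAbnormalLines = []
scanned = [
    (101, 'Caught signal 11 in worker 1'),
    (102, 'Caught signal 11 in worker 2'),   # same first 15 chars as above
    (103, 'terminate called unexpectedly'),
]
for linecounter, line in scanned:
    for dic in seenAbnormalLines:
        if dic['message'] == line or dic['message'][0:15] == line[0:15]:
            dic['count'] += 1   # repetition (or near-repetition): just count it
            break
    else:
        seenAbnormalLines.append({'message': line, 'firstLine': linecounter, 'count': 1})

print(len(seenAbnormalLines))   # 2 distinct entries, first with count 2
```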

◆ python()

def python.trfValidation.athenaLogFileReport.python (   self)

Produce a python dictionary summary of the log file report for inclusion in the executor report.

Definition at line 241 of file trfValidation.py.

241  def python(self):
242  errorDict = {'countSummary': {}, 'details': {}}
243  for level, count in self._levelCounter.items():
244  errorDict['countSummary'][level] = count
245  if self._levelCounter[level] > 0 and len(self._errorDetails[level]) > 0:
246  errorDict['details'][level] = []
247  for error in self._errorDetails[level]:
248  errorDict['details'][level].append(error)
249  return errorDict
250 

◆ pythonExceptionParser()

def python.trfValidation.athenaLogFileReport.pythonExceptionParser (   self,
  log,
  lineGenerator,
  firstline,
  firstLineCount 
)

Definition at line 626 of file trfValidation.py.

626  def pythonExceptionParser(self, log, lineGenerator, firstline, firstLineCount):
627  pythonExceptionReport = ""
628  lastLine = firstline
629  lastLine2 = firstline
630  pythonErrorLine = firstLineCount
631  pyLines = 1
632  for line, linecounter in lineGenerator:
633  if 'Py:Athena' in line and 'INFO leaving with code' in line:
634  if len(lastLine)> 0:
635  pythonExceptionReport = lastLine
636  pythonErrorLine = linecounter-1
637  else: # Sometimes there is a blank line after the exception
638  pythonExceptionReport = lastLine2
639  pythonErrorLine = linecounter-2
640  break
641  if pyLines >= 25:
642  msg.warning('Could not identify python exception correctly scanning {0} log lines after line {1}'.format(pyLines, firstLineCount))
643  pythonExceptionReport = "Unable to identify specific exception"
644  pythonErrorLine = firstLineCount
645  break
646  lastLine2 = lastLine
647  lastLine = line
648  pyLines += 1
649 
650  pythonExceptionDetailsReport = self.moreDetails(log, firstline, firstLineCount, 'knowledgeFile.db')
651  abnormalLines = pythonExceptionDetailsReport['abnormalLines']
652 
653  # concatenate an extract of first seen abnormal line to pythonExceptionReport
654  if 'message0' in abnormalLines.keys():
655  pythonExceptionReport += '; Abnormal line seen just before python exception: ' + abnormalLines['message0'][0:30] + '...[truncated] ' + '(see the jobReport)'
656 
657  msg.debug('Identified python exception - adding to error detail report')
658  self._levelCounter['FATAL'] += 1
659  self._errorDetails['FATAL'].append({'moreDetails': pythonExceptionDetailsReport, 'message': pythonExceptionReport, 'firstLine': pythonErrorLine, 'count': 1})
660 
661 

◆ resetReport()

def python.trfValidation.athenaLogFileReport.resetReport (   self)

Reimplemented from python.trfValidation.logFileReport.

Definition at line 251 of file trfValidation.py.

251  def resetReport(self):
252  self._levelCounter = {}
253  for level in list(stdLogLevels) + ['UNKNOWN', 'IGNORED']:
254  self._levelCounter[level] = 0
255 
256  self._errorDetails = {}
257  for level in self._levelCounter:
258  self._errorDetails[level] = []
259  # Format:
260  # List of dicts {'message': errMsg, 'firstLine': lineNo, 'count': N}
261  self._dbbytes = 0
262  self._dbtime = 0.0
263 

◆ rootSysErrorParser()

def python.trfValidation.athenaLogFileReport.rootSysErrorParser (   self,
  lineGenerator,
  firstline,
  firstLineCount 
)

Definition at line 669 of file trfValidation.py.

669  def rootSysErrorParser(self, lineGenerator, firstline, firstLineCount):
670  msg.debug('Identified ROOT IO problem - adding to error detail report')
671  self._levelCounter['FATAL'] += 1
672  self._errorDetails['FATAL'].append({'message': firstline, 'firstLine': firstLineCount, 'count': 1})
673 

◆ scanLogFile() [1/2]

def python.trfValidation.logFileReport.scanLogFile (   self)
inherited

Definition at line 195 of file trfValidation.py.

195  def scanLogFile(self):
196  pass
197 

◆ scanLogFile() [2/2]

def python.trfValidation.athenaLogFileReport.scanLogFile (   self,
  resetReport = False 
)

Definition at line 286 of file trfValidation.py.

286  def scanLogFile(self, resetReport=False):
287  nonStandardErrorsList = self.knowledgeFileHandler('nonStandardErrors.db')
288 
289  if resetReport:
290  self.resetReport()
291 
292  for log in self._logfile:
293  msg.debug('Now scanning logfile {0}'.format(log))
294  seenNonStandardError = ''
295  # N.B. Use the generator so that lines can be grabbed by subroutines, e.g., core dump svc reporter
296  try:
297  myGen = trfUtils.lineByLine(log, substepName=self._substepName)
298  except IOError as e:
299  msg.error('Failed to open transform logfile {0}: {1:s}'.format(log, e))
300  # Return this as a small report
301  self._levelCounter['ERROR'] = 1
302  self._errorDetails['ERROR'] = {'message': str(e), 'firstLine': 0, 'count': 1}
303  return
304  for line, lineCounter in myGen:
305  m = self._metaPat.search(line)
306  if m is not None:
307  key, value = m.groups()
308  self._metaData[key] = value
309 
310  m = self._regExp.match(line)
311  if m is None:
312  # We didn't manage to get a recognised standard line from the file
313  # But we can check for certain other interesting things, like core dumps
314  if 'Core dump from CoreDumpSvc' in line:
315  msg.warning('Detected CoreDumpSvc report - activating core dump svc grabber')
316  self.coreDumpSvcParser(log, myGen, line, lineCounter)
317  continue
318  # Add the G4 exceptipon parsers
319  if 'G4Exception-START' in line:
320  msg.warning('Detected G4 exception report - activating G4 exception grabber')
321  self.g4ExceptionParser(myGen, line, lineCounter, 40)
322  continue
323  if '*** G4Exception' in line:
324  msg.warning('Detected G4 9.4 exception report - activating G4 exception grabber')
325  self.g494ExceptionParser(myGen, line, lineCounter)
326  continue
327  # Add the python exception parser
328  if 'Shortened traceback (most recent user call last)' in line:
329  msg.warning('Detected python exception - activating python exception grabber')
330  self.pythonExceptionParser(log, myGen, line, lineCounter)
331  continue
332  # Add parser for missed bad_alloc
333  if 'terminate called after throwing an instance of \'std::bad_alloc\'' in line:
334  msg.warning('Detected bad_alloc!')
335  self.badAllocExceptionParser(myGen, line, lineCounter)
336  continue
337  # Parser for ROOT reporting a stale file handle (see ATLASG-448)
338  # Amendment: Generalize the search (see ATLASRECTS-7121)
339  if 'Error in <TFile::ReadBuffer>' in line:
340  self.rootSysErrorParser(myGen, line, lineCounter)
341  continue
342 
343  if 'Error in <TFile::WriteBuffer>' in line:
344  self.rootSysErrorParser(myGen, line, lineCounter)
345  continue
346  # Check if the line is among the non-standard logging errors from the knowledge file
347  if any(line in l for l in nonStandardErrorsList):
348  seenNonStandardError = line
349  continue
350 
351  msg.debug('Non-standard line in %s: %s', log, line)
352  self._levelCounter['UNKNOWN'] += 1
353  continue
354 
355  # Line was matched successfully
356  fields = {}
357  for matchKey in ('service', 'level', 'message'):
358  fields[matchKey] = m.group(matchKey)
359  msg.debug('Line parsed as: {0}'.format(fields))
360 
361  # Check this is not in our ignore list
362  ignoreFlag = False
363  for ignorePat in self._ignoreList.structuredPatterns:
364  serviceMatch = ignorePat['service'].match(fields['service'])
365  levelMatch = (ignorePat['level'] == "" or ignorePat['level'] == fields['level'])
366  messageMatch = ignorePat['message'].match(fields['message'])
367  if serviceMatch and levelMatch and messageMatch:
368  msg.info('Error message "{0}" was ignored at line {1} (structured match)'.format(line, lineCounter))
369  ignoreFlag = True
370  break
371  if ignoreFlag is False:
372  for searchPat in self._ignoreList.searchPatterns:
373  if searchPat.search(line):
374  msg.info('Error message "{0}" was ignored at line {1} (search match)'.format(line, lineCounter))
375  ignoreFlag = True
376  break
377  if ignoreFlag:
378  # Got an ignore - message this to a special IGNORED error
379  fields['level'] = 'IGNORED'
380  else:
381  # Some special handling for specific errors (maybe generalise this if
382  # there end up being too many special cases)
383  # Upgrade bad_alloc to CATASTROPHE to allow for better automated handling of
384  # jobs that run out of memory
385  if 'std::bad_alloc' in fields['message']:
386  fields['level'] = 'CATASTROPHE'
387 
388  # concatenate the seen non-standard logging error to the FATAL
389  if fields['level'] == 'FATAL':
390  if seenNonStandardError:
391  line += '; ' + seenNonStandardError
392 
393  # Count this error
394  self._levelCounter[fields['level']] += 1
395 
396  # Record some error details
397  # N.B. We record 'IGNORED' errors as these really should be flagged for fixing
398  if fields['level'] == 'IGNORED' or stdLogLevels[fields['level']] >= self._msgDetails:
399  if self._levelCounter[fields['level']] <= self._msgLimit:
400  detailsHandled = False
401  for seenError in self._errorDetails[fields['level']]:
402  if seenError['message'] == line:
403  seenError['count'] += 1
404  detailsHandled = True
405  break
406  if detailsHandled is False:
407  self._errorDetails[fields['level']].append({'message': line, 'firstLine': lineCounter, 'count': 1})
408  elif self._levelCounter[fields['level']] == self._msgLimit + 1:
409  msg.warning("Found message number {0} at level {1} - this and further messages will be supressed from the report".format(self._levelCounter[fields['level']], fields['level']))
410  else:
411  # Overcounted
412  pass
413  if 'Total payload read from COOL' in fields['message']:
414  msg.debug("Found COOL payload information at line {0}".format(line))
415  a = re.match(r'(\D+)(?P<bytes>\d+)(\D+)(?P<time>\d+[.]?\d*)(\D+)', fields['message'])
416  self._dbbytes += int(a.group('bytes'))
417  self._dbtime += float(a.group('time'))
418 
419 
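The COOL payload extraction at listing line 415 can be tested against a plausible message; the exact wording of the real COOL summary line is an assumption here, only the digit/non-digit layout matters to the regex.

```python
import re

# Regex copied from the scanLogFile listing above; the message text is made up
message = 'Total payload read from COOL: 123 bytes in (0.05 s)'
a = re.match(r'(\D+)(?P<bytes>\d+)(\D+)(?P<time>\d+[.]?\d*)(\D+)', message)
dbbytes = int(a.group('bytes'))
dbtime = float(a.group('time'))
print(dbbytes, dbtime)   # 123 0.05
```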

◆ worstError()

def python.trfValidation.athenaLogFileReport.worstError (   self)

Return the worst error found in the logfile (first error of the most serious type)

Reimplemented from python.trfValidation.logFileReport.

Definition at line 425 of file trfValidation.py.

425  def worstError(self):
426  worst = stdLogLevels['DEBUG']
427  worstName = 'DEBUG'
428  for lvl, count in self._levelCounter.items():
429  if count > 0 and stdLogLevels.get(lvl, 0) > worst:
430  worstName = lvl
431  worst = stdLogLevels[lvl]
432  if len(self._errorDetails[worstName]) > 0:
433  firstError = self._errorDetails[worstName][0]
434  else:
435  firstError = None
436 
437  return {'level': worstName, 'nLevel': worst, 'firstError': firstError}
438 
439 
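Unlike firstError, worstError picks by severity alone. A toy run of the loop above, with an invented counter (the stand-in `stdLogLevels` includes CATASTROPHE, which the class uses for bad_alloc):

```python
# Toy run of the worst-error selection above: highest severity wins,
# regardless of where it appears in the log
stdLogLevels = {'DEBUG': 2, 'INFO': 3, 'WARNING': 4, 'ERROR': 5,
                'FATAL': 6, 'CATASTROPHE': 7}
levelCounter = {'WARNING': 5, 'ERROR': 2, 'CATASTROPHE': 1}

worst, worstName = stdLogLevels['DEBUG'], 'DEBUG'
for lvl, count in levelCounter.items():
    if count > 0 and stdLogLevels.get(lvl, 0) > worst:
        worstName, worst = lvl, stdLogLevels[lvl]

print(worstName, worst)   # CATASTROPHE 7
```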

Member Data Documentation

◆ _dbbytes

python.trfValidation.athenaLogFileReport._dbbytes
private

Definition at line 261 of file trfValidation.py.

◆ _dbtime

python.trfValidation.athenaLogFileReport._dbtime
private

Definition at line 262 of file trfValidation.py.

◆ _errorDetails

python.trfValidation.athenaLogFileReport._errorDetails
private

Definition at line 256 of file trfValidation.py.

◆ _ignoreList

python.trfValidation.athenaLogFileReport._ignoreList
private

Definition at line 218 of file trfValidation.py.

◆ _levelCounter

python.trfValidation.athenaLogFileReport._levelCounter
private

Definition at line 252 of file trfValidation.py.

◆ _logfile

python.trfValidation.logFileReport._logfile
privateinherited

Definition at line 181 of file trfValidation.py.

◆ _metaData

python.trfValidation.athenaLogFileReport._metaData
private

Definition at line 230 of file trfValidation.py.

◆ _metaPat

python.trfValidation.athenaLogFileReport._metaPat
private

Definition at line 229 of file trfValidation.py.

◆ _msgDetails

python.trfValidation.logFileReport._msgDetails
privateinherited

Definition at line 186 of file trfValidation.py.

◆ _msgLimit

python.trfValidation.athenaLogFileReport._msgLimit
private

Definition at line 232 of file trfValidation.py.

◆ _re

python.trfValidation.logFileReport._re
privateinherited

Definition at line 187 of file trfValidation.py.

◆ _regExp

python.trfValidation.athenaLogFileReport._regExp
private
Note
This is the regular expression match for athena logfile lines. The match first strips off any HH:MM:SS prefix the transform has added, then takes the next group of non-whitespace characters as the service name, then matches against the list of known log levels, and finally ignores any remaining whitespace and takes the rest of the line as the message.

Definition at line 227 of file trfValidation.py.

◆ _substepName

python.trfValidation.athenaLogFileReport._substepName
private

Definition at line 231 of file trfValidation.py.


The documentation for this class was generated from the following file: trfValidation.py