SFP Post-release monitoring checklist

SFP Production Checklist

SFP Post-release Main Checklist

FeedProcessor

  • Error logs in FeedProcessor
  • Error logs to look for:
    • "Error parsing line: "
    • "Processing terminated * could not download S3 file for customer: "
    • "Processing terminated * duplicate file detected for customer: "
    • "Processing terminated * api listing depletion broker: "
    • "MAPPING ID:"
    • "Thread " " interrupted for customer:"
    • "Error Customer:"
    • "CUSTOMER ID:"
    • "Error getting file "

ItemParser

  • We don't have loggerName set for this, and the logs are not regular. I think it is best to verify this by sending the test file through.

SFP Post-release Task Checklist

Note: None of the ELK links below use the 'seller-feed-processor' topic yet, because it currently has no logs associated with it. Be sure to switch them to that topic after release.

AscReplicationTask

  • Every minute
  • What to look for:
    • New file is uploaded to S3
    • A record is put in the sellerupload table with a filename that is a UUID (not "ascupload.txt"); a UUID format check is sketched at the end of this list
      • select FILEPATH from sellerupload where CUSTOMERID = 218137;
    • Follow logs for this particular broker with ID 218137
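    • One rough way to sanity-check the filename format is a REGEXP filter; this is only a sketch and assumes FILEPATH stores just the UUID-style filename for this broker (adjust the pattern if it carries a path prefix or extension):
        select ID, FILEPATH, PROCESSTIME
        from sellerupload
        where CUSTOMERID = 218137
          -- loose UUID match; assumes no path prefix before the UUID
          and FILEPATH REGEXP '^[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}'
        order by ID desc limit 5;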

ShowProductionsTask

  • This task shows productions under various conditions, some of which are not common
  • Every 10 minutes with a 35 second delay
  • What to look for:
    • Elk logs
    • Compare before and after; this query doesn't appear to return many records very often (a count wrapper for before/after comparisons is sketched at the end of this list):
      • Select productions.PID from sellerlisting,productions Where productions.PID=sellerlisting.PRODUCTIONID and HIDE=1 and DISABLED=0 and sellerlisting.LISTINGSTATUS=1 and LISTUNTIL > CurDate() and P_DATE > Curdate() Group By PID
    • Compare before and after; they should both return no records just after the task runs.
      • Select productions.PID from brokerlisting,productions,brokers Where productions.PID=brokerlisting.PRODUCTIONID and brokers.BID=brokerlisting.BROKERID and HIDE=1 and DISABLED=0 and BLOCKED=0 and DATE_SUB(P_DATE,INTERVAL LISTINGCUTOFF HOUR) > Curdate() Group By PID
      • Select ticket.PID from vividdb.ticket,productions Where ticket.PID=productions.PID and HIDE=1 and DISABLED=0 and P_DATE > Curdate() and ticket.SOLD=0 and ticket.CANCELLED=0 and ticket.EXCHANGELISTED=1 and WEBPRICE > 0 Group By ticket.PID
    • Compare before and after; these should also return no records, but get updated via different statements.
      • Select PID from sellerlisting,productions Where sellerlisting.PRODUCTIONID=productions.PID and P_DATE > Curdate() and LISTINGSTATUS=1 and LISTUNTIL > CurDate() group by sellerlisting.PRODUCTIONID
      • Select PID from brokerlisting,productions,brokers Where brokerlisting.PRODUCTIONID=productions.PID and brokers.BID=brokerlisting.BROKERID and HIDE=0 and BLOCKED=0 and DATE_SUB(P_DATE,INTERVAL LISTINGCUTOFF HOUR) > Curdate() group by brokerlisting.PRODUCTIONID
      • Select PID from packages,productions Where packages.PRODUCTIONID=productions.PID and P_DATE > Curdate() and packages.ACTIVE=1 and packages.QTYAVAIL>0 Group By PID
      • Select PID,SHADOWPID from productions Where HIDE=0
    • Compare this query before and after (it should return fewer records after):
      • select * from productions Where HIDE=0 and P_DATE < Curdate()
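    • For the before/after comparisons above, one option is to wrap each query in a count so the two runs can be diffed as single numbers; a sketch using the first query from this list (the wrapper and alias names are made up here, and the same pattern applies to the other queries):
        Select Count(*) as hidden_listed_productions from (
          Select productions.PID from sellerlisting,productions
          Where productions.PID=sellerlisting.PRODUCTIONID and HIDE=1 and DISABLED=0
            and sellerlisting.LISTINGSTATUS=1 and LISTUNTIL > CurDate() and P_DATE > Curdate()
          Group By PID
        ) as before_or_after;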

ListingNotesMappingTask

DeleteStaleFilesTask

  • Every 20 minutes with a 90 second delay
  • What to look for:
    • Elk logs
    • Run these queries before and after; the rows they return should all be deleted by the task (a combined count check is sketched at the end of this list):
      • select * from sellerupload Where THREADLOCK=1 and PROCESSTIME < DATE_SUB(Now(),Interval 2 hour)
      • select * from sellerupload Where STALE=1 and FILEPATH is NULL
      • select * from sellerupload Where STALE=1 and FILEPATH='ascupload.txt'
      • select * from sellerremap Where REMAPTIME < DATE_SUB(Now(),Interval 10 hour)
    • Look at the FILEPATHs returned by this query, confirm they are deleted from S3, and then confirm that the corresponding sellerupload rows are also deleted by ID:
      • Select SQL_NO_CACHE sellerupload.ID, FILEPATH, PROCESSED from sellerupload,
          (Select PROCESSTIME, CUSTOMERID, ID from sellerupload Where DELETED=0 and STALE=1 and PROCESSED=1 and PROCESSTIME < Date_Sub(Now(), INTERVAL 14 DAY) Group By DATE(PROCESSTIME), CUSTOMERID Order By CUSTOMERID) as t1
        Where DELETED=0 and STALE=1 and sellerupload.PROCESSTIME < Date_Sub(Now(), INTERVAL 14 DAY) and sellerupload.ID<>t1.ID and sellerupload.CUSTOMERID=t1.CUSTOMERID
        -- This is maintained in a list in the code
        and sellerupload.ID not in (Select MAX(ID) from sellerupload Where PROCESSED=1 group by customerid)
        Limit 10000
    • Look at the FILEPATH and ID returned by this query, and confirm that the FILEPATH is removed from S3 and the ID is removed from sellerupload:
      • Select sellerupload.ID, FILEPATH from sellerupload,
          (Select CUSTOMERID from account,brokers Where BLOCKUPLOAD=1 and account.BROKERID=brokers.BID and BROKERID>0) as t1
        Where t1.CUSTOMERID=sellerupload.CUSTOMERID
        -- This is maintained in a list in the code
        and sellerupload.ID not in (Select MAX(ID) from sellerupload Where PROCESSED=1 group by customerid)
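    • To compare before and after in one shot, the four stale-row queries above can be folded into a single row of counts; a sketch with made-up column aliases, on the assumption that the task should bring all four to zero right after it runs:
        Select
          (Select Count(*) from sellerupload Where THREADLOCK=1 and PROCESSTIME < DATE_SUB(Now(), Interval 2 hour)) as locked_stale,
          (Select Count(*) from sellerupload Where STALE=1 and FILEPATH is NULL) as stale_no_filepath,
          (Select Count(*) from sellerupload Where STALE=1 and FILEPATH='ascupload.txt') as stale_ascupload,
          (Select Count(*) from sellerremap Where REMAPTIME < DATE_SUB(Now(), Interval 10 hour)) as stale_remaps;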

DuplicateSaleMonitorTask

  • Every 20 minutes with 20 minute delay
  • What to look for:
    • "Sold ticket diff results" * Check before and after, this will give us a good rubric about relf-reporting metrics
    • "DuplicateSaleMonitor broker list sync at:"
    • "DuplicateSaleMonitor sales list sync at:"
    • "DuplicateSaleMonitor order map sync at:"
    • "Error adding key: " * this currently almost never happens, monitor to see that it doesn't show up.
    • "Error removing key: " * this currently almost never happens, monitor to see that it doesn't show up.
    • Elk logs

ReconcileOrphanedMappingsTask

  • Every hour with no delay
  • What to look for:
    • Elk logs
    • Show that the CUSTOMERIDs from the first query are being added to the table in the second query (alternatively, that the second query shows records with a recent REMAPTIME when the task runs; a quick recency check is sketched at the end of this list)
      • select customerid from ( select sellermapping.CUSTOMERID, EID, VID, t1.ID from sellermapping left join ( select ID, EVENT, VENUE, EVENTDATE, EVENTTIME from sellermapping Where MAPEVENT=1 or MAPPRODUCTION=1) as t1 on ( sellermapping.ID<>t1.ID and t1.EVENT=sellermapping.EVENT and t1.VENUE=sellermapping.VENUE and t1.EVENTDATE=sellermapping.EVENTDATE and t1.EVENTTIME=sellermapping.EVENTTIME ) Where MAPEVENT=0 and MAPPRODUCTION=0 group by sellermapping.event, sellermapping.venue, sellermapping.eventdate, sellermapping.eventtime HAVING t1.ID is null ) t1 group by customerid
      • select * from sellerremap ORDER BY REMAPTIME DESC
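    • A quicker recency check than scanning the full ORDER BY output is to count sellerremap rows written since the last hourly run; a sketch, assuming REMAPTIME is set at the time the task inserts the row:
        Select Count(*) as remaps_last_hour
        from sellerremap
        Where REMAPTIME > DATE_SUB(Now(), Interval 1 hour);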

DeleteExpiredMappingsTask

  • Every day at 1am
  • What to look for:
    • Elk logs
    • Run these queries before and after; the rows they return should all be deleted by the task (a combined count check is sketched at the end of this list):
      • select * from sellervariation Where EXPIRYTIME < Now() and EXPIRYTIME<>'0000-00-00' and EID=0 and VID=0 and EXPIRYTIME is NOT NULL
      • select * from mappingmemory Where EXPIRYTIME < Now() and EXPIRYTIME<>'0000-00-00' and EXPIRYTIME is NOT NULL and PRODUCTIONID=0
      • select mappingmemory.* from mappingmemory,productions Where productions.PID=mappingmemory.PRODUCTIONID and P_DATE < Curdate()
      • select * from externaleventignore Where IGNOREUNTIL < Now()
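    • As with DeleteStaleFilesTask, the four queries above can be folded into a single row of counts for the before/after comparison; a sketch with made-up aliases, expecting all four to drop to zero right after the 1am run:
        Select
          (Select Count(*) from sellervariation Where EXPIRYTIME < Now() and EXPIRYTIME<>'0000-00-00' and EID=0 and VID=0 and EXPIRYTIME is NOT NULL) as expired_variations,
          (Select Count(*) from mappingmemory Where EXPIRYTIME < Now() and EXPIRYTIME<>'0000-00-00' and EXPIRYTIME is NOT NULL and PRODUCTIONID=0) as expired_memory,
          (Select Count(*) from mappingmemory,productions Where productions.PID=mappingmemory.PRODUCTIONID and P_DATE < Curdate()) as memory_for_past_productions,
          (Select Count(*) from externaleventignore Where IGNOREUNTIL < Now()) as expired_ignores;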

DeleteListingsTask

  • Every 24 hours, with a 120 minute delay
  • What to look for:
    • Elk logs
    • See that records with an ELHR value (hours since last upload) of 168 or more get deleted; a count-only version of the query is sketched below
      • Select SQL_NO_CACHE TIMESTAMPDIFF(HOUR,LASTUPLOAD,NOW()) as ELHR, STALEINTERVAL, brokers.BID, account.CUSTOMERID, account.LASTUPLOAD, account.EMAIL, B_NAME, B_PHONE from brokers, brokerlisting, account where brokers.BID=brokerlisting.BROKERID and account.BROKERID=BID and account.LASTUPLOAD < Date_Sub(Now(),Interval STALEINTERVAL Hour) and LISTINGMANAGERPOS=0 and brokers.MERCURYFEED=0 Group by brokerlisting.BROKERID Having ELHR > 168
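    • The full query above returns contact details for follow-up; for a plain before/after check, the same conditions can be reduced to a single count (the hours-since-upload expression is inlined so it is valid in the WHERE clause); a sketch:
        Select Count(Distinct brokerlisting.BROKERID) as stale_feed_brokers
        from brokers,brokerlisting,account
        Where brokers.BID=brokerlisting.BROKERID and account.BROKERID=BID
          and account.LASTUPLOAD < Date_Sub(Now(),Interval STALEINTERVAL Hour)
          and LISTINGMANAGERPOS=0 and brokers.MERCURYFEED=0
          and TIMESTAMPDIFF(HOUR,account.LASTUPLOAD,NOW()) > 168;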