POTTER: Virtual Grand Rounds

I. Current situation.

II. Current state clinical decision support methods and complications.

III. Solution: POTTER.

IV. Remaining challenge to solve.

V. Frequently asked questions.

Questions on Clinical Application

Question 1: How does POTTER compare against the status quo clinical decision support tools in emergency medicine, specifically the ASA, ESS, and ACS-NSQIP calculators?
[Slide: POTTER performance]
[Slide: POTTER performance (morbidity)]
Question 2: Can we use this live? What is the current usage? How has it changed your practice?
  • The POTTER calculator smartphone application can be used at the bedside when assessing a patient.
  • Effective January 2019, MGH is recording the POTTER outputs as part of a prospective study.
  • We are more confident in our clinical decision making because we have analytical reinforcement.
Question 3: How much do you think the risk estimates would affect patient decisions? Some patients might hear 20% mortality and think "too risky" when we as surgeons know it is the right choice to do surgery.
  • This tool doesn't replace good patient counselling.
  • The other point is that a mortality estimate alone, say 50%, doesn't give the whole picture without also considering complications. If the surviving 50% face a near-100% complication rate, that needs to factor into the decision too, and we would recommend not doing surgery because quality of life afterward is not going to be good. So it is not all about the patient's risk tolerance: the complication rate has to be considered as well, and patients should be informed of the possible complications after surgery.

Questions on Research Methodology

Question 1: What dataset was used to build POTTER?
  • Emergency cases in the National Surgical Quality Improvement Program (NSQIP) database.
  • ~400K data points from the years 2007-2013 were used to train the model.
  • The model was validated with the 2014 dataset.
  • Preoperative variables were used as the models' inputs (predictors), while the postoperative variables were used as the dependent variables, i.e., the outcomes to predict (see the sketch after this list).
  • Variables that were not consistently collected between 2007 and 2014 were removed.
  • ICD-9 and CPT codes were removed because they are unknown preoperatively and their complexity compromises use by the surgeon at the bedside.
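  • For illustration only, a minimal Python sketch of the setup described above. The file name and column names (e.g., "operation_year", "sodium", "mortality_30d") are assumptions for the sake of the example, not the actual NSQIP field names, and categorical fields are assumed to be numerically encoded.

      # Sketch of the train/validation split and the preoperative/postoperative separation.
      # All file and column names below are illustrative assumptions.
      import pandas as pd

      nsqip = pd.read_csv("nsqip_emergency_cases.csv")  # hypothetical extract of emergency cases

      # Preoperative variables are the model inputs; a postoperative variable is the outcome.
      preop_features = ["age", "sodium", "creatinine", "functional_status", "ventilator_dependent"]
      outcome = "mortality_30d"

      # ~400K cases from 2007-2013 are used for training; the 2014 cases are held out for validation.
      train = nsqip[nsqip["operation_year"].between(2007, 2013)]
      valid = nsqip[nsqip["operation_year"] == 2014]

      X_train, y_train = train[preop_features], train[outcome]
      X_valid, y_valid = valid[preop_features], valid[outcome]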
Question 2: Why doesn't the type of surgery matter?
  • We tried to force it into the model, including the ICD and CPT codes, and it didn't make a difference (see the sketch after this answer). We think the patient profile largely accounts for the type of surgery, i.e., more serious surgeries have sicker patients presenting.
  • This drove me crazy in the initial phases of the research: how could the surgery itself not be important? I think the effect of the surgery is captured by other variables that appear in the trees. For example, a patient having a simple appendectomy is usually a healthier patient without other risk factors, whereas a patient undergoing an emergency laparotomy is sick at baseline.
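  • A rough sketch of the kind of check described above, continuing the data-prep sketch from the previous answer. A greedy scikit-learn tree is only a stand-in for POTTER's optimal classification trees, and "procedure_group" is an invented, numerically encoded procedure-type feature.

      # Compare validation AUC with and without a procedure-type feature.
      from sklearn.tree import DecisionTreeClassifier
      from sklearn.metrics import roc_auc_score

      def validation_auc(feature_cols):
          model = DecisionTreeClassifier(max_depth=6, random_state=0)
          model.fit(train[feature_cols], train[outcome])
          probs = model.predict_proba(valid[feature_cols])[:, 1]
          return roc_auc_score(valid[outcome], probs)

      # If the two numbers are essentially the same, the procedure-type
      # feature adds little beyond the preoperative patient profile.
      print(f"AUC, preop variables only: {validation_auc(preop_features):.3f}")
      print(f"AUC, plus procedure type:  {validation_auc(preop_features + ['procedure_group']):.3f}")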
Question 3: Why do items that do not seem important appear early in the tool? For example, why is sodium listed first?
  • Sodium appears early because the tree captures non-linear relationships that the human mind is not equipped to process. For example, in an otherwise healthy individual, hypertension can be a critical risk factor; in a patient who is already diabetic and carries another serious comorbidity, hypertension might not be as important because that patient's baseline risk is already high. In short: surgical risk is not linear. The presence or absence of certain risk factors strongly affects the impact of the other factors, and not every risk factor affects every patient the same way.
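  • A toy illustration of that point, with made-up risk factors and numbers chosen purely to show the interaction (this is not POTTER's actual tree):

      # Toy "tree": the same risk factor shifts the estimate by very
      # different amounts depending on which other factors are present.
      def toy_mortality_risk(diabetic: bool, other_major_comorbidity: bool, hypertensive: bool) -> float:
          if diabetic and other_major_comorbidity:
              # Baseline risk is already high; hypertension barely moves it.
              return 0.42 if hypertensive else 0.40
          # In an otherwise healthier patient, hypertension matters much more.
          return 0.12 if hypertensive else 0.03

      print(toy_mortality_risk(True, True, True))    # ~0.42: hypertension adds ~2 points
      print(toy_mortality_risk(False, False, True))  # ~0.12: hypertension adds ~9 points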
Question 4: The dataset strictly includes trauma cases. In order to be generalizable, doesn't the dataset also need to include non-operative cases?
  • An answer to that might be that you would only use this calculator when you are already considering operating on a patient, and the calculator predicts the outcome risk IF this patient undergoes surgery. (Needs to be confirmed.)
Question 5: How can I inspect the algorithm?

Questions on Optimal Decision Trees

Question 1: Why 4-11 questions for mortality? Why sometimes 2 questions?
  • TBD
Question 2: Why are there no confidence intervals?
  • TBD
Question 3: How are the trees built?
[Slide: POTTER decision trees]