Before doing any feature selection, I performed a pd.get_dummies and kept all inputs except 'Score', 'Class', and 'Case No'. K=21 appeared to be where the graph leveled off, so using 21 neighbors, I got an accuracy score of 0.925. Because K-Nearest Neighbors doesn't have its own feature selection method, I used an ExtraTreesClassifier to determine the highest-weighted inputs. The top 11 features were all 10 questions plus Age: Age ranked tenth, and question 'A8' ranked eleventh, behind by a weight of only two ten-thousandths.
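A minimal sketch of these steps, assuming the raw data lives in a DataFrame named `df` and the AQ-10 columns are labeled 'A1' through 'A10' (those names, along with the split and seed, are assumptions for illustration, not taken from the original run):

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import ExtraTreesClassifier

# One-hot encode everything except the target and non-input columns.
X = pd.get_dummies(df.drop(columns=['Score', 'Class', 'Case No']))
y = df['Class']

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)

# Elbow search over odd K values; accuracy leveled off near K=21
# in the run described above.
scores = {k: KNeighborsClassifier(n_neighbors=k)
             .fit(X_train, y_train)
             .score(X_test, y_test)
          for k in range(1, 40, 2)}

# KNN has no built-in feature importances, so rank the inputs
# with an ExtraTreesClassifier instead.
et = ExtraTreesClassifier(n_estimators=100, random_state=42)
et.fit(X_train, y_train)
ranking = (pd.Series(et.feature_importances_, index=X.columns)
             .sort_values(ascending=False))
print(ranking.head(13))  # the 10 questions, Age, then ethnicity dummies
```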
After performing a grid search to tune the hyperparameters, the testing score bumped up from 0.925 to ~0.947. I then altered the feature selection five times. In addition to the questions, I focused on Age, Family History of ASD, Jaundice, and Sex. Though the ethnicity inputs 'white' and 'middle eastern' ranked 12th and 13th, I chose not to focus on ethnicity because those two ethnicities accounted for the overwhelming majority of records.
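The write-up doesn't record the exact search space, so the grid below is an illustrative sketch of tuning a KNN with GridSearchCV; the parameter ranges are assumptions:

```python
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

# Illustrative grid; the actual hyperparameters searched may differ.
param_grid = {
    'n_neighbors': range(5, 41, 2),
    'weights': ['uniform', 'distance'],
    'p': [1, 2],  # Manhattan vs. Euclidean distance
}

grid = GridSearchCV(KNeighborsClassifier(), param_grid, cv=5)
grid.fit(X_train, y_train)
print(grid.best_params_)
print(grid.score(X_test, y_test))  # ~0.947 in the run described above
```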
Through the five feature-selection iterations, I removed the chosen features one by one, so I was eventually left with only the 10 AQ-10 questions as inputs. The highest testing score came in the last iteration, with only the questions as inputs.
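One way to run those five iterations as a loop; the non-question column names here ('Family_ASD', 'Jaundice', 'Sex_m') are hypothetical stand-ins for whatever pd.get_dummies produces from the actual dataset:

```python
# Hypothetical post-dummies column names; adjust to the real dataset.
extras = ['Age', 'Family_ASD', 'Jaundice', 'Sex_m']
questions = [f'A{i}' for i in range(1, 11)]

features = questions + extras
for col in [None] + extras:      # first pass keeps all five focus features
    if col is not None:
        features.remove(col)     # drop one focus feature per iteration
    knn = KNeighborsClassifier(**grid.best_params_)
    knn.fit(X_train[features], y_train)
    print(features, knn.score(X_test[features], y_test))
```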
Some interesting notes: