Computational stylometry has aided the work of philologists for over 50 years. From simple word counts to the latest use of machine learning for authorship attribution, computation offers the literary critic a wide array of techniques for better understanding individual texts and large corpora. To date, these methods have been accessible largely to specialists with a background in programming and statistics. The Quantitative Criticism Lab has now designed a user-friendly toolkit that will allow humanists with no prior training in the digital humanities to obtain a wide range of philological data about most classical texts and to perform sophisticated quantitative analyses, all through a simple point-and-click interface. This presentation will demonstrate some of the experiments and literary-critical insights enabled by the toolkit, and discuss relevant issues of interpretation and statistical analysis.