The increasing use of machine-learning (ML) enabled systems in critical tasks fuels the quest for novel verification and validation techniques that are nonetheless grounded in accepted system assurance principles. In traditional system development, model-based techniques have been widely adopted, where the central premise is that abstract models of the required system provide a sound basis for judging its implementation. We posit an analogous approach for ML systems using an ML technique that extracts, from the high-dimensional training data implicitly describing the required system, a low-dimensional underlying structure, a manifold. This manifold is then harnessed for a range of quality assurance tasks, such as test adequacy measurement, test input generation, and runtime monitoring of the target ML system. The approach is built on variational autoencoders, an unsupervised method for learning a pair of mutually near-inverse functions between a given high-dimensional dataset and a low-dimensional representation. Preliminary experiments establish that the proposed manifold-based approach, for test adequacy, drives diversity in test data; for test generation, yields fault-revealing yet realistic test cases; and for runtime monitoring, provides an independent means to assess the trustability of the target system's output.
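The "pair of mutually near-inverse functions" mentioned above can be illustrated with a minimal sketch. The code below is not a variational autoencoder; it is a linear (PCA-style) stand-in that recovers a low-dimensional structure from high-dimensional data and exhibits the same near-inverse encode/decode property. All variable names and the toy dataset are illustrative assumptions, not artifacts of the paper.

```python
import numpy as np

# Linear stand-in for the encoder/decoder pair a VAE learns:
# encode maps high-dimensional points to a low-dimensional code,
# decode maps codes back, and the two are approximately mutual
# inverses on the data. (A real VAE learns nonlinear maps and a
# latent distribution; this sketch only shows the near-inverse idea.)

rng = np.random.default_rng(0)

# Synthetic data lying near a 2-D plane inside a 10-D space.
latent = rng.normal(size=(500, 2))            # true low-dim structure
basis = rng.normal(size=(2, 10))              # embedding into 10-D
data = latent @ basis + 0.01 * rng.normal(size=(500, 10))

# Recover the 2-D structure via SVD (a linear autoencoder).
mean = data.mean(axis=0)
_, _, vt = np.linalg.svd(data - mean, full_matrices=False)
components = vt[:2]                           # top-2 directions

def encode(x):
    """High-dimensional point -> 2-D code."""
    return (x - mean) @ components.T

def decode(z):
    """2-D code -> reconstructed high-dimensional point."""
    return z @ components + mean

# decode(encode(x)) is close to x for points on the learned manifold.
recon = decode(encode(data))
err = np.max(np.abs(recon - data))
print(err < 0.1)  # → True
```

In a VAE, `encode` and `decode` would be neural networks trained jointly, and the low-dimensional code would carry a probability distribution, which is what makes the manifold usable for adequacy measurement, input generation, and runtime monitoring.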