In recent years, the predictive power of supervised machine learning (ML) has advanced impressively, reaching state-of-the-art and even super-human performance in some applications. However, the adoption of ML models in real-life applications has been much slower than one would expect. One obstacle to deploying ML-based solutions is the lack of user trust in the resulting models, which stems from their black-box nature. To foster the adoption of ML models, their predictions should be easy to interpret while maintaining high accuracy. In this context, we develop the Neural Local Smoother (NLS), a neural network architecture that yields accurate predictions with easy-to-obtain explanations. The key idea of NLS is to add a smooth local linear layer to a standard network. Our experiments indicate that NLS achieves predictive power comparable to that of state-of-the-art machine learning models while being easier to interpret.
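To make the key idea concrete, the following is a minimal sketch of the architecture in PyTorch, under our own assumptions: a feed-forward network (here called coef_net, with illustrative layer sizes) outputs the coefficients of a local linear model, and the prediction is that local linear fit evaluated at the input. The class name, network depth, and hidden width are hypothetical choices, not the paper's exact configuration.

import torch
import torch.nn as nn

class NLS(nn.Module):
    """Sketch of a Neural Local Smoother: a standard network outputs
    per-example local linear coefficients; the prediction is the
    local linear model evaluated at the input itself."""

    def __init__(self, n_features, hidden=64):
        super().__init__()
        # Maps x to (intercept, slopes): n_features + 1 coefficients.
        # Architecture details here are illustrative assumptions.
        self.coef_net = nn.Sequential(
            nn.Linear(n_features, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_features + 1),
        )

    def forward(self, x):
        theta = self.coef_net(x)                    # (batch, n_features + 1)
        intercept, slopes = theta[:, :1], theta[:, 1:]
        # Local linear prediction: theta_0(x) + sum_j theta_j(x) * x_j
        return intercept + (slopes * x).sum(dim=1, keepdim=True)

model = NLS(n_features=5)
x = torch.randn(8, 5)
y_hat = model(x)                                    # (8, 1) predictions

Because the slopes vary smoothly with x, each prediction carries its own linear explanation: the coefficients produced by coef_net can be read directly as local feature effects, which is what makes the explanations easy to obtain.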