Segmenting organelles in volume electron microscopy (vEM) images is essential for understanding cellular structures, but manual segmentation is labor-intensive and time-consuming. Deep learning has emerged as a powerful alternative, automating the process with significant efficiency gains. Convolutional neural networks (CNNs) have proven particularly effective at segmenting organelles such as mitochondria and the Golgi apparatus from vEM data. However, these methods often require substantial computational resources and large annotated datasets, or repeated rounds of ground-truth annotation, all of which limit their accessibility. empanada, a plugin for the napari image viewer (https://empanada.readthedocs.io/en/latest/), addresses these challenges by providing a user-friendly platform for deep learning-based segmentation of organelles. It integrates pre-trained models and allows users to fine-tune them or train new models on their own data, making it adaptable to diverse types of electron microscopy images. While the plugin can be used to create models for any organelle, empanada comes packaged with MitoNet, a state-of-the-art generalist panoptic model for instance segmentation of mitochondria in EM datasets. empanada also simplifies the overall workflow by offering modules for training, inference, and post-processing, reducing the technical barriers typically associated with deep learning. By making advanced segmentation tools more accessible, empanada enables researchers to analyze large vEM datasets efficiently, accelerating discoveries in cell biology through precise, high-throughput analysis of organelle structures.
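To make the instance-segmentation idea concrete: a panoptic model like MitoNet distinguishes individual mitochondria rather than producing a single "mitochondria" class mask. The toy sketch below illustrates one common post-processing pattern behind that distinction, turning a binary semantic mask into per-object instance labels via connected-component labeling and small-object removal. It uses `scipy.ndimage` and is purely illustrative; it is not the empanada or MitoNet implementation, whose actual pipeline is more involved.

```python
import numpy as np
from scipy import ndimage

def label_instances(semantic_mask: np.ndarray, min_size: int = 4) -> np.ndarray:
    """Toy instance segmentation: label connected components of a binary
    semantic mask and discard tiny objects (a common post-processing step).
    NOTE: illustrative sketch only -- not the empanada/MitoNet pipeline."""
    labels, n = ndimage.label(semantic_mask)
    # Zero out components smaller than min_size pixels (likely noise).
    for i in range(1, n + 1):
        if (labels == i).sum() < min_size:
            labels[labels == i] = 0
    # Relabel so the surviving instance IDs are consecutive integers.
    relabeled, _ = ndimage.label(labels > 0)
    return relabeled

# Two separate "mitochondria" blobs plus a one-pixel speck of noise.
mask = np.zeros((8, 8), dtype=bool)
mask[1:3, 1:4] = True   # blob 1 (6 px)
mask[5:7, 5:8] = True   # blob 2 (6 px)
mask[0, 7] = True       # noise (1 px, removed by the size filter)

instances = label_instances(mask)
print(instances.max())  # prints 2: two instances kept, noise discarded
```

In a real vEM workflow the semantic probabilities come from the trained network and the labeling is done in 3D across the volume, but the principle of assigning each organelle its own integer ID is the same.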