Activity recognition using ubiquitous wearable devices (e.g., smartphones, smartwatches, and sport bracelets) can be applied to many domains such as healthcare, smart environments, assisted living, human-computer interaction, and surveillance. Most existing activity recognition approaches require users to provide a sufficient amount of annotations (labels) for each activity in order to achieve acceptable performance, and therefore often fail to scale to a large number of activities or to recognize new (emerging) activities. To overcome this limitation, systems that require only limited training data are highly desirable. However, existing activity recognition solutions for limited training data rely heavily on low-level activity or attribute extraction and therefore suffer from two major limitations: (1) they fail to work well when activities are highly similar to each other, such as jogging, running, and jumping front and back; and (2) they degrade overall system performance on recognizing existing activities with sufficient training data. In this paper, we introduce SMART, a unified semi-supervised framework for recognizing highly similar emerging activities without sacrificing performance on recognizing existing activities. Extensive experiments on real-world data showed that, compared to the state of the art, SMART yielded superior performance on recognizing emerging activities, especially highly similar ones, while providing comparable performance on recognizing existing activities.