Training a neural embeddings model on domain-specific data is a way to build an Always Evolving Pattern Dictionary for that domain: one that associates the different patterns found in that data via their cosine similarity in vector space. The patterns can be any sequence of symbols, including whole words, letters, and digits.
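As a quick illustration of the similarity measure itself, here is a minimal sketch of cosine similarity between two embedding vectors; the function name and use of NumPy are illustrative assumptions, not part of any particular embeddings library:

```python
import numpy as np

def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    # Cosine of the angle between two embedding vectors:
    # 1.0 means identical direction, 0.0 means orthogonal (unrelated).
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))
```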
Some neural embeddings models, such as Word2Vec, work only at the whole-word level (not the letter level) and are context independent: when the trained model is fed a test pattern, each pattern is represented by a single vector regardless of the context surrounding it in the test pattern. Others, such as BERT, ELMo, and Flair, work at the sub-word level and are context dependent: a pattern may have one or more vectors, and the vector matched depends on the context in the test pattern. Simple embeddings that work at the word level and are context independent work well in situations where the patterns are things like product names and acronyms.
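The sketch below shows the context-independent case using gensim's Word2Vec; the toy corpus and hyperparameter values are made-up assumptions for illustration, not recommendations:

```python
from gensim.models import Word2Vec

# Hypothetical domain-specific corpus: each document is a list of tokens.
corpus = [
    ["acme", "router", "x200", "firmware", "update"],
    ["acme", "x200", "reset", "procedure"],
    ["router", "firmware", "rollback", "x200"],
]

# Train a small Word2Vec model; the hyperparameters here are illustrative.
model = Word2Vec(corpus, vector_size=50, window=3, min_count=1, epochs=50)

# Context-independent lookup: the product name "x200" maps to exactly one
# vector, no matter which sentence it appeared in.
vec = model.wv["x200"]

# Cosine similarity between two patterns in the learned vector space.
print(model.wv.similarity("x200", "router"))
print(model.wv.most_similar("x200", topn=3))
```

Because each token has exactly one vector, a lookup like this is cheap and deterministic, which is part of why static word-level embeddings suit stable patterns such as product names and acronyms.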