ML-based semantic obfuscation. Without going too deep into the details, the idea is to use a seq2seq + VQ-VAE model to create embeddings, then transform those embeddings with a secret key (ideally in a way that preserves the latent distribution of the training set). The model is trained with cycle consistency, i.e. minimising the difference between the input embedding and the embedding recovered from the output. Whoever knows the key can map the embedding of the encoded output back to the embedding of the original input; to everyone else, the encoded text reads as ordinary clear-text as long as it stays within the distribution.
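A minimal sketch of the key-based embedding transform, under some simplifying assumptions: the latent vectors are treated as continuous (the VQ-VAE quantisation step is ignored here), and the key is used to seed a deterministic orthogonal rotation. An orthogonal transform is one concrete way to satisfy the "preserves the latent distribution" requirement, since it leaves an isotropic Gaussian prior unchanged. The function names (`obfuscate`, `deobfuscate`, `key_to_rotation`) are hypothetical, not from any actual implementation.

```python
import numpy as np

def key_to_rotation(key: int, dim: int) -> np.ndarray:
    # Derive a deterministic orthogonal matrix from the key by
    # QR-decomposing a key-seeded Gaussian matrix. Orthogonal maps
    # preserve norms and leave an isotropic latent prior invariant.
    rng = np.random.default_rng(key)
    q, _ = np.linalg.qr(rng.standard_normal((dim, dim)))
    return q

def obfuscate(z: np.ndarray, key: int) -> np.ndarray:
    # Rotate the latent embeddings with the key-derived matrix.
    return z @ key_to_rotation(key, z.shape[-1])

def deobfuscate(z_obf: np.ndarray, key: int) -> np.ndarray:
    # The inverse of an orthogonal matrix is its transpose, so the
    # key holder can recover the original embeddings exactly.
    return z_obf @ key_to_rotation(key, z_obf.shape[-1]).T

# Round-trip check on toy latent vectors (batch of 4, dim 16).
z = np.random.default_rng(0).standard_normal((4, 16))
z_rec = deobfuscate(obfuscate(z, key=1234), key=1234)
assert np.allclose(z, z_rec)
```

The cycle-consistency training objective would then push the decoder/encoder pair so that decoding a rotated embedding and re-encoding the result lands back near that rotated embedding; the rotation itself is exactly invertible only for the key holder.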