But fiction it is not: a team of neuro-engineers at Columbia University did, in fact, develop such a system. Information on the technology was published yesterday in Scientific Reports.
In a press release, Columbia University explained how the system works:
By monitoring someone's brain activity, the technology can reconstruct the words a person hears with unprecedented clarity. This breakthrough, which harnesses the power of speech synthesizers and artificial intelligence, could lead to new ways for computers to communicate directly with the brain.
Previous fMRI research has shown that when people speak (or even imagine speaking), telltale patterns of activity appear in the brain. Distinct but recognizable patterns of signals also emerge when we listen to someone speak, or when we imagine listening. We previously published an article about similar technology being used in China, but Columbia's technology goes much further.