We know that there are colorblind people; I'm not talking about them here. Our brains appear to be prediction machines designed to model the world around us from sensory input over time. So everything beyond the fixed "first layer" that represents perception ([[photoreceptors]]) is a set of interconnected, locally autonomous units that depend on one another. Therefore, if all concepts (however small they may be) are learned on top of the atomic concepts that represent sensory input, then the high-level concepts are a model of the lower-level ones, the lower-level ones are a model of the even lower-level ones, and so on until we reach the sensory input. That sensory input appears to be the same across all humans (excluding colorblind people), but the concepts built on top of it are free to drift in different directions as long as they still model the world in a practical way.

So one person can perceive the color red in their own way, but if we transported just the high-level concept of red into another person's brain, the pattern of neural activity describing that concept would almost certainly differ, and it would likely not appear red to them.

To test this, we could train a model that listens to a single person's brain activity while they look at red vs. blue. Once the model can reliably predict which color that person is seeing, we connect it to another person and measure the drop in accuracy. I'm not doing that, too much work for just a thought :D
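
A minimal sketch of what that experiment could look like, just to pin down its shape. This assumes we somehow had per-trial feature vectors of brain activity for two people; the synthetic data, the feature count, and the logistic-regression classifier are all placeholders of mine, not a real recording pipeline.

```python
# Toy version of the "train on one brain, test on another" idea.
# Real brain recordings are replaced with synthetic feature vectors,
# so the numbers mean nothing; only the shape of the experiment matters.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def synth_person(n_trials=400, n_features=64, shift=0):
    """Fake per-trial 'brain activity': red and blue differ along one
    feature direction, rotated differently for each person (shift)."""
    labels = rng.integers(0, 2, n_trials)            # 0 = blue, 1 = red
    base = rng.normal(size=(n_trials, n_features))   # background noise
    direction = np.roll(np.eye(n_features)[0], shift)
    features = base + np.outer(labels - 0.5, direction) * 3.0
    return features, labels

# Person A: the one we train on. Person B: same stimuli, different "encoding".
X_a, y_a = synth_person(shift=0)
X_b, y_b = synth_person(shift=20)

# Hold out part of person A's trials to confirm the model works on them.
X_train, X_test, y_train, y_test = train_test_split(
    X_a, y_a, test_size=0.25, random_state=0
)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

print("accuracy on person A (held out):", clf.score(X_test, y_test))
print("accuracy on person B:           ", clf.score(X_b, y_b))
```

In this toy setup the second person's accuracy should land near chance (~0.5), which is exactly the kind of drop the thought experiment predicts: the classifier learned person A's encoding of red, not red itself.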