Yes. That is what machine-learning models, such as neural networks, do super well. Today you can train a neural network on a million images, then show it a million unseen images, and it will tell you, with remarkably high accuracy, what is in each one.
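The train-then-classify-unseen-images workflow can be sketched in miniature. This is a hypothetical toy setup, not any particular model from the interview: synthetic 8x8 "images" where class 1 has a bright patch, classified by a one-hidden-layer network trained with plain gradient descent in NumPy.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_images(n):
    # Synthetic stand-in for an image dataset (assumption for illustration):
    # class 0 is pure noise, class 1 is noise plus a bright patch.
    X = rng.normal(0.0, 1.0, size=(n, 64))
    y = rng.integers(0, 2, size=n)
    X[y == 1, :16] += 2.0  # bright patch marks class 1
    return X, y

X_train, y_train = make_images(1000)
X_test, y_test = make_images(500)   # "unseen images"

# One hidden layer (64 -> 32 -> 1), full-batch gradient descent.
W1 = rng.normal(0, 0.1, (64, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.1, (32, 1));  b2 = np.zeros(1)
lr = 0.1
for _ in range(300):
    h = np.maximum(0, X_train @ W1 + b1)          # ReLU hidden layer
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))          # sigmoid output
    g = (p - y_train[:, None]) / len(X_train)     # cross-entropy gradient
    gW2 = h.T @ g; gb2 = g.sum(0)
    gh = (g @ W2.T) * (h > 0)                     # backprop through ReLU
    gW1 = X_train.T @ gh; gb1 = gh.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# Evaluate on images the network never saw during training.
h = np.maximum(0, X_test @ W1 + b1)
pred = (1 / (1 + np.exp(-(h @ W2 + b2))) > 0.5).ravel()
acc = (pred == y_test).mean()
print(f"accuracy on unseen images: {acc:.2f}")
```

On this easy synthetic task the network classifies most held-out images correctly; real image datasets need far bigger models, but the train/test split logic is the same.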
What we still don’t know about classical models is the ideal size. Initially, researchers thought that a model that was neither too small nor too big, in terms of its number of parameters, would be the best choice for learning, and there are plenty of theories explaining why that should be the case. But then they tried making the models super big and found that generalization just kept getting better and better.
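That surprise can be reproduced in a toy experiment. The sketch below is an illustrative random-features regression (an assumption of mine, not the researchers' actual setup): fitting 40 noisy samples of a sine curve with the minimum-norm least-squares solution over random ReLU features. Classical intuition says the model whose parameter count matches the data size should be near the sweet spot, yet it is typically the worst, while a far larger model fits the training data exactly and still generalizes.

```python
import numpy as np

rng = np.random.default_rng(1)

# 40 noisy training samples of a smooth target function.
n_train = 40
x_train = rng.uniform(-1, 1, n_train)
y_train = np.sin(3 * x_train) + rng.normal(0, 0.1, n_train)
x_test = np.linspace(-1, 1, 200)
y_test = np.sin(3 * x_test)

def features(x, width, seed=0):
    # Random ReLU features; `width` plays the role of model size.
    r = np.random.default_rng(seed)
    w = r.normal(0, 2, width)
    b = r.uniform(-1, 1, width)
    return np.maximum(0, np.outer(x, w) + b)

results = {}
for width in [10, 40, 1000]:  # too small, "critical" size, very big
    Phi = features(x_train, width)
    # lstsq returns the minimum-norm solution when width > n_train.
    coef, *_ = np.linalg.lstsq(Phi, y_train, rcond=None)
    train_mse = np.mean((Phi @ coef - y_train) ** 2)
    test_mse = np.mean((features(x_test, width) @ coef - y_test) ** 2)
    results[width] = (train_mse, test_mse)
    print(f"width={width:5d}  train MSE={train_mse:.2e}  test MSE={test_mse:.2e}")
```

Typically the width-1000 model drives training error to essentially zero and still beats the width-40 model on test error, which is the overparameterized regime the interview describes.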