TL;DR: Hadoop is a framework for distributed storage and distributed processing of very large data sets across a cluster of computers.
The distributed storage part means that the data is stored in pieces (blocks) on multiple computers (nodes) in a resilient way: each block is replicated, so if one node's hardware fails, the data remains available. This storage system is called HDFS (the Hadoop Distributed File System) and acts like one single storage device (a single hard disk, essentially), even though it is made up of many nodes that each hold only a portion of the overall data. This is very cost-efficient, because one can keep adding commodity hardware to keep up with growing amounts of data (horizontal scaling).
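To make this concrete, here is a minimal sketch (in Java, Hadoop's native language) of writing a file through Hadoop's FileSystem API. The NameNode address and file path are hypothetical; the point is that your code addresses the whole cluster as if it were one disk, while HDFS splits the file into blocks and replicates them across DataNodes behind the scenes.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsWriteExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Hypothetical cluster address; normally picked up from core-site.xml.
        conf.set("fs.defaultFS", "hdfs://namenode:9000");

        try (FileSystem fs = FileSystem.get(conf);
             FSDataOutputStream out = fs.create(new Path("/data/example.txt"))) {
            // One logical write to one logical file; HDFS handles the
            // block splitting and replication across DataNodes for us.
            out.writeUTF("hello from a single 'virtual' disk");
        }
    }
}
```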
The next part is distributed processing, which means that a task (anything from simple to complex) can be split into smaller pieces and handed to the nodes, each of which uses its own memory and CPU to process its piece in parallel. That way, just as with distributed storage, many computers pool their resources to act as a single "super computer" when handling a task.
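The canonical illustration of this model is Hadoop's WordCount MapReduce job, sketched below (condensed from the standard Hadoop tutorial example; input and output paths are illustrative). Each mapper independently processes one split of the input on its own node, and reducers then merge the per-word counts produced across all those nodes.

```java
import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {
    // Runs once per line, but only over this mapper's own input split,
    // so many mappers work on different parts of the data in parallel.
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private final static IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE); // emit (word, 1)
            }
        }
    }

    // Receives every count emitted for one word, possibly from many nodes,
    // and folds them into a single total.
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) sum += v.get();
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class); // pre-aggregate on each node
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // e.g. /data/in
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // e.g. /data/out
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

Note that the job code never says which nodes do what: Hadoop schedules the mappers near the HDFS blocks they read, which is exactly the pairing of distributed storage and distributed processing described above.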