One trend I've noticed in studying different types of filters is that particle filters generally rely on comparing measurements against known data about the surrounding environment. Kalman filters, on the other hand, do not seem nearly as dependent on pre-existing knowledge of the environment.
A particle filter works by taking a measurement, randomly sampling an n-dimensional grid, and checking each point (each particle, that is) to see how consistent the measurements are with that point. The more consistent a particle is with the measurements, the higher it is weighted, since it is more likely to represent our actual location. At each time step, we then resample, which means eliminating the low-probability particles and placing new particles at higher-probability locations, and then we 'step' forward once through time. When this 'step' happens, we make an assumption about what a reasonable range of velocities and directions could be for the object being tracked, and we randomly move each particle within that range of acceptable movement. Then the process starts over: measure, weight each particle, resample, and step. After a few time steps, the particles should coalesce around fewer and fewer locations and then (ideally) around a single point, which is our most likely location.
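To make that loop concrete, here's a minimal sketch of the measure/weight/resample/step cycle in one dimension. Everything here is a made-up toy: the true position, the noise levels, and the function name are my own assumptions, not anything from a real tracking system.

```python
import math
import random

def particle_filter_demo(true_pos=5.0, n_particles=500, n_steps=10,
                         meas_noise=0.5, motion_noise=0.3):
    # Start with particles scattered uniformly over the space.
    particles = [random.uniform(0.0, 10.0) for _ in range(n_particles)]
    for _ in range(n_steps):
        # Measure: a noisy reading of the true position.
        z = true_pos + random.gauss(0.0, meas_noise)
        # Weight: particles consistent with the measurement score higher.
        weights = [math.exp(-((p - z) ** 2) / (2 * meas_noise ** 2))
                   for p in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        # Resample: draw new particles in proportion to their weights,
        # which eliminates low-probability particles.
        particles = random.choices(particles, weights=weights, k=n_particles)
        # Step: jitter each particle within a plausible range of motion.
        particles = [p + random.gauss(0.0, motion_noise) for p in particles]
    # After a few steps the cloud should have coalesced near the true position.
    return sum(particles) / n_particles
```

Running it, the returned estimate lands close to `true_pos`, which is the "coalescing" behavior described above. Note the only reason this works is that the weighting step gets to compare each particle against a known quantity, which is exactly the dependence on outside knowledge that worries me.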
When a first responder rushes into a building, they are equally likely to go in any direction, with a wide range of velocities. They could be climbing a rope, they could be falling down a staircase, or even just walking along flat ground. If they're at a full sprint, they could easily be going 8 or 9 meters per second, and if they're at a slow walk, they could be going less than 1 meter per second. We know nothing about the geometry of the building they're going into, so particle filter-esque techniques like "check if their altitude is increasing at 4 feet per second while they are moving on the x and y axes at less than 1 foot per second; if so, they are probably riding the elevator" don't seem to work. The only thing we can say with any confidence is that they are more likely to continue in the direction they were going than to reverse direction. It is possible they bounced off a wall and are now going the other way, but that is certainly less likely.
A Kalman filter, on the other hand, seems to operate on the principle that we trust each sample of data only a given amount. As we get more and more measurements that agree with each other, we trust the resulting estimate more. In this case, the LSM9DS0 introduces noise into the data it gives us, as a natural byproduct of not being an ideal sensor. With a Kalman filter, spurious accelerations and rotations would become suspect: we trust each measurement a little, but we don't trust any one measurement completely.
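That "trust each measurement only a little" idea can be sketched with a scalar Kalman filter estimating a single slowly-drifting value. This is just an illustration of the principle, not code for the LSM9DS0; the variances are made-up numbers I chose for the example.

```python
def scalar_kalman(measurements, meas_var=1.0, process_var=0.01):
    # Initial guess: the first measurement, held with low confidence
    # (a large estimate variance p).
    x = measurements[0]
    p = 1.0
    for z in measurements[1:]:
        # Predict: the true value may drift, so uncertainty grows a little.
        p += process_var
        # Update: the Kalman gain k decides how much to trust this sample.
        # High p (we're unsure) -> k near 1, lean on the measurement;
        # low p (we're confident) -> k near 0, mostly keep our estimate.
        k = p / (p + meas_var)
        x = x + k * (z - x)
        p = (1 - k) * p
    return x, p

# Noisy readings that roughly agree with each other:
estimate, variance = scalar_kalman([10.2, 9.8, 10.1, 9.9, 10.0])
```

After a handful of agreeing measurements, the estimate settles near 10 and the variance `p` has shrunk well below its starting value, which is the "trust grows as measurements agree" behavior. A single spurious reading moves the estimate only by a fraction `k` of the error, so no one sample is trusted completely.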
While the two filters seem to be grouped together fairly often, they seem rather distinct to me. I have yet to see an example of a particle filter that doesn't rely on pre-existing knowledge of the outside world, which makes it hard for me to figure out how I would use one without that knowledge.
So, I'm considering going back to Kalman filters now. I understand it can be difficult to tune the parameters correctly, but based on the research I've done today, I don't see how a particle filter could be applied to this situation. Incidentally, Kalman filters also have a lower computational load, which is not a bad thing on an embedded system. If anyone has suggestions for how to use a particle filter here, I'm definitely open to the idea.