So, IoT is (finally) definitely coming. What we are looking at is an ever-increasing number of sensors in the environment, generating huge amounts of data.
It stands to reason that to deal with that, intelligence must be moved “down”, as close to the sensors as possible. In this way, we don’t actually need to move that much data around – the sensors can extract the relevant information locally, and transmit the juicy parts of what’s happening to some central system. This is the core idea behind Edge computing.
Now, if we want to cover the world in sensors, they must be as cheap as possible while still being able to do their job. Which means that whatever intelligence they need had better be streamlined to the limit, since computation has a cost. This limitation doesn’t really apply to the central system, or at least not as heavily.
Both the sensors and the central system will be dealing with highly complex, multi-dimensional data; this points to Machine Learning playing the lead role in enabling them to do their job (for any non-trivial job, at least).
You can attack Machine Learning models, trying to fool them with carefully prepared data. There seems to be some indication that trying to make a model robust to attacks actually decreases its prediction performance. This might mean that, rather than making models robust, one would have a separate system checking that input data doesn’t look “attack-y”, whatever that means in practice.
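To make the “carefully prepared data” idea concrete, here is a minimal sketch of a gradient-sign (FGSM-style) perturbation against a toy logistic-regression model. The weights and inputs are made-up illustrative numbers, not a real trained model:

```python
import numpy as np

# A "trained" logistic-regression model: weights w, bias b (assumed values).
w = np.array([1.5, -2.0, 0.5])
b = 0.1

def predict(x):
    # Probability of class 1 under the logistic model.
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

x = np.array([0.4, -0.3, 0.8])   # a clean input the model confidently calls class 1
print(predict(x))                # ~0.85

# FGSM: nudge the input a small step in the direction that increases the loss.
# For a logistic model with true label 1, that direction is -sign(w),
# so the attack is simply x - eps * sign(w).
eps = 0.5
x_adv = x - eps * np.sign(w)
print(predict(x_adv))            # ~0.43 -- the prediction has flipped
```

The point is that `x_adv` differs from `x` by at most `eps` in each coordinate, yet the model’s answer changes; real attacks do the same thing to deep networks, just with gradients computed through many layers.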
All of these pieces and possibilities paint an extremely interesting panorama for cyber warfare, or general malicious attacks towards future (present?) IoT systems.
Sensors on the Edge are probably going to be very vulnerable to adversarial attacks. There are very few resources to spare for “sanity checking” malicious input, and it’s easy to imagine the robustness/performance trade-off of the models being skewed toward performance. On the other hand, in the “massive sensor deployment” scenario, attacking enough sensors at once to affect the overall system might be tricky. Also… how do you provide malicious data to an accelerometer?
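For scale, here is a hypothetical sketch of the kind of cheap “does this look attack-y?” filter an edge device could afford: flag readings that sit far outside the running statistics of what the sensor has seen so far. The class name, threshold, and statistic are all assumptions, not a recipe:

```python
# Hypothetical lightweight input guard for a resource-constrained sensor.
# Keeps a running mean/variance (Welford's online algorithm) and flags
# readings more than `threshold` standard deviations from the mean.
class DriftGuard:
    def __init__(self, threshold=4.0):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0            # running sum of squared deviations
        self.threshold = threshold

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def is_suspicious(self, x):
        if self.n < 10:          # not enough history to judge yet
            return False
        std = (self.m2 / (self.n - 1)) ** 0.5
        return abs(x - self.mean) > self.threshold * max(std, 1e-9)

guard = DriftGuard()
for reading in [1.0, 1.1, 0.9, 1.05, 0.95, 1.0, 1.02, 0.98, 1.01, 0.99]:
    guard.update(reading)

print(guard.is_suspicious(1.03))   # ordinary reading
print(guard.is_suspicious(9.0))    # wild outlier
```

Of course, this only catches crude outliers; the whole difficulty with adversarial examples is that they are designed to look statistically ordinary, which is exactly why “whatever that means in practice” is doing a lot of work above.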
The central systems are (hopefully) going to be more robust to adversarial attacks, simply because resources are going to be less of a problem. On the other hand, it’s much simpler to fake input data here, if one is able to run some kind of man-in-the-middle scheme.
As for the title: these thoughts have been inspired by yours truly getting up from a chair and getting the million-crawling-ants feeling from my foot. I thank my brain for being robust to such adversarial attacks.