Should LPWA devices talk directly to Azure and AWS?
Ten years ago, a reasonable engineering question was whether a constrained embedded device could run the Azure or AWS SDK and talk directly to Azure IoT Hub or AWS IoT Core.
Today, for LPWA devices, the more important question is whether it should.
For a modern Cat-M or NB-IoT asset tracker, CPU and RAM are no longer the only meaningful constraints. Battery life and data consumption are often the real design limiters. When a device is expected to run for years on a battery and live on a small data plan, every extra byte and every extra second of radio-on time matters.
This is where the architecture changes. A bridge is no longer just a compatibility layer. It becomes an efficiency layer.
A simple LPWA tracker payload
Consider a representative asset tracker that wakes up and sends one ASCII packet containing an event, GNSS location, cell tower context, and a timestamp.
For example:
+RESP:GTNMR,930402,860931070011593,,0,1,1,0,0.3,86,13.9,-75.635525,45.448902,20260421210658,0302,0220,2D83,0358070A,16,0,4188,100,1,1,,20260422021010,00DF$
That payload is only 155 bytes. A simple server ACK such as:
+SACK:00DF$
is only 11 bytes.
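Those sizes are easy to sanity-check. A quick Python check of the two ASCII frames above:

```python
# Sanity-check the over-the-air sizes of the two ASCII frames above.
report = ("+RESP:GTNMR,930402,860931070011593,,0,1,1,0,0.3,86,13.9,"
          "-75.635525,45.448902,20260421210658,0302,0220,2D83,0358070A,"
          "16,0,4188,100,1,1,,20260422021010,00DF$")
ack = "+SACK:00DF$"

print(len(report.encode("ascii")))  # 155 bytes of application payload
print(len(ack.encode("ascii")))     # 11 bytes for the server ACK
```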
That is exactly the kind of traffic LPWA devices are good at sending.
The comparison
We can compare three ways to deliver those same 24 reports per day. In the first model, the device wakes, sends a small UDP message, receives a tiny ACK, and goes back to sleep. In the second model, the device wakes 24 times per day and uses Azure IoT Hub over MQTT/TLS, paying for TCP setup, TLS setup, MQTT setup, publish, acknowledgement, and disconnect every time. In the third model, the device connects once per day, stays attached, sends its 24 telemetry messages through that session, and uses 15-minute MQTT keep-alives to keep the connection open.
The same device-side argument broadly applies to AWS IoT Core because it also expects the constrained endpoint to speak an MQTT/TLS model northbound.
Modeling assumptions
To keep the math transparent, this model isolates transport overhead and uses the same radio assumptions as our earlier LPWA protocol analysis: 20 kbps uplink, 15 kbps downlink, 120 mA while transmitting, 60 mA while receiving, and a 3.7 V supply. The UDP case includes only standard IPv4 and UDP headers (20 + 8 bytes per datagram). The MQTT/TLS case assumes TCP connect, a full TLS 1.2 handshake, MQTT CONNECT/CONNACK, a telemetry publish at QoS 1, PUBACK, and disconnect.
This is still a transport model, not a full carrier behavior model. It does not include operator-specific attach time, retries, RF problems, paging behavior, RRC tail energy, PSM/eDRX side effects, or the cost of keeping the modem camped on the network all day. Those omitted factors usually make the always-attached model look worse in the field than it looks on paper.
For readers who want a broader protocol comparison across CoAP, LwM2M, MQTT, and HTTP, see LPWAN protocols for battery-powered devices.
Device-side results
Using those assumptions, the transport-only model looks like this:
| Model | Daily data | Monthly data | Yearly data | Daily transport energy |
|---|---|---|---|---|
| UDP payload + UDP ACK | 5.2 KB | 0.15 MB | 1.85 MB | 0.89 J |
| Azure MQTT/TLS, reconnect every wake | 109.7 KB | 3.21 MB | 39.10 MB | 16.07 J |
| Azure MQTT/TLS, one daily connection + 15 min keep-alive | 17.2 KB | 0.50 MB | 6.12 MB | 2.72 J |
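The UDP row can be reproduced directly from the modeling assumptions. The sketch below is illustrative: it applies plain IPv4 + UDP headers (28 bytes) to the 155-byte report and the 11-byte ACK, and uses the 20 kbps / 15 kbps, 120 mA / 60 mA, 3.7 V figures stated above.

```python
# Transport-only model for the "UDP payload + UDP ACK" row,
# using the radio assumptions from the modeling section.
UL_BPS, DL_BPS = 20_000, 15_000      # uplink / downlink rates
I_TX, I_RX = 0.120, 0.060            # amps while transmitting / receiving
VOLTS = 3.7
IP_UDP_HDR = 20 + 8                  # IPv4 + UDP headers per datagram
REPORTS_PER_DAY = 24

ul_bytes = (155 + IP_UDP_HDR) * REPORTS_PER_DAY   # 183-byte datagrams up
dl_bytes = (11 + IP_UDP_HDR) * REPORTS_PER_DAY    # 39-byte ACKs down

tx_secs = ul_bytes * 8 / UL_BPS
rx_secs = dl_bytes * 8 / DL_BPS
energy_j = VOLTS * (I_TX * tx_secs + I_RX * rx_secs)

print(f"daily data:   {(ul_bytes + dl_bytes) / 1024:.1f} KB")  # 5.2 KB
print(f"daily energy: {energy_j:.2f} J")                       # 0.89 J
```

The same function of bytes, rates, currents, and voltage generates the MQTT/TLS rows once the handshake and session bytes are added in.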
The cold-reconnect MQTT/TLS model moves about 21x more data than the UDP exchange and uses about 18x more transport energy. Even when the session is held open for the day, it still uses about 3.3x more data and 3x more transport energy than the simple UDP design.
That alone already changes the conversation. For a 155-byte LPWA tracker payload, the question is no longer whether the device has enough flash to fit the SDK. The real problem is that the session machinery can dwarf the application payload.
Why the always-attached model is usually much worse than the table
The table above is still generous to the always-attached design because it only counts wire traffic. In a real LPWA device, keeping the session alive means the modem and MCU cannot spend the day in their deepest sleep state. The device has to remain sufficiently awake to maintain network context, process keep-alives, handle paging, and keep the TCP/TLS/MQTT machinery valid.
A reasonable single planning number is to treat the daily-connection model as averaging about 2 mA across the full day once you include the modem staying attached and the MCU staying available enough to support the session. That is only an estimate, but it is not a wild one. Modern LPWA modem data sheets commonly show sub-mA current only in their deepest sleep states, while network-attached or idle states are materially higher. A 2 mA all-day average for a real "stay connected" design is therefore a conservative planning value, not an extreme one.
At 3.7 V, an average draw of 2 mA across 24 hours works out to about 640 joules per day before we even worry about application work. Add the 2.72 J/day transport result from the keep-alive model and the practical estimate becomes about 643 J/day.
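The arithmetic behind that planning number, as a quick check:

```python
# Energy cost of a 2 mA all-day average at 3.7 V, plus the
# keep-alive model's 2.72 J/day transport energy from above.
AVG_CURRENT_A = 0.002          # 2 mA planning average
VOLTS = 3.7
SECS_PER_DAY = 24 * 3600

attach_j = AVG_CURRENT_A * VOLTS * SECS_PER_DAY   # about 640 J/day
total_j = attach_j + 2.72                         # add transport energy

print(f"attached baseline: {attach_j:.1f} J/day")
print(f"practical total:   {total_j:.1f} J/day")
```

The unrounded sum is about 642 J/day; the 643 figure simply comes from rounding the baseline to 640 before adding the transport energy.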
That changes the story completely:
| Model | Estimated daily energy |
|---|---|
| UDP payload + UDP ACK | 0.89 J/day |
| Azure MQTT/TLS, reconnect every wake | 16.07 J/day |
| Azure MQTT/TLS, one daily connection + keep-alive, transport only | 2.72 J/day |
| Azure MQTT/TLS, one daily connection + keep-alive, estimated real daily power budget | 643 J/day |
Using that estimate, the always-attached model is roughly 700x the UDP design and about 40x the reconnect-on-wake MQTT/TLS model. That sounds dramatic, but it matches the intuition most LPWA engineers already have: once the modem is prevented from spending nearly all of its life in deep sleep, the architecture is no longer behaving like a long-life LPWA endpoint.
That is why battery-powered Cat-M and NB-IoT products are usually designed around short wake windows, fast uplink, and immediate return to sleep. The always-attached model does not just add protocol overhead. It changes the power architecture of the whole product.
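One way to make those joules concrete is to convert them into battery capacity at the same 3.7 V supply. This conversion is our own illustration, not a figure from the model above:

```python
# Convert the daily energy figures into battery terms at 3.7 V.
VOLTS = 3.7

def mah_per_day(joules_per_day: float) -> float:
    """Joules per day -> mAh per day at a fixed supply voltage."""
    return joules_per_day / VOLTS / 3600 * 1000

for label, joules in [("UDP", 0.89),
                      ("MQTT/TLS reconnect", 16.07),
                      ("always-attached estimate", 643.0)]:
    print(f"{label}: {mah_per_day(joules):.2f} mAh/day")
```

At roughly 48 mAh/day, the always-attached estimate burns about 17.6 Ah per year on connectivity alone, while the UDP design spends well under 1 mAh/day on transport.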
Why the bridge matters
If the device sends a 155-byte UDP message, a bridge can terminate that compact southbound protocol and do the expensive work northbound. The bridge can speak bespoke UDP, CoAP, or LwM2M on the device side, then translate to MQTT/TLS, HTTPS, Azure-native services, AWS-native services, or enterprise systems on the cloud side. It can also decode the raw payload, normalize it, enrich it with state and context, and route it to more than one destination.
That matters because the cloud service model is often richer than the device message model. A small LPWA tracker just wants to report an event. An application team may want that same event to become telemetry, update a twin or shadow, populate a queue, trigger a workflow, and land in analytics storage. A bridge lets the device stay simple while still feeding all of those cloud-side needs.
The hidden duplication cost
The more subtle penalty is duplication at the device.
Suppose the application wants the same field data to do two jobs: it should arrive as telemetry, and part of it should also update the Azure device twin or AWS device shadow. Without a bridge, the device often has to represent those as two separate upstream operations. Best case, that means two publishes in the same wake window. Worst case, it means separate transactions with separate session overhead.
On a minimal UDP design, the cost of duplication is easy to see. One 155-byte payload plus one tiny ACK is the whole transaction. If the device has to send that logical content twice, the radio payload roughly doubles. The device sends twice as many application bytes, spends twice as long in the active transmit window, and burns roughly twice the application-side radio energy.
On a direct MQTT/TLS design, duplication is even more awkward. If both operations can be sent in one established session, the incremental cost is smaller than a full reconnect, but it is still unnecessary additional uplink and additional device-side protocol work. If the second operation forces another session or another wake, the penalty becomes much larger.
This is exactly where a bridge earns its place. The LPWA device still sends one compact message. The bridge can then publish telemetry, update a twin or shadow, fan out to queues, and enrich the record without adding any extra bytes or extra radio time to the device.
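As an illustration of that fan-out, here is a minimal sketch of a bridge-side handler. The field positions, function name, and record shapes are all assumptions for illustration; they are not the Tartabit IoT Bridge API or a documented payload map.

```python
# Illustrative bridge-side fan-out: one compact device uplink becomes
# a tiny ACK southbound plus normalized records northbound.
# Field positions and record shapes are assumptions, not a documented map.

def handle_uplink(datagram: bytes) -> tuple[bytes, dict, dict]:
    text = datagram.decode("ascii").rstrip("$")
    fields = text.split(",")
    count = fields[-1]                       # trailing frame counter, e.g. "00DF"

    # Normalize once, at the bridge, instead of on the device.
    telemetry = {
        "device_id": fields[2],              # assumed IMEI position
        "longitude": float(fields[11]),      # assumed coordinate positions
        "latitude": float(fields[12]),
        "raw": text,
    }
    shadow_patch = {"reported": {"location": [telemetry["latitude"],
                                              telemetry["longitude"]]}}

    ack = f"+SACK:{count}$".encode("ascii")  # only this goes back over the air
    return ack, telemetry, shadow_patch
```

From here the bridge, not the device, pays for the MQTT/TLS publish, the twin or shadow update, and any queue fan-out, all funded by a single 155-byte uplink and an 11-byte ACK.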
What this means architecturally
For a Cat-M or NB-IoT tracker, direct hyperscaler SDK integration is often the wrong optimization target. It solves a cloud convenience problem by pushing the cost down onto the device in the form of more bytes over the air, more radio-on time, more battery drain, more firmware complexity, and tighter coupling to one vendor's model of identity, messaging, and device state.
A bridge flips that model. The device stays simple and efficient. Cloud-specific translation moves northbound. Decoding and enrichment happen before application code sees the data. Telemetry, twin, shadow, routing, and queuing can all happen from one device uplink instead of forcing the endpoint to mimic every cloud abstraction directly.
There is also a strategic reason to be careful about embedding hyperscaler-specific device clients deeply into long-life products. Device lifecycles are measured in years, and sometimes in a decade or more. Cloud service strategies are not. Google Cloud IoT Core was discontinued on August 16, 2023. IBM sunset the Watson IoT Platform service on IBM Cloud effective December 1, 2023 without a direct replacement. The lesson is not that Azure or AWS are about to disappear. The lesson is that cloud IoT product lines can be renamed, narrowed, repositioned, or retired on a very different timeline than the devices in the field.
When the cloud-specific logic lives in a bridge instead of in the endpoint firmware, that risk becomes manageable. You can change the northbound cloud integration without recalling the device fleet, rewriting low-level comms code, or repaying the battery penalty of a heavier protocol choice.
That is the real value of the Tartabit IoT Bridge. It is not only a protocol transcoder. It is a low-power ingestion layer that lets LPWA devices speak in the language that is best for the device, while the bridge speaks in the language that is best for Azure, AWS, and enterprise applications.
Whether the southbound protocol is bespoke UDP, CoAP, or LwM2M, the architectural point is the same: keep the device payload small, keep the device asleep as much as possible, and move cloud-specific overhead into the bridge.