Implement concurrent access. Supports well over 5 devices, enough for the house

This commit is contained in:
2026-03-08 17:50:07 -04:00
parent 7452b1b807
commit 5335da5c29
8 changed files with 148 additions and 37 deletions


@@ -21,16 +21,16 @@ Furthermore, the current `static_file_handler` relies on a single shared `rest_c
### 3.1 Backend Configuration (ESP-IDF)
Instead of implementing complex multi-threading (spawning multiple FreeRTOS worker tasks), we will leverage the HTTP server's built-in event loop multiplexing by tuning its configuration:
1. **Increase Socket Limit**: Set `config.max_open_sockets = 10` (or up to `LWIP_MAX_SOCKETS` limit) to provide more headroom for initial connections.
2. **Enable Stale Socket Purging**: Set `config.lru_purge_enable = true`. This is the critical fix. When the socket limit is reached and a new device attempts to connect, the server will intentionally drop the oldest idle keep-alive socket to make room, allowing the new device to load the page seamlessly.
1. **Increase LwIP Socket Limit**: `LWIP_MAX_SOCKETS` is set to `32` in `sdkconfig.defaults`.
2. **Increase HTTP Socket Limit**: Set `config.max_open_sockets = 24`. This deliberately reserves `8` sockets for LwIP internals and outbound connections, guaranteeing the network stack always has headroom to accept a TCP handshake from a new client.
3. **Enable Stale Socket Purging**: Set `config.lru_purge_enable = true`. This is the critical fix: when the 24-socket limit is reached and a new device attempts to connect, the server intentionally drops the oldest idle keep-alive socket to make room, allowing the new device to load the page seamlessly.
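Taken together, the configuration changes above amount to only a few lines. A minimal sketch of the server startup using the standard ESP-IDF `esp_http_server` API (handler registration omitted; `start_webserver` is an illustrative name, not the actual function in this codebase):

```c
#include "esp_http_server.h"

/* sdkconfig.defaults must also raise the LwIP ceiling:
 *   CONFIG_LWIP_MAX_SOCKETS=32
 */
static httpd_handle_t start_webserver(void)
{
    httpd_handle_t server = NULL;
    httpd_config_t config = HTTPD_DEFAULT_CONFIG();

    /* 24 HTTP sockets; the remaining 8 stay reserved for LwIP
     * internals and outbound connections. */
    config.max_open_sockets = 24;

    /* When all 24 are busy, evict the least-recently-used idle
     * keep-alive socket instead of refusing the new client. */
    config.lru_purge_enable = true;

    if (httpd_start(&server, &config) == ESP_OK) {
        /* URI handler registration would go here. */
        return server;
    }
    return NULL;
}
```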
### 3.2 Backend Scratch Buffer Pooling
To safely support multiplexed file serving without heavy `malloc`/`free` overhead on every request, we will replace the single shared scratch buffer with a **dynamically growing Shared Buffer Pool**:
- We will allocate a global pool of scratch memory chunks.
- When `static_file_handler` begins, it will request an available chunk from the pool.
- If all chunks are currently in use by other concurrent requests, the pool will use `realloc` to expand its capacity and create a new chunk.
To safely support multiplexed file serving without heavy `malloc`/`free` overhead on every request, we will replace the single shared scratch buffer with a **Static Shared Buffer Pool**:
- We allocate a global struct holding a fixed array of `MAX_SCRATCH_BUFFERS = 10` buffer slots.
- When `static_file_handler` begins, it will request an available chunk from the pool, allocating a 4KB chunk on the heap only the first time it is used.
- When the handler finishes, the chunk is marked as available, yielding it to the next request.
- This provides isolation between concurrent connections while minimizing heap fragmentation compared to per-request `mallocs`.
- This provides isolation between up to 10 active transmission connections while minimizing heap fragmentation compared to per-request `malloc`/`free`.
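The pool described above can be sketched in plain C. The names (`scratch_slot_t`, `scratch_acquire`, `scratch_release`) are illustrative, not the actual identifiers in this codebase, and no locking is shown on the assumption that all handlers run on the HTTP server's single task:

```c
#include <stdbool.h>
#include <stdlib.h>

#define MAX_SCRATCH_BUFFERS 10
#define SCRATCH_CHUNK_SIZE  4096  /* 4 KB per chunk */

typedef struct {
    char *data;   /* lazily allocated on first use */
    bool  in_use;
} scratch_slot_t;

static scratch_slot_t pool[MAX_SCRATCH_BUFFERS];

/* Acquire a free 4 KB chunk; returns NULL if all 10 are busy
 * or the one-time heap allocation fails. */
static char *scratch_acquire(void)
{
    for (int i = 0; i < MAX_SCRATCH_BUFFERS; i++) {
        if (!pool[i].in_use) {
            if (pool[i].data == NULL) {
                pool[i].data = malloc(SCRATCH_CHUNK_SIZE);
                if (pool[i].data == NULL) {
                    return NULL;
                }
            }
            pool[i].in_use = true;
            return pool[i].data;
        }
    }
    return NULL;  /* pool exhausted: 10 transfers already active */
}

/* Mark the chunk as available for the next request; the heap
 * allocation is kept so it can be reused without a fresh malloc. */
static void scratch_release(char *buf)
{
    for (int i = 0; i < MAX_SCRATCH_BUFFERS; i++) {
        if (pool[i].data == buf) {
            pool[i].in_use = false;
            return;
        }
    }
}
```

An exhausted pool would surface in `static_file_handler` as a failed acquire, which the handler can answer with an HTTP 503 rather than corrupting another request's buffer.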
### 3.3 Frontend Safety (Loading Spinner)
Even with backend improvements, network latency or heavy load might cause delays. We will implement a global request tracker to improve perceived performance:
@@ -44,7 +44,7 @@ Even with backend improvements, network latency or heavy load might cause delays
|---|---|---|---|
| **True Multi-Threading (Multiple Worker Tasks)** | Can process files fully in parallel on both cores. | High memory overhead for stack space per task; over-engineered for simple static file serving. | **Rejected**. Relying on the event loop's multiplexing is sufficient for local network use cases. |
| **Per-Request `malloc` / `free`** | Simplest way to isolate scratch buffers. | High heap fragmentation risk; computationally expensive on every HTTP request. | **Rejected**. |
| **Dynamically Resizing Pool (`realloc`)** | Low overhead; memory footprint only grows organically to the maximum concurrent need and stabilizes. | Slightly more complex to implement the pool state management. | **Selected**. Best balance of performance and memory safety. |
| **Fixed Pool (10 buffers)** | Low overhead; memory footprint grows lazily with demand up to a hard cap (10 * 4KB = 40KB) and then stabilizes. | Strict limit on how many connections can be actively transmitting data at the same instant. | **Selected**. Best balance of performance and memory safety. |
## 5. Potential Future Improvements
- If 10 fixed buffers prove insufficient during an unexpected spike, we could implement a cleanup routine that frees the lazily allocated chunks back to a baseline when the server is idle, or revisit a dynamically growing pool.