making UDP reliable

Nahuel Molina | Silvio
7 min read · Jan 14, 2024


The UDP protocol offers faster data transfer than alternatives like TCP or a WebSocket connection (which runs over TCP).

TCP implies a prior handshake, packets that are identified and delivered in order to the receiver, and acknowledgement of each delivery. That overhead can look counterproductive when we prioritize speed, which is crucial in video streaming or game development.

However, those alone are not sufficient reasons to choose UDP over TCP. A real risk, one that won't destroy our system but does reflect its low quality, is the loss of data during transmission. The probability of that happening can be reduced by ensuring enough network bandwidth and practicing good traffic management, but that requires solid hardware performance, and sometimes the request volume and data load are hard to control.

But what do we learn by letting the machine do the rough part? Maybe a bad experience can make us not only understand the inner workings of a system but also appreciate when it is well made. Let's bring reliability to UDP.

nodejs

Node.js's dgram module (with the 'udp4' socket type) gives us the basic tools to apply our modest knowledge of networking and file transfer.

First, create two basic npm projects, which will communicate with each other.

npm init -y
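Before the real project, it helps to see the dgram module at its simplest. This hello-world pair is my own sketch, not part of the project code, and port 41234 is arbitrary:

const dgram = require('dgram');

// receiver: prints whatever arrives on port 41234
const receiver = dgram.createSocket('udp4');
receiver.on('message', (msg, rinfo) => {
  console.log(`got "${msg}" from ${rinfo.address}:${rinfo.port}`);
});
receiver.bind(41234);

// sender: fires one datagram and closes
const sender = dgram.createSocket('udp4');
sender.send(Buffer.from('hello'), 41234, 'localhost', (err) => {
  if (err) console.error(err);
  sender.close();
});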

FFmpeg and nginx

FFmpeg and Nginx are the tools I use in this demonstration. You don't need to master them, but I want to give you context for implementing the UDP communication.

FFmpeg will stream a video sample, an .mp4 file. It does this with just the following command, which we run in a CMD window.

ffmpeg -i D:\djangopros\stream_site\samples\taladro.mp4 -c:v libx264 -preset veryfast -c:a aac -f flv rtmp://localhost:1935/live/stream

With this setup, the stream gets sliced into .ts chunk files, and an index file (.m3u8) is created that lists the chunks in the order they should be played. This is the HLS (HTTP Live Streaming) format.
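To make that concrete, the generated index is a small text file that looks roughly like this; the segment names and durations here are just illustrative:

#EXTM3U
#EXT-X-VERSION:3
#EXT-X-MEDIA-SEQUENCE:0
#EXT-X-TARGETDURATION:5
#EXTINF:4.000,
stream-0.ts
#EXTINF:4.000,
stream-1.ts
#EXTINF:4.000,
stream-2.ts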

However, the command alone won't work. We are missing a piece.

The other part of the process is carried out by Nginx. This server simply takes our FFmpeg video stream on a specific RTMP port and serves it, making the streaming run properly. We can even play it with VLC by pointing at the RTMP URL established in the Nginx configuration and at the end of the FFmpeg command (rtmp://localhost:1935/live/stream).

Open two CMD windows and type nginx in one. Now we can start the streaming by running the FFmpeg command above in the other window. Nginx needs some specific settings that I recommend you look up online; beyond the technical details, the point here is to get the concept. A drawing to clarify...

me: I drew it

One important thing: Nginx is configured to save the streaming files in a specific folder, which is normally called hls.
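For reference, the relevant part of an nginx.conf for this kind of setup, assuming Nginx was built with the nginx-rtmp module, looks roughly like the following; the hls_path is a hypothetical example, point it at whatever folder you use:

rtmp {
    server {
        listen 1935;
        application live {
            live on;              # accept the incoming RTMP stream from FFmpeg
            hls on;               # slice it into .ts chunks plus the .m3u8 index
            hls_path /tmp/hls;    # hypothetical folder; the article just calls it "hls"
            hls_fragment 4s;      # approximate duration of each .ts chunk
        }
    }
}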

identifiers

Now we are going to order the transmitted buffers by prepending an identifier to each of them: a single integer value, or even several identifiers.

We can check the MTU of our network, which is normally set in the router. Networks generally use an MTU of 1500 bytes, so one option is to keep each datagram at that size and reserve the first byte for numbering the buffer, leaving 1499 bytes for data, as long as our numbers stay between 0 and 255 (a single byte can represent 2⁸ values). If we expect identifiers larger than 255, we can use two bytes instead, raising the limit to 2¹⁶ − 1. This avoids unnecessary risk at the cost of slightly less data per packet. And to explore building packet headers by hand, we can add a couple of extra identifiers besides the initial ID.
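As a quick illustration of that one-byte versus two-byte trade-off (my own snippet, not part of the project code):

// one byte can number at most 256 packets (values 0-255)
const oneByte = Buffer.alloc(1);
oneByte.writeUInt8(255);              // anything above 255 would throw

// two bytes extend the range to 0-65535
const twoBytes = Buffer.alloc(2);
twoBytes.writeUInt16BE(65535);        // plenty of room to number the chunks of a file

console.log(oneByte.readUInt8(0), twoBytes.readUInt16BE(0)); // 255 65535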

The definitive packet will have the following structure…

me: I drew it

2 bytes for the buffer identifier, 2 bytes for the size of the buffer holding the file name, and finally the file name itself as a buffered string, for example 12 bytes long. They are all concatenated. We then calculate the packet's data size (1500 − (2 + 2 + 12) = 1484 bytes) and, last, take a chunk of that size from the file we want to transmit. The final step is to concatenate the header with the data, and the packet is ready to be sent to the receiving UDP server.

The main goal is to transmit the .ts files and the .m3u8 index file over the network. Each chunk (.ts) will be fragmented into segments of 1484 bytes, so every .ts should produce roughly the same number of packets, given that the .ts chunks come out at similar sizes.
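As a rough sanity check with my own numbers, assuming a 12-byte file name and a hypothetical 1 MB .ts chunk:

// header: 2 bytes (packet ID) + 2 bytes (name length) + 12 bytes (file name)
const MTU = 1500;
const headerSize = 2 + 2 + 12;
const dataPerPacket = MTU - headerSize;          // 1484 bytes of file data per packet

// a hypothetical 1 MB .ts chunk would then need this many packets
const tsSize = 1_000_000;
console.log(Math.ceil(tsSize / dataPerPacket));  // 674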

implementation in Nodejs

Consider the two UDP servers: the frontend asks for files and the backend replies with them. Let's create the backend server:

// udpserver.js — creates and exports the UDP socket
const dgram = require('dgram');
const server = dgram.createSocket('udp4');

module.exports = server;
// you can use multiple scripts or just one (with one, skip the export and the require)

// main script
const path = require('path');
const fs = require('fs');
const server = require('./udpserver.js');

const { TakeFiles, LookForFiles, SendFiles, Call } = require('./filesRoutines.js');

const dotenv = require('dotenv');
dotenv.config();

const hlsDirectory = process.env.HLS_PATH;

var ADDRESS = process.env.UDP_SERVER_HOST;
var PORT = process.env.UDP_SERVER_PORT;

const isPathValid = (dir) => { // just checks that the directory exists
  try {
    fs.accessSync(dir);
    return true;
  } catch (error) {
    return false;
  }
};


function TakeMessage(msg){
  var warn_msg = 'The HLS path must be reconfigured';

  if (!isPathValid(hlsDirectory)) { return console.log(warn_msg); }

  var command = msg.toString(); // the message arrives as a Buffer

  if (command == 'start') {

    if (LookForFiles(hlsDirectory)) {
      TakeFiles(hlsDirectory);
    } else {
      console.log('there are no files');
    }

  } else if (command == 'Request data') {
    console.log('working on: ', command);
  }
}


server.on('error', (err) => {
  console.error(`server error:\n${err.stack}`);
  server.close();
});

server.on('message', (msg, rinfo) => {
  console.log(`server got: ${msg} from ${rinfo.address}:${rinfo.port}`);
  TakeMessage(msg);
});

server.on('listening', () => {
  var address = server.address();
  console.log(`server listening ${address.address}:${address.port}`);
});

server.bind(PORT, ADDRESS);

This server waits for the 'start' message, then calls TakeMessage(), and if everything checks out it calls TakeFiles(hlsDirectory), nothing strange. Let's look at what that function does. Within its scope we will refer to hlsDirectory directly as DIR_PATH, then...


// filesRoutines.js — TakeFiles reads the hls folder and sends every file in fragments
// (this module also needs fs, path, the exported server socket and the HTTP_UDP_* env vars)
function TakeFiles(DIR_PATH) {

  fs.readdir(DIR_PATH, (err, files) => {

    if (err) { return console.log('Error reading directory'); }

    const HandlerMSG = (err) => {
      if (err) throw err;
    };

    let filecounter = 0;

    files.forEach(file => {

      const filepath = path.join(DIR_PATH, file);

      if (filepath.includes('.ts') || filepath.includes('.m3u8')) {
        const fileBuffer = fs.readFileSync(filepath);
        const FileNameBuffer = Buffer.from(file, 'utf-8');

        const MTU = 1500;
        const packetSize = MTU - (FileNameBuffer.length + 4);

        let buffer_counter = 0;
        // instead of computing how many fragments the file needs up front,
        // we just walk through fileBuffer in steps of packetSize bytes
        for (let i = 0; i < fileBuffer.length; i += packetSize) {

          var IdentifierBuffer = Buffer.alloc(2);
          IdentifierBuffer.writeUInt16BE(buffer_counter);

          var FilenamesizeBuffer = Buffer.alloc(2);
          FilenamesizeBuffer.writeUInt16BE(FileNameBuffer.length);

          // take the fragment and send it, header first
          const packetdata = fileBuffer.slice(i, i + packetSize);
          const packet = Buffer.concat([IdentifierBuffer, FilenamesizeBuffer, FileNameBuffer, packetdata]);

          // HTTP_UDP_PORT / HTTP_UDP_HOST (from .env) point at the receiving UDP server
          server.send(packet, 0, packet.length, HTTP_UDP_PORT, HTTP_UDP_HOST, HandlerMSG);

          buffer_counter++;
        }

        console.log('file:', file);
      }

      filecounter++;
    });
  });
}

fs.readdir() returns the files contained in the directory (or an error). With a forEach() loop we access each file. We create a buffer for the file and another for its name. We also set the MTU and calculate the size the data buffer should have so the whole packet adds up to 1500 bytes.

const fileBuffer = fs.readFileSync(filepath);       // a buffer with the file's contents
const FileNameBuffer = Buffer.from(file, 'utf-8');  // a buffer with its name

const MTU = 1500;
const packetSize = MTU - (FileNameBuffer.length + 4); // data bytes left per packet

The following block does the identification itself. The first identifier tells us the order of a given buffer within a specific file. For this we create a two-byte buffer and fill it with buffer_counter, plus another buffer containing the size of the file's name. Lastly, we concatenate these buffers with the buffer that holds the file name and with the chunk extracted from the .ts.

let buffer_counter = 0;

for (let i = 0; i < fileBuffer.length; i += packetSize) {

  var IdentifierBuffer = Buffer.alloc(2);         // reserve two bytes
  IdentifierBuffer.writeUInt16BE(buffer_counter); // fill them with the packet number

  var FilenamesizeBuffer = Buffer.alloc(2);                 // reserve two bytes
  FilenamesizeBuffer.writeUInt16BE(FileNameBuffer.length);  // fill them with the name's length

  // take the fragment and send it, header first
  const packetdata = fileBuffer.slice(i, i + packetSize);
  const packet = Buffer.concat([IdentifierBuffer, FilenamesizeBuffer, FileNameBuffer, packetdata]);

  server.send(packet, 0, packet.length, HTTP_UDP_PORT, HTTP_UDP_HOST, HandlerMSG);

  buffer_counter++;
}

At the end of the loop we have sent an entire file in fragments.

the UDP receiver

In this case I have created two servers, an HTTP one and the UDP one. The start command is sent as a URL parameter, as follows:

// assumed setup for this script (not shown in the article): an Express app for the
// HTTP side plus its own 'udp4' socket, with hosts and ports read from .env
const dgram = require('dgram');
const express = require('express');
const dotenv = require('dotenv');
dotenv.config();

const app = express();
const server = dgram.createSocket('udp4');

// HTTP_UDP_* is where this socket listens; UDP_* is the backend server's address
const HTTP_UDP_HOST = process.env.HTTP_UDP_HOST;
const HTTP_UDP_PORT = process.env.HTTP_UDP_PORT;
const UDP_HOST = process.env.UDP_SERVER_HOST;
const UDP_PORT = process.env.UDP_SERVER_PORT;

server.on('error', (err) => {
  console.error(`server error:\n${err.stack}`);
  server.close();
});

server.on('message', (msg, rinfo) => {
  ReceivedBuffers(msg);
});

server.on('listening', () => {
  var address = server.address();
  console.log(`UDP listening ${address.address}:${address.port}`);
});

server.bind(HTTP_UDP_PORT, HTTP_UDP_HOST);

app.get('/processdata/:id', (req, res) => {

  var command = req.params.id;
  const message = Buffer.from(command); // the command buffered, ready to send

  const Handler = (err) => {
    if (err) throw err;
  };

  if (command == 'start') {
    // forward the 'start' command to the backend UDP server
    server.send(message, 0, message.length, UDP_PORT, UDP_HOST, Handler);
  }

  res.send('starting');
});

app.listen(3000); // port 3000 is just an example

The UDP server coexists with an Express application. The function that handles the messages coming in from the UDP backend is ReceivedBuffers(msg).

var File = {
  currentFile: 'none'
};

var acumulatedBuffers = Buffer.alloc(0); // everything received so far for the current file

const ReceivedBuffers = (Bff) => {

  var BufferID = Bff.readUInt16BE(0);     // which fragment of the file this is
  var string_size = Bff.readUInt16BE(2);  // how many bytes the file name occupies
  var filename = Bff.toString('utf-8', 4, 4 + string_size);

  if (filename != File.currentFile) {

    if (File.currentFile != 'none') {
      // the accumulated buffer belongs to the file we were receiving until now
      WriteStreamingFile(acumulatedBuffers, File.currentFile);
    }
    console.log(File.currentFile);
    File.currentFile = filename;
  }

  // the data starts right after the 2 + 2 header bytes plus the file name
  var BufferData = Bff.slice(4 + string_size);

  BufferReception(BufferData, BufferID); // keep the fragment, indexed by its ID

  acumulatedBuffers = Buffer.concat(BUFFERS);
  // BUFFERS held the fragments as a list; now they are one concatenated buffer
}

The size, the number of bytes that the name occupies, was sent so we know where the name ends. If the buffered name is 12 bytes long, we can slice the received buffer from byte 4 to byte 16, and that slice is the name of the file.
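As a concrete example of that slicing, here is a hypothetical header for a 12-byte file name; the name and the ID are made up, not taken from the project:

// bytes 0-1: packet ID, bytes 2-3: name length, bytes 4-15: the name itself
const header = Buffer.concat([
  Buffer.from([0x00, 0x07]),             // packet ID 7
  Buffer.from([0x00, 0x0c]),             // name length 12
  Buffer.from('example00.ts', 'utf-8'),  // the 12-byte name
]);

console.log(header.readUInt16BE(0));           // 7
console.log(header.readUInt16BE(2));           // 12
console.log(header.toString('utf-8', 4, 16));  // 'example00.ts'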

A File object is created, and its currentFile property tells us when we start reading a different file. In that case the accumulated buffer is passed to WriteStreamingFile(), which writes the file to disk. The main objective of this post is to show what it means to be reliable by implementing identifiers.
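The article doesn't show BufferReception(), BUFFERS, or WriteStreamingFile(), so here is a minimal sketch of what they could look like; the names, the output folder, and the details are my own assumptions, but the key idea is that storing each fragment at the index given by its identifier keeps the file ordered even when datagrams arrive out of order:

const fs = require('fs');
const path = require('path');

// fragments of the file currently being received, indexed by their packet ID
let BUFFERS = [];

const BufferReception = (bufferData, bufferID) => {
  // a late or out-of-order datagram still lands in its correct slot;
  // a lost datagram would leave a hole here (nothing asks for a resend yet)
  BUFFERS[bufferID] = bufferData;
};

const WriteStreamingFile = (accumulated, filename) => {
  // hypothetical output folder; point it wherever your HTTP server serves HLS from
  const outPath = path.join(__dirname, 'received_hls', filename);
  fs.writeFile(outPath, accumulated, (err) => {
    if (err) console.error('could not write', filename, err);
  });
  BUFFERS = []; // start clean for the next file
};

With the fragments indexed by their identifiers, out-of-order delivery no longer scrambles the file, and that small piece of ordering is the reliability this post set out to add on top of UDP.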
