Videstra is a company that provides streams from IP cameras placed in public places in the USA. Their clients are broadcasting companies that mostly use those streams as backgrounds for weather forecast programmes, as well as to illustrate local news.
We were asked to create a web service that would transcode selected streams from the company’s cameras into browser-playable formats, as well as a simple web player for the transcoded streams that could be embedded on clients’ websites.
The web service needed to handle many streams simultaneously. It also had to cope with disabled cameras without crashing the other streams and automatically reconnect to them once they were re-enabled (sometimes after a long period).
The service needed to be accessible through a REST API and a CMS dashboard. The client needed to be able to add, remove, or update RTSP links to cameras and to turn the service for each camera on and off. The CMS dashboard also needed to provide basic view metrics for analysis.
For each stream, there needed to be an option to add an overlay picture that would appear on the video stream in the web player (e.g. the current date and time, weather pictograms, the name of the city where the camera is placed, etc.).
The web player needed to start playing a stream automatically when loaded in the browser and to stop after every 2 minutes of playback until the user pressed play manually.
We created an application in Elixir with the Phoenix and Phoenix LiveView frameworks to provide the REST API and the CMS dashboard.
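The camera-management endpoints can be sketched as an ordinary Phoenix router scope. All module, controller, and route names below are hypothetical illustrations, not the actual Videstra API:

```elixir
# Hypothetical Phoenix router sketch for the camera management REST API.
# Module and route names are illustrative; the real API may differ.
defmodule StreamingServiceWeb.Router do
  use Phoenix.Router
  import Phoenix.Controller, only: [accepts: 2]

  pipeline :api do
    plug :accepts, ["json"]
  end

  scope "/api", StreamingServiceWeb do
    pipe_through :api

    # add / list / update / remove RTSP camera links
    resources "/cameras", CameraController, except: [:new, :edit]

    # turn the service for a single camera on or off
    post "/cameras/:id/enable", CameraController, :enable
    post "/cameras/:id/disable", CameraController, :disable
  end
end
```

The CMS dashboard can then sit in a separate browser-facing scope, with LiveView pages driving the same camera records.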
Membrane was used to connect to each camera over the RTSP protocol and to negotiate the RTP connection between the service and the camera. We needed to manually manage the UDP ports used for the RTP connections, as there is a finite number of available ports. We also needed to isolate the connection handling for each camera, so that a crashing or malfunctioning camera, or an unreadable stream, could not bring down the rest of the app. Erlang’s OTP and its process supervision trees were a natural fit for this job.
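The per-camera isolation described above can be sketched with standard OTP building blocks: one supervised process per camera, restarted independently of the others, retrying when a camera is unreachable. All module names and the retry interval here are hypothetical, and the actual RTSP/RTP setup is stubbed out:

```elixir
# One supervised process per camera: if a connection crashes, only that
# camera's process is restarted; the rest of the app is unaffected.
defmodule CameraSupervisor do
  use DynamicSupervisor

  def start_link(opts),
    do: DynamicSupervisor.start_link(__MODULE__, opts, name: __MODULE__)

  @impl true
  def init(_opts), do: DynamicSupervisor.init(strategy: :one_for_one)

  def start_camera(rtsp_url),
    do: DynamicSupervisor.start_child(__MODULE__, {CameraConnection, rtsp_url})
end

defmodule CameraConnection do
  use GenServer

  # hypothetical retry interval for disabled/unreachable cameras
  @retry_interval :timer.seconds(30)

  def start_link(rtsp_url), do: GenServer.start_link(__MODULE__, rtsp_url)

  @impl true
  def init(rtsp_url) do
    send(self(), :connect)
    {:ok, %{rtsp_url: rtsp_url}}
  end

  @impl true
  def handle_info(:connect, state) do
    case connect(state.rtsp_url) do
      :ok ->
        {:noreply, state}

      {:error, _reason} ->
        # camera disabled or unreachable: schedule a retry instead of crashing,
        # so reconnection happens automatically when the camera comes back
        Process.send_after(self(), :connect, @retry_interval)
        {:noreply, state}
    end
  end

  # placeholder for the actual RTSP negotiation done with Membrane
  defp connect(_rtsp_url), do: {:error, :not_implemented}
end
```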
Membrane was also used to transcode each RTP stream into an HLS stream. We created a pipeline in which RTP packets are collected from a UDP port and passed through further processes that decode, parse, and re-encode them and, finally, output them as an HLS playlist. The resulting HLS files were then served by a separate Nginx server.
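Such a pipeline can be sketched roughly as follows. The module names come from Membrane’s UDP, RTP, H264, and HTTP adaptive stream plugins, but the exact options and pad handling depend on the plugin versions, so treat this as an illustration of the element chain rather than a drop-in implementation:

```elixir
# Simplified sketch of an RTP -> HLS Membrane pipeline (not the exact
# production code; options and pad setup vary across plugin versions).
defmodule RtpToHlsPipeline do
  use Membrane.Pipeline

  @impl true
  def handle_init(_ctx, opts) do
    spec = [
      # RTP packets arrive on the UDP port reserved for this camera
      child(:source, %Membrane.UDP.Source{local_port_no: opts[:rtp_port]})
      |> child(:rtp, Membrane.RTP.SessionBin),

      # the HLS sink writes the playlist and segments to disk,
      # where the separate Nginx server serves them from
      child(:hls, %Membrane.HTTPAdaptiveStream.SinkBin{
        manifest_module: Membrane.HTTPAdaptiveStream.HLS,
        storage: %Membrane.HTTPAdaptiveStream.Storages.FileStorage{
          directory: opts[:output_dir]
        }
      })
    ]

    {[spec: spec], %{}}
  end

  # When the RTP session detects a stream, link it through decoding,
  # re-encoding, and parsing into the HLS sink.
  @impl true
  def handle_child_notification({:new_rtp_stream, ssrc, _pt, _ext}, :rtp, _ctx, state) do
    spec =
      get_child(:rtp)
      |> via_out(Pad.ref(:output, ssrc))
      |> child(:decoder, Membrane.H264.FFmpeg.Decoder)
      |> child(:encoder, %Membrane.H264.FFmpeg.Encoder{profile: :baseline})
      |> child(:parser, Membrane.H264.Parser)
      |> via_in(Pad.ref(:input, :video), options: [encoding: :H264])
      |> get_child(:hls)

    {[spec: spec], state}
  end
end
```

Decoding and re-encoding in the middle of the chain is what makes it possible to burn the per-stream overlay pictures into the video before it is segmented for HLS.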
We created an embeddable HTML player that plays HLS video using the hls.js library and implements the custom business logic (e.g. video overlays, stopping playback after the desired time).
The application has been running successfully since November 2021. The client decided to deploy a separate instance of the service for each of its clients, and each instance transcodes 10 to 20 streams continuously.