Streaming Video With RaspberryPi

Revision as of 19:39, 16 January 2015

General

Caution: this is a work in progress and things are still being tested. The objective is to end up with one or more working solutions for everyone.

Video streaming is a hard problem.

The RaspberryPi camera offers an interesting solution to this problem. It is a very well integrated module with one huge advantage: the camera connects over the Camera Serial Interface (CSI) and h264 encoding is performed directly in hardware by the GPU, so almost no CPU is used.

So theoretically, solving the video problem with the Pi is easy, but there are many subtle problems.


Problems

Audio: as we use web streaming mostly for broadcasting conferences, good audio quality is necessary.

Slides: it would be interesting to include the conference slides in the stream while filming.

File: it is important to end up with a recorded file once filming is over.

Web: it is important to reach a large viewer base, therefore a widely supported format is needed.


Raspicam basics

http://elinux.org/Rpi_Camera_Module

raspivid is the basic command line used to capture video in h264.

raspivid -t 3 -fps 25 -b 1000k -w 1920 -h 1080 -o /tmp/video.h264

A very simple tutorial : http://www.raspberrypi-spy.co.uk/2013/05/capturing-hd-video-with-the-pi-camera-module/
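
raspivid writes a raw h264 elementary stream without any container, which many players handle poorly. If you want a directly playable file, you can wrap the stream into an MP4 container without re-encoding; a minimal sketch with ffmpeg (the frame rate must match the -fps used at capture):

   ffmpeg -framerate 25 -i /tmp/video.h264 -c copy /tmp/video.mp4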

Solutions

Solution 1 : OGG/VORBIS + Icecast

Basic idea

  1. Use the Pi to capture video as h264, merge audio from USB and use ffmpeg to produce MPEG-TS "chunks"
  2. Rsync the chunks to a laptop or a server (note: the audio mix can also be done on the laptop)
  3. Assemble the chunks and pipe them into ffmpeg
  4. Ask ffmpeg to convert this into Ogg
  5. Use oggfwd to push the Ogg stream to your Icecast server
  6. Serve an m3u from the server
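
For step 6, an .m3u playlist is just a text file containing the stream URL, and Icecast also exposes one itself at <mount>.m3u. A minimal sketch, assuming the /test mount and stream.server.com host used further down (the /var/www path is a placeholder for your web root):

   # hypothetical web root; adjust to your server
   echo "http://stream.server.com:8000/test" > /var/www/live.m3u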

CON Ogg does not work for everyone. It is supposed to be HTML5 compatible, but Icecast does not offer that by default.

PRO Icecast is simple, open, and handles authentication. Rsync over SSH is crypto friendly. The file is saved on the server.

How to stream in OGG to Icecast

A. Compile FFMPEG on pi & server (see below)

B. Start capture in a screen

   [ -d /tmp/capture ] || mkdir /tmp/capture; rm -f /tmp/capture/* && cd /tmp/capture/ && \
   raspivid -ih -t 0 -w 1280 -h 720 -b 1000000 -pf baseline -o - | /usr/local/bin/ffmpeg  -f alsa -itsoffset 6.5 -ac 1 -i hw:1 -acodec aac -strict -2 \
   -i - -vcodec copy -f segment -segment_list out.list -segment_list_flags +live -segment_list_size 5 -segment_time 4 -segment_time_delta 3 %10d.ts

What's happening here

  1. We create a /tmp/capture folder and make sure it's empty when starting capture
  2. Raspivid starts capturing with the following parameters
    1.  -ih (inline headers) DON'T CHANGE Necessary for technical reasons, as otherwise the "chunking" doesn't work
    2.  -t 0 (timeout) DON'T CHANGE Necessary for technical reasons, as otherwise capture stops after 5s
    3.  -w 1280 -h 720 (width) and (height) Tweak according to your needs
    4.  -b 1000000 (bitrate) Tweak according to your needs (only integer values in bits/s are accepted, here 1000000 <=> 1 Mbit/s)
    5.  -pf baseline (h264 profile) Tweak according to your needs (only baseline, main, or high accepted)
    6.  -o - (output) DON'T CHANGE Necessary in order to send the stream to standard output
  3. We pipe the content into ffmpeg
    1. ALSA Input
      1.  -f alsa  (format) We use ALSA for USB audio capture
      2.  -itsoffset 6.5  (time offset) This one is a trick: we noticed our RPi B+ started the audio with a 6.5 second delay, so this offset resyncs the audio. Tweak.
      3.  -ac 1  (number of audio channels) We used a mono input, so 1 was the right choice. Tweak.
      4.  -i hw:1  (input) Tweak, as your audio card address may vary. Find yours with arecord -l
      5.  -acodec aac (audio codec) AAC works well for live TS.
      6.  -strict -2  Required because ffmpeg's built-in AAC encoder is marked experimental
    2.  Video Input
      1.  -i -  (input) DON'T CHANGE Use the standard input
      2.  -vcodec copy (video codec) DON'T CHANGE Keep the h264 stream from the RPi as-is. There is not enough CPU to re-encode.
      3.  -f segment (output format) DON'T CHANGE Use a "chunked" output
      4.  -segment_list out.list (segment list) Defines a file containing the names of the produced chunks
      5.  -segment_list_flags +live (segment list flags) Tells ffmpeg the list describes a live stream, so it is updated as chunks are written
      6.  -segment_list_size 5 (segment list size) Keeps only the 5 most recent chunk names in out.list
      7.  -segment_time 4 (segment time) Defines the chunk base duration in seconds. Tweak.
      8.  -segment_time_delta 3 (segment time delta) Defines a window, in seconds, by which the chunk duration may vary so that chunks can start on the mandatory inline headers. Tweak.
      9.  %10d.ts  The format for chunk file names. %10d starts at 0000000000.ts, and the .ts extension tells ffmpeg we want MPEG-TS chunks
  4.  ffmpeg saves the files 0000000000.ts, 0000000001.ts, etc. and out.list in /tmp/capture
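
To check that chunking works, watch the capture folder from another shell: new .ts files should appear every few seconds and out.list should always reference the most recent ones.

   ls -lh /tmp/capture/
   watch -n 2 cat /tmp/capture/out.list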

C. Run rsync to server in a screen

    ssh user@server "[ -d /tmp/capture ] || mkdir /tmp/capture" && while true; do rsync -a --files-from=/tmp/capture/out.list /tmp/capture user@server:/tmp/capture; sleep 1; done


What's happening here

  1. ssh Use SSH ...
    1.  user@server ... to connect to server "server" as user "user"
    2.  "[ -d /tmp/capture ] || mkdir /tmp/capture" ... and create the folder "/tmp/capture" on the server if it does not exist
  2. while true; do Run an infinite loop
    1. rsync Start an rsync file synchronisation
      1. -a (archive mode) Set the right parameters for the transfer (recursive, preserve times and permissions)
      2. --files-from=/tmp/capture/out.list Use out.list as the list of files to transfer, which avoids scanning the whole folder
      3. /tmp/capture (source) Transfer the local folder content...
      4. user@server:/tmp/capture (destination) ... to the server "server"
    2. sleep 1; Sleep one second
  3. done Run the loop again
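
Because the loop reopens an SSH connection every second, the Pi needs passwordless key authentication to the server; a minimal setup sketch ("user" and "server" are placeholders for your own account and host):

   ssh-keygen -t rsa       # accept the defaults, empty passphrase
   ssh-copy-id user@server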

D. Broadcast from server to icecast

This requires

  1.  a PHP streamer for your incoming files : http://pastebin.com/3f5t9vDS
  2.  the oggfwd command line tool: aptitude install oggfwd
    php /usr/local/bin/stream.php | ffmpeg -i - -f ogg - | oggfwd -p -n "Test" stream.server.com 8000 mySecretIceCastStreamingPassword /test 
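
The PHP streamer linked above is what is actually used; as a rough illustration of what it does, here is an equivalent shell sketch that follows out.list and concatenates each new MPEG-TS chunk to standard output (it assumes the chunks were rsynced to /tmp/capture on the server):

   #!/bin/bash
   # Naive chunk feeder: emit each chunk listed in out.list exactly once, in order.
   cd /tmp/capture || exit 1
   declare -A sent
   while true; do
       while read -r chunk; do
           if [ -f "$chunk" ] && [ -z "${sent[$chunk]}" ]; then
               cat "$chunk"          # raw MPEG-TS segments concatenate cleanly
               sent["$chunk"]=1
           fi
       done < out.list
       sleep 1
   done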

Sources

http://sirlagz.net/2013/01/07/how-to-stream-a-webcam-from-the-raspberry-pi-part-3/

Solution 2 : FLVSTR + PHP Streamer

Basic idea The Octopuce company has a solution to convert live MP4 to F4V. With a USB audio card, we could mux the MP4 video and AAC audio and have a standalone solution.

CON authentication is hard, F4V means Flash, requires a USB disk for local backup

PRO the Pi can be autonomous

First, authentication. This problem is addressed by solving encryption as well: we use an SSL socket to communicate with the server.


Solution 3 : RTSP

Basic idea Use an RTSP stream with VLC and the V4L driver

CON Non-commercial RTSP servers are not the norm, requires VLC or a Flash player, quality with v4l is low

PRO Easy to work out
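
One common variant pipes raspivid straight into VLC's RTSP output instead of going through V4L; a sketch (not tested here, the port 8554 is an arbitrary choice):

   raspivid -o - -t 0 -w 1280 -h 720 -fps 25 -b 1000000 | \
       cvlc -v stream:///dev/stdin --sout '#rtp{sdp=rtsp://:8554/}' :demux=h264

A client can then open rtsp://<pi-address>:8554/ in VLC.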

Sources

http://www.ics.com/blog/raspberry-pi-camera-module#.VJFhbyvF-b8

http://raspberrypi.stackexchange.com/questions/23182/how-to-stream-video-from-raspberry-pi-camera-and-watch-it-live

http://ffmpeg.gusari.org/viewtopic.php?f=16&t=1130

http://blog.tkjelectronics.dk/2013/06/how-to-stream-video-and-audio-from-a-raspberry-pi-with-no-latency/

Solution 4 : HLS + RSYNC

Basic idea Use HLS segmentation and rsync

CON Not all web players can do HLS

PRO Almost out of the box, robust

Howto

1. Compile a fresh ffmpeg on the Pi (see below)


2. Run a capture:

   raspivid -t 0 -b 1000000 -w 1080 -h 720 -v -o - | \
       ffmpeg -i - -f alsa -ac 1 -itsoffset 6.5 -i hw:1 -acodec aac -strict -2 -vcodec copy out.m3u8

3. Run a cron rsync to server (todo; see the sketch below)

4. Connect a client (todo; see the sketch below)
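
A rough sketch for steps 3 and 4, under the same assumptions as Solution 1 (the capture runs in a hypothetical /tmp/hls working directory, and "user", "server" and the web root /var/www/hls are placeholders):

   # 3. keep the playlist and segments synced to a web-reachable folder on the server
   while true; do rsync -a /tmp/hls/ user@server:/var/www/hls/; sleep 1; done

   # 4. on the client side, open the playlist in a player that supports HLS
   vlc http://server/hls/out.m3u8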


Sources

http://www.ffmpeg.org/ffmpeg-formats.html#hls

FFMPEG compilation

This installation is Debian based. A few distribution packages are installed up front:

  • ffmpeg : pulls in a large number of the dependencies required at compilation time
  • yasm : modular assembler (needed for compilation)
  • pkg-config : info about installed libraries (needed for compilation)
  • screen : helpful for running the compilation in the background

For Raspberry

For the Raspberry, we only need support for x264 and ALSA.

sudo -s
aptitude install screen yasm pkg-config libx264-dev libasound2-dev ffmpeg
cd /usr/src 
git clone --depth 1 git://source.ffmpeg.org/ffmpeg.git 
cd ffmpeg
./configure --enable-gpl --enable-libx264
make
make install
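
A quick way to confirm that the freshly built binary (installed under /usr/local by default) picked up the x264 encoder:

   /usr/local/bin/ffmpeg -encoders 2>/dev/null | grep libx264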

For laptop or server

Your default Debian might come with sufficient support, but if you want total control, compiling is a good idea.

Remove packages and configure options you don't need.

For example, to produce Ogg output you only need:

  • aptitude packages libtheora-dev and libvorbis-dev
  • configure options --enable-libtheora --enable-libvorbis

sudo -s
aptitude update && aptitude install screen pkg-config yasm ffmpeg libass-dev libavcodec-extra libfdk-aac-dev libmp3lame-dev libopus-dev libtheora-dev libvorbis-dev libvpx-dev libx264-dev
cd /usr/src 
git clone --depth 1 git://source.ffmpeg.org/ffmpeg.git 
cd ffmpeg
./configure --enable-gpl --enable-libass --enable-libfdk-aac --enable-libfreetype --enable-libmp3lame --enable-libopus --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-nonfree --enable-x11grab
make 
make install
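
As on the Pi, you can check that the build picked up the encoders you asked for:

   /usr/local/bin/ffmpeg -encoders 2>/dev/null | grep -E 'libtheora|libvorbis|libx264'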

References

Here are a number of unsorted links

http://techzany.com/2013/09/live-streaming-video-using-avconv-and-a-raspberry-pi/


https://trac.ffmpeg.org/wiki/StreamingGuide


Node

http://phoboslab.org/log/2013/09/html5-live-video-streaming-via-websockets

https://github.com/fluent-ffmpeg/node-fluent-ffmpeg

http://ffmpeg.org/ffmpeg-all.html#segment_002c-stream_005fsegment_002c-ssegment