Saturday, November 14, 2020

John The Ripper Cracking ZIP and RAR Protected Files

 



Today we will focus on cracking passwords for ZIP and RAR archives. This is very useful when you have an important password-protected ZIP or RAR file but you have forgotten the password and can't open it. Here is the solution ... John the Ripper !!!

John the Ripper is an Open Source password security auditing and password recovery tool available for many operating systems.

Just download John the Ripper here and follow the steps below. This tutorial was done on Windows 10.

1. Download and extract the John the Ripper tools.
2. For this tutorial, I created a RAR file "flag.rar" protected with the password "123".
3. Open a command prompt: press Windows + R, type cmd, and press Enter.
4. Go to the folder 
          E:\shared\installer\hack\john-1.9.0-jumbo-1-win64\run> 

5. Type the command 
         E:\shared\installer\hack\john-1.9.0-jumbo-1-win64\run>rar2john.exe test/flag.rar > test/flag.hash




This will create a hash file named "flag.hash" at E:\shared\installer\hack\john-1.9.0-jumbo-1-win64\run\test\flag.hash.

6. Type the command
         E:\john-1.9.0-jumbo-1-win64\run>john test/flag.hash
     Wait until the process finishes and John the Ripper will show you the password.




That's it ... very simple. John the Ripper is a very useful tool for cracking passwords. For ZIP archives the workflow is identical; just use zip2john.exe instead of rar2john.exe.

Thank you


Monday, September 3, 2018

Encode High Quality Audio with FFmpeg

Why do we need to transcode audio?
If you want to implement audio streaming, it is easier to stream if you split the file into small segments. Avoid transcoding from a lossy format to the same or another lossy format whenever possible: transcode to lossy from a lossless source, or simply copy the lossy source audio track instead of re-encoding it.

Generation loss
Transcoding from a lossy format like MP3, AAC, Vorbis, Opus, WMA, etc. to the same or different lossy format might degrade the audio quality even if the bitrate stays the same (or higher). This quality degradation might not be audible to you but it might be audible to others.

Copying audio tracks
If the target container format supports the audio codec of the source file, consider muxing it into the output file without re-encoding. Muxing is the process of combining two or more streams into one, for example a video track with one or more audio tracks. MKV supports virtually any audio codec. This can be achieved by specifying 'copy' as the audio codec.

Example:
Transcoding a WebM file (with VP8 video/Vorbis audio) to an MKV file (with H.264 video/unaltered Vorbis audio):

           ffmpeg -i someFile.webm -c:a copy -c:v libx264 outFile.mkv

In some cases this might not be possible, because the target device/player doesn't support the codec or the target container format doesn't support the codec. Another reason to transcode might be that the source audio track is too big (it has a higher bitrate than what you want to use in the output file).


Audio encoders FFmpeg can use
 FFmpeg can encode to a wide variety of lossy audio formats.

Here are some popular lossy formats with encoders listed that FFmpeg can use:

Dolby Digital: ac3
Dolby Digital Plus: eac3
MP2: libtwolame, mp2
Windows Media Audio 1: wmav1
Windows Media Audio 2: wmav2
AAC LC: libfdk_aac, aac
HE-AAC: libfdk_aac
Vorbis: libvorbis, vorbis
MP3: libmp3lame, libshine
Opus: libopus

Based on quality produced from high to low:
libopus > libvorbis >= libfdk_aac > aac > libmp3lame >= eac3/ac3 > libtwolame > vorbis > mp2 > wmav2/wmav1

The >= sign means greater than or equal quality.
This list is just a general guide and there may be cases where a codec listed to the right will perform better than one listed to the left at certain bitrates.
The highest quality internal/native encoder available in FFmpeg without any external libraries is aac.

Please note it is not recommended to use the experimental vorbis for Vorbis encoding; use libvorbis instead.
Please note that wmav1 and wmav2 don't seem to be able to reach transparency at any given bitrate.

Recommended minimum bitrates to use

The bitrates listed here assume 2-channel stereo and a sample rate of 44.1kHz or 48kHz. Mono, speech, and quiet audio may require fewer bits.

    libopus – usable range ≥ 32Kbps. Recommended range ≥ 64Kbps
    libfdk_aac default AAC LC profile – recommended range ≥ 128Kbps; see AAC Encoding Guide.
    libfdk_aac -profile:a aac_he_v2 – usable range ≤ 48Kbps CBR. Transparency: Does not reach transparency. Use AAC LC instead to achieve transparency
    libfdk_aac -profile:a aac_he – usable range ≥ 48Kbps and ≤ 80Kbps CBR. Transparency: Does not reach transparency. Use AAC LC instead to achieve transparency
    libvorbis – usable range ≥ 96Kbps. Recommended range -aq 4 (≥ 128Kbps)
    libmp3lame – usable range ≥ 128Kbps. Recommended range -aq 2 (≥ 192Kbps)
    ac3 or eac3 – usable range ≥ 160Kbps. Recommended range ≥ 160Kbps

    Example of usage:

    ffmpeg -i input.wav -c:a ac3 -b:a 160k output.ac3

    aac – usable range ≥ 32Kbps (depending on profile and audio). Recommended range ≥ 128Kbps
    Example of usage:

    ffmpeg -i input.wav output.m4a

    libtwolame – usable range ≥ 192Kbps. Recommended range ≥ 256Kbps
    mp2 – usable range ≥ 320Kbps. Recommended range ≥ 320Kbps

The vorbis and wmav1/wmav2 encoders are not worth using.
The wmav1/wmav2 encoder does not reach transparency at any bitrate.
The vorbis encoder does not use the bitrate specified in FFmpeg. On some samples it does sound reasonable, but the bitrate is very high.

To calculate the bitrate to use for multi-channel audio: (bitrate for stereo) x (channels / 2).
Example for 5.1(6 channels) Vorbis audio: 128Kbps x (6 / 2) = 384Kbps
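The arithmetic above can be checked with a quick one-liner (128Kbps is the stereo baseline used in the example):

```shell
# Multi-channel bitrate rule: (stereo bitrate) x (channels / 2), for 5.1 (6 channels)
awk 'BEGIN { printf "%d Kbps\n", 128 * (6 / 2) }'
# prints: 384 Kbps
```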

When compatibility with hardware players doesn't matter, use libvorbis in an MKV container when libfdk_aac isn't available (note: libopus will likely give even higher quality).
When compatibility with hardware players does matter, use libmp3lame or ac3 in an MP4/MKV container when libfdk_aac isn't available.
Transparency means the encoded audio sounds indistinguishable from the audio in the source file.
Some codecs have a more efficient variable bitrate (VBR) mode which optimizes to a given, constant quality level rather than having variable quality at a given, constant bitrate (CBR). The info above is for CBR. VBR is more efficient than CBR but may not be as hardware-compatible.


             


Thursday, March 8, 2018

NGINX and PHP-FPM

Scenario

Server with the following technical specs:
  • CPU: 4 @ 2.7GHZ
  • RAM: 8GB

Checking Nginx Parameter

Key areas to inspect when tuning Nginx for a PHP-FPM environment are worker_connections and worker_processes.

Worker Connections

Sets the maximum number of simultaneous connections that can be opened by each worker process.

Worker Processes

Sets the number of worker processes. This is often set to the number of CPU cores.
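Putting the two together for the 4-core server in our scenario, a reasonable starting point (a sketch only; tune the connection count to your workload) would be:

```nginx
worker_processes  4;              # one worker per CPU core in the scenario
events {
    worker_connections  1024;     # max simultaneous connections per worker
}
```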

PHP-FPM

MaxChildren

Optimizing PHP-FPM depends largely on the application itself. However, a standard rule that satisfies a broad range of use cases is:

pm.max_children = (Total RAM - Memory used for Linux, DB, etc.) / Average php process size

First, we need to install ps_mem to check the memory usage.

CentOS 
yum install ps_mem -y

Ubuntu
wget http://www.pixelbeat.org/scripts/ps_mem.py
mv ps_mem.py /usr/local/sbin/
chmod 755 /usr/local/sbin/ps_mem.py
then run the command
ps_mem.py


The output should give you something like this:
 776.6 MiB + 49.4 MiB =   826.1 MiB    php-fpm (105)

This means php-fpm is using 826.1 MiB in total across 105 processes. Dividing the total by the number of processes tells us how much each child process consumes:

826.1 MiB / 105 processes ≈ 7.86 MiB

So each child process uses about 7.86 MiB of memory.

We can now calculate the number of child processes PHP-FPM can run via this simple formula:

max_children = (Total RAM - 1000 MB) / FPM memory per process

We reserve 1000 MB for other processes such as MySQL, Nginx, etc. Using 4000 MB of RAM as the budget in this example:

max_children = (4000 MB - 1000 MB) / 7.86 = 381.67, so roughly 382
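The whole calculation can be scripted. The numbers below (826.1 MiB across 105 processes, a 4000 MB budget, 1000 MB reserved) are the example values from the text; substitute your own ps_mem output. Note the result differs by one from the rounded figure above because it uses the unrounded per-process size:

```shell
# Estimate pm.max_children from ps_mem-style numbers
awk -v total=4000 -v reserved=1000 -v used=826.1 -v procs=105 'BEGIN {
    per_proc = used / procs                          # average MiB per php-fpm child
    printf "per-process memory: %.2f MiB\n", per_proc
    printf "pm.max_children   = %d\n", int((total - reserved) / per_proc)
}'
```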


Quick Setup

At the most basic level, this seems to be the ideal setup based on peer review:
  • nginx (<=1.9.0)
  • zendopcache > apc (deprecated)
  • php-fpm (fastcgi)
Note: It is also important to highlight that single-core php-fpm instances will not gain much of a performance improvement. PHP-FPM benefits proportionally to the number of CPU cores available: a dual core yields some gain, but at least 4 cores is recommended.


Detailed Setup

In the /etc/php5/fpm/pool.d/www.conf file:

[www]
user = www-data
group = www-data
chdir = /
listen = 127.0.0.1:9000
pm = dynamic            (ondemand might be better for memory usage)
pm.max_children         (total RAM - (DB etc.) / process size)
pm.start_servers        (1/4 of CPU cores)
pm.min_spare_servers    (1/2 of CPU cores)
pm.max_spare_servers    (total CPU cores)
pm.max_requests         (high to prevent server respawn)
                        (tip: set really low to find memory leaks)
In the /etc/php5/fpm/conf.d/local.ini file:

memory_limit = 324M (WxT needs at minimum 256MB)
 
Applied to our scenario, this gives the result below:
pm = ondemand            
pm.max_children         (382)
pm.start_servers        (1)
pm.min_spare_servers    (2)
pm.max_spare_servers    (4)
pm.max_requests         (1000)
                       
  
Then reload Nginx (and restart PHP-FPM so the new pool settings take effect):
nginx -s reload
service php5-fpm restart






Friday, August 18, 2017

Implement Nginx Reverse Proxy with SSL for Live Streaming

After struggling to implement an NGINX reverse proxy setup, I have decided to share my current running config and hope it can be of help to someone.

Here is the scenario of the implementation


First, we need to build NGINX with the http_ssl_module, http_v2_module and nginx-rtmp-module enabled:
sudo ./configure  --with-http_ssl_module --with-http_v2_module  --add-module=../nginx-rtmp-module
make
sudo make install
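After make install, it's worth a quick sanity check that the new binary really was built with those modules (the path below assumes the default --prefix of /usr/local/nginx; adjust if you configured a different prefix):

```shell
# nginx -V prints the compiled-in configure arguments; grep confirms the modules
/usr/local/nginx/sbin/nginx -V 2>&1 | grep -E 'http_ssl_module|http_v2_module|nginx-rtmp-module'
```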

After configuring and compiling NGINX, here is the configuration on server 10.10.16.32 (the reverse proxy). Don't forget to copy your SSL key and certificate to the correct folder; in my case it is /home/livetv/live.

user livetv;
worker_processes  8;

error_log    /var/log/nginx/error.log debug;
worker_rlimit_nofile 65536;  #worker_rlimit_nofile = worker_connections * worker_processes

events {
    worker_connections  8192;
    multi_accept on;
    use epoll;
}

rtmp {
    server {
        listen 1935;
        chunk_size 8192;
        ping 30s;
        notify_method get;
        allow play all;

        application live {
            live on;

            exec_pull ffmpeg -re -i http://10.10.17.50/$app/$name
                 -vcodec copy -f flv -ar 44100 -ab 64 -ac 1 -acodec mp3 rtmp://localhost:1935/show/${name}
                 -vcodec libx264 -threads 0 -vprofile baseline -acodec aac -strict -2 -b:v 1024k -b:a 128k -vf "scale=854:trunc(ow/a/2)*2" -tune zerolatency -preset veryfast -crf 23 -f flv rtmp://localhost:1935/show/${name}_480
                 -vcodec libx264 -threads 0 -vprofile baseline -acodec aac -strict -2 -b:v 776k -b:a 96k -vf "scale=640:trunc(ow/a/2)*2" -tune zerolatency -preset veryfast -crf 23 -f flv rtmp://localhost:1935/show/${name}_360
                 -vcodec libx264 -threads 0 -vprofile baseline -acodec aac -strict -2 -b:v 128k -b:a 96k -vf "scale=256:trunc(ow/a/2)*2" -tune zerolatency -preset veryfast -crf 23 -f flv rtmp://localhost:1935/show/${name}_144;
        }

        application show {
            live on;
            hls on;
            hls_path /home/livetv/live/hls/;
            hls_nested on;
            record off;

            ### Instruct clients to adjust resolution according to bandwidth
            hls_variant _720 BANDWIDTH=2048000 RESOLUTION=1280x720; # High bitrate, HD 720p resolution
            hls_variant _480 BANDWIDTH=1024000 RESOLUTION=852x480;     # High bitrate, higher-than-SD resolution
            hls_variant _360 BANDWIDTH=512000 RESOLUTION=640x360;     # Medium bitrate, SD resolution
            hls_variant _240 BANDWIDTH=307200 RESOLUTION=426x240;     # Low bitrate, sub-SD resolution
            hls_variant _144 BANDWIDTH=131000 RESOLUTION=256x144;     # Low bitrate, 144p resolution
        }
       
        exec_pull /home/livetv/live/chmodTV.sh;
    }
}

http {
    include /etc/nginx/mime.types;
    default_type  application/octet-stream;
    sendfile on;
    keepalive_timeout   65;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    server {
        listen      8080;

        location /hls {
            # Disable cache
            add_header 'Cache-Control' 'no-cache';

            # CORS setup
            add_header 'Access-Control-Allow-Origin' '*' always;
            add_header 'Access-Control-Expose-Headers' 'Content-Length,Content-Range';
            add_header 'Access-Control-Allow-Headers' 'Range';

            # allow CORS preflight requests
            if ($request_method = 'OPTIONS') {
                    add_header 'Access-Control-Allow-Origin' '*';
                    add_header 'Access-Control-Allow-Headers' 'Range';
                    add_header 'Access-Control-Max-Age' 1728000;
                    add_header 'Content-Type' 'text/plain charset=UTF-8';
                    add_header 'Content-Length' 0;
                    return 204;
            }

            #serve HLS fragments
            types {
                    application/dash+xml mpd;
                    application/vnd.apple.mpegurl m3u8;
                    video/mp2t ts;
            }

            root /home/livetv/live/;
            add_header Cache-Control no-cache;
        }

        #include /usr/local/nginx/conf/sites-enabled/*.conf;
    }

    server {
        listen 443 ssl http2;
        server_name cdn-livetv.metube.id;
        root /home/livetv/live/;

        ssl on;
        ssl_certificate /home/livetv/live/ssl/STAR_metube_id.crt;
        ssl_certificate_key /home/livetv/live/ssl/metube.key;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers "EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 EECDH+aRSA+RC4 EECDH EDH+aRSA RC4 !aNULL !eNULL !LOW !3DES !MD5 !EXP !PSK !SRP !DSS";
        ssl_prefer_server_ciphers on;
        ssl_session_cache shared:SSL:10m;
        ssl_session_timeout 10m;

        ssl_buffer_size 16k;

        location / {
            index index.html;
        }

        location ~* ^.+\.(jpg|jpeg|gif|png|ico|css|pdf|txt|xml|js)$ {
            expires 21d;
            add_header Pragma public;
            add_header Cache-Control "public, must-revalidate, proxy-revalidate";
            access_log off;
            log_not_found off;
            fastcgi_hide_header Set-Cookie;
            tcp_nodelay off;
            sendfile off;
            break;
        }

        location /hls {
            proxy_set_header x-real-IP $remote_addr;
            proxy_set_header x-forwarded-for $proxy_add_x_forwarded_for;
            proxy_set_header x-forwarded-proto https;
            proxy_set_header host $host;
            proxy_pass http://172.31.16.26;
           
            #proxy buffering works around protocol exceptions raised by some mobile clients
            proxy_buffering on;
            proxy_buffer_size 8k;
            proxy_buffers 2048 8k;
        }
    }
}




On server 172.31.16.26 (the backend), the configuration is as follows. Again, don't forget to copy your SSL key and certificate to the correct folder; in my case it is /home/livetv/live.


user livetv livetv;
worker_processes  20;

error_log  logs/error.log debug;
worker_rlimit_nofile 409600; #worker_rlimit_nofile = worker_connections * worker_processes

events {
    worker_connections  20480;
    multi_accept on;
    use epoll;
}

rtmp {
    server {
        listen 1935;
        chunk_size 8192;
        ping 30s;
        notify_method get;
        allow play all;

        application live {
            live on;

            exec_pull ffmpeg -re -i http://172.31.16.27/$app/$name
                 -vcodec copy -f flv -ar 44100 -ab 64 -ac 1 -acodec mp3 rtmp://localhost:1935/show/${name}
                 -vcodec libx264 -threads 0 -vprofile baseline -acodec aac -strict -2 -b:v 1024k -b:a 128k -vf "scale=854:trunc(ow/a/2)*2" -tune zerolatency -preset veryfast -crf 23 -f flv rtmp://localhost:1935/show/${name}_480
                 -vcodec libx264 -threads 0 -vprofile baseline -acodec aac -strict -2 -b:v 776k -b:a 96k -vf "scale=640:trunc(ow/a/2)*2" -tune zerolatency -preset veryfast -crf 23 -f flv rtmp://localhost:1935/show/${name}_360
                 -vcodec libx264 -threads 0 -vprofile baseline -acodec aac -strict -2 -b:v 128k -b:a 96k -vf "scale=256:trunc(ow/a/2)*2" -tune zerolatency -preset veryfast -crf 23 -f flv rtmp://localhost:1935/show/${name}_144;
        }

        application show {
            live on;
            hls on;
            hls_path /home/livetv/live/hls/;
            hls_nested on;
            record off;

            ### Instruct clients to adjust resolution according to bandwidth
            hls_variant _720 BANDWIDTH=2048000 RESOLUTION=1280x720; # High bitrate, HD 720p resolution
            hls_variant _480 BANDWIDTH=1024000 RESOLUTION=852x480;     # High bitrate, higher-than-SD resolution
            hls_variant _360 BANDWIDTH=512000 RESOLUTION=640x360;     # Medium bitrate, SD resolution
            hls_variant _240 BANDWIDTH=307200 RESOLUTION=426x240;     # Low bitrate, sub-SD resolution
            hls_variant _144 BANDWIDTH=131000 RESOLUTION=256x144;     # Low bitrate, 144p resolution

        }
    }
}


http {

    include mime.types;
    default_type  application/octet-stream;
    sendfile on;
    keepalive_timeout   65;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    server {
        listen      80;

        location /hls {
            # Disable cache
            add_header 'Cache-Control' 'no-cache';

            # CORS setup
            add_header 'Access-Control-Allow-Origin' '*' always;
            add_header 'Access-Control-Expose-Headers' 'Content-Length,Content-Range';
            add_header 'Access-Control-Allow-Headers' 'Range';

            # allow CORS preflight requests
            if ($request_method = 'OPTIONS') {
                    add_header 'Access-Control-Allow-Origin' '*';
                    add_header 'Access-Control-Allow-Headers' 'Range';
                    add_header 'Access-Control-Max-Age' 1728000;
                    add_header 'Content-Type' 'text/plain charset=UTF-8';
                    add_header 'Content-Length' 0;
                    return 204;
            }

            #serve HLS fragments
            types {
                    application/dash+xml mpd;
                    application/vnd.apple.mpegurl m3u8;
                    video/mp2t ts;
            }

            root /home/livetv/live/;
            add_header Cache-Control no-cache;
        }

        include /usr/local/nginx/conf/sites-enabled/*.conf;

    }

    server {
        listen 443 ssl http2;
        server_name cdn-livetv.metube.id;
        #rewrite ^(.*)  cdn-livetv.metube.id$1 permanent;
        root /home/livetv/live/;

        ssl on;
        ssl_certificate /home/livetv/live/ssl/STAR_metube_id.crt;
        ssl_certificate_key /home/livetv/live/ssl/metube.key;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers "EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 EECDH+aRSA+RC4 EECDH EDH+aRSA RC4 !aNULL !eNULL !LOW !3DES !MD5 !EXP !PSK !SRP !DSS";
        ssl_prefer_server_ciphers on;
        ssl_session_cache shared:SSL:10m;
        ssl_session_timeout 10m;

        ssl_buffer_size 16k;
        location / {
            index index.html;
        }

        location ~* ^.+\.(jpg|jpeg|gif|png|ico|css|pdf|txt|xml|js)$ {
            expires 21d;
            add_header Pragma public;
            add_header Cache-Control "public, must-revalidate, proxy-revalidate";
            access_log off;
            log_not_found off;
            fastcgi_hide_header Set-Cookie;
            tcp_nodelay off;
            sendfile off;
            break;
        }

        location /hls {
            proxy_set_header x-real-IP $remote_addr;
            proxy_set_header x-forwarded-for $proxy_add_x_forwarded_for;
            proxy_set_header x-forwarded-proto https;
            proxy_set_header host $host;
            proxy_pass http://127.0.0.1;
           
            #proxy buffering works around protocol exceptions raised by some mobile clients
            proxy_buffering on;
            proxy_buffer_size 8k;
            proxy_buffers 2048 8k;
        }
    }
}

                  
                               
               
               


Friday, August 11, 2017

IP Forwarding and Enable UDP Multicast

For a few days I was stuck on the live TV streaming we built for our product: I couldn't forward the live TV stream over UDP multicast from eth0 to eth1. Our server has two network interface cards; let's say eth0 has IP 10.10.10.5 and eth1 has IP 172.31.16.26. I thought it would be easy, but it was not as simple as I expected, so I ran a small experiment to forward traffic from eth0 to eth1. Here are my steps to forward the IP and port.

Step 1. Make sure IP forwarding is enabled. Check /etc/sysctl.conf and set net.ipv4.ip_forward=1 (if the line is commented out, uncomment it by removing the #), or run sudo sysctl -w net.ipv4.ip_forward=1 in a terminal.

Step 2. Relax reverse-path filtering. Check /proc/sys/net/ipv4/conf/eth0/rp_filter and set net.ipv4.conf.default.rp_filter=2 (loose mode).
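Both settings can be persisted in /etc/sysctl.conf and applied with sudo sysctl -p (a sketch of the relevant lines):

```
net.ipv4.ip_forward = 1
net.ipv4.conf.default.rp_filter = 2
```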


Step 3. Type this command in a terminal:

sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
This command can be explained as follows:
iptables: the command-line utility for configuring the kernel's packet filter
-t nat: select the "nat" table for configuring NAT rules
-A POSTROUTING: append a rule to the POSTROUTING chain (-A stands for "append")
-o eth0: this rule applies to packets that leave via interface eth0 (-o stands for "output")
-j MASQUERADE: the action to take is to 'masquerade' packets, i.e. replace the sender's address with the router's address
Then forward packets between eth0 and eth1:
sudo iptables -A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT
sudo iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT

Step 4. Add a route for the UDP multicast range 224.0.0.0 to 239.255.255.255:
route add -net 224.0.0.0 netmask 240.0.0.0 dev eth0
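The netmask 240.0.0.0 (/4) masks only the top four bits, so it covers exactly 224.0.0.0 through 239.255.255.255: any address whose first octet is 224–239. A quick sanity check for an arbitrary address:

```shell
# An IPv4 address is multicast iff its first octet is 224-239 (int(octet/16) == 14)
echo "239.255.0.1" | awk -F. '{ print (int($1 / 16) == 14) ? "multicast" : "not multicast" }'
# prints: multicast
```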

Step 5. Make sure your firewall is inactive, or if the firewall is enabled, change DEFAULT_INPUT_POLICY="ACCEPT" in /etc/default/ufw.

Step 6. To check that everything is working properly, send data over UDP by typing this command in a terminal (this uses Bash's /dev/udp feature):
echo "This is my data" > /dev/udp/239.255.0.1/3000
This sends the text "This is my data" over UDP to IP 239.255.0.1 on port 3000. From another terminal, dump the traffic by typing:
tcpdump -i eth1 udp port 3000 -vv -X


If it works, you will receive the text.







Tuesday, April 11, 2017

Creating Overlay Image and Running Text in Multiple Output Streaming with NGINX RTMP


Creating Multiple Output with No Filter

Multiple outputs can be created without any filtering by using this command:

syntax:
           ffmpeg -i input1 -i input2 \
               -acodec … -vcodec … output1 \
               -acodec … -vcodec … output2 \
               -acodec … -vcodec … output3



Pic 1.  Multiple Outputs


Here is an example that creates several different outputs at HD, VGA and QVGA resolution at the same time with one command:

syntax:
           ffmpeg -i input \
               -s 1280x720 -acodec … -vcodec … output1 \
               -s 640x480  -acodec … -vcodec … output2 \
               -s 320x240  -acodec … -vcodec … output3


Creating Multiple Output with Same Filter to All Output

The same filter can be applied to all outputs by using -filter_complex with the split filter.
In the command below, split=3 means the stream is split into three:

syntax:
         ffmpeg -i input -filter_complex '[0:v]yadif,split=3[out1][out2][out3]' \
                           -map '[out1]' -s 1280x720 -acodec … -vcodec … output1 \
                           -map '[out2]' -s 640x480  -acodec … -vcodec … output2 \
                           -map '[out3]' -s 320x240  -acodec … -vcodec … output3


Creating Multiple Output with Specific Filter per each Output

To create multiple outputs with a specific filter for each output, use -filter_complex with the split filter, but apply split directly to the input.

The syntax below encodes the video into three different outputs at the same time: boxblur affects output1, negate affects output2, and yadif affects output3.
syntax:
      ffmpeg -i input -filter_complex \
         '[0:v]split=3[in1][in2][in3];[in1]boxblur[out1];\
          [in2]negate[out2];[in3]yadif[out3]' \
        -map '[out1]' -acodec … -vcodec … output1 \
        -map '[out2]' -acodec … -vcodec … output2 \
        -map '[out3]' -acodec … -vcodec … output3


Creating Overlay Image and Running Text with Multiple Output

Here is my own experiment implementing an overlay image and running text in a stream with multiple resolutions and multiple outputs. The main pitfall with this filter chain is that, with the wrong filter placement, the overlay image and running text appear only in output1 and disappear from the other outputs. This was my concern, so I experimented to find the correct filter placement.

Syntax:
ffmpeg -i storm_spirit.mp4 -i addies.jpg -filter_complex \
"[1:v]scale=iw/2:ih/2[logo];[0:v][logo]overlay=W-w-5:H-h-5, \
scale=320:-2,drawbox=x=0:y=122:color=yellow@0.4:width=iw:height=30:t=max, \
drawtext=fontfile=/usr/share/fonts/truetype/freefont/FreeSans.ttf:text='This Is Running Text':\
fontcolor=white@1.0:fontsize=16:y=h-line_h-30:x=w/10*mod(t\,10):\
enable=gt(mod(t\,20)\,10),split=3[a][b][c]" \
-map "[a]" -map 0:a -vcodec libx264 -acodec aac -strict -2 -b:v 256k -b:a 32k -f flv rtmp://192.168.135.128:1935/myapp/mystream \
-map "[b]" -map 0:a -vcodec libx264 -acodec aac -strict -2 -b:v 128k -b:a 32k -f flv rtmp://192.168.135.128:1935/myapp/mystream2 \
-map "[c]" -map 0:a -vcodec libx264 -acodec aac -strict -2 -b:v 96k -b:a 32k -f flv rtmp://192.168.135.128:1935/myapp/mystream3


As a result, the overlay image and running text can be seen in each output stream.



Pic 2. Overlay Image and Running Text can be implemented in each stream output

Thank you





Thursday, April 6, 2017

How to Implement Running Text in Streaming with NGINX RTMP



In my previous tutorials, I explained How to Build NGINX RTMP, Setup Live Streaming with NGINX RTMP, Create Adaptive Streaming with NGINX RTMP, and Implementing Filtergraph in Streaming with NGINX RTMP. Please read them before starting this tutorial.

Running text is a very useful form of electronic or digital media for conveying messages and information, and it can also be used for advertising.

As the technology developed, running text displays now show not only scrolling text but also images and logos.

Running text is popular as an advertising medium because, besides its attractive display, it naturally draws the attention of passers-by: the human eye is attracted to bright, colorful, striking displays. This is what makes the colors of a running text display invite people nearby to look at it.

Here is the command to create a "Hello World" running text in a stream:

ffmpeg -i storm_spirit.mp4 -filter:v drawtext="fontfile=/usr/share/fonts/truetype/freefont/FreeSans.ttf:text='Hello World':fontcolor=white@1.0:fontsize=16:y=h-line_h-100:x=w/10*mod(t\,10):enable=gt(mod(t\,20)\,10)" -vcodec libx264 -acodec aac -strict -2 -b:v 256k -b:a 32k -f flv rtmp://192.168.135.128:1935/myapp/mystream


The result is shown in the picture below:


Pic 1.  Running Text "Hello World"



This shows 'Running Text in Streaming with NGINX RTMP':

ffmpeg -i storm_spirit.mp4 -filter:v drawtext="fontfile=/usr/share/fonts/truetype/freefont/FreeSans.ttf:text='Running Text in Streaming with NGINX RTMP':fontcolor=white@1.0:fontsize=16:y=h-line_h-100:x=w/10*mod(t\,10):enable=gt(mod(t\,20)\,10)" -vcodec libx264 -acodec aac -strict -2 -b:v 256k -b:a 32k -f flv rtmp://192.168.135.128:1935/myapp/mystream

The result is shown below:


Pic 2. Running Text "Running Text in Streaming with NGINX RTMP"

In all the samples above, the text scrolls from left to right.

Horizontal movement:

            x=w/10*mod(t\,10)

            where w is the input width
                  t is the time in seconds
                  w/10 is the speed of movement (the whole width in 10s)
                  mod(t,10) repeats the movement every 10s

Enabling:
           enable=gt(mod(t\,20)\,10)
           every 20s, show the text animation for 10s after the initial 10s
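As a sanity check of the two expressions, here is the same arithmetic evaluated for an illustrative input width of 640 pixels at t = 13 s:

```shell
# x = w/10 * mod(t,10) and enable = gt(mod(t,20),10)
awk 'BEGIN {
    w = 640; t = 13
    x = (w / 10) * (t % 10)            # 64 * 3 = 192 pixels from the left
    visible = ((t % 20) > 10) ? 1 : 0  # 13 > 10, so the text is shown
    printf "x=%d visible=%d\n", x, visible
}'
# prints: x=192 visible=1
```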

                   

Here is an example of running text with a background. The command is below:

ffmpeg -i /mnt/e/kurento/compile\ NGIX/storm_spirit.mp4 -filter:v "drawbox=y=ih/PHI:color=black@0.4:width=iw:height=30:t=max, drawtext=fontfile=/usr/share/fonts/truetype/freefont/FreeSans.ttf:text='Addiestar Silaban Running Text':fontcolor=white@1.0:fontsize=16:y=h-line_h-115:x=w/10*mod(t\,10):enable=gt(mod(t\,20)\,10)" -vcodec libx264 -acodec aac -strict -2 -b:v 256k -b:a 32k -f flv rtmp://192.168.135.128:1935/myapp/mystream

The result is shown below:


Pic 3. Running Text with Black Background



Don't forget to change the RTMP address rtmp://192.168.135.128:1935/myapp/mystream to your own NGINX RTMP server's IP address.


Thank you



