Final Report

About TimVideos.us

TimVideos.us is a group of exciting projects which together create a system for doing both recording and live event streaming for conferences, meetings, user groups and other presentations. We hope that, through our projects, the costs and expertise currently required to produce live streaming events will be reduced to near zero.

HDMI2USB is one of the organization's projects. It develops affordable hardware options to record and stream HD videos (from HDMI & DisplayPort sources) for conferences, meetings and user groups.

Description copied from: https://hdmi2usb.tv/home/

Original Goals

My project is titled “Add hardware mixing support to HDMI2USB firmware”. Its aim was to add crop/pad/scale support to HDMI2USB-litex-firmware and, later on, to develop a hardware mixer block.

Expected results

  • The HDMI2USB LiteX firmware supports the crop/pad/scale feature.
  • The firmware command line has the ability to specify that an output is the combination of two inputs. These combinations should include dynamic changes like fading and wipes between two inputs.

Achieved Result

1) Used the CPU to manipulate the framebuffer to achieve crop/pad/scale.

2) Cropping: clipping off 40px from each side (top, bottom, left, right) worked fine, as shown:

3) Code : https://github.com/timvideos/HDMI2USB-litex-firmware/pull/444

Working:

a) With command x c pattern output 0


b) With command x c input0 output0

Video link: https://www.youtube.com/watch?v=p3Bl_UAnbkM

4) The CPU method failed because the CPU inside the HDMI SoC is not fast enough to manipulate a full frame, which slows the whole process down.

5) To implement crop/pad/scale in a faster way, the HDL was modified (especially the DMA section): https://github.com/enjoy-digital/litevideo/pull/20

The cropping module is implemented in the HDL between the DMA reader and the VideoOut core; VideoOutCore generates a video stream from memory (the DRAM controller).

This is highlighted in the figure below: architecture of the HDMI2USB gateware.

[Figure: Architecture of the HDMI2USB gateware]
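To illustrate the idea, here is a minimal Migen sketch of such a cropping window (illustrative only, not the actual litevideo module; the signal names crop_left/crop_right/crop_top/crop_bottom are my own, and a real module would also handle the stream handshake between the DMA reader and the VideoOut core):

from migen import Module, Signal

class CropWindow(Module):
    # Sketch: assert `keep` only for pixels inside the crop window.
    def __init__(self, hbits=12, vbits=12):
        self.x = Signal(hbits)              # current pixel column
        self.y = Signal(vbits)              # current pixel line
        self.crop_left   = Signal(hbits)
        self.crop_right  = Signal(hbits)    # first column past the window
        self.crop_top    = Signal(vbits)
        self.crop_bottom = Signal(vbits)    # first line past the window
        self.keep = Signal()

        self.comb += self.keep.eq(
            (self.x >= self.crop_left) & (self.x < self.crop_right) &
            (self.y >= self.crop_top)  & (self.y < self.crop_bottom)
        )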

Next steps: I had planned to get the following merged into the codebase:

  1. Creating/Using CSR registers to make cropping dynamically configurable.
  2. For scaling: supporting cropping with a horizontal (h) / vertical (v) resolution that is the full h/v resolution divided by an integer, for example hres_crop = hres/N, vres_crop = vres/M. This way each pixel would be copied N times (for h), and for v a line buffer is required that will be reused M times (see the sketch below).

But I felt that due to time constraints this would not be possible. I will be working on this after GSoC; it is a work in progress.
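To illustrate the planned scaling approach from item 2 above, here is a small software model in Python/NumPy (a sketch under my own assumptions, not the HDL implementation; the function name crop_and_upscale is illustrative):

import numpy as np

def crop_and_upscale(frame, n, m):
    # Crop to hres/N x vres/M, then fill the full frame again by
    # repeating each line M times and each pixel N times.
    vres, hres = frame.shape[:2]
    cropped = frame[: vres // m, : hres // n]       # vres_crop = vres/M, hres_crop = hres/N
    lines_repeated = np.repeat(cropped, m, axis=0)  # reuse each line M times
    return np.repeat(lines_repeated, n, axis=1)     # copy each pixel N times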

6) Documentation:

Corrected Errors :

https://github.com/timvideos/HDMI2USB-litex-firmware/pull/443

https://github.com/timvideos/HDMI2USB-litex-firmware/pull/445

Additions to the documentation:

https://github.com/timvideos/HDMI2USB-litex-firmware/pull/446

7) HDMI2USB-mask-generation was implemented to generate wipes in the manner explained in the hardware fader design doc; it will later be used to generate transition effects.

Major Task Left

1) Completion of the HDL code implementing crop/scale in hardware.

2) Due to major changes in the codebase (“unforking” LiteX and Migen+MiSoC, as described in the mailing list), ssk1328's work on the hardware fader design has to be reproduced/integrated into the current codebase.

Link to code

Main Code :   

https://github.com/Nancy-Chauhan/HDMI2USB-litex-firmware

https://github.com/Nancy-Chauhan/litevideo

Pull requests

https://github.com/timvideos/HDMI2USB-litex-firmware/pull/445

https://github.com/enjoy-digital/litevideo/pull/20

https://github.com/timvideos/HDMI2USB-litex-firmware/pull/444

https://github.com/Nancy-Chauhan/hdmi2usb-mask

Merged :

https://github.com/timvideos/HDMI2USB-litex-firmware/pull/446

https://github.com/timvideos/HDMI2USB-litex-firmware/pull/443

Other Links :

GSoC Proposal  

Daily Updates BLOG

Weekly Reports

Learning

Doing this project gave me the opportunity to learn a lot. There is more than I can write down, but the major things I learned include:

  1. Handling large codebases: this was the first time I worked with a large codebase.
  2. Putting my doubts in front of others and communicating with organisation members, since during this period I had to ask different organisation members for help a number of times.
  3. The importance of documentation, as during this period I referred to several earlier works.
  4. In the earlier stage of my project, I was using the CPU to manipulate pixels in the memory buffer. But I learned that the CPU inside the HDMI SoC is not fast enough to manipulate a full frame; it can only be used for small things. The DMA HDL code, which takes the pixels from the HDMI input and writes them to memory, was what needed to be modified, and implementing the manipulation in HDL makes the process faster.
  5. Learned about Migen, which makes it possible to apply modern software concepts such as object-oriented programming and metaprogramming to hardware design. It is more intuitive and provides a nice abstraction layer, so we can focus more on the logic.
  6. I had also planned to complete the hardware mixer block, but due to time constraints this was not possible. I will work on it after GSoC.

Conclusion

I would like to thank my mentor Kyle Robbertze and Tim ‘mithro’ Ansell for the support and help they provided. Kyle was always there for me, no matter what the problem was. Thanks also to organisation members CarlFK, Rohit Singh and Florent, who always responded to my queries about the project. It would not have been possible without them. I have learned a lot from this project, and I fully intend to complete the major tasks that remain.

Contact details

If you have any doubts or suggestions, you can contact me anytime. Here are the details:

Email address: nancychn1@gmail.com

GitHub: Nancy-Chauhan

IRC nickname: nancy98

Weekly Update

Week 11 & Week 12 ( 22 July – 5 August )

Commenting out or excluding the section from https://github.com/enjoy-digital/litevideo/blob/master/litevideo/output/core.py#L135 to https://github.com/enjoy-digital/litevideo/blob/master/litevideo/output/core.py#L184

This should apparently produce an error when the code is loaded onto the FPGA board, but contrary to the expected result it does not show any change in the output on the HDMI screen.

  • The first change I am working on is to make cropping configurable. CSR registers should be used to make it dynamically configurable.

Field-programmable gate arrays (FPGAs) are large, fast integrated circuits that can be modified, or configured, at almost any point by the end user. Within the domain of configurable computing, we distinguish between two modes of configurability: static, where the configurable processor's configuration string is loaded once at the outset and does not change during execution of the task at hand; and dynamic, where the processor's configuration may change at any moment. So to take cropping values from the user, we need to make the code dynamically configurable.

Source: IEEE Xplore Digital Library publication on static and dynamic configurable systems.

Configuration and Status Registers :

  • Configuration/control registers are used by the CPU to configure and control the device. Bits in these configuration registers may be write-only, so the CPU can alter them, but not read them back. Most bits in control registers can be both read and written.
  • Status registers provide status information to the CPU about the I/O device. These registers are often read-only, i.e. the CPU can only read their bits, and cannot change them.


The module that will be used to import CSR registers is https://github.com/enjoy-digital/litex/blob/master/litex/soc/interconnect/csr.py
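As a rough sketch of how those CSRs could be wired up for the crop bounds (the register names crop_left/crop_right/crop_top/crop_bottom are illustrative, not a final register map):

from migen import Module, Signal
from litex.soc.interconnect.csr import AutoCSR, CSRStorage

class CropConfig(Module, AutoCSR):
    # Sketch: expose the crop bounds as CSRStorage registers so the
    # firmware can change them at runtime over the CSR bus.
    def __init__(self, hbits=12, vbits=12):
        self._crop_left   = CSRStorage(hbits)
        self._crop_right  = CSRStorage(hbits)
        self._crop_top    = CSRStorage(vbits)
        self._crop_bottom = CSRStorage(vbits)

        # Plain signals for the cropping datapath to consume.
        self.crop_left   = Signal(hbits)
        self.crop_right  = Signal(hbits)
        self.crop_top    = Signal(vbits)
        self.crop_bottom = Signal(vbits)

        self.comb += [
            self.crop_left.eq(self._crop_left.storage),
            self.crop_right.eq(self._crop_right.storage),
            self.crop_top.eq(self._crop_top.storage),
            self.crop_bottom.eq(self._crop_bottom.storage),
        ]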

HDMI2USB-mask-generation

HDMI2USB-mask-generation was implemented to generate wipes in the manner explained in the hardware fader design doc; it will later be used to generate transition effects.
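To illustrate what such a generator might emit, here is a hypothetical Python sketch (not the actual mask-gen.py): one 8-bit mask value per pixel, with the wipe boundary moving across successive frames.

import numpy as np

def wipe_masks(hres, vres, steps):
    # Yield one mask per step of a left-to-right wipe; 255 selects source A,
    # 0 selects source B, matching the fader equation with an integer mask.
    for i in range(steps + 1):
        edge = (hres * i) // steps
        mask = np.zeros((vres, hres), dtype=np.uint8)
        mask[:, :edge] = 255
        yield mask.tobytes()    # byte sequence to be stored in DDR memory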

  • Work is in progress, and due to time constraints only the above work will be documented in the reports. Further work will continue after GSoC in:

https://github.com/enjoy-digital/litevideo/pull/20

 

Weekly Update

Week 9 & Week 10 ( 8 July – 21 July )

  • I made a pull request to update the documentation, “Adding example docs commands to getting-started doc”. These commands make it easy for those connecting an Opsis for the first time (for example, to view the video output on a monitor):

https://github.com/timvideos/HDMI2USB-litex-firmware/pull/446

  • I made a pull request to correct an error:

https://github.com/timvideos/HDMI2USB-litex-firmware/pull/445

Connecting to the lm32 softcore to send commands directly to the HDMI2USB, with the command:

make firmware-connect  

gives the following errors:

(LX P=opsis) nancy@nancy-Inspiron-5559:~/Desktop/HDMI2USB-litex-firmware$ make firmware-connect

flterm --port=$(opsis-mode-switch --get-serial-dev)

WARNING:root:unbind-helper not found, will have to run as root!

Traceback (most recent call last):

File "/home/nancy/Desktop/HDMI2USB-litex-firmware/build/conda/bin/opsis-mode-switch", line 11, in <module>

sys.exit(main())

File "/home/nancy/Desktop/HDMI2USB-litex-firmware/build/conda/lib/python3.6/site-packages/hdmi2usb/modeswitch/cli.py", line 392, in main

print(board.tty()[0])

IndexError: list index out of range

[FLTERM] Starting…

Unable to open serial port: No such file or directory

I observed that the scripts are unable to open the serial port when run, so to resolve this issue the commands should be:

hdmi2usb-mode-switch --mode=serial

make firmware-connect

  • I made a pull request in enjoy-digital/litevideo to add support for the crop feature:

https://github.com/enjoy-digital/litevideo/pull/20

Implementation

[Figure: Architecture of the HDMI2USB gateware]

The cropping module is implemented in the HDL between the DMA reader and the VideoOut core; VideoOutCore generates a video stream from memory (the DRAM controller).

This is highlighted in the figure above: architecture of the HDMI2USB gateware.

To add support for the crop/scale feature, we have to work on the following signals:

frame_parameter_layout

frame_timing_layout


[Figure: hsync and vsync signals on an HDMI monitor]

The figure above shows the working of the hsync and vsync signals on an HDMI monitor:

  1. There are  two pulses, hsync and vsync, that let the monitor lock onto timing
  2. One hsync per scan line
  3. One vsync per frame

All signals in frame_parameter_layout are 12 bits wide; 12 is simply the width of all the signals. That value comes from hbits and vbits, defined here: https://github.com/enjoy-digital/litevideo/blob/152b6d71a6826dc5ba01296e5dcc1f8c5b2ea2d1/litevideo/output/common.py#L5

It simply means all the signals in frame_parameter_layout (e.g. hres, vres) are 12-bit wide vectors. The reason 12 was chosen is that all resolutions (horizontal x vertical) have values well within 12 bits. For example, in 800 x 600 both 800 and 600 are less than 2**12, i.e. 4096, and hence fit in such a vector; even Full HD (1920 x 1080) or 4K fits within 12-bit vectors. HDMI2USB is designed for a maximum of Full HD, so 12-bit vectors are more than sufficient.
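As an illustrative sketch of what that looks like (not a verbatim copy of litevideo's common.py; the field names here are my own based on the discussion above), the layout is simply a list of fields that are each hbits/vbits wide:

# All frame parameters fit in 12-bit fields, since even 4K (3840 x 2160)
# stays below 2**12 = 4096.
hbits, vbits = 12, 12

frame_parameter_layout = [
    ("hres",        hbits),
    ("hsync_start", hbits),
    ("hsync_end",   hbits),
    ("hscan",       hbits),
    ("vres",        vbits),
    ("vsync_start", vbits),
    ("vsync_end",   vbits),
    ("vscan",       vbits),
]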

[Figure: signal timing diagram]

Example showing how the h_sync and v_sync timing signals work.

For my own understanding, I implemented a VGA interface in Verilog (Implementing-VGA-interface-with-verilog) to produce test patterns on an HDMI screen.

Daily Update

26 – 27

The design's fundamental “fader” equation is:

pixel(output, x, y) = pixel(sourcea, x, y) * mask(x, y) + pixel(sourceb, x, y) * (1 - mask(x, y))

1) sourcea / sourceb are input video frames buffered in the DDR memory.

2) Video frame data can be in RGB, YCbCr444 or YCbCr422.

3) The mask is generated from a simple byte sequence stored in the DDR memory.

4) The mask in the above equation is between 0 and 1.0, but in reality it would be an integer number.
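As a quick software model of the fader equation (a sketch only, not the hardware fader; the function name fade is my own), applied per colour component with an 8-bit integer mask where 255 plays the role of 1.0:

import numpy as np

def fade(source_a, source_b, mask):
    # Blend one colour component of two frames; all arrays are
    # (vres, hres) uint8. mask == 255 gives source_a, mask == 0 gives source_b.
    a = source_a.astype(np.uint16)
    b = source_b.astype(np.uint16)
    m = mask.astype(np.uint16)
    out = (a * m + b * (255 - m)) // 255
    return out.astype(np.uint8)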

  • Working on the mask generator:

a) mask-gen.py (this works): Converts

b) repeat.py (there are errors)

Daily Update

The first change is to make the cropping configurable. Currently it uses static cropping bounds; CSR registers should be created to make it dynamically configurable.

For scaling, at first only supporting cropping with an h/v resolution that is the full h/v resolution divided by an integer. For example, hres_crop = hres/N, vres_crop = vres/M.

This way pixels are copied N times (for h), and for v you need a line buffer that you reuse M times.

A more complex method, once this first version works:

Optimize DRAM reads: currently all the data is read from DRAM and only the part useful for cropping is used. It would be better to read only what is needed.
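A tiny sketch of that idea (with illustrative names, not the actual DMA code): compute the DRAM start address of each line of the cropped window so that only the window is read.

def cropped_line_address(base, pitch_bytes, bytes_per_pixel,
                         crop_top, crop_left, line):
    # Start address in DRAM of line `line` of the cropped window, for a
    # framebuffer stored line by line with `pitch_bytes` per full line.
    return base + (crop_top + line) * pitch_bytes + crop_left * bytes_per_pixel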