
Project: Video rig

Name: Video rig
By: Cooper
Status: In progress
Madskillz: Linux, Video, Audio, SDI, XLR
Purpose / Description: A complete rig aimed at the recording of presentations.

In English since I expect to refer people to this page for more info on the subject.

Why?!?

In 2015 I volunteered at SteelCon and ended up as the happy helper to Cal Leeming, who was the AV guy. Using borrowed kit neither of us was very familiar with, we tried as best we could to record this 2-track conference. Throughout the day there were some minor issues, but we thought things went mostly fine. After the event it turned out that things hadn't gone as rosy as we thought, and it took several months of post-production work to get the videos on YouTube. Some of the recordings were simply too poor to use. Following the belated publication of the recordings I started discussing with Cal what we could do to improve on this situation, which resulted in me putting together my first video rig...

Requirements

These are specified in order of importance:

  • Redundant

Sometimes things just don't work as you want them to. There should be some fallback scenario in place that will allow you to recover with as little interruption to the recording as possible.

  • Verifiable

It must be possible to monitor the recording and see that everything is going fine. This means we should be able to see the video signal we receive and have something in the way of volume monitoring for sound so we know we're receiving audio.

  • Open source everything

Aside from licensing and such I'm mainly familiar with Linux and because of that I want the rig to be Linux-based.

  • Self-reliant

The thinking is that when an organizer provides a venue, a beamer and some power, we can record.

  • Cheap

This is of course a pipe dream since a video rig is going to be fairly expensive. But if we can do something to cut costs, we will.

  • Compact

The goal is to end up with 6 rigs and we need to be able to transport them to events, so the smaller we can make them the better.

The Camera

We will of course need to record the presenter doing his/her thing, so we need a camera. We won't record onto some local storage on the camera (MicroSD card or whatever) because cameras, and especially cheap cameras, do a really poor job of producing a nice video file. Instead, we'll feed off the HDMI-out almost all cameras have. This has the benefit as well as the drawback of being pretty much a direct feed off of the camera's sensor which, particularly with cheap cameras, tends to be fairly noisy. When you purchase a camera for use in this manner, make sure the camera has "clean HDMI-out". The HDMI-out image usually is the image you get on the screen of the camera itself, complete with a slew of icons and blinky things that might be useful for operating the camera but that you most certainly don't want to see end up in your recording. Having a "clean HDMI-out" means those icons won't be in the HDMI feed. The camera I chose to use is the Sony CX240E because it quite literally was the cheapest HD camera with a clean HDMI-out.

Now that we have the camera we also need a tripod. A good tripod is one that can place the camera at eye level, or preferably even a bit above that. People simply don't look very pretty when filmed from below. Most cheap tripods you can get from China are flimsy, short lumps of plastic that would fall over when you break wind next to them. I chose to go for the Velbon Videomate 638F because it's not that expensive or heavy, yet provides a very solid base for the camera and can elevate it to about 1m80. I'm also a great fan of tripods that have a quick-release option, and the 638F has this. It means I can screw the quick-release plate to the camera and leave it on when I store it. When I put things back together, the camera just clicks into the tripod and you're done. Nice!

On Stage

We chose to standardize on HDMI since it's poised to be the default for at least the next couple of years. Many venues, particularly smaller ones, still use VGA beamers, so we try to accommodate these too. To get the projected presentation into our rig we need to tap into the video feed the laptop is sending to the beamer, so we need either an HDMI or a VGA splitter. With this, one feed can go on to the beamer and another will be sent to our recording device. I've tried using plain, unpowered splitters and can highly recommend that you stay as far away from them as you possibly can. They simply DO NOT WORK. Get an active one from China.

To support VGA you need something called a VGA scaler, which takes the VGA input and scales it up or down to fill up as much as possible of the HDMI image. As the best-supported VGA resolutions tend to be 4:3 whereas HDMI is 16:9, this will likely result in black bars on the left and right sides. Some scalers will chop off a bit of the top and bottom of the image to stretch out a bit more. I've also thus far never encountered a scaler that does pixel-perfect color reproduction and in fact often find, particularly in the cheaper models, that the colors can be *WAY* off. This is something we'll need to address in the recording device. Also, these scalers can be fairly picky about their incoming resolution, often demanding a 60Hz signal, and in spite of claims to the contrary the best I've reliably captured was 1280x800.

Feeding The Recording Device

At this point we have 2 locations that produce an HDMI image. The problem with HDMI is that the longest distance you can cover with it is about 10m, and that requires a fairly high-spec cable to boot. Irongeek is fine with this, but we decided we wanted to be able to put more distance between us and our video sources. As early as we can, we convert the HDMI signal to SDI using cheap, China-bought HDMI-to-SDI converter boxes. SDI is a digital signal transferred over coax and with the appropriate quality cable you can transmit an HD image well in excess of 100m.

I initially went with really cheap RG59 coax and it turned out to be the cause of sporadic frame drops - you can get away with using cheap (which tends to mean poorly shielded) cable for an HD signal, but you'll need to reduce the cable length considerably to get a reliable signal on the other end. I ended up going for Belden 1505F cable, which will set you back about 2.5 euro per meter, and special Neutrik plugs, because the BNC connector on the card is somewhat recessed within the rear of the PC, making especially the top connector hard to insert when you're only supposed to twist the front sheath.

Audio

Depending on the size of the venue and how often it's used for giving such presentations there might be a mixer present with some microphones, but it might just as likely not have any of that at all. When a mixer is present, you can usually expect to get an XLR hookup when there's a big mixer, or either a Jack or RCA connection when it's a small one. Because XLR (pro stuff) uses different voltages to what (consumer) recording stuff expects, even when you use one of those XLR-to-Jack plugs the signal is still going to be very different (considerably louder). Irongeek's solution to this was to use something called The Derbybox which, for about $20 in parts, converts XLR to something you could feed into the line-in of a regular PC sound card. My solution to the problem was to go with the Roland Duo-Capture EX. This is a professional external USB sound card which has 2 XLR inputs that can each also receive a Jack connector. This has the benefit of also being able to provide "phantom power", which is something that allows you to power a wired microphone directly from the sound card, allowing you to use more compact microphones.

For those situations where no microphone is present at all, I use an Audio Technica AT841UG omni-directional condenser boundary microphone because they're remarkably good at picking up a mostly clean sound in the room when people speak in its general direction. It's an XLR microphone that can operate off of phantom power so very well-suited to my sound card.

The Recording Device

The most common recording device in use is a laptop, usually a Mac. Irongeek uses an i5 machine with Windows. I wanted something with Linux and, because of our previous choices, needed to feed this machine SDI somehow. For an HD video feed it's worth mentioning that USB2 doesn't have sufficient bandwidth. While a laptop provides you with a nice, integrated solution, I wanted a system that I could more easily swap parts in and out of, so I chose to go for a full-on PC instead. Each of my rigs is equipped with a MicroATX motherboard with at least 3 PCIe slots, an i7 6700K, 16GB of RAM and four 1TB Western Digital Purple hard disks in a software RAID10 configuration. The PCIe slots are for the capture cards I use to receive the SDI signal: 3 Magewell Pro Capture SDI cards. I only have 2 video sources, so the 3rd is a backup in case one of them flakes out. Each PC has its own HD monitor - the cheapest money can buy.
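
As a rough sketch of what assembling that software RAID10 array can look like (the device names and filesystem below are assumptions for illustration, not the exact commands used on these rigs):

  # Combine the four 1TB Purple disks into one software RAID10 array;
  # /dev/sd[b-e] and ext4 are illustrative choices only.
  sudo mdadm --create /dev/md0 --level=10 --raid-devices=4 \
       /dev/sdb /dev/sdc /dev/sdd /dev/sde
  sudo mkfs.ext4 /dev/md0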

Transport

As you've probably understood by now, there's a lot of kit that comes into play when you want to do video. To bring all this stuff you're going to want some kind of container. You could go for a flight case, but those tend to be rather bulky, expensive and better suited to transport in a van than in the trunk of my car. The next likely alternative is a Pelican case, because the whole world seems to love the indestructible Pelican case line-up. Unfortunately that quality comes at a hefty price and, in all honesty, I don't need something that's water-tight. I ended up getting a big plastic toolbox for each of my rigs, made by Keter. It's a rebrand of this Ridgid toolbox and comfortably houses everything in a sturdy container. I did end up taking the inside padding of the lid out, since I wasn't using the separator or the small containers and needed the extra space. Each camera is stored in a Case Logic DCB 305 bag with 2 HDMI-to-SDI converters (redundancy matters) and all the converters, along with my external sound card, are stored in cheap, China-bought toiletry bags. These bags keep the devices close to each other so that if the toolbox containing them gets jolted a bit, things stay in place.

Power

The total rig needs about 300W of power, but this needs to be delivered to 3 distinct and, relative to each other, potentially remote locations: the camera, the stage and the PC. I started out by bringing 3 15m extension cords, modded to have multiple sockets on the end since, even though I didn't need a lot of power, I needed to power multiple devices. The drawback here is that these cables take up a LOT of room within the toolbox. Also, all my devices have the exact same plug, but some need 12V, some need 9V and most need 5V. I've begun replacing parts such that all my devices operate off of a 5V source to prevent accidents, and I'm in the process of creating small boxes that convert a roughly 12V source down to 5V. The idea here is to provide 12V DC using the PC (or something like a laptop adapter plugged in near the PC), send that out to the other 2 locations - which will lose some of that voltage, so expect about 11V to appear on the other end - downconvert that to 5V and power everything off of that. Instead of a bulky 220V-rated cable with an even bulkier multi-socket end, I just need a long stretch of basically speaker cable, a converter in a box and a splitter. That would allow me to do away with 2 extension cords as well as the power warts for the various devices.
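
To give a feel for the numbers, here's a rough back-of-the-envelope estimate of the drop over such a 12V run; the cable gauge, length and current are assumptions for illustration:

  # Assumed: 15m of 0.75mm^2 copper speaker cable (~0.024 ohm per metre
  # per conductor) carrying about 2A from a 12V source.
  # Loop resistance: 2 conductors * 15m * 0.024 ohm/m = 0.72 ohm
  # Voltage drop:    2A * 0.72 ohm = 1.44V, leaving roughly 10.5V at the far end.
  echo "12 - 2 * (2 * 15 * 0.024)" | bc -l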

The Flow Of Data

I'm using OBS, which is an amazing video mixer program in which you define scenes. TODO: Insert an image of my typical scene layout. Given this layout you can see that the camera section is a vertically oriented rectangle, whereas the typical recording position uses a horizontally oriented rectangle. The easiest way to get the section I need is to cut out a rectangle of those dimensions and then downscale it to fit the part I need. The problem is that, out of the rather low number of pixels produced by the camera's sensor, this method throws away about 60% of them. Instead, I chose to mount my camera on the tripod at a 90 degree angle. This lets me take a much larger image off the sensor to downscale into the final image, which greatly improves the image quality. Using avconv I can reduce the noise using a special filter, scale the image down to the size I will use in the scene and rotate the image to account for it being tilted, all in one go.

The presenter laptop output also needs to be scaled down slightly, but the main thing here is to apply some color correction specific to the capturing device I'm using. I've labelled each one and, using trial and error, found a configuration that will correct the colors perfectly for each one. Supposedly you can apply some corrections to the image directly on the capture card, but I've not been able to make this work. After color correction the image is scaled down to the dimensions needed, sharpening it a bit.
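
As a sketch of what those invocations can look like - the device numbers, filter values and target sizes below are made-up examples rather than my exact settings, and I'm showing the ffmpeg binary, which takes essentially the same options as avconv:

  # Camera feed: denoise, rotate the 90-degree tilted image upright and
  # scale it down to the size used in the OBS scene, all in one pass,
  # then write it to a virtual video device.
  ffmpeg -f video4linux2 -i /dev/video0 \
         -vf "hqdn3d,transpose=1,scale=608:1080" \
         -f v4l2 -pix_fmt yuv420p /dev/video10

  # Laptop feed: capture-card-specific color correction plus a slight downscale.
  ffmpeg -f video4linux2 -i /dev/video1 \
         -vf "eq=gamma=1.05:saturation=1.1,scale=1600:900" \
         -f v4l2 -pix_fmt yuv420p /dev/video11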

For increased redundancy, I'm recording both the video and sound data as it's fed into OBS, as well as making OBS record the stream it assembles. I could have both feed off the capture card directly, but then each would need to apply effectively the same corrections to the 2 video feeds. There's also the problem of how you would switch to a different capture card in case one decides to kick the bucket. I've found a solution to this using v4l2loopback devices - virtual video devices that play back whatever is written to them over the V4L2 API which, thankfully, avconv has full support for. I've defined 3 such devices: one with the tilted camera image, one with the laptop output and one with a 'regular' HD feed in case the camera is used in a non-tilted setup, such as when recording a panel discussion. The quality will be lower, but at least recording is still possible. When I start recording, I let the 2 capture cards feed into the appropriate virtual devices and then start the recording. Using options made available in the window manager, I can swap out the feeding program to switch to a different capture card source, switch the camera between feeding the regular or the tilted camera device, or apply different color corrections to the laptop capture feed in case that got switched out.
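
Loading the loopback module for that setup might look like this (the device numbers and labels are assumptions for illustration):

  # Create three virtual video devices: tilted camera, laptop capture and
  # a regular, non-tilted camera feed.
  sudo modprobe v4l2loopback devices=3 video_nr=10,11,12 \
       card_label="cam-tilted,laptop,cam-regular"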

All recordings produce MP4 files that are x264 encoded using the "veryfast" preset at CRF 20.
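
For the direct-to-disk recordings, a command along these lines would match those settings; the input device and output filename are examples and audio handling is omitted:

  # Record one of the loopback feeds straight to an MP4 with the same x264 settings.
  ffmpeg -f video4linux2 -i /dev/video10 \
         -c:v libx264 -preset veryfast -crf 20 \
         recording.mp4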

The Hall Of Shame

Mistakes, I've made a few...

  • Blackmagic Design SDI capture cards

I've used 2 DeckLink Duo and 2 Mini Recorder cards. The former has 2 inputs, the latter 1, but technically they're mostly identical. Turns out they max out at 1080p30, meaning that the camera, which was capable of 1080p50, could only deliver 1080i50 and the card would de-interlace the image in a rather sloppy fashion. The video feed produced by the presenter's laptop had to be restricted to 720p60, as all laptops these days that deliver an HDMI signal do so at p60 without any options. Adding injury to insult, each of the 2 550-euro DeckLink Duo cards died in the process of a firmware upgrade.

  • Passive VGA splitter

Think of a passive VGA splitter as a VGA cable with 2 ends on one side. The problem with this is that when only 1 device is connected, it behaves like a regular VGA cable. Once a second device is connected, the signal level drops on both ends, giving you a rather dark image. Also, if the first device on the wire is a bit old and manky, there's a high probability it will interfere with the image that is sent along the wire, and I've seen some pretty messed-up displays as a result. I honestly don't understand why these things are even sold today.

  • Passive HDMI splitter

While these things actually work fairly well with simple devices like cameras that simply spit out an image, for PCs they're not usable. The problem here is that the PC negotiates with the device on the other end about what to transmit and how, and this process doesn't react too kindly to having a second device butting into the conversation. To make matters worse, HDMI has built-in copy protection called HDCP. This is used to encrypt the communication between 2 devices and here, too, having a second actor on the wire doesn't work well. An active HDMI splitter will itself be the end-point to the sending device and a new starting point to any receiving device, meaning each really does get its own, unique connection to the signal source.

  • Cheap RG59 COAX

There's no real need for coax to be expensive when your signal is an SD CCTV source. Problems appear when you use that same cable for a high-quality HD source, which operates at a much higher frequency that this cable provides insufficient shielding for. Normally you get foil and a braid as shielding on cheap cable, but that braid should still cover about 90% of the surface of the wire. In this instance it was more like 25%. There's also far too much signal loss occurring at that frequency across the cable's length.