
Tips and tricks for streaming

Here are a few tips, tricks, and workarounds to help you while streaming using Adobe Character Animator.

The following are some issues you might face with live animation and some possible solutions and workarounds. For information on basics for streaming scenes with Character Animator, see Stream a scene live.

Microphone and headphones

A high-quality microphone with background noise rejection makes a big difference, as does a noise-free audio environment. If the voice talent needs to monitor audio from elsewhere, have them wear headphones so that no stray sound reaches the microphone. Headphones are also an effective way to shield the performer from hearing their own delayed voice.

Examples

  • Headset mic:
    SpeechWare FlexyMike Dual Ear Cardioid
  • Tabletop mic (if head-tracking is being done by someone else):
    EV RE20
    Shure SM7
  • Audio interface manufacturers:
    RME
    MOTU

Note: While USB microphones are convenient, they make mixing and managing the audio more complicated than classic wired equipment.

  • For single character broadcasts, most USB microphones and audio interfaces work fine.
  • For multiple characters composited with NDI, we recommend a wired microphone and professional audio interface for each machine.
  • For ease of routing, a small mixer can be helpful too.

End-to-End Rehearsal

Before your live performance, be sure to test everything end-to-end with the actual voice talent, including a test stream on the actual platform (Facebook, YouTube, Twitch, and so on). Once you have tuned everything to look great, don't change anything.

Scene frame rate

In most cases, you will want to set the scene frame rate to 24 fps, the most common cartoon frame rate.

Working with multiple machines connected to a network switch

Connect a Gigabit (Cat 5e or Cat 6) Ethernet cable from each machine to the switch.

Monitors to see combined characters

If you want each person or character to be able to see themselves with the combined characters, provide a monitor feed from the compositing machine to the performer’s location.

Sneezing

During the live performance, if you are using face tracking and need to sneeze, take a drink of water, or look away for any reason, hold down the semicolon ( ; ) key to freeze the face and eye tracking so your character doesn't follow the movement. The Smoothing setting for the Face behavior governs how quickly the character snaps back to your current pose when you release the semicolon key.

Lip Sync

  • Test to see if turning off the Camera & Microphone panel’s Auto-enhance Audio Input option helps lip-sync accuracy. This will reduce leakage of other voices into the microphone, but requires that you have set good input levels.
  • Be ready to adjust the audio input gain on the fly if needed (for example, if the voice talent moves back from the microphone, or gets excited and starts yelling).
  • Audio usually needs to be delayed in the software switcher (OBS or Wirecast), typically by a frame or two (around 60 milliseconds). Tune it by clapping into the microphone and visually lining up the clap sound with the mouth changing, as seen by a viewer at the other end (for example, over Facebook Live). To preview what a given delay sounds like, see the sketch after this list.
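
If you want to preview what a given delay sounds like before dialing it into the switcher, one quick offline check is to offset a test recording. This is only a sketch: ffmpeg is not part of the Character Animator workflow, and the file names are placeholders.

# Delay both channels of a test clip by 60 ms (adelay takes milliseconds per channel)
ffmpeg -i clap_test.wav -af "adelay=60|60" clap_test_delayed.wav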

The most common problems with lip sync in live scenarios are the lips moving when they should be still, the lips staying still when they should be moving, and poor synchronization between the audio and the mouth moving.

Lips moving when they should be still

This problem usually happens in a noisy environment, such as on a live stage. It is less common in streaming scenarios, but can happen whenever unwanted audio enters the microphone.

  • Isolate your voice talent with a sound booth.
  • Reduce the level of audio coming into the machine running Character Animator.
  • Have the voice talent use a microphone or headset with excellent off-axis rejection. The microphone 'driving' the lip sync doesn't need to be the same microphone capturing the broadcast audio.
  • Apply an audio gate and a high-pass filter to the microphone signal before it reaches Character Animator (see the sketch after this list). Do not send this processed audio to the final broadcast.
  • Make the neutral mouth(s) triggerable so someone can mute the puppet's mouth when the character shouldn't be talking.
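
As a rough sketch of that pre-processing chain, the same gate and high-pass filter could be auditioned offline with ffmpeg before you build the live chain. The file names and threshold values here are placeholders to tune by ear; live, a hardware gate or your audio interface's DSP does the same job in real time.

# Cut low-frequency rumble below 100 Hz, then gate the signal when it falls under the threshold
ffmpeg -i voice_sample.wav -af "highpass=f=100,agate=threshold=0.05:ratio=2" voice_gated.wav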

Lips not moving when they should be, or showing the incorrect viseme

This problem is less noticeable and distracting, but still worth addressing. The lip-sync algorithm works best when it has a healthy signal, free of unwanted audio, to analyze. If the signal is too low, the mouth will not change much and will show mostly consonants. If it is too high, it will also not change much, but will show mostly vowels.

  • Increase or decrease the level of audio coming into the machine running Character Animator. The audio input meter can be helpful here.
  • Apply a compressor/limiter to the microphone signal before it reaches Character Animator, as sketched below. This makes the signal level more even so you can raise it without overloading. You may or may not want to send this audio to the final broadcast; compression is routinely applied to broadcast audio (it is why commercials sound so loud).
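
For example, the same compressor/limiter chain could be auditioned offline like this. Again, this is only a sketch with placeholder file names and values; in a live rig, outboard gear or a DAW does the job in real time.

# Compress to even out the level, then limit peaks so the raised gain cannot clip
ffmpeg -i voice_sample.wav -af "acompressor=threshold=0.1:ratio=4:attack=5:release=100,alimiter=limit=0.9" voice_leveled.wav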

Poor synchronization between audio and mouth moving

The character's lips should be synchronized with the character's voice. Ideally, the audience doesn't even notice the lip sync; bad timing is very easy to notice. The viseme calculation adds a small delay to the video output, and Mercury Transmit, NDI, and professional video gear add delays of their own. This means you need to delay the audio sent to the audience by the same amount as the total video pipeline delay.
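
As a back-of-the-envelope example (the actual figures depend on your chain and should be fine-tuned by ear): a two-frame video delay at 30 fps works out to 2 ÷ 30 ≈ 67 ms, and at 24 fps to 2 ÷ 24 ≈ 83 ms, so the audience-side audio delay should start in that neighborhood.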

  • If the delay is larger than 20 ms or so, it is important that the voice performer does not hear their delayed audio while they're performing. Have you ever heard your own voice reflected back to you half a second late on a conference call? It's nearly impossible to think and talk in that scenario. Noise-canceling headphones with a small amount of the performer's non-delayed audio mixed in can prevent the talent from hearing their own voice delayed. A set of headphones is also useful for hearing cues from producers or questions from audience members.
  • There are nearly unlimited ways to delay audio. The important part is that the delay applies only to the audience; the machine running Character Animator should get undelayed audio. Some methods to delay audio are:
    ▪ Outboard audio gear.
    ▪ Running your audio through a software DAW before it goes to broadcast.
    ▪ Wirecast and OBS Studio, which can delay both NDI audio and audio coming into the machine's audio interface.

Using NDI to send your character to another application

When using NDI to send your character to another application, you will need to somehow get audio to that application as well:

  • Character Animator does not (as of version 1.1) send audio over NDI. You have to manage your audio manually using a mixer and the machine's audio interface. On the Mac, NDI Scan Converter can take your audio interface's input and send it as an NDI output; this workaround is not available on Windows.

Background apps running slowly

App Nap is an energy-saving feature of macOS that puts inactive applications into a paused state to reduce power usage. The feature helps prolong battery life on Mac laptops and lowers the computer's overall energy use.

  • If you are running any background apps during your broadcast, it is very important to turn off App Nap on the machine so they don’t slow down.

To turn off App Nap for all apps on the system, paste this command into Terminal and press Enter:

defaults write NSGlobalDomain NSAppSleepDisabled -bool YES
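
To restore the default behavior later, remove the override:

defaults delete NSGlobalDomain NSAppSleepDisabled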

Puppeteering

A good puppeteer can bring a character to life, but first, they need a good puppet and a way to interact with it.

Webcams (optional)

  • A small video monitor is very helpful if the puppeteer is not at the Character Animator machine.
  • Webcams need a properly lit face to track well; LED lights are a good option.
  • Best results come from the performer looking straight ahead. This means that things like reading from a script or taking a drink of water can cause the character to react in unpredictable ways.
  • The Head Turner behavior can be a great way to control a character’s head, but it also forces you to look away from the computer screen. This is fine if you are the voice talent, but if you are also triggering animations or artwork, use keyboard triggers for better results.

Triggers

  • In a live setting, it's great to have a variety of gestures a character can do to keep things interesting. For example:
    ▪ Facial emotions
    ▪ Arm/Hand gestures
    ▪ Thinking (looking up)
    ▪ Idling/Fidgeting (drumming fingers, body ticks, anything to fill dead time)
  • Triggers can be fired from the keyboard or from MIDI devices. You can put stickers on your keyboard, or one of these devices might be useful:
    ▪ Elgato Stream Deck (little LCD buttons)
    ▪ X-Keys (highly programmable, with good support for macros)
