Since firing system controllers need to receive a few frames of timecode to "lock on" to the timecode, it is common practice to export soundtracks with timecode that begins at least a few seconds before the show begins. Thus, playing or broadcasting the soundtrack to the controller will give the controller a few seconds to lock on before the first events in the show. 10 seconds is a common amount of leading timecode, but some companies use one or two minutes, or in some circumstances even hours of leading timecode before the show. If you need to begin the timecode before the show and you are exporting a soundtrack with timecode from Finale 3D, the easiest solution that works for all firing systems is simply to begin the show at ten seconds or a minute or whatever leading timecode you need on the timeline; and slide the imported songs on the timeline to begin at that offset rather than beginning at zero. That solves the problem but is slightly awkward because, psychologically, it is just nice to have the show begin at zero instead of at an offset. Several firing systems -- FireOne and StarFire -- offer a nice solution: negative timecode. The concept of negative timecode matches the workflow you probably would find most natural. You script your show beginning at zero, and then when it comes time to export your soundtrack from Finale 3D or create a timecode soundtrack with the firing system's software, you just tack onto the front of the soundtrack a period of negative timecode on the timecode channel and silence on the music channel. Voila! Export soundtrack from Finale 3D The "File > Export > Export soundtrack..." function in Finale 3D gives you the option to choose what goes on each channel of the exported soundtrack file, choosing among all firing system versions of FSK timecode, and the SMPTE timecode options, and of course mono or stereo music, as shown in Figure 1. Figure 1 – The export soundtrack function includes all FSK and SMPTE timecode options. If the show's firing system is FireOne or StarFire, the "Negative timecode" field on the export soundtrack dialog becomes enabled. You can type into this field however much leading timecode you want to tack onto the front of the file. If you tack 10 seconds of timecode onto the front of a soundtrack that is 20 minutes long, the exported file will be 20 minutes and 10 seconds long, as shown in Figure 2. Figure 2 – The length of the exported file is the soundtrack length plus the negative timecode length. Importing soundtrack with negative timecode as a song Imagine what would happen if you create and export your soundtrack file with negative timecode using Finale 3D, and then subsequently import your soundtrack file into Finale 3D as a song. What would it look like on the timeline? When you import a song, Finale 3D examines the file to determine if it contains timecode of any kind -- FSK or SMPTE. If it does, Finale 3D shows a dialog offering to silence the timecode channel and align the song to its timecode, as shown in Figure 3. FireOne FSK timecode is aligned 0.1 seconds early (see FSK alignment), which is why the dialog shows -9.9 seconds instead of -10 seconds. Figure 3 – Importing a song with timecode includes the option to align it on the timeline automatically. If the soundtrack contained 10 seconds of negative timecode, aligning the song such that its "timecode time = 0" aligns with the beginning of the timeline means importing the song to begin 10 seconds earlier than the start of the timeline. 
Finale 3D does that, and automatically crops the beginning of the song so that it begins flush at the start of the timeline.
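If it helps to see the arithmetic spelled out, the short Python sketch below reproduces the numbers from this article. It is only an illustration (not Finale 3D's internal code); the 0.1-second FireOne alignment and the 10-second / 20-minute example values are taken from the text above.

```python
# Illustrative arithmetic only -- not Finale 3D code.

def exported_length_s(soundtrack_s, negative_timecode_s):
    """Figure 2: the exported file length is the soundtrack length plus the negative timecode."""
    return soundtrack_s + negative_timecode_s

def import_alignment_s(negative_timecode_s, fsk_alignment_early_s=0.0):
    """Timeline start of a re-imported song, before Finale 3D crops it flush to zero.

    Aligning "timecode time = 0" with the start of the timeline means the song itself
    begins that many seconds before zero (a negative start time). FireOne FSK is
    aligned 0.1 seconds early, which shifts the result slightly.
    """
    return -(negative_timecode_s - fsk_alignment_early_s)

print(exported_length_s(20 * 60, 10))   # 1210 seconds = 20 minutes 10 seconds (Figure 2)
print(import_alignment_s(10, 0.1))      # -9.9 seconds, the value shown in Figure 3
```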
The Pyrodigital, Pyromate, FireOne, and StarFire firing systems support, in addition to SMPTE, another form of timecode called "FSK", which is short for the encoding scheme used by the protocol. Unlike SMPTE, FSK protocols are generally different for every firing system. If you have a Pyrodigital firing system, you need to use Pyrodigital FSK; if you have a FireOne firing system, you need to use FireOne FSK. Finale 3D reads and writes all the firing system FSK protocols, as shown in Table 1.

Table 1 – FSK timecode formats supported by Finale 3D
Timecode format | Frame rate | Finale 3D writes it | Finale 3D reads it
Pyrodigital FSK | 10 fps | YES | YES
Pyromate FSK | 10 fps | YES | YES
FireOne FSK | 1 fps | YES | YES
StarFire FSK | 4 fps | YES | YES

To write FSK for your firing system, simply do the command "File > Export > Export soundtrack..." and select the version of timecode you want to export in one of the channels of your soundtrack, as shown in Figure 1. The export function combines the music and the timecode in the exported WAV file. No manual alignment is required.

Figure 1 – Choose whatever version of SMPTE or FSK timecode you want to add to the exported WAV file.

Frame rates

The firing system FSK protocols encode regularly spaced data packets in the WAV file. The data packets contain what is essentially a frame count, counting in the frame rate of the FSK protocol. FireOne FSK, for example, has exactly one data packet per second; its data packets thus count seconds: 1s, 2s, 3s, 4s, etc. Pyrodigital FSK has ten data packets per second, so frame 1 corresponds to 0.1s; frame 2 corresponds to 0.2s, and so on, with frame 10 corresponding to 1s.

The frame rate of the FSK is unrelated to the frame rate of the firing system script. For example, you might script a Pyrodigital show with event times expressed as 30 fps SMPTE frames, each frame representing 1/30th of a second. If you shoot that show using Pyrodigital FSK timecode, the FSK will contain data packets at 10 fps, not 30 fps. The FSK timecode drives an internal clock in the controller, which then processes the script events in whatever their frame rate is.

Notation

Similar to SMPTE, the times represented by FSK data packets can be notated as HH:MM:SS:FF, except the frame count FF only goes from 0 to the frame rate of the FSK protocol, minus one. In this notation, Pyrodigital and Pyromate frames go from 00 to 09; StarFire frames go from 00 to 03; FireOne frames are always 00.

Table 2 – FSK frames notation
HH | MM | SS | FF
Hours 0-23 | Minutes 0-59 | Seconds 0-59 | Frames 0-N (depends on frame rate)

Alignment of data packet in the frame

FSK timecode data packets carried in a signal or stored in a WAV file are encoded as a string of audio samples representing a waveform. The standard for SMPTE and the convention for FSK timecode is that the time represented by the data packet corresponds to the position of the last sample of the data packet in the signal or WAV file. For example, the frame rate for StarFire FSK is 4 fps, so the first four frames represent 0.25s, 0.5s, 0.75s, and 1.0s. According to the alignment convention, the last sample of the data packet for the first frame would be at 0.25s in the file.

Figure 2 – The end of the data packet aligns with the time represented by the data packet -- approximately.

If you happen to know the details of the StarFire FSK protocol and are able to discern FSK frequencies from the waveform, then you can see in Figure 2 that the last sample of the data packet for the first frame is actually at 0.2497s in the file.
Thus it doesn't follow the alignment convention exactly. The other firing system FSK protocols are also off by a little bit. Each FSK protocol has a de facto convention for how the data packet is aligned in the frame that its time represents. The conventions were established by the reference FSK files that the firing system manufacturers distributed and that fireworks display companies have used for years or decades. Whatever the data packet alignment is in the reference FSK files, that's the de facto convention. Finale 3D exports FSK timecode in keeping with the de facto alignment conventions of the FSK reference files.

The data packet of frame number 1 for each of the FSK protocols is shown in Table 3. Table 3 does not contain any information that you generally need to know to use FSK, except that if you use Finale 3D's "File > Tools > Analyze timecode in soundtrack file" function you will see in the summary dialog and the optional log the exact data packet end times in the file, and you may wonder why they don't seem to be exactly aligned with the represented times.

Figure 3 – The "File > Tools > Analyze timecode in soundtrack file..." function shows the alignment of every FSK frame.

Figure 3 shows the timecode analysis of a Pyrodigital reference FSK file. The first frame (00:00:04:05, representing 4.5 seconds) is frame number 45 since there are 10 frames per second in Pyrodigital FSK files and the first frame is frame #1. In this reference file, the last sample of the frame's data packet reads at time 4.492 seconds in the file. By the convention that data packets end at the time they represent, this data packet is thus 4.5 seconds - 4.492 seconds = 0.008 seconds early (8 ms), which is very close to exact. In actuality, the last sample of the data packet is at 4.4925 seconds, and the dialog is rounding down to 4.492 seconds. The Pyrodigital earliness shown in Table 3 (7 ms) is from the FSK file exported from Finale 3D. As you can see in this example, the alignment of Finale 3D exported soundtracks matches the reference file within a fraction of a millisecond.

Table 3 – FSK first data packet alignment
Timecode format | Frame rate | Time represented by first frame | Position of last sample of first data packet in signal or WAV file | Earliness
Pyrodigital FSK | 10 fps | 0.10 seconds | 0.093 seconds | 7 ms
Pyromate FSK | 10 fps | 0.10 seconds | 0.093 seconds | 7 ms
FireOne FSK | 1 fps | 1.00 seconds | 0.967 seconds | 33 ms
StarFire FSK | 4 fps | 0.25 seconds | 0.249 seconds | 1 ms

You may be wondering why in Figure 3 the first frame of the Pyrodigital reference FSK file is at 4.5 seconds in the file, instead of at the beginning of the file, ending at 0.1 seconds. The answer: no good reason. The provenance of the Pyrodigital reference FSK files that have been used for decades is a mystery lost in time, and no one seems to know why there is 4.5 seconds of empty time with no data packets at the beginning of the file. The Pyrodigital controller derives no benefit from the empty time since it cannot lock onto an empty signal, so the 4.5 seconds is just wasted.

Figure 4 – Pyrodigital FSK files exported from Finale 3D begin with frame 1 at 0.093s, not frame #45 at 4.492s.

The Pyrodigital FSK files exported from Finale 3D start right at the beginning of the file as shown in Figure 4, with the first frame ending at 0.093 seconds.
Since controllers can lock onto the data packets as quickly as they arrive, you will find that Pyrodigital controllers lock onto FSK files exported from Finale 3D about 4.5 seconds quicker than they do with reference FSK files.
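The frame arithmetic above can be sketched in a few lines of Python. This is only an illustration of the notation and the alignment convention, using the frame rates from Table 1 and the first-packet values from Table 3; it is not the actual FSK encoder or decoder.

```python
FSK_FPS = {"Pyrodigital": 10, "Pyromate": 10, "FireOne": 1, "StarFire": 4}

def frame_to_hhmmssff(frame_number, fps):
    """Convert a 1-based FSK frame count to HH:MM:SS:FF notation (FF runs from 0 to fps-1)."""
    total_seconds, ff = divmod(frame_number, fps)
    hh, remainder = divmod(total_seconds, 3600)
    mm, ss = divmod(remainder, 60)
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

print(frame_to_hhmmssff(45, FSK_FPS["Pyrodigital"]))   # 00:00:04:05, i.e. 4.5 seconds

# Earliness = time represented by the first frame minus the position of the packet's
# last sample in the file (values taken from Table 3).
for name, represented, last_sample in [("Pyrodigital", 0.10, 0.093),
                                       ("FireOne", 1.00, 0.967),
                                       ("StarFire", 0.25, 0.249)]:
    print(name, round((represented - last_sample) * 1000), "ms early")
```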
It is not uncommon to want multiple versions of site layout diagrams or rack layout diagrams for different purposes, such as one version for the fire marshal or AHJ, and a different version for the setup crew. The position name filters and diagram tag filters provide a way for you to hide positions or drawings like text annotations, shapes, and icons selectively in different diagram versions. To create a diagram template for a new version of a site layout or rack layout diagram, go to the blue gear menu in the upper right corner of the racks window, and select "Create or edit diagram template...". The dialog, shown in Figure 1, presents a set of configuration options for your version of the diagram. You can set the options as you like, and then save the diagram template (blueprint) under whatever name and title you choose. After saving the diagram, you can print it from the "File > Diagrams" menu. Figure 1 – Dialog for creating a new version of a site layout or rack layout diagram Position name filter For site layouts, the "Position name filter" determines what positions are included in the diagram. Since the site layout diagram is automatically centered and magnified to include all visible positions, the position name filter can be used to remove positions from the diagram, which will then be centered and magnified around the remaining positions. For rack layouts, each position containing racks will produce a separate page of the generated pdf. The "Position name filter" removes pages of positions that don't match the filter. If the "Position name filter" is blank, then it doesn't filter out anything. If it contains a non-empty string, the pdf will include only the positions that have that string as a substring in the position name (case-insensitive substring search). For example, if the positions are named shells-01, shells-02, shells-03, ..., and front-01, front-02, front-03, then a filter value of just "s" will include all the shell positions and not the front positions. The syntax of the position name filter allows multiple elements, separated by commas. You could set the filter to "01, 02" to filter out the positions that contain neither "01" nor "02" in their names. If the positions are already in position groups in the 3D view (the little yellow flowers on the left edge of the screen), then you can refer to those position groups in the position name filter by adding an at sign in front of the group name. For example, if you have a position group named "Front", then the position name filter value of "@front" would filter to just those positions. Diagram tag filter Diagram tag filters apply to drawings like text boxes, lines, shapes, icons, etc., which you can draw in rack layouts or site layouts by clicking the "Draw mode" link in the upper left of the window. The diagram tag filters are properties of the drawings themselves, as shown in Figure 2. They refer to the diagram tags of the diagram templates, as shown in Figure 1. Figure 2 – Diagram tag filters are properties of the drawings, referring to the diagram tags of the diagram templates. Drawings with blank diagram tag filters are visible in all diagrams. If their diagram tag filters are not blank, then they are only visible in diagrams having diagram tags that are non-blank substrings of the diagram tag filters. For example, imagine you want two versions of site layout diagrams, one for the fire marshal and one for the setup crew. 
The two diagrams are to have different titles, drawn as text boxes, and they are to have different subsets of drawings. Let's imagine that the fire marshal version has icons of firetrucks in various places on the site layout, and the setup crew version has lines representing firing system cables, which are obviously not relevant information for the fire marshal. To create this example, you would create two site layout templates with different diagram tags specified in the dialog of Figure 1, such as "firemarshal" and "setupcrew". The drawn firetruck icons would contain a diagram tag filter of "firemarshal". The drawn firing system cables would contain a diagram tag filter of "setupcrew". The remaining icons and text boxes and drawings and such that are expected to appear in both versions of the diagram have blank diagram tag filters. The result is that each version contains the drawings that are intended for it.
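As a rough illustration of the two filter rules, here is a small Python sketch. The group lookup and the example names are hypothetical, and the matching logic is an approximation of the behavior described in this article, not Finale 3D's actual code.

```python
def position_matches(position_name, name_filter, groups=None):
    """Case-insensitive substring filter with comma-separated alternatives and @group references."""
    if not name_filter.strip():
        return True                               # blank filter includes every position
    groups = groups or {}                         # hypothetical {group name: [position names]}
    for element in (e.strip() for e in name_filter.split(",")):
        if element.startswith("@"):
            members = groups.get(element[1:].lower(), [])
            if position_name.lower() in (m.lower() for m in members):
                return True
        elif element and element.lower() in position_name.lower():
            return True
    return False

def drawing_visible(drawing_tag_filter, diagram_tag):
    """Blank filter = visible in all diagrams; otherwise the diagram tag must be a
    non-blank substring of the drawing's diagram tag filter."""
    if not drawing_tag_filter.strip():
        return True
    return bool(diagram_tag) and diagram_tag in drawing_tag_filter

groups = {"front": ["front-01", "front-02", "front-03"]}    # hypothetical position group
print(position_matches("shells-02", "01, 02"))              # True ("02" is a substring)
print(position_matches("front-03", "s"))                    # False (no "s" in the name)
print(position_matches("front-02", "@front", groups))       # True (group reference)
print(drawing_visible("firemarshal", "setupcrew"))          # False (cable lines hidden)
print(drawing_visible("", "firemarshal"))                   # True (blank filter, all diagrams)
```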
Fan row racks like the AVM SFX-20 racks, or the CraigCo MinCom racks, or the PyroDigiT PLS30E/45P+ racks can be configured in Finale 3D with pre-defined tube holder angles of your choice, or with adjustable tube holder angles that accommodate effects at any angle (see Fan row racks). In the real world, though, many fan row racks have physical constraints that restrict the tube holders to angle ranges.

Figure 1 – Tube holders on the row ends may be able to lean more than tube holders in the middle of rows.

Often the angle ranges are different depending on whether the tube holders are on the ends of the rows or in the interiors, as shown in Figure 1. To accommodate these physical constraints, Finale 3D supports angle range specifications in the rack definitions like "-90..50" to indicate a range from minus 90 degrees to positive 50 degrees.

Figure 2 – Tube holder angles and angle ranges are written as comma-separated lists.

Tube angle range constraints are only supported for one rack structure, as chosen in the rack definition dialog: Single-shot rack, adjustable fan angles of tubes in each row; or more colloquially, "Fan row racks". The tube holder angle range specifications go into the "Tube fan angles" fields for each row, as shown in Figure 2.

Figure 3 – The comma-separated lists are held in the "Tube fan angle" fields for the rows.

Figure 3 shows a definition for the CraigCo MinCom X7 35 Shot Rack. When defining racks in Finale 3D, it is imperative that the rows of the rack definition match the rows of the physical rack. Looking carefully at the rack in Figure 1, you can see seven rotation rods with clamps. The square tube holders rotate around the rotation rods. In rack definitions, the "rows" are perpendicular to the rotation rods. Thus this rack has five rows. Just count the number of tube holders on a rod -- five!

The language used in rack definitions, such as the pre-wired pin order "By rows, left to right", refers to rows vertically, aiming toward the audience. In shows, most single-shot racks need to be rotated 90° counter-clockwise from this orientation to accommodate left/right angles, which is typically how you would see them in the rack layout view in Finale 3D. To avoid making you read the rack's name and pin numbers as sideways text, the rack definition has the option of defining the "standard orientation" to be rotated 90° counter-clockwise. Finale 3D also draws the clamps on the ends of the rods, so you can always tell which ways the tube holders can rotate. Please see Rack “row” and standard orientation for more information.

The specifications of Figure 1 indicate the tube holders on the ends of the rows can aim outward horizontally at 90 degrees. The tube holders in the interiors of the rows are restricted to 50 or 53 degrees, depending on the end tube holders. Angle ranges in Finale 3D can't have dependencies, so the conservative definition of allowable angles for these rows is: -90..50,-50..50,-50..50,-50..50,-50..90.

Syntax for angles in the "Tube fan angles" field

The "Tube fan angles" field can be blank, meaning no restrictions on the angle, or it can contain a comma-separated list of specific angles or angle ranges. Angle ranges are in the syntax "X..Y" as in the previous examples. Specific angles are just numbers ("X"), and are equivalent to "X..X". Since it is inconvenient to write -180..180 to mean no restriction for a particular tube holder, the syntax also supports leaving elements in the comma-separated list blank. A blank element is equivalent to -180..180.
Thus "-180..180,-180..180,-180..180,-180..180,-180..180" has the same meaning as ",,,,".

Sorting by "Most Horizontal Tilt -- Single-Shot"

Racks with angle ranges require some accommodations when addressing the show. When the tube holders don't all have the same angle ranges, which is usually the case if you are using angle ranges, then the addressing algorithm that assigns tube holders to effects can make bad decisions about which effects to put in which tube holders. The decisions won't technically be wrong, because they will always satisfy the angle constraints you specify, but the decisions may not use the tube holders that have the best fitting angle ranges for the effects assigned to them.

A simple example is this: imagine a show that contains just ten effects, five aiming horizontally to the left and five aiming vertically straight up; the show will use one or more adjustable fan angle racks like that shown in Figure 1, with five fan rows. If the addressing function first assigns the straight up effects to the tube holders on the left ends of the rows (the only tube holders capable of aiming fully to the left), then none of the other tube holders could accommodate the remaining five effects. The addressing function would need to add extra racks to accommodate the remaining effects, even though a single rack could accommodate all ten effects if they were placed in the right tube holders.

The addressing sort criterion "Most Horizontal Tilt -- Single-Shot" offers a solution to this misallocation of tube holders at the ends of rack rows. Sorting by "Most Horizontal Tilt -- Single-Shot" first, the addressing function will seek first to find tube holders for the effects with the most extreme angles, those fitting only in the end tube holders. Returning to the example of ten effects with five aiming horizontally and five vertically, the horizontally aiming effects will be assigned first while the tube holders on the left ends of the rack rows are still available. The remaining five effects fill easily into any tube holders. Thus the ten effects require only a single rack.

The "Most Horizontal Tilt -- Single-Shot" sorting criterion applies equivalently to left leaning and right leaning effects, causing the rack rows to fill from the ends toward the center if the end tube holders are configured with the extreme angle ranges. For the common physical specifications of racks like the one shown in Figure 1, filling from the ends results in a balanced and efficient use of the tube holders.

A really good addressing dialog configuration for racks with angle range constraints

If a launch position contains single-shot racks of different kinds, and if some of the racks can hold larger effects than others, then the addressing algorithm needs to consider both the angle and the size of the effect. The sort criteria must prioritize both considerations: large effects that can only fit in the large tube holder racks, and effects tilted more than 50° that can only fit in the end tube holders. The addressing dialog configuration of Figure 5 works in almost all circumstances. The terms are explained in detail in Table 2 and Table 3 of Racks with pre-wired pins.

Figure 5 – Combining "Tilt > 50° -- Single-Shot" with "Size >= 50MM -- Single-Shot" optimizes for both size and angle.
The "Tilt > 50°" term prioritizes the effects that can only fit in the end tube holders, and the "Size >= 50MM" term ensures that of those prioritized effects, the large ones that need the end tube holders of the large tube holder racks are allocated first. The "Most Horizontal Tilt -- Single-Shot" term is included in the addressing dialog configuration as a lower priority just to cause racks to fill with a balanced set of left and right leaning effects.

Since the sort criteria intermingle left and right leaning effects, the addressing algorithm tends to allocate interior tube holders aiming toward each other on colliding paths. The "Re-arrange effects in adjustable angle racks to avoid collisions" option at the bottom of Figure 5 fixes the colliding angles by re-arranging the tube holder assignments at the end of the addressing function after their initial assignments (see Re-arrange effects in adjustable angle racks to avoid collisions). This function is guaranteed to resolve all collisions as long as the most extreme angle ranges in the rack definitions are at the ends of the rows.

Using "fake" pre-wired pins to achieve a nice ordering of pins in the rack

You may not care about the pin order within the rack, but if you do care, the addressing dialog configuration of Figure 5 doesn't achieve high marks by itself. Since the effects are assigned to tube holders on the ends first, and then the tube holder assignments are rearranged, the end result is anything but a natural counting sequence of pin numbers along the rack rows. If that is what you want, you can use the "Pre-wired pins" option for racks as described in Racks with pre-wired pins to achieve an excellent result.
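To make the angle-range syntax concrete, the Python sketch below parses a "Tube fan angles" field and checks whether an effect's tilt angle fits a given tube holder. It illustrates the syntax rules described in this article, not the addressing algorithm itself.

```python
def parse_fan_angles(field):
    """Parse a comma-separated list of angles ("X") and angle ranges ("X..Y").

    A blank field means no restriction (None); a blank element means -180..180.
    Returns one (low, high) tuple per tube holder in the row.
    """
    if not field.strip():
        return None
    ranges = []
    for element in (e.strip() for e in field.split(",")):
        if not element:
            ranges.append((-180.0, 180.0))                     # blank element = unrestricted
        elif ".." in element:
            low, high = element.split("..")
            ranges.append((float(low), float(high)))
        else:
            ranges.append((float(element), float(element)))    # "X" is equivalent to "X..X"
    return ranges

def fits(angle, angle_range):
    low, high = angle_range
    return low <= angle <= high

row = parse_fan_angles("-90..50,-50..50,-50..50,-50..50,-50..90")
print([fits(-90, r) for r in row])   # only the left end tube holder can aim fully left
```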
The analyze timecode dialog and its timecode log file show you a summary and the actual timecode frames that are in a song file or that have been received as timecode input from an external source. For song files, the summary is convenient for many everyday uses since a human can't tell from looking at the waveform or listening to the timecode channel of an audio file what the timecode range of the song is. For song files and for timecode input, the log file is useful for debugging, answering such questions as whether the timecode signal has any jumps or glitches, how long it should take a device to lock onto the timecode, whether its frame rate is consistent, when it starts or ends, and so on.

The "File > Tools > Analyze timecode in soundtrack file..." function works for all firing system versions of FSK (see FSK) and for all the common variations of SMPTE (see SMPTE). The function examines the file, testing whether it contains valid timecode in any of the timecode formats.

Alignment on the timeline

The timecode information dialog looks like the dialog of Figure 1. If you use the function, "File > Tools > Analyze timecode in soundtrack file...", you'll get a dialog that looks like this for any song file you choose, WAV or MP3. The dialog confirms that the song file actually does contain a timecode signal, and tells you what channel it is on.

Figure 1 – The best alignment for the song is usually -- but not always -- the same as its first frame time.

The dialog tells you the best alignment for the song on the timeline. Usually, the best alignment for the song on the timeline is the time of the first frame in the timecode signal, but depending on how the song file was created, that may not be the case. The timecode signal in the file may not start at exactly the beginning of the file, and may not start with a valid frame. If the timecode channel was cropped to a timecode range and copy/pasted into the song in an audio editor, there's a good chance that the first frame may be partially clipped and thus invalid. There's also a chance that a copy/pasted timecode signal might not be pasted at the exact beginning of the song. In all these scenarios, the first valid frame of the timecode might not correspond to the beginning of the file.

Imagine that the first frame of the timecode is 04:00:00:00 but that the timecode channel happened to be pasted at an offset 1 minute into the song. Then the proper alignment of the song based on its timecode would be 4 hours minus 1 minute, or 03:59:00:00. Importing the song at that position on the timeline causes its 04:00:00:00 frame to align with exactly 4 hours on the timeline.

Alignment of FSK

The firing system versions of FSK follow de facto alignment conventions specific to each firing system. As described in FSK, the conventions do not match up the data packets to the times they represent exactly to the millisecond. They are a little bit off. Depending on the firing system, the data packets in the FSK end anywhere from 0 ms to 100 ms earlier than the times the data packets are meant to represent.
The analyze timecode dialog and its log file show the actual alignment of the data packets in the examined file, but the "Best alignment on timeline" shown in the dialog and used to align songs when importing them is adapted to stay consistent with the firing system reference FSK files: If the calculated data packet alignment is less than or equal to 100 ms, the "Best alignment on timeline" is taken to be simply "Beginning of timeline" rather than the actual delta necessary to align the data packets to the times they represent. That's what you want, so that's "best."

Frame rate and drop frame

SMPTE timecode doesn't contain the frame rate in its encoded data explicitly, but it does contain a flag indicating whether the encoded frames are counted using the drop frame system (DF) or non-drop frame (NDF), as explained in Timecode frame rates and drop frame. The frame rate of SMPTE timecode is implicitly the frequency of the frames in the encoded signal. The dialog of Figure 1 displays the frame rate and DF/NDF for song files based on this information.

For timecode input from an external source that comes into the computer as MIDI MTC timecode, Finale 3D displays the same information but it comes from a different calculation. A MIDI MTC signal does contain an explicit frame rate in its encoded data, which Finale 3D uses, but it does not contain an explicit drop frame flag. Finale 3D determines the drop frame system of MIDI MTC timecode input based on the frame rate (inferring that any frame rate other than 29.97 fps is NDF) and based on the presence or absence of the frame numbers that don't exist in the drop frame counting system.

Bad frames, jumps, and pauses

Since timecode can be added to a song file using an audio editor, there's no guarantee that the timecode signal starts at the beginning, or is contiguous. A soundtrack can be constructed in an audio editor that has different sections with different timecode ranges in them and gaps in between. The timecode ranges may not even be in chronological order. When Finale 3D reads the timecode of a song file, it keeps track of the number of gaps in the timecode frame sequence, and of discontinuous jumps. Finale 3D also keeps track of the number of bad frames, which are frames that are corrupted or whose encoded times are clearly not part of any progressing sequence of frames. It is not uncommon to see frames with times of 30:00:00:00 or 00:00:00:00 in the middle of a timecode sequence that is nowhere near 30 hours or the beginning of the show, depending on the quality of the encoded timecode signal.

You may have no idea where the timecode signal in a file or from an external source originated -- a copy/pasted WAV file? A file that has been compressed as an MP3? A hardware device? A recorded or broadcast audio signal? The bad frame count in the timecode information dialog gives you an idea of the quality of the signal. Since most firing systems and other production hardware devices lock onto and track a timecode signal while using an internal clock to provide smooth playback, the presence of bad frames in the timecode signal is not necessarily alarming. Look at the count of jumps and pauses. If those numbers align with your expectations, then the timecode signal is probably fine even if it has hundreds of bad frames. But the bad frames may be a reason for you to investigate further.

Timecode log file

The timecode log file shows the actual frames encoded in the song file or received as input.
The file may be long, as even 20 minutes of 30 fps is 36000 frames. An excerpt from a log file is shown in Figure 2.

// Format: [timestamp] (deltas from last msg: timestamp, frame) received msg HH:MM:SS:FF @ received frame rate + timecode input offset -- > interpretation
[0.018] (+0 +0) 00:00:59:28 @ 30 fps NDF + 0ms -- > 00:00:59:28 30 fps NDF < non-tracking >
[0.052] (+34 +94) 00:00:59:29 @ 29.97 fps NDF + 0ms -- > 00:00:59:29 29.97 fps NDF < non-tracking >
[0.085] (+33 +33) 00:01:00:00 @ 29.97 fps NDF + 0ms -- > 00:01:00:00 29.97 fps NDF < non-tracking >
[0.118] (+33 +33) 00:01:00:01 @ 29.97 fps NDF + 0ms -- > 00:01:00:01 29.97 fps NDF < non-tracking >
[0.152] (+34 +34) 00:01:00:02 @ 29.97 fps NDF + 0ms -- > 00:01:00:02 29.97 fps NDF < non-tracking >
[0.185] (+33 +33) 00:01:00:03 @ 29.97 fps NDF + 0ms -- > 00:01:00:03 29.97 fps NDF < non-tracking >
[0.218] (+33 +33) 00:01:00:04 @ 29.97 fps NDF + 0ms -- > 00:01:00:04 29.97 fps NDF < non-tracking >
[0.252] (+34 +34) 00:01:00:05 @ 29.97 fps NDF + 0ms -- > 00:01:00:05 29.97 fps NDF < locking >
[0.285] (+33 +33) 00:01:00:06 @ 29.97 fps NDF + 0ms -- > 00:01:00:06 29.97 fps NDF
[0.318] (+33 +34) 00:01:00:07 @ 29.97 fps NDF + 0ms -- > 00:01:00:07 29.97 fps NDF
[0.352] (+34 +33) 00:01:00:08 @ 29.97 fps NDF + 0ms -- > 00:01:00:08 29.97 fps NDF

Figure 2 – Example log file

The log file lines begin with the time stamp of each received message (frame), followed by the time deltas from the previous message measured in real world time and in the difference between the HH:MM:SS:FF encoded times in the messages. If those two time deltas are exactly the same, then the encoded timecode frame sequence is progressing in synch with real world time. In practice, the time deltas aren't always in synch exactly, for a number of reasons including latency and burstiness in the timecode event processing for timecode input from an external signal.

The second message in Figure 2 is an example in which the deltas between the times represented by the frames are not the same (+34 versus +94 milliseconds). But if you compare the frames 00:00:59:28 to 00:00:59:29, that would seem to be a single frame difference, or about 33ms. Where did the +94 come from? Looking more carefully at the first and second messages, the first message is interpreted to be at 30 fps, whereas the second message is at 29.97 fps. Recall that SMPTE timecode does not contain an explicit frame rate in the encoded signal. The frame rate is determined based on the frequency of the encoded frames in the signal. The difference between 29.97 fps and 30 fps isn't much, so after a single frame, the signal doesn't yet contain enough precision for the reader to ascertain the frequency exactly. It happens, in this example, that the reader interprets the first frame as 30 fps, and the second (more accurately) as 29.97 fps. The real world time of 00:00:59:28 in 30 fps is 59.933 seconds. The real world time of 00:00:59:29 in 29.97 fps is 60.027 seconds. The difference? 94 ms.

To the right of the arrow is the interpretation of the frame, which begins the same as the frame payload itself for song files. For MIDI MTC timecode input, the interpretation includes the DF/NDF determination that is not present to the left of the arrow (not shown in this example). The bracketed comments in the interpretation, such as <non-tracking> and <locking>, show how the timecode reader interprets the frame in the context of the surrounding sequence of frames.
You can see that it takes a few received frames for the timecode reader to lock onto the sequence. Other hardware devices may take more or fewer frames to lock on but follow a similar sort of pattern. Other comments such as <bad> and <jump> and longer comments for timecode input like <pausing because no messages in XXX ms> are also possible.
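The delta arithmetic behind the +94 in the log excerpt can be reproduced with a few lines of Python. This sketch only mirrors the calculation explained above for NDF frames; it is not the actual timecode reader.

```python
def ndf_wall_clock_seconds(hh, mm, ss, ff, fps):
    """Wall-clock time of an NDF HH:MM:SS:FF frame played at the given frame rate.

    NDF counts round(fps) frames per labeled second, so convert the label to a total
    frame count and divide by the actual frame rate.
    """
    frames_per_labeled_second = round(fps)
    total_frames = (hh * 3600 + mm * 60 + ss) * frames_per_labeled_second + ff
    return total_frames / fps

t1 = ndf_wall_clock_seconds(0, 0, 59, 28, 30.0)          # first message, read as 30 fps
t2 = ndf_wall_clock_seconds(0, 0, 59, 29, 30000 / 1001)  # second message, read as 29.97 fps
print(round(t1, 3), round(t2, 3))                        # 59.933 60.027
print(round(t2 * 1000) - round(t1 * 1000), "ms")         # 94 ms, the frame delta in the log
```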
Timecode is an audio track that can be interpreted by your computer or firing system as a sequence of time stamps. Time stamps are typically written as HH:MM:SS:FF.

Table 1 – SMPTE time stamps
HH | MM | SS | FF
Hours 0-23 | Minutes 0-59 | Seconds 0-59 | Frames 0-29 (depends on frame rate and drop frame)

You can think of timecode as a recording of a fast talking human clock reader barking out: “0 hours, 0 minutes, 0 seconds, 0 frames, <short pause>, 0 hours, 0 minutes, 0 seconds, 1 frame, <short pause>, 0 hours, 0 minutes, 0 seconds, 2 frames, etc.” The clock reader would need to be talking fast because frames are typically about 1/30th of a second. So if the clock reader were reading every frame, he'd be talking very, very fast, with each time stamp utterance taking just a fraction of a second. Timecode, however, is meant for computers and devices like firing systems to understand, so the audio encoding of these time stamps is in a protocol that is efficient for computers to decode, not humans.

There are several protocols for encoding timecode in an audio signal, some that are specific to firing systems or other hardware devices, and some that are open standards. SMPTE timecode is an open standard timecode encoding that is recognized and understood by almost all timecode processing hardware, so often when people say "timecode" they mean more specifically "SMPTE timecode". This article discusses the frame rates and frame numbers specifically for SMPTE timecode (see LTC details).

Frame rates

The frame rate of SMPTE timecode is simply the number of time stamps per second in the signal. The commonly supported frame rates are:

Table 2 – SMPTE frame rates
Frame rate | Description
30 FPS | American television system ATSC
29.97002997 FPS (or "29.97" for short) | American television system NTSC
25 FPS | European television system PAL
24 FPS | Film

The SMPTE time stamps do not contain within them any explicit indication of the frame rate. If you played a SMPTE timecode track that was recorded at 30 FPS at 0.1% slower than the normal playback rate, you would have, identically to several decimal digits, 29.97002997 FPS SMPTE (the non-drop frame version). The exact frame rate of 29.97002997, which is a repeating decimal string, is 30000 frames / 1001 seconds, abbreviated as "29.97 FPS SMPTE" for short. The "drop frame" version of timecode, for readers who are looking ahead a paragraph or two, does not change the frame rate of the SMPTE time stamps.

Thus in Finale 3D, the "Show > Show settings > Set timeline snap-to resolution" includes only the frame rates of Table 2 and a few unrelated options; there is no mention of drop frame or non-drop frame in the timeline snap-to resolution options because the timeline, like a ruler, is delineated in equally spaced units. A ruler is delineated in 1/16th inch units on the imperial side and millimeters on the metric side. The timeline is delineated in milliseconds by default or the snap-to resolution chosen by the user, which is equally spaced units of 1/30th of a second or approximately 1/29.97ths of a second or 1/25ths of a second or 1/24ths of a second.

Figure 1 – Timeline snap-to resolutions include the four SMPTE timecode frame rates.

Frame numbers

The 30 FPS SMPTE time stamps count in frames sequentially from 0 to 29. Thus the stream of HH:MM:SS:FF time stamp values progresses in synch with the progression of time in the real world: after 30 frames at 30 FPS, one second has gone by in the real world and one second has gone by in the HH:MM:SS:FF time stamp representation.
The 25 FPS and 24 FPS SMPTE time stamps count in frames sequentially from 0 to 24 and from 0 to 23 respectively. So they also progress in synch with time in the real world.

If the 29.97 FPS SMPTE time stamps counted frames sequentially from 0 to 29, just like the 30 FPS time stamps, then the time stamp values would not remain in synch with time in the real world. 29.97 FPS is slightly slower than 30 FPS, so the time stamp HH:MM:SS:FF in the 29.97 FPS stream following a sequence of 30 frames would say that exactly one second has transpired while in reality slightly more than one second has transpired. Projecting this difference out to a longer period of time, after 10 minutes the 29.97 FPS time stamps would fall behind time in the real world by 0.6 seconds. If you set your watch by the time stamp values, your watch would be running slowly. If you executed a 30 FPS firing system script using sequential time stamps playing back at 29.97 FPS, the script would play back slowly, and would become out of synch with the music.

To correct for this difference a new, not entirely sequential counting system was invented to count the frames of 29.97 FPS SMPTE. The new counting system counts from 0 to 29 for most seconds, but skips frame 0 and frame 1 for some seconds, jumping directly from frame 29 of the previous second to frame 2 of the next second. This counting system is called "drop frame."

SMPTE 29.97 FPS DF (drop frame)

SMPTE 29.97 FPS drop frame, or "29.97 FPS DF" for short, is a variation of 29.97 FPS SMPTE that keeps closely in synch with the progression of time in the real world by counting frames in a manner that is not entirely sequential. To disambiguate whether frame numbers in SMPTE time stamps are counting in drop frame (DF) or non-drop frame (NDF), the punctuation of the HH:MM:SS:FF notation is adjusted for DF, substituting a semicolon for the last colon.

Table 3 – Notation for the two SMPTE 29.97 FPS versions
Frame rate and version | Time stamp notation
29.97 FPS DF | HH:MM:SS;FF
29.97 FPS NDF | HH:MM:SS:FF

Both the DF and NDF version of 29.97 FPS SMPTE have a frame rate of 29.97 FPS. The difference is the NDF version counts all frames sequentially from 0 to 29, while the DF version skips in its counting sequence from frame 29 of one second to frame 2 of the next second for some of the seconds. The HH:MM:SS;FF time stamps of the DF stream count sequentially and lose ground against real world time for the first minute of play, and then at one minute the counting sequence skips from 00:00:59;29 to 00:01:00;02. The 00:00:59;29 time stamp is slightly later than 59 seconds and 29/30ths in real world time, but the 00:01:00;02 time stamp, which immediately follows, catches up.

During every minute that passes, the DF time stamps lose ground against real world time, and then at the end of every minute the DF time stamps skip two frame numbers to catch up. The catching up mechanism of skipping two frame numbers slightly overcompensates for the lost ground of the preceding minute, so very gradually the catching up mechanism gains ground against real world time. To avoid gaining ground continually, the DF frame counting system doesn't skip the two frames at the beginning of minutes that are divisible by 10, which includes the first minute of play at minute zero. In the SMPTE timecode signal, the time stamps contain a "drop frame" flag along with the HH:MM:SS:FF fields, indicating whether the frame counting is sequential or following the drop frame counting system.
The drop frame flag is defined for both 29.97 FPS SMPTE and also for 30 FPS SMPTE, though uses for 30 FPS DF are rare since it doesn't keep in synch with real world time. More often than not if you see the words "30 FPS DF" the intended meaning is "29.97 FPS DF", so to avoid confusion Finale 3D does not support 30 FPS DF. The NDF version of 29.97 FPS timecode also doesn't keep in synch with real world time, but there are applications that use it so Finale 3D does support it. The list of frame rates and versions that Finale 3D supports for effect time representations is in Figure 2. You can select the time format from "Show > Set show information..." or "Show > Show settings > Set effect time format". Figure 2 – Effect time formats include the four SMPTE timecode frame rates and both versions of 29.97 FPS: DF and NDF SMPTE 29.97 FPS NDF (non-drop frame) SMPTE 29.97 FPS non-drop frame, or "29.97 FPS NDF" for short, progresses through HH, MM, and SS significantly slower than the progression of time in the real world (called "wall clock" time) by 1.2 seconds for every 20 minutes. It advances the HH, MM, and SS by one second after the passage of every 30 frames, but the frames are playing back at only 29.97 per second, so the HH, MM, SS advance at a rate of 29.97 / 30 as fast as wall clock time. If a firing system controller is being driven by SMPTE 29.97 NDF timecode, it will trigger the script events at this slow rate. Some controllers have an option to compensate for the slower-than-wall-clock rate of SMPTE 29.97 NDF, but compensation in the controller isn't well suited for circumstances like concert soundtracks with multiple songs beginning at different agreed upon SMPTE times since the songs don't necessarily use the same timecode format, and since the operator of the controller may not know in what order the songs will be played. For circumstances like these, Finale 3D offers better suited options to adjust the event times in the script for SMPTE 29.97 NDF, described here: SMPTE 29.97 NDF (non-drop frame). When adding a soundtrack to your show, if you elect for Finale 3D to split the soundtrack's timecode sections apart and automatically position them independently on the timeline, Finale 3D will position them on the timeline at the wall clock time interpretation of the SMPTE HHMMSSFF timestamps, even if the SMPTE timecode sections internally are in SMPTE 29.97 FPS NDF. Similarly, if you slave the playhead in Finale 3D to external timecode input (see Timecode basic instructions), the playhead will be positioned according to the wall clock time interpretation of the timestamps. Timecode on the timeline You can change the effect time format in Finale 3D to any of the frame rates and versions in Figure 2. The timeline, which is delineated in real world time as far as hours, minutes and seconds, shows the playhead time in the effect time format. If you are scripting in an environment in which you are communicating points of the show with other people using the timecode times, it is most convenient to set both the timeline snap-to resolution and the effect time format to match the timecode of the show. That will cause the playhead to snap to the times corresponding to valid frames of the show and to display them correctly. Using the ruler analogy, the playhead would be snapping to the 1/16th inch or 1mm marks on the ruler, and would never lie in between the marks. 
If the playhead does lie in between the frames of the effect time format, the playhead's time will display with an additional "+ms" remainder to reflect the exact time in milliseconds. Since the timeline's hours, minutes, and seconds are in real world time, while the playhead's time is in the effect time, you can see on the timeline exactly how 29.97 DF or NDF get out of synch with time in the real world. It is analogous to the ruler that shows imperial markings on one side and metric markings on the other -- for any length on the ruler, you can compare the two measuring systems' representation for that specific length. To see, set the effect time format to "29.97 FPS DF" and leave the snap-to resolution at milliseconds. Then move the playhead to exactly 1 minute in real world time using the menu item, "Show > Goto time" and typing just "1m" into the input field. The shorthand "1m" or "1h" or any number of seconds with decimal point always indicates real world time, whereas the "00:01:00;00" format would indicate the one minute time stamp of 29.97 FPS DF timecode -- not the same as one minute in real world time. Figure 3 – One minute in real world time corresponds to "00:00:59;28" in 29.97 FPS DF plus 7 milliseconds. Figure 3 shows the playhead at one minute in real world time, and simultaneously displays "00:00:59;28+07ms" in the 29.97 FPS DF effect time format. As you can see, the timecode representation hasn't yet reached frame "00:00:59;29" and has one more frame to go after that to reach the next minute. Since frames are about 33ms long, it is about 33 - 7 + 33 = 59ms behind real world time. At the one minute mark, however, the drop frame counting system catches up by skipping two frames, amounting to 2 * 33 = 66ms. Set the snap-to resolution to 29.97 FPS, and drag the playhead to the right. Observe as you move the playhead from frame to frame that the frame numbers count: "00:00:59;28", then "00:00:59;29" and then... Figure 4 – The DF frame after "00:00:59;29" is "00:01:00;02". Frame numbers "00:01:00;00" and "00:01:00;01" don't exist. Frame numbers "00:01:00;00" and "00:01:00;01" don't exist in 29.97 FPS DF, so the frame after "00:00:59;29" is "00:01:00;02". Change the effect time format to 29.97 FPS NDF and you'll see the equivalent NDF frame number as shown in Figure 5 zoomed in. Figure 5 – The NDF frame after "00:00:59:29" is "00:01:00:00" and it is well after the 1m mark in real world time. The frame "00:01:00:00" in 29.97 FPS NDF is well after one minute in real world time, as you can see by the gap between the playhead and the 1m mark on the timeline in Figure 5. The timeline illustrates why the drop frame system was necessary for 29.97 FPS frame rates.
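The drop frame counting rule can be checked with a short Python sketch. It is illustrative only; it follows the rule stated in this article (skip frame numbers 0 and 1 at the start of every minute not divisible by 10) and reproduces the values visible in Figures 3 and 4.

```python
FPS_2997 = 30000 / 1001            # the exact 29.97 FPS frame rate

def df_to_wall_clock_seconds(hh, mm, ss, ff):
    """Wall-clock time of a 29.97 FPS DF time stamp HH:MM:SS;FF.

    The DF label skips two frame numbers at every minute not divisible by 10, so the
    label overstates the true frame count by 2 frames per skipped minute.
    """
    total_minutes = hh * 60 + mm
    dropped = 2 * (total_minutes - total_minutes // 10)
    frame_index = (hh * 3600 + mm * 60 + ss) * 30 + ff - dropped
    return frame_index / FPS_2997

print(round(df_to_wall_clock_seconds(0, 0, 59, 28), 3))   # 59.993 -- about 7 ms before 1 minute (Figure 3)
print(round(df_to_wall_clock_seconds(0, 1, 0, 2), 3))     # 60.06 -- the frame after 00:00:59;29 (Figure 4)
```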
Synchronizing events in a production controlled by other people is an agreement between the parties involved about what timecode ranges correspond to what elements of the production. Consider a concert in which the performer will perform 10 to 15 songs. The pyro production company, lighting company, and possibly other special effects companies may all have contributions to the production that are designed specifically for each of the songs. The timecode agreement is a list of songs and the timecode range that each song corresponds to. The agreement typically allocates a one-hour range of timecode in a 24 hour "day" for each song, making the list simple for everyone. At the time of the concert, the songs can be performed in any order. At the onset of each song, the production operator will start playing the timecode range corresponding to that song, which will trigger the pyro, lights, and special effects to play along in synch with the song. Matching the timecode range provided by others When other people control the production using timecode, you need to know what timecode ranges correspond to your part or parts of the show. If the overall show begins at timecode 00:00:00:00, and your part of the show is at the beginning of the show, then you can design your part of the show beginning at zero on the timeline in Finale 3D and export your script normally. When the production operator plays the timecode beginning with 00:00:00:00, your part of the show will play along with it. If your part of the show is not at the beginning, then you need to make sure your script events correspond to the correct timecode range. Typically, timecode ranges are allocated in one hour slots of a 24 hour "day". Say your part of the show is at the hour slot beginning at 7 hours (07:00:00:00). It is imperative that your script events begin at 7 hours, not zero hours. In Finale 3D you have two choices for getting your script events to begin at 7 hours instead of zero: 1) you can script your show on the timeline beginning at 7 hours, or 2) you can script your show at zero on the timeline and set the "Firing system export offset" to add 7 hours to the event times in the exported script. Moving songs on the timeline to their timecode ranges Scripting your show beginning at whatever time range is required is your only option if you are contributing multiple parts to the production at various timecode ranges scripted in the same show file. While it may seem disorienting to have a timeline that is 15 or 20 hours long with short, 5-minute songs at the beginnings of various hours, that's what people do. If you are told the timecode ranges for each song, you need to import all the songs. If the song files contain their timecode on one of the channels, Finale 3D will automatically position them on the timeline according to the timecode. If the files are just audio, you need to slide them on the timeline to the correct timecode ranges and then script the parts of your show to correspond to the music. If you are given a single concert soundtrack file containing all the songs with their independent timecode ranges, then you can just import the soundtrack file ("Music > Add song...") and Finale 3D will automatically split it up and align the sections to the timeline as shown in Figure 1. Please see Concert soundtracks containing multiple SMPTE timecode sections for more information. Figure 1 – A soundtrack with eight songs beginning on SMPTE hours 1-8 is eight hours long! 
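A timecode agreement of this kind is easy to represent as a simple table. The sketch below uses hypothetical song names and hour slots just to show the arithmetic of converting an agreed slot into a position on the timeline.

```python
# Hypothetical agreement: each song gets a one-hour slot of the 24 hour "day".
AGREEMENT = {"Opener": "01:00:00:00", "Ballad": "02:00:00:00", "Encore": "07:00:00:00"}

def slot_to_timeline_seconds(hhmmssff, fps=30):
    """Convert an agreed HH:MM:SS:FF slot start into seconds on the timeline."""
    hh, mm, ss, ff = (int(x) for x in hhmmssff.split(":"))
    return hh * 3600 + mm * 60 + ss + ff / fps

for song, slot in AGREEMENT.items():
    print(f"{song}: starts at {slot_to_timeline_seconds(slot):.0f} s on the timeline ({slot})")
```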
Finale 3D also has the function, "File > Tools > Analyze timecode in soundtrack file..." which reads the timecode of any chosen soundtrack file without importing it into your show, displays information about the timecode, and optionally saves a text log file of the timecode data in a human readable form so you can see exactly what timecode times the file contains, all of them. See The “Analyze timecode in soundtrack file…” function for SMPTE and FSK for more information. Scripting at zero on the timeline and adding export offsets Scripting your show at zero on the timeline and setting the "Firing system export offset" to begin at the required timecode range is an option if your show file contains only one part, and thus needs only one offset to the required timecode range. The "Firing system export offset" field is in the "Show > Set show information..." dialog, along with another field, "Timecode export offset". If you are the one supplying a soundtrack for your part of the show with music and timecode at the correct range, then before exporting the soundtrack with the function "File > Export > Export soundtrack" it is imperative that you also set the "Timecode export offset" to the correct range, matching the "Firing system export offset". You can verify the soundtrack file you export has the correct timecode range by using the function, "File > Tools > Show timecode statistics for song file...".
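For the second choice (scripting at zero and using the offsets), the arithmetic at export time is simply an addition. A minimal sketch, assuming the 7-hour slot from the example above:

```python
EXPORT_OFFSET_S = 7 * 3600          # a "Firing system export offset" of 07:00:00:00

def exported_event_time(timeline_seconds):
    """An event scripted relative to zero on the timeline, shifted into the agreed timecode range."""
    return timeline_seconds + EXPORT_OFFSET_S

print(exported_event_time(0.0))     # 25200.0 s, i.e. 07:00:00:00
print(exported_event_time(241.0))   # 25441.0 s, i.e. 07:04:01:00
```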
Synchronizing events to music in a production you control is essentially an automated equivalent to the manual task of "press play on the soundtrack player and press start on the firing system at exactly the same time." The automated equivalent involves combining a timecode signal as one channel of the soundtrack with music as the other channel or channels of the same soundtrack, ensuring the timecode signal and the music play in perfect synchronization when the soundtrack is played. While the music channels are routed to speakers for people to hear, the timecode channel is routed into the firing system, which is configured to fire the script events according to the times in the timecode.

Creating a soundtrack with timecode

The mechanics of synchronizing events involve three resources: the script, the timecode recording, and the music recording. All synchronization is based on the relationship between these resources. The script file contains a list of events, or triggers, at specific times. Imagine an event as, "Trigger module 1, pin 1 at 00:04:01:00 (0 hours, 4 minutes, 1 second, 0 frames)". The timecode is an audio signal that hardware devices like firing systems or your computer can decode into a sequence of times, represented as HH:MM:SS:FF streaming in sequentially at some frame rate. Imagine timecode as a fast talking clock reader, barking out: "0 hours, 0 minutes, 0 seconds, 0 frames, <short pause>, 0 hours, 0 minutes, 0 seconds, 1 frame, <short pause>, 0 hours, 0 minutes, 0 seconds, 2 frames, etc." Lastly, the music is the third resource, and it is just a regular digital song.

Combining a timecode signal on a soundtrack in one channel with music on the other channels creates a correlation between the HH:MM:SS:FF times in the timecode and points in the music. The timecode for the beginning of the song could begin with 00:00:00:00, or it could begin with other timecode times, such as 07:00:00:00. Whatever the timecode range is on the soundtrack, that is the range that correlates with the music.

When a firing system is configured to operate by timecode, it plays the script events whose times correspond to the timecode signal streaming in. Some older firing systems require the script event times to match the timecode times exactly, but most firing systems today have an internal clock that locks onto the incoming stream of timecode times and progresses along with it continuously. If the timecode signal drops out, the firing system typically stops after losing the lock; if the timecode jumps discontinuously, the firing system typically recognizes the jump after a short delay and jumps along with it. Thus, by playing the script events using its internal clock, the firing system keeps pace with the timecode streaming in as far as speed of play, starting, stopping, and jumping, but is resilient to minor glitches in the timecode signal, and doesn't skip script events that fall in between the times in the timecode signal. Whereas the timecode signal is represented as HH:MM:SS:FF in units of frames, the script events can be represented in decimal seconds or milliseconds or any other time representation. The representations of script events and timecode times don't need the same syntax because they all correspond to real time points on a timeline.
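The tracking behavior described above can be illustrated with a deliberately simplified Python sketch. Real firing systems implement this in firmware with lock-on, drop-out, and jump handling; this toy model only shows that events falling between frame times are not skipped.

```python
def fire_events(script_times_s, decoded_frame_times_s):
    """Fire every script event whose time has been passed by the incoming timecode.

    script_times_s: event times in seconds (any resolution -- they need not land on frames)
    decoded_frame_times_s: the stream of times decoded from the timecode signal
    """
    fired = []
    for frame_time in decoded_frame_times_s:     # internal clock (simplified) follows each frame
        fired += [t for t in script_times_s if t <= frame_time and t not in fired]
    return fired

# Events at 0.95 s and 1.02 s both fire even though frames arrive only every 1/30 s.
print(fire_events([0.95, 1.02], [i / 30 for i in range(40)]))   # [0.95, 1.02]
```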
Returning to the objective of synchronizing events to music in a production you control, you can use timecode as an automated equivalent to the manual task of "press play on the soundtrack player and press start on the firing system at exactly the same time" by 1) exporting your script from Finale 3D, 2) exporting a soundtrack from Finale 3D with timecode on one of the channels, 3) configuring your firing system to play by timecode, 4) routing the timecode channel from your soundtrack playing device to your firing system, using a splitter for the audio cable, and routing the other channel or channels to your speakers, and 5) playing the soundtrack to initiate the show!

Adding pre-roll time

In real world productions, it is nice to have a delay at the beginning of the soundtrack before the music and pyro events kick in but during which the timecode is playing. One reason for the delay is that the firing systems require a short bit of time to lock on to the incoming timecode signal. If your show began with an event at time 00:00:00:00, the firing system might skip it if it hasn't locked on to the timecode signal yet. Another reason for the delay is that it gives you a chance to confirm that the timecode signal is correctly routed to the firing system, and that the firing system is properly configured to receive it. Most firing systems will display the timecode times as they are being received. If you start playing the soundtrack and don't see the timecode times displayed on the firing system, you know something is wrong. Having a delay in the soundtrack before the music and events kick in gives you a chance to scramble and fix the problem without delaying the show.

How long a delay should you have? 10 seconds, 30 seconds, a minute, or even 10 minutes are common practice. Bear in mind that if the show is set to begin at 9pm and you have a one minute delay, you need to start playing the soundtrack at exactly 8:59pm.

In Finale 3D, you can add a delay by unlocking the song file ("Music > Lock songs on timeline") and dragging its dotted line on the timeline to the right, moving the song to start at the appropriate delay. Add the same delay to the events to keep them in synch with the song. Press control-A to select all the events, then do "Script > Time adjustments > Shift times" to move them over. When you export the soundtrack with "File > Export > Export soundtrack...", the exported timecode will begin at zero without the music, and later, after the delay, the music will begin. The event times in the exported script will include the proper delay because they are at the proper positions on the timeline in coordination with the music.

It is not necessary to set the "Firing system export offset" or "Timecode export offset" to add the delay when you've added the delay by shifting everything on the timeline. The offsets are used for delays in other timecode workflows described in Synchronizing events to music in a production controlled by others, not for the delay at the beginning of a show controlled by you.

Adjusting for latency

Inherently every firing system or timecode decoding device has internal latency for processing timecode streaming into it. Decoding hardware includes algorithms to compensate, or even overcompensate, for the latency. Thus even if the timecode in the exported soundtrack and the music in the exported soundtrack correspond exactly to the times of the pyro events in the exported script, the pyro events may be a short bit late or early in reality, in comparison to the music.
Adjusting for latency

Inherently, every firing system or timecode decoding device has internal latency for processing the timecode streaming into it. Decoding hardware includes algorithms to compensate, or even overcompensate, for the latency. Thus, even if the timecode in the exported soundtrack and the music in the exported soundtrack correspond exactly to the times of the pyro events in the exported script, the pyro events may in reality be a short bit late or early in comparison to the music.

To adjust for this latency, Finale 3D has an optional field in the "Show > Set show information..." dialog called "Firing system export offset". This field is also used for some timecode workflows described in Synchronizing events to music in a production controlled by others, but for shows controlled by you this field is used exclusively for making small adjustments, positive or negative, to the script times in order to make them align with the music in reality, taking into consideration all the latencies in all aspects of the firing system and audio systems. The best way to determine the firing system export offset is to run a test on your actual equipment with an LED or e-match on a firing system pin corresponding to a recognizable point in the music. Start the soundtrack playing at least a few seconds before that point in the music, to give the firing system a chance to lock onto the timecode signal and settle in, and observe whether the LED or e-match fires exactly on time, a little early, or a little late. Choose a firing system export offset that provides the right degree of compensation.
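The sketch below illustrates only the arithmetic of applying such an offset at export time; the offset value shown (40 ms) is an assumed example, not a recommendation, and the actual value must come from a test on your own equipment as described above.

```python
# Illustrative sketch only: applying a signed "firing system export offset"
# to script times at export to compensate for decoder latency. The offset
# value here is a placeholder; measure your own with an LED or e-match test.

firing_system_export_offset = -0.040  # seconds; negative fires events earlier

def exported_time(event_time, offset=firing_system_export_offset):
    """Apply the latency compensation offset to a script event time."""
    return event_time + offset

# An event scripted at 241.0 s would be exported at 240.96 s, so that after
# ~40 ms of decoding latency it fires in step with the music.
print(exported_time(241.0))  # -> 240.96
```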
When you export a firing system script from Finale 3D, you produce a script file that contains a list of events, or triggers, at specific times. Timecode provides a mechanism for synchronizing these events to music in a production you control yourself, or to music or other elements of a production controlled by other people.

Synchronizing events to music in a production you control yourself is essentially just an automated equivalent to the manual task of "press play on the soundtrack player and press start on the firing system at exactly the same time." The automated equivalent involves combining a timecode signal as one channel of the soundtrack with music as the other channel or channels of the same soundtrack, ensuring the timecode signal and the music play in perfect synchronization when the soundtrack is played. While the music channels are routed to speakers for people to hear, the timecode channel is routed into the firing system, which is configured to fire the script events according to the times in the timecode. The details and specific instructions for this process are described in Synchronizing events to music in a production you control.

Synchronizing events to music or other elements of a production controlled by other people is essentially an agreement between the parties involved in a production about what timecode ranges correspond to what elements of the production. Consider a concert in which the performer will perform 10 to 15 songs. The pyro production company, lighting company, and possibly other special effects companies may all have contributions to the production that are designed specifically for each of the songs. The timecode agreement is simply a list of songs and the timecode range that each song corresponds to. The agreement typically allocates a one-hour range of timecode in a 24-hour "day" for each song, making the list very simple for everyone. At the time of the concert, the songs can be performed in any order. At the onset of each song, the production operator will start playing the timecode range corresponding to that song, which will trigger the pyro, lights, and special effects to play along in synch with the song. The details and specific instructions for this process are described in Synchronizing events to music in a production controlled by others.
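As a hypothetical illustration of such an agreement, the sketch below assigns each song in a made-up set list its own one-hour block of timecode. The song names, the hour-per-song allocation starting at hour 1, and the 30 fps end frame are all assumptions for the example, not part of any standard.

```python
# Hypothetical sketch of a timecode agreement like the one described above:
# each song in the set list is allocated its own one-hour block of timecode,
# so each song's cue list starts on its own hour regardless of play order.
# End frame 29 assumes 30 fps timecode; all names and values are made up.

set_list = ["Opener", "Ballad", "Anthem", "Finale"]

agreement = {
    song: f"{hour:02d}:00:00:00 - {hour:02d}:59:59:29"
    for hour, song in enumerate(set_list, start=1)
}

for song, timecode_range in agreement.items():
    print(f"{song}: {timecode_range}")
# Opener: 01:00:00:00 - 01:59:59:29
# Ballad: 02:00:00:00 - 02:59:59:29
# ...
```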
Most of the table editing features in Finale 3D are modelled after Excel. Copy/paste and the "fill handle" are no exception. Copy/paste applies to cells, or rectangular selections of cells, or rows, or selection sets of rows. The "fill handle" is the small black dot in the lower right corner of a selected cell that provides a user interface to "fill down" with a cell value or a continuing pattern.

Selecting cells and rows

Clicking on an unselected cell may select the entire row, or just the cell, depending on the user settings:

File > User settings > Set click action for effects window
File > User settings > Set click action for other windows

By default, the click action for the effects window selects an entire row, whereas the click action for other windows selects only the clicked-on cell. If the click action selects a cell, you can always select a row instead by clicking on the row number in the left-most column. If the click action selects a row, you can always select and focus a cell by double-clicking on the cell.

When a cell or row is selected, the arrow keys on the keyboard will move the selection. Hold shift and press the arrow keys to extend the selection, growing it from a single cell or row to a rectangle of cells or a range of rows. Whereas cell selection sets are restricted to a rectangular grid of cells, row selection sets can include any subset of rows, not just a contiguous range of rows. To select multiple rows by clicking, hold the control key and click on a row to toggle it selected or unselected. To select a rectangular range of cells or a contiguous range of rows by clicking, select the first cell or row, and then click on another cell or row while holding the shift key to select the range between them.

The copy buffer

Control-C and control-V are the usual hot keys for copy/paste in the tables. Equivalently, you can right-click a row or cell and choose "Copy" or "Paste" from the context menu. The contents of a copy operation are stored in the system clipboard as text, making it possible to copy/paste between applications and to examine the contents of the copy buffer simply by pasting it into Notepad or any text editor. Copied cells are represented in the copy buffer straightforwardly as the editable cell text with tab and newline delimiters separating the cells. In some cases, the editable cell text may be different from the unfocused cell text shown in the table, which may have fewer digits after the decimal point, for example, to make the table more readable. The editable cell text always has the maximal resolution.

Copied rows are also represented in the copy buffer, but in a more complex manner that depends on which table the rows are copied from. To facilitate copy/paste between shows, or between applications, the copy buffer for rows contains the rows' explicit data and any rows from other tables that the copied rows depend on. For example, copying a row in the script representing an event must include the row from the effects window defining the effect used in the event and the row from the positions window defining the position at which the event is located. The representation of all this information in the copy buffer is human readable text, but it is not as straightforward as copied cells. The details of the copy buffer are described in Copy/paste.
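The tab-and-newline representation of copied cells is the same convention Excel and most spreadsheet-like applications use, which is why pasting a block of cells into a text editor lays out as a grid. Here is a minimal sketch of that cell format only (the richer row format mentioned above is not reproduced here); the function names and sample values are illustrative, not Finale 3D's API.

```python
# Minimal sketch of the cell copy-buffer format described above: cells are
# plain text with tabs between columns and newlines between rows. Row copies
# use a more complex format that is not reproduced here.

def cells_to_clipboard_text(rows):
    """Encode a rectangular selection of cell values as tab/newline text."""
    return "\n".join("\t".join(str(cell) for cell in row) for row in rows)

def clipboard_text_to_cells(text):
    """Decode tab/newline clipboard text back into rows of cell strings."""
    return [line.split("\t") for line in text.splitlines()]

selection = [["Pos-01", "00:05.10"], ["Pos-02", "00:05.20"]]
text = cells_to_clipboard_text(selection)
print(repr(text))                     # 'Pos-01\t00:05.10\nPos-02\t00:05.20'
print(clipboard_text_to_cells(text))  # [['Pos-01', '00:05.10'], ['Pos-02', '00:05.20']]
```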
Copy/pasting over cells or rows

If you want to fill an entire column with the value from one cell, select the cell, then copy it with control-C or using the right-click context menu, then right-click the column header and choose "Select this column", then paste with control-V or using the right-click context menu. The copied value is pasted over all the selected cells. You can also paste a single cell value over any rectangular range of selected cells in a similar manner.

Pasting rows is different. Unlike cells, pasting rows adds the copied rows without removing any rows that are selected at the time of pasting. Depending on the type of table, the pasted rows may be modified as appropriate. For example, pasted events are pasted at the time of the playhead (modifying the times of the rows from the copy buffer). Pasted positions or effects are another example: since the positions and effects tables require that every row has a unique name or part number, the pasted rows will have their names or part numbers modified to make them unique.

The fill handle

The fill handle is the dot in the lower right corner of a selected cell, or of the bottom selected cell if a vertical column of cells is selected. If you click on the fill handle and drag down, you will fill the dragged-over cells with the value from the selected cell or with values continuing a pattern defined by the selected cells, such as an increasing sequence of numbers.

Table 1 – What the fill handle does, depending on the selected cells

Contents of selected cells | Number of cells selected | Modifier keys held | Cells filled with | Example
Integers | 1 | (none) | Incrementing by 1 | 1 --> 1, 2, 3, 4...
Integers | 1 | Control | Same numbers | 1 --> 1, 1, 1, 1...
Integers | 2 | (none) | Incrementing by delta | 1, 3 --> 1, 3, 5, 7...
Integers | 2 | Control | Repeating pattern | 1, 3 --> 1, 3, 1, 3...
Integers | 3 or more | Control or none | Repeating pattern | 1, 3, 10 --> 1, 3, 10, 1, 3, 10...
Strings ending in integers | 1 | (none) | Incrementing numbers | Pos-01 --> Pos-01, Pos-02, Pos-03, ...
Strings ending in integers | 1 | Control | Same string | Pos-01 --> Pos-01, Pos-01, Pos-01, ...
Strings ending in integers | 2 | (none) | Incrementing numbers by delta | Pos-01, Pos-03 --> Pos-01, Pos-03, Pos-05, ...
Strings ending in integers | 2 | Control | Repeating pattern | Pos-01, Pos-03 --> Pos-01, Pos-03, Pos-01, ...
Strings ending in integers | 3 or more | Control or none | Repeating pattern | Pos-01, Pos-03, Pos-10 --> Pos-01, Pos-03, Pos-10, Pos-01, ...
Time | 2 | (none) | Times incrementing by delta | 00:05.10, 00:05.20 --> 00:05.10, 00:05.20, 00:05.30, ...
Time | 2 | Control | Repeating pattern | 00:05.10, 00:05.20 --> 00:05.10, 00:05.20, 00:05.10, ...
Floating point numbers | 2 | (none) | Incrementing by delta | 2.5, 3.1 --> 2.5, 3.1, 3.7, ...
Floating point numbers | 2 | Control | Repeating pattern | 2.5, 3.1 --> 2.5, 3.1, 2.5, ...
Coordinates or angles with three components | 2 | (none) | Incrementing by delta for each component | (0, 30, 30), (0, 40, 30) --> (0, 30, 30), (0, 40, 30), (0, 50, 30), ...
Coordinates or angles with three components | 2 | Control | Repeating pattern | (0, 30, 30), (0, 40, 30) --> (0, 30, 30), (0, 40, 30), (0, 30, 30), ...
Anything else | Anything else | Control or none | Repeating pattern | Apple, Banana --> Apple, Banana, Apple, Banana, ...

Loosely speaking, if you select one cell containing an integer or a string ending in an integer and drag down its fill handle, you'll get a sequence of incrementing numbers. If you hold control while dragging, you'll fill with copies of the original cell.
If you select two cells containing numbers in some form, or strings ending in integers, and drag down the fill handle, you'll get a stepping sequence defined by the delta between the two initially selected cells. If you hold control while dragging, you'll fill with a repeating pattern of the two original cells. In all other circumstances, you'll get a repeating pattern defined by the initially selected cell or cells.
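For readers who want to see the single-cell rule spelled out, here is a rough sketch of the "string ending in an integer" case. It is an approximation of the behavior described in Table 1, not Finale 3D's code, and the function name is hypothetical.

```python
# Rough sketch of the single-cell fill rule described above: an integer, or a
# string ending in an integer, is continued by incrementing its trailing
# number. This approximates the table's behavior; anything else just repeats.
import re

def fill_down(seed, count):
    """Fill `count` cells below a single seed cell by incrementing by 1."""
    match = re.search(r"^(.*?)(\d+)$", str(seed))
    if not match:
        return [seed] * count  # no trailing integer: repeat the seed value
    prefix, digits = match.groups()
    start = int(digits)
    width = len(digits)  # preserve zero padding, e.g. Pos-01 -> Pos-02
    return [f"{prefix}{start + i + 1:0{width}d}" for i in range(count)]

print(fill_down("Pos-01", 3))  # -> ['Pos-02', 'Pos-03', 'Pos-04']
print(fill_down(1, 3))         # -> ['2', '3', '4'] (as strings in this sketch)
```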