Messages - GarageGothic

#1581
CJ, I know that voice lipsyncing is an unsupported feature of AGS. But would it be possible to get some documentation on how AGS interprets the Pamela format? I haven't been able to track down any official documentation of the format, and looking through the source code didn't help me. I'm messing about a bit with Visual C++ to see how hard it would be to output .pam files from the automatic lipsync source.

I assume that since you can customize your own phonemes, that part of the conversion isn't a problem. As long as the phonemes put out by the automatic lipsync match those in AGS' lipsync table everything should be fine, right?

The timings puzzle me, however. I thought they were measured in frames (Pamela's fps, not AGS loops), but the numbers seem far too high. The timings are not in sequence either, so they can't be times measured from the beginning of the wave file. Are they additive (i.e. the first phoneme "1215:S" runs from 0 to 1215, and the second phoneme "765:IH1" runs from 1215 to 1215+765=1980)? If so, how is silence handled? I see no gaps in my .pam files.

Edit: Playing around a bit, I discovered that the timings are indeed measured from the beginning of the wave file, they're just listed totally out of order in the .pam file. Does this (lack of) order have any special significance? I still can't work out what the timings are measured in though. Changing the framespersecond doesn't seem to alter the timings. And framesperphoneme seems only to be used internally in Pamela when breaking up the phonemes from a text?
While playing around with Pamela, I also found out something about the lack of pauses. It seems that each phoneme frame is held until the next one starts, and that the default phoneme set of Pamela phonemes doesn't include silence (closed mouth frame). But if we were to associate a frame in AGS with a symbol to indicate pause (the lipsync software uses "), and used the same symbol in the .pam file, it should work, I believe. Can you please confirm this?
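For instance, a pause could then appear in the .pam file as just another phoneme entry. The timing values below are made up, and the use of " for silence is my assumption, not anything I've seen documented:

Code:
1215:S
1980:IH1
2400:"
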

Also, just to make sure, is it possible for AGS to interpret timing values that aren't multiples of 15 (which seems to be the default in Pamela and isn't very precise)? And does AGS parse the framespersecond and framesperphoneme values at all?

Edit 2: Sorry about the constant updates, it's just that I keep playing around with the programs. After adding a phoneme to a distinctive section of a voice clip in Pamela and locating the same part in Audacity, I worked out that Pamela values are the timing in seconds multiplied by 360, for some unknown reason.

So the basic conversion would now be:

*Retrieve phoneme start value from automatic lipsync in milliseconds
*Multiply this value by 0.360 (milliseconds x 0.36 = seconds x 360)
*Retrieve the phoneme as-is. If we set the lipsync in AGS up properly we don't need any conversion
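
Since the process is that simple, a sketch of the timing conversion in AGS script might look like this (the function name is my own invention; the 0.36 factor is the one worked out above):

Code: ags
// Convert a phoneme start time in milliseconds (from the automatic
// lipsync tool) to a Pamela timing value: seconds * 360 = ms * 0.36
int MillisecondsToPamela(int ms) {
  return FloatToInt(IntToFloat(ms) * 0.36, eRoundNearest);
}

So MillisecondsToPamela(3375) would give 1215, matching a "1215:S" entry.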

The above is a very simple process which could even be accomplished in AGS script. However, for the lipsync process to be manageable for the user, we'd still need:

*Batch processing of a whole audio folder (*.wav doesn't currently work as a parameter)
*Decoding of .mp3 and .ogg files to a temporary .wav before lipsync (could be done with external program before syncing, but would be nice to have integrated)
*Output of (modified as specified above) lipsync data to files named after the source audio files with the extension .pam (currently the program outputs data to a console)

As mentioned above, I downloaded Visual C++ Express to try to change the source code. But so far I haven't even figured out how to import it!  :P The project conversion completes without any errors reported, but I'm then told it can't be opened in this version of Visual Studio. Perhaps I should try another compiler.
#1582
I uploaded a short test clip. Click here to view directly in your browser. Please don't sue me for copyright infringement, Dave :). I should also add that I recorded some lines in my own language, Danish, and the lipsync works just as well with non-English voice samples.

Edit: Fixed the video format and uploaded to streaming server so you don't have to download anything. Note that the lipsyncing was done entirely with the open-source tools. The commercial Lipsync Tool software is only used for playback in the example.
#1583
Quote from: Rui 'Trovatore' Pires on Mon 12/05/2008 00:01:33Also, I can set a character's speed to any positive or negative number, but not 0, how come?

Because 0 would mean not moving at all. Positive numbers are in pixels per loop, so with a speed of 5, the character will move 5 pixels every loop (within the engine's 320x200 grid). Negative numbers are fractions: a speed of -5 equals 1/5 pixel per loop, so in 5 loops the character will move one pixel. 0 would either mean not moving at all (when treated as a positive number) or cause a divide-by-zero error (when treated as a negative one).
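
As a sketch of that arithmetic (my own illustration, not actual engine code):

Code: ags
// How a movement speed setting maps to movement per loop:
// positive = that many pixels per loop, negative = a fraction.
float PixelsPerLoop(int speed) {
  if (speed > 0) return IntToFloat(speed);        // e.g. 5 -> 5.0 pixels/loop
  if (speed < 0) return 1.0 / IntToFloat(-speed); // e.g. -5 -> 0.2 pixels/loop
  return 0.0; // 0 is invalid: no movement, or division by zero
}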
#1584
in repeatedly_execute:

Code: ags
Hotspot* myhotspot = Hotspot.GetAtScreenXY(mouse.x, mouse.y);
if (myhotspot != hotspot[0]) {  // hotspot[0] is the "no hotspot" hotspot, so this is true when the mouse is over one
  //do stuff
}


Characters and Objects have similar GetAtScreenXY functions. Since you're only checking a single point, not all pixels of the character/object, you will have to decide for yourself which point to check. For characters the natural choice is the centre of the bottom line (character.x, character.y), the same point used to check whether a character is standing on a region. For objects it may make more sense to check either the middle of the bottom edge or the centre of the sprite.
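
For example, a character check could look something like this (my own sketch; it assumes a non-scrolling room, otherwise you'd subtract GetViewportX()/GetViewportY() from the coordinates first):

Code: ags
// Is this character standing over the given hotspot? We check the
// centre of its bottom line (character.x, character.y).
bool IsCharacterOverHotspot(Character* who, Hotspot* hot) {
  return (Hotspot.GetAtScreenXY(who.x, who.y) == hot);
}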
#1585
Most likely the name of the .exe file was changed after the game was compiled (to add the "-32bit" part, maybe?). However, winsetup.exe really just calls the game .exe with the parameter "--setup", so it's quite easy to work around.

Either 1) use the Run function in the Windows Start Menu and type the game's directory path and file name followed by "--setup", like so: "c:\Games\tfas2-32bit.exe --setup". Or 2) create a shortcut to the game .exe and then modify its properties. In the "Target" field, you should see the folder and filename. After the last quotation mark (which should be kept), write "--setup" (without the quotes). Then apply your settings and double-click the shortcut icon.
#1586
While surfing around, I came across this open-source lipsync software. It analyzes a voice clip (.wav, .ogg, .mp3*) and automatically assigns phonemes to it. It then writes the data in a fully documented file format, but I assume that the source code can be changed to output data in Pamela and other formats. (To see it in action, check out the demo for Lipsync Tool, a commercial implementation of the technology - I tried it on a few old voice samples and was quite impressed.)

With the growing number of voice-acted AGS games, lipsync support is becoming a standard feature of AGS. Unfortunately the only supported lipsync program, Pamela, is crash-prone and slow to work with. Everything has to be done manually, and the text of each voice line must be pasted in. The Al Emmo team spent a month or more just on the lipsyncing. It's probably wise that Dave Gilbert hasn't tried to use it on his games, or we would still be waiting for Blackwell Legacy.

However, with this source code, lipsyncing would become a batch process with little-to-no work from the developer.

I don't really think this needs to be integrated in the AGS engine; it could work fine as a stand-alone tool (which would also prevent any license issues for the finished game). Unless CJ sees a need to add support for further lipsync formats, it would have to be customized to output Pamela files, though. Unfortunately I don't have the programming skills to do any of this, but I thought it might be a good idea to make you aware of this technology, and maybe hear what people think about its possible use with AGS.

Edit: I see now that it requires Microsoft's SAPI SDK to be installed, but I don't think that would be a problem for developers dedicated enough to recruit voice actors.

*Edit 2: It seems that, unlike the Lipsync Tool, the source code only supports .wav files. I don't know how much work it would be to integrate .ogg and .mp3 decoding before running the sync.

Edit 3: Made some further tests. I ran the compiled binary from the source code distribution on a couple of wave files (purely automated, textless sync) and then played the output back in Lipsync Tool. They were Blackwell Convergence samples of Joey and Rosangela that Dave posted on his forums, and both voices synced up great. I'm also quite impressed by how many of the words the speech recognition identified correctly in the text output (not that the exact words as written are all that important for the sync).
#1587
Congratulations on the mention over at Rock, Paper, Shotgun. The reviewer calls it "The best adventure game I’ve played in a long time" and the puzzles "far better put together than anything in the last two series of Sam & Max".
#1588
Despite not being an AGS game, I think 'Enclosure' would be right up your alley.
It can be downloaded from here (look under "Projects"). And here's a review.

Edit: You may also like 'Prodigal', though a few of the cutscenes are quite graphically intensive and could be too much for your computer.
#1589
As much as I love AGS, I think Wintermute definitely sounds like the engine for you. It supports higher resolutions, has a built-in particle system and parallax scrolling, and allows you to import 3D models for the characters. If you have talented artists, you can definitely create gorgeous visuals. Just check out Jonathan Boakes' latest game, The Lost Crown, which was made with Wintermute.
#1590
Which folder is your speech.vox file in? I remember on some of the older versions I had to copy it from the compiled folder to the source code folder to be able to test without compiling. Not sure how this is handled with AGS 3.0's test function.

Edit: Also make sure that your audio files are named properly, e.g. ego5.wav rather than ego05.wav. You may try converting your files to .ogg format to see if that works better. There are known issues with certain .wav encodings.
#1591
As long as we're notified in advance on the forum whenever there's going to be extended (and pre-known) server downtime like a few weeks ago.
#1592
The big deal is that Glumol was announced while AGS was still in its low-res infancy. Several people on the forums announced that they were just using AGS while waiting for Glumol (or SCRAMM, or one of the many other engines in production back then) to be released. Ironically, most of those engines died miserably while AGS grew and prospered into the magnificent tool we have today.

So yeah, today it may not seem very impressive, but back then the feature list made people hold their breath.
#1593
General Discussion / Re: Game release
Mon 05/05/2008 13:55:15
I think Oliwerko's idea is excellent. You could also add the first game and its production artwork as bonus features on the sequel's game disc.
#1594
This project might be of interest to you, if it gets completed.
#1595
If you establish a fairly simple general layout, this should definitely be possible. For example, the entire sea in A Tale of Two Kingdoms consisted of a single physical room. If you want to make a maze where the rooms have more or less the same layout (you can always randomize different backgrounds to avoid it being repetitive) but with between one and three exits depending on location, you can use objects for the exits and turn the walkable areas near them on/off. Your main concern will be getting the walkbehinds working properly in all cases, but you can replace walkbehinds with objects too.

You would have to initialize the randomization beforehand (either in game_start or when entering the sequence of random rooms), possibly storing the pseudo-room states in a struct. That way the rooms are random but will retain their state as the player walks back and forth between them (I assume this is what you want).
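
A sketch of such a struct-based setup (all names here are my own invention):

Code: ags
#define MAX_MAZE_ROOMS 20
struct MazeRoom {
  int background;  // index of the randomly chosen background
  int exitCount;   // number of exits, decided up front
  bool visited;
};
MazeRoom mazeRooms[MAX_MAZE_ROOMS];

function InitializeMaze() {
  int i = 0;
  while (i < MAX_MAZE_ROOMS) {
    mazeRooms[i].background = Random(2);    // Random(2) returns 0-2
    mazeRooms[i].exitCount = 1 + Random(2); // 1 to 3 exits
    mazeRooms[i].visited = false;
    i++;
  }
}

Call InitializeMaze() once (in game_start, say) and read the entry for the current pseudo-room whenever the player passes through an exit.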

Edit: Just a thought. Something like the minotaur maze in Ben Jordan 6 could easily have been randomized this way. You just have to make sure that the script checks that all rooms can be entered and all objects/characters accessible when needed to avoid dead ends.
#1596
To be on the safe side, use save slots over 100 as the manual suggests. AGS manages the first 50 save slots itself (it used to be 20), and you never know if this will be expanded in later versions.
#1597
I should clarify that this is a module, not a plugin.

Thanks for the code, lemmy. I already have a bit of script that converts RGB values to the S and V part of HSV and then applies them as the Saturation and Luminance value in the tint. I didn't find any need to do the arithmetic for the hue, since I don't really need that value anywhere. But I'll try to add it in a way that makes sense, and also HSL.
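
For reference, the S and V part of the conversion I'm using looks roughly like this (my own code, scaled to the 0-100 range that AGS tint functions expect):

Code: ags
// HSV "value" (brightness) of an RGB colour, 0-100
int GetValue(int r, int g, int b) {
  int mx = r;
  if (g > mx) mx = g;
  if (b > mx) mx = b;
  return (mx * 100) / 255;
}

// HSV saturation of an RGB colour, 0-100
int GetSaturation(int r, int g, int b) {
  int mx = r;
  if (g > mx) mx = g;
  if (b > mx) mx = b;
  if (mx == 0) return 0; // black: saturation is undefined, use 0
  int mn = r;
  if (g < mn) mn = g;
  if (b < mn) mn = b;
  return ((mx - mn) * 100) / mx;
}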
#1598
Quote from: subspark on Fri 02/05/2008 02:25:52Well I imagined that people would simply create their z-depth maps in Photoshop. Not 'everything' has to be done inside AGS. Designing these depth maps is best left in the hands of a technical artist or an artistically motivated programmer anyway. As for the debug tools, youve got me there. :)

The problem with creating them in Photoshop is that you have little color/size reference unless you keep pasting your character in at different scales. With in-game tools, you could move your character around on-screen, set his size for a specific area and the game would provide you with the correct color for that scale. Once painted, you could see him walk around and make adjustments if something looks wrong. But of course you can use external tools if you wish.

QuoteFair enough. But after all the bugs are ironed out or after you have found a new method of going about shadows, do you plan to integrate the two features together?

I intend to keep them as separate modules, but they will be compatible with each other so you can use both on the same screen.
#1599
Oh, I could easily add the scale map thing. I'm not sure how many people would use it unless the editing/debug tools are super easy to use, so that's the most demanding part of implementing it.

As I said to tolworthy, I don't intend to merge the ShadowBox module with the LightMap module because it's currently too buggy. But since I need to rewrite parts of it to DrawingSurface code, I may rethink whether there are better ways to do it than the current coordinate-based wall setup functions.
#1600
Pumaman is right about the method, but if you want to do it for the whole screen, of course it's not enough to just do it for the characters. If you take the code I posted before and put a variable in place of the saturation setting, you've got the code for tinting characters grayscale; you just need to modify the saturation variable in-between loops. Then add a similar function for objects and work out how to tint the background efficiently. Instead of using DrawingSurface routines, which can be quite slow, I recommend making a DynamicSprite copy of the background, tinting it grayscale and then displaying it as a transparent object, changing its transparency along with the saturation variable.
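
A sketch of that background setup (the object and function names are my own; DynamicSprite.Tint requires a recent AGS version):

Code: ags
DynamicSprite* grayBg;

// Run once when entering the room: make a grayscale copy of the
// background and put it on an object covering the screen.
function PrepareGrayscaleBackground(Object* overlay) {
  grayBg = DynamicSprite.CreateFromBackground();
  grayBg.Tint(255, 255, 255, 0, 50); // saturation 0 = grayscale
  overlay.Graphic = grayBg.Graphic;
  overlay.X = 0;
  overlay.Y = Room.Height; // objects are positioned from the bottom-left
}

// Call whenever the saturation variable changes: at 100 the gray copy
// is fully transparent (normal scene), at 0 it's fully opaque.
function SetSceneSaturation(Object* overlay, int saturation) {
  overlay.Transparency = saturation;
}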