OpenClonk Forum
Up Topic General / General / 3D mesh rendering
- - By Clonk-Karl [us] Date 2009-07-28 04:24 Edited 2010-03-05 18:54
I have pushed a branch called "mesh" with support for loading and rendering 3D meshes in the Ogre XML format. It also contains a Monster object that I used to test the functionality. This is not necessarily supposed to become an official object, though. Some options, such as lighting, have been chosen so that this test object looks halfway good; they may not suit other objects. We probably need more objects to choose sane defaults, or make things such as light sources scriptable.

How to create an object using a mesh

To see your mesh in the game, you have to do the following:

  1. Create the mesh in Blender. The Y/Z plane is the one that will be rendered parallel to the screen; the X axis points into the screen. You can probably use other modeling tools as well, as long as they can export to the Ogre (XML or binary) format.

  2. Close Blender. Install the OGRE exporter following the instructions on the OGRE wiki. Or install an OGRE exporter for your other modeling tool, if it does not ship with one.

  3. In Blender, select the mesh in object mode, then go to Scripts Window->Scripts->Export->Ogre Meshes (If this does not exist, then the exporter was not installed properly). Select the mesh in the combobox, and add all the animations you want to export in the list below. Make sure that the "Export materials" button is pressed, and that the "OgreXMLConverter" button is pressed. Hit "Export".

  4. The process will create at least 3 files. Put all generated files in your Clonk object. Rename the file ending in .mesh to Graphics.mesh. Keep the names of the other files as the exporter generated them.

  5. To set an animation for an action, use Animation="Something" in the ActMap, where "Something" is the animation name as defined in the modeling tool. Set that Action via SetAction in the script. The Animation has Length*Delay frames, and these are mapped equidistantly to the animation length specified in the modeling tool.

  6. Check out the "mesh" branch, and compile the engine. Find someone to compile it for your platform if you are unable to. If you use Windows, make sure to use OpenGL. Your object should then be rendered ingame. If it does not work, have a look at the log file, or maybe at the Monster test object.
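For example, a minimal ActMap entry for step 5 could look like this (the action and animation names are just for illustration):

```ini
[Action]
Name=Walk
Procedure=WALK
Animation=Walk  ; animation name as defined in the modeling tool
Length=20       ; Length*Delay engine frames are mapped equidistantly
Delay=1         ; onto the exported animation
```

The action is then started from script with SetAction("Walk").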



Known Issues

There are a few known issues:

  • Meshes flicker when there is an explosion somewhere on the map. This might have something to do with the depth buffer; needs investigation. I think this has been fixed with this patch.

  • Support for textures is not yet fully implemented; textures are only supported in .png format. In fact, it would be good if someone could create a test object which uses a texture, so I can test this while implementing it. With this patch, textures can be in any of the supported image formats (png, jpg, bmp).

  • I am not sure the Vertex positions to query the ClrModMap are computed correctly. It looks strange when a mesh is moving near the FoW borders. But maybe the problem is only with the resolution of the ClrModMap.

  • There is much room for optimizations to the current code.

  • We probably also want to support the OGRE binary format later. If the data structures are the same as in the XML format, it should be fairly straightforward to add support for this. Done.

Reply
Parent - By Nachtfalter [de] Date 2009-07-28 11:28
Export from Cinema 4D

You need Cinema 4D Release 9 or 10 (it will not work with R11!).

     1. Go to pagesperso-orange.fr,
     2. download the files and copy them to Cinema4D\plugins\ (Ogre3D_Exporter.cdl).
     3. Click on Plug-Ins --> Ogre3D Exporter.
     4. Select the objects you want to export, or choose the option "export all objects" (warning: polygonal objects don't work).
     5. Tick "Export UV coordinates" and choose animation names and frames.
     6. Choose an output path.

as easy as pie...
Reply
Parent - - By Günther [de] Date 2009-07-28 13:20
Random thoughts from reading the patches:
- Perhaps it's time we switched the alpha channel to the way everyone else uses it.
- The render code really shouldn't use glBegin etc., those are deprecated and slow. Per-vertex processing should be done by shaders, users without them can live with inferior FoW and enjoy the speed increase.
- If you are sorting the triangles anyway, you don't need a depth buffer, do you?
- AdditionalRessourcesLoader::LoadTexture should use C4Surface::LoadAny. If the resulting surface needs more than one texture, the loader can still error out or resize the texture.

Now to compile and test the stuff :-)
Reply
Parent - - By Clonk-Karl [us] Date 2009-07-28 18:30

> Perhaps it's time we switched the alpha channel to the way everyone else uses it.


I considered this as well, but maybe this should happen independently of the mesh rendering. I also feared breaking the D3D code...

> If you are sorting the triangles anyway, you don't need a depth buffer, do you?


The faces are sorted from nearest to farthest, not from farthest to nearest. This way the depth buffer is still required. It is done this way so that applying ColorModulation with nonzero alpha on an object gives the same result as for pixel-based graphics.

I am going to have a look at your other proposals. Shaders are completely new terrain for me, though :)
Reply
Parent - By Günther [de] Date 2009-08-01 15:16

> It is done this way so that applying ColorModulation with nonzero alpha on an object gives the same result as for pixel-based graphics.


We could also use backface culling so that only the viewer-facing triangles are shown. Sure, it might look a bit weird to see the clonk's body through his arms, but that might be an acceptable price for the potential performance gains. We'll have to measure...
Reply
Parent - By Günther [de] Date 2009-09-12 12:57
On the topic of the depth buffer and giving every object its own layer, there's the nonlinear precision to consider: http://www.sjbaker.org/steve/omniv/love_your_z_buffer.html
Reply
Parent - By Günther [de] Date 2009-08-04 20:46
More precisely, glDrawArrays is the method to use. Unfortunately, the only way to specify the actual data that is still present in OpenGL 3.2 is glVertexAttribPointer, which was introduced in OpenGL 2.0, higher than the minimum version we're requiring at the moment (1.3). But the difference between glVertexAttribPointer and glInterleavedArrays is not that big if you're not using the additional features of the former.
Reply
Parent - - By Clonk-Karl [us] Date 2009-08-23 23:56

> Perhaps it's time we switched the alpha channel to the way everyone else uses it.


I committed a patch for this to a newly created bitbucket repository. I might have missed a few places, but it seems to work in general. I am not sure whether the Shader Code needs to be adapted, and the D3D side of things remains to be done as well. It would be great if someone could do at least the latter as I can't test the D3D code.
Reply
Parent - - By Günther [de] Date 2009-08-24 10:40 Edited 2009-08-24 10:44

> TODO: Why do we clear here when we set new pixel values for all pixels in that region anyway?


Because ClearBoxDw allocates the main memory buffer for the box, so that only that box needs to be sent to the gpu, and not the whole texture, or every pixel separately. It's an important optimization.

> TODO: This was GL_ADD. Is GL_MODULATE correct for inverted alpha? Others seem to break things... - ModulateClrA was changed to do a+b-1, but there is no GL_-constant for this...


No, but changing this to GL_MODULATE is one of the advantages that this patch gives us.

And yes, the shaders need the same. They can probably be simplified.
Reply
Parent - - By Clonk-Karl [us] Date 2009-08-25 01:37

> Because ClearBoxDw allocates the main memory buffer for the box, so that only that box needs to be sent to the gpu, and not the whole texture, or every pixel separately. It's an important optimization.


Thanks for the explanation. I added a comment mentioning this in the code so I don't wonder again later :)

> No, but changing this to GL_MODULATE is one of the advantages that this patch gives us.


Ah, so GL_MODULATE is actually what we want and GL_ADD was some sort of "approximation" for the inverted case?
Reply
Parent - - By Günther [de] Date 2009-08-25 10:31
Well, I wasn't there when the decision to add alpha values instead of multiplying them was made, but I think it's good when SetColorModulation(RGBa(255,255,255,128)) makes an object half as visible as it was, and not half-visible where it was fully opaque and invisible where it was mostly transparent.
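To make the difference concrete, with alpha values normalized to 0..1, a source alpha a and a modulation alpha m (a sketch of the two semantics):

```
a' = a * m              (multiplicative, as with GL_MODULATE)
a' = max(0, a + m - 1)  (the old additive scheme, i.e. ModulateClrA's a+b-1)
```

With m = 0.5, a fully opaque pixel (a = 1) becomes 0.5 either way, but a mostly transparent pixel (a = 0.25) becomes 0.125 multiplicatively and 0 additively, which is exactly the "half as visible" versus "invisible" difference.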
Reply
Parent - - By Newton [es] Date 2009-08-25 11:56
Did the fading-out of particles work the same way?
Parent - By Günther [de] Date 2009-08-25 13:55
Probably.
Reply
Parent - - By Clonk-Karl [us] Date 2009-09-14 23:36

> It would be great if someone could do at least the latter as I can't test the D3D code.


I would like to merge this patch sooner rather than later (to get it tested, and to avoid the need to adapt more things later), but I think this fully breaks D3D, which is why I hesitated to do so until now. I would be glad if a Windows coder could give this a try (or if we agree that it is OK for D3D support to be broken until someone does).
Reply
Parent - By Günther [de] Date 2009-09-15 19:13
Well, I don't think it's right to hold improvements back because some platform breaks, and that's especially true with D3D given the availability of OpenGL on Windows. That's not to say that we shouldn't take portability into consideration when making decisions, but either we have enough developers on every platform, or we don't, and waiting would not fix that.
Reply
Parent - - By Sven2 [de] Date 2009-08-24 09:02 Edited 2009-08-24 09:05

> - If you are sorting the triangles anyway, you don't need a depth buffer, do you?


A long time ago I heard that drawing front-to-back is faster than vice versa, because fewer pixels need to be drawn. In other words: checking the depth buffer should be a faster operation than drawing a pixel (plus texture lookup, executing the shader, etc.).

Doesn't work for transparency, of course.

I think CX splits all meshes into transparent and non-transparent parts, then sorts the list and draws front-to-back. Supposedly, sorting the list every frame AND having a depth buffer is still faster than the alternative.
Parent - By Günther [de] Date 2009-08-24 10:46
It probably depends on the complexity of the calculations the GPU has to do for every pixel. A real 3d engine with shadows has to do more work than a 2d engine, but that doesn't mean that the advantage is flipped.
Reply
Parent - - By Marky [de] Date 2009-07-28 23:10
The new code generated a lot of problems in VC9. Here are two patches that fixed the problem for me, but there are still crashes when playing a map with the monster.

These are two patches because at first I tried compiling 'standard' only, and after the success there I encountered new problems in 'clonk'.

I hereby license the following file(s) under the CC-by license

Parent - - By Clonk-Karl [us] Date 2009-07-29 00:37
So they are compile errors that your patches are fixing? I wasn't 100% sure whether using StdStrBuf and not StdCopyStrBuf was the right thing, but I didn't see any corresponding errors in valgrind, so assumed it to be OK.

Can't you even pass the StdCopyStrBuf by const reference to most functions? Probably I should let someone else who has a better understanding of Std*Buf comment on the patches.
Reply
Parent - By Marky [de] Date 2009-07-29 07:57
Yes, compile errors :). Passing "const &" gave me errors as well, so I asked Günther what to do and he said passing StdStrBuf in these cases should be ok, but I did not test it with every single function.
Parent - By Günther [de] Date 2009-07-29 11:05
StdCopyStrBuf is virtually always the right class to use for class members. That way, you get a default copy constructor that actually works as expected. For function arguments things are less clear, often StdStrBuf is okay. So maybe somebody should do the minimal "replace all StdStrBuf class members with StdCopyStrBuf" and check again what the msvc-std::map-implementation doesn't like.
Reply
Parent - By Anonymous [es] Date 2009-07-30 14:09

>Support for textures is not yet fully implemented. Textures are only supported in .png format. In fact, it would be good if someone could create a test object which uses a texture, so I can test this when implementing it.


There are some models in the art workshop, already done with textures. You need to import the .obj first and then export according to your howto.

P.S: Greetings from Matthi and Clonkonaut

- Newton
Reply
Parent - By MrBeast [de] Date 2009-07-31 18:49
There is also an import script btw.

You can find it here.
Reply
Parent - - By Newton [es] Date 2009-08-01 13:10
If you implemented the rendering by yourself, without using OGRE, then we don't have features like using normal maps, reflection maps etc., automatic LOD-mesh creation (or even using them) and a bone-animation system by default, right?
Parent - By Clonk-Karl [us] Date 2009-08-01 16:18
Right. Animations using bones have already been implemented though.
Reply
Parent - By Günther [de] Date 2009-09-18 23:39

>    // TODO: Note this cannot be CSurface here, because CSurface
>    // does not have a virtual destructor, so we couldn't delete it
>    // properly in that case. I am a bit annoyed that this
>    // currently requires a cross-ref to lib/texture. I think
>    // C4Surface should go away and the file loading/saving
>    // should be free functions instead. I also think the file
>    // loading/saving should be decoupled from the surfaces, so we
>    // can skip the surface here and simply use a CTexRef. armin.


Most of CSurface and all of CSurface8 can go to lib/textures, because it isn't actually platform dependent. The parts that are, the "primary surface" stuff, would be better in an extra class anyway. I think it's only used to draw the rain in d3d mode, and making screenshots. That leaves CTexRef in platform, cleanly separated from the platform independent image loading/saving code. Whether those should support loading directly into textures I'm unsure, because it would probably need code duplication. We don't want to go to a virtual method call for every pixel for performance reasons. Maybe templates.
Also, the mesh loading stuff probably deserves its own directory under lib, or can go together with the stuff in lib/texture into a renamed lib/graphics. XML file reading is not at all platform dependent :-)
Reply
Parent - - By Zapper [de] Date 2009-09-19 20:04 Edited 2009-12-31 13:29
Stuff that should be possible with the mesh system; this is still a brainstorm. Add to it if anyone can think of other cool or needed features... :)


  • mixing two animations that use different bones: individually animating e.g. the upper and lower body (or door and forge!)

  • attaching objects (or only other models) to specific bones

  • setting the position of specific bones manually (overwriting action data for that bone?)

  • getting the position of specific bones - for example to know where to do hitchecks or where to cast particles

  • changing only the texture via script while keeping the other animation data (or placing more textures above the used one (overlays))

  • jumping to frames of an animation - for example to switch between the bow-aiming-steps while the actual animation is still (already possible?)

  • Newton: animated textures (e.g. to animate the fire in a forge)

  • Newton: getting rid of the old phase-based animations and rather set the duration of the whole animation and let the steps be dynamically interpolated (has to be possible to jump to a specific percentage of the animation length, though)

Parent - By Clonk-Karl [us] Date 2009-09-19 20:13

> jumping to frames of an animation - for example to switch between the bow-aiming-steps while the actual animation is still (already possible?)


That should be possible already using an action with Delay=0 and Length=some arbitrary value (such as the number of keyframes of the animation). Then use SetPhase() to jump to a certain frame. If Length is greater than the number of keyframes of the animation (or the number of keyframes is not divisible by Length without remainder), then linear interpolation between two frames is performed.
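For example (action and animation names invented), with an action defined with Delay=0, Length=10 and Animation="AimBow" in the ActMap, the script could do:

```
SetAction("AimBow"); // Delay=0: the phase never advances automatically
SetPhase(5);         // jump to the middle aiming step; if Length does not
                     // match the keyframe count, the engine interpolates
                     // linearly between the two nearest keyframes
```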
Reply
Parent - - By Newton [de] Date 2009-09-20 13:10 Edited 2009-09-20 13:13
When we have smooth animations instead of animation phases where one phase is one image, I'd like to see the Phase thing dropped. Only by dropping Phase can we truly use the benefits of smooth animations, like controlling the speed of the animation dynamically. Instead of Delay=1 or Delay=2 (twice as slow), we could use this:

[Action]
Delay=140
Length=auto, -1 or something similar


to specify that the whole animation should last 140 frames. The speed is steplessly adjustable by modifying the Delay (which should be possible by now, right?)
This feature would be very helpful for at least all creatures and vehicles that accelerate, so that the actual animation can be synced to the actual moving speed. Think of a steam engine, the wheels of a pushed lorry, the turning of the wind wheel, or the clonk who first starts to walk faster and faster and then eventually starts to jog (faster and faster). Especially with the clonk, it has always disturbed me that he moves so much faster on the ground than the animation suggests.

(And with mesh rendering, I am very happy that the clonk is not in his 16x20 prison anymore and can actually move up and down while running/jogging, which would look much better.)

> mixing two animations that use different bones: individually animating f.e. upper- and lower-body (or door and forge!)


e.g. Aiming the bow while running

>setting the position of specific bones manually (overwriting action data for that bone?)


For what purpose?

>changing only the texture via script while keeping the other animation data (already possible?)


What do you mean? Classical texture animation inside a model animation? If yes, I acknowledge that. Otherwise I don't know how we could animate the fire inside the oven of the tool workshop.
Parent - - By Zapper [de] Date 2009-09-20 14:44

>e.g. Aiming the bow while running


Exactly

>For what purpose?


Well, there are surely cool graphical effects possible with that. But I wouldn't assign a high priority there myself. :)

>What do you mean? Classical texture animation inside a model animation?


No, more like setting the texture of the fused T-Flint but using the old model. But animated textures are also good :)
Parent - - By Newton [de] Date 2009-09-20 14:59

>No, more like setting the texture of the fused T-Flint but using the old model. But animated textures are also good :)


That would be pretty much texture animation. In this case, just one animation phase.
Parent - By Zapper [de] Date 2009-09-20 15:43

>That would be pretty much texture animation. In this case, just one animation phase.


Well, I am sure that one can find examples where you can't just have one texture. For example when you have those enchanted weapons that were discussed. You could have one golden texture to overlay the texture of the weapons you enchant or something
Parent - By boni [at] Date 2009-09-21 13:32

>>For what purpose?
>Well, there are surely cool graphical effects possible with that. But I wouldn't assign a high priority there myself. :)


You just want to make Zombies without having to make a new walk-animation. :P
Parent - - By Clonk-Karl [us] Date 2009-09-20 16:13

> Only by dropping Phase we can truly use the benefits of smooth animations like controlling (dynamically) the speed of the animation


I am not sure I understand this part of your post. How is Delay=140,Length=-1 different from, say, Delay=1,Length=140 (with modifying Length in the script) to provide synchronized movement?

> > mixing two animations that use different bones: individually animating f.e. upper- and lower-body (or door and forge!)
> e.g. Aiming the bow while running


But this would still require a second "Running" animation in which the Clonk does not move his arms, right? I mean, he needs to hold the bow somehow. Hm... or the running animation just needs to be separate from the "move arms back and forth" animation...

I think this can even allow mixing multiple animations that affect the same bones, btw, using (weighted) interpolation.

What could the C4Script interface for this look like? Would the following functions be sufficient?

bool PlayAnimation(string animname, int length[, int weight?]);
bool SetAnimationPosition(string animname, int pos);
bool StopAnimation(string animname);


Length can be 0 to not play the animation automatically, in that case SetAnimationPosition needs to be called whenever the animation phase is to be changed. An animation that is already being played via the object's action cannot be started this way; if the object's action changes to an animation which is currently played via this script interface, then an implicit call to StopAnimation() is performed and the action playback takes over.
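For example, aiming while running might then look roughly like this (purely hypothetical, since the interface is only a proposal):

```
// Mix a manually driven aiming animation over whatever action is running.
PlayAnimation("AimBow", 0);        // length 0: do not advance automatically
SetAnimationPosition("AimBow", 5); // e.g. called whenever the aim angle changes
// ...later, when the bow is put away:
StopAnimation("AimBow");
```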
Reply
Parent - - By Newton [de] Date 2009-09-20 17:01

>I am not sure I understand this part of your post. How is Delay=140,Length=-1 different from, say, Delay=1,Length=140 (with modifying Length in the script) to provide synchronized movement?


It's a question of the interface. I guess if one can freely set the length, without the limitation of the Graphics.png of actually having to supply all the animation steps as images, then your implementation is already doing exactly what I was suggesting.

>What could the C4Script interface for this look like? Would the following functions be sufficient?


Mh, if we think about multiple animations at the same time, we should think about the future of the ActMap in general as this was and is mainly used for animation.
Parent - By Clonk-Karl [us] Date 2009-09-20 17:23

> I guess if one can freely set the length without having the limitations like in the Graphics.png of actually having to supply all the animation steps as images, then your implementation is already doing exactly what I was suggesting.


Yeah, you can set Length to whatever value you like when using a mesh. This basically defines how fast the animation is played (and if you use Delay=0, then it just defines the granularity).

> Mh, if we think about multiple animations at the same time, we should think about the future of the ActMap in general as this was and is mainly used for animation.


It is also used for procedures.
You are of course free not to set an animation in the ActMap and use the scripting interface only. Currently, actions are then still used for procedures and transition between actions (e.g. Walk->Jump if a clonk loses ground under its feet), though.
Reply
Parent - - By Günther [de] Date 2009-09-20 18:51
That interface would require additional per-object state. I think the principle that animations are centrally defined for each kind of object and every object just records which animation it plays and at what position, is still good.
Reply
Parent - By Newton [de] Date 2009-09-20 19:12
How would you solve the problem of the demand for several animations at the same time (while using meshes)?
Parent - - By Clonk-Karl [us] Date 2009-09-20 19:17

> That interface would require additional per-object state.


Is there any problem with this? Note that this would not make C4Object bigger, as I would probably implement this in StdMeshInstance.
Reply
Parent - - By Günther [de] Date 2009-09-20 20:27
Well, each additional object state does not have a big cost by itself, but it adds up and we already have lots of stuff in C4Object. Whether it's in C4Object or another class is not really important. Another thing is that we probably want to store the vertex data on the GPU, because that's the most efficient way to use contemporary GPUs, and preferably neither once for every object nor recalculated every frame. Interpolating between two vertex sets can be done with a vertex shader. I think some games also do the bone animation stuff with vertex shaders, but that is more complex. I think without that, the two or more animations cannot share triangles, so that the drawing function can use one set of precalculated vertex data for one animation and one set for the other.
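For example, the keyframe interpolation could be sketched as a vertex shader along these lines (illustrative only; the attribute and uniform names are made up):

```glsl
// Both keyframes' vertex positions live in GPU memory; per frame the CPU
// only uploads the blend factor, not a new vertex set.
attribute vec3 posFrameA;
attribute vec3 posFrameB;
uniform float blend; // 0.0 = keyframe A, 1.0 = keyframe B

void main()
{
    vec3 pos = mix(posFrameA, posFrameB, blend);
    gl_Position = gl_ModelViewProjectionMatrix * vec4(pos, 1.0);
}
```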
Reply
Parent - - By Clonk-Karl [us] Date 2009-09-20 21:37

> Well, each additional object state does not have a big cost by itself, but it adds up and we already have lots of stuff in C4Object. Whether it's in C4Object or another class is not really important.


Thing is that StdMeshInstance is only instantiated for each object that actually uses a mesh, not for _every_ object. It currently duplicates all the vertex data already, by the way (for example to avoid sorting the faces when nothing changes from frame to frame).

> Another thing is that we probably want to store the vertex data on the GPU, because that's the most efficient way to use contemporary GPUs, and preferably neither once for every object nor recalculated every frame. Interpolating between two vertex sets can be done with a vertex shader. I think some games also do the bone animation stuff with vertex shaders, but that is more complex.


Hm, I don't understand how we can avoid uploading the vertex set to the GPU every frame when not also doing the bone animation in hardware. But wouldn't this just involve uploading the transformation matrix for each bone of the current animation (each frame), assuming the vertex bone assignments are somehow known already?

> I think without that, the two or more animations cannot share triangles, so that the drawing function can use one set of precalculated vertex data for one animation and one set for the other.


What is possible already (and is used by the Monster) is that one vertex can belong to multiple bones, and thus can be influenced by multiple animations, even if each animation uses different bones.
Reply
Parent - By Günther [de] Date 2009-09-21 20:37

> Thing is that StdMeshInstance is only instantiated for each object that actually uses a mesh, not for _every_ object.


Which will mean a lot of objects.

> It currently duplicates all the vertex data already, by the way (for example to avoid sorting the faces when nothing changes from frame to frame).


And I think vertex data should be precalculated for all frames of all animations, and not recalculated all the time. Or, at least, not recalculated all the time on the CPU. If enough objects of a kind are in the game, that would even not cost much if any RAM. Okay, you trade video RAM for system RAM, but that is also a good thing.
Reply
Parent - - By Clonk-Karl [de] Date 2009-12-30 22:25

> mixing two animations that use different bones: individually animating f.e. upper- and lower-body (or door and forge!)


I have implemented this; it is available using the following C4Script functions:

bool AnimationPlay(string animation, int weight);
bool AnimationStop(string animation);
bool AnimationSetState(string animation, int position, int weight);


position is given in milliseconds. weight is used when two or more concurrently playing animations affect the same bone. In that case, weighted linear interpolation is applied, with the given weight for the given animation. If only one animation is played, then weight is not used, but it must not be zero. For AnimationSetState(), both position and weight can be nil to keep the current value. Note that when AnimationPlay() is called, the animation is not played automatically; instead, one needs to call AnimationSetState() with increasing position periodically. If Günther allows me to introduce another per-object variable, then I can add another parameter to play the animation automatically, though ;)

If an animation is played via the "Animation" ActMap entry, then its weight is 1000. If the animation was already playing before, it is reset to the beginning, and it will stop when the action changes again. I'm not exactly sure whether this scripting interface should replace the ActMap one, or whether they should stay side by side. Any input on this is welcome; also on the C4Script interface.

A demonstration of the functionality can be seen in the Outset.c4s scenario in the mesh branch: A clonk is playing the "Walk" animation, but every once in a while it changes to "Jump", smoothly fading between the two animations.
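For example, fading from "Walk" to "Jump" could be scripted roughly like this (walk_pos, jump_pos and fade are variables the script has to maintain, e.g. in a timer; everything here is illustrative):

```
// Activate both animations; the weights must be nonzero.
AnimationPlay("Walk", 1000);
AnimationPlay("Jump", 1);

// Called periodically: advance the positions (in milliseconds) and shift
// the blend from "Walk" towards "Jump".
AnimationSetState("Walk", walk_pos, 1000 - fade);
AnimationSetState("Jump", jump_pos, fade);

// Once the fade is complete, only "Jump" remains active:
AnimationStop("Walk");
```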
Reply
Parent - - By Günther [de] Date 2009-12-30 23:08
Well, the ActMap interface is a declarative interface, while the functions are imperative. Both styles have advantages and disadvantages, but I think it's far easier to implement the former in terms of the latter than the other way round, so we could theoretically keep the ActMap interface by implementing it in C4Script. But then there's also the fact that there are lots of interaction points between the Actions and the other parts of the engine. Some of them probably shouldn't be reimplemented in C4Script for speed reasons, so we'd need to replace the action changes with Callbacks, and the action property queries with object property queries. A lot of work for little gain, except for a cleaner interface. On the other hand, your script interface looks like it might be replaceable by enabling objects to have multiple actions.
Reply
Parent - By Sven2 [de] Date 2009-12-31 11:56
I agree the ActMap should be kept and actions should get properties to let them play animations, along with the ability to set multiple actions. Maybe have a primary action used for physics behaviour (procedure, attach) and secondary actions for animations and callbacks. Physics behaviour could be detached from actions later (e.g. have procedures separate from actions), but that would be a different and unrelated step.
Parent - - By Newton [de] Date 2009-12-31 01:37

>Note that when AnimationPlay() is called, the animation is not played automatically; instead, one needs to call AnimationSetState() with increasing position periodically.


So what is AnimationStop for then?
Parent - - By MimmoO Date 2009-12-31 02:10
Imagine some spell that instantly freezes your clonk: all actions are stopped, and the clonk remains in whatever state he is in.
Parent - By Randrian [de] Date 2009-12-31 02:12
Well, this is what the function is not for^^
The function just removes the animation from the set of animations that are calculated at the moment.
So when you have finished an animation, you remove it with AnimationStop. CK's example script in the repository is quite self-explanatory.
Reply
Parent - By Clonk-Karl [de] Date 2009-12-31 03:06
An animation can either be active or inactive. If it is active, then it has a position and weight assigned and it influences the bone transformations. AnimationPlay() now activates an animation, and AnimationStop() deactivates it.

So when not calling AnimationSetState() periodically then the animation is "paused", but it still influences the Clonk's appearance. With AnimationStop() it does not anymore.
Reply
Parent - - By Newton [de] Date 2009-12-31 02:16
Can't compile it:
error C2461: 'StdMeshInstance::Animation': formal parameter list for constructor missing

Powered by mwForum 2.29.7 © 1999-2015 Markus Wichitill