Topic: General / Feedback and Ideas / Easier Shading Concept
- - By Foaly Date 2016-08-26 11:13
This is a concept for making materials and shading in OpenClonk easier and more understandable.
A simple custom material should only take a few lines of code.
This will only affect mesh and sprite materials, not landscape materials (at least for now?).

A new format for shader files and materials will be introduced.
Legacy material files will be loaded and used with a special shader that accepts their parameters until they have been adapted.
Using a custom file format has the advantage of not creating any confusion about which parts of the material file are supported by OpenClonk.
Of course, the exporter for Blender has to be modified and a converter script has to be written.
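
For illustration, such a compatibility shader might expose the classic Ogre material parameters roughly like this (just a sketch; all names are assumptions):

Shader(LegacyMaterial)
{
  BlendMode = Solid;
  Textures = {DiffuseMap};
  Parameters =
  {
    float3 Ambient = float3(1.0, 1.0, 1.0),
    float3 Diffuse = float3(1.0, 1.0, 1.0),
    float3 Specular = float3(0.5, 0.5, 0.5),
    float Shininess = 12.5,
  };
}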

It should allow a great variety of materials, but the main focus is on making it easy to understand.
A simple solid material shader could look like this:

=== SimpleShader_Example.shader ===

Shader(SimpleShader_Example)
{
  BlendMode = Solid;
  NormalMapping = Supported;  //default; if the shader modifies the normals itself, this can be set differently so it does not interfere
  Textures = {BaseMap};
  Parameters =
  {
    float3 DiffuseColor = float3(1.0, 1.0, 1.0),
    float3 SpecularColor = float3(0.1, 0.1, 0.1),
    float Shininess = 0.3,
    float3 EmissionColor = float3(0.0, 0.0, 0.0),
    float3 AmbientColor = float3(1.0, 1.0, 1.0),  //comma at the end is allowed :P
  };
}

//Beginning of the actual shader code in GLSL
#include DielectricShader

float4 shader_main()
{
  float3 baseColor = sampleTextureAuto(BaseMap).rgb;

  // defined in DielectricShader
  // float3 Dielectric_Shade(float3 DiffuseColor, float3 SpecularColor, float Shininess)
  float3 shadedColor = Dielectric_Shade(
    baseColor * DiffuseColor,
    SpecularColor,
    Shininess);
  float3 ambientLight = getAmbient();
  float3 finalColor = shadedColor + EmissionColor + ambientLight * AmbientColor;
  return float4(finalColor, 1.0);
}

=== End SimpleShader_Example.shader ===


And a material file could look like this:

=== Hut_Wood.material ===

Material(Hut_Wood)
{
  Shader = SimpleShader_Example;
  NormalMappingMode = Disabled;  //default, if no normal map is provided
  Textures =
  {
    {BaseMap = Hut_WoodDiffuse.jpg, 0},  //0 means first texture coordinate set. may be omitted
  };
  Parameters =
  {
    Shininess = 0.2,  //other parameters stay at default
  };
}

=== End Hut_Wood.material ===


The shaders will be written in a GLSL-like language. In fact, it will still use the GLSL compiler, but some keywords are different to possibly allow compatibility with other shading languages in the future. All the new names could be set using #defines.
When preprocessing the shader, the engine could also run extra checks to make sure the shader only uses the new keywords, and output warnings otherwise (but for now this is very optional).
All important functions will have a wrapper for them.
Also, all uniforms are defined in a separate syntax. The engine (by using a preprocessor) also takes care of the normal mapping.
Identical versions of one shader will be cached and only compiled once.
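
For instance, the engine could simply prepend a few preprocessor definitions to map the new names onto GLSL (a sketch; the exact mapping is an assumption):

#define float2 vec2
#define float3 vec3
#define float4 vec4
#define sampleTexture(tex, uv) texture2D(tex, uv)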

Shaders could get a separate Shaders.ocg.
.shader files contain one shader definition per file.
.h files can be included by shaders and other headers, to build libraries to simplify writing shaders.
.material files will be put next to the mesh.

Also, it will only be possible to create pixel shaders. (At least for now.)
Skinning and animation will happen under the hood; the person writing a shader should only have to think about the shading itself.

*** Example Documentation ***

*** Shader Definitions ***
Shaders can be overloaded, allowing modders to replace shaders to create stylized looks.

*How the shader is applied onto the framebuffer is defined by the BlendMode.
Available BlendModes:

Solid    //fully opaque material
TransparentClip  //either fully transparent or fully opaque. does not require alpha sorting and is great for foliage.
TransparentBlend //usual alpha blending transparency
Add  //Add the color. Great for light / magic effects
Multiply //Great for making the background darker

Additional BlendModes may be implemented in the future.
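
As an illustration, TransparentClip could be implemented by an alpha test that the engine wraps around the entry point, roughly like this (the 0.5 threshold is an assumption):

float4 result = shader_main();
if (result.a < 0.5)
  discard;  //fully transparent: drop the fragment, so no alpha sorting is needed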

*Normal mapping allows simulating extra surface detail without extra geometry.
The NormalMapping flag may either be:

  Supported  //The shader accepts normal maps; if a normals texture is provided by a material, code will be injected by the engine (see the sketch below).
  NotSupported //The shader uses its own code to generate normals / does not support normal maps for some reason.
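
The injected code could look roughly like this (a sketch; vtxNormal appears in the current shaders, while vtxTangent and vtxBitangent are assumed to be provided by the engine):

float3 getNormals()
{
  //unpack the [0,1] texture range into a [-1,1] normal
  float3 n = sampleTextureAuto(NormalMap).rgb * 2.0 - 1.0;
  //transform from tangent space into the space the lighting works in
  return normalize(mat3(vtxTangent, vtxBitangent, vtxNormal) * n);
}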

*The shader can also specify an array of textures. The name "NormalMap", however, can only be used if the normal mapping flag is set to NotSupported.
Textures = { TextureA, TextureB, TextureC };

*Parameters are realized using uniforms and may have the following types:
  float, float2, float3, float4
If they provide default values, they can be left out in a corresponding .material definition.
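
Under the hood, each parameter would just become an ordinary GLSL uniform, e.g. (a sketch):

uniform vec3 DiffuseColor;  //the engine sets the declared default if the material omits the parameter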

*The EntryPoint of the shader will always be shader_main, which returns a float4 consisting of R, G, B, and A values.

*BackfaceCulling can either be
  Forced  //backfaces are always culled. This should be enabled if the shader does not support double-sided lighting.
or
  Optional  //means that the .material can decide whether backface culling should be enabled, and that this shader supports double-sided lighting.

*** Material Definitions ***

*Shader = Shader_Name;  //defines the shader that should be used with this material

*BackfaceCulling = Enabled | Disabled;  //Enabled is the default; Disabled is only allowed if backface culling is not forced by the shader

*NormalMappingMode
  Disabled  //no normal mapping. this is the only supported mode, if the shader does not support normal mapping
  Tangents  //Tangent-space normal mapping
  Local    //Object-space normal mapping

*NormalMap defines the normal map, with optional texture coordinate set (see textures).
  NormalMap = Normals.jpg;
or
  NormalMap = {Normals.jpg, 5};  //uses sixth coordinate set

*Textures specifies an array of textures (the names must match those in the shader definition) and an optional texture coordinate index.
Textures =
{
  TextureA = foo.jpg,
  { TextureB = foo2.jpg },
  { TextureC = bar.png, 1},
};

*Parameters looks just like the one in the shader definition, but
  a) Types may be omitted
  b) Parameters with default values may be omitted completely
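
For example:

Parameters =
{
  DiffuseColor = float3(0.8, 0.6, 0.4),  //type omitted
  Shininess = 0.5,
};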

*** Available Functions ***

float3 getNormals();  //gets the normals (with applied normal mapping, if enabled)
float3 getNormalsRaw();  //gets the normals, always without normal mapping
float3 getAmbient();  //get the ambient lighting
float3 getTeamColor();  //the team / player color
// there is no getModulationColor. Code for SetClrModulation will be injected by the engine.

float4 sampleTextureAuto(textureName);  //returns a color from a texture, automatically using the correct coordinate set
float4 sampleTexture(textureName, float2 textureCoord);  //allows using custom texture coordinates
float2 getTextureCoords(textureName);  //gets the texture coordinates for a texture
float2 getTextureCoords(int index);  //gets the texture coordinates for an index
float2 getMatcapTextureCoords([float3 normal]);  //gets texture coordinates for a matcap from a normal (optional, can also get the normal automatically?)
float3 getGlobalCoord();  //returns the world-space coordinate of the current pixel
float3 getLocalCoord();  //returns the object-space coordinate of the current pixel
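
A small usage sketch combining a few of these (the rim-darkening effect is purely illustrative and assumes view-space normals):

float4 shader_main()
{
  float3 base = sampleTextureAuto(BaseMap).rgb;
  //darken pixels whose normal points away from the viewer
  float rim = 1.0 - abs(getNormals().z);
  return float4(base * (1.0 - 0.3 * rim), 1.0);
}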

*** Functions in DielectricShader.h ***

float3 Dielectric_DiffuseLighting([float3 normal]);  //returns the diffuse lighting, with optional normal
float3 Dielectric_SpecularLighting(float Shininess, [float3 normal, float3 lookDir]);  //returns the specular lighting, with optional normal and lookDir
float3 Dielectric_Shade(float3 DiffuseColor, float3 SpecularColor, float Shininess);  //simple function applying both diffuse and specular lighting at once

This new shading system could also provide a few pre-made shaders which can be used in one's own shaders via #include:
  Dielectric
  Matcap / Fake Reflections
  Metal?
  Effects
  Different lighting models (e.g. Cook-Torrance, Blinn)
  Stylized Lighting (rim lighting, etc.)
  Holograms

I'm not sure about the optional parameters: first, because I'm not sure how well function overloading is supported in GLSL, and second, because getting the normals by calling getNormals() could potentially be slow if it is called from several places (unless the compiler optimizes it).
So instead, the entry method could take the normals as a parameter, and the person writing the shader would have to pass them everywhere they are needed.
If they do not use them, I'm certain the optimizer will remove the dead code.
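
That alternative entry point might look like this (the extra normal parameter on Dielectric_Shade is an assumption):

float4 shader_main(float3 normal)
{
  float3 baseColor = sampleTextureAuto(BaseMap).rgb;
  //the normal is passed on explicitly instead of being fetched internally
  float3 shadedColor = Dielectric_Shade(baseColor * DiffuseColor, SpecularColor, Shininess, normal);
  return float4(shadedColor, 1.0);
}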

Thank you for reading.
If I missed something about how shading currently works in Clonk that would not work with this new system, please comment.
Also, I'm completely unsure about the naming and syntax details, so these can be changed to fit Clonk better.

And looking at the source code, it looks like multiple texture coordinate sets aren't supported (yet?), so they could just be ignored for now.

And I'll try to implement this myself :D
Parent - - By Zapper [de] Date 2016-08-30 06:29
I have to admit I don't get the advantage of the changed syntax.
A clear disadvantage would be that it wouldn't work in ogre meshy anymore. And that we would have to write our own exporters for blender & co.

With the current syntax, your example would look something along the lines of this (minus some copy&paste mistakes). I don't really see where your changed syntax would have a clear advantage.

===== material =====
fragment_program hut_fragment glsl
{
    source hut_fragment.glsl
}

material WoodenCabin
{
  receive_shadows on
  technique
  {
    pass
    {
      specular 0.2 0.2 0.2 12.500000
      texture_unit
      {
        texture woodencabin.jpg
        tex_address_mode wrap
        filtering trilinear
      }
      fragment_program_ref hut_fragment
      {
        param_named basemap int 0
      }
    }
  }
}
===== shader =====
uniform sampler2D basemap;

slice(texture+1)
{
  fragColor = fragColor * texture(basemap, texcoord);
}


PS: including shaders, shading libraries and special shading effects would all be cool, of course. But that doesn't have anything to do with the syntax and could also be implemented in the current system.
Parent - - By Foaly [de] Date 2016-08-30 11:12

>A clear disadvantage would be that it wouldn't work in ogre meshy anymore. And that we would have to write our own exporters for blender & co.


It's true, the materials would not work in Ogre Meshy.
But even now, once you start using a custom shader, you can't really preview it with Ogre Meshy.
To see how the shading really looks, you'll always have to view it in OpenClonk.

And yes, a modified exporter for Blender will be necessary. This should not be too much work, as the mesh file format stays the same.
And to be honest, the current exporter does not generate very good materials; they have to be edited by hand almost every time.
In the meantime, it will still be possible to load "legacy" material files in OpenClonk.

>I have to admit I don't get the advantage of the changed syntax.


I once tried to create a fake reflective material (which worked as far as I can remember).
It should have two matcaps added on top of each other, of which one would have a color tint and the other would not.
In the end, it should all be multiplied with an ambient occlusion map.
This is what the shader / material looked like:

=== Material ===

fragment_program shiny_matcap_fragment glsl
{
    source shiny_matcap_fragment.glsl
}

material WizTurret_Sphere
{
    receive_shadows on
    technique
    {
        pass
        {
            ambient 1.0 1.0 1.0 1.0
            diffuse 1.0 1.0 1.0 1.0
            specular 0.5 0.5 0.5 1.0 12.5
            emissive 1.0 1.0 1.0 1.0

            texture_unit
            {
                texture WizTurret_Diffuse.jpg
                tex_address_mode wrap
                filtering trilinear
            }
            texture_unit
            {
                texture WizTurret_Color.jpg
                tex_address_mode wrap
                filtering trilinear
            }
            texture_unit
            {
                texture WizTurret_Gloss.jpg
                tex_address_mode wrap
                filtering trilinear
            }
            fragment_program_ref shiny_matcap_fragment
            {
                param_named diffuseTex int 0
                param_named reflectionTintTex int 1
                param_named reflectionTex int 2
            }
        }
    }
}

=== Shader ===

uniform sampler2D diffuseTex;
uniform sampler2D reflectionTintTex;
uniform sampler2D reflectionTex;
uniform vec3 oc_PlayerColor;

#ifndef OPENCLONK
in vec2 texcoord;
out vec4 fragColor;
#define slice(x)
void main()
{
  fragColor = vec4(1.0, 1.0, 1.0, 1.0);
#endif

slice(texture+1)
{
  vec2 normalTCoord = (vtxNormal.xy + vec2(1.0, 1.0)) / 2;
  vec4 colorReflectionSample = texture2D(reflectionTex, normalTCoord);
  vec4 colorReflectionTintSample = texture2D(reflectionTintTex, normalTCoord);
  vec4 colorAOSample = texture2D(diffuseTex, texcoord);
}

slice(color+1)
{
  color = (colorReflectionTintSample * vec4(oc_PlayerColor.rgb, 1.0) + colorReflectionSample) * colorAOSample;
}

#ifndef OPENCLONK
}
#endif


And this is what I wish it looked like:

=== Shader ===
Shader(MatcapShader_Example)
{
  BlendMode = Solid;
  NormalMapping = Supported;
  Textures = {diffuseTex, reflectionTintTex, reflectionTex};
  Parameters =
  {
    //none
  };
}

float4 shader_main()
{
  float2 texCoordMatcap = getMatcapTextureCoords(getNormals());
  float3 reflectionTint = sampleTexture(reflectionTintTex, texCoordMatcap).rgb * getTeamColor();
  float3 reflection = sampleTexture(reflectionTex, texCoordMatcap).rgb;
  float3 ambientOcclusion = sampleTextureAuto(diffuseTex).rgb;
 
  float3 finalColor = (reflectionTint + reflection) * ambientOcclusion;
  return float4(finalColor, 1.0);
}

=== Material ===
Material(MatcapShader_Example)
{
  Shader = MatcapShader_Example;
  Textures =
  {
    diffuseTex = WizTurret_Diffuse.jpg,
    reflectionTintTex = WizTurret_Color.jpg,
    reflectionTex = WizTurret_Gloss.jpg,
  };
  Parameters =
  {
    //none
  };
}


Especially the material definition itself is much shorter, and for most cases you will be using existing shaders.
Also, it is possible to read shader_main() from top to bottom and know what happens.
No code is automatically inserted in between, so it is much easier to understand in what order things happen.
And as for the material properties (ambient, diffuse, etc.): they are not used at all in the shader and could be removed (I'm actually not sure whether it would complain currently if they were missing).
However, with the new system, if someone tried to use them in the .material file, they would get a warning because the shader does not specify them.

Also the new syntax has a few general advantages:
No need to specify the textures in a certain order in the material file / setting indices. That all happens automatically.
It is easily possible to provide custom parameters for a shader that can be set in a material file.
No need to know in which order slices are processed and which variable names they use (because there won't be slices).
For sampling a texture, it is not required to specify texture coordinates if it just uses the default UV coordinates.

When working with the current system, I had weird compilation errors all the time and had to dig through the source files to find out what was happening internally.
With the new system, it should not be necessary at all to read the shader sources the engine uses in order to write a new shader, only the documentation.
So this project will also consist of writing a good documentation that explains everything.

Generally, the main idea is to make shaders more readable and easy to understand.

P.S. Sorry for posting so much source code, which makes this look kind of messy :/
Parent - - By Zapper [de] Date 2016-09-03 09:52

>(I'm actually not sure whether it would complain currently if they were missing).


That's a big point. I think a lot of your "shorter" syntax comes from you assuming sensible defaults. If we had the same defaults in the current system, you could also leave out "receive_shadows on" or "ambient 1.0 1.0 1.0 1.0" or "tex_address_mode wrap / filtering trilinear". Also I don't see where in your example the fog of war is added to the object color. The slice system of the current shaders was done exactly for that reason: that you could easily add code that modifies the color in certain steps of the pipeline (before FoW/after FoW/before the lighting/...).
What would a shader in your example look like that just wants to change the edge highlighting a bit (slice "light"?) but leaves the color and the FoW at default? Could you do that? That's pretty important, because you will mostly want to leave a lot of the preprocessing and postprocessing at default so your object doesn't stand out too much. Similarly, how could I change the overlay shading (slice "texture"), leave everything else intact, and then in the end make the overlay shine through FoW?

I agree that the current situation is not the best. But it solves many of the problems we have (and your solution would apparently lose some features?).
Maybe a bunch of sensible defaults would already shrink the material definition enough. And a better exporter (as you say we need anyway) would be able to just export the stuff we can actually use. And..

> With the new system, it should not be necessary at all to read the shader sources the engine uses in order to write a new shader, only the documentation.


Why? You would still have to search through the generated shader code, because your example is obviously not all that happens (what about FoW? What about GFX_BLIT_Mod2? What about color modulation?)..

>So this project will also consist of writing a good documentation that explains everything.


I think this would also help a lot for the current stuff. Especially if e.g. the windmill project would tackle visualizing the shader code somehow.

I think you want to solve the right things. I just don't think what you propose would help a lot :I
Parent - - By Foaly [de] Date 2016-09-03 11:48

>What would a shader in your example look like that just wants to change the edge highlighting a bit (slice "light"?) but leaves the color and the FoW at default?


If you wanted different lighting for an object, you'd have to write a whole new shader. But the shaders will tend to be very short anyways, so I don't think that's a big problem.
And FoW will be applied automatically; there won't be a way inside the shader code to apply it manually, just a flag to turn it on or off.
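
E.g. something like this in the shader definition (the flag name is made up):

Shader(GlowingRune_Example)
{
  BlendMode = Add;
  ApplyFoW = Disabled;  //hypothetical flag: render at full brightness even in darkness
  Textures = {GlowMap};
}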

>Similarly, how could I change the overlay shading (slice "texture"), leave everything else intact, and then in the end make the overlay shine through FoW?


In the current proposal, there is no way to change a shader to do this, so if you wanted an overlay on top, you'd have to copy the whole shader and modify it to serve that purpose.
And shining through the FoW in that case won't be possible; an object would either be in front of or behind the FoW.

Concerning the overlays and changing the shading without replacing the shader: I had also thought about a different concept earlier, which would create a shading language more similar to the way #appendto and #include work in C4Script. I thought it might be harder to implement and that the current proposal would be satisfactory, but now I see that losing the flexibility to modify existing shaders is not good enough.

(The following is not a full concept, just what it might look like.)
E.g. if you wanted to add edge lighting to the current lighting you'd write something like:

#appendto DefaultShader

float4 getLighting(...)  //maybe parameters, idk
{
  float4 defaultLighting = _inherited(...); //call the previous method without edge light
  float4 edgeLighting = //insert edge lighting code here
  return defaultLighting + edgeLighting;
}


That way, it would be simple to modify shaders afterwards and (at least to me) it will look clearer.
And it would be familiar to anyone familiar with C4Script.
Things like FoW could also be introduced as overloadable methods.
Also things like getUVCoords() could be overloaded to animate them, etc.
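
For example (oc_time is a made-up uniform here):

#appendto DefaultShader

float2 getUVCoords()
{
  //scroll the texture horizontally over time
  return _inherited() + float2(oc_time * 0.1, 0.0);
}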

A documentation would then have to state which overloadable methods exist, in what order they are called, etc.

I like this style of writing everything in separate functions, because I think it is easy to read.
But a major disadvantage compared to the current slice system would be that cases like your example, where you wanted the overlay to be on top of the FoW, would require some code duplication, so I'm not sure yet how that could work.

I'll have to think about this again...
Parent - - By Zapper [de] Date 2016-09-03 12:16 Edited 2016-09-03 12:24

>And FoW will be applied automatically; there won't be a way inside the shader code to apply it manually, just a flag to turn it on or off.


For Caedes I had a shader that showed the outline of objects/landscape through FoW (see attachment). So that wouldn't be possible anymore?

>so if you wanted an overlay on top, you'd have to copy the whole shader and modify it to serve that purpose.


So that doesn't make the shaders shorter and less error-prone, but longer, and you have to adjust more when things change? That is exactly what the slices should prevent.

>I like this style of writing everything in separate functions, because I think it is easy to read.


The slices are basically different functions. I don't see the difference between
#include DefaultShader_Preprocessing
#include DefaultShader_Texture
foo.xyz = coolComicShadingEffect();
#include DefaultShader_Modulation
#include DefaultShader_Postprocessing
#include DefaultShader_FoW


and

slice (texture + 1)
{
    foo.xyz = coolComicShadingEffect();
}


I mean, I completely get what you mean: the current shaders are hard to understand. But I just fear that your solution is to make them easier to understand by removing all the features.

PS: The code for the landscape FoW thing is this (currently you have to add it to CommonShader.glsl):
slice(color+6)
{
#ifdef OC_LANDSCAPE
  float edgeHighlighting = edgeFactor * edgeFactor;
  if (edgeHighlighting > 0.3) edgeHighlighting = 0.0;
  fragColor.rgb = max(lightBright, edgeHighlighting) * materialPx.rgb * (matEmit + lightColorNorm * spotLight);
  float mincol = min(fragColor.rgb[0], min(fragColor.rgb[1], fragColor.rgb[2]));
  float fading = min(lightBright + 0.8, 1.0);
  fragColor.rgb = fading * fragColor.rgb + (1.0 - fading) * vec3(mincol, mincol, min(1.0, mincol + 0.4));
#endif
}
Parent - - By Foaly [de] Date 2016-09-03 17:09

>But I just fear that your solution is to make them easier to understand by removing all the features.


Yes, that was pretty much my idea...
But I see now, that it won't work. At least not the way the proposal is right now.

>The slices are basically different functions. I don't see the difference between
>#include DefaultShader_Preprocessing
>#include DefaultShader_Texture
>foo.xyz = coolComicShadingEffect();
>#include DefaultShader_Modulation
>#include DefaultShader_Postprocessing
>#include DefaultShader_FoW
>...


That's not what it would look like.
It would be like

#include DefaultShader

float4 getTexture()
{
  //float4 foo = _inherited();  //apparently, we just discard it?
  return coolComicShadingEffect();
}

which in my opinion would be better, because you don't have to know the names of the local variables / you cannot get in trouble by using names that are already in use.
It's true, it does look quite similar, but I'd prefer something like it.

Of course, the problem is, that you cannot easily pass local variables on into other functions.
Parent - - By Zapper [de] Date 2016-09-03 17:42
Mh, I really don't get it. Your example with the current syntax would be something along the lines of


// the default shader is included by default... #include DefaultShader
//float4 getTexture()
slice (texture + 1)
{
  //float4 foo = _inherited();  //apparently, we just discard it?
  // ..float4 foo = fragColor; <- also not necessary, just added for clarity
  //return coolComicShadingEffect();
  fragColor = coolComicShadingEffect();
}


So you would still have to write about the same number of lines of code. And you would also still have to remember the same number of "special names" like "fragColor", because you would need to remember how to get the default color, the light map, the overlay color, the material properties, the color modulation, etc...
And whether it's "fragColor" vs "getDefaultColor" or "clrMod" vs "getColorModulation" or "oc_Color" vs "getTeamColor" does not truly make a difference...

And this way you can still access information from other parts of the shader (your "local variable" problem)
Parent - - By Foaly [de] Date 2016-09-04 17:08
Personally, I think that it would be nicer to have functions instead of slices, but I don't know a good way to make it actually easier yet.

>And whether it's "fragColor" vs "getDefaultColor" or "clrMod" vs "getColorModulation" or "oc_Color" vs "getTeamColor" does not truly make a difference...


I think it makes a difference whether the naming is consistent or not, and I'd prefer it if all input parameters had getter functions.
Parent - By Zapper [de] Date 2016-09-04 21:53

>I think it makes a difference whether the naming is consistent or not


Yes definitely, but that again has nothing to do with the changed syntax in general
Parent - - By Sven2 Date 2016-09-03 16:22

> If you wanted different lighting for an object, you'd have to write a whole new shader. But the shaders will tend to be very short anyways, so I don't think that's a big problem.


That is a problem because it wouldn't mix well with other changes done to the shaders. Imagine someone wants to write a scenario where everything is comic-shaded. Right now this should work with a slice. If objects come with their own complete shaders, then each object with a custom shader won't be comic-shaded.
Parent - By Foaly [de] Date 2016-09-03 17:25

>If objects come with their own complete shaders, then each object with a custom shader won't be comic-shaded.


With the above proposal, the person creating the scenario would just replace the DielectricShader.h (or #appendto it), and that would replace the lighting for all objects that include this header and use its lighting functions (which will be almost all objects).
For any object that implements lighting all by itself, it won't; that's true.
But replacing their special lighting with a general lighting equation would also be quite likely to completely break their shading anyway.
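
E.g. a comic look could quantize the stock lighting into bands (a sketch of the idea, assuming Dielectric_DiffuseLighting can be overloaded like this):

#appendto DielectricShader

float3 Dielectric_DiffuseLighting(float3 normal)
{
  float3 light = _inherited(normal);
  //snap the lighting to three discrete levels
  return floor(light * 3.0 + 0.5) / 3.0;
}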

Currently, if you replace the lighting with a slice, it would replace the lighting of all objects, regardless of the way they do shading.
It might even cause compilation errors for certain shaders and these objects would not render at all (or maybe they will use a fallback shader?).

So I'd consider the above proposal safer in this case.
But I see that it is too restrictive and that it won't work out.
Parent - - By Apfelclonk Date 2016-08-30 11:28 Edited 2016-08-30 11:32
Talking about the Ogre mesh format: does it have any features that one couldn't supply in an own OC format? I mean, if one would write a converter that converts a wide-spread format like .obj to oc-binary format, we wouldn't have to bother with using/creating extensions for the most important 3D programs (3dmax, c4d, blender, ...). One would only have to manage an interface between .obj and an OC format. Maybe there is even an open source reader for such formats, and all we would have to do is write/read the OC format part. That way one could create a format that fulfills our specific needs while staying simple.

Also there are oc-specific things that could be improved in contrast to the current .mesh format: Pre-ordered bones (ordered by their hierarchy, not their indices) and normalized vectors (I think the rotation axes are not normalized, something that the loader has to do). When storing normalized axes, one could even omit the last coordinate and calculate it, which gives a somewhat higher degree of compression. Also, one wouldn't have to bother with UV coords with 4 components or other obsolete stuff (diffuse colors per vertex). In general an own format would allow higher compression and faster loading times, unless OGRE has mature compression components that we couldn't build on our own.

>A clear disadvantage would be that it wouldn't work in ogre meshy anymore.


That would be enhanced by my proposal. Besides, Ogre Meshy never worked on any of my machines anyway; a program displaying the mesh/animations and giving some basic options to play around with would be manageable for me to integrate into a website or a Qt WebEngine desktop application. But if you want to compile custom shaders, one would need something that isn't based on WebGL (so it's out of my range), since WebGL is very limited.
Parent - By Clonkonaut [de] Date 2016-08-30 12:20
As with all of these suggestions, someone needs to do this. Our supply of engine coders is a bit short nowadays. Apart from that, our own format might be better, yeah. Back then when OC started, Ogre was easier to implement of course.
Parent - By Isilkor Date 2016-08-30 14:37

> In general an own format would allow higher compression and faster loading times


That's not really an issue; my profiling shows that loading meshes (and skeletons) takes about 2% of the total loading times. Most of the time is actually spent loading textures and sounds from disk, which is certainly something that could be optimized.
Parent - By Newton [de] Date 2016-08-30 22:54
The Ogre format was chosen back then as the mesh format because we wanted to use some kind of standard (but sleek) format which can easily be exported from popular 3D modelling tools. Ogre being an open-source 3D engine, this seemed like a reasonable choice. We did not anticipate that there would be so many problems with the Ogre export tool for Blender.
Parent - By Zapper [de] Date 2016-09-03 09:57

>if one would write a converter that converts a wide-spread format like .obj to oc-binary format,


If that were possible, we could also just use e.g. the .obj format. However, as far as I know, the .obj format only includes xyz/uv per vertex, and not e.g. bone assignments/weights, animations, etc.

>Also there are oc-specific things that could be improved in contrast to the current .mesh format: Pre-ordered bones (ordered by their hierarchy, not their indices) and normalized vectors (I think the rotation axes are not normalized, something that the loader has to do)


That's also not an issue with the format but with the exporter. You could modify the Blender exporter to do those things and still use the .mesh format. This would still be less work than writing our own format AND rewriting the exporters.

>In general an own format would allow higher compression and faster loading times, if there aren't mature compression components, that OGRE has, that we couldn't build by our own.


That's not an issue as Isilkor states.

Realistically, the biggest issue is that we can get stuff done sensibly well with the manpower we have (and using existing stuff is a big part of that). Might not be a comfortable truth, but it has been like that since 2009..