- I have no idea where the light texture is saved to
- I have no idea how the shaders access the light texture
- I have no idea how to create a second texture
the rest/blending should be similar to how brightness information is saved to the texture in C4DrawFoWLightTextureStrategy::DrawVertex
> My first stop was the shader uniform stuff, because I wanted to know where the shader gets the "lightTex" from. This is probably done with the uniforms that are defined in the enums?
Yes. The lightTex uniform is set up in the code prior to the drawing call, e.g. here for landscape rendering, here for sprite rendering and here for mesh rendering.
> - I have no idea where the light texture is saved to
I don't quite understand that question. The texture data is held in a C4Surface, and a framebuffer object is associated with it so that it can be used as a render target. Both happen in
> - I have no idea how the shaders access the light texture
You mean how the coordinates into the light texture are computed? This is different for landscape and meshes/sprites. For landscape, an additional set of texture coordinates is uploaded with every vertex, while for meshes/sprites a transformation matrix is stored in another uniform value that converts device coordinates to texture coordinates.
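For illustration, the meshes/sprites path can be sketched as a scale-and-offset transform. This is only a reduced, hypothetical form of the matrix uniform the engine uploads; the names and values here are made up:

```cpp
#include <cassert>

// Hypothetical sketch (names made up): the meshes/sprites path converts
// device coordinates in [-1,1] to light texture coordinates in [0,1].
// The engine uploads a full transformation matrix as a uniform; this
// reduces it to a per-axis scale and offset for illustration.
struct DeviceToTex {
    float scaleX = 0.5f, scaleY = 0.5f;  // assumed values
    float offX = 0.5f, offY = 0.5f;

    void Apply(float dx, float dy, float &tx, float &ty) const {
        tx = dx * scaleX + offX;
        ty = dy * scaleY + offY;
    }
};
```

The landscape path skips this entirely by uploading a second set of texture coordinates per vertex instead.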
> - I have no idea how to create a second texture
Probably looking into C4FoWRegion::BindFramebuf would be a good start.
> the rest/blending should be similar to how brightness information is saved to the texture in C4DrawFoWLightTextureStrategy::DrawVertex
C4DrawFoWLightTextureStrategy::DrawVertex does (parts of) the rendering into the texture. C4FoWRegion::Render has the main logic for that. You'll probably want to change it such that it renders into two textures at the same time. Here is some example code for how to do that.
>The lightTex uniform is set up in the code prior to the drawing call
This is what I meant. "lightTex" and "C4SSU_LightTex" are similar in name, but why is the uniform "lightTex" and not "LightTex" or "C4SSU_LightTex"?
>change it such that it renders into two textures at the same time
Is there a benefit to doing it that way? I wanted to add a second uniform "lightColorTex"
> This is what I meant. "lightTex" and "C4SSU_LightTex" are similar in name, but why is the uniform "lightTex" and not "LightTex" or "C4SSU_LightTex"?
Hm. I don't know. Because Peter likes camel casing? :) C4SSU_LightTex is just a symbolic constant in the engine, it appears nowhere as a string, whereas "lightTex" is the actual uniform variable name. It's a bit like having a symbolic constant such as C4V_Object instead of the plain integer value that's behind it.
> > change it such that it renders into two textures at the same time
>Is there a benefit to doing it that way? I wanted to add a second uniform "lightColorTex"
There are two steps involved: a) rendering the light created by all light sources into the light texture, b) using the light texture to apply all lights when rendering the landscape, sprites, meshes. I was talking about how to do a), while you are talking about b). Yes, for b) there's no way around binding another texture for all drawing.
I would change the implementation so that the original light texture is expanded, as Peter suggested.
> Landscape does not render, because I get a lot of GL errors because of an illegal operation.
My only guess is that the light color texture uniform variable is optimized out, and then assigning a value to it produces an illegal operation error. You could try something like
lightColor.r = min(lightColor.r, 1.0); instead of lightColor.r = 1.0; to prevent that.
To check which operation produces the error, you could add CheckGLError calls after every GL operation you are doing, and then set a breakpoint in CheckGLError in the debugger.
> To check which operation produces the error, you could add CheckGLError calls after every GL operation you are doing, and then set a breakpoint in CheckGLError in the debugger.
Or (if on Windows) run the engine inside a debugger with --debug-opengl, and it'll automatically break on GL error.
Here's roughly what you'd need to do:
1. Change the texture allocation so it's bigger
2. Run the "pFoW->Render" bit in "C4FoWRegion::Render" twice. Either do a GL coordinate transformation or adjust the code so it writes to the other half for the second iteration.
3. The fade-out stuff ("Copy over the old state") you can probably leave in. Check C4FoW::GetFramebufShader for what happens here. Not that exciting.
4. Adjust the renderer so it knows what part of the light texture generation it's in, and generate the information you're interested in.
5. Adjust the light texture coordinate calculations in the shader. Best to make variant(s) of the LIGHT_DEBUG mode for figuring this part out.
6. Finally put everything together by multiplying light colour in at the right point.
Still not exactly a trivial task. Feel free to ask if you get stuck somewhere.
Anyway, after looking at the code I agree that expanding the texture seems to be easier than adding a new texture.
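Steps 1 and 2 above can be sketched in isolation. All names here are illustrative, not the engine's actual code; the point is just the arithmetic of doubling the allocation and shifting the second pass:

```cpp
#include <cassert>

// Sketch of steps 1 and 2 under the "expand the texture" approach.
// Step 1 doubles the height at allocation time; step 2 runs the light
// rendering twice and shifts the second (color) pass into the lower half.
struct TexSize { int wdt, hgt; };

TexSize AllocateDoubled(int wdt, int hgt) {
    return { wdt, hgt * 2 };             // step 1: bigger allocation
}

int PassOffsetY(const TexSize &tex, int pass) {
    // step 2: pass 0 draws into the top half, pass 1 into the bottom
    return pass == 0 ? 0 : tex.hgt / 2;
}
```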
> I am not a fan of writing to another half of the texture because it is just confusing to read
Keep in mind that this might be very time critical code, though.
The reason the two textures are separate is because we copy from one to the other for the fade-out. To be fair, I didn't actually test whether we could read from a texture at the same time as writing to it, but it seemed like a very risky thing to do.
2. Why do I need to do this? Is it not enough to adjust "C4FoWLight::Render" so that it writes the additional information to the second half of the texture?
4. Would add parameters/functions similar to DrawLightVertex and DrawDarkVertex
5. Did this already. For now I feed it with dummy information. Seems to work, except for some white pixels where the landscape meets the sky.
6. See 5.
Running C4FoWLight::Render twice is probably wasteful, yes, as it calculates the whole triangle positions twice (which is a bad idea for robustness alone).
Maybe it would make more sense to have a third "pass" for the "pen"? That has a bunch of advantages - the color part of the texture probably won't need a second pass (no signed magic there), the wireframe strategy still only needs one pass total, and finally you could probably do the vertex translation right there in
That way you especially don't need any new members or parameters. Could be quite elegant.
Where does lightCoord in the shader script come from?
Also I wanted to save a default white light into the texture at the moment. I do not understand how the OpenGL stuff works yet, so I am just guessing stuff. See here for the code that should add the default color to the line 2*y + 1.
> For now I added the following: Coordinate transformation: line 2*y contains the light information, line 2*y + 1 contains the color information.
That might be a bad idea, since OpenGL will do interpolation between neighbouring pixels. I would try to use the top half of the texture for intensity/normals and the bottom half for color.
I think it was also explained before that lightCoord is calculated differently depending on what we're drawing. There's no real reason behind that - just that for the landscape I went the easier route (texture coordinate), whereas for object & particle drawing it's a good idea to calculate it from the screen coordinate (at least I think that's what's happening).
Bottom line - don't worry too much about it. Once you've everything set up correctly, you should be able to simply access the lights texture at lightsCoord / vec2(1.0,2.0) and lightsCoord / vec2(1.0,2.0) + vec2(0.0,0.5).
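Read back as plain arithmetic (assuming lightsCoord is normalized to [0,1] over the original single-purpose texture), those two lookups are:

```cpp
#include <cassert>

// The two lookups quoted above, done on the CPU for clarity:
// dividing y by 2 samples the intensity/normal data in the top half,
// adding 0.5 afterwards samples the matching color in the bottom half.
struct Vec2 { float x, y; };

Vec2 IntensityCoord(Vec2 c) { return { c.x, c.y * 0.5f }; }
Vec2 ColorCoord(Vec2 c)     { return { c.x, c.y * 0.5f + 0.5f }; }
```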
>Once you've everything set up correctly, you should be able to simply access the lights texture at lightsCoord / vec2(1.0,2.0) and lightsCoord / vec2(1.0,2.0) + vec2(0.0,0.5).
Unfortunately not. That is, not everything is set up correctly. I added a third pass for the draw strategy (then draw_color = true). It should draw to the lower half of the texture, but it does not:
// global coords -> region coords
x += -region->getRegion().x;
y += -region->getRegion().y;
y += region->getSurface()->Hgt / 2;
Debugging did not really help me. What are the region coordinates exactly? At first glance it looks like landscape coordinates (but those should be global?), but they can be negative.
Hgt on the surface is already twice as large as it should be, so region->getSurface()->Hgt / 2; should already be the offset for the lower half.
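As a self-contained check of the quoted transformation (region rect and surface height are made-up example values; Hgt is assumed to be the already-doubled surface height):

```cpp
#include <cassert>

// The quoted global -> region -> lower-half transformation, isolated
// into a testable function with illustrative names.
struct Region { int x, y, surfHgt; };

void GlobalToColorHalf(const Region &region, int &x, int &y) {
    x += -region.x;            // global coords -> region coords
    y += -region.y;
    y += region.surfHgt / 2;   // offset into the lower (color) half
}
```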
Why draw_color and not simply pass == 2? Or preferably something like "pass == C4DP_Color" with a suitable enum behind it. Otherwise it looks correct to me.
DrawVertex(). This produced some ugly conflicts if there is a second clonk, but since this was not a problem in the original code it is probably because of my lazy/experimental coding.
Anyway, with the current implementation I have no idea how to merge light colors yet. Also, the default color in the texture seems to be black. For light color this should be white. How do I change this?
Furthermore, with different light colors next to each other, the light has to be merged somehow. At first I thought that the easiest way is adding the color values, but for that I need the values that are saved in the texture?
Ideally I want to have something like this, except that the background is also white, and the image is blurred.
Also, the beam going out of the tunnel looks a little weird.
> Instead of decreasing the brightness with distance I fade the color to white (if shadow is true) in DrawVertex()
This will desaturate the shadow the darker it gets. Not sure.
> Anyway, with the current implementation I have no idea how to merge light colors yet.
Well, don't overwrite it, but do an alpha blit. That way the first light has a fighting chance.
Hm, as far as I'm concerned, light intensity should decide how bright a light is, so the color from the color texture should get "normalised" (yay for information loss). Assuming we fade towards black this has the good property that the first alpha blit would effectively "set" the fully saturated color, no matter the alpha value. So we could make the alpha depend on brightness, which would help with mixing. We need to be careful about divisions by zero though. Will probably all be a bit fiddly...
> This will desaturate the shadow the darker it gets. Not sure.
How? The "default" color in the current lights branch is "white" (1.0, 1.0, 1.0) in the shader, so the worst thing is that it would give you the current behaviour.
>the color from the color texture should get "normalised"
What do you mean by that? Change it, depending on the direction of the light, as is the case for the upper half?
> What do you mean by that?
lightColor = normalize(lightColor);
lightColor = lightColor / sqrt(lightColor.r * lightColor.r + lightColor.g * lightColor.g + lightColor.b * lightColor.b);
Note that this will reduce the brightness of white by about 40%, so you might want to put a multiplier on it to restore that. This means that if the light texture starts at black, even the first alpha blit over it will produce the desired colour at full brightness (before factoring in actual light brightness, of course). The good thing is that this allows proper mixing: Two lights blit over each other, normalisation normalises brightness, and you hopefully get a sensible result.
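Numerically, what the normalisation plus the restoring multiplier does can be sketched like this (illustrative code, not the engine's):

```cpp
#include <cassert>
#include <cmath>

// normalize() as GLSL defines it: scale the colour vector to length 1.
// White (1,1,1) has length sqrt(3), so each channel drops to ~0.577 --
// the ~40% brightness loss mentioned above. Multiplying by sqrt(3)
// restores full white. Because the result is scale-invariant, dim and
// bright versions of the same hue normalise to the same colour.
struct Col { float r, g, b; };

Col Normalize(Col c) {
    float len = std::sqrt(c.r*c.r + c.g*c.g + c.b*c.b);
    return { c.r/len, c.g/len, c.b/len };
}
```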
- I want the color to fade to default/white the further it gets away from the center. No idea how to achieve that exactly, but probably with one of the blending functions and going over the same point multiple times.
- I want the color to be additive (or rather subtractive, since white is the default color?). At the moment I use
for the color pass. That however just mixes the colors. Ideally RGB(255,0,0) and RGB(0,0,255) should add to RGB(255,0,255), no? And adding RGB(0,255,0) should add up to white light RGB(255, 255, 255). So probably this one is the better blending function?
> - I want the color to fade to default/white the further it gets away from the center. No idea how to achieve that exactly, but probably with one of the blending functions and going over the same point multiple times.
> - I want the color to be additive
"Additive" doesn't really mean much after normalisation - it doesn't matter whether you normalise (a+b) or (a+b)/2. So averaging *is* effectively additive blitting, but less likely to overflow with lots of lights around. Pretty sure in your model 10 pink lights on the same spot would produce white light, even if individual light brightness was only 1/10.
My proposal would be to continue using alpha blits, and use alpha 0.1 for bright vertices and alpha 0 for dark vertices. That way you get rid of the sharp colour edge inside the earth in your screenshot.
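The proposed alpha blit, spelled out (illustrative code; src is the incoming light colour, dst the accumulated texture value):

```cpp
#include <cassert>
#include <cmath>

// result = src * alpha + dst * (1 - alpha). Dark vertices (alpha 0)
// leave the accumulated texture value untouched; bright vertices with
// a small alpha like 0.1 only nudge the colour, so overlapping lights
// mix gradually instead of the last-drawn light winning outright.
struct Col { float r, g, b; };

Col AlphaBlit(Col src, Col dst, float a) {
    return { src.r*a + dst.r*(1-a),
             src.g*a + dst.g*(1-a),
             src.b*a + dst.b*(1-a) };
}
```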
>My proposal would be to continue using alpha blits, and use alpha 0.1 for bright vertices and alpha 0 for dark vertices. That way you get rid of the sharp colour edge inside the earth in your screenshot.
This is what I mean by "fade to default the further it gets away from the center".
>it doesn't matter whether you normalise (a+b) or (a+b)/2
Exactly, and that makes a dark red light "impossible". At least setting light color RGB(128, 0, 0) seems more intuitive than setting light color to RGB(255, 0, 0) (which is no different from RGB(128, 0, 0) after normalization) and then light intensity to 50%. [Edit: I adjust the intensity of that light in the other pass, depending on the color brightness, of course]
>alpha 0.1 for bright vertices and alpha 0 for dark vertices
Previously I used alpha 0.5 regardless of the vertex type. How do I recognize the dark vertices? shadow == true?
> Exactly, and that makes a dark red light "impossible"
Well, there's no functional difference between "dark red" light and dark "red" light. If you want, you can split out the brightness of the light colour and add it as a factor to brightness. Might be less surprising for the user.
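Splitting the brightness out of the colour, as suggested, could be sketched like this (hypothetical helper, not an existing engine function):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// "Dark red" RGB(128,0,0) becomes the fully saturated colour
// RGB(255,0,0) plus a brightness factor of 128/255, which could then
// be folded into the light's brightness setting instead of its colour.
struct ColSplit { int r, g, b; float brightness; };

ColSplit SplitBrightness(int r, int g, int b) {
    int m = std::max({r, g, b});
    if (m == 0)
        return { 255, 255, 255, 0.0f };  // black: keep a neutral hue
    return { r * 255 / m, g * 255 / m, b * 255 / m, m / 255.0f };
}
```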
> Previously I used alpha 0.5 regardless of the vertex type. How do I recognize the dark vertices? shadow == true?
Yes - Newton renamed that. Actually, thinking about it, it might make sense to have the light vertex alpha depend on the light's brightness as well. That way, brighter lights influence the colour more strongly.
Anyway, I finally got the fading part figured out. Now I have a problem that probably results from the code structure.
The process goes as follows:
A0. Paint white texture.
B0. Get light color and normalize it. Multiply so that it ranges from 0.0 to 1.0 again (not sure if this is necessary, but looks nicer during debugging process).
B1. Draw normalized/stretched color to frame buffer with alpha blending.
B2. Repeat previous steps for all light sources
C0. Read texture from the buffer in the shader, as is.
The more light sources I place, the darker the texture gets, naturally. So in the end I would have to normalize the color information in the shader again, right?
Nevermind, that was it. Unfortunately the first color that is blended is always more prominent in alpha blending, but that will be less noticeable once I put shadows back in :)
normalize is a GLSL function).
Also would suggest to factor out the light colour brightness right when it gets set - setting brightness and colour together makes sense anyway, right?
>setting brightness and colour together makes sense anyway, right?
Yes. Ideally you would set the color with HSL values, because it translates the best to what you get in the end.
Considering the ambient value, what are we going to do with that? Sven2 suggested a shader parameter, but what about setting it in the light texture (default background color) itself? The shader parameter has the advantage that it would get applied immediately, but we have to do alpha blending in the shader. The background color has the advantage that alpha blending happens automatically in the engine/OpenGL, but you have to render the whole light texture again.
With a shader parameter the clonk always emits ambient light, with a default background color the clonk would emit "white" light.
One more thing that I noticed: I would like some lights to affect the light color part of the texture, but not the light intensity/fog of war part. For example a glowing crystal could emit blueish light, but it would be blocked by shadow until your clonk light makes it visible.
> I'll create screenshots in the same positions with the different implementations, so that we can compare them.
Unless you're overwriting the colour completely every frame, we would need to see it moving, right? Otherwise it will settle on the same value anyway...
> Considering the ambient value, what are we going to do with that?
Not quite sure what you mean, to be honest. The sun colour could be set globally. The light following Clonks should probably stay white, don't see a reason to change that.
> I would like some lights to affect the light color part of the texture, but not the light intensity/fog of war part.
Not sure. Personally, I think we should avoid introducing inconsistencies - if it's glowing, it's visible. If that doesn't make sense gameplay-wise, use script to only have the crystal start glowing if it is close enough or has a line-of-sight. With appropriate fade-in, that should make sense visually.
>we would need to see it moving, right?
I am not sure what you are talking about. Currently I have three variants:
1) Default background color is white, color is normalized when drawing the texture and in the shader, alpha depends on lightness
2) Default background color is white, color is normalized only in the shader, alpha depends on value
3) Default background color is black, color is normalized only in the shader, alpha depends on value
Regarding 2 and 3: they will settle on different colors:
RGB(0,0,0) and RGB(255,0,128) blend to RGB(228,0,114) with alpha = 0.3 when normalized.
RGB(255,255,255) and RGB(255,0,128) blend to RGB(207,26,135) with alpha = 0.3 when normalized (this is the desaturation you talked about).
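The black-background figure can be checked directly (illustrative code; the alpha-0.3 blend over black and a unit-length rescale reproduce it):

```cpp
#include <cassert>
#include <cmath>

// Blending RGB(255,0,128) over black with alpha 0.3 gives
// (76.5, 0, 38.4); rescaling that to unit length times 255 lands on
// roughly RGB(228, 0, 114), matching the figure above. Over black the
// normalised result is the same for any positive alpha, since
// normalisation is scale-invariant.
struct Col { float r, g, b; };

Col BlendOverBlack(Col src, float a) {
    return { src.r * a, src.g * a, src.b * a };
}

Col Norm255(Col c) {
    float len = std::sqrt(c.r*c.r + c.g*c.g + c.b*c.b);
    return { c.r/len * 255.0f, c.g/len * 255.0f, c.b/len * 255.0f };
}
```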
>Not quite sure what you mean, to be honest. The sun colour could be set globally. The light following Clonks should probably stay white, don't see a reason to change that.
There are two ways to adjust the sun color globally:
1) Shader parameter. Blend every color with the sun color in the shader. The Clonk light is white by default and will be blended with the sun color (for example blue), giving you a blueish light.
2) Fill the background of the light texture with the sun light (or whatever parts of the world are affected by the sunlight). If the sun color is blue, as in the above example, then the Clonk light will still be white after blending, not blue.
Still confused that black doesn't work out though.