OpenClonk Forum
Up Topic Development / Developer's Corner / Colored lights
Parent - - By Marky [de] Date 2015-05-03 17:28
I dug through the code again and decided to add a second light texture with the color information. This is not implemented yet. My first stop was the shader uniform code, because I wanted to know where the shader gets "lightTex" from. This is probably done with the uniforms that are defined in the enums? Anyway, here is what keeps me from going on:
- I have no idea where the light texture is saved to
- I have no idea how the shaders access the light texture
- I have no idea how to create a second texture

The rest/blending should be similar to how brightness information is saved to the texture in C4DrawFoWLightTextureStrategy::DrawVertex
Parent - - By Clonk-Karl [us] Date 2015-05-06 03:34 Edited 2015-05-06 03:39

> My first stop was the shader uniform stuff, because I wanted to know where the shader gets the "lightTex" from. This is probably done with the uniforms that are defined in the enums?


Yes. The lightTex uniform is set up in the code prior to the drawing call, e.g. here for landscape rendering, here for sprite rendering and here for mesh rendering.

> - I have no idea where the light texture is saved to


I don't quite understand that question. The texture data is held in a C4Surface, and a framebuffer object is associated with it so that it can be used as a render target. Both happen in C4FoWRegion::BindFramebuf.

> - I have no idea how the shaders access the light texture


You mean how the coordinates into the light texture are computed? This is different for landscape and meshes/sprites. For landscape, an additional set of texture coordinates is uploaded with every vertex, while for meshes/sprites a transformation matrix is stored in another uniform value that converts device coordinates to texture coordinates.

> - I have no idea how to create a second texture


Probably looking into C4FoWRegion::BindFramebuf would be a good start.

> the rest/blending should be similar to how brightness information is saved to the texture in C4DrawFoWLightTextureStrategy::DrawVertex


C4DrawFowLightTextureStrategy::DrawVertex does (parts of) the rendering into the texture. C4FoWRegion::Render has the main logic for that. You'll probably want to change it such that it renders into two textures at the same time. Here is some example code showing how to do that.
Parent - - By Marky [de] Date 2015-05-06 19:32

>The lightTex uniform is set up in the code prior to the drawing call


This is what I meant. "lightTex" and "C4SSU_LightTex" are similar in name, but why is the uniform "lightTex" and not "LightTex" or "C4SSU_LightTex"?

>change it such that it renders into two textures at the same time


Is there a benefit to doing it that way? I wanted to add a second uniform "lightColorTex"
Parent - - By Clonk-Karl [us] Date 2015-05-06 21:41

> This is what I meant. "lightTex" and "C4SSU_LightTex" are similar in name, but why is the uniform "lightTex" and not "LightTex" or "C4SSU_LightTex"?


Hm. I don't know. Because Peter likes camel casing? :) C4SSU_LightTex is just a symbolic constant in the engine, it appears nowhere as a string, whereas "lightTex" is the actual uniform variable name. It's a bit like having a symbolic constant such as C4V_Object instead of the plain integer value that's behind it.
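To make the distinction concrete, here is a small C++ sketch of the idea (names modeled on the thread; the real engine tables may well differ): the enum value never reaches the GPU, only the string does.

```cpp
#include <cassert>
#include <cstring>

// Sketch only: C4SSU_LightTex is a symbolic index on the engine side,
// while "lightTex" is the literal variable name in the GLSL source.
// (The second enumerator is a hypothetical stand-in.)
enum C4ShaderSystemUniform { C4SSU_LightTex, C4SSU_AmbientLightTex };

// Hypothetical lookup from symbolic constant to GLSL uniform name.
const char *UniformName(C4ShaderSystemUniform u)
{
    switch (u)
    {
    case C4SSU_LightTex:        return "lightTex";
    case C4SSU_AmbientLightTex: return "ambientLightTex";
    }
    return "";
}

// At draw time the engine would resolve the string to a location, roughly:
//   GLint loc = glGetUniformLocation(program, UniformName(C4SSU_LightTex));
//   glUniform1i(loc, textureUnit);
```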

> > change it such that it renders into two textures at the same time
>Is there a benefit to doing it that way? I wanted to add a second uniform "lightColorTex"


There are two steps involved: a) rendering the light created by all light sources into the light texture, b) using the light texture to apply all lights when rendering the landscape, sprites, meshes. I was talking about how to do a), while you are talking about b). Yes, for b) there's no way around binding another texture for all drawing.
Parent - - By Marky [de] Date 2015-05-09 18:28
My first test is here, but it does not do much yet. There is a detailed description in the commit message. The landscape does not render, because I get a lot of GL errors (illegal operation).

I would change the implementation so that the original light texture is expanded, as Peter suggested.
Parent - - By Clonk-Karl [us] Date 2015-05-10 15:50 Edited 2015-05-10 15:53

> Landscape does not render, because I get a lot of GL errors because of an illegal operation.


My only guess is that the light color texture uniform variable is optimized out, and then assigning a value to it produces an illegal operation error. You could try something like lightColor.r = min(lightColor.r, 1.0); instead of lightColor.r = 1.0;, to prevent that.

To check which operation produces the error, you could add CheckGLError calls after every GL operation you are doing, and then set a breakpoint in CheckGLError in the debugger.
Parent - - By Isilkor Date 2015-05-10 21:19

> To check which operation produces the error, you could add CheckGLError calls after every GL operation you are doing, and then set a breakpoint in CheckGLError in the debugger.


Or (if on Windows) run the engine inside a debugger with --debug-opengl, and it'll automatically break on GL error.
Parent - - By Isilkor Date 2015-05-11 19:38
By the way, I mention Windows specifically because apparently nobody cares about OpenGL debugging on other OSes. *hint* *hint* (somebody port this)
Parent - - By Isilkor Date 2015-05-25 23:09
Fine, nobody port it then. At least test my port of it, which I can't really do myself because I don't have a Linux desktop: https://github.com/isilkor/openclonk.git glx-debug
Parent - By Caesar [jp] Date 2015-05-29 02:24
https://bpaste.net/show/6d6442f6b3fa
Just for archival purposes.
Parent - - By PeterW [gb] Date 2015-05-07 16:27 Edited 2015-05-07 16:32
I would suggest against adding a second light texture - that will be a *lot* of code changes for little benefit. It should be a lot easier to just double the size of the existing texture, and use (say) the bottom half for the new information.

Here's roughly what you'd need to do:
1. Change the texture allocation so it's bigger
2. Run the "pFoW->Render" bit in "C4FoWRegion::Render" twice. Either do a GL coordinate transformation or adjust the code so it writes to the other half for the second iteration.
3. The fade-out stuff ("Copy over the old state") you can probably leave in. Check C4FoW::GetFramebufShader for what happens here. Not that exciting.
4. Adjust the renderer so it knows what part of the light texture generation it's in, and generate the information you're interested in.
5. Adjust the light texture coordinate calculations in the shader. Best to make variant(s) of the LIGHT_DEBUG mode for figuring this part out.
6. Finally put everything together by multiplying light colour in at the right point.

Still not exactly a trivial task. Feel free to ask if you get stuck somewhere.
Parent - - By Marky [de] Date 2015-05-09 18:13
Yes, I think I understand your reason for expanding the original texture. I am not a fan of writing to another half of the texture, because it is confusing to read. What I can say as a newcomer is that the engine code is confusing as it is. For example, what is pBacksurface in C4FoWRegion used for? I mean, why does it have to flip back and forth between two textures?
Anyway, after looking at the code I agree that expanding the texture seems to be easier than adding a new texture.
Parent - By Zapper [de] Date 2015-05-09 21:57

> I am not a fan of writing to another half of the texture because it is just confusing to read


Keep in mind that this might be very time-critical code, though
Parent - By PeterW [gb] Date 2015-05-11 10:55
All depends on presentation and documentation. I don't think it *has* to be harder to follow.

The reason the two textures are separate is because we copy from one to the other for the fade-out. To be fair, I didn't actually test whether we could read from a texture at the same time as writing to it, but it seemed like a very risky thing to do.
Parent - - By Marky [de] Date 2015-05-10 10:17
Ok, first set of questions.
1. Done
2. Why do I need to do this? Is it not enough to adjust "C4FoWLight::Render" so that it writes the additional information to the second half of the texture?
4. Would add parameters/functions similar to DrawLightVertex and DrawDarkVertex
5. Did this already. For now I feed it with dummy information. Seems to work, except for some white pixels where the landscape meets the sky.
6. See 5.
Parent - - By PeterW [gb] Date 2015-05-11 11:13 Edited 2015-05-11 11:16
Don't take what I say as gospel, this was just my first thoughts about approaching it. Running C4FoWLight::Render twice is probably wasteful, yes, as it calculates the whole triangle positions twice (which is a bad idea for robustness alone).

Maybe it would make more sense to have a third "pass" for the "pen"? That has a bunch of advantages - the color part of the texture probably won't need a second pass (no signed magic there), the wireframe strategy still only needs one pass total, and finally you could probably do the vertex translation right there in C4FoWDrawLightTextureStrategy::DrawVertex.

That way you especially don't need any new members or parameters. Could be quite elegant.
Parent - - By Marky [de] Date 2015-05-12 20:48
For now I added the following coordinate transformation: line 2*y contains the light information, line 2*y + 1 contains the color information. The access in the shader seems to be wrong, though. Where does lightCoord in the shader script come from?

Also, for the moment I wanted to save a default white light into the texture. I do not understand how the OpenGL stuff works yet, so I am just guessing. See here for the code that should add the default color to line 2*y + 1.
Parent - By Clonk-Karl [us] Date 2015-05-13 02:24

> For now I added the following: Coordinate transformation: line 2*y contains the light information, line 2*y + 1 contains the color information.


That might be a bad idea, since OpenGL will do interpolation between neighbouring pixels. I would try to use the top half of the texture for intensity/normals and the bottom half for color.
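The interpolation problem can be checked numerically with a toy model (purely illustrative, not engine code): with GL_LINEAR filtering, a sample at a fractional row mixes adjacent rows, so interleaving intensity and color lines contaminates both kinds of data.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Minimal 1-D model of GL_LINEAR filtering along the texture's y axis.
float SampleLinear(const std::vector<float> &rows, float y)
{
    int y0 = static_cast<int>(std::floor(y));
    int y1 = std::min(y0 + 1, static_cast<int>(rows.size()) - 1);
    float f = y - static_cast<float>(y0);
    return rows[y0] * (1.0f - f) + rows[y1] * f; // blends neighbouring rows
}
```

With an interleaved layout {light, color, light, color}, a sample halfway between rows returns a mix of light and color data; with a top/bottom split, the same sample stays inside one kind of data.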
Parent - - By PeterW [gb] Date 2015-05-13 12:09 Edited 2015-05-13 12:18
+1 on what ck said - don't expect the coordinates to be integer (at all), we use interpolation quite a bit. Interleaving lines won't work... for a lot of reasons.

I think it was also explained before that lightCoord is calculated differently depending on what we're drawing. There's no real reason behind that - just that for the landscape I went the easier route (texture coordinate), whereas for object & particle drawing it's a good idea to calculate it from the screen coordinate (at least I think that's what's happening).

Bottom line - don't worry too much about it. Once you have everything set up correctly, you should be able to simply access the lights texture at lightsCoord / vec2(1.0,2.0) and lightsCoord / vec2(1.0,2.0) + vec2(0.0,0.5).
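Translated into plain coordinate math (a sketch assuming normalized [0,1] texture coordinates with intensity in the top half and color in the bottom half), the two lookups work out as:

```cpp
#include <cassert>
#include <cmath>
#include <utility>

struct Vec2 { float x, y; };

// Sketch of the two sample positions described above: divide y by 2 for the
// intensity half, then shift by 0.5 for the color half.
std::pair<Vec2, Vec2> LightTexCoords(Vec2 lightCoord)
{
    Vec2 intensity{lightCoord.x, lightCoord.y / 2.0f};
    Vec2 color{lightCoord.x, lightCoord.y / 2.0f + 0.5f};
    return {intensity, color};
}
```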
Parent - - By Marky [de] Date 2015-05-14 07:53

>Once you've everything set up correctly, you should be able to simply access the lights texture at lightsCoord / vec2(1.0,2.0) and lightsCoord / vec2(1.0,2.0) + vec2(0.0,0.5).


Unfortunately not. That is, not everything is set up correctly. I added a third pass for the draw strategy (then draw_color = true). It should draw to the lower half of the texture, but it does not:


  // global coords -> region coords
  x += -region->getRegion().x;
  y += -region->getRegion().y;

  if (draw_color)
  {
    y += region->getSurface()->Hgt / 2;
  }


Debugging did not really help me. What are the region coordinates exactly? At first glance they look like landscape coordinates (but those should be global?), yet they can be negative. Hgt on the surface is already twice as large as it should be, so region->getSurface()->Hgt / 2 should already be the offset for the lower half.
Parent - - By Sven2 Date 2015-05-14 10:28
Wouldn't it be better to distinguish passes by #defines? Otherwise you have two extra cases of branching in every iteration.
Parent - By Marky [de] Date 2015-05-14 11:34
For now I am just doing a lot of experiments without much of a direction. Since progress with the upper and lower half is slow, I am now setting up the code for the actual light color (at the moment it draws the light color on the light texture, not the direction/brightness).
Parent - By PeterW [gb] Date 2015-05-15 22:23
Region is the part of the landscape that the light texture covers. Generally a bit more than the screen (rounded up). Coordinates can be negative for triangles that are partly off-screen - which is actually a good point, because this means that while drawing one part of the texture you might overwrite the other part. You might need to set clippers or something - ugly.

Why draw_color and not simply pass == 2? Or preferably something like "pass == C4DP_Color" with a suitable enum behind it. Otherwise it looks correct to me.
Parent - - By Marky [de] Date 2015-05-14 12:33 Edited 2015-05-14 12:35
My first experiments with actual light color are done, and I saw some things that I did not realize before: the way lights work is a little bit strange. For my experiment the light texture contains the color of the light. Instead of decreasing the brightness with distance, I fade the color to white (if shadow is true) in DrawVertex(). This produced some ugly conflicts if there is a second clonk, but since this was not a problem in the original code, it is probably because of my lazy/experimental coding.
Anyway, with the current implementation I have no idea how to merge light colors yet. Also, the default color in the texture seems to be black; for light color this should be white. How do I change this?
Furthermore, with different light colors next to each other, the light has to be merged somehow. At first I thought that the easiest way is adding the color values, but for that I need the values that are saved in the texture?
Ideally I want to have something like this, except that the background is also white, and the image is blurred.
Also, the beam going out of the tunnel looks a little weird.
Parent - - By PeterW [gb] Date 2015-05-15 22:36 Edited 2015-05-15 22:39

> Instead of decreasing the brightness with distance I fade the color to white (if shadow is true) in DrawVertex()


This will desaturate the shadow the darker it gets. Not sure.

> Anyway, I with the current implementation I have no idea how to merge light colors yet.


Well, don't overwrite it, but do an alpha blit. That way the first light has a fighting chance.

Hm, as far as I'm concerned, light intensity should decide how bright a light is, so the color from the color texture should get "normalised" (yay for information loss). Assuming we fade towards black this has the good property that the first alpha blit would effectively "set" the fully saturated color, no matter the alpha value. So we could make the alpha depend on brightness, which would help with mixing. We need to be careful about divisions by zero though. Will probably all be a bit fiddly...
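A numeric sketch of that mixing rule (plain C++, not engine code; function names are hypothetical): guard the normalisation against black input, and note that the first alpha blit over black already yields the fully saturated colour, no matter the alpha.

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Color = std::array<float, 3>;

// Normalise a colour to unit length, guarding the division by zero for black.
Color NormalizeColor(const Color &c)
{
    float len = std::sqrt(c[0]*c[0] + c[1]*c[1] + c[2]*c[2]);
    if (len <= 0.0f) return {0.0f, 0.0f, 0.0f};
    return {c[0]/len, c[1]/len, c[2]/len};
}

// Standard alpha blit; the idea above is to make alpha depend on brightness.
Color AlphaBlit(const Color &dst, const Color &src, float alpha)
{
    return {src[0]*alpha + dst[0]*(1.0f-alpha),
            src[1]*alpha + dst[1]*(1.0f-alpha),
            src[2]*alpha + dst[2]*(1.0f-alpha)};
}
```

Blitting red over black with alpha 0.1 gives (0.1, 0, 0), which normalises back to pure red, so the alpha value only starts to matter once several lights compete for the same spot.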
Parent - - By Marky [de] Date 2015-05-16 07:08

> This will desaturate the shadow the darker it gets. Not sure.


How? The "default" color in the current lights branch is "white" (1.0, 1.0, 1.0) in the shader, so the worst thing is that it would give you the current behaviour.

>the color from the color texture should get "normalised"


What do you mean by that? Change it, depending on the direction of the light, as is the case for the upper half?
Parent - - By PeterW [gb] Date 2015-05-18 12:30 Edited 2015-05-18 12:52
Well, you multiply something effectively white with low brightness, and you get gray instead of dark X. Just because white light is gray in the shadow, doesn't mean that coloured light should be, too, right?

> What do you mean by that?


lightColor = normalize(lightColor);

or manually:

lightColor = lightColor / sqrt(lightColor.r * lightColor.r + lightColor.g * lightColor.g + lightColor.b * lightColor.b)

Note that this will reduce the brightness of white by about 40%, so you might want to put a multiplier on it to restore that. This means that if the light texture starts at black, even the first alpha blit over it will produce the desired colour at full brightness (before factoring in actual light brightness, of course). The good thing is that this allows proper mixing: Two lights blit over each other, normalisation normalises brightness, and you hopefully get a sensible result.
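The ~40% figure checks out: |(1,1,1)| = sqrt(3), so normalising white scales each channel by 1/sqrt(3), roughly 0.577. A sketch of the compensating multiplier (illustrative only; a real shader would still need to clamp single saturated channels, which overshoot 1.0):

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Color = std::array<float, 3>;

// normalize() as in GLSL, followed by a sqrt(3) multiplier so that pure
// white maps back to (1,1,1). Assumes non-black input.
Color NormalizeRestoreWhite(const Color &c)
{
    float len = std::sqrt(c[0]*c[0] + c[1]*c[1] + c[2]*c[2]);
    float k = std::sqrt(3.0f) / len;
    return {c[0]*k, c[1]*k, c[2]*k};
}
```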
Parent - - By Marky [de] Date 2015-05-18 21:00
Ok, I will implement the normalization, sounds good. The next obstacles are the following:
- I want the color to fade to default/white the further it gets away from the center. No idea how to achieve that exactly, but probably with one of the blending functions and going over the same point multiple times.
- I want the color to be additive (or rather subtractive, since white is the default color?). At the moment I use
      glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)
for the color pass. That, however, just mixes the colors. Ideally, RGB(255,0,0) and RGB(0,0,255) should add to RGB(255,0,255), no? And adding RGB(0,255,0) should add up to white light, RGB(255,255,255). So this is probably the better blending function:
      glBlendFunc(GL_ONE, GL_ONE);
      glBlendEquationSeparate(GL_FUNC_ADD, GL_MAX);
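The difference between the two setups can be checked with plain arithmetic (a model of the RGB blend equations only; the GL_MAX part for the alpha channel is left out of this sketch):

```cpp
#include <algorithm>
#include <array>
#include <cassert>
#include <cmath>

using Color = std::array<float, 3>;

// Model of glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA): a weighted mix.
Color BlendAlpha(const Color &dst, const Color &src, float alpha)
{
    Color r;
    for (int i = 0; i < 3; ++i) r[i] = src[i]*alpha + dst[i]*(1.0f-alpha);
    return r;
}

// Model of glBlendFunc(GL_ONE, GL_ONE) with GL_FUNC_ADD: clamped addition.
Color BlendAdd(const Color &dst, const Color &src)
{
    Color r;
    for (int i = 0; i < 3; ++i) r[i] = std::min(dst[i] + src[i], 1.0f);
    return r;
}
```

Additive blending behaves as hoped (red plus blue gives magenta, plus green gives white), while the alpha blend only mixes towards the newest colour.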
Parent - - By PeterW [gb] Date 2015-05-21 16:15 Edited 2015-05-21 16:19

> - I want the color to fade to default/white the further it gets away from the center. No idea how to achieve that exactly, but probably with one of the blending functions and going over the same point multiple times.


Why?

> - I want the color to be additive


"Additive" doesn't really mean much after normalisation - it doesn't matter whether you normalise (a+b) or (a+b)/2. So averaging *is* effectively additive blitting, but less likely to overflow with lots of lights around. Pretty sure in your model 10 pink lights on the same spot would produce white light, even if individual light brightness was only 1/10.

My proposal would be to continue using alpha blits, and use alpha 0.1 for bright vertices and alpha 0 for dark vertices. That way you get rid of the sharp colour edge inside the earth in your screenshot.
Parent - - By Marky [de] Date 2015-05-21 19:58 Edited 2015-05-22 04:16

>My proposal would be to continue using alpha blits, and use alpha 0.1 for bright vertices and alpha 0 for dark vertices. That way you get rid of the sharp colour edge inside the earth in your screenshot.


This is what I mean by "fade to default the further it gets away from the center".

>it doesn't matter whether you normalise (a+b) or (a+b)/2


Exactly, and that makes a dark red light "impossible". At least setting the light color to RGB(128, 0, 0) seems more intuitive than setting the light color to RGB(255, 0, 0) and the light intensity to 50% (after normalization, RGB(128, 0, 0) is no different from RGB(255, 0, 0)). [Edit: I adjust the intensity of that light in the other pass, depending on the color brightness, of course]

>alpha 0.1 for bright vertices and alpha 0 for dark vertices


Previously I used alpha 0.5 regardless of the vertex type. How do I recognize the dark vertices? shadow == true?
Parent - - By PeterW [gb] Date 2015-05-22 16:44

> Exactly, and that makes a dark red light "impossible"


Well, there's no functional difference between "dark red" light and dark "red" light. If you want, you can split out the brightness of the light colour and add it as a factor to brightness. Might be less surprising for the user.

> Previously I used alpha 0.5 regardless of the vertex type. How do I recognize the dark vertices? shadow == true?


Yes - Newton renamed that. Actually, thinking about it, it might make sense to have the light vertex alpha depend on the light's brightness as well. That way, brighter lights influence the colour more strongly.
Parent - - By Marky [de] Date 2015-05-22 23:05
I agree on all points. Additionally, the alpha channel of the color could be used as an overall factor for the alpha blending after the other modifications (alpha for blending = brightness * alpha channel). Or is this too confusing for the user?
Parent - - By PeterW [gb] Date 2015-05-23 08:59
Which would then be a light that is relatively bright, but changes colour quickly when another light source comes into view? Not sure how useful that would be.
Parent - - By Marky [de] Date 2015-05-23 16:41 Edited 2015-05-23 18:46
Yes, I'll skip that.

Anyway, I finally got the fading part figured out. Now I have a problem that probably results from the code structure.

The process goes as follows:
A0. Paint white texture.
B0. Get light color and normalize it. Multiply so that it ranges from 0.0 to 1.0 again (not sure if this is necessary, but looks nicer during debugging process).
B1. Draw normalized/stretched color to frame buffer with alpha blending.
B2. Repeat previous steps for all light sources
C0. Read texture from the buffer in the shader, as is.

The more light sources I place, the darker the texture gets, naturally. So in the end I would have to normalize the color information in the shader again, right?

[Edit]
Nevermind, that was it. Unfortunately the first color that is blended is always more prominent in alpha blending, but that will be less noticeable once I put shadows back in :)
Parent - - By PeterW [gb] Date 2015-05-24 13:41 Edited 2015-05-24 13:47
Still think black is the better default for the texture. And yes, I was talking about normalisation in the shader - the whole idea was that the colour texture brightness shouldn't matter (note that normalize is a GLSL function).

Also would suggest to factor out the light colour brightness right when it gets set - setting brightness and colour together makes sense anyway, right?
Parent - - By Marky [de] Date 2015-05-24 16:42
Factoring out the brightness (L in HSL, not V in HSV) could be done, yes. A black default color will certainly produce more saturated colors, which might be good. I set up a test scenario based on the dark castle map. I'll create screenshots in the same positions with the different implementations, so that we can compare them.

>setting brightness and colour together makes sense anyway, right?


Yes. Ideally you would set the color with HSL values, because it translates the best to what you get in the end.

Considering ambient value, what are we going to do with that? Sven2 suggested a shader parameter, but what about setting it in the light texture (default background color) itself? The shader parameter has the advantage that it would get applied immediately, but we have to do alpha blending in the shader. The background color has the advantage that alpha blending happens automatically in the engine/openGL, but you have to render the whole light texture again.
With a shader parameter the clonk always emits ambient light, with a default background color the clonk would emit "white" light.
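The trade-off between the two options can be modelled with two toy functions (hypothetical sketches, not engine code): option 1 tints every light with the sun colour in the shader; option 2 only leaves the sun colour where no light was blitted over it.

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Color = std::array<float, 3>;

// Option 1: sun colour as a shader parameter - it tints every light,
// including the white clonk light.
Color SunAsUniform(const Color &light, const Color &sun)
{
    return {light[0]*sun[0], light[1]*sun[1], light[2]*sun[2]};
}

// Option 2: sun colour as the light texture's background - lights
// alpha-blit over it, so a fully opaque clonk light stays white.
Color SunAsBackground(const Color &sun, const Color &light, float alpha)
{
    return {light[0]*alpha + sun[0]*(1.0f-alpha),
            light[1]*alpha + sun[1]*(1.0f-alpha),
            light[2]*alpha + sun[2]*(1.0f-alpha)};
}
```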

One more thing that I noticed: I would like some lights to affect the light color part texture, but not the light intensity/fog of war part of the texture. For example a glowing crystal could emit blueish light, but it would be blocked by shadow until your clonk light makes it visible.
Parent - - By PeterW [gb] Date 2015-05-24 21:54

> I'll create screenshots in the same positions with the different implementations, so that we can compare them.


Unless you're overwriting the colour completely every frame, we would need to see it moving, right? Otherwise it will settle on the same value anyway...

> Considering ambient value, what are we going to do with that?


Not quite sure what you mean, to be honest. The sun colour could be set globally. The light following Clonks should probably stay white, don't see a reason to change that.

> I would like some lights to affect the light color part texture, but not the light intensity/fog of war part of the texture.


Not sure. Personally, I think we should avoid introducing inconsistencies - if it's glowing, it's visible. If that doesn't make sense gameplay-wise, use script to only have the crystal start glowing if it is close enough or has a line-of-sight. With appropriate fade-in, that should make sense visually.
Parent - - By Marky [de] Date 2015-05-24 22:50

>we would need to see it moving, right?


I am not sure what you are talking about. Currently I have three variants

1) Default background color is white, color is normalized when drawing the texture and in the shader, alpha depends on lightness
2) Default background color is white, color is normalized only in the shader, alpha depends on value
3) Default background color is black, color is normalized only in the shader, alpha depends on value

Regarding 2 and 3: they will settle on different colors:
RGB(0,0,0) and RGB(255,0,128) blend to RGB(228,0,114) with alpha = 0.3 when normalized.
RGB(255,255,255) and RGB(255,0,128) blend to RGB(207,26,135) with alpha = 0.3 when normalized (this is the desaturation you talked about).

>Not quite sure what you mean, to be honest. The sun colour could be set globally. The light following Clonks should probably stay white, don't see a reason to change that.


There are two ways to adjust the sun color globally:

1) Shader parameter. Blend every color with the sun color in the shader. The Clonk light is white by default and will be blended with the sun color (for example blue), giving you a blueish light.

2) Fill the background of the light texture with the sun light (or whatever parts of the world are affected by the sunlight). If the sun color is blue, as in the above example, then the Clonk light will still be white after blending, not blue.
Parent - - By PeterW [gb] Date 2015-05-25 17:50 Edited 2015-05-25 17:52
Well, what are you doing with the background colour? My suggestion would be to alpha-blit it over, so the old light texture colours shine through. Otherwise a new light source would cause very jarring colour changes. Which means that the real question is what the colour is going to be after you let it settle for a few frames - and at that point it probably won't matter too much whether you move it toward black or white by default.
Parent - - By Marky [de] Date 2015-05-25 17:58
What do you mean by settle for a few frames? The light color is the correct color immediately. For black background, see the explanation above and my post here.
Parent - - By PeterW [gb] Date 2015-05-25 18:01
Yeah, I saw in the code that you're clearing the colour every frame. I guess my point is: Don't do that. Alpha-blit the old data over, that's what we have it for.
Parent - - By Marky [de] Date 2015-05-25 18:02
I just took what was already there :p. So how would I adjust it to not clear every frame?
Parent - - By PeterW [gb] Date 2015-05-25 18:07
Ah, right - I changed it so it first draws, then alpha-blits over it. Sorry, my bad.

Still confused that black doesn't work out though.
Parent - By Marky [de] Date 2015-05-25 18:11
It works, and it significantly decreases the fading distance, because the color is more saturated. Grey as a default color is pretty good, because the fade is still visible, but the color is also more visible than with a white default. This all comes from normalization. Just make an example with low alpha and normalize it; you'll see how it results in the example on the right.
Parent - - By Marky [de] Date 2015-05-25 15:48
The black texture was not so good, it removed the fading towards the edge. Also, with normalization, a dark colored light becomes nearly invisible, regardless of the alpha or brightness value.
Parent - - By PeterW [gb] Date 2015-05-25 17:51
What? The whole point of normalisation is that the brightness of the colour shouldn't matter, right? o_O
Parent - By Marky [de] Date 2015-05-25 18:15
I have a solution for that already, will post screens shortly :)
Parent - - By Marky [de] Date 2015-05-18 20:33
A first success with changeable colors. Do we need GetLightColor()? So far only SetLightColor() exists.
Parent - By Maikel Date 2015-05-18 20:39
I would say yes, good to see some progress on the lights!

Powered by mwForum 2.29.7 © 1999-2015 Markus Wichitill