I attached two examples: in the first there are light sources on both sides of the wall, but only one side is illuminated. In the second example (screenshots 2+3, from Tutorial 2), it's basically the same situation: as soon as the Clonk shows up on the left-hand side, the side that is illuminated by the torch suddenly gets dark.
These are basically artifacts of the fact that we store only a single light direction per pixel in the light texture, so in these examples the two lights with different directions somehow get blended into a single direction (something like: the stronger light wins?).
The ideal way to fix this would probably be to render each light into a separate texture, and then in the pixel shader evaluate the lights from all textures, and either sum the intensities up or take the one with maximum intensity. Now, one light texture for every light source might be prohibitively expensive, but maybe we can experiment with a fixed set of light textures (2-4?) and assign each light to one of these light textures. That would allow us to get a correct result for at least that many light sources. It would be cool if we could then re-assign a given light source to a different texture dynamically at run-time (e.g. if some light sources were removed and one of the textures would otherwise be unused), however I'm not sure this is possible to do smoothly due to the slow fadeout as the light changes.
Just posting this to get some feedback on the topic in general and maybe additional ideas from the people who are familiar with the technicalities of the light system.
I think a more precise result can be achieved if the light direction on the screen (xy_lightdir) and the light's inclination onto the screen (z_lightdir) are separated in the calculation. Let's say
z_lightdir: the higher the value, the more the light goes directly toward the screen
xy_lightdir: An angle. Since they are all normalized, there should be no strength component.
Then if the final z-lightdir is calculated like this:
normalize(z_lightdir1+z_lightdir2+...)
it makes the light direction tilt more towards being orthogonal to the screen (light coming from the camera) the more light sources shine onto one point from different directions. The more lights there are, the fewer shadows; i.e. with this change alone, the rock in the middle would not be so dark on the right side. But let's move on to the calculation of the xy_lightdir:

(xy_lightdir1*lightstrength1+xy_lightdir2*lightstrength2+...)/sum_of_lightstrengths

The lightstrength would be the strength component of the light, which only falls off after the light has hit material or exceeded its range. I am not sure, though, whether this change of calculation is possible with just one light texture?
> normalize(z_lightdir1+z_lightdir2+...)
Isn't z_lightdir just a scalar value? What do you mean with normalizing that? Dividing by the total number of light sources?
I guess the calculations are doable in a single surface by just summing up all lights and multiplying each with a factor of 1/sum_of_lightstrengths.
But if I understand your suggestion correctly, then a wall that is illuminated from one side would be brighter (on that side) than if it were illuminated from both sides (because then basically we would just change the light direction so it comes more from the Z direction and not so much from X/Y anymore?) If that's the case it sounds like we are trading one problem for another.
>Isn't z_lightdir just a scalar value? What do you mean with normalizing that? Dividing by the total number of light sources?
Oops, error in reasoning because first I was thinking z_lightdir being the z-component of the vector. Another try:
z_lightdir: the angle of the light vector towards the screen (0° - 90°; 90° = orthogonal to the screen)
Now several calculations are possible; I was first thinking of
z_lightdir = min(90,z_lightdir1+z_lightdir2+...)
but also possible would be z_lightdir = max(z_lightdir1,z_lightdir2,...). One would need to see how it looks.

> a wall that is illuminated from one side would be brighter (on that side) than if it were illuminated from both sides
Yes, true. But this is not necessarily a problem. After all, you could think of it this way: the more light sources there are, the more diffuse the light, so the fewer shadows are visible. I think this makes sense. The rock on the leftmost screenshot would then basically look like the piece of rock above the tunnel.
I've got another screenshot lying on my hard drive at home.
> this has nothing to do with multiple light sources
Is that so? It only happens if multiple light sources are around, some kind of blending error. It has nothing to do with the problem described in this thread so far though!
That being said, I would really like to try a model where light sources actually add up properly. Right now, we have a light normal and a light intensity (= 3 values), but maybe it would be better if we just had 4 values for light intensity from top, bottom, left and right respectively? Updating the light would be trivial, and shading would boil down to calculating the light for these 4 directions, then weighting by normal direction. Or we simply reduce it to a light normal+intensity again and proceed as before.
Difficult to say up-front how good or bad it would look though.
We also have light color, however, so with this scheme how would we mix the colors from different lights? What do you think about splitting the texture into 3 instead of 2, and then we can store intensities for R, G and B separately?
Not sure why you'd have a min in there though - you would want this to go dark outside the FoW, right? Z dimension I would leave implied as in extend_normal. It's going to be a lot of tweaking and approximation anyway.

And yes, this doesn't help with light colour - the only proper solution would be to have 3x4 intensity values as far as I'm concerned. For the moment, I'd assume it's not noticeable until proven otherwise. Also note that this would probably kill shiny materials, but we might want to implement that differently anyway.
> Not sure why you'd have a min in there though - you would want this to go dark outside the FoW, right?
Yes, sure -- the min just puts an upper bound on the total intensity. Otherwise you could have a 45 degree diagonal with normal vector (1/sqrt(2), 1/sqrt(2)), and two lights coming toward it from directions (1,0) and (0,1). When adding up the directions like in my post above, it would lead to a total intensity of 2/sqrt(2).

> Also note that this would probably kill shiny materials
Good point. I guess we would lose too much directional information to keep that up in a meaningful way? The fundamental flaw of this design would be that a single light in diagonal direction would be treated pretty much like two lights, one horizontal and one vertical.
For restoring shiny materials, we could re-calculate the normal vector as (right-left, top-bottom) and intensity as the sum of dot products with that (hm, expensive?). This would be a completely separate shading path that adds a highlight. This would be the "proper" way of doing reflection anyway, because the angles you consider are different.
> Secondly, I'd only want to cap if we find a definite need for it. The incoming intensities are going to be naturally capped at 1.0, so there's a natural upper bound for light intensity anyway.
My line of thinking was that light that hits a surface perpendicularly should produce the maximum intensity, which is 1. Without the min, two lights that hit the surface from opposite directions at a 45 degree angle would produce a higher intensity than that: dot((1.0, 0.0), (1/sqrt(2), 1/sqrt(2))) + dot((1.0, 0.0), (1/sqrt(2), -1/sqrt(2))) = 2/sqrt(2)...

> For restoring shiny materials, we could re-calculate the normal vector as (right-left, top-bottom) and intensity as the sum of dot products with that (hm, expensive?).
That would be the normal vector for what exactly? Do you mean the "reconstructed" direction of the incidental light?
No matter what we do, we won't be able to tell apart the scenario of two lights coming in at a 90 degree angle to each other from a single light at a 45 degree angle...
And yes. For reflections, the goal would basically be to have something that makes at least a certain amount of sense for one light source. For multiple light sources, every single one would have to produce a distinct reflection somewhere on a normal map of sufficient range - therefore forcing us into making the same number of passes. The other solution here would be full deferred shading, but that's a whole different can of worms...
> but maybe we can experiment with a fixed set of light textures (2-4?)
If we go that way, we could make this configurable in the graphics options: slower computers could use just 1 texture to get the current behaviour, and faster computers could use more.