- Add "new p {}" syntactic sugar for "{ Prototype = p }"
- '->' also works with proplists
- Add a proplist representing the global scope, named Global
- Add a proplist representing the scenario, named Scenario
- this returns the Scenario proplist in Scenario calls
- this returns the definition in a definition call
- Add def and effect parameter types
- Remove unused PrivateCall and ProtectedCall
- Remove unused function "descs"
- Remove ScriptGo, ScriptCounter, goto and the ScriptN callbacks
For the details, check out the repository.
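To illustrate the "new" sugar from the list above, a minimal sketch (the proplist p and its member Foo are made up for the example):

var p = { Foo = 42 };
var q = new p {}; // exactly the same as { Prototype = p }
Log("%d", q.Foo); // Foo is found via the Prototype chain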
Now, the last remaining problem is the behavior of the inherited and _inherited keywords and of plain function calls.
- inherited in definition Foo points to different functions depending on what other definitions were included before Foo, which is the only thing that makes it still necessary to have multiple copies of functions instead of one for every definition using it. (We're basically solving the Diamond Problem in an unconventional way.) I'd like to restructure the parser a bit to make local variable initialization work across script reloads, and these multiple copies of functions make that inconvenient.
- Is there a common use case for calling a function with this != the proplist it's in? Perhaps we could replace #including libraries with that and ban multiple inheritance?
- I think the main use case for inherited (besides #appendto scripts) are callback chains. But this basically relies on everyone writing _inherited(...) into callbacks in case one of the #included definitions needs the same callback later. Other languages/libraries are using callback lists instead, which can be dynamically manipulated. We're using Effects and GameCallEx, which are kind of awkward in comparison.
- Should the parser look up the function to be called when not using "->", or should the lookup be done at runtime, as if one had used "this->"?
I probably need to read some recently written scripts to get a feeling for the patterns that could be simplified with function pointers. Unfortunately, this isn't as obvious as the EffectVar stuff...
I am also curious about your thoughts/plans to replace #include with another system. Currently I can't picture what that would look like. Any more details?
> Could you make a hello-world example how the script would look like then? @ Global/Scenario proplist, functions in proplists(?), this-usage in definitions?
I haven't made any actual syntax changes beyond the "new foo {}" thingy.
Global and Scenario mostly work like definition proplists. So like you can do "Clonk->IsClonk()" now, you can do "Scenario->InitializePlayer()" instead of GameCall("InitializePlayer"). Except that the scenario proplist is also writable like an object, so you now can use local variables in scenario scripts.
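A minimal sketch of how a scenario script might look with the writable Scenario proplist (the variable and function bodies here are made up):

// Scenario.c
local score; // scenario-local variable, now possible

func InitializePlayer(plr)
{
	score = 0;
}

// elsewhere, instead of GameCall("InitializePlayer", plr):
Scenario->InitializePlayer(plr);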
this in DefinitionCalls like "Clonk->Foo()" means that in Foo, this==Clonk instead of this==nil.
Functions as values mean that you can do "obj.Hit = Flint.Hit" to make obj explode like a flint does.
>- Should the parser look up the function to be called when not using "->", or should the lookup be done at runtime, as if one had used "this->"?
Is there any disadvantage to the parser already looking it up?
>(besides #appendto scripts)
I think #appendto scripts are an important part when you have something as modular as the Clonk package system. I think it would be a pity to lose that.
Until now I really liked the way Clonk handled inheritance and overloading/appendtoes.
PS: On the other hand some sort of "scoping" would be nice - that is: library A has a private function "Draw" and B has a private function "Draw", C inherits from A and B and executes some code of B that uses "Draw" - A's "Draw" should not be called here because it was never meant to be called from somewhere else. ("private"/"public")
PPS: And the system should be easy to understand/work with to not scare off beginners :/
> Is there any disadvantage to the parser already looking it up?
If you have the parser look it up, you lose the possibility of switching the function out for another one at runtime.
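For example, with runtime lookup a plain call keeps working after the function has been swapped out (a sketch; Flint and Hit stand in for any definition and callback):

func Demo(obj)
{
	obj.Hit = Flint.Hit; // swap the implementation at runtime
	obj->Hit();          // runtime lookup finds Flint's Hit
}
// If the parser had resolved Hit at parse time, the swap would be invisible to this call.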
We would have to think of a method to call the previously overloaded function though. The overloaded function is often integrated into the logic of the overloader, for example as in
func MaxContents() { if (foo) return inherited()*2; else return inherited(); }
So I don't think it's sufficient to just collect the functions as in Rock->Hit = [Rock->Hit, MyHitImplementation].
Both would probably require that the including object calls LibFooInit() in Construction() instead of each Construction() calling _inherited. The total lines of code shouldn't grow too much from that. But I'm not sure how complicated and widespread other callback inherited-chains are.
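The LibFooInit pattern mentioned above could look roughly like this (a sketch; all names are hypothetical):

// In the including object, instead of each library's Construction() calling _inherited:
func Construction()
{
	LibFooInit(); // explicit initialization call into library Foo
	LibBarInit(); // and into library Bar
}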
>one could do "local IsPowerConsumer = Library_PowerConsumer.IsPowerConsumer"
That doesn't really sound feasible with big libraries :/
What I liked about the old library system was that you really could get the ~whole functionality of an object/library just by saying, "hey, I want to use that object!" via #include.
And to be honest I would rather have a system that is usable and easy than a system that is beautiful from a code designer's perspective :/
#include Bar
func Foo() { _inherited(...); }
Which function gets called by that _inherited? Answer is: You can't know without looking at every script. It could be from some script that appends to Bar, or from some entirely unrelated script that merely got included before the example script in case Bar doesn't have a Foo function. This only works in practice because we mostly use _inherited for callback functions which don't rely too much on exactly which function gets called. And these callback chains fall apart when one callback function forgets the _inherited.
So while the occasion to change all this is my desire to make the script parser more sane, we should use the opportunity to build something better.
#include Foo
#include Bar
func Baz() { ++counter; return Foo::Baz(); }
I am just a little bit afraid that either old, well-used features (#appendto / the possibility to "freely" overload functionality in scripts) get thrown out because they don't fit with the new design, or that "overloading" then means a lot of new bloat around what you actually want to accomplish.
Make "new p {}" an "actually calls p.new()" feature? With the standard "new" implementation being some standard - possibly engine - function? That could open up a lot of very prototype-y programming patterns. Like, say, have effects getting created by doing
var effect = new Effect { period = 100, call = &MyFunc };
static const MegaClonk = new Clonk { MaxEnergy = 100000 };
and the second
CreateObject(MegaClonk);
Also, Effects call more for something like
local GlowEffect = { // or "new Effect {" if we want
func Start() { [...] },
Interval = 42
};
[...]
CreateEffect(GlowEffect, obj);
That way, not every effect instance has to carry redundant data.
Whether proplists have magic engine behavior depends on the proplist, not the prototypes of the proplist (except that objects must have a definition in their prototype chain). I like that the syntax reflects that - Magic engine behavior comes from calling an engine function. Hiding the difference between them and plain script constructs isn't a goal, because the rules differ a lot.
>Say you want to create one MegaClonk prototype that derives from the Clonk, and then create a bunch of instances of tha
Hopefully enough syntax sugar to make it possible that every beginner still starts with the SuperTeraXFlintBomb as their first object :)
>Also, Effects call more for something like[...]
Mh, that cries for more than one script per object definition!
ScriptGlowEffect.c would contain the properties for the glow effect then - less nesting of braces! (Especially when they contain long functions)
global func CreateObject(d, x, y, o) {
	var obj = new d {};
	obj->SetStatus(1);
	obj->SetPosition(x, y);
	obj->SetOwner(o);
	return obj;
}
around. (We can't have the constructor create an active object because that would change the state of the game, and the constructors have to be side-effect-free since they would get called from the parser for
local foo = new bar { baz = 42 };
)
Make "CreateObject" the function that assigns engine-magic to the object, and have "RemoveObject" just disassociate the object from its game object? That could solve some strangeness with deleted objects as well.
>So I think what I'll do is simply ban including the same function from multiple scripts
I am not a fan of this I guess.
What happens if you include two scripts that use the same function then? One of the functions is completely hidden and cannot be called at all?
Also, the pdf doesn't work (it looks empty to me - I can copy&paste text from it, but that is not exactly helpful). Can anyone confirm?
> What happens if you include two scripts that use the same function then?
You get a helpful error message, of course. That has the nice aspect of catching accidental name collisions.
The pdf is probably simply too big for your viewer. I could try to make graphviz make it smaller, but it can be nicely summarized - almost all inherited chains are only two functions long.
We don't have extension packs now, which were a part of Clonk (and should be a part of OpenClonk too). And every pack may want to add new functionality to the Clonk. I hope your system covers that too.
>So I think what I'll do is simply ban including the same function from multiple scripts
Sec, you mean I can't have two objects including a third, with both calling inherited() in a function? What about all the library design then?
>but only two objects use this feature, so I'll take the enhanced protection against accidental name collisions instead.
As Randrian said: I am not sure whether the current work state of OpenClonk is far enough along to base the decision on whether features are needed.
PS: if I include A and B - do they currently have an order? If so, what speaks against making include chains work like C->B->A? As in "#include works like you copy&pasted the whole script"
> PS: if I include A and B - do they current have an order? if so, what speaks against making include-chains work like C->B->A? As in "#include works like you copy&pasted the whole script"
I answered that in the first post in this thread:
>> inherited in definition Foo points to different functions depending on what other definitions were included before Foo, which is the only thing that makes it still necessary to have multiple copies of functions instead of one for every definition using it. (We're basically solving the Diamond Problem in an unconventional way.) I'd like to restructure the parser a bit to make local variable initialization work across script reloads, and these multiple copies of functions make that inconvenient.
The fact that you didn't know that it currently works that way convinces me even more that it is just too surprising or obscure a feature to keep :-) Especially since "inconvenient" is a euphemism for "I spent hours trying to find a solution that I liked and found none". Also, since the engine will get simpler as a result of this change, we can spend some code on a better solution.
> if so, what speaks against making include-chains work like C->B->A? As in "#include works like you copy&pasted the whole script"
Ignoring the implementation considerations for a moment, "as if copying the script" may be easy to explain, but it isn't the most useful tool. One normally wants some isolation against other scripts to avoid accidental name collisions. Splitting out a helper function in a library shouldn't risk breaking a random extension object that uses that library.
Next steps:
- the mentioned #include deduplication
- fix function lifetimes (either not deleting functions until game end or reference counting)
- test
- merge
- Add/replace lots of features that should be using function pointers and only didn't because those didn't exist
- Take another look at inherited using the lessons learned from that. Hopefully a pure "how can we make this better" will be more productive than the "how can we make this better while also solving the local variable initialization problem".
- the "target" parameter
...is unneeded. The only parameter needed is "effect", with the target as a property. Maybe the same for "time" (in Fx*Timer)?
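A sketch of what a Fx*Timer with only an effect parameter might look like (speculative; the Target property name is an assumption of this example):

func FxGlowTimer(effect, time)
{
	// the target is reachable as a property instead of a separate parameter
	effect.Target->SetClrModulation(RGB(255, 255, time % 255));
	return FX_OK;
}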
- temporary callbacks / Initialize / Destruction
You probably see the line
if(temp) return;
more often than Start calls without it. Effects should have additional callbacks Fx*Initialize and Fx*Destruction that are meant to, e.g., initialize variables. Those callbacks are obviously only called once on creation/removal - not on temporary Start/Stop.
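Compared to today's pattern, the proposed callbacks would split one-time setup out of Fx*Start (the Fx*Initialize name follows the proposal above; the body is made up):

// today: every Start callback needs the temp guard
func FxFooStart(object target, effect, int temp)
{
	if (temp) return;
	effect.counter = 0;
}

// proposed: called exactly once on creation, never on temporary Start
func FxFooInitialize(object target, effect)
{
	effect.counter = 0;
}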
- all callbacks should automatically be forwarded to all effects on the object
- too few seem to use the return-value constants for Fx*Timer
Including me... why? Are they too hard to remember? In the docs it even says
If this function is not implemented or returns -1, the effect will be deleted after this call.
Something else that I needed once:
- when is Fx*Stop called? After the items are dropped but before the Death()-callback in the object?
There is no possibility to easily, e.g., remove the items of a Clonk when he dies(?)
- constants for some effect priorities (minor)
Anything else?
> - all callbacks should automatically be forwarded to all effects on the object
Didn't Guenther implement a more generic way to hook into object functions, like e.g. FindObject(Flint).Hit = MyEffect.Hit;? This would also work for callbacks from script, like OnShockwaveBlast, etc.
static const HitEffect = {
	NextHit = nil,
	func Hit() {
		// this is the object
		Awesome();
		var fn = this.Hit;
		this.Hit = GetEffect(HitEffect).NextHit;
		var r = this->Hit();
		this.Hit = fn;
		return r;
	},
	func Start(obj) {
		// this is the effect
		if (obj->GetEffect(HitEffect))
			return obj->RemoveEffect(this);
		NextHit = obj.Hit;
		obj.Hit = Hit;
	},
	func Stop(obj) {
		// this is the effect
		obj.Hit = NextHit;
	}
};
One could probably imagine some library solution that would reduce the boilerplate a bit, but having the engine do the job would be easier. And would work with multiple instances of an effect on the same object. (Though the Real Solution(tm) to that would be closures. With closures one could also reduce the boilerplate to a single function call, at the cost of one wrapper function per callback. Not really worth it...)
I'll add some documentation next. Until then, here's a silly test scenario script, and the commits with "Script:" in the title are the ones that change the C4Script language or API.
func InitializePlayer(i)
{
var p1 = { Name="p1" };
p1.Bar = this.Foo;
p1->Bar();
local p2;
p2 = { Name="p2" };
p2.Bar = this.Foo;
p2->Bar();
Foo();
}
func Foo() {
Log("Foo: %v", this);
}
Next steps:
- Calling bare functions. Probably simply like this:
func foo(function bar) { bar(); }
- Declaring functions inside proplist literals. Probably only in proplist literals that initialize local or static const variables, to make the savegame code simpler.
- Creating effects with a prototype, and make effect callbacks work like "effect->Foo()". Combined with the above, this should reduce the boilerplate involved with writing effects considerably.
- Make all the things that accept function names also accept function pointers.
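Putting those steps together, effect scripts might end up looking roughly like this (purely speculative syntax, following the items above):

local GlowEffect = {
	Interval = 35,
	func Timer(time) {
		Log("tick %d", time);
		return FX_OK;
	}
};

func Activate()
{
	var e = CreateEffect(GlowEffect, this);
	e->Timer(0); // effect callbacks become ordinary proplist calls
}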
foo in my local scenario. Whenever I want to add this to an existing definition, I cannot do it without getting at least a warning:

In Foo:
#appendto Bar
=> will produce an error: #appendto in definition

In System.ocg\AppendToBar.c:
#include Foo
#appendto Bar
=> will produce a warning: #include in #appendto
Is there an official way to do this without a warning? Alternatively, I would be glad if I could just overload the definition somehow without copying it. For example, the possibility to just redefine the whole script part of a definition without the need for having a DefCore, graphics, etc. This is already partially possible by replacing functions from another definition, but it is a very dirty method, imo.