Is there a performance increase for extensions/widgets
So in a performance-critical application, would it make sense to build much of the app's functionality in an extension?
Re: Is there a performance increase for extensions/widgets
@geoffcanyon: The idea of extensions is that you decompose your app into robust 'black boxes' which you script together using LCS - this isn't really so much for performance reasons as for app-design, app-development and component-reuse reasons. The idea is that when you want to build an app you will eventually be able to combine off-the-shelf components quickly and easily into your desired app; and even if they aren't off the shelf, you should be able to decompose the app's functionality into components you write yourself, making maintenance and development easier and faster, as the app will be composed of independently developable pieces.
In terms of performance, there is a 'now' story and a 'goal' story...
Right now the LCB VM is slower than the LCS VM for similar code. From the last tests I did, about 50-60% of this slow-down is attributable to how syntax binds to the functions it executes. When you compile an LCB source file, syntax is compiled to a list of methods which might match dynamically at runtime. At the moment 99% of these methods are written as foreign C functions, so the VM has to execute them dynamically, which is currently quite a slow operation (as you might expect, constructing a call frame for a native function dynamically at runtime, based on type information stored abstractly, and then calling it is quite a bit more work than if the dispatch is written and compiled in C). (I actually have a prototype patch from a while ago that mitigates this a lot for the builtin syntax we currently have in LCB - although I've not updated it in a while, as at this stage we are aiming for correctness over performance.)
However, this turns out not to matter too much when you start to use it to build widgets. A good example is comparing the performance of the graph widget with a similar widget written as a group in LiveCode Script. The simple act of the visual representation being produced directly from the source data cuts out a huge amount of overhead - managing lots of child objects, creating/deleting child objects, setting child object properties, etc. It also cuts out the overhead the engine has in rendering and managing all of those child objects.
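To make that concrete, here is a minimal, hypothetical sketch (the widget and property names are invented for illustration, and it is untested - not the graph widget itself) of the approach described above: the OnPaint handler draws directly from the widget's own data, so there are no child controls to create, configure or delete.
widget com.example.simplebar

use com.livecode.canvas
use com.livecode.widget

-- the single piece of source data this widget renders from
private variable mValue as Real

-- hypothetical property: the fraction of the width to fill
property barValue get mValue set SetBarValue

public handler OnCreate()
   put 0.5 into mValue
end handler

private handler SetBarValue(in pValue as Real)
   put pValue into mValue
   redraw all
end handler

public handler OnPaint()
   -- paint straight from mValue - no child objects to manage
   set the paint of this canvas to solid paint with color [0.2, 0.4, 0.8]
   fill rectangle path of rectangle [0, 0, my width * mValue, my height] on this canvas
end handler

end widget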
In terms of libraries, right now you might not see much of a performance benefit. However, LCB has many features which make it a great deal more suitable than LCS for writing the primitives that LCS then uses, and that is why you should consider moving more 'low-level' functionality into LCB. For example, right now (a short sketch follows the list below):
- There are real lists - the List type - i.e. a sequence of values with a lot of syntax that helps manipulate them.
- There is the notion of a 'handler type', and variables of 'handler type' - i.e. you can put a handler in a variable and then call it later (think safe function pointers).
- There is a C foreign function interface (albeit basic at the moment) - i.e. you can bind to a foreign C function with native parameter types and call it from LCB code.
- Handlers can have in/out/inout parameters - these have much better semantics than '@' parameters in LCS.
- There are optional types - providing a strict distinction between 'nothing' (currently 'undefined' in the language - but we are changing that) and having a value.
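To illustrate a few of the items above, here is a short, hypothetical sketch (handler names invented; note that 'nothing' is spelled 'undefined' in the language at the time of this post) showing a real List, an out parameter and an optional return type:
public handler SquareAll(in pValues as List, out rSquared as List) returns nothing
   variable tValue as Number
   put the empty list into rSquared
   repeat for each element tValue in pValues
      push tValue * tValue onto rSquared
   end repeat
end handler

public handler FirstElement(in pValues as List) returns optional Number
   if the number of elements in pValues is 0 then
      -- no value at all, strictly distinct from returning 0 or the empty list
      return nothing
   end if
   return element 1 of pValues
end handler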
In terms of the future, LCB, as a language, is being designed with both correctness and performance in mind. As with other languages which are appearing (such as Rust and Swift), we are trying to design LCB so that you cannot make the common programming errors you might make in other languages - most of them the compiler will catch for you at compile-time; and even those the compiler cannot infer statically, the runtime will catch (LCB is very, very strict - for example, you cannot get char 3 of tString if there are not 3 characters in tString).
From the performance side of things, we are designing it so that it will be possible to generate efficient native code from the compiled representation (after a suitable amount of analysis). There are a number of things which can be done here - unnecessary typechecking can be eliminated; the overhead of dynamic foreign function calling can be eliminated; and method selection at call sites can be reduced (syntax bindings are naturally polymorphic - 'is', for example, is implemented for each datatype - so if you write 'if x is y' then the appropriate 'is' needs to be selected). Additionally, typed variables which can be represented without 'boxing' will be stored unboxed. For example, an 'Integer' will pretty much always fit into a native int type, and so it will be possible to generate code which does this.
It is the aim, for example, that the following code:
variable tIndex as Integer
variable tAccumulate as Integer
put 0 into tAccumulate
repeat with tIndex from 0 up to pLimit
   add tIndex to tAccumulate
end repeat
Will eventually be compilable so that it essentially uses the same sequence of instructions as this C code:
int tIndex, tAccumulate;
tAccumulate = 0;
for(tIndex = 0; tIndex <= pLimit; tIndex++)
    tAccumulate += tIndex;
We do still have quite a way to go to achieve the end goal, and it will be a bit of a journey. However, LCB is useful as it is now and we are working hard to ensure that it will only get better - on all the vectors we are designing it on.
Re: Is there a performance increase for extensions/widgets
The planned enhancements sound great. I'm just wondering whether, at this point, something like this in LCB:
variable tIndex as Integer
variable tAccumulate as Integer
put 0 into tAccumulate
repeat with tIndex from 0 up to pLimit
   add tIndex to tAccumulate
end repeat
is faster or slower than this in LCS:
repeat with i = 0 to pLimit
   add i to tSum
end repeat
Re: Is there a performance increase for extensions/widgets
@geoffcanyon: As I said above:
Right now the LCB VM is slower than the LCS VM for similar code
So yes, the loop in LCB will be slower than in LCS. However, the scope/context of your question seemed somewhat wider than you perhaps intended (particularly given the title), which is why you got a more verbose response.

Re: Is there a performance increase for extensions/widgets
I just wasn't clear whether
LCMark wrote: Right now the LCB VM is slower than the LCS VM for similar code.
meant that LCB code was slower when making calls to native libraries, or under all circumstances. It seems like the obvious optimization of strict typing should increase performance, but apparently not (enough).
thx
gc
Re: Is there a performance increase for extensions/widgets
LCMark wrote: There is a C foreign function interface (albeit basic at the moment) - i.e. you can bind to a foreign C function with native parameter types and call it from LCB code.
Sounds good.
One of LC's competitors uses Basic as its main language.
I can drop native code (Objective-C, or Java for Android) between tags into the Basic code flow and use it as it stands.
For example:
--------- iOS App code snippet -----------------------------------
--- Basic here
--- Basic here
#If OBJC
- (void)test {
NSLog(@"test");
}
#End If
--- Basic here
--- Basic here
-----------------------------------------------------
I use an Objective-C library (1000 lines) for a calculator engine.
Using LC, I have to build an external to make use of my library.
Using the above method, I drop the 1000 lines between tags and it works flawlessly.
It would be nice to see LC do something similar so we can use native code as-is...
Paul
Re: Is there a performance increase for extensions/widgets
@geoffcanyon: Strict typing will allow us to increase performance substantially - we'll be doing considerable work on the VM side of things in due course to achieve this. At the moment though we are focusing our efforts on correctness - making sure we get the core syntax and semantics right as they have (obviously) a direct impact on the optimization we are able to do in the VM.
Re: Is there a performance increase for extensions/widgets
@paul_gr: I'm not sure whether embedding the Obj-C in the LCB source is necessarily how I'd approach this. However, packages will be able to contain compiled native code referenced by LCB foreign handlers, and we plan to integrate the compilation of such code into the build mechanism for packages themselves, so you don't have to do all the compilation setup and configuration yourself.
What we're aiming for, though, is ensuring that you rarely, if ever, have to write bridging code in a lower-level language. We want to make the foreign interoperation rich enough that you never have to leave LCB, and the LCB language itself rich enough that you can do anything with it that you could with a lower-level language, but in as 'safe' a way as possible.
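As a rough sketch of that direction (all names here are hypothetical, and the exact 'binds to' string format and type-bridging behaviour are assumptions rather than anything stated in this thread), an LCB library wrapping a compiled C calculator engine shipped inside the package might look something like:
library com.example.calcwrapper

use com.livecode.foreign

-- hypothetical binding: assumes the package ships a compiled libcalc
-- exporting double calc_evaluate(const char *pExpression)
foreign handler calc_evaluate(in pExpression as ZStringNative) returns CDouble binds to "c:libcalc>calc_evaluate"

-- LCS calls this wrapper; LCB bridges String/Number to the native types
public handler EvaluateExpression(in pExpression as String) returns Number
   return calc_evaluate(pExpression)
end handler

end library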
Re: Is there a performance increase for extensions/widgets
Thanks Mark,
I appreciate the info.
Paul
Re: Is there a performance increase for extensions/widgets
LCMark wrote: What we're aiming for, though, is ensuring that you rarely, if ever, have to write bridging code in a lower-level language.
Hmm... how will you allow us to do things like blocks and assign delegates? I had been expecting I would still need to write intermediary code between any asynchronous APIs (just about anything non-trivial) and LCB, but maybe I'm wrong.
Re: Is there a performance increase for extensions/widgets
@monte: Blocks are just closures - and LCB already has the basic form of these. There is a 'handler type', and you can treat handler identifiers as values - when you fetch such an identifier it binds the referenced handler together with the current instance pointer (i.e. the module scope if a library, the actual instance of the widget if a widget). These handlers can be called from either LCB or native code via an API (there's an MCHandlerRef abstraction in libfoundation).
The only thing we need to add to this to make LCB handlers callable from arbitrary native code is the ability to generate a native code 'trampoline' for a given handler value. This 'trampoline' would be a small dynamically generated native code function which wraps the appropriate HandlerRef API together with the specific HandlerRef value and calls it - from the point of view of C, the trampoline is just a normal function pointer. (iOS does need some special consideration here as you can't dynamically generate native code on that platform - but it's a solved problem, you just pre-compile a bunch of trampolines referencing a global variable for the handler value).
In terms of delegates, we plan for you to be able to derive a foreign object type in LCB - so you'll be able to create a real Objective-C object which is written in LCB. The basis for this is pretty much the idea of LCB handler values being able to be trampolined to from native code.
Indeed even in the short term, once we have added the ability to pass a handler value to a foreign function, with a bit of getting-your-hands-dirty with the C objc-runtime API you'll be able to build objects and such directly.
Also, it's not just Obj-C which we are wanting to target with this level of interoperation, but also Java and C++.
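To show the piece that already exists (a hypothetical sketch, not code from this thread), a handler identifier can be stored in a variable of handler type and called later; the stored value carries the instance it was fetched from, which is what makes the trampoline idea above workable:
widget com.example.clicknotifier

use com.livecode.widget

public handler type Callback() returns nothing

-- the stored handler value carries the instance it was fetched from
private variable mOnClick as optional Callback

public handler SetClickCallback(in pCallback as Callback)
   put pCallback into mOnClick
end handler

public handler OnMouseUp()
   if mOnClick is not nothing then
      -- call the stored handler value like an ordinary handler
      mOnClick()
   end if
end handler

end widget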