19 March 07 Model export
Although there has been an import function for some time, the only importable packages were created as part of the build process. There is now an export function on any model that assembles everything needed to reconstruct the model into a single file. This includes other referenced models in the workspace, the component definitions, connection types and signals, and any external modules (source code) that may be referenced from scripts.
The result is a single jar file that can be used to distribute the model. A couple of such files are included in the examples area of the website. They do present potential problems of importing different components with the same path, or different versions of the same component, either of which could break existing models. The best solution may be to promote the use of distinct workspaces for new imports.
January - February 07 Code fragments for component behavior and model control
The previous approach for generating the class files that define model behavior was to generate method stubs according to the component definition and then let the user edit the java source file. However, this is rather complex and exposes more of the source code than is necessary, when really only the method bodies are required (not signatures, curly brackets, package declarations etc), possibly along with some state variables and dependencies.
So now all the source code entry is fragment based, with different slots for the bodies of different methods, imports, state variables and external dependencies. As well as not showing the user any code that they did not write, this also allows better error reporting because of the tighter structure. By removing java-specific declarations it should also make it easier to allow code fragments in other languages.
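A minimal sketch of what fragment-based assembly might look like: the user supplies only imports, state variables, and a method body, and the system wraps them in the package declaration, class skeleton, and signatures they never see. All names here (CodeFragments, assemble, advance) are illustrative, not the real API.

```java
// Hypothetical sketch of fragment-based source assembly.
public class CodeFragments {
    public final String imports;      // e.g. "import java.util.Random;"
    public final String stateVars;    // e.g. "double v = -65.0;"
    public final String advanceBody;  // body of a single method, nothing more

    public CodeFragments(String imports, String stateVars, String advanceBody) {
        this.imports = imports;
        this.stateVars = stateVars;
        this.advanceBody = advanceBody;
    }

    // Wrap the fragments in the boilerplate the user never writes or sees.
    public String assemble(String pkg, String className) {
        StringBuilder sb = new StringBuilder();
        sb.append("package ").append(pkg).append(";\n");
        sb.append(imports).append("\n");
        sb.append("public class ").append(className).append(" {\n");
        sb.append("    ").append(stateVars).append("\n");
        sb.append("    public void advance(double dt) {\n");
        sb.append("        ").append(advanceBody).append("\n");
        sb.append("    }\n");
        sb.append("}\n");
        return sb.toString();
    }
}
```

Because each fragment lands in a known slot, a compiler error can be mapped back to the specific slot that caused it, which is the tighter error reporting mentioned above.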
4 January 07 Physical Interactions and model control
Physical interactions are working again (rewritten), within the new connections/signals framework. Currently there are only three transmitter/tuner pairs ("tuner" rather than "receiver" by analogy with TV tuners, and because "receiver" is used for cabled connections): for objects, actions, and contact with the environment. But the code is in place to add generated signal-specific components, just as there are for routers and aggregators, if it proves useful.
The new connection model, which aims to use generated code for connectivity, ends up significantly shorter than the old one, but there is probably still scope for further consolidation.
December 06 Model construction
Copy and paste now works within models to a limited degree. This proves essential if models and, particularly, component specifications are to have a useful quantity of metadata. A typical case where this is needed is for the metadata surrounding a physical quantity such as a membrane potential which may figure in many models. In most cases, the appropriate units, ranges, scale etc will be the same, so copying is an improvement on re-entering the data. A more structured solution is to create a new type containing just the field and appropriate metadata, but this needs streamlining in the user interface (TODO) before it beats copy-and-paste.
Model control scripts are now also presented under a new tab for any model (though most models do not have scripts), just as the structure scripts are. It makes it rather easier to edit and test the scripts. But it still needs a smarter java editor, and probably for the (user-written) method bodies to be separated from the (auto-generated) declarations.
23 November 06 Aggregation
There is a new generated "...Aggregator" class for each non-primitive signal type along with the "...Conveyor" classes. In effect, this splits out functionality that had been accumulating in the conveyor class. When a socket allows multiple incoming connections, the read method on the socket should return an array of all the incoming values or objects. When there is only one, it could return the value itself, but that leaves the reader either always accessing an array or working out what it has got. In fact, the system knows, so sockets that accept multiple inputs now always return arrays. This is all fine, and was being done with some extra methods on the normal Conveyor class, but it breaks down for socketed components. There, the input conveyor on the component itself relays from the conveyor on the socket, and it is unclear whether the single input to the inner conveyor should be treated as a genuine single input, or whether the (possibly) multiple inputs of that input should be returned. Sometimes you want one, sometimes the other.
So now, conveyors that return arrays are called "Aggregators" and are generated from signal definitions at the same time as other conveyors. An aggregator can have a source relay, in which case it just reports the inputs to that source; otherwise it aggregates its sources. This means that conveyors always chain through their one and only source; and aggregators also chain through their relays, if set.
At the same time, the primitive conveyors have been split into different classes (IntConveyor, DoubleConveyor etc) which matches the case for generated object conveyors better. TODO - the same should be done for Routers - at present there is only a BareRouter and a deprecated MultiModeRouter.
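The conveyor/aggregator split can be sketched roughly as follows. The class and field names are assumptions for illustration; the real generated classes differ.

```java
import java.util.ArrayList;
import java.util.List;

// A conveyor chains through its one and only source.
class DoubleConveyor {
    DoubleConveyor source;           // exactly one upstream source, or null
    double value;

    double read() {
        return source != null ? source.read() : value;
    }
}

// An aggregator either relays another aggregator's inputs (the socketed
// component case) or collects the values of all its own sources into an array.
class DoubleAggregator {
    DoubleAggregator relay;          // optional: report that source's inputs instead
    final List<DoubleConveyor> sources = new ArrayList<>();

    double[] read() {
        if (relay != null) return relay.read();      // chain through the relay if set
        double[] out = new double[sources.size()];   // otherwise aggregate own sources
        for (int i = 0; i < sources.size(); i++) out[i] = sources.get(i).read();
        return out;
    }
}
```

The reader of a multi-input socket then always iterates an array, and the relay field resolves the ambiguity described above: an aggregator with a relay reports the (possibly) multiple inputs behind its single source.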
22 November 06 Evaluation on demand?
Not a change (yet) in fact, but a candidate: should components that normally require a continuous update have an option of saying they are able to supply the state on demand and that they do not need the continuous update? This could cut out execution costs in generating values that are never read. Of course, it would only work with components that can compute their state from the current state of their neighbors (ones that don't need to know their own history). A canonical example is an analogue-to-digital converter: if nothing is recording the output, it needn't compute anything.
This could be implemented at present (in user space) with reverse signals on the scalar cable: before reading a value, it could send a signal back saying "I'm about to read your value" which would cause the recipient to check that it is up to date. But this would be complicating the model specification for the sake of implementation convenience, which is probably not the best solution.
There is a possible downside too in that many components reading the same value would either cause multiple unnecessary updates or would require extra overheads to detect and avoid. Either way could end up costing more than the continuous update. More examples needed.
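A minimal sketch of what the on-demand idea could look like, including a dirty flag to avoid the repeated-update problem just mentioned. This is entirely hypothetical; nothing like it exists in the code yet, and the names are made up.

```java
// Hypothetical pull-based component: it recomputes only when read,
// and at most once per input change, however many readers there are.
class AdConverter {
    private double analogInput;
    private double digital;
    private boolean dirty = true;
    int computations;                 // counts real conversions, for illustration

    void setInput(double v) {
        analogInput = v;
        dirty = true;                 // mark stale; don't compute yet
    }

    double read() {                   // update on demand
        if (dirty) {
            digital = Math.rint(analogInput);   // trivial stand-in for the conversion
            computations++;
            dirty = false;
        }
        return digital;
    }
}
```

The dirty flag is exactly the "extra overhead to detect and avoid" multiple updates; whether it costs less than a continuous update would depend on the read/write ratio in practice.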
22 November 06 Namespaces and refactoring
Although user-defined components internally have full java style IDs, these aren't used everywhere, and it is normally possible to get away with just the component name. Should all user-defined components be forced to live in the same namespace? That is, should each name be allowed only once even if components are in different folders? In general, this is a pretty good policy even if it isn't obligatory (and it makes moving components much easier in the absence of comprehensive refactoring support...). But, equally there is a pretty good case for using the same name for objects of different type. In particular, given a signal called "Potential" the natural name for the corresponding connection type is also "Potential". Anything else, such as prefixes or suffixes for the different types seems a bit unnatural and risks being hard to remember.
So, duplicate names are now allowed, as long as they are for different types. The resolver takes an additional string argument to enable it to decide which one to load. The string can be any text that is found in the serialization of the intended type: the type that matches the string earliest is the one returned. I'm not sure how well this will generalize, but since there are only four core types and their serializations are system defined, it ought to work pretty robustly for now.
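The earliest-match rule can be sketched as below. The Resolver and Item classes are illustrative stand-ins, not the actual resolver.

```java
import java.util.List;

// Hypothetical sketch of disambiguation by serialization content: among
// same-named candidates, return the one whose serialized form contains
// the hint string earliest.
class Resolver {
    static class Item {
        final String name;
        final String serialization;   // e.g. "<ConnectionType name='Potential'/>"
        Item(String name, String serialization) {
            this.name = name;
            this.serialization = serialization;
        }
    }

    static Item resolve(List<Item> candidates, String name, String hint) {
        Item best = null;
        int bestPos = Integer.MAX_VALUE;
        for (Item it : candidates) {
            if (!it.name.equals(name)) continue;
            int pos = it.serialization.indexOf(hint);   // -1 if absent
            if (pos >= 0 && pos < bestPos) {            // earliest match wins
                bestPos = pos;
                best = it;
            }
        }
        return best;
    }
}
```

With system-defined serializations, the type tag appears near the start of each serialized form, so a hint like "ConnectionType" reliably picks the connection over the same-named signal.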
10 November 06 Code Generation and Compilation
The October refactoring was mainly to get the model specification process working smoothly. Behind the scenes, the generated code is now functional for custom object connections. The most interesting problem here is with the instantiation of the conveyor components that handle routing and aggregation. Since these are now generated, they have to be loaded by the Janino class loader, but the code doing the main assembly is all static. They could, of course, be created by reflection, but this is unnecessary because the type definitions have to know about them and are already loaded that way. The current solution is that the generated type definitions instantiate representatives of the necessary conveyors as member variables. This way they are loaded by Janino when the component is accessed. The core code just has to clone the representatives that it finds without knowing anything more about them.
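This is essentially the prototype pattern, and can be sketched as follows. All names here are illustrative; the real generated classes and core interfaces are not shown.

```java
// Core-side interface: the static assembly code knows only this.
interface Conveyor {
    Conveyor copy();
}

// Stands in for a generated conveyor class, loaded by Janino.
class PotentialConveyor implements Conveyor {
    public Conveyor copy() { return new PotentialConveyor(); }
}

// Stands in for a generated type definition: holding a representative
// instance as a member forces the conveyor class to be loaded by the
// same class loader when the type itself is loaded.
class PotentialType {
    final Conveyor conveyorRepresentative = new PotentialConveyor();
}

// Static core code: clones the representative it finds, without ever
// naming the concrete generated class.
class CoreAssembler {
    static Conveyor newConveyorFor(PotentialType type) {
        return type.conveyorRepresentative.copy();
    }
}
```

The key point is that the core never references the generated class by name, so it needs neither reflection nor access to the Janino class loader.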
Error reporting: I should have done this ages ago, but at least now any Janino class loading errors are tracked back through Exception.getCause() as far as possible to show the root of the problem in the script-checking dialog. It would still be good to highlight the location of the problem in the script, though this sounds a bit like writing an IDE.
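The cause-chain walk amounts to a few lines; a minimal version (the class and method names are illustrative) looks like this:

```java
// Follow getCause() to the deepest exception, so the dialog can report
// the underlying problem rather than the outermost wrapper.
class Errors {
    static Throwable rootCause(Throwable t) {
        Throwable cur = t;
        while (cur.getCause() != null && cur.getCause() != cur) {
            cur = cur.getCause();     // guard against self-referential causes
        }
        return cur;
    }
}
```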
1 November 06 Drawings
Fixed a curious problem where the drawing order of the components of icons could be changed, but reverted on saving and reloading. It comes from the shapes being actual objects that are treated as peers to the tables that define them (just for drawing efficiency), so an assembly can reorder the peers without the source table knowing. Now sets can reorder themselves to correspond to the ordering of their peers in a separate set.
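The reordering step can be sketched as a one-liner: sort the source items by the position of their peer in the (possibly reordered) peer list. The names here are illustrative, not the actual classes.

```java
import java.util.Comparator;
import java.util.List;
import java.util.function.Function;

// Reorder items so they follow the order of their peers in a separate list.
class PeerOrdering {
    static <T> void reorderToMatch(List<T> items, List<?> peers, Function<T, ?> peerOf) {
        items.sort(Comparator.comparingInt(it -> peers.indexOf(peerOf.apply(it))));
    }
}
```

Applied on save, this keeps the source table consistent with whatever reordering the assembly applied to the peer shapes.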
Started a ScriptMath class that is accessible to all scripts and mediates access to mathematical functions available elsewhere (such as Poisson random numbers, which was the first requirement). Also need some sensible inherited methods for testing if values are defined rather than Double.isNaN(v) (TODO).
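The defined-value test mentioned in the TODO might look like this. ScriptMath is named in the text; the isDefined method and UNDEFINED constant are assumptions for illustration.

```java
// Sketch of a readable "has this value been set?" test for scripts,
// using NaN as the unset marker instead of calling Double.isNaN directly.
class ScriptMathSketch {
    static final double UNDEFINED = Double.NaN;

    static boolean isDefined(double v) {
        return !Double.isNaN(v);
    }
}
```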
25 October 06 Connections and Signals
This is a big change: instead of two primary types (components and "conveyable") there are now three: components, connections and signals.
The previous picture was that the user defined the forms of communication as "conveyables" that encapsulated the quantity that is transferred (boolean, int, float, arrays or "object" for the rest), the connectivity style (whether one or more plugs could go in a socket etc) and the visual presentation. However, there are a number of difficulties with this scheme.
The upshot of this is that the single conveyable type had to be replaced by separate definitions of the signals and of the connections. At the same time, the signals now figure in the code generation which creates signal-specific container and routing objects.
So, signals define what can move between components, and can be defined in much the same way as components themselves (though without all the field type options). They can be set to a primitive type or field definitions can be added to specify the content of the signal. For example, a contributor to the membrane conductance could supply a conductance and a driving potential. A class is generated from the signal type, and also a "...Conveyor" class that handles routing and aggregation between components. In the case that multiple inputs are attached to a single socket, the (generated) conveyors combine them to an array that can be iterated over from the script.
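For the conductance example, the generated signal class might look something like the sketch below. The fields follow the example in the text; the class name and the current() helper are illustrative, and the real generated code is not shown here.

```java
// Sketch of a generated signal: a conductance contribution G with a
// driving potential E, as a synapse might supply to a cell membrane.
class MembraneCurrent {
    double g;   // conductance
    double e;   // driving potential

    MembraneCurrent(double g, double e) {
        this.g = g;
        this.e = e;
    }

    // The current this contribution injects at membrane potential v.
    double current(double v) {
        return g * (e - v);
    }
}
```

With multiple synapses plugged into one socket, the generated conveyor would hand the script a MembraneCurrent[] to iterate over and sum.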
Above the signals now are connection types that specify the forward, and, optionally, the reverse content. For example, this allows a "membrane" connection for adding a synapse to a cell such that the synapse supplies a current object (G and E) and the cell sends back events when it fires.
One slightly odd feature of the current scheme is that in the code generation, signals now map to two classes (the signal and its conveyor), types map to three classes (a properties class, the component case, and the component itself containing the script), but connections don't map to any classes. That is, the connection definition is only used at setup time. There is some sense in this, since it is a purely structural definition that has finished its job once the model is constructed, but generated code will probably be needed here too at some stage if connections acquire more dynamic properties (likely with more complex network architectures).
20 September 06 Component sharing
Previous versions came with a default set of component models that were unpacked on the first startup. This has been disabled temporarily and replaced by an import option (File -> import) for importing a package of components as a jar file.
Naming - all the example components and models have been shifted to a new domain, neurostruct.org so that they can have globally unique IDs. Both components (as shown in the components view) and models (models view) can have the same root and can be packaged together in a jar file. Internally, however, separate folders are still used, so they are sorted out when a jar is unpacked. This seems like the most flexible solution, and may be useful if the model/component definition blurs in future. It also has the benefit that a full model, made of many component definitions and model items, can be packaged as a single executable jar file, so (TODO) a "run model from jar" version of the execution environment could easily be developed.
19 September 06 Janino
Arno has fixed the embedded compiler. Everything seems to be working fine with the new version.
References within components are now accessible from the graphical view and de facto socket components (ie, any component containing a single field that is a reference) show the icon of the selected target overlaid.
14 September 06 Embedded Scripting
Components can now have two different program fragments. One is run once per table, the other, once per instance. This should be pretty much the definitive script structure for V3.
The initial design required that tables were purely declarative expressions of a model's structure, and therefore shouldn't have associated code. But this breaks down for cases where you want to sidestep all the internal instantiation and provide an independent object to spawn its own children. Export to the new object therefore takes the structure, not an instance, hence the need for per-table scripts.
The goal of the scripts is to allow external classes to supply executable model components without any knowledge of the (user-configurable) model structure. This is tricky for tree-like structures such as an ion channel with sets of states and transitions. The best pattern appears to be to require the external code to support a construction interface, with methods like addState(channelTypeKey, state parameters...) and, when built, getChannelInstance(channelTypeKey). The channel builder must keep track of what channel a state goes in (for example, addState can be called before any other initialization for a channel). The addState methods can then be called from the substate tables in the model using their parent as the key.
It does impose some requirements on how the external implementation can be accessed, but it's probably good discipline anyway. Most importantly, the external code needn't know anything about the internal structure of the model.
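The construction-interface pattern described above could be sketched as below. The method names follow the text's example; the classes and parameter shapes are illustrative assumptions.

```java
import java.util.HashMap;
import java.util.Map;

// External code exposes a construction interface keyed by channel type,
// so sub-state tables can add states using their parent as the key
// without the external code knowing the model's structure.
interface ChannelBuilder {
    void addState(String channelTypeKey, String stateName, double... params);
    Object getChannelInstance(String channelTypeKey);
}

class SimpleChannelBuilder implements ChannelBuilder {
    private final Map<String, Map<String, double[]>> channels = new HashMap<>();

    public void addState(String channelTypeKey, String stateName, double... params) {
        // addState may be called before any other initialization for a channel,
        // so the builder creates the channel entry lazily.
        channels.computeIfAbsent(channelTypeKey, k -> new HashMap<>())
                .put(stateName, params);
    }

    public Object getChannelInstance(String channelTypeKey) {
        return channels.get(channelTypeKey);   // stand-in for building a real channel
    }
}
```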
Using the latest version of Janino would be good for error reporting but the Catacomb class structure causes class loading errors. It appears to be a Janino problem - let's hope it is fixed soon.
6 September 06 Graphical editing
There are still too many operations that can only be effected via the table view. Now local references can be edited in the diagram via a popup menu. The same is still needed for references within components.
24 August 06 Model Structure - Subcomponents and References
A core feature is that the same component definition can be reused in many different contexts. From an implementation/execution point of view this is straightforward: who cares who holds the primary reference? But from a UI perspective it raises a lot of issues.
Catacomb 2 handled it with a special socket type that holds a reference to any other table. Ports on the target table are inherited by the socket, allowing connections from the local environment. This works for simple cases, but gets awkward quickly for several reasons: not all ports can meaningfully be exposed to the enclosing assembly; it is not possible to change the reference target without breaking the enclosing assembly; exposing too many ports makes objects unnecessarily complicated; and if the ports are there, then the icon has to look like the referenced component, thereby losing much of the benefit of encapsulation.
The V3 design forces much more encapsulation: only ports in the targets of explicit reference fields (those that are part of the table specification) are exposed. For all the rest (including the normal case of adding a component of a given type to an assembly) the user should create explicit socket types (possibly to be auto-generated later). The socket type, like any other type, specifies its own ports, and these are the ones seen by the enclosing assembly. Connections to the socket's internals are effected by port-forwarding to and from named ports on the internal structure. That is, forwarding behavior is a new primary attribute of any port. This allows, for example, a cell socket to be constructed that contains a reference to a cell assembly. The socket can have a port for reading the potential that forwards to the port called "potential" on a subcomponent called "body". Any cell assembly can be set as a target of the reference and, if they are present, the corresponding quantities will be exposed via the socket.
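A minimal sketch of port forwarding along these lines, using the "body"/"potential" example from the text. The Port, Assembly and ForwardingPort classes are illustrative, not the actual implementation.

```java
import java.util.HashMap;
import java.util.Map;

class Port {
    double value;
    double read() { return value; }
}

// Stand-in for a cell assembly: ports looked up by subcomponent and name.
class Assembly {
    final Map<String, Map<String, Port>> portsBySubcomponent = new HashMap<>();

    Port find(String subcomponent, String portName) {
        Map<String, Port> sub = portsBySubcomponent.get(subcomponent);
        return sub == null ? null : sub.get(portName);
    }
}

// A socket port that forwards to a named port on a named subcomponent
// of whatever assembly the reference currently targets.
class ForwardingPort {
    final String subcomponent, portName;   // e.g. "body", "potential"
    Assembly target;                       // the referenced cell assembly

    ForwardingPort(String subcomponent, String portName) {
        this.subcomponent = subcomponent;
        this.portName = portName;
    }

    // Read through to the corresponding port, if the target has one.
    Double read() {
        Port p = target == null ? null : target.find(subcomponent, portName);
        return p == null ? null : p.read();
    }
}
```

Swapping the reference target just means setting a different Assembly; any assembly that happens to have a "body"/"potential" port exposes it through the same socket port.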
Sockets also have an external/internal switch which specifies where they can be accessed from. These are used, particularly in a set of relay components (part of the standard component set - there is nothing hard-coded about them), for externalizing or internalizing communications. They are convenient for developing case-like structures around assemblies, analogous to a computer case that holds the input and output ports mediating between peripherals and the internal components.