# general-issue-tracker issues
https://gitlab.anu.edu.au/mu/general-issue-tracker/-/issues

---
**Issue #54: Flags in arithmetic/logical operations**
https://gitlab.anu.edu.au/mu/general-issue-tracker/-/issues/54 (updated 2018-06-28T23:11:34+10:00, John Zhang)

*Created by: wks*
This issue will give the client access to the flags set by arithmetic or logical operations, such as overflow, carry, zero, negative, ... This issue should only affect the BinOp instructions (ADD, SUB, MUL, ...).
The design should consider:
- [ ] scalar integral types
- [ ] scalar floating point types
- [ ] vector types

---
**Issue #53: Access to native thread-local memory**
https://gitlab.anu.edu.au/mu/general-issue-tracker/-/issues/53 (updated 2018-06-28T23:11:34+10:00, John Zhang)

*Created by: wks*
This issue is about accessing thread-local memory/variables defined in native programs (C). One important application is the `errno` variable in C/C++.
This is only slightly related to #52 which introduces thread-local storage to Mu itself. There is no intention to force Mu's thread-local storage use the same mechanism as native programs.
Thread-local storage in native programs is highly machine/OS/ABI-dependent. The register used to point to thread-local buffers varies, and not all platforms have such a register.
One possible workaround is to depend on helper functions written in C or assembly.
But if we want Mu to integrate more deeply with native programs (i.e. do things more efficiently), we can define more instructions (probably "common instructions") to give Mu more capabilities, such as getting/setting the value of the FS register. Any such instructions would likely be platform-dependent and probably optional on unsuitable platforms.

---
**Issue #52: Thread-local storage**
https://gitlab.anu.edu.au/mu/general-issue-tracker/-/issues/52 (updated 2018-06-28T23:11:34+10:00, John Zhang)

*Created by: wks*
Add **thread-local** memory to Mu, in addition to the existing *heap*, *stack* and *global* memory.
[Proposal 1](https://github.com/microvm/microvm-meta/issues/52#issuecomment-213364592): the C-like approach, has known problems
[Proposal 2](https://github.com/microvm/microvm-meta/issues/52#issuecomment-213375674) (preferred): a more aggressive design

---
**Issue #51: WebKit's B3 JIT compiler**
https://gitlab.anu.edu.au/mu/general-issue-tracker/-/issues/51 (updated 2016-02-19T14:27:53+11:00, John Zhang)

*Created by: wks*
The B3 JIT compiler has received much attention recently.
I started a Wiki page: https://github.com/microvm/microvm-meta/wiki/B3-JIT-%28WebKit%29
Let's summarise B3 and its influence on Mu in the Wiki.
---
**Issue #50: IR construction API for the convenience of verification**
https://gitlab.anu.edu.au/mu/general-issue-tracker/-/issues/50 (updated 2016-06-17T15:24:01+10:00, John Zhang)

*Created by: wks*
# Problem
The only way for the current API to transfer an IR bundle from a client to a Mu micro VM is via the `load_bundle` API (for example, `microvm.load_bundle(""".typedef @i32 = int<32>\n""")`, or a binary format), which passes a serialised IR format (text or binary). The JVM takes this approach; its serialisation format is Java bytecode.
An alternative way to deliver a bundle is to let the client construct each node (type, function, basic block, instruction, ...) in the bundle by calling an API function (for example, `handle = microvm.make_instruction("ADD", i32, op1, op2)`) which returns a handle to the node, then passing this handle to the Mu micro VM. LLVM uses a similar approach: it provides constructors to each node class, and provides a "builder" to conveniently build a CFG.
Since the construction of the IR should be off-loaded from the micro VM, there should be a client-side library (we call it **libmu**) which constructs the IR for the client and provides an API to the client. `libmu` itself talks with the micro VM through implementation-dependent "private" APIs.
It is not clear whether **serialisation** or **construction by calling** is "better" because "better" has many definitions, but *the calling-based API must exist* because it is reported that it is very difficult to verify a parser.
## Comparing the two approaches
**serialisation**
* pros
  + language independent
    + The serialisation format can be standardised, and (almost) all languages can access byte arrays.
  + faster than *construction by calling* when the client and the micro VM have different runtime environments
    + (e.g. Haskell to C, Java to C, ...)
    + FFI can introduce considerable cost. The Go lang impl performs a SWAP-STACK operation every time it calls C.
  + simple API
    + one API function `load_bundle` handles the entire IR format.
* cons
  - difficult to verify (there are reports that it is very difficult to verify a parser)
  - requires a parser inside the micro VM:
    - there may be a mixed approach to offload the parser outside the micro VM.
  - not suitable for the end user, i.e. those who write the high-level language compiler.
    - Either the high-level compiler writer or some third party needs to build a library for serialisation.
**construction by calling**
* pros
  + easy to verify
  + minimal for the micro VM
    + No parser in the micro VM is needed.
  + faster than *serialisation* when the client and the micro VM have similar runtime environments
    + (e.g. C to C, Java to Java, C to C++, Haskell to Haskell, ...)
    + The libmu library can be implemented in the same language as the client.
* cons
  - bloated API
    - There will be hundreds of API functions just to construct each IR node. All of them need to be verified.
    - The internal state of libmu needs to be verified, but hopefully this is easier than verifying a parser.
  - paradigm impedance
    - There is no single API that satisfies all languages. For example, if the API is defined in C, it may be unsuitable for a client written in Haskell. So ideally there should be a libmu for Haskell, whether that is part of the formally verified Mu/libmu pair or not.
# Details
## Cost of foreign function calls
Depending on the two languages calling each other, the cost can be trivial (such as between C and C++) or huge. Java must go through JNI to call native functions. Go performs a swap-stack operation every time it calls C, in order to work around blocking system calls in its M*N threading model. Other language implementations, such as Python, rely on `libffi`, which builds a native call by dynamically preparing its arguments, and the high-level language such as Python needs to convert C types to Python types and vice versa.
An experiment shows that calling the simplest possible C function from Haskell introduces a 30x overhead compared to calling the same C function from another C function.
But from the verification point of view, the cost does not matter as long as it is spent in the client.
## Cost of serialisation
Serialisation is not free. In a simple experiment, serialising a CFG-like data structure to an intermediate format and then parsing it in another module (both written in C) introduced a 10% overhead compared to directly constructing the target CFG structure in the receiver, assuming the sender only holds opaque references. The major costs are memory allocation (where malloc is the bottleneck) and resolving the cross-references between nodes (where the hash map is the bottleneck).
Serialisation is not free, but it is reasonably cheap. When foreign function calls are expensive, serialisation can be used as an alternative.
## Not calling across languages
Since calling across languages is expensive, it is desirable to implement part of libmu in the same (or a similar) language as the client, and let libmu construct the data structure that is native to the micro VM.
For example, if the micro VM is written in C++, the libmu should construct a tree of C++ class instances as LLVM does.
Note that LLVM is designed and implemented in C++ and serves C/C++. It is not a problem if the only official API it provides is C++. There is a C binding, too, but it is trivial. However, Mu has a specification which allows multiple implementations. In this case, the micro VM core would not always be in C/C++ or any particular language.
However, if the micro VM is written in a managed language, such as RJava or Java, then it will be interesting:
* If the client, libmu and the micro VM all happen to share the same (or a similar) runtime, then the API calls can be cheap. The ideal case is being "metacircular", i.e. both the client and libmu run on the same micro VM as the micro VM itself. The cost is minimal.
* If libmu is written in C and provides the C API, then there is a semantic mismatch between the micro VM and libmu: *somewhere in libmu* must cross the line between two different runtimes, which introduces an FFI-like overhead, the cost of which depends on the concrete languages. Holding opaque references to objects in Mu requires such objects to be pinned, or held by some container (like the `MuCtx` structure in the current API, which is not light-weight).
* If the client and libmu use a different runtime from the micro VM ("Haskell on libmu on micro VM" is this case), they will pay for two levels of cross-runtime calls.
# Other concerns in design
## Does libmu need to be minimal?
Maybe. At least the verified libmu needs to be minimal.
There may be even higher-level libraries in the client's world, outside libmu. Those libraries are not minimal.
## How many languages should be supported?
Ideally, libmu should have both intimate knowledge of a particular Mu implementation and intimate knowledge of the language the client is written in. In theory, there can be one (or more) libmu for each (micro VM impl, client language) pair.
Since C is so popular, we will define a C API for libmu. In theory,
1. there can be more than one C API;
2. there can be APIs for other languages, and they may or may not look like the C API (preferably not, to avoid paradigm impedance).
## Mu CFG and the client CFG (or AST)
**How much should the client use the Mu IR CFG?** i.e. should the client construct Mu IR nodes and do transformations on it as LLVM does? Probably not.
LLVM is designed to be maximal: its CFG carries much information for optimisation, such as the "nsw" and "nuw" flags on the "add" and "sub" instructions.
But the Mu IR is designed to be minimal and is only designed for the micro VM to consume. It does not contain much information that benefits the client.
It is possible that a client-side library performs IR transformations, but it is doubtful whether that IR would be the same Mu IR. Many optimisations, such as deciding whether `x+1 > x` is always true, depend on extra information (such as the "nsw" flag in LLVM) which the Mu IR does not provide.
I (@wks) believe the Mu IR should only be generated as the last step of the client-side transformation, i.e. the next step is to deliver it into the micro VM.
# Towards the new API
## Micro-VM-to-client API
The existing controlling API does not need to be changed.
The bundle loading API can be removed, i.e. "how to load bundles into the micro VM" becomes implementation-dependent, which actually means **libmu-dependent**.
## libmu-to-client API
This API needs to be carefully designed because it is part of the formal verification.
There should be a model of the internal states of libmu which includes:
* The set of handles which map to Mu IR nodes.
* The state of each Mu IR node.
* Isolation between threads.
During construction, each node can hold incomplete information at a given time. The Mu IR nodes may circularly refer to each other (currently they refer to each other via IDs), so it is desirable to allocate several nodes and then link them with each other.
If multiple threads can use the same libmu, the transition in its internal state must be properly handled.
Care must be taken to select the minimum set of API functions, because the current Mu IR has 18 types and 37 instructions. The number of API functions may easily bloat to 100+ if too many CRUD commands are added.
## Micro VM-to-libmu interaction
This interface does not need to be public, but the proper handling of data structures in the micro VM is important. This interface is part of the verification.
In all cases, the choice of languages does matter. Properly chosen languages for the client and the micro VM will result in high performance and verifiability.
# Future languages
This section contains my (@wks) personal opinions. These affect my opinions on this API, too.
In the future, popular system programming languages will generally be higher-level than current languages (such as C). We have already shown that high-level languages, such as RJava and Java, can produce high-quality language runtimes, such as JikesRVM.
Instead of relying on C, high-level system programming languages will gain direct control over low-level operations, such as raw memory access (pointers). These low-level operations can be supported even though the language itself still has very high-level features, such as garbage collection and object-oriented programming. It is possible (in my opinion at least) that eventually such high-level languages can replace libc and directly interface with the kernel, thus eliminating the need for C except in some very rare cases.
This trend is already visible in several languages. C# already has unsafe pointers and an unsafe native interface (P/Invoke). The Java API is mostly implemented in Java, unlike the old days when the standard Java library was mostly implemented in C++. There is also a [JEP to add an unsafe native interface in Java](http://openjdk.java.net/jeps/191), which has not become mainstream yet. However, OpenJDK already exposes some low-level operations through the `sun.misc.Unsafe` class. RJava obviously used magic to gain low-level support. In Ruby, [ruby-ffi](https://github.com/ffi/ffi) is recommended over directly writing C modules.
With C becoming redundant, the high-level language (or runtime) may be optimised for internal interoperation (e.g. Java-to-Java calls will become faster and faster) at the expense of interoperability with C (e.g. even unsafe native calls may be costly, and object pinning or opaque handles are required for native programs, which would only run briefly).
Since raw memory access will be faster while foreign function calls will be slower, serialisation may have an advantage over calling (my experiment already shows this for Java and C). But this does not rule out the calling-based API, because the calls may not be foreign calls, in which case calling is still faster than serialisation. Ideally, in the meta-circular setting where both libmu and the client run on Mu itself, Mu-to-Mu function calls are virtually free.
---
**Issue #49: Stack frame iterator**
https://gitlab.anu.edu.au/mu/general-issue-tracker/-/issues/49 (updated 2015-11-13T14:34:31+11:00, John Zhang)

*Created by: wks*
# Problem
The current stack introspection/OSR API is inefficient.
It selects a frame by a number. For example:
```
ctx->cur_func(ctx, stack, 1);
ctx->cur_func_ver(ctx, stack, 2);
ctx->cur_inst(ctx, stack, 3);
ctx->dump_keepalives(ctx, stack, 4, &values);
```
A real-world implementation may need to unwind the stack from the top, one frame at a time, until it reaches the selected frame. If the client needs to traverse stacks with many frames, the O(n^2) complexity may be a performance bottleneck. One application is to use the Client API (or the equivalent Mu IR (common) instructions) to generate the stack trace for exception handling.
This link shows a real-world Java stack trace: https://ptrthomas.wordpress.com/2006/06/06/java-call-stack-from-http-upto-jdbc-as-a-picture/ Or click here: [jtrac-callstack.pdf](https://github.com/microvm/microvm-meta/files/24885/jtrac-callstack.pdf)
EDIT: well, this is a call graph, not a stack trace. But imagine something goes wrong in JDBC...
# Desired API
The API should provide a "frame cursor" data type, which refers to a frame in a stack. It can be created for a stack, and iterates through its frames from top to bottom.
The introspection API `cur_func`, `cur_func_ver`, `cur_inst` and `dump_keepalives` will work on this "cursor" instead of a stack and a number.
The OSR API `pop_frame`, can, instead of popping one frame at a time, pop all frames above a particular "cursor". [Preliminary experiments](https://github.com/microvm/liblushan/blob/master/src/test_remote_stack_chop.c) show that this is possible with C programs and [libunwind](http://www.nongnu.org/libunwind/).
## The "frame cursor" type
The "frame cursor" type shall be an opaque reference to ~~a Mu frame~~ a cursor. The cursor holds the context of a "current frame", and can move down to the parent frame.
It must be platform-independent.
It could potentially be large (given the number of registers in a CPU). Therefore it is desirable to be **mutable** – making a fresh copy for each frame would be costly (Haskell programmers may disagree).
There are some subtle interactions between it and the GC. GC may modify references on the stack, but the API must hide this detail from the client. So the API should not expose raw CPU state (such as register contents) to the client.
The cursor may be allocated on the Mu heap, but also may not.
The cursor is only valid while the stack remains unbound. As soon as the stack is bound again, the stack may change in arbitrary ways and the cursor is invalidated.
So I can think of some possible solutions:
1. ~~Create a new type `frameref`, like our existing `threadref` and `stackref`.~~ Create a new type `framecursor`. It has reference semantics: it refers to a mutable structure internally held by the Mu VM.
* pro: A dedicated opaque type, the cleanest model.
* con: A new primitive type, pointing to a large structure, just for introspection? Well... maybe not that bad.
* choices: Is it managed by the GC? GC is the easiest way, but we may not be able to print stack trace for OutOfMemoryException. (really? I am not sure) Alternatively it may be required to be closed explicitly.
2. Use `ref<void>` for the "cursor" type. Its content is allocated on the heap, opaque to the client, and may be platform-specific. When invalidated, the object remains live, but the content becomes invalid.
* pro: No new types introduced
* con: This implies the data content is on the Mu heap.
* con: The GC must have special knowledge of such a heap object, which is not a regular Mu object.
3. Use `ptr<void>`. Similar to `ref<void>`, but implies it is not GC-ed.
* pro, con: same as `ref<void>`
## Example Mu API
This example prints a stack trace on Mu.
```c
// This trap handler prints the stack trace.
void stack_printing_trap_handler(
MuCtx *ctx, // Equivalent to JNIEnv
MuThreadRefValue thread, // The current thread
MuStackRefValue stack, // The current stack
int wpid, // Watchpoint ID. 0 for ordinary traps.
MuTrapHandlerResult *result, // How the Mu thread should resume?
MuStackRefValue *new_stack, // Which stack shall the Mu thread bind to? Usually the old stack.
MuValue *values, int *nvalues, // What values shall be passed on the stack?
MuRefValue *exception, // What exception shall be thrown on that stack?
MuCPtr userdata) { // Client-specific data
ClientCompiler *clientCompiler = (ClientCompiler*) userdata; // The client-specific compiler.
    // Get a cursor to the top of the stack.
    MuFrameCursorValue cursor = ctx->get_stack_cursor(ctx, stack);
    // Iterate through the frames, top to bottom.
    int func_id;
    while ((func_id = ctx->cur_func(ctx, cursor)) != ID_OF_MY_STACK_BOTTOM_FUNC) {
        if (func_id == 0) { // func_id == 0 means the frame is a native frame.
            printf("This frame is native\n");
        } else { // It is a Mu frame.
            // Get the ID of the current Mu instruction.
            int inst_id = ctx->cur_inst(ctx, cursor);
            // The client looks up the source-level information.
            SourcePosition sp = clientCompiler->getSourcePosition(inst_id);
            printf("File: %s, Function: %s, Line: %d, Column: %d\n",
                    sp.file, sp.func, sp.line, sp.column);
        }
        // Move the cursor down to the parent frame.
        ctx->next_frame(ctx, cursor);
    }
    printf("End of stack trace\n");
    // Close the cursor. (Alternatively let the GC close the cursor.)
    ctx->close_cursor(ctx, cursor);
    // We want to return to the old stack and continue normally,
    *new_stack = stack;
    // but do not pass any values.
    *nvalues = 0;
    // Continue normally (not throwing an exception), passing 0 values.
    *result = MU_REBIND_PASS_VALUES;
}
```
# Existing approaches
[libunwind](http://www.nongnu.org/libunwind/) is a portable way to walk stack frames in the C language. There are different implementations on different platforms (OSX has its own implementation), but the API is the same.
`unw_getcontext` creates a `unw_ucontext_t` structure for the current stack. `unw_init_local` creates a `unw_cursor_t` on the context. Then the user can call `unw_step` on the cursor to step through stack frames. `unw_get_reg` gets the value of a machine register from a cursor. The cursor keeps the state of registers (usually it is only able to recover callee-saved registers) at the resumption points (return addresses) of frames.
Example:
```c
#define UNW_LOCAL_ONLY
#include <libunwind.h>
#include <stdio.h> // for printf
void show_backtrace (void) {
unw_cursor_t cursor; unw_context_t uc;
unw_word_t ip, sp;
unw_getcontext(&uc);
unw_init_local(&cursor, &uc);
while (unw_step(&cursor) > 0) {
unw_get_reg(&cursor, UNW_REG_IP, &ip);
unw_get_reg(&cursor, UNW_REG_SP, &sp);
printf ("ip = %lx, sp = %lx\n", (long) ip, (long) sp);
}
}
```
---
**Issue #48: Mu IR rewriting library**
https://gitlab.anu.edu.au/mu/general-issue-tracker/-/issues/48 (updated 2015-10-29T16:47:55+11:00, John Zhang)

*Created by: wks*
Mu aims to be minimal, but such minimalism has made the construction of Mu clients hard. A client-level library can ease the client's job by accepting a slightly higher-level variant of the Mu IR and translating that higher-level IR to actual Mu IR code (and/or HAIL scripts and/or subsequent API calls).
This issue tracks tasks that should be done at this layer.
* Pre-SSA to SSA converter. #44
- Writing Mu IR in the SSA form is hard, and the goto-with-values form is even harder. The library should automatically convert ordinary CFGs into the goto-with-values form using well-known algorithms.
* Platform-dependent constant values. #47
  - Some ahead-of-time clients (notably C or other "traditional" languages) expose platform details to the programmer as compile-time constants, but binding those values too early will make the object code non-portable. The rewriter should help the client determine these values so that the client compiler can be strictly ahead-of-time.
* Merge Mu IR and HAIL: #29 #46
- The library is not minimal. Integrating both languages will make the client's job easier.
* Annotations
- This will allow clients to attach arbitrary information to the Mu IR code, which can help the client introspect the program at run time.
- Note that if we need to use system debuggers (such as GDB), then these annotations need to go through the micro VM itself because it is the micro VM's responsibility to generate object codes (including the DWARF debug info).
---
**Issue #47: Sizeof?**
https://gitlab.anu.edu.au/mu/general-issue-tracker/-/issues/47 (updated 2016-06-21T13:56:11+10:00, John Zhang)

*Created by: eliotmoss*
We have encountered an interesting issue in developing the C client, namely how to deal with union types. Our thought was to define a separate struct type for each union variant, and then to cast to the appropriate struct type when accessing a particular variant. (Note that this requires structs to be heap or alloca allocated, which I think is ok -- C does not treat them as single values that can go into a register, etc., as I recall.)
The problem we have is that because Mu defines the detailed layout of a struct on a given target, we cannot determine the sizes of the structs, and thus we cannot determine the maximum size, something we need in order to allocate an instance of a union type.
We observe that Mu gives no way to ask the size of a type (or to get the offset of a field in a struct or an element of an array). While such information may not be used for typical accesses, we now see that it has at least one important use case. Given that C programs are typically way-ahead-of-time compiled, we do not consider it appropriate to generate Mu for C code only at the last minute.
We suggest that Mu provide means to determine sizes and perhaps to do simple load-time (if that is the right word) computations over these constants. Here is some possible syntax (admitting that I have not thought about it long or deeply yet):
.sizeof **name** **type**
Define **name** to be the constant that is the number of bytes needed for **type**.
.sizeof **name** **op** **t1** **t2** ... **tn**
Define **name** to be the sizes of **t1** through **tn** combined with operator **op**, where **op** can be at least **max** and **sum**.
Alternatively, we could define names for the sizes of each type, and a more general constant-computing form:
.define **name** **op** **e1** ... **en**
This would define **name** to be **op** applied to the **ei**. We could provide a suitable range of operators.
For offsets we could have:
.offset **name** **struct or array type** **idx**
This would define **name** to be the constant giving the offset of the **idx**'th field/element of the given struct or array type.
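To make the intent concrete, the union-of-variant-structs case described above might be written with the proposed syntax like this (a sketch only; the type and constant names are illustrative, and the `.sizeof`/`max` forms are the proposal's, not existing Mu IR):

```
// Two variant structs standing in for the members of a C union:
.typedef @variant_a = struct<@i32 @i64>
.typedef @variant_b = struct<@double>

// Bind names to the target-dependent sizes at bundle-load time:
.sizeof @a_size @variant_a
.sizeof @b_size @variant_b

// The union's allocation size is the maximum over its variants:
.sizeof @union_size max @variant_a @variant_b
```

The client could then allocate `@union_size` bytes for a union instance and cast to the appropriate variant struct on access, without ever hard-coding a target-specific layout.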
The point is to allow target-dependent computations over constants to be written in a target-independent (symbolic) way. I believe this would meet the needs of C.

---
**Issue #44: Pre-SSA form**
https://gitlab.anu.edu.au/mu/general-issue-tracker/-/issues/44 (updated 2016-06-17T15:23:52+10:00, John Zhang)

*Created by: eliotmoss*
We have concluded that while the official form of Mu IR is SSA form (but see Issue #18 for current thoughts on how to represent that form), many clients will find it more convenient to generate something that is mostly Mu IR but that is not in SSA form, and that it is further desirable to offer a standard tool to convert from some "pre-SSA" form to proper SSA form. This tool may operate in a stand-alone manner or be more in bed with an implementation of Mu.
We propose the following specific pre-SSA form, according to how it differs from SSA-form Mu.
1. "SSA-variables" may be assigned more than once; however, any individual such variable must be used in a type-consistent manner.
1. PHIs may be omitted (or, in the proposal of #18, values may be omitted at branches and variables omitted at labels)
1. For convenience we introduce a "copy" operator, var = ID <T> arg, which takes one argument arg of type T and assigns it to variable var. This operator seems to be convenient sometimes from a client perspective.
The converter to SSA form will perform liveness analysis and add variables to labels and values to branches as necessary, checking for type consistency. If some variable is live but not initialized, then the converter will insert a safe initialization (to 0 or 0.0 for numeric types, null for a pointer, etc.) at the latest possible point that does not interfere with existing assignments to the variable. (Optimization may move the initialization earlier as deemed appropriate.)
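To illustrate the relaxations above, here is a hypothetical pre-SSA fragment and the goto-with-values form a converter might produce (the syntax is approximate; `@ZERO`, `@ONE` and `%cond` are assumed to be defined elsewhere):

```
// Pre-SSA input: %x is assigned twice; branches carry no values.
%entry:
    %x = ID <@i32> @ZERO          // the proposed "copy" operator
    BRANCH2 %cond %then %else
%then:
    %x = ADD <@i32> %x @ONE       // reassignment, type-consistent
    BRANCH %join
%else:
    BRANCH %join
%join:
    RET <@i32> %x

// After conversion: values at branches, variables at labels,
// each variable assigned exactly once.
%entry:
    BRANCH2 %cond %then(@ZERO) %else(@ZERO)
%then(<@i32> %x0):
    %x1 = ADD <@i32> %x0 @ONE
    BRANCH %join(%x1)
%else(<@i32> %x0):
    BRANCH %join(%x0)
%join(<@i32> %x2):
    RET <@i32> %x2
```

The liveness analysis determines that `%x` is live into `%join`, so both predecessors must pass a value for it.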
We will undertake to develop the converter in Scala or Java.

---
**Issue #43: Reduce special cases involving the void type**
https://gitlab.anu.edu.au/mu/general-issue-tracker/-/issues/43 (updated 2016-06-17T15:23:50+10:00, John Zhang)

*Created by: wks*
# The current status
The `void` type is a special type in the Mu type system. It has no value, and thus many instructions/mechanisms have special cases for the `void` type.
**Instructions that have special cases for `void`**:
- `RET` and `RETVOID`: Since `void` has no value (in fact it does: the result of the `BRANCH` instruction, for example, is a value of the `void` type), we needed a special syntax to return `void`, thus we have `RETVOID`.
- The "new-stack clause" of the `SWAPSTACK` instruction: `PASS_VALUE <T> %val` and `PASS_VOID`: for the same reason why we have `RET` and `RETVOID`.
**The trap handler has a special case for `void`**:
Just like `SWAPSTACK`, the trap handler may rebind the thread to a stack and either "pass a value" or "pass `void`" or "throw an exception".
## Other existing uses
**Instructions that always return `void`**: `BRANCH`, `BRANCH2`, `SELECT`, `TAILCALL`, `RET`, `RETVOID`, `THROW`, `STORE`, `FENCE`, some common instructions: `@uvm.kill_stack`, `@uvm.thread_exit`, `@uvm.native.unpin`, `@uvm.native.unexpose`, `@uvm.meta.load_bundle`, `@uvm.meta.load_hail`, `@uvm.meta.pop_frame`, `@uvm.meta.push_frame`, `@uvm.meta.enable_watchpoint`, `@uvm.meta.disable_watchpoint`, `@uvm.meta.set_trap_handler`: These instructions do not return meaningful values.
**Instructions that may return `void` sometimes**: `CALL`, `TRAP`, `WATCHPOINT`, `CCALL`, `SWAP_STACK`: The callee, client, swappee, or whatever the other end of communication is, may not return meaningful values.
## Current properties of `void`
`void` can only be used in 3 cases:
1. As the type of allocation units that do not represent values. Hence it is usable as the referent type of reference types and pointer types. e.g. You can run `NEW <@void>`. Each time you NEW a void, you have a **new** empty object, not the same as any other.
2. As the fixed part of a hybrid to indicate the absence of the fixed part. e.g. `hybrid<void int<64>>` is a variable-length array of `int<64>`, without a fixed part.
3. As the type of instructions or the return type of functions that do not return values. e.g. the `BRANCH` instruction returns `void`.
Other properties:
- `void` has no value (in fact it does, as mentioned before)
- `void` is neither a scalar type nor a composite type.
- Only scalar types can be used for memory access: `LOAD`, `STORE`, ...
- Only composite types have other types as components: fields/elements
- `void` is neither storable nor loadable. It does not contain other parts, and it cannot be part of a struct/array/vector, i.e. there is no "array of void". The "fixed part of a hybrid" is the one exception.
- `void` is native-safe: It can be returned from native functions; and there can be `uptr<void>`.
# Proposed changes
**value of `void`**: Instead of "having no value", `void` now has exactly one value: NULL. This is consistent with Python: `NoneType` has only one value `None`.
**`void` constant**: We reuse the `NULL` literal to create a "void constant":
```c
.const @VOID <@void> = NULL // The only possible value of void.
// For the sake of consistency, we require the client to define it.
//
// Alternative: make it a pre-defined value, such as the @uvm.predef.void_t type
// and the @uvm.predef.VOID value. We could define @uvm.predef.i8, @uvm.predef.i16,
// @uvm.predef.i32, @uvm.predef.i64, @uvm.predef.float, @uvm.predef.double,
// @uvm.predef.ref_void, @uvm.predef.ref_i32 and so on, but the choice seems too arbitrary.
```
All existing instructions that return `void` return this `NULL` value. In theory, the following snippet is valid, but stupid:
```c
%entry:
%x = BRANCH %bb1
%bb1:
RET <@void> %x // return void. Should have said RET <@void> @VOID
// or even "RET @VOID" omitting the type argument, because RET always returns the
// return type of the current function. ADD, SUB, MUL ... would have to infer the operand
// types if the operand type is not provided, but RET does not need to be inferred: the
// function return type is explicit.
```
**Remove the `RETVOID` instruction**: Use `RET <@void> @VOID` instead, or simply `RET @VOID`.
**Remove the `SWAPSTACK` clause `PASS_VOID`**: Use `PASS_VALUE <@void> @VOID` instead. Unlike `RET`, the type parameter here is necessary: the type the swappee expects is dynamic, and it may expect a different type at each `SWAPSTACK` site. Passing a value of the wrong type when swapping has undefined behaviour.
**Trap handlers no longer need a `PASS_VOID` return case**: Instead, pass the `NULL` constant.
## New ways to use `void`
In addition to the existing three uses (empty objects, the hybrid fixed part, and empty return values), `void` can now be used in the following ways:
- In `RET` to return from a function of `void` return type.
- In `SWAPSTACK` to swap to a stack that does not expect to receive a value (it receives the `NULL` value of the `void` type).
- In the trap handler, to rebind the thread to a stack that expects `void`.
They all fit into the category that "the other end of communication" does not pass a value.
## Things that should still be forbidden
**`void` must not be a parameter type**: I don't have a very compelling reason, but it is completely useless (it only increases the apparent arity of a function).
**`void` must not be part of a struct/array/vector or the variable part of a hybrid**: Disallowing this gains us a very nice property: each field/element in any struct/array/vector/varpart has a distinct offset. In `struct<@i32 void void void void @x>`, since `void` should have size 0 and alignment 1 (in the sense that `void` can be allocated at any address *a* such that *a* % 1 == 0), `void` does not occupy space. Then all of the `void` fields would be at the same offset as `@x`. Another reason: C does not allow `void` to be a struct field.
**Empty structs (`struct<>`) should be forbidden**: For the same reason as `void` fields. Just use `void` instead, since it is already special. C forbids empty structs, too, but GCC allows them.
# How about LLVM?
LLVM IR has two syntaxes for the `ret` instruction:
- `ret <type> <value>` for example: `ret i32 100`
- `ret void`, which returns `void`.
LLVM does not have a "void constant" either, since `void` is not a "first-class type".
In LLVM, only `void` and function types are not first-class types. (LLVM has both function types and pointer-to-function types.)
The LLVM LangRef does not say parameter types cannot be `void`, but `void` is never used as a parameter type. In C, `void` is an incomplete type, and thus cannot be a parameter type.
https://gitlab.anu.edu.au/mu/general-issue-tracker/-/issues/40
**Issue #40: Mu Client Interface as C Binding** (2015-08-21, John Zhang)
*Created by: wks*
The current API is expressed in a language-neutral form, and it is the implementation that decides how to implement such an interface. Programmers still need to resort to implementation-specific interfaces to actually use a particular Mu implementation.
Since C is so widely used as a systems programming language, the Mu client interface (a.k.a. the API) should be expressed as data types and function calls in the C programming language. Even if the client is not written in C, it usually still has a C FFI.
# The API in C
Resources in Mu are exposed as opaque types which have reference semantics: they can be copied and still refer to the same resource.
* `mu_micro_vm_t`: a reference to a Mu micro VM instance.
* `mu_client_agent_t`: a reference to a client agent.
* `mu_handle_t`: a handle to a value in the Mu type system exposed to the client.
Messages are C functions. Like JNI, they are contained in a struct: `typedef struct mu_api_msgs {...} mu_api_msgs_t;`. This way, a C client does not need to link against any library at compile time. The reason is that, for a Mu micro VM implemented in a higher-level language (like the reference implementation in Scala), the binding of the callable C functions is generated very late, even after loading time, and has no access to the native loader.
For example, assume there is a `mu_api_msgs_t* msgs` defined:
```c
mu_client_agent_t ca = ...
char buf[999999];
int sz;
// load file into buf
msgs->load_bundle(ca, buf, sz); // Load a bundle
// Putting C values into Mu
mu_handle_t h1 = msgs->put_schar(ca, 127);
mu_handle_t h2 = msgs->put_sshort(ca, 32767);
mu_handle_t h3 = msgs->put_sint(ca, 42);
mu_handle_t h4 = msgs->put_slong(ca, 42);
mu_handle_t h5 = msgs->put_slonglong(ca, 999999999999999);
// Converting Mu values to C
int v3 = msgs->to_sint(ca, h3);
unsigned long v4 = msgs->to_ulong(ca, h4); // just treat the int as unsigned
```
Mu-level flags are C preprocessor macros of type `int`.
```c
msgs->store(SEQ_CST, hLoc, hNewVal); // SEQ_CST is a macro
```
Callbacks, including the trap handler and undefined function handler, have defined signatures:
```c
typedef mu_trap_return_status_t (*mu_trap_handler_t)(
    mu_client_agent_t ca,
    mu_handle_t stack,
    mu_handle_t thread,
    int watchpoint_id,
    mu_handle_t *new_stack,
    mu_handle_t *data_passed,
    mu_handle_t *new_exception,
    mu_api_msgs_t *msgs,
    void *user_data);
typedef void (*mu_undefined_function_handler_t)(
    mu_micro_vm_t microvm,
    int function_id,
    mu_api_msgs_t *msgs,
    void *user_data);
```
These functions are registered via the `msgs->register_trap_handler` and `msgs->register_undefined_function_handler` API messages. In their parameters, the `user_data` is an arbitrary pointer provided by the client in an implementation-specific manner (see below).
# Implementation-defined behaviours
Some aspects of the C binding are implementation-specified. They include:
* How to create a Mu micro VM? Options are:
1. The C executable creates the Mu instance.
2. Mu loads the C dynamic library.
3. Mu starts separately and C connects to the existing instance in the same process.
4. C connects to a Mu instance in a different process, or a different machine.
* Options in creating Mu instances. Options are:
1. Heap size. Giving a heap size means the client determines the heap size rather than Mu automatically deciding its own storage.
2. Global data space size. Setting this value means the global data may have their own storage. Actual implementation could use the heap space, too.
3. Stack size. Similarly, this is too implementation-specific.
* What happens during initialisation?
1. Mu calls a C function to initialise the client, and the client provides a `void*` to Mu for the client's own context. (note: in this case, it is Mu loading C rather than C creating Mu.)
2. C creates a Mu instance, and sets its `void*` user data in a proprietary API message.
# Open questions
* Should we allow each Mu implementation to have its own "namespace"? The opaque types (`mu_micro_vm_t` and so on) are opaque, but different implementations may have different representations. The current C binding design forbids one C program from working with more than one Mu implementation (though it is okay to work with more than one *instance* of the same implementation).
* JNI does not solve this problem, either.

https://gitlab.anu.edu.au/mu/general-issue-tracker/-/issues/39
**Issue #39: Call-back from native to Mu** (2016-09-06, John Zhang)
*Created by: wks*
# Overview
## Rationale
Some existing C libraries or system interfaces use call-back functions, i.e. user-provided function pointers which are called by C or system libraries. Mu should provide appropriate mechanisms to interface with those libraries.
This is part of the (unsafe) native interface. See super issue: https://github.com/microvm/microvm-meta/issues/24
## Exposing appropriate Mu functions as C-style function pointers
"Appropriate" Mu functions must only use the following types as their parameter types or return types: `int<n>`, `float`, `double`, `vector<T>`, `ptr<T>` or `struct` types whose components are these types. In the case of `ptr<T>`, `T` can also be `array<T n>` or `hybrid<F V>` where `T`, `F` and `V` are one of the above types. In other words, (traced) references and Mu-specific opaque types are not allowed.
The Mu ABI will be designed to be compatible with the C calling convention as defined by the platform ABI.
**way 1**: (simple) Mu functions are declared with the optional `WITH_FP` clauses to create their associated C-style function pointers. For example:
```
.funcdecl @some_func WITH_FP(@fp_some_func DEFAULT @COOKIE) <@sig>
.funcdef @other_func VERSION @other_func_v1 WITH_FP(@fp_other_func DEFAULT @COOKIE) <@sig2> (%param0) {
...
}
```
With the above definitions, `@some_func` has type `func<@sig>`, which is a Mu function reference value. `@fp_some_func` has type `funcptr<@sig>`, which is a C-style function pointer. Similarly `@other_func` is a `func<@sig2>`, while `@fp_other_func` is a `funcptr<@sig2>`. `DEFAULT` is the calling convention. `@COOKIE` is a "cookie" (see *way 2* below).
The Mu IR program or the API can pass the function pointer to the native program. When called, the Mu function will run and return its return value to the native caller.
* pros:
1. simple
2. The native funcptr is immediately available after loading the Mu bundle.
* cons: does not support "closures" well. Some languages/implementations (e.g. LuaJIT) would like to expose closures (rather than just functions) to C as callbacks.
**way 2**: (complex) Mu functions are exposed with a run-time invocation of a Mu instruction or a Mu API message.
Format:
* Instruction: *fp* = `EXPOSE_MU_FUNC` `<` *sig* `>` *mufunc* *cookie*
* API: *fpHandle* = ca.exposeMuFunc( *hMuFunc*, *hCookie* )
The resulting *fp* has type `funcptr<sig>` and can be called from C. A function can be exposed multiple times, and the resulting function pointers are pairwise unequal. The *cookie* is an `int<64>` value associated with the resulting function pointer. If a Mu function is called through a particular function pointer, a special instruction `NATIVE_COOKIE` will return the associated *cookie* value.
Example:
```
%fp1 = EXPOSE_MU_FUNC <@sig> @some_func @some_int64_value
%fp2 = EXPOSE_MU_FUNC <@sig> @some_func @other_int64_value
...
UNEXPOSE_MU_FUNC %fp1
UNEXPOSE_MU_FUNC %fp2
// in @some_func
%cookie = NATIVE_COOKIE
%eq = EQ <@i64> %cookie @some_int64_value
...
```
```
val hFP = ca.exposeMuFunc(hFunc, hSomeInt64Value)
...
ca.unexposeMuFunc(hFP)
```
Both `%fp1` and `%fp2` have type `funcptr<@sig>`. But if the Mu function `@some_func` is called from C via `%fp1`, the `NATIVE_COOKIE` instruction will return `@some_int64_value`. If called via `%fp2`, `NATIVE_COOKIE` returns `@other_int64_value` instead.
* pro: the cookie can be used to identify different closures and look up the contexts of the closures.
* con:
1. Not as simple as way 1.
2. Exposing a Mu function requires a Mu instruction or an API message. This makes "implementing the Mu client API directly as exposed Mu functions" difficult. (In this case, exposing a Mu function requires an API function, which is also an exposed Mu function.)
## Contexts necessary for Mu functions to run
Even if a Mu function is exposed to the native program as a `funcptr<sig>`, some context must be set up before the Mu function can make use of Mu-specific features. This includes:
* **Thread-local garbage collection states**: including thread-local allocation pools, and registering the thread for yielding as requested by the GC.
* **Stack context**: Each Mu stack has an associated `stack` value (the opaque reference to the current stack). This is necessary for swap-stack.
Similar to the JNI's "attaching a native thread to the JVM", Mu will also require attaching Mu contexts to a native thread before any exposed Mu function pointers can be called.
If the native program is executed because some Mu program called the native function through the native interface (via `CCALL`), the context is already set up and the C program can safely call back to Mu.
## Mixed native/Mu stacks
With the possibility of both C-to-Mu and Mu-to-C calling, a stack may have mixed C or Mu frames. It has some implications for stack introspection and exception handling. Possible approaches are:
1. Stack introspection cannot go deeper than the last contiguous Mu frame from the top, i.e. introspection becomes unavailable as soon as a native frame is reached. Exceptions may not propagate into native frames. This approach makes the weakest promise from Mu, and is thus the easiest.
2. Mu can skip non-Mu frames and unwind to other Mu frames underneath.
3. Stack introspection and stack unwinding caused by exceptions can go through frames which are supported by the native debugger. This is harder than the previous one, but still practicable.
4. Support non-standard frames (such as JavaScript frames of SpiderMonkey or V8). Too hard.
https://gitlab.anu.edu.au/mu/general-issue-tracker/-/issues/38
**Issue #38: Dynamic loading for Java** (2016-08-11, John Zhang)
*Created by: eliotmoss*
So Adam and I have run into an interesting question about how to do dynamic loading for Java. The thing is, one does not know all the details of a class in advance. Therefore, it is hard to give things signatures. Consider, for example, the vtable. We need to have Mu types for all the classes mentioned in all the methods -- the vtable will be a struct of function pointers, each pointer specifically typed. But that would force eager loading of the entire universe to figure out the types!
The only alternative seems to be to refcast all over the place at run time. Is that the intent? (Coming from Java I had a (mistaken) bias that this involves a cost, but I see on referring to the spec that refcast does not involve any run-time work.)

https://gitlab.anu.edu.au/mu/general-issue-tracker/-/issues/37
**Issue #37: Memory model in native interface** (2016-06-17, John Zhang)
*Created by: wks*
# Problem
Currently the Mu memory is all about "memory locations" – a region that holds a Mu value, not directly related to addresses or bytes. The native memory is a sequence of bytes, addressed by integer "addresses". They are separate until a Mu memory location is pinned. In that case, the Mu memory location is mapped to a region of bytes in the address space. Accessing one will affect another.
Meanwhile, Mu's memory model is a C++11-style model based on the happens-before relation.
This poses a challenge: the memory model must bridge the Mu world and the native world. The native view of memory as a sequence of bytes should work nicely with the Mu memory, i.e. map to meaningful memory operations in the Mu world. Atomic actions should be consistent and may establish happens-before relations between the two worlds. Specifically:
* What is the unit of memory actions? Previously, it is "Mu memory location".
* If a "load" action is modeled as a tuple: `LOAD(order, type, location)`, and location was "Mu memory location", then what should location be now? Address? What value does it see? Some store? Or something else?
* If a "store" action is modeled as a tuple: `STORE(order, type, location, newvalue)`, and location was "Mu memory location", then what should location be now?
* If a Mu memory location is pinned, and is accessed in a different granularity than the type declared, what will be the result?
* If stored as a whole, but loaded in parts...
* If stored in parts, but loaded as a whole...
* But we cannot model the memory as a byte array which sequentially changes state. (or, can we? Since non-atomic conflicting accesses are meaningless, does this imply it must be changed sequentially, or errors occur?)
# The current model
* A non-atomic load sees the unique store operation that happens before it, such that no other store happens between that store and the load. If there is more than one such store, the behaviour is undefined.
* An atomic load sees the value from any store in its visible sequence of store operations.
* Mixing non-atomic and atomic operations on the same memory location has undefined behaviour.
# Possible directions
Either way, pure Mu programs should keep their original C++11-like semantics.
1. Make the memory model more machine-oriented and machine-specific.
* May give more dependable behaviours. For example, unaligned memory access is allowed on many architectures, but is not always atomic.
* Obviously this makes Mu less portable. All pointer-based memory access will have machine-specific semantics. But does this matter? This is the "native interface" anyway.
* Interoperability with the C++11 memory model for C/C++ programs will be built upon the machine-specific memory model.
2. Limit what operations are allowed in the native memory.
* Simpler model.
* Probably more undefined behaviours, because some behaviours cannot be defined if we try to keep the model simple and generic.
* Will limit the capability. e.g. unions won't be used by Mu.
3. Something in between
# Examples
The native program should synchronise with the Mu program via atomic memory accesses.
```c++
// C++ pseudo code
struct Foo {
int x;
int y;
};
Mu_thread_1 {
ref<Foo> f = new<Foo>
ptr<Foo> fp = pin(f);
create_thread(native_thread_2, fp);
store(&f->x, 10, NOT_ATOMIC); // Mu-level store
store(&f->y, 20, RELEASE); // Mu-level store
}
native_thread_2(ptr<Foo> fp) {
while(load(&fp->y, ACQUIRE) != 20) {} // Native load
int a = load(&fp->x, NOT_ATOMIC); // Native load
assert(a == 10);
}
```
For non-atomic memory accesses, partial reads/writes should be based on the byte representation of values (called the "object representation" in C11).
```c++
ref<i32> r = new<i32>;
store(r, 0x12345678); // Assume little endian
ptr<i32> p = pin(r);
i64 addr = ptrcast<i64>(p); // cast the pointer to the integer address
addr += 3;
ptr<i8> p2 = ptrcast<ptr<i8>>(addr); // cast back to pointer, but a different type
i8 value = load(p2);
assert(value == 0x12);
store(p2, 0x9a);
i32 value2 = load(r);
assert(value2 == 0x9a345678);
```
Unaligned 16-, 32- and 64-bit memory access is allowed in x64 (and P6-family guarantees atomicity if not crossing any cache line boundary).
```C++
struct Foo { i32 a; i32 b; };
ref<Foo> r = new<Foo>;
store(&r->a, 0x9abcdef0);
store(&r->b, 0x12345678);
ptr<Foo> p = pin(r);
ptr<i64> p2 = ptrcast<ptr<i64>>(p);
i64 value = load(p2);
assert(value == 0x123456789abcdef0);
```
Could non-atomic memory access mix with atomic counterparts?
```C++
struct Foo { i32 x; i8 y; double z; };
ref<Foo> r1 = new<Foo>;
ref<Foo> r2 = new<Foo>;
ptr<Foo> p1 = pin(r1);
ptr<Foo> p2 = pin(r2);
store(&p1->x, 0x12345678, NOT_ATOMIC);
store(&p1->y, 42, NOT_ATOMIC);
store(&p1->z, 3.1415927D, NOT_ATOMIC);
memcpy(p2, p1, sizeof(Foo)); // This is obviously not atomic
some_synchronization_operation_after_which_atomic_accesses_will_be_safe(); // What should this be?
thread1 {
store(&r2->y, 84, RELAXED); // This is atomic
store(&r2->x, 0x9abcdef0, RELEASE); // This is atomic
}
thread2 {
i32 a = load(&r2->x, ACQUIRE); // This is atomic
if (a == 0x9abcdef0) {
i8 b = load(&r2->y, RELAXED); // This is atomic
double c = load(&r2->z, RELAXED); // This is atomic
assert(b == 84 && c == 3.1415927D);
}
}
```
https://gitlab.anu.edu.au/mu/general-issue-tracker/-/issues/35
**Issue #35: Alternative LISP-like Mu IR format** (2015-06-22, John Zhang)
*Created by: wks*
Problem: Mu IR needs a parser, but constructing a parser is tedious. Parser generators pulls in additional dependencies.
Solution: Use a simplistic syntax based on LISP.
Example:
```scheme
(typedef @i32 int 32)
(typedef @float float)
(typedef @void void)
(typedef @refvoid ref @void)
(typedef @foo struct @i32 @i64 @float @double @refvoid)
(funcsig @f_sig @i32 (@i32 @i32))
(const @FORTY_TWO @i32 42)
(const @DOUBLE_FORTY_TWO @double 42.0d)
(const @SOME_STRUCT_CONST @some_struct @const1 @const2 @const3)
(const @NULLREF @refvoid NULL)
(global @errno @i32)
(funcdecl @write @write_sig)
(funcdef @write @write_v1 @write_sig (%p0 %p1 %p2)
(basic-block %entry
(inst %a (ADD @i32 %p0 %p1))
(inst %b (CALL @sig @callee (%arg1 %arg2 %arg3) (exc %nor %exc) (keepalive %v1 %v2 %v3)))
)
(basic-block %nor
(inst _ (SUB @i32 %p0 %p2)) ; unnamed instruction
(inst _ (BRANCH %exit))
)
(basic-block %exc
(inst _ (TRAP @void))
)
(basic-block %exit
(inst _ (@uvm.thread_exit)) ; COMMINST is no longer necessary because the syntax is already dynamic
)
)
```
**How would this benefit the Mu implementer?** The parser can be written by hand in very few lines of code. This is convenient for languages that have fewer capabilities (such as C, which does not handle complex type hierarchies easily).
**How would this benefit client implementers?** The code generator can be more strongly typed (building structured nested lists) rather than constructing arbitrary strings (using string formatting).
**Binary format?** There can be a simple and direct mapping between the text format and the binary format. For example, atoms can be encoded as hash codes, and a list can be encoded as a type, a length and a list of values. The Mu spec would no longer need to define a text format and a binary format separately.
Problems?
- Does not look like assembly.
- May be less readable than the current text format without aggressive pretty-printing.
- Extra validation should be performed by the parser. (Really? The Mu micro VM is not required to correct any errors. Any error is allowed to result in undefined behaviour.)
https://gitlab.anu.edu.au/mu/general-issue-tracker/-/issues/33
**Issue #33: Impossible states in full-state frame making (OSR)** (2015-06-18, John Zhang)
*Created by: wks*
There was a proposal about the OSR before, https://github.com/microvm/microvm-meta/issues/5
According to our discussion recently, we decided that:
1. the state of a frame is the PC and a set of local variable values.
2. It should only be possible to continue from designated "OSR points" or from the beginning of a basic block, rather than from an arbitrary point in the code. This reduces the compiler's burden in generating stack maps.
3. The client supplies an arbitrary subset of local variables and their values and
1. if a variable is supplied but is never used, it has no effect on the execution and is simply ignored.
2. if a variable is not supplied but is used later, it gives undefined behaviour.
But this leads to a problem: the supplied state may never be reproducible from normal execution. For example:
```
.funcsig @foo_sig = @i64 (@i64 @i64)
.funcdef @foo VERSION @foo_v1 <@foo_sig> (%a %b) {
%entry:
%x = MUL <@i64> %a %b
%y = ADD <@i64> %x @i64_0 // The rhs is the constant 0.
%trap = TRAP <@void>
// OSR continues here
CALL @print (%x)
CALL @print (%y)
RET <@i64> %y
}
```
Assume we perform an OSR and construct a frame which continues *after* the `%trap` instruction with local variable values: `%a = 6; %b = 9; %x = 42; %y = 54`. Then the value `%x = 42` is impossible.
But the code generator may consider "adding zero" a no-op and thus generate machine code that aliases the registers of `%x` and `%y`. For example:
```
foo:
push rbx ; save callee-saved register
mov rbx, rdi ; do multiplication. rbx holds the value of %x and also %y
imul rbx, rsi       ; (two-operand form; plain x86 MUL takes a single operand)
mov rdi, rbx ; prepare to call @print (%x)
call print
mov rdi, rbx ; prepare to call @print (%y)
call print
pop rbx
ret
```
Then it is impossible to create such a state as mentioned above. This implies that either
1. we require that such state construction must be possible and **require the code generator not to generate the code like above**, or
2. we further **restrict our API** on frame state construction.
https://gitlab.anu.edu.au/mu/general-issue-tracker/-/issues/32
**Issue #32: Better support for the tagref64 type** (2015-05-21, John Zhang)
*Created by: wks*
In dynamic languages, the `tagref64` type (or other future tagged reference type variants) will be used pervasively in the language runtime.
This issue summarises potential improvements on the support for such types.
# Tagged reference constant
The Mu IR currently does not have constants for `tagref64`, mainly because it may hold a reference, and non-NULL references cannot be constants. However, one possible use of the `tagref64` type is to store a NULL reference together with an `int<6>` tag. In this case, the tag determines the concrete thing being represented (undefined, nil, null, false, true, or other frequently used singleton objects). So it should be possible in Mu to create such a `tagref64` as a constant.
Proposed new syntax:
```
.const @name <@tagref64> = TR64FP @double_constant
.const @name <@tagref64> = TR64INT @int52_constant
.const @name <@tagref64> = TR64NULLTAG @int6_constant // The ref is NULL, the tag is @int6_constant
.const @double_constant <@double> = 3.14d
.const @int52_constant <@i52> = 0x123456789abcd
.const @int6_constant <@i6> = 30
```
# Tagged reference equality
Comparing floating point numbers bit by bit is not equivalent to IEEE754's definition of equality. However, when two `tagref64` values both hold integers or references+tags, the result is deterministic.
In dynamic languages, such comparisons can quickly determine whether two tagged references have the same type (identified by the tag part) and refer to the same object.
Proposed semantics of the `EQ` comparison between `tagref64` values:
The result of the `EQ` comparison instruction between `v1` and `v2` is 1 (true) if and only if any of the following is true:
* Both hold `double` values, and
  * neither is NaN and both have the same bit-wise representation, or
  * both are NaN and they happen to have the same bit-wise representation after being converted to `tagref64`.
* Both hold `int<52>` values and they are bit-wise equal.
* Both hold references, and
  * their references refer to the same object or both are NULL, and
  * their `int<6>` tags are bit-wise equal.
The `NE` instruction returns the opposite result of `EQ`.
> NOTE: `tagref64` uses the NaN space of `double`. Real NaN `double` values may lose their precise bit-wise representation when converted to `tagref64`. So comparing two `tagref64` values that both hold NaNs has an unspecified result.
*Alternative possibility*: Require Mu to canonicalise all NaNs to one unique bit-wise representation. In this way, all NaNs compare equal when comparing `tagref64` values bit by bit.
# Default values of `tagref64` types.
Currently the default value of `tagref64` (all zero bits; all newly-allocated memory, whether heap, stack or global, holds all zero bits) holds +0.0 as a `double` value. In this representation, every `tagref64` value holding a `double` is bit-wise equal to its real `double` representation, so converting a `tagref64` to `double` is trivial: just do a bitcast.
However, languages usually define the values of uninitialised variables/fields as null-like values: `undefined` in JS, `nil` in Lua, `null` in Java. There should be an option to make the all-zero bit pattern represent these null-like values.
There could be a flag to determine the zero value of a `tagref64` type. The proposed syntax is:
```
.typedef @tr64_with_fp_default = tagref64 <DEF_FP(3.14d)> // All 0s represents double value 3.14d
.typedef @tr64_with_ref_default = tagref64 <DEF_REF(0x5a)> // All 0s represents NULL ref with 0x5a as tag.
.typedef @tr64_with_int_default = tagref64 <DEF_INT(0x55aa55aa55aa5)> // All 0s represents integer 0x55aa55aa55aa5.
.typedef @tr64_as_current = tagref64 <DEF_FP(0.0d)> // All 0s represents double value 0.0d, the same as the current `tagref64`.
```
The kind of default is static metadata, and the garbage collector can identify it.
This can be implemented by applying an XOR mask on the value after encoding to `tagref64` and before decoding an existing `tagref64`.
https://gitlab.anu.edu.au/mu/general-issue-tracker/-/issues/31
**Issue #31: GC: Is "liveness" of objects really needs to be defined?** (2015-04-15, John Zhang)
*Created by: wks*
Currently the Mu spec defines "live object" as being reachable from roots.
An alternative definition is to define heap objects as *always live*, but implementations can "*cheat*". We define:
A memory location has a **lifetime**. An internal reference (iref) to a memory location is **valid** as long as the memory location's lifetime has not expired. Specifically,
* The lifetime of a memory location in the *heap* begins when the `NEW` or `NEWHYBRID` that allocates the heap object is executed. It **never expires**.
* The lifetime of a memory location in the *global memory* begins when the bundle that defines it is loaded. It never expires.
* The lifetime of a memory location on the *stack* begins when the `ALLOCA` or `ALLOCAHYBRID` that allocates the stack cell is executed. It expires when the function activation in which the stack cell is allocated is destroyed, by either returning, throwing an exception, or killing the stack.
If the Mu spec no longer defines liveness by the "root set" and transitive reachability, a Mu implementation must infer the reachability rules from other parts of the spec. I believe a carefully defined spec implies the same rules as explicitly defined reachability rules.
## Examples and corner cases
**When an object is unreachable from the roots** (previously defined as "dead"): Mu can reclaim the object. Since it cannot be reached, the client and Mu IR programs will never find out Mu is cheating about the lifetime of heap objects, which were defined as "forever". (You can kill an immortal if nobody can see him/her again.)
**Object pinning**: Since an address is exposed during the period of pinning, the GC must not collect the object; otherwise the native code would find that Mu is cheating. In other words, pinning keeps an object alive.
## Weak references and finalisers
**Weak reference**: We must change the meaning of "weak references" because we no longer define "reachable". We can define it as:
* At any time, Mu **may** atomically set the values of some weak references to NULL if "after doing so, no one can prove that their referred objects can otherwise be reached". (This is not formal at all. Maybe weak references are really meaningless.)
In this way, a Mu implementation that never clears weak references is a valid implementation, but an implementation that does clear them may legally do so.
**Finaliser**: It was not defined before because it is not guaranteed to be called. But in order to allow the Mu implementation to cheat, I define it as:
* Any object may have a "prevent-one-death" flag (set when a finalisable object is created).
* There is a queue maintained by Mu (the finalising queue). The client implements a finalising thread watching the queue.
* At any time, Mu **may** atomically remove the "prevent-one-death" flag of an object and put it in the queue mentioned above, provided that "the only way to get a reference to that object is via the queue". (This does not sound very formal, either.)
In this way, a Mu implementation that never calls any finaliser is a valid implementation, but an implementation that does call finalisers may legally do so.
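The flag-and-queue protocol above can be sketched in C (names and the fixed-size queue are illustrative assumptions, not the Mu API; a real implementation would also need synchronisation between Mu and the finalising thread):

```c
#include <stddef.h>

enum { QUEUE_CAP = 64 };

typedef struct {
    int prevent_one_death;  /* set when a finalisable object is created */
} object;

/* The finalising queue maintained by Mu. */
static object *queue[QUEUE_CAP];
static size_t head, tail;

/* Mu side: at any time, Mu MAY strip the "prevent-one-death" flag and
 * enqueue the object, provided the only way to reach it is via the queue. */
void mu_enqueue_for_finalisation(object *obj) {
    obj->prevent_one_death = 0;
    queue[tail++ % QUEUE_CAP] = obj;
}

/* Client side: the finalising thread watches the queue and runs the
 * finaliser for each object it dequeues. */
object *client_dequeue(void) {
    return (head < tail) ? queue[head++ % QUEUE_CAP] : NULL;
}
```

A Mu implementation that never calls `mu_enqueue_for_finalisation` is still valid under this definition; the client thread simply never sees any objects.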
https://gitlab.anu.edu.au/mu/general-issue-tracker/-/issues/29
Heap Allocation and Initialisation Language (HAIL) (2016-07-21, John Zhang)
*Created by: wks*
This proposal describes a language that allocates and initialises heap objects (and also global memory)
This proposal *does not* address *initialiser function*. It will be addressed in another issue.
# Rationale
A code bundle (or simply "bundle" in our current terminology) contains **types**, **function signatures**, **constants**, **global memory cells** and **functions**. This is insufficient for a standalone Mu IR program.
A typical program usually contains statically declared and **load-time initialised** **heap objects**, e.g. **strings**, **class objects** (`java.lang.Class`) and so on. A developer from the PyPy project has indicated that there can be a lot of statically declared heap objects. Currently those objects can be created and initialised in two ways:
1. The client allocates and initialises heap objects via the Mu Client API. This approach suffers from one particular shortcoming: performance. The API can only initialise one memory location (e.g. one element of an array, or one scalar field of a struct) per API call.
2. Include a particular function in each bundle which creates and initialises heap objects. This approach has both performance and complexity problems. This "function" must contain a full description of all heap objects: their types and the values of all (or some non-zero) fields, so the function can be huge. This information has to be encoded as Mu IR instructions and Mu IR constants, the compiler has to translate this **humongous** "initialiser function" into runnable form and then execute it to create the heap objects, and the function is executed only once. It is a waste of time and memory to compile such a one-shot function.
# Solution
The proposed solution is a compact file format that describes heap objects and initialises the memory.
Sample:
Assume we have a "traditional" Mu IR Bundle:
```
.typedef @i64 = int<64>
.typedef @i8 = int<8>
.typedef @i32 = int<32>
.typedef @double = double
.typedef @string = hybrid <@i64 @i8>
.typedef @void = void
.typedef @refstring = ref<@string>
.typedef @refvoid = ref<@void>
.typedef @ClassFoo = struct<@i64 @double @refstring>
.typedef @intarray = hybrid<@i64 @i32>
.global @HW <@refstring> // A global memory cell, initialised to NULL, which may hold a string reference later.
```
**After** loading the previous bundle, load this Heap Allocation and Initialisation Language (HAIL) file:
```
// HAIL file
.new $a <@i64> // A new object of just a number
.newhybrid $hw <@string> 12
.new $classFoo <@ClassFoo>
.new $x <@refvoid> // An object whose content is only a heap reference to void
.new $y <@refvoid> // ditto
.newhybrid $hugeArray <@intarray> 10000
.init $a = 42
.init $hw = {12, {'H', 'e', 'l', 'l', 'o', ' ', 'w', 'o', 'r', 'l', 'd'}}
.init $classFoo = {42, 42.0d, $hw} // Objects can directly refer to each other
.init $x = $y // Objects are first allocated and then initialised
.init $y = $x // So they can form circular references
.init @HW = $hw // @HW is a global cell declared in the previous "traditional" bundle. HAIL can initialise global cells in traditional bundles, too.
.init $hugeArray[5000] = 42 // Only initialise a particular element. Other elements are 0.
// NOTE: only $hw is retained because it is referenced by the global cell @HW. Other objects may immediately be garbage-collected (or not allocated at all if the Mu VM can "cheat")
```
## Structure
Heap objects allocated in this format are given names with a special sigil `$`; these names are local to the current file.
A Heap Allocation and Initialisation Language (HAIL) file contains many of the following **top-level definitions**:
**.new**: Allocate a scalar object in the heap. Has the form: `.new $name <@type>`
* `$name`: the local name of the object.
* `@type`: the type of the object.
**.newhybrid**: Allocate a hybrid object in the heap. Has the form: `.newhybrid $name <@type> length`
* `$name`, `@type`: same as ".new"
* `length`: the length of the var part
**.init**: Initialise a heap object or a global cell. Has the form: `.init name[sub1][sub2]... = val`
* `name`: The name of the heap object or global cell. In this format, heap objects use the special sigil (`$xxx`) while global cells use global names in the Mu IR (`@xxx`).
* `sub1`, `sub2`, ...: Subscripts: ways to navigate through structs, arrays and hybrids. Specifically, in a hybrid, the fixed part is index 0 and the var part is index 1.
* `val`: The value. It can be one of the following:
* Integer literals: 1, 24, -345, 0x456, 'H'
* FP literals: 1.0f, 3.14d, nanf, nand, +infd, -infd, bitsd(0x7ff0000000000001)
* Struct/array/hybrid literals: {elem0, elem1, elem2, ...}
* NULL
* other names (which can be other heap objects, Mu IR constants, global cells (as internal references) and functions (as function references)): `$hw` `@HW` `@main`
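As a sketch of how subscripts could compose under the rules above (hypothetical lines reusing the sample bundle's names; the exact grammar is still a proposal):

```
.init $classFoo[2] = $hw     // [2]: the third field (@refstring) of the struct @ClassFoo
.init $hw[1][0] = 'H'        // [1]: the var part of the hybrid; [0]: its first element
```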
## Comparing to API-based object allocation and initialisation
A HAIL file is a unit of delivery to the Mu VM. Only one API call is needed to load a whole HAIL file and it can allocate and initialise many objects.
"Loading a HAIL file" will be a new API message (or function).
# Performance Concerns
For better performance, this format should have a more compact binary form. Ideally the binary form can be very close to the in-memory representation of objects and require little more than copying data from the file to memory while handling data sizes, padding and alignment. It cannot be perfectly identical to the in-memory representation because Mu's object layout is platform-dependent.
# When to use the HAIL format
HAIL should be used when the client wishes to allocate many objects and bulk-initialise the memory. For example, when loading a Java .class file, a Mu IR bundle is loaded for the Java functions, and then a HAIL file is loaded to create/initialise the `Class` object, the virtual table, string literals and so on.
Another example: Assume there is a PyPy interpreter implemented on Mu IR. The executable PyPy interpreter is represented as Mu IR bundle, but a HAIL file can be used to initialise the interpreter **instance** and associated objects.
# When HAIL may not be ideal
If the Mu VM is metacircular, the client is written in Mu IR, and accessing the Mu memory from the client has no overhead. The HAIL format can still be implemented for compatibility reasons, but would have no performance advantage over direct memory accesses. For example, a metacircular Mu-based JVM can load a .class file and compile its methods to Mu IR, but the Class object can be created directly in Mu IR because the JVM client itself is in Mu IR. It does not need to serialise the sequence of object allocations and initialisations into HAIL before performing them.
https://gitlab.anu.edu.au/mu/general-issue-tracker/-/issues/28
Object Pinning (2017-05-02, John Zhang)
*Created by: wks*
This issue is part of https://github.com/microvm/microvm-meta/issues/24
# TL;DR
This proposal gives meaning to the "object pinning" operation.
The meaning is: The PIN operation takes a `ref<T>` or `iref<T>`, pins the object for the current thread, and returns a `ptr<T>` (pointer to `T`). This pointer **can be used to** access the memory location of the `iref` until all threads which have pinned the object have unpinned it using the UNPIN operation.
Note: This has very few implications for the Mu implementation. It only says the pointer can be *used* in the expected way; it does not say anything about the storage of the actual object. (The micro VM can cheat!)
## Operations
In the following two instructions, `R` can be either `ref` or `iref`.
* `PIN(%r: R<T>) -> ptr<T>`: Add the object referred by `%r` to the *pinning set* of the current thread, and return a pointer.
* `UNPIN(%r: R<T>) -> void`: Remove the object referred by `%r` from the *pinning set* of the current thread.
`PIN` and `UNPIN` do not pin any object if `%r` refers to a memory location not in any heap object. If `%r` is NULL, `PIN` returns a NULL pointer. If `%r` is an `iref` and refers to a stack cell or a global cell, `PIN` returns a pointer to it.
> NOTE: All memory locations in Mu, not just those in heap objects, are referred to by `iref`. In order to let native code work with the Mu memory, pointers always have to be generated. That is why `PIN` and `UNPIN` trivially work with non-heap memory locations as well. It may be impossible to know at compile time whether an `iref` refers to the heap. For example, there may be a function taking an `iref` as a parameter.
## The guarantees
The pointer returned by `PIN` has the following guarantees:
* The pointer is usable as long as the object pinned by `PIN` is in the *pinning set* of **any** thread.
* The pointer points to a region of addresses which can be used to access the memory location of the parameter of `PIN` (i.e. `%r`). Specifically:
+ The object layout conforms to the platform's Mu Application Binary Interface (yet to be defined).
+ The native code can perform LOAD, STORE, CMPXCHG, ATOMICRMW, FENCE operations on those locations and they shall conform to the Mu memory model. However, *which native instruction/operator/function performs which operation in the Mu memory model* is implementation defined.
> One memory order can be implemented in multiple different ways, e.g. on x86, SEQ_CST can be implemented as (load: MOV, store: XCHG), but also as (load: LOCK XADD(0), store: MOV). It is up to the implementation to guarantee that the Mu memory operations (Mu IR instructions) are compatible with their native counterparts (C11 `<stdatomic.h>` or C++11 `<atomic>`). For example, one particular implementation may let the `atomic_load(ptr, memory_order_xxxxxx)` function in glibc (but not `atomic<T>.load(xxxx)` in the libc++ provided by LLVM) perform the LOAD operation in the xxxxxx memory order of the Mu memory model.
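For instance, native code accessing a pinned location through the returned pointer would express its operations with C11 `<stdatomic.h>`; which C11 operation the implementation maps to which Mu memory order is implementation-defined, but a SEQ_CST read-modify-write on a pinned `int<64>` location might look like this (a sketch, not a binding mapping):

```c
#include <stdatomic.h>
#include <stdint.h>

/* Native-code side: 'p' is a raw pointer obtained from PIN. A SEQ_CST
 * fetch-and-add, which an implementation could declare compatible with
 * Mu's ATOMICRMW ADD instruction with SEQ_CST order. */
int64_t native_fetch_add(_Atomic int64_t *p, int64_t delta) {
    return atomic_fetch_add_explicit(p, delta, memory_order_seq_cst);
}
```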
## Issues about multi-threading
It is possible for two threads to pin the same object. For example, there are two threads T1 and T2 and object O. The execution appears like the following sequence:
1. T1: pin O
2. T2: pin O
3. T1: do something with O
4. T1: unpin O
5. T2: do something with O
6. T2: unpin O
By step 5, T1 has already performed an unpin operation. If an object could be pinned by one thread but unpinned by another, there would be a problem: if the object O is no longer pinned, it is an error for T2 to do anything with the pointer.
It is possible to require a thread to acquire a lock or perform reference counting before pinning/unpinning, but this would be inefficient because it inevitably involves expensive atomic operations, and one main reason for using the FFI is performance.
Therefore, we let each thread pin/unpin an object **locally**: `PIN` means pinning an object **for the current thread**. An object is pinned if and only if at least one thread is pinning it.
Implementation-wise, this can be done by keeping a thread-local buffer which records all objects the current thread is pinning. When GC happens, the marker looks at the thread-local buffers to find all objects pinned by any thread. In this way, mutators do not need atomic memory operations, but the GC needs to look at all threads.
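A minimal sketch of such a per-thread pin buffer (the fixed capacity and all names are assumptions; a real GC would walk every stopped thread's buffer at a safepoint):

```c
#include <stddef.h>

#define PIN_BUF_CAP 256

typedef struct {
    const void *objs[PIN_BUF_CAP];
    size_t len;
} pin_buffer;

/* Each mutator thread owns one buffer, so pin/unpin need no atomics. */
static _Thread_local pin_buffer my_pins;

void pin(const void *obj) {
    my_pins.objs[my_pins.len++] = obj;
}

void unpin(const void *obj) {
    for (size_t i = 0; i < my_pins.len; i++) {
        if (my_pins.objs[i] == obj) {
            /* swap-remove one matching entry */
            my_pins.objs[i] = my_pins.objs[--my_pins.len];
            return;
        }
    }
}

/* GC side: an object is pinned iff it appears in at least one thread's
 * buffer; the marker checks every thread's buffer for it. */
int buffer_pins(const pin_buffer *buf, const void *obj) {
    for (size_t i = 0; i < buf->len; i++)
        if (buf->objs[i] == obj) return 1;
    return 0;
}
```

The mutator's fast path touches only thread-local memory; the cost of the design is borne entirely by the collector, which matches the rationale above.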
This "thread-local pinning" mechanism cannot be implemented by the client if the `PIN` instruction in Mu is racy: giving the client access to the thread-local buffer would be no different from providing the thread-local `PIN` instruction itself. So this thread-local pinning mechanism does not violate the principle of *minimalism* of Mu: it cannot be implemented efficiently outside Mu.
-----------------------------CUT HERE. BELOW ARE LEGACY TEXTS-------------------------------
# Abstract
I propose defining two kinds of memory spaces: *real space* which models the memory used by C or native programs, and *imaginary space* for that of the µVM. *Object pinning* (or *realising*) is an operation that temporarily makes a memory location in the imaginary space real so that it can be accessed from C programs.
# Proposal
## Concepts
* **memory**: self-explanatory, but... I don't trust "common sense".
* **memory location**: a region of data storage. Holds values.
* **virtual memory space**: the abstraction provided by the OS and the architecture. It has the following properties:
+ At any moment, it is a mapping from addresses (a subset of integers) to byte values. (I don't like this property. For any multi-threaded program, different threads may not see the same value, and Albert Einstein does not like "the same time".)
+ It can be accessed (read/written/atomicRMW) in various granularities (sizes). The atomicity and visibility between threads follows a certain memory model (the one defined by the architecture, OS and related programming languages).
+ It may be shared between processes and threads. Thus it can be accessed by things not in the µVM.
* **real memory**: memory in which memory locations satisfy the following properties:
+ (Does not need to have "addresses", that is, a memory location can be a variable, not numerical value.)
+ Allows memory accesses (load/store/atomicRMW).
+ For every memory location L, there is a unique memory location L' in the virtual memory space. (This disallows replication.) This L' does not change during the lifetime of L. (This disallows moving.) Accesses to both locations are equivalent.
+ For any two memory locations L1 and L2, their corresponding memory locations in the virtual memory space do not overlap. That is, their accesses are independent. (This disallows aliasing.)
+ For an array in a real memory, its corresponding memory location in the virtual memory space is contiguous. (This disallows implementing arrays as multiple disjoint sub-arrays.)
* **imaginary memory**: memory in which memory locations satisfy the following properties:
+ (Does not need to have "addresses", that is, a memory location can be a variable, not numerical value.)
+ Allows memory accesses (load/store/atomicRMW).
NOTE: As can be seen, "real memory" is trivially "imaginary memory".
* **realising**: temporarily letting a memory location in an imaginary memory have the property of real memory. (This is colloquially called **object pinning**, but it is more than "not moving").
* `iref<T>` (**internal reference**): refer to a memory location in real or imaginary memory.
* `ptr<T>` (**pointer**): an address. May or may not correspond to a memory location in the real memory.
## In the µVM
* All memory in the µVM (heap, stack and global) is imaginary memory.
* Introduce the pointer type `ptr<T>`. It is just a raw address, but is typed.
* Introduce the `PTRCAST` instruction which can freely cast `ptr<T>` to or from `int<n>` if n is the appropriate size.
* `LOAD`, `STORE`, `CMPXCHG`, `ATOMICRMW` now work with both `iref<T>` and `ptr<T>`.
* The `CCALL` can call a C function.
+ Plan A: The callee can have type `int<n>`. It is just an integer address.
+ Plan B: Introduce a `c_func<sig>` type. It is castable to/from `int<n>`. NOTE: `func<sig>` refers to µVM functions.
## Pinning
* "Pinning a memory location" means "realising" it, granting it the property of real memory.
* Implicit pinning: Any `iref<T>` values used as arguments of `CCALL` are implicitly pinned during this call.
* Explicit pinning:
+ Plan A: Introduce `REALISE` and `UNREALISE` instructions that do what their names suggest. The `REALISE` instruction returns a `ptr<T>` value.
+ Plan B: `REALISE` and `UNREALISE` have counting semantics. An object is "unpinned" when its pin count drops to 0.
+ Plan C: (the tracing approach) Introduce a type `pinner_iref<T>` which actually holds an `iref<T>` (a [marked storage type](https://github.com/microvm/microvm-spec/wiki/type-system#types-and-type-constructors) of `iref`). `pinner_iref<T>` must be in the memory (not SSA, just like `weakref<T>` cannot be SSA variable). If such a reference is reachable, the referent is pinned. After pinning, the pointer can be obtained via a `GETPTR` instruction. (Plan C does not address replication and non-contiguous arrays)
## Open questions
1. Do we treat stacks and globals as "real" by default?
2. If stacks can move, how do we efficiently realise (pin) them?
3. Do we prevent non-contiguous arrays?
4. How do we implement temporary "un-replication"?
# Background: Inter-language interaction
Currently the only way for the µVM to interact with the "outside world" is via traps handled by the client. This interface is called the **µVM-client Interface** or **the API**.
For performance reasons, we should introduce a more direct and lower-level interface to the "outside world". This new interface is called the **foreign function interface** or **FFI**.
## Two worlds
**Imaginary memory**: In a world with advanced garbage collectors, the memory is managed by the GC.
* A high-level memory location (in an object or not; for example, if a VM implements movable stacks) may be moved from one address to another (addresses being in the operating system's or architecture's virtual address space).
* A high-level memory location can be replicated (a single high-level object/field corresponds to multiple system memory addresses). This may serve different purposes, for example, concurrent GC, security, etc.
* A high-level memory data structure may not have the same structure as the system-level memory. For example, a high-level array may be implemented as segments of (non-contiguous) arrays.
* Programs written in C can only access this kind of memory with the assistance of the memory manager (GC).
* Example: Java, µVM.
**Real memory**: In a world closely interacting with C, the GC is somewhat naive, or there is no GC at all.
* High-level memory locations (as seen by the programming languages (like C) or VMs (like CPython)) do not move and are not replicated. Each high-level memory location (in object or not) corresponds to exactly one OS/architecture-level address as long as it is not deallocated.
* Programs written in C can directly access the memory as long as it has a raw pointer to the memory location.
* Example:
* Any non-GC language: C, C++, Rust, ...
* Any language/impl that tightly interacts with C: CPython, Lua (partially)
## Examples
* The µVM uses "imaginary memory". It does not assume any low-level memory layout except some high-level rules.
* Java exclusively uses "imaginary memory". All Java memory accesses through JNI must go through handles. It is even a problem to expose an array to the C language: 1) the VM must support object pinning, and 2) the VM must implement arrays contiguously.
* CPython uses "real memory". C programs hold any Python objects by raw pointers. A C module can customise its own Python object layout to include its own private data.
* Lua uses "real memory". "Userdata" (a chunk of memory allocated by Lua but used by the user, like a managed "malloc") is a Lua object. `lua_touserdata` gets a raw pointer to such a chunk of memory and does not need pinning. `lua_topointer` gets a raw pointer to any Lua object (for debugging purposes).
* SpiderMonkey uses something hybrid. Its GC can move objects, but not within a "request" (a delimited region in C programs where GC "must not happen"). Within a "request" (probably everything in C that interacts with SpiderMonkey), the C program can use raw pointers to refer to JS objects, though their structures are opaque, and it is recommended to use `JSHandleValue` to mark them as GC roots.