// Copyright 2017 The Australian National University
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

//! # An Immix garbage collector implementation
//!
//! This crate implements a garbage collector for Zebu. We carefully designed
//! the interface so that the garbage collector is a standalone crate,
//! separate from the VM, and it should be easy to reuse outside the Zebu
//! project.
//!
//! The GC implements immix for small object allocation/reclamation, and
//! treadmill for large objects. It uses an object model with a 64-bit object
//! header before the start of the object. Allocation always returns an
//! ObjectReference pointing to the start of the object.
//!
//! The idea of the GC implementation is discussed in the paper: "Rust as a
//! language for high performance GC implementation" (ISMM'16).
//!
//! A user of this GC (Zebu or any other client) should do the following:
//!
//! 1. initialize the GC by calling gc_init()
//! 2. for each running mutator thread, call new_mutator() to create a mutator
//!    (and store it somewhere, e.g. in TLS), and call set_low_water_mark() to
//!    inform the GC so that when it conservatively scans the stack, it will
//!    not scan beyond the low water mark
//! 3. insert yieldpoint() occasionally in the code so that the GC can
//!    synchronise with the thread, or insert yieldpoint_slow() if the user
//!    decides to implement an inlined fastpath
//! 4. call alloc_fast() to ask for an allocation, or alloc_slow() if the
//!    user decides to implement an inlined fastpath
//! 5. the allocation may trigger a GC, and it is guaranteed to return a
//!    valid address
//! 6. call init_object() or init_hybrid() to initialize the object
//! 7. when the thread quits, call drop_mutator() to properly destroy a mutator.
//!
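//! A minimal sketch of these steps, using this crate's exported entry points
//! (illustrative only and not compiled; the config values, the
//! set_low_water_mark() argument, and the `encode` value are assumptions):
//!
//! ```ignore
//! gc_init(GCConfig {
//!     immix_tiny_size: 64 << 20,
//!     immix_normal_size: 256 << 20,
//!     lo_size: 256 << 20,
//!     n_gcthreads: 4,
//!     enable_gc: false
//! });
//! let mutator = new_mutator_ptr();                 // step 2
//! set_low_water_mark(stack_addr);                  // step 2 (argument assumed)
//! yieldpoint(mutator);                             // step 3
//! let obj = muentry_alloc_tiny(mutator, 16, 8);    // steps 4 and 5
//! muentry_init_tiny_object(mutator, obj, encode);  // step 6 (`encode` assumed)
//! drop_mutator(mutator);                           // step 7
//! ```
//!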
//! Other utility functions provided by the GC:
//!
//! * explicit control of the root set - add_to_root()/remove_root(): the GC
//!   treats stacks and registers as the default root set, but the client may
//!   explicitly add references as roots
//! * explicit control of object movement/liveness -
//!   pin_object()/unpin_object(): the GC will keep the object alive and in
//!   place (it does not move the object)
//! * capability of persisting the heap as a relocatable boot image -
//!   persist_heap(): the GC will traverse the heap from the given roots, and
//!   dump all reachable objects in a structured way so that the user can use
//!   the data structure to access every object and persist them in their own
//!   way
//!
//! Issues (going to be fixed in a major GC rewrite):
//!
//! * currently collection is disabled, due to bugs (and the fact that we are
//!   going to majorly change the GC)
//! * we are using a 64-bit header for each object; we will switch to a
//!   sidemap object model (Issue #12)
//! * we allocate the whole heap and initialize it all at once during startup;
//!   we should allow dynamic growth of the heap (Issue #56)
//! * pin/unpin operations are different from the Mu spec (Issue #33)
//! * we are using some utility C functions (heap/gc/clib_(architecture).c/.S)
//!   to help acquire some information for the GC, and those C functions do
//!   not return accurate results (Issue #21)

#[macro_use]
extern crate rodal;
extern crate mu_utils as utils;
#[macro_use]
extern crate lazy_static;
#[macro_use]
extern crate log;
extern crate aligned_alloc;
extern crate crossbeam;
extern crate stderrlog;
#[macro_use]
extern crate field_offset;

use common::objectdump;
use common::ptr::*;
use heap::freelist::*;
use heap::immix::*;
use utils::*;

use std::sync::atomic::Ordering;
use std::sync::Arc;
use std::sync::RwLock;

/// data structures for the GC and the user
pub mod common;

/// object model (metadata for objects managed by the GC)
/// this allows the user to know some GC semantics, and to implement the
/// allocation fastpath on their side
//  FIXME: this mod can be private (we expose it only because tests are using
//  it); we should consider moving those tests into the mod
pub mod objectmodel;
/// offset from an object reference to the header (in bytes; may be negative)
pub use objectmodel::OBJECT_HEADER_OFFSET;
/// object header size (in bytes)
pub use objectmodel::OBJECT_HEADER_SIZE;

/// the main part of the GC: heap structures (including collection, immix
/// space, and freelist space)
//  FIXME: this mod can be private (we expose it only because tests are using
//  it); we should consider moving those tests into the mod
pub mod heap;

/// whether this GC moves objects
/// (i.e. whether an object keeps a fixed address from allocation until it is
/// reclaimed)
pub const GC_MOVES_OBJECT: bool = false;

/// threshold for small objects: use the small object allocator (immix) for
/// objects smaller than this threshold, and the large object allocator
/// (freelist) otherwise
pub const LARGE_OBJECT_THRESHOLD: usize = BYTES_IN_LINE;
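
// A sketch (not part of the exposed API) of how a client might route an
// allocation by size using the threshold above; real clients are expected to
// inline this check in their generated code:
#[allow(dead_code)]
fn route_alloc_sketch(
    mutator: *mut Mutator,
    size: usize,
    align: usize
) -> ObjectReference {
    if size < LARGE_OBJECT_THRESHOLD {
        // smaller than the threshold: use the (normal) immix allocator
        muentry_alloc_normal(mutator, size, align)
    } else {
        // at or above the threshold: use the freelist (large object) allocator
        muentry_alloc_large(mutator, size, align)
    }
}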

/// the mutator that the user is supposed to keep in every mutator thread.
/// Most interface functions provided by the GC require a pointer to this
/// mutator.
pub use heap::*;

pub use objectmodel::*;

//  these two offsets help the user's compiler to generate inlined fastpath
//  code

/// offset to the immix allocator cursor from its pointer
//pub use heap::immix::CURSOR_OFFSET as ALLOCATOR_CURSOR_OFFSET;
/// offset to the immix allocator limit from its pointer
//pub use heap::immix::LIMIT_OFFSET as ALLOCATOR_LIMIT_OFFSET;
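
// A hypothetical sketch of the bump-pointer fastpath a client compiler could
// emit using these offsets (the field reads, the `align_up` helper, and the
// control flow are illustrative assumptions; note the offsets above are
// currently commented out):
//
//     let start = align_up(cursor, align);  // cursor read via ALLOCATOR_CURSOR_OFFSET
//     let end = start + size;
//     if end > limit {                      // limit read via ALLOCATOR_LIMIT_OFFSET
//         muentry_alloc_tiny_slow(mutator, size, align); // take the slowpath
//     } else {
//         cursor = end;                     // bump the cursor; `start` is the result
//     }
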
/// GC represents the context for the current running GC instance
struct GC {
    immix_tiny: Raw<ImmixSpace>,
    immix_normal: Raw<ImmixSpace>,
    lo: Raw<FreelistSpace>,
    roots: LinkedHashSet<ObjectReference>
}

lazy_static! {
    static ref MY_GC: RwLock<Option<GC>> = RwLock::new(None);
}

impl GC {
    pub fn is_heap_object(&self, addr: Address) -> bool {
        self.immix_tiny.addr_in_space(addr)
            || self.immix_normal.addr_in_space(addr)
            || self.lo.addr_in_space(addr)
    }
}

#[repr(C)]
#[derive(Copy, Clone)]
pub struct GCConfig {
    pub immix_tiny_size: ByteSize,
    pub immix_normal_size: ByteSize,
    pub lo_size: ByteSize,
    pub n_gcthreads: usize,
    pub enable_gc: bool
}

//  the implementation of this GC will be changed dramatically in the future,
//  but the exposed interface is likely to stay the same.
/// initializes the GC
#[no_mangle]
pub extern "C" fn gc_init(config: GCConfig) {
    trace!("Initializing GC...");
    // init object model - init this first, since spaces may use it
    objectmodel::init();

    // init spaces
    trace!("  initializing tiny immix space...");
    let immix_tiny =
        ImmixSpace::new(SpaceDescriptor::ImmixTiny, config.immix_tiny_size);
    trace!("  initializing normal immix space...");
    let immix_normal =
        ImmixSpace::new(SpaceDescriptor::ImmixNormal, config.immix_normal_size);
    trace!("  initializing large object space...");
    let lo = FreelistSpace::new(SpaceDescriptor::Freelist, config.lo_size);

    // init GC
    heap::gc::init(config.n_gcthreads);
    *MY_GC.write().unwrap() = Some(GC {
        immix_tiny,
        immix_normal,
        lo,
        roots: LinkedHashSet::new()
    });
    heap::gc::ENABLE_GC.store(config.enable_gc, Ordering::Relaxed);

    info!(
        "heap is {} bytes (immix_tiny: {} bytes, immix_normal: {} bytes, lo: {} bytes)",
        config.immix_tiny_size + config.immix_normal_size + config.lo_size,
        config.immix_tiny_size,
        config.immix_normal_size,
        config.lo_size
    );
    info!("{} gc threads", config.n_gcthreads);
    if !config.enable_gc {
        warn!("GC disabled (panic when a collection is triggered)");
    }
}

/// destroys current GC instance
#[no_mangle]
pub extern "C" fn gc_destroy() {
    debug!("cleanup for GC...");
    objectmodel::cleanup();
    let mut gc_lock = MY_GC.write().unwrap();

    if gc_lock.is_some() {
        {
            let gc = gc_lock.as_mut().unwrap();
            gc.immix_tiny.destroy();
            gc.immix_normal.destroy();
            gc.lo.destroy();
        }
        *gc_lock = None;
    } else {
        warn!(
            "GC has been cleaned up before (probably multiple Zebu instances \
             are running, and getting destroyed at the same time?)"
        );
    }
}

/// creates a mutator
#[no_mangle]
pub extern "C" fn new_mutator_ptr() -> *mut Mutator {
    let gc_lock = MY_GC.read().unwrap();
    let gc: &GC = gc_lock.as_ref().unwrap();

    let global = Arc::new(MutatorGlobal::new());
    let m: *mut Mutator = Box::into_raw(Box::new(Mutator::new(
        ImmixAllocator::new(gc.immix_tiny.clone()),
        ImmixAllocator::new(gc.immix_normal.clone()),
        FreelistAllocator::new(gc.lo.clone()),
        global
    )));

    // allocators have a back pointer to the mutator
    unsafe { (&mut *m) }.tiny.set_mutator(m);
    unsafe { (&mut *m) }.normal.set_mutator(m);
    unsafe { (&mut *m) }.lo.set_mutator(m);

    m
}

/// creates a mutator by value;
/// the caller needs to set the mutator back pointer for each allocator
/// manually
#[no_mangle]
pub extern "C" fn new_mutator() -> Mutator {
    let gc_lock = MY_GC.read().unwrap();
    let gc: &GC = gc_lock.as_ref().unwrap();

    let global = Arc::new(MutatorGlobal::new());
    Mutator::new(
        ImmixAllocator::new(gc.immix_tiny.clone()),
        ImmixAllocator::new(gc.immix_normal.clone()),
        FreelistAllocator::new(gc.lo.clone()),
        global
    )
}
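
// A hypothetical usage sketch for new_mutator(): the caller must place the
// mutator at a stable address and set the back pointers, mirroring what
// new_mutator_ptr() does internally:
//
//     let m: *mut Mutator = Box::into_raw(Box::new(new_mutator()));
//     unsafe { &mut *m }.tiny.set_mutator(m);
//     unsafe { &mut *m }.normal.set_mutator(m);
//     unsafe { &mut *m }.lo.set_mutator(m);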

/// destroys a mutator
/// Note that the user has to explicitly drop mutators they are no longer
/// using; otherwise the GC may not be able to stop all the mutators before a
/// collection, and will end up pending forever
#[no_mangle]
pub extern "C" fn drop_mutator(mutator: *mut Mutator) {
    unsafe { mutator.as_mut().unwrap() }.destroy();

    // rust will reclaim the boxed mutator
}

/// sets the low water mark for the current thread.
/// When the GC conservatively scans the stack for roots, it will not scan
/// beyond the low water mark
pub use heap::gc::set_low_water_mark;

/// adds an object reference to the root set
#[no_mangle]
pub extern "C" fn add_to_root(obj: ObjectReference) {
    let mut gc = MY_GC.write().unwrap();
    gc.as_mut().unwrap().roots.insert(obj);
}

/// removes an object reference from the root set
#[no_mangle]
pub extern "C" fn remove_root(obj: ObjectReference) {
    let mut gc = MY_GC.write().unwrap();
    gc.as_mut().unwrap().roots.remove(&obj);
}

/// pins an object so that it will not be moved or reclaimed
#[no_mangle]
pub extern "C" fn muentry_pin_object(obj: ObjectReference) -> Address {
    trace!("gc::src::lib::muentry_pin_object");
    add_to_root(obj);
    obj.to_address()
}

/// unpins an object so that it can be freely moved/reclaimed as a normal
/// object
#[no_mangle]
pub extern "C" fn muentry_unpin_object(obj: Address) {
    trace!("gc::src::lib::muentry_unpin_object");
    remove_root(unsafe { obj.to_object_reference() });
}

/// a regular check to see if the mutator should stop for synchronisation
#[no_mangle]
pub extern "C" fn yieldpoint(mutator: *mut Mutator) {
    unsafe { mutator.as_mut().unwrap() }.yieldpoint();
}

/// the slowpath for yieldpoint
/// For performance, we assume the user will implement an inlined fastpath;
/// we provide constants, offsets to fields, and this slowpath function for
/// that purpose (see the sketch after this function)
#[no_mangle]
#[inline(never)]
pub extern "C" fn yieldpoint_slow(mutator: *mut Mutator) {
    unsafe { mutator.as_mut().unwrap() }.yieldpoint_slow()
}
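
// A hypothetical sketch of the inlined fastpath described above (the
// `should_yield` check is an illustrative assumption; the actual field and
// its offset come from the constants this crate exports):
//
//     if should_yield {
//         yieldpoint_slow(mutator); // rarely taken; synchronises with the GC
//     }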

/// converts a raw mutator pointer to a mutable reference
/// (the caller must guarantee the pointer is valid)
#[inline(always)]
fn mutator_ref(m: *mut Mutator) -> &'static mut Mutator {
    unsafe { &mut *m }
}

/// allocates an object in the tiny immix space
#[no_mangle]
pub extern "C" fn muentry_alloc_tiny(
    mutator: *mut Mutator,
    size: usize,
    align: usize
) -> ObjectReference {
    let m = mutator_ref(mutator);
    trace!("gc::src::lib::muentry_alloc_tiny({}, {})", size, align);
    unsafe { m.tiny.alloc(size, align).to_object_reference() }
}

/// allocates an object in the normal immix space
#[no_mangle]
pub extern "C" fn muentry_alloc_normal(
    mutator: *mut Mutator,
    size: usize,
    align: usize
) -> ObjectReference {
    let m = mutator_ref(mutator);
    trace!("gc::src::lib::muentry_alloc_normal({}, {})", size, align);
    let res = m.normal.alloc(size, align);
    m.normal.post_alloc(res, size);
    unsafe { res.to_object_reference() }
}

/// allocates an object via the slowpath in the tiny immix space
#[no_mangle]
#[inline(never)]
pub extern "C" fn muentry_alloc_tiny_slow(
    mutator: *mut Mutator,
    size: usize,
    align: usize
) -> Address {
    let m = mutator_ref(mutator);
    trace!("gc::src::lib::muentry_alloc_tiny_slow({}, {})", size, align);
    m.tiny.alloc_slow(size, align)
}

/// allocates an object via the slowpath in the normal immix space
#[no_mangle]
#[inline(never)]
pub extern "C" fn muentry_alloc_normal_slow(
    mutator: *mut Mutator,
    size: usize,
    align: usize
) -> Address {
    let m = mutator_ref(mutator);
    trace!("gc::src::lib::muentry_alloc_normal_slow({}, {})", size, align);
    let res = m.normal.alloc_slow(size, align);
    m.normal.post_alloc(res, size);
    res
}

/// allocates an object in the freelist space (large object space)
#[no_mangle]
#[inline(never)]
pub extern "C" fn muentry_alloc_large(
    mutator: *mut Mutator,
    size: usize,
    align: usize
) -> ObjectReference {
    let m = mutator_ref(mutator);
    trace!("gc::src::lib::muentry_alloc_large({}, {})", size, align);
    let res = m.lo.alloc(size, align);
    unsafe { res.to_object_reference() }
}

/// initializes a fixed-size (tiny) object
#[no_mangle]
pub extern "C" fn muentry_init_tiny_object(
    mutator: *mut Mutator,
    obj: ObjectReference,
    encode: TinyObjectEncode
) {
    trace!("gc::src::lib::muentry_init_tiny_object");
    unsafe { &mut *mutator }
        .tiny
        .init_object(obj.to_address(), encode);
}

/// initializes a fixed-size (small) object
#[no_mangle]
pub extern "C" fn muentry_init_small_object(
    mutator: *mut Mutator,
    obj: ObjectReference,
    encode: SmallObjectEncode
) {
    trace!("gc::src::lib::muentry_init_small_object");
    unsafe { &mut *mutator }
        .normal
        .init_object(obj.to_address(), encode);
}

/// initializes a fixed-size (medium) object
#[no_mangle]
pub extern "C" fn muentry_init_medium_object(
    mutator: *mut Mutator,
    obj: ObjectReference,
    encode: MediumObjectEncode
) {
    trace!("gc::src::lib::muentry_init_medium_object");
    unsafe { &mut *mutator }
        .normal
        .init_object(obj.to_address(), encode);
}

/// initializes a large object
#[no_mangle]
pub extern "C" fn muentry_init_large_object(
    mutator: *mut Mutator,
    obj: ObjectReference,
    encode: LargeObjectEncode
) {
    trace!("gc::src::lib::muentry_init_large_object");
    unsafe { &mut *mutator }
        .lo
        .init_object(obj.to_address(), encode);
}

/// forces a GC to happen
/// (this is not a 'hint': the world will be stopped, and heap traversal will
/// start)
#[no_mangle]
pub extern "C" fn force_gc(mutator: *mut Mutator) {
    heap::gc::trigger_gc();
    yieldpoint(mutator);
}

/// traces reachable objects and records them as a data structure
/// so that the user can inspect the reachable heap and persist it in their
/// own way
#[no_mangle]
pub extern "C" fn persist_heap(roots: Vec<Address>) -> objectdump::HeapDump {
    objectdump::HeapDump::from_roots(roots)
}
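
// A hypothetical usage sketch for persist_heap() (illustrative; the shape of
// objectdump::HeapDump is defined in the common module):
//
//     let roots: Vec<Address> = vec![root_obj.to_address()];
//     let dump = persist_heap(roots);
//     // walk `dump` and serialize each reachable object in the client's own way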

// the following API functions may get removed in the future

#[no_mangle]
pub extern "C" fn get_space_immix_tiny() -> Raw<ImmixSpace> {
    let space_lock = MY_GC.read().unwrap();
    let space = space_lock.as_ref().unwrap();
    space.immix_tiny.clone()
}

#[no_mangle]
pub extern "C" fn get_space_immix_normal() -> Raw<ImmixSpace> {
    let space_lock = MY_GC.read().unwrap();
    let space = space_lock.as_ref().unwrap();
    space.immix_normal.clone()
}

#[no_mangle]
pub extern "C" fn get_space_freelist() -> Raw<FreelistSpace> {
    let space_lock = MY_GC.read().unwrap();
    let space = space_lock.as_ref().unwrap();
    space.lo.clone()
}

/// initializes stderr logging at trace verbosity
pub fn start_logging_trace() {
    match stderrlog::new().verbosity(4).init() {
        Ok(()) => info!("logger initialized"),
        Err(e) => error!(
            "failed to init logger, probably already initialized: {:?}",
            e
        )
    }
}