repo_name (string, 6 classes) | pr_number (int64, 512 to 78.9k) | pr_title (string, length 3-144) | pr_description (string, length 0-30.3k) | author (string, length 2-21) | date_created (timestamp[ns, tz=UTC]) | date_merged (timestamp[ns, tz=UTC]) | previous_commit (string, length 40) | pr_commit (string, length 40) | query (string, length 17-30.4k) | filepath (string, length 9-210) | before_content (string, length 0-112M) | after_content (string, length 0-112M) | label (int64, -1 to 1)
---|---|---|---|---|---|---|---|---|---|---|---|---|---
dotnet/runtime | 66,213 | [mono] Put WeakAttribute support under an ifdef | This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | lambdageek | 2022-03-04T19:50:51Z | 2022-03-08T14:08:40Z | 34880aaeb463cc17268b44aecb80dda0914cfa01 | 8342a08b8bf0ff2ffb64cd5992e2d60c8d6d63c1 | [mono] Put WeakAttribute support under an ifdef. This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | ./src/coreclr/pal/src/libunwind/src/mips/Ginit_remote.c | /* libunwind - a platform-independent unwind library
Copyright (C) 2008 CodeSourcery
This file is part of libunwind.
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. */
#include "init.h"
#include "unwind_i.h"
int
unw_init_remote (unw_cursor_t *cursor, unw_addr_space_t as, void *as_arg)
{
#ifdef UNW_LOCAL_ONLY
return -UNW_EINVAL;
#else /* !UNW_LOCAL_ONLY */
struct cursor *c = (struct cursor *) cursor;
if (!atomic_load(&tdep_init_done))
tdep_init ();
Debug (1, "(cursor=%p)\n", c);
c->dwarf.as = as;
c->dwarf.as_arg = as_arg;
return common_init (c, 0);
#endif /* !UNW_LOCAL_ONLY */
}
| /* libunwind - a platform-independent unwind library
Copyright (C) 2008 CodeSourcery
This file is part of libunwind.
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. */
#include "init.h"
#include "unwind_i.h"
int
unw_init_remote (unw_cursor_t *cursor, unw_addr_space_t as, void *as_arg)
{
#ifdef UNW_LOCAL_ONLY
return -UNW_EINVAL;
#else /* !UNW_LOCAL_ONLY */
struct cursor *c = (struct cursor *) cursor;
if (!atomic_load(&tdep_init_done))
tdep_init ();
Debug (1, "(cursor=%p)\n", c);
c->dwarf.as = as;
c->dwarf.as_arg = as_arg;
return common_init (c, 0);
#endif /* !UNW_LOCAL_ONLY */
}
| -1 |
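The row above describes a pattern worth spelling out: dead feature code is compiled out behind a preprocessor guard, while a size-0 symbol is still emitted so the on-disk AOT format is not perturbed. A minimal sketch of that pattern follows; the macro name `ENABLE_WEAK_ATTRIBUTE` and the `emit_symbol` helper are illustrative assumptions, not identifiers taken from the PR.

```c
#include <stdio.h>

/* Illustrative feature flag; left undefined here to mirror compiling
 * the dead support out by default. */
/* #define ENABLE_WEAK_ATTRIBUTE */

/* Hypothetical stand-in for the AOT compiler's symbol emitter. */
static void
emit_symbol (const char *name, unsigned size)
{
	printf ("emitting symbol %s (size %u)\n", name, size);
}

static void
emit_aot_tables (void)
{
#ifdef ENABLE_WEAK_ATTRIBUTE
	/* Feature enabled: emit a populated weak-field table. */
	emit_symbol ("weak_field_indexes", 128);
#else
	/* Feature disabled: still emit a size-0 symbol so readers of
	 * the AOT format see the same table layout either way. */
	emit_symbol ("weak_field_indexes", 0);
#endif
}

int
main (void)
{
	emit_aot_tables ();
	return 0;
}
```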
dotnet/runtime | 66,213 | [mono] Put WeakAttribute support under an ifdef | This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | lambdageek | 2022-03-04T19:50:51Z | 2022-03-08T14:08:40Z | 34880aaeb463cc17268b44aecb80dda0914cfa01 | 8342a08b8bf0ff2ffb64cd5992e2d60c8d6d63c1 | [mono] Put WeakAttribute support under an ifdef. This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | ./src/native/public/mono/metadata/image.h | /**
* \file
*/
#ifndef _MONONET_METADATA_IMAGE_H_
#define _MONONET_METADATA_IMAGE_H_
#include <mono/metadata/details/image-types.h>
MONO_BEGIN_DECLS
#define MONO_API_FUNCTION(ret,name,args) MONO_API ret name args;
#include <mono/metadata/details/image-functions.h>
#undef MONO_API_FUNCTION
mono_bool mono_has_pdb_checksum (char *raw_data, uint32_t raw_data_len);
MONO_END_DECLS
#endif
| /**
* \file
*/
#ifndef _MONONET_METADATA_IMAGE_H_
#define _MONONET_METADATA_IMAGE_H_
#include <mono/metadata/details/image-types.h>
MONO_BEGIN_DECLS
#define MONO_API_FUNCTION(ret,name,args) MONO_API ret name args;
#include <mono/metadata/details/image-functions.h>
#undef MONO_API_FUNCTION
mono_bool mono_has_pdb_checksum (char *raw_data, uint32_t raw_data_len);
MONO_END_DECLS
#endif
| -1 |
dotnet/runtime | 66,213 | [mono] Put WeakAttribute support under an ifdef | This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | lambdageek | 2022-03-04T19:50:51Z | 2022-03-08T14:08:40Z | 34880aaeb463cc17268b44aecb80dda0914cfa01 | 8342a08b8bf0ff2ffb64cd5992e2d60c8d6d63c1 | [mono] Put WeakAttribute support under an ifdef. This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | ./src/coreclr/pal/src/libunwind/src/x86_64/regname.c | /* libunwind - a platform-independent unwind library
Contributed by Max Asbock <[email protected]>
This file is part of libunwind.
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. */
#include "unwind_i.h"
static const char *regname[] =
{
"RAX",
"RDX",
"RCX",
"RBX",
"RSI",
"RDI",
"RBP",
"RSP",
"R8",
"R9",
"R10",
"R11",
"R12",
"R13",
"R14",
"R15",
"RIP",
};
const char *
unw_regname (unw_regnum_t reg)
{
if (reg < (unw_regnum_t) ARRAY_SIZE (regname))
return regname[reg];
else
return "???";
}
| /* libunwind - a platform-independent unwind library
Contributed by Max Asbock <[email protected]>
This file is part of libunwind.
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. */
#include "unwind_i.h"
static const char *regname[] =
{
"RAX",
"RDX",
"RCX",
"RBX",
"RSI",
"RDI",
"RBP",
"RSP",
"R8",
"R9",
"R10",
"R11",
"R12",
"R13",
"R14",
"R15",
"RIP",
};
const char *
unw_regname (unw_regnum_t reg)
{
if (reg < (unw_regnum_t) ARRAY_SIZE (regname))
return regname[reg];
else
return "???";
}
| -1 |
dotnet/runtime | 66,213 | [mono] Put WeakAttribute support under an ifdef | This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | lambdageek | 2022-03-04T19:50:51Z | 2022-03-08T14:08:40Z | 34880aaeb463cc17268b44aecb80dda0914cfa01 | 8342a08b8bf0ff2ffb64cd5992e2d60c8d6d63c1 | [mono] Put WeakAttribute support under an ifdef. This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | ./src/coreclr/pal/src/libunwind/src/loongarch64/Lcreate_addr_space.c | #define UNW_LOCAL_ONLY
#include <libunwind.h>
#if defined(UNW_LOCAL_ONLY) && !defined(UNW_REMOTE_ONLY)
#include "Gcreate_addr_space.c"
#endif
| #define UNW_LOCAL_ONLY
#include <libunwind.h>
#if defined(UNW_LOCAL_ONLY) && !defined(UNW_REMOTE_ONLY)
#include "Gcreate_addr_space.c"
#endif
| -1 |
dotnet/runtime | 66,213 | [mono] Put WeakAttribute support under an ifdef | This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | lambdageek | 2022-03-04T19:50:51Z | 2022-03-08T14:08:40Z | 34880aaeb463cc17268b44aecb80dda0914cfa01 | 8342a08b8bf0ff2ffb64cd5992e2d60c8d6d63c1 | [mono] Put WeakAttribute support under an ifdef. This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | ./src/coreclr/pal/src/libunwind/include/tdep-arm/jmpbuf.h | /* libunwind - a platform-independent unwind library
Copyright (C) 2008 CodeSourcery
This file is part of libunwind.
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. */
/* Use glibc's jump-buffer indices; NPTL peeks at SP: */
/* FIXME for ARM! */
#define JB_SP 4
#define JB_RP 5
#define JB_MASK_SAVED 6
#define JB_MASK 7
| /* libunwind - a platform-independent unwind library
Copyright (C) 2008 CodeSourcery
This file is part of libunwind.
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. */
/* Use glibc's jump-buffer indices; NPTL peeks at SP: */
/* FIXME for ARM! */
#define JB_SP 4
#define JB_RP 5
#define JB_MASK_SAVED 6
#define JB_MASK 7
| -1 |
dotnet/runtime | 66,213 | [mono] Put WeakAttribute support under an ifdef | This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | lambdageek | 2022-03-04T19:50:51Z | 2022-03-08T14:08:40Z | 34880aaeb463cc17268b44aecb80dda0914cfa01 | 8342a08b8bf0ff2ffb64cd5992e2d60c8d6d63c1 | [mono] Put WeakAttribute support under an ifdef. This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | ./src/mono/mono/metadata/boehm-gc.c | /**
* \file
* GC implementation using either the installed or included Boehm GC.
*
* Copyright 2001-2003 Ximian, Inc (http://www.ximian.com)
* Copyright 2004-2011 Novell, Inc (http://www.novell.com)
* Copyright 2011-2012 Xamarin, Inc (http://www.xamarin.com)
* Licensed under the MIT license. See LICENSE file in the project root for full license information.
*/
#include "config.h"
#include <string.h>
#define GC_I_HIDE_POINTERS
#include <mono/metadata/gc-internals.h>
#include <mono/metadata/mono-gc.h>
#include <mono/metadata/profiler-private.h>
#include <mono/metadata/class-internals.h>
#include <mono/metadata/method-builder.h>
#include <mono/metadata/method-builder-ilgen.h>
#include <mono/metadata/method-builder-ilgen-internals.h>
#include <mono/metadata/opcodes.h>
#include <mono/metadata/domain-internals.h>
#include <mono/metadata/metadata-internals.h>
#include <mono/metadata/marshal.h>
#include <mono/metadata/runtime.h>
#include <mono/metadata/handle.h>
#include <mono/metadata/sgen-toggleref.h>
#include <mono/metadata/w32handle.h>
#include <mono/metadata/abi-details.h>
#include <mono/utils/atomic.h>
#include <mono/utils/mono-logger-internals.h>
#include <mono/utils/mono-memory-model.h>
#include <mono/utils/mono-time.h>
#include <mono/utils/mono-threads.h>
#include <mono/utils/dtrace.h>
#include <mono/utils/gc_wrapper.h>
#include <mono/utils/mono-os-mutex.h>
#include <mono/utils/mono-counters.h>
#include <mono/utils/mono-compiler.h>
#include <mono/utils/unlocked.h>
#include <mono/metadata/icall-decl.h>
#if HAVE_BOEHM_GC
#if defined(HOST_DARWIN) && defined(HAVE_PTHREAD_GET_STACKADDR_NP)
void *pthread_get_stackaddr_np(pthread_t);
#endif
#define GC_NO_DESCRIPTOR ((gpointer)(0 | GC_DS_LENGTH))
/*Boehm max heap cannot be smaller than 16MB*/
#define MIN_BOEHM_MAX_HEAP_SIZE_IN_MB 16
#define MIN_BOEHM_MAX_HEAP_SIZE (MIN_BOEHM_MAX_HEAP_SIZE_IN_MB << 20)
static gboolean gc_initialized = FALSE;
static gboolean gc_dont_gc_env = FALSE;
static mono_mutex_t mono_gc_lock;
static GC_push_other_roots_proc default_push_other_roots;
static GHashTable *roots;
static void
mono_push_other_roots(void);
static void
register_test_toggleref_callback (void);
#define BOEHM_GC_BIT_FINALIZER_AWARE 1
static MonoGCFinalizerCallbacks fin_callbacks;
/* GC Handles */
static mono_mutex_t handle_section;
#define lock_handles(handles) mono_os_mutex_lock (&handle_section)
#define unlock_handles(handles) mono_os_mutex_unlock (&handle_section)
typedef struct {
guint32 *bitmap;
gpointer *entries;
guint32 size;
guint8 type;
guint slot_hint : 24; /* starting slot for search in bitmap */
/* 2^16 appdomains should be enough for everyone (though I know I'll regret this in 20 years) */
/* we alloc this only for weak refs, since we can get the domain directly in the other cases */
guint16 *domain_ids;
} HandleData;
#define EMPTY_HANDLE_DATA(type) {NULL, NULL, 0, (type), 0, NULL}
/* weak and weak-track arrays will be allocated in malloc memory
*/
static HandleData gc_handles [] = {
EMPTY_HANDLE_DATA (HANDLE_WEAK),
EMPTY_HANDLE_DATA (HANDLE_WEAK_TRACK),
EMPTY_HANDLE_DATA (HANDLE_NORMAL),
EMPTY_HANDLE_DATA (HANDLE_PINNED)
};
static void
mono_gc_warning (char *msg, GC_word arg)
{
mono_trace (G_LOG_LEVEL_WARNING, MONO_TRACE_GC, msg, (unsigned long)arg);
}
static void on_gc_notification (GC_EventType event);
// GC_word here to precisely match Boehm. Not size_t, not gsize.
static void on_gc_heap_resize (GC_word new_size);
void
mono_gc_base_init (void)
{
char *env;
if (gc_initialized)
return;
mono_counters_init ();
#ifndef HOST_WIN32
mono_w32handle_init ();
#endif
roots = g_hash_table_new (NULL, NULL);
default_push_other_roots = GC_get_push_other_roots ();
GC_set_push_other_roots (mono_push_other_roots);
#if !defined(HOST_ANDROID)
/* If GC_no_dls is set to true, GC_find_limit is not called. This causes a seg fault on Android. */
GC_set_no_dls (TRUE);
#endif
{
if ((env = g_getenv ("MONO_GC_DEBUG"))) {
char **opts = g_strsplit (env, ",", -1);
for (char **ptr = opts; ptr && *ptr; ptr ++) {
char *opt = *ptr;
if (!strcmp (opt, "do-not-finalize")) {
mono_do_not_finalize = 1;
} else if (!strcmp (opt, "log-finalizers")) {
mono_log_finalizers = 1;
}
}
g_free (env);
}
}
/* cache value rather than calling during collection since g_hasenv may take locks and can deadlock */
gc_dont_gc_env = g_hasenv ("GC_DONT_GC");
GC_init ();
GC_set_warn_proc (mono_gc_warning);
GC_set_finalize_on_demand (1);
GC_set_finalizer_notifier(mono_gc_finalize_notify);
GC_init_gcj_malloc (5, NULL);
GC_allow_register_threads ();
if ((env = g_getenv ("MONO_GC_PARAMS"))) {
char **ptr, **opts = g_strsplit (env, ",", -1);
for (ptr = opts; *ptr; ++ptr) {
char *opt = *ptr;
if (g_str_has_prefix (opt, "max-heap-size=")) {
size_t max_heap;
opt = strchr (opt, '=') + 1;
if (*opt && mono_gc_parse_environment_string_extract_number (opt, &max_heap)) {
if (max_heap < MIN_BOEHM_MAX_HEAP_SIZE) {
fprintf (stderr, "max-heap-size must be at least %dMb.\n", MIN_BOEHM_MAX_HEAP_SIZE_IN_MB);
exit (1);
}
GC_set_max_heap_size (max_heap);
} else {
fprintf (stderr, "max-heap-size must be an integer.\n");
exit (1);
}
continue;
} else if (g_str_has_prefix (opt, "toggleref-test")) {
register_test_toggleref_callback ();
continue;
} else {
/* Could be a parameter for sgen */
/*
fprintf (stderr, "MONO_GC_PARAMS must be a comma-delimited list of one or more of the following:\n");
fprintf (stderr, " max-heap-size=N (where N is an integer, possibly with a k, m or a g suffix)\n");
exit (1);
*/
}
}
g_free (env);
g_strfreev (opts);
}
mono_thread_callbacks_init ();
mono_thread_info_init (sizeof (MonoThreadInfo));
mono_os_mutex_init (&mono_gc_lock);
mono_os_mutex_init_recursive (&handle_section);
mono_thread_info_attach ();
GC_set_on_collection_event (on_gc_notification);
GC_set_on_heap_resize (on_gc_heap_resize);
gc_initialized = TRUE;
}
void
mono_gc_base_cleanup (void)
{
GC_set_finalizer_notifier (NULL);
}
void
mono_gc_init_icalls (void)
{
}
/**
* mono_gc_collect:
* \param generation GC generation identifier
*
* Perform a garbage collection for the given generation; higher numbers
* usually mean older objects. Collecting a high-numbered generation
* implies collecting also the lower-numbered generations.
* The maximum value for \p generation can be retrieved with a call to
* \c mono_gc_max_generation, so this function is usually called as:
*
* <code>mono_gc_collect (mono_gc_max_generation ());</code>
*/
void
mono_gc_collect (int generation)
{
#ifndef DISABLE_PERFCOUNTERS
mono_atomic_inc_i32 (&mono_perfcounters->gc_induced);
#endif
GC_gcollect ();
}
/**
* mono_gc_max_generation:
*
* Get the maximum generation number used by the current garbage
* collector. The value will be 0 for the Boehm collector, 1 or more
* for the generational collectors.
*
* Returns: the maximum generation number.
*/
int
mono_gc_max_generation (void)
{
return 0;
}
guint64
mono_gc_get_allocated_bytes_for_current_thread (void)
{
return 0;
}
/**
* mono_gc_get_generation:
* \param object a managed object
*
* Get the garbage collector's generation that \p object belongs to.
* Use this as a hint only.
*
* \returns a garbage collector generation number
*/
int
mono_gc_get_generation (MonoObject *object)
{
return 0;
}
/**
* mono_gc_collection_count:
* \param generation a GC generation number
*
* Get how many times a garbage collection has been performed
* for the given \p generation number.
*
* \returns the number of garbage collections
*/
int
mono_gc_collection_count (int generation)
{
return GC_get_gc_no ();
}
int64_t
mono_gc_get_generation_size (int generation)
{
return 0;
}
void
mono_stop_world (MonoThreadInfoFlags flags)
{
g_assert ("mono_stop_world is not supported in Boehm");
}
void
mono_restart_world (MonoThreadInfoFlags flags)
{
g_assert ("mono_restart_world is not supported in Boehm");
}
/**
* mono_gc_add_memory_pressure:
* \param value amount of bytes
*
* Adjust the garbage collector's view of how many bytes of memory
* are indirectly referenced by managed objects (for example unmanaged
* memory holding image or other binary data).
* This is a hint only to the garbage collector algorithm.
* Note that negative amounts of \p value will decrease the memory
* pressure.
*/
void
mono_gc_add_memory_pressure (gint64 value)
{
}
/**
* mono_gc_get_used_size:
*
* Get the approximate amount of memory used by managed objects.
*
* Returns: the amount of memory used in bytes
*/
int64_t
mono_gc_get_used_size (void)
{
return GC_get_heap_size () - GC_get_free_bytes ();
}
/**
* mono_gc_get_heap_size:
*
* Get the amount of memory used by the garbage collector.
*
* Returns: the size of the heap in bytes
*/
int64_t
mono_gc_get_heap_size (void)
{
return GC_get_heap_size ();
}
gboolean
mono_gc_is_gc_thread (void)
{
return GC_thread_is_registered ();
}
gpointer
mono_gc_thread_attach (MonoThreadInfo* info)
{
struct GC_stack_base sb;
int res;
/* TODO: use GC_get_stack_base instead of baseptr. */
sb.mem_base = info->stack_end;
res = GC_register_my_thread (&sb);
if (res == GC_UNIMPLEMENTED)
return NULL; /* Cannot happen with GC v7+. */
info->handle_stack = mono_handle_stack_alloc ();
return info;
}
void
mono_gc_thread_detach (MonoThreadInfo *p)
{
/* Detach without threads lock as Boehm
* will take its own lock internally. Note in
* on_gc_notification we take threads lock after
* Boehm already has its own lock. For consistency
* always take lock ordering of Boehm then threads.
*/
GC_unregister_my_thread ();
}
void
mono_gc_thread_detach_with_lock (MonoThreadInfo *p)
{
MonoNativeThreadId tid;
tid = mono_thread_info_get_tid (p);
if (p->runtime_thread)
mono_threads_add_joinable_thread ((gpointer)tid);
mono_handle_stack_free (p->handle_stack);
p->handle_stack = NULL;
}
gboolean
mono_gc_thread_in_critical_region (MonoThreadInfo *info)
{
return FALSE;
}
gboolean
mono_object_is_alive (MonoObject* o)
{
return GC_is_marked ((const void *)o);
}
int
mono_gc_walk_heap (int flags, MonoGCReferences callback, void *data)
{
return 1;
}
static gint64 gc_start_time;
static void
on_gc_notification (GC_EventType event)
{
MonoProfilerGCEvent e;
switch (event) {
case GC_EVENT_PRE_STOP_WORLD:
e = MONO_GC_EVENT_PRE_STOP_WORLD;
MONO_GC_WORLD_STOP_BEGIN ();
break;
case GC_EVENT_POST_STOP_WORLD:
e = MONO_GC_EVENT_POST_STOP_WORLD;
MONO_GC_WORLD_STOP_END ();
break;
case GC_EVENT_PRE_START_WORLD:
e = MONO_GC_EVENT_PRE_START_WORLD;
MONO_GC_WORLD_RESTART_BEGIN (1);
break;
case GC_EVENT_POST_START_WORLD:
e = MONO_GC_EVENT_POST_START_WORLD;
MONO_GC_WORLD_RESTART_END (1);
break;
case GC_EVENT_START:
e = MONO_GC_EVENT_START;
MONO_GC_BEGIN (1);
#ifndef DISABLE_PERFCOUNTERS
if (mono_perfcounters)
mono_atomic_inc_i32 (&mono_perfcounters->gc_collections0);
#endif
mono_atomic_inc_i32 (&mono_gc_stats.major_gc_count);
gc_start_time = mono_100ns_ticks ();
break;
case GC_EVENT_END:
e = MONO_GC_EVENT_END;
MONO_GC_END (1);
#if defined(ENABLE_DTRACE) && defined(__sun__)
/* This works around a dtrace -G problem on Solaris.
Limit its actual use to when the probe is enabled. */
if (MONO_GC_END_ENABLED ())
sleep(0);
#endif
#ifndef DISABLE_PERFCOUNTERS
if (mono_perfcounters) {
guint64 heap_size = GC_get_heap_size ();
guint64 used_size = heap_size - GC_get_free_bytes ();
/* FIXME: change these to mono_atomic_store_i64 () */
UnlockedWrite64 (&mono_perfcounters->gc_total_bytes, used_size);
UnlockedWrite64 (&mono_perfcounters->gc_committed_bytes, heap_size);
UnlockedWrite64 (&mono_perfcounters->gc_reserved_bytes, heap_size);
UnlockedWrite64 (&mono_perfcounters->gc_gen0size, heap_size);
}
#endif
UnlockedAdd64 (&mono_gc_stats.major_gc_time, mono_100ns_ticks () - gc_start_time);
mono_trace_message (MONO_TRACE_GC, "gc took %" G_GINT64_FORMAT " usecs", (mono_100ns_ticks () - gc_start_time) / 10);
break;
default:
break;
}
switch (event) {
case GC_EVENT_MARK_START:
case GC_EVENT_MARK_END:
case GC_EVENT_RECLAIM_START:
case GC_EVENT_RECLAIM_END:
break;
default:
MONO_PROFILER_RAISE (gc_event, (e, 0, TRUE));
break;
}
switch (event) {
case GC_EVENT_PRE_STOP_WORLD:
mono_thread_info_suspend_lock ();
MONO_PROFILER_RAISE (gc_event, (MONO_GC_EVENT_PRE_STOP_WORLD_LOCKED, 0, TRUE));
break;
case GC_EVENT_POST_START_WORLD:
mono_thread_info_suspend_unlock ();
MONO_PROFILER_RAISE (gc_event, (MONO_GC_EVENT_POST_START_WORLD_UNLOCKED, 0, TRUE));
break;
default:
break;
}
}
// GC_word here to precisely match Boehm. Not size_t, not gsize.
static void
on_gc_heap_resize (GC_word new_size)
{
guint64 heap_size = GC_get_heap_size ();
#ifndef DISABLE_PERFCOUNTERS
if (mono_perfcounters) {
/* FIXME: change these to mono_atomic_store_i64 () */
UnlockedWrite64 (&mono_perfcounters->gc_committed_bytes, heap_size);
UnlockedWrite64 (&mono_perfcounters->gc_reserved_bytes, heap_size);
UnlockedWrite64 (&mono_perfcounters->gc_gen0size, heap_size);
}
#endif
MONO_PROFILER_RAISE (gc_resize, (new_size));
}
typedef struct {
char *start;
char *end;
} RootData;
static gpointer
register_root (gpointer arg)
{
RootData* root_data = (RootData*)arg;
g_hash_table_insert (roots, root_data->start, root_data->end);
return NULL;
}
int
mono_gc_register_root (char *start, size_t size, void *descr, MonoGCRootSource source, void *key, const char *msg)
{
RootData root_data;
root_data.start = start;
/* Boehm root processing requires one byte past end of region to be scanned */
root_data.end = start + size + 1;
GC_call_with_alloc_lock (register_root, &root_data);
MONO_PROFILER_RAISE (gc_root_register, ((const mono_byte *) start, size, source, key, msg));
return TRUE;
}
int
mono_gc_register_root_wbarrier (char *start, size_t size, MonoGCDescriptor descr, MonoGCRootSource source, void *key, const char *msg)
{
return mono_gc_register_root (start, size, descr, source, key, msg);
}
static gpointer
deregister_root (gpointer arg)
{
gboolean removed = g_hash_table_remove (roots, arg);
g_assert (removed);
return NULL;
}
void
mono_gc_deregister_root (char* addr)
{
GC_call_with_alloc_lock (deregister_root, addr);
MONO_PROFILER_RAISE (gc_root_unregister, ((const mono_byte *) addr));
}
static void
push_root (gpointer key, gpointer value, gpointer user_data)
{
GC_push_all (key, value);
}
static void
push_handle_stack (HandleStack* stack)
{
HandleChunk *cur = stack->bottom;
HandleChunk *last = stack->top;
if (!cur)
return;
while (cur) {
if (cur->size > 0)
GC_push_all ((gpointer)&cur->elems[0], (char*)(cur->elems + cur->size) + 1);
if (cur == last)
break;
cur = cur->next;
}
}
static void
mono_push_other_roots (void)
{
g_hash_table_foreach (roots, push_root, NULL);
FOREACH_THREAD_EXCLUDE (info, MONO_THREAD_INFO_FLAGS_NO_GC) {
HandleStack* stack = info->handle_stack;
if (stack)
push_handle_stack (stack);
} FOREACH_THREAD_END
if (default_push_other_roots)
default_push_other_roots ();
}
static void
mono_gc_weak_link_add (void **link_addr, MonoObject *obj, gboolean track)
{
/* libgc requires that we use HIDE_POINTER... */
*link_addr = (void*)HIDE_POINTER (obj);
if (track)
GC_REGISTER_LONG_LINK (link_addr, obj);
else
GC_GENERAL_REGISTER_DISAPPEARING_LINK (link_addr, obj);
}
static void
mono_gc_weak_link_remove (void **link_addr, gboolean track)
{
if (track)
GC_unregister_long_link (link_addr);
else
GC_unregister_disappearing_link (link_addr);
*link_addr = NULL;
}
static gpointer
reveal_link (gpointer link_addr)
{
void **link_a = (void **)link_addr;
return REVEAL_POINTER (*link_a);
}
static MonoObject *
mono_gc_weak_link_get (void **link_addr)
{
MonoObject *obj = (MonoObject *)GC_call_with_alloc_lock (reveal_link, link_addr);
if (obj == (MonoObject *) -1)
return NULL;
return obj;
}
void*
mono_gc_make_descr_for_string (gsize *bitmap, int numbits)
{
return mono_gc_make_descr_from_bitmap (bitmap, numbits);
}
void*
mono_gc_make_descr_for_object (gsize *bitmap, int numbits, size_t obj_size)
{
return mono_gc_make_descr_from_bitmap (bitmap, numbits);
}
void*
mono_gc_make_descr_for_array (int vector, gsize *elem_bitmap, int numbits, size_t elem_size)
{
/* libgc has no usable support for arrays... */
return GC_NO_DESCRIPTOR;
}
void*
mono_gc_make_descr_from_bitmap (gsize *bitmap, int numbits)
{
/* It seems there are issues when the bitmap doesn't fit: play it safe */
if (numbits >= 30)
return GC_NO_DESCRIPTOR;
else
return (gpointer)GC_make_descriptor ((GC_bitmap)bitmap, numbits);
}
void*
mono_gc_make_vector_descr (void)
{
return NULL;
}
void*
mono_gc_make_root_descr_all_refs (int numbits)
{
return NULL;
}
MonoObject*
mono_gc_alloc_fixed (size_t size, void *descr, MonoGCRootSource source, void *key, const char *msg)
{
void *start = GC_MALLOC_UNCOLLECTABLE (size);
MONO_PROFILER_RAISE (gc_root_register, ((const mono_byte *) start, size, source, key, msg));
return (MonoObject*)start;
}
MonoObject*
mono_gc_alloc_fixed_no_descriptor (size_t size, MonoGCRootSource source, void *key, const char *msg)
{
return mono_gc_alloc_fixed (size, 0, source, key, msg);
}
void
mono_gc_free_fixed (void* addr)
{
MONO_PROFILER_RAISE (gc_root_unregister, ((const mono_byte *) addr));
GC_FREE (addr);
}
MonoObject*
mono_gc_alloc_obj (MonoVTable *vtable, size_t size)
{
MonoObject *obj;
if (!m_class_has_references (vtable->klass)) {
obj = (MonoObject *)GC_MALLOC_ATOMIC (size);
if (G_UNLIKELY (!obj))
return NULL;
obj->vtable = vtable;
obj->synchronisation = NULL;
memset (mono_object_get_data (obj), 0, size - MONO_ABI_SIZEOF (MonoObject));
} else if (vtable->gc_descr != GC_NO_DESCRIPTOR) {
obj = (MonoObject *)GC_GCJ_MALLOC (size, vtable);
if (G_UNLIKELY (!obj))
return NULL;
} else {
obj = (MonoObject *)GC_MALLOC (size);
if (G_UNLIKELY (!obj))
return NULL;
obj->vtable = vtable;
}
if (G_UNLIKELY (mono_profiler_allocations_enabled ()))
MONO_PROFILER_RAISE (gc_allocation, (obj));
return obj;
}
MonoArray*
mono_gc_alloc_pinned_vector (MonoVTable *vtable, size_t size, uintptr_t max_length)
{
return mono_gc_alloc_vector (vtable, size, max_length);
}
MonoArray*
mono_gc_alloc_vector (MonoVTable *vtable, size_t size, uintptr_t max_length)
{
MonoArray *obj;
if (!m_class_has_references (vtable->klass)) {
obj = (MonoArray *)GC_MALLOC_ATOMIC (size);
if (G_UNLIKELY (!obj))
return NULL;
obj->obj.vtable = vtable;
obj->obj.synchronisation = NULL;
memset (mono_object_get_data ((MonoObject*)obj), 0, size - MONO_ABI_SIZEOF (MonoObject));
} else if (vtable->gc_descr != GC_NO_DESCRIPTOR) {
obj = (MonoArray *)GC_GCJ_MALLOC (size, vtable);
if (G_UNLIKELY (!obj))
return NULL;
} else {
obj = (MonoArray *)GC_MALLOC (size);
if (G_UNLIKELY (!obj))
return NULL;
obj->obj.vtable = vtable;
}
obj->max_length = max_length;
if (G_UNLIKELY (mono_profiler_allocations_enabled ()))
MONO_PROFILER_RAISE (gc_allocation, (&obj->obj));
return obj;
}
MonoArray*
mono_gc_alloc_array (MonoVTable *vtable, size_t size, uintptr_t max_length, uintptr_t bounds_size)
{
MonoArray *obj;
if (!m_class_has_references (vtable->klass)) {
obj = (MonoArray *)GC_MALLOC_ATOMIC (size);
if (G_UNLIKELY (!obj))
return NULL;
obj->obj.vtable = vtable;
obj->obj.synchronisation = NULL;
memset (mono_object_get_data ((MonoObject*)obj), 0, size - MONO_ABI_SIZEOF (MonoObject));
} else if (vtable->gc_descr != GC_NO_DESCRIPTOR) {
obj = (MonoArray *)GC_GCJ_MALLOC (size, vtable);
if (G_UNLIKELY (!obj))
return NULL;
} else {
obj = (MonoArray *)GC_MALLOC (size);
if (G_UNLIKELY (!obj))
return NULL;
obj->obj.vtable = vtable;
}
obj->max_length = max_length;
if (bounds_size)
obj->bounds = (MonoArrayBounds *) ((char *) obj + size - bounds_size);
if (G_UNLIKELY (mono_profiler_allocations_enabled ()))
MONO_PROFILER_RAISE (gc_allocation, (&obj->obj));
return obj;
}
MonoString*
mono_gc_alloc_string (MonoVTable *vtable, size_t size, gint32 len)
{
MonoString *obj = (MonoString *)GC_MALLOC_ATOMIC (size);
if (G_UNLIKELY (!obj))
return NULL;
obj->object.vtable = vtable;
obj->object.synchronisation = NULL;
obj->length = len;
obj->chars [len] = 0;
if (G_UNLIKELY (mono_profiler_allocations_enabled ()))
MONO_PROFILER_RAISE (gc_allocation, (&obj->object));
return obj;
}
MonoObject*
mono_gc_alloc_mature (MonoVTable *vtable, size_t size)
{
return mono_gc_alloc_obj (vtable, size);
}
MonoObject*
mono_gc_alloc_pinned_obj (MonoVTable *vtable, size_t size)
{
return mono_gc_alloc_obj (vtable, size);
}
int
mono_gc_invoke_finalizers (void)
{
/* There is a bug in GC_invoke_finalizer () in versions <= 6.2alpha4:
* the 'mem_freed' variable is not initialized when there are no
* objects to finalize, which leads to strange behavior later on.
* The check is necessary to work around that bug.
*/
if (GC_should_invoke_finalizers ())
return GC_invoke_finalizers ();
return 0;
}
MonoBoolean
mono_gc_pending_finalizers (void)
{
return GC_should_invoke_finalizers ();
}
void
mono_gc_wbarrier_set_field_internal (MonoObject *obj, gpointer field_ptr, MonoObject* value)
{
*(void**)field_ptr = value;
}
void
mono_gc_wbarrier_set_arrayref_internal (MonoArray *arr, gpointer slot_ptr, MonoObject* value)
{
*(void**)slot_ptr = value;
}
void
mono_gc_wbarrier_arrayref_copy_internal (gpointer dest_ptr, gconstpointer src_ptr, int count)
{
mono_gc_memmove_aligned (dest_ptr, src_ptr, count * sizeof (gpointer));
}
void
mono_gc_wbarrier_generic_store_internal (void volatile* ptr, MonoObject* value)
{
*(void**)ptr = value;
}
void
mono_gc_wbarrier_generic_store_atomic_internal (gpointer ptr, MonoObject *value)
{
mono_atomic_store_ptr ((volatile gpointer *)ptr, value);
}
void
mono_gc_wbarrier_generic_nostore_internal (gpointer ptr)
{
}
void
mono_gc_wbarrier_value_copy_internal (gpointer dest, gconstpointer src, int count, MonoClass *klass)
{
mono_gc_memmove_atomic (dest, src, count * mono_class_value_size (klass, NULL));
}
void
mono_gc_wbarrier_object_copy_internal (MonoObject* obj, MonoObject *src)
{
/* do not copy the sync state */
mono_gc_memmove_aligned (mono_object_get_data (obj), (char*)src + MONO_ABI_SIZEOF (MonoObject),
m_class_get_instance_size (mono_object_class (obj)) - MONO_ABI_SIZEOF (MonoObject));
}
void
mono_gc_clear_domain (MonoDomain *domain)
{
}
void
mono_gc_suspend_finalizers (void)
{
}
int
mono_gc_get_suspend_signal (void)
{
return GC_get_suspend_signal ();
}
int
mono_gc_get_restart_signal (void)
{
return GC_get_thr_restart_signal ();
}
#if defined(USE_COMPILER_TLS) && defined(__linux__) && (defined(__i386__) || defined(__x86_64__))
// Look at history around late August 2019 if this is to be restored.
// The code was effectively dead, not merely deleted to avoid maintaining it.
#endif
gboolean
mono_gc_is_critical_method (MonoMethod *method)
{
return FALSE;
}
MonoMethod*
mono_gc_get_managed_allocator (MonoClass *klass, gboolean for_box, gboolean known_instance_size)
{
return NULL;
}
MonoMethod*
mono_gc_get_managed_array_allocator (MonoClass *klass)
{
return NULL;
}
MonoMethod*
mono_gc_get_managed_allocator_by_type (int atype, ManagedAllocatorVariant variant)
{
return NULL;
}
guint32
mono_gc_get_managed_allocator_types (void)
{
return 0;
}
MonoMethod*
mono_gc_get_write_barrier (void)
{
g_assert_not_reached ();
return NULL;
}
MonoMethod*
mono_gc_get_specific_write_barrier (gboolean is_concurrent)
{
g_assert_not_reached ();
return NULL;
}
int
mono_gc_get_aligned_size_for_allocator (int size)
{
return size;
}
const char *
mono_gc_get_gc_name (void)
{
return "boehm";
}
void*
mono_gc_invoke_with_gc_lock (MonoGCLockedCallbackFunc func, void *data)
{
return GC_call_with_alloc_lock (func, data);
}
char*
mono_gc_get_description (void)
{
return g_strdup (DEFAULT_GC_NAME);
}
void
mono_gc_set_desktop_mode (void)
{
GC_set_dont_expand (1);
}
gboolean
mono_gc_is_moving (void)
{
return FALSE;
}
gboolean
mono_gc_is_disabled (void)
{
if (GC_is_disabled () || gc_dont_gc_env)
return TRUE;
else
return FALSE;
}
void
mono_gc_wbarrier_range_copy (gpointer _dest, gconstpointer _src, int size)
{
g_assert_not_reached ();
}
MonoRangeCopyFunction
mono_gc_get_range_copy_func (void)
{
return &mono_gc_wbarrier_range_copy;
}
guint8*
mono_gc_get_card_table (int *shift_bits, gpointer *card_mask)
{
*shift_bits = 0;
*card_mask = 0;
//g_assert_not_reached ();
return NULL;
}
guint8*
mono_gc_get_target_card_table (int *shift_bits, target_mgreg_t *card_mask)
{
*shift_bits = 0;
*card_mask = 0;
return NULL;
}
gboolean
mono_gc_card_table_nursery_check (void)
{
g_assert_not_reached ();
return TRUE;
}
void*
mono_gc_get_nursery (int *shift_bits, size_t *size)
{
return NULL;
}
gboolean
mono_gc_precise_stack_mark_enabled (void)
{
return FALSE;
}
FILE *
mono_gc_get_logfile (void)
{
return NULL;
}
void
mono_gc_params_set (const char* options)
{
}
void
mono_gc_debug_set (const char* options)
{
}
void
mono_gc_conservatively_scan_area (void *start, void *end)
{
g_assert_not_reached ();
}
void *
mono_gc_scan_object (void *obj, void *gc_data)
{
g_assert_not_reached ();
return NULL;
}
gsize*
mono_gc_get_bitmap_for_descr (void *descr, int *numbits)
{
g_assert_not_reached ();
return NULL;
}
void
mono_gc_set_gc_callbacks (MonoGCCallbacks *callbacks)
{
}
void
mono_gc_set_stack_end (void *stack_end)
{
}
void GC_start_blocking ()
{
}
void GC_end_blocking ()
{
}
void
mono_gc_skip_thread_changing (gboolean skip)
{
/*
* Unlike SGen, Boehm doesn't respect our thread info flags. We need to
* inform Boehm manually to skip/not skip the current thread.
*/
if (skip)
GC_start_blocking ();
else
GC_end_blocking ();
}
void
mono_gc_skip_thread_changed (gboolean skip)
{
}
void
mono_gc_register_for_finalization (MonoObject *obj, MonoFinalizationProc user_data)
{
guint offset = 0;
#ifndef GC_DEBUG
/* This assertion is not valid when GC_DEBUG is defined */
g_assert (GC_base (obj) == (char*)obj - offset);
#endif
GC_REGISTER_FINALIZER_NO_ORDER ((char*)obj - offset, user_data, GUINT_TO_POINTER (offset), NULL, NULL);
}
#ifndef HOST_WIN32
int
mono_gc_pthread_create (pthread_t *new_thread, const pthread_attr_t *attr, void *(*start_routine)(void *), void *arg)
{
/* it is being replaced by GC_pthread_create on some
* platforms, see libgc/include/gc_pthread_redirects.h */
return pthread_create (new_thread, attr, start_routine, arg);
}
#endif
#ifdef HOST_WIN32
BOOL APIENTRY mono_gc_dllmain (HMODULE module_handle, DWORD reason, LPVOID reserved)
{
#ifdef GC_INSIDE_DLL
return GC_DllMain (module_handle, reason, reserved);
#else
return TRUE;
#endif
}
#endif
MonoVTable *
mono_gc_get_vtable (MonoObject *obj)
{
// No pointer tagging.
return obj->vtable;
}
guint
mono_gc_get_vtable_bits (MonoClass *klass)
{
if (fin_callbacks.is_class_finalization_aware) {
if (fin_callbacks.is_class_finalization_aware (klass))
return BOEHM_GC_BIT_FINALIZER_AWARE;
}
return 0;
}
/*
* mono_gc_register_altstack:
*
* Register the dimensions of the normal stack and altstack with the collector.
* Currently, STACK/STACK_SIZE is only used when the thread is suspended while it is on an altstack.
*/
void
mono_gc_register_altstack (gpointer stack, gint32 stack_size, gpointer altstack, gint32 altstack_size)
{
GC_register_altstack (stack, stack_size, altstack, altstack_size);
}
int
mono_gc_get_los_limit (void)
{
return G_MAXINT;
}
void
mono_gc_set_string_length (MonoString *str, gint32 new_length)
{
mono_unichar2 *new_end = str->chars + new_length;
/* zero the discarded string. This null-delimits the string and allows
* the space to be reclaimed by SGen. */
memset (new_end, 0, (str->length - new_length + 1) * sizeof (mono_unichar2));
str->length = new_length;
}
gboolean
mono_gc_user_markers_supported (void)
{
return FALSE;
}
void *
mono_gc_make_root_descr_user (MonoGCRootMarkFunc marker)
{
g_assert_not_reached ();
return NULL;
}
/* Toggleref support */
void
mono_gc_toggleref_add (MonoObject *object, mono_bool strong_ref)
{
if (GC_toggleref_add ((GC_PTR)object, (int)strong_ref) != GC_SUCCESS)
g_error ("GC_toggleref_add failed\n");
}
void
mono_gc_toggleref_register_callback (MonoToggleRefStatus (*proccess_toggleref) (MonoObject *obj))
{
GC_set_toggleref_func ((GC_ToggleRefStatus (*) (GC_PTR obj)) proccess_toggleref);
}
/* Test support code */
static MonoToggleRefStatus
test_toggleref_callback (MonoObject *obj)
{
MonoToggleRefStatus status = MONO_TOGGLE_REF_DROP;
MONO_STATIC_POINTER_INIT (MonoClassField, mono_toggleref_test_field)
mono_toggleref_test_field = mono_class_get_field_from_name_full (mono_object_class (obj), "__test", NULL);
g_assert (mono_toggleref_test_field);
MONO_STATIC_POINTER_INIT_END (MonoClassField*, mono_toggleref_test_field)
mono_field_get_value_internal (obj, mono_toggleref_test_field, &status);
printf ("toggleref-cb obj %d\n", status);
return status;
}
static void
register_test_toggleref_callback (void)
{
mono_gc_toggleref_register_callback (test_toggleref_callback);
}
static gboolean
is_finalization_aware (MonoObject *obj)
{
MonoVTable *vt = obj->vtable;
return (vt->gc_bits & BOEHM_GC_BIT_FINALIZER_AWARE) == BOEHM_GC_BIT_FINALIZER_AWARE;
}
static void
fin_notifier (MonoObject *obj)
{
if (is_finalization_aware (obj))
fin_callbacks.object_queued_for_finalization (obj);
}
void
mono_gc_register_finalizer_callbacks (MonoGCFinalizerCallbacks *callbacks)
{
if (callbacks->version != MONO_GC_FINALIZER_EXTENSION_VERSION)
g_error ("Invalid finalizer callback version. Expected %d but got %d\n", MONO_GC_FINALIZER_EXTENSION_VERSION, callbacks->version);
fin_callbacks = *callbacks;
GC_set_await_finalize_proc ((void (*) (GC_PTR))fin_notifier);
}
#define BITMAP_SIZE (sizeof (*((HandleData *)NULL)->bitmap) * CHAR_BIT)
static gboolean
slot_occupied (HandleData *handles, guint slot) {
return handles->bitmap [slot / BITMAP_SIZE] & (1 << (slot % BITMAP_SIZE));
}
static void
vacate_slot (HandleData *handles, guint slot) {
handles->bitmap [slot / BITMAP_SIZE] &= ~(1 << (slot % BITMAP_SIZE));
}
static void
occupy_slot (HandleData *handles, guint slot) {
handles->bitmap [slot / BITMAP_SIZE] |= 1 << (slot % BITMAP_SIZE);
}
static int
find_first_unset (guint32 bitmap)
{
int i;
for (i = 0; i < 32; ++i) {
if (!(bitmap & (1 << i)))
return i;
}
return -1;
}
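/* Worked example (illustrative, not in the original file): with
 * BITMAP_SIZE == 32, handle slot 37 lives in bitmap [37 / 32] == bitmap [1]
 * at bit 37 % 32 == 5; alloc_handle () below recombines the pieces as
 * slot = slot_hint * BITMAP_SIZE + find_first_unset (bitmap [slot_hint]). */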
static void
handle_data_alloc_entries (HandleData *handles)
{
handles->size = 32;
if (MONO_GC_HANDLE_TYPE_IS_WEAK (handles->type)) {
handles->entries = (void **)g_malloc0 (sizeof (*handles->entries) * handles->size);
handles->domain_ids = (guint16 *)g_malloc0 (sizeof (*handles->domain_ids) * handles->size);
} else {
handles->entries = (void **)mono_gc_alloc_fixed (sizeof (*handles->entries) * handles->size, NULL, MONO_ROOT_SOURCE_GC_HANDLE, NULL, "GC Handle Table (Boehm)");
}
handles->bitmap = (guint32 *)g_malloc0 (handles->size / CHAR_BIT);
}
static gint
handle_data_next_unset (HandleData *handles)
{
gint slot;
for (slot = handles->slot_hint; slot < handles->size / BITMAP_SIZE; ++slot) {
if (handles->bitmap [slot] == 0xffffffff)
continue;
handles->slot_hint = slot;
return find_first_unset (handles->bitmap [slot]);
}
return -1;
}
static gint
handle_data_first_unset (HandleData *handles)
{
gint slot;
for (slot = 0; slot < handles->slot_hint; ++slot) {
if (handles->bitmap [slot] == 0xffffffff)
continue;
handles->slot_hint = slot;
return find_first_unset (handles->bitmap [slot]);
}
return -1;
}
/* Doubles the handle table: grows the bitmap and moves (or, for weak handles, re-registers) the entries. */
static void
handle_data_grow (HandleData *handles, gboolean track)
{
guint32 *new_bitmap;
guint32 new_size = handles->size * 2; /* always double: we memset to 0 based on this below */
/* resize and copy the bitmap */
new_bitmap = (guint32 *)g_malloc0 (new_size / CHAR_BIT);
memcpy (new_bitmap, handles->bitmap, handles->size / CHAR_BIT);
g_free (handles->bitmap);
handles->bitmap = new_bitmap;
/* resize and copy the entries */
if (MONO_GC_HANDLE_TYPE_IS_WEAK (handles->type)) {
gpointer *entries;
guint16 *domain_ids;
gint i;
domain_ids = (guint16 *)g_malloc0 (sizeof (*handles->domain_ids) * new_size);
entries = (void **)g_malloc0 (sizeof (*handles->entries) * new_size);
memcpy (domain_ids, handles->domain_ids, sizeof (*handles->domain_ids) * handles->size);
for (i = 0; i < handles->size; ++i) {
MonoObject *obj = mono_gc_weak_link_get (&(handles->entries [i]));
if (obj) {
mono_gc_weak_link_add (&(entries [i]), obj, track);
mono_gc_weak_link_remove (&(handles->entries [i]), track);
} else {
g_assert (!handles->entries [i]);
}
}
g_free (handles->entries);
g_free (handles->domain_ids);
handles->entries = entries;
handles->domain_ids = domain_ids;
} else {
gpointer *entries;
entries = (void **)mono_gc_alloc_fixed (sizeof (*handles->entries) * new_size, NULL, MONO_ROOT_SOURCE_GC_HANDLE, NULL, "GC Handle Table (Boehm)");
mono_gc_memmove_aligned (entries, handles->entries, sizeof (*handles->entries) * handles->size);
mono_gc_free_fixed (handles->entries);
handles->entries = entries;
}
handles->slot_hint = handles->size / BITMAP_SIZE;
handles->size = new_size;
}
static guint32
alloc_handle (HandleData *handles, MonoObject *obj, gboolean track)
{
gint slot, i;
guint32 res;
lock_handles (handles);
if (!handles->size)
handle_data_alloc_entries (handles);
i = handle_data_next_unset (handles);
if (i == -1 && handles->slot_hint != 0)
i = handle_data_first_unset (handles);
if (i == -1) {
handle_data_grow (handles, track);
i = 0;
}
slot = handles->slot_hint * BITMAP_SIZE + i;
occupy_slot (handles, slot);
handles->entries [slot] = NULL;
if (MONO_GC_HANDLE_TYPE_IS_WEAK (handles->type)) {
/*FIXME, what to use when obj == null?*/
handles->domain_ids [slot] = (obj ? mono_object_get_domain_internal (obj) : mono_domain_get ())->domain_id;
if (obj)
mono_gc_weak_link_add (&(handles->entries [slot]), obj, track);
} else {
handles->entries [slot] = obj;
}
#ifndef DISABLE_PERFCOUNTERS
mono_atomic_inc_i32 (&mono_perfcounters->gc_num_handles);
#endif
unlock_handles (handles);
res = MONO_GC_HANDLE (slot, handles->type);
MONO_PROFILER_RAISE (gc_handle_created, (res, (MonoGCHandleType)handles->type, obj));
return res;
}
/**
* mono_gchandle_new_internal:
* \param obj managed object to get a handle for
* \param pinned whether the object should be pinned
*
* This returns a handle that wraps the object; it is used to keep a
* reference to a managed object from the unmanaged world and to prevent the
* object from being disposed.
*
* If \p pinned is false, the address of the object cannot be obtained; if it is
* true, the address of the object can be obtained. This will also pin the
* object so it will not be possible for a moving garbage collector to move the
* object.
*
* \returns a handle that can be used to access the object from
* unmanaged code.
*/
MonoGCHandle
mono_gchandle_new_internal (MonoObject *obj, gboolean pinned)
{
return MONO_GC_HANDLE_FROM_UINT(alloc_handle (&gc_handles [pinned? HANDLE_PINNED: HANDLE_NORMAL], obj, FALSE));
}
/**
* mono_gchandle_new_weakref_internal:
* \param obj managed object to get a handle for
* \param track_resurrection Determines how long to track the object: if set to TRUE, the object is tracked after finalization; if FALSE, the object is only tracked up until the point of finalization.
*
* This returns a weak handle that wraps the object; it is used to
* keep a reference to a managed object from the unmanaged world.
* Unlike \c mono_gchandle_new_internal, the object can be reclaimed by the
* garbage collector. In this case the value of the GCHandle will be
* set to zero.
*
* If \p track_resurrection is TRUE the object will be tracked through
* finalization and if the object is resurrected during the execution
* of the finalizer, then the returned weakref will continue to hold
* a reference to the object. If \p track_resurrection is FALSE, then
* the weak reference's target will become NULL as soon as the object
* is passed on to the finalizer.
*
* \returns a handle that can be used to access the object from
* unmanaged code.
*/
MonoGCHandle
mono_gchandle_new_weakref_internal (MonoObject *obj, gboolean track_resurrection)
{
return MONO_GC_HANDLE_FROM_UINT (alloc_handle (&gc_handles [track_resurrection? HANDLE_WEAK_TRACK: HANDLE_WEAK], obj, track_resurrection));
}
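/* Usage sketch for the two constructors above (illustrative, not part of
 * the original file; assumes a running runtime and a live MonoObject *obj):
 *
 *   MonoGCHandle h = mono_gchandle_new_weakref_internal (obj, FALSE);
 *   MonoObject *target = mono_gchandle_get_target_internal (h);
 *   // target is NULL once obj has been collected
 *   mono_gchandle_free_internal (h);
 */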
/**
* mono_gchandle_get_target_internal:
* \param gchandle a GCHandle's handle.
*
* The handle was previously created by calling \c mono_gchandle_new_internal or
* \c mono_gchandle_new_weakref.
*
* \returns A pointer to the \c MonoObject* represented by the handle or
* NULL for a collected object if using a weakref handle.
*/
MonoObject*
mono_gchandle_get_target_internal (MonoGCHandle gch)
{
guint32 gchandle = MONO_GC_HANDLE_TO_UINT (gch);
guint slot = MONO_GC_HANDLE_SLOT (gchandle);
guint type = MONO_GC_HANDLE_TYPE (gchandle);
HandleData *handles = &gc_handles [type];
MonoObject *obj = NULL;
if (type >= HANDLE_TYPE_MAX)
return NULL;
lock_handles (handles);
if (slot < handles->size && slot_occupied (handles, slot)) {
if (MONO_GC_HANDLE_TYPE_IS_WEAK (handles->type)) {
obj = mono_gc_weak_link_get (&handles->entries [slot]);
} else {
obj = (MonoObject *)handles->entries [slot];
}
} else {
/* print a warning? */
}
unlock_handles (handles);
/*g_print ("get target of entry %d of type %d: %p\n", slot, handles->type, obj);*/
return obj;
}
void
mono_gchandle_set_target (MonoGCHandle gch, MonoObject *obj)
{
guint32 gchandle = MONO_GC_HANDLE_TO_UINT (gch);
guint slot = MONO_GC_HANDLE_SLOT (gchandle);
guint type = MONO_GC_HANDLE_TYPE (gchandle);
HandleData *handles = &gc_handles [type];
MonoObject *old_obj = NULL;
g_assert (type < HANDLE_TYPE_MAX);
lock_handles (handles);
if (slot < handles->size && slot_occupied (handles, slot)) {
if (MONO_GC_HANDLE_TYPE_IS_WEAK (handles->type)) {
old_obj = (MonoObject *)handles->entries [slot];
(void)old_obj;
if (handles->entries [slot])
mono_gc_weak_link_remove (&handles->entries [slot], handles->type == HANDLE_WEAK_TRACK);
if (obj)
mono_gc_weak_link_add (&handles->entries [slot], obj, handles->type == HANDLE_WEAK_TRACK);
/*FIXME, what to use when obj == null?*/
handles->domain_ids [slot] = (obj ? mono_object_get_domain_internal (obj) : mono_domain_get ())->domain_id;
} else {
handles->entries [slot] = obj;
}
} else {
/* print a warning? */
}
/*g_print ("changed entry %d of type %d to object %p (in slot: %p)\n", slot, handles->type, obj, handles->entries [slot]);*/
unlock_handles (handles);
}
gboolean
mono_gc_is_null (void)
{
return FALSE;
}
/**
* mono_gchandle_free_internal:
* \param gchandle a GCHandle's handle.
*
* Frees the \p gchandle handle. If there are no outstanding
* references, the garbage collector can reclaim the memory of the
* object wrapped.
*/
void
mono_gchandle_free_internal (MonoGCHandle gch)
{
guint32 gchandle = MONO_GC_HANDLE_TO_UINT (gch);
if (!gchandle)
return;
guint slot = MONO_GC_HANDLE_SLOT (gchandle);
guint type = MONO_GC_HANDLE_TYPE (gchandle);
HandleData *handles = &gc_handles [type];
if (type >= HANDLE_TYPE_MAX)
return;
lock_handles (handles);
if (slot < handles->size && slot_occupied (handles, slot)) {
if (MONO_GC_HANDLE_TYPE_IS_WEAK (handles->type)) {
if (handles->entries [slot])
mono_gc_weak_link_remove (&handles->entries [slot], handles->type == HANDLE_WEAK_TRACK);
} else {
handles->entries [slot] = NULL;
}
vacate_slot (handles, slot);
} else {
/* print a warning? */
}
#ifndef DISABLE_PERFCOUNTERS
mono_atomic_dec_i32 (&mono_perfcounters->gc_num_handles);
#endif
/*g_print ("freed entry %d of type %d\n", slot, handles->type);*/
unlock_handles (handles);
MONO_PROFILER_RAISE (gc_handle_deleted, (gchandle, (MonoGCHandleType)handles->type));
}
guint64
mono_gc_get_total_allocated_bytes (MonoBoolean precise)
{
return 0;
}
void
mono_gc_register_obj_with_weak_fields (void *obj)
{
g_error ("Weak fields not supported by boehm gc");
}
gboolean
mono_gc_ephemeron_array_add (MonoObject *obj)
{
return TRUE;
}
void
mono_gc_get_gcmemoryinfo (
gint64 *high_memory_load_threshold_bytes,
gint64 *memory_load_bytes,
gint64 *total_available_memory_bytes,
gint64 *total_committed_bytes,
gint64 *heap_size_bytes,
gint64 *fragmented_bytes)
{
*high_memory_load_threshold_bytes = 0;
*memory_load_bytes = 0;
*total_available_memory_bytes = 0;
*total_committed_bytes = 0;
*heap_size_bytes = 0;
*fragmented_bytes = 0;
}
void mono_gc_get_gctimeinfo (
guint64 *time_last_gc_100ns,
guint64 *time_since_last_gc_100ns,
guint64 *time_max_gc_100ns)
{
*time_last_gc_100ns = 0;
*time_since_last_gc_100ns = 0;
*time_max_gc_100ns = 0;
}
#else
MONO_EMPTY_SOURCE_FILE (boehm_gc);
#endif /* no Boehm GC */
| /**
* \file
* GC implementation using either the installed or included Boehm GC.
*
* Copyright 2001-2003 Ximian, Inc (http://www.ximian.com)
* Copyright 2004-2011 Novell, Inc (http://www.novell.com)
* Copyright 2011-2012 Xamarin, Inc (http://www.xamarin.com)
* Licensed under the MIT license. See LICENSE file in the project root for full license information.
*/
#include "config.h"
#include <string.h>
#define GC_I_HIDE_POINTERS
#include <mono/metadata/gc-internals.h>
#include <mono/metadata/mono-gc.h>
#include <mono/metadata/profiler-private.h>
#include <mono/metadata/class-internals.h>
#include <mono/metadata/method-builder.h>
#include <mono/metadata/method-builder-ilgen.h>
#include <mono/metadata/method-builder-ilgen-internals.h>
#include <mono/metadata/opcodes.h>
#include <mono/metadata/domain-internals.h>
#include <mono/metadata/metadata-internals.h>
#include <mono/metadata/marshal.h>
#include <mono/metadata/runtime.h>
#include <mono/metadata/handle.h>
#include <mono/metadata/sgen-toggleref.h>
#include <mono/metadata/w32handle.h>
#include <mono/metadata/abi-details.h>
#include <mono/utils/atomic.h>
#include <mono/utils/mono-logger-internals.h>
#include <mono/utils/mono-memory-model.h>
#include <mono/utils/mono-time.h>
#include <mono/utils/mono-threads.h>
#include <mono/utils/dtrace.h>
#include <mono/utils/gc_wrapper.h>
#include <mono/utils/mono-os-mutex.h>
#include <mono/utils/mono-counters.h>
#include <mono/utils/mono-compiler.h>
#include <mono/utils/unlocked.h>
#include <mono/metadata/icall-decl.h>
#if HAVE_BOEHM_GC
#if defined(HOST_DARWIN) && defined(HAVE_PTHREAD_GET_STACKADDR_NP)
void *pthread_get_stackaddr_np(pthread_t);
#endif
#define GC_NO_DESCRIPTOR ((gpointer)(0 | GC_DS_LENGTH))
/*Boehm max heap cannot be smaller than 16MB*/
#define MIN_BOEHM_MAX_HEAP_SIZE_IN_MB 16
#define MIN_BOEHM_MAX_HEAP_SIZE (MIN_BOEHM_MAX_HEAP_SIZE_IN_MB << 20)
static gboolean gc_initialized = FALSE;
static gboolean gc_dont_gc_env = FALSE;
static mono_mutex_t mono_gc_lock;
static GC_push_other_roots_proc default_push_other_roots;
static GHashTable *roots;
static void
mono_push_other_roots(void);
static void
register_test_toggleref_callback (void);
#define BOEHM_GC_BIT_FINALIZER_AWARE 1
static MonoGCFinalizerCallbacks fin_callbacks;
/* GC Handles */
static mono_mutex_t handle_section;
#define lock_handles(handles) mono_os_mutex_lock (&handle_section)
#define unlock_handles(handles) mono_os_mutex_unlock (&handle_section)
typedef struct {
guint32 *bitmap;
gpointer *entries;
guint32 size;
guint8 type;
guint slot_hint : 24; /* starting slot for search in bitmap */
/* 2^16 appdomains should be enough for everyone (though I know I'll regret this in 20 years) */
/* we alloc this only for weak refs, since we can get the domain directly in the other cases */
guint16 *domain_ids;
} HandleData;
#define EMPTY_HANDLE_DATA(type) {NULL, NULL, 0, (type), 0, NULL}
/* weak and weak-track arrays will be allocated in malloc memory
*/
static HandleData gc_handles [] = {
EMPTY_HANDLE_DATA (HANDLE_WEAK),
EMPTY_HANDLE_DATA (HANDLE_WEAK_TRACK),
EMPTY_HANDLE_DATA (HANDLE_NORMAL),
EMPTY_HANDLE_DATA (HANDLE_PINNED)
};
static void
mono_gc_warning (char *msg, GC_word arg)
{
mono_trace (G_LOG_LEVEL_WARNING, MONO_TRACE_GC, msg, (unsigned long)arg);
}
static void on_gc_notification (GC_EventType event);
// GC_word here to precisely match Boehm. Not size_t, not gsize.
static void on_gc_heap_resize (GC_word new_size);
void
mono_gc_base_init (void)
{
char *env;
if (gc_initialized)
return;
mono_counters_init ();
#ifndef HOST_WIN32
mono_w32handle_init ();
#endif
roots = g_hash_table_new (NULL, NULL);
default_push_other_roots = GC_get_push_other_roots ();
GC_set_push_other_roots (mono_push_other_roots);
#if !defined(HOST_ANDROID)
/* If GC_no_dls is set to true, GC_find_limit is not called. This causes a seg fault on Android. */
GC_set_no_dls (TRUE);
#endif
{
if ((env = g_getenv ("MONO_GC_DEBUG"))) {
char **opts = g_strsplit (env, ",", -1);
for (char **ptr = opts; ptr && *ptr; ptr ++) {
char *opt = *ptr;
if (!strcmp (opt, "do-not-finalize")) {
mono_do_not_finalize = 1;
} else if (!strcmp (opt, "log-finalizers")) {
mono_log_finalizers = 1;
}
}
g_free (env);
}
}
/* cache value rather than calling during collection since g_hasenv may take locks and can deadlock */
gc_dont_gc_env = g_hasenv ("GC_DONT_GC");
GC_init ();
GC_set_warn_proc (mono_gc_warning);
GC_set_finalize_on_demand (1);
GC_set_finalizer_notifier(mono_gc_finalize_notify);
GC_init_gcj_malloc (5, NULL);
GC_allow_register_threads ();
if ((env = g_getenv ("MONO_GC_PARAMS"))) {
char **ptr, **opts = g_strsplit (env, ",", -1);
for (ptr = opts; *ptr; ++ptr) {
char *opt = *ptr;
if (g_str_has_prefix (opt, "max-heap-size=")) {
size_t max_heap;
opt = strchr (opt, '=') + 1;
if (*opt && mono_gc_parse_environment_string_extract_number (opt, &max_heap)) {
if (max_heap < MIN_BOEHM_MAX_HEAP_SIZE) {
fprintf (stderr, "max-heap-size must be at least %dMb.\n", MIN_BOEHM_MAX_HEAP_SIZE_IN_MB);
exit (1);
}
GC_set_max_heap_size (max_heap);
} else {
fprintf (stderr, "max-heap-size must be an integer.\n");
exit (1);
}
continue;
} else if (g_str_has_prefix (opt, "toggleref-test")) {
register_test_toggleref_callback ();
continue;
} else {
/* Could be a parameter for sgen */
/*
fprintf (stderr, "MONO_GC_PARAMS must be a comma-delimited list of one or more of the following:\n");
fprintf (stderr, " max-heap-size=N (where N is an integer, possibly with a k, m or a g suffix)\n");
exit (1);
*/
}
}
g_free (env);
g_strfreev (opts);
}
mono_thread_callbacks_init ();
mono_thread_info_init (sizeof (MonoThreadInfo));
mono_os_mutex_init (&mono_gc_lock);
mono_os_mutex_init_recursive (&handle_section);
mono_thread_info_attach ();
GC_set_on_collection_event (on_gc_notification);
GC_set_on_heap_resize (on_gc_heap_resize);
gc_initialized = TRUE;
}
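/*
 * Illustrative usage sketch (not part of the original file): environment
 * settings understood by the parsing code in mono_gc_base_init above, e.g.:
 *
 *   MONO_GC_PARAMS=max-heap-size=256m,toggleref-test
 *   MONO_GC_DEBUG=do-not-finalize,log-finalizers
 *
 * max-heap-size accepts an integer with an optional k, m or g suffix and must
 * be at least 16MB for Boehm (smaller values abort startup); unrecognized
 * MONO_GC_PARAMS entries are deliberately ignored here because they may be
 * meant for sgen.
 */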
void
mono_gc_base_cleanup (void)
{
GC_set_finalizer_notifier (NULL);
}
void
mono_gc_init_icalls (void)
{
}
/**
* mono_gc_collect:
* \param generation GC generation identifier
*
* Perform a garbage collection for the given generation, higher numbers
* mean usually older objects. Collecting a high-numbered generation
* implies collecting also the lower-numbered generations.
* The maximum value for \p generation can be retrieved with a call to
* \c mono_gc_max_generation, so this function is usually called as:
*
* <code>mono_gc_collect (mono_gc_max_generation ());</code>
*/
void
mono_gc_collect (int generation)
{
#ifndef DISABLE_PERFCOUNTERS
mono_atomic_inc_i32 (&mono_perfcounters->gc_induced);
#endif
GC_gcollect ();
}
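/*
 * Usage sketch (illustrative, mirroring the doc comment above): force a full
 * collection and then drain any finalizers it queued.
 *
 *   mono_gc_collect (mono_gc_max_generation ());
 *   while (mono_gc_pending_finalizers ())
 *       mono_gc_invoke_finalizers ();
 */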
/**
* mono_gc_max_generation:
*
* Get the maximum generation number used by the current garbage
* collector. The value will be 0 for the Boehm collector, 1 or more
* for the generational collectors.
*
* Returns: the maximum generation number.
*/
int
mono_gc_max_generation (void)
{
return 0;
}
guint64
mono_gc_get_allocated_bytes_for_current_thread (void)
{
return 0;
}
/**
* mono_gc_get_generation:
* \param object a managed object
*
* Get the garbage collector's generation that \p object belongs to.
* Use this as a hint only.
*
* \returns a garbage collector generation number
*/
int
mono_gc_get_generation (MonoObject *object)
{
return 0;
}
/**
* mono_gc_collection_count:
* \param generation a GC generation number
*
* Get how many times a garbage collection has been performed
* for the given \p generation number.
*
* \returns the number of garbage collections
*/
int
mono_gc_collection_count (int generation)
{
return GC_get_gc_no ();
}
int64_t
mono_gc_get_generation_size (int generation)
{
return 0;
}
void
mono_stop_world (MonoThreadInfoFlags flags)
{
g_assert ("mono_stop_world is not supported in Boehm");
}
void
mono_restart_world (MonoThreadInfoFlags flags)
{
g_assert ("mono_restart_world is not supported in Boehm");
}
/**
* mono_gc_add_memory_pressure:
* \param value amount of bytes
*
* Adjust the garbage collector's view of how many bytes of memory
* are indirectly referenced by managed objects (for example unmanaged
* memory holding image or other binary data).
* This is a hint only to the garbage collector algorithm.
* Note that negative values of \p value will decrease the memory
* pressure.
*/
void
mono_gc_add_memory_pressure (gint64 value)
{
}
/**
* mono_gc_get_used_size:
*
* Get the approximate amount of memory used by managed objects.
*
* Returns: the amount of memory used in bytes
*/
int64_t
mono_gc_get_used_size (void)
{
return GC_get_heap_size () - GC_get_free_bytes ();
}
/**
* mono_gc_get_heap_size:
*
* Get the amount of memory used by the garbage collector.
*
* Returns: the size of the heap in bytes
*/
int64_t
mono_gc_get_heap_size (void)
{
return GC_get_heap_size ();
}
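/*
 * Illustrative sketch: the two accessors above combine to give the free space
 * in the Boehm heap, since the used size is computed as heap size minus free
 * bytes.
 *
 *   int64_t heap_bytes = mono_gc_get_heap_size ();
 *   int64_t used_bytes = mono_gc_get_used_size ();
 *   int64_t free_bytes = heap_bytes - used_bytes;
 */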
gboolean
mono_gc_is_gc_thread (void)
{
return GC_thread_is_registered ();
}
gpointer
mono_gc_thread_attach (MonoThreadInfo* info)
{
struct GC_stack_base sb;
int res;
/* TODO: use GC_get_stack_base instead of baseptr. */
sb.mem_base = info->stack_end;
res = GC_register_my_thread (&sb);
if (res == GC_UNIMPLEMENTED)
return NULL; /* Cannot happen with GC v7+. */
info->handle_stack = mono_handle_stack_alloc ();
return info;
}
void
mono_gc_thread_detach (MonoThreadInfo *p)
{
/* Detach without the threads lock, as Boehm
* will take its own lock internally. Note that in
* on_gc_notification we take the threads lock after
* Boehm already holds its own lock. For consistency,
* always take the locks in Boehm-then-threads order.
*/
GC_unregister_my_thread ();
}
void
mono_gc_thread_detach_with_lock (MonoThreadInfo *p)
{
MonoNativeThreadId tid;
tid = mono_thread_info_get_tid (p);
if (p->runtime_thread)
mono_threads_add_joinable_thread ((gpointer)tid);
mono_handle_stack_free (p->handle_stack);
p->handle_stack = NULL;
}
gboolean
mono_gc_thread_in_critical_region (MonoThreadInfo *info)
{
return FALSE;
}
gboolean
mono_object_is_alive (MonoObject* o)
{
return GC_is_marked ((const void *)o);
}
int
mono_gc_walk_heap (int flags, MonoGCReferences callback, void *data)
{
return 1;
}
static gint64 gc_start_time;
static void
on_gc_notification (GC_EventType event)
{
MonoProfilerGCEvent e;
switch (event) {
case GC_EVENT_PRE_STOP_WORLD:
e = MONO_GC_EVENT_PRE_STOP_WORLD;
MONO_GC_WORLD_STOP_BEGIN ();
break;
case GC_EVENT_POST_STOP_WORLD:
e = MONO_GC_EVENT_POST_STOP_WORLD;
MONO_GC_WORLD_STOP_END ();
break;
case GC_EVENT_PRE_START_WORLD:
e = MONO_GC_EVENT_PRE_START_WORLD;
MONO_GC_WORLD_RESTART_BEGIN (1);
break;
case GC_EVENT_POST_START_WORLD:
e = MONO_GC_EVENT_POST_START_WORLD;
MONO_GC_WORLD_RESTART_END (1);
break;
case GC_EVENT_START:
e = MONO_GC_EVENT_START;
MONO_GC_BEGIN (1);
#ifndef DISABLE_PERFCOUNTERS
if (mono_perfcounters)
mono_atomic_inc_i32 (&mono_perfcounters->gc_collections0);
#endif
mono_atomic_inc_i32 (&mono_gc_stats.major_gc_count);
gc_start_time = mono_100ns_ticks ();
break;
case GC_EVENT_END:
e = MONO_GC_EVENT_END;
MONO_GC_END (1);
#if defined(ENABLE_DTRACE) && defined(__sun__)
/* This works around a dtrace -G problem on Solaris.
Limit its actual use to when the probe is enabled. */
if (MONO_GC_END_ENABLED ())
sleep(0);
#endif
#ifndef DISABLE_PERFCOUNTERS
if (mono_perfcounters) {
guint64 heap_size = GC_get_heap_size ();
guint64 used_size = heap_size - GC_get_free_bytes ();
/* FIXME: change these to mono_atomic_store_i64 () */
UnlockedWrite64 (&mono_perfcounters->gc_total_bytes, used_size);
UnlockedWrite64 (&mono_perfcounters->gc_committed_bytes, heap_size);
UnlockedWrite64 (&mono_perfcounters->gc_reserved_bytes, heap_size);
UnlockedWrite64 (&mono_perfcounters->gc_gen0size, heap_size);
}
#endif
UnlockedAdd64 (&mono_gc_stats.major_gc_time, mono_100ns_ticks () - gc_start_time);
mono_trace_message (MONO_TRACE_GC, "gc took %" G_GINT64_FORMAT " usecs", (mono_100ns_ticks () - gc_start_time) / 10);
break;
default:
break;
}
switch (event) {
case GC_EVENT_MARK_START:
case GC_EVENT_MARK_END:
case GC_EVENT_RECLAIM_START:
case GC_EVENT_RECLAIM_END:
break;
default:
MONO_PROFILER_RAISE (gc_event, (e, 0, TRUE));
break;
}
switch (event) {
case GC_EVENT_PRE_STOP_WORLD:
mono_thread_info_suspend_lock ();
MONO_PROFILER_RAISE (gc_event, (MONO_GC_EVENT_PRE_STOP_WORLD_LOCKED, 0, TRUE));
break;
case GC_EVENT_POST_START_WORLD:
mono_thread_info_suspend_unlock ();
MONO_PROFILER_RAISE (gc_event, (MONO_GC_EVENT_POST_START_WORLD_UNLOCKED, 0, TRUE));
break;
default:
break;
}
}
// GC_word here to precisely match Boehm. Not size_t, not gsize.
static void
on_gc_heap_resize (GC_word new_size)
{
guint64 heap_size = GC_get_heap_size ();
#ifndef DISABLE_PERFCOUNTERS
if (mono_perfcounters) {
/* FIXME: change these to mono_atomic_store_i64 () */
UnlockedWrite64 (&mono_perfcounters->gc_committed_bytes, heap_size);
UnlockedWrite64 (&mono_perfcounters->gc_reserved_bytes, heap_size);
UnlockedWrite64 (&mono_perfcounters->gc_gen0size, heap_size);
}
#endif
MONO_PROFILER_RAISE (gc_resize, (new_size));
}
typedef struct {
char *start;
char *end;
} RootData;
static gpointer
register_root (gpointer arg)
{
RootData* root_data = (RootData*)arg;
g_hash_table_insert (roots, root_data->start, root_data->end);
return NULL;
}
int
mono_gc_register_root (char *start, size_t size, void *descr, MonoGCRootSource source, void *key, const char *msg)
{
RootData root_data;
root_data.start = start;
/* Boehm root processing requires one byte past end of region to be scanned */
root_data.end = start + size + 1;
GC_call_with_alloc_lock (register_root, &root_data);
MONO_PROFILER_RAISE (gc_root_register, ((const mono_byte *) start, size, source, key, msg));
return TRUE;
}
int
mono_gc_register_root_wbarrier (char *start, size_t size, MonoGCDescriptor descr, MonoGCRootSource source, void *key, const char *msg)
{
return mono_gc_register_root (start, size, descr, source, key, msg);
}
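/*
 * Usage sketch (illustrative; the root source and message are arbitrary
 * example values): register a small pointer buffer so mono_push_other_roots
 * scans it conservatively, then drop it when done.
 *
 *   static gpointer my_roots [4];
 *   mono_gc_register_root ((char *) my_roots, sizeof (my_roots), NULL,
 *                          MONO_ROOT_SOURCE_GC_HANDLE, NULL, "example roots");
 *   ...
 *   mono_gc_deregister_root ((char *) my_roots);
 */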
static gpointer
deregister_root (gpointer arg)
{
gboolean removed = g_hash_table_remove (roots, arg);
g_assert (removed);
return NULL;
}
void
mono_gc_deregister_root (char* addr)
{
GC_call_with_alloc_lock (deregister_root, addr);
MONO_PROFILER_RAISE (gc_root_unregister, ((const mono_byte *) addr));
}
static void
push_root (gpointer key, gpointer value, gpointer user_data)
{
GC_push_all (key, value);
}
static void
push_handle_stack (HandleStack* stack)
{
HandleChunk *cur = stack->bottom;
HandleChunk *last = stack->top;
if (!cur)
return;
while (cur) {
if (cur->size > 0)
GC_push_all ((gpointer)&cur->elems[0], (char*)(cur->elems + cur->size) + 1);
if (cur == last)
break;
cur = cur->next;
}
}
static void
mono_push_other_roots (void)
{
g_hash_table_foreach (roots, push_root, NULL);
FOREACH_THREAD_EXCLUDE (info, MONO_THREAD_INFO_FLAGS_NO_GC) {
HandleStack* stack = info->handle_stack;
if (stack)
push_handle_stack (stack);
} FOREACH_THREAD_END
if (default_push_other_roots)
default_push_other_roots ();
}
static void
mono_gc_weak_link_add (void **link_addr, MonoObject *obj, gboolean track)
{
/* libgc requires that we use HIDE_POINTER... */
*link_addr = (void*)HIDE_POINTER (obj);
if (track)
GC_REGISTER_LONG_LINK (link_addr, obj);
else
GC_GENERAL_REGISTER_DISAPPEARING_LINK (link_addr, obj);
}
static void
mono_gc_weak_link_remove (void **link_addr, gboolean track)
{
if (track)
GC_unregister_long_link (link_addr);
else
GC_unregister_disappearing_link (link_addr);
*link_addr = NULL;
}
static gpointer
reveal_link (gpointer link_addr)
{
void **link_a = (void **)link_addr;
return REVEAL_POINTER (*link_a);
}
static MonoObject *
mono_gc_weak_link_get (void **link_addr)
{
MonoObject *obj = (MonoObject *)GC_call_with_alloc_lock (reveal_link, link_addr);
if (obj == (MonoObject *) -1)
return NULL;
return obj;
}
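/*
 * Illustrative sketch of the weak-link life cycle built from the three static
 * helpers above. The target is stored hidden so Boehm does not treat the slot
 * as a strong reference, and it is revealed under the allocation lock so the
 * collector cannot clear the link mid-read.
 *
 *   void *slot = NULL;
 *   mono_gc_weak_link_add (&slot, obj, FALSE);          // hide + register
 *   MonoObject *target = mono_gc_weak_link_get (&slot); // NULL once collected
 *   mono_gc_weak_link_remove (&slot, FALSE);            // unregister + clear
 */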
void*
mono_gc_make_descr_for_string (gsize *bitmap, int numbits)
{
return mono_gc_make_descr_from_bitmap (bitmap, numbits);
}
void*
mono_gc_make_descr_for_object (gsize *bitmap, int numbits, size_t obj_size)
{
return mono_gc_make_descr_from_bitmap (bitmap, numbits);
}
void*
mono_gc_make_descr_for_array (int vector, gsize *elem_bitmap, int numbits, size_t elem_size)
{
/* libgc has no usable support for arrays... */
return GC_NO_DESCRIPTOR;
}
void*
mono_gc_make_descr_from_bitmap (gsize *bitmap, int numbits)
{
/* It seems there are issues when the bitmap doesn't fit: play it safe */
if (numbits >= 30)
return GC_NO_DESCRIPTOR;
else
return (gpointer)GC_make_descriptor ((GC_bitmap)bitmap, numbits);
}
void*
mono_gc_make_vector_descr (void)
{
return NULL;
}
void*
mono_gc_make_root_descr_all_refs (int numbits)
{
return NULL;
}
MonoObject*
mono_gc_alloc_fixed (size_t size, void *descr, MonoGCRootSource source, void *key, const char *msg)
{
void *start = GC_MALLOC_UNCOLLECTABLE (size);
MONO_PROFILER_RAISE (gc_root_register, ((const mono_byte *) start, size, source, key, msg));
return (MonoObject*)start;
}
MonoObject*
mono_gc_alloc_fixed_no_descriptor (size_t size, MonoGCRootSource source, void *key, const char *msg)
{
return mono_gc_alloc_fixed (size, 0, source, key, msg);
}
void
mono_gc_free_fixed (void* addr)
{
MONO_PROFILER_RAISE (gc_root_unregister, ((const mono_byte *) addr));
GC_FREE (addr);
}
MonoObject*
mono_gc_alloc_obj (MonoVTable *vtable, size_t size)
{
MonoObject *obj;
if (!m_class_has_references (vtable->klass)) {
obj = (MonoObject *)GC_MALLOC_ATOMIC (size);
if (G_UNLIKELY (!obj))
return NULL;
obj->vtable = vtable;
obj->synchronisation = NULL;
memset (mono_object_get_data (obj), 0, size - MONO_ABI_SIZEOF (MonoObject));
} else if (vtable->gc_descr != GC_NO_DESCRIPTOR) {
obj = (MonoObject *)GC_GCJ_MALLOC (size, vtable);
if (G_UNLIKELY (!obj))
return NULL;
} else {
obj = (MonoObject *)GC_MALLOC (size);
if (G_UNLIKELY (!obj))
return NULL;
obj->vtable = vtable;
}
if (G_UNLIKELY (mono_profiler_allocations_enabled ()))
MONO_PROFILER_RAISE (gc_allocation, (obj));
return obj;
}
MonoArray*
mono_gc_alloc_pinned_vector (MonoVTable *vtable, size_t size, uintptr_t max_length)
{
return mono_gc_alloc_vector (vtable, size, max_length);
}
MonoArray*
mono_gc_alloc_vector (MonoVTable *vtable, size_t size, uintptr_t max_length)
{
MonoArray *obj;
if (!m_class_has_references (vtable->klass)) {
obj = (MonoArray *)GC_MALLOC_ATOMIC (size);
if (G_UNLIKELY (!obj))
return NULL;
obj->obj.vtable = vtable;
obj->obj.synchronisation = NULL;
memset (mono_object_get_data ((MonoObject*)obj), 0, size - MONO_ABI_SIZEOF (MonoObject));
} else if (vtable->gc_descr != GC_NO_DESCRIPTOR) {
obj = (MonoArray *)GC_GCJ_MALLOC (size, vtable);
if (G_UNLIKELY (!obj))
return NULL;
} else {
obj = (MonoArray *)GC_MALLOC (size);
if (G_UNLIKELY (!obj))
return NULL;
obj->obj.vtable = vtable;
}
obj->max_length = max_length;
if (G_UNLIKELY (mono_profiler_allocations_enabled ()))
MONO_PROFILER_RAISE (gc_allocation, (&obj->obj));
return obj;
}
MonoArray*
mono_gc_alloc_array (MonoVTable *vtable, size_t size, uintptr_t max_length, uintptr_t bounds_size)
{
MonoArray *obj;
if (!m_class_has_references (vtable->klass)) {
obj = (MonoArray *)GC_MALLOC_ATOMIC (size);
if (G_UNLIKELY (!obj))
return NULL;
obj->obj.vtable = vtable;
obj->obj.synchronisation = NULL;
memset (mono_object_get_data ((MonoObject*)obj), 0, size - MONO_ABI_SIZEOF (MonoObject));
} else if (vtable->gc_descr != GC_NO_DESCRIPTOR) {
obj = (MonoArray *)GC_GCJ_MALLOC (size, vtable);
if (G_UNLIKELY (!obj))
return NULL;
} else {
obj = (MonoArray *)GC_MALLOC (size);
if (G_UNLIKELY (!obj))
return NULL;
obj->obj.vtable = vtable;
}
obj->max_length = max_length;
if (bounds_size)
obj->bounds = (MonoArrayBounds *) ((char *) obj + size - bounds_size);
if (G_UNLIKELY (mono_profiler_allocations_enabled ()))
MONO_PROFILER_RAISE (gc_allocation, (&obj->obj));
return obj;
}
MonoString*
mono_gc_alloc_string (MonoVTable *vtable, size_t size, gint32 len)
{
MonoString *obj = (MonoString *)GC_MALLOC_ATOMIC (size);
if (G_UNLIKELY (!obj))
return NULL;
obj->object.vtable = vtable;
obj->object.synchronisation = NULL;
obj->length = len;
obj->chars [len] = 0;
if (G_UNLIKELY (mono_profiler_allocations_enabled ()))
MONO_PROFILER_RAISE (gc_allocation, (&obj->object));
return obj;
}
MonoObject*
mono_gc_alloc_mature (MonoVTable *vtable, size_t size)
{
return mono_gc_alloc_obj (vtable, size);
}
MonoObject*
mono_gc_alloc_pinned_obj (MonoVTable *vtable, size_t size)
{
return mono_gc_alloc_obj (vtable, size);
}
int
mono_gc_invoke_finalizers (void)
{
/* There is a bug in GC_invoke_finalizer () in versions <= 6.2alpha4:
* the 'mem_freed' variable is not initialized when there are no
* objects to finalize, which leads to strange behavior later on.
* The check is necessary to work around that bug.
*/
if (GC_should_invoke_finalizers ())
return GC_invoke_finalizers ();
return 0;
}
MonoBoolean
mono_gc_pending_finalizers (void)
{
return GC_should_invoke_finalizers ();
}
void
mono_gc_wbarrier_set_field_internal (MonoObject *obj, gpointer field_ptr, MonoObject* value)
{
*(void**)field_ptr = value;
}
void
mono_gc_wbarrier_set_arrayref_internal (MonoArray *arr, gpointer slot_ptr, MonoObject* value)
{
*(void**)slot_ptr = value;
}
void
mono_gc_wbarrier_arrayref_copy_internal (gpointer dest_ptr, gconstpointer src_ptr, int count)
{
mono_gc_memmove_aligned (dest_ptr, src_ptr, count * sizeof (gpointer));
}
void
mono_gc_wbarrier_generic_store_internal (void volatile* ptr, MonoObject* value)
{
*(void**)ptr = value;
}
void
mono_gc_wbarrier_generic_store_atomic_internal (gpointer ptr, MonoObject *value)
{
mono_atomic_store_ptr ((volatile gpointer *)ptr, value);
}
void
mono_gc_wbarrier_generic_nostore_internal (gpointer ptr)
{
}
void
mono_gc_wbarrier_value_copy_internal (gpointer dest, gconstpointer src, int count, MonoClass *klass)
{
mono_gc_memmove_atomic (dest, src, count * mono_class_value_size (klass, NULL));
}
void
mono_gc_wbarrier_object_copy_internal (MonoObject* obj, MonoObject *src)
{
/* do not copy the sync state */
mono_gc_memmove_aligned (mono_object_get_data (obj), (char*)src + MONO_ABI_SIZEOF (MonoObject),
m_class_get_instance_size (mono_object_class (obj)) - MONO_ABI_SIZEOF (MonoObject));
}
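/*
 * Illustrative note: Boehm is a non-moving, non-generational collector, so
 * the write-barrier entry points above reduce to plain (or atomic) stores
 * and memmoves. A reference store through the barrier API is therefore
 * equivalent to a direct assignment here (field_ptr is a hypothetical
 * MonoObject* slot inside obj):
 *
 *   mono_gc_wbarrier_set_field_internal (obj, field_ptr, value);
 *   // same effect under Boehm as: *(MonoObject **) field_ptr = value;
 */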
void
mono_gc_clear_domain (MonoDomain *domain)
{
}
void
mono_gc_suspend_finalizers (void)
{
}
int
mono_gc_get_suspend_signal (void)
{
return GC_get_suspend_signal ();
}
int
mono_gc_get_restart_signal (void)
{
return GC_get_thr_restart_signal ();
}
#if defined(USE_COMPILER_TLS) && defined(__linux__) && (defined(__i386__) || defined(__x86_64__))
// Look at history around late August 2019 if this is to be restored.
// The code was effectively dead, not merely deleted to avoid maintaining it.
#endif
gboolean
mono_gc_is_critical_method (MonoMethod *method)
{
return FALSE;
}
MonoMethod*
mono_gc_get_managed_allocator (MonoClass *klass, gboolean for_box, gboolean known_instance_size)
{
return NULL;
}
MonoMethod*
mono_gc_get_managed_array_allocator (MonoClass *klass)
{
return NULL;
}
MonoMethod*
mono_gc_get_managed_allocator_by_type (int atype, ManagedAllocatorVariant variant)
{
return NULL;
}
guint32
mono_gc_get_managed_allocator_types (void)
{
return 0;
}
MonoMethod*
mono_gc_get_write_barrier (void)
{
g_assert_not_reached ();
return NULL;
}
MonoMethod*
mono_gc_get_specific_write_barrier (gboolean is_concurrent)
{
g_assert_not_reached ();
return NULL;
}
int
mono_gc_get_aligned_size_for_allocator (int size)
{
return size;
}
const char *
mono_gc_get_gc_name (void)
{
return "boehm";
}
void*
mono_gc_invoke_with_gc_lock (MonoGCLockedCallbackFunc func, void *data)
{
return GC_call_with_alloc_lock (func, data);
}
char*
mono_gc_get_description (void)
{
return g_strdup (DEFAULT_GC_NAME);
}
void
mono_gc_set_desktop_mode (void)
{
GC_set_dont_expand (1);
}
gboolean
mono_gc_is_moving (void)
{
return FALSE;
}
gboolean
mono_gc_is_disabled (void)
{
if (GC_is_disabled () || gc_dont_gc_env)
return TRUE;
else
return FALSE;
}
void
mono_gc_wbarrier_range_copy (gpointer _dest, gconstpointer _src, int size)
{
g_assert_not_reached ();
}
MonoRangeCopyFunction
mono_gc_get_range_copy_func (void)
{
return &mono_gc_wbarrier_range_copy;
}
guint8*
mono_gc_get_card_table (int *shift_bits, gpointer *card_mask)
{
*shift_bits = 0;
*card_mask = 0;
//g_assert_not_reached ();
return NULL;
}
guint8*
mono_gc_get_target_card_table (int *shift_bits, target_mgreg_t *card_mask)
{
*shift_bits = 0;
*card_mask = 0;
return NULL;
}
gboolean
mono_gc_card_table_nursery_check (void)
{
g_assert_not_reached ();
return TRUE;
}
void*
mono_gc_get_nursery (int *shift_bits, size_t *size)
{
return NULL;
}
gboolean
mono_gc_precise_stack_mark_enabled (void)
{
return FALSE;
}
FILE *
mono_gc_get_logfile (void)
{
return NULL;
}
void
mono_gc_params_set (const char* options)
{
}
void
mono_gc_debug_set (const char* options)
{
}
void
mono_gc_conservatively_scan_area (void *start, void *end)
{
g_assert_not_reached ();
}
void *
mono_gc_scan_object (void *obj, void *gc_data)
{
g_assert_not_reached ();
return NULL;
}
gsize*
mono_gc_get_bitmap_for_descr (void *descr, int *numbits)
{
g_assert_not_reached ();
return NULL;
}
void
mono_gc_set_gc_callbacks (MonoGCCallbacks *callbacks)
{
}
void
mono_gc_set_stack_end (void *stack_end)
{
}
void GC_start_blocking ()
{
}
void GC_end_blocking ()
{
}
void
mono_gc_skip_thread_changing (gboolean skip)
{
/*
* Unlike SGen, Boehm doesn't respect our thread info flags. We need to
* inform Boehm manually to skip/not skip the current thread.
*/
if (skip)
GC_start_blocking ();
else
GC_end_blocking ();
}
void
mono_gc_skip_thread_changed (gboolean skip)
{
}
void
mono_gc_register_for_finalization (MonoObject *obj, MonoFinalizationProc user_data)
{
guint offset = 0;
#ifndef GC_DEBUG
/* This assertion is not valid when GC_DEBUG is defined */
g_assert (GC_base (obj) == (char*)obj - offset);
#endif
GC_REGISTER_FINALIZER_NO_ORDER ((char*)obj - offset, user_data, GUINT_TO_POINTER (offset), NULL, NULL);
}
#ifndef HOST_WIN32
int
mono_gc_pthread_create (pthread_t *new_thread, const pthread_attr_t *attr, void *(*start_routine)(void *), void *arg)
{
/* it is being replaced by GC_pthread_create on some
* platforms, see libgc/include/gc_pthread_redirects.h */
return pthread_create (new_thread, attr, start_routine, arg);
}
#endif
#ifdef HOST_WIN32
BOOL APIENTRY mono_gc_dllmain (HMODULE module_handle, DWORD reason, LPVOID reserved)
{
#ifdef GC_INSIDE_DLL
return GC_DllMain (module_handle, reason, reserved);
#else
return TRUE;
#endif
}
#endif
MonoVTable *
mono_gc_get_vtable (MonoObject *obj)
{
// No pointer tagging.
return obj->vtable;
}
guint
mono_gc_get_vtable_bits (MonoClass *klass)
{
if (fin_callbacks.is_class_finalization_aware) {
if (fin_callbacks.is_class_finalization_aware (klass))
return BOEHM_GC_BIT_FINALIZER_AWARE;
}
return 0;
}
/*
* mono_gc_register_altstack:
*
* Register the dimensions of the normal stack and altstack with the collector.
* Currently, STACK/STACK_SIZE is only used when the thread is suspended while it is on an altstack.
*/
void
mono_gc_register_altstack (gpointer stack, gint32 stack_size, gpointer altstack, gint32 altstack_size)
{
GC_register_altstack (stack, stack_size, altstack, altstack_size);
}
int
mono_gc_get_los_limit (void)
{
return G_MAXINT;
}
void
mono_gc_set_string_length (MonoString *str, gint32 new_length)
{
mono_unichar2 *new_end = str->chars + new_length;
/* zero the discarded string. This null-delimits the string and allows
* the space to be reclaimed by the collector. */
memset (new_end, 0, (str->length - new_length + 1) * sizeof (mono_unichar2));
str->length = new_length;
}
gboolean
mono_gc_user_markers_supported (void)
{
return FALSE;
}
void *
mono_gc_make_root_descr_user (MonoGCRootMarkFunc marker)
{
g_assert_not_reached ();
return NULL;
}
/* Toggleref support */
void
mono_gc_toggleref_add (MonoObject *object, mono_bool strong_ref)
{
if (GC_toggleref_add ((GC_PTR)object, (int)strong_ref) != GC_SUCCESS)
g_error ("GC_toggleref_add failed\n");
}
void
mono_gc_toggleref_register_callback (MonoToggleRefStatus (*process_toggleref) (MonoObject *obj))
{
GC_set_toggleref_func ((GC_ToggleRefStatus (*) (GC_PTR obj)) process_toggleref);
}
/* Test support code */
static MonoToggleRefStatus
test_toggleref_callback (MonoObject *obj)
{
MonoToggleRefStatus status = MONO_TOGGLE_REF_DROP;
MONO_STATIC_POINTER_INIT (MonoClassField, mono_toggleref_test_field)
mono_toggleref_test_field = mono_class_get_field_from_name_full (mono_object_class (obj), "__test", NULL);
g_assert (mono_toggleref_test_field);
MONO_STATIC_POINTER_INIT_END (MonoClassField*, mono_toggleref_test_field)
mono_field_get_value_internal (obj, mono_toggleref_test_field, &status);
printf ("toggleref-cb obj %d\n", status);
return status;
}
static void
register_test_toggleref_callback (void)
{
mono_gc_toggleref_register_callback (test_toggleref_callback);
}
static gboolean
is_finalization_aware (MonoObject *obj)
{
MonoVTable *vt = obj->vtable;
return (vt->gc_bits & BOEHM_GC_BIT_FINALIZER_AWARE) == BOEHM_GC_BIT_FINALIZER_AWARE;
}
static void
fin_notifier (MonoObject *obj)
{
if (is_finalization_aware (obj))
fin_callbacks.object_queued_for_finalization (obj);
}
void
mono_gc_register_finalizer_callbacks (MonoGCFinalizerCallbacks *callbacks)
{
if (callbacks->version != MONO_GC_FINALIZER_EXTENSION_VERSION)
g_error ("Invalid finalizer callback version. Expected %d but got %d\n", MONO_GC_FINALIZER_EXTENSION_VERSION, callbacks->version);
fin_callbacks = *callbacks;
GC_set_await_finalize_proc ((void (*) (GC_PTR))fin_notifier);
}
#define BITMAP_SIZE (sizeof (*((HandleData *)NULL)->bitmap) * CHAR_BIT)
static gboolean
slot_occupied (HandleData *handles, guint slot) {
return handles->bitmap [slot / BITMAP_SIZE] & (1 << (slot % BITMAP_SIZE));
}
static void
vacate_slot (HandleData *handles, guint slot) {
handles->bitmap [slot / BITMAP_SIZE] &= ~(1 << (slot % BITMAP_SIZE));
}
static void
occupy_slot (HandleData *handles, guint slot) {
handles->bitmap [slot / BITMAP_SIZE] |= 1 << (slot % BITMAP_SIZE);
}
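/*
 * Worked example for the bitmap helpers above: with 32-bit bitmap words
 * (BITMAP_SIZE == 32), handle slot 37 lives in bitmap [37 / 32] == bitmap [1]
 * at bit 37 % 32 == 5, i.e. mask 1 << 5 == 0x20.
 */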
static int
find_first_unset (guint32 bitmap)
{
int i;
for (i = 0; i < 32; ++i) {
if (!(bitmap & (1 << i)))
return i;
}
return -1;
}
static void
handle_data_alloc_entries (HandleData *handles)
{
handles->size = 32;
if (MONO_GC_HANDLE_TYPE_IS_WEAK (handles->type)) {
handles->entries = (void **)g_malloc0 (sizeof (*handles->entries) * handles->size);
handles->domain_ids = (guint16 *)g_malloc0 (sizeof (*handles->domain_ids) * handles->size);
} else {
handles->entries = (void **)mono_gc_alloc_fixed (sizeof (*handles->entries) * handles->size, NULL, MONO_ROOT_SOURCE_GC_HANDLE, NULL, "GC Handle Table (Boehm)");
}
handles->bitmap = (guint32 *)g_malloc0 (handles->size / CHAR_BIT);
}
static gint
handle_data_next_unset (HandleData *handles)
{
gint slot;
for (slot = handles->slot_hint; slot < handles->size / BITMAP_SIZE; ++slot) {
if (handles->bitmap [slot] == 0xffffffff)
continue;
handles->slot_hint = slot;
return find_first_unset (handles->bitmap [slot]);
}
return -1;
}
static gint
handle_data_first_unset (HandleData *handles)
{
gint slot;
for (slot = 0; slot < handles->slot_hint; ++slot) {
if (handles->bitmap [slot] == 0xffffffff)
continue;
handles->slot_hint = slot;
return find_first_unset (handles->bitmap [slot]);
}
return -1;
}
/* Double the capacity of the handle table, re-registering any live weak links into the new entries array. */
static void
handle_data_grow (HandleData *handles, gboolean track)
{
guint32 *new_bitmap;
guint32 new_size = handles->size * 2; /* always double: we memset to 0 based on this below */
/* resize and copy the bitmap */
new_bitmap = (guint32 *)g_malloc0 (new_size / CHAR_BIT);
memcpy (new_bitmap, handles->bitmap, handles->size / CHAR_BIT);
g_free (handles->bitmap);
handles->bitmap = new_bitmap;
/* resize and copy the entries */
if (MONO_GC_HANDLE_TYPE_IS_WEAK (handles->type)) {
gpointer *entries;
guint16 *domain_ids;
gint i;
domain_ids = (guint16 *)g_malloc0 (sizeof (*handles->domain_ids) * new_size);
entries = (void **)g_malloc0 (sizeof (*handles->entries) * new_size);
memcpy (domain_ids, handles->domain_ids, sizeof (*handles->domain_ids) * handles->size);
for (i = 0; i < handles->size; ++i) {
MonoObject *obj = mono_gc_weak_link_get (&(handles->entries [i]));
if (obj) {
mono_gc_weak_link_add (&(entries [i]), obj, track);
mono_gc_weak_link_remove (&(handles->entries [i]), track);
} else {
g_assert (!handles->entries [i]);
}
}
g_free (handles->entries);
g_free (handles->domain_ids);
handles->entries = entries;
handles->domain_ids = domain_ids;
} else {
gpointer *entries;
entries = (void **)mono_gc_alloc_fixed (sizeof (*handles->entries) * new_size, NULL, MONO_ROOT_SOURCE_GC_HANDLE, NULL, "GC Handle Table (Boehm)");
mono_gc_memmove_aligned (entries, handles->entries, sizeof (*handles->entries) * handles->size);
mono_gc_free_fixed (handles->entries);
handles->entries = entries;
}
handles->slot_hint = handles->size / BITMAP_SIZE;
handles->size = new_size;
}
static guint32
alloc_handle (HandleData *handles, MonoObject *obj, gboolean track)
{
gint slot, i;
guint32 res;
lock_handles (handles);
if (!handles->size)
handle_data_alloc_entries (handles);
i = handle_data_next_unset (handles);
if (i == -1 && handles->slot_hint != 0)
i = handle_data_first_unset (handles);
if (i == -1) {
handle_data_grow (handles, track);
i = 0;
}
slot = handles->slot_hint * BITMAP_SIZE + i;
occupy_slot (handles, slot);
handles->entries [slot] = NULL;
if (MONO_GC_HANDLE_TYPE_IS_WEAK (handles->type)) {
/*FIXME, what to use when obj == null?*/
handles->domain_ids [slot] = (obj ? mono_object_get_domain_internal (obj) : mono_domain_get ())->domain_id;
if (obj)
mono_gc_weak_link_add (&(handles->entries [slot]), obj, track);
} else {
handles->entries [slot] = obj;
}
#ifndef DISABLE_PERFCOUNTERS
mono_atomic_inc_i32 (&mono_perfcounters->gc_num_handles);
#endif
unlock_handles (handles);
res = MONO_GC_HANDLE (slot, handles->type);
MONO_PROFILER_RAISE (gc_handle_created, (res, (MonoGCHandleType)handles->type, obj));
return res;
}
/**
* mono_gchandle_new_internal:
* \param obj managed object to get a handle for
* \param pinned whether the object should be pinned
*
* This returns a handle that wraps the object. It is used to keep a
* reference to a managed object from the unmanaged world, preventing the
* object from being disposed.
*
* If \p pinned is false, the address of the object cannot be obtained. If it
* is true, the address of the object can be obtained; this also pins the
* object, so a moving garbage collector will not be able to move it.
*
* \returns a handle that can be used to access the object from
* unmanaged code.
*/
MonoGCHandle
mono_gchandle_new_internal (MonoObject *obj, gboolean pinned)
{
return MONO_GC_HANDLE_FROM_UINT(alloc_handle (&gc_handles [pinned? HANDLE_PINNED: HANDLE_NORMAL], obj, FALSE));
}
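/*
 * Usage sketch (illustrative): a typical strong-handle round trip using the
 * functions defined in this file; error handling omitted.
 *
 *   MonoGCHandle h = mono_gchandle_new_internal (obj, TRUE); // pinned
 *   MonoObject *target = mono_gchandle_get_target_internal (h);
 *   ... use target from unmanaged code ...
 *   mono_gchandle_free_internal (h);
 */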
/**
* mono_gchandle_new_weakref_internal:
* \param obj managed object to get a handle for
* \param track_resurrection Determines how long to track the object: if set to TRUE, the object is tracked after finalization; if FALSE, the object is only tracked up until the point of finalization.
*
* This returns a weak handle that wraps the object. It is used to
* keep a reference to a managed object from the unmanaged world.
* Unlike \c mono_gchandle_new_internal, the object can be reclaimed by the
* garbage collector. In this case the value of the GCHandle will be
* set to zero.
*
* If \p track_resurrection is TRUE the object will be tracked through
* finalization and if the object is resurrected during the execution
* of the finalizer, then the returned weakref will continue to hold
* a reference to the object. If \p track_resurrection is FALSE, then
* the weak reference's target will become NULL as soon as the object
* is passed on to the finalizer.
*
* \returns a handle that can be used to access the object from
* unmanaged code.
*/
MonoGCHandle
mono_gchandle_new_weakref_internal (MonoObject *obj, gboolean track_resurrection)
{
return MONO_GC_HANDLE_FROM_UINT (alloc_handle (&gc_handles [track_resurrection? HANDLE_WEAK_TRACK: HANDLE_WEAK], obj, track_resurrection));
}
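/*
 * Usage sketch (illustrative): a weak handle does not keep its target alive,
 * so after a collection the target may come back as NULL.
 *
 *   MonoGCHandle wh = mono_gchandle_new_weakref_internal (obj, FALSE);
 *   mono_gc_collect (mono_gc_max_generation ());
 *   if (!mono_gchandle_get_target_internal (wh))
 *       ; // the object was reclaimed
 *   mono_gchandle_free_internal (wh);
 */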
/**
* mono_gchandle_get_target_internal:
* \param gchandle a GCHandle's handle.
*
* The handle was previously created by calling \c mono_gchandle_new_internal or
* \c mono_gchandle_new_weakref.
*
* \returns A pointer to the \c MonoObject* represented by the handle or
* NULL for a collected object if using a weakref handle.
*/
MonoObject*
mono_gchandle_get_target_internal (MonoGCHandle gch)
{
guint32 gchandle = MONO_GC_HANDLE_TO_UINT (gch);
guint slot = MONO_GC_HANDLE_SLOT (gchandle);
guint type = MONO_GC_HANDLE_TYPE (gchandle);
HandleData *handles = &gc_handles [type];
MonoObject *obj = NULL;
if (type >= HANDLE_TYPE_MAX)
return NULL;
lock_handles (handles);
if (slot < handles->size && slot_occupied (handles, slot)) {
if (MONO_GC_HANDLE_TYPE_IS_WEAK (handles->type)) {
obj = mono_gc_weak_link_get (&handles->entries [slot]);
} else {
obj = (MonoObject *)handles->entries [slot];
}
} else {
/* print a warning? */
}
unlock_handles (handles);
/*g_print ("get target of entry %d of type %d: %p\n", slot, handles->type, obj);*/
return obj;
}
void
mono_gchandle_set_target (MonoGCHandle gch, MonoObject *obj)
{
guint32 gchandle = MONO_GC_HANDLE_TO_UINT (gch);
guint slot = MONO_GC_HANDLE_SLOT (gchandle);
guint type = MONO_GC_HANDLE_TYPE (gchandle);
HandleData *handles = &gc_handles [type];
MonoObject *old_obj = NULL;
g_assert (type < HANDLE_TYPE_MAX);
lock_handles (handles);
if (slot < handles->size && slot_occupied (handles, slot)) {
if (MONO_GC_HANDLE_TYPE_IS_WEAK (handles->type)) {
old_obj = (MonoObject *)handles->entries [slot];
(void)old_obj;
if (handles->entries [slot])
mono_gc_weak_link_remove (&handles->entries [slot], handles->type == HANDLE_WEAK_TRACK);
if (obj)
mono_gc_weak_link_add (&handles->entries [slot], obj, handles->type == HANDLE_WEAK_TRACK);
/*FIXME, what to use when obj == null?*/
handles->domain_ids [slot] = (obj ? mono_object_get_domain_internal (obj) : mono_domain_get ())->domain_id;
} else {
handles->entries [slot] = obj;
}
} else {
/* print a warning? */
}
/*g_print ("changed entry %d of type %d to object %p (in slot: %p)\n", slot, handles->type, obj, handles->entries [slot]);*/
unlock_handles (handles);
}
gboolean
mono_gc_is_null (void)
{
return FALSE;
}
/**
* mono_gchandle_free_internal:
* \param gchandle a GCHandle's handle.
*
* Frees the \p gchandle handle. If there are no outstanding
* references, the garbage collector can reclaim the memory of the
* wrapped object.
*/
void
mono_gchandle_free_internal (MonoGCHandle gch)
{
guint32 gchandle = MONO_GC_HANDLE_TO_UINT (gch);
if (!gchandle)
return;
guint slot = MONO_GC_HANDLE_SLOT (gchandle);
guint type = MONO_GC_HANDLE_TYPE (gchandle);
HandleData *handles = &gc_handles [type];
if (type >= HANDLE_TYPE_MAX)
return;
lock_handles (handles);
if (slot < handles->size && slot_occupied (handles, slot)) {
if (MONO_GC_HANDLE_TYPE_IS_WEAK (handles->type)) {
if (handles->entries [slot])
mono_gc_weak_link_remove (&handles->entries [slot], handles->type == HANDLE_WEAK_TRACK);
} else {
handles->entries [slot] = NULL;
}
vacate_slot (handles, slot);
} else {
/* print a warning? */
}
#ifndef DISABLE_PERFCOUNTERS
mono_atomic_dec_i32 (&mono_perfcounters->gc_num_handles);
#endif
/*g_print ("freed entry %d of type %d\n", slot, handles->type);*/
unlock_handles (handles);
MONO_PROFILER_RAISE (gc_handle_deleted, (gchandle, (MonoGCHandleType)handles->type));
}
guint64
mono_gc_get_total_allocated_bytes (MonoBoolean precise)
{
return 0;
}
void
mono_gc_register_obj_with_weak_fields (void *obj)
{
g_error ("Weak fields not supported by boehm gc");
}
gboolean
mono_gc_ephemeron_array_add (MonoObject *obj)
{
return TRUE;
}
void
mono_gc_get_gcmemoryinfo (
gint64 *high_memory_load_threshold_bytes,
gint64 *memory_load_bytes,
gint64 *total_available_memory_bytes,
gint64 *total_committed_bytes,
gint64 *heap_size_bytes,
gint64 *fragmented_bytes)
{
*high_memory_load_threshold_bytes = 0;
*memory_load_bytes = 0;
*total_available_memory_bytes = 0;
*total_committed_bytes = 0;
*heap_size_bytes = 0;
*fragmented_bytes = 0;
}
void mono_gc_get_gctimeinfo (
guint64 *time_last_gc_100ns,
guint64 *time_since_last_gc_100ns,
guint64 *time_max_gc_100ns)
{
*time_last_gc_100ns = 0;
*time_since_last_gc_100ns = 0;
*time_max_gc_100ns = 0;
}
#else
MONO_EMPTY_SOURCE_FILE (boehm_gc);
#endif /* no Boehm GC */
| -1 |
dotnet/runtime | 66,213 | [mono] Put WeakAttribute support under an ifdef | This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | lambdageek | 2022-03-04T19:50:51Z | 2022-03-08T14:08:40Z | 34880aaeb463cc17268b44aecb80dda0914cfa01 | 8342a08b8bf0ff2ffb64cd5992e2d60c8d6d63c1 | [mono] Put WeakAttribute support under an ifdef. This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | ./src/coreclr/pal/src/libunwind/src/ia64/Ginit_remote.c | /* libunwind - a platform-independent unwind library
Copyright (C) 2001-2002, 2004 Hewlett-Packard Co
Contributed by David Mosberger-Tang <[email protected]>
This file is part of libunwind.
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. */
#include "init.h"
#include "unwind_i.h"
int
unw_init_remote (unw_cursor_t *cursor, unw_addr_space_t as, void *as_arg)
{
#ifdef UNW_LOCAL_ONLY
return -UNW_EINVAL;
#else /* !UNW_LOCAL_ONLY */
struct cursor *c = (struct cursor *) cursor;
unw_word_t sp, bsp;
int ret;
if (!atomic_load(&tdep_init_done))
tdep_init ();
Debug (1, "(cursor=%p)\n", c);
if (as == unw_local_addr_space)
/* This special-casing is unfortunate and shouldn't be needed;
however, both Linux and HP-UX need to adjust the context a bit
before it's usable. Try to think of a cleaner way of doing
this. Not sure it's possible though, as long as we want to be
able to use the context returned by getcontext() et al. */
return unw_init_local (cursor, as_arg);
c->as = as;
c->as_arg = as_arg;
if ((ret = ia64_get (c, IA64_REG_LOC (c, UNW_IA64_GR + 12), &sp)) < 0
|| (ret = ia64_get (c, IA64_REG_LOC (c, UNW_IA64_AR_BSP), &bsp)) < 0)
return ret;
return common_init (c, sp, bsp);
#endif /* !UNW_LOCAL_ONLY */
}
| /* libunwind - a platform-independent unwind library
Copyright (C) 2001-2002, 2004 Hewlett-Packard Co
Contributed by David Mosberger-Tang <[email protected]>
This file is part of libunwind.
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. */
#include "init.h"
#include "unwind_i.h"
int
unw_init_remote (unw_cursor_t *cursor, unw_addr_space_t as, void *as_arg)
{
#ifdef UNW_LOCAL_ONLY
return -UNW_EINVAL;
#else /* !UNW_LOCAL_ONLY */
struct cursor *c = (struct cursor *) cursor;
unw_word_t sp, bsp;
int ret;
if (!atomic_load(&tdep_init_done))
tdep_init ();
Debug (1, "(cursor=%p)\n", c);
if (as == unw_local_addr_space)
/* This special-casing is unfortunate and shouldn't be needed;
however, both Linux and HP-UX need to adjust the context a bit
before it's usable. Try to think of a cleaner way of doing
this. Not sure it's possible though, as long as we want to be
able to use the context returned by getcontext() et al. */
return unw_init_local (cursor, as_arg);
c->as = as;
c->as_arg = as_arg;
if ((ret = ia64_get (c, IA64_REG_LOC (c, UNW_IA64_GR + 12), &sp)) < 0
|| (ret = ia64_get (c, IA64_REG_LOC (c, UNW_IA64_AR_BSP), &bsp)) < 0)
return ret;
return common_init (c, sp, bsp);
#endif /* !UNW_LOCAL_ONLY */
}
| -1 |
dotnet/runtime | 66,213 | [mono] Put WeakAttribute support under an ifdef | This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | lambdageek | 2022-03-04T19:50:51Z | 2022-03-08T14:08:40Z | 34880aaeb463cc17268b44aecb80dda0914cfa01 | 8342a08b8bf0ff2ffb64cd5992e2d60c8d6d63c1 | [mono] Put WeakAttribute support under an ifdef. This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | ./src/coreclr/inc/securitywrapper.h | // Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
//*****************************************************************************
// File: SecurityWrapper.h
//
// Wrapper around Win32 Security functions
//
//*****************************************************************************
#ifndef _SECURITY_WRAPPER_H
#define _SECURITY_WRAPPER_H
#ifdef TARGET_UNIX
#error This file should not be included on non-Windows platforms.
#endif
//-----------------------------------------------------------------------------
// Wrapper around a PSID.
// This class does not own the memory.
//-----------------------------------------------------------------------------
class Sid
{
public:
// Initialize the Sid wrapper around an existing SID.
Sid(PSID pSid);
static bool Equals(const Sid & a, const Sid & b) { return Equals(a.m_pSid, b.m_pSid); }
static bool Equals(const Sid & a, PSID b) { return Equals(a.m_pSid, b); }
static bool Equals(PSID a, const Sid & b) { return Equals(a, b.m_pSid); }
static bool Equals(PSID a, PSID b);
PSID RawSid() { return m_pSid; }
protected:
// Pointer to Sid buffer. We don't own the data.
PSID m_pSid;
};
//-----------------------------------------------------------------------------
// Wrapper around a PSID with buffer.
//-----------------------------------------------------------------------------
class SidBuffer
{
public:
SidBuffer();
~SidBuffer();
// Get the underlying sid
Sid GetSid();
// Do we not have a sid? This will be true if init fails.
bool IsNull() { return m_pBuffer == NULL; }
// Go to definitions to see detailed comments
HRESULT InitFromProcessNoThrow(DWORD pid);
void InitFromProcess(DWORD pid); // throws
HRESULT InitFromProcessUserNoThrow(DWORD pid);
void InitFromProcessUser(DWORD pid); // throws
HRESULT InitFromProcessAppContainerSidNoThrow(DWORD pid);
protected:
BYTE * m_pBuffer;
};
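// Usage sketch (illustrative, not part of the original header): resolve a
// process's owner SID without throwing.
//
//   SidBuffer buffer;
//   if (SUCCEEDED(buffer.InitFromProcessNoThrow(pid)) && !buffer.IsNull())
//   {
//       Sid sid = buffer.GetSid();
//       PSID raw = sid.RawSid();
//   }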
//-----------------------------------------------------------------------------
// Access Control List.
//-----------------------------------------------------------------------------
class Dacl
{
public:
Dacl(PACL pAcl);
SIZE_T GetAceCount();
ACE_HEADER * GetAce(SIZE_T dwAceIndex);
protected:
PACL m_acl;
};
//-----------------------------------------------------------------------------
// Represent a win32 SECURITY_DESCRIPTOR object.
// (Note there's a "SecurityDescriptor" class in the VM for managed goo,
// so we prefix this with "Win32" to avoid a naming collision.)
//-----------------------------------------------------------------------------
class Win32SecurityDescriptor
{
public:
Win32SecurityDescriptor();
~Win32SecurityDescriptor();
HRESULT InitFromHandleNoThrow(HANDLE h);
void InitFromHandle(HANDLE h); // throws
// Gets the owner SID from this SecurityDescriptor.
HRESULT GetOwnerNoThrow( PSID* ppSid );
Sid GetOwner(); // throws
Dacl GetDacl(); // throws
protected:
PSECURITY_DESCRIPTOR m_pDesc;
};
#endif // _SECURITY_WRAPPER_H
| // Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
//*****************************************************************************
// File: SecurityWrapper.h
//
// Wrapper around Win32 Security functions
//
//*****************************************************************************
#ifndef _SECURITY_WRAPPER_H
#define _SECURITY_WRAPPER_H
#ifdef TARGET_UNIX
#error This file should not be included on non-Windows platforms.
#endif
//-----------------------------------------------------------------------------
// Wrapper around a PSID.
// This class does not own the memory.
//-----------------------------------------------------------------------------
class Sid
{
public:
// Initialize the Sid wrapper around an existing SID.
Sid(PSID pSid);
static bool Equals(const Sid & a, const Sid & b) { return Equals(a.m_pSid, b.m_pSid); }
static bool Equals(const Sid & a, PSID b) { return Equals(a.m_pSid, b); }
static bool Equals(PSID a, const Sid & b) { return Equals(a, b.m_pSid); }
static bool Equals(PSID a, PSID b);
PSID RawSid() { return m_pSid; }
protected:
// Pointer to Sid buffer. We don't own the data.
PSID m_pSid;
};
//-----------------------------------------------------------------------------
// Wrapper around a PSID with buffer.
//-----------------------------------------------------------------------------
class SidBuffer
{
public:
SidBuffer();
~SidBuffer();
// Get the underlying sid
Sid GetSid();
// Do we not have a sid? This will be true if init fails.
bool IsNull() { return m_pBuffer == NULL; }
// Go to definitions to see detailed comments
HRESULT InitFromProcessNoThrow(DWORD pid);
void InitFromProcess(DWORD pid); // throws
HRESULT InitFromProcessUserNoThrow(DWORD pid);
void InitFromProcessUser(DWORD pid); // throws
HRESULT InitFromProcessAppContainerSidNoThrow(DWORD pid);
protected:
BYTE * m_pBuffer;
};
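// Usage sketch (illustrative, not part of the original header): resolve a
// process's owner SID without throwing.
//
//   SidBuffer buffer;
//   if (SUCCEEDED(buffer.InitFromProcessNoThrow(pid)) && !buffer.IsNull())
//   {
//       Sid sid = buffer.GetSid();
//       PSID raw = sid.RawSid();
//   }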
//-----------------------------------------------------------------------------
// Access Control List.
//-----------------------------------------------------------------------------
class Dacl
{
public:
Dacl(PACL pAcl);
SIZE_T GetAceCount();
ACE_HEADER * GetAce(SIZE_T dwAceIndex);
protected:
PACL m_acl;
};
//-----------------------------------------------------------------------------
// Represent a win32 SECURITY_DESCRIPTOR object.
// (Note there's a "SecurityDescriptor" class in the VM for managed goo,
// so we prefix this with "Win32" to avoid a naming collision.)
//-----------------------------------------------------------------------------
class Win32SecurityDescriptor
{
public:
Win32SecurityDescriptor();
~Win32SecurityDescriptor();
HRESULT InitFromHandleNoThrow(HANDLE h);
void InitFromHandle(HANDLE h); // throws
// Gets the owner SID from this SecurityDescriptor.
HRESULT GetOwnerNoThrow( PSID* ppSid );
Sid GetOwner(); // throws
Dacl GetDacl(); // throws
protected:
PSECURITY_DESCRIPTOR m_pDesc;
};
#endif // _SECURITY_WRAPPER_H
| -1 |
dotnet/runtime | 66,213 | [mono] Put WeakAttribute support under an ifdef | This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | lambdageek | 2022-03-04T19:50:51Z | 2022-03-08T14:08:40Z | 34880aaeb463cc17268b44aecb80dda0914cfa01 | 8342a08b8bf0ff2ffb64cd5992e2d60c8d6d63c1 | [mono] Put WeakAttribute support under an ifdef. This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | ./src/tests/Interop/COM/NativeServer/InspectableTesting.h | // Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
#pragma once
#include "Servers.h"
class InspectableTesting : public UnknownImpl, public IInspectableTesting, public IInspectableTesting2
{
public: // IInspectableTesting2
DEF_FUNC(Add)(
/*[in]*/ int a,
/*[in]*/ int b,
/*[out] [retval] */ int* retVal)
{
*retVal = a + b;
return S_OK;
}
public: // IInspectable
STDMETHOD(GetIids)(
/* [out] */ ULONG *iidCount,
/* [size_is][size_is][out] */ IID **iids)
{
return E_NOTIMPL;
}
STDMETHOD(GetRuntimeClassName)(
/* [out] */ HSTRING *className)
{
*className = nullptr; // assign through the out parameter, not to the local pointer
return S_OK;
}
STDMETHOD(GetTrustLevel)(
/* [out] */ TrustLevel *trustLevel)
{
*trustLevel = TrustLevel::FullTrust;
return S_OK;
}
public: // IUnknown
STDMETHOD(QueryInterface)(
/* [in] */ REFIID riid,
/* [iid_is][out] */ _COM_Outptr_ void __RPC_FAR *__RPC_FAR *ppvObject)
{
return DoQueryInterface(riid, ppvObject, static_cast<IInspectableTesting *>(this), static_cast<IInspectableTesting2 *>(this), static_cast<IInspectable*>(this));
}
DEFINE_REF_COUNTING();
};
| // Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
#pragma once
#include "Servers.h"
class InspectableTesting : public UnknownImpl, public IInspectableTesting, public IInspectableTesting2
{
public: // IInspectableTesting2
DEF_FUNC(Add)(
/*[in]*/ int a,
/*[in]*/ int b,
/*[out] [retval] */ int* retVal)
{
*retVal = a + b;
return S_OK;
}
public: // IInspectable
STDMETHOD(GetIids)(
/* [out] */ ULONG *iidCount,
/* [size_is][size_is][out] */ IID **iids)
{
return E_NOTIMPL;
}
STDMETHOD(GetRuntimeClassName)(
/* [out] */ HSTRING *className)
{
*className = nullptr; // assign through the out parameter, not to the local pointer
return S_OK;
}
STDMETHOD(GetTrustLevel)(
/* [out] */ TrustLevel *trustLevel)
{
*trustLevel = TrustLevel::FullTrust;
return S_OK;
}
public: // IUnknown
STDMETHOD(QueryInterface)(
/* [in] */ REFIID riid,
/* [iid_is][out] */ _COM_Outptr_ void __RPC_FAR *__RPC_FAR *ppvObject)
{
return DoQueryInterface(riid, ppvObject, static_cast<IInspectableTesting *>(this), static_cast<IInspectableTesting2 *>(this), static_cast<IInspectable*>(this));
}
DEFINE_REF_COUNTING();
};
| -1 |
dotnet/runtime | PR 66213 | [mono] Put WeakAttribute support under an ifdef | lambdageek | merged 2022-03-08
filepath: ./src/coreclr/debug/inc/dbgipcevents.h
// Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
/* ------------------------------------------------------------------------- *
* DbgIPCEvents.h -- header file for private Debugger data shared by various
* debugger components.
* ------------------------------------------------------------------------- */
#ifndef _DbgIPCEvents_h_
#define _DbgIPCEvents_h_
#include <new.hpp>
#include <cor.h>
#include <cordebug.h>
#include <corjit.h> // for ICorDebugInfo::VarLocType & VarLoc
#include <specstrings.h>
#include "dbgtargetcontext.h"
// Get version numbers for IPCHeader stamp
#include "clrversion.h"
#include "dbgappdomain.h"
#include "./common.h"
//-----------------------------------------------------------------------------
// V3 additions to IPC protocol between LS and RS.
//-----------------------------------------------------------------------------
// Special Exception code for LS to communicate with RS.
// LS will raise this exception to communicate managed debug events to the RS.
// Exception codes can't use bit 0x10000000, that's reserved by OS.
#define CLRDBG_NOTIFICATION_EXCEPTION_CODE ((DWORD) 0x04242420)
// This is exception argument 0 included in debugger notification events.
// The debugger uses this as a sanity check.
// This could be very volatile data that changes between builds.
#define CLRDBG_EXCEPTION_DATA_CHECKSUM ((DWORD) 0x31415927)
// Reasons for hijack.
namespace EHijackReason
{
enum EHijackReason
{
kUnhandledException = 1,
kM2UHandoff = 2,
kFirstChanceSuspend = 3,
kGenericHijack = 4,
kMax
};
inline bool IsValid(EHijackReason value)
{
SUPPORTS_DAC;
return (value > 0) && (value < kMax);
}
}
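// [Editor's sketch, not part of the original header] Minimal illustration of
// the IsValid guard: reject a malformed hijack reason received from the other
// side before acting on it. OnHijackNotification is a hypothetical caller.
inline void OnHijackNotification(DWORD rawReason)
{
    EHijackReason::EHijackReason reason = (EHijackReason::EHijackReason) rawReason;
    if (!EHijackReason::IsValid(reason))
        return; // malformed notification; ignore rather than dispatch on it
    // ... dispatch on 'reason' (kUnhandledException, kM2UHandoff, ...) ...
}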
#define MAX_LOG_SWITCH_NAME_LEN 256
//-----------------------------------------------------------------------------
// Versioning note:
// This file describes the IPC communication protocol between the LS (mscorwks)
// and the RS (mscordbi). For Desktop builds, it is private and can change on a
// daily basis. The version of the LS will always match the version of the RS
// (but see the discussion of CoreCLR below). They are like a single conceptual
// DLL split across 2 processes.
// The only restriction is that it should be flavor agnostic - so don't change
// layout based off '#ifdef DEBUG'. This lets us drop a Debug flavor RS onto
// a retail installation w/o any further installation woes. That's very useful
// for debugging.
//-----------------------------------------------------------------------------
// We want this available for DbgInterface.h - put it here.
typedef enum
{
IPC_TARGET_OUTOFPROC,
IPC_TARGET_COUNT,
} IpcTarget;
//
// Names of the setup sync event and shared memory used for IPC between the Left Side and the Right Side. NOTE: these
// names must include a %d for the process id. The process id used is the process id of the debuggee.
//
#define CorDBIPCSetupSyncEventName W("CorDBIPCSetupSyncEvent_%d")
//
// This define controls whether we always pass first chance exceptions to the in-process first chance hijack filter
// during interop debugging or if we try to short-circuit and make the decision out-of-process as much as possible.
//
#define CorDB_Short_Circuit_First_Chance_Ownership 1
//
// Defines for current version numbers for the left and right sides
//
#define CorDB_LeftSideProtocolCurrent 2
#define CorDB_LeftSideProtocolMinSupported 2
#define CorDB_RightSideProtocolCurrent 2
#define CorDB_RightSideProtocolMinSupported 2
//
// The remaining data structures in this file can be shared between two processes and for network transport
// based debugging this can mean two different platforms as well. The two platforms that can share these
// data structures must have identical layouts for them (each field must lie at the same offset and have the
// same length). The MSLAYOUT macro should be applied to each structure to avoid any compiler packing differences.
//
//
// DebuggerIPCRuntimeOffsets contains addresses and offsets of important global variables, functions, and fields in
// Runtime objects. This is populated during Left Side initialization and is read by the Right Side. This struct is
// mostly to facilitate unmanaged debugging support, but it may have some small uses for managed debugging.
//
struct MSLAYOUT DebuggerIPCRuntimeOffsets
{
#ifdef FEATURE_INTEROP_DEBUGGING
void *m_genericHijackFuncAddr;
void *m_signalHijackStartedBPAddr;
void *m_excepForRuntimeHandoffStartBPAddr;
void *m_excepForRuntimeHandoffCompleteBPAddr;
void *m_signalHijackCompleteBPAddr;
void *m_excepNotForRuntimeBPAddr;
void *m_notifyRSOfSyncCompleteBPAddr;
DWORD m_debuggerWordTLSIndex; // The TLS slot for the debugger word used in the debugger hijack functions
#endif // FEATURE_INTEROP_DEBUGGING
SIZE_T m_TLSIndex; // The TLS index of the thread-local storage for coreclr.dll
SIZE_T m_TLSEEThreadOffset; // TLS Offset of the Thread pointer.
SIZE_T m_TLSIsSpecialOffset; // TLS Offset of the "IsSpecial" status for a thread.
SIZE_T m_TLSCantStopOffset; // TLS Offset of the Can't-Stop count.
SIZE_T m_EEThreadStateOffset; // Offset of m_state in a Thread
SIZE_T m_EEThreadStateNCOffset; // Offset of m_stateNC in a Thread
SIZE_T m_EEThreadPGCDisabledOffset; // Offset of the bit for whether PGC is disabled or not in a Thread
DWORD m_EEThreadPGCDisabledValue; // Value at m_EEThreadPGCDisabledOffset that equals "PGC disabled".
SIZE_T m_EEThreadFrameOffset; // Offset of the Frame ptr in a Thread
SIZE_T m_EEThreadMaxNeededSize; // Max memory to read to get what we need out of a Thread object
DWORD m_EEThreadSteppingStateMask; // Mask for Thread::TSNC_DebuggerIsStepping
DWORD m_EEMaxFrameValue; // The max Frame value
SIZE_T m_EEThreadDebuggerFilterContextOffset; // Offset of debugger's filter context within a Thread Object.
SIZE_T m_EEFrameNextOffset; // Offset of the next ptr in a Frame
DWORD m_EEIsManagedExceptionStateMask; // Mask for Thread::TSNC_DebuggerIsManagedException
void *m_pPatches; // Addr of patch table
BOOL *m_pPatchTableValid; // Addr of g_patchTableValid
SIZE_T m_offRgData; // Offset of m_pcEntries
SIZE_T m_offCData; // Offset of count of m_pcEntries
SIZE_T m_cbPatch; // Size per patch entry
SIZE_T m_offAddr; // Offset within patch of target addr
SIZE_T m_offOpcode; // Offset within patch of target opcode
SIZE_T m_cbOpcode; // Max size of opcode
SIZE_T m_offTraceType; // Offset of the trace.type within a patch
DWORD m_traceTypeUnmanaged; // TRACE_UNMANAGED
DebuggerIPCRuntimeOffsets()
{
ZeroMemory(this, sizeof(DebuggerIPCRuntimeOffsets));
}
};
//
// The size of the send and receive IPC buffers.
// These must be big enough to fit a DebuggerIPCEvent. Also, the bigger they are, the fewer events
// it takes to send variable length stuff like the stack trace.
// But for perf reasons, they need to be small enough to not just push us over a page boundary in an IPC block.
// Unfortunately, there's a lot of other goo in the IPC block, so we can't use some clean formula. So we
// have to resort to just tuning things.
//
// When using a network transport rather than shared memory buffers CorDBIPC_BUFFER_SIZE is the upper bound
// for a single DebuggerIPCEvent structure. This now relates to the maximal size of a network message and is
// orthogonal to the host's page size. Because of this we defer definition of CorDBIPC_BUFFER_SIZE until we've
// declared DebuggerIPCEvent at the end of this header (and we can do so because in the transport case there
// aren't any embedded buffers in the DebuggerIPCControlBlock).
#if defined(TARGET_X86) || defined(TARGET_ARM)
#ifdef HOST_64BIT
#define CorDBIPC_BUFFER_SIZE 2104
#else
#define CorDBIPC_BUFFER_SIZE 2092
#endif
#else // !TARGET_X86 && !TARGET_ARM
// This is the size of a DebuggerIPCEvent. You will hit an assert in Cordb::Initialize() (di\rsmain.cpp)
// if this is not defined correctly. AMD64 actually has a page size of 0x1000, not 0x2000.
#define CorDBIPC_BUFFER_SIZE 4016 // (4016 + 6) * 2 + 148 = 8192 (two (DebuggerIPCEvent + alignment padding) + other fields = page size)
#endif // TARGET_X86 || TARGET_ARM
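// [Editor's sketch, not part of the original header] The 64-bit sizing
// comment above can be checked mechanically: two (buffer + 6 bytes of
// alignment padding) plus 148 bytes of other fields land exactly on 8192.
// The literals mirror the comment; they are illustrative, not authoritative.
static_assert_no_msg((4016 + 6) * 2 + 148 == 8192);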
//
// DebuggerIPCControlBlock describes the layout of the shared memory shared between the Left Side and the Right
// Side. This includes error information, handles for the IPC channel, and space for the send/receive buffers.
//
struct MSLAYOUT DebuggerIPCControlBlock
{
// Version data should be first in the control block to ensure that we can read it even if the control block
// changes.
SIZE_T m_DCBSize; // note this field is used as a semaphore to indicate the DCB is initialized
ULONG m_verMajor; // CLR build number for the Left Side.
ULONG m_verMinor; // CLR build number for the Left Side.
// This next stuff fits in a DWORD.
bool m_checkedBuild; // CLR build type for the Left Side.
// using the first padding byte to indicate if hosted in fiber mode.
// We actually just need one bit. So if needed, can turn this to a bit.
// BYTE padding1;
bool m_bHostingInFiber;
BYTE padding2;
BYTE padding3;
ULONG m_leftSideProtocolCurrent; // Current protocol version for the Left Side.
ULONG m_leftSideProtocolMinSupported; // Minimum protocol the Left Side can support.
ULONG m_rightSideProtocolCurrent; // Current protocol version for the Right Side.
ULONG m_rightSideProtocolMinSupported; // Minimum protocol the Right Side requires.
HRESULT m_errorHR;
unsigned int m_errorCode;
#if defined(TARGET_64BIT)
// 64-bit needs this padding to make the handles after this aligned.
// But x86 can't have this padding b/c it breaks binary compatibility between v1.1 and v2.0.
ULONG padding4;
#endif // TARGET_64BIT
RemoteHANDLE m_rightSideEventAvailable;
RemoteHANDLE m_rightSideEventRead;
// @dbgtodo inspection - this is where LSEA and LSER used to be. We need the padding to maintain binary compatibility.
// Eventually, we expect to remove this whole block.
RemoteHANDLE m_paddingObsoleteLSEA;
RemoteHANDLE m_paddingObsoleteLSER;
RemoteHANDLE m_rightSideProcessHandle;
//.............................................................................
// Everything above this point must have the exact same binary layout as v1.1.
// See protocol details below.
//.............................................................................
RemoteHANDLE m_leftSideUnmanagedWaitEvent;
// This is set immediately when the helper thread is created.
// This will be set even if there's a temporary helper thread or if the real helper
// thread is not yet pumping (eg, blocked on a loader lock).
DWORD m_realHelperThreadId;
// This is only published once the helper thread starts running in its main loop.
// Thus we can use this field to see if the real helper thread is actually pumping.
DWORD m_helperThreadId;
// This is non-zero if the LS has a temporary helper thread.
DWORD m_temporaryHelperThreadId;
// ID of the Helper's canary thread.
DWORD m_CanaryThreadId;
DebuggerIPCRuntimeOffsets *m_pRuntimeOffsets;
void *m_helperThreadStartAddr;
void *m_helperRemoteStartAddr;
DWORD *m_specialThreadList;
BYTE m_receiveBuffer[CorDBIPC_BUFFER_SIZE];
BYTE m_sendBuffer[CorDBIPC_BUFFER_SIZE];
DWORD m_specialThreadListLength;
bool m_shutdownBegun;
bool m_rightSideIsWin32Debugger; // RS status
bool m_specialThreadListDirty;
bool m_rightSideShouldCreateHelperThread;
// NOTE The Init method works since there are no virtual functions - don't add any virtual functions without
// changing this!
// Only initialized by the LS, opened by the RS.
HRESULT Init(
HANDLE rsea,
HANDLE rser,
HANDLE lsea,
HANDLE lser,
HANDLE lsuwe
);
};
#if defined(FEATURE_DBGIPC_TRANSPORT_VM) || defined(FEATURE_DBGIPC_TRANSPORT_DI)
// We need an alternate definition for the control block if using the transport, because the control block has to be sent over the transport
// In particular we can't nest the send/receive buffers inside of it and we don't use any of the remote handles
struct MSLAYOUT DebuggerIPCControlBlockTransport
{
// Version data should be first in the control block to ensure that we can read it even if the control block
// changes.
SIZE_T m_DCBSize; // note this field is used as a semaphore to indicate the DCB is initialized
ULONG m_verMajor; // CLR build number for the Left Side.
ULONG m_verMinor; // CLR build number for the Left Side.
// This next stuff fits in a DWORD.
bool m_checkedBuild; // CLR build type for the Left Side.
// using the first padding byte to indicate if hosted in fiber mode.
// We actually just need one bit. So if needed, can turn this to a bit.
// BYTE padding1;
bool m_bHostingInFiber;
BYTE padding2;
BYTE padding3;
ULONG m_leftSideProtocolCurrent; // Current protocol version for the Left Side.
ULONG m_leftSideProtocolMinSupported; // Minimum protocol the Left Side can support.
ULONG m_rightSideProtocolCurrent; // Current protocol version for the Right Side.
ULONG m_rightSideProtocolMinSupported; // Minimum protocol the Right Side requires.
HRESULT m_errorHR;
unsigned int m_errorCode;
#if defined(TARGET_64BIT)
// 64-bit needs this padding to make the handles after this aligned.
// But x86 can't have this padding b/c it breaks binary compatibility between v1.1 and v2.0.
ULONG padding4;
#endif // TARGET_64BIT
// This is set immediately when the helper thread is created.
// This will be set even if there's a temporary helper thread or if the real helper
// thread is not yet pumping (eg, blocked on a loader lock).
DWORD m_realHelperThreadId;
// This is only published once the helper thread starts running in its main loop.
// Thus we can use this field to see if the real helper thread is actually pumping.
DWORD m_helperThreadId;
// This is non-zero if the LS has a temporary helper thread.
DWORD m_temporaryHelperThreadId;
// ID of the Helper's canary thread.
DWORD m_CanaryThreadId;
DebuggerIPCRuntimeOffsets *m_pRuntimeOffsets;
void *m_helperThreadStartAddr;
void *m_helperRemoteStartAddr;
DWORD *m_specialThreadList;
DWORD m_specialThreadListLength;
bool m_shutdownBegun;
bool m_rightSideIsWin32Debugger; // RS status
bool m_specialThreadListDirty;
bool m_rightSideShouldCreateHelperThread;
// NOTE The Init method works since there are no virtual functions - don't add any virtual functions without
// changing this!
// Only initialized by the LS, opened by the RS.
HRESULT Init();
};
#endif // defined(FEATURE_DBGIPC_TRANSPORT_VM) || defined(FEATURE_DBGIPC_TRANSPORT_DI)
#if defined(FEATURE_DBGIPC_TRANSPORT_VM) || defined(FEATURE_DBGIPC_TRANSPORT_DI)
#include "dbgtransportsession.h"
#endif // defined(FEATURE_DBGIPC_TRANSPORT_VM) || defined(FEATURE_DBGIPC_TRANSPORT_DI)
#define INITIAL_APP_DOMAIN_INFO_LIST_SIZE 16
//-----------------------------------------------------------------------------
// Provide some Type-safety in the IPC block when we pass remote pointers around.
//-----------------------------------------------------------------------------
//-----------------------------------------------------------------------------
// This is the same in both the LS & RS.
// Definitions on the LS & RS should be binary compatible. So all storage is
// declared in GeneralLsPointer, and then the Ls & RS each have their own
// derived accessors.
//-----------------------------------------------------------------------------
class MSLAYOUT GeneralLsPointer
{
protected:
friend ULONG_PTR LsPtrToCookie(GeneralLsPointer p);
void * m_ptr;
public:
bool IsNull() { return m_ptr == NULL; }
};
class MSLAYOUT GeneralRsPointer
{
protected:
UINT m_data;
public:
bool IsNull() { return m_data == 0; }
};
// In some cases, we need to get a uuid from a pointer (ie, in a hash)
inline ULONG_PTR LsPtrToCookie(GeneralLsPointer p) {
return (ULONG_PTR) p.m_ptr;
}
#define VmPtrToCookie(vm) LsPtrToCookie((vm).ToLsPtr())
#ifdef RIGHT_SIDE_COMPILE
//-----------------------------------------------------------------------------
// Infrastructure for RS Definitions
//-----------------------------------------------------------------------------
// On the RS, we don't have the LS classes defined, so we can't templatize that
// in terms of <class T>, but we still want things to be unique.
// So we create an empty enum for each LS type and then templatize it in terms
// of the enum.
template <typename T>
class MSLAYOUT LsPointer : public GeneralLsPointer
{
public:
void Set(void * p)
{
m_ptr = p;
}
void * UnsafeGet()
{
return m_ptr;
}
static LsPointer<T> NullPtr()
{
return MakePtr(NULL);
}
static LsPointer<T> MakePtr(T* p)
{
#ifdef _PREFAST_
#pragma warning(push)
#pragma warning(disable:6001) // PREfast warning: Using uninitialized memory 't'
#endif // _PREFAST_
LsPointer<T> t;
t.Set(p);
return t;
#ifdef _PREFAST_
#pragma warning(pop)
#endif // _PREFAST_
}
bool operator!= (void * p) { return m_ptr != p; }
bool operator== (void * p) { return m_ptr == p; }
bool operator==(LsPointer<T> p) { return p.m_ptr == this->m_ptr; }
// We should never UnWrap() them in the RS, so we don't define that here.
};
class CordbProcess;
template <class T> UINT AllocCookie(CordbProcess * pProc, T * p);
template <class T> T * UnwrapCookie(CordbProcess * pProc, UINT cookie);
UINT AllocCookieCordbEval(CordbProcess * pProc, class CordbEval * p);
class CordbEval * UnwrapCookieCordbEval(CordbProcess * pProc, UINT cookie);
template <class CordbEval> UINT AllocCookie(CordbProcess * pProc, CordbEval * p)
{
return AllocCookieCordbEval(pProc, p);
}
template <class CordbEval> CordbEval * UnwrapCookie(CordbProcess * pProc, UINT cookie)
{
return UnwrapCookieCordbEval(pProc, cookie);
}
// This is how the RS sees the pointers in the IPC block.
template<class T>
class MSLAYOUT RsPointer : public GeneralRsPointer
{
public:
// Since we're being used inside a union, we can't have a ctor.
static RsPointer<T> NullPtr()
{
RsPointer<T> t;
t.m_data = 0;
return t;
}
bool AllocHandle(CordbProcess *pProc, T* p)
{
// This will force validation.
m_data = AllocCookie<T>(pProc, p);
return (m_data != 0);
}
bool operator==(RsPointer<T> p) { return p.m_data == this->m_data; }
T* UnWrapAndRemove(CordbProcess *pProc)
{
return UnwrapCookie<T>(pProc, m_data);
}
protected:
};
// Forward declare a class so that each type of LS pointer can have
// its own type. We use the real class name to be compatible with VMPTRs.
#define DEFINE_LSPTR_TYPE(ls_type, ptr_name) \
ls_type; \
typedef LsPointer<ls_type> ptr_name;
#define DEFINE_RSPTR_TYPE(rs_type, ptr_name) \
class rs_type; \
typedef RsPointer<rs_type> ptr_name;
#else // !RIGHT_SIDE_COMPILE
//-----------------------------------------------------------------------------
// Infrastructure for LS Definitions
//-----------------------------------------------------------------------------
// This is how the LS sees the pointers in the IPC block.
template<typename T>
class MSLAYOUT LsPointer : public GeneralLsPointer
{
public:
// Since we're being used inside a union, we can't have a ctor.
//LsPointer() { }
static LsPointer<T> NullPtr()
{
return MakePtr(NULL);
}
static LsPointer<T> MakePtr(T * p)
{
#ifdef _PREFAST_
#pragma warning(push)
#pragma warning(disable:6001) // PREfast warning: Using uninitialized memory 't'
#endif // _PREFAST_
LsPointer<T> t;
t.Set(p);
return t;
#ifdef _PREFAST_
#pragma warning(pop)
#endif // _PREFAST_
}
bool operator!= (void * p) { return m_ptr != p; }
bool operator== (void * p) { return m_ptr == p; }
bool operator==(LsPointer<T> p) { return p.m_ptr == this->m_ptr; }
// @todo - we want to be able to swap out Set + Unwrap functions
void Set(T * p)
{
SUPPORTS_DAC;
// We could validate the pointer here.
m_ptr = p;
}
T * UnWrap()
{
// If we wanted to validate the pointer, here's our chance.
return static_cast<T*>(m_ptr);
}
};
template <class n>
class MSLAYOUT RsPointer : public GeneralRsPointer
{
public:
static RsPointer<n> NullPtr()
{
RsPointer<n> t;
t.m_data = 0;
return t;
}
bool operator==(RsPointer<n> p) { return p.m_data == this->m_data; }
// We should never UnWrap() them in the LS, so we don't define that here.
};
#define DEFINE_LSPTR_TYPE(ls_type, ptr_name) \
ls_type; \
typedef LsPointer<ls_type> ptr_name;
#define DEFINE_RSPTR_TYPE(rs_type, ptr_name) \
enum __RS__##rs_type { }; \
typedef RsPointer<__RS__##rs_type> ptr_name;
#endif // !RIGHT_SIDE_COMPILE
// We must be binary compatible w/ a pointer.
static_assert_no_msg(sizeof(LsPointer<void>) == sizeof(GeneralLsPointer));
static_assert_no_msg(sizeof(void*) == sizeof(GeneralLsPointer));
//-----------------------------------------------------------------------------
// Definitions for Left-Side ptrs.
// NOTE: Use VMPTR instead of LSPTR. Don't add new LSPTR types.
//
//-----------------------------------------------------------------------------
DEFINE_LSPTR_TYPE(class Assembly, LSPTR_ASSEMBLY);
DEFINE_LSPTR_TYPE(class DebuggerJitInfo, LSPTR_DJI);
DEFINE_LSPTR_TYPE(class DebuggerMethodInfo, LSPTR_DMI);
DEFINE_LSPTR_TYPE(class MethodDesc, LSPTR_METHODDESC);
DEFINE_LSPTR_TYPE(class DebuggerBreakpoint, LSPTR_BREAKPOINT);
DEFINE_LSPTR_TYPE(class DebuggerDataBreakpoint, LSPTR_DATA_BREAKPOINT);
DEFINE_LSPTR_TYPE(class DebuggerEval, LSPTR_DEBUGGEREVAL);
DEFINE_LSPTR_TYPE(class DebuggerStepper, LSPTR_STEPPER);
// Need to be careful not to annoy the compiler here since DT_CONTEXT is a typedef, not a struct.
#if defined(RIGHT_SIDE_COMPILE)
typedef LsPointer<DT_CONTEXT> LSPTR_CONTEXT;
#else // RIGHT_SIDE_COMPILE
typedef LsPointer<DT_CONTEXT> LSPTR_CONTEXT;
#endif // RIGHT_SIDE_COMPILE
DEFINE_LSPTR_TYPE(struct OBJECTHANDLE__, LSPTR_OBJECTHANDLE);
DEFINE_LSPTR_TYPE(class TypeHandleDummyPtr, LSPTR_TYPEHANDLE); // TypeHandle in the LS is not a direct pointer.
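// [Editor's sketch, not part of the original header] A Left-Side-only round
// trip through an LSPTR: wrap a raw pointer into the opaque handle that
// travels through the IPC block, then unwrap it again locally. UnWrap is
// deliberately unavailable in RIGHT_SIDE_COMPILE builds (see above).
#ifndef RIGHT_SIDE_COMPILE
inline void ExampleLsPtrRoundTrip(DebuggerEval * pEval)
{
    LSPTR_DEBUGGEREVAL lsptrEval = LSPTR_DEBUGGEREVAL::MakePtr(pEval);
    _ASSERTE(!lsptrEval.IsNull());
    _ASSERTE(lsptrEval.UnWrap() == pEval);
}
#endif // !RIGHT_SIDE_COMPILE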
//-----------------------------------------------------------------------------
// Definitions for Right-Side ptrs.
//-----------------------------------------------------------------------------
DEFINE_RSPTR_TYPE(CordbEval, RSPTR_CORDBEVAL);
//---------------------------------------------------------------------------------------
// VMPTR_Base is the base type for an abstraction over pointers into the VM so
// that DBI can treat them as opaque handles. Classes will derive from it to
// provide type-safe Target pointers, which ICD will view as opaque handles.
//
// Lifetimes:
// VMPTR_ objects survive across flushing the DAC cache. Therefore, the underlying
// storage must be a target-pointer (and not a marshalled host pointer).
// The RS must ensure they're still in sync with the LS (eg, by
// tracking unload events).
//
//
// Assumptions:
// These handles are TADDR pointers and must not require any cleanup from DAC/DBI.
// For direct untyped pointers into the VM, use CORDB_ADDRESS.
//
// Notes:
// 1. This helps enforce that DBI goes through the primitives interface
// for all access (and that it doesn't accidentally start calling
// dac-ized methods on the objects)
// 2. This isolates DBI from VM headers.
// 3. This isolates DBI from the dac implementation (of DAC_Ptr)
// 4. This is distinct from LSPTR because LSPTRs are truly opaque handles, whereas VMPtrs
// move across VM, DAC, and DBI, exposing proper functionality in each component.
// 5. VMPTRs are blittable because they are Target Addresses which act as opaque
// handles outside of the Target / Dac-marshaller.
//
//---------------------------------------------------------------------------------------
template <typename TTargetPtr, typename TDacPtr>
class MSLAYOUT VMPTR_Base
{
// Underlying pointer into Target address space.
// Target pointers are blittable.
// - In Target: can be used as normal local pointers.
// - In DAC: must be marshalled to a host-pointer and then they can be used via DAC
// - In RS: opaque handles.
private:
TADDR m_addr;
public:
typedef VMPTR_Base<TTargetPtr,TDacPtr> VMPTR_This;
// For DBI, VMPTRs are opaque handles.
// But the DAC side is allowed to inspect the handles to get at the raw pointer.
#if defined(ALLOW_VMPTR_ACCESS)
//
// Case 1: Using in DAcDbi implementation
//
// DAC accessor
TDacPtr GetDacPtr() const
{
SUPPORTS_DAC;
return TDacPtr(m_addr);
}
// This will initialize the handle to a given target-pointer.
// We choose TADDR to make it explicit that it's a target pointer and avoid the risk
// of it accidentally getting marshalled to a host pointer.
void SetDacTargetPtr(TADDR addr)
{
SUPPORTS_DAC;
m_addr = addr;
}
void SetHostPtr(const TTargetPtr * pObject)
{
SUPPORTS_DAC;
m_addr = PTR_HOST_TO_TADDR(pObject);
}
#elif !defined(RIGHT_SIDE_COMPILE)
//
// Case 2: Used in Left-side. Can get/set from local pointers.
//
// This will set initialize from a Target pointer. Since this is happening in the
// Left-side (Target), the pointer is local.
// This is commonly used by the Left-side to create a VMPTR_ for a notification event.
void SetRawPtr(TTargetPtr * ptr)
{
m_addr = reinterpret_cast<TADDR>(ptr);
}
// This will get the raw underlying target pointer.
// This can be used by inproc Left-side code to unwrap a VMPTR (Eg, for a func-eval
// hijack or in-proc worker threads)
TTargetPtr * GetRawPtr()
{
return reinterpret_cast<TTargetPtr*>(m_addr);
}
// Convenience for converting TTargetPtr --> VMPTR
static VMPTR_This MakePtr(TTargetPtr * ptr)
{
#ifdef _PREFAST_
#pragma warning(push)
#pragma warning(disable:6001) // PREfast warning: Using uninitialized memory 't'
#endif // _PREFAST_
VMPTR_This t;
t.SetRawPtr(ptr);
return t;
#ifdef _PREFAST_
#pragma warning(pop)
#endif // _PREFAST_
}
#else
//
// Case 3: Used in RS. Opaque handles only.
//
#endif
#ifndef DACCESS_COMPILE
// For compatibility, these can be converted to LSPTRs on the RS or LS (case 2 and 3). We don't allow
// this in the DAC case because it's a cast between address spaces which we're trying to eliminate
// in the DAC code.
// @dbgtodo inspection: LSPTRs will go away entirely once we've moved completely over to DAC
LsPointer<TTargetPtr> ToLsPtr()
{
return LsPointer<TTargetPtr>::MakePtr( reinterpret_cast<TTargetPtr *>(m_addr));
}
#endif
//
// Operators to emulate Pointer semantics.
//
bool IsNull() { SUPPORTS_DAC; return m_addr == NULL; }
static VMPTR_This NullPtr()
{
SUPPORTS_DAC;
#ifdef _PREFAST_
#pragma warning(push)
#pragma warning(disable:6001) // PREfast warning: Using uninitialized memory 't'
#endif // _PREFAST_
VMPTR_This dummy;
dummy.m_addr = NULL;
return dummy;
#ifdef _PREFAST_
#pragma warning(pop)
#endif // _PREFAST_
}
bool operator!= (VMPTR_This vmOther) const { SUPPORTS_DAC; return this->m_addr != vmOther.m_addr; }
bool operator== (VMPTR_This vmOther) const { SUPPORTS_DAC; return this->m_addr == vmOther.m_addr; }
};
#if defined(ALLOW_VMPTR_ACCESS)
// Helper macro to define a VMPTR.
// This is used in the DAC case, so this definition connects the pointers up to their DAC values.
#define DEFINE_VMPTR(ls_type, dac_ptr_type, ptr_name) \
ls_type; \
typedef VMPTR_Base<ls_type, dac_ptr_type> ptr_name;
#else
// Helper macro to define a VMPTR.
// This is used in the Right-side and Left-side (but not DAC) case.
// This definition explicitly ignores dac_ptr_type to prevent accidental DAC usage.
#define DEFINE_VMPTR(ls_type, dac_ptr_type, ptr_name) \
ls_type; \
typedef VMPTR_Base<ls_type, void> ptr_name;
#endif
// Declare VMPTRs.
// The naming convention for instantiating a VMPTR is a 'vm' prefix.
//
// VM definition, DAC definition, pretty name for VMPTR
DEFINE_VMPTR(class AppDomain, PTR_AppDomain, VMPTR_AppDomain);
// Need to be careful not to annoy the compiler here since DT_CONTEXT is a typedef, not a struct.
// DEFINE_VMPTR(struct _CONTEXT, PTR_CONTEXT, VMPTR_CONTEXT);
#if defined(ALLOW_VMPTR_ACCESS)
typedef VMPTR_Base<DT_CONTEXT, PTR_CONTEXT> VMPTR_CONTEXT;
#else
typedef VMPTR_Base<DT_CONTEXT, void > VMPTR_CONTEXT;
#endif
// DomainAssembly is a base-class for a CLR module, with app-domain affinity.
// For domain-neutral modules (like CoreLib), there is a DomainAssembly instance
// for each appdomain the module lives in.
// This is the canonical handle ICorDebug uses to a CLR module.
DEFINE_VMPTR(class DomainAssembly, PTR_DomainAssembly, VMPTR_DomainAssembly);
DEFINE_VMPTR(class Module, PTR_Module, VMPTR_Module);
DEFINE_VMPTR(class Assembly, PTR_Assembly, VMPTR_Assembly);
DEFINE_VMPTR(class PEAssembly, PTR_PEAssembly, VMPTR_PEAssembly);
DEFINE_VMPTR(class MethodDesc, PTR_MethodDesc, VMPTR_MethodDesc);
DEFINE_VMPTR(class FieldDesc, PTR_FieldDesc, VMPTR_FieldDesc);
// ObjectHandle is a safe way to represent an object into the GC heap. It gets updated
// when a GC occurs.
DEFINE_VMPTR(struct OBJECTHANDLE__, TADDR, VMPTR_OBJECTHANDLE);
DEFINE_VMPTR(class TypeHandle, PTR_TypeHandle, VMPTR_TypeHandle);
// A VMPTR_Thread represents a thread that has entered the runtime at some point.
// It may or may not have executed managed code yet; and it may or may not have managed code
// on its callstack.
DEFINE_VMPTR(class Thread, PTR_Thread, VMPTR_Thread);
DEFINE_VMPTR(class Object, PTR_Object, VMPTR_Object);
DEFINE_VMPTR(class CrstBase, PTR_Crst, VMPTR_Crst);
DEFINE_VMPTR(class SimpleRWLock, PTR_SimpleRWLock, VMPTR_SimpleRWLock);
DEFINE_VMPTR(class SimpleRWLock, PTR_SimpleRWLock, VMPTR_RWLock);
DEFINE_VMPTR(struct ReJitInfo, PTR_ReJitInfo, VMPTR_ReJitInfo);
DEFINE_VMPTR(struct SharedReJitInfo, PTR_SharedReJitInfo, VMPTR_SharedReJitInfo);
DEFINE_VMPTR(class NativeCodeVersionNode, PTR_NativeCodeVersionNode, VMPTR_NativeCodeVersionNode);
DEFINE_VMPTR(class ILCodeVersionNode, PTR_ILCodeVersionNode, VMPTR_ILCodeVersionNode);
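// [Editor's sketch, not part of the original header] Left-Side-only (case 2
// above) creation and use of a VMPTR; on the Right Side the same handle stays
// opaque, and under ALLOW_VMPTR_ACCESS it would be marshalled via GetDacPtr.
#if !defined(ALLOW_VMPTR_ACCESS) && !defined(RIGHT_SIDE_COMPILE)
inline void ExampleVmPtrUsage(Thread * pThread)
{
    VMPTR_Thread vmThread = VMPTR_Thread::MakePtr(pThread);
    if (vmThread.IsNull())
        return; // nothing to report
    _ASSERTE(vmThread.GetRawPtr() == pThread);
    _ASSERTE(vmThread != VMPTR_Thread::NullPtr());
}
#endif // !ALLOW_VMPTR_ACCESS && !RIGHT_SIDE_COMPILE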
typedef CORDB_ADDRESS GENERICS_TYPE_TOKEN;
//-----------------------------------------------------------------------------
// We pass some fixed size strings in the IPC block.
// Helper class to wrap the buffer and protect against buffer overflows.
// This should be binary compatible w/ a wchar[] array.
//-----------------------------------------------------------------------------
template <int nMaxLengthIncludingNull>
class MSLAYOUT EmbeddedIPCString
{
public:
// Set the string. It is the caller's responsibility to ensure wcslen(pData) < nMaxLengthIncludingNull.
void SetString(const WCHAR * pData)
{
// If the string doesn't fit into the buffer, that's an issue (and so this is a real
// assert, not just a simplifying assumption). To fix it, either:
// - make the buffer larger
// - don't pass the string as an embedded string in the IPC block.
// This will truncate (rather than AV on the RS).
int ret;
ret = SafeCopy(pData);
// See comment above - caller should guarantee that buffer is large enough.
_ASSERTE(ret != STRUNCATE);
}
// Set a string from a substring. This will truncate if necessary.
void SetStringTruncate(const WCHAR * pData)
{
// ignore return value because truncation is ok.
SafeCopy(pData);
}
const WCHAR * GetString()
{
// Force null-termination, just in case an issue in the debuggee process
// yields a malformed string.
m_data[nMaxLengthIncludingNull - 1] = W('\0');
return &m_data[0];
}
int GetMaxSize() const { return nMaxLengthIncludingNull; }
private:
int SafeCopy(const WCHAR * pData)
{
return wcsncpy_s(
m_data, nMaxLengthIncludingNull,
pData, _TRUNCATE);
}
WCHAR m_data[nMaxLengthIncludingNull];
};
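// [Editor's sketch, not part of the original header] ExamplePayload and
// FillExamplePayload are hypothetical; they show the intended pattern: a
// fixed-size embedded string field in a marshallable struct, filled with the
// truncating setter when the input length is not guaranteed by the caller.
struct MSLAYOUT ExamplePayload
{
    EmbeddedIPCString<64> m_name; // 64 WCHARs including the terminator
};

inline void FillExamplePayload(ExamplePayload * pPayload, const WCHAR * pUntrustedName)
{
    pPayload->m_name.SetStringTruncate(pUntrustedName); // never overruns
    const WCHAR * pSafe = pPayload->m_name.GetString(); // re-terminated defensively
    (void) pSafe;
}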
//
// Types of events that can be sent between the Runtime Controller and
// the Debugger Interface. Some of these events are one way only, while
// others go both ways. The grouping of the event numbers is an attempt
// to show this distinction and perhaps even allow generic operations
// based on the type of the event.
//
enum DebuggerIPCEventType
{
#define IPC_EVENT_TYPE0(type, val) type = val,
#define IPC_EVENT_TYPE1(type, val) type = val,
#define IPC_EVENT_TYPE2(type, val) type = val,
#include "dbgipceventtypes.h"
#undef IPC_EVENT_TYPE2
#undef IPC_EVENT_TYPE1
#undef IPC_EVENT_TYPE0
};
#ifdef _DEBUG
// This is a static debugging structure to help breaking at the right place.
// Debug only. This is to track the number of events that have happened so far.
// Users can choose to set a breakpoint based on the number of events.
// Variables are named as the event name with prefix m_iDebugCount. For example
// m_iDebugCount_DB_IPCE_BREAKPOINT is for event DB_IPCE_BREAKPOINT.
struct MSLAYOUT DebugEventCounter
{
// we don't need the event type 0
#define IPC_EVENT_TYPE0(type, val)
#define IPC_EVENT_TYPE1(type, val) int m_iDebugCount_##type;
#define IPC_EVENT_TYPE2(type, val) int m_iDebugCount_##type;
#include "dbgipceventtypes.h"
#undef IPC_EVENT_TYPE2
#undef IPC_EVENT_TYPE1
#undef IPC_EVENT_TYPE0
};
#endif // _DEBUG
#if !defined(DACCESS_COMPILE)
struct MSLAYOUT IPCEventTypeNameMapping
{
DebuggerIPCEventType eventType;
const char * eventName;
};
extern const IPCEventTypeNameMapping DbgIPCEventTypeNames[];
extern const size_t nameCount;
struct MSLAYOUT IPCENames // We use a class/struct so that the function can remain in a shared header file
{
static DebuggerIPCEventType GetEventType(_In_z_ char * strEventType)
{
// Pass in the string of the event name and find the matching enum value.
// This is a linear search, which is pretty slow. However, this is only used
// at startup time, when debug asserts are turned on and a registry key is set. So it is not that bad.
//
for (size_t i = 0; i < nameCount; i++)
{
if (_stricmp(DbgIPCEventTypeNames[i].eventName, strEventType) == 0)
return DbgIPCEventTypeNames[i].eventType;
}
return DB_IPCE_INVALID_EVENT;
}
static const char * GetName(DebuggerIPCEventType eventType)
{
enum DbgIPCEventTypeNum
{
#define IPC_EVENT_TYPE0(type, val) type##_Num,
#define IPC_EVENT_TYPE1(type, val) type##_Num,
#define IPC_EVENT_TYPE2(type, val) type##_Num,
#include "dbgipceventtypes.h"
#undef IPC_EVENT_TYPE2
#undef IPC_EVENT_TYPE1
#undef IPC_EVENT_TYPE0
};
size_t i, lim;
if (eventType < DB_IPCE_DEBUGGER_FIRST)
{
i = DB_IPCE_RUNTIME_FIRST_Num + 1;
lim = DB_IPCE_DEBUGGER_FIRST_Num;
}
else
{
i = DB_IPCE_DEBUGGER_FIRST_Num + 1;
lim = nameCount;
}
for (/**/; i < lim; i++)
{
if (DbgIPCEventTypeNames[i].eventType == eventType)
return DbgIPCEventTypeNames[i].eventName;
}
return DbgIPCEventTypeNames[nameCount - 1].eventName;
}
};
#endif // !DACCESS_COMPILE
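// [Editor's sketch, not part of the original header] Round-tripping an event
// type through the name table; only meaningful where IPCENames is defined.
// The literal name is assumed to match the generated table entry.
#if !defined(DACCESS_COMPILE)
inline void ExampleEventNameLookup()
{
    // Name -> enum (used by the registry-driven debug breaks mentioned above).
    char szName[] = "DB_IPCE_BREAKPOINT";
    DebuggerIPCEventType type = IPCENames::GetEventType(szName);
    // Enum -> name (used for logging); unknown values fall back to the last
    // table entry rather than returning NULL.
    const char * pName = IPCENames::GetName(type);
    (void) pName;
}
#endif // !DACCESS_COMPILE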
//
// NOTE: CPU-specific values below!
//
// DebuggerREGDISPLAY is very similar to the EE REGDISPLAY structure. It holds
// register values that can be saved over calls for each frame in a stack
// trace.
//
// DebuggerIPCE_FloatCount is the number of doubles in the processor's
// floating point stack.
//
// <TODO>Note: We used to just pass the values of the registers for each frame to the Right Side, but I had to add in the
// address of each register, too, to support using enregistered variables on non-leaf frames as args to a func eval. It's
// very, very possible that we would rework the entire code base to just use the register's address instead of passing
// both, but it's way, way too late in V1 to undertake that, so I'm just using these addresses to support our one func
// eval case. Clearly, this needs to be cleaned up post V1.
//
// -- Fri Feb 09 11:21:24 2001</TODO>
//
struct MSLAYOUT DebuggerREGDISPLAY
{
#if defined(TARGET_X86)
#define DebuggerIPCE_FloatCount 8
SIZE_T Edi;
void *pEdi;
SIZE_T Esi;
void *pEsi;
SIZE_T Ebx;
void *pEbx;
SIZE_T Edx;
void *pEdx;
SIZE_T Ecx;
void *pEcx;
SIZE_T Eax;
void *pEax;
SIZE_T FP;
void *pFP;
SIZE_T SP;
SIZE_T PC;
#elif defined(TARGET_AMD64)
#define DebuggerIPCE_FloatCount 16
SIZE_T Rax;
void *pRax;
SIZE_T Rcx;
void *pRcx;
SIZE_T Rdx;
void *pRdx;
SIZE_T Rbx;
void *pRbx;
SIZE_T Rbp;
void *pRbp;
SIZE_T Rsi;
void *pRsi;
SIZE_T Rdi;
void *pRdi;
SIZE_T R8;
void *pR8;
SIZE_T R9;
void *pR9;
SIZE_T R10;
void *pR10;
SIZE_T R11;
void *pR11;
SIZE_T R12;
void *pR12;
SIZE_T R13;
void *pR13;
SIZE_T R14;
void *pR14;
SIZE_T R15;
void *pR15;
SIZE_T SP;
SIZE_T PC;
#elif defined(TARGET_ARM)
#define DebuggerIPCE_FloatCount 32
SIZE_T R0;
void *pR0;
SIZE_T R1;
void *pR1;
SIZE_T R2;
void *pR2;
SIZE_T R3;
void *pR3;
SIZE_T R4;
void *pR4;
SIZE_T R5;
void *pR5;
SIZE_T R6;
void *pR6;
SIZE_T R7;
void *pR7;
SIZE_T R8;
void *pR8;
SIZE_T R9;
void *pR9;
SIZE_T R10;
void *pR10;
SIZE_T R11;
void *pR11;
SIZE_T R12;
void *pR12;
SIZE_T SP;
void *pSP;
SIZE_T LR;
void *pLR;
SIZE_T PC;
void *pPC;
#elif defined(TARGET_ARM64)
#define DebuggerIPCE_FloatCount 32
SIZE_T X[29];
SIZE_T FP;
SIZE_T LR;
SIZE_T SP;
SIZE_T PC;
#else
#define DebuggerIPCE_FloatCount 1
SIZE_T PC;
SIZE_T FP;
SIZE_T SP;
void *pFP;
#endif
};
inline LPVOID GetSPAddress(const DebuggerREGDISPLAY * display)
{
return (LPVOID)&display->SP;
}
#if !defined(TARGET_AMD64) && !defined(TARGET_ARM)
inline LPVOID GetFPAddress(const DebuggerREGDISPLAY * display)
{
return (LPVOID)&display->FP;
}
#endif // !TARGET_AMD64 && !TARGET_ARM
class MSLAYOUT FramePointer
{
friend bool IsCloserToLeaf(FramePointer fp1, FramePointer fp2);
friend bool IsCloserToRoot(FramePointer fp1, FramePointer fp2);
friend bool IsEqualOrCloserToLeaf(FramePointer fp1, FramePointer fp2);
friend bool IsEqualOrCloserToRoot(FramePointer fp1, FramePointer fp2);
public:
static FramePointer MakeFramePointer(LPVOID sp)
{
LIMITED_METHOD_DAC_CONTRACT;
FramePointer fp;
fp.m_sp = sp;
return fp;
}
static FramePointer MakeFramePointer(UINT_PTR sp)
{
SUPPORTS_DAC;
return MakeFramePointer((LPVOID)sp);
}
inline bool operator==(FramePointer fp)
{
return (m_sp == fp.m_sp);
}
inline bool operator!=(FramePointer fp)
{
return !(*this == fp);
}
// This is needed because on the RS, the m_id values of CordbFrame and
// CordbChain are really FramePointers.
LPVOID GetSPValue() const
{
return m_sp;
}
private:
// Declare some private constructors which signatures matching common usage of FramePointer
// to prevent people from accidentally assigning a pointer to a FramePointer().
FramePointer &operator=(LPVOID sp);
FramePointer &operator=(BYTE* sp);
FramePointer &operator=(const BYTE* sp);
LPVOID m_sp;
};
// For non-IA64 platforms, we use stack pointers as frame pointers.
// (Stack grows towards smaller address.)
#define LEAF_MOST_FRAME FramePointer::MakeFramePointer((LPVOID)NULL)
#define ROOT_MOST_FRAME FramePointer::MakeFramePointer((LPVOID)-1)
static_assert_no_msg(sizeof(FramePointer) == sizeof(void*));
inline bool IsCloserToLeaf(FramePointer fp1, FramePointer fp2)
{
return (fp1.m_sp < fp2.m_sp);
}
inline bool IsCloserToRoot(FramePointer fp1, FramePointer fp2)
{
return (fp1.m_sp > fp2.m_sp);
}
inline bool IsEqualOrCloserToLeaf(FramePointer fp1, FramePointer fp2)
{
return !IsCloserToRoot(fp1, fp2);
}
inline bool IsEqualOrCloserToRoot(FramePointer fp1, FramePointer fp2)
{
return !IsCloserToLeaf(fp1, fp2);
}
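// [Editor's sketch, not part of the original header] With a downward-growing
// stack, a numerically smaller SP is closer to the leaf, and the two
// sentinels bound every real frame. The raw values are arbitrary.
inline void ExampleFramePointerOrdering()
{
    FramePointer fpLeafward = FramePointer::MakeFramePointer((UINT_PTR) 0x1000);
    FramePointer fpRootward = FramePointer::MakeFramePointer((UINT_PTR) 0x2000);
    _ASSERTE(IsCloserToLeaf(fpLeafward, fpRootward));
    _ASSERTE(IsCloserToLeaf(LEAF_MOST_FRAME, fpLeafward));
    _ASSERTE(IsCloserToRoot(ROOT_MOST_FRAME, fpRootward));
    _ASSERTE(IsEqualOrCloserToLeaf(fpLeafward, fpLeafward));
}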
// struct DebuggerIPCE_FuncData: DebuggerIPCE_FuncData holds data
// to describe a given function, its
// class, and a little bit about the code for the function. This is used
// in the stack trace result data to pass function information back that
// may be needed. It's also used when getting data about a specific function.
//
// void* nativeStartAddressPtr: Ptr to CORDB_ADDRESS, which is
// the address of the real start address of the native code.
// This field will be NULL only if the method hasn't been JITted
// yet (and thus no code is available). Otherwise, it will be
// the address of a CORDB_ADDRESS in the remote memory. This
// CORDB_ADDRESS may be NULL, in which case the code is unavailable
// or has been pitched (return CORDBG_E_CODE_NOT_AVAILABLE)
//
// SIZE_T nVersion: The version of the code that this instance of the
// function is using.
struct MSLAYOUT DebuggerIPCE_FuncData
{
mdMethodDef funcMetadataToken;
VMPTR_DomainAssembly vmDomainAssembly;
mdTypeDef classMetadataToken;
void* ilStartAddress;
SIZE_T ilSize;
SIZE_T currentEnCVersion;
mdSignature localVarSigToken;
};
// struct DebuggerIPCE_JITFuncData: DebuggerIPCE_JITFuncData holds
// a little bit about the JITted code for the function.
//
// void* nativeStartAddressPtr: Ptr to CORDB_ADDRESS, which is
// the address of the real start address of the native code.
// This field will be NULL only if the method hasn't been JITted
// yet (and thus no code is available). Otherwise, it will be
// the address of a CORDB_ADDRESS in the remote memory. This
// CORDB_ADDRESS may be NULL, in which case the code is unavailable
// or has been pitched (return CORDBG_E_CODE_NOT_AVAILABLE)
//
// SIZE_T nativeSize: Size of the native code.
//
// SIZE_T nativeOffset: Offset from the beginning of the function,
// in bytes. This may be non-zero even when nativeStartAddressPtr
// is NULL
// void * nativeCodeJITInfoToken: An opaque value to hand back to the left
// side when fetching the JITInfo for the native code, i.e. the
// IL->native maps for the variables. This may be NULL if no JITInfo is available.
// void * nativeCodeMethodDescToken: An opaque value to hand back to the left
// side when fetching the code. In addition this token can act as the
// unique identity for the native code in the case where there are
// multiple blobs of native code per IL method (i.e. if the method is
// generic code of some kind)
// BOOL isInstantiatedGeneric: Indicates if the method is
// generic code of some kind.
// BOOL jsutAfterILThrow: indicates that code just threw a software exception and
// nativeOffset points to an instruction just after [call IL_Throw].
// This is being used to figure out a real offset of the exception origin.
// By subtracting STACKWALK_CONTROLPC_ADJUST_OFFSET from nativeOffset you can get
// an address somewhere inside [call IL_Throw] instruction.
// void *ilToNativeMapAddr etc.: If nativeCodeJITInfoToken is not NULL then these
// specify the table giving the mapping of IPs.
struct MSLAYOUT DebuggerIPCE_JITFuncData
{
TADDR nativeStartAddressPtr;
SIZE_T nativeHotSize;
// If we have a cold region, need its size & the pointer to where starts.
TADDR nativeStartAddressColdPtr;
SIZE_T nativeColdSize;
SIZE_T nativeOffset;
LSPTR_DJI nativeCodeJITInfoToken;
VMPTR_MethodDesc vmNativeCodeMethodDescToken;
#ifdef FEATURE_EH_FUNCLETS
BOOL fIsFilterFrame;
SIZE_T parentNativeOffset;
FramePointer fpParentOrSelf;
#endif // FEATURE_EH_FUNCLETS
// indicates if the MethodDesc is a generic function or a method inside a generic class (or
// both!).
BOOL isInstantiatedGeneric;
// this is the version of the jitted code
SIZE_T enCVersion;
BOOL jsutAfterILThrow;
};
//
// DebuggerIPCE_STRData holds data for each stack frame or chain. This data is passed
// from the RC to the DI during a stack walk.
//
#if defined(_MSC_VER)
#pragma warning( push )
#pragma warning( disable:4324 ) // the compiler pads a structure to comply with alignment requirements
#endif // ARM context structures have a 16-byte alignment requirement
struct MSLAYOUT DebuggerIPCE_STRData
{
FramePointer fp;
// @dbgtodo stackwalker/shim- Ideally we should be able to get rid of the DebuggerREGDISPLAY and just use the CONTEXT.
DT_CONTEXT ctx;
DebuggerREGDISPLAY rd;
bool quicklyUnwound;
VMPTR_AppDomain vmCurrentAppDomainToken;
enum EType
{
cMethodFrame = 0,
cChain,
cStubFrame,
cRuntimeNativeFrame
} eType;
union MSLAYOUT
{
// Data for a chain
struct MSLAYOUT
{
CorDebugChainReason chainReason;
bool managed;
} u;
// Data for a Method
struct MSLAYOUT
{
struct DebuggerIPCE_FuncData funcData;
struct DebuggerIPCE_JITFuncData jitFuncData;
SIZE_T ILOffset;
CorDebugMappingResult mapping;
bool fVarArgs;
// Indicates whether the managed method has any metadata.
// Some dynamic methods such as IL stubs and LCG methods don't have any metadata.
// This is used only by the V3 stackwalker, not the V2 one, because we only
// expose dynamic methods as real stack frames in V3.
bool fNoMetadata;
TADDR taAmbientESP;
GENERICS_TYPE_TOKEN exactGenericArgsToken;
DWORD dwExactGenericArgsTokenIndex;
} v;
// Data for a Stub Frame.
struct MSLAYOUT
{
mdMethodDef funcMetadataToken;
VMPTR_DomainAssembly vmDomainAssembly;
VMPTR_MethodDesc vmMethodDesc;
CorDebugInternalFrameType frameType;
} stubFrame;
};
};
#if defined(_MSC_VER)
#pragma warning( pop )
#endif
//
// DebuggerIPCE_BasicTypeData and DebuggerIPCE_ExpandedTypeData
// hold data for each type sent across the
// boundary, whether it be a constructed type List<String> or a non-constructed
// type such as String, Foo or Object.
//
// Logically speaking DebuggerIPCE_BasicTypeData might just be "typeHandle", as
// we could then send further events to ask what the elementtype, typeToken and moduleToken
// are for the type handle. But as
// nearly all types are non-generic we send across even the basic type information in
// the slightly expanded form shown below, sending the element type and the
// tokens with the type handle itself. The fields debuggerModuleToken, metadataToken and typeHandle
// are only used as follows:
// elementType debuggerModuleToken metadataToken typeHandle
// E_T_INT8 : E_T_INT8 No No No
// Boxed E_T_INT8: E_T_CLASS No No No
// E_T_CLASS, non-generic class: E_T_CLASS Yes Yes No
// E_T_VALUETYPE, non-generic: E_T_VALUETYPE Yes Yes No
// E_T_CLASS, generic class: E_T_CLASS Yes Yes Yes
// E_T_VALUETYPE, generic class: E_T_VALUETYPE Yes Yes Yes
// E_T_BYREF : E_T_BYREF No No Yes
// E_T_PTR : E_T_PTR No No Yes
// E_T_ARRAY etc. : E_T_ARRAY No No Yes
// E_T_FNPTR etc. : E_T_FNPTR No No Yes
// This allows us to always set "typeHandle" to NULL except when dealing with highly nested
// types or function-pointer types (the latter are too complex to transfer over in one hit).
//
struct MSLAYOUT DebuggerIPCE_BasicTypeData
{
CorElementType elementType;
mdTypeDef metadataToken;
VMPTR_Module vmModule;
VMPTR_DomainAssembly vmDomainAssembly;
VMPTR_TypeHandle vmTypeHandle;
};
// DebuggerIPCE_ExpandedTypeData contains more information showing further
// details for array types, byref types etc.
// Whenever you fetch type information from the left-side
// you get back one of these. These in turn contain further
// DebuggerIPCE_BasicTypeData's and typeHandles which you can
// then query to get further information about the type parameters.
// This copes with the nested cases, e.g. jagged arrays,
// String ****, &(String*), Pair<String,Pair<String>>
// and so on.
//
// So this type information is not "fully expanded", it's just a little
// more detail than DebuggerIPCE_BasicTypeData. For type
// instantiations (e.g. List<int>) and
// function pointer types you will need to make further requests for
// information about the type parameters.
// For array types there is always only one type parameter so
// we include that as part of the expanded data.
//
//
struct MSLAYOUT DebuggerIPCE_ExpandedTypeData
{
CorElementType elementType; // Note this is _never_ E_T_VAR, E_T_WITH or E_T_MVAR
union MSLAYOUT
{
// used for E_T_CLASS and E_T_VALUECLASS, E_T_PTR, E_T_BYREF etc.
// For non-constructed E_T_CLASS or E_T_VALUECLASS the tokens will be set and the typeHandle will be NULL
// For constructed E_T_CLASS or E_T_VALUECLASS the tokens will be set and the typeHandle will be non-NULL
// For E_T_PTR etc. the tokens will be NULL and the typeHandle will be non-NULL.
struct MSLAYOUT
{
mdTypeDef metadataToken;
VMPTR_Module vmModule;
VMPTR_DomainAssembly vmDomainAssembly;
VMPTR_TypeHandle typeHandle; // if non-null then further fetches will be needed to get type arguments
} ClassTypeData;
// used for E_T_PTR, E_T_BYREF etc.
struct MSLAYOUT
{
DebuggerIPCE_BasicTypeData unaryTypeArg; // used only when sending back to debugger
} UnaryTypeData;
// used for E_T_ARRAY etc.
struct MSLAYOUT
{
DebuggerIPCE_BasicTypeData arrayTypeArg; // used only when sending back to debugger
DWORD arrayRank;
} ArrayTypeData;
// used for E_T_FNPTR
struct MSLAYOUT
{
VMPTR_TypeHandle typeHandle; // if non-null then further fetches needed to get type arguments
} NaryTypeData;
};
};
// DebuggerIPCE_TypeArgData is used when sending type arguments
// across to a funceval. It contains the DebuggerIPCE_ExpandedTypeData describing the
// essence of the type, but the typeHandle and other
// BasicTypeData fields should be zero and will be ignored.
// The DebuggerIPCE_ExpandedTypeData is then followed
// by the required number of type arguments, each of which
// will be a further DebuggerIPCE_TypeArgData record in the stream of
// flattened type argument data.
struct MSLAYOUT DebuggerIPCE_TypeArgData
{
DebuggerIPCE_ExpandedTypeData data;
unsigned int numTypeArgs; // number of immediate children on the type tree
};
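// [Editor's sketch, not part of the original header] Assumed pre-order
// flattening of the type tree for a constructed type like List<int>: the
// parent record is followed immediately by its type-argument records. Per
// the comment above, typeHandle and the other BasicTypeData fields are
// zeroed and ignored; a real sender would also fill in metadataToken and
// vmModule for the class record.
inline void ExampleFlattenListOfInt(DebuggerIPCE_TypeArgData * pStream)
{
    ZeroMemory(pStream, 2 * sizeof(DebuggerIPCE_TypeArgData));
    pStream[0].data.elementType = ELEMENT_TYPE_CLASS; // List`1 itself
    pStream[0].numTypeArgs      = 1;                  // one immediate child
    pStream[1].data.elementType = ELEMENT_TYPE_I4;    // the 'int' argument
    pStream[1].numTypeArgs      = 0;                  // leaf node
}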
//
// DebuggerIPCE_ObjectData holds the results of a
// GetAndSendObjectInfo, i.e., all the info about an object that the
// Right Side would need to access it. (This include array, string,
// and nstruct info.)
//
struct MSLAYOUT DebuggerIPCE_ObjectData
{
void *objRef;
bool objRefBad;
SIZE_T objSize;
// Offset from the beginning of the object to the beginning of the first field
SIZE_T objOffsetToVars;
// The type of the object....
struct DebuggerIPCE_ExpandedTypeData objTypeData;
union MSLAYOUT
{
struct MSLAYOUT
{
SIZE_T length;
SIZE_T offsetToStringBase;
} stringInfo;
struct MSLAYOUT
{
SIZE_T rank;
SIZE_T offsetToArrayBase;
SIZE_T offsetToLowerBounds; // 0 if not present
SIZE_T offsetToUpperBounds; // 0 if not present
SIZE_T componentCount;
SIZE_T elementSize;
} arrayInfo;
struct MSLAYOUT
{
struct DebuggerIPCE_BasicTypeData typedByrefType; // the type of the thing contained in a typedByref...
} typedByrefInfo;
};
};
//
// Remote enregistered info used by CordbValues and for passing
// variable homes between the left and right sides during a func eval.
//
enum RemoteAddressKind
{
RAK_NONE = 0,
RAK_REG,
RAK_REGREG,
RAK_REGMEM,
RAK_MEMREG,
RAK_FLOAT,
RAK_END
};
const CORDB_ADDRESS kLeafFrameRegAddr = 0;
const CORDB_ADDRESS kNonLeafFrameRegAddr = (CORDB_ADDRESS)(-1);
struct MSLAYOUT RemoteAddress
{
RemoteAddressKind kind;
void *frame;
CorDebugRegister reg1;
void *reg1Addr;
SIZE_T reg1Value; // this is the actual value of the register
union MSLAYOUT
{
struct MSLAYOUT
{
CorDebugRegister reg2;
void *reg2Addr;
SIZE_T reg2Value; // this is the actual value of the register
} u;
CORDB_ADDRESS addr;
DWORD floatIndex;
};
};
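// [Editor's sketch, not part of the original header] Describing a variable
// that lives entirely in a single register (RAK_REG). REGISTER_X86_EAX is
// just an example CorDebugRegister; a spilled register would instead carry a
// non-NULL reg1Addr pointing at the saved slot.
inline void ExampleRegisterHome(RemoteAddress * pHome, SIZE_T regValue)
{
    ZeroMemory(pHome, sizeof(RemoteAddress));
    pHome->kind      = RAK_REG;
    pHome->reg1      = REGISTER_X86_EAX;
    pHome->reg1Addr  = NULL;     // not spilled to memory in this example
    pHome->reg1Value = regValue; // the register's current value
}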
//
// DebuggerIPCE_FuncEvalType specifies the type of a function
// evaluation that will occur.
//
enum DebuggerIPCE_FuncEvalType
{
DB_IPCE_FET_NORMAL,
DB_IPCE_FET_NEW_OBJECT,
DB_IPCE_FET_NEW_OBJECT_NC,
DB_IPCE_FET_NEW_STRING,
DB_IPCE_FET_NEW_ARRAY
};
enum NameChangeType
{
APP_DOMAIN_NAME_CHANGE,
THREAD_NAME_CHANGE
};
//
// DebuggerIPCE_FuncEvalArgData holds data for each argument to a
// function evaluation.
//
struct MSLAYOUT DebuggerIPCE_FuncEvalArgData
{
RemoteAddress argHome; // enregistered variable home
void *argAddr; // address if not enregistered
CorElementType argElementType;
unsigned int fullArgTypeNodeCount; // Number of DebuggerIPCE_TypeArgData nodes in the fullArgType buffer (only needed for struct types)
void *fullArgType; // Pointer to LS (DebuggerIPCE_TypeArgData *) buffer holding full description of the argument type (if needed - only needed for struct types)
BYTE argLiteralData[8]; // copy of generic value data
bool argIsLiteral; // true if value is in argLiteralData
bool argIsHandleValue; // true if argAddr is OBJECTHANDLE
};
//
// DebuggerIPCE_FuncEvalInfo holds info necessary to setup a func eval
// operation.
//
struct MSLAYOUT DebuggerIPCE_FuncEvalInfo
{
VMPTR_Thread vmThreadToken;
DebuggerIPCE_FuncEvalType funcEvalType;
mdMethodDef funcMetadataToken;
mdTypeDef funcClassMetadataToken;
VMPTR_DomainAssembly vmDomainAssembly;
RSPTR_CORDBEVAL funcEvalKey;
bool evalDuringException;
unsigned int argCount;
unsigned int genericArgsCount;
unsigned int genericArgsNodeCount;
SIZE_T stringSize;
SIZE_T arrayRank;
};
//
// Used in DebuggerIPCFirstChanceData. This tells the LS what action to take within the hijack
//
enum HijackAction
{
HIJACK_ACTION_EXIT_UNHANDLED,
HIJACK_ACTION_EXIT_HANDLED,
HIJACK_ACTION_WAIT
};
//
// DebuggerIPCFirstChanceData holds info communicated from the LS to the RS when signaling that an exception does not
// belong to the runtime from a first chance hijack. This is used during Win32 (interop) debugging only.
//
struct MSLAYOUT DebuggerIPCFirstChanceData
{
LSPTR_CONTEXT pLeftSideContext;
HijackAction action;
UINT debugCounter;
};
//
// DebuggerIPCSecondChanceData holds info communicated from the RS
// to the LS when setting up a second chance exception hijack. This is
// used during Win32 (interop) debugging only.
//
struct MSLAYOUT DebuggerIPCSecondChanceData
{
DT_CONTEXT threadContext;
};
//-----------------------------------------------------------------------------
// This struct holds pointer from the LS and needs to copy to
// the RS. We have to free the memory on the RS.
// The transfer function is called when the RS first reads the event. At this point,
// the LS is stopped while sending the event. Thus the LS pointers only need to be
// valid while the LS is in SendIPCEvent.
//
// Since this data is in an IPC/Marshallable block, it can't have any Ctors (holders)
// in it.
//-----------------------------------------------------------------------------
struct MSLAYOUT Ls_Rs_BaseBuffer
{
#ifdef RIGHT_SIDE_COMPILE
protected:
// copy data can happen on both LS and RS. In LS case,
// ReadProcessMemory is really reading from its own process memory.
//
void CopyLSDataToRSWorker(ICorDebugDataTarget * pTargethProcess);
// retrieve the RS data and own it
BYTE *TransferRSDataWorker()
{
BYTE *pbRS = m_pbRS;
m_pbRS = NULL;
return pbRS;
}
public:
void CleanUp()
{
if (m_pbRS != NULL)
{
delete [] m_pbRS;
m_pbRS = NULL;
}
}
#else
public:
// Only LS can call this API
void SetLsData(BYTE *pbLS, DWORD cbSize)
{
m_pbRS = NULL;
m_pbLS = pbLS;
m_cbSize = cbSize;
}
#endif // RIGHT_SIDE_COMPILE
public:
// Common APIs.
DWORD GetSize() { return m_cbSize; }
protected:
// Size of data in bytes
DWORD m_cbSize;
// If this is non-null, pointer into LS for buffer.
// LS can free this after the debug event is continued.
BYTE *m_pbLS; // @dbgtodo cross-plat- for cross-platform purposes, this should be a TADDR
// If this is non-null, pointer into RS for buffer. RS must then free this.
// This buffer was copied from the LS (via CopyLSDataToRSWorker).
BYTE *m_pbRS;
};
//-----------------------------------------------------------------------------
// Byte wrapper around the buffer.
//-----------------------------------------------------------------------------
struct MSLAYOUT Ls_Rs_ByteBuffer : public Ls_Rs_BaseBuffer
{
#ifdef RIGHT_SIDE_COMPILE
BYTE *GetRSPointer()
{
return m_pbRS;
}
void CopyLSDataToRS(ICorDebugDataTarget * pTarget);
BYTE *TransferRSData()
{
return TransferRSDataWorker();
}
#endif
};
//-----------------------------------------------------------------------------
// Wrapper around an Ls_Rs_BaseBuffer to get it as a string.
// This can also do some sanity checking.
//-----------------------------------------------------------------------------
struct MSLAYOUT Ls_Rs_StringBuffer : public Ls_Rs_BaseBuffer
{
#ifdef RIGHT_SIDE_COMPILE
const WCHAR * GetString()
{
return reinterpret_cast<const WCHAR*> (m_pbRS);
}
// Copy over the string.
void CopyLSDataToRS(ICorDebugDataTarget * pTarget);
// Caller will pick up ownership.
// Since caller will delete this data, we can't give back a constant pointer.
WCHAR * TransferStringData()
{
return reinterpret_cast<WCHAR*> (TransferRSDataWorker());
}
#endif
};
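// [Editor's sketch, not part of the original header] Left-Side-only
// attachment of a string payload to an outgoing event: per the lifetime
// comment above, pString only has to stay valid until SendIPCEvent returns,
// after which the Right Side owns its own copy.
#ifndef RIGHT_SIDE_COMPILE
inline void ExampleAttachString(Ls_Rs_StringBuffer * pBuffer, WCHAR * pString)
{
    DWORD cbIncludingNull = (DWORD) ((wcslen(pString) + 1) * sizeof(WCHAR));
    pBuffer->SetLsData((BYTE *) pString, cbIncludingNull);
}
#endif // !RIGHT_SIDE_COMPILE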
// Data for a Managed Debug Assistant probe (MDA).
struct MSLAYOUT DebuggerMDANotification
{
Ls_Rs_StringBuffer szName;
Ls_Rs_StringBuffer szDescription;
Ls_Rs_StringBuffer szXml;
DWORD dwOSThreadId;
CorDebugMDAFlags flags;
};
// The only remaining problem is that register number mappings are different for each platform. It turns out
// that the debugger only uses REGNUM_SP and REGNUM_AMBIENT_SP though, so we can just virtualize these two for
// the target platform.
// Keep this in sync with the definitions in inc/corinfo.h.
#if defined(TARGET_X86)
#define DBG_TARGET_REGNUM_SP 4
#define DBG_TARGET_REGNUM_AMBIENT_SP 9
#ifdef TARGET_X86
static_assert_no_msg(DBG_TARGET_REGNUM_SP == ICorDebugInfo::REGNUM_SP);
static_assert_no_msg(DBG_TARGET_REGNUM_AMBIENT_SP == ICorDebugInfo::REGNUM_AMBIENT_SP);
#endif // TARGET_X86
#elif defined(TARGET_AMD64)
#define DBG_TARGET_REGNUM_SP 4
#define DBG_TARGET_REGNUM_AMBIENT_SP 17
#ifdef TARGET_AMD64
static_assert_no_msg(DBG_TARGET_REGNUM_SP == ICorDebugInfo::REGNUM_SP);
static_assert_no_msg(DBG_TARGET_REGNUM_AMBIENT_SP == ICorDebugInfo::REGNUM_AMBIENT_SP);
#endif // TARGET_AMD64
#elif defined(TARGET_ARM)
#define DBG_TARGET_REGNUM_SP 13
#define DBG_TARGET_REGNUM_AMBIENT_SP 17
#ifdef TARGET_ARM
C_ASSERT(DBG_TARGET_REGNUM_SP == ICorDebugInfo::REGNUM_SP);
C_ASSERT(DBG_TARGET_REGNUM_AMBIENT_SP == ICorDebugInfo::REGNUM_AMBIENT_SP);
#endif // TARGET_ARM
#elif defined(TARGET_ARM64)
#define DBG_TARGET_REGNUM_SP 31
#define DBG_TARGET_REGNUM_AMBIENT_SP 34
#ifdef TARGET_ARM64
C_ASSERT(DBG_TARGET_REGNUM_SP == ICorDebugInfo::REGNUM_SP);
C_ASSERT(DBG_TARGET_REGNUM_AMBIENT_SP == ICorDebugInfo::REGNUM_AMBIENT_SP);
#endif // TARGET_ARM64
#else
#error Target registers are not defined for this platform
#endif
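// Illustrative sketch, kept under '#if 0' and never compiled: the virtualized
// numbers let platform-neutral code test for the stack pointer registers
// without referencing ICorDebugInfo directly. Hypothetical helper:
#if 0
inline bool IsStackPointerRegNum(DWORD dwRegNum)
{
    return (dwRegNum == DBG_TARGET_REGNUM_SP) ||
           (dwRegNum == DBG_TARGET_REGNUM_AMBIENT_SP);
}
#endif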
//
// Event structure that is passed between the Runtime Controller and the
// Debugger Interface. Some types of events are a fixed size and have
// entries in the main union, while others are variable length and have
// more specialized data structures that are attached to the end of this
// structure.
//
struct MSLAYOUT DebuggerIPCEvent
{
DebuggerIPCEvent* next;
DebuggerIPCEventType type;
DWORD processId;
DWORD threadId;
VMPTR_AppDomain vmAppDomain;
VMPTR_Thread vmThread;
HRESULT hr;
bool replyRequired;
bool asyncSend;
union MSLAYOUT
{
struct MSLAYOUT
{
// Pointer to a BOOL in the target.
CORDB_ADDRESS pfBeingDebugged;
} LeftSideStartupData;
struct MSLAYOUT
{
// Module whose metadata is being updated
// This tells the RS that the metadata for that module has become invalid.
VMPTR_DomainAssembly vmDomainAssembly;
} MetadataUpdateData;
struct MSLAYOUT
{
// Handle to CLR's internal appdomain object.
VMPTR_AppDomain vmAppDomain;
} AppDomainData;
struct MSLAYOUT
{
VMPTR_DomainAssembly vmDomainAssembly;
} AssemblyData;
#ifdef TEST_DATA_CONSISTENCY
// information necessary for testing whether the LS holds a lock on data
// the RS needs to inspect. See code:DataTest::TestDataSafety and
// code:IDacDbiInterface::TestCrst for more information
struct MSLAYOUT
{
// the lock to be tested
VMPTR_Crst vmCrst;
// indicates whether the LS holds the lock
bool fOkToTake;
} TestCrstData;
// information necessary for testing whether the LS holds a lock on data
// the RS needs to inspect. See code:DataTest::TestDataSafety and
// code:IDacDbiInterface::TestCrst for more information
struct MSLAYOUT
{
// the lock to be tested
VMPTR_SimpleRWLock vmRWLock;
// indicates whether the LS holds the lock
bool fOkToTake;
} TestRWLockData;
#endif // TEST_DATA_CONSISTENCY
// Debug event that a module has been loaded
struct MSLAYOUT
{
// Module that was just loaded.
VMPTR_DomainAssembly vmDomainAssembly;
}LoadModuleData;
struct MSLAYOUT
{
VMPTR_DomainAssembly vmDomainAssembly;
LSPTR_ASSEMBLY debuggerAssemblyToken;
} UnloadModuleData;
// The given module's PDB has been updated.
// Query the PDB out-of-process (OOP).
struct MSLAYOUT
{
VMPTR_DomainAssembly vmDomainAssembly;
} UpdateModuleSymsData;
DebuggerMDANotification MDANotification;
struct MSLAYOUT
{
LSPTR_BREAKPOINT breakpointToken;
mdMethodDef funcMetadataToken;
VMPTR_DomainAssembly vmDomainAssembly;
bool isIL;
SIZE_T offset;
SIZE_T encVersion;
LSPTR_METHODDESC nativeCodeMethodDescToken; // points to the MethodDesc if !isIL
} BreakpointData;
struct MSLAYOUT
{
LSPTR_BREAKPOINT breakpointToken;
} BreakpointSetErrorData;
struct MSLAYOUT
{
#ifdef FEATURE_DATABREAKPOINT
CONTEXT context;
#else
int dummy;
#endif
} DataBreakpointData;
struct MSLAYOUT
{
LSPTR_STEPPER stepperToken;
VMPTR_Thread vmThreadToken;
FramePointer frameToken;
bool stepIn;
bool rangeIL;
bool IsJMCStop;
unsigned int totalRangeCount;
CorDebugStepReason reason;
CorDebugUnmappedStop rgfMappingStop;
CorDebugIntercept rgfInterceptStop;
unsigned int rangeCount;
COR_DEBUG_STEP_RANGE range; //note that this is an array
} StepData;
struct MSLAYOUT
{
// An unvalidated GC-handle
VMPTR_OBJECTHANDLE GCHandle;
} GetGCHandleInfo;
struct MSLAYOUT
{
// An unvalidated GC-handle for which we're returning the results
LSPTR_OBJECTHANDLE GCHandle;
// The following are initialized by the LS in response to our query:
VMPTR_AppDomain vmAppDomain; // AD that handle is in (only applicable if fValid).
bool fValid; // Did the LS determine the GC handle to be valid?
} GetGCHandleInfoResult;
// Allocate memory on the left-side
struct MSLAYOUT
{
ULONG bufSize; // number of bytes to allocate
} GetBuffer;
// Memory allocated on the left-side
struct MSLAYOUT
{
void *pBuffer; // LS pointer to the buffer allocated
HRESULT hr; // success / failure
} GetBufferResult;
// Free a buffer allocated on the left-side with GetBuffer
struct MSLAYOUT
{
void *pBuffer; // Pointer previously returned in GetBufferResult
} ReleaseBuffer;
struct MSLAYOUT
{
HRESULT hr;
} ReleaseBufferResult;
// Apply an EnC edit
struct MSLAYOUT
{
VMPTR_DomainAssembly vmDomainAssembly; // Module to edit
DWORD cbDeltaMetadata; // size of blob pointed to by pDeltaMetadata
CORDB_ADDRESS pDeltaMetadata; // pointer to delta metadata in debuggee
// it's the RS's responsibility to allocate and free
// this (and pDeltaIL) using GetBuffer / ReleaseBuffer
CORDB_ADDRESS pDeltaIL; // pointer to delta IL in debuggee
DWORD cbDeltaIL; // size of blob pointed to by pDeltaIL
} ApplyChanges;
struct MSLAYOUT
{
HRESULT hr;
} ApplyChangesResult;
struct MSLAYOUT
{
mdTypeDef classMetadataToken;
VMPTR_DomainAssembly vmDomainAssembly;
LSPTR_ASSEMBLY classDebuggerAssemblyToken;
} LoadClass;
struct MSLAYOUT
{
mdTypeDef classMetadataToken;
VMPTR_DomainAssembly vmDomainAssembly;
LSPTR_ASSEMBLY classDebuggerAssemblyToken;
} UnloadClass;
struct MSLAYOUT
{
VMPTR_DomainAssembly vmDomainAssembly;
bool flag;
} SetClassLoad;
struct MSLAYOUT
{
VMPTR_OBJECTHANDLE vmExceptionHandle;
bool firstChance;
bool continuable;
} Exception;
struct MSLAYOUT
{
VMPTR_Thread vmThreadToken;
} ClearException;
struct MSLAYOUT
{
void *address;
} IsTransitionStub;
struct MSLAYOUT
{
bool isStub;
} IsTransitionStubResult;
struct MSLAYOUT
{
CORDB_ADDRESS startAddress;
bool fCanSetIPOnly;
VMPTR_Thread vmThreadToken;
VMPTR_DomainAssembly vmDomainAssembly;
mdMethodDef mdMethod;
VMPTR_MethodDesc vmMethodDesc;
SIZE_T offset;
bool fIsIL;
void * firstExceptionHandler;
} SetIP; // this is also used for CanSetIP
struct MSLAYOUT
{
int iLevel;
EmbeddedIPCString<MAX_LOG_SWITCH_NAME_LEN + 1> szCategory;
Ls_Rs_StringBuffer szContent;
} FirstLogMessage;
struct MSLAYOUT
{
int iLevel;
int iReason;
EmbeddedIPCString<MAX_LOG_SWITCH_NAME_LEN + 1> szSwitchName;
EmbeddedIPCString<MAX_LOG_SWITCH_NAME_LEN + 1> szParentSwitchName;
} LogSwitchSettingMessage;
// information needed to send to the RS as part of a custom notification from the target
struct MSLAYOUT
{
// Domain assembly for the domain in which the notification occurred
VMPTR_DomainAssembly vmDomainAssembly;
// metadata token for the CustomNotification object's type
mdTypeDef classToken;
} CustomNotification;
struct MSLAYOUT
{
VMPTR_Thread vmThreadToken;
CorDebugThreadState debugState;
} SetAllDebugState;
DebuggerIPCE_FuncEvalInfo FuncEval;
struct MSLAYOUT
{
CORDB_ADDRESS argDataArea;
LSPTR_DEBUGGEREVAL debuggerEvalKey;
} FuncEvalSetupComplete;
struct MSLAYOUT
{
RSPTR_CORDBEVAL funcEvalKey;
bool successful;
bool aborted;
void *resultAddr;
// AppDomain that the result is in.
VMPTR_AppDomain vmAppDomain;
VMPTR_OBJECTHANDLE vmObjectHandle;
DebuggerIPCE_ExpandedTypeData resultType;
} FuncEvalComplete;
struct MSLAYOUT
{
LSPTR_DEBUGGEREVAL debuggerEvalKey;
} FuncEvalAbort;
struct MSLAYOUT
{
LSPTR_DEBUGGEREVAL debuggerEvalKey;
} FuncEvalRudeAbort;
struct MSLAYOUT
{
LSPTR_DEBUGGEREVAL debuggerEvalKey;
} FuncEvalCleanup;
struct MSLAYOUT
{
void *objectRefAddress;
VMPTR_OBJECTHANDLE vmObjectHandle;
void *newReference;
} SetReference;
struct MSLAYOUT
{
NameChangeType eventType;
VMPTR_AppDomain vmAppDomain;
VMPTR_Thread vmThread;
} NameChange;
struct MSLAYOUT
{
VMPTR_DomainAssembly vmDomainAssembly;
BOOL fAllowJitOpts;
BOOL fEnableEnC;
} JitDebugInfo;
// EnC Remap opportunity
struct MSLAYOUT
{
VMPTR_DomainAssembly vmDomainAssembly;
mdMethodDef funcMetadataToken; // methodDef of function with remap opportunity
SIZE_T currentVersionNumber; // version currently executing
SIZE_T resumeVersionNumber; // latest version
SIZE_T currentILOffset; // the IL offset of the current IP
SIZE_T *resumeILOffset; // pointer into left-side where an offset to resume
// to should be written if remap is desired.
} EnCRemap;
// EnC Remap has taken place
struct MSLAYOUT
{
VMPTR_DomainAssembly vmDomainAssembly;
mdMethodDef funcMetadataToken; // methodDef of function that was remapped
} EnCRemapComplete;
// Notification that the LS is about to update a CLR data structure to account for a
// specific edit made by EnC (function add/update or field add).
struct MSLAYOUT
{
VMPTR_DomainAssembly vmDomainAssembly;
mdToken memberMetadataToken; // Either a methodDef token indicating the function that
// was updated/added, or a fieldDef token indicating the
// field which was added.
mdTypeDef classMetadataToken; // TypeDef token of the class in which the update was made
SIZE_T newVersionNumber; // The new function/module version
} EnCUpdate;
struct MSLAYOUT
{
void *oldData;
void *newData;
DebuggerIPCE_BasicTypeData type;
} SetValueClass;
// Event used to tell LS if a single function is user or non-user code.
// Same structure used to get function status.
// @todo - Perhaps we can bundle these up so we can set multiple funcs w/ 1 event?
struct MSLAYOUT
{
VMPTR_DomainAssembly vmDomainAssembly;
mdMethodDef funcMetadataToken;
DWORD dwStatus;
} SetJMCFunctionStatus;
struct MSLAYOUT
{
TASKID taskid;
} GetThreadForTaskId;
struct MSLAYOUT
{
VMPTR_Thread vmThreadToken;
} GetThreadForTaskIdResult;
struct MSLAYOUT
{
CONNID connectionId;
} ConnectionChange;
struct MSLAYOUT
{
CONNID connectionId;
EmbeddedIPCString<MAX_LONGPATH> wzConnectionName;
} CreateConnection;
struct MSLAYOUT
{
void *objectToken;
CorDebugHandleType handleType;
} CreateHandle;
struct MSLAYOUT
{
VMPTR_OBJECTHANDLE vmObjectHandle;
} CreateHandleResult;
// used in DB_IPCE_DISPOSE_HANDLE event
struct MSLAYOUT
{
VMPTR_OBJECTHANDLE vmObjectHandle;
CorDebugHandleType handleType;
} DisposeHandle;
struct MSLAYOUT
{
FramePointer framePointer;
SIZE_T nOffset;
CorDebugExceptionCallbackType eventType;
DWORD dwFlags;
VMPTR_OBJECTHANDLE vmExceptionHandle;
} ExceptionCallback2;
struct MSLAYOUT
{
CorDebugExceptionUnwindCallbackType eventType;
DWORD dwFlags;
} ExceptionUnwind;
struct MSLAYOUT
{
VMPTR_Thread vmThreadToken;
FramePointer frameToken;
} InterceptException;
struct MSLAYOUT
{
VMPTR_Module vmModule;
void * pMetadataStart;
ULONG nMetadataSize;
} MetadataUpdateRequest;
};
};
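// Illustrative sketch, kept under '#if 0' and never compiled: minimal LS-side
// initialization of a fixed-size event before it is copied into the send
// buffer. The runtime has its own initialization helpers; this only shows the
// shape of the data. Names are hypothetical, and DB_IPCE_LOAD_MODULE is
// assumed to be one of the ids generated from dbgipceventtypes.h.
#if 0
void ExampleInitLoadModuleEvent(DebuggerIPCEvent * pEvent,
                                DWORD dwProcessId,
                                VMPTR_AppDomain vmAppDomain,
                                VMPTR_DomainAssembly vmDomainAssembly)
{
    // The struct is blittable, so zero-filling is a safe baseline.
    memset(pEvent, 0, sizeof(DebuggerIPCEvent));
    pEvent->type          = DB_IPCE_LOAD_MODULE;
    pEvent->processId     = dwProcessId;
    pEvent->vmAppDomain   = vmAppDomain;
    pEvent->replyRequired = false;
    pEvent->LoadModuleData.vmDomainAssembly = vmDomainAssembly;
}
#endif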
// When using a network transport rather than shared memory buffers CorDBIPC_BUFFER_SIZE is the upper bound
// for a single DebuggerIPCEvent structure. This now relates to the maximal size of a network message and is
// orthogonal to the host's page size. Round the buffer size up to a multiple of 8 since MSVC seems more
// aggressive in this regard than gcc.
#define CorDBIPC_TRANSPORT_BUFFER_SIZE (((sizeof(DebuggerIPCEvent) + 7) / 8) * 8)
// A DebuggerIPCEvent must fit in the send & receive buffers, which are CorDBIPC_BUFFER_SIZE bytes.
static_assert_no_msg(sizeof(DebuggerIPCEvent) <= CorDBIPC_BUFFER_SIZE);
static_assert_no_msg(CorDBIPC_TRANSPORT_BUFFER_SIZE <= CorDBIPC_BUFFER_SIZE);
// 2*sizeof(WCHAR) for the two string terminating characters in the FirstLogMessage
#define LOG_MSG_PADDING 4
#endif /* _DbgIPCEvents_h_ */
// Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
/* ------------------------------------------------------------------------- *
 * DbgIPCEvents.h -- header file for private Debugger data shared by various
 * debugger components.
 * ------------------------------------------------------------------------- */
#ifndef _DbgIPCEvents_h_
#define _DbgIPCEvents_h_
#include <new.hpp>
#include <cor.h>
#include <cordebug.h>
#include <corjit.h> // for ICorDebugInfo::VarLocType & VarLoc
#include <specstrings.h>
#include "dbgtargetcontext.h"
// Get version numbers for IPCHeader stamp
#include "clrversion.h"
#include "dbgappdomain.h"
#include "./common.h"
//-----------------------------------------------------------------------------
// V3 additions to IPC protocol between LS and RS.
//-----------------------------------------------------------------------------
// Special Exception code for LS to communicate with RS.
// LS will raise this exception to communicate managed debug events to the RS.
// Exception codes can't use bit 0x10000000, that's reserved by OS.
#define CLRDBG_NOTIFICATION_EXCEPTION_CODE ((DWORD) 0x04242420)
// This is exception argument 0 included in debugger notification events.
// The debugger uses this as a sanity check.
// This could be very volatile data that changes between builds.
#define CLRDBG_EXCEPTION_DATA_CHECKSUM ((DWORD) 0x31415927)
// Reasons for hijack.
namespace EHijackReason
{
enum EHijackReason
{
kUnhandledException = 1,
kM2UHandoff = 2,
kFirstChanceSuspend = 3,
kGenericHijack = 4,
kMax
};
inline bool IsValid(EHijackReason value)
{
SUPPORTS_DAC;
return (value > 0) && (value < kMax);
}
}
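// Illustrative use, kept under '#if 0' and never compiled: validate a raw
// reason value read from the debuggee before trusting it. Hypothetical:
#if 0
void ExampleCheckHijackReason(DWORD dwRaw)
{
    EHijackReason::EHijackReason reason = (EHijackReason::EHijackReason)dwRaw;
    _ASSERTE(EHijackReason::IsValid(reason));
}
#endif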
#define MAX_LOG_SWITCH_NAME_LEN 256
//-----------------------------------------------------------------------------
// Versioning note:
// This file describes the IPC communication protocol between the LS (mscorwks)
// and the RS (mscordbi). For Desktop builds, it is private and can change on a
// daily basis. The version of the LS will always match the version of the RS
// (but see the discussion of CoreCLR below). They are like a single conceptual
// DLL split across 2 processes.
// The only restriction is that it should be flavor agnostic - so don't change
// layout based off '#ifdef DEBUG'. This lets us drop a Debug flavor RS onto
// a retail installation w/o any further installation woes. That's very useful
// for debugging.
//-----------------------------------------------------------------------------
// We want this available for DbgInterface.h - put it here.
typedef enum
{
IPC_TARGET_OUTOFPROC,
IPC_TARGET_COUNT,
} IpcTarget;
//
// Names of the setup sync event and shared memory used for IPC between the Left Side and the Right Side. NOTE: these
// names must include a %d for the process id. The process id used is the process id of the debuggee.
//
#define CorDBIPCSetupSyncEventName W("CorDBIPCSetupSyncEvent_%d")
//
// This define controls whether we always pass first chance exceptions to the in-process first chance hijack filter
// during interop debugging or if we try to short-circuit and make the decision out-of-process as much as possible.
//
#define CorDB_Short_Circuit_First_Chance_Ownership 1
//
// Defines for current version numbers for the left and right sides
//
#define CorDB_LeftSideProtocolCurrent 2
#define CorDB_LeftSideProtocolMinSupported 2
#define CorDB_RightSideProtocolCurrent 2
#define CorDB_RightSideProtocolMinSupported 2
//
// The remaining data structures in this file can be shared between two processes and for network transport
// based debugging this can mean two different platforms as well. The two platforms that can share these
// data structures must have identical layouts for them (each field must lie at the same offset and have the
// same length). The MSLAYOUT macro should be applied to each structure to avoid any compiler packing differences.
//
//
// DebuggerIPCRuntimeOffsets contains addresses and offsets of important global variables, functions, and fields in
// Runtime objects. This is populated during Left Side initialization and is read by the Right Side. This struct is
// mostly to facilitate unmanaged debugging support, but it may have some small uses for managed debugging.
//
struct MSLAYOUT DebuggerIPCRuntimeOffsets
{
#ifdef FEATURE_INTEROP_DEBUGGING
void *m_genericHijackFuncAddr;
void *m_signalHijackStartedBPAddr;
void *m_excepForRuntimeHandoffStartBPAddr;
void *m_excepForRuntimeHandoffCompleteBPAddr;
void *m_signalHijackCompleteBPAddr;
void *m_excepNotForRuntimeBPAddr;
void *m_notifyRSOfSyncCompleteBPAddr;
DWORD m_debuggerWordTLSIndex; // The TLS slot for the debugger word used in the debugger hijack functions
#endif // FEATURE_INTEROP_DEBUGGING
SIZE_T m_TLSIndex; // The TLS index of the thread-local storage for coreclr.dll
SIZE_T m_TLSEEThreadOffset; // TLS Offset of the Thread pointer.
SIZE_T m_TLSIsSpecialOffset; // TLS Offset of the "IsSpecial" status for a thread.
SIZE_T m_TLSCantStopOffset; // TLS Offset of the Can't-Stop count.
SIZE_T m_EEThreadStateOffset; // Offset of m_state in a Thread
SIZE_T m_EEThreadStateNCOffset; // Offset of m_stateNC in a Thread
SIZE_T m_EEThreadPGCDisabledOffset; // Offset of the bit for whether PGC is disabled or not in a Thread
DWORD m_EEThreadPGCDisabledValue; // Value at m_EEThreadPGCDisabledOffset that equals "PGC disabled".
SIZE_T m_EEThreadFrameOffset; // Offset of the Frame ptr in a Thread
SIZE_T m_EEThreadMaxNeededSize; // Max memory to read to get what we need out of a Thread object
DWORD m_EEThreadSteppingStateMask; // Mask for Thread::TSNC_DebuggerIsStepping
DWORD m_EEMaxFrameValue; // The max Frame value
SIZE_T m_EEThreadDebuggerFilterContextOffset; // Offset of debugger's filter context within a Thread Object.
SIZE_T m_EEFrameNextOffset; // Offset of the next ptr in a Frame
DWORD m_EEIsManagedExceptionStateMask; // Mask for Thread::TSNC_DebuggerIsManagedException
void *m_pPatches; // Addr of patch table
BOOL *m_pPatchTableValid; // Addr of g_patchTableValid
SIZE_T m_offRgData; // Offset of m_pcEntries
SIZE_T m_offCData; // Offset of count of m_pcEntries
SIZE_T m_cbPatch; // Size per patch entry
SIZE_T m_offAddr; // Offset within patch of target addr
SIZE_T m_offOpcode; // Offset within patch of target opcode
SIZE_T m_cbOpcode; // Max size of opcode
SIZE_T m_offTraceType; // Offset of the trace.type within a patch
DWORD m_traceTypeUnmanaged; // TRACE_UNMANAGED
DebuggerIPCRuntimeOffsets()
{
ZeroMemory(this, sizeof(DebuggerIPCRuntimeOffsets));
}
};
//
// The size of the send and receive IPC buffers.
// These must be big enough to fit a DebuggerIPCEvent. Also, the bigger they are, the fewer events
// it takes to send variable length stuff like the stack trace.
// But for perf reasons, they need to be small enough to not just push us over a page boundary in an IPC block.
// Unfortunately, there's a lot of other goo in the IPC block, so we can't use some clean formula. So we
// have to resort to just tuning things.
//
// When using a network transport rather than shared memory buffers CorDBIPC_BUFFER_SIZE is the upper bound
// for a single DebuggerIPCEvent structure. This now relates to the maximal size of a network message and is
// orthogonal to the host's page size. Because of this we defer definition of CorDBIPC_BUFFER_SIZE until we've
// declared DebuggerIPCEvent at the end of this header (and we can do so because in the transport case there
// aren't any embedded buffers in the DebuggerIPCControlBlock).
#if defined(TARGET_X86) || defined(TARGET_ARM)
#ifdef HOST_64BIT
#define CorDBIPC_BUFFER_SIZE 2104
#else
#define CorDBIPC_BUFFER_SIZE 2092
#endif
#else // !TARGET_X86 && !TARGET_ARM
// This is the size of a DebuggerIPCEvent. You will hit an assert in Cordb::Initialize() (di\rsmain.cpp)
// if this is not defined correctly. AMD64 actually has a page size of 0x1000, not 0x2000.
#define CorDBIPC_BUFFER_SIZE 4016 // (4016 + 6) * 2 + 148 = 8192 (two (DebuggerIPCEvent + alignment padding) + other fields = page size)
#endif // TARGET_X86 || TARGET_ARM
//
// DebuggerIPCControlBlock describes the layout of the shared memory shared between the Left Side and the Right
// Side. This includes error information, handles for the IPC channel, and space for the send/receive buffers.
//
struct MSLAYOUT DebuggerIPCControlBlock
{
// Version data should be first in the control block to ensure that we can read it even if the control block
// changes.
SIZE_T m_DCBSize; // note this field is used as a semaphore to indicate the DCB is initialized
ULONG m_verMajor; // CLR build number for the Left Side.
ULONG m_verMinor; // CLR build number for the Left Side.
// This next stuff fits in a DWORD.
bool m_checkedBuild; // CLR build type for the Left Side.
// using the first padding byte to indicate if hosted in fiber mode.
// We actually just need one bit. So if needed, can turn this to a bit.
// BYTE padding1;
bool m_bHostingInFiber;
BYTE padding2;
BYTE padding3;
ULONG m_leftSideProtocolCurrent; // Current protocol version for the Left Side.
ULONG m_leftSideProtocolMinSupported; // Minimum protocol the Left Side can support.
ULONG m_rightSideProtocolCurrent; // Current protocol version for the Right Side.
ULONG m_rightSideProtocolMinSupported; // Minimum protocol the Right Side requires.
HRESULT m_errorHR;
unsigned int m_errorCode;
#if defined(TARGET_64BIT)
// 64-bit needs this padding to make the handles after this aligned.
// But x86 can't have this padding b/c it breaks binary compatibility between v1.1 and v2.0.
ULONG padding4;
#endif // TARGET_64BIT
RemoteHANDLE m_rightSideEventAvailable;
RemoteHANDLE m_rightSideEventRead;
// @dbgtodo inspection - this is where LSEA and LSER used to be. We need the padding to maintain binary compatibility.
// Eventually, we expect to remove this whole block.
RemoteHANDLE m_paddingObsoleteLSEA;
RemoteHANDLE m_paddingObsoleteLSER;
RemoteHANDLE m_rightSideProcessHandle;
//.............................................................................
// Everything above this point must have the exact same binary layout as v1.1.
// See protocol details below.
//.............................................................................
RemoteHANDLE m_leftSideUnmanagedWaitEvent;
// This is set immediately when the helper thread is created.
// This will be set even if there's a temporary helper thread or if the real helper
// thread is not yet pumping (eg, blocked on a loader lock).
DWORD m_realHelperThreadId;
// This is only published once the helper thread starts running in its main loop.
// Thus we can use this field to see if the real helper thread is actually pumping.
DWORD m_helperThreadId;
// This is non-zero if the LS has a temporary helper thread.
DWORD m_temporaryHelperThreadId;
// ID of the Helper's canary thread.
DWORD m_CanaryThreadId;
DebuggerIPCRuntimeOffsets *m_pRuntimeOffsets;
void *m_helperThreadStartAddr;
void *m_helperRemoteStartAddr;
DWORD *m_specialThreadList;
BYTE m_receiveBuffer[CorDBIPC_BUFFER_SIZE];
BYTE m_sendBuffer[CorDBIPC_BUFFER_SIZE];
DWORD m_specialThreadListLength;
bool m_shutdownBegun;
bool m_rightSideIsWin32Debugger; // RS status
bool m_specialThreadListDirty;
bool m_rightSideShouldCreateHelperThread;
// NOTE The Init method works since there are no virtual functions - don't add any virtual functions without
// changing this!
// Only initialized by the LS, opened by the RS.
HRESULT Init(
HANDLE rsea,
HANDLE rser,
HANDLE lsea,
HANDLE lser,
HANDLE lsuwe
);
};
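// Illustrative sketch, kept under '#if 0' and never compiled: the RS-side
// protocol handshake implied by the version fields above. The real check lives
// in mscordbi; this hypothetical helper only shows the intended comparison.
#if 0
bool ExampleProtocolsCompatible(const DebuggerIPCControlBlock * pDCB)
{
    // Each side must be recent enough for what the other side requires.
    return (CorDB_RightSideProtocolCurrent >= pDCB->m_leftSideProtocolMinSupported) &&
           (pDCB->m_leftSideProtocolCurrent >= CorDB_RightSideProtocolMinSupported);
}
#endif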
#if defined(FEATURE_DBGIPC_TRANSPORT_VM) || defined(FEATURE_DBGIPC_TRANSPORT_DI)
// We need an alternate definition for the control block if using the transport, because the control block has to be sent over the transport
// In particular we can't nest the send/receive buffers inside of it and we don't use any of the remote handles
struct MSLAYOUT DebuggerIPCControlBlockTransport
{
// Version data should be first in the control block to ensure that we can read it even if the control block
// changes.
SIZE_T m_DCBSize; // note this field is used as a semaphore to indicate the DCB is initialized
ULONG m_verMajor; // CLR build number for the Left Side.
ULONG m_verMinor; // CLR build number for the Left Side.
// This next stuff fits in a DWORD.
bool m_checkedBuild; // CLR build type for the Left Side.
// using the first padding byte to indicate if hosted in fiber mode.
// We actually just need one bit. So if needed, can turn this to a bit.
// BYTE padding1;
bool m_bHostingInFiber;
BYTE padding2;
BYTE padding3;
ULONG m_leftSideProtocolCurrent; // Current protocol version for the Left Side.
ULONG m_leftSideProtocolMinSupported; // Minimum protocol the Left Side can support.
ULONG m_rightSideProtocolCurrent; // Current protocol version for the Right Side.
ULONG m_rightSideProtocolMinSupported; // Minimum protocol the Right Side requires.
HRESULT m_errorHR;
unsigned int m_errorCode;
#if defined(TARGET_64BIT)
// 64-bit needs this padding to make the handles after this aligned.
// But x86 can't have this padding b/c it breaks binary compatibility between v1.1 and v2.0.
ULONG padding4;
#endif // TARGET_64BIT
// This is set immediately when the helper thread is created.
// This will be set even if there's a temporary helper thread or if the real helper
// thread is not yet pumping (eg, blocked on a loader lock).
DWORD m_realHelperThreadId;
// This is only published once the helper thread starts running in its main loop.
// Thus we can use this field to see if the real helper thread is actually pumping.
DWORD m_helperThreadId;
// This is non-zero if the LS has a temporary helper thread.
DWORD m_temporaryHelperThreadId;
// ID of the Helper's canary thread.
DWORD m_CanaryThreadId;
DebuggerIPCRuntimeOffsets *m_pRuntimeOffsets;
void *m_helperThreadStartAddr;
void *m_helperRemoteStartAddr;
DWORD *m_specialThreadList;
DWORD m_specialThreadListLength;
bool m_shutdownBegun;
bool m_rightSideIsWin32Debugger; // RS status
bool m_specialThreadListDirty;
bool m_rightSideShouldCreateHelperThread;
// NOTE The Init method works since there are no virtual functions - don't add any virtual functions without
// changing this!
// Only initialized by the LS, opened by the RS.
HRESULT Init();
};
#endif // defined(FEATURE_DBGIPC_TRANSPORT_VM) || defined(FEATURE_DBGIPC_TRANSPORT_DI)
#if defined(FEATURE_DBGIPC_TRANSPORT_VM) || defined(FEATURE_DBGIPC_TRANSPORT_DI)
#include "dbgtransportsession.h"
#endif // defined(FEATURE_DBGIPC_TRANSPORT_VM) || defined(FEATURE_DBGIPC_TRANSPORT_DI)
#define INITIAL_APP_DOMAIN_INFO_LIST_SIZE 16
//-----------------------------------------------------------------------------
// Provide some Type-safety in the IPC block when we pass remote pointers around.
//-----------------------------------------------------------------------------
//-----------------------------------------------------------------------------
// This is the same in both the LS & RS.
// Definitions on the LS & RS should be binary compatible. So all storage is
// declared in GeneralLsPointer, and then the Ls & RS each have their own
// derived accessors.
//-----------------------------------------------------------------------------
class MSLAYOUT GeneralLsPointer
{
protected:
friend ULONG_PTR LsPtrToCookie(GeneralLsPointer p);
void * m_ptr;
public:
bool IsNull() { return m_ptr == NULL; }
};
class MSLAYOUT GeneralRsPointer
{
protected:
UINT m_data;
public:
bool IsNull() { return m_data == 0; }
};
// In some cases, we need to get a uuid from a pointer (ie, in a hash)
inline ULONG_PTR LsPtrToCookie(GeneralLsPointer p) {
return (ULONG_PTR) p.m_ptr;
}
#define VmPtrToCookie(vm) LsPtrToCookie((vm).ToLsPtr())
#ifdef RIGHT_SIDE_COMPILE
//-----------------------------------------------------------------------------
// Infrastructure for RS Definitions
//-----------------------------------------------------------------------------
// On the RS, we don't have the LS classes defined, so we can't templatize that
// in terms of <class T>, but we still want things to be unique.
// So we create an empty enum for each LS type and then templatize it in terms
// of the enum.
template <typename T>
class MSLAYOUT LsPointer : public GeneralLsPointer
{
public:
void Set(void * p)
{
m_ptr = p;
}
void * UnsafeGet()
{
return m_ptr;
}
static LsPointer<T> NullPtr()
{
return MakePtr(NULL);
}
static LsPointer<T> MakePtr(T* p)
{
#ifdef _PREFAST_
#pragma warning(push)
#pragma warning(disable:6001) // PREfast warning: Using uninitialized memory 't'
#endif // _PREFAST_
LsPointer<T> t;
t.Set(p);
return t;
#ifdef _PREFAST_
#pragma warning(pop)
#endif // _PREFAST_
}
bool operator!= (void * p) { return m_ptr != p; }
bool operator== (void * p) { return m_ptr == p; }
bool operator==(LsPointer<T> p) { return p.m_ptr == this->m_ptr; }
// We should never UnWrap() them in the RS, so we don't define that here.
};
class CordbProcess;
template <class T> UINT AllocCookie(CordbProcess * pProc, T * p);
template <class T> T * UnwrapCookie(CordbProcess * pProc, UINT cookie);
UINT AllocCookieCordbEval(CordbProcess * pProc, class CordbEval * p);
class CordbEval * UnwrapCookieCordbEval(CordbProcess * pProc, UINT cookie);
template <class CordbEval> UINT AllocCookie(CordbProcess * pProc, CordbEval * p)
{
return AllocCookieCordbEval(pProc, p);
}
template <class CordbEval> CordbEval * UnwrapCookie(CordbProcess * pProc, UINT cookie)
{
return UnwrapCookieCordbEval(pProc, cookie);
}
// This is how the RS sees the pointers in the IPC block.
template<class T>
class MSLAYOUT RsPointer : public GeneralRsPointer
{
public:
// Since we're being used inside a union, we can't have a ctor.
static RsPointer<T> NullPtr()
{
RsPointer<T> t;
t.m_data = 0;
return t;
}
bool AllocHandle(CordbProcess *pProc, T* p)
{
// This will force validation.
m_data = AllocCookie<T>(pProc, p);
return (m_data != 0);
}
bool operator==(RsPointer<T> p) { return p.m_data == this->m_data; }
T* UnWrapAndRemove(CordbProcess *pProc)
{
return UnwrapCookie<T>(pProc, m_data);
}
protected:
};
// Forward declare a class so that each type of LS pointer can have
// its own type. We use the real class name to be compatible with VMPTRs.
#define DEFINE_LSPTR_TYPE(ls_type, ptr_name) \
ls_type; \
typedef LsPointer<ls_type> ptr_name;
#define DEFINE_RSPTR_TYPE(rs_type, ptr_name) \
class rs_type; \
typedef RsPointer<rs_type> ptr_name;
#else // !RIGHT_SIDE_COMPILE
//-----------------------------------------------------------------------------
// Infrastructure for LS Definitions
//-----------------------------------------------------------------------------
// This is how the LS sees the pointers in the IPC block.
template<typename T>
class MSLAYOUT LsPointer : public GeneralLsPointer
{
public:
// Since we're being used inside a union, we can't have a ctor.
//LsPointer() { }
static LsPointer<T> NullPtr()
{
return MakePtr(NULL);
}
static LsPointer<T> MakePtr(T * p)
{
#ifdef _PREFAST_
#pragma warning(push)
#pragma warning(disable:6001) // PREfast warning: Using uninitialized memory 't'
#endif // _PREFAST_
LsPointer<T> t;
t.Set(p);
return t;
#ifdef _PREFAST_
#pragma warning(pop)
#endif // _PREFAST_
}
bool operator!= (void * p) { return m_ptr != p; }
bool operator== (void * p) { return m_ptr == p; }
bool operator==(LsPointer<T> p) { return p.m_ptr == this->m_ptr; }
// @todo - we want to be able to swap out Set + Unwrap functions
void Set(T * p)
{
SUPPORTS_DAC;
// We could validate the pointer here.
m_ptr = p;
}
T * UnWrap()
{
// If we wanted to validate the pointer, here's our chance.
return static_cast<T*>(m_ptr);
}
};
template <class n>
class MSLAYOUT RsPointer : public GeneralRsPointer
{
public:
static RsPointer<n> NullPtr()
{
RsPointer<n> t;
t.m_data = 0;
return t;
}
bool operator==(RsPointer<n> p) { return p.m_data == this->m_data; }
// We should never UnWrap() them in the LS, so we don't define that here.
};
#define DEFINE_LSPTR_TYPE(ls_type, ptr_name) \
ls_type; \
typedef LsPointer<ls_type> ptr_name;
#define DEFINE_RSPTR_TYPE(rs_type, ptr_name) \
enum __RS__##rs_type { }; \
typedef RsPointer<__RS__##rs_type> ptr_name;
#endif // !RIGHT_SIDE_COMPILE
// We must be binary compatible w/ a pointer.
static_assert_no_msg(sizeof(LsPointer<void>) == sizeof(GeneralLsPointer));
static_assert_no_msg(sizeof(void*) == sizeof(GeneralLsPointer));
//-----------------------------------------------------------------------------
// Definitions for Left-Side ptrs.
// NOTE: Use VMPTR instead of LSPTR. Don't add new LSPTR types.
//
//-----------------------------------------------------------------------------
DEFINE_LSPTR_TYPE(class Assembly, LSPTR_ASSEMBLY);
DEFINE_LSPTR_TYPE(class DebuggerJitInfo, LSPTR_DJI);
DEFINE_LSPTR_TYPE(class DebuggerMethodInfo, LSPTR_DMI);
DEFINE_LSPTR_TYPE(class MethodDesc, LSPTR_METHODDESC);
DEFINE_LSPTR_TYPE(class DebuggerBreakpoint, LSPTR_BREAKPOINT);
DEFINE_LSPTR_TYPE(class DebuggerDataBreakpoint, LSPTR_DATA_BREAKPOINT);
DEFINE_LSPTR_TYPE(class DebuggerEval, LSPTR_DEBUGGEREVAL);
DEFINE_LSPTR_TYPE(class DebuggerStepper, LSPTR_STEPPER);
// Need to be careful not to annoy the compiler here since DT_CONTEXT is a typedef, not a struct.
typedef LsPointer<DT_CONTEXT> LSPTR_CONTEXT;
DEFINE_LSPTR_TYPE(struct OBJECTHANDLE__, LSPTR_OBJECTHANDLE);
DEFINE_LSPTR_TYPE(class TypeHandleDummyPtr, LSPTR_TYPEHANDLE); // TypeHandle in the LS is not a direct pointer.
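// Illustrative sketch, kept under '#if 0' and never compiled: LS-side round
// trip of an LSPTR (requires !RIGHT_SIDE_COMPILE). Hypothetical:
#if 0
void ExampleLsPtrRoundTrip(DebuggerBreakpoint * pBP, DebuggerIPCEvent * pEvent)
{
    // Wrap the raw pointer before placing it in the IPC block...
    pEvent->BreakpointData.breakpointToken = LSPTR_BREAKPOINT::MakePtr(pBP);
    // ...and unwrap it when the RS later hands the token back.
    DebuggerBreakpoint * pSame = pEvent->BreakpointData.breakpointToken.UnWrap();
    _ASSERTE(pSame == pBP);
}
#endif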
//-----------------------------------------------------------------------------
// Definitions for Right-Side ptrs.
//-----------------------------------------------------------------------------
DEFINE_RSPTR_TYPE(CordbEval, RSPTR_CORDBEVAL);
//---------------------------------------------------------------------------------------
// VMPTR_Base is the base type for an abstraction over pointers into the VM so
// that DBI can treat them as opaque handles. Classes will derive from it to
// provide type-safe Target pointers, which ICD will view as opaque handles.
//
// Lifetimes:
// VMPTR_ objects survive across flushing the DAC cache. Therefore, the underlying
// storage must be a target-pointer (and not a marshalled host pointer).
// The RS must ensure they're still in sync with the LS (eg, by
// tracking unload events).
//
//
// Assumptions:
// These handles are TADDR pointers and must not require any cleanup from DAC/DBI.
// For direct untyped pointers into the VM, use CORDB_ADDRESS.
//
// Notes:
// 1. This helps enforce that DBI goes through the primitives interface
// for all access (and that it doesn't accidentally start calling
// dac-ized methods on the objects)
// 2. This isolates DBI from VM headers.
// 3. This isolates DBI from the dac implementation (of DAC_Ptr)
// 4. This is distinct from LSPTR because LSPTRs are truly opaque handles, whereas VMPtrs
// move across VM, DAC, and DBI, exposing proper functionality in each component.
// 5. VMPTRs are blittable because they are Target Addresses which act as opaque
// handles outside of the Target / Dac-marshaller.
//
//---------------------------------------------------------------------------------------
template <typename TTargetPtr, typename TDacPtr>
class MSLAYOUT VMPTR_Base
{
// Underlying pointer into Target address space.
// Target pointers are blittable.
// - In Target: can be used as normal local pointers.
// - In DAC: must be marshalled to a host-pointer and then they can be used via DAC
// - In RS: opaque handles.
private:
TADDR m_addr;
public:
typedef VMPTR_Base<TTargetPtr,TDacPtr> VMPTR_This;
// For DBI, VMPTRs are opaque handles.
// But the DAC side is allowed to inspect the handles to get at the raw pointer.
#if defined(ALLOW_VMPTR_ACCESS)
//
// Case 1: Using in DAcDbi implementation
//
// DAC accessor
TDacPtr GetDacPtr() const
{
SUPPORTS_DAC;
return TDacPtr(m_addr);
}
// This will initialize the handle to a given target-pointer.
// We choose TADDR to make it explicit that it's a target pointer and avoid the risk
// of it accidentally getting marshalled to a host pointer.
void SetDacTargetPtr(TADDR addr)
{
SUPPORTS_DAC;
m_addr = addr;
}
void SetHostPtr(const TTargetPtr * pObject)
{
SUPPORTS_DAC;
m_addr = PTR_HOST_TO_TADDR(pObject);
}
#elif !defined(RIGHT_SIDE_COMPILE)
//
// Case 2: Used in Left-side. Can get/set from local pointers.
//
// This will initialize from a Target pointer. Since this is happening in the
// Left-side (Target), the pointer is local.
// This is commonly used by the Left-side to create a VMPTR_ for a notification event.
void SetRawPtr(TTargetPtr * ptr)
{
m_addr = reinterpret_cast<TADDR>(ptr);
}
// This will get the raw underlying target pointer.
// This can be used by inproc Left-side code to unwrap a VMPTR (Eg, for a func-eval
// hijack or in-proc worker threads)
TTargetPtr * GetRawPtr()
{
return reinterpret_cast<TTargetPtr*>(m_addr);
}
// Convenience for converting TTargetPtr --> VMPTR
static VMPTR_This MakePtr(TTargetPtr * ptr)
{
#ifdef _PREFAST_
#pragma warning(push)
#pragma warning(disable:6001) // PREfast warning: Using uninitialized memory 't'
#endif // _PREFAST_
VMPTR_This t;
t.SetRawPtr(ptr);
return t;
#ifdef _PREFAST_
#pragma warning(pop)
#endif // _PREFAST_
}
#else
//
// Case 3: Used in RS. Opaque handles only.
//
#endif
#ifndef DACCESS_COMPILE
// For compatibility, these can be converted to LSPTRs on the RS or LS (cases 2 and 3). We don't allow
// this in the DAC case because it's a cast between address spaces which we're trying to eliminate
// in the DAC code.
// @dbgtodo inspection: LSPTRs will go away entirely once we've moved completely over to DAC
LsPointer<TTargetPtr> ToLsPtr()
{
return LsPointer<TTargetPtr>::MakePtr( reinterpret_cast<TTargetPtr *>(m_addr));
}
#endif
//
// Operators to emulate Pointer semantics.
//
bool IsNull() { SUPPORTS_DAC; return m_addr == NULL; }
static VMPTR_This NullPtr()
{
SUPPORTS_DAC;
#ifdef _PREFAST_
#pragma warning(push)
#pragma warning(disable:6001) // PREfast warning: Using uninitialized memory 't'
#endif // _PREFAST_
VMPTR_This dummy;
dummy.m_addr = NULL;
return dummy;
#ifdef _PREFAST_
#pragma warning(pop)
#endif // _PREFAST_
}
bool operator!= (VMPTR_This vmOther) const { SUPPORTS_DAC; return this->m_addr != vmOther.m_addr; }
bool operator== (VMPTR_This vmOther) const { SUPPORTS_DAC; return this->m_addr == vmOther.m_addr; }
};
#if defined(ALLOW_VMPTR_ACCESS)
// Helper macro to define a VMPTR.
// This is used in the DAC case, so this definition connects the pointers up to their DAC values.
#define DEFINE_VMPTR(ls_type, dac_ptr_type, ptr_name) \
ls_type; \
typedef VMPTR_Base<ls_type, dac_ptr_type> ptr_name;
#else
// Helper macro to define a VMPTR.
// This is used in the Right-side and Left-side (but not DAC) case.
// This definition explicitly ignores dac_ptr_type to prevent accidental DAC usage.
#define DEFINE_VMPTR(ls_type, dac_ptr_type, ptr_name) \
ls_type; \
typedef VMPTR_Base<ls_type, void> ptr_name;
#endif
// Declare VMPTRs.
// The naming convention for instantiating a VMPTR is a 'vm' prefix.
//
// VM definition, DAC definition, pretty name for VMPTR
DEFINE_VMPTR(class AppDomain, PTR_AppDomain, VMPTR_AppDomain);
// Need to be careful not to annoy the compiler here since DT_CONTEXT is a typedef, not a struct.
// DEFINE_VMPTR(struct _CONTEXT, PTR_CONTEXT, VMPTR_CONTEXT);
#if defined(ALLOW_VMPTR_ACCESS)
typedef VMPTR_Base<DT_CONTEXT, PTR_CONTEXT> VMPTR_CONTEXT;
#else
typedef VMPTR_Base<DT_CONTEXT, void > VMPTR_CONTEXT;
#endif
// DomainAssembly is a base-class for a CLR module, with app-domain affinity.
// For domain-neutral modules (like CoreLib), there is a DomainAssembly instance
// for each appdomain the module lives in.
// This is the canonical handle ICorDebug uses to a CLR module.
DEFINE_VMPTR(class DomainAssembly, PTR_DomainAssembly, VMPTR_DomainAssembly);
DEFINE_VMPTR(class Module, PTR_Module, VMPTR_Module);
DEFINE_VMPTR(class Assembly, PTR_Assembly, VMPTR_Assembly);
DEFINE_VMPTR(class PEAssembly, PTR_PEAssembly, VMPTR_PEAssembly);
DEFINE_VMPTR(class MethodDesc, PTR_MethodDesc, VMPTR_MethodDesc);
DEFINE_VMPTR(class FieldDesc, PTR_FieldDesc, VMPTR_FieldDesc);
// ObjectHandle is a safe way to refer to an object in the GC heap. It gets updated
// when a GC occurs.
DEFINE_VMPTR(struct OBJECTHANDLE__, TADDR, VMPTR_OBJECTHANDLE);
DEFINE_VMPTR(class TypeHandle, PTR_TypeHandle, VMPTR_TypeHandle);
// A VMPTR_Thread represents a thread that has entered the runtime at some point.
// It may or may not have executed managed code yet; and it may or may not have managed code
// on its callstack.
DEFINE_VMPTR(class Thread, PTR_Thread, VMPTR_Thread);
DEFINE_VMPTR(class Object, PTR_Object, VMPTR_Object);
DEFINE_VMPTR(class CrstBase, PTR_Crst, VMPTR_Crst);
DEFINE_VMPTR(class SimpleRWLock, PTR_SimpleRWLock, VMPTR_SimpleRWLock);
DEFINE_VMPTR(class SimpleRWLock, PTR_SimpleRWLock, VMPTR_RWLock);
DEFINE_VMPTR(struct ReJitInfo, PTR_ReJitInfo, VMPTR_ReJitInfo);
DEFINE_VMPTR(struct SharedReJitInfo, PTR_SharedReJitInfo, VMPTR_SharedReJitInfo);
DEFINE_VMPTR(class NativeCodeVersionNode, PTR_NativeCodeVersionNode, VMPTR_NativeCodeVersionNode);
DEFINE_VMPTR(class ILCodeVersionNode, PTR_ILCodeVersionNode, VMPTR_ILCodeVersionNode);
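// Illustrative sketch, kept under '#if 0' and never compiled: how each side
// sees a VMPTR. Hypothetical helpers:
#if 0
// Left side (case 2): wrap a local pointer for a notification event.
VMPTR_Thread ExampleWrapThread(Thread * pThread)
{
    return VMPTR_Thread::MakePtr(pThread);
}
// Right side (case 3): treat the handle as opaque; only compare, never dereference.
bool ExampleIsSameThread(VMPTR_Thread vm1, VMPTR_Thread vm2)
{
    return !vm1.IsNull() && (vm1 == vm2);
}
#endif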
typedef CORDB_ADDRESS GENERICS_TYPE_TOKEN;
//-----------------------------------------------------------------------------
// We pass some fixed size strings in the IPC block.
// Helper class to wrap the buffer and protect against buffer overflows.
// This should be binary compatible w/ a wchar[] array.
//-----------------------------------------------------------------------------
template <int nMaxLengthIncludingNull>
class MSLAYOUT EmbeddedIPCString
{
public:
// Set, caller responsibility that wcslen(pData) < nMaxLengthIncludingNull
void SetString(const WCHAR * pData)
{
// If the string doesn't fit into the buffer, that's an issue (and so this is a real
// assert, not just a simplifying assumption). To fix it, either:
// - make the buffer larger
// - don't pass the string as an embedded string in the IPC block.
// This will truncate (rather than AV on the RS).
int ret;
ret = SafeCopy(pData);
// See comment above - caller should guarantee that buffer is large enough.
_ASSERTE(ret != STRUNCATE);
}
// Set a string from a substring. This will truncate if necessary.
void SetStringTruncate(const WCHAR * pData)
{
// ignore return value because truncation is ok.
SafeCopy(pData);
}
const WCHAR * GetString()
{
// Force null-termination just in case an issue in the debuggee process
// yields a malformed string.
m_data[nMaxLengthIncludingNull - 1] = W('\0');
return &m_data[0];
}
int GetMaxSize() const { return nMaxLengthIncludingNull; }
private:
int SafeCopy(const WCHAR * pData)
{
return wcsncpy_s(
m_data, nMaxLengthIncludingNull,
pData, _TRUNCATE);
}
WCHAR m_data[nMaxLengthIncludingNull];
};
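// Illustrative use, kept under '#if 0' and never compiled: filling a fixed-size
// string field safely. 'pCategory' is hypothetical.
#if 0
void ExampleSetCategory(EmbeddedIPCString<MAX_LOG_SWITCH_NAME_LEN + 1> * pCategory)
{
    pCategory->SetStringTruncate(W("MyCategory")); // truncates rather than overflows
    const WCHAR * sz = pCategory->GetString();     // always null-terminated
    (void)sz;
}
#endif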
//
// Types of events that can be sent between the Runtime Controller and
// the Debugger Interface. Some of these events are one way only, while
// others go both ways. The grouping of the event numbers is an attempt
// to show this distinction and perhaps even allow generic operations
// based on the type of the event.
//
enum DebuggerIPCEventType
{
#define IPC_EVENT_TYPE0(type, val) type = val,
#define IPC_EVENT_TYPE1(type, val) type = val,
#define IPC_EVENT_TYPE2(type, val) type = val,
#include "dbgipceventtypes.h"
#undef IPC_EVENT_TYPE2
#undef IPC_EVENT_TYPE1
#undef IPC_EVENT_TYPE0
};
#ifdef _DEBUG
// This is a static debugging structure to help break at the right place.
// Debug only. It tracks the number of events that have happened so far.
// Users can choose to set breakpoints based on the number of events.
// Variables are named as the event name with the prefix m_iDebugCount_. For example,
// m_iDebugCount_DB_IPCE_BREAKPOINT is for the event DB_IPCE_BREAKPOINT.
struct MSLAYOUT DebugEventCounter
{
// we don't need the event type 0
#define IPC_EVENT_TYPE0(type, val)
#define IPC_EVENT_TYPE1(type, val) int m_iDebugCount_##type;
#define IPC_EVENT_TYPE2(type, val) int m_iDebugCount_##type;
#include "dbgipceventtypes.h"
#undef IPC_EVENT_TYPE2
#undef IPC_EVENT_TYPE1
#undef IPC_EVENT_TYPE0
};
#endif // _DEBUG
#if !defined(DACCESS_COMPILE)
struct MSLAYOUT IPCEventTypeNameMapping
{
DebuggerIPCEventType eventType;
const char * eventName;
};
extern const IPCEventTypeNameMapping DbgIPCEventTypeNames[];
extern const size_t nameCount;
struct MSLAYOUT IPCENames // We use a class/struct so that the function can remain in a shared header file
{
static DebuggerIPCEventType GetEventType(_In_z_ char * strEventType)
{
// Pass in the string of the event name and find the matching enum value.
// This is a linear search, which is pretty slow. However, it is only used
// at startup time when debug asserts are turned on and the registry key is set. So it is not that bad.
//
for (size_t i = 0; i < nameCount; i++)
{
if (_stricmp(DbgIPCEventTypeNames[i].eventName, strEventType) == 0)
return DbgIPCEventTypeNames[i].eventType;
}
return DB_IPCE_INVALID_EVENT;
}
static const char * GetName(DebuggerIPCEventType eventType)
{
enum DbgIPCEventTypeNum
{
#define IPC_EVENT_TYPE0(type, val) type##_Num,
#define IPC_EVENT_TYPE1(type, val) type##_Num,
#define IPC_EVENT_TYPE2(type, val) type##_Num,
#include "dbgipceventtypes.h"
#undef IPC_EVENT_TYPE2
#undef IPC_EVENT_TYPE1
#undef IPC_EVENT_TYPE0
};
size_t i, lim;
if (eventType < DB_IPCE_DEBUGGER_FIRST)
{
i = DB_IPCE_RUNTIME_FIRST_Num + 1;
lim = DB_IPCE_DEBUGGER_FIRST_Num;
}
else
{
i = DB_IPCE_DEBUGGER_FIRST_Num + 1;
lim = nameCount;
}
for (/**/; i < lim; i++)
{
if (DbgIPCEventTypeNames[i].eventType == eventType)
return DbgIPCEventTypeNames[i].eventName;
}
return DbgIPCEventTypeNames[nameCount - 1].eventName;
}
};
#endif // !DACCESS_COMPILE
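// Illustrative use, kept under '#if 0' and never compiled: round-tripping an
// event type and its name, e.g. for logging. Hypothetical:
#if 0
void ExampleLogEventName(DebuggerIPCEventType type)
{
    const char * szName = IPCENames::GetName(type);
    _ASSERTE(IPCENames::GetEventType(const_cast<char *>(szName)) == type);
}
#endif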
//
// NOTE: CPU-specific values below!
//
// DebuggerREGDISPLAY is very similar to the EE REGDISPLAY structure. It holds
// register values that can be saved over calls for each frame in a stack
// trace.
//
// DebuggerIPCE_FloatCount is the number of doubles in the processor's
// floating point stack.
//
// <TODO>Note: We used to just pass the values of the registers for each frame to the Right Side, but I had to add in the
// address of each register, too, to support using enregistered variables on non-leaf frames as args to a func eval. It's
// very, very possible that we would rework the entire code base to just use the register's address instead of passing
// both, but it's way, way too late in V1 to undertake that, so I'm just using these addresses to support our one func
// eval case. Clearly, this needs to be cleaned up post V1.
//
// -- Fri Feb 09 11:21:24 2001</TODO>
//
struct MSLAYOUT DebuggerREGDISPLAY
{
#if defined(TARGET_X86)
#define DebuggerIPCE_FloatCount 8
SIZE_T Edi;
void *pEdi;
SIZE_T Esi;
void *pEsi;
SIZE_T Ebx;
void *pEbx;
SIZE_T Edx;
void *pEdx;
SIZE_T Ecx;
void *pEcx;
SIZE_T Eax;
void *pEax;
SIZE_T FP;
void *pFP;
SIZE_T SP;
SIZE_T PC;
#elif defined(TARGET_AMD64)
#define DebuggerIPCE_FloatCount 16
SIZE_T Rax;
void *pRax;
SIZE_T Rcx;
void *pRcx;
SIZE_T Rdx;
void *pRdx;
SIZE_T Rbx;
void *pRbx;
SIZE_T Rbp;
void *pRbp;
SIZE_T Rsi;
void *pRsi;
SIZE_T Rdi;
void *pRdi;
SIZE_T R8;
void *pR8;
SIZE_T R9;
void *pR9;
SIZE_T R10;
void *pR10;
SIZE_T R11;
void *pR11;
SIZE_T R12;
void *pR12;
SIZE_T R13;
void *pR13;
SIZE_T R14;
void *pR14;
SIZE_T R15;
void *pR15;
SIZE_T SP;
SIZE_T PC;
#elif defined(TARGET_ARM)
#define DebuggerIPCE_FloatCount 32
SIZE_T R0;
void *pR0;
SIZE_T R1;
void *pR1;
SIZE_T R2;
void *pR2;
SIZE_T R3;
void *pR3;
SIZE_T R4;
void *pR4;
SIZE_T R5;
void *pR5;
SIZE_T R6;
void *pR6;
SIZE_T R7;
void *pR7;
SIZE_T R8;
void *pR8;
SIZE_T R9;
void *pR9;
SIZE_T R10;
void *pR10;
SIZE_T R11;
void *pR11;
SIZE_T R12;
void *pR12;
SIZE_T SP;
void *pSP;
SIZE_T LR;
void *pLR;
SIZE_T PC;
void *pPC;
#elif defined(TARGET_ARM64)
#define DebuggerIPCE_FloatCount 32
SIZE_T X[29];
SIZE_T FP;
SIZE_T LR;
SIZE_T SP;
SIZE_T PC;
#else
#define DebuggerIPCE_FloatCount 1
SIZE_T PC;
SIZE_T FP;
SIZE_T SP;
void *pFP;
#endif
};
inline LPVOID GetSPAddress(const DebuggerREGDISPLAY * display)
{
return (LPVOID)&display->SP;
}
#if !defined(TARGET_AMD64) && !defined(TARGET_ARM)
inline LPVOID GetFPAddress(const DebuggerREGDISPLAY * display)
{
return (LPVOID)&display->FP;
}
#endif // !TARGET_AMD64 && !TARGET_ARM
class MSLAYOUT FramePointer
{
friend bool IsCloserToLeaf(FramePointer fp1, FramePointer fp2);
friend bool IsCloserToRoot(FramePointer fp1, FramePointer fp2);
friend bool IsEqualOrCloserToLeaf(FramePointer fp1, FramePointer fp2);
friend bool IsEqualOrCloserToRoot(FramePointer fp1, FramePointer fp2);
public:
static FramePointer MakeFramePointer(LPVOID sp)
{
LIMITED_METHOD_DAC_CONTRACT;
FramePointer fp;
fp.m_sp = sp;
return fp;
}
static FramePointer MakeFramePointer(UINT_PTR sp)
{
SUPPORTS_DAC;
return MakeFramePointer((LPVOID)sp);
}
inline bool operator==(FramePointer fp)
{
return (m_sp == fp.m_sp);
}
inline bool operator!=(FramePointer fp)
{
return !(*this == fp);
}
// This is needed because on the RS, the m_id values of CordbFrame and
// CordbChain are really FramePointers.
LPVOID GetSPValue() const
{
return m_sp;
}
private:
// Declare some private constructors which signatures matching common usage of FramePointer
// to prevent people from accidentally assigning a pointer to a FramePointer().
FramePointer &operator=(LPVOID sp);
FramePointer &operator=(BYTE* sp);
FramePointer &operator=(const BYTE* sp);
LPVOID m_sp;
};
// For non-IA64 platforms, we use stack pointers as frame pointers.
// (The stack grows towards smaller addresses.)
#define LEAF_MOST_FRAME FramePointer::MakeFramePointer((LPVOID)NULL)
#define ROOT_MOST_FRAME FramePointer::MakeFramePointer((LPVOID)-1)
static_assert_no_msg(sizeof(FramePointer) == sizeof(void*));
inline bool IsCloserToLeaf(FramePointer fp1, FramePointer fp2)
{
return (fp1.m_sp < fp2.m_sp);
}
inline bool IsCloserToRoot(FramePointer fp1, FramePointer fp2)
{
return (fp1.m_sp > fp2.m_sp);
}
inline bool IsEqualOrCloserToLeaf(FramePointer fp1, FramePointer fp2)
{
return !IsCloserToRoot(fp1, fp2);
}
inline bool IsEqualOrCloserToRoot(FramePointer fp1, FramePointer fp2)
{
return !IsCloserToLeaf(fp1, fp2);
}
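// Illustrative sketch, kept under '#if 0' and never compiled: with
// stack-pointer-based frame pointers, "closer to leaf" means a smaller SP.
#if 0
void ExampleCompareFrames()
{
    FramePointer fpLeaf = FramePointer::MakeFramePointer((UINT_PTR)0x1000);
    FramePointer fpRoot = FramePointer::MakeFramePointer((UINT_PTR)0x2000);
    _ASSERTE(IsCloserToLeaf(fpLeaf, fpRoot));
    _ASSERTE(IsEqualOrCloserToRoot(fpRoot, fpLeaf));
    _ASSERTE(IsCloserToLeaf(LEAF_MOST_FRAME, fpLeaf)); // NULL is leaf-most
}
#endif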
// struct DebuggerIPCE_FuncData: DebuggerIPCE_FuncData holds data
// to describe a given function, its
// class, and a little bit about the code for the function. This is used
// in the stack trace result data to pass function information back that
// may be needed. It's also used when getting data about a specific function.
//
// void* nativeStartAddressPtr: Ptr to CORDB_ADDRESS, which is
// the address of the real start address of the native code.
// This field will be NULL only if the method hasn't been JITted
// yet (and thus no code is available). Otherwise, it will be
// the address of a CORDB_ADDRESS in the remote memory. This
// CORDB_ADDRESS may be NULL, in which case the code is unavailable
// or has been pitched (return CORDBG_E_CODE_NOT_AVAILABLE)
//
// SIZE_T nVersion: The version of the code that this instance of the
// function is using.
struct MSLAYOUT DebuggerIPCE_FuncData
{
mdMethodDef funcMetadataToken;
VMPTR_DomainAssembly vmDomainAssembly;
mdTypeDef classMetadataToken;
void* ilStartAddress;
SIZE_T ilSize;
SIZE_T currentEnCVersion;
mdSignature localVarSigToken;
};
// struct DebuggerIPCE_JITFuncData: DebuggerIPCE_JITFuncData holds
// a little bit about the JITted code for the function.
//
// void* nativeStartAddressPtr: Ptr to CORDB_ADDRESS, which is
// the address of the real start address of the native code.
// This field will be NULL only if the method hasn't been JITted
// yet (and thus no code is available). Otherwise, it will be
// the address of a CORDB_ADDRESS in the remote memory. This
// CORDB_ADDRESS may be NULL, in which case the code is unavailable
// or has been pitched (return CORDBG_E_CODE_NOT_AVAILABLE)
//
// SIZE_T nativeSize: Size of the native code.
//
// SIZE_T nativeOffset: Offset from the beginning of the function,
// in bytes. This may be non-zero even when nativeStartAddressPtr
// is NULL
// void * nativeCodeJITInfoToken: An opaque value to hand back to the left
// side when fetching the JITInfo for the native code, i.e. the
// IL->native maps for the variables. This may be NULL if no JITInfo is available.
// void * nativeCodeMethodDescToken: An opaque value to hand back to the left
// side when fetching the code. In addition this token can act as the
// unique identity for the native code in the case where there are
// multiple blobs of native code per IL method (i.e. if the method is
// generic code of some kind)
// BOOL isInstantiatedGeneric: Indicates if the method is
// generic code of some kind.
// BOOL jsutAfterILThrow: indicates that code just threw a software exception and
// nativeOffset points to an instruction just after [call IL_Throw].
// This is being used to figure out a real offset of the exception origin.
// By subtracting STACKWALK_CONTROLPC_ADJUST_OFFSET from nativeOffset you can get
// an address somewhere inside [call IL_Throw] instruction.
// void *ilToNativeMapAddr etc.: If nativeCodeJITInfoToken is not NULL then these
// specify the table giving the mapping of IPs.
struct MSLAYOUT DebuggerIPCE_JITFuncData
{
TADDR nativeStartAddressPtr;
SIZE_T nativeHotSize;
// If we have a cold region, need its size & the pointer to where it starts.
TADDR nativeStartAddressColdPtr;
SIZE_T nativeColdSize;
SIZE_T nativeOffset;
LSPTR_DJI nativeCodeJITInfoToken;
VMPTR_MethodDesc vmNativeCodeMethodDescToken;
#ifdef FEATURE_EH_FUNCLETS
BOOL fIsFilterFrame;
SIZE_T parentNativeOffset;
FramePointer fpParentOrSelf;
#endif // FEATURE_EH_FUNCLETS
// indicates if the MethodDesc is a generic function or a method inside a generic class (or
// both!).
BOOL isInstantiatedGeneric;
// this is the version of the jitted code
SIZE_T enCVersion;
BOOL jsutAfterILThrow;
};
//
// DebuggerIPCE_STRData holds data for each stack frame or chain. This data is passed
// from the RC to the DI during a stack walk.
//
#if defined(_MSC_VER)
#pragma warning( push )
#pragma warning( disable:4324 ) // the compiler pads a structure to comply with alignment requirements
#endif // ARM context structures have a 16-byte alignment requirement
struct MSLAYOUT DebuggerIPCE_STRData
{
FramePointer fp;
// @dbgtodo stackwalker/shim- Ideally we should be able to get rid of the DebuggerREGDISPLAY and just use the CONTEXT.
DT_CONTEXT ctx;
DebuggerREGDISPLAY rd;
bool quicklyUnwound;
VMPTR_AppDomain vmCurrentAppDomainToken;
enum EType
{
cMethodFrame = 0,
cChain,
cStubFrame,
cRuntimeNativeFrame
} eType;
union MSLAYOUT
{
// Data for a chain
struct MSLAYOUT
{
CorDebugChainReason chainReason;
bool managed;
} u;
// Data for a Method
struct MSLAYOUT
{
struct DebuggerIPCE_FuncData funcData;
struct DebuggerIPCE_JITFuncData jitFuncData;
SIZE_T ILOffset;
CorDebugMappingResult mapping;
bool fVarArgs;
// Indicates whether the managed method has any metadata.
// Some dynamic methods such as IL stubs and LCG methods don't have any metadata.
// This is used only by the V3 stackwalker, not the V2 one, because we only
// expose dynamic methods as real stack frames in V3.
bool fNoMetadata;
TADDR taAmbientESP;
GENERICS_TYPE_TOKEN exactGenericArgsToken;
DWORD dwExactGenericArgsTokenIndex;
} v;
// Data for a Stub Frame.
struct MSLAYOUT
{
mdMethodDef funcMetadataToken;
VMPTR_DomainAssembly vmDomainAssembly;
VMPTR_MethodDesc vmMethodDesc;
CorDebugInternalFrameType frameType;
} stubFrame;
};
};
#if defined(_MSC_VER)
#pragma warning( pop )
#endif
//
// DebuggerIPCE_BasicTypeData and DebuggerIPCE_ExpandedTypeData
// hold data for each type sent across the
// boundary, whether it be a constructed type List<String> or a non-constructed
// type such as String, Foo or Object.
//
// Logically speaking DebuggerIPCE_BasicTypeData might just be "typeHandle", as
// we could then send further events to ask what the element type, typeToken and moduleToken
// are for the type handle. But as
// nearly all types are non-generic we send across even the basic type information in
// the slightly expanded form shown below, sending the element type and the
// tokens with the type handle itself. The fields debuggerModuleToken, metadataToken and typeHandle
// are only used as follows:
// elementType debuggerModuleToken metadataToken typeHandle
// E_T_INT8 : E_T_INT8 No No No
// Boxed E_T_INT8: E_T_CLASS No No No
// E_T_CLASS, non-generic class: E_T_CLASS Yes Yes No
// E_T_VALUETYPE, non-generic: E_T_VALUETYPE Yes Yes No
// E_T_CLASS, generic class: E_T_CLASS Yes Yes Yes
// E_T_VALUETYPE, generic class: E_T_VALUETYPE Yes Yes Yes
// E_T_BYREF : E_T_BYREF No No Yes
// E_T_PTR : E_T_PTR No No Yes
// E_T_ARRAY etc. : E_T_ARRAY No No Yes
// E_T_FNPTR etc. : E_T_FNPTR No No Yes
// This allows us to always set "typeHandle" to NULL except when dealing with highly nested
// types or function-pointer types (the latter are too complex to transfer over in one hit).
//
struct MSLAYOUT DebuggerIPCE_BasicTypeData
{
CorElementType elementType;
mdTypeDef metadataToken;
VMPTR_Module vmModule;
VMPTR_DomainAssembly vmDomainAssembly;
VMPTR_TypeHandle vmTypeHandle;
};
// DebuggerIPCE_ExpandedTypeData contains more information showing further
// details for array types, byref types etc.
// Whenever you fetch type information from the left-side
// you get back one of these. These in turn contain further
// DebuggerIPCE_BasicTypeData's and typeHandles which you can
// then query to get further information about the type parameters.
// This copes with the nested cases, e.g. jagged arrays,
// String ****, &(String*), Pair<String,Pair<String>>
// and so on.
//
// So this type information is not "fully expanded", it's just a little
// more detail than DebuggerIPCE_BasicTypeData. For type
// instantiations (e.g. List<int>) and
// function pointer types you will need to make further requests for
// information about the type parameters.
// For array types there is always only one type parameter so
// we include that as part of the expanded data.
//
//
struct MSLAYOUT DebuggerIPCE_ExpandedTypeData
{
CorElementType elementType; // Note this is _never_ E_T_VAR, E_T_WITH or E_T_MVAR
union MSLAYOUT
{
// used for E_T_CLASS and E_T_VALUECLASS, E_T_PTR, E_T_BYREF etc.
// For non-constructed E_T_CLASS or E_T_VALUECLASS the tokens will be set and the typeHandle will be NULL
// For constructed E_T_CLASS or E_T_VALUECLASS the tokens will be set and the typeHandle will be non-NULL
// For E_T_PTR etc. the tokens will be NULL and the typeHandle will be non-NULL.
struct MSLAYOUT
{
mdTypeDef metadataToken;
VMPTR_Module vmModule;
VMPTR_DomainAssembly vmDomainAssembly;
VMPTR_TypeHandle typeHandle; // if non-null then further fetches will be needed to get type arguments
} ClassTypeData;
// used for E_T_PTR, E_T_BYREF etc.
struct MSLAYOUT
{
DebuggerIPCE_BasicTypeData unaryTypeArg; // used only when sending back to debugger
} UnaryTypeData;
// used for E_T_ARRAY etc.
struct MSLAYOUT
{
DebuggerIPCE_BasicTypeData arrayTypeArg; // used only when sending back to debugger
DWORD arrayRank;
} ArrayTypeData;
// used for E_T_FNPTR
struct MSLAYOUT
{
VMPTR_TypeHandle typeHandle; // if non-null then further fetches needed to get type arguments
} NaryTypeData;
};
};
// DebuggerIPCE_TypeArgData is used when sending type arguments
// across to a funceval. It contains the DebuggerIPCE_ExpandedTypeData describing the
// essence of the type, but the typeHandle and other
// BasicTypeData fields should be zero and will be ignored.
// The DebuggerIPCE_ExpandedTypeData is then followed
// by the required number of type arguments, each of which
// will be a further DebuggerIPCE_TypeArgData record in the stream of
// flattened type argument data.
struct MSLAYOUT DebuggerIPCE_TypeArgData
{
DebuggerIPCE_ExpandedTypeData data;
unsigned int numTypeArgs; // number of immediate children on the type tree
};
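// Illustrative sketch only: because the flattened stream described above is a
// pre-order traversal, one complete type tree can be skipped recursively.
// 'SkipTypeArgTree' is a hypothetical name introduced here for illustration.
inline const DebuggerIPCE_TypeArgData * SkipTypeArgTree(const DebuggerIPCE_TypeArgData * pNode)
{
    unsigned int cChildren = pNode->numTypeArgs; // immediate children of this node
    pNode++;                                     // step past the node itself...
    for (unsigned int i = 0; i < cChildren; i++)
        pNode = SkipTypeArgTree(pNode);          // ...then past each child subtree
    return pNode;                                // first record after this whole tree
}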
//
// DebuggerIPCE_ObjectData holds the results of a
// GetAndSendObjectInfo, i.e., all the info about an object that the
// Right Side would need to access it. (This includes array, string,
// and nstruct info.)
//
struct MSLAYOUT DebuggerIPCE_ObjectData
{
void *objRef;
bool objRefBad;
SIZE_T objSize;
// Offset from the beginning of the object to the beginning of the first field
SIZE_T objOffsetToVars;
// The type of the object....
struct DebuggerIPCE_ExpandedTypeData objTypeData;
union MSLAYOUT
{
struct MSLAYOUT
{
SIZE_T length;
SIZE_T offsetToStringBase;
} stringInfo;
struct MSLAYOUT
{
SIZE_T rank;
SIZE_T offsetToArrayBase;
SIZE_T offsetToLowerBounds; // 0 if not present
SIZE_T offsetToUpperBounds; // 0 if not present
SIZE_T componentCount;
SIZE_T elementSize;
} arrayInfo;
struct MSLAYOUT
{
struct DebuggerIPCE_BasicTypeData typedByrefType; // the type of the thing contained in a typedByref...
} typedByrefInfo;
};
};
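// Illustrative sketch only: computing the address of element i of a
// single-dimensional, zero-lower-bound array from the layout above (it ignores
// offsetToLowerBounds/offsetToUpperBounds). 'SampleArrayElemAddr' is a
// hypothetical helper introduced purely for illustration.
inline void * SampleArrayElemAddr(const DebuggerIPCE_ObjectData & od, SIZE_T i)
{
    // start of the array data inside the object, plus i elements of elementSize bytes
    return (BYTE *) od.objRef + od.arrayInfo.offsetToArrayBase + i * od.arrayInfo.elementSize;
}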
//
// Remote enregistered info used by CordbValues and for passing
// variable homes between the left and right sides during a func eval.
//
enum RemoteAddressKind
{
RAK_NONE = 0,
RAK_REG,
RAK_REGREG,
RAK_REGMEM,
RAK_MEMREG,
RAK_FLOAT,
RAK_END
};
const CORDB_ADDRESS kLeafFrameRegAddr = 0;
const CORDB_ADDRESS kNonLeafFrameRegAddr = (CORDB_ADDRESS)(-1);
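// Illustrative note (an inference, not stated elsewhere in this header): these two
// sentinels stand in where no real register address is available -- presumably 0
// marks a register in the leaf frame (live in the thread context) and
// (CORDB_ADDRESS)(-1) a register in a non-leaf frame.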
struct MSLAYOUT RemoteAddress
{
RemoteAddressKind kind;
void *frame;
CorDebugRegister reg1;
void *reg1Addr;
SIZE_T reg1Value; // this is the actual value of the register
union MSLAYOUT
{
struct MSLAYOUT
{
CorDebugRegister reg2;
void *reg2Addr;
SIZE_T reg2Value; // this is the actual value of the register
} u;
CORDB_ADDRESS addr;
DWORD floatIndex;
};
};
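// Illustrative sketch only: reassembling a 64-bit value for the RAK_REGREG case,
// assuming -- an assumption for illustration, this header does not pin it down --
// that reg1Value holds the high half and u.reg2Value the low half.
inline ULONGLONG SampleCombineRegRegValue(const RemoteAddress & ra)
{
    return (((ULONGLONG) ra.reg1Value) << 32) | (DWORD) ra.u.reg2Value;
}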
//
// DebuggerIPCE_FuncEvalType specifies the type of a function
// evaluation that will occur.
//
enum DebuggerIPCE_FuncEvalType
{
DB_IPCE_FET_NORMAL,
DB_IPCE_FET_NEW_OBJECT,
DB_IPCE_FET_NEW_OBJECT_NC,
DB_IPCE_FET_NEW_STRING,
DB_IPCE_FET_NEW_ARRAY
};
enum NameChangeType
{
APP_DOMAIN_NAME_CHANGE,
THREAD_NAME_CHANGE
};
//
// DebuggerIPCE_FuncEvalArgData holds data for each argument to a
// function evaluation.
//
struct MSLAYOUT DebuggerIPCE_FuncEvalArgData
{
RemoteAddress argHome; // enregistered variable home
void *argAddr; // address if not enregistered
CorElementType argElementType;
unsigned int fullArgTypeNodeCount; // Number of DebuggerIPCE_TypeArgData nodes in the fullArgType buffer below (if needed - only needed for struct types)
void *fullArgType; // Pointer to LS (DebuggerIPCE_TypeArgData *) buffer holding full description of the argument type (if needed - only needed for struct types)
BYTE argLiteralData[8]; // copy of generic value data
bool argIsLiteral; // true if value is in argLiteralData
bool argIsHandleValue; // true if argAddr is OBJECTHANDLE
};
//
// DebuggerIPCE_FuncEvalInfo holds info necessary to setup a func eval
// operation.
//
struct MSLAYOUT DebuggerIPCE_FuncEvalInfo
{
VMPTR_Thread vmThreadToken;
DebuggerIPCE_FuncEvalType funcEvalType;
mdMethodDef funcMetadataToken;
mdTypeDef funcClassMetadataToken;
VMPTR_DomainAssembly vmDomainAssembly;
RSPTR_CORDBEVAL funcEvalKey;
bool evalDuringException;
unsigned int argCount;
unsigned int genericArgsCount;
unsigned int genericArgsNodeCount;
SIZE_T stringSize;
SIZE_T arrayRank;
};
//
// Used in DebuggerIPCFirstChanceData. This tells the LS what action to take within the hijack
//
enum HijackAction
{
HIJACK_ACTION_EXIT_UNHANDLED,
HIJACK_ACTION_EXIT_HANDLED,
HIJACK_ACTION_WAIT
};
//
// DebuggerIPCFirstChanceData holds info communicated from the LS to the RS when signaling that an exception does not
// belong to the runtime from a first chance hijack. This is used when Win32 debugging only.
//
struct MSLAYOUT DebuggerIPCFirstChanceData
{
LSPTR_CONTEXT pLeftSideContext;
HijackAction action;
UINT debugCounter;
};
//
// DebuggerIPCSecondChanceData holds info communicated from the RS
// to the LS when setting up a second chance exception hijack. This is
// used when Win32 debugging only.
//
struct MSLAYOUT DebuggerIPCSecondChanceData
{
DT_CONTEXT threadContext;
};
//-----------------------------------------------------------------------------
// This struct holds a pointer from the LS whose data needs to be copied to
// the RS. We have to free the memory on the RS.
// The transfer function is called when the RS first reads the event. At this point,
// the LS is stopped while sending the event. Thus the LS pointers only need to be
// valid while the LS is in SendIPCEvent.
//
// Since this data is in an IPC/Marshallable block, it can't have any Ctors (holders)
// in it.
//-----------------------------------------------------------------------------
struct MSLAYOUT Ls_Rs_BaseBuffer
{
#ifdef RIGHT_SIDE_COMPILE
protected:
// Copying data can happen on both the LS and the RS. In the LS case,
// ReadProcessMemory is really reading from its own process memory.
//
void CopyLSDataToRSWorker(ICorDebugDataTarget * pTargethProcess);
// retrieve the RS data and own it
BYTE *TransferRSDataWorker()
{
BYTE *pbRS = m_pbRS;
m_pbRS = NULL;
return pbRS;
}
public:
void CleanUp()
{
if (m_pbRS != NULL)
{
delete [] m_pbRS;
m_pbRS = NULL;
}
}
#else
public:
// Only LS can call this API
void SetLsData(BYTE *pbLS, DWORD cbSize)
{
m_pbRS = NULL;
m_pbLS = pbLS;
m_cbSize = cbSize;
}
#endif // RIGHT_SIDE_COMPILE
public:
// Common APIs.
DWORD GetSize() { return m_cbSize; }
protected:
// Size of data in bytes
DWORD m_cbSize;
// If this is non-null, pointer into LS for buffer.
// LS can free this after the debug event is continued.
BYTE *m_pbLS; // @dbgtodo cross-plat- for cross-platform purposes, this should be a TADDR
// If this is non-null, pointer into RS for buffer. RS must then free this.
// This buffer was copied from the LS (via CopyLSDataToRSWorker).
BYTE *m_pbRS;
};
//-----------------------------------------------------------------------------
// Byte wrapper around the buffer.
//-----------------------------------------------------------------------------
struct MSLAYOUT Ls_Rs_ByteBuffer : public Ls_Rs_BaseBuffer
{
#ifdef RIGHT_SIDE_COMPILE
BYTE *GetRSPointer()
{
return m_pbRS;
}
void CopyLSDataToRS(ICorDebugDataTarget * pTarget);
BYTE *TransferRSData()
{
return TransferRSDataWorker();
}
#endif
};
//-----------------------------------------------------------------------------
// Wrapper around an Ls_Rs_BaseBuffer to get it as a string.
// This can also do some sanity checking.
//-----------------------------------------------------------------------------
struct MSLAYOUT Ls_Rs_StringBuffer : public Ls_Rs_BaseBuffer
{
#ifdef RIGHT_SIDE_COMPILE
const WCHAR * GetString()
{
return reinterpret_cast<const WCHAR*> (m_pbRS);
}
// Copy over the string.
void CopyLSDataToRS(ICorDebugDataTarget * pTarget);
// Caller will pick up ownership.
// Since caller will delete this data, we can't give back a constant pointer.
WCHAR * TransferStringData()
{
return reinterpret_cast<WCHAR*> (TransferRSDataWorker());
}
#endif
};
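#ifdef RIGHT_SIDE_COMPILE
// Illustrative sketch only: the expected RS-side lifecycle for a string buffer
// received inside an IPC event. 'SampleReadIpcString' and its parameters are
// hypothetical names introduced purely for illustration.
inline WCHAR * SampleReadIpcString(Ls_Rs_StringBuffer * pBuf, ICorDebugDataTarget * pTarget)
{
    pBuf->CopyLSDataToRS(pTarget);                // copy while the LS is still stopped in SendIPCEvent
    WCHAR * pString = pBuf->TransferStringData(); // the RS now owns the copy
    // The caller must eventually release it with delete [] on the underlying BYTE
    // buffer, matching how CleanUp() releases m_pbRS.
    return pString;
}
#endif // RIGHT_SIDE_COMPILE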
// Data for a Managed Debug Assistant (MDA) probe.
struct MSLAYOUT DebuggerMDANotification
{
Ls_Rs_StringBuffer szName;
Ls_Rs_StringBuffer szDescription;
Ls_Rs_StringBuffer szXml;
DWORD dwOSThreadId;
CorDebugMDAFlags flags;
};
// The only remaining problem is that register number mappings are different for each platform. It turns out
// that the debugger only uses REGNUM_SP and REGNUM_AMBIENT_SP though, so we can just virtualize these two for
// the target platform.
// Keep this in sync with the definitions in inc/corinfo.h.
#if defined(TARGET_X86)
#define DBG_TARGET_REGNUM_SP 4
#define DBG_TARGET_REGNUM_AMBIENT_SP 9
#ifdef TARGET_X86
static_assert_no_msg(DBG_TARGET_REGNUM_SP == ICorDebugInfo::REGNUM_SP);
static_assert_no_msg(DBG_TARGET_REGNUM_AMBIENT_SP == ICorDebugInfo::REGNUM_AMBIENT_SP);
#endif // TARGET_X86
#elif defined(TARGET_AMD64)
#define DBG_TARGET_REGNUM_SP 4
#define DBG_TARGET_REGNUM_AMBIENT_SP 17
#ifdef TARGET_AMD64
static_assert_no_msg(DBG_TARGET_REGNUM_SP == ICorDebugInfo::REGNUM_SP);
static_assert_no_msg(DBG_TARGET_REGNUM_AMBIENT_SP == ICorDebugInfo::REGNUM_AMBIENT_SP);
#endif // TARGET_AMD64
#elif defined(TARGET_ARM)
#define DBG_TARGET_REGNUM_SP 13
#define DBG_TARGET_REGNUM_AMBIENT_SP 17
#ifdef TARGET_ARM
C_ASSERT(DBG_TARGET_REGNUM_SP == ICorDebugInfo::REGNUM_SP);
C_ASSERT(DBG_TARGET_REGNUM_AMBIENT_SP == ICorDebugInfo::REGNUM_AMBIENT_SP);
#endif // TARGET_ARM
#elif defined(TARGET_ARM64)
#define DBG_TARGET_REGNUM_SP 31
#define DBG_TARGET_REGNUM_AMBIENT_SP 34
#ifdef TARGET_ARM64
C_ASSERT(DBG_TARGET_REGNUM_SP == ICorDebugInfo::REGNUM_SP);
C_ASSERT(DBG_TARGET_REGNUM_AMBIENT_SP == ICorDebugInfo::REGNUM_AMBIENT_SP);
#endif // TARGET_ARM64
#else
#error Target registers are not defined for this platform
#endif
//
// Event structure that is passed between the Runtime Controller and the
// Debugger Interface. Some types of events are a fixed size and have
// entries in the main union, while others are variable length and have
// more specialized data structures that are attached to the end of this
// structure.
//
struct MSLAYOUT DebuggerIPCEvent
{
DebuggerIPCEvent* next;
DebuggerIPCEventType type;
DWORD processId;
DWORD threadId;
VMPTR_AppDomain vmAppDomain;
VMPTR_Thread vmThread;
HRESULT hr;
bool replyRequired;
bool asyncSend;
union MSLAYOUT
{
struct MSLAYOUT
{
// Pointer to a BOOL in the target.
CORDB_ADDRESS pfBeingDebugged;
} LeftSideStartupData;
struct MSLAYOUT
{
// Module whose metadata is being updated
// This tells the RS that the metadata for that module has become invalid.
VMPTR_DomainAssembly vmDomainAssembly;
} MetadataUpdateData;
struct MSLAYOUT
{
// Handle to CLR's internal appdomain object.
VMPTR_AppDomain vmAppDomain;
} AppDomainData;
struct MSLAYOUT
{
VMPTR_DomainAssembly vmDomainAssembly;
} AssemblyData;
#ifdef TEST_DATA_CONSISTENCY
// information necessary for testing whether the LS holds a lock on data
// the RS needs to inspect. See code:DataTest::TestDataSafety and
// code:IDacDbiInterface::TestCrst for more information
struct MSLAYOUT
{
// the lock to be tested
VMPTR_Crst vmCrst;
// indicates whether the LS holds the lock
bool fOkToTake;
} TestCrstData;
// information necessary for testing whether the LS holds a lock on data
// the RS needs to inspect. See code:DataTest::TestDataSafety and
// code:IDacDbiInterface::TestCrst for more information
struct MSLAYOUT
{
// the lock to be tested
VMPTR_SimpleRWLock vmRWLock;
// indicates whether the LS holds the lock
bool fOkToTake;
} TestRWLockData;
#endif // TEST_DATA_CONSISTENCY
// Debug event that a module has been loaded
struct MSLAYOUT
{
// Module that was just loaded.
VMPTR_DomainAssembly vmDomainAssembly;
}LoadModuleData;
struct MSLAYOUT
{
VMPTR_DomainAssembly vmDomainAssembly;
LSPTR_ASSEMBLY debuggerAssemblyToken;
} UnloadModuleData;
// The given module's pdb has been updated.
// Query PDB from OOP
struct MSLAYOUT
{
VMPTR_DomainAssembly vmDomainAssembly;
} UpdateModuleSymsData;
DebuggerMDANotification MDANotification;
struct MSLAYOUT
{
LSPTR_BREAKPOINT breakpointToken;
mdMethodDef funcMetadataToken;
VMPTR_DomainAssembly vmDomainAssembly;
bool isIL;
SIZE_T offset;
SIZE_T encVersion;
LSPTR_METHODDESC nativeCodeMethodDescToken; // points to the MethodDesc if !isIL
} BreakpointData;
struct MSLAYOUT
{
LSPTR_BREAKPOINT breakpointToken;
} BreakpointSetErrorData;
struct MSLAYOUT
{
#ifdef FEATURE_DATABREAKPOINT
CONTEXT context;
#else
int dummy;
#endif
} DataBreakpointData;
struct MSLAYOUT
{
LSPTR_STEPPER stepperToken;
VMPTR_Thread vmThreadToken;
FramePointer frameToken;
bool stepIn;
bool rangeIL;
bool IsJMCStop;
unsigned int totalRangeCount;
CorDebugStepReason reason;
CorDebugUnmappedStop rgfMappingStop;
CorDebugIntercept rgfInterceptStop;
unsigned int rangeCount;
COR_DEBUG_STEP_RANGE range; //note that this is an array
} StepData;
struct MSLAYOUT
{
// An unvalidated GC-handle
VMPTR_OBJECTHANDLE GCHandle;
} GetGCHandleInfo;
struct MSLAYOUT
{
// An unvalidated GC-handle for which we're returning the results
LSPTR_OBJECTHANDLE GCHandle;
// The following are initialized by the LS in response to our query:
VMPTR_AppDomain vmAppDomain; // AD that handle is in (only applicable if fValid).
bool fValid; // Did the LS determine the GC handle to be valid?
} GetGCHandleInfoResult;
// Allocate memory on the left-side
struct MSLAYOUT
{
ULONG bufSize; // number of bytes to allocate
} GetBuffer;
// Memory allocated on the left-side
struct MSLAYOUT
{
void *pBuffer; // LS pointer to the buffer allocated
HRESULT hr; // success / failure
} GetBufferResult;
// Free a buffer allocated on the left-side with GetBuffer
struct MSLAYOUT
{
void *pBuffer; // Pointer previously returned in GetBufferResult
} ReleaseBuffer;
struct MSLAYOUT
{
HRESULT hr;
} ReleaseBufferResult;
// Apply an EnC edit
struct MSLAYOUT
{
VMPTR_DomainAssembly vmDomainAssembly; // Module to edit
DWORD cbDeltaMetadata; // size of blob pointed to by pDeltaMetadata
CORDB_ADDRESS pDeltaMetadata; // pointer to delta metadata in debuggee
// it's the RS's responsibility to allocate and free
// this (and pDeltaIL) using GetBuffer / ReleaseBuffer
CORDB_ADDRESS pDeltaIL; // pointer to delta IL in debuggee
DWORD cbDeltaIL; // size of blob pointed to by pDeltaIL
} ApplyChanges;
struct MSLAYOUT
{
HRESULT hr;
} ApplyChangesResult;
struct MSLAYOUT
{
mdTypeDef classMetadataToken;
VMPTR_DomainAssembly vmDomainAssembly;
LSPTR_ASSEMBLY classDebuggerAssemblyToken;
} LoadClass;
struct MSLAYOUT
{
mdTypeDef classMetadataToken;
VMPTR_DomainAssembly vmDomainAssembly;
LSPTR_ASSEMBLY classDebuggerAssemblyToken;
} UnloadClass;
struct MSLAYOUT
{
VMPTR_DomainAssembly vmDomainAssembly;
bool flag;
} SetClassLoad;
struct MSLAYOUT
{
VMPTR_OBJECTHANDLE vmExceptionHandle;
bool firstChance;
bool continuable;
} Exception;
struct MSLAYOUT
{
VMPTR_Thread vmThreadToken;
} ClearException;
struct MSLAYOUT
{
void *address;
} IsTransitionStub;
struct MSLAYOUT
{
bool isStub;
} IsTransitionStubResult;
struct MSLAYOUT
{
CORDB_ADDRESS startAddress;
bool fCanSetIPOnly;
VMPTR_Thread vmThreadToken;
VMPTR_DomainAssembly vmDomainAssembly;
mdMethodDef mdMethod;
VMPTR_MethodDesc vmMethodDesc;
SIZE_T offset;
bool fIsIL;
void * firstExceptionHandler;
} SetIP; // this is also used for CanSetIP
struct MSLAYOUT
{
int iLevel;
EmbeddedIPCString<MAX_LOG_SWITCH_NAME_LEN + 1> szCategory;
Ls_Rs_StringBuffer szContent;
} FirstLogMessage;
struct MSLAYOUT
{
int iLevel;
int iReason;
EmbeddedIPCString<MAX_LOG_SWITCH_NAME_LEN + 1> szSwitchName;
EmbeddedIPCString<MAX_LOG_SWITCH_NAME_LEN + 1> szParentSwitchName;
} LogSwitchSettingMessage;
// information needed to send to the RS as part of a custom notification from the target
struct MSLAYOUT
{
// Domain assembly for the domain in which the notification occurred
VMPTR_DomainAssembly vmDomainAssembly;
// metadata token for the type of the CustomNotification object
mdTypeDef classToken;
} CustomNotification;
struct MSLAYOUT
{
VMPTR_Thread vmThreadToken;
CorDebugThreadState debugState;
} SetAllDebugState;
DebuggerIPCE_FuncEvalInfo FuncEval;
struct MSLAYOUT
{
CORDB_ADDRESS argDataArea;
LSPTR_DEBUGGEREVAL debuggerEvalKey;
} FuncEvalSetupComplete;
struct MSLAYOUT
{
RSPTR_CORDBEVAL funcEvalKey;
bool successful;
bool aborted;
void *resultAddr;
// AppDomain that the result is in.
VMPTR_AppDomain vmAppDomain;
VMPTR_OBJECTHANDLE vmObjectHandle;
DebuggerIPCE_ExpandedTypeData resultType;
} FuncEvalComplete;
struct MSLAYOUT
{
LSPTR_DEBUGGEREVAL debuggerEvalKey;
} FuncEvalAbort;
struct MSLAYOUT
{
LSPTR_DEBUGGEREVAL debuggerEvalKey;
} FuncEvalRudeAbort;
struct MSLAYOUT
{
LSPTR_DEBUGGEREVAL debuggerEvalKey;
} FuncEvalCleanup;
struct MSLAYOUT
{
void *objectRefAddress;
VMPTR_OBJECTHANDLE vmObjectHandle;
void *newReference;
} SetReference;
struct MSLAYOUT
{
NameChangeType eventType;
VMPTR_AppDomain vmAppDomain;
VMPTR_Thread vmThread;
} NameChange;
struct MSLAYOUT
{
VMPTR_DomainAssembly vmDomainAssembly;
BOOL fAllowJitOpts;
BOOL fEnableEnC;
} JitDebugInfo;
// EnC Remap opportunity
struct MSLAYOUT
{
VMPTR_DomainAssembly vmDomainAssembly;
mdMethodDef funcMetadataToken; // methodDef of function with remap opportunity
SIZE_T currentVersionNumber; // version currently executing
SIZE_T resumeVersionNumber; // latest version
SIZE_T currentILOffset; // the IL offset of the current IP
SIZE_T *resumeILOffset; // pointer into left-side where an offset to resume
// to should be written if remap is desired.
} EnCRemap;
// EnC Remap has taken place
struct MSLAYOUT
{
VMPTR_DomainAssembly vmDomainAssembly;
mdMethodDef funcMetadataToken; // methodDef of function that was remapped
} EnCRemapComplete;
// Notification that the LS is about to update a CLR data structure to account for a
// specific edit made by EnC (function add/update or field add).
struct MSLAYOUT
{
VMPTR_DomainAssembly vmDomainAssembly;
mdToken memberMetadataToken; // Either a methodDef token indicating the function that
// was updated/added, or a fieldDef token indicating the
// field which was added.
mdTypeDef classMetadataToken; // TypeDef token of the class in which the update was made
SIZE_T newVersionNumber; // The new function/module version
} EnCUpdate;
struct MSLAYOUT
{
void *oldData;
void *newData;
DebuggerIPCE_BasicTypeData type;
} SetValueClass;
// Event used to tell LS if a single function is user or non-user code.
// Same structure used to get function status.
// @todo - Perhaps we can bundle these up so we can set multiple funcs w/ 1 event?
struct MSLAYOUT
{
VMPTR_DomainAssembly vmDomainAssembly;
mdMethodDef funcMetadataToken;
DWORD dwStatus;
} SetJMCFunctionStatus;
struct MSLAYOUT
{
TASKID taskid;
} GetThreadForTaskId;
struct MSLAYOUT
{
VMPTR_Thread vmThreadToken;
} GetThreadForTaskIdResult;
struct MSLAYOUT
{
CONNID connectionId;
} ConnectionChange;
struct MSLAYOUT
{
CONNID connectionId;
EmbeddedIPCString<MAX_LONGPATH> wzConnectionName;
} CreateConnection;
struct MSLAYOUT
{
void *objectToken;
CorDebugHandleType handleType;
} CreateHandle;
struct MSLAYOUT
{
VMPTR_OBJECTHANDLE vmObjectHandle;
} CreateHandleResult;
// used in DB_IPCE_DISPOSE_HANDLE event
struct MSLAYOUT
{
VMPTR_OBJECTHANDLE vmObjectHandle;
CorDebugHandleType handleType;
} DisposeHandle;
struct MSLAYOUT
{
FramePointer framePointer;
SIZE_T nOffset;
CorDebugExceptionCallbackType eventType;
DWORD dwFlags;
VMPTR_OBJECTHANDLE vmExceptionHandle;
} ExceptionCallback2;
struct MSLAYOUT
{
CorDebugExceptionUnwindCallbackType eventType;
DWORD dwFlags;
} ExceptionUnwind;
struct MSLAYOUT
{
VMPTR_Thread vmThreadToken;
FramePointer frameToken;
} InterceptException;
struct MSLAYOUT
{
VMPTR_Module vmModule;
void * pMetadataStart;
ULONG nMetadataSize;
} MetadataUpdateRequest;
};
};
// When using a network transport rather than shared memory buffers, CorDBIPC_BUFFER_SIZE is the upper bound
// for a single DebuggerIPCEvent structure. This now relates to the maximum size of a network message and is
// orthogonal to the host's page size. Round the buffer size up to a multiple of 8, since MSVC seems more
// aggressive in this regard than gcc.
#define CorDBIPC_TRANSPORT_BUFFER_SIZE (((sizeof(DebuggerIPCEvent) + 7) / 8) * 8)
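// For example (illustrative arithmetic only): a 1-byte event would round up to 8,
// an exactly-8-byte one stays at 8, and a 9-byte one rounds up to 16.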
// A DebuggerIPCEvent must fit in the send & receive buffers, which are CorDBIPC_BUFFER_SIZE bytes.
static_assert_no_msg(sizeof(DebuggerIPCEvent) <= CorDBIPC_BUFFER_SIZE);
static_assert_no_msg(CorDBIPC_TRANSPORT_BUFFER_SIZE <= CorDBIPC_BUFFER_SIZE);
// 2*sizeof(WCHAR) for the two string terminating characters in the FirstLogMessage
#define LOG_MSG_PADDING 4
#endif /* _DbgIPCEvents_h_ */
| -1 |
dotnet/runtime | 66,213 | [mono] Put WeakAttribute support under an ifdef | This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | lambdageek | 2022-03-04T19:50:51Z | 2022-03-08T14:08:40Z | 34880aaeb463cc17268b44aecb80dda0914cfa01 | 8342a08b8bf0ff2ffb64cd5992e2d60c8d6d63c1 | [mono] Put WeakAttribute support under an ifdef. This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | ./src/coreclr/pal/src/libunwind/src/x86_64/Gget_proc_info.c | /* libunwind - a platform-independent unwind library
Copyright (c) 2002-2003 Hewlett-Packard Development Company, L.P.
Contributed by David Mosberger-Tang <[email protected]>
Modified for x86_64 by Max Asbock <[email protected]>
This file is part of libunwind.
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. */
#include "unwind_i.h"
int
unw_get_proc_info (unw_cursor_t *cursor, unw_proc_info_t *pi)
{
struct cursor *c = (struct cursor *) cursor;
if (dwarf_make_proc_info (&c->dwarf) < 0)
{
/* On x86-64, some key routines such as _start() and _dl_start()
are missing DWARF unwind info. We don't want to fail in that
case, because those frames are uninteresting and just mark
the end of the frame-chain anyhow. */
memset (pi, 0, sizeof (*pi));
pi->start_ip = c->dwarf.ip;
pi->end_ip = c->dwarf.ip + 1;
return 0;
}
*pi = c->dwarf.pi;
return 0;
}
| /* libunwind - a platform-independent unwind library
Copyright (c) 2002-2003 Hewlett-Packard Development Company, L.P.
Contributed by David Mosberger-Tang <[email protected]>
Modified for x86_64 by Max Asbock <[email protected]>
This file is part of libunwind.
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. */
#include "unwind_i.h"
int
unw_get_proc_info (unw_cursor_t *cursor, unw_proc_info_t *pi)
{
struct cursor *c = (struct cursor *) cursor;
if (dwarf_make_proc_info (&c->dwarf) < 0)
{
/* On x86-64, some key routines such as _start() and _dl_start()
are missing DWARF unwind info. We don't want to fail in that
case, because those frames are uninteresting and just mark
the end of the frame-chain anyhow. */
memset (pi, 0, sizeof (*pi));
pi->start_ip = c->dwarf.ip;
pi->end_ip = c->dwarf.ip + 1;
return 0;
}
*pi = c->dwarf.pi;
return 0;
}
| -1 |
dotnet/runtime | 66,213 | [mono] Put WeakAttribute support under an ifdef | This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | lambdageek | 2022-03-04T19:50:51Z | 2022-03-08T14:08:40Z | 34880aaeb463cc17268b44aecb80dda0914cfa01 | 8342a08b8bf0ff2ffb64cd5992e2d60c8d6d63c1 | [mono] Put WeakAttribute support under an ifdef. This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | ./src/mono/mono/mini/exceptions-mips.c | /**
* \file
* exception support for MIPS
*
* Authors:
* Mark Mason ([email protected])
*
* Based on exceptions-ppc.c by:
* Dietmar Maurer ([email protected])
* Paolo Molaro ([email protected])
*
* (C) 2006 Broadcom
* (C) 2001 Ximian, Inc.
*/
#include <config.h>
#include <glib.h>
#include <signal.h>
#include <string.h>
#include <mono/arch/mips/mips-codegen.h>
#include <mono/metadata/tabledefs.h>
#include <mono/metadata/threads.h>
#include <mono/metadata/debug-helpers.h>
#include <mono/metadata/exception.h>
#include <mono/metadata/mono-debug.h>
#include "mini.h"
#include "mini-mips.h"
#include "mini-runtime.h"
#include "aot-runtime.h"
#include "mono/utils/mono-tls-inline.h"
#define GENERIC_EXCEPTION_SIZE 256
/*
* mono_arch_get_restore_context:
*
* Returns a pointer to a method which restores a previously saved MonoContext.
* The first argument in a0 is the pointer to the MonoContext.
*/
gpointer
mono_arch_get_restore_context (MonoTrampInfo **info, gboolean aot)
{
int i;
guint8 *code;
static guint8 start [512];
static int inited = 0;
guint32 iregs_to_restore;
g_assert (!aot);
if (info)
*info = NULL;
if (inited)
return start;
inited = 1;
code = start;
mips_move (code, mips_at, mips_a0);
iregs_to_restore = (MONO_ARCH_CALLEE_SAVED_REGS \
| (1 << mips_sp) | (1 << mips_ra));
for (i = 0; i < MONO_SAVED_GREGS; ++i) {
//if (iregs_to_restore & (1 << i)) {
if (i != mips_zero && i != mips_at) {
MIPS_LW (code, i, mips_at, G_STRUCT_OFFSET (MonoContext, sc_regs[i]));
}
}
/* Get the address to return to */
mips_lw (code, mips_t9, mips_at, G_STRUCT_OFFSET (MonoContext, sc_pc));
/* jump to the saved IP */
mips_jr (code, mips_t9);
mips_nop (code);
/* never reached */
mips_break (code, 0xff);
g_assert ((code - start) < sizeof(start));
mono_arch_flush_icache (start, code - start);
MONO_PROFILER_RAISE (jit_code_buffer, (start, code - start, MONO_PROFILER_CODE_BUFFER_EXCEPTION_HANDLING, NULL));
return start;
}
/*
* mono_arch_get_call_filter:
*
* Returns a pointer to a method which calls an exception filter. We
* also use this function to call finally handlers (we pass NULL as
* @exc object in this case).
*
* This function is invoked as
* call_handler (MonoContext *ctx, handler)
*
* Where 'handler' is a function to be invoked as:
* handler (void)
*/
gpointer
mono_arch_get_call_filter (MonoTrampInfo **info, gboolean aot)
{
static guint8 start [320];
static int inited = 0;
guint8 *code;
int alloc_size;
int offset;
g_assert (!aot);
if (info)
*info = NULL;
if (inited)
return start;
inited = 1;
code = start;
alloc_size = 64;
g_assert ((alloc_size & (MIPS_STACK_ALIGNMENT-1)) == 0);
mips_addiu (code, mips_sp, mips_sp, -alloc_size);
mips_sw (code, mips_ra, mips_sp, alloc_size + MIPS_RET_ADDR_OFFSET);
/* Save global registers on stack (s0 - s7) */
offset = 16;
MIPS_SW (code, mips_s0, mips_sp, offset); offset += IREG_SIZE;
MIPS_SW (code, mips_s1, mips_sp, offset); offset += IREG_SIZE;
MIPS_SW (code, mips_s2, mips_sp, offset); offset += IREG_SIZE;
MIPS_SW (code, mips_s3, mips_sp, offset); offset += IREG_SIZE;
MIPS_SW (code, mips_s4, mips_sp, offset); offset += IREG_SIZE;
MIPS_SW (code, mips_s5, mips_sp, offset); offset += IREG_SIZE;
MIPS_SW (code, mips_s6, mips_sp, offset); offset += IREG_SIZE;
MIPS_SW (code, mips_s7, mips_sp, offset); offset += IREG_SIZE;
MIPS_SW (code, mips_fp, mips_sp, offset); offset += IREG_SIZE;
/* Restore global registers from MonoContext, including the frame pointer */
MIPS_LW (code, mips_s0, mips_a0, G_STRUCT_OFFSET (MonoContext, sc_regs[mips_s0]));
MIPS_LW (code, mips_s1, mips_a0, G_STRUCT_OFFSET (MonoContext, sc_regs[mips_s1]));
MIPS_LW (code, mips_s2, mips_a0, G_STRUCT_OFFSET (MonoContext, sc_regs[mips_s2]));
MIPS_LW (code, mips_s3, mips_a0, G_STRUCT_OFFSET (MonoContext, sc_regs[mips_s3]));
MIPS_LW (code, mips_s4, mips_a0, G_STRUCT_OFFSET (MonoContext, sc_regs[mips_s4]));
MIPS_LW (code, mips_s5, mips_a0, G_STRUCT_OFFSET (MonoContext, sc_regs[mips_s5]));
MIPS_LW (code, mips_s6, mips_a0, G_STRUCT_OFFSET (MonoContext, sc_regs[mips_s6]));
MIPS_LW (code, mips_s7, mips_a0, G_STRUCT_OFFSET (MonoContext, sc_regs[mips_s7]));
MIPS_LW (code, mips_fp, mips_a0, G_STRUCT_OFFSET (MonoContext, sc_regs[mips_fp]));
/* a1 is the handler to call */
mips_move (code, mips_t9, mips_a1);
/* jump to the saved IP */
mips_jalr (code, mips_t9, mips_ra);
mips_nop (code);
/* restore all regs from the stack */
offset = 16;
MIPS_LW (code, mips_s0, mips_sp, offset); offset += IREG_SIZE;
MIPS_LW (code, mips_s1, mips_sp, offset); offset += IREG_SIZE;
MIPS_LW (code, mips_s2, mips_sp, offset); offset += IREG_SIZE;
MIPS_LW (code, mips_s3, mips_sp, offset); offset += IREG_SIZE;
MIPS_LW (code, mips_s4, mips_sp, offset); offset += IREG_SIZE;
MIPS_LW (code, mips_s5, mips_sp, offset); offset += IREG_SIZE;
MIPS_LW (code, mips_s6, mips_sp, offset); offset += IREG_SIZE;
MIPS_LW (code, mips_s7, mips_sp, offset); offset += IREG_SIZE;
MIPS_LW (code, mips_fp, mips_sp, offset); offset += IREG_SIZE;
/* epilog */
mips_lw (code, mips_ra, mips_sp, alloc_size + MIPS_RET_ADDR_OFFSET);
mips_addiu (code, mips_sp, mips_sp, alloc_size);
mips_jr (code, mips_ra);
mips_nop (code);
g_assert ((code - start) < sizeof(start));
mono_arch_flush_icache (start, code - start);
MONO_PROFILER_RAISE (jit_code_buffer, (start, code - start, MONO_PROFILER_CODE_BUFFER_EXCEPTION_HANDLING, NULL));
return start;
}
static void
throw_exception (MonoObject *exc, unsigned long eip, unsigned long esp, gboolean rethrow, gboolean preserve_ips)
{
ERROR_DECL (error);
MonoContext ctx;
#ifdef DEBUG_EXCEPTIONS
g_print ("throw_exception: exc=%p eip=%p esp=%p rethrow=%d\n",
exc, (void *)eip, (void *) esp, rethrow);
#endif
/* adjust eip so that it points into the call instruction; on MIPS the saved return address is 8 bytes (jump + delay slot) past the call */
eip -= 8;
memset (&ctx, 0, sizeof (MonoContext));
/*g_print ("stack in throw: %p\n", esp);*/
memcpy (&ctx.sc_regs, (void *)(esp + MIPS_STACK_PARAM_OFFSET),
sizeof (gulong) * MONO_SAVED_GREGS);
memset (&ctx.sc_fpregs, 0, sizeof (mips_freg) * MONO_SAVED_FREGS);
MONO_CONTEXT_SET_IP (&ctx, eip);
if (mono_object_isinst_checked (exc, mono_defaults.exception_class, error)) {
MonoException *mono_ex = (MonoException*)exc;
if (!rethrow && !mono_ex->caught_in_unmanaged) {
mono_ex->stack_trace = NULL;
mono_ex->trace_ips = NULL;
		}
		if (preserve_ips) {
mono_ex->caught_in_unmanaged = TRUE;
}
}
mono_error_assert_ok (error);
mono_handle_exception (&ctx, exc);
#ifdef DEBUG_EXCEPTIONS
g_print ("throw_exception: restore to pc=%p sp=%p fp=%p ctx=%p\n",
(void *) ctx.sc_pc, (void *) ctx.sc_regs[mips_sp],
(void *) ctx.sc_regs[mips_fp], &ctx);
#endif
mono_restore_context (&ctx);
g_assert_not_reached ();
}
/**
* arch_get_throw_exception_generic:
*
* Returns a function pointer which can be used to raise
* exceptions. The returned function has the following
* signature: void (*func) (MonoException *exc); or
* void (*func) (char *exc_name);
*
*/
static gpointer
mono_arch_get_throw_exception_generic (guint8 *start, int size, int corlib, gboolean rethrow, gboolean preserve_ips)
{
guint8 *code;
int alloc_size, pos, i;
code = start;
//g_print ("mono_arch_get_throw_exception_generic: code=%p\n", code);
pos = 0;
/* XXX - save all the FP regs on the stack ? */
pos += MONO_MAX_IREGS * sizeof(guint32);
alloc_size = MIPS_MINIMAL_STACK_SIZE + pos + 64;
// align to MIPS_STACK_ALIGNMENT bytes
alloc_size += MIPS_STACK_ALIGNMENT - 1;
alloc_size &= ~(MIPS_STACK_ALIGNMENT - 1);
g_assert ((alloc_size & (MIPS_STACK_ALIGNMENT-1)) == 0);
mips_addiu (code, mips_sp, mips_sp, -alloc_size);
mips_sw (code, mips_ra, mips_sp, alloc_size + MIPS_RET_ADDR_OFFSET);
/* Save all the regs on the stack */
for (i = 0; i < MONO_MAX_IREGS; i++) {
if (i != mips_sp)
MIPS_SW (code, i, mips_sp, i*IREG_SIZE + MIPS_STACK_PARAM_OFFSET);
else {
mips_addiu (code, mips_at, mips_sp, alloc_size);
MIPS_SW (code, mips_at, mips_sp, i*IREG_SIZE + MIPS_STACK_PARAM_OFFSET);
}
}
if (corlib) {
mips_move (code, mips_a1, mips_a0);
mips_load (code, mips_a0, mono_defaults.corlib);
mips_load (code, mips_t9, mono_exception_from_token);
mips_jalr (code, mips_t9, mips_ra);
mips_nop (code);
mips_move (code, mips_a0, mips_v0);
}
/* call throw_exception (exc, ip, sp, rethrow) */
/* exc is already in place in a0 */
/* pointer to ip */
if (corlib)
mips_lw (code, mips_a1, mips_sp, alloc_size + MIPS_RET_ADDR_OFFSET);
else
mips_move (code, mips_a1, mips_ra);
/* current sp & rethrow */
mips_move (code, mips_a2, mips_sp);
mips_addiu (code, mips_a3, mips_zero, rethrow);
mips_load (code, mips_t9, throw_exception);
mips_jr (code, mips_t9);
mips_nop (code);
/* we should never reach this breakpoint */
mips_break (code, 0xfe);
g_assert ((code - start) < size);
mono_arch_flush_icache (start, code - start);
MONO_PROFILER_RAISE (jit_code_buffer, (start, code - start, MONO_PROFILER_CODE_BUFFER_EXCEPTION_HANDLING, NULL));
return start;
}
/**
* mono_arch_get_rethrow_exception:
* \returns a function pointer which can be used to rethrow
* exceptions. The returned function has the following
* signature: void (*func) (MonoException *exc);
*/
gpointer
mono_arch_get_rethrow_exception (MonoTrampInfo **info, gboolean aot)
{
static guint8 start [GENERIC_EXCEPTION_SIZE];
static int inited = 0;
g_assert (!aot);
if (info)
*info = NULL;
if (inited)
return start;
mono_arch_get_throw_exception_generic (start, sizeof (start), FALSE, TRUE, FALSE);
inited = 1;
return start;
}
/**
* mono_arch_get_rethrow_preserve_exception:
* \returns a function pointer which can be used to rethrow
* exceptions while avoiding modification of saved trace_ips.
* The returned function has the following
* signature: void (*func) (MonoException *exc);
*/
gpointer
mono_arch_get_rethrow_preserve_exception (MonoTrampInfo **info, gboolean aot)
{
static guint8 start [GENERIC_EXCEPTION_SIZE];
static int inited = 0;
g_assert (!aot);
if (info)
*info = NULL;
if (inited)
return start;
mono_arch_get_throw_exception_generic (start, sizeof (start), FALSE, TRUE, TRUE);
inited = 1;
return start;
}
/**
* arch_get_throw_exception:
*
* Returns a function pointer which can be used to raise
* exceptions. The returned function has the following
* signature: void (*func) (MonoException *exc);
* For example to raise an arithmetic exception you can use:
*
* x86_push_imm (code, mono_get_exception_arithmetic ());
* x86_call_code (code, arch_get_throw_exception ());
*
*/
gpointer
mono_arch_get_throw_exception (MonoTrampInfo **info, gboolean aot)
{
static guint8 start [GENERIC_EXCEPTION_SIZE];
static int inited = 0;
g_assert (!aot);
if (info)
*info = NULL;
if (inited)
return start;
mono_arch_get_throw_exception_generic (start, sizeof (start), FALSE, FALSE, FALSE);
inited = 1;
return start;
}
gpointer
mono_arch_get_throw_exception_by_name (void)
{
guint8 *start, *code;
int size = 64;
/* Not used on MIPS */
start = code = mono_global_codeman_reserve (size);
mips_break (code, 0xfd);
mono_arch_flush_icache (start, code - start);
MONO_PROFILER_RAISE (jit_code_buffer, (start, code - start, MONO_PROFILER_CODE_BUFFER_EXCEPTION_HANDLING, NULL));
return start;
}
/**
* mono_arch_get_throw_corlib_exception:
* \returns a function pointer which can be used to raise
* corlib exceptions. The returned function has the following
* signature: void (*func) (guint32 ex_token, guint32 offset);
* On MIPS, the offset argument is missing.
*/
gpointer
mono_arch_get_throw_corlib_exception (MonoTrampInfo **info, gboolean aot)
{
static guint8 start [GENERIC_EXCEPTION_SIZE];
static int inited = 0;
g_assert (!aot);
if (info)
*info = NULL;
if (inited)
return start;
mono_arch_get_throw_exception_generic (start, sizeof (start), TRUE, FALSE, FALSE);
inited = 1;
return start;
}
/*
* mono_arch_unwind_frame:
*
* This function is used to gather information from @ctx, and store it in @frame_info.
* It unwinds one stack frame, and stores the resulting context into @new_ctx. @lmf
* is modified if needed.
* Returns TRUE on success, FALSE otherwise.
*/
gboolean
mono_arch_unwind_frame (MonoJitTlsData *jit_tls,
MonoJitInfo *ji, MonoContext *ctx,
MonoContext *new_ctx, MonoLMF **lmf,
host_mgreg_t **save_locations,
StackFrameInfo *frame)
{
memset (frame, 0, sizeof (StackFrameInfo));
frame->ji = ji;
*new_ctx = *ctx;
if (ji != NULL) {
int i;
gpointer ip = MONO_CONTEXT_GET_IP (ctx);
host_mgreg_t regs [MONO_MAX_IREGS + 1];
guint8 *cfa;
guint32 unwind_info_len;
guint8 *unwind_info;
if (ji->is_trampoline)
frame->type = FRAME_TYPE_TRAMPOLINE;
else
frame->type = FRAME_TYPE_MANAGED;
unwind_info = mono_jinfo_get_unwind_info (ji, &unwind_info_len);
for (i = 0; i < MONO_MAX_IREGS; ++i)
regs [i] = new_ctx->sc_regs [i];
gboolean success = mono_unwind_frame (unwind_info, unwind_info_len, ji->code_start,
(guint8*)ji->code_start + ji->code_size,
ip, NULL, regs, MONO_MAX_IREGS,
save_locations, MONO_MAX_IREGS, &cfa);
if (!success)
return FALSE;
for (i = 0; i < MONO_MAX_IREGS; ++i)
new_ctx->sc_regs [i] = regs [i];
new_ctx->sc_pc = regs [mips_ra];
new_ctx->sc_regs [mips_sp] = (host_mgreg_t)(gsize)cfa;
/* we subtract 8, so that the IP points into the call instruction */
MONO_CONTEXT_SET_IP (new_ctx, new_ctx->sc_pc - 8);
/* Sanity check -- we should have made progress here */
g_assert (MONO_CONTEXT_GET_SP (new_ctx) != MONO_CONTEXT_GET_SP (ctx));
return TRUE;
} else if (*lmf) {
g_assert ((((guint64)(*lmf)->previous_lmf) & 2) == 0);
if (!(*lmf)->method) {
#ifdef DEBUG_EXCEPTIONS
g_print ("mono_arch_unwind_frame: bad lmf @ %p\n", (void *) *lmf);
#endif
return FALSE;
}
g_assert (((*lmf)->magic == MIPS_LMF_MAGIC1) || ((*lmf)->magic == MIPS_LMF_MAGIC2));
ji = mini_jit_info_table_find ((gpointer)(*lmf)->eip);
if (!ji)
return FALSE;
frame->ji = ji;
frame->type = FRAME_TYPE_MANAGED_TO_NATIVE;
memcpy (&new_ctx->sc_regs, (*lmf)->iregs, sizeof (gulong) * MONO_SAVED_GREGS);
memcpy (&new_ctx->sc_fpregs, (*lmf)->fregs, sizeof (float) * MONO_SAVED_FREGS);
MONO_CONTEXT_SET_IP (new_ctx, (*lmf)->eip);
/* ensure that we've made progress */
g_assert (new_ctx->sc_pc != ctx->sc_pc);
*lmf = (gpointer)(((gsize)(*lmf)->previous_lmf) & ~3);
return TRUE;
}
return FALSE;
}
gpointer
mono_arch_ip_from_context (void *sigctx)
{
return (gpointer)(gsize)UCONTEXT_REG_PC (sigctx);
}
/*
* handle_exception:
*
* Called by resuming from a signal handler.
*/
static void
handle_signal_exception (gpointer obj)
{
MonoJitTlsData *jit_tls = mono_tls_get_jit_tls ();
MonoContext ctx;
memcpy (&ctx, &jit_tls->ex_ctx, sizeof (MonoContext));
mono_handle_exception (&ctx, obj);
mono_restore_context (&ctx);
}
/*
* This is the function called from the signal handler
*/
gboolean
mono_arch_handle_exception (void *ctx, gpointer obj)
{
#if defined(MONO_CROSS_COMPILE)
g_assert_not_reached ();
#elif defined(MONO_ARCH_USE_SIGACTION)
void *sigctx = ctx;
/*
* Handling the exception in the signal handler is problematic, since the original
* signal is disabled, and we could run arbitrary code through the debugger. So
* resume into the normal stack and do most work there if possible.
*/
MonoJitTlsData *jit_tls = mono_tls_get_jit_tls ();
guint64 sp = UCONTEXT_GREGS (sigctx) [mips_sp];
/* Pass the ctx parameter in TLS */
mono_sigctx_to_monoctx (sigctx, &jit_tls->ex_ctx);
/* The others in registers */
UCONTEXT_GREGS (sigctx)[mips_a0] = (gsize)obj;
/* Allocate a stack frame */
sp -= 256;
UCONTEXT_GREGS (sigctx)[mips_sp] = sp;
UCONTEXT_REG_PC (sigctx) = (gsize)handle_signal_exception;
return TRUE;
#else
MonoContext mctx;
gboolean result;
mono_sigctx_to_monoctx (ctx, &mctx);
result = mono_handle_exception (&mctx, obj);
/* restore the context so that returning from the signal handler will invoke
* the catch clause
*/
mono_monoctx_to_sigctx (&mctx, ctx);
return result;
#endif
}
/*
* mono_arch_setup_resume_sighandler_ctx:
*
* Setup CTX so execution continues at FUNC.
*/
void
mono_arch_setup_resume_sighandler_ctx (MonoContext *ctx, gpointer func)
{
MONO_CONTEXT_SET_IP (ctx,func);
}
| /**
* \file
* exception support for MIPS
*
* Authors:
* Mark Mason ([email protected])
*
* Based on exceptions-ppc.c by:
* Dietmar Maurer ([email protected])
* Paolo Molaro ([email protected])
*
* (C) 2006 Broadcom
* (C) 2001 Ximian, Inc.
*/
#include <config.h>
#include <glib.h>
#include <signal.h>
#include <string.h>
#include <mono/arch/mips/mips-codegen.h>
#include <mono/metadata/tabledefs.h>
#include <mono/metadata/threads.h>
#include <mono/metadata/debug-helpers.h>
#include <mono/metadata/exception.h>
#include <mono/metadata/mono-debug.h>
#include "mini.h"
#include "mini-mips.h"
#include "mini-runtime.h"
#include "aot-runtime.h"
#include "mono/utils/mono-tls-inline.h"
#define GENERIC_EXCEPTION_SIZE 256
/*
* mono_arch_get_restore_context:
*
* Returns a pointer to a method which restores a previously saved MonoContext.
* The first argument in a0 is the pointer to the MonoContext.
*/
gpointer
mono_arch_get_restore_context (MonoTrampInfo **info, gboolean aot)
{
int i;
guint8 *code;
static guint8 start [512];
static int inited = 0;
guint32 iregs_to_restore;
g_assert (!aot);
if (info)
*info = NULL;
if (inited)
return start;
inited = 1;
code = start;
mips_move (code, mips_at, mips_a0);
iregs_to_restore = (MONO_ARCH_CALLEE_SAVED_REGS \
| (1 << mips_sp) | (1 << mips_ra));
for (i = 0; i < MONO_SAVED_GREGS; ++i) {
//if (iregs_to_restore & (1 << i)) {
if (i != mips_zero && i != mips_at) {
MIPS_LW (code, i, mips_at, G_STRUCT_OFFSET (MonoContext, sc_regs[i]));
}
}
/* Get the address to return to */
mips_lw (code, mips_t9, mips_at, G_STRUCT_OFFSET (MonoContext, sc_pc));
/* jump to the saved IP */
mips_jr (code, mips_t9);
mips_nop (code);
/* never reached */
mips_break (code, 0xff);
g_assert ((code - start) < sizeof(start));
mono_arch_flush_icache (start, code - start);
MONO_PROFILER_RAISE (jit_code_buffer, (start, code - start, MONO_PROFILER_CODE_BUFFER_EXCEPTION_HANDLING, NULL));
return start;
}
/*
* mono_arch_get_call_filter:
*
* Returns a pointer to a method which calls an exception filter. We
* also use this function to call finally handlers (we pass NULL as
* @exc object in this case).
*
* This function is invoked as
* call_handler (MonoContext *ctx, handler)
*
* Where 'handler' is a function to be invoked as:
* handler (void)
*/
gpointer
mono_arch_get_call_filter (MonoTrampInfo **info, gboolean aot)
{
static guint8 start [320];
static int inited = 0;
guint8 *code;
int alloc_size;
int offset;
g_assert (!aot);
if (info)
*info = NULL;
if (inited)
return start;
inited = 1;
code = start;
alloc_size = 64;
g_assert ((alloc_size & (MIPS_STACK_ALIGNMENT-1)) == 0);
mips_addiu (code, mips_sp, mips_sp, -alloc_size);
mips_sw (code, mips_ra, mips_sp, alloc_size + MIPS_RET_ADDR_OFFSET);
/* Save global registers on stack (s0 - s7) */
offset = 16;
MIPS_SW (code, mips_s0, mips_sp, offset); offset += IREG_SIZE;
MIPS_SW (code, mips_s1, mips_sp, offset); offset += IREG_SIZE;
MIPS_SW (code, mips_s2, mips_sp, offset); offset += IREG_SIZE;
MIPS_SW (code, mips_s3, mips_sp, offset); offset += IREG_SIZE;
MIPS_SW (code, mips_s4, mips_sp, offset); offset += IREG_SIZE;
MIPS_SW (code, mips_s5, mips_sp, offset); offset += IREG_SIZE;
MIPS_SW (code, mips_s6, mips_sp, offset); offset += IREG_SIZE;
MIPS_SW (code, mips_s7, mips_sp, offset); offset += IREG_SIZE;
MIPS_SW (code, mips_fp, mips_sp, offset); offset += IREG_SIZE;
/* Restore global registers from MonoContext, including the frame pointer */
MIPS_LW (code, mips_s0, mips_a0, G_STRUCT_OFFSET (MonoContext, sc_regs[mips_s0]));
MIPS_LW (code, mips_s1, mips_a0, G_STRUCT_OFFSET (MonoContext, sc_regs[mips_s1]));
MIPS_LW (code, mips_s2, mips_a0, G_STRUCT_OFFSET (MonoContext, sc_regs[mips_s2]));
MIPS_LW (code, mips_s3, mips_a0, G_STRUCT_OFFSET (MonoContext, sc_regs[mips_s3]));
MIPS_LW (code, mips_s4, mips_a0, G_STRUCT_OFFSET (MonoContext, sc_regs[mips_s4]));
MIPS_LW (code, mips_s5, mips_a0, G_STRUCT_OFFSET (MonoContext, sc_regs[mips_s5]));
MIPS_LW (code, mips_s6, mips_a0, G_STRUCT_OFFSET (MonoContext, sc_regs[mips_s6]));
MIPS_LW (code, mips_s7, mips_a0, G_STRUCT_OFFSET (MonoContext, sc_regs[mips_s7]));
MIPS_LW (code, mips_fp, mips_a0, G_STRUCT_OFFSET (MonoContext, sc_regs[mips_fp]));
/* a1 is the handler to call */
mips_move (code, mips_t9, mips_a1);
/* jump to the saved IP */
mips_jalr (code, mips_t9, mips_ra);
mips_nop (code);
/* restore all regs from the stack */
offset = 16;
MIPS_LW (code, mips_s0, mips_sp, offset); offset += IREG_SIZE;
MIPS_LW (code, mips_s1, mips_sp, offset); offset += IREG_SIZE;
MIPS_LW (code, mips_s2, mips_sp, offset); offset += IREG_SIZE;
MIPS_LW (code, mips_s3, mips_sp, offset); offset += IREG_SIZE;
MIPS_LW (code, mips_s4, mips_sp, offset); offset += IREG_SIZE;
MIPS_LW (code, mips_s5, mips_sp, offset); offset += IREG_SIZE;
MIPS_LW (code, mips_s6, mips_sp, offset); offset += IREG_SIZE;
MIPS_LW (code, mips_s7, mips_sp, offset); offset += IREG_SIZE;
MIPS_LW (code, mips_fp, mips_sp, offset); offset += IREG_SIZE;
/* epilog */
mips_lw (code, mips_ra, mips_sp, alloc_size + MIPS_RET_ADDR_OFFSET);
mips_addiu (code, mips_sp, mips_sp, alloc_size);
mips_jr (code, mips_ra);
mips_nop (code);
g_assert ((code - start) < sizeof(start));
mono_arch_flush_icache (start, code - start);
MONO_PROFILER_RAISE (jit_code_buffer, (start, code - start, MONO_PROFILER_CODE_BUFFER_EXCEPTION_HANDLING, NULL));
return start;
}
static void
throw_exception (MonoObject *exc, unsigned long eip, unsigned long esp, gboolean rethrow, gboolean preserve_ips)
{
ERROR_DECL (error);
MonoContext ctx;
#ifdef DEBUG_EXCEPTIONS
g_print ("throw_exception: exc=%p eip=%p esp=%p rethrow=%d\n",
exc, (void *)eip, (void *) esp, rethrow);
#endif
/* adjust eip so that it points into the call instruction; on MIPS the saved return address is 8 bytes (jump + delay slot) past the call */
eip -= 8;
memset (&ctx, 0, sizeof (MonoContext));
/*g_print ("stack in throw: %p\n", esp);*/
memcpy (&ctx.sc_regs, (void *)(esp + MIPS_STACK_PARAM_OFFSET),
sizeof (gulong) * MONO_SAVED_GREGS);
memset (&ctx.sc_fpregs, 0, sizeof (mips_freg) * MONO_SAVED_FREGS);
MONO_CONTEXT_SET_IP (&ctx, eip);
if (mono_object_isinst_checked (exc, mono_defaults.exception_class, error)) {
MonoException *mono_ex = (MonoException*)exc;
if (!rethrow && !mono_ex->caught_in_unmanaged) {
mono_ex->stack_trace = NULL;
mono_ex->trace_ips = NULL;
		}
		if (preserve_ips) {
mono_ex->caught_in_unmanaged = TRUE;
}
}
mono_error_assert_ok (error);
mono_handle_exception (&ctx, exc);
#ifdef DEBUG_EXCEPTIONS
g_print ("throw_exception: restore to pc=%p sp=%p fp=%p ctx=%p\n",
(void *) ctx.sc_pc, (void *) ctx.sc_regs[mips_sp],
(void *) ctx.sc_regs[mips_fp], &ctx);
#endif
mono_restore_context (&ctx);
g_assert_not_reached ();
}
/**
* arch_get_throw_exception_generic:
*
* Returns a function pointer which can be used to raise
* exceptions. The returned function has the following
* signature: void (*func) (MonoException *exc); or
* void (*func) (char *exc_name);
*
*/
static gpointer
mono_arch_get_throw_exception_generic (guint8 *start, int size, int corlib, gboolean rethrow, gboolean preserve_ips)
{
guint8 *code;
int alloc_size, pos, i;
code = start;
//g_print ("mono_arch_get_throw_exception_generic: code=%p\n", code);
pos = 0;
/* XXX - save all the FP regs on the stack ? */
pos += MONO_MAX_IREGS * sizeof(guint32);
alloc_size = MIPS_MINIMAL_STACK_SIZE + pos + 64;
// align to MIPS_STACK_ALIGNMENT bytes
alloc_size += MIPS_STACK_ALIGNMENT - 1;
alloc_size &= ~(MIPS_STACK_ALIGNMENT - 1);
g_assert ((alloc_size & (MIPS_STACK_ALIGNMENT-1)) == 0);
mips_addiu (code, mips_sp, mips_sp, -alloc_size);
mips_sw (code, mips_ra, mips_sp, alloc_size + MIPS_RET_ADDR_OFFSET);
/* Save all the regs on the stack */
for (i = 0; i < MONO_MAX_IREGS; i++) {
if (i != mips_sp)
MIPS_SW (code, i, mips_sp, i*IREG_SIZE + MIPS_STACK_PARAM_OFFSET);
else {
mips_addiu (code, mips_at, mips_sp, alloc_size);
MIPS_SW (code, mips_at, mips_sp, i*IREG_SIZE + MIPS_STACK_PARAM_OFFSET);
}
}
if (corlib) {
mips_move (code, mips_a1, mips_a0);
mips_load (code, mips_a0, mono_defaults.corlib);
mips_load (code, mips_t9, mono_exception_from_token);
mips_jalr (code, mips_t9, mips_ra);
mips_nop (code);
mips_move (code, mips_a0, mips_v0);
}
/* call throw_exception (exc, ip, sp, rethrow) */
/* exc is already in place in a0 */
/* pointer to ip */
if (corlib)
mips_lw (code, mips_a1, mips_sp, alloc_size + MIPS_RET_ADDR_OFFSET);
else
mips_move (code, mips_a1, mips_ra);
/* current sp & rethrow */
mips_move (code, mips_a2, mips_sp);
mips_addiu (code, mips_a3, mips_zero, rethrow);
mips_load (code, mips_t9, throw_exception);
mips_jr (code, mips_t9);
mips_nop (code);
/* we should never reach this breakpoint */
mips_break (code, 0xfe);
g_assert ((code - start) < size);
mono_arch_flush_icache (start, code - start);
MONO_PROFILER_RAISE (jit_code_buffer, (start, code - start, MONO_PROFILER_CODE_BUFFER_EXCEPTION_HANDLING, NULL));
return start;
}
/**
* mono_arch_get_rethrow_exception:
* \returns a function pointer which can be used to rethrow
* exceptions. The returned function has the following
* signature: void (*func) (MonoException *exc);
*/
gpointer
mono_arch_get_rethrow_exception (MonoTrampInfo **info, gboolean aot)
{
static guint8 start [GENERIC_EXCEPTION_SIZE];
static int inited = 0;
g_assert (!aot);
if (info)
*info = NULL;
if (inited)
return start;
mono_arch_get_throw_exception_generic (start, sizeof (start), FALSE, TRUE, FALSE);
inited = 1;
return start;
}
/**
* mono_arch_get_rethrow_preserve_exception:
* \returns a function pointer which can be used to rethrow
* exceptions while avoiding modification of saved trace_ips.
* The returned function has the following
* signature: void (*func) (MonoException *exc);
*/
gpointer
mono_arch_get_rethrow_preserve_exception (MonoTrampInfo **info, gboolean aot)
{
static guint8 start [GENERIC_EXCEPTION_SIZE];
static int inited = 0;
g_assert (!aot);
if (info)
*info = NULL;
if (inited)
return start;
mono_arch_get_throw_exception_generic (start, sizeof (start), FALSE, TRUE, TRUE);
inited = 1;
return start;
}
/**
* arch_get_throw_exception:
*
* Returns a function pointer which can be used to raise
* exceptions. The returned function has the following
* signature: void (*func) (MonoException *exc);
* For example to raise an arithmetic exception you can use:
*
* x86_push_imm (code, mono_get_exception_arithmetic ());
* x86_call_code (code, arch_get_throw_exception ());
*
*/
gpointer
mono_arch_get_throw_exception (MonoTrampInfo **info, gboolean aot)
{
static guint8 start [GENERIC_EXCEPTION_SIZE];
static int inited = 0;
g_assert (!aot);
if (info)
*info = NULL;
if (inited)
return start;
mono_arch_get_throw_exception_generic (start, sizeof (start), FALSE, FALSE, FALSE);
inited = 1;
return start;
}
gpointer
mono_arch_get_throw_exception_by_name (void)
{
guint8 *start, *code;
int size = 64;
/* Not used on MIPS */
start = code = mono_global_codeman_reserve (size);
mips_break (code, 0xfd);
mono_arch_flush_icache (start, code - start);
MONO_PROFILER_RAISE (jit_code_buffer, (start, code - start, MONO_PROFILER_CODE_BUFFER_EXCEPTION_HANDLING, NULL));
return start;
}
/**
* mono_arch_get_throw_corlib_exception:
* \returns a function pointer which can be used to raise
* corlib exceptions. The returned function has the following
* signature: void (*func) (guint32 ex_token, guint32 offset);
* On MIPS, the offset argument is missing.
*/
gpointer
mono_arch_get_throw_corlib_exception (MonoTrampInfo **info, gboolean aot)
{
static guint8 start [GENERIC_EXCEPTION_SIZE];
static int inited = 0;
g_assert (!aot);
if (info)
*info = NULL;
if (inited)
return start;
mono_arch_get_throw_exception_generic (start, sizeof (start), TRUE, FALSE, FALSE);
inited = 1;
return start;
}
/*
* mono_arch_unwind_frame:
*
* This function is used to gather information from @ctx, and store it in @frame_info.
* It unwinds one stack frame, and stores the resulting context into @new_ctx. @lmf
* is modified if needed.
* Returns TRUE on success, FALSE otherwise.
*/
gboolean
mono_arch_unwind_frame (MonoJitTlsData *jit_tls,
MonoJitInfo *ji, MonoContext *ctx,
MonoContext *new_ctx, MonoLMF **lmf,
host_mgreg_t **save_locations,
StackFrameInfo *frame)
{
memset (frame, 0, sizeof (StackFrameInfo));
frame->ji = ji;
*new_ctx = *ctx;
if (ji != NULL) {
int i;
gpointer ip = MONO_CONTEXT_GET_IP (ctx);
host_mgreg_t regs [MONO_MAX_IREGS + 1];
guint8 *cfa;
guint32 unwind_info_len;
guint8 *unwind_info;
if (ji->is_trampoline)
frame->type = FRAME_TYPE_TRAMPOLINE;
else
frame->type = FRAME_TYPE_MANAGED;
unwind_info = mono_jinfo_get_unwind_info (ji, &unwind_info_len);
for (i = 0; i < MONO_MAX_IREGS; ++i)
regs [i] = new_ctx->sc_regs [i];
gboolean success = mono_unwind_frame (unwind_info, unwind_info_len, ji->code_start,
(guint8*)ji->code_start + ji->code_size,
ip, NULL, regs, MONO_MAX_IREGS,
save_locations, MONO_MAX_IREGS, &cfa);
if (!success)
return FALSE;
for (i = 0; i < MONO_MAX_IREGS; ++i)
new_ctx->sc_regs [i] = regs [i];
new_ctx->sc_pc = regs [mips_ra];
new_ctx->sc_regs [mips_sp] = (host_mgreg_t)(gsize)cfa;
/* we subtract 8, so that the IP points into the call instruction */
MONO_CONTEXT_SET_IP (new_ctx, new_ctx->sc_pc - 8);
/* Sanity check -- we should have made progress here */
g_assert (MONO_CONTEXT_GET_SP (new_ctx) != MONO_CONTEXT_GET_SP (ctx));
return TRUE;
} else if (*lmf) {
g_assert ((((guint64)(*lmf)->previous_lmf) & 2) == 0);
if (!(*lmf)->method) {
#ifdef DEBUG_EXCEPTIONS
g_print ("mono_arch_unwind_frame: bad lmf @ %p\n", (void *) *lmf);
#endif
return FALSE;
}
g_assert (((*lmf)->magic == MIPS_LMF_MAGIC1) || ((*lmf)->magic == MIPS_LMF_MAGIC2));
ji = mini_jit_info_table_find ((gpointer)(*lmf)->eip);
if (!ji)
return FALSE;
frame->ji = ji;
frame->type = FRAME_TYPE_MANAGED_TO_NATIVE;
memcpy (&new_ctx->sc_regs, (*lmf)->iregs, sizeof (gulong) * MONO_SAVED_GREGS);
memcpy (&new_ctx->sc_fpregs, (*lmf)->fregs, sizeof (float) * MONO_SAVED_FREGS);
MONO_CONTEXT_SET_IP (new_ctx, (*lmf)->eip);
/* ensure that we've made progress */
g_assert (new_ctx->sc_pc != ctx->sc_pc);
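/* the low two bits of previous_lmf are flag bits, not address bits;
 * mask them off before following the chain */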
*lmf = (gpointer)(((gsize)(*lmf)->previous_lmf) & ~3);
return TRUE;
}
return FALSE;
}
gpointer
mono_arch_ip_from_context (void *sigctx)
{
return (gpointer)(gsize)UCONTEXT_REG_PC (sigctx);
}
/*
 * handle_signal_exception:
 *
 * Called when resuming from a signal handler.
*/
static void
handle_signal_exception (gpointer obj)
{
MonoJitTlsData *jit_tls = mono_tls_get_jit_tls ();
MonoContext ctx;
memcpy (&ctx, &jit_tls->ex_ctx, sizeof (MonoContext));
mono_handle_exception (&ctx, obj);
mono_restore_context (&ctx);
}
/*
* This is the function called from the signal handler
*/
gboolean
mono_arch_handle_exception (void *ctx, gpointer obj)
{
#if defined(MONO_CROSS_COMPILE)
g_assert_not_reached ();
#elif defined(MONO_ARCH_USE_SIGACTION)
void *sigctx = ctx;
/*
* Handling the exception in the signal handler is problematic, since the original
 * signal is disabled, and we could run arbitrary code through the debugger. So
* resume into the normal stack and do most work there if possible.
*/
MonoJitTlsData *jit_tls = mono_tls_get_jit_tls ();
guint64 sp = UCONTEXT_GREGS (sigctx) [mips_sp];
/* Pass the ctx parameter in TLS */
mono_sigctx_to_monoctx (sigctx, &jit_tls->ex_ctx);
/* The others in registers */
UCONTEXT_GREGS (sigctx)[mips_a0] = (gsize)obj;
/* Allocate a stack frame */
sp -= 256;
UCONTEXT_GREGS (sigctx)[mips_sp] = sp;
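/* returning from the signal handler now resumes in handle_signal_exception
 * on the adjusted stack, outside the signal context */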
UCONTEXT_REG_PC (sigctx) = (gsize)handle_signal_exception;
return TRUE;
#else
MonoContext mctx;
gboolean result;
mono_sigctx_to_monoctx (ctx, &mctx);
result = mono_handle_exception (&mctx, obj);
/* restore the context so that returning from the signal handler will invoke
* the catch clause
*/
mono_monoctx_to_sigctx (&mctx, ctx);
return result;
#endif
}
/*
* mono_arch_setup_resume_sighandler_ctx:
*
* Setup CTX so execution continues at FUNC.
*/
void
mono_arch_setup_resume_sighandler_ctx (MonoContext *ctx, gpointer func)
{
MONO_CONTEXT_SET_IP (ctx,func);
}
| -1 |
dotnet/runtime | 66,213 | [mono] Put WeakAttribute support under an ifdef | This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | lambdageek | 2022-03-04T19:50:51Z | 2022-03-08T14:08:40Z | 34880aaeb463cc17268b44aecb80dda0914cfa01 | 8342a08b8bf0ff2ffb64cd5992e2d60c8d6d63c1 | [mono] Put WeakAttribute support under an ifdef. This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | ./src/coreclr/pal/src/libunwind/src/mi/strerror.c | /* libunwind - a platform-independent unwind library
Copyright (C) 2004 BEA Systems
Contributed by Thomas Hallgren <[email protected]>
This file is part of libunwind.
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. */
#include "libunwind_i.h"
/* Returns the text corresponding to the given err_code or the
text "invalid error code" if the err_code is invalid. */
const char *
unw_strerror (int err_code)
{
const char *cp;
unw_error_t error = (unw_error_t)-err_code;
switch (error)
{
case UNW_ESUCCESS: cp = "no error"; break;
case UNW_EUNSPEC: cp = "unspecified (general) error"; break;
case UNW_ENOMEM: cp = "out of memory"; break;
case UNW_EBADREG: cp = "bad register number"; break;
case UNW_EREADONLYREG: cp = "attempt to write read-only register"; break;
case UNW_ESTOPUNWIND: cp = "stop unwinding"; break;
case UNW_EINVALIDIP: cp = "invalid IP"; break;
case UNW_EBADFRAME: cp = "bad frame"; break;
case UNW_EINVAL: cp = "unsupported operation or bad value"; break;
case UNW_EBADVERSION: cp = "unwind info has unsupported version"; break;
case UNW_ENOINFO: cp = "no unwind info found"; break;
default: cp = "invalid error code";
}
return cp;
}
| /* libunwind - a platform-independent unwind library
Copyright (C) 2004 BEA Systems
Contributed by Thomas Hallgren <[email protected]>
This file is part of libunwind.
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. */
#include "libunwind_i.h"
/* Returns the text corresponding to the given err_code or the
text "invalid error code" if the err_code is invalid. */
const char *
unw_strerror (int err_code)
{
const char *cp;
unw_error_t error = (unw_error_t)-err_code;
switch (error)
{
case UNW_ESUCCESS: cp = "no error"; break;
case UNW_EUNSPEC: cp = "unspecified (general) error"; break;
case UNW_ENOMEM: cp = "out of memory"; break;
case UNW_EBADREG: cp = "bad register number"; break;
case UNW_EREADONLYREG: cp = "attempt to write read-only register"; break;
case UNW_ESTOPUNWIND: cp = "stop unwinding"; break;
case UNW_EINVALIDIP: cp = "invalid IP"; break;
case UNW_EBADFRAME: cp = "bad frame"; break;
case UNW_EINVAL: cp = "unsupported operation or bad value"; break;
case UNW_EBADVERSION: cp = "unwind info has unsupported version"; break;
case UNW_ENOINFO: cp = "no unwind info found"; break;
default: cp = "invalid error code";
}
return cp;
}
| -1 |
dotnet/runtime | 66,213 | [mono] Put WeakAttribute support under an ifdef | This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | lambdageek | 2022-03-04T19:50:51Z | 2022-03-08T14:08:40Z | 34880aaeb463cc17268b44aecb80dda0914cfa01 | 8342a08b8bf0ff2ffb64cd5992e2d60c8d6d63c1 | [mono] Put WeakAttribute support under an ifdef. This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | ./src/mono/mono/mini/tramp-s390x.c | /**
* @file
*
* @author Neale Ferguson <[email protected]>
*
* @section description
*
* Function - JIT trampoline code for S/390.
*
* Date - January, 2004
*
* Derivation - From tramp-x86 & tramp-ppc
* Paolo Molaro ([email protected])
* Dietmar Maurer ([email protected])
*
* Copyright - 2001 Ximian, Inc.
* Licensed under the MIT license. See LICENSE file in the project root for full license information.
*
*/
/*------------------------------------------------------------------*/
/* D e f i n e s */
/*------------------------------------------------------------------*/
#define LMFReg s390_r13
/*
* Method-specific trampoline code fragment sizes
*/
#define SPECIFIC_TRAMPOLINE_SIZE 96
/*========================= End of Defines =========================*/
/*------------------------------------------------------------------*/
/* I n c l u d e s */
/*------------------------------------------------------------------*/
#include <config.h>
#include <glib.h>
#include <string.h>
#include <mono/metadata/abi-details.h>
#include <mono/metadata/gc-internals.h>
#include <mono/metadata/marshal.h>
#include <mono/metadata/profiler-private.h>
#include <mono/metadata/tabledefs.h>
#include <mono/utils/mono-hwcap.h>
#include <mono/arch/s390x/s390x-codegen.h>
#include "mini.h"
#include "mini-s390x.h"
#include "mini-runtime.h"
#include "jit-icalls.h"
#include "mono/utils/mono-tls-inline.h"
#include <mono/metadata/components.h>
/*========================= End of Includes ========================*/
/*------------------------------------------------------------------*/
/* T y p e d e f s */
/*------------------------------------------------------------------*/
typedef struct {
guint8 stk[S390_MINIMAL_STACK_SIZE]; /* Standard s390x stack */
guint64 saveFn; /* Call address */
struct MonoLMF LMF; /* LMF */
} trampStack_t;
/*========================= End of Typedefs ========================*/
/*------------------------------------------------------------------*/
/* P r o t o t y p e s */
/*------------------------------------------------------------------*/
/*========================= End of Prototypes ======================*/
/*------------------------------------------------------------------*/
/* G l o b a l V a r i a b l e s */
/*------------------------------------------------------------------*/
/*====================== End of Global Variables ===================*/
/**
*
* @brief Build the unbox trampoline
*
* @param[in] Method pointer
* @param[in] Pointer to native code for method
*
* Return a pointer to a trampoline which does the unboxing before
* calling the method.
*
* When value type methods are called through the
* vtable we need to unbox the 'this' argument.
*/
gpointer
mono_arch_get_unbox_trampoline (MonoMethod *m, gpointer addr)
{
guint8 *code, *start;
int this_pos = s390_r2;
MonoMemoryManager *mem_manager = m_method_get_mem_manager (m);
char trampName[128];
start = code = (guint8 *) mono_mem_manager_code_reserve (mem_manager, 28);
S390_SET (code, s390_r1, addr);
s390_aghi (code, this_pos, MONO_ABI_SIZEOF (MonoObject));
s390_br (code, s390_r1);
g_assert ((code - start) <= 28);
mono_arch_flush_icache (start, code - start);
MONO_PROFILER_RAISE (jit_code_buffer, (start, code - start, MONO_PROFILER_CODE_BUFFER_UNBOX_TRAMPOLINE, m));
snprintf(trampName, sizeof(trampName), "%s_unbox_trampoline", m->name);
mono_tramp_info_register (mono_tramp_info_create (trampName, start, code - start, NULL, NULL), mem_manager);
return start;
}
/*========================= End of Function ========================*/
/**
*
* @brief Build the SDB trampoline
*
* @param[in] Type of trampoline (ss or bp)
* @param[in] MonoTrampInfo
* @param[in] Ahead of time indicator
*
* Return a trampoline which captures the current context, passes it to
* mono_debugger_agent_single_step_from_context ()/mono_debugger_agent_breakpoint_from_context (),
* then restores the (potentially changed) context.
*/
guint8 *
mono_arch_create_sdb_trampoline (gboolean single_step, MonoTrampInfo **info, gboolean aot)
{
int tramp_size = 512;
int i, framesize, ctx_offset,
gr_offset, fp_offset, ip_offset,
sp_offset;
guint8 *code, *buf;
void *ep;
GSList *unwind_ops = NULL;
MonoJumpInfo *ji = NULL;
code = buf = (guint8 *)mono_global_codeman_reserve (tramp_size);
framesize = S390_MINIMAL_STACK_SIZE;
ctx_offset = framesize;
framesize += sizeof (MonoContext);
framesize = ALIGN_TO (framesize, MONO_ARCH_FRAME_ALIGNMENT);
/**
 * Create unwind information - on entry s390_r1 holds the method's frame register
*/
s390_stmg (code, s390_r6, s390_r15, STK_BASE, S390_REG_SAVE_OFFSET);
mono_add_unwind_op_def_cfa (unwind_ops, code, buf, STK_BASE, S390_CFA_OFFSET);
gr_offset = S390_REG_SAVE_OFFSET - S390_CFA_OFFSET;
for (i = s390_r6; i <= s390_r15; i++) {
mono_add_unwind_op_offset (unwind_ops, code, buf, i, gr_offset);
gr_offset += sizeof(uintptr_t);
}
s390_lgr (code, s390_r0, STK_BASE);
s390_aghi (code, STK_BASE, -framesize);
mono_add_unwind_op_def_cfa_offset (unwind_ops, code, buf, framesize + S390_CFA_OFFSET);
s390_stg (code, s390_r0, 0, STK_BASE, 0);
gr_offset = ctx_offset + G_STRUCT_OFFSET(MonoContext, uc_mcontext.gregs);
sp_offset = ctx_offset + G_STRUCT_OFFSET(MonoContext, uc_mcontext.gregs[15]);
ip_offset = ctx_offset + G_STRUCT_OFFSET(MonoContext, uc_mcontext.psw.addr);
/* Initialize a MonoContext structure on the stack */
s390_stmg (code, s390_r0, s390_r14, STK_BASE, gr_offset);
s390_stg (code, s390_r1, 0, STK_BASE, sp_offset);
s390_stg (code, s390_r14, 0, STK_BASE, ip_offset);
fp_offset = ctx_offset + G_STRUCT_OFFSET(MonoContext, uc_mcontext.fpregs.fprs);
for (i = s390_f0; i <= s390_f15; ++i) {
s390_std (code, i, 0, STK_BASE, fp_offset);
fp_offset += sizeof(double);
}
/*
* Call the single step/breakpoint function in sdb using
* the context address as the parameter
*/
s390_la (code, s390_r2, 0, STK_BASE, ctx_offset);
if (single_step)
ep = (mono_component_debugger ())->single_step_from_context;
else
ep = (mono_component_debugger ())->breakpoint_from_context;
S390_SET (code, s390_r1, ep);
s390_basr (code, s390_r14, s390_r1);
/*
* Restore volatiles
*/
s390_lmg (code, s390_r0, s390_r5, STK_BASE, gr_offset);
/*
* Restore FP registers
*/
fp_offset = ctx_offset + G_STRUCT_OFFSET(MonoContext, uc_mcontext.fpregs.fprs);
for (i = s390_f0; i <= s390_f15; ++i) {
s390_ld (code, i, 0, STK_BASE, fp_offset);
fp_offset += sizeof(double);
}
/*
* Load the IP from the context to pick up any SET_IP command results
*/
s390_lg (code, s390_r14, 0, STK_BASE, ip_offset);
/*
* Restore everything else from the on-entry values
*/
s390_aghi (code, STK_BASE, framesize);
mono_add_unwind_op_def_cfa_offset (unwind_ops, code, buf, S390_CFA_OFFSET);
mono_add_unwind_op_same_value (unwind_ops, code, buf, STK_BASE);
s390_lmg (code, s390_r6, s390_r13, STK_BASE, S390_REG_SAVE_OFFSET);
for (i = s390_r6; i <= s390_r13; i++)
mono_add_unwind_op_same_value (unwind_ops, code, buf, i);
s390_br (code, s390_r14);
g_assertf ((code - buf) <= tramp_size, "%d %d", (int)(code - buf), tramp_size);
mono_arch_flush_icache (buf, code - buf);
MONO_PROFILER_RAISE (jit_code_buffer, (buf, code - buf, MONO_PROFILER_CODE_BUFFER_HELPER, NULL));
g_assert (code - buf <= tramp_size);
const char *tramp_name = single_step ? "sdb_single_step_trampoline" : "sdb_breakpoint_trampoline";
*info = mono_tramp_info_create (tramp_name, buf, code - buf, ji, unwind_ops);
return buf;
}
/**
*
* @brief Locate the address of the thunk target
*
* @param[in] @code - Instruction following the branch and save
* @returns Address of the thunk code
*
* A thunk call is a sequence of:
* lgrl rx,tgt Load address of target (.-6)
* basr rx,rx Branch and save to that address (.), or,
* br r1 Jump to target (.)
*
* The target of that code is a thunk which:
* tgt: .quad target Destination
*/
guint8 *
mono_arch_get_call_target (guint8 *code)
{
guint8 *thunk;
guint32 rel;
/*
* Determine thunk address by adding the relative offset
* in the lgrl to its location
*/
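/* lgrl encodes a signed 32-bit displacement in halfwords, relative to
 * the start of the instruction (code - 6), hence the doubling below */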
rel = *(guint32 *) ((uintptr_t) code - 4);
thunk = (guint8 *) ((uintptr_t) code - 6) + (rel * 2);
return(thunk);
}
/*========================= End of Function ========================*/
/**
*
* @brief Patch the callsite
*
* @param[in] @method_start - first instruction of method
* @param[in] @orig_code - Instruction following the branch and save
* @param[in] @addr - New value for target of call
*
* Patch a call. The call is either a 'thunked' call identified by the BASR R14,R14
* instruction or a direct call
*/
void
mono_arch_patch_callsite (guint8 *method_start, guint8 *orig_code, guint8 *addr)
{
guint64 *thunk;
thunk = (guint64 *) mono_arch_get_call_target(orig_code - 2);
*thunk = (guint64) addr;
}
/*========================= End of Function ========================*/
/**
*
* @brief Patch PLT entry (AOT only)
*
* @param[in] @code - Location of PLT
* @param[in] @got - Global Offset Table
* @param[in] @regs - Registers at the time
* @param[in] @addr - Target address
*
* Not reached on s390x until we have AOT support
*
*/
void
mono_arch_patch_plt_entry (guint8 *code, gpointer *got, host_mgreg_t *regs, guint8 *addr)
{
g_assert_not_reached ();
}
/*========================= End of Function ========================*/
/**
*
* @brief Architecture-specific trampoline creation
*
* @param[in] @tramp_type - Type of trampoline
* @param[out] @info - Pointer to trampoline information
* @param[in] @aot - AOT indicator
*
* Create a generic trampoline
*
*/
guchar*
mono_arch_create_generic_trampoline (MonoTrampolineType tramp_type, MonoTrampInfo **info, gboolean aot)
{
const char *tramp_name;
guint8 *buf, *tramp, *code;
int i, offset, has_caller;
short *o[1];
GSList *unwind_ops = NULL;
MonoJumpInfo *ji = NULL;
g_assert (!aot);
/* Now we'll create in 'buf' the S/390 trampoline code. This
is the trampoline code common to all methods */
code = buf = (guint8 *) mono_global_codeman_reserve(512);
if (tramp_type == MONO_TRAMPOLINE_JUMP)
has_caller = 0;
else
has_caller = 1;
/*-----------------------------------------------------------
STEP 0: First create a non-standard function prologue with a
stack size big enough to save our registers.
-----------------------------------------------------------*/
mono_add_unwind_op_def_cfa (unwind_ops, buf, code, STK_BASE, S390_CFA_OFFSET);
s390_stmg (buf, s390_r6, s390_r15, STK_BASE, S390_REG_SAVE_OFFSET);
offset = S390_REG_SAVE_OFFSET - S390_CFA_OFFSET;
for (i = s390_r6; i <= s390_r15; i++) {
mono_add_unwind_op_offset (unwind_ops, buf, code, i, offset);
offset += sizeof(uintptr_t);
}
s390_lgr (buf, s390_r11, s390_r15);
s390_aghi (buf, STK_BASE, -sizeof(trampStack_t));
mono_add_unwind_op_def_cfa_offset (unwind_ops, buf, code, sizeof(trampStack_t) + S390_CFA_OFFSET);
s390_stg (buf, s390_r11, 0, STK_BASE, 0);
/*---------------------------------------------------------------*/
/* we build the MonoLMF structure on the stack - see mini-s390.h */
/* Keep in sync with the code in mono_arch_emit_prolog */
/*---------------------------------------------------------------*/
s390_lgr (buf, LMFReg, STK_BASE);
s390_aghi (buf, LMFReg, G_STRUCT_OFFSET(trampStack_t, LMF));
/*---------------------------------------------------------------*/
/* Save general and floating point registers in LMF */
/*---------------------------------------------------------------*/
s390_stmg (buf, s390_r0, s390_r1, LMFReg, G_STRUCT_OFFSET(MonoLMF, gregs[0]));
s390_stmg (buf, s390_r2, s390_r5, LMFReg, G_STRUCT_OFFSET(MonoLMF, gregs[2]));
s390_mvc (buf, 10*sizeof(gulong), LMFReg, G_STRUCT_OFFSET(MonoLMF, gregs[6]),
s390_r11, S390_REG_SAVE_OFFSET);
offset = G_STRUCT_OFFSET(MonoLMF, fregs[0]);
for (i = s390_f0; i <= s390_f15; ++i) {
s390_std (buf, i, 0, LMFReg, offset);
offset += sizeof(gdouble);
}
/*----------------------------------------------------------
STEP 1: call 'mono_get_lmf_addr()' to get the address of our
LMF. We'll need to restore it after the call to
's390_magic_trampoline' and before the call to the native
method.
----------------------------------------------------------*/
S390_SET (buf, s390_r1, mono_get_lmf_addr);
s390_basr (buf, s390_r14, s390_r1);
/*---------------------------------------------------------------*/
/* Set lmf.lmf_addr = jit_tls->lmf */
/*---------------------------------------------------------------*/
s390_stg (buf, s390_r2, 0, LMFReg,
G_STRUCT_OFFSET(MonoLMF, lmf_addr));
/*---------------------------------------------------------------*/
/* Get current lmf */
/*---------------------------------------------------------------*/
s390_lg (buf, s390_r0, 0, s390_r2, 0);
/*---------------------------------------------------------------*/
/* Set our lmf as the current lmf */
/*---------------------------------------------------------------*/
s390_stg (buf, LMFReg, 0, s390_r2, 0);
/*---------------------------------------------------------------*/
/* Have our lmf.previous_lmf point to the last lmf */
/*---------------------------------------------------------------*/
s390_stg (buf, s390_r0, 0, LMFReg,
G_STRUCT_OFFSET(MonoLMF, previous_lmf));
/*---------------------------------------------------------------*/
/* save method info */
/*---------------------------------------------------------------*/
s390_lg (buf, s390_r1, 0, LMFReg, G_STRUCT_OFFSET(MonoLMF, gregs[0]));
s390_stg (buf, s390_r1, 0, LMFReg, G_STRUCT_OFFSET(MonoLMF, method));
/*---------------------------------------------------------------*/
/* save the current SP */
/*---------------------------------------------------------------*/
s390_lg (buf, s390_r1, 0, STK_BASE, 0);
s390_stg (buf, s390_r1, 0, LMFReg, G_STRUCT_OFFSET(MonoLMF, ebp));
/*---------------------------------------------------------------*/
/* save the current IP */
/*---------------------------------------------------------------*/
if (has_caller) {
s390_lg (buf, s390_r1, 0, s390_r1, S390_RET_ADDR_OFFSET);
} else {
s390_lghi (buf, s390_r1, 0);
}
s390_stg (buf, s390_r1, 0, LMFReg, G_STRUCT_OFFSET(MonoLMF, eip));
/*---------------------------------------------------------------*/
/* STEP 2: call the C trampoline function */
/*---------------------------------------------------------------*/
/* Set arguments */
/* Arg 1: host_mgreg_t *regs */
s390_la (buf, s390_r2, 0, LMFReg, G_STRUCT_OFFSET(MonoLMF, gregs[0]));
/* Arg 2: code (next address to the instruction that called us) */
if (has_caller) {
s390_lg (buf, s390_r3, 0, s390_r11, S390_RET_ADDR_OFFSET);
} else {
s390_lghi (buf, s390_r3, 0);
}
/* Arg 3: Trampoline argument */
s390_lg (buf, s390_r4, 0, LMFReg, G_STRUCT_OFFSET(MonoLMF, gregs[0]));
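/* gregs[0] holds r0 as saved on entry; the specific trampoline stub
 * loaded its per-method argument into r0 before branching here */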
/* Arg 4: trampoline address. */
S390_SET (buf, s390_r5, buf);
/* Calculate call address and call the C trampoline. Return value will be in r2 */
tramp = (guint8*)mono_get_trampoline_func (tramp_type);
S390_SET (buf, s390_r1, tramp);
s390_basr (buf, s390_r14, s390_r1);
/* OK, code address is now in r2. Save it, so that we
can restore r2 and use it later */
s390_stg (buf, s390_r2, 0, STK_BASE, G_STRUCT_OFFSET(trampStack_t, saveFn));
/*----------------------------------------------------------
STEP 3: Restore the LMF
----------------------------------------------------------*/
restoreLMF(buf, STK_BASE, sizeof(trampStack_t));
/* Check for thread interruption */
S390_SET (buf, s390_r1, (guint8 *)mono_thread_force_interruption_checkpoint_noraise);
s390_basr (buf, s390_r14, s390_r1);
s390_ltgr (buf, s390_r2, s390_r2);
s390_jz (buf, 0); CODEPTR (buf, o[0]);
/*
* Exception case:
* We have an exception we want to throw in the caller's frame, so pop
* the trampoline frame and throw from the caller.
*/
S390_SET (buf, s390_r1, (guint *)mono_get_rethrow_preserve_exception_addr ());
s390_aghi (buf, STK_BASE, sizeof(trampStack_t));
s390_lg (buf, s390_r1, 0, s390_r1, 0);
s390_lmg (buf, s390_r6, s390_r14, STK_BASE, S390_REG_SAVE_OFFSET);
s390_br (buf, s390_r1);
PTRSLOT (buf, o[0]);
/* Reload result */
s390_lg (buf, s390_r1, 0, STK_BASE, G_STRUCT_OFFSET(trampStack_t, saveFn));
/*----------------------------------------------------------
STEP 4: call the compiled method
----------------------------------------------------------*/
/* Restore parameter registers */
s390_lmg (buf, s390_r2, s390_r5, LMFReg, G_STRUCT_OFFSET(MonoLMF, gregs[2]));
/* Restore the FP registers */
offset = G_STRUCT_OFFSET(MonoLMF, fregs[0]);
for (i = s390_f0; i <= s390_f15; ++i) {
s390_ld (buf, i, 0, LMFReg, offset);
offset += sizeof(gdouble);
}
/* Restore stack pointer and jump to the code -
* R14 contains the return address to our caller
*/
s390_lgr (buf, STK_BASE, s390_r11);
mono_add_unwind_op_def_cfa_offset (unwind_ops, buf, code, S390_CFA_OFFSET);
mono_add_unwind_op_same_value (unwind_ops, buf, code, STK_BASE);
s390_lmg (buf, s390_r6, s390_r14, STK_BASE, S390_REG_SAVE_OFFSET);
for (i = s390_r6; i <= s390_r14; i++)
mono_add_unwind_op_same_value (unwind_ops, buf, code, i);
if (MONO_TRAMPOLINE_TYPE_MUST_RETURN(tramp_type)) {
s390_lgr (buf, s390_r2, s390_r1);
s390_br (buf, s390_r14);
} else {
s390_br (buf, s390_r1);
}
/* Flush instruction cache, since we've generated code */
mono_arch_flush_icache (code, buf - code);
MONO_PROFILER_RAISE (jit_code_buffer, (buf, code - buf, MONO_PROFILER_CODE_BUFFER_HELPER, NULL));
g_assert (info);
tramp_name = mono_get_generic_trampoline_name (tramp_type);
*info = mono_tramp_info_create (tramp_name, code, buf - code, ji, unwind_ops);
/* Sanity check */
g_assert ((buf - code) <= 512);
return code;
}
/*========================= End of Function ========================*/
/**
*
* @brief Architecture-specific method invalidation
*
* @param[in] @func - Function to call
* @param[in] @func_arg - Argument to invalidation function
*
 * Call an error routine so people can fix their code
*
*/
void
mono_arch_invalidate_method (MonoJitInfo *ji, void *func, gpointer func_arg)
{
/* FIXME: This is not thread safe */
guint8 *code = (guint8 *) ji->code_start;
S390_SET (code, s390_r1, func);
S390_SET (code, s390_r2, func_arg);
s390_br (code, s390_r1);
}
/*========================= End of Function ========================*/
/**
*
* @brief Architecture-specific specific trampoline creation
*
* @param[in] @arg1 - Argument to trampoline being created
* @param[in] @tramp_type - Trampoline type
 * @param[in] @mem_manager - Memory manager to allocate the trampoline from
* @param[out] @code_len - Length of trampoline created
*
* Create the specified kind of trampoline
*
*/
gpointer
mono_arch_create_specific_trampoline (gpointer arg1, MonoTrampolineType tramp_type, MonoMemoryManager *mem_manager, guint32 *code_len)
{
guint8 *code, *buf, *tramp;
gint64 displace;
tramp = mono_get_trampoline_code (tramp_type);
/*----------------------------------------------------------*/
/* This is the method-specific part of the trampoline. Its */
/* purpose is to provide the generic part with the */
/* MonoMethod *method pointer. We'll use r1 to keep it. */
/*----------------------------------------------------------*/
code = buf = (guint8 *) mono_mem_manager_code_reserve (mem_manager, SPECIFIC_TRAMPOLINE_SIZE);
S390_SET (buf, s390_r0, arg1);
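/* relative branches on s390x count in halfwords; use jg when the
 * displacement fits in 32 bits, otherwise branch via r1 */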
displace = (tramp - buf) / 2;
if ((displace >= INT_MIN) && (displace <= INT_MAX))
s390_jg (buf, (gint32) displace);
else {
S390_SET (buf, s390_r1, tramp);
s390_br (buf, s390_r1);
}
/* Flush instruction cache, since we've generated code */
mono_arch_flush_icache (code, buf - code);
MONO_PROFILER_RAISE (jit_code_buffer, (buf, code - buf,
MONO_PROFILER_CODE_BUFFER_SPECIFIC_TRAMPOLINE,
(void *) mono_get_generic_trampoline_simple_name (tramp_type)));
/* Sanity check */
g_assert ((buf - code) <= SPECIFIC_TRAMPOLINE_SIZE);
if (code_len)
*code_len = buf - code;
return code;
}
/*========================= End of Function ========================*/
/**
*
* @brief Architecture-specific RGCTX lazy fetch trampoline
*
* @param[in] @slot - Instance
* @param[out] @info - Mono Trampoline Information
* @param[in] @aot - AOT indicator
*
* Create the specified kind of trampoline
*
*/
gpointer
mono_arch_create_rgctx_lazy_fetch_trampoline (guint32 slot, MonoTrampInfo **info, gboolean aot)
{
guint8 *tramp;
guint8 *code, *buf;
guint8 **rgctx_null_jumps;
gint64 displace;
int tramp_size,
depth,
index,
iPatch = 0,
i;
gboolean mrgctx;
MonoJumpInfo *ji = NULL;
GSList *unwind_ops = NULL;
mrgctx = MONO_RGCTX_SLOT_IS_MRGCTX (slot);
index = MONO_RGCTX_SLOT_INDEX (slot);
if (mrgctx)
index += MONO_SIZEOF_METHOD_RUNTIME_GENERIC_CONTEXT / sizeof (target_mgreg_t);
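/* the first slot of each rgctx array links to the next, larger array,
 * so a level holds size - 1 payload slots; walk down until the index fits */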
for (depth = 0; ; ++depth) {
int size = mono_class_rgctx_get_array_size (depth, mrgctx);
if (index < size - 1)
break;
index -= size - 1;
}
tramp_size = 48 + 16 * depth;
if (mrgctx)
tramp_size += 4;
else
tramp_size += 12;
code = buf = (guint8 *) mono_global_codeman_reserve (tramp_size);
unwind_ops = mono_arch_get_cie_program ();
rgctx_null_jumps = (guint8 **) g_malloc (sizeof (guint8*) * (depth + 2));
if (mrgctx) {
/* get mrgctx ptr */
s390_lgr (code, s390_r1, s390_r2);
} else {
/* load rgctx ptr from vtable */
s390_lg (code, s390_r1, 0, s390_r2, MONO_STRUCT_OFFSET(MonoVTable, runtime_generic_context));
/* is the rgctx ptr null? */
s390_ltgr (code, s390_r1, s390_r1);
/* if yes, jump to actual trampoline */
rgctx_null_jumps [iPatch++] = code;
s390_jge (code, 0);
}
for (i = 0; i < depth; ++i) {
/* load ptr to next array */
if (mrgctx && i == 0)
s390_lg (code, s390_r1, 0, s390_r1, MONO_SIZEOF_METHOD_RUNTIME_GENERIC_CONTEXT);
else
s390_lg (code, s390_r1, 0, s390_r1, 0);
s390_ltgr (code, s390_r1, s390_r1);
/* if the ptr is null then jump to actual trampoline */
rgctx_null_jumps [iPatch++] = code;
s390_jge (code, 0);
}
/* fetch slot */
s390_lg (code, s390_r1, 0, s390_r1, (sizeof (target_mgreg_t) * (index + 1)));
/* is the slot null? */
s390_ltgr (code, s390_r1, s390_r1);
/* if yes, jump to actual trampoline */
rgctx_null_jumps [iPatch++] = code;
s390_jge (code, 0);
/* otherwise return r1 */
s390_lgr (code, s390_r2, s390_r1);
s390_br (code, s390_r14);
for (i = 0; i < iPatch; i++) {
displace = ((uintptr_t) code - (uintptr_t) rgctx_null_jumps[i]) / 2;
s390_patch_rel ((rgctx_null_jumps [i] + 2), displace);
}
g_free (rgctx_null_jumps);
/* move the rgctx pointer to the VTABLE register */
#if MONO_ARCH_VTABLE_REG != s390_r2
s390_lgr (code, MONO_ARCH_VTABLE_REG, s390_r2);
#endif
MonoMemoryManager *mem_manager = mini_get_default_mem_manager ();
tramp = (guint8*)mono_arch_create_specific_trampoline (GUINT_TO_POINTER (slot),
MONO_TRAMPOLINE_RGCTX_LAZY_FETCH, mem_manager, NULL);
/* jump to the actual trampoline */
displace = (tramp - code) / 2;
if ((displace >= INT_MIN) && (displace <= INT_MAX))
s390_jg (code, displace);
else {
S390_SET (code, s390_r1, tramp);
s390_br (code, s390_r1);
}
mono_arch_flush_icache (buf, code - buf);
MONO_PROFILER_RAISE (jit_code_buffer, (buf, code - buf, MONO_PROFILER_CODE_BUFFER_GENERICS_TRAMPOLINE, NULL));
g_assert (code - buf <= tramp_size);
char *name = mono_get_rgctx_fetch_trampoline_name (slot);
*info = mono_tramp_info_create (name, buf, code - buf, ji, unwind_ops);
g_free (name);
return(buf);
}
/*========================= End of Function ========================*/
/**
*
 * @brief Architecture-specific static RGCTX trampoline
 *
 * @param[in] @arg - Argument to trampoline being created
 * @param[in] @addr - Jump target address
*
* Create a trampoline which sets RGCTX_REG to ARG
* then jumps to ADDR.
*/
gpointer
mono_arch_get_static_rgctx_trampoline (MonoMemoryManager *mem_manager, gpointer arg, gpointer addr)
{
guint8 *code, *start;
gint64 displace;
int buf_len;
buf_len = 32;
start = code = (guint8 *) mono_mem_manager_code_reserve (mem_manager, buf_len);
S390_SET (code, MONO_ARCH_RGCTX_REG, arg);
displace = ((uintptr_t) addr - (uintptr_t) code) / 2;
if ((displace >= INT_MIN) && (displace <= INT_MAX))
s390_jg (code, (gint32) displace);
else {
S390_SET (code, s390_r1, addr);
s390_br (code, s390_r1);
}
g_assert ((code - start) < buf_len);
mono_arch_flush_icache (start, code - start);
MONO_PROFILER_RAISE (jit_code_buffer, (start, code - start, MONO_PROFILER_CODE_BUFFER_GENERICS_TRAMPOLINE, NULL));
mono_tramp_info_register (mono_tramp_info_create (NULL, start, code - start, NULL, NULL), mem_manager);
return(start);
}
/*========================= End of Function ========================*/
| /**
* @file
*
* @author Neale Ferguson <[email protected]>
*
* @section description
*
* Function - JIT trampoline code for S/390.
*
* Date - January, 2004
*
* Derivation - From tramp-x86 & tramp-ppc
* Paolo Molaro ([email protected])
* Dietmar Maurer ([email protected])
*
* Copyright - 2001 Ximian, Inc.
* Licensed under the MIT license. See LICENSE file in the project root for full license information.
*
*/
/*------------------------------------------------------------------*/
/* D e f i n e s */
/*------------------------------------------------------------------*/
#define LMFReg s390_r13
/*
* Method-specific trampoline code fragment sizes
*/
#define SPECIFIC_TRAMPOLINE_SIZE 96
/*========================= End of Defines =========================*/
/*------------------------------------------------------------------*/
/* I n c l u d e s */
/*------------------------------------------------------------------*/
#include <config.h>
#include <glib.h>
#include <string.h>
#include <mono/metadata/abi-details.h>
#include <mono/metadata/gc-internals.h>
#include <mono/metadata/marshal.h>
#include <mono/metadata/profiler-private.h>
#include <mono/metadata/tabledefs.h>
#include <mono/utils/mono-hwcap.h>
#include <mono/arch/s390x/s390x-codegen.h>
#include "mini.h"
#include "mini-s390x.h"
#include "mini-runtime.h"
#include "jit-icalls.h"
#include "mono/utils/mono-tls-inline.h"
#include <mono/metadata/components.h>
/*========================= End of Includes ========================*/
/*------------------------------------------------------------------*/
/* T y p e d e f s */
/*------------------------------------------------------------------*/
typedef struct {
guint8 stk[S390_MINIMAL_STACK_SIZE]; /* Standard s390x stack */
guint64 saveFn; /* Call address */
struct MonoLMF LMF; /* LMF */
} trampStack_t;
/*========================= End of Typedefs ========================*/
/*------------------------------------------------------------------*/
/* P r o t o t y p e s */
/*------------------------------------------------------------------*/
/*========================= End of Prototypes ======================*/
/*------------------------------------------------------------------*/
/* G l o b a l V a r i a b l e s */
/*------------------------------------------------------------------*/
/*====================== End of Global Variables ===================*/
/**
*
* @brief Build the unbox trampoline
*
* @param[in] Method pointer
* @param[in] Pointer to native code for method
*
* Return a pointer to a trampoline which does the unboxing before
* calling the method.
*
* When value type methods are called through the
* vtable we need to unbox the 'this' argument.
*/
gpointer
mono_arch_get_unbox_trampoline (MonoMethod *m, gpointer addr)
{
guint8 *code, *start;
int this_pos = s390_r2;
MonoMemoryManager *mem_manager = m_method_get_mem_manager (m);
char trampName[128];
start = code = (guint8 *) mono_mem_manager_code_reserve (mem_manager, 28);
S390_SET (code, s390_r1, addr);
s390_aghi (code, this_pos, MONO_ABI_SIZEOF (MonoObject));
s390_br (code, s390_r1);
g_assert ((code - start) <= 28);
mono_arch_flush_icache (start, code - start);
MONO_PROFILER_RAISE (jit_code_buffer, (start, code - start, MONO_PROFILER_CODE_BUFFER_UNBOX_TRAMPOLINE, m));
snprintf(trampName, sizeof(trampName), "%s_unbox_trampoline", m->name);
mono_tramp_info_register (mono_tramp_info_create (trampName, start, code - start, NULL, NULL), mem_manager);
return start;
}
/*========================= End of Function ========================*/
/**
*
* @brief Build the SDB trampoline
*
* @param[in] Type of trampoline (ss or bp)
* @param[in] MonoTrampInfo
* @param[in] Ahead of time indicator
*
* Return a trampoline which captures the current context, passes it to
* mono_debugger_agent_single_step_from_context ()/mono_debugger_agent_breakpoint_from_context (),
* then restores the (potentially changed) context.
*/
guint8 *
mono_arch_create_sdb_trampoline (gboolean single_step, MonoTrampInfo **info, gboolean aot)
{
int tramp_size = 512;
int i, framesize, ctx_offset,
gr_offset, fp_offset, ip_offset,
sp_offset;
guint8 *code, *buf;
void *ep;
GSList *unwind_ops = NULL;
MonoJumpInfo *ji = NULL;
code = buf = (guint8 *)mono_global_codeman_reserve (tramp_size);
framesize = S390_MINIMAL_STACK_SIZE;
ctx_offset = framesize;
framesize += sizeof (MonoContext);
framesize = ALIGN_TO (framesize, MONO_ARCH_FRAME_ALIGNMENT);
/**
 * Create unwind information - on entry s390_r1 holds the method's frame register
*/
s390_stmg (code, s390_r6, s390_r15, STK_BASE, S390_REG_SAVE_OFFSET);
mono_add_unwind_op_def_cfa (unwind_ops, code, buf, STK_BASE, S390_CFA_OFFSET);
gr_offset = S390_REG_SAVE_OFFSET - S390_CFA_OFFSET;
for (i = s390_r6; i <= s390_r15; i++) {
mono_add_unwind_op_offset (unwind_ops, code, buf, i, gr_offset);
gr_offset += sizeof(uintptr_t);
}
s390_lgr (code, s390_r0, STK_BASE);
s390_aghi (code, STK_BASE, -framesize);
mono_add_unwind_op_def_cfa_offset (unwind_ops, code, buf, framesize + S390_CFA_OFFSET);
s390_stg (code, s390_r0, 0, STK_BASE, 0);
gr_offset = ctx_offset + G_STRUCT_OFFSET(MonoContext, uc_mcontext.gregs);
sp_offset = ctx_offset + G_STRUCT_OFFSET(MonoContext, uc_mcontext.gregs[15]);
ip_offset = ctx_offset + G_STRUCT_OFFSET(MonoContext, uc_mcontext.psw.addr);
/* Initialize a MonoContext structure on the stack */
s390_stmg (code, s390_r0, s390_r14, STK_BASE, gr_offset);
s390_stg (code, s390_r1, 0, STK_BASE, sp_offset);
s390_stg (code, s390_r14, 0, STK_BASE, ip_offset);
fp_offset = ctx_offset + G_STRUCT_OFFSET(MonoContext, uc_mcontext.fpregs.fprs);
for (i = s390_f0; i <= s390_f15; ++i) {
s390_std (code, i, 0, STK_BASE, fp_offset);
fp_offset += sizeof(double);
}
/*
* Call the single step/breakpoint function in sdb using
* the context address as the parameter
*/
s390_la (code, s390_r2, 0, STK_BASE, ctx_offset);
if (single_step)
ep = (mono_component_debugger ())->single_step_from_context;
else
ep = (mono_component_debugger ())->breakpoint_from_context;
S390_SET (code, s390_r1, ep);
s390_basr (code, s390_r14, s390_r1);
/*
* Restore volatiles
*/
s390_lmg (code, s390_r0, s390_r5, STK_BASE, gr_offset);
/*
* Restore FP registers
*/
fp_offset = ctx_offset + G_STRUCT_OFFSET(MonoContext, uc_mcontext.fpregs.fprs);
for (i = s390_f0; i <= s390_f15; ++i) {
s390_ld (code, i, 0, STK_BASE, fp_offset);
fp_offset += sizeof(double);
}
/*
* Load the IP from the context to pick up any SET_IP command results
*/
s390_lg (code, s390_r14, 0, STK_BASE, ip_offset);
/*
* Restore everything else from the on-entry values
*/
s390_aghi (code, STK_BASE, framesize);
mono_add_unwind_op_def_cfa_offset (unwind_ops, code, buf, S390_CFA_OFFSET);
mono_add_unwind_op_same_value (unwind_ops, code, buf, STK_BASE);
s390_lmg (code, s390_r6, s390_r13, STK_BASE, S390_REG_SAVE_OFFSET);
for (i = s390_r6; i <= s390_r13; i++)
mono_add_unwind_op_same_value (unwind_ops, code, buf, i);
s390_br (code, s390_r14);
g_assertf ((code - buf) <= tramp_size, "%d %d", (int)(code - buf), tramp_size);
mono_arch_flush_icache (buf, code - buf);
MONO_PROFILER_RAISE (jit_code_buffer, (buf, code - buf, MONO_PROFILER_CODE_BUFFER_HELPER, NULL));
g_assert (code - buf <= tramp_size);
const char *tramp_name = single_step ? "sdb_single_step_trampoline" : "sdb_breakpoint_trampoline";
*info = mono_tramp_info_create (tramp_name, buf, code - buf, ji, unwind_ops);
return buf;
}
/**
*
* @brief Locate the address of the thunk target
*
* @param[in] @code - Instruction following the branch and save
* @returns Address of the thunk code
*
* A thunk call is a sequence of:
* lgrl rx,tgt Load address of target (.-6)
* basr rx,rx Branch and save to that address (.), or,
* br r1 Jump to target (.)
*
* The target of that code is a thunk which:
* tgt: .quad target Destination
*/
guint8 *
mono_arch_get_call_target (guint8 *code)
{
guint8 *thunk;
guint32 rel;
/*
* Determine thunk address by adding the relative offset
* in the lgrl to its location
*/
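/* lgrl encodes a signed 32-bit displacement in halfwords, relative to
 * the start of the instruction (code - 6), hence the doubling below */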
rel = *(guint32 *) ((uintptr_t) code - 4);
thunk = (guint8 *) ((uintptr_t) code - 6) + (rel * 2);
return(thunk);
}
/*========================= End of Function ========================*/
/**
*
* @brief Patch the callsite
*
* @param[in] @method_start - first instruction of method
* @param[in] @orig_code - Instruction following the branch and save
* @param[in] @addr - New value for target of call
*
* Patch a call. The call is either a 'thunked' call identified by the BASR R14,R14
* instruction or a direct call
*/
void
mono_arch_patch_callsite (guint8 *method_start, guint8 *orig_code, guint8 *addr)
{
guint64 *thunk;
thunk = (guint64 *) mono_arch_get_call_target(orig_code - 2);
*thunk = (guint64) addr;
}
/*========================= End of Function ========================*/
/**
*
* @brief Patch PLT entry (AOT only)
*
* @param[in] @code - Location of PLT
* @param[in] @got - Global Offset Table
* @param[in] @regs - Registers at the time
* @param[in] @addr - Target address
*
* Not reached on s390x until we have AOT support
*
*/
void
mono_arch_patch_plt_entry (guint8 *code, gpointer *got, host_mgreg_t *regs, guint8 *addr)
{
g_assert_not_reached ();
}
/*========================= End of Function ========================*/
/**
*
* @brief Architecture-specific trampoline creation
*
* @param[in] @tramp_type - Type of trampoline
* @param[out] @info - Pointer to trampoline information
* @param[in] @aot - AOT indicator
*
* Create a generic trampoline
*
*/
guchar*
mono_arch_create_generic_trampoline (MonoTrampolineType tramp_type, MonoTrampInfo **info, gboolean aot)
{
const char *tramp_name;
guint8 *buf, *tramp, *code;
int i, offset, has_caller;
short *o[1];
GSList *unwind_ops = NULL;
MonoJumpInfo *ji = NULL;
g_assert (!aot);
/* Now we'll create in 'buf' the S/390 trampoline code. This
is the trampoline code common to all methods */
code = buf = (guint8 *) mono_global_codeman_reserve(512);
if (tramp_type == MONO_TRAMPOLINE_JUMP)
has_caller = 0;
else
has_caller = 1;
/*-----------------------------------------------------------
STEP 0: First create a non-standard function prologue with a
stack size big enough to save our registers.
-----------------------------------------------------------*/
mono_add_unwind_op_def_cfa (unwind_ops, buf, code, STK_BASE, S390_CFA_OFFSET);
s390_stmg (buf, s390_r6, s390_r15, STK_BASE, S390_REG_SAVE_OFFSET);
offset = S390_REG_SAVE_OFFSET - S390_CFA_OFFSET;
for (i = s390_r6; i <= s390_r15; i++) {
mono_add_unwind_op_offset (unwind_ops, buf, code, i, offset);
offset += sizeof(uintptr_t);
}
s390_lgr (buf, s390_r11, s390_r15);
s390_aghi (buf, STK_BASE, -sizeof(trampStack_t));
mono_add_unwind_op_def_cfa_offset (unwind_ops, buf, code, sizeof(trampStack_t) + S390_CFA_OFFSET);
s390_stg (buf, s390_r11, 0, STK_BASE, 0);
/*---------------------------------------------------------------*/
/* we build the MonoLMF structure on the stack - see mini-s390.h */
/* Keep in sync with the code in mono_arch_emit_prolog */
/*---------------------------------------------------------------*/
s390_lgr (buf, LMFReg, STK_BASE);
s390_aghi (buf, LMFReg, G_STRUCT_OFFSET(trampStack_t, LMF));
/*---------------------------------------------------------------*/
/* Save general and floating point registers in LMF */
/*---------------------------------------------------------------*/
s390_stmg (buf, s390_r0, s390_r1, LMFReg, G_STRUCT_OFFSET(MonoLMF, gregs[0]));
s390_stmg (buf, s390_r2, s390_r5, LMFReg, G_STRUCT_OFFSET(MonoLMF, gregs[2]));
s390_mvc (buf, 10*sizeof(gulong), LMFReg, G_STRUCT_OFFSET(MonoLMF, gregs[6]),
s390_r11, S390_REG_SAVE_OFFSET);
offset = G_STRUCT_OFFSET(MonoLMF, fregs[0]);
for (i = s390_f0; i <= s390_f15; ++i) {
s390_std (buf, i, 0, LMFReg, offset);
offset += sizeof(gdouble);
}
/*----------------------------------------------------------
STEP 1: call 'mono_get_lmf_addr()' to get the address of our
LMF. We'll need to restore it after the call to
's390_magic_trampoline' and before the call to the native
method.
----------------------------------------------------------*/
S390_SET (buf, s390_r1, mono_get_lmf_addr);
s390_basr (buf, s390_r14, s390_r1);
/*---------------------------------------------------------------*/
/* Set lmf.lmf_addr = jit_tls->lmf */
/*---------------------------------------------------------------*/
s390_stg (buf, s390_r2, 0, LMFReg,
G_STRUCT_OFFSET(MonoLMF, lmf_addr));
/*---------------------------------------------------------------*/
/* Get current lmf */
/*---------------------------------------------------------------*/
s390_lg (buf, s390_r0, 0, s390_r2, 0);
/*---------------------------------------------------------------*/
/* Set our lmf as the current lmf */
/*---------------------------------------------------------------*/
s390_stg (buf, LMFReg, 0, s390_r2, 0);
/*---------------------------------------------------------------*/
/* Have our lmf.previous_lmf point to the last lmf */
/*---------------------------------------------------------------*/
s390_stg (buf, s390_r0, 0, LMFReg,
G_STRUCT_OFFSET(MonoLMF, previous_lmf));
/*---------------------------------------------------------------*/
/* save method info */
/*---------------------------------------------------------------*/
s390_lg (buf, s390_r1, 0, LMFReg, G_STRUCT_OFFSET(MonoLMF, gregs[0]));
s390_stg (buf, s390_r1, 0, LMFReg, G_STRUCT_OFFSET(MonoLMF, method));
/*---------------------------------------------------------------*/
/* save the current SP */
/*---------------------------------------------------------------*/
s390_lg (buf, s390_r1, 0, STK_BASE, 0);
s390_stg (buf, s390_r1, 0, LMFReg, G_STRUCT_OFFSET(MonoLMF, ebp));
/*---------------------------------------------------------------*/
/* save the current IP */
/*---------------------------------------------------------------*/
if (has_caller) {
s390_lg (buf, s390_r1, 0, s390_r1, S390_RET_ADDR_OFFSET);
} else {
s390_lghi (buf, s390_r1, 0);
}
s390_stg (buf, s390_r1, 0, LMFReg, G_STRUCT_OFFSET(MonoLMF, eip));
/*---------------------------------------------------------------*/
/* STEP 2: call the C trampoline function */
/*---------------------------------------------------------------*/
/* Set arguments */
/* Arg 1: host_mgreg_t *regs */
s390_la (buf, s390_r2, 0, LMFReg, G_STRUCT_OFFSET(MonoLMF, gregs[0]));
/* Arg 2: code (next address to the instruction that called us) */
if (has_caller) {
s390_lg (buf, s390_r3, 0, s390_r11, S390_RET_ADDR_OFFSET);
} else {
s390_lghi (buf, s390_r3, 0);
}
/* Arg 3: Trampoline argument */
s390_lg (buf, s390_r4, 0, LMFReg, G_STRUCT_OFFSET(MonoLMF, gregs[0]));
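/* gregs[0] holds r0 as saved on entry; the specific trampoline stub
 * loaded its per-method argument into r0 before branching here */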
/* Arg 4: trampoline address. */
S390_SET (buf, s390_r5, buf);
/* Calculate call address and call the C trampoline. Return value will be in r2 */
tramp = (guint8*)mono_get_trampoline_func (tramp_type);
S390_SET (buf, s390_r1, tramp);
s390_basr (buf, s390_r14, s390_r1);
/* OK, code address is now in r2. Save it, so that we
can restore r2 and use it later */
s390_stg (buf, s390_r2, 0, STK_BASE, G_STRUCT_OFFSET(trampStack_t, saveFn));
/*----------------------------------------------------------
STEP 3: Restore the LMF
----------------------------------------------------------*/
restoreLMF(buf, STK_BASE, sizeof(trampStack_t));
/* Check for thread interruption */
S390_SET (buf, s390_r1, (guint8 *)mono_thread_force_interruption_checkpoint_noraise);
s390_basr (buf, s390_r14, s390_r1);
s390_ltgr (buf, s390_r2, s390_r2);
s390_jz (buf, 0); CODEPTR (buf, o[0]);
/*
* Exception case:
* We have an exception we want to throw in the caller's frame, so pop
* the trampoline frame and throw from the caller.
*/
S390_SET (buf, s390_r1, (guint *)mono_get_rethrow_preserve_exception_addr ());
s390_aghi (buf, STK_BASE, sizeof(trampStack_t));
s390_lg (buf, s390_r1, 0, s390_r1, 0);
s390_lmg (buf, s390_r6, s390_r14, STK_BASE, S390_REG_SAVE_OFFSET);
s390_br (buf, s390_r1);
PTRSLOT (buf, o[0]);
/* Reload result */
s390_lg (buf, s390_r1, 0, STK_BASE, G_STRUCT_OFFSET(trampStack_t, saveFn));
/*----------------------------------------------------------
STEP 4: call the compiled method
----------------------------------------------------------*/
/* Restore parameter registers */
s390_lmg (buf, s390_r2, s390_r5, LMFReg, G_STRUCT_OFFSET(MonoLMF, gregs[2]));
/* Restore the FP registers */
offset = G_STRUCT_OFFSET(MonoLMF, fregs[0]);
for (i = s390_f0; i <= s390_f15; ++i) {
s390_ld (buf, i, 0, LMFReg, offset);
offset += sizeof(gdouble);
}
/* Restore stack pointer and jump to the code -
* R14 contains the return address to our caller
*/
s390_lgr (buf, STK_BASE, s390_r11);
mono_add_unwind_op_def_cfa_offset (unwind_ops, buf, code, S390_CFA_OFFSET);
mono_add_unwind_op_same_value (unwind_ops, buf, code, STK_BASE);
s390_lmg (buf, s390_r6, s390_r14, STK_BASE, S390_REG_SAVE_OFFSET);
for (i = s390_r6; i <= s390_r14; i++)
mono_add_unwind_op_same_value (unwind_ops, buf, code, i);
if (MONO_TRAMPOLINE_TYPE_MUST_RETURN(tramp_type)) {
s390_lgr (buf, s390_r2, s390_r1);
s390_br (buf, s390_r14);
} else {
s390_br (buf, s390_r1);
}
/* Flush instruction cache, since we've generated code */
mono_arch_flush_icache (code, buf - code);
MONO_PROFILER_RAISE (jit_code_buffer, (buf, code - buf, MONO_PROFILER_CODE_BUFFER_HELPER, NULL));
g_assert (info);
tramp_name = mono_get_generic_trampoline_name (tramp_type);
*info = mono_tramp_info_create (tramp_name, code, buf - code, ji, unwind_ops);
/* Sanity check */
g_assert ((buf - code) <= 512);
return code;
}
/*========================= End of Function ========================*/
/**
*
* @brief Architecture-specific method invalidation
*
* @param[in] @func - Function to call
* @param[in] @func_arg - Argument to invalidation function
*
 * Call an error routine so people can fix their code
*
*/
void
mono_arch_invalidate_method (MonoJitInfo *ji, void *func, gpointer func_arg)
{
/* FIXME: This is not thread safe */
guint8 *code = (guint8 *) ji->code_start;
S390_SET (code, s390_r1, func);
S390_SET (code, s390_r2, func_arg);
s390_br (code, s390_r1);
}
/*========================= End of Function ========================*/
/**
*
* @brief Architecture-specific specific trampoline creation
*
* @param[in] @arg1 - Argument to trampoline being created
* @param[in] @tramp_type - Trampoline type
 * @param[in] @mem_manager - Memory manager to allocate the trampoline from
* @param[out] @code_len - Length of trampoline created
*
* Create the specified kind of trampoline
*
*/
gpointer
mono_arch_create_specific_trampoline (gpointer arg1, MonoTrampolineType tramp_type, MonoMemoryManager *mem_manager, guint32 *code_len)
{
guint8 *code, *buf, *tramp;
gint64 displace;
tramp = mono_get_trampoline_code (tramp_type);
/*----------------------------------------------------------*/
/* This is the method-specific part of the trampoline. Its */
/* purpose is to provide the generic part with the */
/* MonoMethod *method pointer. We'll use r1 to keep it. */
/*----------------------------------------------------------*/
code = buf = (guint8 *) mono_mem_manager_code_reserve (mem_manager, SPECIFIC_TRAMPOLINE_SIZE);
S390_SET (buf, s390_r0, arg1);
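/* relative branches on s390x count in halfwords; use jg when the
 * displacement fits in 32 bits, otherwise branch via r1 */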
displace = (tramp - buf) / 2;
if ((displace >= INT_MIN) && (displace <= INT_MAX))
s390_jg (buf, (gint32) displace);
else {
S390_SET (buf, s390_r1, tramp);
s390_br (buf, s390_r1);
}
/* Flush instruction cache, since we've generated code */
mono_arch_flush_icache (code, buf - code);
MONO_PROFILER_RAISE (jit_code_buffer, (buf, code - buf,
MONO_PROFILER_CODE_BUFFER_SPECIFIC_TRAMPOLINE,
(void *) mono_get_generic_trampoline_simple_name (tramp_type)));
/* Sanity check */
g_assert ((buf - code) <= SPECIFIC_TRAMPOLINE_SIZE);
if (code_len)
*code_len = buf - code;
return code;
}
/*========================= End of Function ========================*/
/**
*
* @brief Architecture-specific RGCTX lazy fetch trampoline
*
* @param[in] @slot - Instance
* @param[out] @info - Mono Trampoline Information
* @param[in] @aot - AOT indicator
*
* Create the specified kind of trampoline
*
*/
gpointer
mono_arch_create_rgctx_lazy_fetch_trampoline (guint32 slot, MonoTrampInfo **info, gboolean aot)
{
guint8 *tramp;
guint8 *code, *buf;
guint8 **rgctx_null_jumps;
gint64 displace;
int tramp_size,
depth,
index,
iPatch = 0,
i;
gboolean mrgctx;
MonoJumpInfo *ji = NULL;
GSList *unwind_ops = NULL;
mrgctx = MONO_RGCTX_SLOT_IS_MRGCTX (slot);
index = MONO_RGCTX_SLOT_INDEX (slot);
if (mrgctx)
index += MONO_SIZEOF_METHOD_RUNTIME_GENERIC_CONTEXT / sizeof (target_mgreg_t);
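/* the first slot of each rgctx array links to the next, larger array,
 * so a level holds size - 1 payload slots; walk down until the index fits */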
for (depth = 0; ; ++depth) {
int size = mono_class_rgctx_get_array_size (depth, mrgctx);
if (index < size - 1)
break;
index -= size - 1;
}
tramp_size = 48 + 16 * depth;
if (mrgctx)
tramp_size += 4;
else
tramp_size += 12;
code = buf = (guint8 *) mono_global_codeman_reserve (tramp_size);
unwind_ops = mono_arch_get_cie_program ();
rgctx_null_jumps = (guint8 **) g_malloc (sizeof (guint8*) * (depth + 2));
if (mrgctx) {
/* get mrgctx ptr */
s390_lgr (code, s390_r1, s390_r2);
} else {
/* load rgctx ptr from vtable */
s390_lg (code, s390_r1, 0, s390_r2, MONO_STRUCT_OFFSET(MonoVTable, runtime_generic_context));
/* is the rgctx ptr null? */
s390_ltgr (code, s390_r1, s390_r1);
/* if yes, jump to actual trampoline */
rgctx_null_jumps [iPatch++] = code;
s390_jge (code, 0);
}
for (i = 0; i < depth; ++i) {
/* load ptr to next array */
if (mrgctx && i == 0)
s390_lg (code, s390_r1, 0, s390_r1, MONO_SIZEOF_METHOD_RUNTIME_GENERIC_CONTEXT);
else
s390_lg (code, s390_r1, 0, s390_r1, 0);
s390_ltgr (code, s390_r1, s390_r1);
/* if the ptr is null then jump to actual trampoline */
rgctx_null_jumps [iPatch++] = code;
s390_jge (code, 0);
}
/* fetch slot */
s390_lg (code, s390_r1, 0, s390_r1, (sizeof (target_mgreg_t) * (index + 1)));
/* is the slot null? */
s390_ltgr (code, s390_r1, s390_r1);
/* if yes, jump to actual trampoline */
rgctx_null_jumps [iPatch++] = code;
s390_jge (code, 0);
/* otherwise return r1 */
s390_lgr (code, s390_r2, s390_r1);
s390_br (code, s390_r14);
for (i = 0; i < iPatch; i++) {
displace = ((uintptr_t) code - (uintptr_t) rgctx_null_jumps[i]) / 2;
s390_patch_rel ((rgctx_null_jumps [i] + 2), displace);
}
g_free (rgctx_null_jumps);
/* move the rgctx pointer to the VTABLE register */
#if MONO_ARCH_VTABLE_REG != s390_r2
s390_lgr (code, MONO_ARCH_VTABLE_REG, s390_r2);
#endif
MonoMemoryManager *mem_manager = mini_get_default_mem_manager ();
tramp = (guint8*)mono_arch_create_specific_trampoline (GUINT_TO_POINTER (slot),
MONO_TRAMPOLINE_RGCTX_LAZY_FETCH, mem_manager, NULL);
/* jump to the actual trampoline */
displace = (tramp - code) / 2;
if ((displace >= INT_MIN) && (displace <= INT_MAX))
s390_jg (code, displace);
else {
S390_SET (code, s390_r1, tramp);
s390_br (code, s390_r1);
}
mono_arch_flush_icache (buf, code - buf);
MONO_PROFILER_RAISE (jit_code_buffer, (buf, code - buf, MONO_PROFILER_CODE_BUFFER_GENERICS_TRAMPOLINE, NULL));
g_assert (code - buf <= tramp_size);
char *name = mono_get_rgctx_fetch_trampoline_name (slot);
*info = mono_tramp_info_create (name, buf, code - buf, ji, unwind_ops);
g_free (name);
return(buf);
}
/*========================= End of Function ========================*/
/**
*
 * @brief Architecture-specific static RGCTX trampoline
 *
 * @param[in] @arg - Argument to trampoline being created
 * @param[in] @addr - Jump target address
*
* Create a trampoline which sets RGCTX_REG to ARG
* then jumps to ADDR.
*/
gpointer
mono_arch_get_static_rgctx_trampoline (MonoMemoryManager *mem_manager, gpointer arg, gpointer addr)
{
guint8 *code, *start;
gint64 displace;
int buf_len;
buf_len = 32;
start = code = (guint8 *) mono_mem_manager_code_reserve (mem_manager, buf_len);
S390_SET (code, MONO_ARCH_RGCTX_REG, arg);
displace = ((uintptr_t) addr - (uintptr_t) code) / 2;
if ((displace >= INT_MIN) && (displace <= INT_MAX))
s390_jg (code, (gint32) displace);
else {
S390_SET (code, s390_r1, addr);
s390_br (code, s390_r1);
}
g_assert ((code - start) < buf_len);
mono_arch_flush_icache (start, code - start);
MONO_PROFILER_RAISE (jit_code_buffer, (start, code - start, MONO_PROFILER_CODE_BUFFER_GENERICS_TRAMPOLINE, NULL));
mono_tramp_info_register (mono_tramp_info_create (NULL, start, code - start, NULL, NULL), mem_manager);
return(start);
}
/*========================= End of Function ========================*/
| -1 |
dotnet/runtime | 66,213 | [mono] Put WeakAttribute support under an ifdef | This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | lambdageek | 2022-03-04T19:50:51Z | 2022-03-08T14:08:40Z | 34880aaeb463cc17268b44aecb80dda0914cfa01 | 8342a08b8bf0ff2ffb64cd5992e2d60c8d6d63c1 | [mono] Put WeakAttribute support under an ifdef. This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | ./src/coreclr/debug/di/stdafx.h | // Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
//*****************************************************************************
// stdafx.h
//
//
// Common include file for utility code.
//*****************************************************************************
#include <stdio.h>
#include <windows.h>
#include <winnt.h>
#include <dbgtargetcontext.h>
#define RIGHT_SIDE_COMPILE
//-----------------------------------------------------------------------------
// Contracts for RS threading.
// We only do this for debug builds and not for inproc
//-----------------------------------------------------------------------------
#if defined(_DEBUG)
#define RSCONTRACTS
#endif
// In case of FEATURE_DBGIPC_TRANSPORT_DI we use pipe for debugger debugee communication
// and event redirection is not needed. (won't work anyway)
#ifndef FEATURE_DBGIPC_TRANSPORT_DI
// Currently, we can only redirect exception events. Since real interop-debugging
// needs all events, redirection can't work in real-interop.
// However, whether we're interop-debugging is determined at runtime, so we always
// enable at compile time and then we need a runtime check later.
#define ENABLE_EVENT_REDIRECTION_PIPELINE
#endif
#include "ex.h"
#include "sigparser.h"
#include "corpub.h"
#include "rspriv.h"
// This is included to deal with GCC limitations around templates.
// For GCC, if a compilation unit refers to a templated class (like Ptr<T>), GCC requires the compilation
// unit to have T's definitions for anything that Ptr may call.
// RsPriv.h has a RSExtSmartPtr<ShimProcess>, which will call ShimProcess::AddRef, which means the same compilation unit
// must have the definition of ShimProcess::AddRef, and therefore the whole ShimProcess class.
// CL.exe does not have this problem.
// Practically, this means that anybody that includes rspriv.h must include shimpriv.h.
#include "shimpriv.h"
#ifdef _DEBUG
#include "utilcode.h"
#endif
#ifndef TARGET_ARM
#define DbiGetThreadContext(hThread, lpContext) ::GetThreadContext(hThread, (CONTEXT*)(lpContext))
#define DbiSetThreadContext(hThread, lpContext) ::SetThreadContext(hThread, (CONTEXT*)(lpContext))
#else
BOOL DbiGetThreadContext(HANDLE hThread, DT_CONTEXT *lpContext);
BOOL DbiSetThreadContext(HANDLE hThread, const DT_CONTEXT *lpContext);
#endif
| // Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
//*****************************************************************************
// stdafx.h
//
//
// Common include file for utility code.
//*****************************************************************************
#include <stdio.h>
#include <windows.h>
#include <winnt.h>
#include <dbgtargetcontext.h>
#define RIGHT_SIDE_COMPILE
//-----------------------------------------------------------------------------
// Contracts for RS threading.
// We only do this for debug builds and not for inproc
//-----------------------------------------------------------------------------
#if defined(_DEBUG)
#define RSCONTRACTS
#endif
// In case of FEATURE_DBGIPC_TRANSPORT_DI we use pipe for debugger debugee communication
// and event redirection is not needed. (won't work anyway)
#ifndef FEATURE_DBGIPC_TRANSPORT_DI
// Currently, we can only redirect exception events. Since real interop-debugging
// needs all events, redirection can't work in real-interop.
// However, whether we're interop-debugging is determined at runtime, so we always
// enable at compile time and then we need a runtime check later.
#define ENABLE_EVENT_REDIRECTION_PIPELINE
#endif
#include "ex.h"
#include "sigparser.h"
#include "corpub.h"
#include "rspriv.h"
// This is included to deal with GCC limitations around templates.
// For GCC, if a compilation unit refers to a templated class (like Ptr<T>), GCC requires the compilation
// unit to have T's definitions for anything that Ptr may call.
// RsPriv.h has a RSExtSmartPtr<ShimProcess>, which will call ShimProcess::AddRef, which means the same compilation unit
// must have the definition of ShimProcess::AddRef, and therefore the whole ShimProcess class.
// CL.exe does not have this problem.
// Practically, this means that anybody that includes rspriv.h must include shimpriv.h.
#include "shimpriv.h"
#ifdef _DEBUG
#include "utilcode.h"
#endif
#ifndef TARGET_ARM
#define DbiGetThreadContext(hThread, lpContext) ::GetThreadContext(hThread, (CONTEXT*)(lpContext))
#define DbiSetThreadContext(hThread, lpContext) ::SetThreadContext(hThread, (CONTEXT*)(lpContext))
#else
BOOL DbiGetThreadContext(HANDLE hThread, DT_CONTEXT *lpContext);
BOOL DbiSetThreadContext(HANDLE hThread, const DT_CONTEXT *lpContext);
#endif
| -1 |
dotnet/runtime | 66,213 | [mono] Put WeakAttribute support under an ifdef | This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | lambdageek | 2022-03-04T19:50:51Z | 2022-03-08T14:08:40Z | 34880aaeb463cc17268b44aecb80dda0914cfa01 | 8342a08b8bf0ff2ffb64cd5992e2d60c8d6d63c1 | [mono] Put WeakAttribute support under an ifdef. This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | ./src/coreclr/inc/crosscomp.h | // Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
//
// crosscomp.h - cross-compilation enablement structures.
//
#pragma once
#if (!defined(HOST_64BIT) && defined(TARGET_64BIT)) || (defined(HOST_64BIT) && !defined(TARGET_64BIT))
#define CROSSBITNESS_COMPILE
#endif
// Target platform-specific library naming
//
#ifdef TARGET_WINDOWS
#define MAKE_TARGET_DLLNAME_W(name) name W(".dll")
#define MAKE_TARGET_DLLNAME_A(name) name ".dll"
#else // TARGET_WINDOWS
#ifdef TARGET_OSX
#define MAKE_TARGET_DLLNAME_W(name) W("lib") name W(".dylib")
#define MAKE_TARGET_DLLNAME_A(name) "lib" name ".dylib"
#else
#define MAKE_TARGET_DLLNAME_W(name) W("lib") name W(".so")
#define MAKE_TARGET_DLLNAME_A(name) "lib" name ".so"
#endif
#endif // TARGET_WINDOWS
#ifdef UNICODE
#define MAKE_TARGET_DLLNAME(name) MAKE_TARGET_DLLNAME_W(name)
#else
#define MAKE_TARGET_DLLNAME(name) MAKE_TARGET_DLLNAME_A(name)
#endif
#if !defined(HOST_ARM) && defined(TARGET_ARM) // Non-ARM Host managing ARM related code
#ifndef CROSS_COMPILE
#define CROSS_COMPILE
#endif
#define ARM_MAX_BREAKPOINTS 8
#define ARM_MAX_WATCHPOINTS 1
#ifndef CONTEXT_UNWOUND_TO_CALL
#define CONTEXT_UNWOUND_TO_CALL 0x20000000
#endif
#if !defined(HOST_ARM64)
typedef struct _NEON128 {
ULONGLONG Low;
LONGLONG High;
} NEON128, *PNEON128;
#endif // !defined(HOST_ARM64)
typedef struct DECLSPEC_ALIGN(8) _T_CONTEXT {
//
// Control flags.
//
DWORD ContextFlags;
//
// Integer registers
//
DWORD R0;
DWORD R1;
DWORD R2;
DWORD R3;
DWORD R4;
DWORD R5;
DWORD R6;
DWORD R7;
DWORD R8;
DWORD R9;
DWORD R10;
DWORD R11;
DWORD R12;
//
// Control Registers
//
DWORD Sp;
DWORD Lr;
DWORD Pc;
DWORD Cpsr;
//
// Floating Point/NEON Registers
//
DWORD Fpscr;
DWORD Padding;
union {
NEON128 Q[16];
ULONGLONG D[32];
DWORD S[32];
};
//
// Debug registers
//
DWORD Bvr[ARM_MAX_BREAKPOINTS];
DWORD Bcr[ARM_MAX_BREAKPOINTS];
DWORD Wvr[ARM_MAX_WATCHPOINTS];
DWORD Wcr[ARM_MAX_WATCHPOINTS];
DWORD Padding2[2];
} T_CONTEXT, *PT_CONTEXT;
//
// Define function table entry - a function table entry is generated for
// each frame function.
//
#if defined(HOST_WINDOWS)
typedef struct _T_RUNTIME_FUNCTION {
DWORD BeginAddress;
DWORD UnwindData;
} T_RUNTIME_FUNCTION, *PT_RUNTIME_FUNCTION;
#else // HOST_WINDOWS
#define T_RUNTIME_FUNCTION RUNTIME_FUNCTION
#define PT_RUNTIME_FUNCTION PRUNTIME_FUNCTION
#endif // HOST_WINDOWS
//
// Nonvolatile context pointer record.
//
typedef struct _T_KNONVOLATILE_CONTEXT_POINTERS {
PDWORD R4;
PDWORD R5;
PDWORD R6;
PDWORD R7;
PDWORD R8;
PDWORD R9;
PDWORD R10;
PDWORD R11;
PDWORD Lr;
PULONGLONG D8;
PULONGLONG D9;
PULONGLONG D10;
PULONGLONG D11;
PULONGLONG D12;
PULONGLONG D13;
PULONGLONG D14;
PULONGLONG D15;
} T_KNONVOLATILE_CONTEXT_POINTERS, *PT_KNONVOLATILE_CONTEXT_POINTERS;
//
// Define dynamic function table entry.
//
#if defined(HOST_X86)
typedef
PT_RUNTIME_FUNCTION
(*PGET_RUNTIME_FUNCTION_CALLBACK) (
IN DWORD64 ControlPc,
IN PVOID Context
);
#endif // defined(HOST_X86)
typedef struct _T_DISPATCHER_CONTEXT {
ULONG ControlPc;
ULONG ImageBase;
PT_RUNTIME_FUNCTION FunctionEntry;
ULONG EstablisherFrame;
ULONG TargetPc;
PT_CONTEXT ContextRecord;
PEXCEPTION_ROUTINE LanguageHandler;
PVOID HandlerData;
PVOID HistoryTable;
ULONG ScopeIndex;
BOOLEAN ControlPcIsUnwound;
PUCHAR NonVolatileRegisters;
} T_DISPATCHER_CONTEXT, *PT_DISPATCHER_CONTEXT;
#elif defined(HOST_AMD64) && defined(TARGET_ARM64) // Host amd64 managing ARM64 related code
#ifndef CROSS_COMPILE
#define CROSS_COMPILE
#endif
//
// Specify the number of breakpoints and watchpoints that the OS
// will track. Architecturally, ARM64 supports up to 16. In practice,
// however, almost no one implements more than 4 of each.
//
#define ARM64_MAX_BREAKPOINTS 8
#define ARM64_MAX_WATCHPOINTS 2
#define CONTEXT_UNWOUND_TO_CALL 0x20000000
typedef union _NEON128 {
struct {
ULONGLONG Low;
LONGLONG High;
};
double D[2];
float S[4];
WORD H[8];
BYTE B[16];
} NEON128, *PNEON128;
typedef struct DECLSPEC_ALIGN(16) _T_CONTEXT {
//
// Control flags.
//
/* +0x000 */ DWORD ContextFlags;
//
// Integer registers
//
/* +0x004 */ DWORD Cpsr; // NZVF + DAIF + CurrentEL + SPSel
/* +0x008 */ union {
struct {
DWORD64 X0;
DWORD64 X1;
DWORD64 X2;
DWORD64 X3;
DWORD64 X4;
DWORD64 X5;
DWORD64 X6;
DWORD64 X7;
DWORD64 X8;
DWORD64 X9;
DWORD64 X10;
DWORD64 X11;
DWORD64 X12;
DWORD64 X13;
DWORD64 X14;
DWORD64 X15;
DWORD64 X16;
DWORD64 X17;
DWORD64 X18;
DWORD64 X19;
DWORD64 X20;
DWORD64 X21;
DWORD64 X22;
DWORD64 X23;
DWORD64 X24;
DWORD64 X25;
DWORD64 X26;
DWORD64 X27;
DWORD64 X28;
};
DWORD64 X[29];
};
/* +0x0f0 */ DWORD64 Fp;
/* +0x0f8 */ DWORD64 Lr;
/* +0x100 */ DWORD64 Sp;
/* +0x108 */ DWORD64 Pc;
//
// Floating Point/NEON Registers
//
/* +0x110 */ NEON128 V[32];
/* +0x310 */ DWORD Fpcr;
/* +0x314 */ DWORD Fpsr;
//
// Debug registers
//
/* +0x318 */ DWORD Bcr[ARM64_MAX_BREAKPOINTS];
/* +0x338 */ DWORD64 Bvr[ARM64_MAX_BREAKPOINTS];
/* +0x378 */ DWORD Wcr[ARM64_MAX_WATCHPOINTS];
/* +0x380 */ DWORD64 Wvr[ARM64_MAX_WATCHPOINTS];
/* +0x390 */
} T_CONTEXT, *PT_CONTEXT;
// _IMAGE_ARM64_RUNTIME_FUNCTION_ENTRY (see ExternalAPIs\Win9CoreSystem\inc\winnt.h)
typedef struct _T_RUNTIME_FUNCTION {
DWORD BeginAddress;
union {
DWORD UnwindData;
struct {
DWORD Flag : 2;
DWORD FunctionLength : 11;
DWORD RegF : 3;
DWORD RegI : 4;
DWORD H : 1;
DWORD CR : 2;
DWORD FrameSize : 9;
} PackedUnwindData;
};
} T_RUNTIME_FUNCTION, *PT_RUNTIME_FUNCTION;
#ifdef HOST_UNIX
typedef
EXCEPTION_DISPOSITION
(*PEXCEPTION_ROUTINE) (
PEXCEPTION_RECORD ExceptionRecord,
ULONG64 EstablisherFrame,
PCONTEXT ContextRecord,
PVOID DispatcherContext
);
#endif
//
// Define exception dispatch context structure.
//
typedef struct _T_DISPATCHER_CONTEXT {
DWORD64 ControlPc;
DWORD64 ImageBase;
PT_RUNTIME_FUNCTION FunctionEntry;
DWORD64 EstablisherFrame;
DWORD64 TargetPc;
PCONTEXT ContextRecord;
PEXCEPTION_ROUTINE LanguageHandler;
PVOID HandlerData;
PVOID HistoryTable;
DWORD ScopeIndex;
BOOLEAN ControlPcIsUnwound;
PBYTE NonVolatileRegisters;
} T_DISPATCHER_CONTEXT, *PT_DISPATCHER_CONTEXT;
//
// Nonvolatile context pointer record.
//
typedef struct _T_KNONVOLATILE_CONTEXT_POINTERS {
PDWORD64 X19;
PDWORD64 X20;
PDWORD64 X21;
PDWORD64 X22;
PDWORD64 X23;
PDWORD64 X24;
PDWORD64 X25;
PDWORD64 X26;
PDWORD64 X27;
PDWORD64 X28;
PDWORD64 Fp;
PDWORD64 Lr;
PDWORD64 D8;
PDWORD64 D9;
PDWORD64 D10;
PDWORD64 D11;
PDWORD64 D12;
PDWORD64 D13;
PDWORD64 D14;
PDWORD64 D15;
} T_KNONVOLATILE_CONTEXT_POINTERS, *PT_KNONVOLATILE_CONTEXT_POINTERS;
#if defined(HOST_UNIX) && defined(TARGET_ARM64) && !defined(HOST_ARM64)
enum
{
UNW_AARCH64_X19 = 19,
UNW_AARCH64_X20 = 20,
UNW_AARCH64_X21 = 21,
UNW_AARCH64_X22 = 22,
UNW_AARCH64_X23 = 23,
UNW_AARCH64_X24 = 24,
UNW_AARCH64_X25 = 25,
UNW_AARCH64_X26 = 26,
UNW_AARCH64_X27 = 27,
UNW_AARCH64_X28 = 28,
UNW_AARCH64_X29 = 29,
UNW_AARCH64_X30 = 30,
UNW_AARCH64_SP = 31,
UNW_AARCH64_PC = 32
};
#endif // TARGET_ARM64 && !HOST_ARM64
#elif defined(HOST_AMD64) && defined(TARGET_LOONGARCH64) // Host amd64 managing LOONGARCH64 related code
#ifndef CROSS_COMPILE
#define CROSS_COMPILE
#endif
//
// Specify the number of breakpoints and watchpoints that the OS
// will track. Architecturally, LOONGARCH64 supports up to 16. In practice,
// however, almost no one implements more than 4 of each.
//
#define LOONGARCH64_MAX_BREAKPOINTS 8
#define LOONGARCH64_MAX_WATCHPOINTS 2
#define CONTEXT_UNWOUND_TO_CALL 0x20000000
typedef struct DECLSPEC_ALIGN(16) _T_CONTEXT {
//
// Control flags.
//
/* +0x000 */ DWORD ContextFlags;
//
// Integer registers
//
DWORD64 R0;
DWORD64 Ra;
DWORD64 Tp;
DWORD64 Sp;
DWORD64 A0; // also known as V0
DWORD64 A1; // also known as V1
DWORD64 A2;
DWORD64 A3;
DWORD64 A4;
DWORD64 A5;
DWORD64 A6;
DWORD64 A7;
DWORD64 T0;
DWORD64 T1;
DWORD64 T2;
DWORD64 T3;
DWORD64 T4;
DWORD64 T5;
DWORD64 T6;
DWORD64 T7;
DWORD64 T8;
DWORD64 X0;
DWORD64 Fp;
DWORD64 S0;
DWORD64 S1;
DWORD64 S2;
DWORD64 S3;
DWORD64 S4;
DWORD64 S5;
DWORD64 S6;
DWORD64 S7;
DWORD64 S8;
DWORD64 Pc;
//
// Floating Point Registers
//
// TODO: support SIMD registers.
DWORD64 F[32];
DWORD Fcsr;
} T_CONTEXT, *PT_CONTEXT;
// _IMAGE_LOONGARCH64_RUNTIME_FUNCTION_ENTRY (see ExternalAPIs\Win9CoreSystem\inc\winnt.h)
typedef struct _T_RUNTIME_FUNCTION {
DWORD BeginAddress;
union {
DWORD UnwindData;
struct {
DWORD Flag : 2;
DWORD FunctionLength : 11;
DWORD RegF : 3;
DWORD RegI : 4;
DWORD H : 1;
DWORD CR : 2;
DWORD FrameSize : 9;
} PackedUnwindData;
};
} T_RUNTIME_FUNCTION, *PT_RUNTIME_FUNCTION;
//
// Define exception dispatch context structure.
//
typedef struct _T_DISPATCHER_CONTEXT {
DWORD64 ControlPc;
DWORD64 ImageBase;
PT_RUNTIME_FUNCTION FunctionEntry;
DWORD64 EstablisherFrame;
DWORD64 TargetPc;
PCONTEXT ContextRecord;
PEXCEPTION_ROUTINE LanguageHandler;
PVOID HandlerData;
PVOID HistoryTable;
DWORD ScopeIndex;
BOOLEAN ControlPcIsUnwound;
PBYTE NonVolatileRegisters;
} T_DISPATCHER_CONTEXT, *PT_DISPATCHER_CONTEXT;
//
// Nonvolatile context pointer record.
//
typedef struct _T_KNONVOLATILE_CONTEXT_POINTERS {
PDWORD64 S0;
PDWORD64 S1;
PDWORD64 S2;
PDWORD64 S3;
PDWORD64 S4;
PDWORD64 S5;
PDWORD64 S6;
PDWORD64 S7;
PDWORD64 S8;
PDWORD64 Fp;
PDWORD64 Tp;
PDWORD64 Ra;
PDWORD64 F24;
PDWORD64 F25;
PDWORD64 F26;
PDWORD64 F27;
PDWORD64 F28;
PDWORD64 F29;
PDWORD64 F30;
PDWORD64 F31;
} T_KNONVOLATILE_CONTEXT_POINTERS, *PT_KNONVOLATILE_CONTEXT_POINTERS;
#else
#define T_CONTEXT CONTEXT
#define PT_CONTEXT PCONTEXT
#define T_DISPATCHER_CONTEXT DISPATCHER_CONTEXT
#define PT_DISPATCHER_CONTEXT PDISPATCHER_CONTEXT
#define T_KNONVOLATILE_CONTEXT_POINTERS KNONVOLATILE_CONTEXT_POINTERS
#define PT_KNONVOLATILE_CONTEXT_POINTERS PKNONVOLATILE_CONTEXT_POINTERS
#define T_RUNTIME_FUNCTION RUNTIME_FUNCTION
#define PT_RUNTIME_FUNCTION PRUNTIME_FUNCTION
#endif
#if defined(DACCESS_COMPILE) && defined(TARGET_UNIX)
// This is a TARGET oriented copy of CRITICAL_SECTION and PAL_CS_NATIVE_DATA_SIZE
// It is configured based on TARGET configuration rather than HOST configuration
// There is validation code in src/coreclr/vm/crst.cpp to keep these from
// getting out of sync
#define T_CRITICAL_SECTION_VALIDATION_MESSAGE "T_CRITICAL_SECTION validation failed. It is not in sync with CRITICAL_SECTION"
#if defined(TARGET_OSX) && defined(TARGET_X86)
#define DAC_CS_NATIVE_DATA_SIZE 76
#elif defined(TARGET_OSX) && defined(TARGET_AMD64)
#define DAC_CS_NATIVE_DATA_SIZE 120
#elif defined(TARGET_OSX) && defined(TARGET_ARM64)
#define DAC_CS_NATIVE_DATA_SIZE 120
#elif defined(TARGET_FREEBSD) && defined(TARGET_X86)
#define DAC_CS_NATIVE_DATA_SIZE 12
#elif defined(TARGET_FREEBSD) && defined(TARGET_AMD64)
#define DAC_CS_NATIVE_DATA_SIZE 24
#elif defined(TARGET_LINUX) && defined(TARGET_ARM)
#define DAC_CS_NATIVE_DATA_SIZE 80
#elif defined(TARGET_LINUX) && defined(TARGET_ARM64)
#define DAC_CS_NATIVE_DATA_SIZE 116
#elif defined(TARGET_LINUX) && defined(TARGET_LOONGARCH64)
#define DAC_CS_NATIVE_DATA_SIZE 96
#elif defined(TARGET_LINUX) && defined(TARGET_X86)
#define DAC_CS_NATIVE_DATA_SIZE 76
#elif defined(TARGET_LINUX) && defined(TARGET_AMD64)
#define DAC_CS_NATIVE_DATA_SIZE 96
#elif defined(TARGET_LINUX) && defined(TARGET_S390X)
#define DAC_CS_NATIVE_DATA_SIZE 96
#elif defined(TARGET_NETBSD) && defined(TARGET_AMD64)
#define DAC_CS_NATIVE_DATA_SIZE 96
#elif defined(TARGET_NETBSD) && defined(TARGET_ARM)
#define DAC_CS_NATIVE_DATA_SIZE 56
#elif defined(TARGET_NETBSD) && defined(TARGET_X86)
#define DAC_CS_NATIVE_DATA_SIZE 56
#elif defined(__sun) && defined(TARGET_AMD64)
#define DAC_CS_NATIVE_DATA_SIZE 48
#else
#error DAC_CS_NATIVE_DATA_SIZE is not defined for this architecture. This should be same value as PAL_CS_NATIVE_DATA_SIZE (aka sizeof(PAL_CS_NATIVE_DATA)).
#endif
struct T_CRITICAL_SECTION {
PVOID DebugInfo;
LONG LockCount;
LONG RecursionCount;
HANDLE OwningThread;
ULONG_PTR SpinCount;
#ifdef PAL_TRACK_CRITICAL_SECTIONS_DATA
BOOL bInternal;
#endif // PAL_TRACK_CRITICAL_SECTIONS_DATA
volatile DWORD dwInitState;
union CSNativeDataStorage
{
BYTE rgNativeDataStorage[DAC_CS_NATIVE_DATA_SIZE];
PVOID pvAlign; // make sure the storage is machine-pointer-size aligned
} csnds;
};
#else
#define T_CRITICAL_SECTION CRITICAL_SECTION
#endif
| // Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
//
// crosscomp.h - cross-compilation enablement structures.
//
#pragma once
#if (!defined(HOST_64BIT) && defined(TARGET_64BIT)) || (defined(HOST_64BIT) && !defined(TARGET_64BIT))
#define CROSSBITNESS_COMPILE
#endif
// Target platform-specific library naming
//
#ifdef TARGET_WINDOWS
#define MAKE_TARGET_DLLNAME_W(name) name W(".dll")
#define MAKE_TARGET_DLLNAME_A(name) name ".dll"
#else // TARGET_WINDOWS
#ifdef TARGET_OSX
#define MAKE_TARGET_DLLNAME_W(name) W("lib") name W(".dylib")
#define MAKE_TARGET_DLLNAME_A(name) "lib" name ".dylib"
#else
#define MAKE_TARGET_DLLNAME_W(name) W("lib") name W(".so")
#define MAKE_TARGET_DLLNAME_A(name) "lib" name ".so"
#endif
#endif // TARGET_WINDOWS
#ifdef UNICODE
#define MAKE_TARGET_DLLNAME(name) MAKE_TARGET_DLLNAME_W(name)
#else
#define MAKE_TARGET_DLLNAME(name) MAKE_TARGET_DLLNAME_A(name)
#endif
#if !defined(HOST_ARM) && defined(TARGET_ARM) // Non-ARM Host managing ARM related code
#ifndef CROSS_COMPILE
#define CROSS_COMPILE
#endif
#define ARM_MAX_BREAKPOINTS 8
#define ARM_MAX_WATCHPOINTS 1
#ifndef CONTEXT_UNWOUND_TO_CALL
#define CONTEXT_UNWOUND_TO_CALL 0x20000000
#endif
#if !defined(HOST_ARM64)
typedef struct _NEON128 {
ULONGLONG Low;
LONGLONG High;
} NEON128, *PNEON128;
#endif // !defined(HOST_ARM64)
typedef struct DECLSPEC_ALIGN(8) _T_CONTEXT {
//
// Control flags.
//
DWORD ContextFlags;
//
// Integer registers
//
DWORD R0;
DWORD R1;
DWORD R2;
DWORD R3;
DWORD R4;
DWORD R5;
DWORD R6;
DWORD R7;
DWORD R8;
DWORD R9;
DWORD R10;
DWORD R11;
DWORD R12;
//
// Control Registers
//
DWORD Sp;
DWORD Lr;
DWORD Pc;
DWORD Cpsr;
//
// Floating Point/NEON Registers
//
DWORD Fpscr;
DWORD Padding;
union {
NEON128 Q[16];
ULONGLONG D[32];
DWORD S[32];
};
//
// Debug registers
//
DWORD Bvr[ARM_MAX_BREAKPOINTS];
DWORD Bcr[ARM_MAX_BREAKPOINTS];
DWORD Wvr[ARM_MAX_WATCHPOINTS];
DWORD Wcr[ARM_MAX_WATCHPOINTS];
DWORD Padding2[2];
} T_CONTEXT, *PT_CONTEXT;
//
// Define function table entry - a function table entry is generated for
// each frame function.
//
#if defined(HOST_WINDOWS)
typedef struct _T_RUNTIME_FUNCTION {
DWORD BeginAddress;
DWORD UnwindData;
} T_RUNTIME_FUNCTION, *PT_RUNTIME_FUNCTION;
#else // HOST_WINDOWS
#define T_RUNTIME_FUNCTION RUNTIME_FUNCTION
#define PT_RUNTIME_FUNCTION PRUNTIME_FUNCTION
#endif // HOST_WINDOWS
//
// Nonvolatile context pointer record.
//
typedef struct _T_KNONVOLATILE_CONTEXT_POINTERS {
PDWORD R4;
PDWORD R5;
PDWORD R6;
PDWORD R7;
PDWORD R8;
PDWORD R9;
PDWORD R10;
PDWORD R11;
PDWORD Lr;
PULONGLONG D8;
PULONGLONG D9;
PULONGLONG D10;
PULONGLONG D11;
PULONGLONG D12;
PULONGLONG D13;
PULONGLONG D14;
PULONGLONG D15;
} T_KNONVOLATILE_CONTEXT_POINTERS, *PT_KNONVOLATILE_CONTEXT_POINTERS;
//
// Define dynamic function table entry.
//
#if defined(HOST_X86)
typedef
PT_RUNTIME_FUNCTION
(*PGET_RUNTIME_FUNCTION_CALLBACK) (
IN DWORD64 ControlPc,
IN PVOID Context
);
#endif // defined(HOST_X86)
typedef struct _T_DISPATCHER_CONTEXT {
ULONG ControlPc;
ULONG ImageBase;
PT_RUNTIME_FUNCTION FunctionEntry;
ULONG EstablisherFrame;
ULONG TargetPc;
PT_CONTEXT ContextRecord;
PEXCEPTION_ROUTINE LanguageHandler;
PVOID HandlerData;
PVOID HistoryTable;
ULONG ScopeIndex;
BOOLEAN ControlPcIsUnwound;
PUCHAR NonVolatileRegisters;
} T_DISPATCHER_CONTEXT, *PT_DISPATCHER_CONTEXT;
#elif defined(HOST_AMD64) && defined(TARGET_ARM64) // Host amd64 managing ARM64 related code
#ifndef CROSS_COMPILE
#define CROSS_COMPILE
#endif
//
// Specify the number of breakpoints and watchpoints that the OS
// will track. Architecturally, ARM64 supports up to 16. In practice,
// however, almost no one implements more than 4 of each.
//
#define ARM64_MAX_BREAKPOINTS 8
#define ARM64_MAX_WATCHPOINTS 2
#define CONTEXT_UNWOUND_TO_CALL 0x20000000
typedef union _NEON128 {
struct {
ULONGLONG Low;
LONGLONG High;
};
double D[2];
float S[4];
WORD H[8];
BYTE B[16];
} NEON128, *PNEON128;
typedef struct DECLSPEC_ALIGN(16) _T_CONTEXT {
//
// Control flags.
//
/* +0x000 */ DWORD ContextFlags;
//
// Integer registers
//
/* +0x004 */ DWORD Cpsr; // NZVF + DAIF + CurrentEL + SPSel
/* +0x008 */ union {
struct {
DWORD64 X0;
DWORD64 X1;
DWORD64 X2;
DWORD64 X3;
DWORD64 X4;
DWORD64 X5;
DWORD64 X6;
DWORD64 X7;
DWORD64 X8;
DWORD64 X9;
DWORD64 X10;
DWORD64 X11;
DWORD64 X12;
DWORD64 X13;
DWORD64 X14;
DWORD64 X15;
DWORD64 X16;
DWORD64 X17;
DWORD64 X18;
DWORD64 X19;
DWORD64 X20;
DWORD64 X21;
DWORD64 X22;
DWORD64 X23;
DWORD64 X24;
DWORD64 X25;
DWORD64 X26;
DWORD64 X27;
DWORD64 X28;
};
DWORD64 X[29];
};
/* +0x0f0 */ DWORD64 Fp;
/* +0x0f8 */ DWORD64 Lr;
/* +0x100 */ DWORD64 Sp;
/* +0x108 */ DWORD64 Pc;
//
// Floating Point/NEON Registers
//
/* +0x110 */ NEON128 V[32];
/* +0x310 */ DWORD Fpcr;
/* +0x314 */ DWORD Fpsr;
//
// Debug registers
//
/* +0x318 */ DWORD Bcr[ARM64_MAX_BREAKPOINTS];
/* +0x338 */ DWORD64 Bvr[ARM64_MAX_BREAKPOINTS];
/* +0x378 */ DWORD Wcr[ARM64_MAX_WATCHPOINTS];
/* +0x380 */ DWORD64 Wvr[ARM64_MAX_WATCHPOINTS];
/* +0x390 */
} T_CONTEXT, *PT_CONTEXT;
// _IMAGE_ARM64_RUNTIME_FUNCTION_ENTRY (see ExternalAPIs\Win9CoreSystem\inc\winnt.h)
typedef struct _T_RUNTIME_FUNCTION {
DWORD BeginAddress;
union {
DWORD UnwindData;
struct {
DWORD Flag : 2;
DWORD FunctionLength : 11;
DWORD RegF : 3;
DWORD RegI : 4;
DWORD H : 1;
DWORD CR : 2;
DWORD FrameSize : 9;
} PackedUnwindData;
};
} T_RUNTIME_FUNCTION, *PT_RUNTIME_FUNCTION;
#ifdef HOST_UNIX
typedef
EXCEPTION_DISPOSITION
(*PEXCEPTION_ROUTINE) (
PEXCEPTION_RECORD ExceptionRecord,
ULONG64 EstablisherFrame,
PCONTEXT ContextRecord,
PVOID DispatcherContext
);
#endif
//
// Define exception dispatch context structure.
//
typedef struct _T_DISPATCHER_CONTEXT {
DWORD64 ControlPc;
DWORD64 ImageBase;
PT_RUNTIME_FUNCTION FunctionEntry;
DWORD64 EstablisherFrame;
DWORD64 TargetPc;
PCONTEXT ContextRecord;
PEXCEPTION_ROUTINE LanguageHandler;
PVOID HandlerData;
PVOID HistoryTable;
DWORD ScopeIndex;
BOOLEAN ControlPcIsUnwound;
PBYTE NonVolatileRegisters;
} T_DISPATCHER_CONTEXT, *PT_DISPATCHER_CONTEXT;
//
// Nonvolatile context pointer record.
//
typedef struct _T_KNONVOLATILE_CONTEXT_POINTERS {
PDWORD64 X19;
PDWORD64 X20;
PDWORD64 X21;
PDWORD64 X22;
PDWORD64 X23;
PDWORD64 X24;
PDWORD64 X25;
PDWORD64 X26;
PDWORD64 X27;
PDWORD64 X28;
PDWORD64 Fp;
PDWORD64 Lr;
PDWORD64 D8;
PDWORD64 D9;
PDWORD64 D10;
PDWORD64 D11;
PDWORD64 D12;
PDWORD64 D13;
PDWORD64 D14;
PDWORD64 D15;
} T_KNONVOLATILE_CONTEXT_POINTERS, *PT_KNONVOLATILE_CONTEXT_POINTERS;
#if defined(HOST_UNIX) && defined(TARGET_ARM64) && !defined(HOST_ARM64)
enum
{
UNW_AARCH64_X19 = 19,
UNW_AARCH64_X20 = 20,
UNW_AARCH64_X21 = 21,
UNW_AARCH64_X22 = 22,
UNW_AARCH64_X23 = 23,
UNW_AARCH64_X24 = 24,
UNW_AARCH64_X25 = 25,
UNW_AARCH64_X26 = 26,
UNW_AARCH64_X27 = 27,
UNW_AARCH64_X28 = 28,
UNW_AARCH64_X29 = 29,
UNW_AARCH64_X30 = 30,
UNW_AARCH64_SP = 31,
UNW_AARCH64_PC = 32
};
#endif // TARGET_ARM64 && !HOST_ARM64
#elif defined(HOST_AMD64) && defined(TARGET_LOONGARCH64) // Host amd64 managing LOONGARCH64 related code
#ifndef CROSS_COMPILE
#define CROSS_COMPILE
#endif
//
// Specify the number of breakpoints and watchpoints that the OS
// will track. Architecturally, LOONGARCH64 supports up to 16. In practice,
// however, almost no one implements more than 4 of each.
//
#define LOONGARCH64_MAX_BREAKPOINTS 8
#define LOONGARCH64_MAX_WATCHPOINTS 2
#define CONTEXT_UNWOUND_TO_CALL 0x20000000
typedef struct DECLSPEC_ALIGN(16) _T_CONTEXT {
//
// Control flags.
//
/* +0x000 */ DWORD ContextFlags;
//
// Integer registers
//
DWORD64 R0;
DWORD64 Ra;
DWORD64 Tp;
DWORD64 Sp;
DWORD64 A0; // also known as V0
DWORD64 A1; // also known as V1
DWORD64 A2;
DWORD64 A3;
DWORD64 A4;
DWORD64 A5;
DWORD64 A6;
DWORD64 A7;
DWORD64 T0;
DWORD64 T1;
DWORD64 T2;
DWORD64 T3;
DWORD64 T4;
DWORD64 T5;
DWORD64 T6;
DWORD64 T7;
DWORD64 T8;
DWORD64 X0;
DWORD64 Fp;
DWORD64 S0;
DWORD64 S1;
DWORD64 S2;
DWORD64 S3;
DWORD64 S4;
DWORD64 S5;
DWORD64 S6;
DWORD64 S7;
DWORD64 S8;
DWORD64 Pc;
//
// Floating Point Registers
//
// TODO: support SIMD registers.
DWORD64 F[32];
DWORD Fcsr;
} T_CONTEXT, *PT_CONTEXT;
// _IMAGE_LOONGARCH64_RUNTIME_FUNCTION_ENTRY (see ExternalAPIs\Win9CoreSystem\inc\winnt.h)
typedef struct _T_RUNTIME_FUNCTION {
DWORD BeginAddress;
union {
DWORD UnwindData;
struct {
DWORD Flag : 2;
DWORD FunctionLength : 11;
DWORD RegF : 3;
DWORD RegI : 4;
DWORD H : 1;
DWORD CR : 2;
DWORD FrameSize : 9;
} PackedUnwindData;
};
} T_RUNTIME_FUNCTION, *PT_RUNTIME_FUNCTION;
//
// Define exception dispatch context structure.
//
typedef struct _T_DISPATCHER_CONTEXT {
DWORD64 ControlPc;
DWORD64 ImageBase;
PT_RUNTIME_FUNCTION FunctionEntry;
DWORD64 EstablisherFrame;
DWORD64 TargetPc;
PCONTEXT ContextRecord;
PEXCEPTION_ROUTINE LanguageHandler;
PVOID HandlerData;
PVOID HistoryTable;
DWORD ScopeIndex;
BOOLEAN ControlPcIsUnwound;
PBYTE NonVolatileRegisters;
} T_DISPATCHER_CONTEXT, *PT_DISPATCHER_CONTEXT;
//
// Nonvolatile context pointer record.
//
typedef struct _T_KNONVOLATILE_CONTEXT_POINTERS {
PDWORD64 S0;
PDWORD64 S1;
PDWORD64 S2;
PDWORD64 S3;
PDWORD64 S4;
PDWORD64 S5;
PDWORD64 S6;
PDWORD64 S7;
PDWORD64 S8;
PDWORD64 Fp;
PDWORD64 Tp;
PDWORD64 Ra;
PDWORD64 F24;
PDWORD64 F25;
PDWORD64 F26;
PDWORD64 F27;
PDWORD64 F28;
PDWORD64 F29;
PDWORD64 F30;
PDWORD64 F31;
} T_KNONVOLATILE_CONTEXT_POINTERS, *PT_KNONVOLATILE_CONTEXT_POINTERS;
#else
#define T_CONTEXT CONTEXT
#define PT_CONTEXT PCONTEXT
#define T_DISPATCHER_CONTEXT DISPATCHER_CONTEXT
#define PT_DISPATCHER_CONTEXT PDISPATCHER_CONTEXT
#define T_KNONVOLATILE_CONTEXT_POINTERS KNONVOLATILE_CONTEXT_POINTERS
#define PT_KNONVOLATILE_CONTEXT_POINTERS PKNONVOLATILE_CONTEXT_POINTERS
#define T_RUNTIME_FUNCTION RUNTIME_FUNCTION
#define PT_RUNTIME_FUNCTION PRUNTIME_FUNCTION
#endif
#if defined(DACCESS_COMPILE) && defined(TARGET_UNIX)
// This is a TARGET oriented copy of CRITICAL_SECTION and PAL_CS_NATIVE_DATA_SIZE
// It is configured based on TARGET configuration rather than HOST configuration
// There is validation code in src/coreclr/vm/crst.cpp to keep these from
// getting out of sync
#define T_CRITICAL_SECTION_VALIDATION_MESSAGE "T_CRITICAL_SECTION validation failed. It is not in sync with CRITICAL_SECTION"
#if defined(TARGET_OSX) && defined(TARGET_X86)
#define DAC_CS_NATIVE_DATA_SIZE 76
#elif defined(TARGET_OSX) && defined(TARGET_AMD64)
#define DAC_CS_NATIVE_DATA_SIZE 120
#elif defined(TARGET_OSX) && defined(TARGET_ARM64)
#define DAC_CS_NATIVE_DATA_SIZE 120
#elif defined(TARGET_FREEBSD) && defined(TARGET_X86)
#define DAC_CS_NATIVE_DATA_SIZE 12
#elif defined(TARGET_FREEBSD) && defined(TARGET_AMD64)
#define DAC_CS_NATIVE_DATA_SIZE 24
#elif defined(TARGET_LINUX) && defined(TARGET_ARM)
#define DAC_CS_NATIVE_DATA_SIZE 80
#elif defined(TARGET_LINUX) && defined(TARGET_ARM64)
#define DAC_CS_NATIVE_DATA_SIZE 116
#elif defined(TARGET_LINUX) && defined(TARGET_LOONGARCH64)
#define DAC_CS_NATIVE_DATA_SIZE 96
#elif defined(TARGET_LINUX) && defined(TARGET_X86)
#define DAC_CS_NATIVE_DATA_SIZE 76
#elif defined(TARGET_LINUX) && defined(TARGET_AMD64)
#define DAC_CS_NATIVE_DATA_SIZE 96
#elif defined(TARGET_LINUX) && defined(TARGET_S390X)
#define DAC_CS_NATIVE_DATA_SIZE 96
#elif defined(TARGET_NETBSD) && defined(TARGET_AMD64)
#define DAC_CS_NATIVE_DATA_SIZE 96
#elif defined(TARGET_NETBSD) && defined(TARGET_ARM)
#define DAC_CS_NATIVE_DATA_SIZE 56
#elif defined(TARGET_NETBSD) && defined(TARGET_X86)
#define DAC_CS_NATIVE_DATA_SIZE 56
#elif defined(__sun) && defined(TARGET_AMD64)
#define DAC_CS_NATIVE_DATA_SIZE 48
#else
#error DAC_CS_NATIVE_DATA_SIZE is not defined for this architecture. This should be same value as PAL_CS_NATIVE_DATA_SIZE (aka sizeof(PAL_CS_NATIVE_DATA)).
#endif
struct T_CRITICAL_SECTION {
PVOID DebugInfo;
LONG LockCount;
LONG RecursionCount;
HANDLE OwningThread;
ULONG_PTR SpinCount;
#ifdef PAL_TRACK_CRITICAL_SECTIONS_DATA
BOOL bInternal;
#endif // PAL_TRACK_CRITICAL_SECTIONS_DATA
volatile DWORD dwInitState;
union CSNativeDataStorage
{
BYTE rgNativeDataStorage[DAC_CS_NATIVE_DATA_SIZE];
PVOID pvAlign; // make sure the storage is machine-pointer-size aligned
} csnds;
};
#else
#define T_CRITICAL_SECTION CRITICAL_SECTION
#endif
| -1 |
dotnet/runtime | 66,213 | [mono] Put WeakAttribute support under an ifdef | This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | lambdageek | 2022-03-04T19:50:51Z | 2022-03-08T14:08:40Z | 34880aaeb463cc17268b44aecb80dda0914cfa01 | 8342a08b8bf0ff2ffb64cd5992e2d60c8d6d63c1 | [mono] Put WeakAttribute support under an ifdef. This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | ./src/mono/mono/eventpipe/test/ep-tests.c | #if defined(_MSC_VER) && defined(_DEBUG)
#include "ep-tests-debug.h"
#endif
#include <eventpipe/ep.h>
#include <eventpipe/ep-config.h>
#include <eventpipe/ep-event.h>
#include <eventpipe/ep-session.h>
#include <eventpipe/ep-event-instance.h>
#include <eventpipe/ep-event-payload.h>
#include <eventpipe/ep-sample-profiler.h>
#include <eglib/test/test.h>
#define TEST_PROVIDER_NAME "MyTestProvider"
#define TEST_FILE "./ep_test_create_file.txt"
#define TEST_FILE_2 "./ep_test_create_file_2.txt"
//#define TEST_PERF
#ifdef _CRTDBG_MAP_ALLOC
static _CrtMemState eventpipe_memory_start_snapshot;
static _CrtMemState eventpipe_memory_end_snapshot;
static _CrtMemState eventpipe_memory_diff_snapshot;
#endif
static RESULT
test_eventpipe_setup (void)
{
uint32_t test_location = 0;
// Lazy initialized, force now to not show up as leak.
ep_rt_os_command_line_get ();
ep_rt_managed_command_line_get ();
test_location = 1;
// Init profiler, force now to not show up as leaks.
// Set long sampling rate to reduce impact.
EP_LOCK_ENTER (section1)
ep_sample_profiler_init (NULL);
ep_sample_profiler_set_sampling_rate (1000 * 1000 * 100);
EP_LOCK_EXIT (section1)
test_location = 2;
#ifdef _CRTDBG_MAP_ALLOC
_CrtMemCheckpoint (&eventpipe_memory_start_snapshot);
#endif
ep_thread_get_or_create ();
return NULL;
ep_on_error:
return FAILED ("Failed at test location=%i", test_location);
}
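// Verify that a single provider can be created and deleted cleanly.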
static RESULT
test_create_delete_provider (void)
{
RESULT result = NULL;
uint32_t test_location = 0;
EventPipeProvider *test_provider = ep_create_provider (TEST_PROVIDER_NAME, NULL, NULL, NULL);
if (!test_provider) {
result = FAILED ("Failed to create provider %s, ep_create_provider returned NULL", TEST_PROVIDER_NAME);
ep_raise_error ();
}
ep_on_exit:
ep_delete_provider (test_provider);
return result;
ep_on_error:
if (!result)
result = FAILED ("Failed at test location=%i", test_location);
ep_exit_error_handler ();
}
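// Stress: create and tear down 1000 uniquely named providers.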
static RESULT
test_stress_create_delete_provider (void)
{
RESULT result = NULL;
uint32_t test_location = 0;
EventPipeProvider *test_providers [1000] = {0};
for (uint32_t i = 0; i < 1000; ++i) {
char *provider_name = g_strdup_printf (TEST_PROVIDER_NAME "_%i", i);
test_providers [i] = ep_create_provider (provider_name, NULL, NULL, NULL);
g_free (provider_name);
if (!test_providers [i]) {
result = FAILED ("Failed to create provider %s_%i, ep_create_provider returned NULL", TEST_PROVIDER_NAME, i);
ep_raise_error ();
}
}
ep_on_exit:
for (uint32_t i = 0; i < 1000; ++i) {
if (test_providers [i])
ep_delete_provider (test_providers [i]);
}
return result;
ep_on_error:
if (!result)
result = FAILED ("Failed at test location=%i", test_location);
ep_exit_error_handler ();
}
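// A created provider should be retrievable by name until it is deleted.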
static RESULT
test_get_provider (void)
{
RESULT result = NULL;
uint32_t test_location = 0;
EventPipeProvider *test_provider = ep_create_provider (TEST_PROVIDER_NAME, NULL, NULL, NULL);
if (!test_provider) {
result = FAILED ("Failed to create provider %s, ep_create_provider returned NULL", TEST_PROVIDER_NAME);
ep_raise_error ();
}
test_location = 1;
EventPipeProvider *returned_test_provider = ep_get_provider (TEST_PROVIDER_NAME);
if (!returned_test_provider) {
result = FAILED ("Failed to get provider %s, ep_get_provider returned NULL", TEST_PROVIDER_NAME);
ep_raise_error ();
}
test_location = 2;
ep_delete_provider (test_provider);
test_provider = NULL;
returned_test_provider = ep_get_provider (TEST_PROVIDER_NAME);
if (returned_test_provider) {
result = FAILED ("Provider %s, still returned from ep_get_provider after deleted", TEST_PROVIDER_NAME);
ep_raise_error ();
}
ep_on_exit:
ep_delete_provider (test_provider);
return result;
ep_on_error:
if (!result)
result = FAILED ("Failed at test location=%i", test_location);
ep_exit_error_handler ();
}
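// Creating a provider under an already registered name should succeed, and ep_get_provider should keep returning the original instance.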
static RESULT
test_create_same_provider_twice (void)
{
RESULT result = NULL;
uint32_t test_location = 0;
EventPipeProvider *test_provider = NULL;
EventPipeProvider *test_provider2 = NULL;
EventPipeProvider *returned_test_provider = NULL;
test_provider = ep_create_provider (TEST_PROVIDER_NAME, NULL, NULL, NULL);
if (!test_provider) {
result = FAILED ("Failed to create provider %s, ep_create_provider returned NULL", TEST_PROVIDER_NAME);
ep_raise_error ();
}
test_location = 1;
returned_test_provider = ep_get_provider (TEST_PROVIDER_NAME);
if (!returned_test_provider) {
result = FAILED ("Failed to get provider %s, ep_get_provider returned NULL", TEST_PROVIDER_NAME);
ep_raise_error ();
}
test_location = 2;
test_provider2 = ep_create_provider (TEST_PROVIDER_NAME, NULL, NULL, NULL);
if (!test_provider2) {
result = FAILED ("Creating to create an already existing provider %s", TEST_PROVIDER_NAME);
ep_raise_error ();
}
test_location = 3;
returned_test_provider = ep_get_provider (TEST_PROVIDER_NAME);
if (!returned_test_provider) {
result = FAILED ("Failed to get provider %s, ep_get_provider returned NULL", TEST_PROVIDER_NAME);
ep_raise_error ();
}
test_location = 4;
if (returned_test_provider != test_provider) {
result = FAILED ("Failed to get provider %s, ep_get_provider returned unexpected provider instance", TEST_PROVIDER_NAME);
ep_raise_error ();
}
ep_on_exit:
ep_delete_provider (test_provider2);
ep_delete_provider (test_provider);
return result;
ep_on_error:
if (!result)
result = FAILED ("Failed at test location=%i", test_location);
ep_exit_error_handler ();
}
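// Enable a file-backed session for one provider and confirm EventPipe reports itself enabled.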
static RESULT
test_enable_disable (void)
{
RESULT result = NULL;
uint32_t test_location = 0;
EventPipeSessionID session_id = 0;
EventPipeProviderConfiguration provider_config;
EventPipeProviderConfiguration *current_provider_config = ep_provider_config_init (&provider_config, TEST_PROVIDER_NAME, 1, EP_EVENT_LEVEL_LOGALWAYS, "");
ep_raise_error_if_nok (current_provider_config != NULL);
test_location = 1;
session_id = ep_enable (
TEST_FILE,
1,
current_provider_config,
1,
EP_SESSION_TYPE_FILE,
EP_SERIALIZATION_FORMAT_NETTRACE_V4,
false,
NULL,
NULL,
NULL);
if (!session_id) {
result = FAILED ("Failed to enable session");
ep_raise_error ();
}
test_location = 2;
ep_start_streaming (session_id);
if (!ep_enabled ()) {
result = FAILED ("event pipe disabled");
ep_raise_error ();
}
ep_on_exit:
ep_disable (session_id);
ep_provider_config_fini (current_provider_config);
return result;
ep_on_error:
if (!result)
result = FAILED ("Failed at test location=%i", test_location);
ep_exit_error_handler ();
}
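// Shared helper: assert a session carries the three default runtime providers with the expected keywords, levels and filter data.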
static RESULT
validate_default_provider_config (EventPipeSession *session)
{
RESULT result = NULL;
uint32_t test_location = 0;
EventPipeSessionProviderList *provider_list = ep_session_get_providers (session);
EventPipeSessionProvider *session_provider = ep_rt_session_provider_list_find_by_name (ep_session_provider_list_get_providers_cref (provider_list), "Microsoft-Windows-DotNETRuntime");
ep_raise_error_if_nok (session_provider != NULL);
test_location = 1;
ep_raise_error_if_nok (!ep_rt_utf8_string_compare (ep_session_provider_get_provider_name (session_provider), "Microsoft-Windows-DotNETRuntime"));
test_location = 2;
ep_raise_error_if_nok (ep_session_provider_get_keywords (session_provider) == 0x4c14fccbd);
test_location = 3;
ep_raise_error_if_nok (ep_session_provider_get_logging_level (session_provider) == EP_EVENT_LEVEL_VERBOSE);
test_location = 4;
ep_raise_error_if_nok (ep_session_provider_get_filter_data (session_provider) == NULL);
test_location = 5;
session_provider = ep_rt_session_provider_list_find_by_name (ep_session_provider_list_get_providers_cref (provider_list), "Microsoft-Windows-DotNETRuntimePrivate");
ep_raise_error_if_nok (session_provider != NULL);
test_location = 6;
ep_raise_error_if_nok (!ep_rt_utf8_string_compare (ep_session_provider_get_provider_name (session_provider), "Microsoft-Windows-DotNETRuntimePrivate"));
test_location = 7;
ep_raise_error_if_nok (ep_session_provider_get_keywords (session_provider) == 0x4002000b);
test_location = 8;
ep_raise_error_if_nok (ep_session_provider_get_logging_level (session_provider) == EP_EVENT_LEVEL_VERBOSE);
test_location = 9;
ep_raise_error_if_nok (ep_session_provider_get_filter_data (session_provider) == NULL);
test_location = 10;
session_provider = ep_rt_session_provider_list_find_by_name (ep_session_provider_list_get_providers_cref (provider_list), "Microsoft-DotNETCore-SampleProfiler");
ep_raise_error_if_nok (session_provider != NULL);
test_location = 11;
ep_raise_error_if_nok (!ep_rt_utf8_string_compare (ep_session_provider_get_provider_name (session_provider), "Microsoft-DotNETCore-SampleProfiler"));
test_location = 12;
ep_raise_error_if_nok (ep_session_provider_get_keywords (session_provider) == 0);
test_location = 13;
ep_raise_error_if_nok (ep_session_provider_get_logging_level (session_provider) == EP_EVENT_LEVEL_VERBOSE);
test_location = 14;
ep_raise_error_if_nok (ep_session_provider_get_filter_data (session_provider) == NULL);
test_location = 15;
ep_on_exit:
return result;
ep_on_error:
if (!result)
result = FAILED ("Failed at test location=%i", test_location);
ep_exit_error_handler ();
}
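// A NULL provider configuration should fall back to the default provider set.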
static RESULT
test_enable_disable_default_provider_config (void)
{
RESULT result = NULL;
uint32_t test_location = 0;
EventPipeSessionID session_id = 0;
session_id = ep_enable_2 (
TEST_FILE,
1,
NULL,
EP_SESSION_TYPE_FILE,
EP_SERIALIZATION_FORMAT_NETTRACE_V4,
false,
NULL,
NULL,
NULL);
if (!session_id) {
result = FAILED ("Failed to enable session");
ep_raise_error ();
}
test_location = 2;
result = validate_default_provider_config ((EventPipeSession *)session_id);
ep_raise_error_if_nok (result == NULL);
test_location = 3;
ep_start_streaming (session_id);
if (!ep_enabled ()) {
result = FAILED ("event pipe disabled");
ep_raise_error ();
}
ep_on_exit:
ep_disable (session_id);
return result;
ep_on_error:
if (!result)
result = FAILED ("Failed at test location=%i", test_location);
ep_exit_error_handler ();
}
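// Two concurrent file sessions should each pick up the default provider set.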
static RESULT
test_enable_disable_multiple_default_provider_config (void)
{
RESULT result = NULL;
uint32_t test_location = 0;
EventPipeSessionID session_id_1 = 0;
EventPipeSessionID session_id_2 = 0;
session_id_1 = ep_enable_2 (
TEST_FILE,
1,
NULL,
EP_SESSION_TYPE_FILE,
EP_SERIALIZATION_FORMAT_NETTRACE_V4,
false,
NULL,
NULL,
NULL);
if (!session_id_1) {
result = FAILED ("Failed to enable session");
ep_raise_error ();
}
test_location = 2;
result = validate_default_provider_config ((EventPipeSession *)session_id_1);
ep_raise_error_if_nok (result == NULL);
test_location = 3;
ep_start_streaming (session_id_1);
if (!ep_enabled ()) {
result = FAILED ("event pipe disabled");
ep_raise_error ();
}
test_location = 4;
session_id_2 = ep_enable_2 (
TEST_FILE_2,
1,
NULL,
EP_SESSION_TYPE_FILE,
EP_SERIALIZATION_FORMAT_NETTRACE_V4,
false,
NULL,
NULL,
NULL);
if (!session_id_2) {
result = FAILED ("Failed to enable session");
ep_raise_error ();
}
test_location = 5;
result = validate_default_provider_config ((EventPipeSession *)session_id_2);
ep_raise_error_if_nok (result == NULL);
test_location = 6;
ep_start_streaming (session_id_2);
if (!ep_enabled ()) {
result = FAILED ("event pipe disabled");
ep_raise_error ();
}
ep_on_exit:
ep_disable (session_id_1);
ep_disable (session_id_2);
return result;
ep_on_error:
if (!result)
result = FAILED ("Failed at test location=%i", test_location);
ep_exit_error_handler ();
}
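// Parse a single provider configuration string of the form "name:keywords:level:".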
static RESULT
test_enable_disable_provider_config (void)
{
RESULT result = NULL;
uint32_t test_location = 0;
const ep_char8_t *provider_config = TEST_PROVIDER_NAME ":1:0:";
EventPipeSessionID session_id = 0;
session_id = ep_enable_2 (
TEST_FILE,
1,
provider_config,
EP_SESSION_TYPE_FILE,
EP_SERIALIZATION_FORMAT_NETTRACE_V4,
false,
NULL,
NULL,
NULL);
if (!session_id) {
result = FAILED ("Failed to enable session");
ep_raise_error ();
}
test_location = 2;
EventPipeSessionProviderList *provider_list = ep_session_get_providers ((EventPipeSession *)session_id);
EventPipeSessionProvider *session_provider = ep_rt_session_provider_list_find_by_name (ep_session_provider_list_get_providers_cref (provider_list), TEST_PROVIDER_NAME);
ep_raise_error_if_nok (session_provider != NULL);
test_location = 3;
ep_raise_error_if_nok (!ep_rt_utf8_string_compare (ep_session_provider_get_provider_name (session_provider), TEST_PROVIDER_NAME));
test_location = 4;
ep_raise_error_if_nok (ep_session_provider_get_keywords (session_provider) == 1);
test_location = 5;
ep_raise_error_if_nok (ep_session_provider_get_logging_level (session_provider) == EP_EVENT_LEVEL_LOGALWAYS);
test_location = 6;
ep_raise_error_if_nok (ep_session_provider_get_filter_data (session_provider) == NULL);
test_location = 7;
ep_start_streaming (session_id);
if (!ep_enabled ()) {
result = FAILED ("event pipe disabled");
ep_raise_error ();
}
ep_on_exit:
ep_disable (session_id);
return result;
ep_on_error:
if (!result)
result = FAILED ("Failed at test location=%i", test_location);
ep_exit_error_handler ();
}
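// Spelling out the default configuration as a string should match the built-in defaults.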
static RESULT
test_enable_disable_provider_parse_default_config (void)
{
RESULT result = NULL;
uint32_t test_location = 0;
const ep_char8_t *provider_config =
"Microsoft-Windows-DotNETRuntime"
":0x4c14fccbd"
":5"
":"
","
"Microsoft-Windows-DotNETRuntimePrivate"
":0x4002000b"
":5"
":"
","
"Microsoft-DotNETCore-SampleProfiler"
":0"
":5"
":";
EventPipeSessionID session_id = 0;
session_id = ep_enable_2 (
TEST_FILE,
1,
provider_config,
EP_SESSION_TYPE_FILE,
EP_SERIALIZATION_FORMAT_NETTRACE_V4,
false,
NULL,
NULL,
NULL);
if (!session_id) {
result = FAILED ("Failed to enable session");
ep_raise_error ();
}
test_location = 2;
result = validate_default_provider_config ((EventPipeSession *)session_id);
ep_raise_error_if_nok (result == NULL);
test_location = 3;
ep_start_streaming (session_id);
if (!ep_enabled ()) {
result = FAILED ("event pipe disabled");
ep_raise_error ();
}
ep_on_exit:
ep_disable (session_id);
return result;
ep_on_error:
if (!result)
result = FAILED ("Failed at test location=%i", test_location);
ep_exit_error_handler ();
}
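// Set through callback_context by provider_callback so tests can observe that the callback ran.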
static bool provider_callback_data;
static
void
provider_callback (
const uint8_t *source_id,
unsigned long is_enabled,
uint8_t level,
uint64_t match_any_keywords,
uint64_t match_all_keywords,
EventFilterDescriptor *filter_data,
void *callback_context)
{
*(bool *)callback_context = true;
}
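// A provider created while a session is already live should get its enable callback invoked immediately.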
static RESULT
test_create_delete_provider_with_callback (void)
{
RESULT result = NULL;
uint32_t test_location = 0;
EventPipeSessionID session_id = 0;
EventPipeProvider *test_provider = NULL;
EventPipeProviderConfiguration provider_config;
EventPipeProviderConfiguration *current_provider_config = ep_provider_config_init (&provider_config, TEST_PROVIDER_NAME, 1, EP_EVENT_LEVEL_LOGALWAYS, "");
ep_raise_error_if_nok (current_provider_config != NULL);
test_location = 1;
session_id = ep_enable (
TEST_FILE,
1,
current_provider_config,
1,
EP_SESSION_TYPE_FILE,
EP_SERIALIZATION_FORMAT_NETTRACE_V4,
false,
NULL,
NULL,
NULL);
if (!session_id) {
result = FAILED ("Failed to enable session");
ep_raise_error ();
}
test_location = 2;
ep_start_streaming (session_id);
test_provider = ep_create_provider (TEST_PROVIDER_NAME, provider_callback, NULL, &provider_callback_data);
ep_raise_error_if_nok (test_provider != NULL);
test_location = 3;
if (!provider_callback_data) {
result = FAILED ("Provider callback not called");
ep_raise_error ();
}
ep_on_exit:
ep_delete_provider (test_provider);
ep_disable (session_id);
ep_provider_config_fini (current_provider_config);
return result;
ep_on_error:
if (!result)
result = FAILED ("Failed at test location=%i", test_location);
ep_exit_error_handler ();
}
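// Build a metadata event from a manually allocated event instance.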
static RESULT
test_build_event_metadata (void)
{
RESULT result = NULL;
uint32_t test_location = 0;
EventPipeProvider *provider = NULL;
EventPipeEvent *ep_event = NULL;
EventPipeEventInstance *ep_event_instance = NULL;
EventPipeEventMetadataEvent *metadata_event = NULL;
provider = ep_create_provider (TEST_PROVIDER_NAME, NULL, NULL, NULL);
ep_raise_error_if_nok (provider != NULL);
test_location = 1;
ep_event = ep_event_alloc (provider, 1, 1, 1, EP_EVENT_LEVEL_VERBOSE, false, NULL, 0);
ep_raise_error_if_nok (ep_event != NULL);
test_location = 2;
ep_event_instance = ep_event_instance_alloc (ep_event, 0, 0, NULL, 0, NULL, NULL);
ep_raise_error_if_nok (ep_event_instance != NULL);
test_location = 3;
metadata_event = ep_build_event_metadata_event (ep_event_instance, 1);
ep_raise_error_if_nok (metadata_event != NULL);
test_location = 4;
ep_on_exit:
ep_delete_provider (provider);
ep_event_free (ep_event);
ep_event_instance_free (ep_event_instance);
ep_event_metdata_event_free (metadata_event);
return result;
ep_on_error:
if (!result)
result = FAILED ("Failed at test location=%i", test_location);
ep_exit_error_handler ();
}
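// A file session should survive a plain enable/start-streaming/disable round trip.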
static RESULT
test_session_start_streaming (void)
{
RESULT result = NULL;
uint32_t test_location = 0;
EventPipeSessionID session_id = 0;
EventPipeProviderConfiguration provider_config;
EventPipeProviderConfiguration *current_provider_config = ep_provider_config_init (&provider_config, TEST_PROVIDER_NAME, 1, EP_EVENT_LEVEL_LOGALWAYS, "");
ep_raise_error_if_nok (current_provider_config != NULL);
test_location = 1;
session_id = ep_enable (
TEST_FILE,
1,
current_provider_config,
1,
EP_SESSION_TYPE_FILE,
EP_SERIALIZATION_FORMAT_NETTRACE_V4,
false,
NULL,
NULL,
NULL);
if (!session_id) {
result = FAILED ("Failed to enable session");
ep_raise_error ();
}
test_location = 2;
ep_start_streaming (session_id);
ep_on_exit:
ep_disable (session_id);
ep_provider_config_fini (current_provider_config);
return result;
ep_on_error:
if (!result)
result = FAILED ("Failed at test location=%i", test_location);
ep_exit_error_handler ();
}
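// Write a single empty-payload event into a live file session.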
static RESULT
test_session_write_event (void)
{
RESULT result = NULL;
uint32_t test_location = 0;
EventPipeProvider *provider = NULL;
EventPipeEvent *ep_event = NULL;
EventPipeSessionID session_id = 0;
EventPipeProviderConfiguration provider_config;
EventPipeProviderConfiguration *current_provider_config = NULL;
bool write_result = false;
current_provider_config = ep_provider_config_init (&provider_config, TEST_PROVIDER_NAME, 1, EP_EVENT_LEVEL_LOGALWAYS, "");
ep_raise_error_if_nok (current_provider_config != NULL);
test_location = 1;
provider = ep_create_provider (TEST_PROVIDER_NAME, NULL, NULL, NULL);
ep_raise_error_if_nok (provider != NULL);
test_location = 2;
ep_event = ep_provider_add_event (provider, 1, 1, 1, EP_EVENT_LEVEL_LOGALWAYS, false, NULL, 0);
ep_raise_error_if_nok (ep_event != NULL);
test_location = 3;
session_id = ep_enable (TEST_FILE, 1, current_provider_config, 1, EP_SESSION_TYPE_FILE, EP_SERIALIZATION_FORMAT_NETTRACE_V4, false, NULL, NULL, NULL);
ep_raise_error_if_nok (session_id != 0);
test_location = 4;
ep_start_streaming (session_id);
EventPipeEventPayload payload;
ep_event_payload_init (&payload, NULL, 0);
write_result = ep_session_write_event ((EventPipeSession *)session_id, ep_rt_thread_get_handle (), ep_event, &payload, NULL, NULL, NULL, NULL);
ep_event_payload_fini (&payload);
ep_raise_error_if_nok (write_result == true);
ep_on_exit:
ep_disable (session_id);
ep_delete_provider (provider);
ep_provider_config_fini (current_provider_config);
return result;
ep_on_error:
if (!result)
result = FAILED ("Failed at test location=%i", test_location);
ep_exit_error_handler ();
}
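// Write an event, then force an unbuffered sequence point on the session.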
static RESULT
test_session_write_event_seq_point (void)
{
RESULT result = NULL;
uint32_t test_location = 0;
EventPipeProvider *provider = NULL;
EventPipeEvent *ep_event = NULL;
EventPipeSessionID session_id = 0;
EventPipeProviderConfiguration provider_config;
EventPipeProviderConfiguration *current_provider_config = NULL;
bool write_result = false;
current_provider_config = ep_provider_config_init (&provider_config, TEST_PROVIDER_NAME, 1, EP_EVENT_LEVEL_LOGALWAYS, "");
ep_raise_error_if_nok (current_provider_config != NULL);
test_location = 1;
provider = ep_create_provider (TEST_PROVIDER_NAME, NULL, NULL, NULL);
ep_raise_error_if_nok (provider != NULL);
test_location = 2;
ep_event = ep_provider_add_event (provider, 1, 1, 1, EP_EVENT_LEVEL_LOGALWAYS, false, NULL, 0);
ep_raise_error_if_nok (ep_event != NULL);
test_location = 3;
session_id = ep_enable (TEST_FILE, 1, current_provider_config, 1, EP_SESSION_TYPE_FILE, EP_SERIALIZATION_FORMAT_NETTRACE_V4, false, NULL, NULL, NULL);
ep_raise_error_if_nok (session_id != 0);
test_location = 4;
ep_start_streaming (session_id);
EventPipeEventPayload payload;
ep_event_payload_init (&payload, NULL, 0);
write_result = ep_session_write_event ((EventPipeSession *)session_id, ep_rt_thread_get_handle (), ep_event, &payload, NULL, NULL, NULL, NULL);
ep_event_payload_fini (&payload);
ep_raise_error_if_nok (write_result == true);
test_location = 5;
ep_session_write_sequence_point_unbuffered ((EventPipeSession *)session_id);
ep_on_exit:
ep_disable (session_id);
ep_delete_provider (provider);
ep_provider_config_fini (current_provider_config);
return result;
ep_on_error:
if (!result)
result = FAILED ("Failed at test location=%i", test_location);
ep_exit_error_handler ();
}
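// After a single write, exactly one event instance should be readable from the session queue.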
static RESULT
test_session_write_wait_get_next_event (void)
{
RESULT result = NULL;
uint32_t test_location = 0;
EventPipeProvider *provider = NULL;
EventPipeEvent *ep_event = NULL;
EventPipeSessionID session_id = 0;
EventPipeProviderConfiguration provider_config;
EventPipeProviderConfiguration *current_provider_config = NULL;
bool write_result = false;
current_provider_config = ep_provider_config_init (&provider_config, TEST_PROVIDER_NAME, 1, EP_EVENT_LEVEL_LOGALWAYS, "");
ep_raise_error_if_nok (current_provider_config != NULL);
test_location = 1;
provider = ep_create_provider (TEST_PROVIDER_NAME, NULL, NULL, NULL);
ep_raise_error_if_nok (provider != NULL);
test_location = 2;
ep_event = ep_provider_add_event (provider, 1, 1, 1, EP_EVENT_LEVEL_LOGALWAYS, false, NULL, 0);
ep_raise_error_if_nok (ep_event != NULL);
test_location = 3;
session_id = ep_enable (TEST_FILE, 1, current_provider_config, 1, EP_SESSION_TYPE_FILE, EP_SERIALIZATION_FORMAT_NETTRACE_V4, false, NULL, NULL, NULL);
ep_raise_error_if_nok (session_id != 0);
test_location = 4;
ep_start_streaming (session_id);
EventPipeEventPayload payload;
ep_event_payload_init (&payload, NULL, 0);
write_result = ep_session_write_event ((EventPipeSession *)session_id, ep_rt_thread_get_handle (), ep_event, &payload, NULL, NULL, NULL, NULL);
ep_event_payload_fini (&payload);
ep_raise_error_if_nok (write_result == true);
test_location = 5;
EventPipeEventInstance *event_instance = ep_session_get_next_event ((EventPipeSession *)session_id);
ep_raise_error_if_nok (event_instance != NULL);
test_location = 6;
event_instance = ep_session_get_next_event ((EventPipeSession *)session_id);
ep_raise_error_if_nok (event_instance == NULL);
ep_on_exit:
ep_disable (session_id);
ep_delete_provider (provider);
ep_provider_config_fini (current_provider_config);
return result;
ep_on_error:
if (!result)
result = FAILED ("Failed at test location=%i", test_location);
ep_exit_error_handler ();
}
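// Exercise the session wait event around a write before draining the queued event.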
static RESULT
test_session_write_get_next_event (void)
{
RESULT result = NULL;
uint32_t test_location = 0;
EventPipeProvider *provider = NULL;
EventPipeEvent *ep_event = NULL;
EventPipeSessionID session_id = 0;
EventPipeProviderConfiguration provider_config;
EventPipeProviderConfiguration *current_provider_config = NULL;
bool write_result = false;
current_provider_config = ep_provider_config_init (&provider_config, TEST_PROVIDER_NAME, 1, EP_EVENT_LEVEL_LOGALWAYS, "");
ep_raise_error_if_nok (current_provider_config != NULL);
test_location = 1;
provider = ep_create_provider (TEST_PROVIDER_NAME, NULL, NULL, NULL);
ep_raise_error_if_nok (provider != NULL);
test_location = 2;
ep_event = ep_provider_add_event (provider, 1, 1, 1, EP_EVENT_LEVEL_LOGALWAYS, false, NULL, 0);
ep_raise_error_if_nok (ep_event != NULL);
test_location = 3;
session_id = ep_enable (TEST_FILE, 1, current_provider_config, 1, EP_SESSION_TYPE_FILE, EP_SERIALIZATION_FORMAT_NETTRACE_V4, false, NULL, NULL, NULL);
ep_raise_error_if_nok (session_id != 0);
test_location = 4;
ep_start_streaming (session_id);
// Starts as signaled.
// TODO: Is this expected behavior, or just a way to notify the observer that we are up and running?
uint32_t test = ep_rt_wait_event_wait ((ep_rt_wait_event_handle_t *)ep_session_get_wait_event ((EventPipeSession *)session_id), 0, false);
ep_raise_error_if_nok (test == 0);
test_location = 5;
EventPipeEventPayload payload;
ep_event_payload_init (&payload, NULL, 0);
write_result = ep_session_write_event ((EventPipeSession *)session_id, ep_rt_thread_get_handle (), ep_event, &payload, NULL, NULL, NULL, NULL);
ep_event_payload_fini (&payload);
ep_raise_error_if_nok (write_result == true);
test_location = 6;
// TODO: Is this really the correct behavior? The first write signals the event, meaning that the buffer will be
// converted to read-only with just one event in it.
test = ep_rt_wait_event_wait ((ep_rt_wait_event_handle_t *)ep_session_get_wait_event ((EventPipeSession *)session_id), 0, false);
ep_raise_error_if_nok (test == 0);
test_location = 7;
EventPipeEventInstance *event_instance = ep_session_get_next_event ((EventPipeSession *)session_id);
ep_raise_error_if_nok (event_instance != NULL);
test_location = 8;
event_instance = ep_session_get_next_event ((EventPipeSession *)session_id);
ep_raise_error_if_nok (event_instance == NULL);
ep_on_exit:
ep_disable (session_id);
ep_delete_provider (provider);
ep_provider_config_fini (current_provider_config);
return result;
ep_on_error:
if (!result)
result = FAILED ("Failed at test location=%i", test_location);
ep_exit_error_handler ();
}
static RESULT
test_session_write_suspend_event (void)
{
RESULT result = NULL;
uint32_t test_location = 0;
EventPipeProvider *provider = NULL;
EventPipeEvent *ep_event = NULL;
EventPipeSessionID session_id = 0;
EventPipeProviderConfiguration provider_config;
EventPipeProviderConfiguration *current_provider_config = NULL;
bool write_result = false;
current_provider_config = ep_provider_config_init (&provider_config, TEST_PROVIDER_NAME, 1, EP_EVENT_LEVEL_LOGALWAYS, "");
ep_raise_error_if_nok (current_provider_config != NULL);
test_location = 1;
provider = ep_create_provider (TEST_PROVIDER_NAME, NULL, NULL, NULL);
ep_raise_error_if_nok (provider != NULL);
test_location = 2;
ep_event = ep_provider_add_event (provider, 1, 1, 1, EP_EVENT_LEVEL_LOGALWAYS, false, NULL, 0);
ep_raise_error_if_nok (ep_event != NULL);
test_location = 3;
session_id = ep_enable (TEST_FILE, 1, current_provider_config, 1, EP_SESSION_TYPE_FILE, EP_SERIALIZATION_FORMAT_NETTRACE_V4, false, NULL, NULL, NULL);
ep_raise_error_if_nok (session_id != 0);
test_location = 4;
ep_start_streaming (session_id);
EventPipeEventPayload payload;
ep_event_payload_init (&payload, NULL, 0);
write_result = ep_session_write_event ((EventPipeSession *)session_id, ep_rt_thread_get_handle (), ep_event, &payload, NULL, NULL, NULL, NULL);
ep_event_payload_fini (&payload);
ep_raise_error_if_nok (write_result == true);
test_location = 5;
// ep_session_suspend_write_event happens as part of disabling the session.
ep_on_exit:
ep_disable (session_id);
ep_delete_provider (provider);
ep_provider_config_fini (current_provider_config);
return result;
ep_on_error:
if (!result)
result = FAILED ("Failed at test location=%i", test_location);
ep_exit_error_handler ();
}
// TODO: Add test setting rundown and write events.
// TODO: Suspend write and write events.
static RESULT
test_write_event (void)
{
RESULT result = NULL;
uint32_t test_location = 0;
EventPipeProvider *provider = NULL;
EventPipeEvent *ep_event = NULL;
EventPipeSessionID session_id = 0;
EventPipeProviderConfiguration provider_config;
EventPipeProviderConfiguration *current_provider_config = NULL;
current_provider_config = ep_provider_config_init (&provider_config, TEST_PROVIDER_NAME, 1, EP_EVENT_LEVEL_LOGALWAYS, "");
ep_raise_error_if_nok (current_provider_config != NULL);
test_location = 1;
provider = ep_create_provider (TEST_PROVIDER_NAME, NULL, NULL, NULL);
ep_raise_error_if_nok (provider != NULL);
test_location = 2;
ep_event = ep_provider_add_event (provider, 1, 1, 1, EP_EVENT_LEVEL_LOGALWAYS, false, NULL, 0);
ep_raise_error_if_nok (ep_event != NULL);
test_location = 3;
session_id = ep_enable (TEST_FILE, 1, current_provider_config, 1, EP_SESSION_TYPE_FILE, EP_SERIALIZATION_FORMAT_NETTRACE_V4, false, NULL, NULL, NULL);
ep_raise_error_if_nok (session_id != 0);
test_location = 4;
ep_start_streaming (session_id);
EventData data[1];
ep_event_data_init (&data[0], 0, 0, 0);
ep_write_event_2 (ep_event, data, ARRAY_SIZE (data), NULL, NULL);
ep_event_data_fini (data);
ep_on_exit:
ep_disable (session_id);
ep_delete_provider (provider);
ep_provider_config_fini (current_provider_config);
return result;
ep_on_error:
if (!result)
result = FAILED ("Failed at test location=%i", test_location);
ep_exit_error_handler ();
}
static RESULT
test_write_get_next_event (void)
{
RESULT result = NULL;
uint32_t test_location = 0;
EventPipeProvider *provider = NULL;
EventPipeEvent *ep_event = NULL;
EventPipeSessionID session_id = 0;
EventPipeProviderConfiguration provider_config;
EventPipeProviderConfiguration *current_provider_config = NULL;
EventPipeEventInstance *event_instance = NULL;
current_provider_config = ep_provider_config_init (&provider_config, TEST_PROVIDER_NAME, 1, EP_EVENT_LEVEL_LOGALWAYS, "");
ep_raise_error_if_nok (current_provider_config != NULL);
test_location = 1;
provider = ep_create_provider (TEST_PROVIDER_NAME, NULL, NULL, NULL);
ep_raise_error_if_nok (provider != NULL);
test_location = 2;
ep_event = ep_provider_add_event (provider, 1, 1, 1, EP_EVENT_LEVEL_LOGALWAYS, false, NULL, 0);
ep_raise_error_if_nok (ep_event != NULL);
test_location = 3;
session_id = ep_enable (TEST_FILE, 1, current_provider_config, 1, EP_SESSION_TYPE_FILE, EP_SERIALIZATION_FORMAT_NETTRACE_V4, false, NULL, NULL, NULL);
ep_raise_error_if_nok (session_id != 0);
test_location = 4;
ep_start_streaming (session_id);
EventData data[1];
ep_event_data_init (&data[0], 0, 0, 0);
ep_write_event_2 (ep_event, data, ARRAY_SIZE (data), NULL, NULL);
ep_event_data_fini (data);
event_instance = ep_get_next_event (session_id);
ep_raise_error_if_nok (event_instance != NULL);
test_location = 5;
event_instance = ep_get_next_event (session_id);
ep_raise_error_if_nok (event_instance == NULL);
ep_on_exit:
ep_disable (session_id);
ep_delete_provider (provider);
ep_provider_config_fini (current_provider_config);
return result;
ep_on_error:
if (!result)
result = FAILED ("Failed at test location=%i", test_location);
ep_exit_error_handler ();
}
static RESULT
test_write_wait_get_next_event (void)
{
RESULT result = NULL;
uint32_t test_location = 0;
EventPipeProvider *provider = NULL;
EventPipeEvent *ep_event = NULL;
EventPipeSessionID session_id = 0;
EventPipeSession *session = NULL;
EventPipeProviderConfiguration provider_config;
EventPipeProviderConfiguration *current_provider_config = NULL;
EventPipeEventInstance *event_instance = NULL;
current_provider_config = ep_provider_config_init (&provider_config, TEST_PROVIDER_NAME, 1, EP_EVENT_LEVEL_LOGALWAYS, "");
ep_raise_error_if_nok (current_provider_config != NULL);
test_location = 1;
provider = ep_create_provider (TEST_PROVIDER_NAME, NULL, NULL, NULL);
ep_raise_error_if_nok (provider != NULL);
test_location = 2;
ep_event = ep_provider_add_event (provider, 1, 1, 1, EP_EVENT_LEVEL_LOGALWAYS, false, NULL, 0);
ep_raise_error_if_nok (ep_event != NULL);
test_location = 3;
session_id = ep_enable (TEST_FILE, 1, current_provider_config, 1, EP_SESSION_TYPE_FILE, EP_SERIALIZATION_FORMAT_NETTRACE_V4, false, NULL, NULL, NULL);
ep_raise_error_if_nok (session_id != 0);
session = ep_get_session (session_id);
ep_raise_error_if_nok (session != NULL);
test_location = 4;
ep_start_streaming (session_id);
// Starts as signaled.
// TODO: Is this expected behavior, or just a way to notify the observer that we are up and running?
uint32_t test = ep_rt_wait_event_wait (ep_session_get_wait_event (session), 0, false);
ep_raise_error_if_nok (test == 0);
test_location = 5;
test = ep_rt_wait_event_wait (ep_session_get_wait_event (session), 0, false);
ep_raise_error_if_nok (test != 0);
test_location = 6;
EventData data[1];
ep_event_data_init (&data[0], 0, 0, 0);
for (int i = 0; i < 100; i++)
ep_write_event_2 (ep_event, data, ARRAY_SIZE (data), NULL, NULL);
ep_event_data_fini (data);
// Should be signaled, since buffers should have been made read-only by now.
test = ep_rt_wait_event_wait (ep_session_get_wait_event (session), 0, false);
ep_raise_error_if_nok (test == 0);
test_location = 7;
event_instance = ep_get_next_event (session_id);
ep_raise_error_if_nok (event_instance != NULL);
// Drain all events.
while (ep_get_next_event (session_id));
ep_on_exit:
ep_disable (session_id);
ep_delete_provider (provider);
ep_provider_config_fini (current_provider_config);
return result;
ep_on_error:
if (!result)
result = FAILED ("Failed at test location=%i", test_location);
ep_exit_error_handler ();
}
static RESULT
test_write_event_perf (void)
{
RESULT result = NULL;
uint32_t test_location = 0;
EventPipeProvider *provider = NULL;
EventPipeEvent *ep_event = NULL;
EventPipeSessionID session_id = 0;
EventPipeProviderConfiguration provider_config;
EventPipeProviderConfiguration *current_provider_config = NULL;
int64_t accumulated_write_time_ticks = 0;
uint32_t events_written = 0;
current_provider_config = ep_provider_config_init (&provider_config, TEST_PROVIDER_NAME, 1, EP_EVENT_LEVEL_LOGALWAYS, "");
ep_raise_error_if_nok (current_provider_config != NULL);
test_location = 1;
provider = ep_create_provider (TEST_PROVIDER_NAME, NULL, NULL, NULL);
ep_raise_error_if_nok (provider != NULL);
test_location = 2;
ep_event = ep_provider_add_event (provider, 1, 1, 1, EP_EVENT_LEVEL_LOGALWAYS, false, NULL, 0);
ep_raise_error_if_nok (ep_event != NULL);
test_location = 3;
session_id = ep_enable (TEST_FILE, 1, current_provider_config, 1, EP_SESSION_TYPE_FILE, EP_SERIALIZATION_FORMAT_NETTRACE_V4, false, NULL, NULL, NULL);
ep_raise_error_if_nok (session_id != 0);
test_location = 4;
ep_start_streaming (session_id);
EventData data[1];
ep_event_data_init (&data[0], 0, 0, 0);
// Write in chunks of 1000 events, all should fit into buffer manager.
for (events_written = 0; events_written < 10 * 1000 * 1000; events_written += 1000) {
int64_t start = ep_perf_timestamp_get ();
for (uint32_t i = 0; i < 1000; i++)
ep_write_event_2 (ep_event, data, ARRAY_SIZE (data), NULL, NULL);
int64_t stop = ep_perf_timestamp_get ();
accumulated_write_time_ticks += stop - start;
// Drain events to avoid buffer-manager OOM.
while (ep_get_next_event (session_id));
}
ep_event_data_fini (data);
float accumulated_write_time_sec = ((float)accumulated_write_time_ticks / (float)ep_perf_frequency_query ());
float events_written_per_sec = (float)events_written / (accumulated_write_time_sec ? accumulated_write_time_sec : 1.0);
// Measured number of events/second for one thread.
// TODO: Setup acceptable pass/failure metrics.
printf ("\n\tPerformance stats:\n");
printf ("\t\tTotal number of events: %i\n", events_written);
printf ("\t\tTotal time in sec: %.2f\n\t\tTotal number of events written per sec/core: %.2f\n\t", accumulted_write_time_sec, events_written_per_sec);
ep_on_exit:
ep_disable (session_id);
ep_delete_provider (provider);
ep_provider_config_fini (current_provider_config);
return result;
ep_on_error:
if (!result)
result = FAILED ("Failed at test location=%i", test_location);
ep_exit_error_handler ();
}
// TODO: Add multithreaded test writing into private/shared sessions.
// TODO: Add consumer thread test, flushing file buffers/session, acting on signal.
static RESULT
test_eventpipe_mem_checkpoint (void)
{
RESULT result = NULL;
#ifdef _CRTDBG_MAP_ALLOC
// Need to emulate a thread exit to make sure TLS gets cleaned up for the current thread
// or we will get memory leaks reported.
extern void ep_rt_mono_thread_exited (void);
ep_rt_mono_thread_exited ();
_CrtMemCheckpoint (&eventpipe_memory_end_snapshot);
if (_CrtMemDifference (&eventpipe_memory_diff_snapshot, &eventpipe_memory_start_snapshot, &eventpipe_memory_end_snapshot)) {
_CrtMemDumpStatistics (&eventpipe_memory_diff_snapshot);
result = FAILED ("Memory leak detected!");
}
_CrtMemCheckpoint (&eventpipe_memory_start_snapshot);
#endif
return result;
}
static RESULT
test_eventpipe_reset_mem_checkpoint (void)
{
#ifdef _CRTDBG_MAP_ALLOC
_CrtMemCheckpoint (&eventpipe_memory_start_snapshot);
#endif
return NULL;
}
static RESULT
test_eventpipe_teardown (void)
{
uint32_t test_location = 0;
#ifdef _CRTDBG_MAP_ALLOC
_CrtMemCheckpoint (&eventpipe_memory_end_snapshot);
if (_CrtMemDifference (&eventpipe_memory_diff_snapshot, &eventpipe_memory_start_snapshot, &eventpipe_memory_end_snapshot)) {
_CrtMemDumpStatistics (&eventpipe_memory_diff_snapshot);
return FAILED ("Memory leak detected!");
}
#endif
test_location = 1;
EP_LOCK_ENTER (section1)
ep_sample_profiler_shutdown ();
EP_LOCK_EXIT (section1)
return NULL;
ep_on_error:
return FAILED ("Failed at test location=%i", test_location);
}
static Test ep_tests [] = {
{"test_eventpipe_setup", test_eventpipe_setup},
{"test_create_delete_provider", test_create_delete_provider},
{"test_stress_create_delete_provider", test_stress_create_delete_provider},
{"test_get_provider", test_get_provider},
{"test_create_same_provider_twice", test_create_same_provider_twice},
{"test_enable_disable", test_enable_disable},
{"test_enable_disable_provider_config", test_enable_disable_provider_config},
{"test_create_delete_provider_with_callback", test_create_delete_provider_with_callback},
{"test_build_event_metadata", test_build_event_metadata},
{"test_session_start_streaming", test_session_start_streaming},
{"test_session_write_event", test_session_write_event_seq_point},
{"test_session_write_event_seq_point", test_session_write_event_seq_point},
{"test_session_write_get_next_event", test_session_write_get_next_event},
{"test_session_write_wait_get_next_event", test_session_write_wait_get_next_event},
{"test_session_write_suspend_event", test_session_write_suspend_event},
{"test_write_event", test_write_event},
{"test_write_get_next_event", test_write_get_next_event},
{"test_write_wait_get_next_event", test_write_wait_get_next_event},
#ifdef TEST_PERF
{"test_write_event_perf", test_write_event_perf},
#endif
{"test_eventpipe_mem_checkpoint", test_eventpipe_mem_checkpoint},
{"test_enable_disable_default_provider_config", test_enable_disable_default_provider_config},
{"test_enable_disable_multiple_default_provider_config", test_enable_disable_multiple_default_provider_config},
{"test_enable_disable_provider_parse_default_config", test_enable_disable_provider_parse_default_config},
{"test_eventpipe_reset_mem_checkpoint", test_eventpipe_reset_mem_checkpoint},
{"test_eventpipe_teardown", test_eventpipe_teardown},
{NULL, NULL}
};
DEFINE_TEST_GROUP_INIT(ep_tests_init, ep_tests)
| #if defined(_MSC_VER) && defined(_DEBUG)
#include "ep-tests-debug.h"
#endif
#include <eventpipe/ep.h>
#include <eventpipe/ep-config.h>
#include <eventpipe/ep-event.h>
#include <eventpipe/ep-session.h>
#include <eventpipe/ep-event-instance.h>
#include <eventpipe/ep-event-payload.h>
#include <eventpipe/ep-sample-profiler.h>
#include <eglib/test/test.h>
#define TEST_PROVIDER_NAME "MyTestProvider"
#define TEST_FILE "./ep_test_create_file.txt"
#define TEST_FILE_2 "./ep_test_create_file_2.txt"
//#define TEST_PERF
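// Uncomment TEST_PERF above to include test_write_event_perf in the test table below.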
#ifdef _CRTDBG_MAP_ALLOC
static _CrtMemState eventpipe_memory_start_snapshot;
static _CrtMemState eventpipe_memory_end_snapshot;
static _CrtMemState eventpipe_memory_diff_snapshot;
#endif
static RESULT
test_eventpipe_setup (void)
{
uint32_t test_location = 0;
// Lazily initialized; force it now so it does not show up as a leak.
ep_rt_os_command_line_get ();
ep_rt_managed_command_line_get ();
test_location = 1;
// Init the profiler now so it does not show up as a leak.
// Set a long sampling rate to reduce impact.
EP_LOCK_ENTER (section1)
ep_sample_profiler_init (NULL);
ep_sample_profiler_set_sampling_rate (1000 * 1000 * 100);
EP_LOCK_EXIT (section1)
test_location = 2;
#ifdef _CRTDBG_MAP_ALLOC
_CrtMemCheckpoint (&eventpipe_memory_start_snapshot);
#endif
ep_thread_get_or_create ();
return NULL;
ep_on_error:
return FAILED ("Failed at test location=%i", test_location);
}
static RESULT
test_create_delete_provider (void)
{
RESULT result = NULL;
uint32_t test_location = 0;
EventPipeProvider *test_provider = ep_create_provider (TEST_PROVIDER_NAME, NULL, NULL, NULL);
if (!test_provider) {
result = FAILED ("Failed to create provider %s, ep_create_provider returned NULL", TEST_PROVIDER_NAME);
ep_raise_error ();
}
ep_on_exit:
ep_delete_provider (test_provider);
return result;
ep_on_error:
if (!result)
result = FAILED ("Failed at test location=%i", test_location);
ep_exit_error_handler ();
}
static RESULT
test_stress_create_delete_provider (void)
{
RESULT result = NULL;
uint32_t test_location = 0;
EventPipeProvider *test_providers [1000] = {0};
for (uint32_t i = 0; i < 1000; ++i) {
char *provider_name = g_strdup_printf (TEST_PROVIDER_NAME "_%i", i);
test_providers [i] = ep_create_provider (provider_name, NULL, NULL, NULL);
g_free (provider_name);
if (!test_providers [i]) {
result = FAILED ("Failed to create provider %s_%i, ep_create_provider returned NULL", TEST_PROVIDER_NAME, i);
ep_raise_error ();
}
}
ep_on_exit:
for (uint32_t i = 0; i < 1000; ++i) {
if (test_providers [i])
ep_delete_provider (test_providers [i]);
}
return result;
ep_on_error:
if (!result)
result = FAILED ("Failed at test location=%i", test_location);
ep_exit_error_handler ();
}
static RESULT
test_get_provider (void)
{
RESULT result = NULL;
uint32_t test_location = 0;
EventPipeProvider *test_provider = ep_create_provider (TEST_PROVIDER_NAME, NULL, NULL, NULL);
if (!test_provider) {
result = FAILED ("Failed to create provider %s, ep_create_provider returned NULL", TEST_PROVIDER_NAME);
ep_raise_error ();
}
test_location = 1;
EventPipeProvider *returned_test_provider = ep_get_provider (TEST_PROVIDER_NAME);
if (!returned_test_provider) {
result = FAILED ("Failed to get provider %s, ep_get_provider returned NULL", TEST_PROVIDER_NAME);
ep_raise_error ();
}
test_location = 2;
ep_delete_provider (test_provider);
test_provider = NULL;
returned_test_provider = ep_get_provider (TEST_PROVIDER_NAME);
if (returned_test_provider) {
result = FAILED ("Provider %s, still returned from ep_get_provider after deleted", TEST_PROVIDER_NAME);
ep_raise_error ();
}
ep_on_exit:
ep_delete_provider (test_provider);
return result;
ep_on_error:
if (!result)
result = FAILED ("Failed at test location=%i", test_location);
ep_exit_error_handler ();
}
static RESULT
test_create_same_provider_twice (void)
{
RESULT result = NULL;
uint32_t test_location = 0;
EventPipeProvider *test_provider = NULL;
EventPipeProvider *test_provider2 = NULL;
EventPipeProvider *returned_test_provider = NULL;
test_provider = ep_create_provider (TEST_PROVIDER_NAME, NULL, NULL, NULL);
if (!test_provider) {
result = FAILED ("Failed to create provider %s, ep_create_provider returned NULL", TEST_PROVIDER_NAME);
ep_raise_error ();
}
test_location = 1;
returned_test_provider = ep_get_provider (TEST_PROVIDER_NAME);
if (!returned_test_provider) {
result = FAILED ("Failed to get provider %s, ep_get_provider returned NULL", TEST_PROVIDER_NAME);
ep_raise_error ();
}
test_location = 2;
test_provider2 = ep_create_provider (TEST_PROVIDER_NAME, NULL, NULL, NULL);
if (!test_provider2) {
result = FAILED ("Creating to create an already existing provider %s", TEST_PROVIDER_NAME);
ep_raise_error ();
}
test_location = 3;
returned_test_provider = ep_get_provider (TEST_PROVIDER_NAME);
if (!returned_test_provider) {
result = FAILED ("Failed to get provider %s, ep_get_provider returned NULL", TEST_PROVIDER_NAME);
ep_raise_error ();
}
test_location = 4;
if (returned_test_provider != test_provider) {
result = FAILED ("Failed to get provider %s, ep_get_provider returned unexpected provider instance", TEST_PROVIDER_NAME);
ep_raise_error ();
}
ep_on_exit:
ep_delete_provider (test_provider2);
ep_delete_provider (test_provider);
return result;
ep_on_error:
if (!result)
result = FAILED ("Failed at test location=%i", test_location);
ep_exit_error_handler ();
}
static RESULT
test_enable_disable (void)
{
RESULT result = NULL;
uint32_t test_location = 0;
EventPipeSessionID session_id = 0;
EventPipeProviderConfiguration provider_config;
EventPipeProviderConfiguration *current_provider_config = ep_provider_config_init (&provider_config, TEST_PROVIDER_NAME, 1, EP_EVENT_LEVEL_LOGALWAYS, "");
ep_raise_error_if_nok (current_provider_config != NULL);
test_location = 1;
session_id = ep_enable (
TEST_FILE,
1,
current_provider_config,
1,
EP_SESSION_TYPE_FILE,
EP_SERIALIZATION_FORMAT_NETTRACE_V4,
false,
NULL,
NULL,
NULL);
if (!session_id) {
result = FAILED ("Failed to enable session");
ep_raise_error ();
}
test_location = 2;
ep_start_streaming (session_id);
if (!ep_enabled ()) {
result = FAILED ("event pipe disabled");
ep_raise_error ();
}
ep_on_exit:
ep_disable (session_id);
ep_provider_config_fini (current_provider_config);
return result;
ep_on_error:
if (!result)
result = FAILED ("Failed at test location=%i", test_location);
ep_exit_error_handler ();
}
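// Checks that a session carries the three default providers (DotNETRuntime,
// DotNETRuntimePrivate and SampleProfiler) with the keywords and levels that
// ep_enable_2 applies when no provider configuration is supplied.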
static RESULT
validate_default_provider_config (EventPipeSession *session)
{
RESULT result = NULL;
uint32_t test_location = 0;
EventPipeSessionProviderList *provider_list = ep_session_get_providers (session);
EventPipeSessionProvider *session_provider = ep_rt_session_provider_list_find_by_name (ep_session_provider_list_get_providers_cref (provider_list), "Microsoft-Windows-DotNETRuntime");
ep_raise_error_if_nok (session_provider != NULL);
test_location = 1;
ep_raise_error_if_nok (!ep_rt_utf8_string_compare (ep_session_provider_get_provider_name (session_provider), "Microsoft-Windows-DotNETRuntime"));
test_location = 2;
ep_raise_error_if_nok (ep_session_provider_get_keywords (session_provider) == 0x4c14fccbd);
test_location = 3;
ep_raise_error_if_nok (ep_session_provider_get_logging_level (session_provider) == EP_EVENT_LEVEL_VERBOSE);
test_location = 4;
ep_raise_error_if_nok (ep_session_provider_get_filter_data (session_provider) == NULL);
test_location = 5;
session_provider = ep_rt_session_provider_list_find_by_name (ep_session_provider_list_get_providers_cref (provider_list), "Microsoft-Windows-DotNETRuntimePrivate");
ep_raise_error_if_nok (session_provider != NULL);
test_location = 6;
ep_raise_error_if_nok (!ep_rt_utf8_string_compare (ep_session_provider_get_provider_name (session_provider), "Microsoft-Windows-DotNETRuntimePrivate"));
test_location = 7;
ep_raise_error_if_nok (ep_session_provider_get_keywords (session_provider) == 0x4002000b);
test_location = 8;
ep_raise_error_if_nok (ep_session_provider_get_logging_level (session_provider) == EP_EVENT_LEVEL_VERBOSE);
test_location = 9;
ep_raise_error_if_nok (ep_session_provider_get_filter_data (session_provider) == NULL);
test_location = 10;
session_provider = ep_rt_session_provider_list_find_by_name (ep_session_provider_list_get_providers_cref (provider_list), "Microsoft-DotNETCore-SampleProfiler");
ep_raise_error_if_nok (session_provider != NULL);
test_location = 11;
ep_raise_error_if_nok (!ep_rt_utf8_string_compare (ep_session_provider_get_provider_name (session_provider), "Microsoft-DotNETCore-SampleProfiler"));
test_location = 12;
ep_raise_error_if_nok (ep_session_provider_get_keywords (session_provider) == 0);
test_location = 13;
ep_raise_error_if_nok (ep_session_provider_get_logging_level (session_provider) == EP_EVENT_LEVEL_VERBOSE);
test_location = 14;
ep_raise_error_if_nok (ep_session_provider_get_filter_data (session_provider) == NULL);
test_location = 15;
ep_on_exit:
return result;
ep_on_error:
if (!result)
result = FAILED ("Failed at test location=%i", test_location);
ep_exit_error_handler ();
}
static RESULT
test_enable_disable_default_provider_config (void)
{
RESULT result = NULL;
uint32_t test_location = 0;
EventPipeSessionID session_id = 0;
session_id = ep_enable_2 (
TEST_FILE,
1,
NULL,
EP_SESSION_TYPE_FILE,
EP_SERIALIZATION_FORMAT_NETTRACE_V4,
false,
NULL,
NULL,
NULL);
if (!session_id) {
result = FAILED ("Failed to enable session");
ep_raise_error ();
}
test_location = 2;
result = validate_default_provider_config ((EventPipeSession *)session_id);
ep_raise_error_if_nok (result == NULL);
test_location = 3;
ep_start_streaming (session_id);
if (!ep_enabled ()) {
result = FAILED ("event pipe disabled");
ep_raise_error ();
}
ep_on_exit:
ep_disable (session_id);
return result;
ep_on_error:
if (!result)
result = FAILED ("Failed at test location=%i", test_location);
ep_exit_error_handler ();
}
static RESULT
test_enable_disable_multiple_default_provider_config (void)
{
RESULT result = NULL;
uint32_t test_location = 0;
EventPipeSessionID session_id_1 = 0;
EventPipeSessionID session_id_2 = 0;
session_id_1 = ep_enable_2 (
TEST_FILE,
1,
NULL,
EP_SESSION_TYPE_FILE,
EP_SERIALIZATION_FORMAT_NETTRACE_V4,
false,
NULL,
NULL,
NULL);
if (!session_id_1) {
result = FAILED ("Failed to enable session");
ep_raise_error ();
}
test_location = 2;
result = validate_default_provider_config ((EventPipeSession *)session_id_1);
ep_raise_error_if_nok (result == NULL);
test_location = 3;
ep_start_streaming (session_id_1);
if (!ep_enabled ()) {
result = FAILED ("event pipe disabled");
ep_raise_error ();
}
test_location = 4;
session_id_2 = ep_enable_2 (
TEST_FILE_2,
1,
NULL,
EP_SESSION_TYPE_FILE,
EP_SERIALIZATION_FORMAT_NETTRACE_V4,
false,
NULL,
NULL,
NULL);
if (!session_id_2) {
result = FAILED ("Failed to enable session");
ep_raise_error ();
}
test_location = 5;
result = validate_default_provider_config ((EventPipeSession *)session_id_2);
ep_raise_error_if_nok (result == NULL);
test_location = 6;
ep_start_streaming (session_id_2);
if (!ep_enabled ()) {
result = FAILED ("event pipe disabled");
ep_raise_error ();
}
ep_on_exit:
ep_disable (session_id_1);
ep_disable (session_id_2);
return result;
ep_on_error:
if (!result)
result = FAILED ("Failed at test location=%i", test_location);
ep_exit_error_handler ();
}
static RESULT
test_enable_disable_provider_config (void)
{
RESULT result = NULL;
uint32_t test_location = 0;
const ep_char8_t *provider_config = TEST_PROVIDER_NAME ":1:0:";
EventPipeSessionID session_id = 0;
session_id = ep_enable_2 (
TEST_FILE,
1,
provider_config,
EP_SESSION_TYPE_FILE,
EP_SERIALIZATION_FORMAT_NETTRACE_V4,
false,
NULL,
NULL,
NULL);
if (!session_id) {
result = FAILED ("Failed to enable session");
ep_raise_error ();
}
test_location = 2;
EventPipeSessionProviderList *provider_list = ep_session_get_providers ((EventPipeSession *)session_id);
EventPipeSessionProvider *session_provider = ep_rt_session_provider_list_find_by_name (ep_session_provider_list_get_providers_cref (provider_list), TEST_PROVIDER_NAME);
ep_raise_error_if_nok (session_provider != NULL);
test_location = 3;
ep_raise_error_if_nok (!ep_rt_utf8_string_compare (ep_session_provider_get_provider_name (session_provider), TEST_PROVIDER_NAME));
test_location = 4;
ep_raise_error_if_nok (ep_session_provider_get_keywords (session_provider) == 1);
test_location = 5;
ep_raise_error_if_nok (ep_session_provider_get_logging_level (session_provider) == EP_EVENT_LEVEL_LOGALWAYS);
test_location = 6;
ep_raise_error_if_nok (ep_session_provider_get_filter_data (session_provider) == NULL);
test_location = 7;
ep_start_streaming (session_id);
if (!ep_enabled ()) {
result = FAILED ("event pipe disabled");
ep_raise_error ();
}
ep_on_exit:
ep_disable (session_id);
return result;
ep_on_error:
if (!result)
result = FAILED ("Failed at test location=%i", test_location);
ep_exit_error_handler ();
}
static RESULT
test_enable_disable_provider_parse_default_config (void)
{
RESULT result = NULL;
uint32_t test_location = 0;
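// Provider configuration strings use the format "name:keywords:level:filter_data",
// with multiple providers separated by commas.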
const ep_char8_t *provider_config =
"Microsoft-Windows-DotNETRuntime"
":0x4c14fccbd"
":5"
":"
","
"Microsoft-Windows-DotNETRuntimePrivate"
":0x4002000b"
":5"
":"
","
"Microsoft-DotNETCore-SampleProfiler"
":0"
":5"
":";
EventPipeSessionID session_id = 0;
session_id = ep_enable_2 (
TEST_FILE,
1,
provider_config,
EP_SESSION_TYPE_FILE,
EP_SERIALIZATION_FORMAT_NETTRACE_V4,
false,
NULL,
NULL,
NULL);
if (!session_id) {
result = FAILED ("Failed to enable session");
ep_raise_error ();
}
test_location = 2;
result = validate_default_provider_config ((EventPipeSession *)session_id);
ep_raise_error_if_nok (result == NULL);
test_location = 3;
ep_start_streaming (session_id);
if (!ep_enabled ()) {
result = FAILED ("event pipe disabled");
ep_raise_error ();
}
ep_on_exit:
ep_disable (session_id);
return result;
ep_on_error:
if (!result)
result = FAILED ("Failed at test location=%i", test_location);
ep_exit_error_handler ();
}
static bool provider_callback_data;
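// Provider callback used by the tests below; it sets the bool pointed to by
// callback_context so a test can observe that the callback ran.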
static
void
provider_callback (
const uint8_t *source_id,
unsigned long is_enabled,
uint8_t level,
uint64_t match_any_keywords,
uint64_t match_all_keywords,
EventFilterDescriptor *filter_data,
void *callback_context)
{
*(bool *)callback_context = true;
}
static RESULT
test_create_delete_provider_with_callback (void)
{
RESULT result = NULL;
uint32_t test_location = 0;
EventPipeSessionID session_id = 0;
EventPipeProvider *test_provider = NULL;
EventPipeProviderConfiguration provider_config;
EventPipeProviderConfiguration *current_provider_config = ep_provider_config_init (&provider_config, TEST_PROVIDER_NAME, 1, EP_EVENT_LEVEL_LOGALWAYS, "");
ep_raise_error_if_nok (current_provider_config != NULL);
test_location = 1;
session_id = ep_enable (
TEST_FILE,
1,
current_provider_config,
1,
EP_SESSION_TYPE_FILE,
EP_SERIALIZATION_FORMAT_NETTRACE_V4,
false,
NULL,
NULL,
NULL);
if (!session_id) {
result = FAILED ("Failed to enable session");
ep_raise_error ();
}
test_location = 2;
ep_start_streaming (session_id);
test_provider = ep_create_provider (TEST_PROVIDER_NAME, provider_callback, NULL, &provider_callback_data);
ep_raise_error_if_nok (test_provider != NULL);
test_location = 3;
if (!provider_callback_data) {
result = FAILED ("Provider callback not called");
ep_raise_error ();
}
ep_on_exit:
ep_delete_provider (test_provider);
ep_disable (session_id);
ep_provider_config_fini (current_provider_config);
return result;
ep_on_error:
if (!result)
result = FAILED ("Failed at test location=%i", test_location);
ep_exit_error_handler ();
}
static RESULT
test_build_event_metadata (void)
{
RESULT result = NULL;
uint32_t test_location = 0;
EventPipeProvider *provider = NULL;
EventPipeEvent *ep_event = NULL;
EventPipeEventInstance *ep_event_instance = NULL;
EventPipeEventMetadataEvent *metadata_event = NULL;
provider = ep_create_provider (TEST_PROVIDER_NAME, NULL, NULL, NULL);
ep_raise_error_if_nok (provider != NULL);
test_location = 1;
ep_event = ep_event_alloc (provider, 1, 1, 1, EP_EVENT_LEVEL_VERBOSE, false, NULL, 0);
ep_raise_error_if_nok (ep_event != NULL);
test_location = 2;
ep_event_instance = ep_event_instance_alloc (ep_event, 0, 0, NULL, 0, NULL, NULL);
ep_raise_error_if_nok (ep_event_instance != NULL);
test_location = 3;
metadata_event = ep_build_event_metadata_event (ep_event_instance, 1);
ep_raise_error_if_nok (metadata_event != NULL);
test_location = 4;
ep_on_exit:
ep_delete_provider (provider);
ep_event_free (ep_event);
ep_event_instance_free (ep_event_instance);
ep_event_metdata_event_free (metadata_event);
return result;
ep_on_error:
if (!result)
result = FAILED ("Failed at test location=%i", test_location);
ep_exit_error_handler ();
}
static RESULT
test_session_start_streaming (void)
{
RESULT result = NULL;
uint32_t test_location = 0;
EventPipeSessionID session_id = 0;
EventPipeProviderConfiguration provider_config;
EventPipeProviderConfiguration *current_provider_config = ep_provider_config_init (&provider_config, TEST_PROVIDER_NAME, 1, EP_EVENT_LEVEL_LOGALWAYS, "");
ep_raise_error_if_nok (current_provider_config != NULL);
test_location = 1;
session_id = ep_enable (
TEST_FILE,
1,
current_provider_config,
1,
EP_SESSION_TYPE_FILE,
EP_SERIALIZATION_FORMAT_NETTRACE_V4,
false,
NULL,
NULL,
NULL);
if (!session_id) {
result = FAILED ("Failed to enable session");
ep_raise_error ();
}
test_location = 2;
ep_start_streaming (session_id);
ep_on_exit:
ep_disable (session_id);
ep_provider_config_fini (current_provider_config);
return result;
ep_on_error:
if (!result)
result = FAILED ("Failed at test location=%i", test_location);
ep_exit_error_handler ();
}
static RESULT
test_session_write_event (void)
{
RESULT result = NULL;
uint32_t test_location = 0;
EventPipeProvider *provider = NULL;
EventPipeEvent *ep_event = NULL;
EventPipeSessionID session_id = 0;
EventPipeProviderConfiguration provider_config;
EventPipeProviderConfiguration *current_provider_config = NULL;
bool write_result = false;
current_provider_config = ep_provider_config_init (&provider_config, TEST_PROVIDER_NAME, 1, EP_EVENT_LEVEL_LOGALWAYS, "");
ep_raise_error_if_nok (current_provider_config != NULL);
test_location = 1;
provider = ep_create_provider (TEST_PROVIDER_NAME, NULL, NULL, NULL);
ep_raise_error_if_nok (provider != NULL);
test_location = 2;
ep_event = ep_provider_add_event (provider, 1, 1, 1, EP_EVENT_LEVEL_LOGALWAYS, false, NULL, 0);
ep_raise_error_if_nok (ep_event != NULL);
test_location = 3;
session_id = ep_enable (TEST_FILE, 1, current_provider_config, 1, EP_SESSION_TYPE_FILE, EP_SERIALIZATION_FORMAT_NETTRACE_V4, false, NULL, NULL, NULL);
ep_raise_error_if_nok (session_id != 0);
test_location = 4;
ep_start_streaming (session_id);
EventPipeEventPayload payload;
ep_event_payload_init (&payload, NULL, 0);
write_result = ep_session_write_event ((EventPipeSession *)session_id, ep_rt_thread_get_handle (), ep_event, &payload, NULL, NULL, NULL, NULL);
ep_event_payload_fini (&payload);
ep_raise_error_if_nok (write_result == true);
ep_on_exit:
ep_disable (session_id);
ep_delete_provider (provider);
ep_provider_config_fini (current_provider_config);
return result;
ep_on_error:
if (!result)
result = FAILED ("Failed at test location=%i", test_location);
ep_exit_error_handler ();
}
static RESULT
test_session_write_event_seq_point (void)
{
RESULT result = NULL;
uint32_t test_location = 0;
EventPipeProvider *provider = NULL;
EventPipeEvent *ep_event = NULL;
EventPipeSessionID session_id = 0;
EventPipeProviderConfiguration provider_config;
EventPipeProviderConfiguration *current_provider_config = NULL;
bool write_result = false;
current_provider_config = ep_provider_config_init (&provider_config, TEST_PROVIDER_NAME, 1, EP_EVENT_LEVEL_LOGALWAYS, "");
ep_raise_error_if_nok (current_provider_config != NULL);
test_location = 1;
provider = ep_create_provider (TEST_PROVIDER_NAME, NULL, NULL, NULL);
ep_raise_error_if_nok (provider != NULL);
test_location = 2;
ep_event = ep_provider_add_event (provider, 1, 1, 1, EP_EVENT_LEVEL_LOGALWAYS, false, NULL, 0);
ep_raise_error_if_nok (ep_event != NULL);
test_location = 3;
session_id = ep_enable (TEST_FILE, 1, current_provider_config, 1, EP_SESSION_TYPE_FILE, EP_SERIALIZATION_FORMAT_NETTRACE_V4, false, NULL, NULL, NULL);
ep_raise_error_if_nok (session_id != 0);
test_location = 4;
ep_start_streaming (session_id);
EventPipeEventPayload payload;
ep_event_payload_init (&payload, NULL, 0);
write_result = ep_session_write_event ((EventPipeSession *)session_id, ep_rt_thread_get_handle (), ep_event, &payload, NULL, NULL, NULL, NULL);
ep_event_payload_fini (&payload);
ep_raise_error_if_nok (write_result == true);
test_location = 5;
ep_session_write_sequence_point_unbuffered ((EventPipeSession *)session_id);
ep_on_exit:
ep_disable (session_id);
ep_delete_provider (provider);
ep_provider_config_fini (current_provider_config);
return result;
ep_on_error:
if (!result)
result = FAILED ("Failed at test location=%i", test_location);
ep_exit_error_handler ();
}
static RESULT
test_session_write_wait_get_next_event (void)
{
RESULT result = NULL;
uint32_t test_location = 0;
EventPipeProvider *provider = NULL;
EventPipeEvent *ep_event = NULL;
EventPipeSessionID session_id = 0;
EventPipeProviderConfiguration provider_config;
EventPipeProviderConfiguration *current_provider_config = NULL;
bool write_result = false;
current_provider_config = ep_provider_config_init (&provider_config, TEST_PROVIDER_NAME, 1, EP_EVENT_LEVEL_LOGALWAYS, "");
ep_raise_error_if_nok (current_provider_config != NULL);
test_location = 1;
provider = ep_create_provider (TEST_PROVIDER_NAME, NULL, NULL, NULL);
ep_raise_error_if_nok (provider != NULL);
test_location = 2;
ep_event = ep_provider_add_event (provider, 1, 1, 1, EP_EVENT_LEVEL_LOGALWAYS, false, NULL, 0);
ep_raise_error_if_nok (ep_event != NULL);
test_location = 3;
session_id = ep_enable (TEST_FILE, 1, current_provider_config, 1, EP_SESSION_TYPE_FILE, EP_SERIALIZATION_FORMAT_NETTRACE_V4, false, NULL, NULL, NULL);
ep_raise_error_if_nok (session_id != 0);
test_location = 4;
ep_start_streaming (session_id);
EventPipeEventPayload payload;
ep_event_payload_init (&payload, NULL, 0);
write_result = ep_session_write_event ((EventPipeSession *)session_id, ep_rt_thread_get_handle (), ep_event, &payload, NULL, NULL, NULL, NULL);
ep_event_payload_fini (&payload);
ep_raise_error_if_nok (write_result == true);
test_location = 5;
EventPipeEventInstance *event_instance = ep_session_get_next_event ((EventPipeSession *)session_id);
ep_raise_error_if_nok (event_instance != NULL);
test_location = 6;
event_instance = ep_session_get_next_event ((EventPipeSession *)session_id);
ep_raise_error_if_nok (event_instance == NULL);
ep_on_exit:
ep_disable (session_id);
ep_delete_provider (provider);
ep_provider_config_fini (current_provider_config);
return result;
ep_on_error:
if (!result)
result = FAILED ("Failed at test location=%i", test_location);
ep_exit_error_handler ();
}
static RESULT
test_session_write_get_next_event (void)
{
RESULT result = NULL;
uint32_t test_location = 0;
EventPipeProvider *provider = NULL;
EventPipeEvent *ep_event = NULL;
EventPipeSessionID session_id = 0;
EventPipeProviderConfiguration provider_config;
EventPipeProviderConfiguration *current_provider_config = NULL;
bool write_result = false;
current_provider_config = ep_provider_config_init (&provider_config, TEST_PROVIDER_NAME, 1, EP_EVENT_LEVEL_LOGALWAYS, "");
ep_raise_error_if_nok (current_provider_config != NULL);
test_location = 1;
provider = ep_create_provider (TEST_PROVIDER_NAME, NULL, NULL, NULL);
ep_raise_error_if_nok (provider != NULL);
test_location = 2;
ep_event = ep_provider_add_event (provider, 1, 1, 1, EP_EVENT_LEVEL_LOGALWAYS, false, NULL, 0);
ep_raise_error_if_nok (ep_event != NULL);
test_location = 3;
session_id = ep_enable (TEST_FILE, 1, current_provider_config, 1, EP_SESSION_TYPE_FILE, EP_SERIALIZATION_FORMAT_NETTRACE_V4, false, NULL, NULL, NULL);
ep_raise_error_if_nok (session_id != 0);
test_location = 4;
ep_start_streaming (session_id);
// Starts as signaled.
// TODO: Is this expected behavior, or just a way to notify the observer that we are up and running?
uint32_t test = ep_rt_wait_event_wait ((ep_rt_wait_event_handle_t *)ep_session_get_wait_event ((EventPipeSession *)session_id), 0, false);
ep_raise_error_if_nok (test == 0);
test_location = 5;
EventPipeEventPayload payload;
ep_event_payload_init (&payload, NULL, 0);
write_result = ep_session_write_event ((EventPipeSession *)session_id, ep_rt_thread_get_handle (), ep_event, &payload, NULL, NULL, NULL, NULL);
ep_event_payload_fini (&payload);
ep_raise_error_if_nok (write_result == true);
test_location = 6;
// TODO: Is this really the correct behavior? The first write signals the event, meaning that the buffer will be
// converted to read-only with just one event in it.
test = ep_rt_wait_event_wait ((ep_rt_wait_event_handle_t *)ep_session_get_wait_event ((EventPipeSession *)session_id), 0, false);
ep_raise_error_if_nok (test == 0);
test_location = 7;
EventPipeEventInstance *event_instance = ep_session_get_next_event ((EventPipeSession *)session_id);
ep_raise_error_if_nok (event_instance != NULL);
test_location = 8;
event_instance = ep_session_get_next_event ((EventPipeSession *)session_id);
ep_raise_error_if_nok (event_instance == NULL);
ep_on_exit:
ep_disable (session_id);
ep_delete_provider (provider);
ep_provider_config_fini (current_provider_config);
return result;
ep_on_error:
if (!result)
result = FAILED ("Failed at test location=%i", test_location);
ep_exit_error_handler ();
}
static RESULT
test_session_write_suspend_event (void)
{
RESULT result = NULL;
uint32_t test_location = 0;
EventPipeProvider *provider = NULL;
EventPipeEvent *ep_event = NULL;
EventPipeSessionID session_id = 0;
EventPipeProviderConfiguration provider_config;
EventPipeProviderConfiguration *current_provider_config = NULL;
bool write_result = false;
current_provider_config = ep_provider_config_init (&provider_config, TEST_PROVIDER_NAME, 1, EP_EVENT_LEVEL_LOGALWAYS, "");
ep_raise_error_if_nok (current_provider_config != NULL);
test_location = 1;
provider = ep_create_provider (TEST_PROVIDER_NAME, NULL, NULL, NULL);
ep_raise_error_if_nok (provider != NULL);
test_location = 2;
ep_event = ep_provider_add_event (provider, 1, 1, 1, EP_EVENT_LEVEL_LOGALWAYS, false, NULL, 0);
ep_raise_error_if_nok (ep_event != NULL);
test_location = 3;
session_id = ep_enable (TEST_FILE, 1, current_provider_config, 1, EP_SESSION_TYPE_FILE, EP_SERIALIZATION_FORMAT_NETTRACE_V4, false, NULL, NULL, NULL);
ep_raise_error_if_nok (session_id != 0);
test_location = 4;
ep_start_streaming (session_id);
EventPipeEventPayload payload;
ep_event_payload_init (&payload, NULL, 0);
write_result = ep_session_write_event ((EventPipeSession *)session_id, ep_rt_thread_get_handle (), ep_event, &payload, NULL, NULL, NULL, NULL);
ep_event_payload_fini (&payload);
ep_raise_error_if_nok (write_result == true);
test_location = 5;
// ep_session_suspend_write_event happens as part of disabling the session.
ep_on_exit:
ep_disable (session_id);
ep_delete_provider (provider);
ep_provider_config_fini (current_provider_config);
return result;
ep_on_error:
if (!result)
result = FAILED ("Failed at test location=%i", test_location);
ep_exit_error_handler ();
}
// TODO: Add test setting rundown and write events.
// TODO: Suspend write and write events.
static RESULT
test_write_event (void)
{
RESULT result = NULL;
uint32_t test_location = 0;
EventPipeProvider *provider = NULL;
EventPipeEvent *ep_event = NULL;
EventPipeSessionID session_id = 0;
EventPipeProviderConfiguration provider_config;
EventPipeProviderConfiguration *current_provider_config = NULL;
current_provider_config = ep_provider_config_init (&provider_config, TEST_PROVIDER_NAME, 1, EP_EVENT_LEVEL_LOGALWAYS, "");
ep_raise_error_if_nok (current_provider_config != NULL);
test_location = 1;
provider = ep_create_provider (TEST_PROVIDER_NAME, NULL, NULL, NULL);
ep_raise_error_if_nok (provider != NULL);
test_location = 2;
ep_event = ep_provider_add_event (provider, 1, 1, 1, EP_EVENT_LEVEL_LOGALWAYS, false, NULL, 0);
ep_raise_error_if_nok (ep_event != NULL);
test_location = 3;
session_id = ep_enable (TEST_FILE, 1, current_provider_config, 1, EP_SESSION_TYPE_FILE, EP_SERIALIZATION_FORMAT_NETTRACE_V4, false, NULL, NULL, NULL);
ep_raise_error_if_nok (session_id != 0);
test_location = 4;
ep_start_streaming (session_id);
EventData data[1];
ep_event_data_init (&data[0], 0, 0, 0);
ep_write_event_2 (ep_event, data, ARRAY_SIZE (data), NULL, NULL);
ep_event_data_fini (data);
ep_on_exit:
ep_disable (session_id);
ep_delete_provider (provider);
ep_provider_config_fini (current_provider_config);
return result;
ep_on_error:
if (!result)
result = FAILED ("Failed at test location=%i", test_location);
ep_exit_error_handler ();
}
static RESULT
test_write_get_next_event (void)
{
RESULT result = NULL;
uint32_t test_location = 0;
EventPipeProvider *provider = NULL;
EventPipeEvent *ep_event = NULL;
EventPipeSessionID session_id = 0;
EventPipeProviderConfiguration provider_config;
EventPipeProviderConfiguration *current_provider_config = NULL;
EventPipeEventInstance *event_instance = NULL;
current_provider_config = ep_provider_config_init (&provider_config, TEST_PROVIDER_NAME, 1, EP_EVENT_LEVEL_LOGALWAYS, "");
ep_raise_error_if_nok (current_provider_config != NULL);
test_location = 1;
provider = ep_create_provider (TEST_PROVIDER_NAME, NULL, NULL, NULL);
ep_raise_error_if_nok (provider != NULL);
test_location = 2;
ep_event = ep_provider_add_event (provider, 1, 1, 1, EP_EVENT_LEVEL_LOGALWAYS, false, NULL, 0);
ep_raise_error_if_nok (ep_event != NULL);
test_location = 3;
session_id = ep_enable (TEST_FILE, 1, current_provider_config, 1, EP_SESSION_TYPE_FILE, EP_SERIALIZATION_FORMAT_NETTRACE_V4, false, NULL, NULL, NULL);
ep_raise_error_if_nok (session_id != 0);
test_location = 4;
ep_start_streaming (session_id);
EventData data[1];
ep_event_data_init (&data[0], 0, 0, 0);
ep_write_event_2 (ep_event, data, ARRAY_SIZE (data), NULL, NULL);
ep_event_data_fini (data);
event_instance = ep_get_next_event (session_id);
ep_raise_error_if_nok (event_instance != NULL);
test_location = 5;
event_instance = ep_get_next_event (session_id);
ep_raise_error_if_nok (event_instance == NULL);
ep_on_exit:
ep_disable (session_id);
ep_delete_provider (provider);
ep_provider_config_fini (current_provider_config);
return result;
ep_on_error:
if (!result)
result = FAILED ("Failed at test location=%i", test_location);
ep_exit_error_handler ();
}
static RESULT
test_write_wait_get_next_event (void)
{
RESULT result = NULL;
uint32_t test_location = 0;
EventPipeProvider *provider = NULL;
EventPipeEvent *ep_event = NULL;
EventPipeSessionID session_id = 0;
EventPipeSession *session = NULL;
EventPipeProviderConfiguration provider_config;
EventPipeProviderConfiguration *current_provider_config = NULL;
EventPipeEventInstance *event_instance = NULL;
current_provider_config = ep_provider_config_init (&provider_config, TEST_PROVIDER_NAME, 1, EP_EVENT_LEVEL_LOGALWAYS, "");
ep_raise_error_if_nok (current_provider_config != NULL);
test_location = 1;
provider = ep_create_provider (TEST_PROVIDER_NAME, NULL, NULL, NULL);
ep_raise_error_if_nok (provider != NULL);
test_location = 2;
ep_event = ep_provider_add_event (provider, 1, 1, 1, EP_EVENT_LEVEL_LOGALWAYS, false, NULL, 0);
ep_raise_error_if_nok (ep_event != NULL);
test_location = 3;
session_id = ep_enable (TEST_FILE, 1, current_provider_config, 1, EP_SESSION_TYPE_FILE, EP_SERIALIZATION_FORMAT_NETTRACE_V4, false, NULL, NULL, NULL);
ep_raise_error_if_nok (session_id != 0);
session = ep_get_session (session_id);
ep_raise_error_if_nok (session != NULL);
test_location = 4;
ep_start_streaming (session_id);
// Starts as signaled.
// TODO: Is this expected behavior, or just a way to notify the observer that we are up and running?
uint32_t test = ep_rt_wait_event_wait (ep_session_get_wait_event (session), 0, false);
ep_raise_error_if_nok (test == 0);
test_location = 5;
test = ep_rt_wait_event_wait (ep_session_get_wait_event (session), 0, false);
ep_raise_error_if_nok (test != 0);
test_location = 6;
EventData data[1];
ep_event_data_init (&data[0], 0, 0, 0);
for (int i = 0; i < 100; i++)
ep_write_event_2 (ep_event, data, ARRAY_SIZE (data), NULL, NULL);
ep_event_data_fini (data);
// Should be signaled, since buffers should have been made read-only by now.
test = ep_rt_wait_event_wait (ep_session_get_wait_event (session), 0, false);
ep_raise_error_if_nok (test == 0);
test_location = 7;
event_instance = ep_get_next_event (session_id);
ep_raise_error_if_nok (event_instance != NULL);
// Drain all events.
while (ep_get_next_event (session_id));
ep_on_exit:
ep_disable (session_id);
ep_delete_provider (provider);
ep_provider_config_fini (current_provider_config);
return result;
ep_on_error:
if (!result)
result = FAILED ("Failed at test location=%i", test_location);
ep_exit_error_handler ();
}
static RESULT
test_write_event_perf (void)
{
RESULT result = NULL;
uint32_t test_location = 0;
EventPipeProvider *provider = NULL;
EventPipeEvent *ep_event = NULL;
EventPipeSessionID session_id = 0;
EventPipeProviderConfiguration provider_config;
EventPipeProviderConfiguration *current_provider_config = NULL;
int64_t accumulated_write_time_ticks = 0;
uint32_t events_written = 0;
current_provider_config = ep_provider_config_init (&provider_config, TEST_PROVIDER_NAME, 1, EP_EVENT_LEVEL_LOGALWAYS, "");
ep_raise_error_if_nok (current_provider_config != NULL);
test_location = 1;
provider = ep_create_provider (TEST_PROVIDER_NAME, NULL, NULL, NULL);
ep_raise_error_if_nok (provider != NULL);
test_location = 2;
ep_event = ep_provider_add_event (provider, 1, 1, 1, EP_EVENT_LEVEL_LOGALWAYS, false, NULL, 0);
ep_raise_error_if_nok (ep_event != NULL);
test_location = 3;
session_id = ep_enable (TEST_FILE, 1, current_provider_config, 1, EP_SESSION_TYPE_FILE, EP_SERIALIZATION_FORMAT_NETTRACE_V4, false, NULL, NULL, NULL);
ep_raise_error_if_nok (session_id != 0);
test_location = 4;
ep_start_streaming (session_id);
EventData data[1];
ep_event_data_init (&data[0], 0, 0, 0);
// Write in chunks of 1000 events, all should fit into buffer manager.
for (events_written = 0; events_written < 10 * 1000 * 1000; events_written += 1000) {
int64_t start = ep_perf_timestamp_get ();
for (uint32_t i = 0; i < 1000; i++)
ep_write_event_2 (ep_event, data, ARRAY_SIZE (data), NULL, NULL);
int64_t stop = ep_perf_timestamp_get ();
accumulated_write_time_ticks += stop - start;
// Drain events to avoid buffer-manager OOM.
while (ep_get_next_event (session_id));
}
ep_event_data_fini (data);
float accumulated_write_time_sec = ((float)accumulated_write_time_ticks / (float)ep_perf_frequency_query ());
float events_written_per_sec = (float)events_written / (accumulated_write_time_sec ? accumulated_write_time_sec : 1.0);
// Measured number of events/second for one thread.
// TODO: Setup acceptable pass/failure metrics.
printf ("\n\tPerformance stats:\n");
printf ("\t\tTotal number of events: %i\n", events_written);
printf ("\t\tTotal time in sec: %.2f\n\t\tTotal number of events written per sec/core: %.2f\n\t", accumulted_write_time_sec, events_written_per_sec);
ep_on_exit:
ep_disable (session_id);
ep_delete_provider (provider);
ep_provider_config_fini (current_provider_config);
return result;
ep_on_error:
if (!result)
result = FAILED ("Failed at test location=%i", test_location);
ep_exit_error_handler ();
}
// TODO: Add multithreaded test writing into private/shared sessions.
// TODO: Add consumer thread test, flushing file buffers/session, acting on signal.
static RESULT
test_eventpipe_mem_checkpoint (void)
{
RESULT result = NULL;
#ifdef _CRTDBG_MAP_ALLOC
// Need to emulate a thread exit to make sure TLS gets cleaned up for the current thread
// or we will get memory leaks reported.
extern void ep_rt_mono_thread_exited (void);
ep_rt_mono_thread_exited ();
_CrtMemCheckpoint (&eventpipe_memory_end_snapshot);
if (_CrtMemDifference (&eventpipe_memory_diff_snapshot, &eventpipe_memory_start_snapshot, &eventpipe_memory_end_snapshot)) {
_CrtMemDumpStatistics (&eventpipe_memory_diff_snapshot);
result = FAILED ("Memory leak detected!");
}
_CrtMemCheckpoint (&eventpipe_memory_start_snapshot);
#endif
return result;
}
static RESULT
test_eventpipe_reset_mem_checkpoint (void)
{
#ifdef _CRTDBG_MAP_ALLOC
_CrtMemCheckpoint (&eventpipe_memory_start_snapshot);
#endif
return NULL;
}
static RESULT
test_eventpipe_teardown (void)
{
uint32_t test_location = 0;
#ifdef _CRTDBG_MAP_ALLOC
_CrtMemCheckpoint (&eventpipe_memory_end_snapshot);
if (_CrtMemDifference (&eventpipe_memory_diff_snapshot, &eventpipe_memory_start_snapshot, &eventpipe_memory_end_snapshot)) {
_CrtMemDumpStatistics (&eventpipe_memory_diff_snapshot);
return FAILED ("Memory leak detected!");
}
#endif
test_location = 1;
EP_LOCK_ENTER (section1)
ep_sample_profiler_shutdown ();
EP_LOCK_EXIT (section1)
return NULL;
ep_on_error:
return FAILED ("Failed at test location=%i", test_location);
}
static Test ep_tests [] = {
{"test_eventpipe_setup", test_eventpipe_setup},
{"test_create_delete_provider", test_create_delete_provider},
{"test_stress_create_delete_provider", test_stress_create_delete_provider},
{"test_get_provider", test_get_provider},
{"test_create_same_provider_twice", test_create_same_provider_twice},
{"test_enable_disable", test_enable_disable},
{"test_enable_disable_provider_config", test_enable_disable_provider_config},
{"test_create_delete_provider_with_callback", test_create_delete_provider_with_callback},
{"test_build_event_metadata", test_build_event_metadata},
{"test_session_start_streaming", test_session_start_streaming},
{"test_session_write_event", test_session_write_event_seq_point},
{"test_session_write_event_seq_point", test_session_write_event_seq_point},
{"test_session_write_get_next_event", test_session_write_get_next_event},
{"test_session_write_wait_get_next_event", test_session_write_wait_get_next_event},
{"test_session_write_suspend_event", test_session_write_suspend_event},
{"test_write_event", test_write_event},
{"test_write_get_next_event", test_write_get_next_event},
{"test_write_wait_get_next_event", test_write_wait_get_next_event},
#ifdef TEST_PERF
{"test_write_event_perf", test_write_event_perf},
#endif
{"test_eventpipe_mem_checkpoint", test_eventpipe_mem_checkpoint},
{"test_enable_disable_default_provider_config", test_enable_disable_default_provider_config},
{"test_enable_disable_multiple_default_provider_config", test_enable_disable_multiple_default_provider_config},
{"test_enable_disable_provider_parse_default_config", test_enable_disable_provider_parse_default_config},
{"test_eventpipe_reset_mem_checkpoint", test_eventpipe_reset_mem_checkpoint},
{"test_eventpipe_teardown", test_eventpipe_teardown},
{NULL, NULL}
};
DEFINE_TEST_GROUP_INIT(ep_tests_init, ep_tests)
| -1 |
dotnet/runtime | 66,213 | [mono] Put WeakAttribute support under an ifdef | This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | lambdageek | 2022-03-04T19:50:51Z | 2022-03-08T14:08:40Z | 34880aaeb463cc17268b44aecb80dda0914cfa01 | 8342a08b8bf0ff2ffb64cd5992e2d60c8d6d63c1 | [mono] Put WeakAttribute support under an ifdef. This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | ./src/mono/mono/mini/ir-emit.h | /**
* \file
* IR Creation/Emission Macros
*
* Author:
* Zoltan Varga ([email protected])
*
* (C) 2002 Ximian, Inc.
*/
#ifndef __MONO_IR_EMIT_H__
#define __MONO_IR_EMIT_H__
#include "mini.h"
static inline guint32
alloc_ireg (MonoCompile *cfg)
{
return cfg->next_vreg ++;
}
static inline guint32
alloc_preg (MonoCompile *cfg)
{
return alloc_ireg (cfg);
}
static inline guint32
alloc_lreg (MonoCompile *cfg)
{
#if SIZEOF_REGISTER == 8
return cfg->next_vreg ++;
#else
/* Use a pair of consecutive vregs */
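/* The long value itself takes one vreg id; its low and high word
 * components are addressed as vreg+1 and vreg+2 (cf. MONO_LVREG_LS and
 * MONO_LVREG_MS in mini.h), which is why three ids are consumed. */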
guint32 res = cfg->next_vreg;
cfg->next_vreg += 3;
return res;
#endif
}
static inline guint32
alloc_freg (MonoCompile *cfg)
{
if (mono_arch_is_soft_float ()) {
/* Allocate an lvreg so float ops can be decomposed into long ops */
return alloc_lreg (cfg);
} else {
/* Allocate these from the same pool as the int regs */
return cfg->next_vreg ++;
}
}
static inline guint32
alloc_ireg_ref (MonoCompile *cfg)
{
int vreg = alloc_ireg (cfg);
if (cfg->compute_gc_maps)
mono_mark_vreg_as_ref (cfg, vreg);
#ifdef TARGET_WASM
mono_mark_vreg_as_ref (cfg, vreg);
#endif
return vreg;
}
static inline guint32
alloc_ireg_mp (MonoCompile *cfg)
{
int vreg = alloc_ireg (cfg);
if (cfg->compute_gc_maps)
mono_mark_vreg_as_mp (cfg, vreg);
return vreg;
}
static inline guint32
alloc_xreg (MonoCompile *cfg)
{
return alloc_ireg (cfg);
}
static inline guint32
alloc_dreg (MonoCompile *cfg, MonoStackType stack_type)
{
switch (stack_type) {
case STACK_I4:
case STACK_PTR:
return alloc_ireg (cfg);
case STACK_MP:
return alloc_ireg_mp (cfg);
case STACK_OBJ:
return alloc_ireg_ref (cfg);
case STACK_R4:
case STACK_R8:
return alloc_freg (cfg);
case STACK_I8:
return alloc_lreg (cfg);
case STACK_VTYPE:
return alloc_ireg (cfg);
default:
g_warning ("Unknown stack type %x\n", stack_type);
g_assert_not_reached ();
return -1;
}
}
/*
 * Macros used to generate the intermediate representation.
 *
 * The macros use a `MonoCompile` object as their context, and among other
 * things it is used to allocate instructions from the memory pool
 * associated with it.
 *
 * The macros come in three variations with slightly different
 * features; the pattern is: NEW_OP, EMIT_NEW_OP, MONO_EMIT_NEW_OP.
 * The differences are as follows:
 *
 * `NEW_OP`: these are the basic macros to set up an instruction that is
 * passed as an argument.
 *
 * `EMIT_NEW_OP`: these macros, in addition to creating the instruction,
 * add it to the current basic block of the `MonoCompile` object passed.
 * Usually these are used when further customization of the `inst`
 * parameter is desired before the instruction is added to the current
 * basic block.
 *
 * `MONO_EMIT_NEW_OP`: these variations of the macros are used when you
 * are merely interested in emitting the instruction into the `MonoCompile`
 * parameter.
*/
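/*
 * Illustrative sketch (not part of the original header): the hypothetical
 * helper below contrasts the three variants. It assumes a valid cfg with a
 * current basic block, and is kept under #if 0 so it is never compiled.
 */
#if 0
static void
example_emit_variants (MonoCompile *cfg)
{
	MonoInst *ins;

	/* NEW_OP: only sets up the instruction; the caller emits it */
	NEW_ICONST (cfg, ins, 42);
	MONO_ADD_INS (cfg->cbb, ins);

	/* EMIT_NEW_OP: creates and emits, leaving `ins` available for tweaks */
	EMIT_NEW_ICONST (cfg, ins, 42);
	/* ... further customization of `ins` could happen here ... */

	/* MONO_EMIT_NEW_OP: emit-and-forget; only the dreg matters */
	MONO_EMIT_NEW_ICONST (cfg, alloc_ireg (cfg), 42);
}
#endif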
#undef MONO_INST_NEW
/*
* FIXME: zeroing out some fields is not needed with the new IR, but the old
* JIT code still uses the left and right fields, so it has to stay.
*/
/*
 * MONO_INST_NEW: create a new MonoInst instance allocated from the MonoCompile pool.
 *
 * @cfg: the MonoCompile object that will be used as the context for the
 * instruction.
 * @dest: the place where the new `MonoInst` is stored.
 * @op: the value that should be stored in the MonoInst.opcode field.
 *
 * This initializes an empty, zeroed-out MonoInst. It is allocated from the
 * memory pool associated with the MonoCompile, but it is not linked anywhere.
 * The cil_code field is set to the cfg->ip address.
 */
#define MONO_INST_NEW(cfg,dest,op) do { \
(dest) = (MonoInst *)mono_mempool_alloc ((cfg)->mempool, sizeof (MonoInst)); \
(dest)->inst_i0 = (dest)->inst_i1 = 0; \
(dest)->next = (dest)->prev = NULL; \
(dest)->opcode = (op); \
(dest)->flags = 0; \
(dest)->type = 0; \
(dest)->dreg = -1; \
MONO_INST_NULLIFY_SREGS ((dest)); \
(dest)->cil_code = (cfg)->ip; \
} while (0)
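/*
 * Illustrative sketch (not part of the original header): direct use of
 * MONO_INST_NEW for an opcode without a dedicated NEW_xxx macro, following
 * the pattern used throughout this file. `reg` is a hypothetical vreg.
 */
#if 0
	MonoInst *ins;
	MONO_INST_NEW (cfg, ins, OP_NOT_NULL);
	ins->sreg1 = reg;
	MONO_ADD_INS (cfg->cbb, ins);
#endif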
/*
* Variants which take a dest argument and don't do an emit
*/
#define NEW_ICONST(cfg,dest,val) do { \
MONO_INST_NEW ((cfg), (dest), OP_ICONST); \
(dest)->inst_c0 = (val); \
(dest)->type = STACK_I4; \
(dest)->dreg = alloc_dreg ((cfg), STACK_I4); \
} while (0)
/*
* Avoid using this with a non-NULL val if possible as it is not AOT
* compatible. Use one of the NEW_xxxCONST variants instead.
*/
#define NEW_PCONST(cfg,dest,val) do { \
MONO_INST_NEW ((cfg), (dest), OP_PCONST); \
(dest)->inst_p0 = (val); \
(dest)->type = STACK_PTR; \
(dest)->dreg = alloc_dreg ((cfg), STACK_PTR); \
} while (0)
#define NEW_I8CONST(cfg,dest,val) do { \
MONO_INST_NEW ((cfg), (dest), OP_I8CONST); \
(dest)->dreg = alloc_lreg ((cfg)); \
(dest)->type = STACK_I8; \
(dest)->inst_l = (val); \
} while (0)
#define NEW_STORE_MEMBASE(cfg,dest,op,base,offset,sr) do { \
MONO_INST_NEW ((cfg), (dest), (op)); \
(dest)->sreg1 = sr; \
(dest)->inst_destbasereg = base; \
(dest)->inst_offset = offset; \
} while (0)
#define NEW_LOAD_MEMBASE(cfg,dest,op,dr,base,offset) do { \
MONO_INST_NEW ((cfg), (dest), (op)); \
(dest)->dreg = (dr); \
(dest)->inst_basereg = (base); \
(dest)->inst_offset = (offset); \
(dest)->type = STACK_I4; \
} while (0)
#define NEW_LOAD_MEM(cfg,dest,op,dr,mem) do { \
MONO_INST_NEW ((cfg), (dest), (op)); \
(dest)->dreg = (dr); \
(dest)->inst_p0 = (gpointer)(gssize)(mem); \
(dest)->type = STACK_I4; \
} while (0)
#define NEW_UNALU(cfg,dest,op,dr,sr1) do { \
MONO_INST_NEW ((cfg), (dest), (op)); \
(dest)->dreg = dr; \
(dest)->sreg1 = sr1; \
} while (0)
#define NEW_BIALU(cfg,dest,op,dr,sr1,sr2) do { \
MONO_INST_NEW ((cfg), (dest), (op)); \
(dest)->dreg = (dr); \
(dest)->sreg1 = (sr1); \
(dest)->sreg2 = (sr2); \
} while (0)
#define NEW_BIALU_IMM(cfg,dest,op,dr,sr,imm) do { \
MONO_INST_NEW ((cfg), (dest), (op)); \
(dest)->dreg = dr; \
(dest)->sreg1 = sr; \
(dest)->inst_imm = (imm); \
} while (0)
#define NEW_PATCH_INFO(cfg,dest,el1,el2) do { \
MONO_INST_NEW ((cfg), (dest), OP_PATCH_INFO); \
(dest)->inst_left = (MonoInst*)(el1); \
(dest)->inst_right = (MonoInst*)(el2); \
} while (0)
#define NEW_AOTCONST_GOT_VAR(cfg,dest,patch_type,cons) do { \
MONO_INST_NEW ((cfg), (dest), cfg->compile_aot ? OP_GOT_ENTRY : OP_PCONST); \
if (cfg->compile_aot) { \
MonoInst *group, *got_loc; \
got_loc = mono_get_got_var (cfg); \
NEW_PATCH_INFO ((cfg), group, cons, patch_type); \
(dest)->inst_basereg = got_loc->dreg; \
(dest)->inst_p1 = group; \
} else { \
(dest)->inst_p0 = (cons); \
(dest)->inst_i1 = (MonoInst*)(patch_type); \
} \
(dest)->type = STACK_PTR; \
(dest)->dreg = alloc_dreg ((cfg), STACK_PTR); \
} while (0)
#define NEW_AOTCONST_TOKEN_GOT_VAR(cfg,dest,patch_type,image,token,generic_context,stack_type,stack_class) do { \
MonoInst *group, *got_loc; \
MONO_INST_NEW ((cfg), (dest), OP_GOT_ENTRY); \
got_loc = mono_get_got_var (cfg); \
NEW_PATCH_INFO ((cfg), group, NULL, patch_type); \
group->inst_p0 = mono_jump_info_token_new2 ((cfg)->mempool, (image), (token), (generic_context)); \
(dest)->inst_basereg = got_loc->dreg; \
(dest)->inst_p1 = group; \
(dest)->type = (stack_type); \
(dest)->klass = (stack_class); \
(dest)->dreg = alloc_dreg ((cfg), (stack_type)); \
} while (0)
#define NEW_AOTCONST(cfg,dest,patch_type,cons) do { \
if (cfg->backend->need_got_var && !cfg->llvm_only) { \
NEW_AOTCONST_GOT_VAR ((cfg), (dest), (patch_type), (cons)); \
} else { \
MONO_INST_NEW ((cfg), (dest), cfg->compile_aot ? OP_AOTCONST : OP_PCONST); \
(dest)->inst_p0 = (cons); \
(dest)->inst_p1 = GUINT_TO_POINTER (patch_type); \
(dest)->type = STACK_PTR; \
(dest)->dreg = alloc_dreg ((cfg), STACK_PTR); \
} \
} while (0)
#define NEW_AOTCONST_TOKEN(cfg,dest,patch_type,image,token,generic_context,stack_type,stack_class) do { \
if (cfg->backend->need_got_var && !cfg->llvm_only) { \
NEW_AOTCONST_TOKEN_GOT_VAR ((cfg), (dest), (patch_type), (image), (token), (generic_context), (stack_type), (stack_class)); \
} else { \
MONO_INST_NEW ((cfg), (dest), OP_AOTCONST); \
(dest)->inst_p0 = mono_jump_info_token_new2 ((cfg)->mempool, (image), (token), (generic_context)); \
(dest)->inst_p1 = (gpointer)(patch_type); \
(dest)->type = (stack_type); \
(dest)->klass = (stack_class); \
(dest)->dreg = alloc_dreg ((cfg), (stack_type)); \
} \
} while (0)
#define NEW_CLASSCONST(cfg,dest,val) NEW_AOTCONST ((cfg), (dest), MONO_PATCH_INFO_CLASS, (val))
#define NEW_IMAGECONST(cfg,dest,val) NEW_AOTCONST ((cfg), (dest), MONO_PATCH_INFO_IMAGE, (val))
#define NEW_FIELDCONST(cfg,dest,val) NEW_AOTCONST ((cfg), (dest), MONO_PATCH_INFO_FIELD, (val))
#define NEW_METHODCONST(cfg,dest,val) NEW_AOTCONST ((cfg), (dest), MONO_PATCH_INFO_METHODCONST, (val))
#define NEW_VTABLECONST(cfg,dest,vtable) NEW_AOTCONST ((cfg), (dest), MONO_PATCH_INFO_VTABLE, cfg->compile_aot ? (gpointer)((vtable)->klass) : (vtable))
#define NEW_SFLDACONST(cfg,dest,val) NEW_AOTCONST ((cfg), (dest), MONO_PATCH_INFO_SFLDA, (val))
#define NEW_LDSTRCONST(cfg,dest,image,token) NEW_AOTCONST_TOKEN ((cfg), (dest), MONO_PATCH_INFO_LDSTR, (image), (token), NULL, STACK_OBJ, mono_defaults.string_class)
#define NEW_LDSTRLITCONST(cfg,dest,val) NEW_AOTCONST ((cfg), (dest), MONO_PATCH_INFO_LDSTR_LIT, (val))
#define NEW_TYPE_FROM_HANDLE_CONST(cfg,dest,image,token,generic_context) NEW_AOTCONST_TOKEN ((cfg), (dest), MONO_PATCH_INFO_TYPE_FROM_HANDLE, (image), (token), (generic_context), STACK_OBJ, mono_defaults.runtimetype_class)
#define NEW_LDTOKENCONST(cfg,dest,image,token,generic_context) NEW_AOTCONST_TOKEN ((cfg), (dest), MONO_PATCH_INFO_LDTOKEN, (image), (token), (generic_context), STACK_PTR, NULL)
#define NEW_DECLSECCONST(cfg,dest,image,entry) do { \
if (cfg->compile_aot) { \
NEW_AOTCONST_TOKEN (cfg, dest, MONO_PATCH_INFO_DECLSEC, image, (entry).index, NULL, STACK_OBJ, NULL); \
} else { \
NEW_PCONST (cfg, args [0], (entry).blob); \
} \
} while (0)
#define NEW_METHOD_RGCTX_CONST(cfg,dest,method) do { \
if (cfg->compile_aot) { \
NEW_AOTCONST ((cfg), (dest), MONO_PATCH_INFO_METHOD_RGCTX, (method)); \
} else { \
MonoMethodRuntimeGenericContext *mrgctx; \
mrgctx = (MonoMethodRuntimeGenericContext*)mini_method_get_rgctx ((method)); \
NEW_PCONST ((cfg), (dest), (mrgctx)); \
} \
} while (0)
#define NEW_JIT_ICALL_ADDRCONST(cfg, dest, jit_icall_id) NEW_AOTCONST ((cfg), (dest), MONO_PATCH_INFO_JIT_ICALL_ADDR, (jit_icall_id))
#define NEW_VARLOAD(cfg,dest,var,vartype) do { \
MONO_INST_NEW ((cfg), (dest), OP_MOVE); \
(dest)->opcode = mono_type_to_regmove ((cfg), (vartype)); \
mini_type_to_eval_stack_type ((cfg), (vartype), (dest)); \
(dest)->klass = var->klass; \
(dest)->sreg1 = var->dreg; \
(dest)->dreg = alloc_dreg ((cfg), (MonoStackType)(dest)->type); \
if ((dest)->opcode == OP_VMOVE) (dest)->klass = mono_class_from_mono_type_internal ((vartype)); \
} while (0)
#define DECOMPOSE_INTO_REGPAIR(stack_type) (mono_arch_is_soft_float () ? ((stack_type) == STACK_I8 || (stack_type) == STACK_R8) : ((stack_type) == STACK_I8))
static inline void
handle_gsharedvt_ldaddr (MonoCompile *cfg)
{
/* The decomposition of ldaddr makes use of these two variables, so add uses for them */
MonoInst *use;
MONO_INST_NEW (cfg, use, OP_DUMMY_USE);
use->sreg1 = cfg->gsharedvt_info_var->dreg;
MONO_ADD_INS (cfg->cbb, use);
MONO_INST_NEW (cfg, use, OP_DUMMY_USE);
use->sreg1 = cfg->gsharedvt_locals_var->dreg;
MONO_ADD_INS (cfg->cbb, use);
}
#define NEW_VARLOADA(cfg,dest,var,vartype) do { \
MONO_INST_NEW ((cfg), (dest), OP_LDADDR); \
(dest)->inst_p0 = (var); \
(var)->flags |= MONO_INST_INDIRECT; \
(dest)->type = STACK_MP; \
(dest)->klass = (var)->klass; \
(dest)->dreg = alloc_dreg ((cfg), STACK_MP); \
(cfg)->has_indirection = TRUE; \
if (G_UNLIKELY (cfg->gsharedvt) && mini_is_gsharedvt_variable_type ((var)->inst_vtype)) { handle_gsharedvt_ldaddr ((cfg)); } \
if (SIZEOF_REGISTER == 4 && DECOMPOSE_INTO_REGPAIR ((var)->type)) { MonoInst *var1 = get_vreg_to_inst (cfg, MONO_LVREG_LS ((var)->dreg)); MonoInst *var2 = get_vreg_to_inst (cfg, MONO_LVREG_MS ((var)->dreg)); g_assert (var1); g_assert (var2); var1->flags |= MONO_INST_INDIRECT; var2->flags |= MONO_INST_INDIRECT; } \
} while (0)
#define NEW_VARSTORE(cfg,dest,var,vartype,inst) do { \
MONO_INST_NEW ((cfg), (dest), OP_MOVE); \
(dest)->opcode = mono_type_to_regmove ((cfg), (vartype)); \
(dest)->klass = (var)->klass; \
(dest)->sreg1 = (inst)->dreg; \
(dest)->dreg = (var)->dreg; \
if ((dest)->opcode == OP_VMOVE) (dest)->klass = mono_class_from_mono_type_internal ((vartype)); \
} while (0)
#define NEW_TEMPLOAD(cfg,dest,num) NEW_VARLOAD ((cfg), (dest), (cfg)->varinfo [(num)], (cfg)->varinfo [(num)]->inst_vtype)
#define NEW_TEMPLOADA(cfg,dest,num) NEW_VARLOADA ((cfg), (dest), cfg->varinfo [(num)], cfg->varinfo [(num)]->inst_vtype)
#define NEW_TEMPSTORE(cfg,dest,num,inst) NEW_VARSTORE ((cfg), (dest), (cfg)->varinfo [(num)], (cfg)->varinfo [(num)]->inst_vtype, (inst))
#define NEW_ARGLOAD(cfg,dest,num) NEW_VARLOAD ((cfg), (dest), cfg->args [(num)], cfg->arg_types [(num)])
#define NEW_LOCLOAD(cfg,dest,num) NEW_VARLOAD ((cfg), (dest), cfg->locals [(num)], header->locals [(num)])
#define NEW_LOCSTORE(cfg,dest,num,inst) NEW_VARSTORE ((cfg), (dest), (cfg)->locals [(num)], (cfg)->locals [(num)]->inst_vtype, (inst))
#define NEW_ARGSTORE(cfg,dest,num,inst) NEW_VARSTORE ((cfg), (dest), cfg->args [(num)], cfg->arg_types [(num)], (inst))
#define NEW_LOCLOADA(cfg,dest,num) NEW_VARLOADA ((cfg), (dest), (cfg)->locals [(num)], (cfg)->locals [(num)]->inst_vtype)
#define NEW_RETLOADA(cfg,dest) do { \
MONO_INST_NEW ((cfg), (dest), OP_MOVE); \
(dest)->type = STACK_MP; \
(dest)->klass = cfg->ret->klass; \
(dest)->sreg1 = cfg->vret_addr->dreg; \
(dest)->dreg = alloc_dreg ((cfg), (MonoStackType)(dest)->type); \
} while (0)
#define NEW_ARGLOADA(cfg,dest,num) NEW_VARLOADA ((cfg), (dest), arg_array [(num)], param_types [(num)])
/* Promote the vreg to a variable so its address can be taken */
#define NEW_VARLOADA_VREG(cfg,dest,vreg,ltype) do { \
MonoInst *var = get_vreg_to_inst ((cfg), (vreg)); \
if (!var) \
var = mono_compile_create_var_for_vreg ((cfg), (ltype), OP_LOCAL, (vreg)); \
NEW_VARLOADA ((cfg), (dest), (var), (ltype)); \
} while (0)
#define NEW_DUMMY_USE(cfg,dest,var) do { \
MONO_INST_NEW ((cfg), (dest), OP_DUMMY_USE); \
(dest)->sreg1 = var->dreg; \
} while (0)
/* Variants which take a type argument and handle vtypes as well */
#define NEW_LOAD_MEMBASE_TYPE(cfg,dest,ltype,base,offset) do { \
NEW_LOAD_MEMBASE ((cfg), (dest), mono_type_to_load_membase ((cfg), (ltype)), 0, (base), (offset)); \
mini_type_to_eval_stack_type ((cfg), (ltype), (dest)); \
(dest)->dreg = alloc_dreg ((cfg), (MonoStackType)(dest)->type); \
} while (0)
#define NEW_STORE_MEMBASE_TYPE(cfg,dest,ltype,base,offset,sr) do { \
MONO_INST_NEW ((cfg), (dest), mono_type_to_store_membase ((cfg), (ltype))); \
(dest)->sreg1 = sr; \
(dest)->inst_destbasereg = base; \
(dest)->inst_offset = offset; \
mini_type_to_eval_stack_type ((cfg), (ltype), (dest)); \
(dest)->klass = mono_class_from_mono_type_internal (ltype); \
} while (0)
#define NEW_SEQ_POINT(cfg,dest,il_offset,intr_loc) do { \
MONO_INST_NEW ((cfg), (dest), cfg->gen_sdb_seq_points ? OP_SEQ_POINT : OP_IL_SEQ_POINT); \
(dest)->inst_imm = (il_offset); \
(dest)->flags = intr_loc ? MONO_INST_SINGLE_STEP_LOC : 0; \
} while (0)
#define NEW_GC_PARAM_SLOT_LIVENESS_DEF(cfg,dest,offset,type) do { \
MONO_INST_NEW ((cfg), (dest), OP_GC_PARAM_SLOT_LIVENESS_DEF); \
(dest)->inst_offset = (offset); \
(dest)->inst_vtype = (type); \
} while (0)
/*
* Variants which do an emit as well.
*/
#define EMIT_NEW_ICONST(cfg,dest,val) do { NEW_ICONST ((cfg), (dest), (val)); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#define EMIT_NEW_PCONST(cfg,dest,val) do { NEW_PCONST ((cfg), (dest), (val)); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#define EMIT_NEW_I8CONST(cfg,dest,val) do { NEW_I8CONST ((cfg), (dest), (val)); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#define EMIT_NEW_AOTCONST(cfg,dest,patch_type,cons) do { NEW_AOTCONST ((cfg), (dest), (patch_type), (cons)); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#define EMIT_NEW_AOTCONST_TOKEN(cfg,dest,patch_type,image,token,stack_type,stack_class) do { NEW_AOTCONST_TOKEN ((cfg), (dest), (patch_type), (image), (token), NULL, (stack_type), (stack_class)); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#define EMIT_NEW_CLASSCONST(cfg,dest,val) do { NEW_AOTCONST ((cfg), (dest), MONO_PATCH_INFO_CLASS, (val)); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#define EMIT_NEW_IMAGECONST(cfg,dest,val) do { NEW_AOTCONST ((cfg), (dest), MONO_PATCH_INFO_IMAGE, (val)); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#define EMIT_NEW_FIELDCONST(cfg,dest,val) do { NEW_AOTCONST ((cfg), (dest), MONO_PATCH_INFO_FIELD, (val)); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#define EMIT_NEW_METHODCONST(cfg,dest,val) do { NEW_AOTCONST ((cfg), (dest), MONO_PATCH_INFO_METHODCONST, (val)); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#define EMIT_NEW_VTABLECONST(cfg,dest,vtable) do { NEW_AOTCONST ((cfg), (dest), MONO_PATCH_INFO_VTABLE, cfg->compile_aot ? (gpointer)((vtable)->klass) : (vtable)); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#define EMIT_NEW_SFLDACONST(cfg,dest,val) do { NEW_AOTCONST ((cfg), (dest), MONO_PATCH_INFO_SFLDA, (val)); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#define EMIT_NEW_LDSTRCONST(cfg,dest,image,token) do { NEW_AOTCONST_TOKEN ((cfg), (dest), MONO_PATCH_INFO_LDSTR, (image), (token), NULL, STACK_OBJ, mono_defaults.string_class); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#define EMIT_NEW_LDSTRLITCONST(cfg,dest,val) do { NEW_AOTCONST ((cfg), (dest), MONO_PATCH_INFO_LDSTR_LIT, (val)); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#define EMIT_NEW_TYPE_FROM_HANDLE_CONST(cfg,dest,image,token,generic_context) do { NEW_AOTCONST_TOKEN ((cfg), (dest), MONO_PATCH_INFO_TYPE_FROM_HANDLE, (image), (token), (generic_context), STACK_OBJ, mono_defaults.runtimetype_class); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#define EMIT_NEW_LDTOKENCONST(cfg,dest,image,token,generic_context) do { NEW_AOTCONST_TOKEN ((cfg), (dest), MONO_PATCH_INFO_LDTOKEN, (image), (token), (generic_context), STACK_PTR, NULL); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#define EMIT_NEW_TLS_OFFSETCONST(cfg,dest,key) do { NEW_TLS_OFFSETCONST ((cfg), (dest), (key)); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#define EMIT_NEW_DECLSECCONST(cfg,dest,image,entry) do { NEW_DECLSECCONST ((cfg), (dest), (image), (entry)); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#define EMIT_NEW_METHOD_RGCTX_CONST(cfg,dest,method) do { NEW_METHOD_RGCTX_CONST ((cfg), (dest), (method)); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#define EMIT_NEW_JIT_ICALL_ADDRCONST(cfg, dest, jit_icall_id) do { NEW_JIT_ICALL_ADDRCONST ((cfg), (dest), (jit_icall_id)); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#define EMIT_NEW_VARLOAD(cfg,dest,var,vartype) do { NEW_VARLOAD ((cfg), (dest), (var), (vartype)); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#define EMIT_NEW_VARSTORE(cfg,dest,var,vartype,inst) do { NEW_VARSTORE ((cfg), (dest), (var), (vartype), (inst)); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#define EMIT_NEW_VARLOADA(cfg,dest,var,vartype) do { NEW_VARLOADA ((cfg), (dest), (var), (vartype)); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#ifdef MONO_ARCH_SOFT_FLOAT_FALLBACK
/*
* Since the IL stack (and our vregs) contain double values, we have to do a conversion
* when loading/storing args/locals of type R4.
*/
#define EMIT_NEW_VARLOAD_SFLOAT(cfg,dest,var,vartype) do { \
if (!COMPILE_LLVM ((cfg)) && !m_type_is_byref ((vartype)) && (vartype)->type == MONO_TYPE_R4) { \
MonoInst *iargs [1]; \
EMIT_NEW_VARLOADA (cfg, iargs [0], (var), (vartype)); \
(dest) = mono_emit_jit_icall (cfg, mono_fload_r4, iargs); \
} else { \
EMIT_NEW_VARLOAD ((cfg), (dest), (var), (vartype)); \
} \
} while (0)
#define EMIT_NEW_VARSTORE_SFLOAT(cfg,dest,var,vartype,inst) do { \
if (COMPILE_SOFT_FLOAT ((cfg)) && !m_type_is_byref ((vartype)) && (vartype)->type == MONO_TYPE_R4) { \
MonoInst *iargs [2]; \
iargs [0] = (inst); \
EMIT_NEW_VARLOADA (cfg, iargs [1], (var), (vartype)); \
(dest) = mono_emit_jit_icall (cfg, mono_fstore_r4, iargs); \
} else { \
EMIT_NEW_VARSTORE ((cfg), (dest), (var), (vartype), (inst)); \
} \
} while (0)
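/*
 * Illustrative sketch (not part of the original header): under the soft
 * float fallback, loading an R4 local goes through the mono_fload_r4 icall
 * instead of a plain register move. Assumes local 0 has type float; the
 * `header` parameter is required because NEW_LOCLOAD references it.
 */
#if 0
static void
example_soft_float_load (MonoCompile *cfg, MonoMethodHeader *header)
{
	MonoInst *val;
	/* Expands into an address-of plus a mono_fload_r4 icall when local 0 is R4 */
	EMIT_NEW_LOCLOAD (cfg, val, 0);
}
#endif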
#define EMIT_NEW_ARGLOAD(cfg,dest,num) do { \
if (mono_arch_is_soft_float ()) { \
EMIT_NEW_VARLOAD_SFLOAT ((cfg), (dest), cfg->args [(num)], cfg->arg_types [(num)]); \
} else { \
NEW_ARGLOAD ((cfg), (dest), (num)); \
MONO_ADD_INS ((cfg)->cbb, (dest)); \
} \
} while (0)
#define EMIT_NEW_LOCLOAD(cfg,dest,num) do { \
if (mono_arch_is_soft_float ()) { \
EMIT_NEW_VARLOAD_SFLOAT ((cfg), (dest), cfg->locals [(num)], header->locals [(num)]); \
} else { \
NEW_LOCLOAD ((cfg), (dest), (num)); \
MONO_ADD_INS ((cfg)->cbb, (dest)); \
} \
} while (0)
#define EMIT_NEW_LOCSTORE(cfg,dest,num,inst) do { \
if (mono_arch_is_soft_float ()) { \
EMIT_NEW_VARSTORE_SFLOAT ((cfg), (dest), (cfg)->locals [(num)], (cfg)->locals [(num)]->inst_vtype, (inst)); \
} else { \
NEW_LOCSTORE ((cfg), (dest), (num), (inst)); \
MONO_ADD_INS ((cfg)->cbb, (dest)); \
} \
} while (0)
#define EMIT_NEW_ARGSTORE(cfg,dest,num,inst) do { \
if (mono_arch_is_soft_float ()) { \
EMIT_NEW_VARSTORE_SFLOAT ((cfg), (dest), cfg->args [(num)], cfg->arg_types [(num)], (inst)); \
} else { \
NEW_ARGSTORE ((cfg), (dest), (num), (inst)); \
MONO_ADD_INS ((cfg)->cbb, (dest)); \
} \
} while (0)
#else
#define EMIT_NEW_ARGLOAD(cfg,dest,num) do { NEW_ARGLOAD ((cfg), (dest), (num)); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#define EMIT_NEW_LOCLOAD(cfg,dest,num) do { NEW_LOCLOAD ((cfg), (dest), (num)); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#define EMIT_NEW_LOCSTORE(cfg,dest,num,inst) do { NEW_LOCSTORE ((cfg), (dest), (num), (inst)); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#define EMIT_NEW_ARGSTORE(cfg,dest,num,inst) do { NEW_ARGSTORE ((cfg), (dest), (num), (inst)); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#endif
#define EMIT_NEW_TEMPLOAD(cfg,dest,num) do { NEW_TEMPLOAD ((cfg), (dest), (num)); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#define EMIT_NEW_TEMPLOADA(cfg,dest,num) do { NEW_TEMPLOADA ((cfg), (dest), (num)); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#define EMIT_NEW_LOCLOADA(cfg,dest,num) do { NEW_LOCLOADA ((cfg), (dest), (num)); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#define EMIT_NEW_ARGLOADA(cfg,dest,num) do { NEW_ARGLOADA ((cfg), (dest), (num)); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#define EMIT_NEW_RETLOADA(cfg,dest) do { NEW_RETLOADA ((cfg), (dest)); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#define EMIT_NEW_TEMPSTORE(cfg,dest,num,inst) do { NEW_TEMPSTORE ((cfg), (dest), (num), (inst)); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#define EMIT_NEW_VARLOADA_VREG(cfg,dest,vreg,ltype) do { NEW_VARLOADA_VREG ((cfg), (dest), (vreg), (ltype)); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#define EMIT_NEW_DUMMY_USE(cfg,dest,var) do { NEW_DUMMY_USE ((cfg), (dest), (var)); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#define EMIT_NEW_UNALU(cfg,dest,op,dr,sr1) do { NEW_UNALU ((cfg), (dest), (op), (dr), (sr1)); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#define EMIT_NEW_BIALU(cfg,dest,op,dr,sr1,sr2) do { NEW_BIALU ((cfg), (dest), (op), (dr), (sr1), (sr2)); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#define EMIT_NEW_BIALU_IMM(cfg,dest,op,dr,sr,imm) do { NEW_BIALU_IMM ((cfg), (dest), (op), (dr), (sr), (imm)); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#define EMIT_NEW_LOAD_MEMBASE(cfg,dest,op,dr,base,offset) do { NEW_LOAD_MEMBASE ((cfg), (dest), (op), (dr), (base), (offset)); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#define EMIT_NEW_STORE_MEMBASE(cfg,dest,op,base,offset,sr) do { NEW_STORE_MEMBASE ((cfg), (dest), (op), (base), (offset), (sr)); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#define EMIT_NEW_LOAD_MEMBASE_TYPE(cfg,dest,ltype,base,offset) do { NEW_LOAD_MEMBASE_TYPE ((cfg), (dest), (ltype), (base), (offset)); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#define EMIT_NEW_STORE_MEMBASE_TYPE(cfg,dest,ltype,base,offset,sr) do { NEW_STORE_MEMBASE_TYPE ((cfg), (dest), (ltype), (base), (offset), (sr)); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#define EMIT_NEW_GC_PARAM_SLOT_LIVENESS_DEF(cfg,dest,offset,type) do { NEW_GC_PARAM_SLOT_LIVENESS_DEF ((cfg), (dest), (offset), (type)); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
/*
 * Variants which do not take a dest argument, but take a dreg argument.
*/
#define MONO_EMIT_NEW_ICONST(cfg,dr,imm) do { \
MonoInst *inst; \
MONO_INST_NEW ((cfg), (inst), OP_ICONST); \
inst->dreg = dr; \
inst->inst_c0 = imm; \
MONO_ADD_INS ((cfg)->cbb, inst); \
} while (0)
#define MONO_EMIT_NEW_PCONST(cfg,dr,val) do { \
MonoInst *inst; \
MONO_INST_NEW ((cfg), (inst), OP_PCONST); \
inst->dreg = dr; \
(inst)->inst_p0 = (val); \
(inst)->type = STACK_PTR; \
MONO_ADD_INS ((cfg)->cbb, inst); \
} while (0)
#define MONO_EMIT_NEW_I8CONST(cfg,dr,imm) do { \
MonoInst *inst; \
MONO_INST_NEW ((cfg), (inst), OP_I8CONST); \
inst->dreg = dr; \
inst->inst_l = imm; \
MONO_ADD_INS ((cfg)->cbb, inst); \
} while (0)
#define MONO_EMIT_NEW_DUMMY_INIT(cfg,dr,op) do { \
MonoInst *inst; \
MONO_INST_NEW ((cfg), (inst), (op)); \
inst->dreg = dr; \
MONO_ADD_INS ((cfg)->cbb, inst); \
} while (0)
#ifdef MONO_ARCH_NEED_GOT_VAR
#define MONO_EMIT_NEW_AOTCONST(cfg,dr,cons,patch_type) do { \
MonoInst *inst; \
NEW_AOTCONST ((cfg), (inst), (patch_type), (cons)); \
inst->dreg = (dr); \
MONO_ADD_INS ((cfg)->cbb, inst); \
} while (0)
#else
#define MONO_EMIT_NEW_AOTCONST(cfg,dr,imm,type) do { \
MonoInst *inst; \
MONO_INST_NEW ((cfg), (inst), cfg->compile_aot ? OP_AOTCONST : OP_PCONST); \
inst->dreg = dr; \
inst->inst_p0 = imm; \
inst->inst_c1 = type; \
MONO_ADD_INS ((cfg)->cbb, inst); \
} while (0)
#endif
#define MONO_EMIT_NEW_CLASSCONST(cfg,dr,imm) MONO_EMIT_NEW_AOTCONST(cfg,dr,imm,MONO_PATCH_INFO_CLASS)
#define MONO_EMIT_NEW_VTABLECONST(cfg,dest,vtable) MONO_EMIT_NEW_AOTCONST ((cfg), (dest), (cfg)->compile_aot ? (gpointer)((vtable)->klass) : (vtable), MONO_PATCH_INFO_VTABLE)
#define MONO_EMIT_NEW_SIGNATURECONST(cfg,dr,sig) MONO_EMIT_NEW_AOTCONST ((cfg), (dr), (sig), MONO_PATCH_INFO_SIGNATURE)
#define MONO_EMIT_NEW_VZERO(cfg,dr,kl) do { \
MonoInst *inst; \
MONO_INST_NEW ((cfg), (inst), MONO_CLASS_IS_SIMD (cfg, kl) ? OP_XZERO : OP_VZERO); \
inst->dreg = dr; \
(inst)->type = STACK_VTYPE; \
(inst)->klass = (kl); \
MONO_ADD_INS ((cfg)->cbb, inst); \
} while (0)
#define MONO_EMIT_NEW_UNALU(cfg,op,dr,sr1) do { \
MonoInst *inst; \
EMIT_NEW_UNALU ((cfg), (inst), (op), (dr), (sr1)); \
} while (0)
#define MONO_EMIT_NEW_BIALU(cfg,op,dr,sr1,sr2) do { \
MonoInst *inst; \
MONO_INST_NEW ((cfg), (inst), (op)); \
inst->dreg = dr; \
inst->sreg1 = sr1; \
inst->sreg2 = sr2; \
MONO_ADD_INS (cfg->cbb, inst); \
} while (0)
#define MONO_EMIT_NEW_BIALU_IMM(cfg,op,dr,sr,imm) do { \
MonoInst *inst; \
MONO_INST_NEW ((cfg), (inst), (op)); \
inst->dreg = dr; \
inst->sreg1 = sr; \
inst->inst_imm = (target_mgreg_t)(imm); \
MONO_ADD_INS (cfg->cbb, inst); \
} while (0)
#define MONO_EMIT_NEW_COMPARE_IMM(cfg,sr1,imm) do { \
MonoInst *inst; \
MONO_INST_NEW ((cfg), (inst), (OP_COMPARE_IMM)); \
inst->sreg1 = sr1; \
inst->inst_imm = (imm); \
MONO_ADD_INS ((cfg)->cbb, inst); \
} while (0)
#define MONO_EMIT_NEW_ICOMPARE_IMM(cfg,sr1,imm) do { \
MonoInst *inst; \
MONO_INST_NEW ((cfg), (inst), sizeof (target_mgreg_t) == 8 ? OP_ICOMPARE_IMM : OP_COMPARE_IMM); \
inst->sreg1 = sr1; \
inst->inst_imm = (imm); \
MONO_ADD_INS ((cfg)->cbb, inst); \
} while (0)
/* This is used on 32 bit machines too when running with LLVM */
#define MONO_EMIT_NEW_LCOMPARE_IMM(cfg,sr1,imm) do { \
MonoInst *inst; \
MONO_INST_NEW ((cfg), (inst), (OP_LCOMPARE_IMM)); \
inst->sreg1 = sr1; \
if (SIZEOF_REGISTER == 4 && COMPILE_LLVM (cfg)) { \
inst->inst_l = (imm); \
} else { \
inst->inst_imm = (imm); \
} \
MONO_ADD_INS ((cfg)->cbb, inst); \
} while (0)
#define MONO_EMIT_NEW_LOAD_MEMBASE_OP(cfg,op,dr,base,offset) do { \
MonoInst *inst; \
MONO_INST_NEW ((cfg), (inst), (op)); \
inst->dreg = dr; \
inst->inst_basereg = base; \
inst->inst_offset = offset; \
MONO_ADD_INS (cfg->cbb, inst); \
} while (0)
#define MONO_EMIT_NEW_LOAD_MEMBASE(cfg,dr,base,offset) MONO_EMIT_NEW_LOAD_MEMBASE_OP ((cfg), (OP_LOAD_MEMBASE), (dr), (base), (offset))
#define MONO_EMIT_NEW_STORE_MEMBASE(cfg,op,base,offset,sr) do { \
MonoInst *inst; \
MONO_INST_NEW ((cfg), (inst), (op)); \
(inst)->sreg1 = sr; \
(inst)->inst_destbasereg = base; \
(inst)->inst_offset = offset; \
MONO_ADD_INS (cfg->cbb, inst); \
} while (0)
#define MONO_EMIT_NEW_STORE_MEMBASE_IMM(cfg,op,base,offset,imm) do { \
MonoInst *inst; \
MONO_INST_NEW ((cfg), (inst), (op)); \
inst->inst_destbasereg = base; \
inst->inst_offset = offset; \
inst->inst_imm = (target_mgreg_t)(imm); \
MONO_ADD_INS ((cfg)->cbb, inst); \
} while (0)
#define MONO_EMIT_NEW_COND_EXC(cfg,cond,name) do { \
MonoInst *inst; \
MONO_INST_NEW ((cfg), (inst), (OP_COND_EXC_##cond)); \
inst->inst_p1 = (char*)name; \
MONO_ADD_INS ((cfg)->cbb, inst); \
} while (0)
/* Branch support */
/*
* Basic blocks have two numeric identifiers:
* dfn: Depth First Number
* block_num: unique ID assigned at bblock creation
*/
#define NEW_BBLOCK(cfg,bblock) do { \
(bblock) = (MonoBasicBlock *)mono_mempool_alloc0 ((cfg)->mempool, sizeof (MonoBasicBlock)); \
(bblock)->block_num = cfg->num_bblocks++; \
} while (0)
#define ADD_BBLOCK(cfg,b) do { \
if ((b)->cil_code) {\
cfg->cil_offset_to_bb [(b)->cil_code - cfg->cil_start] = (b); \
} \
(b)->real_offset = cfg->real_offset; \
} while (0)
/*
* Emit a one-way conditional branch and start a new bblock.
 * The inst_false_bb field of the cond branch will not be set; the JIT code
 * should be prepared to deal with this.
*/
#ifdef DEBUG_EXTENDED_BBLOCKS
static int ccount = 0;
#define MONO_EMIT_NEW_BRANCH_BLOCK(cfg,op,truebb) do { \
MonoInst *ins; \
MonoBasicBlock *falsebb; \
MONO_INST_NEW ((cfg), (ins), (op)); \
if ((op) == OP_BR) { \
NEW_BBLOCK ((cfg), falsebb); \
ins->inst_target_bb = (truebb); \
mono_link_bblock ((cfg), (cfg)->cbb, (truebb)); \
MONO_ADD_INS ((cfg)->cbb, ins); \
MONO_START_BB ((cfg), falsebb); \
} else { \
ccount ++; \
ins->inst_many_bb = mono_mempool_alloc (cfg->mempool, sizeof(gpointer)*2); \
ins->inst_true_bb = (truebb); \
ins->inst_false_bb = NULL; \
mono_link_bblock ((cfg), (cfg)->cbb, (truebb)); \
MONO_ADD_INS ((cfg)->cbb, ins); \
char *count2 = g_getenv ("COUNT2"); \
if (count2 && ccount == atoi (count2) - 1) { printf ("HIT: %d\n", cfg->cbb->block_num); } \
if (count2 && ccount < atoi (count2)) { \
cfg->cbb->extended = TRUE; \
} else { NEW_BBLOCK ((cfg), falsebb); ins->inst_false_bb = (falsebb); mono_link_bblock ((cfg), (cfg)->cbb, (falsebb)); MONO_START_BB ((cfg), falsebb); } \
if (count2) g_free (count2); \
} \
} while (0)
#else
#define MONO_EMIT_NEW_BRANCH_BLOCK(cfg,op,truebb) do { \
MonoInst *ins; \
MonoBasicBlock *falsebb; \
MONO_INST_NEW ((cfg), (ins), (op)); \
if ((op) == OP_BR) { \
NEW_BBLOCK ((cfg), falsebb); \
ins->inst_target_bb = (truebb); \
mono_link_bblock ((cfg), (cfg)->cbb, (truebb)); \
MONO_ADD_INS ((cfg)->cbb, ins); \
MONO_START_BB ((cfg), falsebb); \
} else { \
ins->inst_many_bb = (MonoBasicBlock **)mono_mempool_alloc (cfg->mempool, sizeof(gpointer)*2); \
ins->inst_true_bb = (truebb); \
ins->inst_false_bb = NULL; \
mono_link_bblock ((cfg), (cfg)->cbb, (truebb)); \
MONO_ADD_INS ((cfg)->cbb, ins); \
if (!cfg->enable_extended_bblocks) { \
NEW_BBLOCK ((cfg), falsebb); \
ins->inst_false_bb = falsebb; \
mono_link_bblock ((cfg), (cfg)->cbb, (falsebb)); \
MONO_START_BB ((cfg), falsebb); \
} else { \
cfg->cbb->extended = TRUE; \
} \
} \
} while (0)
#endif
/* Emit a two-way conditional branch */
#define MONO_EMIT_NEW_BRANCH_BLOCK2(cfg,op,truebb,falsebb) do { \
MonoInst *ins; \
MONO_INST_NEW ((cfg), (ins), (op)); \
ins->inst_many_bb = (MonoBasicBlock **)mono_mempool_alloc (cfg->mempool, sizeof(gpointer)*2); \
ins->inst_true_bb = (truebb); \
ins->inst_false_bb = (falsebb); \
mono_link_bblock ((cfg), (cfg)->cbb, (truebb)); \
mono_link_bblock ((cfg), (cfg)->cbb, (falsebb)); \
MONO_ADD_INS ((cfg)->cbb, ins); \
} while (0)
#define MONO_START_BB(cfg, bblock) do { \
ADD_BBLOCK ((cfg), (bblock)); \
if (cfg->cbb->last_ins && MONO_IS_COND_BRANCH_OP (cfg->cbb->last_ins) && !cfg->cbb->last_ins->inst_false_bb) { \
cfg->cbb->last_ins->inst_false_bb = (bblock); \
mono_link_bblock ((cfg), (cfg)->cbb, (bblock)); \
} else if (! (cfg->cbb->last_ins && ((cfg->cbb->last_ins->opcode == OP_BR) || (cfg->cbb->last_ins->opcode == OP_BR_REG) || MONO_IS_COND_BRANCH_OP (cfg->cbb->last_ins)))) { \
mono_link_bblock ((cfg), (cfg)->cbb, (bblock)); \
} \
(cfg)->cbb->next_bb = (bblock); \
(cfg)->cbb = (bblock); \
} while (0)
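/*
 * Illustrative sketch (not part of the original header): the usual pattern
 * for emitting an if/else diamond with the branch macros above. All names
 * are hypothetical; sreg and dreg are assumed to be integer vregs.
 */
#if 0
static void
example_emit_diamond (MonoCompile *cfg, int sreg, int dreg)
{
	MonoBasicBlock *is_zero_bb, *done_bb;

	NEW_BBLOCK (cfg, is_zero_bb);
	NEW_BBLOCK (cfg, done_bb);

	/* if (sreg == 0) goto is_zero_bb; */
	MONO_EMIT_NEW_ICOMPARE_IMM (cfg, sreg, 0);
	MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_IBEQ, is_zero_bb);

	/* fall-through path: sreg != 0 */
	MONO_EMIT_NEW_ICONST (cfg, dreg, 1);
	MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_BR, done_bb);

	MONO_START_BB (cfg, is_zero_bb);
	MONO_EMIT_NEW_ICONST (cfg, dreg, 0);

	MONO_START_BB (cfg, done_bb);
}
#endif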
/* This marks a place in code where an implicit exception could be thrown */
#define MONO_EMIT_NEW_IMPLICIT_EXCEPTION(cfg) do { \
if (COMPILE_LLVM ((cfg))) { \
MONO_EMIT_NEW_UNALU (cfg, OP_IMPLICIT_EXCEPTION, -1, -1); \
} \
} while (0)
/* Loads/Stores which can fault are handled correctly by the LLVM mono branch */
#define MONO_EMIT_NEW_IMPLICIT_EXCEPTION_LOAD_STORE(cfg) do { \
} while (0)
/* Emit an explicit null check which doesn't depend on SIGSEGV signal handling */
#define MONO_EMIT_NULL_CHECK(cfg, reg, out_of_page) do { \
if (cfg->explicit_null_checks || (out_of_page)) { \
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, (reg), 0); \
MONO_EMIT_NEW_COND_EXC (cfg, EQ, "NullReferenceException"); \
} else { \
MONO_EMIT_NEW_IMPLICIT_EXCEPTION_LOAD_STORE (cfg); \
} \
MONO_EMIT_NEW_UNALU (cfg, OP_NOT_NULL, -1, reg); \
} while (0)
#define MONO_EMIT_NEW_CHECK_THIS(cfg, sreg) do { \
cfg->flags |= MONO_CFG_HAS_CHECK_THIS; \
if (cfg->explicit_null_checks) { \
MONO_EMIT_NULL_CHECK (cfg, sreg, FALSE); \
} else { \
MONO_EMIT_NEW_UNALU (cfg, OP_CHECK_THIS, -1, sreg); \
MONO_EMIT_NEW_IMPLICIT_EXCEPTION_LOAD_STORE (cfg); \
MONO_EMIT_NEW_UNALU (cfg, OP_NOT_NULL, -1, sreg); \
} \
} while (0)
#define NEW_LOAD_MEMBASE_FLAGS(cfg,dest,op,dr,base,offset,ins_flags) do { \
int __ins_flags = ins_flags; \
if (__ins_flags & MONO_INST_FAULT) { \
gboolean __out_of_page = offset > mono_target_pagesize (); \
MONO_EMIT_NULL_CHECK ((cfg), (base), __out_of_page); \
} \
NEW_LOAD_MEMBASE ((cfg), (dest), (op), (dr), (base), (offset)); \
(dest)->flags = (__ins_flags); \
} while (0)
#define MONO_EMIT_NEW_LOAD_MEMBASE_OP_FLAGS(cfg,op,dr,base,offset,ins_flags) do { \
MonoInst *inst; \
int __ins_flags = ins_flags; \
if (__ins_flags & MONO_INST_FAULT) { \
int __out_of_page = offset > mono_target_pagesize (); \
MONO_EMIT_NULL_CHECK ((cfg), (base), __out_of_page); \
} \
NEW_LOAD_MEMBASE ((cfg), (inst), (op), (dr), (base), (offset)); \
inst->flags = (__ins_flags); \
MONO_ADD_INS (cfg->cbb, inst); \
} while (0)
#define MONO_EMIT_NEW_LOAD_MEMBASE_FLAGS(cfg,dr,base,offset,ins_flags) MONO_EMIT_NEW_LOAD_MEMBASE_OP_FLAGS ((cfg), (OP_LOAD_MEMBASE), (dr), (base), (offset),(ins_flags))
/* A load which can cause a nullref */
#define NEW_LOAD_MEMBASE_FAULT(cfg,dest,op,dr,base,offset) NEW_LOAD_MEMBASE_FLAGS ((cfg), (dest), (op), (dr), (base), (offset), MONO_INST_FAULT)
#define EMIT_NEW_LOAD_MEMBASE_FAULT(cfg,dest,op,dr,base,offset) do { \
NEW_LOAD_MEMBASE_FAULT ((cfg), (dest), (op), (dr), (base), (offset)); \
MONO_ADD_INS ((cfg)->cbb, (dest)); \
} while (0)
#define MONO_EMIT_NEW_LOAD_MEMBASE_OP_FAULT(cfg,op,dr,base,offset) MONO_EMIT_NEW_LOAD_MEMBASE_OP_FLAGS ((cfg), (op), (dr), (base), (offset), MONO_INST_FAULT)
#define MONO_EMIT_NEW_LOAD_MEMBASE_FAULT(cfg,dr,base,offset) MONO_EMIT_NEW_LOAD_MEMBASE_OP_FAULT ((cfg), (OP_LOAD_MEMBASE), (dr), (base), (offset))
#define NEW_LOAD_MEMBASE_INVARIANT(cfg,dest,op,dr,base,offset) NEW_LOAD_MEMBASE_FLAGS ((cfg), (dest), (op), (dr), (base), (offset), MONO_INST_INVARIANT_LOAD)
#define MONO_EMIT_NEW_LOAD_MEMBASE_OP_INVARIANT(cfg,op,dr,base,offset) MONO_EMIT_NEW_LOAD_MEMBASE_OP_FLAGS ((cfg), (op), (dr), (base), (offset), MONO_INST_INVARIANT_LOAD)
#define MONO_EMIT_NEW_LOAD_MEMBASE_INVARIANT(cfg,dr,base,offset) MONO_EMIT_NEW_LOAD_MEMBASE_OP_INVARIANT ((cfg), (OP_LOAD_MEMBASE), (dr), (base), (offset))
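/*
 * Illustrative sketch (not part of the original header): a faulting load,
 * e.g. fetching an object's vtable, which implicitly null-checks the object.
 * vtable_reg and obj_reg are hypothetical vregs.
 */
#if 0
	MONO_EMIT_NEW_LOAD_MEMBASE_FAULT (cfg, vtable_reg, obj_reg, MONO_STRUCT_OFFSET (MonoObject, vtable));
#endif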
/* Object Model related macros */
/* Default bounds check implementation for most architectures + llvm */
#define MONO_EMIT_DEFAULT_BOUNDS_CHECK(cfg, array_reg, offset, index_reg, fault, ex_name) do { \
int _length_reg = alloc_ireg (cfg); \
if (fault) \
MONO_EMIT_NEW_LOAD_MEMBASE_OP_FAULT (cfg, OP_LOADI4_MEMBASE, _length_reg, array_reg, offset); \
else \
MONO_EMIT_NEW_LOAD_MEMBASE_OP_FLAGS (cfg, OP_LOADI4_MEMBASE, _length_reg, array_reg, offset, MONO_INST_INVARIANT_LOAD); \
MONO_EMIT_NEW_BIALU (cfg, OP_COMPARE, -1, _length_reg, index_reg); \
MONO_EMIT_NEW_COND_EXC (cfg, LE_UN, ex_name); \
} while (0)
#ifndef MONO_ARCH_EMIT_BOUNDS_CHECK
#define MONO_ARCH_EMIT_BOUNDS_CHECK(cfg, array_reg, offset, index_reg, ex_name) MONO_EMIT_DEFAULT_BOUNDS_CHECK ((cfg), (array_reg), (offset), (index_reg), TRUE, ex_name)
#endif
static inline void
mini_emit_bounds_check_offset (MonoCompile *cfg, int array_reg, int array_length_offset, int index_reg, const char *ex_name)
{
if (!(cfg->opt & MONO_OPT_UNSAFE)) {
ex_name = ex_name ? ex_name : "IndexOutOfRangeException";
if (!(cfg->opt & MONO_OPT_ABCREM)) {
MONO_EMIT_NULL_CHECK (cfg, array_reg, FALSE);
if (COMPILE_LLVM (cfg))
MONO_EMIT_DEFAULT_BOUNDS_CHECK ((cfg), (array_reg), (array_length_offset), (index_reg), TRUE, ex_name);
else
MONO_ARCH_EMIT_BOUNDS_CHECK ((cfg), (array_reg), (array_length_offset), (index_reg), ex_name);
} else {
MonoInst *ins;
MONO_INST_NEW ((cfg), ins, OP_BOUNDS_CHECK);
ins->sreg1 = array_reg;
ins->sreg2 = index_reg;
ins->inst_p0 = (gpointer)ex_name;
ins->inst_imm = (array_length_offset);
ins->flags |= MONO_INST_FAULT;
MONO_ADD_INS ((cfg)->cbb, ins);
(cfg)->flags |= MONO_CFG_NEEDS_DECOMPOSE;
(cfg)->cbb->needs_decompose = TRUE;
}
}
}
/* cfg is the MonoCompile being used
* array_reg is the vreg holding the array object
* array_type is a struct (usually MonoArray or MonoString)
* array_length_field is the field in the previous struct with the length
* index_reg is the vreg holding the index
*/
#define MONO_EMIT_BOUNDS_CHECK(cfg, array_reg, array_type, array_length_field, index_reg) do { \
mini_emit_bounds_check_offset ((cfg), (array_reg), MONO_STRUCT_OFFSET (array_type, array_length_field), (index_reg), NULL); \
} while (0)
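/*
 * Illustrative sketch (not part of the original header): a typical
 * array-element bounds check against MonoArray's max_length field.
 * array_reg and index_reg are hypothetical vregs.
 */
#if 0
	MONO_EMIT_BOUNDS_CHECK (cfg, array_reg, MonoArray, max_length, index_reg);
#endif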
#endif /* __MONO_IR_EMIT_H__ */
| /**
* \file
* IR Creation/Emission Macros
*
* Author:
* Zoltan Varga ([email protected])
*
* (C) 2002 Ximian, Inc.
*/
#ifndef __MONO_IR_EMIT_H__
#define __MONO_IR_EMIT_H__
#include "mini.h"
static inline guint32
alloc_ireg (MonoCompile *cfg)
{
return cfg->next_vreg ++;
}
static inline guint32
alloc_preg (MonoCompile *cfg)
{
return alloc_ireg (cfg);
}
static inline guint32
alloc_lreg (MonoCompile *cfg)
{
#if SIZEOF_REGISTER == 8
return cfg->next_vreg ++;
#else
/* Use a pair of consecutive vregs */
guint32 res = cfg->next_vreg;
cfg->next_vreg += 3;
return res;
#endif
}
static inline guint32
alloc_freg (MonoCompile *cfg)
{
if (mono_arch_is_soft_float ()) {
/* Allocate an lvreg so float ops can be decomposed into long ops */
return alloc_lreg (cfg);
} else {
/* Allocate these from the same pool as the int regs */
return cfg->next_vreg ++;
}
}
static inline guint32
alloc_ireg_ref (MonoCompile *cfg)
{
int vreg = alloc_ireg (cfg);
if (cfg->compute_gc_maps)
mono_mark_vreg_as_ref (cfg, vreg);
#ifdef TARGET_WASM
mono_mark_vreg_as_ref (cfg, vreg);
#endif
return vreg;
}
static inline guint32
alloc_ireg_mp (MonoCompile *cfg)
{
int vreg = alloc_ireg (cfg);
if (cfg->compute_gc_maps)
mono_mark_vreg_as_mp (cfg, vreg);
return vreg;
}
static inline guint32
alloc_xreg (MonoCompile *cfg)
{
return alloc_ireg (cfg);
}
static inline guint32
alloc_dreg (MonoCompile *cfg, MonoStackType stack_type)
{
switch (stack_type) {
case STACK_I4:
case STACK_PTR:
return alloc_ireg (cfg);
case STACK_MP:
return alloc_ireg_mp (cfg);
case STACK_OBJ:
return alloc_ireg_ref (cfg);
case STACK_R4:
case STACK_R8:
return alloc_freg (cfg);
case STACK_I8:
return alloc_lreg (cfg);
case STACK_VTYPE:
return alloc_ireg (cfg);
default:
g_warning ("Unknown stack type %x\n", stack_type);
g_assert_not_reached ();
return -1;
}
}
/*
* Macros used to generate intermediate representation macros
*
* The macros use a `MonoConfig` object as its context, and among other
* things it is used to associate instructions with the memory pool with
* it.
*
* The macros come in three variations with slightly different
* features, the patter is: NEW_OP, EMIT_NEW_OP, MONO_EMIT_NEW_OP,
* the differences are as follows:
*
* `NEW_OP`: these are the basic macros to setup an instruction that is
* passed as an argument.
*
* `EMIT_NEW_OP`: these macros in addition to creating the instruction
* add the instruction to the current basic block in the `MonoConfig`
* object passed. Usually these are used when further customization of
* the `inst` parameter is desired before the instruction is added to the
* MonoConfig current basic block.
*
* `MONO_EMIT_NEW_OP`: These variations of the instructions are used when
* you are merely interested in emitting the instruction into the `MonoConfig`
* parameter.
*/
#undef MONO_INST_NEW
/*
* FIXME: zeroing out some fields is not needed with the new IR, but the old
* JIT code still uses the left and right fields, so it has to stay.
*/
/*
* MONO_INST_NEW: create a new MonoInst instance that is allocated on the MonoConfig pool.
*
* @cfg: the MonoConfig object that will be used as the context for the
* instruction.
* @dest: this is the place where the instance of the `MonoInst` is stored.
* @op: the value that should be stored in the MonoInst.opcode field
*
* This initializes an empty MonoInst that has been nulled out, it is allocated
* from the memory pool associated with the MonoConfig, but it is not linked anywhere.
* the cil_code is set to the cfg->ip address.
*/
#define MONO_INST_NEW(cfg,dest,op) do { \
(dest) = (MonoInst *)mono_mempool_alloc ((cfg)->mempool, sizeof (MonoInst)); \
(dest)->inst_i0 = (dest)->inst_i1 = 0; \
(dest)->next = (dest)->prev = NULL; \
(dest)->opcode = (op); \
(dest)->flags = 0; \
(dest)->type = 0; \
(dest)->dreg = -1; \
MONO_INST_NULLIFY_SREGS ((dest)); \
(dest)->cil_code = (cfg)->ip; \
} while (0)
/*
* Variants which take a dest argument and don't do an emit
*/
#define NEW_ICONST(cfg,dest,val) do { \
MONO_INST_NEW ((cfg), (dest), OP_ICONST); \
(dest)->inst_c0 = (val); \
(dest)->type = STACK_I4; \
(dest)->dreg = alloc_dreg ((cfg), STACK_I4); \
} while (0)
/*
* Avoid using this with a non-NULL val if possible as it is not AOT
* compatible. Use one of the NEW_xxxCONST variants instead.
*/
#define NEW_PCONST(cfg,dest,val) do { \
MONO_INST_NEW ((cfg), (dest), OP_PCONST); \
(dest)->inst_p0 = (val); \
(dest)->type = STACK_PTR; \
(dest)->dreg = alloc_dreg ((cfg), STACK_PTR); \
} while (0)
#define NEW_I8CONST(cfg,dest,val) do { \
MONO_INST_NEW ((cfg), (dest), OP_I8CONST); \
(dest)->dreg = alloc_lreg ((cfg)); \
(dest)->type = STACK_I8; \
(dest)->inst_l = (val); \
} while (0)
#define NEW_STORE_MEMBASE(cfg,dest,op,base,offset,sr) do { \
MONO_INST_NEW ((cfg), (dest), (op)); \
(dest)->sreg1 = sr; \
(dest)->inst_destbasereg = base; \
(dest)->inst_offset = offset; \
} while (0)
#define NEW_LOAD_MEMBASE(cfg,dest,op,dr,base,offset) do { \
MONO_INST_NEW ((cfg), (dest), (op)); \
(dest)->dreg = (dr); \
(dest)->inst_basereg = (base); \
(dest)->inst_offset = (offset); \
(dest)->type = STACK_I4; \
} while (0)
#define NEW_LOAD_MEM(cfg,dest,op,dr,mem) do { \
MONO_INST_NEW ((cfg), (dest), (op)); \
(dest)->dreg = (dr); \
(dest)->inst_p0 = (gpointer)(gssize)(mem); \
(dest)->type = STACK_I4; \
} while (0)
#define NEW_UNALU(cfg,dest,op,dr,sr1) do { \
MONO_INST_NEW ((cfg), (dest), (op)); \
(dest)->dreg = dr; \
(dest)->sreg1 = sr1; \
} while (0)
#define NEW_BIALU(cfg,dest,op,dr,sr1,sr2) do { \
MONO_INST_NEW ((cfg), (dest), (op)); \
(dest)->dreg = (dr); \
(dest)->sreg1 = (sr1); \
(dest)->sreg2 = (sr2); \
} while (0)
#define NEW_BIALU_IMM(cfg,dest,op,dr,sr,imm) do { \
MONO_INST_NEW ((cfg), (dest), (op)); \
(dest)->dreg = dr; \
(dest)->sreg1 = sr; \
(dest)->inst_imm = (imm); \
} while (0)
#define NEW_PATCH_INFO(cfg,dest,el1,el2) do { \
MONO_INST_NEW ((cfg), (dest), OP_PATCH_INFO); \
(dest)->inst_left = (MonoInst*)(el1); \
(dest)->inst_right = (MonoInst*)(el2); \
} while (0)
#define NEW_AOTCONST_GOT_VAR(cfg,dest,patch_type,cons) do { \
MONO_INST_NEW ((cfg), (dest), cfg->compile_aot ? OP_GOT_ENTRY : OP_PCONST); \
if (cfg->compile_aot) { \
MonoInst *group, *got_loc; \
got_loc = mono_get_got_var (cfg); \
NEW_PATCH_INFO ((cfg), group, cons, patch_type); \
(dest)->inst_basereg = got_loc->dreg; \
(dest)->inst_p1 = group; \
} else { \
(dest)->inst_p0 = (cons); \
(dest)->inst_i1 = (MonoInst*)(patch_type); \
} \
(dest)->type = STACK_PTR; \
(dest)->dreg = alloc_dreg ((cfg), STACK_PTR); \
} while (0)
#define NEW_AOTCONST_TOKEN_GOT_VAR(cfg,dest,patch_type,image,token,generic_context,stack_type,stack_class) do { \
MonoInst *group, *got_loc; \
MONO_INST_NEW ((cfg), (dest), OP_GOT_ENTRY); \
got_loc = mono_get_got_var (cfg); \
NEW_PATCH_INFO ((cfg), group, NULL, patch_type); \
group->inst_p0 = mono_jump_info_token_new2 ((cfg)->mempool, (image), (token), (generic_context)); \
(dest)->inst_basereg = got_loc->dreg; \
(dest)->inst_p1 = group; \
(dest)->type = (stack_type); \
(dest)->klass = (stack_class); \
(dest)->dreg = alloc_dreg ((cfg), (stack_type)); \
} while (0)
#define NEW_AOTCONST(cfg,dest,patch_type,cons) do { \
if (cfg->backend->need_got_var && !cfg->llvm_only) { \
NEW_AOTCONST_GOT_VAR ((cfg), (dest), (patch_type), (cons)); \
} else { \
MONO_INST_NEW ((cfg), (dest), cfg->compile_aot ? OP_AOTCONST : OP_PCONST); \
(dest)->inst_p0 = (cons); \
(dest)->inst_p1 = GUINT_TO_POINTER (patch_type); \
(dest)->type = STACK_PTR; \
(dest)->dreg = alloc_dreg ((cfg), STACK_PTR); \
} \
} while (0)
#define NEW_AOTCONST_TOKEN(cfg,dest,patch_type,image,token,generic_context,stack_type,stack_class) do { \
if (cfg->backend->need_got_var && !cfg->llvm_only) { \
NEW_AOTCONST_TOKEN_GOT_VAR ((cfg), (dest), (patch_type), (image), (token), (generic_context), (stack_type), (stack_class)); \
} else { \
MONO_INST_NEW ((cfg), (dest), OP_AOTCONST); \
(dest)->inst_p0 = mono_jump_info_token_new2 ((cfg)->mempool, (image), (token), (generic_context)); \
(dest)->inst_p1 = (gpointer)(patch_type); \
(dest)->type = (stack_type); \
(dest)->klass = (stack_class); \
(dest)->dreg = alloc_dreg ((cfg), (stack_type)); \
} \
} while (0)
#define NEW_CLASSCONST(cfg,dest,val) NEW_AOTCONST ((cfg), (dest), MONO_PATCH_INFO_CLASS, (val))
#define NEW_IMAGECONST(cfg,dest,val) NEW_AOTCONST ((cfg), (dest), MONO_PATCH_INFO_IMAGE, (val))
#define NEW_FIELDCONST(cfg,dest,val) NEW_AOTCONST ((cfg), (dest), MONO_PATCH_INFO_FIELD, (val))
#define NEW_METHODCONST(cfg,dest,val) NEW_AOTCONST ((cfg), (dest), MONO_PATCH_INFO_METHODCONST, (val))
#define NEW_VTABLECONST(cfg,dest,vtable) NEW_AOTCONST ((cfg), (dest), MONO_PATCH_INFO_VTABLE, cfg->compile_aot ? (gpointer)((vtable)->klass) : (vtable))
#define NEW_SFLDACONST(cfg,dest,val) NEW_AOTCONST ((cfg), (dest), MONO_PATCH_INFO_SFLDA, (val))
#define NEW_LDSTRCONST(cfg,dest,image,token) NEW_AOTCONST_TOKEN ((cfg), (dest), MONO_PATCH_INFO_LDSTR, (image), (token), NULL, STACK_OBJ, mono_defaults.string_class)
#define NEW_LDSTRLITCONST(cfg,dest,val) NEW_AOTCONST ((cfg), (dest), MONO_PATCH_INFO_LDSTR_LIT, (val))
#define NEW_TYPE_FROM_HANDLE_CONST(cfg,dest,image,token,generic_context) NEW_AOTCONST_TOKEN ((cfg), (dest), MONO_PATCH_INFO_TYPE_FROM_HANDLE, (image), (token), (generic_context), STACK_OBJ, mono_defaults.runtimetype_class)
#define NEW_LDTOKENCONST(cfg,dest,image,token,generic_context) NEW_AOTCONST_TOKEN ((cfg), (dest), MONO_PATCH_INFO_LDTOKEN, (image), (token), (generic_context), STACK_PTR, NULL)
#define NEW_DECLSECCONST(cfg,dest,image,entry) do { \
if (cfg->compile_aot) { \
NEW_AOTCONST_TOKEN (cfg, dest, MONO_PATCH_INFO_DECLSEC, image, (entry).index, NULL, STACK_OBJ, NULL); \
} else { \
NEW_PCONST (cfg, args [0], (entry).blob); \
} \
} while (0)
#define NEW_METHOD_RGCTX_CONST(cfg,dest,method) do { \
if (cfg->compile_aot) { \
NEW_AOTCONST ((cfg), (dest), MONO_PATCH_INFO_METHOD_RGCTX, (method)); \
} else { \
MonoMethodRuntimeGenericContext *mrgctx; \
mrgctx = (MonoMethodRuntimeGenericContext*)mini_method_get_rgctx ((method)); \
NEW_PCONST ((cfg), (dest), (mrgctx)); \
} \
} while (0)
#define NEW_JIT_ICALL_ADDRCONST(cfg, dest, jit_icall_id) NEW_AOTCONST ((cfg), (dest), MONO_PATCH_INFO_JIT_ICALL_ADDR, (jit_icall_id))
#define NEW_VARLOAD(cfg,dest,var,vartype) do { \
MONO_INST_NEW ((cfg), (dest), OP_MOVE); \
(dest)->opcode = mono_type_to_regmove ((cfg), (vartype)); \
mini_type_to_eval_stack_type ((cfg), (vartype), (dest)); \
(dest)->klass = var->klass; \
(dest)->sreg1 = var->dreg; \
(dest)->dreg = alloc_dreg ((cfg), (MonoStackType)(dest)->type); \
if ((dest)->opcode == OP_VMOVE) (dest)->klass = mono_class_from_mono_type_internal ((vartype)); \
} while (0)
#define DECOMPOSE_INTO_REGPAIR(stack_type) (mono_arch_is_soft_float () ? ((stack_type) == STACK_I8 || (stack_type) == STACK_R8) : ((stack_type) == STACK_I8))
static inline void
handle_gsharedvt_ldaddr (MonoCompile *cfg)
{
/* The decomposition of ldaddr makes use of these two variables, so add uses for them */
MonoInst *use;
MONO_INST_NEW (cfg, use, OP_DUMMY_USE);
use->sreg1 = cfg->gsharedvt_info_var->dreg;
MONO_ADD_INS (cfg->cbb, use);
MONO_INST_NEW (cfg, use, OP_DUMMY_USE);
use->sreg1 = cfg->gsharedvt_locals_var->dreg;
MONO_ADD_INS (cfg->cbb, use);
}
#define NEW_VARLOADA(cfg,dest,var,vartype) do { \
MONO_INST_NEW ((cfg), (dest), OP_LDADDR); \
(dest)->inst_p0 = (var); \
(var)->flags |= MONO_INST_INDIRECT; \
(dest)->type = STACK_MP; \
(dest)->klass = (var)->klass; \
(dest)->dreg = alloc_dreg ((cfg), STACK_MP); \
(cfg)->has_indirection = TRUE; \
if (G_UNLIKELY (cfg->gsharedvt) && mini_is_gsharedvt_variable_type ((var)->inst_vtype)) { handle_gsharedvt_ldaddr ((cfg)); } \
if (SIZEOF_REGISTER == 4 && DECOMPOSE_INTO_REGPAIR ((var)->type)) { MonoInst *var1 = get_vreg_to_inst (cfg, MONO_LVREG_LS ((var)->dreg)); MonoInst *var2 = get_vreg_to_inst (cfg, MONO_LVREG_MS ((var)->dreg)); g_assert (var1); g_assert (var2); var1->flags |= MONO_INST_INDIRECT; var2->flags |= MONO_INST_INDIRECT; } \
} while (0)
#define NEW_VARSTORE(cfg,dest,var,vartype,inst) do { \
MONO_INST_NEW ((cfg), (dest), OP_MOVE); \
(dest)->opcode = mono_type_to_regmove ((cfg), (vartype)); \
(dest)->klass = (var)->klass; \
(dest)->sreg1 = (inst)->dreg; \
(dest)->dreg = (var)->dreg; \
if ((dest)->opcode == OP_VMOVE) (dest)->klass = mono_class_from_mono_type_internal ((vartype)); \
} while (0)
#define NEW_TEMPLOAD(cfg,dest,num) NEW_VARLOAD ((cfg), (dest), (cfg)->varinfo [(num)], (cfg)->varinfo [(num)]->inst_vtype)
#define NEW_TEMPLOADA(cfg,dest,num) NEW_VARLOADA ((cfg), (dest), cfg->varinfo [(num)], cfg->varinfo [(num)]->inst_vtype)
#define NEW_TEMPSTORE(cfg,dest,num,inst) NEW_VARSTORE ((cfg), (dest), (cfg)->varinfo [(num)], (cfg)->varinfo [(num)]->inst_vtype, (inst))
#define NEW_ARGLOAD(cfg,dest,num) NEW_VARLOAD ((cfg), (dest), cfg->args [(num)], cfg->arg_types [(num)])
#define NEW_LOCLOAD(cfg,dest,num) NEW_VARLOAD ((cfg), (dest), cfg->locals [(num)], header->locals [(num)])
#define NEW_LOCSTORE(cfg,dest,num,inst) NEW_VARSTORE ((cfg), (dest), (cfg)->locals [(num)], (cfg)->locals [(num)]->inst_vtype, (inst))
#define NEW_ARGSTORE(cfg,dest,num,inst) NEW_VARSTORE ((cfg), (dest), cfg->args [(num)], cfg->arg_types [(num)], (inst))
#define NEW_LOCLOADA(cfg,dest,num) NEW_VARLOADA ((cfg), (dest), (cfg)->locals [(num)], (cfg)->locals [(num)]->inst_vtype)
#define NEW_RETLOADA(cfg,dest) do { \
MONO_INST_NEW ((cfg), (dest), OP_MOVE); \
(dest)->type = STACK_MP; \
(dest)->klass = cfg->ret->klass; \
(dest)->sreg1 = cfg->vret_addr->dreg; \
(dest)->dreg = alloc_dreg ((cfg), (MonoStackType)(dest)->type); \
} while (0)
#define NEW_ARGLOADA(cfg,dest,num) NEW_VARLOADA ((cfg), (dest), arg_array [(num)], param_types [(num)])
/* Promote the vreg to a variable so its address can be taken */
#define NEW_VARLOADA_VREG(cfg,dest,vreg,ltype) do { \
MonoInst *var = get_vreg_to_inst ((cfg), (vreg)); \
if (!var) \
var = mono_compile_create_var_for_vreg ((cfg), (ltype), OP_LOCAL, (vreg)); \
NEW_VARLOADA ((cfg), (dest), (var), (ltype)); \
} while (0)
#define NEW_DUMMY_USE(cfg,dest,var) do { \
MONO_INST_NEW ((cfg), (dest), OP_DUMMY_USE); \
(dest)->sreg1 = var->dreg; \
} while (0)
/* Variants which take a type argument and handle vtypes as well */
#define NEW_LOAD_MEMBASE_TYPE(cfg,dest,ltype,base,offset) do { \
NEW_LOAD_MEMBASE ((cfg), (dest), mono_type_to_load_membase ((cfg), (ltype)), 0, (base), (offset)); \
mini_type_to_eval_stack_type ((cfg), (ltype), (dest)); \
(dest)->dreg = alloc_dreg ((cfg), (MonoStackType)(dest)->type); \
} while (0)
#define NEW_STORE_MEMBASE_TYPE(cfg,dest,ltype,base,offset,sr) do { \
MONO_INST_NEW ((cfg), (dest), mono_type_to_store_membase ((cfg), (ltype))); \
(dest)->sreg1 = sr; \
(dest)->inst_destbasereg = base; \
(dest)->inst_offset = offset; \
mini_type_to_eval_stack_type ((cfg), (ltype), (dest)); \
(dest)->klass = mono_class_from_mono_type_internal (ltype); \
} while (0)
#define NEW_SEQ_POINT(cfg,dest,il_offset,intr_loc) do { \
MONO_INST_NEW ((cfg), (dest), cfg->gen_sdb_seq_points ? OP_SEQ_POINT : OP_IL_SEQ_POINT); \
(dest)->inst_imm = (il_offset); \
(dest)->flags = intr_loc ? MONO_INST_SINGLE_STEP_LOC : 0; \
} while (0)
#define NEW_GC_PARAM_SLOT_LIVENESS_DEF(cfg,dest,offset,type) do { \
MONO_INST_NEW ((cfg), (dest), OP_GC_PARAM_SLOT_LIVENESS_DEF); \
(dest)->inst_offset = (offset); \
(dest)->inst_vtype = (type); \
} while (0)
/*
* Variants which do an emit as well.
*/
#define EMIT_NEW_ICONST(cfg,dest,val) do { NEW_ICONST ((cfg), (dest), (val)); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#define EMIT_NEW_PCONST(cfg,dest,val) do { NEW_PCONST ((cfg), (dest), (val)); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#define EMIT_NEW_I8CONST(cfg,dest,val) do { NEW_I8CONST ((cfg), (dest), (val)); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#define EMIT_NEW_AOTCONST(cfg,dest,patch_type,cons) do { NEW_AOTCONST ((cfg), (dest), (patch_type), (cons)); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#define EMIT_NEW_AOTCONST_TOKEN(cfg,dest,patch_type,image,token,stack_type,stack_class) do { NEW_AOTCONST_TOKEN ((cfg), (dest), (patch_type), (image), (token), NULL, (stack_type), (stack_class)); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#define EMIT_NEW_CLASSCONST(cfg,dest,val) do { NEW_AOTCONST ((cfg), (dest), MONO_PATCH_INFO_CLASS, (val)); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#define EMIT_NEW_IMAGECONST(cfg,dest,val) do { NEW_AOTCONST ((cfg), (dest), MONO_PATCH_INFO_IMAGE, (val)); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#define EMIT_NEW_FIELDCONST(cfg,dest,val) do { NEW_AOTCONST ((cfg), (dest), MONO_PATCH_INFO_FIELD, (val)); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#define EMIT_NEW_METHODCONST(cfg,dest,val) do { NEW_AOTCONST ((cfg), (dest), MONO_PATCH_INFO_METHODCONST, (val)); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#define EMIT_NEW_VTABLECONST(cfg,dest,vtable) do { NEW_AOTCONST ((cfg), (dest), MONO_PATCH_INFO_VTABLE, cfg->compile_aot ? (gpointer)((vtable)->klass) : (vtable)); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#define EMIT_NEW_SFLDACONST(cfg,dest,val) do { NEW_AOTCONST ((cfg), (dest), MONO_PATCH_INFO_SFLDA, (val)); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#define EMIT_NEW_LDSTRCONST(cfg,dest,image,token) do { NEW_AOTCONST_TOKEN ((cfg), (dest), MONO_PATCH_INFO_LDSTR, (image), (token), NULL, STACK_OBJ, mono_defaults.string_class); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#define EMIT_NEW_LDSTRLITCONST(cfg,dest,val) do { NEW_AOTCONST ((cfg), (dest), MONO_PATCH_INFO_LDSTR_LIT, (val)); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#define EMIT_NEW_TYPE_FROM_HANDLE_CONST(cfg,dest,image,token,generic_context) do { NEW_AOTCONST_TOKEN ((cfg), (dest), MONO_PATCH_INFO_TYPE_FROM_HANDLE, (image), (token), (generic_context), STACK_OBJ, mono_defaults.runtimetype_class); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#define EMIT_NEW_LDTOKENCONST(cfg,dest,image,token,generic_context) do { NEW_AOTCONST_TOKEN ((cfg), (dest), MONO_PATCH_INFO_LDTOKEN, (image), (token), (generic_context), STACK_PTR, NULL); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#define EMIT_NEW_TLS_OFFSETCONST(cfg,dest,key) do { NEW_TLS_OFFSETCONST ((cfg), (dest), (key)); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#define EMIT_NEW_DECLSECCONST(cfg,dest,image,entry) do { NEW_DECLSECCONST ((cfg), (dest), (image), (entry)); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#define EMIT_NEW_METHOD_RGCTX_CONST(cfg,dest,method) do { NEW_METHOD_RGCTX_CONST ((cfg), (dest), (method)); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#define EMIT_NEW_JIT_ICALL_ADDRCONST(cfg, dest, jit_icall_id) do { NEW_JIT_ICALL_ADDRCONST ((cfg), (dest), (jit_icall_id)); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#define EMIT_NEW_VARLOAD(cfg,dest,var,vartype) do { NEW_VARLOAD ((cfg), (dest), (var), (vartype)); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#define EMIT_NEW_VARSTORE(cfg,dest,var,vartype,inst) do { NEW_VARSTORE ((cfg), (dest), (var), (vartype), (inst)); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#define EMIT_NEW_VARLOADA(cfg,dest,var,vartype) do { NEW_VARLOADA ((cfg), (dest), (var), (vartype)); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#ifdef MONO_ARCH_SOFT_FLOAT_FALLBACK
/*
* Since the IL stack (and our vregs) contain double values, we have to do a conversion
* when loading/storing args/locals of type R4.
*/
#define EMIT_NEW_VARLOAD_SFLOAT(cfg,dest,var,vartype) do { \
if (!COMPILE_LLVM ((cfg)) && !m_type_is_byref ((vartype)) && (vartype)->type == MONO_TYPE_R4) { \
MonoInst *iargs [1]; \
EMIT_NEW_VARLOADA (cfg, iargs [0], (var), (vartype)); \
(dest) = mono_emit_jit_icall (cfg, mono_fload_r4, iargs); \
} else { \
EMIT_NEW_VARLOAD ((cfg), (dest), (var), (vartype)); \
} \
} while (0)
#define EMIT_NEW_VARSTORE_SFLOAT(cfg,dest,var,vartype,inst) do { \
if (COMPILE_SOFT_FLOAT ((cfg)) && !m_type_is_byref ((vartype)) && (vartype)->type == MONO_TYPE_R4) { \
MonoInst *iargs [2]; \
iargs [0] = (inst); \
EMIT_NEW_VARLOADA (cfg, iargs [1], (var), (vartype)); \
(dest) = mono_emit_jit_icall (cfg, mono_fstore_r4, iargs); \
} else { \
EMIT_NEW_VARSTORE ((cfg), (dest), (var), (vartype), (inst)); \
} \
} while (0)
#define EMIT_NEW_ARGLOAD(cfg,dest,num) do { \
if (mono_arch_is_soft_float ()) { \
EMIT_NEW_VARLOAD_SFLOAT ((cfg), (dest), cfg->args [(num)], cfg->arg_types [(num)]); \
} else { \
NEW_ARGLOAD ((cfg), (dest), (num)); \
MONO_ADD_INS ((cfg)->cbb, (dest)); \
} \
} while (0)
#define EMIT_NEW_LOCLOAD(cfg,dest,num) do { \
if (mono_arch_is_soft_float ()) { \
EMIT_NEW_VARLOAD_SFLOAT ((cfg), (dest), cfg->locals [(num)], header->locals [(num)]); \
} else { \
NEW_LOCLOAD ((cfg), (dest), (num)); \
MONO_ADD_INS ((cfg)->cbb, (dest)); \
} \
} while (0)
#define EMIT_NEW_LOCSTORE(cfg,dest,num,inst) do { \
if (mono_arch_is_soft_float ()) { \
EMIT_NEW_VARSTORE_SFLOAT ((cfg), (dest), (cfg)->locals [(num)], (cfg)->locals [(num)]->inst_vtype, (inst)); \
} else { \
NEW_LOCSTORE ((cfg), (dest), (num), (inst)); \
MONO_ADD_INS ((cfg)->cbb, (dest)); \
} \
} while (0)
#define EMIT_NEW_ARGSTORE(cfg,dest,num,inst) do { \
if (mono_arch_is_soft_float ()) { \
EMIT_NEW_VARSTORE_SFLOAT ((cfg), (dest), cfg->args [(num)], cfg->arg_types [(num)], (inst)); \
} else { \
NEW_ARGSTORE ((cfg), (dest), (num), (inst)); \
MONO_ADD_INS ((cfg)->cbb, (dest)); \
} \
} while (0)
#else
#define EMIT_NEW_ARGLOAD(cfg,dest,num) do { NEW_ARGLOAD ((cfg), (dest), (num)); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#define EMIT_NEW_LOCLOAD(cfg,dest,num) do { NEW_LOCLOAD ((cfg), (dest), (num)); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#define EMIT_NEW_LOCSTORE(cfg,dest,num,inst) do { NEW_LOCSTORE ((cfg), (dest), (num), (inst)); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#define EMIT_NEW_ARGSTORE(cfg,dest,num,inst) do { NEW_ARGSTORE ((cfg), (dest), (num), (inst)); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#endif
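/*
 * Illustrative sketch (an addition for exposition, not part of the original
 * header): copy the method's first argument into its first local using the
 * load/store wrappers above. The helper name is hypothetical; any soft-float
 * R4 conversion is handled transparently by the macros.
 */
static inline void
mini_example_copy_arg0_to_loc0 (MonoCompile *cfg)
{
	MonoInst *load, *store;

	/* Appends the load of argument 0 to the current bblock, cfg->cbb. */
	EMIT_NEW_ARGLOAD (cfg, load, 0);
	/* Stores the loaded value into local variable 0. */
	EMIT_NEW_LOCSTORE (cfg, store, 0, load);
}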
#define EMIT_NEW_TEMPLOAD(cfg,dest,num) do { NEW_TEMPLOAD ((cfg), (dest), (num)); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#define EMIT_NEW_TEMPLOADA(cfg,dest,num) do { NEW_TEMPLOADA ((cfg), (dest), (num)); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#define EMIT_NEW_LOCLOADA(cfg,dest,num) do { NEW_LOCLOADA ((cfg), (dest), (num)); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#define EMIT_NEW_ARGLOADA(cfg,dest,num) do { NEW_ARGLOADA ((cfg), (dest), (num)); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#define EMIT_NEW_RETLOADA(cfg,dest) do { NEW_RETLOADA ((cfg), (dest)); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#define EMIT_NEW_TEMPSTORE(cfg,dest,num,inst) do { NEW_TEMPSTORE ((cfg), (dest), (num), (inst)); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#define EMIT_NEW_VARLOADA_VREG(cfg,dest,vreg,ltype) do { NEW_VARLOADA_VREG ((cfg), (dest), (vreg), (ltype)); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#define EMIT_NEW_DUMMY_USE(cfg,dest,var) do { NEW_DUMMY_USE ((cfg), (dest), (var)); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#define EMIT_NEW_UNALU(cfg,dest,op,dr,sr1) do { NEW_UNALU ((cfg), (dest), (op), (dr), (sr1)); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#define EMIT_NEW_BIALU(cfg,dest,op,dr,sr1,sr2) do { NEW_BIALU ((cfg), (dest), (op), (dr), (sr1), (sr2)); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#define EMIT_NEW_BIALU_IMM(cfg,dest,op,dr,sr,imm) do { NEW_BIALU_IMM ((cfg), (dest), (op), (dr), (sr), (imm)); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#define EMIT_NEW_LOAD_MEMBASE(cfg,dest,op,dr,base,offset) do { NEW_LOAD_MEMBASE ((cfg), (dest), (op), (dr), (base), (offset)); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#define EMIT_NEW_STORE_MEMBASE(cfg,dest,op,base,offset,sr) do { NEW_STORE_MEMBASE ((cfg), (dest), (op), (base), (offset), (sr)); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#define EMIT_NEW_LOAD_MEMBASE_TYPE(cfg,dest,ltype,base,offset) do { NEW_LOAD_MEMBASE_TYPE ((cfg), (dest), (ltype), (base), (offset)); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#define EMIT_NEW_STORE_MEMBASE_TYPE(cfg,dest,ltype,base,offset,sr) do { NEW_STORE_MEMBASE_TYPE ((cfg), (dest), (ltype), (base), (offset), (sr)); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
#define EMIT_NEW_GC_PARAM_SLOT_LIVENESS_DEF(cfg,dest,offset,type) do { NEW_GC_PARAM_SLOT_LIVENESS_DEF ((cfg), (dest), (offset), (type)); MONO_ADD_INS ((cfg)->cbb, (dest)); } while (0)
/*
* Variants which do not take a dest argument, but take a dreg argument instead.
*/
#define MONO_EMIT_NEW_ICONST(cfg,dr,imm) do { \
MonoInst *inst; \
MONO_INST_NEW ((cfg), (inst), OP_ICONST); \
inst->dreg = dr; \
inst->inst_c0 = imm; \
MONO_ADD_INS ((cfg)->cbb, inst); \
} while (0)
#define MONO_EMIT_NEW_PCONST(cfg,dr,val) do { \
MonoInst *inst; \
MONO_INST_NEW ((cfg), (inst), OP_PCONST); \
inst->dreg = dr; \
(inst)->inst_p0 = (val); \
(inst)->type = STACK_PTR; \
MONO_ADD_INS ((cfg)->cbb, inst); \
} while (0)
#define MONO_EMIT_NEW_I8CONST(cfg,dr,imm) do { \
MonoInst *inst; \
MONO_INST_NEW ((cfg), (inst), OP_I8CONST); \
inst->dreg = dr; \
inst->inst_l = imm; \
MONO_ADD_INS ((cfg)->cbb, inst); \
} while (0)
#define MONO_EMIT_NEW_DUMMY_INIT(cfg,dr,op) do { \
MonoInst *inst; \
MONO_INST_NEW ((cfg), (inst), (op)); \
inst->dreg = dr; \
MONO_ADD_INS ((cfg)->cbb, inst); \
} while (0)
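/*
 * Minimal usage sketch (an illustrative addition): materialize the integer
 * constant 42 in a freshly allocated vreg. alloc_ireg () is the JIT's
 * existing vreg allocator; the helper itself is hypothetical.
 */
static inline int
mini_example_emit_const42 (MonoCompile *cfg)
{
	int dreg = alloc_ireg (cfg);

	/* Appends an OP_ICONST instruction to the current bblock, cfg->cbb. */
	MONO_EMIT_NEW_ICONST (cfg, dreg, 42);
	return dreg;
}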
#ifdef MONO_ARCH_NEED_GOT_VAR
#define MONO_EMIT_NEW_AOTCONST(cfg,dr,cons,patch_type) do { \
MonoInst *inst; \
NEW_AOTCONST ((cfg), (inst), (patch_type), (cons)); \
inst->dreg = (dr); \
MONO_ADD_INS ((cfg)->cbb, inst); \
} while (0)
#else
#define MONO_EMIT_NEW_AOTCONST(cfg,dr,imm,type) do { \
MonoInst *inst; \
MONO_INST_NEW ((cfg), (inst), cfg->compile_aot ? OP_AOTCONST : OP_PCONST); \
inst->dreg = dr; \
inst->inst_p0 = imm; \
inst->inst_c1 = type; \
MONO_ADD_INS ((cfg)->cbb, inst); \
} while (0)
#endif
#define MONO_EMIT_NEW_CLASSCONST(cfg,dr,imm) MONO_EMIT_NEW_AOTCONST(cfg,dr,imm,MONO_PATCH_INFO_CLASS)
#define MONO_EMIT_NEW_VTABLECONST(cfg,dest,vtable) MONO_EMIT_NEW_AOTCONST ((cfg), (dest), (cfg)->compile_aot ? (gpointer)((vtable)->klass) : (vtable), MONO_PATCH_INFO_VTABLE)
#define MONO_EMIT_NEW_SIGNATURECONST(cfg,dr,sig) MONO_EMIT_NEW_AOTCONST ((cfg), (dr), (sig), MONO_PATCH_INFO_SIGNATURE)
#define MONO_EMIT_NEW_VZERO(cfg,dr,kl) do { \
MonoInst *inst; \
MONO_INST_NEW ((cfg), (inst), MONO_CLASS_IS_SIMD (cfg, kl) ? OP_XZERO : OP_VZERO); \
inst->dreg = dr; \
(inst)->type = STACK_VTYPE; \
(inst)->klass = (kl); \
MONO_ADD_INS ((cfg)->cbb, inst); \
} while (0)
#define MONO_EMIT_NEW_UNALU(cfg,op,dr,sr1) do { \
MonoInst *inst; \
EMIT_NEW_UNALU ((cfg), (inst), (op), (dr), (sr1)); \
} while (0)
#define MONO_EMIT_NEW_BIALU(cfg,op,dr,sr1,sr2) do { \
MonoInst *inst; \
MONO_INST_NEW ((cfg), (inst), (op)); \
inst->dreg = dr; \
inst->sreg1 = sr1; \
inst->sreg2 = sr2; \
MONO_ADD_INS (cfg->cbb, inst); \
} while (0)
#define MONO_EMIT_NEW_BIALU_IMM(cfg,op,dr,sr,imm) do { \
MonoInst *inst; \
MONO_INST_NEW ((cfg), (inst), (op)); \
inst->dreg = dr; \
inst->sreg1 = sr; \
inst->inst_imm = (target_mgreg_t)(imm); \
MONO_ADD_INS (cfg->cbb, inst); \
} while (0)
#define MONO_EMIT_NEW_COMPARE_IMM(cfg,sr1,imm) do { \
MonoInst *inst; \
MONO_INST_NEW ((cfg), (inst), (OP_COMPARE_IMM)); \
inst->sreg1 = sr1; \
inst->inst_imm = (imm); \
MONO_ADD_INS ((cfg)->cbb, inst); \
} while (0)
#define MONO_EMIT_NEW_ICOMPARE_IMM(cfg,sr1,imm) do { \
MonoInst *inst; \
MONO_INST_NEW ((cfg), (inst), sizeof (target_mgreg_t) == 8 ? OP_ICOMPARE_IMM : OP_COMPARE_IMM); \
inst->sreg1 = sr1; \
inst->inst_imm = (imm); \
MONO_ADD_INS ((cfg)->cbb, inst); \
} while (0)
/* This is also used on 32-bit machines when running with LLVM */
#define MONO_EMIT_NEW_LCOMPARE_IMM(cfg,sr1,imm) do { \
MonoInst *inst; \
MONO_INST_NEW ((cfg), (inst), (OP_LCOMPARE_IMM)); \
inst->sreg1 = sr1; \
if (SIZEOF_REGISTER == 4 && COMPILE_LLVM (cfg)) { \
inst->inst_l = (imm); \
} else { \
inst->inst_imm = (imm); \
} \
MONO_ADD_INS ((cfg)->cbb, inst); \
} while (0)
#define MONO_EMIT_NEW_LOAD_MEMBASE_OP(cfg,op,dr,base,offset) do { \
MonoInst *inst; \
MONO_INST_NEW ((cfg), (inst), (op)); \
inst->dreg = dr; \
inst->inst_basereg = base; \
inst->inst_offset = offset; \
MONO_ADD_INS (cfg->cbb, inst); \
} while (0)
#define MONO_EMIT_NEW_LOAD_MEMBASE(cfg,dr,base,offset) MONO_EMIT_NEW_LOAD_MEMBASE_OP ((cfg), (OP_LOAD_MEMBASE), (dr), (base), (offset))
#define MONO_EMIT_NEW_STORE_MEMBASE(cfg,op,base,offset,sr) do { \
MonoInst *inst; \
MONO_INST_NEW ((cfg), (inst), (op)); \
(inst)->sreg1 = sr; \
(inst)->inst_destbasereg = base; \
(inst)->inst_offset = offset; \
MONO_ADD_INS (cfg->cbb, inst); \
} while (0)
#define MONO_EMIT_NEW_STORE_MEMBASE_IMM(cfg,op,base,offset,imm) do { \
MonoInst *inst; \
MONO_INST_NEW ((cfg), (inst), (op)); \
inst->inst_destbasereg = base; \
inst->inst_offset = offset; \
inst->inst_imm = (target_mgreg_t)(imm); \
MONO_ADD_INS ((cfg)->cbb, inst); \
} while (0)
#define MONO_EMIT_NEW_COND_EXC(cfg,cond,name) do { \
MonoInst *inst; \
MONO_INST_NEW ((cfg), (inst), (OP_COND_EXC_##cond)); \
inst->inst_p1 = (char*)name; \
MONO_ADD_INS ((cfg)->cbb, inst); \
} while (0)
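/*
 * Usage sketch (an illustrative addition; the helper is hypothetical):
 * throw OverflowException when the integer in 'sreg' is negative, using
 * the compare / conditional-exception pair defined above.
 */
static inline void
mini_example_check_non_negative (MonoCompile *cfg, int sreg)
{
	/* Set the flags, then emit OP_COND_EXC_LT to throw on sreg < 0. */
	MONO_EMIT_NEW_ICOMPARE_IMM (cfg, sreg, 0);
	MONO_EMIT_NEW_COND_EXC (cfg, LT, "OverflowException");
}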
/* Branch support */
/*
* Basic blocks have two numeric identifiers:
* dfn: Depth First Number
* block_num: unique ID assigned at bblock creation
*/
#define NEW_BBLOCK(cfg,bblock) do { \
(bblock) = (MonoBasicBlock *)mono_mempool_alloc0 ((cfg)->mempool, sizeof (MonoBasicBlock)); \
(bblock)->block_num = cfg->num_bblocks++; \
} while (0)
#define ADD_BBLOCK(cfg,b) do { \
if ((b)->cil_code) {\
cfg->cil_offset_to_bb [(b)->cil_code - cfg->cil_start] = (b); \
} \
(b)->real_offset = cfg->real_offset; \
} while (0)
/*
* Emit a one-way conditional branch and start a new bblock.
* The inst_false_bb field of the cond branch will not be set; the JIT code should be
* prepared to deal with this.
*/
#ifdef DEBUG_EXTENDED_BBLOCKS
static int ccount = 0;
#define MONO_EMIT_NEW_BRANCH_BLOCK(cfg,op,truebb) do { \
MonoInst *ins; \
MonoBasicBlock *falsebb; \
MONO_INST_NEW ((cfg), (ins), (op)); \
if ((op) == OP_BR) { \
NEW_BBLOCK ((cfg), falsebb); \
ins->inst_target_bb = (truebb); \
mono_link_bblock ((cfg), (cfg)->cbb, (truebb)); \
MONO_ADD_INS ((cfg)->cbb, ins); \
MONO_START_BB ((cfg), falsebb); \
} else { \
ccount ++; \
ins->inst_many_bb = mono_mempool_alloc (cfg->mempool, sizeof(gpointer)*2); \
ins->inst_true_bb = (truebb); \
ins->inst_false_bb = NULL; \
mono_link_bblock ((cfg), (cfg)->cbb, (truebb)); \
MONO_ADD_INS ((cfg)->cbb, ins); \
char *count2 = g_getenv ("COUNT2"); \
if (count2 && ccount == atoi (count2) - 1) { printf ("HIT: %d\n", cfg->cbb->block_num); } \
if (count2 && ccount < atoi (count2)) { \
cfg->cbb->extended = TRUE; \
} else { NEW_BBLOCK ((cfg), falsebb); ins->inst_false_bb = (falsebb); mono_link_bblock ((cfg), (cfg)->cbb, (falsebb)); MONO_START_BB ((cfg), falsebb); } \
if (count2) g_free (count2); \
} \
} while (0)
#else
#define MONO_EMIT_NEW_BRANCH_BLOCK(cfg,op,truebb) do { \
MonoInst *ins; \
MonoBasicBlock *falsebb; \
MONO_INST_NEW ((cfg), (ins), (op)); \
if ((op) == OP_BR) { \
NEW_BBLOCK ((cfg), falsebb); \
ins->inst_target_bb = (truebb); \
mono_link_bblock ((cfg), (cfg)->cbb, (truebb)); \
MONO_ADD_INS ((cfg)->cbb, ins); \
MONO_START_BB ((cfg), falsebb); \
} else { \
ins->inst_many_bb = (MonoBasicBlock **)mono_mempool_alloc (cfg->mempool, sizeof(gpointer)*2); \
ins->inst_true_bb = (truebb); \
ins->inst_false_bb = NULL; \
mono_link_bblock ((cfg), (cfg)->cbb, (truebb)); \
MONO_ADD_INS ((cfg)->cbb, ins); \
if (!cfg->enable_extended_bblocks) { \
NEW_BBLOCK ((cfg), falsebb); \
ins->inst_false_bb = falsebb; \
mono_link_bblock ((cfg), (cfg)->cbb, (falsebb)); \
MONO_START_BB ((cfg), falsebb); \
} else { \
cfg->cbb->extended = TRUE; \
} \
} \
} while (0)
#endif
/* Emit a two-way conditional branch */
#define MONO_EMIT_NEW_BRANCH_BLOCK2(cfg,op,truebb,falsebb) do { \
MonoInst *ins; \
MONO_INST_NEW ((cfg), (ins), (op)); \
ins->inst_many_bb = (MonoBasicBlock **)mono_mempool_alloc (cfg->mempool, sizeof(gpointer)*2); \
ins->inst_true_bb = (truebb); \
ins->inst_false_bb = (falsebb); \
mono_link_bblock ((cfg), (cfg)->cbb, (truebb)); \
mono_link_bblock ((cfg), (cfg)->cbb, (falsebb)); \
MONO_ADD_INS ((cfg)->cbb, ins); \
} while (0)
#define MONO_START_BB(cfg, bblock) do { \
ADD_BBLOCK ((cfg), (bblock)); \
if (cfg->cbb->last_ins && MONO_IS_COND_BRANCH_OP (cfg->cbb->last_ins) && !cfg->cbb->last_ins->inst_false_bb) { \
cfg->cbb->last_ins->inst_false_bb = (bblock); \
mono_link_bblock ((cfg), (cfg)->cbb, (bblock)); \
} else if (! (cfg->cbb->last_ins && ((cfg->cbb->last_ins->opcode == OP_BR) || (cfg->cbb->last_ins->opcode == OP_BR_REG) || MONO_IS_COND_BRANCH_OP (cfg->cbb->last_ins)))) { \
mono_link_bblock ((cfg), (cfg)->cbb, (bblock)); \
} \
(cfg)->cbb->next_bb = (bblock); \
(cfg)->cbb = (bblock); \
} while (0)
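/*
 * Control-flow sketch (an illustrative addition, not part of the original
 * header): branch on 'sreg == 0' into two fresh bblocks and rejoin,
 * mirroring the idiom used throughout method-to-ir.c. The helper name is
 * an assumption.
 */
static inline void
mini_example_emit_diamond (MonoCompile *cfg, int sreg)
{
	MonoBasicBlock *zero_bb, *nonzero_bb, *join_bb;

	NEW_BBLOCK (cfg, zero_bb);
	NEW_BBLOCK (cfg, nonzero_bb);
	NEW_BBLOCK (cfg, join_bb);

	/* Two-way branch: links cfg->cbb to both successor blocks. */
	MONO_EMIT_NEW_ICOMPARE_IMM (cfg, sreg, 0);
	MONO_EMIT_NEW_BRANCH_BLOCK2 (cfg, OP_IBEQ, zero_bb, nonzero_bb);

	MONO_START_BB (cfg, zero_bb);
	MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_BR, join_bb);

	MONO_START_BB (cfg, nonzero_bb);
	MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_BR, join_bb);

	MONO_START_BB (cfg, join_bb);
}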
/* This marks a place in code where an implicit exception could be thrown */
#define MONO_EMIT_NEW_IMPLICIT_EXCEPTION(cfg) do { \
if (COMPILE_LLVM ((cfg))) { \
MONO_EMIT_NEW_UNALU (cfg, OP_IMPLICIT_EXCEPTION, -1, -1); \
} \
} while (0)
/* Loads/Stores which can fault are handled correctly by the LLVM mono branch */
#define MONO_EMIT_NEW_IMPLICIT_EXCEPTION_LOAD_STORE(cfg) do { \
} while (0)
/* Emit an explicit null check which doesn't depend on SIGSEGV signal handling */
#define MONO_EMIT_NULL_CHECK(cfg, reg, out_of_page) do { \
if (cfg->explicit_null_checks || (out_of_page)) { \
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, (reg), 0); \
MONO_EMIT_NEW_COND_EXC (cfg, EQ, "NullReferenceException"); \
} else { \
MONO_EMIT_NEW_IMPLICIT_EXCEPTION_LOAD_STORE (cfg); \
} \
MONO_EMIT_NEW_UNALU (cfg, OP_NOT_NULL, -1, reg); \
} while (0)
#define MONO_EMIT_NEW_CHECK_THIS(cfg, sreg) do { \
cfg->flags |= MONO_CFG_HAS_CHECK_THIS; \
if (cfg->explicit_null_checks) { \
MONO_EMIT_NULL_CHECK (cfg, sreg, FALSE); \
} else { \
MONO_EMIT_NEW_UNALU (cfg, OP_CHECK_THIS, -1, sreg); \
MONO_EMIT_NEW_IMPLICIT_EXCEPTION_LOAD_STORE (cfg); \
MONO_EMIT_NEW_UNALU (cfg, OP_NOT_NULL, -1, sreg); \
} \
} while (0)
#define NEW_LOAD_MEMBASE_FLAGS(cfg,dest,op,dr,base,offset,ins_flags) do { \
int __ins_flags = ins_flags; \
if (__ins_flags & MONO_INST_FAULT) { \
gboolean __out_of_page = offset > mono_target_pagesize (); \
MONO_EMIT_NULL_CHECK ((cfg), (base), __out_of_page); \
} \
NEW_LOAD_MEMBASE ((cfg), (dest), (op), (dr), (base), (offset)); \
(dest)->flags = (__ins_flags); \
} while (0)
#define MONO_EMIT_NEW_LOAD_MEMBASE_OP_FLAGS(cfg,op,dr,base,offset,ins_flags) do { \
MonoInst *inst; \
int __ins_flags = ins_flags; \
if (__ins_flags & MONO_INST_FAULT) { \
int __out_of_page = offset > mono_target_pagesize (); \
MONO_EMIT_NULL_CHECK ((cfg), (base), __out_of_page); \
} \
NEW_LOAD_MEMBASE ((cfg), (inst), (op), (dr), (base), (offset)); \
inst->flags = (__ins_flags); \
MONO_ADD_INS (cfg->cbb, inst); \
} while (0)
#define MONO_EMIT_NEW_LOAD_MEMBASE_FLAGS(cfg,dr,base,offset,ins_flags) MONO_EMIT_NEW_LOAD_MEMBASE_OP_FLAGS ((cfg), (OP_LOAD_MEMBASE), (dr), (base), (offset),(ins_flags))
/* A load which can cause a nullref */
#define NEW_LOAD_MEMBASE_FAULT(cfg,dest,op,dr,base,offset) NEW_LOAD_MEMBASE_FLAGS ((cfg), (dest), (op), (dr), (base), (offset), MONO_INST_FAULT)
#define EMIT_NEW_LOAD_MEMBASE_FAULT(cfg,dest,op,dr,base,offset) do { \
NEW_LOAD_MEMBASE_FAULT ((cfg), (dest), (op), (dr), (base), (offset)); \
MONO_ADD_INS ((cfg)->cbb, (dest)); \
} while (0)
#define MONO_EMIT_NEW_LOAD_MEMBASE_OP_FAULT(cfg,op,dr,base,offset) MONO_EMIT_NEW_LOAD_MEMBASE_OP_FLAGS ((cfg), (op), (dr), (base), (offset), MONO_INST_FAULT)
#define MONO_EMIT_NEW_LOAD_MEMBASE_FAULT(cfg,dr,base,offset) MONO_EMIT_NEW_LOAD_MEMBASE_OP_FAULT ((cfg), (OP_LOAD_MEMBASE), (dr), (base), (offset))
#define NEW_LOAD_MEMBASE_INVARIANT(cfg,dest,op,dr,base,offset) NEW_LOAD_MEMBASE_FLAGS ((cfg), (dest), (op), (dr), (base), (offset), MONO_INST_INVARIANT_LOAD)
#define MONO_EMIT_NEW_LOAD_MEMBASE_OP_INVARIANT(cfg,op,dr,base,offset) MONO_EMIT_NEW_LOAD_MEMBASE_OP_FLAGS ((cfg), (op), (dr), (base), (offset), MONO_INST_INVARIANT_LOAD)
#define MONO_EMIT_NEW_LOAD_MEMBASE_INVARIANT(cfg,dr,base,offset) MONO_EMIT_NEW_LOAD_MEMBASE_OP_INVARIANT ((cfg), (OP_LOAD_MEMBASE), (dr), (base), (offset))
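/*
 * Usage sketch (an illustrative addition): load a 32-bit field at 'offset'
 * from the object held in 'obj_reg'. The FAULT variant emits the implicit
 * (or explicit) null check; the helper name is hypothetical.
 */
static inline int
mini_example_load_int_field (MonoCompile *cfg, int obj_reg, int offset)
{
	int dreg = alloc_ireg (cfg);

	MONO_EMIT_NEW_LOAD_MEMBASE_OP_FAULT (cfg, OP_LOADI4_MEMBASE, dreg, obj_reg, offset);
	return dreg;
}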
/* Object Model related macros */
/* Default bounds check implementation for most architectures + llvm */
#define MONO_EMIT_DEFAULT_BOUNDS_CHECK(cfg, array_reg, offset, index_reg, fault, ex_name) do { \
int _length_reg = alloc_ireg (cfg); \
if (fault) \
MONO_EMIT_NEW_LOAD_MEMBASE_OP_FAULT (cfg, OP_LOADI4_MEMBASE, _length_reg, array_reg, offset); \
else \
MONO_EMIT_NEW_LOAD_MEMBASE_OP_FLAGS (cfg, OP_LOADI4_MEMBASE, _length_reg, array_reg, offset, MONO_INST_INVARIANT_LOAD); \
MONO_EMIT_NEW_BIALU (cfg, OP_COMPARE, -1, _length_reg, index_reg); \
MONO_EMIT_NEW_COND_EXC (cfg, LE_UN, ex_name); \
} while (0)
#ifndef MONO_ARCH_EMIT_BOUNDS_CHECK
#define MONO_ARCH_EMIT_BOUNDS_CHECK(cfg, array_reg, offset, index_reg, ex_name) MONO_EMIT_DEFAULT_BOUNDS_CHECK ((cfg), (array_reg), (offset), (index_reg), TRUE, ex_name)
#endif
static inline void
mini_emit_bounds_check_offset (MonoCompile *cfg, int array_reg, int array_length_offset, int index_reg, const char *ex_name)
{
if (!(cfg->opt & MONO_OPT_UNSAFE)) {
ex_name = ex_name ? ex_name : "IndexOutOfRangeException";
if (!(cfg->opt & MONO_OPT_ABCREM)) {
MONO_EMIT_NULL_CHECK (cfg, array_reg, FALSE);
if (COMPILE_LLVM (cfg))
MONO_EMIT_DEFAULT_BOUNDS_CHECK ((cfg), (array_reg), (array_length_offset), (index_reg), TRUE, ex_name);
else
MONO_ARCH_EMIT_BOUNDS_CHECK ((cfg), (array_reg), (array_length_offset), (index_reg), ex_name);
} else {
MonoInst *ins;
MONO_INST_NEW ((cfg), ins, OP_BOUNDS_CHECK);
ins->sreg1 = array_reg;
ins->sreg2 = index_reg;
ins->inst_p0 = (gpointer)ex_name;
ins->inst_imm = (array_length_offset);
ins->flags |= MONO_INST_FAULT;
MONO_ADD_INS ((cfg)->cbb, ins);
(cfg)->flags |= MONO_CFG_NEEDS_DECOMPOSE;
(cfg)->cbb->needs_decompose = TRUE;
}
}
}
/* cfg is the MonoCompile being used
* array_reg is the vreg holding the array object
* array_type is a struct (usually MonoArray or MonoString)
* array_length_field is the field in that struct holding the length
* index_reg is the vreg holding the index
*/
#define MONO_EMIT_BOUNDS_CHECK(cfg, array_reg, array_type, array_length_field, index_reg) do { \
mini_emit_bounds_check_offset ((cfg), (array_reg), MONO_STRUCT_OFFSET (array_type, array_length_field), (index_reg), NULL); \
} while (0)
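/*
 * Usage sketch (an illustrative addition; the wrapper is hypothetical):
 * bounds-check 'index_reg' against the max_length field of the MonoArray
 * held in 'array_reg', as the array-access paths do.
 */
static inline void
mini_example_array_bounds_check (MonoCompile *cfg, int array_reg, int index_reg)
{
	MONO_EMIT_BOUNDS_CHECK (cfg, array_reg, MonoArray, max_length, index_reg);
}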
#endif
| -1 |
dotnet/runtime | 66,213 | [mono] Put WeakAttribute support under an ifdef | This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | lambdageek | 2022-03-04T19:50:51Z | 2022-03-08T14:08:40Z | 34880aaeb463cc17268b44aecb80dda0914cfa01 | 8342a08b8bf0ff2ffb64cd5992e2d60c8d6d63c1 | [mono] Put WeakAttribute support under an ifdef. This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | ./src/coreclr/inc/dlwrap.h | // Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
//
#ifndef _DLWRAP_H
#define _DLWRAP_H
//include this file if you get a contract violation because of delayload
//nothrow implementations
#if defined(VER_H) && !defined (GetFileVersionInfoSizeW_NoThrow)
DWORD
GetFileVersionInfoSizeW_NoThrow(
LPCWSTR lptstrFilename, /* Filename of version stamped file */
LPDWORD lpdwHandle
);
#endif
#if defined(VER_H) && !defined (GetFileVersionInfoW_NoThrow)
BOOL
GetFileVersionInfoW_NoThrow(
LPCWSTR lptstrFilename, /* Filename of version stamped file */
DWORD dwHandle, /* Information from GetFileVersionSize */
DWORD dwLen, /* Length of buffer for info */
LPVOID lpData
);
#endif
#if defined(VER_H) && !defined (VerQueryValueW_NoThrow)
BOOL
VerQueryValueW_NoThrow(
const LPVOID pBlock,
LPCWSTR lpSubBlock,
LPVOID * lplpBuffer,
PUINT puLen
);
#endif
#if defined(_WININET_) && !defined (CreateUrlCacheEntryW_NoThrow)
__success(return)
BOOL
CreateUrlCacheEntryW_NoThrow(
IN LPCWSTR lpszUrlName,
IN DWORD dwExpectedFileSize,
IN LPCWSTR lpszFileExtension,
_Out_writes_(MAX_LONGPATH+1) LPWSTR lpszFileName,
IN DWORD dwReserved
);
#endif
#if defined(_WININET_) && !defined (CommitUrlCacheEntryW_NoThrow)
BOOL
CommitUrlCacheEntryW_NoThrow(
IN LPCWSTR lpszUrlName,
IN LPCWSTR lpszLocalFileName,
IN FILETIME ExpireTime,
IN FILETIME LastModifiedTime,
IN DWORD CacheEntryType,
IN LPCWSTR lpHeaderInfo,
IN DWORD dwHeaderSize,
IN LPCWSTR lpszFileExtension,
IN LPCWSTR lpszOriginalUrl
);
#endif
#if defined(_WININET_) && !defined (InternetTimeToSystemTimeA_NoThrow)
BOOL
InternetTimeToSystemTimeA_NoThrow(
IN LPCSTR lpszTime, // NULL terminated string
OUT SYSTEMTIME *pst, // output in GMT time
IN DWORD dwReserved
);
#endif
#if defined(__urlmon_h__) && !defined(CoInternetCreateSecurityManager_NoThrow)
HRESULT
CoInternetCreateSecurityManager_NoThrow(
IServiceProvider *pSP,
IInternetSecurityManager **ppSM,
DWORD dwReserved
);
#endif
#if defined(__urlmon_h__) && !defined(URLDownloadToCacheFileW_NoThrow)
HRESULT
URLDownloadToCacheFileW_NoThrow(
LPUNKNOWN lpUnkcaller,
LPCWSTR szURL,
_Out_writes_(dwBufLength) LPWSTR szFileName,
DWORD dwBufLength,
DWORD dwReserved,
IBindStatusCallback *pBSC
);
#endif
#if defined(__urlmon_h__) && !defined(CoInternetGetSession_NoThrow)
HRESULT
CoInternetGetSession_NoThrow(
WORD dwSessionMode,
IInternetSession **ppIInternetSession,
DWORD dwReserved
);
#endif
#if defined(__urlmon_h__) && !defined(CopyBindInfo_NoThrow)
HRESULT
CopyBindInfo_NoThrow(
const BINDINFO * pcbiSrc, BINDINFO * pbiDest
);
#endif
//overrides
#undef InternetTimeToSystemTimeA
#undef CommitUrlCacheEntryW
#undef HttpQueryInfoA
#undef InternetCloseHandle
#undef HttpSendRequestA
#undef HttpOpenRequestA
#undef InternetConnectA
#undef InternetOpenA
#undef InternetReadFile
#undef CreateUrlCacheEntryW
#undef CoInternetGetSession
#undef CopyBindInfo
#undef CoInternetCreateSecurityManager
#undef URLDownloadToCacheFileW
#undef FDICreate
#undef FDIIsCabinet
#undef FDICopy
#undef FDIDestroy
#undef VerQueryValueW
#undef GetFileVersionInfoW
#undef GetFileVersionInfoSizeW
#undef VerQueryValueA
#undef GetFileVersionInfoA
#undef GetFileVersionInfoSizeA
#define InternetTimeToSystemTimeA InternetTimeToSystemTimeA_NoThrow
#define CommitUrlCacheEntryW CommitUrlCacheEntryW_NoThrow
#define CreateUrlCacheEntryW CreateUrlCacheEntryW_NoThrow
#define CoInternetGetSession CoInternetGetSession_NoThrow
#define CopyBindInfo CopyBindInfo_NoThrow
#define CoInternetCreateSecurityManager CoInternetCreateSecurityManager_NoThrow
#define URLDownloadToCacheFileW URLDownloadToCacheFileW_NoThrow
#define VerQueryValueW VerQueryValueW_NoThrow
#define GetFileVersionInfoW GetFileVersionInfoW_NoThrow
#define GetFileVersionInfoSizeW GetFileVersionInfoSizeW_NoThrow
#define VerQueryValueA Use_VerQueryValueW
#define GetFileVersionInfoA Use_GetFileVersionInfoW
#define GetFileVersionInfoSizeA Use_GetFileVersionInfoSizeW
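/*
 * Illustrative sketch (not part of the original header): after the remapping
 * above, an ordinary-looking call resolves to the delay-load-safe _NoThrow
 * wrapper. The function below is a hypothetical caller added only to
 * demonstrate the pattern.
 */
#if defined(VER_H)
inline DWORD DemoQueryVersionInfoSize(LPCWSTR pwszFile)
{
    DWORD dwHandle = 0;
    // Expands to GetFileVersionInfoSizeW_NoThrow via the #define above.
    return GetFileVersionInfoSizeW(pwszFile, &dwHandle);
}
#endif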
#if defined(_WININET_)
inline
HRESULT HrCreateUrlCacheEntryW(
IN LPCWSTR lpszUrlName,
IN DWORD dwExpectedFileSize,
IN LPCWSTR lpszFileExtension,
_Out_writes_(MAX_LONGPATH+1) LPWSTR lpszFileName,
IN DWORD dwReserved
)
{
if (!CreateUrlCacheEntryW(lpszUrlName, dwExpectedFileSize, lpszFileExtension, lpszFileName, dwReserved))
{
return HRESULT_FROM_WIN32(GetLastError());
}
else
{
return S_OK;
}
}
inline
HRESULT HrCommitUrlCacheEntryW(
IN LPCWSTR lpszUrlName,
IN LPCWSTR lpszLocalFileName,
IN FILETIME ExpireTime,
IN FILETIME LastModifiedTime,
IN DWORD CacheEntryType,
IN LPCWSTR lpHeaderInfo,
IN DWORD dwHeaderSize,
IN LPCWSTR lpszFileExtension,
IN LPCWSTR lpszOriginalUrl
)
{
if (!CommitUrlCacheEntryW(lpszUrlName, lpszLocalFileName, ExpireTime, LastModifiedTime, CacheEntryType,
lpHeaderInfo, dwHeaderSize, lpszFileExtension, lpszOriginalUrl))
{
return HRESULT_FROM_WIN32(GetLastError());
}
else
{
return S_OK;
}
}
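/*
 * Usage sketch (a hypothetical caller, added for illustration): the Hr*
 * helpers above convert the BOOL + GetLastError protocol of the wrapped
 * APIs into plain HRESULTs.
 */
inline HRESULT HrDemoCreateCacheEntry(LPCWSTR pwszUrl, _Out_writes_(MAX_LONGPATH+1) LPWSTR pwszFileName)
{
    // Zero expected size and a NULL extension let wininet pick defaults.
    return HrCreateUrlCacheEntryW(pwszUrl, 0, NULL, pwszFileName, 0);
}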
#endif // defined(_WININET_)
#endif
| // Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
//
#ifndef _DLWRAP_H
#define _DLWRAP_H
//include this file if you get a contract violation because of delayload
//nothrow implementations
#if defined(VER_H) && !defined (GetFileVersionInfoSizeW_NoThrow)
DWORD
GetFileVersionInfoSizeW_NoThrow(
LPCWSTR lptstrFilename, /* Filename of version stamped file */
LPDWORD lpdwHandle
);
#endif
#if defined(VER_H) && !defined (GetFileVersionInfoW_NoThrow)
BOOL
GetFileVersionInfoW_NoThrow(
LPCWSTR lptstrFilename, /* Filename of version stamped file */
DWORD dwHandle, /* Information from GetFileVersionSize */
DWORD dwLen, /* Length of buffer for info */
LPVOID lpData
);
#endif
#if defined(VER_H) && !defined (VerQueryValueW_NoThrow)
BOOL
VerQueryValueW_NoThrow(
const LPVOID pBlock,
LPCWSTR lpSubBlock,
LPVOID * lplpBuffer,
PUINT puLen
);
#endif
#if defined(_WININET_) && !defined (CreateUrlCacheEntryW_NoThrow)
__success(return)
BOOL
CreateUrlCacheEntryW_NoThrow(
IN LPCWSTR lpszUrlName,
IN DWORD dwExpectedFileSize,
IN LPCWSTR lpszFileExtension,
_Out_writes_(MAX_LONGPATH+1) LPWSTR lpszFileName,
IN DWORD dwReserved
);
#endif
#if defined(_WININET_) && !defined (CommitUrlCacheEntryW_NoThrow)
BOOL
CommitUrlCacheEntryW_NoThrow(
IN LPCWSTR lpszUrlName,
IN LPCWSTR lpszLocalFileName,
IN FILETIME ExpireTime,
IN FILETIME LastModifiedTime,
IN DWORD CacheEntryType,
IN LPCWSTR lpHeaderInfo,
IN DWORD dwHeaderSize,
IN LPCWSTR lpszFileExtension,
IN LPCWSTR lpszOriginalUrl
);
#endif
#if defined(_WININET_) && !defined (InternetTimeToSystemTimeA_NoThrow)
BOOL
InternetTimeToSystemTimeA_NoThrow(
IN LPCSTR lpszTime, // NULL terminated string
OUT SYSTEMTIME *pst, // output in GMT time
IN DWORD dwReserved
);
#endif
#if defined(__urlmon_h__) && !defined(CoInternetCreateSecurityManager_NoThrow)
HRESULT
CoInternetCreateSecurityManager_NoThrow(
IServiceProvider *pSP,
IInternetSecurityManager **ppSM,
DWORD dwReserved
);
#endif
#if defined(__urlmon_h__) && !defined(URLDownloadToCacheFileW_NoThrow)
HRESULT
URLDownloadToCacheFileW_NoThrow(
LPUNKNOWN lpUnkcaller,
LPCWSTR szURL,
_Out_writes_(dwBufLength) LPWSTR szFileName,
DWORD dwBufLength,
DWORD dwReserved,
IBindStatusCallback *pBSC
);
#endif
#if defined(__urlmon_h__) && !defined(CoInternetGetSession_NoThrow)
HRESULT
CoInternetGetSession_NoThrow(
WORD dwSessionMode,
IInternetSession **ppIInternetSession,
DWORD dwReserved
);
#endif
#if defined(__urlmon_h__) && !defined(CopyBindInfo_NoThrow)
HRESULT
CopyBindInfo_NoThrow(
const BINDINFO * pcbiSrc, BINDINFO * pbiDest
);
#endif
//overrides
#undef InternetTimeToSystemTimeA
#undef CommitUrlCacheEntryW
#undef HttpQueryInfoA
#undef InternetCloseHandle
#undef HttpSendRequestA
#undef HttpOpenRequestA
#undef InternetConnectA
#undef InternetOpenA
#undef InternetReadFile
#undef CreateUrlCacheEntryW
#undef CoInternetGetSession
#undef CopyBindInfo
#undef CoInternetCreateSecurityManager
#undef URLDownloadToCacheFileW
#undef FDICreate
#undef FDIIsCabinet
#undef FDICopy
#undef FDIDestroy
#undef VerQueryValueW
#undef GetFileVersionInfoW
#undef GetFileVersionInfoSizeW
#undef VerQueryValueA
#undef GetFileVersionInfoA
#undef GetFileVersionInfoSizeA
#define InternetTimeToSystemTimeA InternetTimeToSystemTimeA_NoThrow
#define CommitUrlCacheEntryW CommitUrlCacheEntryW_NoThrow
#define CreateUrlCacheEntryW CreateUrlCacheEntryW_NoThrow
#define CoInternetGetSession CoInternetGetSession_NoThrow
#define CopyBindInfo CopyBindInfo_NoThrow
#define CoInternetCreateSecurityManager CoInternetCreateSecurityManager_NoThrow
#define URLDownloadToCacheFileW URLDownloadToCacheFileW_NoThrow
#define VerQueryValueW VerQueryValueW_NoThrow
#define GetFileVersionInfoW GetFileVersionInfoW_NoThrow
#define GetFileVersionInfoSizeW GetFileVersionInfoSizeW_NoThrow
#define VerQueryValueA Use_VerQueryValueW
#define GetFileVersionInfoA Use_GetFileVersionInfoW
#define GetFileVersionInfoSizeA Use_GetFileVersionInfoSizeW
#if defined(_WININET_)
inline
HRESULT HrCreateUrlCacheEntryW(
IN LPCWSTR lpszUrlName,
IN DWORD dwExpectedFileSize,
IN LPCWSTR lpszFileExtension,
_Out_writes_(MAX_LONGPATH+1) LPWSTR lpszFileName,
IN DWORD dwReserved
)
{
if (!CreateUrlCacheEntryW(lpszUrlName, dwExpectedFileSize, lpszFileExtension, lpszFileName, dwReserved))
{
return HRESULT_FROM_WIN32(GetLastError());
}
else
{
return S_OK;
}
}
inline
HRESULT HrCommitUrlCacheEntryW(
IN LPCWSTR lpszUrlName,
IN LPCWSTR lpszLocalFileName,
IN FILETIME ExpireTime,
IN FILETIME LastModifiedTime,
IN DWORD CacheEntryType,
IN LPCWSTR lpHeaderInfo,
IN DWORD dwHeaderSize,
IN LPCWSTR lpszFileExtension,
IN LPCWSTR lpszOriginalUrl
)
{
if (!CommitUrlCacheEntryW(lpszUrlName, lpszLocalFileName, ExpireTime, LastModifiedTime, CacheEntryType,
lpHeaderInfo, dwHeaderSize, lpszFileExtension, lpszOriginalUrl))
{
return HRESULT_FROM_WIN32(GetLastError());
}
else
{
return S_OK;
}
}
#endif // defined(_WININET_)
#endif
| -1 |
dotnet/runtime | 66,213 | [mono] Put WeakAttribute support under an ifdef | This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | lambdageek | 2022-03-04T19:50:51Z | 2022-03-08T14:08:40Z | 34880aaeb463cc17268b44aecb80dda0914cfa01 | 8342a08b8bf0ff2ffb64cd5992e2d60c8d6d63c1 | [mono] Put WeakAttribute support under an ifdef. This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | ./src/mono/mono/metadata/w32handle.h | /**
* \file
*/
#ifndef _MONO_METADATA_W32HANDLE_H_
#define _MONO_METADATA_W32HANDLE_H_
#include <config.h>
#include <glib.h>
#ifdef HOST_WIN32
#include <windows.h>
#else
#define INVALID_HANDLE_VALUE ((gpointer)-1)
#endif
#include "mono/utils/mono-coop-mutex.h"
#include "mono/utils/mono-error.h"
#include <mono/utils/mono-compiler.h>
#define MONO_W32HANDLE_MAXIMUM_WAIT_OBJECTS 64
#define MONO_INFINITE_WAIT ((guint32) 0xFFFFFFFF)
typedef enum {
MONO_W32TYPE_UNUSED = 0,
MONO_W32TYPE_EVENT,
MONO_W32TYPE_COUNT
} MonoW32Type;
typedef struct {
MonoW32Type type;
guint ref;
gboolean signalled;
gboolean in_use;
MonoCoopMutex signal_mutex;
MonoCoopCond signal_cond;
gpointer specific;
} MonoW32Handle;
typedef enum {
MONO_W32HANDLE_WAIT_RET_SUCCESS_0 = 0,
MONO_W32HANDLE_WAIT_RET_ABANDONED_0 = MONO_W32HANDLE_WAIT_RET_SUCCESS_0 + MONO_W32HANDLE_MAXIMUM_WAIT_OBJECTS,
MONO_W32HANDLE_WAIT_RET_ALERTED = -1,
MONO_W32HANDLE_WAIT_RET_TIMEOUT = -2,
MONO_W32HANDLE_WAIT_RET_FAILED = -3,
MONO_W32HANDLE_WAIT_RET_TOO_MANY_POSTS = -4,
MONO_W32HANDLE_WAIT_RET_NOT_OWNED_BY_CALLER = -5
} MonoW32HandleWaitRet;
typedef struct
{
void (*close)(gpointer data);
/* mono_w32handle_signal_and_wait */
gint32 (*signal)(MonoW32Handle *handle_data);
/* Called by mono_w32handle_wait_one and mono_w32handle_wait_multiple,
* with the handle locked (shared handles aren't locked).
* Returns TRUE if ownership was established, FALSE otherwise.
* If TRUE, *abandoned contains a status code such as
* WAIT_OBJECT_0 or WAIT_ABANDONED_0.
*/
gboolean (*own_handle)(MonoW32Handle *handle_data, gboolean *abandoned);
/* Called by mono_w32handle_wait_one and mono_w32handle_wait_multiple, if the
* handle in question is "ownable" (i.e. mutexes), to see if the current
* thread already owns this handle.
*/
gboolean (*is_owned)(MonoW32Handle *handle_data);
/* Called by mono_w32handle_wait_one and mono_w32handle_wait_multiple,
* if the handle in question needs a special wait function
* instead of using the normal handle signal mechanism.
* Returns the mono_w32handle_wait_one return code.
*/
MonoW32HandleWaitRet (*special_wait)(MonoW32Handle *handle_data, guint32 timeout, gboolean *alerted);
/* Called by mono_w32handle_wait_one and mono_w32handle_wait_multiple,
* if the handle in question needs some preprocessing before the
* signal wait.
*/
void (*prewait)(MonoW32Handle *handle_data);
/* Called when dumping the handles */
void (*details)(MonoW32Handle *handle_data);
/* Called to get the name of the handle type */
const char* (*type_name) (void);
/* Called to get the size of the handle type */
gsize (*typesize) (void);
} MonoW32HandleOps;
typedef enum {
MONO_W32HANDLE_CAP_WAIT = 0x01,
MONO_W32HANDLE_CAP_SIGNAL = 0x02,
MONO_W32HANDLE_CAP_OWN = 0x04,
MONO_W32HANDLE_CAP_SPECIAL_WAIT = 0x08,
} MonoW32HandleCapability;
void
mono_w32handle_init (void);
void
mono_w32handle_register_ops (MonoW32Type type, const MonoW32HandleOps *ops);
gpointer
mono_w32handle_new (MonoW32Type type, gpointer handle_specific);
gpointer
mono_w32handle_duplicate (MonoW32Handle *handle_data);
gboolean
mono_w32handle_close (gpointer handle);
const gchar*
mono_w32handle_get_typename (MonoW32Type type);
gboolean
mono_w32handle_lookup_and_ref (gpointer handle, MonoW32Handle **handle_data);
void
mono_w32handle_unref (MonoW32Handle *handle_data);
void
mono_w32handle_register_capabilities (MonoW32Type type, MonoW32HandleCapability caps);
void
mono_w32handle_set_signal_state (MonoW32Handle *handle_data, gboolean state, gboolean broadcast);
gboolean
mono_w32handle_issignalled (MonoW32Handle *handle_data);
void
mono_w32handle_lock (MonoW32Handle *handle_data);
void
mono_w32handle_unlock (MonoW32Handle *handle_data);
MONO_COMPONENT_API
MonoW32HandleWaitRet
mono_w32handle_wait_one (gpointer handle, guint32 timeout, gboolean alertable);
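/*
 * Usage sketch (an illustrative addition, not in the original header):
 * create an event-type handle, wait on it for up to 100 ms and drop the
 * reference. The NULL payload is a simplification; real callers pass the
 * type-specific struct expected by the event implementation.
 */
static inline gboolean
mono_w32handle_example_wait (void)
{
	gpointer handle = mono_w32handle_new (MONO_W32TYPE_EVENT, NULL);
	MonoW32HandleWaitRet ret = mono_w32handle_wait_one (handle, 100, FALSE);

	mono_w32handle_close (handle);
	return ret == MONO_W32HANDLE_WAIT_RET_SUCCESS_0;
}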
#ifdef HOST_WIN32
static inline MonoW32HandleWaitRet
mono_w32handle_convert_wait_ret (guint32 res, guint32 numobjects)
{
if (res >= WAIT_OBJECT_0 && res <= WAIT_OBJECT_0 + numobjects - 1)
return (MonoW32HandleWaitRet)(MONO_W32HANDLE_WAIT_RET_SUCCESS_0 + (res - WAIT_OBJECT_0));
else if (res >= WAIT_ABANDONED_0 && res <= WAIT_ABANDONED_0 + numobjects - 1)
return (MonoW32HandleWaitRet)(MONO_W32HANDLE_WAIT_RET_ABANDONED_0 + (res - WAIT_ABANDONED_0));
else if (res == WAIT_IO_COMPLETION)
return MONO_W32HANDLE_WAIT_RET_ALERTED;
else if (res == WAIT_TIMEOUT)
return MONO_W32HANDLE_WAIT_RET_TIMEOUT;
else if (res == WAIT_FAILED)
return MONO_W32HANDLE_WAIT_RET_FAILED;
else
g_error ("%s: unknown res value %d", __func__, res);
}
#endif
#endif /* _MONO_METADATA_W32HANDLE_H_ */
| /**
* \file
*/
#ifndef _MONO_METADATA_W32HANDLE_H_
#define _MONO_METADATA_W32HANDLE_H_
#include <config.h>
#include <glib.h>
#ifdef HOST_WIN32
#include <windows.h>
#else
#define INVALID_HANDLE_VALUE ((gpointer)-1)
#endif
#include "mono/utils/mono-coop-mutex.h"
#include "mono/utils/mono-error.h"
#include <mono/utils/mono-compiler.h>
#define MONO_W32HANDLE_MAXIMUM_WAIT_OBJECTS 64
#define MONO_INFINITE_WAIT ((guint32) 0xFFFFFFFF)
typedef enum {
MONO_W32TYPE_UNUSED = 0,
MONO_W32TYPE_EVENT,
MONO_W32TYPE_COUNT
} MonoW32Type;
typedef struct {
MonoW32Type type;
guint ref;
gboolean signalled;
gboolean in_use;
MonoCoopMutex signal_mutex;
MonoCoopCond signal_cond;
gpointer specific;
} MonoW32Handle;
typedef enum {
MONO_W32HANDLE_WAIT_RET_SUCCESS_0 = 0,
MONO_W32HANDLE_WAIT_RET_ABANDONED_0 = MONO_W32HANDLE_WAIT_RET_SUCCESS_0 + MONO_W32HANDLE_MAXIMUM_WAIT_OBJECTS,
MONO_W32HANDLE_WAIT_RET_ALERTED = -1,
MONO_W32HANDLE_WAIT_RET_TIMEOUT = -2,
MONO_W32HANDLE_WAIT_RET_FAILED = -3,
MONO_W32HANDLE_WAIT_RET_TOO_MANY_POSTS = -4,
MONO_W32HANDLE_WAIT_RET_NOT_OWNED_BY_CALLER = -5
} MonoW32HandleWaitRet;
typedef struct
{
void (*close)(gpointer data);
/* mono_w32handle_signal_and_wait */
gint32 (*signal)(MonoW32Handle *handle_data);
/* Called by mono_w32handle_wait_one and mono_w32handle_wait_multiple,
* with the handle locked (shared handles aren't locked).
* Returns TRUE if ownership was established, FALSE otherwise.
* If TRUE, *abandoned contains a status code such as
* WAIT_OBJECT_0 or WAIT_ABANDONED_0.
*/
gboolean (*own_handle)(MonoW32Handle *handle_data, gboolean *abandoned);
/* Called by mono_w32handle_wait_one and mono_w32handle_wait_multiple, if the
* handle in question is "ownable" (i.e. mutexes), to see if the current
* thread already owns this handle.
*/
gboolean (*is_owned)(MonoW32Handle *handle_data);
/* Called by mono_w32handle_wait_one and mono_w32handle_wait_multiple,
* if the handle in question needs a special wait function
* instead of using the normal handle signal mechanism.
* Returns the mono_w32handle_wait_one return code.
*/
MonoW32HandleWaitRet (*special_wait)(MonoW32Handle *handle_data, guint32 timeout, gboolean *alerted);
/* Called by mono_w32handle_wait_one and mono_w32handle_wait_multiple,
* if the handle in question needs some preprocessing before the
* signal wait.
*/
void (*prewait)(MonoW32Handle *handle_data);
/* Called when dumping the handles */
void (*details)(MonoW32Handle *handle_data);
/* Called to get the name of the handle type */
const char* (*type_name) (void);
/* Called to get the size of the handle type */
gsize (*typesize) (void);
} MonoW32HandleOps;
typedef enum {
MONO_W32HANDLE_CAP_WAIT = 0x01,
MONO_W32HANDLE_CAP_SIGNAL = 0x02,
MONO_W32HANDLE_CAP_OWN = 0x04,
MONO_W32HANDLE_CAP_SPECIAL_WAIT = 0x08,
} MonoW32HandleCapability;
void
mono_w32handle_init (void);
void
mono_w32handle_register_ops (MonoW32Type type, const MonoW32HandleOps *ops);
gpointer
mono_w32handle_new (MonoW32Type type, gpointer handle_specific);
gpointer
mono_w32handle_duplicate (MonoW32Handle *handle_data);
gboolean
mono_w32handle_close (gpointer handle);
const gchar*
mono_w32handle_get_typename (MonoW32Type type);
gboolean
mono_w32handle_lookup_and_ref (gpointer handle, MonoW32Handle **handle_data);
void
mono_w32handle_unref (MonoW32Handle *handle_data);
void
mono_w32handle_register_capabilities (MonoW32Type type, MonoW32HandleCapability caps);
void
mono_w32handle_set_signal_state (MonoW32Handle *handle_data, gboolean state, gboolean broadcast);
gboolean
mono_w32handle_issignalled (MonoW32Handle *handle_data);
void
mono_w32handle_lock (MonoW32Handle *handle_data);
void
mono_w32handle_unlock (MonoW32Handle *handle_data);
MONO_COMPONENT_API
MonoW32HandleWaitRet
mono_w32handle_wait_one (gpointer handle, guint32 timeout, gboolean alertable);
#ifdef HOST_WIN32
static inline MonoW32HandleWaitRet
mono_w32handle_convert_wait_ret (guint32 res, guint32 numobjects)
{
if (res >= WAIT_OBJECT_0 && res <= WAIT_OBJECT_0 + numobjects - 1)
return (MonoW32HandleWaitRet)(MONO_W32HANDLE_WAIT_RET_SUCCESS_0 + (res - WAIT_OBJECT_0));
else if (res >= WAIT_ABANDONED_0 && res <= WAIT_ABANDONED_0 + numobjects - 1)
return (MonoW32HandleWaitRet)(MONO_W32HANDLE_WAIT_RET_ABANDONED_0 + (res - WAIT_ABANDONED_0));
else if (res == WAIT_IO_COMPLETION)
return MONO_W32HANDLE_WAIT_RET_ALERTED;
else if (res == WAIT_TIMEOUT)
return MONO_W32HANDLE_WAIT_RET_TIMEOUT;
else if (res == WAIT_FAILED)
return MONO_W32HANDLE_WAIT_RET_FAILED;
else
g_error ("%s: unknown res value %d", __func__, res);
}
#endif
#endif /* _MONO_METADATA_W32HANDLE_H_ */
| -1 |
dotnet/runtime | 66,213 | [mono] Put WeakAttribute support under an ifdef | This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | lambdageek | 2022-03-04T19:50:51Z | 2022-03-08T14:08:40Z | 34880aaeb463cc17268b44aecb80dda0914cfa01 | 8342a08b8bf0ff2ffb64cd5992e2d60c8d6d63c1 | [mono] Put WeakAttribute support under an ifdef. This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | ./src/mono/mono/metadata/appdomain-icalls.h | /**
* \file
* Appdomain-related icalls.
* Copyright 2016 Microsoft
* Licensed under the MIT license. See LICENSE file in the project root for full license information.
*/
#ifndef __MONO_METADATA_APPDOMAIN_ICALLS_H__
#define __MONO_METADATA_APPDOMAIN_ICALLS_H__
#include <mono/metadata/appdomain.h>
#include <mono/metadata/handle.h>
#include <mono/metadata/object-internals.h>
#include <mono/metadata/icalls.h>
#include "reflection-internals.h"
#endif /*__MONO_METADATA_APPDOMAIN_ICALLS_H__*/
| /**
* \file
* Appdomain-related icalls.
* Copyright 2016 Microsoft
* Licensed under the MIT license. See LICENSE file in the project root for full license information.
*/
#ifndef __MONO_METADATA_APPDOMAIN_ICALLS_H__
#define __MONO_METADATA_APPDOMAIN_ICALLS_H__
#include <mono/metadata/appdomain.h>
#include <mono/metadata/handle.h>
#include <mono/metadata/object-internals.h>
#include <mono/metadata/icalls.h>
#include "reflection-internals.h"
#endif /*__MONO_METADATA_APPDOMAIN_ICALLS_H__*/
| -1 |
dotnet/runtime | 66,213 | [mono] Put WeakAttribute support under an ifdef | This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | lambdageek | 2022-03-04T19:50:51Z | 2022-03-08T14:08:40Z | 34880aaeb463cc17268b44aecb80dda0914cfa01 | 8342a08b8bf0ff2ffb64cd5992e2d60c8d6d63c1 | [mono] Put WeakAttribute support under an ifdef. This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | ./src/native/libs/System.IO.Compression.Native/pal_zlib.c | // Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
#include <assert.h>
#include <stdlib.h>
#include "pal_zlib.h"
#ifdef INTERNAL_ZLIB
#ifdef _WIN32
#define c_static_assert(e) static_assert((e),"")
#endif
#include <external/zlib/zlib.h>
#else
#include "pal_utilities.h"
#include <zlib.h>
#endif
c_static_assert(PAL_Z_NOFLUSH == Z_NO_FLUSH);
c_static_assert(PAL_Z_FINISH == Z_FINISH);
c_static_assert(PAL_Z_OK == Z_OK);
c_static_assert(PAL_Z_STREAMEND == Z_STREAM_END);
c_static_assert(PAL_Z_STREAMERROR == Z_STREAM_ERROR);
c_static_assert(PAL_Z_DATAERROR == Z_DATA_ERROR);
c_static_assert(PAL_Z_MEMERROR == Z_MEM_ERROR);
c_static_assert(PAL_Z_BUFERROR == Z_BUF_ERROR);
c_static_assert(PAL_Z_VERSIONERROR == Z_VERSION_ERROR);
c_static_assert(PAL_Z_NOCOMPRESSION == Z_NO_COMPRESSION);
c_static_assert(PAL_Z_BESTSPEED == Z_BEST_SPEED);
c_static_assert(PAL_Z_DEFAULTCOMPRESSION == Z_DEFAULT_COMPRESSION);
c_static_assert(PAL_Z_DEFAULTSTRATEGY == Z_DEFAULT_STRATEGY);
c_static_assert(PAL_Z_DEFLATED == Z_DEFLATED);
/*
Initializes the PAL_ZStream by creating and setting its underlying z_stream.
*/
static int32_t Init(PAL_ZStream* stream)
{
z_stream* zStream = (z_stream*)malloc(sizeof(z_stream));
stream->internalState = zStream;
if (zStream != NULL)
{
zStream->zalloc = Z_NULL;
zStream->zfree = Z_NULL;
zStream->opaque = Z_NULL;
return PAL_Z_OK;
}
else
{
return PAL_Z_MEMERROR;
}
}
/*
Frees any memory on the PAL_ZStream that was created by Init.
*/
static void End(PAL_ZStream* stream)
{
z_stream* zStream = (z_stream*)(stream->internalState);
assert(zStream != NULL);
if (zStream != NULL)
{
free(zStream);
stream->internalState = NULL;
}
}
/*
Transfers the output values from the underlying z_stream to the PAL_ZStream.
*/
static void TransferStateToPalZStream(z_stream* from, PAL_ZStream* to)
{
to->nextIn = from->next_in;
to->availIn = from->avail_in;
to->nextOut = from->next_out;
to->availOut = from->avail_out;
to->msg = from->msg;
}
/*
Transfers the input values from the PAL_ZStream to the underlying z_stream object.
*/
static void TransferStateFromPalZStream(PAL_ZStream* from, z_stream* to)
{
to->next_in = from->nextIn;
to->avail_in = from->availIn;
to->next_out = from->nextOut;
to->avail_out = from->availOut;
}
/*
Gets the current z_stream object for the specified PAL_ZStream.
This ensures any inputs are transferred from the PAL_ZStream to the underlying z_stream,
since the current values are always needed.
*/
static z_stream* GetCurrentZStream(PAL_ZStream* stream)
{
z_stream* zStream = (z_stream*)(stream->internalState);
assert(zStream != NULL);
TransferStateFromPalZStream(stream, zStream);
return zStream;
}
int32_t CompressionNative_DeflateInit2_(
PAL_ZStream* stream, int32_t level, int32_t method, int32_t windowBits, int32_t memLevel, int32_t strategy)
{
assert(stream != NULL);
int32_t result = Init(stream);
if (result == PAL_Z_OK)
{
z_stream* zStream = GetCurrentZStream(stream);
result = deflateInit2(zStream, level, method, windowBits, memLevel, strategy);
TransferStateToPalZStream(zStream, stream);
}
return result;
}
int32_t CompressionNative_Deflate(PAL_ZStream* stream, int32_t flush)
{
assert(stream != NULL);
z_stream* zStream = GetCurrentZStream(stream);
int32_t result = deflate(zStream, flush);
TransferStateToPalZStream(zStream, stream);
return result;
}
int32_t CompressionNative_DeflateReset(PAL_ZStream* stream)
{
assert(stream != NULL);
z_stream* zStream = GetCurrentZStream(stream);
int32_t result = deflateReset(zStream);
TransferStateToPalZStream(zStream, stream);
return result;
}
int32_t CompressionNative_DeflateEnd(PAL_ZStream* stream)
{
assert(stream != NULL);
z_stream* zStream = GetCurrentZStream(stream);
int32_t result = deflateEnd(zStream);
End(stream);
return result;
}
int32_t CompressionNative_InflateInit2_(PAL_ZStream* stream, int32_t windowBits)
{
assert(stream != NULL);
int32_t result = Init(stream);
if (result == PAL_Z_OK)
{
z_stream* zStream = GetCurrentZStream(stream);
result = inflateInit2(zStream, windowBits);
TransferStateToPalZStream(zStream, stream);
}
return result;
}
int32_t CompressionNative_Inflate(PAL_ZStream* stream, int32_t flush)
{
assert(stream != NULL);
z_stream* zStream = GetCurrentZStream(stream);
int32_t result = inflate(zStream, flush);
TransferStateToPalZStream(zStream, stream);
return result;
}
int32_t CompressionNative_InflateReset(PAL_ZStream* stream)
{
assert(stream != NULL);
z_stream* zStream = GetCurrentZStream(stream);
int32_t result = inflateReset(zStream);
TransferStateToPalZStream(zStream, stream);
return result;
}
int32_t CompressionNative_InflateEnd(PAL_ZStream* stream)
{
assert(stream != NULL);
z_stream* zStream = GetCurrentZStream(stream);
int32_t result = inflateEnd(zStream);
End(stream);
return result;
}
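/*
 * Usage sketch (an illustrative addition, not part of the shipped file):
 * one-shot compression of an in-memory buffer through the exported wrappers
 * above. Error handling is reduced to returning -1.
 */
static int32_t ExampleDeflateBuffer(uint8_t* input, int32_t inputLen, uint8_t* output, int32_t outputLen)
{
    PAL_ZStream stream = { 0 };

    // 15 window bits selects a zlib-wrapped stream; 8 is the customary memLevel.
    if (CompressionNative_DeflateInit2_(&stream, PAL_Z_DEFAULTCOMPRESSION, PAL_Z_DEFLATED, 15, 8, PAL_Z_DEFAULTSTRATEGY) != PAL_Z_OK)
        return -1;

    stream.nextIn = input;
    stream.availIn = (uint32_t)inputLen;
    stream.nextOut = output;
    stream.availOut = (uint32_t)outputLen;

    int32_t result = CompressionNative_Deflate(&stream, PAL_Z_FINISH);
    int32_t written = outputLen - (int32_t)stream.availOut;
    CompressionNative_DeflateEnd(&stream);

    return result == PAL_Z_STREAMEND ? written : -1;
}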
uint32_t CompressionNative_Crc32(uint32_t crc, uint8_t* buffer, int32_t len)
{
assert(buffer != NULL);
unsigned long result = crc32(crc, buffer, len);
assert(result <= UINT32_MAX);
return (uint32_t)result;
}
| // Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
#include <assert.h>
#include <stdlib.h>
#include "pal_zlib.h"
#ifdef INTERNAL_ZLIB
#ifdef _WIN32
#define c_static_assert(e) static_assert((e),"")
#endif
#include <external/zlib/zlib.h>
#else
#include "pal_utilities.h"
#include <zlib.h>
#endif
c_static_assert(PAL_Z_NOFLUSH == Z_NO_FLUSH);
c_static_assert(PAL_Z_FINISH == Z_FINISH);
c_static_assert(PAL_Z_OK == Z_OK);
c_static_assert(PAL_Z_STREAMEND == Z_STREAM_END);
c_static_assert(PAL_Z_STREAMERROR == Z_STREAM_ERROR);
c_static_assert(PAL_Z_DATAERROR == Z_DATA_ERROR);
c_static_assert(PAL_Z_MEMERROR == Z_MEM_ERROR);
c_static_assert(PAL_Z_BUFERROR == Z_BUF_ERROR);
c_static_assert(PAL_Z_VERSIONERROR == Z_VERSION_ERROR);
c_static_assert(PAL_Z_NOCOMPRESSION == Z_NO_COMPRESSION);
c_static_assert(PAL_Z_BESTSPEED == Z_BEST_SPEED);
c_static_assert(PAL_Z_DEFAULTCOMPRESSION == Z_DEFAULT_COMPRESSION);
c_static_assert(PAL_Z_DEFAULTSTRATEGY == Z_DEFAULT_STRATEGY);
c_static_assert(PAL_Z_DEFLATED == Z_DEFLATED);
/*
Initializes the PAL_ZStream by creating and setting its underlying z_stream.
*/
static int32_t Init(PAL_ZStream* stream)
{
z_stream* zStream = (z_stream*)malloc(sizeof(z_stream));
stream->internalState = zStream;
if (zStream != NULL)
{
zStream->zalloc = Z_NULL;
zStream->zfree = Z_NULL;
zStream->opaque = Z_NULL;
return PAL_Z_OK;
}
else
{
return PAL_Z_MEMERROR;
}
}
/*
Frees any memory on the PAL_ZStream that was created by Init.
*/
static void End(PAL_ZStream* stream)
{
z_stream* zStream = (z_stream*)(stream->internalState);
assert(zStream != NULL);
if (zStream != NULL)
{
free(zStream);
stream->internalState = NULL;
}
}
/*
Transfers the output values from the underlying z_stream to the PAL_ZStream.
*/
static void TransferStateToPalZStream(z_stream* from, PAL_ZStream* to)
{
to->nextIn = from->next_in;
to->availIn = from->avail_in;
to->nextOut = from->next_out;
to->availOut = from->avail_out;
to->msg = from->msg;
}
/*
Transfers the input values from the PAL_ZStream to the underlying z_stream object.
*/
static void TransferStateFromPalZStream(PAL_ZStream* from, z_stream* to)
{
to->next_in = from->nextIn;
to->avail_in = from->availIn;
to->next_out = from->nextOut;
to->avail_out = from->availOut;
}
/*
Gets the current z_stream object for the specified PAL_ZStream.
This ensures any inputs are transferred from the PAL_ZStream to the underlying z_stream,
since the current values are always needed.
*/
static z_stream* GetCurrentZStream(PAL_ZStream* stream)
{
z_stream* zStream = (z_stream*)(stream->internalState);
assert(zStream != NULL);
TransferStateFromPalZStream(stream, zStream);
return zStream;
}
int32_t CompressionNative_DeflateInit2_(
PAL_ZStream* stream, int32_t level, int32_t method, int32_t windowBits, int32_t memLevel, int32_t strategy)
{
assert(stream != NULL);
int32_t result = Init(stream);
if (result == PAL_Z_OK)
{
z_stream* zStream = GetCurrentZStream(stream);
result = deflateInit2(zStream, level, method, windowBits, memLevel, strategy);
TransferStateToPalZStream(zStream, stream);
}
return result;
}
int32_t CompressionNative_Deflate(PAL_ZStream* stream, int32_t flush)
{
assert(stream != NULL);
z_stream* zStream = GetCurrentZStream(stream);
int32_t result = deflate(zStream, flush);
TransferStateToPalZStream(zStream, stream);
return result;
}
int32_t CompressionNative_DeflateReset(PAL_ZStream* stream)
{
assert(stream != NULL);
z_stream* zStream = GetCurrentZStream(stream);
int32_t result = deflateReset(zStream);
TransferStateToPalZStream(zStream, stream);
return result;
}
int32_t CompressionNative_DeflateEnd(PAL_ZStream* stream)
{
assert(stream != NULL);
z_stream* zStream = GetCurrentZStream(stream);
int32_t result = deflateEnd(zStream);
End(stream);
return result;
}
int32_t CompressionNative_InflateInit2_(PAL_ZStream* stream, int32_t windowBits)
{
assert(stream != NULL);
int32_t result = Init(stream);
if (result == PAL_Z_OK)
{
z_stream* zStream = GetCurrentZStream(stream);
result = inflateInit2(zStream, windowBits);
TransferStateToPalZStream(zStream, stream);
}
return result;
}
int32_t CompressionNative_Inflate(PAL_ZStream* stream, int32_t flush)
{
assert(stream != NULL);
z_stream* zStream = GetCurrentZStream(stream);
int32_t result = inflate(zStream, flush);
TransferStateToPalZStream(zStream, stream);
return result;
}
int32_t CompressionNative_InflateReset(PAL_ZStream* stream)
{
assert(stream != NULL);
z_stream* zStream = GetCurrentZStream(stream);
int32_t result = inflateReset(zStream);
TransferStateToPalZStream(zStream, stream);
return result;
}
int32_t CompressionNative_InflateEnd(PAL_ZStream* stream)
{
assert(stream != NULL);
z_stream* zStream = GetCurrentZStream(stream);
int32_t result = inflateEnd(zStream);
End(stream);
return result;
}
uint32_t CompressionNative_Crc32(uint32_t crc, uint8_t* buffer, int32_t len)
{
assert(buffer != NULL);
unsigned long result = crc32(crc, buffer, len);
assert(result <= UINT32_MAX);
return (uint32_t)result;
}
| -1 |
dotnet/runtime | 66,213 | [mono] Put WeakAttribute support under an ifdef | This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | lambdageek | 2022-03-04T19:50:51Z | 2022-03-08T14:08:40Z | 34880aaeb463cc17268b44aecb80dda0914cfa01 | 8342a08b8bf0ff2ffb64cd5992e2d60c8d6d63c1 | [mono] Put WeakAttribute support under an ifdef. This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | ./src/coreclr/pal/src/libunwind/src/mi/Lset_reg.c | #define UNW_LOCAL_ONLY
#include <libunwind.h>
#if defined(UNW_LOCAL_ONLY) && !defined(UNW_REMOTE_ONLY)
#include "Gset_reg.c"
#endif
| #define UNW_LOCAL_ONLY
#include <libunwind.h>
#if defined(UNW_LOCAL_ONLY) && !defined(UNW_REMOTE_ONLY)
#include "Gset_reg.c"
#endif
| -1 |
dotnet/runtime | 66,213 | [mono] Put WeakAttribute support under an ifdef | This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | lambdageek | 2022-03-04T19:50:51Z | 2022-03-08T14:08:40Z | 34880aaeb463cc17268b44aecb80dda0914cfa01 | 8342a08b8bf0ff2ffb64cd5992e2d60c8d6d63c1 | [mono] Put WeakAttribute support under an ifdef. This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | ./src/coreclr/pal/src/libunwind/src/ia64/Ginit_local.c | /* libunwind - a platform-independent unwind library
Copyright (C) 2001-2003, 2005 Hewlett-Packard Co
Contributed by David Mosberger-Tang <[email protected]>
This file is part of libunwind.
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. */
#include "init.h"
#include "unwind_i.h"
#ifdef UNW_REMOTE_ONLY
int
unw_init_local (unw_cursor_t *cursor, unw_context_t *uc)
{
return -UNW_EINVAL;
}
#else /* !UNW_REMOTE_ONLY */
static inline void
set_as_arg (struct cursor *c, unw_context_t *uc)
{
#if defined(__linux__) && defined(__KERNEL__)
c->task = current;
c->as_arg = &uc->sw;
#else
c->as_arg = uc;
#endif
}
static inline int
get_initial_stack_pointers (struct cursor *c, unw_context_t *uc,
unw_word_t *sp, unw_word_t *bsp)
{
#if defined(__linux__)
unw_word_t sol, bspstore;
#ifdef __KERNEL__
sol = (uc->sw.ar_pfs >> 7) & 0x7f;
bspstore = uc->sw.ar_bspstore;
*sp = uc->ksp;
# else
sol = (uc->uc_mcontext.sc_ar_pfs >> 7) & 0x7f;
bspstore = uc->uc_mcontext.sc_ar_bsp;
*sp = uc->uc_mcontext.sc_gr[12];
# endif
*bsp = rse_skip_regs (bspstore, -sol);
#elif defined(__hpux)
int ret;
if ((ret = ia64_get (c, IA64_REG_LOC (c, UNW_IA64_GR + 12), sp)) < 0
|| (ret = ia64_get (c, IA64_REG_LOC (c, UNW_IA64_AR_BSP), bsp)) < 0)
return ret;
#else
# error Fix me.
#endif
return 0;
}
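/*
 * Usage sketch (added for exposition; not part of the original source): the
 * canonical local unwind loop that callers build on top of unw_init_local
 * below, using only public libunwind entry points.
 */
static inline void
example_walk_stack (void)
{
  unw_cursor_t cursor;
  unw_context_t uc;
  unw_word_t ip;

  unw_getcontext (&uc); /* snapshot the current register state */
  unw_init_local (&cursor, &uc); /* the initializer defined below */
  while (unw_step (&cursor) > 0)
    {
      unw_get_reg (&cursor, UNW_REG_IP, &ip);
      Debug (4, "frame ip=%lx\n", (long) ip);
    }
}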
int
unw_init_local (unw_cursor_t *cursor, unw_context_t *uc)
{
struct cursor *c = (struct cursor *) cursor;
unw_word_t sp, bsp;
int ret;
if (!atomic_load(&tdep_init_done))
tdep_init ();
Debug (1, "(cursor=%p)\n", c);
c->as = unw_local_addr_space;
set_as_arg (c, uc);
if ((ret = get_initial_stack_pointers (c, uc, &sp, &bsp)) < 0)
return ret;
Debug (4, "initial bsp=%lx, sp=%lx\n", bsp, sp);
if ((ret = common_init (c, sp, bsp)) < 0)
return ret;
#ifdef __hpux
/* On HP-UX, the context created by getcontext() points to the
getcontext() system call stub. Step over it: */
ret = unw_step (cursor);
#endif
return ret;
}
#endif /* !UNW_REMOTE_ONLY */
| /* libunwind - a platform-independent unwind library
Copyright (C) 2001-2003, 2005 Hewlett-Packard Co
Contributed by David Mosberger-Tang <[email protected]>
This file is part of libunwind.
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. */
#include "init.h"
#include "unwind_i.h"
#ifdef UNW_REMOTE_ONLY
int
unw_init_local (unw_cursor_t *cursor, unw_context_t *uc)
{
return -UNW_EINVAL;
}
#else /* !UNW_REMOTE_ONLY */
static inline void
set_as_arg (struct cursor *c, unw_context_t *uc)
{
#if defined(__linux__) && defined(__KERNEL__)
c->task = current;
c->as_arg = &uc->sw;
#else
c->as_arg = uc;
#endif
}
static inline int
get_initial_stack_pointers (struct cursor *c, unw_context_t *uc,
unw_word_t *sp, unw_word_t *bsp)
{
#if defined(__linux__)
unw_word_t sol, bspstore;
#ifdef __KERNEL__
sol = (uc->sw.ar_pfs >> 7) & 0x7f;
bspstore = uc->sw.ar_bspstore;
*sp = uc->ksp;
# else
sol = (uc->uc_mcontext.sc_ar_pfs >> 7) & 0x7f;
bspstore = uc->uc_mcontext.sc_ar_bsp;
*sp = uc->uc_mcontext.sc_gr[12];
# endif
*bsp = rse_skip_regs (bspstore, -sol);
#elif defined(__hpux)
int ret;
if ((ret = ia64_get (c, IA64_REG_LOC (c, UNW_IA64_GR + 12), sp)) < 0
|| (ret = ia64_get (c, IA64_REG_LOC (c, UNW_IA64_AR_BSP), bsp)) < 0)
return ret;
#else
# error Fix me.
#endif
return 0;
}
int
unw_init_local (unw_cursor_t *cursor, unw_context_t *uc)
{
struct cursor *c = (struct cursor *) cursor;
unw_word_t sp, bsp;
int ret;
if (!atomic_load(&tdep_init_done))
tdep_init ();
Debug (1, "(cursor=%p)\n", c);
c->as = unw_local_addr_space;
set_as_arg (c, uc);
if ((ret = get_initial_stack_pointers (c, uc, &sp, &bsp)) < 0)
return ret;
Debug (4, "initial bsp=%lx, sp=%lx\n", bsp, sp);
if ((ret = common_init (c, sp, bsp)) < 0)
return ret;
#ifdef __hpux
/* On HP-UX, the context created by getcontext() points to the
getcontext() system call stub. Step over it: */
ret = unw_step (cursor);
#endif
return ret;
}
#endif /* !UNW_REMOTE_ONLY */
| -1 |
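The row above describes gating dead feature code behind a compile-time switch rather than deleting it outright. A minimal C sketch of that ifdef-gating pattern follows; the macro name ENABLE_WEAK_ATTRIBUTE, the class_info type, and both functions are hypothetical stand-ins for illustration, not the actual Mono identifiers.

/* Stand-in for the class metadata; only the flag relevant here is shown. */
typedef struct { unsigned has_weak_fields : 1; } class_info;

#ifdef ENABLE_WEAK_ATTRIBUTE
/* Compiled only when the feature is enabled. */
static void scan_weak_fields (class_info *k)
{
  /* a real implementation would record which fields carry [Weak] */
  k->has_weak_fields = 0;
}
#endif

static void init_class (class_info *k)
{
#ifdef ENABLE_WEAK_ATTRIBUTE
  scan_weak_fields (k);
#else
  /* the bit stays in the layout but is always 0, mirroring the PR notes */
  k->has_weak_fields = 0;
#endif
}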
dotnet/runtime | 66,213 | [mono] Put WeakAttribute support under an ifdef | This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc. are still there. It seemed unnecessary to perturb the AOT format. | lambdageek | 2022-03-04T19:50:51Z | 2022-03-08T14:08:40Z | 34880aaeb463cc17268b44aecb80dda0914cfa01 | 8342a08b8bf0ff2ffb64cd5992e2d60c8d6d63c1 | [mono] Put WeakAttribute support under an ifdef. This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc. are still there. It seemed unnecessary to perturb the AOT format. | ./src/coreclr/gc/handletablepriv.h | // Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
/*
* Generational GC handle manager. Internal Implementation Header.
*
* Shared defines and declarations for handle table implementation.
*
*
*/
#include "common.h"
#include "handletable.h"
/*--------------------------------------------------------------------------*/
//<TODO>@TODO: find a home for this in a project-level header file</TODO>
#define BITS_PER_BYTE (8)
/*--------------------------------------------------------------------------*/
/****************************************************************************
*
* MAJOR TABLE DEFINITIONS THAT CHANGE DEPENDING ON THE WEATHER
*
****************************************************************************/
// 64k reserved per segment with 4k as header.
#define HANDLE_SEGMENT_SIZE (0x10000) // MUST be a power of 2 (and currently must be 64K due to VirtualAlloc semantics)
#define HANDLE_HEADER_SIZE (0x1000) // SHOULD be <= OS page size
#define HANDLE_SEGMENT_ALIGNMENT HANDLE_SEGMENT_SIZE
#if !BIGENDIAN
// little-endian write barrier mask manipulation
#define GEN_CLUMP_0_MASK (0x000000FF)
#define NEXT_CLUMP_IN_MASK(dw) ((dw) >> BITS_PER_BYTE)
#else
// big-endian write barrier mask manipulation
#define GEN_CLUMP_0_MASK (0xFF000000)
#define NEXT_CLUMP_IN_MASK(dw) ((dw) << BITS_PER_BYTE)
#endif
// if the above numbers change, then these will likely change as well
#define HANDLE_HANDLES_PER_CLUMP (16) // segment write-barrier granularity
#define HANDLE_HANDLES_PER_BLOCK (64) // segment suballocation granularity
#define HANDLE_OPTIMIZE_FOR_64_HANDLE_BLOCKS // flag for certain optimizations
// number of types allowed for public callers
#define HANDLE_MAX_PUBLIC_TYPES (HANDLE_MAX_INTERNAL_TYPES - 1) // reserve one internal type
// internal block types
#define HNDTYPE_INTERNAL_DATABLOCK (HANDLE_MAX_INTERNAL_TYPES - 1) // reserve last type for data blocks
// max number of generations to support statistics on
#define MAXSTATGEN (5)
/*--------------------------------------------------------------------------*/
/****************************************************************************
*
* MORE DEFINITIONS
*
****************************************************************************/
// fast handle-to-segment mapping
#define HANDLE_SEGMENT_CONTENT_MASK (HANDLE_SEGMENT_SIZE - 1)
#define HANDLE_SEGMENT_ALIGN_MASK (~HANDLE_SEGMENT_CONTENT_MASK)
// table layout metrics
#define HANDLE_SIZE sizeof(_UNCHECKED_OBJECTREF)
#define HANDLE_HANDLES_PER_SEGMENT ((HANDLE_SEGMENT_SIZE - HANDLE_HEADER_SIZE) / HANDLE_SIZE)
#define HANDLE_BLOCKS_PER_SEGMENT (HANDLE_HANDLES_PER_SEGMENT / HANDLE_HANDLES_PER_BLOCK)
#define HANDLE_CLUMPS_PER_SEGMENT (HANDLE_HANDLES_PER_SEGMENT / HANDLE_HANDLES_PER_CLUMP)
#define HANDLE_CLUMPS_PER_BLOCK (HANDLE_HANDLES_PER_BLOCK / HANDLE_HANDLES_PER_CLUMP)
#define HANDLE_BYTES_PER_BLOCK (HANDLE_HANDLES_PER_BLOCK * HANDLE_SIZE)
#define HANDLE_HANDLES_PER_MASK (sizeof(uint32_t) * BITS_PER_BYTE)
#define HANDLE_MASKS_PER_SEGMENT (HANDLE_HANDLES_PER_SEGMENT / HANDLE_HANDLES_PER_MASK)
#define HANDLE_MASKS_PER_BLOCK (HANDLE_HANDLES_PER_BLOCK / HANDLE_HANDLES_PER_MASK)
#define HANDLE_CLUMPS_PER_MASK (HANDLE_HANDLES_PER_MASK / HANDLE_HANDLES_PER_CLUMP)
// We use this relation to check for free mask per block.
C_ASSERT (HANDLE_HANDLES_PER_MASK * 2 == HANDLE_HANDLES_PER_BLOCK);
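// Worked example, assuming a 64-bit build where _UNCHECKED_OBJECTREF (and so
// HANDLE_SIZE) is 8 bytes -- illustrative arithmetic, not from the original header:
//   HANDLE_HANDLES_PER_SEGMENT = (0x10000 - 0x1000) / 8 = 7680
//   HANDLE_BLOCKS_PER_SEGMENT = 7680 / 64 = 120
//   HANDLE_CLUMPS_PER_SEGMENT = 7680 / 16 = 480
//   HANDLE_MASKS_PER_SEGMENT = 7680 / 32 = 240
// On a 32-bit build (HANDLE_SIZE == 4) the per-segment handle count doubles to 15360.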
// cache layout metrics
#define HANDLE_CACHE_TYPE_SIZE 128 // 128 == 63 handles per bank
#define HANDLES_PER_CACHE_BANK ((HANDLE_CACHE_TYPE_SIZE / 2) - 1)
// cache policy defines
#define REBALANCE_TOLERANCE (HANDLES_PER_CACHE_BANK / 3)
#define REBALANCE_LOWATER_MARK (HANDLES_PER_CACHE_BANK - REBALANCE_TOLERANCE)
#define REBALANCE_HIWATER_MARK (HANDLES_PER_CACHE_BANK + REBALANCE_TOLERANCE)
// bulk alloc policy defines
#define SMALL_ALLOC_COUNT (HANDLES_PER_CACHE_BANK / 10)
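// With the defaults above: HANDLES_PER_CACHE_BANK == 63, REBALANCE_TOLERANCE == 21,
// giving a rebalance band of [42, 84] handles, and SMALL_ALLOC_COUNT == 6.
// (Worked numbers for illustration; they follow from HANDLE_CACHE_TYPE_SIZE == 128.)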
// misc constants
#define MASK_FULL (0)
#define MASK_EMPTY (0xFFFFFFFF)
#define MASK_LOBYTE (0x000000FF)
#define TYPE_INVALID ((uint8_t)0xFF)
#define BLOCK_INVALID ((uint8_t)0xFF)
/*--------------------------------------------------------------------------*/
/****************************************************************************
*
* CORE TABLE LAYOUT STRUCTURES
*
****************************************************************************/
/*
* we need byte packing for the handle table layout to work
*/
#pragma pack(push,1)
/*
* Table Segment Header
*
* Defines the layout for a segment's header data.
*/
struct _TableSegmentHeader
{
/*
* Write Barrier Generation Numbers
*
* Each slot holds four bytes. Each byte corresponds to a clump of handles.
* The value of the byte corresponds to the lowest possible generation that a
* handle in that clump could point into.
*
* WARNING: Although this array is logically organized as a uint8_t[], it is sometimes
* accessed as uint32_t[] when processing bytes in parallel. Code which treats the
* array as an array of ULONG32s must handle big/little endian issues itself.
*/
uint8_t rgGeneration[HANDLE_BLOCKS_PER_SEGMENT * sizeof(uint32_t) / sizeof(uint8_t)];
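/*
 * Example reading of the scheme above (illustrative, not from the scanning
 * code): rgGeneration[clump] == 2 means no handle in that clump can point
 * at an object younger than generation 2, so an ephemeral scan of
 * generations 0-1 may skip the clump entirely.
 */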
/*
* Block Allocation Chains
*
* Each slot indexes the next block in an allocation chain.
*/
uint8_t rgAllocation[HANDLE_BLOCKS_PER_SEGMENT];
/*
* Block Free Masks
*
* Masks - 1 bit for every handle in the segment.
*/
uint32_t rgFreeMask[HANDLE_MASKS_PER_SEGMENT];
/*
* Block Handle Types
*
* Each slot holds the handle type of the associated block.
*/
uint8_t rgBlockType[HANDLE_BLOCKS_PER_SEGMENT];
/*
* Block User Data Map
*
* Each slot holds the index of a user data block (if any) for the associated block.
*/
uint8_t rgUserData[HANDLE_BLOCKS_PER_SEGMENT];
/*
* Block Lock Count
*
* Each slot holds a lock count for its associated block.
* Locked blocks are not freed, even when empty.
*/
uint8_t rgLocks[HANDLE_BLOCKS_PER_SEGMENT];
/*
* Allocation Chain Tails
*
* Each slot holds the tail block index for an allocation chain.
*/
uint8_t rgTail[HANDLE_MAX_INTERNAL_TYPES];
/*
* Allocation Chain Hints
*
* Each slot holds a hint block index for an allocation chain.
*/
uint8_t rgHint[HANDLE_MAX_INTERNAL_TYPES];
/*
* Free Count
*
* Each slot holds the number of free handles in an allocation chain.
*/
uint32_t rgFreeCount[HANDLE_MAX_INTERNAL_TYPES];
/*
* Next Segment
*
* Points to the next segment in the chain (if we ran out of space in this one).
*/
#ifdef DACCESS_COMPILE
TADDR pNextSegment;
#else
struct TableSegment *pNextSegment;
#endif // DACCESS_COMPILE
/*
* Handle Table
*
* Points to owning handle table for this table segment.
*/
PTR_HandleTable pHandleTable;
/*
* Flags
*/
uint8_t fResortChains : 1; // allocation chains need sorting
uint8_t fNeedsScavenging : 1; // free blocks need scavenging
uint8_t _fUnused : 6; // unused
/*
* Free List Head
*
* Index of the first free block in the segment.
*/
uint8_t bFreeList;
/*
* Empty Line
*
* Index of the first KNOWN block of the last group of unused blocks in the segment.
*/
uint8_t bEmptyLine;
/*
* Commit Line
*
* Index of the first uncommitted block in the segment.
*/
uint8_t bCommitLine;
/*
* Decommit Line
*
* Index of the first block in the highest committed page of the segment.
*/
uint8_t bDecommitLine;
/*
* Sequence
*
* Indicates the segment sequence number.
*/
uint8_t bSequence;
};
typedef DPTR(struct _TableSegmentHeader) PTR__TableSegmentHeader;
typedef DPTR(uintptr_t) PTR_uintptr_t;
// The handle table is large and may not be entirely mapped. That's one reason for splitting out the table
// segment and the header as two separate classes. In DAC builds, we generally need only a single element from
// the table segment, so we can use the DAC to retrieve just the information we require.
/*
* Table Segment
*
* Defines the layout for a handle table segment.
*/
struct TableSegment : public _TableSegmentHeader
{
/*
* Filler
*/
uint8_t rgUnused[HANDLE_HEADER_SIZE - sizeof(_TableSegmentHeader)];
/*
* Handles
*/
_UNCHECKED_OBJECTREF rgValue[HANDLE_HANDLES_PER_SEGMENT];
#ifdef DACCESS_COMPILE
static uint32_t DacSize(TADDR addr);
#endif
};
typedef SPTR(struct TableSegment) PTR_TableSegment;
/*
* restore default packing
*/
#pragma pack(pop)
/*
* Handle Type Cache
*
* Defines the layout of a per-type handle cache.
*/
struct HandleTypeCache
{
/*
* reserve bank
*/
OBJECTHANDLE rgReserveBank[HANDLES_PER_CACHE_BANK];
/*
* index of next available handle slot in the reserve bank
*/
int32_t lReserveIndex;
/*---------------------------------------------------------------------------------
* N.B. this structure is split up this way so that when HANDLES_PER_CACHE_BANK is
* large enough, lReserveIndex and lFreeIndex will reside in different cache lines
*--------------------------------------------------------------------------------*/
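/*
 * Worked example (illustrative): with HANDLE_CACHE_TYPE_SIZE == 128 and
 * pointer-sized OBJECTHANDLEs on a 64-bit build, rgReserveBank spans
 * 63 * 8 = 504 bytes, so lReserveIndex and lFreeIndex sit far more than a
 * typical 64-byte cache line apart.
 */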
/*
* free bank
*/
OBJECTHANDLE rgFreeBank[HANDLES_PER_CACHE_BANK];
/*
* index of next empty slot in the free bank
*/
int32_t lFreeIndex;
};
/*---------------------------------------------------------------------------*/
/****************************************************************************
*
* SCANNING PROTOTYPES
*
****************************************************************************/
/*
* ScanCallbackInfo
*
* Carries parameters for per-segment and per-block scanning callbacks.
*
*/
struct ScanCallbackInfo
{
PTR_TableSegment pCurrentSegment; // segment we are presently scanning, if any
uint32_t uFlags; // HNDGCF_* flags
BOOL fEnumUserData; // whether user data is being enumerated as well
HANDLESCANPROC pfnScan; // per-handle scan callback
uintptr_t param1; // callback param 1
uintptr_t param2; // callback param 2
uint32_t dwAgeMask; // generation mask for ephemeral GCs
#ifdef _DEBUG
uint32_t DEBUG_BlocksScanned;
uint32_t DEBUG_BlocksScannedNonTrivially;
uint32_t DEBUG_HandleSlotsScanned;
uint32_t DEBUG_HandlesActuallyScanned;
#endif
};
/*
* BLOCKSCANPROC
*
* Prototype for callbacks that implement per-block scanning logic.
*
*/
typedef void (CALLBACK *BLOCKSCANPROC)(PTR_TableSegment pSegment, uint32_t uBlock, uint32_t uCount, ScanCallbackInfo *pInfo);
/*
* SEGMENTITERATOR
*
* Prototype for callbacks that implement per-segment scanning logic.
*
*/
typedef PTR_TableSegment (CALLBACK *SEGMENTITERATOR)(PTR_HandleTable pTable, PTR_TableSegment pPrevSegment, CrstHolderWithState *pCrstHolder);
/*
* TABLESCANPROC
*
* Prototype for TableScanHandles and xxxTableScanHandlesAsync.
*
*/
typedef void (CALLBACK *TABLESCANPROC)(PTR_HandleTable pTable,
const uint32_t *puType, uint32_t uTypeCount,
SEGMENTITERATOR pfnSegmentIterator,
BLOCKSCANPROC pfnBlockHandler,
ScanCallbackInfo *pInfo,
CrstHolderWithState *pCrstHolder);
/*--------------------------------------------------------------------------*/
/****************************************************************************
*
* ADDITIONAL TABLE STRUCTURES
*
****************************************************************************/
/*
* AsyncScanInfo
*
* Tracks the state of an async scan for a handle table.
*
*/
struct AsyncScanInfo
{
/*
* Underlying Callback Info
*
* Specifies callback info for the underlying block handler.
*/
struct ScanCallbackInfo *pCallbackInfo;
/*
* Underlying Segment Iterator
*
* Specifies the segment iterator to be used during async scanning.
*/
SEGMENTITERATOR pfnSegmentIterator;
/*
* Underlying Block Handler
*
* Specifies the block handler to be used during async scanning.
*/
BLOCKSCANPROC pfnBlockHandler;
/*
* Scan Queue
*
* Specifies the nodes to be processed asynchronously.
*/
struct ScanQNode *pScanQueue;
/*
* Queue Tail
*
* Specifies the tail node in the queue, or NULL if the queue is empty.
*/
struct ScanQNode *pQueueTail;
};
/*
* Handle Table
*
* Defines the layout of a handle table object.
*/
#ifdef _MSC_VER
#pragma warning(push)
#pragma warning(disable : 4200 ) // zero-sized array
#endif
struct HandleTable
{
/*
* flags describing handle attributes
*
* N.B. this is at offset 0 due to frequent access by cache free codepath
*/
uint32_t rgTypeFlags[HANDLE_MAX_INTERNAL_TYPES];
/*
* lock for this table
*/
CrstStatic Lock;
/*
* number of types this table supports
*/
uint32_t uTypeCount;
/*
* number of handles owned by this table that are marked as "used"
* (this includes the handles residing in rgMainCache and rgQuickCache)
*/
uint32_t dwCount;
/*
* head of segment list for this table
*/
PTR_TableSegment pSegmentList;
/*
* information on current async scan (if any)
*/
AsyncScanInfo *pAsyncScanInfo;
/*
* per-table user info
*/
uint32_t uTableIndex;
/*
* one-level per-type 'quick' handle cache
*/
OBJECTHANDLE rgQuickCache[HANDLE_MAX_INTERNAL_TYPES]; // interlocked ops used here
/*
* debug-only statistics
*/
#ifdef _DEBUG
int _DEBUG_iMaxGen;
int64_t _DEBUG_TotalBlocksScanned [MAXSTATGEN];
int64_t _DEBUG_TotalBlocksScannedNonTrivially[MAXSTATGEN];
int64_t _DEBUG_TotalHandleSlotsScanned [MAXSTATGEN];
int64_t _DEBUG_TotalHandlesActuallyScanned [MAXSTATGEN];
#endif
/*
* primary per-type handle cache
*/
HandleTypeCache rgMainCache[0]; // interlocked ops used here
};
#ifdef _MSC_VER
#pragma warning(pop)
#endif
/*--------------------------------------------------------------------------*/
/****************************************************************************
*
* HELPERS
*
****************************************************************************/
/*
* A 32/64 comparison callback
*<TODO>
* @TODO: move/merge into common util file
*</TODO>
*/
typedef int (*PFNCOMPARE)(uintptr_t p, uintptr_t q);
/*
* A 32/64 neutral quicksort
*<TODO>
* @TODO: move/merge into common util file
*</TODO>
*/
void QuickSort(uintptr_t *pData, int left, int right, PFNCOMPARE pfnCompare);
/*
* CompareHandlesByFreeOrder
*
* Returns:
* <0 - handle P should be freed before handle Q
* =0 - handles are equivalent for free order purposes
* >0 - handle Q should be freed before handle P
*
*/
int CompareHandlesByFreeOrder(uintptr_t p, uintptr_t q);
/*--------------------------------------------------------------------------*/
/****************************************************************************
*
* CORE TABLE MANAGEMENT
*
****************************************************************************/
/*
* TypeHasUserData
*
* Determines whether a given handle type has user data.
*
*/
__inline BOOL TypeHasUserData(HandleTable *pTable, uint32_t uType)
{
LIMITED_METHOD_CONTRACT;
// sanity
_ASSERTE(uType < HANDLE_MAX_INTERNAL_TYPES);
// consult the type flags
return (pTable->rgTypeFlags[uType] & HNDF_EXTRAINFO);
}
/*
* TableCanFreeSegmentNow
*
* Determines if it is OK to free the specified segment at this time.
*
*/
BOOL TableCanFreeSegmentNow(HandleTable *pTable, TableSegment *pSegment);
/*
* BlockIsLocked
*
* Determines if the lock count for the specified block is currently non-zero.
*
*/
__inline BOOL BlockIsLocked(TableSegment *pSegment, uint32_t uBlock)
{
LIMITED_METHOD_CONTRACT;
// sanity
_ASSERTE(uBlock < HANDLE_BLOCKS_PER_SEGMENT);
// fetch the lock count and compare it to zero
return (pSegment->rgLocks[uBlock] != 0);
}
/*
* BlockLock
*
* Increases the lock count for a block.
*
*/
__inline void BlockLock(TableSegment *pSegment, uint32_t uBlock)
{
LIMITED_METHOD_CONTRACT;
// fetch the old lock count
uint8_t bLocks = pSegment->rgLocks[uBlock];
// assert if we are about to trash the count
_ASSERTE(bLocks < 0xFF);
// store the incremented lock count
pSegment->rgLocks[uBlock] = bLocks + 1;
}
/*
* BlockUnlock
*
* Decreases the lock count for a block.
*
*/
__inline void BlockUnlock(TableSegment *pSegment, uint32_t uBlock)
{
LIMITED_METHOD_CONTRACT;
// fetch the old lock count
uint8_t bLocks = pSegment->rgLocks[uBlock];
// assert if we are about to trash the count
_ASSERTE(bLocks > 0);
// store the decremented lock count
pSegment->rgLocks[uBlock] = bLocks - 1;
}
/*
* BlockFetchUserDataPointer
*
* Gets the user data pointer for the first handle in a block.
*
*/
PTR_uintptr_t BlockFetchUserDataPointer(PTR__TableSegmentHeader pSegment, uint32_t uBlock, BOOL fAssertOnError);
/*
* HandleValidateAndFetchUserDataPointer
*
* Gets the user data pointer for a handle.
* ASSERTs and returns NULL if handle is not of the expected type.
*
*/
uintptr_t *HandleValidateAndFetchUserDataPointer(OBJECTHANDLE handle, uint32_t uTypeExpected);
/*
* HandleQuickFetchUserDataPointer
*
* Gets the user data pointer for a handle.
* Less validation is performed.
*
*/
PTR_uintptr_t HandleQuickFetchUserDataPointer(OBJECTHANDLE handle);
/*
* HandleQuickSetUserData
*
* Stores user data with a handle.
* Less validation is performed.
*
*/
void HandleQuickSetUserData(OBJECTHANDLE handle, uintptr_t lUserData);
/*
* HandleFetchType
*
* Computes the type index for a given handle.
*
*/
uint32_t HandleFetchType(OBJECTHANDLE handle);
/*
* HandleFetchHandleTable
*
* Returns the containing handle table of a given handle.
*
*/
PTR_HandleTable HandleFetchHandleTable(OBJECTHANDLE handle);
/*
* SegmentAlloc
*
* Allocates a new segment.
*
*/
TableSegment *SegmentAlloc(HandleTable *pTable);
/*
* SegmentFree
*
* Frees the specified segment.
*
*/
void SegmentFree(TableSegment *pSegment);
/*
* Check if a handle is part of a HandleTable
*/
BOOL TableContainHandle(HandleTable *pTable, OBJECTHANDLE handle);
/*
* SegmentRemoveFreeBlocks
*
* Removes a block from a block list in a segment. The block is returned to
* the segment's free list.
*
*/
void SegmentRemoveFreeBlocks(TableSegment *pSegment, uint32_t uType);
/*
* SegmentResortChains
*
* Sorts the block chains for optimal scanning order.
* Sorts the free list to combat fragmentation.
*
*/
void SegmentResortChains(TableSegment *pSegment);
/*
* DoesSegmentNeedsToTrimExcessPages
*
* Checks to see if any pages can be decommitted from the segment.
*
*/
BOOL DoesSegmentNeedsToTrimExcessPages(TableSegment *pSegment);
/*
* SegmentTrimExcessPages
*
* Checks to see if any pages can be decommitted from the segment.
* In case there are any unused pages, it goes and decommits them.
*
*/
void SegmentTrimExcessPages(TableSegment *pSegment);
/*
* TableAllocBulkHandles
*
* Attempts to allocate the requested number of handles of the specified type.
*
* Returns the number of handles that were actually allocated. This is always
* the same as the number of handles requested except in out-of-memory conditions,
* in which case it is the number of handles that were successfully allocated.
*
*/
uint32_t TableAllocBulkHandles(HandleTable *pTable, uint32_t uType, OBJECTHANDLE *pHandleBase, uint32_t uCount);
/*
* TableFreeBulkPreparedHandles
*
* Frees an array of handles of the specified type.
*
* This routine is optimized for a sorted array of handles but will accept any order.
*
*/
void TableFreeBulkPreparedHandles(HandleTable *pTable, uint32_t uType, OBJECTHANDLE *pHandleBase, uint32_t uCount);
/*
* TableFreeBulkUnpreparedHandles
*
* Frees an array of handles of the specified type by preparing them and calling TableFreeBulkPreparedHandles.
*
*/
void TableFreeBulkUnpreparedHandles(HandleTable *pTable, uint32_t uType, const OBJECTHANDLE *pHandles, uint32_t uCount);
/*--------------------------------------------------------------------------*/
/****************************************************************************
*
* HANDLE CACHE
*
****************************************************************************/
/*
* TableAllocSingleHandleFromCache
*
* Gets a single handle of the specified type from the handle table by
* trying to fetch it from the reserve cache for that handle type. If the
* reserve cache is empty, this routine calls TableCacheMissOnAlloc.
*
*/
OBJECTHANDLE TableAllocSingleHandleFromCache(HandleTable *pTable, uint32_t uType);
/*
* TableFreeSingleHandleToCache
*
* Returns a single handle of the specified type to the handle table
* by trying to store it in the free cache for that handle type. If the
* free cache is full, this routine calls TableCacheMissOnFree.
*
*/
void TableFreeSingleHandleToCache(HandleTable *pTable, uint32_t uType, OBJECTHANDLE handle);
/*
* TableAllocHandlesFromCache
*
* Allocates multiple handles of the specified type by repeatedly
* calling TableAllocSingleHandleFromCache.
*
*/
uint32_t TableAllocHandlesFromCache(HandleTable *pTable, uint32_t uType, OBJECTHANDLE *pHandleBase, uint32_t uCount);
/*
* TableFreeHandlesToCache
*
* Frees multiple handles of the specified type by repeatedly
* calling TableFreeSingleHandleToCache.
*
*/
void TableFreeHandlesToCache(HandleTable *pTable, uint32_t uType, const OBJECTHANDLE *pHandleBase, uint32_t uCount);
/*--------------------------------------------------------------------------*/
/****************************************************************************
*
* TABLE SCANNING
*
****************************************************************************/
/*
* TableScanHandles
*
* Implements the core handle scanning loop for a table.
*
*/
void CALLBACK TableScanHandles(PTR_HandleTable pTable,
const uint32_t *puType,
uint32_t uTypeCount,
SEGMENTITERATOR pfnSegmentIterator,
BLOCKSCANPROC pfnBlockHandler,
ScanCallbackInfo *pInfo,
CrstHolderWithState *pCrstHolder);
/*
* xxxTableScanHandlesAsync
*
* Implements asynchronous handle scanning for a table.
*
*/
void CALLBACK xxxTableScanHandlesAsync(PTR_HandleTable pTable,
const uint32_t *puType,
uint32_t uTypeCount,
SEGMENTITERATOR pfnSegmentIterator,
BLOCKSCANPROC pfnBlockHandler,
ScanCallbackInfo *pInfo,
CrstHolderWithState *pCrstHolder);
/*
* TypesRequireUserDataScanning
*
* Determines whether the set of types listed should get user data during scans
*
* if ALL types passed have user data then this function will enable user data support
* otherwise it will disable user data support
*
* IN OTHER WORDS, SCANNING WITH A MIX OF USER-DATA AND NON-USER-DATA TYPES IS NOT SUPPORTED
*
*/
BOOL TypesRequireUserDataScanning(HandleTable *pTable, const uint32_t *types, uint32_t typeCount);
/*
* BuildAgeMask
*
* Builds an age mask to be used when examining/updating the write barrier.
*
*/
uint32_t BuildAgeMask(uint32_t uGen, uint32_t uMaxGen);
/*
* QuickSegmentIterator
*
* Returns the next segment to be scanned in a scanning loop.
*
*/
PTR_TableSegment CALLBACK QuickSegmentIterator(PTR_HandleTable pTable, PTR_TableSegment pPrevSegment, CrstHolderWithState *pCrstHolder = 0);
/*
* StandardSegmentIterator
*
* Returns the next segment to be scanned in a scanning loop.
*
* This iterator performs some maintenance on the segments,
* primarily making sure the block chains are sorted so that
* g0 scans are more likely to operate on contiguous blocks.
*
*/
PTR_TableSegment CALLBACK StandardSegmentIterator(PTR_HandleTable pTable, PTR_TableSegment pPrevSegment, CrstHolderWithState *pCrstHolder = 0);
/*
* FullSegmentIterator
*
* Returns the next segment to be scanned in a scanning loop.
*
* This iterator performs full maintenance on the segments,
* including freeing those it notices are empty along the way.
*
*/
PTR_TableSegment CALLBACK FullSegmentIterator(PTR_HandleTable pTable, PTR_TableSegment pPrevSegment, CrstHolderWithState *pCrstHolder = 0);
/*
* BlockScanBlocksWithoutUserData
*
* Calls the specified callback for each handle, optionally aging the corresponding generation clumps.
* NEVER propagates per-handle user data to the callback.
*
*/
void CALLBACK BlockScanBlocksWithoutUserData(PTR_TableSegment pSegment, uint32_t uBlock, uint32_t uCount, ScanCallbackInfo *pInfo);
/*
* BlockScanBlocksWithUserData
*
* Calls the specified callback for each handle, optionally aging the corresponding generation clumps.
* ALWAYS propagates per-handle user data to the callback.
*
*/
void CALLBACK BlockScanBlocksWithUserData(PTR_TableSegment pSegment, uint32_t uBlock, uint32_t uCount, ScanCallbackInfo *pInfo);
/*
* BlockScanBlocksEphemeral
*
* Calls the specified callback for each handle from the specified generation.
* Propagates per-handle user data to the callback if present.
*
*/
void CALLBACK BlockScanBlocksEphemeral(PTR_TableSegment pSegment, uint32_t uBlock, uint32_t uCount, ScanCallbackInfo *pInfo);
/*
* BlockAgeBlocks
*
* Ages all clumps in a range of consecutive blocks.
*
*/
void CALLBACK BlockAgeBlocks(PTR_TableSegment pSegment, uint32_t uBlock, uint32_t uCount, ScanCallbackInfo *pInfo);
/*
* BlockAgeBlocksEphemeral
*
* Ages all clumps within the specified generation.
*
*/
void CALLBACK BlockAgeBlocksEphemeral(PTR_TableSegment pSegment, uint32_t uBlock, uint32_t uCount, ScanCallbackInfo *pInfo);
/*
* BlockResetAgeMapForBlocks
*
* Clears the age maps for a range of blocks.
*
*/
void CALLBACK BlockResetAgeMapForBlocks(PTR_TableSegment pSegment, uint32_t uBlock, uint32_t uCount, ScanCallbackInfo *pInfo);
/*
* BlockVerifyAgeMapForBlocks
*
* Verifies the age maps for a range of blocks, and also validates the objects pointed to.
*
*/
void CALLBACK BlockVerifyAgeMapForBlocks(PTR_TableSegment pSegment, uint32_t uBlock, uint32_t uCount, ScanCallbackInfo *pInfo);
/*
* xxxAsyncSegmentIterator
*
* Implements the core handle scanning loop for a table.
*
*/
PTR_TableSegment CALLBACK xxxAsyncSegmentIterator(PTR_HandleTable pTable, TableSegment *pPrevSegment, CrstHolderWithState *pCrstHolder);
/*--------------------------------------------------------------------------*/
| // Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
/*
* Generational GC handle manager. Internal Implementation Header.
*
* Shared defines and declarations for handle table implementation.
*
*
*/
#include "common.h"
#include "handletable.h"
/*--------------------------------------------------------------------------*/
//<TODO>@TODO: find a home for this in a project-level header file</TODO>
#define BITS_PER_BYTE (8)
/*--------------------------------------------------------------------------*/
/****************************************************************************
*
* MAJOR TABLE DEFINITIONS THAT CHANGE DEPENDING ON THE WEATHER
*
****************************************************************************/
// 64k reserved per segment with 4k as header.
#define HANDLE_SEGMENT_SIZE (0x10000) // MUST be a power of 2 (and currently must be 64K due to VirtualAlloc semantics)
#define HANDLE_HEADER_SIZE (0x1000) // SHOULD be <= OS page size
#define HANDLE_SEGMENT_ALIGNMENT HANDLE_SEGMENT_SIZE
#if !BIGENDIAN
// little-endian write barrier mask manipulation
#define GEN_CLUMP_0_MASK (0x000000FF)
#define NEXT_CLUMP_IN_MASK(dw) ((dw) >> BITS_PER_BYTE)
#else
// big-endian write barrier mask manipulation
#define GEN_CLUMP_0_MASK (0xFF000000)
#define NEXT_CLUMP_IN_MASK(dw) ((dw) << BITS_PER_BYTE)
#endif
// if the above numbers change, then these will likely change as well
#define HANDLE_HANDLES_PER_CLUMP (16) // segment write-barrier granularity
#define HANDLE_HANDLES_PER_BLOCK (64) // segment suballocation granularity
#define HANDLE_OPTIMIZE_FOR_64_HANDLE_BLOCKS // flag for certain optimizations
// number of types allowed for public callers
#define HANDLE_MAX_PUBLIC_TYPES (HANDLE_MAX_INTERNAL_TYPES - 1) // reserve one internal type
// internal block types
#define HNDTYPE_INTERNAL_DATABLOCK (HANDLE_MAX_INTERNAL_TYPES - 1) // reserve last type for data blocks
// max number of generations to support statistics on
#define MAXSTATGEN (5)
/*--------------------------------------------------------------------------*/
/****************************************************************************
*
* MORE DEFINITIONS
*
****************************************************************************/
// fast handle-to-segment mapping
#define HANDLE_SEGMENT_CONTENT_MASK (HANDLE_SEGMENT_SIZE - 1)
#define HANDLE_SEGMENT_ALIGN_MASK (~HANDLE_SEGMENT_CONTENT_MASK)
// table layout metrics
#define HANDLE_SIZE sizeof(_UNCHECKED_OBJECTREF)
#define HANDLE_HANDLES_PER_SEGMENT ((HANDLE_SEGMENT_SIZE - HANDLE_HEADER_SIZE) / HANDLE_SIZE)
#define HANDLE_BLOCKS_PER_SEGMENT (HANDLE_HANDLES_PER_SEGMENT / HANDLE_HANDLES_PER_BLOCK)
#define HANDLE_CLUMPS_PER_SEGMENT (HANDLE_HANDLES_PER_SEGMENT / HANDLE_HANDLES_PER_CLUMP)
#define HANDLE_CLUMPS_PER_BLOCK (HANDLE_HANDLES_PER_BLOCK / HANDLE_HANDLES_PER_CLUMP)
#define HANDLE_BYTES_PER_BLOCK (HANDLE_HANDLES_PER_BLOCK * HANDLE_SIZE)
#define HANDLE_HANDLES_PER_MASK (sizeof(uint32_t) * BITS_PER_BYTE)
#define HANDLE_MASKS_PER_SEGMENT (HANDLE_HANDLES_PER_SEGMENT / HANDLE_HANDLES_PER_MASK)
#define HANDLE_MASKS_PER_BLOCK (HANDLE_HANDLES_PER_BLOCK / HANDLE_HANDLES_PER_MASK)
#define HANDLE_CLUMPS_PER_MASK (HANDLE_HANDLES_PER_MASK / HANDLE_HANDLES_PER_CLUMP)
// We use this relation to check for free mask per block.
C_ASSERT (HANDLE_HANDLES_PER_MASK * 2 == HANDLE_HANDLES_PER_BLOCK);
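// Worked example, assuming a 64-bit build where _UNCHECKED_OBJECTREF (and so
// HANDLE_SIZE) is 8 bytes -- illustrative arithmetic, not from the original header:
//   HANDLE_HANDLES_PER_SEGMENT = (0x10000 - 0x1000) / 8 = 7680
//   HANDLE_BLOCKS_PER_SEGMENT = 7680 / 64 = 120
//   HANDLE_CLUMPS_PER_SEGMENT = 7680 / 16 = 480
//   HANDLE_MASKS_PER_SEGMENT = 7680 / 32 = 240
// On a 32-bit build (HANDLE_SIZE == 4) the per-segment handle count doubles to 15360.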
// cache layout metrics
#define HANDLE_CACHE_TYPE_SIZE 128 // 128 == 63 handles per bank
#define HANDLES_PER_CACHE_BANK ((HANDLE_CACHE_TYPE_SIZE / 2) - 1)
// cache policy defines
#define REBALANCE_TOLERANCE (HANDLES_PER_CACHE_BANK / 3)
#define REBALANCE_LOWATER_MARK (HANDLES_PER_CACHE_BANK - REBALANCE_TOLERANCE)
#define REBALANCE_HIWATER_MARK (HANDLES_PER_CACHE_BANK + REBALANCE_TOLERANCE)
// bulk alloc policy defines
#define SMALL_ALLOC_COUNT (HANDLES_PER_CACHE_BANK / 10)
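// With the defaults above: HANDLES_PER_CACHE_BANK == 63, REBALANCE_TOLERANCE == 21,
// giving a rebalance band of [42, 84] handles, and SMALL_ALLOC_COUNT == 6.
// (Worked numbers for illustration; they follow from HANDLE_CACHE_TYPE_SIZE == 128.)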
// misc constants
#define MASK_FULL (0)
#define MASK_EMPTY (0xFFFFFFFF)
#define MASK_LOBYTE (0x000000FF)
#define TYPE_INVALID ((uint8_t)0xFF)
#define BLOCK_INVALID ((uint8_t)0xFF)
/*--------------------------------------------------------------------------*/
/****************************************************************************
*
* CORE TABLE LAYOUT STRUCTURES
*
****************************************************************************/
/*
* we need byte packing for the handle table layout to work
*/
#pragma pack(push,1)
/*
* Table Segment Header
*
* Defines the layout for a segment's header data.
*/
struct _TableSegmentHeader
{
/*
* Write Barrier Generation Numbers
*
* Each slot holds four bytes. Each byte corresponds to a clump of handles.
* The value of the byte corresponds to the lowest possible generation that a
* handle in that clump could point into.
*
* WARNING: Although this array is logically organized as a uint8_t[], it is sometimes
* accessed as uint32_t[] when processing bytes in parallel. Code which treats the
* array as an array of ULONG32s must handle big/little endian issues itself.
*/
uint8_t rgGeneration[HANDLE_BLOCKS_PER_SEGMENT * sizeof(uint32_t) / sizeof(uint8_t)];
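/*
 * Example reading of the scheme above (illustrative, not from the scanning
 * code): rgGeneration[clump] == 2 means no handle in that clump can point
 * at an object younger than generation 2, so an ephemeral scan of
 * generations 0-1 may skip the clump entirely.
 */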
/*
* Block Allocation Chains
*
* Each slot indexes the next block in an allocation chain.
*/
uint8_t rgAllocation[HANDLE_BLOCKS_PER_SEGMENT];
/*
* Block Free Masks
*
* Masks - 1 bit for every handle in the segment.
*/
uint32_t rgFreeMask[HANDLE_MASKS_PER_SEGMENT];
/*
* Block Handle Types
*
* Each slot holds the handle type of the associated block.
*/
uint8_t rgBlockType[HANDLE_BLOCKS_PER_SEGMENT];
/*
* Block User Data Map
*
* Each slot holds the index of a user data block (if any) for the associated block.
*/
uint8_t rgUserData[HANDLE_BLOCKS_PER_SEGMENT];
/*
* Block Lock Count
*
* Each slot holds a lock count for its associated block.
* Locked blocks are not freed, even when empty.
*/
uint8_t rgLocks[HANDLE_BLOCKS_PER_SEGMENT];
/*
* Allocation Chain Tails
*
* Each slot holds the tail block index for an allocation chain.
*/
uint8_t rgTail[HANDLE_MAX_INTERNAL_TYPES];
/*
* Allocation Chain Hints
*
* Each slot holds a hint block index for an allocation chain.
*/
uint8_t rgHint[HANDLE_MAX_INTERNAL_TYPES];
/*
* Free Count
*
* Each slot holds the number of free handles in an allocation chain.
*/
uint32_t rgFreeCount[HANDLE_MAX_INTERNAL_TYPES];
/*
* Next Segment
*
* Points to the next segment in the chain (if we ran out of space in this one).
*/
#ifdef DACCESS_COMPILE
TADDR pNextSegment;
#else
struct TableSegment *pNextSegment;
#endif // DACCESS_COMPILE
/*
* Handle Table
*
* Points to owning handle table for this table segment.
*/
PTR_HandleTable pHandleTable;
/*
* Flags
*/
uint8_t fResortChains : 1; // allocation chains need sorting
uint8_t fNeedsScavenging : 1; // free blocks need scavenging
uint8_t _fUnused : 6; // unused
/*
* Free List Head
*
* Index of the first free block in the segment.
*/
uint8_t bFreeList;
/*
* Empty Line
*
* Index of the first KNOWN block of the last group of unused blocks in the segment.
*/
uint8_t bEmptyLine;
/*
* Commit Line
*
* Index of the first uncommitted block in the segment.
*/
uint8_t bCommitLine;
/*
* Decommit Line
*
* Index of the first block in the highest committed page of the segment.
*/
uint8_t bDecommitLine;
/*
* Sequence
*
* Indicates the segment sequence number.
*/
uint8_t bSequence;
};
typedef DPTR(struct _TableSegmentHeader) PTR__TableSegmentHeader;
typedef DPTR(uintptr_t) PTR_uintptr_t;
// The handle table is large and may not be entirely mapped. That's one reason for splitting out the table
// segment and the header as two separate classes. In DAC builds, we generally need only a single element from
// the table segment, so we can use the DAC to retrieve just the information we require.
/*
* Table Segment
*
* Defines the layout for a handle table segment.
*/
struct TableSegment : public _TableSegmentHeader
{
/*
* Filler
*/
uint8_t rgUnused[HANDLE_HEADER_SIZE - sizeof(_TableSegmentHeader)];
/*
* Handles
*/
_UNCHECKED_OBJECTREF rgValue[HANDLE_HANDLES_PER_SEGMENT];
#ifdef DACCESS_COMPILE
static uint32_t DacSize(TADDR addr);
#endif
};
typedef SPTR(struct TableSegment) PTR_TableSegment;
/*
* restore default packing
*/
#pragma pack(pop)
/*
* Handle Type Cache
*
* Defines the layout of a per-type handle cache.
*/
struct HandleTypeCache
{
/*
* reserve bank
*/
OBJECTHANDLE rgReserveBank[HANDLES_PER_CACHE_BANK];
/*
* index of next available handle slot in the reserve bank
*/
int32_t lReserveIndex;
/*---------------------------------------------------------------------------------
* N.B. this structure is split up this way so that when HANDLES_PER_CACHE_BANK is
* large enough, lReserveIndex and lFreeIndex will reside in different cache lines
*--------------------------------------------------------------------------------*/
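/*
 * Worked example (illustrative): with HANDLE_CACHE_TYPE_SIZE == 128 and
 * pointer-sized OBJECTHANDLEs on a 64-bit build, rgReserveBank spans
 * 63 * 8 = 504 bytes, so lReserveIndex and lFreeIndex sit far more than a
 * typical 64-byte cache line apart.
 */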
/*
* free bank
*/
OBJECTHANDLE rgFreeBank[HANDLES_PER_CACHE_BANK];
/*
* index of next empty slot in the free bank
*/
int32_t lFreeIndex;
};
/*---------------------------------------------------------------------------*/
/****************************************************************************
*
* SCANNING PROTOTYPES
*
****************************************************************************/
/*
* ScanCallbackInfo
*
* Carries parameters for per-segment and per-block scanning callbacks.
*
*/
struct ScanCallbackInfo
{
PTR_TableSegment pCurrentSegment; // segment we are presently scanning, if any
uint32_t uFlags; // HNDGCF_* flags
BOOL fEnumUserData; // whether user data is being enumerated as well
HANDLESCANPROC pfnScan; // per-handle scan callback
uintptr_t param1; // callback param 1
uintptr_t param2; // callback param 2
uint32_t dwAgeMask; // generation mask for ephemeral GCs
#ifdef _DEBUG
uint32_t DEBUG_BlocksScanned;
uint32_t DEBUG_BlocksScannedNonTrivially;
uint32_t DEBUG_HandleSlotsScanned;
uint32_t DEBUG_HandlesActuallyScanned;
#endif
};
/*
* BLOCKSCANPROC
*
* Prototype for callbacks that implement per-block scanning logic.
*
*/
typedef void (CALLBACK *BLOCKSCANPROC)(PTR_TableSegment pSegment, uint32_t uBlock, uint32_t uCount, ScanCallbackInfo *pInfo);
/*
* SEGMENTITERATOR
*
* Prototype for callbacks that implement per-segment scanning logic.
*
*/
typedef PTR_TableSegment (CALLBACK *SEGMENTITERATOR)(PTR_HandleTable pTable, PTR_TableSegment pPrevSegment, CrstHolderWithState *pCrstHolder);
/*
* TABLESCANPROC
*
* Prototype for TableScanHandles and xxxTableScanHandlesAsync.
*
*/
typedef void (CALLBACK *TABLESCANPROC)(PTR_HandleTable pTable,
const uint32_t *puType, uint32_t uTypeCount,
SEGMENTITERATOR pfnSegmentIterator,
BLOCKSCANPROC pfnBlockHandler,
ScanCallbackInfo *pInfo,
CrstHolderWithState *pCrstHolder);
/*--------------------------------------------------------------------------*/
/****************************************************************************
*
* ADDITIONAL TABLE STRUCTURES
*
****************************************************************************/
/*
* AsyncScanInfo
*
* Tracks the state of an async scan for a handle table.
*
*/
struct AsyncScanInfo
{
/*
* Underlying Callback Info
*
* Specifies callback info for the underlying block handler.
*/
struct ScanCallbackInfo *pCallbackInfo;
/*
* Underlying Segment Iterator
*
* Specifies the segment iterator to be used during async scanning.
*/
SEGMENTITERATOR pfnSegmentIterator;
/*
* Underlying Block Handler
*
* Specifies the block handler to be used during async scanning.
*/
BLOCKSCANPROC pfnBlockHandler;
/*
* Scan Queue
*
* Specifies the nodes to be processed asynchronously.
*/
struct ScanQNode *pScanQueue;
/*
* Queue Tail
*
* Specifies the tail node in the queue, or NULL if the queue is empty.
*/
struct ScanQNode *pQueueTail;
};
/*
* Handle Table
*
* Defines the layout of a handle table object.
*/
#ifdef _MSC_VER
#pragma warning(push)
#pragma warning(disable : 4200 ) // zero-sized array
#endif
struct HandleTable
{
/*
* flags describing handle attributes
*
* N.B. this is at offset 0 due to frequent access by cache free codepath
*/
uint32_t rgTypeFlags[HANDLE_MAX_INTERNAL_TYPES];
/*
* lock for this table
*/
CrstStatic Lock;
/*
* number of types this table supports
*/
uint32_t uTypeCount;
/*
* number of handles owned by this table that are marked as "used"
* (this includes the handles residing in rgMainCache and rgQuickCache)
*/
uint32_t dwCount;
/*
* head of segment list for this table
*/
PTR_TableSegment pSegmentList;
/*
* information on current async scan (if any)
*/
AsyncScanInfo *pAsyncScanInfo;
/*
* per-table user info
*/
uint32_t uTableIndex;
/*
* one-level per-type 'quick' handle cache
*/
OBJECTHANDLE rgQuickCache[HANDLE_MAX_INTERNAL_TYPES]; // interlocked ops used here
/*
* debug-only statistics
*/
#ifdef _DEBUG
int _DEBUG_iMaxGen;
int64_t _DEBUG_TotalBlocksScanned [MAXSTATGEN];
int64_t _DEBUG_TotalBlocksScannedNonTrivially[MAXSTATGEN];
int64_t _DEBUG_TotalHandleSlotsScanned [MAXSTATGEN];
int64_t _DEBUG_TotalHandlesActuallyScanned [MAXSTATGEN];
#endif
/*
* primary per-type handle cache
*/
HandleTypeCache rgMainCache[0]; // interlocked ops used here
};
#ifdef _MSC_VER
#pragma warning(pop)
#endif
/*--------------------------------------------------------------------------*/
/****************************************************************************
*
* HELPERS
*
****************************************************************************/
/*
* A 32/64 comparison callback
*<TODO>
* @TODO: move/merge into common util file
*</TODO>
*/
typedef int (*PFNCOMPARE)(uintptr_t p, uintptr_t q);
/*
* A 32/64 neutral quicksort
*<TODO>
* @TODO: move/merge into common util file
*</TODO>
*/
void QuickSort(uintptr_t *pData, int left, int right, PFNCOMPARE pfnCompare);
/*
* CompareHandlesByFreeOrder
*
* Returns:
* <0 - handle P should be freed before handle Q
* =0 - handles are equivalent for free order purposes
* >0 - handle Q should be freed before handle P
*
*/
int CompareHandlesByFreeOrder(uintptr_t p, uintptr_t q);
/*--------------------------------------------------------------------------*/
/****************************************************************************
*
* CORE TABLE MANAGEMENT
*
****************************************************************************/
/*
* TypeHasUserData
*
* Determines whether a given handle type has user data.
*
*/
__inline BOOL TypeHasUserData(HandleTable *pTable, uint32_t uType)
{
LIMITED_METHOD_CONTRACT;
// sanity
_ASSERTE(uType < HANDLE_MAX_INTERNAL_TYPES);
// consult the type flags
return (pTable->rgTypeFlags[uType] & HNDF_EXTRAINFO);
}
/*
* TableCanFreeSegmentNow
*
* Determines if it is OK to free the specified segment at this time.
*
*/
BOOL TableCanFreeSegmentNow(HandleTable *pTable, TableSegment *pSegment);
/*
* BlockIsLocked
*
* Determines if the lock count for the specified block is currently non-zero.
*
*/
__inline BOOL BlockIsLocked(TableSegment *pSegment, uint32_t uBlock)
{
LIMITED_METHOD_CONTRACT;
// sanity
_ASSERTE(uBlock < HANDLE_BLOCKS_PER_SEGMENT);
// fetch the lock count and compare it to zero
return (pSegment->rgLocks[uBlock] != 0);
}
/*
* BlockLock
*
* Increases the lock count for a block.
*
*/
__inline void BlockLock(TableSegment *pSegment, uint32_t uBlock)
{
LIMITED_METHOD_CONTRACT;
// fetch the old lock count
uint8_t bLocks = pSegment->rgLocks[uBlock];
// assert if we are about to trash the count
_ASSERTE(bLocks < 0xFF);
// store the incremented lock count
pSegment->rgLocks[uBlock] = bLocks + 1;
}
/*
* BlockUnlock
*
* Decreases the lock count for a block.
*
*/
__inline void BlockUnlock(TableSegment *pSegment, uint32_t uBlock)
{
LIMITED_METHOD_CONTRACT;
// fetch the old lock count
uint8_t bLocks = pSegment->rgLocks[uBlock];
// assert if we are about to trash the count
_ASSERTE(bLocks > 0);
// store the decremented lock count
pSegment->rgLocks[uBlock] = bLocks - 1;
}
/*
* BlockFetchUserDataPointer
*
* Gets the user data pointer for the first handle in a block.
*
*/
PTR_uintptr_t BlockFetchUserDataPointer(PTR__TableSegmentHeader pSegment, uint32_t uBlock, BOOL fAssertOnError);
/*
* HandleValidateAndFetchUserDataPointer
*
* Gets the user data pointer for a handle.
* ASSERTs and returns NULL if handle is not of the expected type.
*
*/
uintptr_t *HandleValidateAndFetchUserDataPointer(OBJECTHANDLE handle, uint32_t uTypeExpected);
/*
* HandleQuickFetchUserDataPointer
*
* Gets the user data pointer for a handle.
* Less validation is performed.
*
*/
PTR_uintptr_t HandleQuickFetchUserDataPointer(OBJECTHANDLE handle);
/*
* HandleQuickSetUserData
*
* Stores user data with a handle.
* Less validation is performed.
*
*/
void HandleQuickSetUserData(OBJECTHANDLE handle, uintptr_t lUserData);
/*
* HandleFetchType
*
* Computes the type index for a given handle.
*
*/
uint32_t HandleFetchType(OBJECTHANDLE handle);
/*
* HandleFetchHandleTable
*
* Returns the containing handle table of a given handle.
*
*/
PTR_HandleTable HandleFetchHandleTable(OBJECTHANDLE handle);
/*
* SegmentAlloc
*
* Allocates a new segment.
*
*/
TableSegment *SegmentAlloc(HandleTable *pTable);
/*
* SegmentFree
*
* Frees the specified segment.
*
*/
void SegmentFree(TableSegment *pSegment);
/*
* Check if a handle is part of a HandleTable
*/
BOOL TableContainHandle(HandleTable *pTable, OBJECTHANDLE handle);
/*
* SegmentRemoveFreeBlocks
*
* Removes a block from a block list in a segment. The block is returned to
* the segment's free list.
*
*/
void SegmentRemoveFreeBlocks(TableSegment *pSegment, uint32_t uType);
/*
* SegmentResortChains
*
* Sorts the block chains for optimal scanning order.
* Sorts the free list to combat fragmentation.
*
*/
void SegmentResortChains(TableSegment *pSegment);
/*
* DoesSegmentNeedsToTrimExcessPages
*
* Checks to see if any pages can be decommitted from the segment.
*
*/
BOOL DoesSegmentNeedsToTrimExcessPages(TableSegment *pSegment);
/*
* SegmentTrimExcessPages
*
* Checks to see if any pages can be decommitted from the segment.
* In case there are any unused pages, it goes and decommits them.
*
*/
void SegmentTrimExcessPages(TableSegment *pSegment);
/*
* TableAllocBulkHandles
*
* Attempts to allocate the requested number of handles of the specified type.
*
* Returns the number of handles that were actually allocated. This is always
* the same as the number of handles requested except in out-of-memory conditions,
* in which case it is the number of handles that were successfully allocated.
*
*/
uint32_t TableAllocBulkHandles(HandleTable *pTable, uint32_t uType, OBJECTHANDLE *pHandleBase, uint32_t uCount);
/*
* TableFreeBulkPreparedHandles
*
* Frees an array of handles of the specified type.
*
* This routine is optimized for a sorted array of handles but will accept any order.
*
*/
void TableFreeBulkPreparedHandles(HandleTable *pTable, uint32_t uType, OBJECTHANDLE *pHandleBase, uint32_t uCount);
/*
* TableFreeBulkUnpreparedHandles
*
* Frees an array of handles of the specified type by preparing them and calling TableFreeBulkPreparedHandles.
*
*/
void TableFreeBulkUnpreparedHandles(HandleTable *pTable, uint32_t uType, const OBJECTHANDLE *pHandles, uint32_t uCount);
/*--------------------------------------------------------------------------*/
/****************************************************************************
*
* HANDLE CACHE
*
****************************************************************************/
/*
* TableAllocSingleHandleFromCache
*
* Gets a single handle of the specified type from the handle table by
* trying to fetch it from the reserve cache for that handle type. If the
* reserve cache is empty, this routine calls TableCacheMissOnAlloc.
*
*/
OBJECTHANDLE TableAllocSingleHandleFromCache(HandleTable *pTable, uint32_t uType);
/*
* TableFreeSingleHandleToCache
*
* Returns a single handle of the specified type to the handle table
* by trying to store it in the free cache for that handle type. If the
* free cache is full, this routine calls TableCacheMissOnFree.
*
*/
void TableFreeSingleHandleToCache(HandleTable *pTable, uint32_t uType, OBJECTHANDLE handle);
/*
* TableAllocHandlesFromCache
*
* Allocates multiple handles of the specified type by repeatedly
* calling TableAllocSingleHandleFromCache.
*
*/
uint32_t TableAllocHandlesFromCache(HandleTable *pTable, uint32_t uType, OBJECTHANDLE *pHandleBase, uint32_t uCount);
/*
* TableFreeHandlesToCache
*
* Frees multiple handles of the specified type by repeatedly
* calling TableFreeSingleHandleToCache.
*
*/
void TableFreeHandlesToCache(HandleTable *pTable, uint32_t uType, const OBJECTHANDLE *pHandleBase, uint32_t uCount);
/*--------------------------------------------------------------------------*/
/****************************************************************************
*
* TABLE SCANNING
*
****************************************************************************/
/*
* TableScanHandles
*
* Implements the core handle scanning loop for a table.
*
*/
void CALLBACK TableScanHandles(PTR_HandleTable pTable,
const uint32_t *puType,
uint32_t uTypeCount,
SEGMENTITERATOR pfnSegmentIterator,
BLOCKSCANPROC pfnBlockHandler,
ScanCallbackInfo *pInfo,
CrstHolderWithState *pCrstHolder);
/*
* xxxTableScanHandlesAsync
*
* Implements asynchronous handle scanning for a table.
*
*/
void CALLBACK xxxTableScanHandlesAsync(PTR_HandleTable pTable,
const uint32_t *puType,
uint32_t uTypeCount,
SEGMENTITERATOR pfnSegmentIterator,
BLOCKSCANPROC pfnBlockHandler,
ScanCallbackInfo *pInfo,
CrstHolderWithState *pCrstHolder);
/*
* TypesRequireUserDataScanning
*
* Determines whether the set of types listed should get user data during scans
*
* if ALL types passed have user data then this function will enable user data support
* otherwise it will disable user data support
*
* IN OTHER WORDS, SCANNING WITH A MIX OF USER-DATA AND NON-USER-DATA TYPES IS NOT SUPPORTED
*
*/
BOOL TypesRequireUserDataScanning(HandleTable *pTable, const uint32_t *types, uint32_t typeCount);
/*
* BuildAgeMask
*
* Builds an age mask to be used when examining/updating the write barrier.
*
*/
uint32_t BuildAgeMask(uint32_t uGen, uint32_t uMaxGen);
/*
* QuickSegmentIterator
*
* Returns the next segment to be scanned in a scanning loop.
*
*/
PTR_TableSegment CALLBACK QuickSegmentIterator(PTR_HandleTable pTable, PTR_TableSegment pPrevSegment, CrstHolderWithState *pCrstHolder = 0);
/*
* StandardSegmentIterator
*
* Returns the next segment to be scanned in a scanning loop.
*
* This iterator performs some maintenance on the segments,
* primarily making sure the block chains are sorted so that
* g0 scans are more likely to operate on contiguous blocks.
*
*/
PTR_TableSegment CALLBACK StandardSegmentIterator(PTR_HandleTable pTable, PTR_TableSegment pPrevSegment, CrstHolderWithState *pCrstHolder = 0);
/*
* FullSegmentIterator
*
* Returns the next segment to be scanned in a scanning loop.
*
* This iterator performs full maintenance on the segments,
* including freeing those it notices are empty along the way.
*
*/
PTR_TableSegment CALLBACK FullSegmentIterator(PTR_HandleTable pTable, PTR_TableSegment pPrevSegment, CrstHolderWithState *pCrstHolder = 0);
/*
* BlockScanBlocksWithoutUserData
*
* Calls the specified callback for each handle, optionally aging the corresponding generation clumps.
* NEVER propagates per-handle user data to the callback.
*
*/
void CALLBACK BlockScanBlocksWithoutUserData(PTR_TableSegment pSegment, uint32_t uBlock, uint32_t uCount, ScanCallbackInfo *pInfo);
/*
* BlockScanBlocksWithUserData
*
* Calls the specified callback for each handle, optionally aging the corresponding generation clumps.
* ALWAYS propagates per-handle user data to the callback.
*
*/
void CALLBACK BlockScanBlocksWithUserData(PTR_TableSegment pSegment, uint32_t uBlock, uint32_t uCount, ScanCallbackInfo *pInfo);
/*
* BlockScanBlocksEphemeral
*
* Calls the specified callback for each handle from the specified generation.
* Propagates per-handle user data to the callback if present.
*
*/
void CALLBACK BlockScanBlocksEphemeral(PTR_TableSegment pSegment, uint32_t uBlock, uint32_t uCount, ScanCallbackInfo *pInfo);
/*
* BlockAgeBlocks
*
* Ages all clumps in a range of consecutive blocks.
*
*/
void CALLBACK BlockAgeBlocks(PTR_TableSegment pSegment, uint32_t uBlock, uint32_t uCount, ScanCallbackInfo *pInfo);
/*
* BlockAgeBlocksEphemeral
*
* Ages all clumps within the specified generation.
*
*/
void CALLBACK BlockAgeBlocksEphemeral(PTR_TableSegment pSegment, uint32_t uBlock, uint32_t uCount, ScanCallbackInfo *pInfo);
/*
* BlockResetAgeMapForBlocks
*
* Clears the age maps for a range of blocks.
*
*/
void CALLBACK BlockResetAgeMapForBlocks(PTR_TableSegment pSegment, uint32_t uBlock, uint32_t uCount, ScanCallbackInfo *pInfo);
/*
* BlockVerifyAgeMapForBlocks
*
* Verifies the age maps for a range of blocks, and also validates the objects pointed to.
*
*/
void CALLBACK BlockVerifyAgeMapForBlocks(PTR_TableSegment pSegment, uint32_t uBlock, uint32_t uCount, ScanCallbackInfo *pInfo);
/*
* xxxAsyncSegmentIterator
*
* Implements the core handle scanning loop for a table.
*
*/
PTR_TableSegment CALLBACK xxxAsyncSegmentIterator(PTR_HandleTable pTable, TableSegment *pPrevSegment, CrstHolderWithState *pCrstHolder);
/*--------------------------------------------------------------------------*/
| -1 |
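The header in the row above notes a "fast handle-to-segment mapping" built from HANDLE_SEGMENT_ALIGN_MASK and the 64K segment alignment. A minimal sketch of that masking trick, with simplified names (segment_of is illustrative, not the actual CoreCLR helper):

#include <stdint.h>

#define SEGMENT_SIZE ((uintptr_t)0x10000)        /* mirrors HANDLE_SEGMENT_SIZE */
#define SEGMENT_ALIGN_MASK (~(SEGMENT_SIZE - 1)) /* mirrors HANDLE_SEGMENT_ALIGN_MASK */

/* Round a handle's address down to the base of its 64K-aligned segment. */
static inline void *segment_of (void *handle)
{
  return (void *)((uintptr_t)handle & SEGMENT_ALIGN_MASK);
}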
dotnet/runtime | 66,213 | [mono] Put WeakAttribute support under an ifdef | This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc. are still there. It seemed unnecessary to perturb the AOT format. | lambdageek | 2022-03-04T19:50:51Z | 2022-03-08T14:08:40Z | 34880aaeb463cc17268b44aecb80dda0914cfa01 | 8342a08b8bf0ff2ffb64cd5992e2d60c8d6d63c1 | [mono] Put WeakAttribute support under an ifdef. This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc. are still there. It seemed unnecessary to perturb the AOT format. | ./src/coreclr/inc/stgpooli.h | // Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
//*****************************************************************************
// StgPooli.h
//
//
// This is helper code for the string and blob pools. It is here because it is
// secondary to the pooling interface and reduces clutter in the main file.
//
//*****************************************************************************
#ifndef __StgPooli_h__
#define __StgPooli_h__
#include "utilcode.h" // Base hashing code.
//
//
// CPackedLen
//
//
//*****************************************************************************
// Helper class to pack and unpack lengths.
//*****************************************************************************
struct CPackedLen
{
enum {MAX_LEN = 0x1fffffff};
static int Size(ULONG len)
{
LIMITED_METHOD_CONTRACT;
// Smallest.
if (len <= 0x7F)
return 1;
// Medium.
if (len <= 0x3FFF)
return 2;
// Large (too large?).
_ASSERTE(len <= MAX_LEN);
return 4;
}
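// Illustrative examples of this packed-length format (the compressed-integer
// encoding used by ECMA-335 metadata blobs):
//   len = 0x03   -> 1 byte:  0x03 (high bit clear)
//   len = 0x3FFF -> 2 bytes: 0xBF 0xFF (top two bits are 10)
//   len = 0x4000 -> 4 bytes: 0xC0 0x00 0x40 0x00 (top three bits are 110)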
// Get a pointer to the data, and store the length.
static void const *GetData(void const *pData, ULONG *pLength);
// Get the length value encoded at *pData. Update ppData to point past the encoded length.
static ULONG GetLength(void const *pData, void const **ppData=0);
// Get the length value encoded at *pData, and the size of that encoded value.
static ULONG GetLength(void const *pData, int *pSizeOfLength);
// Pack a length at *pData; return a pointer to the next byte.
static void* PutLength(void *pData, ULONG len);
// This is used for just getting an encoded length, and verifies that
// there is no buffer or integer overflow.
static HRESULT SafeGetLength( // S_OK, or error
void const *pDataSource, // First byte of length.
void const *pDataSourceEnd, // End of valid source data memory
ULONG *pLength, // Encoded value
void const **ppDataNext); // Pointer immediately following encoded length
static HRESULT SafeGetLength( // S_OK, or error
BYTE const *pDataSource, // First byte of length.
BYTE const *pDataSourceEnd, // End of valid source data memory
ULONG *pLength, // Encoded value
BYTE const **ppDataNext) // Pointer immediately following encoded length
{
return SafeGetLength(
reinterpret_cast<void const *>(pDataSource),
reinterpret_cast<void const *>(pDataSourceEnd),
pLength,
reinterpret_cast<void const **>(ppDataNext));
}
// This performs the same tasks as GetLength above in addition to checking
// that the value in *pcbData does not extend *ppData beyond pDataSourceEnd
// and does not cause an integer overflow.
static HRESULT SafeGetData(
void const *pDataSource, // First byte of length.
void const *pDataSourceEnd, // End of valid source data memory
ULONG *pcbData, // Length of data
void const **ppData); // Start of data
static HRESULT SafeGetData(
BYTE const *pDataSource, // First byte of length.
BYTE const *pDataSourceEnd, // End of valid source data memory
ULONG *pcbData, // Length of data
BYTE const **ppData) // Start of data
{
return SafeGetData(
reinterpret_cast<void const *>(pDataSource),
reinterpret_cast<void const *>(pDataSourceEnd),
pcbData,
reinterpret_cast<void const **>(ppData));
}
// This is the same as GetData above except it takes a byte count instead
// of pointer to determine the source data length.
static HRESULT SafeGetData( // S_OK, or error
void const *pDataSource, // First byte of data
ULONG cbDataSource, // Count of valid bytes in data source
ULONG *pcbData, // Length of data
void const **ppData); // Start of data
static HRESULT SafeGetData(
BYTE const *pDataSource, // First byte of length.
ULONG cbDataSource, // Count of valid bytes in data source
ULONG *pcbData, // Length of data
BYTE const **ppData) // Start of data
{
return SafeGetData(
reinterpret_cast<void const *>(pDataSource),
cbDataSource,
pcbData,
reinterpret_cast<void const **>(ppData));
}
};
class StgPoolReadOnly;
//*****************************************************************************
// This hash class will handle strings inside of a chunk of the pool.
//*****************************************************************************
struct STRINGHASH : HASHLINK
{
ULONG iOffset; // Offset of this item.
};
class CStringPoolHash : public CChainedHash<STRINGHASH>
{
friend class VerifyLayoutsMD;
public:
CStringPoolHash(StgPoolReadOnly *pool) : m_Pool(pool)
{
LIMITED_METHOD_CONTRACT;
}
virtual bool InUse(STRINGHASH *pItem)
{
LIMITED_METHOD_CONTRACT;
return (pItem->iOffset != 0xffffffff);
}
virtual void SetFree(STRINGHASH *pItem)
{
LIMITED_METHOD_CONTRACT;
pItem->iOffset = 0xffffffff;
}
virtual ULONG Hash(const void *pData)
{
WRAPPER_NO_CONTRACT;
return (HashStringA(reinterpret_cast<LPCSTR>(pData)));
}
virtual int Cmp(const void *pData, void *pItem);
private:
StgPoolReadOnly *m_Pool; // String pool which this hashes.
};
//*****************************************************************************
// This version is for byte streams prefixed with a packed length (see
// CPackedLen above) giving the size of the data.
//*****************************************************************************
typedef STRINGHASH BLOBHASH;
class CBlobPoolHash : public CChainedHash<STRINGHASH>
{
friend class VerifyLayoutsMD;
public:
CBlobPoolHash(StgPoolReadOnly *pool) : m_Pool(pool)
{
LIMITED_METHOD_CONTRACT;
}
virtual bool InUse(BLOBHASH *pItem)
{
LIMITED_METHOD_CONTRACT;
return (pItem->iOffset != 0xffffffff);
}
virtual void SetFree(BLOBHASH *pItem)
{
LIMITED_METHOD_CONTRACT;
pItem->iOffset = 0xffffffff;
}
virtual ULONG Hash(const void *pData)
{
STATIC_CONTRACT_NOTHROW;
STATIC_CONTRACT_GC_NOTRIGGER;
STATIC_CONTRACT_FORBID_FAULT;
ULONG ulSize;
ulSize = CPackedLen::GetLength(pData);
ulSize += CPackedLen::Size(ulSize);
return (HashBytes(reinterpret_cast<BYTE const *>(pData), ulSize));
}
virtual int Cmp(const void *pData, void *pItem);
private:
StgPoolReadOnly *m_Pool; // Blob pool which this hashes.
};
//*****************************************************************************
// This hash class will handle guids inside of a chunk of the pool.
//*****************************************************************************
struct GUIDHASH : HASHLINK
{
ULONG iIndex; // Index of this item.
};
class CGuidPoolHash : public CChainedHash<GUIDHASH>
{
friend class VerifyLayoutsMD;
public:
CGuidPoolHash(StgPoolReadOnly *pool) : m_Pool(pool)
{
LIMITED_METHOD_CONTRACT;
}
virtual bool InUse(GUIDHASH *pItem)
{
LIMITED_METHOD_CONTRACT;
return (pItem->iIndex != 0xffffffff);
}
virtual void SetFree(GUIDHASH *pItem)
{
LIMITED_METHOD_CONTRACT;
pItem->iIndex = 0xffffffff;
}
virtual ULONG Hash(const void *pData)
{
WRAPPER_NO_CONTRACT;
return (HashBytes(reinterpret_cast<BYTE const *>(pData), sizeof(GUID)));
}
virtual int Cmp(const void *pData, void *pItem);
private:
StgPoolReadOnly *m_Pool; // The GUID pool which this hashes.
};
#endif // __StgPooli_h__
| -1 |
dotnet/runtime | 66213 | [mono] Put WeakAttribute support under an ifdef | This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | lambdageek | 2022-03-04T19:50:51Z | 2022-03-08T14:08:40Z | 34880aaeb463cc17268b44aecb80dda0914cfa01 | 8342a08b8bf0ff2ffb64cd5992e2d60c8d6d63c1 | [mono] Put WeakAttribute support under an ifdef. This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | ./src/mono/mono/mini/mini-gc.c | /**
* \file
* GC interface for the mono JIT
*
* Author:
* Zoltan Varga ([email protected])
*
* Copyright 2009 Novell, Inc (http://www.novell.com)
* Copyright 2011 Xamarin, Inc (http://www.xamarin.com)
* Licensed under the MIT license. See LICENSE file in the project root for full license information.
*/
#include "config.h"
#include "mini-gc.h"
#include "mini-runtime.h"
#include <mono/metadata/gc-internals.h>
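/*
 * get_provenance:
 *
 * Stack walk callback: stops at the first frame whose method is a real
 * (non-wrapper) managed method and returns that method through DATA.
 */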
static gboolean
get_provenance (StackFrameInfo *frame, MonoContext *ctx, gpointer data)
{
MonoJitInfo *ji = frame->ji;
MonoMethod *method;
if (!ji)
return FALSE;
method = jinfo_get_method (ji);
if (method->wrapper_type != MONO_WRAPPER_NONE)
return FALSE;
*(gpointer *)data = method;
return TRUE;
}
static gpointer
get_provenance_func (void)
{
gpointer provenance = NULL;
mono_walk_stack (get_provenance, MONO_UNWIND_DEFAULT, (gpointer)&provenance);
return provenance;
}
#if 0
//#if defined(MONO_ARCH_GC_MAPS_SUPPORTED)
#include <mono/metadata/sgen-conf.h>
#include <mono/metadata/gc-internals.h>
#include <mono/utils/mono-counters.h>
#include <mono/utils/unlocked.h>
//#define SIZEOF_SLOT ((int)sizeof (host_mgreg_t))
//#define SIZEOF_SLOT ((int)sizeof (target_mgreg_t))
#define GC_BITS_PER_WORD (sizeof (mword) * 8)
/* Contains state needed by the GC Map construction code */
typedef struct {
/*
* This contains information about stack slots initialized in the prolog, encoded using
* (slot_index << 16) | slot_type. The slot_index is relative to the CFA, i.e. 0
* means cfa+0, 1 means cfa-4/8, etc.
*/
GSList *stack_slots_from_cfa;
/* Same for stack slots relative to the frame pointer */
GSList *stack_slots_from_fp;
/* Number of slots in the map */
int nslots;
/* The number of registers in the map */
int nregs;
/* Min and Max offsets of the stack frame relative to fp */
int min_offset, max_offset;
/* Same for the locals area */
int locals_min_offset, locals_max_offset;
/* The call sites where this frame can be stopped during GC */
GCCallSite **callsites;
/* The number of call sites */
int ncallsites;
/*
* The width of the stack bitmaps in bytes. This is not equal to the bitmap width at
* runtime, since it includes columns which are 0.
*/
int stack_bitmap_width;
/*
* A bitmap whose width equals nslots, and whose height equals ncallsites.
* The bitmap contains a 1 if the corresponding stack slot has type SLOT_REF at the
* given callsite.
*/
guint8 *stack_ref_bitmap;
/* Same for SLOT_PIN */
guint8 *stack_pin_bitmap;
/*
* Similar bitmaps for registers. These have width MONO_MAX_IREGS in bits.
*/
int reg_bitmap_width;
guint8 *reg_ref_bitmap;
guint8 *reg_pin_bitmap;
} MonoCompileGC;
#undef DEBUG
#if 0
/* We don't support debug levels, it's all-or-nothing */
#define DEBUG(s) do { s; fflush (logfile); } while (0)
#define DEBUG_ENABLED 1
#else
#define DEBUG(s)
#endif
#ifdef DEBUG_ENABLED
//#if 1
#define DEBUG_PRECISE(s) do { s; } while (0)
#define DEBUG_PRECISE_ENABLED
#else
#define DEBUG_PRECISE(s)
#endif
/*
* Contains information collected during the conservative stack marking pass,
* used during the precise pass. This helps to avoid doing a stack walk twice, which
* is expensive.
*/
typedef struct {
guint8 *bitmap;
int nslots;
int frame_start_offset;
int nreg_locations;
/* Relative to stack_start */
int reg_locations [MONO_MAX_IREGS];
#ifdef DEBUG_PRECISE_ENABLED
MonoJitInfo *ji;
gpointer fp;
int regs [MONO_MAX_IREGS];
#endif
} FrameInfo;
/* Max number of frames stored in the TLS data */
#define MAX_FRAMES 50
/*
* Per-thread data kept by this module. This is stored in the GC and passed to us as
* parameters, instead of being stored in a TLS variable, since during a collection,
* only the collection thread is active.
*/
typedef struct {
MonoThreadUnwindState unwind_state;
MonoThreadInfo *info;
/* For debugging */
host_mgreg_t tid;
gpointer ref_to_track;
/* Number of frames collected during the !precise pass */
int nframes;
FrameInfo frames [MAX_FRAMES];
} TlsData;
/* These are constant so don't store them in the GC Maps */
/* Number of registers stored in gc maps */
#define NREGS MONO_MAX_IREGS
/*
* The GC Map itself.
* Contains information needed to mark a stack frame.
* This is a transient structure, created from a compressed representation on-demand.
*/
typedef struct {
/*
* The offsets of the GC tracked area inside the stack frame relative to the frame pointer.
* This includes memory which is NOREF thus doesn't need GC maps.
*/
int start_offset;
int end_offset;
/*
 * The offset, relative to start_offset, where the memory described by the GC maps
* begins.
*/
int map_offset;
/* The number of stack slots in the map */
int nslots;
/* The frame pointer register */
guint8 frame_reg;
/* The size of each callsite table entry */
guint8 callsite_entry_size;
guint has_pin_slots : 1;
guint has_ref_slots : 1;
guint has_ref_regs : 1;
guint has_pin_regs : 1;
/* The offsets below are into an external bitmaps array */
/*
* A bitmap whose width is equal to bitmap_width, and whose height is equal to ncallsites.
* The bitmap contains a 1 if the corresponding stack slot has type SLOT_REF at the
* given callsite.
*/
guint32 stack_ref_bitmap_offset;
/*
* Same for SLOT_PIN. It is possible that the same bit is set in both bitmaps at
* different callsites, if the slot starts out as PIN, and later changes to REF.
*/
guint32 stack_pin_bitmap_offset;
/*
* Corresponding bitmaps for registers
* These have width equal to the number of bits set in reg_ref_mask/reg_pin_mask.
* FIXME: Merge these with the normal bitmaps, i.e. reserve the first x slots for them ?
*/
guint32 reg_pin_bitmap_offset;
guint32 reg_ref_bitmap_offset;
guint32 used_int_regs, reg_ref_mask, reg_pin_mask;
/* The number of bits set in the two masks above */
guint8 nref_regs, npin_regs;
/*
* A bit array marking slots which contain refs.
* This is used only for debugging.
*/
//guint8 *ref_slots;
/* Callsite offsets */
/* These can take up a lot of space, so encode them compactly */
union {
guint8 *offsets8;
guint16 *offsets16;
guint32 *offsets32;
} callsites;
int ncallsites;
} GCMap;
/*
* A compressed version of GCMap. This is what gets stored in MonoJitInfo.
*/
typedef struct {
//guint8 *ref_slots;
//guint8 encoded_size;
/*
* The arrays below are embedded after the struct.
* Their address needs to be computed.
*/
/* The fixed fields of the GCMap encoded using LEB128 */
guint8 encoded [MONO_ZERO_LEN_ARRAY];
/* An array of ncallsites entries, each entry is callsite_entry_size bytes long */
guint8 callsites [MONO_ZERO_LEN_ARRAY];
/* The GC bitmaps */
guint8 bitmaps [MONO_ZERO_LEN_ARRAY];
} GCEncodedMap;
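/*
 * Note: a GCEncodedMap must start on a 4 byte boundary, since the embedded
 * callsite table is read in place as 16/32 bit entries (see the assert in
 * conservative_pass ()).
 */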
static int precise_frame_count [2], precise_frame_limit = -1;
static gboolean precise_frame_limit_inited;
/* Stats */
typedef struct {
gint32 scanned_stacks;
gint32 scanned;
gint32 scanned_precisely;
gint32 scanned_conservatively;
gint32 scanned_registers;
gint32 scanned_native;
gint32 scanned_other;
gint32 all_slots;
gint32 noref_slots;
gint32 ref_slots;
gint32 pin_slots;
gint32 gc_maps_size;
gint32 gc_callsites_size;
gint32 gc_callsites8_size;
gint32 gc_callsites16_size;
gint32 gc_callsites32_size;
gint32 gc_bitmaps_size;
gint32 gc_map_struct_size;
gint32 tlsdata_size;
} JITGCStats;
static JITGCStats stats;
static FILE *logfile;
static gboolean enable_gc_maps_for_aot;
void
mini_gc_enable_gc_maps_for_aot (void)
{
enable_gc_maps_for_aot = TRUE;
}
// FIXME: Move these to a shared place
static void
encode_uleb128 (guint32 value, guint8 *buf, guint8 **endbuf)
{
guint8 *p = buf;
do {
guint8 b = value & 0x7f;
value >>= 7;
if (value != 0) /* more bytes to come */
b |= 0x80;
*p ++ = b;
} while (value);
*endbuf = p;
}
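/*
 * Worked example: 300 (binary 100101100) encodes to the two bytes 0xAC 0x02:
 * the low 7 bits (0x2C) with the continuation bit set, then the rest (0x02).
 */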
static G_GNUC_UNUSED void
encode_sleb128 (gint32 value, guint8 *buf, guint8 **endbuf)
{
gboolean more = 1;
gboolean negative = (value < 0);
guint32 size = 32;
guint8 byte;
guint8 *p = buf;
while (more) {
byte = value & 0x7f;
value >>= 7;
/* the following is unnecessary if the
* implementation of >>= uses an arithmetic rather
* than logical shift for a signed left operand
*/
if (negative)
/* sign extend */
value |= - (1 <<(size - 7));
/* sign bit of byte is second high order bit (0x40) */
if ((value == 0 && !(byte & 0x40)) ||
(value == -1 && (byte & 0x40)))
more = 0;
else
byte |= 0x80;
*p ++= byte;
}
*endbuf = p;
}
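/*
 * Worked example: -2 encodes to the single byte 0x7E: its sign bit (0x40) is
 * set and no continuation byte is needed.
 */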
static guint32
decode_uleb128 (guint8 *buf, guint8 **endbuf)
{
guint8 *p = buf;
guint32 res = 0;
int shift = 0;
while (TRUE) {
guint8 b = *p;
p ++;
res = res | (((int)(b & 0x7f)) << shift);
if (!(b & 0x80))
break;
shift += 7;
}
*endbuf = p;
return res;
}
static gint32
decode_sleb128 (guint8 *buf, guint8 **endbuf)
{
guint8 *p = buf;
gint32 res = 0;
int shift = 0;
while (TRUE) {
guint8 b = *p;
p ++;
res = res | (((int)(b & 0x7f)) << shift);
shift += 7;
if (!(b & 0x80)) {
if (shift < 32 && (b & 0x40))
res |= - (1 << shift);
break;
}
}
*endbuf = p;
return res;
}
static int
encode_frame_reg (int frame_reg)
{
#ifdef TARGET_AMD64
if (frame_reg == AMD64_RSP)
return 0;
else if (frame_reg == AMD64_RBP)
return 1;
#elif defined(TARGET_X86)
if (frame_reg == X86_EBP)
return 0;
else if (frame_reg == X86_ESP)
return 1;
#elif defined(TARGET_ARM)
if (frame_reg == ARMREG_SP)
return 0;
else if (frame_reg == ARMREG_FP)
return 1;
#elif defined(TARGET_S390X)
if (frame_reg == S390_SP)
return 0;
else if (frame_reg == S390_FP)
return 1;
#elif defined (TARGET_RISCV)
if (frame_reg == RISCV_SP)
return 0;
else if (frame_reg == RISCV_FP)
return 1;
#else
NOT_IMPLEMENTED;
#endif
g_assert_not_reached ();
return -1;
}
static int
decode_frame_reg (int encoded)
{
#ifdef TARGET_AMD64
if (encoded == 0)
return AMD64_RSP;
else if (encoded == 1)
return AMD64_RBP;
#elif defined(TARGET_X86)
if (encoded == 0)
return X86_EBP;
else if (encoded == 1)
return X86_ESP;
#elif defined(TARGET_ARM)
if (encoded == 0)
return ARMREG_SP;
else if (encoded == 1)
return ARMREG_FP;
#elif defined(TARGET_S390X)
if (encoded == 0)
return S390_SP;
else if (encoded == 1)
return S390_FP;
#elif defined (TARGET_RISCV)
if (encoded == 0)
return RISCV_SP;
else if (encoded == 1)
return RISCV_FP;
#else
NOT_IMPLEMENTED;
#endif
g_assert_not_reached ();
return -1;
}
#ifdef TARGET_AMD64
#ifdef HOST_WIN32
static int callee_saved_regs [] = { AMD64_RBP, AMD64_RBX, AMD64_R12, AMD64_R13, AMD64_R14, AMD64_R15, AMD64_RDI, AMD64_RSI };
#else
static int callee_saved_regs [] = { AMD64_RBP, AMD64_RBX, AMD64_R12, AMD64_R13, AMD64_R14, AMD64_R15 };
#endif
#elif defined(TARGET_X86)
static int callee_saved_regs [] = { X86_EBX, X86_ESI, X86_EDI };
#elif defined(TARGET_ARM)
static int callee_saved_regs [] = { ARMREG_V1, ARMREG_V2, ARMREG_V3, ARMREG_V4, ARMREG_V5, ARMREG_V7, ARMREG_FP };
#elif defined(TARGET_ARM64)
// FIXME:
static int callee_saved_regs [] = { };
#elif defined(TARGET_S390X)
static int callee_saved_regs [] = { s390_r6, s390_r7, s390_r8, s390_r9, s390_r10, s390_r11, s390_r12, s390_r13, s390_r14 };
#elif defined(TARGET_POWERPC64) && _CALL_ELF == 2
static int callee_saved_regs [] = {
ppc_r13, ppc_r14, ppc_r15, ppc_r16,
ppc_r17, ppc_r18, ppc_r19, ppc_r20,
ppc_r21, ppc_r22, ppc_r23, ppc_r24,
ppc_r25, ppc_r26, ppc_r27, ppc_r28,
ppc_r29, ppc_r30, ppc_r31 };
#elif defined(TARGET_POWERPC)
static int callee_saved_regs [] = { ppc_r6, ppc_r7, ppc_r8, ppc_r9, ppc_r10, ppc_r11, ppc_r12, ppc_r13, ppc_r14 };
#elif defined (TARGET_RISCV)
static int callee_saved_regs [] = {
RISCV_S0, RISCV_S1, RISCV_S2, RISCV_S3, RISCV_S4, RISCV_S5,
RISCV_S6, RISCV_S7, RISCV_S8, RISCV_S9, RISCV_S10, RISCV_S11,
};
#endif
static guint32
encode_regmask (guint32 regmask)
{
int i;
guint32 res;
res = 0;
for (i = 0; i < sizeof (callee_saved_regs) / sizeof (int); ++i) {
if (regmask & (1 << callee_saved_regs [i])) {
res |= (1 << i);
regmask -= (1 << callee_saved_regs [i]);
}
}
g_assert (regmask == 0);
return res;
}
static guint32
decode_regmask (guint32 regmask)
{
int i;
guint32 res;
res = 0;
for (i = 0; i < sizeof (callee_saved_regs) / sizeof (int); ++i)
if (regmask & (1 << i))
res |= (1 << callee_saved_regs [i]);
return res;
}
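/*
 * Example (AMD64): a mask containing RBP and RBX becomes bits 0 and 1 of the
 * encoded mask, i.e. 0x3, since those are their indexes in callee_saved_regs [].
 */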
/*
* encode_gc_map:
*
* Encode the fixed fields of MAP into a buffer pointed to by BUF.
*/
static void
encode_gc_map (GCMap *map, guint8 *buf, guint8 **endbuf)
{
guint32 flags, freg;
encode_sleb128 (map->start_offset / SIZEOF_SLOT, buf, &buf);
encode_sleb128 (map->end_offset / SIZEOF_SLOT, buf, &buf);
encode_sleb128 (map->map_offset / SIZEOF_SLOT, buf, &buf);
encode_uleb128 (map->nslots, buf, &buf);
g_assert (map->callsite_entry_size <= 4);
freg = encode_frame_reg (map->frame_reg);
g_assert (freg < 2);
flags = (map->has_ref_slots ? 1 : 0) | (map->has_pin_slots ? 2 : 0) | (map->has_ref_regs ? 4 : 0) | (map->has_pin_regs ? 8 : 0) | ((map->callsite_entry_size - 1) << 4) | (freg << 6);
encode_uleb128 (flags, buf, &buf);
encode_uleb128 (encode_regmask (map->used_int_regs), buf, &buf);
if (map->has_ref_regs)
encode_uleb128 (encode_regmask (map->reg_ref_mask), buf, &buf);
if (map->has_pin_regs)
encode_uleb128 (encode_regmask (map->reg_pin_mask), buf, &buf);
encode_uleb128 (map->ncallsites, buf, &buf);
*endbuf = buf;
}
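/*
 * Layout of the flags byte, from bit 0 up: has_ref_slots, has_pin_slots,
 * has_ref_regs, has_pin_regs, (callsite_entry_size - 1) in bits 4-5, and the
 * encoded frame register in bit 6.
 */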
/*
* decode_gc_map:
*
* Decode the encoded GC map representation in BUF and store the result into MAP.
*/
static void
decode_gc_map (guint8 *buf, GCMap *map, guint8 **endbuf)
{
guint32 flags;
int stack_bitmap_size, reg_ref_bitmap_size, reg_pin_bitmap_size, offset, freg;
int i, n;
map->start_offset = decode_sleb128 (buf, &buf) * SIZEOF_SLOT;
map->end_offset = decode_sleb128 (buf, &buf) * SIZEOF_SLOT;
map->map_offset = decode_sleb128 (buf, &buf) * SIZEOF_SLOT;
map->nslots = decode_uleb128 (buf, &buf);
flags = decode_uleb128 (buf, &buf);
map->has_ref_slots = (flags & 1) ? 1 : 0;
map->has_pin_slots = (flags & 2) ? 1 : 0;
map->has_ref_regs = (flags & 4) ? 1 : 0;
map->has_pin_regs = (flags & 8) ? 1 : 0;
map->callsite_entry_size = ((flags >> 4) & 0x3) + 1;
freg = flags >> 6;
map->frame_reg = decode_frame_reg (freg);
map->used_int_regs = decode_regmask (decode_uleb128 (buf, &buf));
if (map->has_ref_regs) {
map->reg_ref_mask = decode_regmask (decode_uleb128 (buf, &buf));
n = 0;
for (i = 0; i < NREGS; ++i)
if (map->reg_ref_mask & (1 << i))
n ++;
map->nref_regs = n;
}
if (map->has_pin_regs) {
map->reg_pin_mask = decode_regmask (decode_uleb128 (buf, &buf));
n = 0;
for (i = 0; i < NREGS; ++i)
if (map->reg_pin_mask & (1 << i))
n ++;
map->npin_regs = n;
}
map->ncallsites = decode_uleb128 (buf, &buf);
stack_bitmap_size = (ALIGN_TO (map->nslots, 8) / 8) * map->ncallsites;
reg_ref_bitmap_size = (ALIGN_TO (map->nref_regs, 8) / 8) * map->ncallsites;
reg_pin_bitmap_size = (ALIGN_TO (map->npin_regs, 8) / 8) * map->ncallsites;
offset = 0;
map->stack_ref_bitmap_offset = offset;
if (map->has_ref_slots)
offset += stack_bitmap_size;
map->stack_pin_bitmap_offset = offset;
if (map->has_pin_slots)
offset += stack_bitmap_size;
map->reg_ref_bitmap_offset = offset;
if (map->has_ref_regs)
offset += reg_ref_bitmap_size;
map->reg_pin_bitmap_offset = offset;
if (map->has_pin_regs)
offset += reg_pin_bitmap_size;
*endbuf = buf;
}
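/*
 * The full encoded map is laid out as: the LEB128 fixed fields decoded above,
 * then (aligned to callsite_entry_size) the callsite offset table, then the
 * stack ref/pin and register ref/pin bitmaps, in the order computed above.
 */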
static gpointer
thread_attach_func (void)
{
TlsData *tls;
tls = g_new0 (TlsData, 1);
tls->tid = mono_native_thread_id_get ();
tls->info = mono_thread_info_current ();
UnlockedAdd (&stats.tlsdata_size, sizeof (TlsData));
return tls;
}
static void
thread_detach_func (gpointer user_data)
{
TlsData *tls = user_data;
g_free (tls);
}
static void
thread_suspend_func (gpointer user_data, void *sigctx, MonoContext *ctx)
{
TlsData *tls = user_data;
if (!tls) {
/* Happens during startup */
return;
}
if (tls->tid != mono_native_thread_id_get ()) {
/* Happens on osx because threads are not suspended using signals */
#ifndef TARGET_WIN32
gboolean res;
#endif
g_assert (tls->info);
#ifdef TARGET_WIN32
return;
#else
res = mono_thread_state_init_from_handle (&tls->unwind_state, tls->info, NULL);
#endif
} else {
tls->unwind_state.unwind_data [MONO_UNWIND_DATA_LMF] = mono_get_lmf ();
if (sigctx) {
mono_sigctx_to_monoctx (sigctx, &tls->unwind_state.ctx);
tls->unwind_state.valid = TRUE;
} else if (ctx) {
memcpy (&tls->unwind_state.ctx, ctx, sizeof (MonoContext));
tls->unwind_state.valid = TRUE;
} else {
tls->unwind_state.valid = FALSE;
}
tls->unwind_state.unwind_data [MONO_UNWIND_DATA_JIT_TLS] = mono_tls_get_jit_tls ();
tls->unwind_state.unwind_data [MONO_UNWIND_DATA_DOMAIN] = mono_domain_get ();
}
if (!tls->unwind_state.unwind_data [MONO_UNWIND_DATA_DOMAIN]) {
/* Happens during startup */
tls->unwind_state.valid = FALSE;
return;
}
}
#define DEAD_REF ((gpointer)(gssize)0x2a2a2a2a2a2a2a2aULL)
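/*
 * The bitmap helpers below treat the bitmap as row-major: WIDTH is the size
 * of one row in bytes, Y selects the row and X the bit within that row.
 */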
static void
set_bit (guint8 *bitmap, int width, int y, int x)
{
bitmap [(width * y) + (x / 8)] |= (1 << (x % 8));
}
static void
clear_bit (guint8 *bitmap, int width, int y, int x)
{
bitmap [(width * y) + (x / 8)] &= ~(1 << (x % 8));
}
static int
get_bit (guint8 *bitmap, int width, int y, int x)
{
return bitmap [(width * y) + (x / 8)] & (1 << (x % 8));
}
static const char*
slot_type_to_string (GCSlotType type)
{
switch (type) {
case SLOT_REF:
return "ref";
case SLOT_NOREF:
return "noref";
case SLOT_PIN:
return "pin";
default:
g_assert_not_reached ();
return NULL;
}
}
static host_mgreg_t
get_frame_pointer (MonoContext *ctx, int frame_reg)
{
#if defined(TARGET_AMD64)
if (frame_reg == AMD64_RSP)
return ctx->rsp;
else if (frame_reg == AMD64_RBP)
return ctx->rbp;
#elif defined(TARGET_X86)
if (frame_reg == X86_ESP)
return ctx->esp;
else if (frame_reg == X86_EBP)
return ctx->ebp;
#elif defined(TARGET_ARM)
if (frame_reg == ARMREG_SP)
return (host_mgreg_t)MONO_CONTEXT_GET_SP (ctx);
else if (frame_reg == ARMREG_FP)
return (host_mgreg_t)MONO_CONTEXT_GET_BP (ctx);
#elif defined(TARGET_S390X)
if (frame_reg == S390_SP)
return (host_mgreg_t)MONO_CONTEXT_GET_SP (ctx);
else if (frame_reg == S390_FP)
return (host_mgreg_t)MONO_CONTEXT_GET_BP (ctx);
#elif defined (TARGET_RISCV)
if (frame_reg == RISCV_SP)
return MONO_CONTEXT_GET_SP (ctx);
else if (frame_reg == RISCV_FP)
return MONO_CONTEXT_GET_BP (ctx);
#endif
g_assert_not_reached ();
return 0;
}
/*
 * conservative_pass:
*
* Mark a thread stack conservatively and collect information needed by the precise pass.
*/
static void
conservative_pass (TlsData *tls, guint8 *stack_start, guint8 *stack_end)
{
MonoJitInfo *ji;
MonoMethod *method;
MonoContext ctx, new_ctx;
MonoLMF *lmf;
guint8 *stack_limit;
gboolean last = TRUE;
GCMap *map;
GCMap map_tmp;
GCEncodedMap *emap;
guint8* fp, *p, *real_frame_start, *frame_start, *frame_end;
int i, pc_offset, cindex, bitmap_width;
int scanned = 0, scanned_precisely, scanned_conservatively, scanned_registers;
gboolean res;
StackFrameInfo frame;
host_mgreg_t *reg_locations [MONO_MAX_IREGS];
host_mgreg_t *new_reg_locations [MONO_MAX_IREGS];
guint8 *bitmaps;
FrameInfo *fi;
guint32 precise_regmask;
if (tls) {
tls->nframes = 0;
tls->ref_to_track = NULL;
}
/* tls == NULL can happen during startup */
if (mono_thread_internal_current () == NULL || !tls) {
mono_gc_conservatively_scan_area (stack_start, stack_end);
UnlockedAdd (&stats.scanned_stacks, stack_end - stack_start);
return;
}
lmf = tls->unwind_state.unwind_data [MONO_UNWIND_DATA_LMF];
frame.domain = NULL;
/* Number of bytes scanned based on GC map data */
scanned = 0;
/* Number of bytes scanned precisely based on GC map data */
scanned_precisely = 0;
/* Number of bytes scanned conservatively based on GC map data */
scanned_conservatively = 0;
/* Number of bytes scanned conservatively in register save areas */
scanned_registers = 0;
/* This is one past the last address which we have scanned */
stack_limit = stack_start;
if (!tls->unwind_state.valid)
memset (&new_ctx, 0, sizeof (ctx));
else
memcpy (&new_ctx, &tls->unwind_state.ctx, sizeof (MonoContext));
memset (reg_locations, 0, sizeof (reg_locations));
memset (new_reg_locations, 0, sizeof (new_reg_locations));
while (TRUE) {
if (!tls->unwind_state.valid)
break;
memcpy (&ctx, &new_ctx, sizeof (ctx));
for (i = 0; i < MONO_MAX_IREGS; ++i) {
if (new_reg_locations [i]) {
/*
* If the current frame saves the register, it means it might modify its
* value, thus the old location might not contain the same value, so
* we have to mark it conservatively.
*/
if (reg_locations [i]) {
DEBUG (fprintf (logfile, "\tscan saved reg %s location %p.\n", mono_arch_regname (i), reg_locations [i]));
mono_gc_conservatively_scan_area (reg_locations [i], (char*)reg_locations [i] + SIZEOF_SLOT);
scanned_registers += SIZEOF_SLOT;
}
reg_locations [i] = new_reg_locations [i];
DEBUG (fprintf (logfile, "\treg %s is now at location %p.\n", mono_arch_regname (i), reg_locations [i]));
}
}
g_assert ((gsize)stack_limit % SIZEOF_SLOT == 0);
res = mono_find_jit_info_ext (tls->unwind_state.unwind_data [MONO_UNWIND_DATA_JIT_TLS], NULL, &ctx, &new_ctx, NULL, &lmf, new_reg_locations, &frame);
if (!res)
break;
ji = frame.ji;
// FIXME: For skipped frames, scan the param area of the parent frame conservatively ?
// FIXME: trampolines
if (frame.type == FRAME_TYPE_MANAGED_TO_NATIVE) {
/*
* These frames are problematic for several reasons:
* - they are unwound through an LMF, and we have no precise register tracking for those.
* - the LMF might not contain a precise ip, so we can't compute the call site.
* - the LMF only unwinds to the wrapper frame, so we get these methods twice.
*/
DEBUG (fprintf (logfile, "Mark(0): <Managed-to-native transition>\n"));
for (i = 0; i < MONO_MAX_IREGS; ++i) {
if (reg_locations [i]) {
DEBUG (fprintf (logfile, "\tscan saved reg %s location %p.\n", mono_arch_regname (i), reg_locations [i]));
mono_gc_conservatively_scan_area (reg_locations [i], (char*)reg_locations [i] + SIZEOF_SLOT);
scanned_registers += SIZEOF_SLOT;
}
reg_locations [i] = NULL;
new_reg_locations [i] = NULL;
}
ctx = new_ctx;
continue;
}
if (ji)
method = jinfo_get_method (ji);
else
method = NULL;
/* The last frame can be in any state so mark conservatively */
if (last) {
if (ji) {
/*
 * pc_offset is otherwise only computed after this last-frame case, so
 * compute it here too to avoid reading it uninitialized in the DEBUG print.
 */
pc_offset = (guint8*)MONO_CONTEXT_GET_IP (&ctx) - (guint8*)ji->code_start;
DEBUG (char *fname = mono_method_full_name (method, TRUE); fprintf (logfile, "Mark(0): %s+0x%x (%p)\n", fname, pc_offset, (gpointer)MONO_CONTEXT_GET_IP (&ctx)); g_free (fname));
}
DEBUG (fprintf (logfile, "\t <Last frame>\n"));
last = FALSE;
/*
* new_reg_locations is not precise when a method is interrupted during its epilog, so clear it.
*/
for (i = 0; i < MONO_MAX_IREGS; ++i) {
if (reg_locations [i]) {
DEBUG (fprintf (logfile, "\tscan saved reg %s location %p.\n", mono_arch_regname (i), reg_locations [i]));
mono_gc_conservatively_scan_area (reg_locations [i], (char*)reg_locations [i] + SIZEOF_SLOT);
scanned_registers += SIZEOF_SLOT;
}
if (new_reg_locations [i]) {
DEBUG (fprintf (logfile, "\tscan saved reg %s location %p.\n", mono_arch_regname (i), new_reg_locations [i]));
mono_gc_conservatively_scan_area (new_reg_locations [i], (char*)new_reg_locations [i] + SIZEOF_SLOT);
scanned_registers += SIZEOF_SLOT;
}
reg_locations [i] = NULL;
new_reg_locations [i] = NULL;
}
continue;
}
pc_offset = (guint8*)MONO_CONTEXT_GET_IP (&ctx) - (guint8*)ji->code_start;
/* These frames are very problematic */
if (method->wrapper_type == MONO_WRAPPER_MANAGED_TO_NATIVE) {
DEBUG (char *fname = mono_method_full_name (method, TRUE); fprintf (logfile, "Mark(0): %s+0x%x (%p)\n", fname, pc_offset, (gpointer)MONO_CONTEXT_GET_IP (&ctx)); g_free (fname));
DEBUG (fprintf (logfile, "\tSkip.\n"));
continue;
}
/* All the other frames are at a call site */
if (tls->nframes == MAX_FRAMES) {
/*
* Can't save information since the array is full. So scan the rest of the
* stack conservatively.
*/
DEBUG (fprintf (logfile, "Mark (0): Frame stack full.\n"));
break;
}
/* Scan the frame of this method */
/*
* A frame contains the following:
* - saved registers
* - saved args
* - locals
* - spill area
* - localloc-ed memory
*/
g_assert (pc_offset >= 0);
emap = ji->gc_info;
if (!emap) {
DEBUG (char *fname = mono_method_full_name (jinfo_get_method (ji), TRUE); fprintf (logfile, "Mark(0): %s+0x%x (%p)\n", fname, pc_offset, (gpointer)MONO_CONTEXT_GET_IP (&ctx)); g_free (fname));
DEBUG (fprintf (logfile, "\tNo GC Map.\n"));
continue;
}
/* The embedded callsite table requires this */
g_assert (((gsize)emap % 4) == 0);
/*
* Debugging aid to control the number of frames scanned precisely
*/
if (!precise_frame_limit_inited) {
char *mono_precise_count = g_getenv ("MONO_PRECISE_COUNT");
if (mono_precise_count) {
precise_frame_limit = atoi (mono_precise_count);
g_free (mono_precise_count);
}
precise_frame_limit_inited = TRUE;
}
if (precise_frame_limit != -1) {
if (precise_frame_count [FALSE] == precise_frame_limit)
printf ("LAST PRECISE FRAME: %s\n", mono_method_full_name (method, TRUE));
if (precise_frame_count [FALSE] > precise_frame_limit)
continue;
}
precise_frame_count [FALSE] ++;
/* Decode the encoded GC map */
map = &map_tmp;
memset (map, 0, sizeof (GCMap));
decode_gc_map (&emap->encoded [0], map, &p);
p = (guint8*)ALIGN_TO (p, map->callsite_entry_size);
map->callsites.offsets8 = p;
p += map->callsite_entry_size * map->ncallsites;
bitmaps = p;
fp = (guint8*)get_frame_pointer (&ctx, map->frame_reg);
real_frame_start = fp + map->start_offset;
frame_start = fp + map->start_offset + map->map_offset;
frame_end = fp + map->end_offset;
DEBUG (char *fname = mono_method_full_name (jinfo_get_method (ji), TRUE); fprintf (logfile, "Mark(0): %s+0x%x (%p) limit=%p fp=%p frame=%p-%p (%d)\n", fname, pc_offset, (gpointer)MONO_CONTEXT_GET_IP (&ctx), stack_limit, fp, frame_start, frame_end, (int)(frame_end - frame_start)); g_free (fname));
/* Find the callsite index */
if (map->callsite_entry_size == 1) {
for (i = 0; i < map->ncallsites; ++i)
/* ip points inside the call instruction */
if (map->callsites.offsets8 [i] == pc_offset + 1)
break;
} else if (map->callsite_entry_size == 2) {
// FIXME: Use a binary search
for (i = 0; i < map->ncallsites; ++i)
/* ip points inside the call instruction */
if (map->callsites.offsets16 [i] == pc_offset + 1)
break;
} else {
// FIXME: Use a binary search
for (i = 0; i < map->ncallsites; ++i)
/* ip points inside the call instruction */
if (map->callsites.offsets32 [i] == pc_offset + 1)
break;
}
if (i == map->ncallsites) {
printf ("Unable to find ip offset 0x%x in callsite list of %s.\n", pc_offset + 1, mono_method_full_name (method, TRUE));
g_assert_not_reached ();
}
cindex = i;
/*
 * This is not necessarily true on x86 because frames have a different size at each
* call site.
*/
//g_assert (real_frame_start >= stack_limit);
if (real_frame_start > stack_limit) {
/* This scans the previously skipped frames as well */
DEBUG (fprintf (logfile, "\tscan area %p-%p (%d).\n", stack_limit, real_frame_start, (int)(real_frame_start - stack_limit)));
mono_gc_conservatively_scan_area (stack_limit, real_frame_start);
UnlockedAdd (&stats.scanned_other, real_frame_start - stack_limit);
}
/* Mark stack slots */
if (map->has_pin_slots) {
int bitmap_width = ALIGN_TO (map->nslots, 8) / 8;
guint8 *pin_bitmap = &bitmaps [map->stack_pin_bitmap_offset + (bitmap_width * cindex)];
guint8 *p;
gboolean pinned;
p = frame_start;
for (i = 0; i < map->nslots; ++i) {
pinned = pin_bitmap [i / 8] & (1 << (i % 8));
if (pinned) {
DEBUG (fprintf (logfile, "\tscan slot %s0x%x(fp)=%p.\n", (guint8*)p > (guint8*)fp ? "" : "-", ABS ((int)((gssize)p - (gssize)fp)), p));
mono_gc_conservatively_scan_area (p, p + SIZEOF_SLOT);
scanned_conservatively += SIZEOF_SLOT;
} else {
scanned_precisely += SIZEOF_SLOT;
}
p += SIZEOF_SLOT;
}
} else {
scanned_precisely += (map->nslots * SIZEOF_SLOT);
}
/* The area outside of start-end is NOREF */
scanned_precisely += (map->end_offset - map->start_offset) - (map->nslots * SIZEOF_SLOT);
/* Mark registers */
precise_regmask = map->used_int_regs | (1 << map->frame_reg);
if (map->has_pin_regs) {
int bitmap_width = ALIGN_TO (map->npin_regs, 8) / 8;
guint8 *pin_bitmap = &bitmaps [map->reg_pin_bitmap_offset + (bitmap_width * cindex)];
int bindex = 0;
for (i = 0; i < NREGS; ++i) {
if (!(map->used_int_regs & (1 << i)))
continue;
if (!(map->reg_pin_mask & (1 << i)))
continue;
if (pin_bitmap [bindex / 8] & (1 << (bindex % 8))) {
DEBUG (fprintf (logfile, "\treg %s saved at 0x%p is pinning.\n", mono_arch_regname (i), reg_locations [i]));
precise_regmask &= ~(1 << i);
}
bindex ++;
}
}
scanned += map->end_offset - map->start_offset;
g_assert (scanned == scanned_precisely + scanned_conservatively);
stack_limit = frame_end;
/* Save information for the precise pass */
fi = &tls->frames [tls->nframes];
fi->nslots = map->nslots;
bitmap_width = ALIGN_TO (map->nslots, 8) / 8;
if (map->has_ref_slots)
fi->bitmap = &bitmaps [map->stack_ref_bitmap_offset + (bitmap_width * cindex)];
else
fi->bitmap = NULL;
fi->frame_start_offset = frame_start - stack_start;
fi->nreg_locations = 0;
DEBUG_PRECISE (fi->ji = ji);
DEBUG_PRECISE (fi->fp = fp);
if (map->has_ref_regs) {
int bitmap_width = ALIGN_TO (map->nref_regs, 8) / 8;
guint8 *ref_bitmap = &bitmaps [map->reg_ref_bitmap_offset + (bitmap_width * cindex)];
int bindex = 0;
for (i = 0; i < NREGS; ++i) {
if (!(map->reg_ref_mask & (1 << i)))
continue;
if (reg_locations [i] && (ref_bitmap [bindex / 8] & (1 << (bindex % 8)))) {
DEBUG_PRECISE (fi->regs [fi->nreg_locations] = i);
DEBUG (fprintf (logfile, "\treg %s saved at 0x%p is ref.\n", mono_arch_regname (i), reg_locations [i]));
fi->reg_locations [fi->nreg_locations] = (guint8*)reg_locations [i] - stack_start;
fi->nreg_locations ++;
}
bindex ++;
}
}
/*
* Clear locations of precisely tracked registers.
*/
if (precise_regmask) {
for (i = 0; i < NREGS; ++i) {
if (precise_regmask & (1 << i)) {
/*
* The method uses this register, and we have precise info for it.
* This means the location will be scanned precisely.
* Tell the code at the beginning of the loop that this location is
* processed.
*/
if (reg_locations [i])
DEBUG (fprintf (logfile, "\treg %s at location %p (==%p) is precise.\n", mono_arch_regname (i), reg_locations [i], (gpointer)*reg_locations [i]));
reg_locations [i] = NULL;
}
}
}
tls->nframes ++;
}
/* Scan the remaining register save locations */
for (i = 0; i < MONO_MAX_IREGS; ++i) {
if (reg_locations [i]) {
DEBUG (fprintf (logfile, "\tscan saved reg location %p.\n", reg_locations [i]));
mono_gc_conservatively_scan_area (reg_locations [i], (char*)reg_locations [i] + SIZEOF_SLOT);
scanned_registers += SIZEOF_SLOT;
}
if (new_reg_locations [i]) {
DEBUG (fprintf (logfile, "\tscan saved reg location %p.\n", new_reg_locations [i]));
mono_gc_conservatively_scan_area (new_reg_locations [i], (char*)new_reg_locations [i] + SIZEOF_SLOT);
scanned_registers += SIZEOF_SLOT;
}
}
if (stack_limit < stack_end) {
DEBUG (fprintf (logfile, "\tscan remaining stack %p-%p (%d).\n", stack_limit, stack_end, (int)(stack_end - stack_limit)));
mono_gc_conservatively_scan_area (stack_limit, stack_end);
UnlockedAdd (&stats.scanned_native, stack_end - stack_limit);
}
DEBUG (fprintf (logfile, "Marked %d bytes, p=%d,c=%d out of %d.\n", scanned, scanned_precisely, scanned_conservatively, (int)(stack_end - stack_start)));
UnlockedAdd (&stats.scanned_stacks, stack_end - stack_start);
UnlockedAdd (&stats.scanned, scanned);
UnlockedAdd (&stats.scanned_precisely, scanned_precisely);
UnlockedAdd (&stats.scanned_conservatively, scanned_conservatively);
UnlockedAdd (&stats.scanned_registers, scanned_registers);
//mono_gc_conservatively_scan_area (stack_start, stack_end);
}
/*
* precise_pass:
*
* Mark a thread stack precisely based on information saved during the conservative
* pass.
*/
static void
precise_pass (TlsData *tls, guint8 *stack_start, guint8 *stack_end, void *gc_data)
{
int findex, i;
FrameInfo *fi;
guint8 *frame_start;
if (!tls)
return;
if (!tls->unwind_state.valid)
return;
for (findex = 0; findex < tls->nframes; findex ++) {
/* Load information saved by the !precise pass */
fi = &tls->frames [findex];
frame_start = stack_start + fi->frame_start_offset;
DEBUG (char *fname = mono_method_full_name (jinfo_get_method (fi->ji), TRUE); fprintf (logfile, "Mark(1): %s\n", fname); g_free (fname));
/*
* FIXME: Add a function to mark using a bitmap, to avoid doing a
* call for each object.
*/
/* Mark stack slots */
if (fi->bitmap) {
guint8 *ref_bitmap = fi->bitmap;
gboolean live;
for (i = 0; i < fi->nslots; ++i) {
MonoObject **ptr = (MonoObject**)(frame_start + (i * SIZEOF_SLOT));
live = ref_bitmap [i / 8] & (1 << (i % 8));
if (live) {
MonoObject *obj = *ptr;
if (obj) {
DEBUG (fprintf (logfile, "\tref %s0x%x(fp)=%p: %p ->", (guint8*)ptr >= (guint8*)fi->fp ? "" : "-", ABS ((int)((gssize)ptr - (gssize)fi->fp)), ptr, obj));
*ptr = mono_gc_scan_object (obj, gc_data);
DEBUG (fprintf (logfile, " %p.\n", *ptr));
} else {
DEBUG (fprintf (logfile, "\tref %s0x%x(fp)=%p: %p.\n", (guint8*)ptr >= (guint8*)fi->fp ? "" : "-", ABS ((int)((gssize)ptr - (gssize)fi->fp)), ptr, obj));
}
} else {
#if 0
/*
* This is disabled because the pointer takes up a lot of space.
* Stack slots might be shared between ref and non-ref variables ?
*/
if (map->ref_slots [i / 8] & (1 << (i % 8))) {
DEBUG (fprintf (logfile, "\tref %s0x%x(fp)=%p: dead (%p)\n", (guint8*)ptr >= (guint8*)fi->fp ? "" : "-", ABS ((int)((gssize)ptr - (gssize)fi->fp)), ptr, *ptr));
/*
* Fail fast if the live range is incorrect, and
* the JITted code tries to access this object
*/
*ptr = DEAD_REF;
}
#endif
}
}
}
/* Mark registers */
/*
* Registers are different from stack slots, they have no address where they
* are stored. Instead, some frame below this frame in the stack saves them
* in its prolog to the stack. We can mark this location precisely.
*/
for (i = 0; i < fi->nreg_locations; ++i) {
/*
* reg_locations [i] contains the address of the stack slot where
* a reg was last saved, so mark that slot.
*/
MonoObject **ptr = (MonoObject**)((guint8*)stack_start + fi->reg_locations [i]);
MonoObject *obj = *ptr;
if (obj) {
DEBUG (fprintf (logfile, "\treg %s saved at %p: %p ->", mono_arch_regname (fi->regs [i]), ptr, obj));
*ptr = mono_gc_scan_object (obj, gc_data);
DEBUG (fprintf (logfile, " %p.\n", *ptr));
} else {
DEBUG (fprintf (logfile, "\treg %s saved at %p: %p\n", mono_arch_regname (fi->regs [i]), ptr, obj));
}
}
}
/*
* Debugging aid to check for missed refs.
*/
if (tls->ref_to_track) {
gpointer *p;
for (p = (gpointer*)stack_start; p < (gpointer*)stack_end; ++p)
if (*p == tls->ref_to_track)
printf ("REF AT %p.\n", p);
}
}
/*
* thread_mark_func:
*
* This is called by the GC twice to mark a thread stack. PRECISE is FALSE at the first
* call, and TRUE at the second. USER_DATA points to a TlsData
 * structure filled in by thread_suspend_func.
*/
static void
thread_mark_func (gpointer user_data, guint8 *stack_start, guint8 *stack_end, gboolean precise, void *gc_data)
{
TlsData *tls = user_data;
DEBUG (fprintf (logfile, "****************************************\n"));
DEBUG (fprintf (logfile, "*** %s stack marking for thread %p (%p-%p) ***\n", precise ? "Precise" : "Conservative", tls ? GUINT_TO_POINTER (tls->tid) : NULL, stack_start, stack_end));
DEBUG (fprintf (logfile, "****************************************\n"));
if (!precise)
conservative_pass (tls, stack_start, stack_end);
else
precise_pass (tls, stack_start, stack_end, gc_data);
}
#ifndef DISABLE_JIT
static void
mini_gc_init_gc_map (MonoCompile *cfg)
{
if (COMPILE_LLVM (cfg))
return;
if (!mono_gc_is_moving ())
return;
if (cfg->compile_aot) {
if (!enable_gc_maps_for_aot)
return;
} else if (!mono_gc_precise_stack_mark_enabled ())
return;
#if 1
/* Debugging support */
{
static int precise_count;
precise_count ++;
char *mono_gcmap_count = g_getenv ("MONO_GCMAP_COUNT");
if (mono_gcmap_count) {
int count = atoi (mono_gcmap_count);
g_free (mono_gcmap_count);
if (precise_count == count)
printf ("LAST: %s\n", mono_method_full_name (cfg->method, TRUE));
if (precise_count > count)
return;
}
}
#endif
cfg->compute_gc_maps = TRUE;
cfg->gc_info = mono_mempool_alloc0 (cfg->mempool, sizeof (MonoCompileGC));
}
/*
* mini_gc_set_slot_type_from_fp:
*
* Set the GC slot type of the stack slot identified by SLOT_OFFSET, which should be
* relative to the frame pointer. By default, all stack slots are type PIN, so there is no
* need to call this function for those slots.
*/
void
mini_gc_set_slot_type_from_fp (MonoCompile *cfg, int slot_offset, GCSlotType type)
{
MonoCompileGC *gcfg = (MonoCompileGC*)cfg->gc_info;
if (!cfg->compute_gc_maps)
return;
g_assert (slot_offset % SIZEOF_SLOT == 0);
gcfg->stack_slots_from_fp = g_slist_prepend_mempool (cfg->mempool, gcfg->stack_slots_from_fp, GINT_TO_POINTER (((slot_offset) << 16) | type));
}
/*
* mini_gc_set_slot_type_from_cfa:
*
* Set the GC slot type of the stack slot identified by SLOT_OFFSET, which should be
* relative to the DWARF CFA value. This should be called from mono_arch_emit_prolog ().
* If type is STACK_REF, the slot is assumed to be live from the end of the prolog until
* the end of the method. By default, all stack slots are type PIN, so there is no need to
* call this function for those slots.
*/
void
mini_gc_set_slot_type_from_cfa (MonoCompile *cfg, int slot_offset, GCSlotType type)
{
MonoCompileGC *gcfg = (MonoCompileGC*)cfg->gc_info;
int slot = - (slot_offset / SIZEOF_SLOT);
if (!cfg->compute_gc_maps)
return;
g_assert (slot_offset <= 0);
g_assert (slot_offset % SIZEOF_SLOT == 0);
gcfg->stack_slots_from_cfa = g_slist_prepend_mempool (cfg->mempool, gcfg->stack_slots_from_cfa, GUINT_TO_POINTER (((slot) << 16) | type));
}
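/*
 * Example: on a 64 bit target a NOREF slot at cfa-8 is registered with
 * slot_offset == -8, so the entry above encodes slot 1, i.e. (1 << 16) | type.
 */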
static int
fp_offset_to_slot (MonoCompile *cfg, int offset)
{
MonoCompileGC *gcfg = cfg->gc_info;
return (offset - gcfg->min_offset) / SIZEOF_SLOT;
}
static int
slot_to_fp_offset (MonoCompile *cfg, int slot)
{
MonoCompileGC *gcfg = cfg->gc_info;
return (slot * SIZEOF_SLOT) + gcfg->min_offset;
}
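/*
 * Example: with min_offset == -32 on a 64 bit target, the slot at fp-16 maps
 * to index (-16 - (-32)) / SIZEOF_SLOT == 2, and slot 2 maps back to fp-16.
 */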
static MONO_ALWAYS_INLINE void
set_slot (MonoCompileGC *gcfg, int slot, int callsite_index, GCSlotType type)
{
g_assert (slot >= 0 && slot < gcfg->nslots);
if (type == SLOT_PIN) {
clear_bit (gcfg->stack_ref_bitmap, gcfg->stack_bitmap_width, slot, callsite_index);
set_bit (gcfg->stack_pin_bitmap, gcfg->stack_bitmap_width, slot, callsite_index);
} else if (type == SLOT_REF) {
set_bit (gcfg->stack_ref_bitmap, gcfg->stack_bitmap_width, slot, callsite_index);
clear_bit (gcfg->stack_pin_bitmap, gcfg->stack_bitmap_width, slot, callsite_index);
} else if (type == SLOT_NOREF) {
clear_bit (gcfg->stack_ref_bitmap, gcfg->stack_bitmap_width, slot, callsite_index);
clear_bit (gcfg->stack_pin_bitmap, gcfg->stack_bitmap_width, slot, callsite_index);
}
}
static void
set_slot_everywhere (MonoCompileGC *gcfg, int slot, GCSlotType type)
{
int width, pos;
guint8 *ref_bitmap, *pin_bitmap;
/*
int cindex;
for (cindex = 0; cindex < gcfg->ncallsites; ++cindex)
set_slot (gcfg, slot, cindex, type);
*/
ref_bitmap = gcfg->stack_ref_bitmap;
pin_bitmap = gcfg->stack_pin_bitmap;
width = gcfg->stack_bitmap_width;
pos = width * slot;
if (type == SLOT_PIN) {
memset (ref_bitmap + pos, 0, width);
memset (pin_bitmap + pos, 0xff, width);
} else if (type == SLOT_REF) {
memset (ref_bitmap + pos, 0xff, width);
memset (pin_bitmap + pos, 0, width);
} else if (type == SLOT_NOREF) {
memset (ref_bitmap + pos, 0, width);
memset (pin_bitmap + pos, 0, width);
}
}
static void
set_slot_in_range (MonoCompileGC *gcfg, int slot, int from, int to, GCSlotType type)
{
int cindex;
for (cindex = 0; cindex < gcfg->ncallsites; ++cindex) {
int callsite_offset = gcfg->callsites [cindex]->pc_offset;
if (callsite_offset >= from && callsite_offset < to)
set_slot (gcfg, slot, cindex, type);
}
}
static void
set_reg_slot (MonoCompileGC *gcfg, int slot, int callsite_index, GCSlotType type)
{
g_assert (slot >= 0 && slot < gcfg->nregs);
if (type == SLOT_PIN) {
clear_bit (gcfg->reg_ref_bitmap, gcfg->reg_bitmap_width, slot, callsite_index);
set_bit (gcfg->reg_pin_bitmap, gcfg->reg_bitmap_width, slot, callsite_index);
} else if (type == SLOT_REF) {
set_bit (gcfg->reg_ref_bitmap, gcfg->reg_bitmap_width, slot, callsite_index);
clear_bit (gcfg->reg_pin_bitmap, gcfg->reg_bitmap_width, slot, callsite_index);
} else if (type == SLOT_NOREF) {
clear_bit (gcfg->reg_ref_bitmap, gcfg->reg_bitmap_width, slot, callsite_index);
clear_bit (gcfg->reg_pin_bitmap, gcfg->reg_bitmap_width, slot, callsite_index);
}
}
static void
set_reg_slot_everywhere (MonoCompileGC *gcfg, int slot, GCSlotType type)
{
int cindex;
for (cindex = 0; cindex < gcfg->ncallsites; ++cindex)
set_reg_slot (gcfg, slot, cindex, type);
}
static void
set_reg_slot_in_range (MonoCompileGC *gcfg, int slot, int from, int to, GCSlotType type)
{
int cindex;
for (cindex = 0; cindex < gcfg->ncallsites; ++cindex) {
int callsite_offset = gcfg->callsites [cindex]->pc_offset;
if (callsite_offset >= from && callsite_offset < to)
set_reg_slot (gcfg, slot, cindex, type);
}
}
static void
process_spill_slots (MonoCompile *cfg)
{
MonoCompileGC *gcfg = cfg->gc_info;
MonoBasicBlock *bb;
GSList *l;
int i;
/* Mark all ref/pin spill slots as NOREF by default outside of their live range */
for (bb = cfg->bb_entry; bb; bb = bb->next_bb) {
for (l = bb->spill_slot_defs; l; l = l->next) {
MonoInst *def = l->data;
int spill_slot = def->inst_c0;
int bank = def->inst_c1;
int offset = cfg->spill_info [bank][spill_slot].offset;
int slot = fp_offset_to_slot (cfg, offset);
if (bank == MONO_REG_INT_MP || bank == MONO_REG_INT_REF)
set_slot_everywhere (gcfg, slot, SLOT_NOREF);
}
}
for (bb = cfg->bb_entry; bb; bb = bb->next_bb) {
for (l = bb->spill_slot_defs; l; l = l->next) {
MonoInst *def = l->data;
int spill_slot = def->inst_c0;
int bank = def->inst_c1;
int offset = cfg->spill_info [bank][spill_slot].offset;
int slot = fp_offset_to_slot (cfg, offset);
GCSlotType type;
if (bank == MONO_REG_INT_MP)
type = SLOT_PIN;
else
type = SLOT_REF;
/*
* Extend the live interval for the GC tracked spill slots
* defined in this bblock.
* FIXME: This is not needed.
*/
set_slot_in_range (gcfg, slot, def->backend.pc_offset, bb->native_offset + bb->native_length, type);
if (cfg->verbose_level > 1)
printf ("\t%s spill slot at %s0x%x(fp) (slot = %d)\n", slot_type_to_string (type), offset >= 0 ? "" : "-", ABS (offset), slot);
}
}
/* Set fp spill slots to NOREF */
for (i = 0; i < cfg->spill_info_len [MONO_REG_DOUBLE]; ++i) {
int offset = cfg->spill_info [MONO_REG_DOUBLE][i].offset;
int slot;
if (offset == -1)
continue;
slot = fp_offset_to_slot (cfg, offset);
set_slot_everywhere (gcfg, slot, SLOT_NOREF);
/* FIXME: 32 bit */
if (cfg->verbose_level > 1)
printf ("\tfp spill slot at %s0x%x(fp) (slot = %d)\n", offset >= 0 ? "" : "-", ABS (offset), slot);
}
/* Set int spill slots to NOREF */
for (i = 0; i < cfg->spill_info_len [MONO_REG_INT]; ++i) {
int offset = cfg->spill_info [MONO_REG_INT][i].offset;
int slot;
if (offset == -1)
continue;
slot = fp_offset_to_slot (cfg, offset);
set_slot_everywhere (gcfg, slot, SLOT_NOREF);
if (cfg->verbose_level > 1)
printf ("\tint spill slot at %s0x%x(fp) (slot = %d)\n", offset >= 0 ? "" : "-", ABS (offset), slot);
}
}
/*
* process_other_slots:
*
* Process stack slots registered using mini_gc_set_slot_type_... ().
*/
static void
process_other_slots (MonoCompile *cfg)
{
MonoCompileGC *gcfg = cfg->gc_info;
GSList *l;
/* Relative to the CFA */
for (l = gcfg->stack_slots_from_cfa; l; l = l->next) {
guint data = GPOINTER_TO_UINT (l->data);
int cfa_slot = data >> 16;
GCSlotType type = data & 0xff;
int slot;
/*
* Map the cfa relative slot to an fp relative slot.
* slot_addr == cfa - <cfa_slot>*4/8
* fp + cfa_offset == cfa
* -> slot_addr == fp + (cfa_offset - <cfa_slot>*4/8)
*/
slot = (cfg->cfa_offset / SIZEOF_SLOT) - cfa_slot - (gcfg->min_offset / SIZEOF_SLOT);
set_slot_everywhere (gcfg, slot, type);
if (cfg->verbose_level > 1) {
int fp_offset = slot_to_fp_offset (cfg, slot);
if (type == SLOT_NOREF)
printf ("\tnoref slot at %s0x%x(fp) (slot = %d) (cfa - 0x%x)\n", fp_offset >= 0 ? "" : "-", ABS (fp_offset), slot, (int)(cfa_slot * SIZEOF_SLOT));
}
}
/* Relative to the FP */
for (l = gcfg->stack_slots_from_fp; l; l = l->next) {
gint data = GPOINTER_TO_INT (l->data);
int offset = data >> 16;
GCSlotType type = data & 0xff;
int slot;
slot = fp_offset_to_slot (cfg, offset);
set_slot_everywhere (gcfg, slot, type);
/* Liveness for these slots is handled by process_spill_slots () */
if (cfg->verbose_level > 1) {
if (type == SLOT_REF)
printf ("\tref slot at fp+0x%x (slot = %d)\n", offset, slot);
else if (type == SLOT_NOREF)
printf ("\tnoref slot at 0x%x(fp) (slot = %d)\n", offset, slot);
}
}
}
static gsize*
get_vtype_bitmap (MonoType *t, int *numbits)
{
MonoClass *klass = mono_class_from_mono_type_internal (t);
if (klass->generic_container || mono_class_is_open_constructed_type (t)) {
/* FIXME: Generic sharing */
return NULL;
} else {
mono_class_compute_gc_descriptor (klass);
return mono_gc_get_bitmap_for_descr (klass->gc_descr, numbits);
}
}
static const char*
get_offset_sign (int offset)
{
return offset < 0 ? "-" : "+";
}
static int
get_offset_val (int offset)
{
return offset < 0 ? (- offset) : offset;
}
static void
process_variables (MonoCompile *cfg)
{
MonoCompileGC *gcfg = cfg->gc_info;
MonoMethodSignature *sig = mono_method_signature_internal (cfg->method);
int i, locals_min_slot, locals_max_slot, cindex;
MonoBasicBlock *bb;
MonoInst *tmp;
int *pc_offsets;
int locals_min_offset = gcfg->locals_min_offset;
int locals_max_offset = gcfg->locals_max_offset;
/* Slots for locals are NOREF by default */
locals_min_slot = (locals_min_offset - gcfg->min_offset) / SIZEOF_SLOT;
locals_max_slot = (locals_max_offset - gcfg->min_offset) / SIZEOF_SLOT;
for (i = locals_min_slot; i < locals_max_slot; ++i) {
set_slot_everywhere (gcfg, i, SLOT_NOREF);
}
/*
* Compute the offset where variables are initialized in the first bblock, if any.
*/
pc_offsets = g_new0 (int, cfg->next_vreg);
bb = cfg->bb_entry->next_bb;
MONO_BB_FOR_EACH_INS (bb, tmp) {
if (tmp->opcode == OP_GC_LIVENESS_DEF) {
int vreg = tmp->inst_c1;
if (pc_offsets [vreg] == 0) {
g_assert (tmp->backend.pc_offset > 0);
pc_offsets [vreg] = tmp->backend.pc_offset;
}
}
}
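/*
* Illustration (the numbers are made up): if the local assigned to vreg R12 is first
* initialized by an OP_GC_LIVENESS_DEF at native offset 0x34, then pc_offsets [12]
* == 0x34 after this loop. A value of 0 means the variable is never initialized in
* the first bblock, and the code below treats it conservatively (it gets pinned).
*/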
/*
* Stack slots holding arguments are initialized in the prolog.
* This means we can treat them alive for the whole method.
*/
for (i = 0; i < cfg->num_varinfo; i++) {
MonoInst *ins = cfg->varinfo [i];
MonoType *t = ins->inst_vtype;
MonoMethodVar *vmv;
guint32 pos;
gboolean byref, is_this = FALSE;
gboolean is_arg = i < cfg->locals_start;
if (ins == cfg->ret) {
if (!(ins->opcode == OP_REGOFFSET && MONO_TYPE_ISSTRUCT (t)))
continue;
}
vmv = MONO_VARINFO (cfg, i);
/* For some reason, 'this' is byref */
if (sig->hasthis && ins == cfg->args [0] && !cfg->method->klass->valuetype) {
t = m_class_get_byval_arg (cfg->method->klass);
is_this = TRUE;
}
byref = m_type_is_byref (t);
if (ins->opcode == OP_REGVAR) {
int hreg;
GCSlotType slot_type;
t = mini_get_underlying_type (t);
hreg = ins->dreg;
g_assert (hreg < MONO_MAX_IREGS);
if (byref)
slot_type = SLOT_PIN;
else
slot_type = mini_type_is_reference (t) ? SLOT_REF : SLOT_NOREF;
if (slot_type == SLOT_PIN) {
/* These have no live interval, be conservative */
set_reg_slot_everywhere (gcfg, hreg, slot_type);
} else {
/*
* Unlike variables allocated to the stack, we generate liveness info
* for noref vars in registers in mono_spill_global_vars (), because
* knowing that a register doesn't contain a ref allows us to mark its save
* locations precisely.
*/
for (cindex = 0; cindex < gcfg->ncallsites; ++cindex)
if (gcfg->callsites [cindex]->liveness [i / 8] & (1 << (i % 8)))
set_reg_slot (gcfg, hreg, cindex, slot_type);
}
if (cfg->verbose_level > 1) {
printf ("\t%s %sreg %s(R%d)\n", slot_type_to_string (slot_type), is_arg ? "arg " : "", mono_arch_regname (hreg), vmv->vreg);
}
continue;
}
if (ins->opcode != OP_REGOFFSET)
continue;
if (ins->inst_offset % SIZEOF_SLOT != 0)
continue;
pos = fp_offset_to_slot (cfg, ins->inst_offset);
if (is_arg && ins->flags & MONO_INST_IS_DEAD) {
/* These do not get stored in the prolog */
set_slot_everywhere (gcfg, pos, SLOT_NOREF);
if (cfg->verbose_level > 1) {
printf ("\tdead arg at fp%s0x%x (slot = %d): %s\n", get_offset_sign (ins->inst_offset), get_offset_val (ins->inst_offset), pos, mono_type_full_name (ins->inst_vtype));
}
continue;
}
if (MONO_TYPE_ISSTRUCT (t)) {
int numbits = 0, j;
gsize *bitmap = NULL;
gboolean pin = FALSE;
int size;
int size_in_slots;
if (ins->backend.is_pinvoke)
size = mono_class_native_size (ins->klass, NULL);
else
size = mono_class_value_size (ins->klass, NULL);
size_in_slots = ALIGN_TO (size, SIZEOF_SLOT) / SIZEOF_SLOT;
if (cfg->verbose_level > 1)
printf ("\tvtype R%d at %s0x%x(fp)-%s0x%x(fp) (slot %d-%d): %s\n", vmv->vreg, get_offset_sign (ins->inst_offset), get_offset_val (ins->inst_offset), get_offset_sign (ins->inst_offset), get_offset_val (ins->inst_offset + (size_in_slots * SIZEOF_SLOT)), pos, pos + size_in_slots, mono_type_full_name (ins->inst_vtype));
if (!ins->klass->has_references) {
if (is_arg) {
for (j = 0; j < size_in_slots; ++j)
set_slot_everywhere (gcfg, pos + j, SLOT_NOREF);
}
continue;
}
bitmap = get_vtype_bitmap (t, &numbits);
if (!bitmap)
pin = TRUE;
/*
* Most vtypes are marked volatile because of the LDADDR instructions,
* and they have no liveness information since they are decomposed
* before the liveness pass. We emit OP_GC_LIVENESS_DEF instructions for
* them during VZERO decomposition.
*/
if (!is_arg) {
if (!pc_offsets [vmv->vreg])
pin = TRUE;
if (ins->backend.is_pinvoke)
pin = TRUE;
}
if (bitmap) {
for (cindex = 0; cindex < gcfg->ncallsites; ++cindex) {
if (gcfg->callsites [cindex]->pc_offset > pc_offsets [vmv->vreg]) {
for (j = 0; j < numbits; ++j) {
if (bitmap [j / GC_BITS_PER_WORD] & ((gsize)1 << (j % GC_BITS_PER_WORD))) {
/* The descriptor is for the boxed object */
set_slot (gcfg, (pos + j - (MONO_ABI_SIZEOF (MonoObject) / SIZEOF_SLOT)), cindex, pin ? SLOT_PIN : SLOT_REF);
}
}
}
}
if (cfg->verbose_level > 1) {
for (j = 0; j < numbits; ++j) {
if (bitmap [j / GC_BITS_PER_WORD] & ((gsize)1 << (j % GC_BITS_PER_WORD)))
printf ("\t\t%s slot at 0x%x(fp) (slot = %d)\n", pin ? "pin" : "ref", (int)(ins->inst_offset + (j * SIZEOF_SLOT)), (int)(pos + j - (MONO_ABI_SIZEOF (MonoObject) / SIZEOF_SLOT)));
}
}
} else {
if (cfg->verbose_level > 1)
printf ("\t\tpinned\n");
for (j = 0; j < size_in_slots; ++j) {
set_slot_everywhere (gcfg, pos + j, SLOT_PIN);
}
}
g_free (bitmap);
continue;
}
if (!is_arg && (ins->inst_offset < gcfg->min_offset || ins->inst_offset >= gcfg->max_offset))
/* Vret addr etc. */
continue;
if (m_type_is_byref (t)) {
if (is_arg) {
set_slot_everywhere (gcfg, pos, SLOT_PIN);
} else {
for (cindex = 0; cindex < gcfg->ncallsites; ++cindex)
if (gcfg->callsites [cindex]->liveness [i / 8] & (1 << (i % 8)))
set_slot (gcfg, pos, cindex, SLOT_PIN);
}
if (cfg->verbose_level > 1)
printf ("\tbyref at %s0x%x(fp) (R%d, slot = %d): %s\n", ins->inst_offset < 0 ? "-" : "", (ins->inst_offset < 0) ? -(int)ins->inst_offset : (int)ins->inst_offset, vmv->vreg, pos, mono_type_full_name (ins->inst_vtype));
continue;
}
/*
* This is currently disabled, but could be enabled to debug crashes.
*/
#if 0
if (t->type == MONO_TYPE_I) {
/*
* Variables created in mono_handle_global_vregs have type I, but they
* could hold GC refs since the vregs they were created from might not have been
* marked as holding a GC ref. So be conservative.
*/
set_slot_everywhere (gcfg, pos, SLOT_PIN);
continue;
}
#endif
t = mini_get_underlying_type (t);
if (!mini_type_is_reference (t)) {
set_slot_everywhere (gcfg, pos, SLOT_NOREF);
if (cfg->verbose_level > 1)
printf ("\tnoref%s at %s0x%x(fp) (R%d, slot = %d): %s\n", (is_arg ? " arg" : ""), ins->inst_offset < 0 ? "-" : "", (ins->inst_offset < 0) ? -(int)ins->inst_offset : (int)ins->inst_offset, vmv->vreg, pos, mono_type_full_name (ins->inst_vtype));
if (!m_type_is_byref (t) && sizeof (host_mgreg_t) == 4 && (t->type == MONO_TYPE_I8 || t->type == MONO_TYPE_U8 || t->type == MONO_TYPE_R8)) {
set_slot_everywhere (gcfg, pos + 1, SLOT_NOREF);
if (cfg->verbose_level > 1)
printf ("\tnoref at %s0x%x(fp) (R%d, slot = %d): %s\n", ins->inst_offset < 0 ? "-" : "", (ins->inst_offset < 0) ? -(int)(ins->inst_offset + 4) : (int)ins->inst_offset + 4, vmv->vreg, pos + 1, mono_type_full_name (ins->inst_vtype));
}
continue;
}
/* 'this' is marked INDIRECT for gshared methods */
if (ins->flags & (MONO_INST_VOLATILE | MONO_INST_INDIRECT) && !is_this) {
/*
* For volatile variables, treat them alive from the point they are
* initialized in the first bblock until the end of the method.
*/
if (is_arg) {
set_slot_everywhere (gcfg, pos, SLOT_REF);
} else if (pc_offsets [vmv->vreg]) {
set_slot_in_range (gcfg, pos, 0, pc_offsets [vmv->vreg], SLOT_PIN);
set_slot_in_range (gcfg, pos, pc_offsets [vmv->vreg], cfg->code_size, SLOT_REF);
} else {
set_slot_everywhere (gcfg, pos, SLOT_PIN);
}
if (cfg->verbose_level > 1)
printf ("\tvolatile ref at %s0x%x(fp) (R%d, slot = %d): %s\n", ins->inst_offset < 0 ? "-" : "", (ins->inst_offset < 0) ? -(int)ins->inst_offset : (int)ins->inst_offset, vmv->vreg, pos, mono_type_full_name (ins->inst_vtype));
continue;
}
if (is_arg) {
/* Live for the whole method */
set_slot_everywhere (gcfg, pos, SLOT_REF);
} else {
for (cindex = 0; cindex < gcfg->ncallsites; ++cindex)
if (gcfg->callsites [cindex]->liveness [i / 8] & (1 << (i % 8)))
set_slot (gcfg, pos, cindex, SLOT_REF);
}
if (cfg->verbose_level > 1) {
printf ("\tref%s at %s0x%x(fp) (R%d, slot = %d): %s\n", (is_arg ? " arg" : ""), ins->inst_offset < 0 ? "-" : "", (ins->inst_offset < 0) ? -(int)ins->inst_offset : (int)ins->inst_offset, vmv->vreg, pos, mono_type_full_name (ins->inst_vtype));
}
}
g_free (pc_offsets);
}
static int
sp_offset_to_fp_offset (MonoCompile *cfg, int sp_offset)
{
/*
* Convert an sp-relative offset to an fp-relative offset. This is
* platform specific.
*/
#ifdef TARGET_AMD64
/* fp = sp + offset */
g_assert (cfg->frame_reg == AMD64_RBP);
return (- cfg->arch.sp_fp_offset + sp_offset);
#elif defined(TARGET_X86)
/* The offset is computed from the sp at the start of the call sequence */
g_assert (cfg->frame_reg == X86_EBP);
#ifdef MONO_X86_NO_PUSHES
return (- cfg->arch.sp_fp_offset + sp_offset);
#else
return (- cfg->arch.sp_fp_offset - sp_offset);
#endif
#else
NOT_IMPLEMENTED;
return -1;
#endif
}
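/*
* Worked example, with a made-up frame layout: on AMD64, fp == sp +
* cfg->arch.sp_fp_offset, so with sp_fp_offset == 0x20 the param area slot at
* sp+0x8 maps to fp-0x18, i.e. - cfg->arch.sp_fp_offset + sp_offset == -0x20 + 0x8
* == -0x18.
*/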
static void
process_param_area_slots (MonoCompile *cfg)
{
MonoCompileGC *gcfg = cfg->gc_info;
int cindex, i;
gboolean *is_param;
/*
* These slots are used for passing parameters during calls. They are sp relative, not
* fp relative, so they are harder to handle.
*/
if (cfg->flags & MONO_CFG_HAS_ALLOCA)
/* The distance between fp and sp is not constant */
return;
is_param = mono_mempool_alloc0 (cfg->mempool, gcfg->nslots * sizeof (gboolean));
for (cindex = 0; cindex < gcfg->ncallsites; ++cindex) {
GCCallSite *callsite = gcfg->callsites [cindex];
GSList *l;
for (l = callsite->param_slots; l; l = l->next) {
MonoInst *def = l->data;
MonoType *t = def->inst_vtype;
int sp_offset = def->inst_offset;
int fp_offset = sp_offset_to_fp_offset (cfg, sp_offset);
int slot = fp_offset_to_slot (cfg, fp_offset);
guint32 align;
guint32 size;
if (MONO_TYPE_ISSTRUCT (t)) {
size = mini_type_stack_size_full (t, &align, FALSE);
} else {
size = sizeof (target_mgreg_t);
}
for (i = 0; i < size / sizeof (target_mgreg_t); ++i) {
g_assert (slot + i >= 0 && slot + i < gcfg->nslots);
is_param [slot + i] = TRUE;
}
}
}
/* All param area slots are noref by default */
for (i = 0; i < gcfg->nslots; ++i) {
if (is_param [i])
set_slot_everywhere (gcfg, i, SLOT_NOREF);
}
/*
* We treat param area slots as being part of the callee's frame, to be able to handle tailcalls which overwrite
* the argument area of the caller.
*/
}
static void
process_finally_clauses (MonoCompile *cfg)
{
MonoCompileGC *gcfg = cfg->gc_info;
GCCallSite **callsites;
int ncallsites;
gboolean has_finally;
int i, j, nslots, nregs;
ncallsites = gcfg->ncallsites;
nslots = gcfg->nslots;
nregs = gcfg->nregs;
callsites = gcfg->callsites;
/*
* The calls to the finally clauses don't show up in the cfg. See
* test_0_liveness_8 ().
* Variables accessed inside the finally clause are already marked VOLATILE by
* mono_liveness_handle_exception_clauses (). Variables not accessed inside the finally clause have
* correct liveness outside the finally clause. So mark them PIN inside the finally clauses.
*/
has_finally = FALSE;
for (i = 0; i < cfg->header->num_clauses; ++i) {
MonoExceptionClause *clause = &cfg->header->clauses [i];
if (clause->flags == MONO_EXCEPTION_CLAUSE_FINALLY) {
has_finally = TRUE;
}
}
if (has_finally) {
if (cfg->verbose_level > 1)
printf ("\tMethod has finally clauses, pessimizing live ranges.\n");
for (j = 0; j < ncallsites; ++j) {
MonoBasicBlock *bb = callsites [j]->bb;
MonoExceptionClause *clause;
gboolean is_in_finally = FALSE;
for (i = 0; i < cfg->header->num_clauses; ++i) {
clause = &cfg->header->clauses [i];
if (MONO_OFFSET_IN_HANDLER (clause, bb->real_offset)) {
if (clause->flags == MONO_EXCEPTION_CLAUSE_FINALLY) {
is_in_finally = TRUE;
break;
}
}
}
if (is_in_finally) {
for (i = 0; i < nslots; ++i)
set_slot (gcfg, i, j, SLOT_PIN);
for (i = 0; i < nregs; ++i)
set_reg_slot (gcfg, i, j, SLOT_PIN);
}
}
}
}
static void
compute_frame_size (MonoCompile *cfg)
{
int i, locals_min_offset, locals_max_offset, cfa_min_offset, cfa_max_offset;
int min_offset, max_offset;
MonoCompileGC *gcfg = cfg->gc_info;
MonoMethodSignature *sig = mono_method_signature_internal (cfg->method);
GSList *l;
/* Compute min/max offsets from the fp */
/* Locals */
#if defined(TARGET_AMD64) || defined(TARGET_X86) || defined(TARGET_ARM) || defined(TARGET_S390X)
locals_min_offset = ALIGN_TO (cfg->locals_min_stack_offset, SIZEOF_SLOT);
locals_max_offset = cfg->locals_max_stack_offset;
#else
/* min/max stack offset needs to be computed in mono_arch_allocate_vars () */
NOT_IMPLEMENTED;
#endif
locals_min_offset = ALIGN_TO (locals_min_offset, SIZEOF_SLOT);
locals_max_offset = ALIGN_TO (locals_max_offset, SIZEOF_SLOT);
min_offset = locals_min_offset;
max_offset = locals_max_offset;
/* Arguments */
for (i = 0; i < sig->param_count + sig->hasthis; ++i) {
MonoInst *ins = cfg->args [i];
if (ins->opcode == OP_REGOFFSET) {
int size, size_in_slots;
size = mini_type_stack_size_full (ins->inst_vtype, NULL, ins->backend.is_pinvoke);
size_in_slots = ALIGN_TO (size, SIZEOF_SLOT) / SIZEOF_SLOT;
min_offset = MIN (min_offset, ins->inst_offset);
max_offset = MAX ((int)max_offset, (int)(ins->inst_offset + (size_in_slots * SIZEOF_SLOT)));
}
}
/* Cfa slots */
g_assert (cfg->frame_reg == cfg->cfa_reg);
g_assert (cfg->cfa_offset > 0);
cfa_min_offset = 0;
cfa_max_offset = cfg->cfa_offset;
min_offset = MIN (min_offset, cfa_min_offset);
max_offset = MAX (max_offset, cfa_max_offset);
/* Fp relative slots */
for (l = gcfg->stack_slots_from_fp; l; l = l->next) {
gint data = GPOINTER_TO_INT (l->data);
int offset = data >> 16;
min_offset = MIN (min_offset, offset);
}
/* Spill slots */
if (!(cfg->flags & MONO_CFG_HAS_SPILLUP)) {
int stack_offset = ALIGN_TO (cfg->stack_offset, SIZEOF_SLOT);
min_offset = MIN (min_offset, (-stack_offset));
}
/* Param area slots */
#ifdef TARGET_AMD64
min_offset = MIN (min_offset, -cfg->arch.sp_fp_offset);
#elif defined(TARGET_X86)
#ifdef MONO_X86_NO_PUSHES
min_offset = MIN (min_offset, -cfg->arch.sp_fp_offset);
#else
min_offset = MIN (min_offset, - (cfg->arch.sp_fp_offset + cfg->arch.param_area_size));
#endif
#elif defined(TARGET_ARM)
// FIXME:
#elif defined(TARGET_S390X)
// FIXME:
#else
NOT_IMPLEMENTED;
#endif
gcfg->min_offset = min_offset;
gcfg->max_offset = max_offset;
gcfg->locals_min_offset = locals_min_offset;
gcfg->locals_max_offset = locals_max_offset;
}
static void
init_gcfg (MonoCompile *cfg)
{
int i, nregs, nslots;
MonoCompileGC *gcfg = cfg->gc_info;
GCCallSite **callsites;
int ncallsites;
MonoBasicBlock *bb;
GSList *l;
/*
* Collect callsites
*/
ncallsites = 0;
for (bb = cfg->bb_entry; bb; bb = bb->next_bb) {
ncallsites += g_slist_length (bb->gc_callsites);
}
callsites = mono_mempool_alloc0 (cfg->mempool, ncallsites * sizeof (GCCallSite*));
i = 0;
for (bb = cfg->bb_entry; bb; bb = bb->next_bb) {
for (l = bb->gc_callsites; l; l = l->next)
callsites [i++] = l->data;
}
/* The callsites should already be ordered by pc offset */
for (i = 1; i < ncallsites; ++i)
g_assert (callsites [i - 1]->pc_offset < callsites [i]->pc_offset);
/*
* The stack frame looks like this:
*
* <fp + max_offset> == cfa -> <end of previous frame>
* <other stack slots>
* <locals>
* <other stack slots>
* fp + min_offset ->
* ...
* fp ->
*/
if (cfg->verbose_level > 1)
printf ("GC Map for %s: 0x%x-0x%x\n", mono_method_full_name (cfg->method, TRUE), gcfg->min_offset, gcfg->max_offset);
nslots = (gcfg->max_offset - gcfg->min_offset) / SIZEOF_SLOT;
nregs = NREGS;
gcfg->nslots = nslots;
gcfg->nregs = nregs;
gcfg->callsites = callsites;
gcfg->ncallsites = ncallsites;
gcfg->stack_bitmap_width = ALIGN_TO (ncallsites, 8) / 8;
gcfg->reg_bitmap_width = ALIGN_TO (ncallsites, 8) / 8;
gcfg->stack_ref_bitmap = mono_mempool_alloc0 (cfg->mempool, gcfg->stack_bitmap_width * nslots);
gcfg->stack_pin_bitmap = mono_mempool_alloc0 (cfg->mempool, gcfg->stack_bitmap_width * nslots);
gcfg->reg_ref_bitmap = mono_mempool_alloc0 (cfg->mempool, gcfg->reg_bitmap_width * nregs);
gcfg->reg_pin_bitmap = mono_mempool_alloc0 (cfg->mempool, gcfg->reg_bitmap_width * nregs);
/* All slots start out as PIN */
memset (gcfg->stack_pin_bitmap, 0xff, gcfg->stack_bitmap_width * nslots);
for (i = 0; i < nregs; ++i) {
/*
* By default, registers are NOREF.
* It is possible for a callee to save them before being defined in this method,
* but the saved value is dead too, so it doesn't need to be marked.
*/
if ((cfg->used_int_regs & (1 << i)))
set_reg_slot_everywhere (gcfg, i, SLOT_NOREF);
}
}
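/*
* Layout sketch for the work bitmaps above, assuming ncallsites == 10: each row is
* ALIGN_TO (10, 8) / 8 == 2 bytes wide, stack_ref_bitmap/stack_pin_bitmap have one
* such row per stack slot, and bit (slot, cindex) is accessed with set_bit ()/
* get_bit () using stack_bitmap_width as the row width. create_map () later
* transposes these into per-callsite rows before encoding.
*/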
static gboolean
has_bit_set (guint8 *bitmap, int width, int slot)
{
int i;
int pos = width * slot;
for (i = 0; i < width; ++i) {
if (bitmap [pos + i])
break;
}
return i < width;
}
static void
create_map (MonoCompile *cfg)
{
GCMap *map;
int i, j, nregs, nslots, nref_regs, npin_regs, alloc_size, bitmaps_size, bitmaps_offset;
int ntypes [16];
int stack_bitmap_width, stack_bitmap_size, reg_ref_bitmap_width, reg_ref_bitmap_size;
int reg_pin_bitmap_width, reg_pin_bitmap_size, bindex;
int start, end;
gboolean has_ref_slots, has_pin_slots, has_ref_regs, has_pin_regs;
MonoCompileGC *gcfg = cfg->gc_info;
GCCallSite **callsites;
int ncallsites;
guint8 *bitmap, *bitmaps;
guint32 reg_ref_mask, reg_pin_mask;
ncallsites = gcfg->ncallsites;
nslots = gcfg->nslots;
nregs = gcfg->nregs;
callsites = gcfg->callsites;
/*
* Compute the real size of the bitmap i.e. ignore NOREF columns at the beginning and at
* the end. Also, compute whether the map needs ref/pin bitmaps, and collect stats.
*/
has_ref_slots = FALSE;
has_pin_slots = FALSE;
start = -1;
end = -1;
memset (ntypes, 0, sizeof (ntypes));
for (i = 0; i < nslots; ++i) {
gboolean has_ref = FALSE;
gboolean has_pin = FALSE;
if (has_bit_set (gcfg->stack_pin_bitmap, gcfg->stack_bitmap_width, i))
has_pin = TRUE;
if (has_bit_set (gcfg->stack_ref_bitmap, gcfg->stack_bitmap_width, i))
has_ref = TRUE;
if (has_ref)
has_ref_slots = TRUE;
if (has_pin)
has_pin_slots = TRUE;
if (has_ref)
ntypes [SLOT_REF] ++;
else if (has_pin)
ntypes [SLOT_PIN] ++;
else
ntypes [SLOT_NOREF] ++;
if (has_ref || has_pin) {
if (start == -1)
start = i;
end = i + 1;
}
}
if (start == -1) {
start = end = nslots;
} else {
g_assert (start != -1);
g_assert (start < end);
}
has_ref_regs = FALSE;
has_pin_regs = FALSE;
reg_ref_mask = 0;
reg_pin_mask = 0;
nref_regs = 0;
npin_regs = 0;
for (i = 0; i < nregs; ++i) {
gboolean has_ref = FALSE;
gboolean has_pin = FALSE;
if (!(cfg->used_int_regs & (1 << i)))
continue;
if (has_bit_set (gcfg->reg_pin_bitmap, gcfg->reg_bitmap_width, i))
has_pin = TRUE;
if (has_bit_set (gcfg->reg_ref_bitmap, gcfg->reg_bitmap_width, i))
has_ref = TRUE;
if (has_ref) {
reg_ref_mask |= (1 << i);
has_ref_regs = TRUE;
nref_regs ++;
}
if (has_pin) {
reg_pin_mask |= (1 << i);
has_pin_regs = TRUE;
npin_regs ++;
}
}
if (cfg->verbose_level > 1)
printf ("Slots: %d Start: %d End: %d Refs: %d NoRefs: %d Pin: %d Callsites: %d\n", nslots, start, end, ntypes [SLOT_REF], ntypes [SLOT_NOREF], ntypes [SLOT_PIN], ncallsites);
/* Create the GC Map */
/* The work bitmaps have one row for each slot, since this is how we access them during construction */
stack_bitmap_width = ALIGN_TO (end - start, 8) / 8;
stack_bitmap_size = stack_bitmap_width * ncallsites;
reg_ref_bitmap_width = ALIGN_TO (nref_regs, 8) / 8;
reg_ref_bitmap_size = reg_ref_bitmap_width * ncallsites;
reg_pin_bitmap_width = ALIGN_TO (npin_regs, 8) / 8;
reg_pin_bitmap_size = reg_pin_bitmap_width * ncallsites;
bitmaps_size = (has_ref_slots ? stack_bitmap_size : 0) + (has_pin_slots ? stack_bitmap_size : 0) + (has_ref_regs ? reg_ref_bitmap_size : 0) + (has_pin_regs ? reg_pin_bitmap_size : 0);
map = mono_mempool_alloc0 (cfg->mempool, sizeof (GCMap));
map->frame_reg = cfg->frame_reg;
map->start_offset = gcfg->min_offset;
map->end_offset = gcfg->min_offset + (nslots * SIZEOF_SLOT);
map->map_offset = start * SIZEOF_SLOT;
map->nslots = end - start;
map->has_ref_slots = has_ref_slots;
map->has_pin_slots = has_pin_slots;
map->has_ref_regs = has_ref_regs;
map->has_pin_regs = has_pin_regs;
g_assert (nregs < 32);
map->used_int_regs = cfg->used_int_regs;
map->reg_ref_mask = reg_ref_mask;
map->reg_pin_mask = reg_pin_mask;
map->nref_regs = nref_regs;
map->npin_regs = npin_regs;
bitmaps = mono_mempool_alloc0 (cfg->mempool, bitmaps_size);
bitmaps_offset = 0;
if (has_ref_slots) {
map->stack_ref_bitmap_offset = bitmaps_offset;
bitmaps_offset += stack_bitmap_size;
bitmap = &bitmaps [map->stack_ref_bitmap_offset];
for (i = 0; i < nslots; ++i) {
for (j = 0; j < ncallsites; ++j) {
if (get_bit (gcfg->stack_ref_bitmap, gcfg->stack_bitmap_width, i, j))
set_bit (bitmap, stack_bitmap_width, j, i - start);
}
}
}
if (has_pin_slots) {
map->stack_pin_bitmap_offset = bitmaps_offset;
bitmaps_offset += stack_bitmap_size;
bitmap = &bitmaps [map->stack_pin_bitmap_offset];
for (i = 0; i < nslots; ++i) {
for (j = 0; j < ncallsites; ++j) {
if (get_bit (gcfg->stack_pin_bitmap, gcfg->stack_bitmap_width, i, j))
set_bit (bitmap, stack_bitmap_width, j, i - start);
}
}
}
if (has_ref_regs) {
map->reg_ref_bitmap_offset = bitmaps_offset;
bitmaps_offset += reg_ref_bitmap_size;
bitmap = &bitmaps [map->reg_ref_bitmap_offset];
bindex = 0;
for (i = 0; i < nregs; ++i) {
if (reg_ref_mask & (1 << i)) {
for (j = 0; j < ncallsites; ++j) {
if (get_bit (gcfg->reg_ref_bitmap, gcfg->reg_bitmap_width, i, j))
set_bit (bitmap, reg_ref_bitmap_width, j, bindex);
}
bindex ++;
}
}
}
if (has_pin_regs) {
map->reg_pin_bitmap_offset = bitmaps_offset;
bitmaps_offset += reg_pin_bitmap_size;
bitmap = &bitmaps [map->reg_pin_bitmap_offset];
bindex = 0;
for (i = 0; i < nregs; ++i) {
if (reg_pin_mask & (1 << i)) {
for (j = 0; j < ncallsites; ++j) {
if (get_bit (gcfg->reg_pin_bitmap, gcfg->reg_bitmap_width, i, j))
set_bit (bitmap, reg_pin_bitmap_width, j, bindex);
}
bindex ++;
}
}
}
/* Call sites */
map->ncallsites = ncallsites;
if (cfg->code_len < 256)
map->callsite_entry_size = 1;
else if (cfg->code_len < 65536)
map->callsite_entry_size = 2;
else
map->callsite_entry_size = 4;
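/*
* E.g. a method with code_len == 0x1234 falls into the [256, 65536) range, so each
* callsite pc offset below is stored as a guint16.
*/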
/* Encode the GC Map */
{
guint8 buf [256];
guint8 *endbuf;
GCEncodedMap *emap;
int encoded_size;
guint8 *p;
encode_gc_map (map, buf, &endbuf);
g_assert (endbuf - buf < 256);
encoded_size = endbuf - buf;
alloc_size = sizeof (GCEncodedMap) + ALIGN_TO (encoded_size, map->callsite_entry_size) + (map->callsite_entry_size * map->ncallsites) + bitmaps_size;
emap = mono_mem_manager_alloc0 (cfg->mem_manager, alloc_size);
//emap->ref_slots = map->ref_slots;
/* Encoded fixed fields */
p = &emap->encoded [0];
//emap->encoded_size = encoded_size;
memcpy (p, buf, encoded_size);
p += encoded_size;
/* Callsite table */
p = (guint8*)ALIGN_TO ((gsize)p, map->callsite_entry_size);
if (map->callsite_entry_size == 1) {
guint8 *offsets = p;
for (i = 0; i < ncallsites; ++i)
offsets [i] = callsites [i]->pc_offset;
UnlockedAdd (&stats.gc_callsites8_size, ncallsites * sizeof (guint8));
} else if (map->callsite_entry_size == 2) {
guint16 *offsets = (guint16*)p;
for (i = 0; i < ncallsites; ++i)
offsets [i] = callsites [i]->pc_offset;
UnlockedAdd (&stats.gc_callsites16_size, ncallsites * sizeof (guint16));
} else {
guint32 *offsets = (guint32*)p;
for (i = 0; i < ncallsites; ++i)
offsets [i] = callsites [i]->pc_offset;
UnlockedAdd (&stats.gc_callsites32_size, ncallsites * sizeof (guint32));
}
p += ncallsites * map->callsite_entry_size;
/* Bitmaps */
memcpy (p, bitmaps, bitmaps_size);
p += bitmaps_size;
g_assert ((guint8*)p - (guint8*)emap <= alloc_size);
UnlockedAdd (&stats.gc_maps_size, alloc_size);
UnlockedAdd (&stats.gc_callsites_size, ncallsites * map->callsite_entry_size);
UnlockedAdd (&stats.gc_bitmaps_size, bitmaps_size);
UnlockedAdd (&stats.gc_map_struct_size, sizeof (GCEncodedMap) + encoded_size);
cfg->jit_info->gc_info = emap;
cfg->gc_map = (guint8*)emap;
cfg->gc_map_size = alloc_size;
}
UnlockedAdd (&stats.all_slots, nslots);
UnlockedAdd (&stats.ref_slots, ntypes [SLOT_REF]);
UnlockedAdd (&stats.noref_slots, ntypes [SLOT_NOREF]);
UnlockedAdd (&stats.pin_slots, ntypes [SLOT_PIN]);
}
void
mini_gc_create_gc_map (MonoCompile *cfg)
{
if (!cfg->compute_gc_maps)
return;
/*
* During marking, all frames except the top frame are at a call site, and we mark the
* top frame conservatively. This means that we only need to compute and record
* GC maps for call sites.
*/
if (!(cfg->comp_done & MONO_COMP_LIVENESS))
/* Without liveness info, the live ranges are not precise enough */
return;
mono_analyze_liveness_gc (cfg);
compute_frame_size (cfg);
init_gcfg (cfg);
process_spill_slots (cfg);
process_other_slots (cfg);
process_param_area_slots (cfg);
process_variables (cfg);
process_finally_clauses (cfg);
create_map (cfg);
}
#endif /* DISABLE_JIT */
static void
parse_debug_options (void)
{
char **opts, **ptr;
const char *env;
env = g_getenv ("MONO_GCMAP_DEBUG");
if (!env)
return;
opts = g_strsplit (env, ",", -1);
for (ptr = opts; ptr && *ptr; ptr ++) {
/* No options yet */
fprintf (stderr, "Invalid format for the MONO_GCMAP_DEBUG env variable: '%s'\n", env);
exit (1);
}
g_strfreev (opts);
g_free (env);
}
void
mini_gc_init (void)
{
MonoGCCallbacks cb;
memset (&cb, 0, sizeof (cb));
cb.thread_attach_func = thread_attach_func;
cb.thread_detach_func = thread_detach_func;
cb.thread_suspend_func = thread_suspend_func;
/* Comment this out to disable precise stack marking */
cb.thread_mark_func = thread_mark_func;
cb.get_provenance_func = get_provenance_func;
if (mono_use_interpreter)
cb.interp_mark_func = mini_get_interp_callbacks ()->mark_stack;
mono_gc_set_gc_callbacks (&cb);
logfile = mono_gc_get_logfile ();
parse_debug_options ();
mono_counters_register ("GC Maps size",
MONO_COUNTER_GC | MONO_COUNTER_INT, &stats.gc_maps_size);
mono_counters_register ("GC Call Sites size",
MONO_COUNTER_GC | MONO_COUNTER_INT, &stats.gc_callsites_size);
mono_counters_register ("GC Bitmaps size",
MONO_COUNTER_GC | MONO_COUNTER_INT, &stats.gc_bitmaps_size);
mono_counters_register ("GC Map struct size",
MONO_COUNTER_GC | MONO_COUNTER_INT, &stats.gc_map_struct_size);
mono_counters_register ("GC Call Sites encoded using 8 bits",
MONO_COUNTER_GC | MONO_COUNTER_INT, &stats.gc_callsites8_size);
mono_counters_register ("GC Call Sites encoded using 16 bits",
MONO_COUNTER_GC | MONO_COUNTER_INT, &stats.gc_callsites16_size);
mono_counters_register ("GC Call Sites encoded using 32 bits",
MONO_COUNTER_GC | MONO_COUNTER_INT, &stats.gc_callsites32_size);
mono_counters_register ("GC Map slots (all)",
MONO_COUNTER_GC | MONO_COUNTER_INT, &stats.all_slots);
mono_counters_register ("GC Map slots (ref)",
MONO_COUNTER_GC | MONO_COUNTER_INT, &stats.ref_slots);
mono_counters_register ("GC Map slots (noref)",
MONO_COUNTER_GC | MONO_COUNTER_INT, &stats.noref_slots);
mono_counters_register ("GC Map slots (pin)",
MONO_COUNTER_GC | MONO_COUNTER_INT, &stats.pin_slots);
mono_counters_register ("GC TLS Data size",
MONO_COUNTER_GC | MONO_COUNTER_INT, &stats.tlsdata_size);
mono_counters_register ("Stack space scanned (all)",
MONO_COUNTER_GC | MONO_COUNTER_INT, &stats.scanned_stacks);
mono_counters_register ("Stack space scanned (native)",
MONO_COUNTER_GC | MONO_COUNTER_INT, &stats.scanned_native);
mono_counters_register ("Stack space scanned (other)",
MONO_COUNTER_GC | MONO_COUNTER_INT, &stats.scanned_other);
mono_counters_register ("Stack space scanned (using GC Maps)",
MONO_COUNTER_GC | MONO_COUNTER_INT, &stats.scanned);
mono_counters_register ("Stack space scanned (precise)",
MONO_COUNTER_GC | MONO_COUNTER_INT, &stats.scanned_precisely);
mono_counters_register ("Stack space scanned (pin)",
MONO_COUNTER_GC | MONO_COUNTER_INT, &stats.scanned_conservatively);
mono_counters_register ("Stack space scanned (pin registers)",
MONO_COUNTER_GC | MONO_COUNTER_INT, &stats.scanned_registers);
}
#else
void
mini_gc_enable_gc_maps_for_aot (void)
{
}
void
mini_gc_init (void)
{
MonoGCCallbacks cb;
memset (&cb, 0, sizeof (cb));
cb.get_provenance_func = get_provenance_func;
if (mono_use_interpreter)
cb.interp_mark_func = mini_get_interp_callbacks ()->mark_stack;
mono_gc_set_gc_callbacks (&cb);
}
#ifndef DISABLE_JIT
static void
mini_gc_init_gc_map (MonoCompile *cfg)
{
}
void
mini_gc_create_gc_map (MonoCompile *cfg)
{
}
void
mini_gc_set_slot_type_from_fp (MonoCompile *cfg, int slot_offset, GCSlotType type)
{
}
void
mini_gc_set_slot_type_from_cfa (MonoCompile *cfg, int slot_offset, GCSlotType type)
{
}
#endif /* DISABLE_JIT */
#endif
#ifndef DISABLE_JIT
/*
* mini_gc_init_cfg:
*
* Set GC specific options in CFG.
*/
void
mini_gc_init_cfg (MonoCompile *cfg)
{
if (mono_gc_is_moving ()) {
cfg->disable_ref_noref_stack_slot_share = TRUE;
cfg->gen_write_barriers = TRUE;
}
mini_gc_init_gc_map (cfg);
}
#endif /* DISABLE_JIT */
/*
* Problems with the current code:
* - the stack walk is slow
* - vtypes/refs used in EH regions are treated conservatively
* - if the code is finished, less pinning will be done, causing problems because
* we promote all surviving objects to old-gen.
* - the unwind code can't handle a method stopped inside a finally region, it thinks the caller is
* another method, but in reality it is either the exception handling code or the CALL_HANDLER opcode.
* This manifests in "Unable to find ip offset x in callsite list" assertions.
* - the unwind code also can't handle frames which are in the epilog, since the unwind info is not
* precise there.
*/
/*
* Ideas for creating smaller GC maps:
* - remove empty columns from the bitmaps. This requires adding a mask bit array for
* each bitmap.
* - merge reg and stack slot bitmaps, so the unused bits at the end of the reg bitmap are
* not wasted.
* - if the bitmap width is not a multiple of 8, the remaining bits are wasted.
* - group ref and non-ref stack slots together in mono_allocate_stack_slots ().
* - add an index for the callsite table so that each entry can be encoded as a 1 byte difference
* from an index entry.
*/
| /**
* \file
* GC interface for the mono JIT
*
* Author:
* Zoltan Varga ([email protected])
*
* Copyright 2009 Novell, Inc (http://www.novell.com)
* Copyright 2011 Xamarin, Inc (http://www.xamarin.com)
* Licensed under the MIT license. See LICENSE file in the project root for full license information.
*/
#include "config.h"
#include "mini-gc.h"
#include "mini-runtime.h"
#include <mono/metadata/gc-internals.h>
static gboolean
get_provenance (StackFrameInfo *frame, MonoContext *ctx, gpointer data)
{
MonoJitInfo *ji = frame->ji;
MonoMethod *method;
if (!ji)
return FALSE;
method = jinfo_get_method (ji);
if (method->wrapper_type != MONO_WRAPPER_NONE)
return FALSE;
*(gpointer *)data = method;
return TRUE;
}
static gpointer
get_provenance_func (void)
{
gpointer provenance = NULL;
mono_walk_stack (get_provenance, MONO_UNWIND_DEFAULT, (gpointer)&provenance);
return provenance;
}
#if 0
//#if defined(MONO_ARCH_GC_MAPS_SUPPORTED)
#include <mono/metadata/sgen-conf.h>
#include <mono/metadata/gc-internals.h>
#include <mono/utils/mono-counters.h>
#include <mono/utils/unlocked.h>
//#define SIZEOF_SLOT ((int)sizeof (host_mgreg_t))
//#define SIZEOF_SLOT ((int)sizeof (target_mgreg_t))
#define GC_BITS_PER_WORD (sizeof (mword) * 8)
/* Contains state needed by the GC Map construction code */
typedef struct {
/*
* This contains information about stack slots initialized in the prolog, encoded using
* (slot_index << 16) | slot_type. The slot_index is relative to the CFA, i.e. 0
* means cfa+0, 1 means cfa-4/8, etc.
*/
GSList *stack_slots_from_cfa;
/* Same for stack slots relative to the frame pointer */
GSList *stack_slots_from_fp;
/* Number of slots in the map */
int nslots;
/* The number of registers in the map */
int nregs;
/* Min and Max offsets of the stack frame relative to fp */
int min_offset, max_offset;
/* Same for the locals area */
int locals_min_offset, locals_max_offset;
/* The call sites where this frame can be stopped during GC */
GCCallSite **callsites;
/* The number of call sites */
int ncallsites;
/*
* The width of the stack bitmaps in bytes. This is not equal to the bitmap width at
* runtime, since it includes columns which are 0.
*/
int stack_bitmap_width;
/*
* A bitmap whose width equals nslots, and whose height equals ncallsites.
* The bitmap contains a 1 if the corresponding stack slot has type SLOT_REF at the
* given callsite.
*/
guint8 *stack_ref_bitmap;
/* Same for SLOT_PIN */
guint8 *stack_pin_bitmap;
/*
* Similar bitmaps for registers. These have width MONO_MAX_IREGS in bits.
*/
int reg_bitmap_width;
guint8 *reg_ref_bitmap;
guint8 *reg_pin_bitmap;
} MonoCompileGC;
#undef DEBUG
#if 0
/* We don't support debug levels, it's all-or-nothing */
#define DEBUG(s) do { s; fflush (logfile); } while (0)
#define DEBUG_ENABLED 1
#else
#define DEBUG(s)
#endif
#ifdef DEBUG_ENABLED
//#if 1
#define DEBUG_PRECISE(s) do { s; } while (0)
#define DEBUG_PRECISE_ENABLED
#else
#define DEBUG_PRECISE(s)
#endif
/*
* Contains information collected during the conservative stack marking pass,
* used during the precise pass. This helps to avoid doing a stack walk twice, which
* is expensive.
*/
typedef struct {
guint8 *bitmap;
int nslots;
int frame_start_offset;
int nreg_locations;
/* Relative to stack_start */
int reg_locations [MONO_MAX_IREGS];
#ifdef DEBUG_PRECISE_ENABLED
MonoJitInfo *ji;
gpointer fp;
int regs [MONO_MAX_IREGS];
#endif
} FrameInfo;
/* Max number of frames stored in the TLS data */
#define MAX_FRAMES 50
/*
* Per-thread data kept by this module. This is stored in the GC and passed to us as
* parameters, instead of being stored in a TLS variable, since during a collection,
* only the collection thread is active.
*/
typedef struct {
MonoThreadUnwindState unwind_state;
MonoThreadInfo *info;
/* For debugging */
host_mgreg_t tid;
gpointer ref_to_track;
/* Number of frames collected during the !precise pass */
int nframes;
FrameInfo frames [MAX_FRAMES];
} TlsData;
/* These are constant so don't store them in the GC Maps */
/* Number of registers stored in gc maps */
#define NREGS MONO_MAX_IREGS
/*
* The GC Map itself.
* Contains information needed to mark a stack frame.
* This is a transient structure, created from a compressed representation on-demand.
*/
typedef struct {
/*
* The offsets of the GC tracked area inside the stack frame relative to the frame pointer.
* This includes memory which is NOREF thus doesn't need GC maps.
*/
int start_offset;
int end_offset;
/*
* The offset relative to start_offset where the memory described by the GC maps
* begins.
*/
int map_offset;
/* The number of stack slots in the map */
int nslots;
/* The frame pointer register */
guint8 frame_reg;
/* The size of each callsite table entry */
guint8 callsite_entry_size;
guint has_pin_slots : 1;
guint has_ref_slots : 1;
guint has_ref_regs : 1;
guint has_pin_regs : 1;
/* The offsets below are into an external bitmaps array */
/*
* A bitmap whose width is equal to bitmap_width, and whose height is equal to ncallsites.
* The bitmap contains a 1 if the corresponding stack slot has type SLOT_REF at the
* given callsite.
*/
guint32 stack_ref_bitmap_offset;
/*
* Same for SLOT_PIN. It is possible that the same bit is set in both bitmaps at
* different callsites, if the slot starts out as PIN, and later changes to REF.
*/
guint32 stack_pin_bitmap_offset;
/*
* Corresponding bitmaps for registers
* These have width equal to the number of bits set in reg_ref_mask/reg_pin_mask.
* FIXME: Merge these with the normal bitmaps, i.e. reserve the first x slots for them ?
*/
guint32 reg_pin_bitmap_offset;
guint32 reg_ref_bitmap_offset;
guint32 used_int_regs, reg_ref_mask, reg_pin_mask;
/* The number of bits set in the two masks above */
guint8 nref_regs, npin_regs;
/*
* A bit array marking slots which contain refs.
* This is used only for debugging.
*/
//guint8 *ref_slots;
/* Callsite offsets */
/* These can take up a lot of space, so encode them compactly */
union {
guint8 *offsets8;
guint16 *offsets16;
guint32 *offsets32;
} callsites;
int ncallsites;
} GCMap;
/*
* A compressed version of GCMap. This is what gets stored in MonoJitInfo.
*/
typedef struct {
//guint8 *ref_slots;
//guint8 encoded_size;
/*
* The arrays below are embedded after the struct.
* Their address needs to be computed.
*/
/* The fixed fields of the GCMap encoded using LEB128 */
guint8 encoded [MONO_ZERO_LEN_ARRAY];
/* An array of ncallsites entries, each entry is callsite_entry_size bytes long */
guint8 callsites [MONO_ZERO_LEN_ARRAY];
/* The GC bitmaps */
guint8 bitmaps [MONO_ZERO_LEN_ARRAY];
} GCEncodedMap;
static int precise_frame_count [2], precise_frame_limit = -1;
static gboolean precise_frame_limit_inited;
/* Stats */
typedef struct {
gint32 scanned_stacks;
gint32 scanned;
gint32 scanned_precisely;
gint32 scanned_conservatively;
gint32 scanned_registers;
gint32 scanned_native;
gint32 scanned_other;
gint32 all_slots;
gint32 noref_slots;
gint32 ref_slots;
gint32 pin_slots;
gint32 gc_maps_size;
gint32 gc_callsites_size;
gint32 gc_callsites8_size;
gint32 gc_callsites16_size;
gint32 gc_callsites32_size;
gint32 gc_bitmaps_size;
gint32 gc_map_struct_size;
gint32 tlsdata_size;
} JITGCStats;
static JITGCStats stats;
static FILE *logfile;
static gboolean enable_gc_maps_for_aot;
void
mini_gc_enable_gc_maps_for_aot (void)
{
enable_gc_maps_for_aot = TRUE;
}
// FIXME: Move these to a shared place
static void
encode_uleb128 (guint32 value, guint8 *buf, guint8 **endbuf)
{
guint8 *p = buf;
do {
guint8 b = value & 0x7f;
value >>= 7;
if (value != 0) /* more bytes to come */
b |= 0x80;
*p ++ = b;
} while (value);
*endbuf = p;
}
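/*
* ULEB128 example: 300 == 0b100101100 is emitted as 0xac 0x02
* (low 7 bits 0x2c | 0x80 continuation, then 300 >> 7 == 2).
*/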
static G_GNUC_UNUSED void
encode_sleb128 (gint32 value, guint8 *buf, guint8 **endbuf)
{
gboolean more = 1;
gboolean negative = (value < 0);
guint32 size = 32;
guint8 byte;
guint8 *p = buf;
while (more) {
byte = value & 0x7f;
value >>= 7;
/* the following is unnecessary if the
* implementation of >>= uses an arithmetic rather
* than logical shift for a signed left operand
*/
if (negative)
/* sign extend */
value |= - (1 << (size - 7));
/* sign bit of byte is second high order bit (0x40) */
if ((value == 0 && !(byte & 0x40)) ||
(value == -1 && (byte & 0x40)))
more = 0;
else
byte |= 0x80;
*p ++= byte;
}
*endbuf = p;
}
static guint32
decode_uleb128 (guint8 *buf, guint8 **endbuf)
{
guint8 *p = buf;
guint32 res = 0;
int shift = 0;
while (TRUE) {
guint8 b = *p;
p ++;
res = res | (((int)(b & 0x7f)) << shift);
if (!(b & 0x80))
break;
shift += 7;
}
*endbuf = p;
return res;
}
static gint32
decode_sleb128 (guint8 *buf, guint8 **endbuf)
{
guint8 *p = buf;
gint32 res = 0;
int shift = 0;
while (TRUE) {
guint8 b = *p;
p ++;
res = res | (((int)(b & 0x7f)) << shift);
shift += 7;
if (!(b & 0x80)) {
if (shift < 32 && (b & 0x40))
res |= - (1 << shift);
break;
}
}
*endbuf = p;
return res;
}
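/*
* SLEB128 examples: 2 encodes as the single byte 0x02, while -2 encodes as 0x7e:
* the sign bit (0x40) of the last byte is set, so decode_sleb128 () sign extends
* and recovers -2.
*/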
static int
encode_frame_reg (int frame_reg)
{
#ifdef TARGET_AMD64
if (frame_reg == AMD64_RSP)
return 0;
else if (frame_reg == AMD64_RBP)
return 1;
#elif defined(TARGET_X86)
if (frame_reg == X86_EBP)
return 0;
else if (frame_reg == X86_ESP)
return 1;
#elif defined(TARGET_ARM)
if (frame_reg == ARMREG_SP)
return 0;
else if (frame_reg == ARMREG_FP)
return 1;
#elif defined(TARGET_S390X)
if (frame_reg == S390_SP)
return 0;
else if (frame_reg == S390_FP)
return 1;
#elif defined (TARGET_RISCV)
if (frame_reg == RISCV_SP)
return 0;
else if (frame_reg == RISCV_FP)
return 1;
#else
NOT_IMPLEMENTED;
#endif
g_assert_not_reached ();
return -1;
}
static int
decode_frame_reg (int encoded)
{
#ifdef TARGET_AMD64
if (encoded == 0)
return AMD64_RSP;
else if (encoded == 1)
return AMD64_RBP;
#elif defined(TARGET_X86)
if (encoded == 0)
return X86_EBP;
else if (encoded == 1)
return X86_ESP;
#elif defined(TARGET_ARM)
if (encoded == 0)
return ARMREG_SP;
else if (encoded == 1)
return ARMREG_FP;
#elif defined(TARGET_S390X)
if (encoded == 0)
return S390_SP;
else if (encoded == 1)
return S390_FP;
#elif defined (TARGET_RISCV)
if (encoded == 0)
return RISCV_SP;
else if (encoded == 1)
return RISCV_FP;
#else
NOT_IMPLEMENTED;
#endif
g_assert_not_reached ();
return -1;
}
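/*
* encode_frame_reg () and decode_frame_reg () must stay in sync. Note that the
* per-target numbering is not uniform: on AMD64, 0 encodes RSP and 1 RBP, while
* on X86, 0 encodes EBP and 1 ESP.
*/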
#ifdef TARGET_AMD64
#ifdef HOST_WIN32
static int callee_saved_regs [] = { AMD64_RBP, AMD64_RBX, AMD64_R12, AMD64_R13, AMD64_R14, AMD64_R15, AMD64_RDI, AMD64_RSI };
#else
static int callee_saved_regs [] = { AMD64_RBP, AMD64_RBX, AMD64_R12, AMD64_R13, AMD64_R14, AMD64_R15 };
#endif
#elif defined(TARGET_X86)
static int callee_saved_regs [] = { X86_EBX, X86_ESI, X86_EDI };
#elif defined(TARGET_ARM)
static int callee_saved_regs [] = { ARMREG_V1, ARMREG_V2, ARMREG_V3, ARMREG_V4, ARMREG_V5, ARMREG_V7, ARMREG_FP };
#elif defined(TARGET_ARM64)
// FIXME:
static int callee_saved_regs [] = { };
#elif defined(TARGET_S390X)
static int callee_saved_regs [] = { s390_r6, s390_r7, s390_r8, s390_r9, s390_r10, s390_r11, s390_r12, s390_r13, s390_r14 };
#elif defined(TARGET_POWERPC64) && _CALL_ELF == 2
static int callee_saved_regs [] = {
ppc_r13, ppc_r14, ppc_r15, ppc_r16,
ppc_r17, ppc_r18, ppc_r19, ppc_r20,
ppc_r21, ppc_r22, ppc_r23, ppc_r24,
ppc_r25, ppc_r26, ppc_r27, ppc_r28,
ppc_r29, ppc_r30, ppc_r31 };
#elif defined(TARGET_POWERPC)
static int callee_saved_regs [] = { ppc_r6, ppc_r7, ppc_r8, ppc_r9, ppc_r10, ppc_r11, ppc_r12, ppc_r13, ppc_r14 };
#elif defined (TARGET_RISCV)
static int callee_saved_regs [] = {
RISCV_S0, RISCV_S1, RISCV_S2, RISCV_S3, RISCV_S4, RISCV_S5,
RISCV_S6, RISCV_S7, RISCV_S8, RISCV_S9, RISCV_S10, RISCV_S11,
};
#endif
static guint32
encode_regmask (guint32 regmask)
{
int i;
guint32 res;
res = 0;
for (i = 0; i < sizeof (callee_saved_regs) / sizeof (int); ++i) {
if (regmask & (1 << callee_saved_regs [i])) {
res |= (1 << i);
regmask -= (1 << callee_saved_regs [i]);
}
}
g_assert (regmask == 0);
return res;
}
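/*
* Example on AMD64 (non-Windows numbering, AMD64_RBP == 5, AMD64_R12 == 12): a
* regmask of (1 << AMD64_RBP) | (1 << AMD64_R12) == 0x1020 compresses to 0x5,
* since RBP and R12 sit at indexes 0 and 2 in callee_saved_regs [].
* decode_regmask (0x5) restores 0x1020.
*/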
static guint32
decode_regmask (guint32 regmask)
{
int i;
guint32 res;
res = 0;
for (i = 0; i < sizeof (callee_saved_regs) / sizeof (int); ++i)
if (regmask & (1 << i))
res |= (1 << callee_saved_regs [i]);
return res;
}
/*
* encode_gc_map:
*
* Encode the fixed fields of MAP into a buffer pointed to by BUF.
*/
static void
encode_gc_map (GCMap *map, guint8 *buf, guint8 **endbuf)
{
guint32 flags, freg;
encode_sleb128 (map->start_offset / SIZEOF_SLOT, buf, &buf);
encode_sleb128 (map->end_offset / SIZEOF_SLOT, buf, &buf);
encode_sleb128 (map->map_offset / SIZEOF_SLOT, buf, &buf);
encode_uleb128 (map->nslots, buf, &buf);
g_assert (map->callsite_entry_size <= 4);
freg = encode_frame_reg (map->frame_reg);
g_assert (freg < 2);
flags = (map->has_ref_slots ? 1 : 0) | (map->has_pin_slots ? 2 : 0) | (map->has_ref_regs ? 4 : 0) | (map->has_pin_regs ? 8 : 0) | ((map->callsite_entry_size - 1) << 4) | (freg << 6);
encode_uleb128 (flags, buf, &buf);
encode_uleb128 (encode_regmask (map->used_int_regs), buf, &buf);
if (map->has_ref_regs)
encode_uleb128 (encode_regmask (map->reg_ref_mask), buf, &buf);
if (map->has_pin_regs)
encode_uleb128 (encode_regmask (map->reg_pin_mask), buf, &buf);
encode_uleb128 (map->ncallsites, buf, &buf);
*endbuf = buf;
}
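/*
* The flags byte encoded above packs, from bit 0 up: has_ref_slots, has_pin_slots,
* has_ref_regs, has_pin_regs, then callsite_entry_size - 1 in bits 4-5 and the
* encoded frame register in bit 6.
*/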
/*
* decode_gc_map:
*
* Decode the encoded GC map representation in BUF and store the result into MAP.
*/
static void
decode_gc_map (guint8 *buf, GCMap *map, guint8 **endbuf)
{
guint32 flags;
int stack_bitmap_size, reg_ref_bitmap_size, reg_pin_bitmap_size, offset, freg;
int i, n;
map->start_offset = decode_sleb128 (buf, &buf) * SIZEOF_SLOT;
map->end_offset = decode_sleb128 (buf, &buf) * SIZEOF_SLOT;
map->map_offset = decode_sleb128 (buf, &buf) * SIZEOF_SLOT;
map->nslots = decode_uleb128 (buf, &buf);
flags = decode_uleb128 (buf, &buf);
map->has_ref_slots = (flags & 1) ? 1 : 0;
map->has_pin_slots = (flags & 2) ? 1 : 0;
map->has_ref_regs = (flags & 4) ? 1 : 0;
map->has_pin_regs = (flags & 8) ? 1 : 0;
map->callsite_entry_size = ((flags >> 4) & 0x3) + 1;
freg = flags >> 6;
map->frame_reg = decode_frame_reg (freg);
map->used_int_regs = decode_regmask (decode_uleb128 (buf, &buf));
if (map->has_ref_regs) {
map->reg_ref_mask = decode_regmask (decode_uleb128 (buf, &buf));
n = 0;
for (i = 0; i < NREGS; ++i)
if (map->reg_ref_mask & (1 << i))
n ++;
map->nref_regs = n;
}
if (map->has_pin_regs) {
map->reg_pin_mask = decode_regmask (decode_uleb128 (buf, &buf));
n = 0;
for (i = 0; i < NREGS; ++i)
if (map->reg_pin_mask & (1 << i))
n ++;
map->npin_regs = n;
}
map->ncallsites = decode_uleb128 (buf, &buf);
stack_bitmap_size = (ALIGN_TO (map->nslots, 8) / 8) * map->ncallsites;
reg_ref_bitmap_size = (ALIGN_TO (map->nref_regs, 8) / 8) * map->ncallsites;
reg_pin_bitmap_size = (ALIGN_TO (map->npin_regs, 8) / 8) * map->ncallsites;
offset = 0;
map->stack_ref_bitmap_offset = offset;
if (map->has_ref_slots)
offset += stack_bitmap_size;
map->stack_pin_bitmap_offset = offset;
if (map->has_pin_slots)
offset += stack_bitmap_size;
map->reg_ref_bitmap_offset = offset;
if (map->has_ref_regs)
offset += reg_ref_bitmap_size;
map->reg_pin_bitmap_offset = offset;
if (map->has_pin_regs)
offset += reg_pin_bitmap_size;
*endbuf = buf;
}
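/*
* After decoding, the external bitmaps blob is laid out as consecutive sections in
* this order, each present only if the corresponding has_* flag is set:
* stack ref bitmap, stack pin bitmap, reg ref bitmap, reg pin bitmap.
*/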
static gpointer
thread_attach_func (void)
{
TlsData *tls;
tls = g_new0 (TlsData, 1);
tls->tid = mono_native_thread_id_get ();
tls->info = mono_thread_info_current ();
UnlockedAdd (&stats.tlsdata_size, sizeof (TlsData));
return tls;
}
static void
thread_detach_func (gpointer user_data)
{
TlsData *tls = user_data;
g_free (tls);
}
static void
thread_suspend_func (gpointer user_data, void *sigctx, MonoContext *ctx)
{
TlsData *tls = user_data;
if (!tls) {
/* Happens during startup */
return;
}
if (tls->tid != mono_native_thread_id_get ()) {
/* Happens on osx because threads are not suspended using signals */
#ifndef TARGET_WIN32
gboolean res;
#endif
g_assert (tls->info);
#ifdef TARGET_WIN32
return;
#else
res = mono_thread_state_init_from_handle (&tls->unwind_state, tls->info, NULL);
#endif
} else {
tls->unwind_state.unwind_data [MONO_UNWIND_DATA_LMF] = mono_get_lmf ();
if (sigctx) {
mono_sigctx_to_monoctx (sigctx, &tls->unwind_state.ctx);
tls->unwind_state.valid = TRUE;
} else if (ctx) {
memcpy (&tls->unwind_state.ctx, ctx, sizeof (MonoContext));
tls->unwind_state.valid = TRUE;
} else {
tls->unwind_state.valid = FALSE;
}
tls->unwind_state.unwind_data [MONO_UNWIND_DATA_JIT_TLS] = mono_tls_get_jit_tls ();
tls->unwind_state.unwind_data [MONO_UNWIND_DATA_DOMAIN] = mono_domain_get ();
}
if (!tls->unwind_state.unwind_data [MONO_UNWIND_DATA_DOMAIN]) {
/* Happens during startup */
tls->unwind_state.valid = FALSE;
return;
}
}
#define DEAD_REF ((gpointer)(gssize)0x2a2a2a2a2a2a2a2aULL)
static void
set_bit (guint8 *bitmap, int width, int y, int x)
{
bitmap [(width * y) + (x / 8)] |= (1 << (x % 8));
}
static void
clear_bit (guint8 *bitmap, int width, int y, int x)
{
bitmap [(width * y) + (x / 8)] &= ~(1 << (x % 8));
}
static int
get_bit (guint8 *bitmap, int width, int y, int x)
{
return bitmap [(width * y) + (x / 8)] & (1 << (x % 8));
}
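/*
* The bitmaps are stored row-major: bit (y, x) lives in byte (width * y) + (x / 8),
* at bit position x % 8. E.g. with width == 2, (y == 3, x == 11) maps to byte 7,
* bit 3.
*/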
static const char*
slot_type_to_string (GCSlotType type)
{
switch (type) {
case SLOT_REF:
return "ref";
case SLOT_NOREF:
return "noref";
case SLOT_PIN:
return "pin";
default:
g_assert_not_reached ();
return NULL;
}
}
static host_mgreg_t
get_frame_pointer (MonoContext *ctx, int frame_reg)
{
#if defined(TARGET_AMD64)
if (frame_reg == AMD64_RSP)
return ctx->rsp;
else if (frame_reg == AMD64_RBP)
return ctx->rbp;
#elif defined(TARGET_X86)
if (frame_reg == X86_ESP)
return ctx->esp;
else if (frame_reg == X86_EBP)
return ctx->ebp;
#elif defined(TARGET_ARM)
if (frame_reg == ARMREG_SP)
return (host_mgreg_t)MONO_CONTEXT_GET_SP (ctx);
else if (frame_reg == ARMREG_FP)
return (host_mgreg_t)MONO_CONTEXT_GET_BP (ctx);
#elif defined(TARGET_S390X)
if (frame_reg == S390_SP)
return (host_mgreg_t)MONO_CONTEXT_GET_SP (ctx);
else if (frame_reg == S390_FP)
return (host_mgreg_t)MONO_CONTEXT_GET_BP (ctx);
#elif defined (TARGET_RISCV)
if (frame_reg == RISCV_SP)
return MONO_CONTEXT_GET_SP (ctx);
else if (frame_reg == RISCV_FP)
return MONO_CONTEXT_GET_BP (ctx);
#endif
g_assert_not_reached ();
return 0;
}
/*
* conservative_pass:
*
* Mark a thread stack conservatively and collect information needed by the precise pass.
*/
static void
conservative_pass (TlsData *tls, guint8 *stack_start, guint8 *stack_end)
{
MonoJitInfo *ji;
MonoMethod *method;
MonoContext ctx, new_ctx;
MonoLMF *lmf;
guint8 *stack_limit;
gboolean last = TRUE;
GCMap *map;
GCMap map_tmp;
GCEncodedMap *emap;
guint8* fp, *p, *real_frame_start, *frame_start, *frame_end;
int i, pc_offset, cindex, bitmap_width;
int scanned = 0, scanned_precisely, scanned_conservatively, scanned_registers;
gboolean res;
StackFrameInfo frame;
host_mgreg_t *reg_locations [MONO_MAX_IREGS];
host_mgreg_t *new_reg_locations [MONO_MAX_IREGS];
guint8 *bitmaps;
FrameInfo *fi;
guint32 precise_regmask;
if (tls) {
tls->nframes = 0;
tls->ref_to_track = NULL;
}
/* tls == NULL can happen during startup */
if (mono_thread_internal_current () == NULL || !tls) {
mono_gc_conservatively_scan_area (stack_start, stack_end);
UnlockedAdd (&stats.scanned_stacks, stack_end - stack_start);
return;
}
lmf = tls->unwind_state.unwind_data [MONO_UNWIND_DATA_LMF];
frame.domain = NULL;
/* Number of bytes scanned based on GC map data */
scanned = 0;
/* Number of bytes scanned precisely based on GC map data */
scanned_precisely = 0;
/* Number of bytes scanned conservatively based on GC map data */
scanned_conservatively = 0;
/* Number of bytes scanned conservatively in register save areas */
scanned_registers = 0;
/* This is one past the last address which we have scanned */
stack_limit = stack_start;
if (!tls->unwind_state.valid)
memset (&new_ctx, 0, sizeof (ctx));
else
memcpy (&new_ctx, &tls->unwind_state.ctx, sizeof (MonoContext));
memset (reg_locations, 0, sizeof (reg_locations));
memset (new_reg_locations, 0, sizeof (new_reg_locations));
while (TRUE) {
if (!tls->unwind_state.valid)
break;
memcpy (&ctx, &new_ctx, sizeof (ctx));
for (i = 0; i < MONO_MAX_IREGS; ++i) {
if (new_reg_locations [i]) {
/*
* If the current frame saves the register, it means it might modify its
* value, thus the old location might not contain the same value, so
* we have to mark it conservatively.
*/
if (reg_locations [i]) {
DEBUG (fprintf (logfile, "\tscan saved reg %s location %p.\n", mono_arch_regname (i), reg_locations [i]));
mono_gc_conservatively_scan_area (reg_locations [i], (char*)reg_locations [i] + SIZEOF_SLOT);
scanned_registers += SIZEOF_SLOT;
}
reg_locations [i] = new_reg_locations [i];
DEBUG (fprintf (logfile, "\treg %s is now at location %p.\n", mono_arch_regname (i), reg_locations [i]));
}
}
g_assert ((gsize)stack_limit % SIZEOF_SLOT == 0);
res = mono_find_jit_info_ext (tls->unwind_state.unwind_data [MONO_UNWIND_DATA_JIT_TLS], NULL, &ctx, &new_ctx, NULL, &lmf, new_reg_locations, &frame);
if (!res)
break;
ji = frame.ji;
// FIXME: For skipped frames, scan the param area of the parent frame conservatively ?
// FIXME: trampolines
if (frame.type == FRAME_TYPE_MANAGED_TO_NATIVE) {
/*
* These frames are problematic for several reasons:
* - they are unwound through an LMF, and we have no precise register tracking for those.
* - the LMF might not contain a precise ip, so we can't compute the call site.
* - the LMF only unwinds to the wrapper frame, so we get these methods twice.
*/
DEBUG (fprintf (logfile, "Mark(0): <Managed-to-native transition>\n"));
for (i = 0; i < MONO_MAX_IREGS; ++i) {
if (reg_locations [i]) {
DEBUG (fprintf (logfile, "\tscan saved reg %s location %p.\n", mono_arch_regname (i), reg_locations [i]));
mono_gc_conservatively_scan_area (reg_locations [i], (char*)reg_locations [i] + SIZEOF_SLOT);
scanned_registers += SIZEOF_SLOT;
}
reg_locations [i] = NULL;
new_reg_locations [i] = NULL;
}
ctx = new_ctx;
continue;
}
if (ji)
method = jinfo_get_method (ji);
else
method = NULL;
/* The last frame can be in any state so mark conservatively */
if (last) {
if (ji) {
DEBUG (char *fname = mono_method_full_name (method, TRUE); fprintf (logfile, "Mark(0): %s+0x%x (%p)\n", fname, pc_offset, (gpointer)MONO_CONTEXT_GET_IP (&ctx)); g_free (fname));
}
DEBUG (fprintf (logfile, "\t <Last frame>\n"));
last = FALSE;
/*
* new_reg_locations is not precise when a method is interrupted during its epilog, so clear it.
*/
for (i = 0; i < MONO_MAX_IREGS; ++i) {
if (reg_locations [i]) {
DEBUG (fprintf (logfile, "\tscan saved reg %s location %p.\n", mono_arch_regname (i), reg_locations [i]));
mono_gc_conservatively_scan_area (reg_locations [i], (char*)reg_locations [i] + SIZEOF_SLOT);
scanned_registers += SIZEOF_SLOT;
}
if (new_reg_locations [i]) {
DEBUG (fprintf (logfile, "\tscan saved reg %s location %p.\n", mono_arch_regname (i), new_reg_locations [i]));
mono_gc_conservatively_scan_area (new_reg_locations [i], (char*)new_reg_locations [i] + SIZEOF_SLOT);
scanned_registers += SIZEOF_SLOT;
}
reg_locations [i] = NULL;
new_reg_locations [i] = NULL;
}
continue;
}
pc_offset = (guint8*)MONO_CONTEXT_GET_IP (&ctx) - (guint8*)ji->code_start;
/* These frames are very problematic */
if (method->wrapper_type == MONO_WRAPPER_MANAGED_TO_NATIVE) {
DEBUG (char *fname = mono_method_full_name (method, TRUE); fprintf (logfile, "Mark(0): %s+0x%x (%p)\n", fname, pc_offset, (gpointer)MONO_CONTEXT_GET_IP (&ctx)); g_free (fname));
DEBUG (fprintf (logfile, "\tSkip.\n"));
continue;
}
/* All the other frames are at a call site */
if (tls->nframes == MAX_FRAMES) {
/*
* Can't save information since the array is full. So scan the rest of the
* stack conservatively.
*/
DEBUG (fprintf (logfile, "Mark (0): Frame stack full.\n"));
break;
}
/* Scan the frame of this method */
/*
* A frame contains the following:
* - saved registers
* - saved args
* - locals
* - spill area
* - localloc-ed memory
*/
g_assert (pc_offset >= 0);
emap = ji->gc_info;
if (!emap) {
DEBUG (char *fname = mono_method_full_name (jinfo_get_method (ji), TRUE); fprintf (logfile, "Mark(0): %s+0x%x (%p)\n", fname, pc_offset, (gpointer)MONO_CONTEXT_GET_IP (&ctx)); g_free (fname));
DEBUG (fprintf (logfile, "\tNo GC Map.\n"));
continue;
}
/* The embedded callsite table requires this */
g_assert (((gsize)emap % 4) == 0);
/*
* Debugging aid to control the number of frames scanned precisely
*/
if (!precise_frame_limit_inited) {
char *mono_precise_count = g_getenv ("MONO_PRECISE_COUNT");
if (mono_precise_count) {
precise_frame_limit = atoi (mono_precise_count);
g_free (mono_precise_count);
}
precise_frame_limit_inited = TRUE;
}
if (precise_frame_limit != -1) {
if (precise_frame_count [FALSE] == precise_frame_limit)
printf ("LAST PRECISE FRAME: %s\n", mono_method_full_name (method, TRUE));
if (precise_frame_count [FALSE] > precise_frame_limit)
continue;
}
precise_frame_count [FALSE] ++;
/* Decode the encoded GC map */
map = &map_tmp;
memset (map, 0, sizeof (GCMap));
decode_gc_map (&emap->encoded [0], map, &p);
p = (guint8*)ALIGN_TO (p, map->callsite_entry_size);
map->callsites.offsets8 = p;
p += map->callsite_entry_size * map->ncallsites;
bitmaps = p;
fp = (guint8*)get_frame_pointer (&ctx, map->frame_reg);
real_frame_start = fp + map->start_offset;
frame_start = fp + map->start_offset + map->map_offset;
frame_end = fp + map->end_offset;
DEBUG (char *fname = mono_method_full_name (jinfo_get_method (ji), TRUE); fprintf (logfile, "Mark(0): %s+0x%x (%p) limit=%p fp=%p frame=%p-%p (%d)\n", fname, pc_offset, (gpointer)MONO_CONTEXT_GET_IP (&ctx), stack_limit, fp, frame_start, frame_end, (int)(frame_end - frame_start)); g_free (fname));
/* Find the callsite index */
if (map->callsite_entry_size == 1) {
for (i = 0; i < map->ncallsites; ++i)
/* ip points inside the call instruction */
if (map->callsites.offsets8 [i] == pc_offset + 1)
break;
} else if (map->callsite_entry_size == 2) {
// FIXME: Use a binary search
for (i = 0; i < map->ncallsites; ++i)
/* ip points inside the call instruction */
if (map->callsites.offsets16 [i] == pc_offset + 1)
break;
} else {
// FIXME: Use a binary search
for (i = 0; i < map->ncallsites; ++i)
/* ip points inside the call instruction */
if (map->callsites.offsets32 [i] == pc_offset + 1)
break;
}
if (i == map->ncallsites) {
printf ("Unable to find ip offset 0x%x in callsite list of %s.\n", pc_offset + 1, mono_method_full_name (method, TRUE));
g_assert_not_reached ();
}
cindex = i;
/*
* This is not necessarily true on x86 because frames have a different size at each
* call site.
*/
//g_assert (real_frame_start >= stack_limit);
if (real_frame_start > stack_limit) {
/* This scans the previously skipped frames as well */
DEBUG (fprintf (logfile, "\tscan area %p-%p (%d).\n", stack_limit, real_frame_start, (int)(real_frame_start - stack_limit)));
mono_gc_conservatively_scan_area (stack_limit, real_frame_start);
UnlockedAdd (&stats.scanned_other, real_frame_start - stack_limit);
}
/* Mark stack slots */
if (map->has_pin_slots) {
int bitmap_width = ALIGN_TO (map->nslots, 8) / 8;
guint8 *pin_bitmap = &bitmaps [map->stack_pin_bitmap_offset + (bitmap_width * cindex)];
guint8 *p;
gboolean pinned;
p = frame_start;
for (i = 0; i < map->nslots; ++i) {
pinned = pin_bitmap [i / 8] & (1 << (i % 8));
if (pinned) {
DEBUG (fprintf (logfile, "\tscan slot %s0x%x(fp)=%p.\n", (guint8*)p > (guint8*)fp ? "" : "-", ABS ((int)((gssize)p - (gssize)fp)), p));
mono_gc_conservatively_scan_area (p, p + SIZEOF_SLOT);
scanned_conservatively += SIZEOF_SLOT;
} else {
scanned_precisely += SIZEOF_SLOT;
}
p += SIZEOF_SLOT;
}
} else {
scanned_precisely += (map->nslots * SIZEOF_SLOT);
}
/* The area outside of start-end is NOREF */
scanned_precisely += (map->end_offset - map->start_offset) - (map->nslots * SIZEOF_SLOT);
/* Mark registers */
precise_regmask = map->used_int_regs | (1 << map->frame_reg);
if (map->has_pin_regs) {
int bitmap_width = ALIGN_TO (map->npin_regs, 8) / 8;
guint8 *pin_bitmap = &bitmaps [map->reg_pin_bitmap_offset + (bitmap_width * cindex)];
int bindex = 0;
for (i = 0; i < NREGS; ++i) {
if (!(map->used_int_regs & (1 << i)))
continue;
if (!(map->reg_pin_mask & (1 << i)))
continue;
if (pin_bitmap [bindex / 8] & (1 << (bindex % 8))) {
DEBUG (fprintf (logfile, "\treg %s saved at 0x%p is pinning.\n", mono_arch_regname (i), reg_locations [i]));
precise_regmask &= ~(1 << i);
}
bindex ++;
}
}
scanned += map->end_offset - map->start_offset;
g_assert (scanned == scanned_precisely + scanned_conservatively);
stack_limit = frame_end;
/* Save information for the precise pass */
fi = &tls->frames [tls->nframes];
fi->nslots = map->nslots;
bitmap_width = ALIGN_TO (map->nslots, 8) / 8;
if (map->has_ref_slots)
fi->bitmap = &bitmaps [map->stack_ref_bitmap_offset + (bitmap_width * cindex)];
else
fi->bitmap = NULL;
fi->frame_start_offset = frame_start - stack_start;
fi->nreg_locations = 0;
DEBUG_PRECISE (fi->ji = ji);
DEBUG_PRECISE (fi->fp = fp);
if (map->has_ref_regs) {
int bitmap_width = ALIGN_TO (map->nref_regs, 8) / 8;
guint8 *ref_bitmap = &bitmaps [map->reg_ref_bitmap_offset + (bitmap_width * cindex)];
int bindex = 0;
for (i = 0; i < NREGS; ++i) {
if (!(map->reg_ref_mask & (1 << i)))
continue;
if (reg_locations [i] && (ref_bitmap [bindex / 8] & (1 << (bindex % 8)))) {
DEBUG_PRECISE (fi->regs [fi->nreg_locations] = i);
DEBUG (fprintf (logfile, "\treg %s saved at 0x%p is ref.\n", mono_arch_regname (i), reg_locations [i]));
fi->reg_locations [fi->nreg_locations] = (guint8*)reg_locations [i] - stack_start;
fi->nreg_locations ++;
}
bindex ++;
}
}
/*
* Clear locations of precisely tracked registers.
*/
if (precise_regmask) {
for (i = 0; i < NREGS; ++i) {
if (precise_regmask & (1 << i)) {
/*
* The method uses this register, and we have precise info for it.
* This means the location will be scanned precisely.
* Tell the code at the beginning of the loop that this location is
* processed.
*/
if (reg_locations [i])
DEBUG (fprintf (logfile, "\treg %s at location %p (==%p) is precise.\n", mono_arch_regname (i), reg_locations [i], (gpointer)*reg_locations [i]));
reg_locations [i] = NULL;
}
}
}
tls->nframes ++;
}
/* Scan the remaining register save locations */
for (i = 0; i < MONO_MAX_IREGS; ++i) {
if (reg_locations [i]) {
DEBUG (fprintf (logfile, "\tscan saved reg location %p.\n", reg_locations [i]));
mono_gc_conservatively_scan_area (reg_locations [i], (char*)reg_locations [i] + SIZEOF_SLOT);
scanned_registers += SIZEOF_SLOT;
}
if (new_reg_locations [i]) {
DEBUG (fprintf (logfile, "\tscan saved reg location %p.\n", new_reg_locations [i]));
mono_gc_conservatively_scan_area (new_reg_locations [i], (char*)new_reg_locations [i] + SIZEOF_SLOT);
scanned_registers += SIZEOF_SLOT;
}
}
if (stack_limit < stack_end) {
DEBUG (fprintf (logfile, "\tscan remaining stack %p-%p (%d).\n", stack_limit, stack_end, (int)(stack_end - stack_limit)));
mono_gc_conservatively_scan_area (stack_limit, stack_end);
UnlockedAdd (&stats.scanned_native, stack_end - stack_limit);
}
DEBUG (fprintf (logfile, "Marked %d bytes, p=%d,c=%d out of %d.\n", scanned, scanned_precisely, scanned_conservatively, (int)(stack_end - stack_start)));
UnlockedAdd (&stats.scanned_stacks, stack_end - stack_start);
UnlockedAdd (&stats.scanned, scanned);
UnlockedAdd (&stats.scanned_precisely, scanned_precisely);
UnlockedAdd (&stats.scanned_conservatively, scanned_conservatively);
UnlockedAdd (&stats.scanned_registers, scanned_registers);
//mono_gc_conservatively_scan_area (stack_start, stack_end);
}
/*
* precise_pass:
*
* Mark a thread stack precisely based on information saved during the conservative
* pass.
*/
static void
precise_pass (TlsData *tls, guint8 *stack_start, guint8 *stack_end, void *gc_data)
{
int findex, i;
FrameInfo *fi;
guint8 *frame_start;
if (!tls)
return;
if (!tls->unwind_state.valid)
return;
for (findex = 0; findex < tls->nframes; findex ++) {
/* Load information saved by the !precise pass */
fi = &tls->frames [findex];
frame_start = stack_start + fi->frame_start_offset;
DEBUG (char *fname = mono_method_full_name (jinfo_get_method (fi->ji), TRUE); fprintf (logfile, "Mark(1): %s\n", fname); g_free (fname));
/*
* FIXME: Add a function to mark using a bitmap, to avoid doing a
* call for each object.
*/
/* Mark stack slots */
if (fi->bitmap) {
guint8 *ref_bitmap = fi->bitmap;
gboolean live;
for (i = 0; i < fi->nslots; ++i) {
MonoObject **ptr = (MonoObject**)(frame_start + (i * SIZEOF_SLOT));
live = ref_bitmap [i / 8] & (1 << (i % 8));
if (live) {
MonoObject *obj = *ptr;
if (obj) {
DEBUG (fprintf (logfile, "\tref %s0x%x(fp)=%p: %p ->", (guint8*)ptr >= (guint8*)fi->fp ? "" : "-", ABS ((int)((gssize)ptr - (gssize)fi->fp)), ptr, obj));
*ptr = mono_gc_scan_object (obj, gc_data);
DEBUG (fprintf (logfile, " %p.\n", *ptr));
} else {
DEBUG (fprintf (logfile, "\tref %s0x%x(fp)=%p: %p.\n", (guint8*)ptr >= (guint8*)fi->fp ? "" : "-", ABS ((int)((gssize)ptr - (gssize)fi->fp)), ptr, obj));
}
} else {
#if 0
/*
* This is disabled because the pointer takes up a lot of space.
* Stack slots might be shared between ref and non-ref variables?
*/
if (map->ref_slots [i / 8] & (1 << (i % 8))) {
DEBUG (fprintf (logfile, "\tref %s0x%x(fp)=%p: dead (%p)\n", (guint8*)ptr >= (guint8*)fi->fp ? "" : "-", ABS ((int)((gssize)ptr - (gssize)fi->fp)), ptr, *ptr));
/*
* Fail fast if the live range is incorrect, and
* the JITted code tries to access this object
*/
*ptr = DEAD_REF;
}
#endif
}
}
}
/* Mark registers */
/*
* Registers are different from stack slots: they have no address of their
* own. Instead, some frame below this one saves them to the stack in its
* prolog, and we can mark that save location precisely.
*/
for (i = 0; i < fi->nreg_locations; ++i) {
/*
* reg_locations [i] contains the address of the stack slot where
* a reg was last saved, so mark that slot.
*/
MonoObject **ptr = (MonoObject**)((guint8*)stack_start + fi->reg_locations [i]);
MonoObject *obj = *ptr;
if (obj) {
DEBUG (fprintf (logfile, "\treg %s saved at %p: %p ->", mono_arch_regname (fi->regs [i]), ptr, obj));
*ptr = mono_gc_scan_object (obj, gc_data);
DEBUG (fprintf (logfile, " %p.\n", *ptr));
} else {
DEBUG (fprintf (logfile, "\treg %s saved at %p: %p\n", mono_arch_regname (fi->regs [i]), ptr, obj));
}
}
}
/*
* Debugging aid to check for missed refs.
*/
if (tls->ref_to_track) {
gpointer *p;
for (p = (gpointer*)stack_start; p < (gpointer*)stack_end; ++p)
if (*p == tls->ref_to_track)
printf ("REF AT %p.\n", p);
}
}
/*
* thread_mark_func:
*
* This is called by the GC twice to mark a thread stack. PRECISE is FALSE at the first
* call, and TRUE at the second. USER_DATA points to a TlsData
* structure filled up by thread_suspend_func.
*/
static void
thread_mark_func (gpointer user_data, guint8 *stack_start, guint8 *stack_end, gboolean precise, void *gc_data)
{
TlsData *tls = user_data;
DEBUG (fprintf (logfile, "****************************************\n"));
DEBUG (fprintf (logfile, "*** %s stack marking for thread %p (%p-%p) ***\n", precise ? "Precise" : "Conservative", tls ? GUINT_TO_POINTER (tls->tid) : NULL, stack_start, stack_end));
DEBUG (fprintf (logfile, "****************************************\n"));
if (!precise)
conservative_pass (tls, stack_start, stack_end);
else
precise_pass (tls, stack_start, stack_end, gc_data);
}
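/*
 * Illustrative sketch (not part of the runtime): roughly how a moving collector
 * would drive the two passes above. Only thread_mark_func () is real; the
 * surrounding driver and its relocation step are assumptions for illustration.
 */
#if 0
static void
example_mark_thread_stack (TlsData *tls, guint8 *stack_start, guint8 *stack_end, void *gc_data)
{
	/* Pass 1: pin conservatively, record precise slots in tls->frames */
	thread_mark_func (tls, stack_start, stack_end, FALSE, gc_data);
	/* ... the collector moves unpinned objects between the passes ... */
	/* Pass 2: update the recorded references to the new object addresses */
	thread_mark_func (tls, stack_start, stack_end, TRUE, gc_data);
}
#endif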
#ifndef DISABLE_JIT
static void
mini_gc_init_gc_map (MonoCompile *cfg)
{
if (COMPILE_LLVM (cfg))
return;
if (!mono_gc_is_moving ())
return;
if (cfg->compile_aot) {
if (!enable_gc_maps_for_aot)
return;
} else if (!mono_gc_precise_stack_mark_enabled ())
return;
#if 1
/* Debugging support */
{
static int precise_count;
precise_count ++;
char *mono_gcmap_count = g_getenv ("MONO_GCMAP_COUNT");
if (mono_gcmap_count) {
int count = atoi (mono_gcmap_count);
g_free (mono_gcmap_count);
if (precise_count == count)
printf ("LAST: %s\n", mono_method_full_name (cfg->method, TRUE));
if (precise_count > count)
return;
}
}
#endif
cfg->compute_gc_maps = TRUE;
cfg->gc_info = mono_mempool_alloc0 (cfg->mempool, sizeof (MonoCompileGC));
}
/*
* mini_gc_set_slot_type_from_fp:
*
* Set the GC slot type of the stack slot identified by SLOT_OFFSET, which should be
* relative to the frame pointer. By default, all stack slots are type PIN, so there is no
* need to call this function for those slots.
*/
void
mini_gc_set_slot_type_from_fp (MonoCompile *cfg, int slot_offset, GCSlotType type)
{
MonoCompileGC *gcfg = (MonoCompileGC*)cfg->gc_info;
if (!cfg->compute_gc_maps)
return;
g_assert (slot_offset % SIZEOF_SLOT == 0);
gcfg->stack_slots_from_fp = g_slist_prepend_mempool (cfg->mempool, gcfg->stack_slots_from_fp, GINT_TO_POINTER (((slot_offset) << 16) | type));
}
/*
* mini_gc_set_slot_type_from_cfa:
*
* Set the GC slot type of the stack slot identified by SLOT_OFFSET, which should be
* relative to the DWARF CFA value. This should be called from mono_arch_emit_prolog ().
* If type is STACK_REF, the slot is assumed to be live from the end of the prolog until
* the end of the method. By default, all stack slots are type PIN, so there is no need to
* call this function for those slots.
*/
void
mini_gc_set_slot_type_from_cfa (MonoCompile *cfg, int slot_offset, GCSlotType type)
{
MonoCompileGC *gcfg = (MonoCompileGC*)cfg->gc_info;
int slot = - (slot_offset / SIZEOF_SLOT);
if (!cfg->compute_gc_maps)
return;
g_assert (slot_offset <= 0);
g_assert (slot_offset % SIZEOF_SLOT == 0);
gcfg->stack_slots_from_cfa = g_slist_prepend_mempool (cfg->mempool, gcfg->stack_slots_from_cfa, GUINT_TO_POINTER (((slot) << 16) | type));
}
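/*
 * Illustrative use (not from the tree): a backend prolog that spills a
 * callee-saved register holding a managed reference to [cfa - 2 * SIZEOF_SLOT]
 * could register that slot as below. The concrete offset is a made-up example.
 */
#if 0
/* in mono_arch_emit_prolog () */
mini_gc_set_slot_type_from_cfa (cfg, - 2 * SIZEOF_SLOT, SLOT_REF);
#endif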
static int
fp_offset_to_slot (MonoCompile *cfg, int offset)
{
MonoCompileGC *gcfg = cfg->gc_info;
return (offset - gcfg->min_offset) / SIZEOF_SLOT;
}
static int
slot_to_fp_offset (MonoCompile *cfg, int slot)
{
MonoCompileGC *gcfg = cfg->gc_info;
return (slot * SIZEOF_SLOT) + gcfg->min_offset;
}
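/*
 * Worked example with assumed numbers: if SIZEOF_SLOT == 8 and
 * gcfg->min_offset == -0x80, then fp_offset_to_slot () maps the fp-relative
 * offset -0x40 to slot (-0x40 - (-0x80)) / 8 == 8, and slot_to_fp_offset ()
 * maps slot 8 back to (8 * 8) + (-0x80) == -0x40.
 */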
static MONO_ALWAYS_INLINE void
set_slot (MonoCompileGC *gcfg, int slot, int callsite_index, GCSlotType type)
{
g_assert (slot >= 0 && slot < gcfg->nslots);
if (type == SLOT_PIN) {
clear_bit (gcfg->stack_ref_bitmap, gcfg->stack_bitmap_width, slot, callsite_index);
set_bit (gcfg->stack_pin_bitmap, gcfg->stack_bitmap_width, slot, callsite_index);
} else if (type == SLOT_REF) {
set_bit (gcfg->stack_ref_bitmap, gcfg->stack_bitmap_width, slot, callsite_index);
clear_bit (gcfg->stack_pin_bitmap, gcfg->stack_bitmap_width, slot, callsite_index);
} else if (type == SLOT_NOREF) {
clear_bit (gcfg->stack_ref_bitmap, gcfg->stack_bitmap_width, slot, callsite_index);
clear_bit (gcfg->stack_pin_bitmap, gcfg->stack_bitmap_width, slot, callsite_index);
}
}
static void
set_slot_everywhere (MonoCompileGC *gcfg, int slot, GCSlotType type)
{
int width, pos;
guint8 *ref_bitmap, *pin_bitmap;
/*
int cindex;
for (cindex = 0; cindex < gcfg->ncallsites; ++cindex)
set_slot (gcfg, slot, cindex, type);
*/
ref_bitmap = gcfg->stack_ref_bitmap;
pin_bitmap = gcfg->stack_pin_bitmap;
width = gcfg->stack_bitmap_width;
pos = width * slot;
if (type == SLOT_PIN) {
memset (ref_bitmap + pos, 0, width);
memset (pin_bitmap + pos, 0xff, width);
} else if (type == SLOT_REF) {
memset (ref_bitmap + pos, 0xff, width);
memset (pin_bitmap + pos, 0, width);
} else if (type == SLOT_NOREF) {
memset (ref_bitmap + pos, 0, width);
memset (pin_bitmap + pos, 0, width);
}
}
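/*
 * The work bitmaps are laid out row-major with one row per slot and one bit
 * per callsite (row width == stack_bitmap_width bytes), which is why the
 * function above can use a single memset per bitmap instead of looping over
 * callsites:
 *
 *           callsites 0-7  callsites 8-15 ...
 *   slot 0  [ byte 0 ]     [ byte 1 ]
 *   slot 1  [ byte w ]     [ byte w+1 ]      (w == stack_bitmap_width)
 *
 * Note that create_map () later transposes this into one row per callsite.
 */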
static void
set_slot_in_range (MonoCompileGC *gcfg, int slot, int from, int to, GCSlotType type)
{
int cindex;
for (cindex = 0; cindex < gcfg->ncallsites; ++cindex) {
int callsite_offset = gcfg->callsites [cindex]->pc_offset;
if (callsite_offset >= from && callsite_offset < to)
set_slot (gcfg, slot, cindex, type);
}
}
static void
set_reg_slot (MonoCompileGC *gcfg, int slot, int callsite_index, GCSlotType type)
{
g_assert (slot >= 0 && slot < gcfg->nregs);
if (type == SLOT_PIN) {
clear_bit (gcfg->reg_ref_bitmap, gcfg->reg_bitmap_width, slot, callsite_index);
set_bit (gcfg->reg_pin_bitmap, gcfg->reg_bitmap_width, slot, callsite_index);
} else if (type == SLOT_REF) {
set_bit (gcfg->reg_ref_bitmap, gcfg->reg_bitmap_width, slot, callsite_index);
clear_bit (gcfg->reg_pin_bitmap, gcfg->reg_bitmap_width, slot, callsite_index);
} else if (type == SLOT_NOREF) {
clear_bit (gcfg->reg_ref_bitmap, gcfg->reg_bitmap_width, slot, callsite_index);
clear_bit (gcfg->reg_pin_bitmap, gcfg->reg_bitmap_width, slot, callsite_index);
}
}
static void
set_reg_slot_everywhere (MonoCompileGC *gcfg, int slot, GCSlotType type)
{
int cindex;
for (cindex = 0; cindex < gcfg->ncallsites; ++cindex)
set_reg_slot (gcfg, slot, cindex, type);
}
static void
set_reg_slot_in_range (MonoCompileGC *gcfg, int slot, int from, int to, GCSlotType type)
{
int cindex;
for (cindex = 0; cindex < gcfg->ncallsites; ++cindex) {
int callsite_offset = gcfg->callsites [cindex]->pc_offset;
if (callsite_offset >= from && callsite_offset < to)
set_reg_slot (gcfg, slot, cindex, type);
}
}
static void
process_spill_slots (MonoCompile *cfg)
{
MonoCompileGC *gcfg = cfg->gc_info;
MonoBasicBlock *bb;
GSList *l;
int i;
/* Mark all ref/pin spill slots as NOREF by default outside of their live range */
for (bb = cfg->bb_entry; bb; bb = bb->next_bb) {
for (l = bb->spill_slot_defs; l; l = l->next) {
MonoInst *def = l->data;
int spill_slot = def->inst_c0;
int bank = def->inst_c1;
int offset = cfg->spill_info [bank][spill_slot].offset;
int slot = fp_offset_to_slot (cfg, offset);
if (bank == MONO_REG_INT_MP || bank == MONO_REG_INT_REF)
set_slot_everywhere (gcfg, slot, SLOT_NOREF);
}
}
for (bb = cfg->bb_entry; bb; bb = bb->next_bb) {
for (l = bb->spill_slot_defs; l; l = l->next) {
MonoInst *def = l->data;
int spill_slot = def->inst_c0;
int bank = def->inst_c1;
int offset = cfg->spill_info [bank][spill_slot].offset;
int slot = fp_offset_to_slot (cfg, offset);
GCSlotType type;
if (bank == MONO_REG_INT_MP)
type = SLOT_PIN;
else
type = SLOT_REF;
/*
* Extend the live interval for the GC tracked spill slots
* defined in this bblock.
* FIXME: This is not needed.
*/
set_slot_in_range (gcfg, slot, def->backend.pc_offset, bb->native_offset + bb->native_length, type);
if (cfg->verbose_level > 1)
printf ("\t%s spill slot at %s0x%x(fp) (slot = %d)\n", slot_type_to_string (type), offset >= 0 ? "" : "-", ABS (offset), slot);
}
}
/* Set fp spill slots to NOREF */
for (i = 0; i < cfg->spill_info_len [MONO_REG_DOUBLE]; ++i) {
int offset = cfg->spill_info [MONO_REG_DOUBLE][i].offset;
int slot;
if (offset == -1)
continue;
slot = fp_offset_to_slot (cfg, offset);
set_slot_everywhere (gcfg, slot, SLOT_NOREF);
/* FIXME: 32 bit */
if (cfg->verbose_level > 1)
printf ("\tfp spill slot at %s0x%x(fp) (slot = %d)\n", offset >= 0 ? "" : "-", ABS (offset), slot);
}
/* Set int spill slots to NOREF */
for (i = 0; i < cfg->spill_info_len [MONO_REG_INT]; ++i) {
int offset = cfg->spill_info [MONO_REG_INT][i].offset;
int slot;
if (offset == -1)
continue;
slot = fp_offset_to_slot (cfg, offset);
set_slot_everywhere (gcfg, slot, SLOT_NOREF);
if (cfg->verbose_level > 1)
printf ("\tint spill slot at %s0x%x(fp) (slot = %d)\n", offset >= 0 ? "" : "-", ABS (offset), slot);
}
}
/*
* process_other_slots:
*
* Process stack slots registered using mini_gc_set_slot_type_... ().
*/
static void
process_other_slots (MonoCompile *cfg)
{
MonoCompileGC *gcfg = cfg->gc_info;
GSList *l;
/* Relative to the CFA */
for (l = gcfg->stack_slots_from_cfa; l; l = l->next) {
guint data = GPOINTER_TO_UINT (l->data);
int cfa_slot = data >> 16;
GCSlotType type = data & 0xff;
int slot;
/*
* Map the cfa relative slot to an fp relative slot.
* slot_addr == cfa - <cfa_slot>*4/8
* fp + cfa_offset == cfa
* -> slot_addr == fp + (cfa_offset - <cfa_slot>*4/8)
*/
slot = (cfg->cfa_offset / SIZEOF_SLOT) - cfa_slot - (gcfg->min_offset / SIZEOF_SLOT);
set_slot_everywhere (gcfg, slot, type);
if (cfg->verbose_level > 1) {
int fp_offset = slot_to_fp_offset (cfg, slot);
if (type == SLOT_NOREF)
printf ("\tnoref slot at %s0x%x(fp) (slot = %d) (cfa - 0x%x)\n", fp_offset >= 0 ? "" : "-", ABS (fp_offset), slot, (int)(cfa_slot * SIZEOF_SLOT));
}
}
/* Relative to the FP */
for (l = gcfg->stack_slots_from_fp; l; l = l->next) {
gint data = GPOINTER_TO_INT (l->data);
int offset = data >> 16;
GCSlotType type = data & 0xff;
int slot;
slot = fp_offset_to_slot (cfg, offset);
set_slot_everywhere (gcfg, slot, type);
/* Liveness for these slots is handled by process_spill_slots () */
if (cfg->verbose_level > 1) {
if (type == SLOT_REF)
printf ("\tref slot at fp+0x%x (slot = %d)\n", offset, slot);
else if (type == SLOT_NOREF)
printf ("\tnoref slot at 0x%x(fp) (slot = %d)\n", offset, slot);
}
}
}
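/*
 * Worked example for the cfa->fp slot mapping above (assumed numbers): with
 * SIZEOF_SLOT == 8, cfg->cfa_offset == 0x10 and gcfg->min_offset == -0x80,
 * cfa_slot 1 addresses cfa - 8 == fp + 0x8, which is slot
 * (0x10 / 8) - 1 - (-0x80 / 8) == 2 - 1 + 16 == 17.
 */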
static gsize*
get_vtype_bitmap (MonoType *t, int *numbits)
{
MonoClass *klass = mono_class_from_mono_type_internal (t);
if (klass->generic_container || mono_class_is_open_constructed_type (t)) {
/* FIXME: Generic sharing */
return NULL;
} else {
mono_class_compute_gc_descriptor (klass);
return mono_gc_get_bitmap_for_descr (klass->gc_descr, numbits);
}
}
static const char*
get_offset_sign (int offset)
{
return offset < 0 ? "-" : "+";
}
static int
get_offset_val (int offset)
{
return offset < 0 ? (- offset) : offset;
}
static void
process_variables (MonoCompile *cfg)
{
MonoCompileGC *gcfg = cfg->gc_info;
MonoMethodSignature *sig = mono_method_signature_internal (cfg->method);
int i, locals_min_slot, locals_max_slot, cindex;
MonoBasicBlock *bb;
MonoInst *tmp;
int *pc_offsets;
int locals_min_offset = gcfg->locals_min_offset;
int locals_max_offset = gcfg->locals_max_offset;
/* Slots for locals are NOREF by default */
locals_min_slot = (locals_min_offset - gcfg->min_offset) / SIZEOF_SLOT;
locals_max_slot = (locals_max_offset - gcfg->min_offset) / SIZEOF_SLOT;
for (i = locals_min_slot; i < locals_max_slot; ++i) {
set_slot_everywhere (gcfg, i, SLOT_NOREF);
}
/*
* Compute the offset where variables are initialized in the first bblock, if any.
*/
pc_offsets = g_new0 (int, cfg->next_vreg);
bb = cfg->bb_entry->next_bb;
MONO_BB_FOR_EACH_INS (bb, tmp) {
if (tmp->opcode == OP_GC_LIVENESS_DEF) {
int vreg = tmp->inst_c1;
if (pc_offsets [vreg] == 0) {
g_assert (tmp->backend.pc_offset > 0);
pc_offsets [vreg] = tmp->backend.pc_offset;
}
}
}
/*
* Stack slots holding arguments are initialized in the prolog.
* This means we can treat them alive for the whole method.
*/
for (i = 0; i < cfg->num_varinfo; i++) {
MonoInst *ins = cfg->varinfo [i];
MonoType *t = ins->inst_vtype;
MonoMethodVar *vmv;
guint32 pos;
gboolean byref, is_this = FALSE;
gboolean is_arg = i < cfg->locals_start;
if (ins == cfg->ret) {
if (!(ins->opcode == OP_REGOFFSET && MONO_TYPE_ISSTRUCT (t)))
continue;
}
vmv = MONO_VARINFO (cfg, i);
/* For some reason, 'this' is byref */
if (sig->hasthis && ins == cfg->args [0] && !cfg->method->klass->valuetype) {
t = m_class_get_byval_arg (cfg->method->klass);
is_this = TRUE;
}
byref = m_type_is_byref (t);
if (ins->opcode == OP_REGVAR) {
int hreg;
GCSlotType slot_type;
t = mini_get_underlying_type (t);
hreg = ins->dreg;
g_assert (hreg < MONO_MAX_IREGS);
if (byref)
slot_type = SLOT_PIN;
else
slot_type = mini_type_is_reference (t) ? SLOT_REF : SLOT_NOREF;
if (slot_type == SLOT_PIN) {
/* These have no live interval, be conservative */
set_reg_slot_everywhere (gcfg, hreg, slot_type);
} else {
/*
* Unlike variables allocated to the stack, we generate liveness info
* for noref vars in registers in mono_spill_global_vars (), because
* knowing that a register doesn't contain a ref allows us to mark its save
* locations precisely.
*/
for (cindex = 0; cindex < gcfg->ncallsites; ++cindex)
if (gcfg->callsites [cindex]->liveness [i / 8] & (1 << (i % 8)))
set_reg_slot (gcfg, hreg, cindex, slot_type);
}
if (cfg->verbose_level > 1) {
printf ("\t%s %sreg %s(R%d)\n", slot_type_to_string (slot_type), is_arg ? "arg " : "", mono_arch_regname (hreg), vmv->vreg);
}
continue;
}
if (ins->opcode != OP_REGOFFSET)
continue;
if (ins->inst_offset % SIZEOF_SLOT != 0)
continue;
pos = fp_offset_to_slot (cfg, ins->inst_offset);
if (is_arg && ins->flags & MONO_INST_IS_DEAD) {
/* These do not get stored in the prolog */
set_slot_everywhere (gcfg, pos, SLOT_NOREF);
if (cfg->verbose_level > 1) {
printf ("\tdead arg at fp%s0x%x (slot = %d): %s\n", get_offset_sign (ins->inst_offset), get_offset_val (ins->inst_offset), pos, mono_type_full_name (ins->inst_vtype));
}
continue;
}
if (MONO_TYPE_ISSTRUCT (t)) {
int numbits = 0, j;
gsize *bitmap = NULL;
gboolean pin = FALSE;
int size;
int size_in_slots;
if (ins->backend.is_pinvoke)
size = mono_class_native_size (ins->klass, NULL);
else
size = mono_class_value_size (ins->klass, NULL);
size_in_slots = ALIGN_TO (size, SIZEOF_SLOT) / SIZEOF_SLOT;
if (cfg->verbose_level > 1)
printf ("\tvtype R%d at %s0x%x(fp)-%s0x%x(fp) (slot %d-%d): %s\n", vmv->vreg, get_offset_sign (ins->inst_offset), get_offset_val (ins->inst_offset), get_offset_sign (ins->inst_offset), get_offset_val (ins->inst_offset + (size_in_slots * SIZEOF_SLOT)), pos, pos + size_in_slots, mono_type_full_name (ins->inst_vtype));
if (!ins->klass->has_references) {
if (is_arg) {
for (j = 0; j < size_in_slots; ++j)
set_slot_everywhere (gcfg, pos + j, SLOT_NOREF);
}
continue;
}
bitmap = get_vtype_bitmap (t, &numbits);
if (!bitmap)
pin = TRUE;
/*
* Most vtypes are marked volatile because of the LDADDR instructions,
* and they have no liveness information since they are decomposed
* before the liveness pass. We emit OP_GC_LIVENESS_DEF instructions for
* them during VZERO decomposition.
*/
if (!is_arg) {
if (!pc_offsets [vmv->vreg])
pin = TRUE;
if (ins->backend.is_pinvoke)
pin = TRUE;
}
if (bitmap) {
for (cindex = 0; cindex < gcfg->ncallsites; ++cindex) {
if (gcfg->callsites [cindex]->pc_offset > pc_offsets [vmv->vreg]) {
for (j = 0; j < numbits; ++j) {
if (bitmap [j / GC_BITS_PER_WORD] & ((gsize)1 << (j % GC_BITS_PER_WORD))) {
/* The descriptor is for the boxed object */
set_slot (gcfg, (pos + j - (MONO_ABI_SIZEOF (MonoObject) / SIZEOF_SLOT)), cindex, pin ? SLOT_PIN : SLOT_REF);
}
}
}
}
if (cfg->verbose_level > 1) {
for (j = 0; j < numbits; ++j) {
if (bitmap [j / GC_BITS_PER_WORD] & ((gsize)1 << (j % GC_BITS_PER_WORD)))
printf ("\t\t%s slot at 0x%x(fp) (slot = %d)\n", pin ? "pin" : "ref", (int)(ins->inst_offset + (j * SIZEOF_SLOT)), (int)(pos + j - (MONO_ABI_SIZEOF (MonoObject) / SIZEOF_SLOT)));
}
}
} else {
if (cfg->verbose_level > 1)
printf ("\t\tpinned\n");
for (j = 0; j < size_in_slots; ++j) {
set_slot_everywhere (gcfg, pos + j, SLOT_PIN);
}
}
g_free (bitmap);
continue;
}
if (!is_arg && (ins->inst_offset < gcfg->min_offset || ins->inst_offset >= gcfg->max_offset))
/* Vret addr etc. */
continue;
if (m_type_is_byref (t)) {
if (is_arg) {
set_slot_everywhere (gcfg, pos, SLOT_PIN);
} else {
for (cindex = 0; cindex < gcfg->ncallsites; ++cindex)
if (gcfg->callsites [cindex]->liveness [i / 8] & (1 << (i % 8)))
set_slot (gcfg, pos, cindex, SLOT_PIN);
}
if (cfg->verbose_level > 1)
printf ("\tbyref at %s0x%x(fp) (R%d, slot = %d): %s\n", ins->inst_offset < 0 ? "-" : "", (ins->inst_offset < 0) ? -(int)ins->inst_offset : (int)ins->inst_offset, vmv->vreg, pos, mono_type_full_name (ins->inst_vtype));
continue;
}
/*
* This is currently disabled, but could be enabled to debug crashes.
*/
#if 0
if (t->type == MONO_TYPE_I) {
/*
* Variables created in mono_handle_global_vregs have type I, but they
* could hold GC refs since the vregs they were created from might not have been
* marked as holding a GC ref. So be conservative.
*/
set_slot_everywhere (gcfg, pos, SLOT_PIN);
continue;
}
#endif
t = mini_get_underlying_type (t);
if (!mini_type_is_reference (t)) {
set_slot_everywhere (gcfg, pos, SLOT_NOREF);
if (cfg->verbose_level > 1)
printf ("\tnoref%s at %s0x%x(fp) (R%d, slot = %d): %s\n", (is_arg ? " arg" : ""), ins->inst_offset < 0 ? "-" : "", (ins->inst_offset < 0) ? -(int)ins->inst_offset : (int)ins->inst_offset, vmv->vreg, pos, mono_type_full_name (ins->inst_vtype));
if (!m_type_is_byref (t) && sizeof (host_mgreg_t) == 4 && (t->type == MONO_TYPE_I8 || t->type == MONO_TYPE_U8 || t->type == MONO_TYPE_R8)) {
set_slot_everywhere (gcfg, pos + 1, SLOT_NOREF);
if (cfg->verbose_level > 1)
printf ("\tnoref at %s0x%x(fp) (R%d, slot = %d): %s\n", ins->inst_offset < 0 ? "-" : "", (ins->inst_offset < 0) ? -(int)(ins->inst_offset + 4) : (int)ins->inst_offset + 4, vmv->vreg, pos + 1, mono_type_full_name (ins->inst_vtype));
}
continue;
}
/* 'this' is marked INDIRECT for gshared methods */
if (ins->flags & (MONO_INST_VOLATILE | MONO_INST_INDIRECT) && !is_this) {
/*
* For volatile variables, treat them alive from the point they are
* initialized in the first bblock until the end of the method.
*/
if (is_arg) {
set_slot_everywhere (gcfg, pos, SLOT_REF);
} else if (pc_offsets [vmv->vreg]) {
set_slot_in_range (gcfg, pos, 0, pc_offsets [vmv->vreg], SLOT_PIN);
set_slot_in_range (gcfg, pos, pc_offsets [vmv->vreg], cfg->code_size, SLOT_REF);
} else {
set_slot_everywhere (gcfg, pos, SLOT_PIN);
}
if (cfg->verbose_level > 1)
printf ("\tvolatile ref at %s0x%x(fp) (R%d, slot = %d): %s\n", ins->inst_offset < 0 ? "-" : "", (ins->inst_offset < 0) ? -(int)ins->inst_offset : (int)ins->inst_offset, vmv->vreg, pos, mono_type_full_name (ins->inst_vtype));
continue;
}
if (is_arg) {
/* Live for the whole method */
set_slot_everywhere (gcfg, pos, SLOT_REF);
} else {
for (cindex = 0; cindex < gcfg->ncallsites; ++cindex)
if (gcfg->callsites [cindex]->liveness [i / 8] & (1 << (i % 8)))
set_slot (gcfg, pos, cindex, SLOT_REF);
}
if (cfg->verbose_level > 1) {
printf ("\tref%s at %s0x%x(fp) (R%d, slot = %d): %s\n", (is_arg ? " arg" : ""), ins->inst_offset < 0 ? "-" : "", (ins->inst_offset < 0) ? -(int)ins->inst_offset : (int)ins->inst_offset, vmv->vreg, pos, mono_type_full_name (ins->inst_vtype));
}
}
g_free (pc_offsets);
}
static int
sp_offset_to_fp_offset (MonoCompile *cfg, int sp_offset)
{
/*
* Convert a sp relative offset to a slot index. This is
* platform specific.
*/
#ifdef TARGET_AMD64
/* fp = sp + offset */
g_assert (cfg->frame_reg == AMD64_RBP);
return (- cfg->arch.sp_fp_offset + sp_offset);
#elif defined(TARGET_X86)
/* The offset is computed from the sp at the start of the call sequence */
g_assert (cfg->frame_reg == X86_EBP);
#ifdef MONO_X86_NO_PUSHES
return (- cfg->arch.sp_fp_offset + sp_offset);
#else
return (- cfg->arch.sp_fp_offset - sp_offset);
#endif
#else
NOT_IMPLEMENTED;
return -1;
#endif
}
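/*
 * Worked example for the AMD64 case above (assumed numbers): with
 * cfg->arch.sp_fp_offset == 0x20, the sp-relative offset 0x8 maps to the
 * fp-relative offset -0x20 + 0x8 == -0x18.
 */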
static void
process_param_area_slots (MonoCompile *cfg)
{
MonoCompileGC *gcfg = cfg->gc_info;
int cindex, i;
gboolean *is_param;
/*
* These slots are used for passing parameters during calls. They are sp relative, not
* fp relative, so they are harder to handle.
*/
if (cfg->flags & MONO_CFG_HAS_ALLOCA)
/* The distance between fp and sp is not constant */
return;
is_param = mono_mempool_alloc0 (cfg->mempool, gcfg->nslots * sizeof (gboolean));
for (cindex = 0; cindex < gcfg->ncallsites; ++cindex) {
GCCallSite *callsite = gcfg->callsites [cindex];
GSList *l;
for (l = callsite->param_slots; l; l = l->next) {
MonoInst *def = l->data;
MonoType *t = def->inst_vtype;
int sp_offset = def->inst_offset;
int fp_offset = sp_offset_to_fp_offset (cfg, sp_offset);
int slot = fp_offset_to_slot (cfg, fp_offset);
guint32 align;
guint32 size;
if (MONO_TYPE_ISSTRUCT (t)) {
size = mini_type_stack_size_full (t, &align, FALSE);
} else {
size = sizeof (target_mgreg_t);
}
for (i = 0; i < size / sizeof (target_mgreg_t); ++i) {
g_assert (slot + i >= 0 && slot + i < gcfg->nslots);
is_param [slot + i] = TRUE;
}
}
}
/* All param area slots are noref by default */
for (i = 0; i < gcfg->nslots; ++i) {
if (is_param [i])
set_slot_everywhere (gcfg, i, SLOT_NOREF);
}
/*
* We treat param area slots as being part of the callee's frame, to be able to handle tailcalls which overwrite
* the argument area of the caller.
*/
}
static void
process_finally_clauses (MonoCompile *cfg)
{
MonoCompileGC *gcfg = cfg->gc_info;
GCCallSite **callsites;
int ncallsites;
gboolean has_finally;
int i, j, nslots, nregs;
ncallsites = gcfg->ncallsites;
nslots = gcfg->nslots;
nregs = gcfg->nregs;
callsites = gcfg->callsites;
/*
* The calls to the finally clauses don't show up in the cfg. See
* test_0_liveness_8 ().
* Variables accessed inside the finally clause are already marked VOLATILE by
* mono_liveness_handle_exception_clauses (). Variables not accessed inside the finally clause have
* correct liveness outside the finally clause. So mark them PIN inside the finally clauses.
*/
has_finally = FALSE;
for (i = 0; i < cfg->header->num_clauses; ++i) {
MonoExceptionClause *clause = &cfg->header->clauses [i];
if (clause->flags == MONO_EXCEPTION_CLAUSE_FINALLY) {
has_finally = TRUE;
}
}
if (has_finally) {
if (cfg->verbose_level > 1)
printf ("\tMethod has finally clauses, pessimizing live ranges.\n");
for (j = 0; j < ncallsites; ++j) {
MonoBasicBlock *bb = callsites [j]->bb;
MonoExceptionClause *clause;
gboolean is_in_finally = FALSE;
for (i = 0; i < cfg->header->num_clauses; ++i) {
clause = &cfg->header->clauses [i];
if (MONO_OFFSET_IN_HANDLER (clause, bb->real_offset)) {
if (clause->flags == MONO_EXCEPTION_CLAUSE_FINALLY) {
is_in_finally = TRUE;
break;
}
}
}
if (is_in_finally) {
for (i = 0; i < nslots; ++i)
set_slot (gcfg, i, j, SLOT_PIN);
for (i = 0; i < nregs; ++i)
set_reg_slot (gcfg, i, j, SLOT_PIN);
}
}
}
}
static void
compute_frame_size (MonoCompile *cfg)
{
int i, locals_min_offset, locals_max_offset, cfa_min_offset, cfa_max_offset;
int min_offset, max_offset;
MonoCompileGC *gcfg = cfg->gc_info;
MonoMethodSignature *sig = mono_method_signature_internal (cfg->method);
GSList *l;
/* Compute min/max offsets from the fp */
/* Locals */
#if defined(TARGET_AMD64) || defined(TARGET_X86) || defined(TARGET_ARM) || defined(TARGET_S390X)
locals_min_offset = ALIGN_TO (cfg->locals_min_stack_offset, SIZEOF_SLOT);
locals_max_offset = cfg->locals_max_stack_offset;
#else
/* min/max stack offset needs to be computed in mono_arch_allocate_vars () */
NOT_IMPLEMENTED;
#endif
locals_min_offset = ALIGN_TO (locals_min_offset, SIZEOF_SLOT);
locals_max_offset = ALIGN_TO (locals_max_offset, SIZEOF_SLOT);
min_offset = locals_min_offset;
max_offset = locals_max_offset;
/* Arguments */
for (i = 0; i < sig->param_count + sig->hasthis; ++i) {
MonoInst *ins = cfg->args [i];
if (ins->opcode == OP_REGOFFSET) {
int size, size_in_slots;
size = mini_type_stack_size_full (ins->inst_vtype, NULL, ins->backend.is_pinvoke);
size_in_slots = ALIGN_TO (size, SIZEOF_SLOT) / SIZEOF_SLOT;
min_offset = MIN (min_offset, ins->inst_offset);
max_offset = MAX ((int)max_offset, (int)(ins->inst_offset + (size_in_slots * SIZEOF_SLOT)));
}
}
/* Cfa slots */
g_assert (cfg->frame_reg == cfg->cfa_reg);
g_assert (cfg->cfa_offset > 0);
cfa_min_offset = 0;
cfa_max_offset = cfg->cfa_offset;
min_offset = MIN (min_offset, cfa_min_offset);
max_offset = MAX (max_offset, cfa_max_offset);
/* Fp relative slots */
for (l = gcfg->stack_slots_from_fp; l; l = l->next) {
gint data = GPOINTER_TO_INT (l->data);
int offset = data >> 16;
min_offset = MIN (min_offset, offset);
}
/* Spill slots */
if (!(cfg->flags & MONO_CFG_HAS_SPILLUP)) {
int stack_offset = ALIGN_TO (cfg->stack_offset, SIZEOF_SLOT);
min_offset = MIN (min_offset, (-stack_offset));
}
/* Param area slots */
#ifdef TARGET_AMD64
min_offset = MIN (min_offset, -cfg->arch.sp_fp_offset);
#elif defined(TARGET_X86)
#ifdef MONO_X86_NO_PUSHES
min_offset = MIN (min_offset, -cfg->arch.sp_fp_offset);
#else
min_offset = MIN (min_offset, - (cfg->arch.sp_fp_offset + cfg->arch.param_area_size));
#endif
#elif defined(TARGET_ARM)
// FIXME:
#elif defined(TARGET_S390X)
// FIXME:
#else
NOT_IMPLEMENTED;
#endif
gcfg->min_offset = min_offset;
gcfg->max_offset = max_offset;
gcfg->locals_min_offset = locals_min_offset;
gcfg->locals_max_offset = locals_max_offset;
}
static void
init_gcfg (MonoCompile *cfg)
{
int i, nregs, nslots;
MonoCompileGC *gcfg = cfg->gc_info;
GCCallSite **callsites;
int ncallsites;
MonoBasicBlock *bb;
GSList *l;
/*
* Collect callsites
*/
ncallsites = 0;
for (bb = cfg->bb_entry; bb; bb = bb->next_bb) {
ncallsites += g_slist_length (bb->gc_callsites);
}
callsites = mono_mempool_alloc0 (cfg->mempool, ncallsites * sizeof (GCCallSite*));
i = 0;
for (bb = cfg->bb_entry; bb; bb = bb->next_bb) {
for (l = bb->gc_callsites; l; l = l->next)
callsites [i++] = l->data;
}
/* The callsites should already be ordered by pc offset */
for (i = 1; i < ncallsites; ++i)
g_assert (callsites [i - 1]->pc_offset < callsites [i]->pc_offset);
/*
* The stack frame looks like this:
*
* <fp + max_offset> == cfa -> <end of previous frame>
* <other stack slots>
* <locals>
* <other stack slots>
* fp + min_offset ->
* ...
* fp ->
*/
if (cfg->verbose_level > 1)
printf ("GC Map for %s: 0x%x-0x%x\n", mono_method_full_name (cfg->method, TRUE), gcfg->min_offset, gcfg->max_offset);
nslots = (gcfg->max_offset - gcfg->min_offset) / SIZEOF_SLOT;
nregs = NREGS;
gcfg->nslots = nslots;
gcfg->nregs = nregs;
gcfg->callsites = callsites;
gcfg->ncallsites = ncallsites;
gcfg->stack_bitmap_width = ALIGN_TO (ncallsites, 8) / 8;
gcfg->reg_bitmap_width = ALIGN_TO (ncallsites, 8) / 8;
gcfg->stack_ref_bitmap = mono_mempool_alloc0 (cfg->mempool, gcfg->stack_bitmap_width * nslots);
gcfg->stack_pin_bitmap = mono_mempool_alloc0 (cfg->mempool, gcfg->stack_bitmap_width * nslots);
gcfg->reg_ref_bitmap = mono_mempool_alloc0 (cfg->mempool, gcfg->reg_bitmap_width * nregs);
gcfg->reg_pin_bitmap = mono_mempool_alloc0 (cfg->mempool, gcfg->reg_bitmap_width * nregs);
/* All slots start out as PIN */
memset (gcfg->stack_pin_bitmap, 0xff, gcfg->stack_bitmap_width * nslots); /* the bitmap is stack_bitmap_width * nslots bytes */
for (i = 0; i < nregs; ++i) {
/*
* By default, registers are NOREF.
* It is possible for a callee to save them before they are defined in this method,
* but the saved value is dead too, so it doesn't need to be marked.
*/
if ((cfg->used_int_regs & (1 << i)))
set_reg_slot_everywhere (gcfg, i, SLOT_NOREF);
}
}
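/*
 * Worked example of the frame picture above (assumed numbers): with
 * gcfg->min_offset == -0x80 and gcfg->max_offset == 0x10 (the cfa), the map
 * covers 0x90 bytes, i.e. nslots == 18 on a 64-bit target, slot 0 sitting at
 * fp - 0x80 and slot 17 at fp + 0x8.
 */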
static gboolean
has_bit_set (guint8 *bitmap, int width, int slot)
{
int i;
int pos = width * slot;
for (i = 0; i < width; ++i) {
if (bitmap [pos + i])
break;
}
return i < width;
}
static void
create_map (MonoCompile *cfg)
{
GCMap *map;
int i, j, nregs, nslots, nref_regs, npin_regs, alloc_size, bitmaps_size, bitmaps_offset;
int ntypes [16];
int stack_bitmap_width, stack_bitmap_size, reg_ref_bitmap_width, reg_ref_bitmap_size;
int reg_pin_bitmap_width, reg_pin_bitmap_size, bindex;
int start, end;
gboolean has_ref_slots, has_pin_slots, has_ref_regs, has_pin_regs;
MonoCompileGC *gcfg = cfg->gc_info;
GCCallSite **callsites;
int ncallsites;
guint8 *bitmap, *bitmaps;
guint32 reg_ref_mask, reg_pin_mask;
ncallsites = gcfg->ncallsites;
nslots = gcfg->nslots;
nregs = gcfg->nregs;
callsites = gcfg->callsites;
/*
* Compute the real size of the bitmap, i.e. ignore NOREF columns at the beginning and at
* the end. Also, compute whether the map needs ref/pin bitmaps, and collect stats.
*/
has_ref_slots = FALSE;
has_pin_slots = FALSE;
start = -1;
end = -1;
memset (ntypes, 0, sizeof (ntypes));
for (i = 0; i < nslots; ++i) {
gboolean has_ref = FALSE;
gboolean has_pin = FALSE;
if (has_bit_set (gcfg->stack_pin_bitmap, gcfg->stack_bitmap_width, i))
has_pin = TRUE;
if (has_bit_set (gcfg->stack_ref_bitmap, gcfg->stack_bitmap_width, i))
has_ref = TRUE;
if (has_ref)
has_ref_slots = TRUE;
if (has_pin)
has_pin_slots = TRUE;
if (has_ref)
ntypes [SLOT_REF] ++;
else if (has_pin)
ntypes [SLOT_PIN] ++;
else
ntypes [SLOT_NOREF] ++;
if (has_ref || has_pin) {
if (start == -1)
start = i;
end = i + 1;
}
}
if (start == -1) {
start = end = nslots;
} else {
g_assert (start != -1);
g_assert (start < end);
}
has_ref_regs = FALSE;
has_pin_regs = FALSE;
reg_ref_mask = 0;
reg_pin_mask = 0;
nref_regs = 0;
npin_regs = 0;
for (i = 0; i < nregs; ++i) {
gboolean has_ref = FALSE;
gboolean has_pin = FALSE;
if (!(cfg->used_int_regs & (1 << i)))
continue;
if (has_bit_set (gcfg->reg_pin_bitmap, gcfg->reg_bitmap_width, i))
has_pin = TRUE;
if (has_bit_set (gcfg->reg_ref_bitmap, gcfg->reg_bitmap_width, i))
has_ref = TRUE;
if (has_ref) {
reg_ref_mask |= (1 << i);
has_ref_regs = TRUE;
nref_regs ++;
}
if (has_pin) {
reg_pin_mask |= (1 << i);
has_pin_regs = TRUE;
npin_regs ++;
}
}
if (cfg->verbose_level > 1)
printf ("Slots: %d Start: %d End: %d Refs: %d NoRefs: %d Pin: %d Callsites: %d\n", nslots, start, end, ntypes [SLOT_REF], ntypes [SLOT_NOREF], ntypes [SLOT_PIN], ncallsites);
/* Create the GC Map */
/* The work bitmaps have one row for each slot, since this is how we access them during construction */
stack_bitmap_width = ALIGN_TO (end - start, 8) / 8;
stack_bitmap_size = stack_bitmap_width * ncallsites;
reg_ref_bitmap_width = ALIGN_TO (nref_regs, 8) / 8;
reg_ref_bitmap_size = reg_ref_bitmap_width * ncallsites;
reg_pin_bitmap_width = ALIGN_TO (npin_regs, 8) / 8;
reg_pin_bitmap_size = reg_pin_bitmap_width * ncallsites;
bitmaps_size = (has_ref_slots ? stack_bitmap_size : 0) + (has_pin_slots ? stack_bitmap_size : 0) + (has_ref_regs ? reg_ref_bitmap_size : 0) + (has_pin_regs ? reg_pin_bitmap_size : 0);
map = mono_mempool_alloc0 (cfg->mempool, sizeof (GCMap));
map->frame_reg = cfg->frame_reg;
map->start_offset = gcfg->min_offset;
map->end_offset = gcfg->min_offset + (nslots * SIZEOF_SLOT);
map->map_offset = start * SIZEOF_SLOT;
map->nslots = end - start;
map->has_ref_slots = has_ref_slots;
map->has_pin_slots = has_pin_slots;
map->has_ref_regs = has_ref_regs;
map->has_pin_regs = has_pin_regs;
g_assert (nregs < 32);
map->used_int_regs = cfg->used_int_regs;
map->reg_ref_mask = reg_ref_mask;
map->reg_pin_mask = reg_pin_mask;
map->nref_regs = nref_regs;
map->npin_regs = npin_regs;
bitmaps = mono_mempool_alloc0 (cfg->mempool, bitmaps_size);
bitmaps_offset = 0;
if (has_ref_slots) {
map->stack_ref_bitmap_offset = bitmaps_offset;
bitmaps_offset += stack_bitmap_size;
bitmap = &bitmaps [map->stack_ref_bitmap_offset];
for (i = 0; i < nslots; ++i) {
for (j = 0; j < ncallsites; ++j) {
if (get_bit (gcfg->stack_ref_bitmap, gcfg->stack_bitmap_width, i, j))
set_bit (bitmap, stack_bitmap_width, j, i - start);
}
}
}
if (has_pin_slots) {
map->stack_pin_bitmap_offset = bitmaps_offset;
bitmaps_offset += stack_bitmap_size;
bitmap = &bitmaps [map->stack_pin_bitmap_offset];
for (i = 0; i < nslots; ++i) {
for (j = 0; j < ncallsites; ++j) {
if (get_bit (gcfg->stack_pin_bitmap, gcfg->stack_bitmap_width, i, j))
set_bit (bitmap, stack_bitmap_width, j, i - start);
}
}
}
if (has_ref_regs) {
map->reg_ref_bitmap_offset = bitmaps_offset;
bitmaps_offset += reg_ref_bitmap_size;
bitmap = &bitmaps [map->reg_ref_bitmap_offset];
bindex = 0;
for (i = 0; i < nregs; ++i) {
if (reg_ref_mask & (1 << i)) {
for (j = 0; j < ncallsites; ++j) {
if (get_bit (gcfg->reg_ref_bitmap, gcfg->reg_bitmap_width, i, j))
set_bit (bitmap, reg_ref_bitmap_width, j, bindex);
}
bindex ++;
}
}
}
if (has_pin_regs) {
map->reg_pin_bitmap_offset = bitmaps_offset;
bitmaps_offset += reg_pin_bitmap_size;
bitmap = &bitmaps [map->reg_pin_bitmap_offset];
bindex = 0;
for (i = 0; i < nregs; ++i) {
if (reg_pin_mask & (1 << i)) {
for (j = 0; j < ncallsites; ++j) {
if (get_bit (gcfg->reg_pin_bitmap, gcfg->reg_bitmap_width, i, j))
set_bit (bitmap, reg_pin_bitmap_width, j, bindex);
}
bindex ++;
}
}
}
/* Call sites */
map->ncallsites = ncallsites;
if (cfg->code_len < 256)
map->callsite_entry_size = 1;
else if (cfg->code_len < 65536)
map->callsite_entry_size = 2;
else
map->callsite_entry_size = 4;
/* Encode the GC Map */
{
guint8 buf [256];
guint8 *endbuf;
GCEncodedMap *emap;
int encoded_size;
guint8 *p;
encode_gc_map (map, buf, &endbuf);
g_assert (endbuf - buf < 256);
encoded_size = endbuf - buf;
alloc_size = sizeof (GCEncodedMap) + ALIGN_TO (encoded_size, map->callsite_entry_size) + (map->callsite_entry_size * map->ncallsites) + bitmaps_size;
emap = mono_mem_manager_alloc0 (cfg->mem_manager, alloc_size);
//emap->ref_slots = map->ref_slots;
/* Encoded fixed fields */
p = &emap->encoded [0];
//emap->encoded_size = encoded_size;
memcpy (p, buf, encoded_size);
p += encoded_size;
/* Callsite table */
p = (guint8*)ALIGN_TO ((gsize)p, map->callsite_entry_size);
if (map->callsite_entry_size == 1) {
guint8 *offsets = p;
for (i = 0; i < ncallsites; ++i)
offsets [i] = callsites [i]->pc_offset;
UnlockedAdd (&stats.gc_callsites8_size, ncallsites * sizeof (guint8));
} else if (map->callsite_entry_size == 2) {
guint16 *offsets = (guint16*)p;
for (i = 0; i < ncallsites; ++i)
offsets [i] = callsites [i]->pc_offset;
UnlockedAdd (&stats.gc_callsites16_size, ncallsites * sizeof (guint16));
} else {
guint32 *offsets = (guint32*)p;
for (i = 0; i < ncallsites; ++i)
offsets [i] = callsites [i]->pc_offset;
UnlockedAdd (&stats.gc_callsites32_size, ncallsites * sizeof (guint32));
}
p += ncallsites * map->callsite_entry_size;
/* Bitmaps */
memcpy (p, bitmaps, bitmaps_size);
p += bitmaps_size;
g_assert ((guint8*)p - (guint8*)emap <= alloc_size);
UnlockedAdd (&stats.gc_maps_size, alloc_size);
UnlockedAdd (&stats.gc_callsites_size, ncallsites * map->callsite_entry_size);
UnlockedAdd (&stats.gc_bitmaps_size, bitmaps_size);
UnlockedAdd (&stats.gc_map_struct_size, sizeof (GCEncodedMap) + encoded_size);
cfg->jit_info->gc_info = emap;
cfg->gc_map = (guint8*)emap;
cfg->gc_map_size = alloc_size;
}
UnlockedAdd (&stats.all_slots, nslots);
UnlockedAdd (&stats.ref_slots, ntypes [SLOT_REF]);
UnlockedAdd (&stats.noref_slots, ntypes [SLOT_NOREF]);
UnlockedAdd (&stats.pin_slots, ntypes [SLOT_PIN]);
}
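/*
 * Resulting in-memory layout of the blob built above (field sizes vary per
 * method; this is only an illustration):
 *
 *   GCEncodedMap
 *     encoded fixed fields            (encoded_size bytes)
 *     padding to callsite_entry_size
 *     callsite offset table           (ncallsites * callsite_entry_size bytes)
 *     stack ref/pin bitmaps           (present iff has_ref_slots/has_pin_slots)
 *     reg ref/pin bitmaps             (present iff has_ref_regs/has_pin_regs)
 *
 * conservative_pass () re-derives the same pointers by walking the blob with
 * decode_gc_map ().
 */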
void
mini_gc_create_gc_map (MonoCompile *cfg)
{
if (!cfg->compute_gc_maps)
return;
/*
* During marking, all frames except the top frame are at a call site, and we mark the
* top frame conservatively. This means that we only need to compute and record
* GC maps for call sites.
*/
if (!(cfg->comp_done & MONO_COMP_LIVENESS))
/* Without liveness info, the live ranges are not precise enough */
return;
mono_analyze_liveness_gc (cfg);
compute_frame_size (cfg);
init_gcfg (cfg);
process_spill_slots (cfg);
process_other_slots (cfg);
process_param_area_slots (cfg);
process_variables (cfg);
process_finally_clauses (cfg);
create_map (cfg);
}
#endif /* DISABLE_JIT */
static void
parse_debug_options (void)
{
char **opts, **ptr;
const char *env;
env = g_getenv ("MONO_GCMAP_DEBUG");
if (!env)
return;
opts = g_strsplit (env, ",", -1);
for (ptr = opts; ptr && *ptr; ptr ++) {
/* No options yet */
fprintf (stderr, "Invalid format for the MONO_GCMAP_DEBUG env variable: '%s'\n", env);
exit (1);
}
g_strfreev (opts);
g_free (env);
}
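/*
 * Illustrative shell invocation (the binary name is an assumption): the
 * debugging counters read here and in mini_gc_init_gc_map ()/
 * conservative_pass () can be used to bisect precise-marking bugs, e.g.
 *
 *   MONO_GCMAP_COUNT=100 MONO_PRECISE_COUNT=50 mono app.exe
 */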
void
mini_gc_init (void)
{
MonoGCCallbacks cb;
memset (&cb, 0, sizeof (cb));
cb.thread_attach_func = thread_attach_func;
cb.thread_detach_func = thread_detach_func;
cb.thread_suspend_func = thread_suspend_func;
/* Comment this out to disable precise stack marking */
cb.thread_mark_func = thread_mark_func;
cb.get_provenance_func = get_provenance_func;
if (mono_use_interpreter)
cb.interp_mark_func = mini_get_interp_callbacks ()->mark_stack;
mono_gc_set_gc_callbacks (&cb);
logfile = mono_gc_get_logfile ();
parse_debug_options ();
mono_counters_register ("GC Maps size",
MONO_COUNTER_GC | MONO_COUNTER_INT, &stats.gc_maps_size);
mono_counters_register ("GC Call Sites size",
MONO_COUNTER_GC | MONO_COUNTER_INT, &stats.gc_callsites_size);
mono_counters_register ("GC Bitmaps size",
MONO_COUNTER_GC | MONO_COUNTER_INT, &stats.gc_bitmaps_size);
mono_counters_register ("GC Map struct size",
MONO_COUNTER_GC | MONO_COUNTER_INT, &stats.gc_map_struct_size);
mono_counters_register ("GC Call Sites encoded using 8 bits",
MONO_COUNTER_GC | MONO_COUNTER_INT, &stats.gc_callsites8_size);
mono_counters_register ("GC Call Sites encoded using 16 bits",
MONO_COUNTER_GC | MONO_COUNTER_INT, &stats.gc_callsites16_size);
mono_counters_register ("GC Call Sites encoded using 32 bits",
MONO_COUNTER_GC | MONO_COUNTER_INT, &stats.gc_callsites32_size);
mono_counters_register ("GC Map slots (all)",
MONO_COUNTER_GC | MONO_COUNTER_INT, &stats.all_slots);
mono_counters_register ("GC Map slots (ref)",
MONO_COUNTER_GC | MONO_COUNTER_INT, &stats.ref_slots);
mono_counters_register ("GC Map slots (noref)",
MONO_COUNTER_GC | MONO_COUNTER_INT, &stats.noref_slots);
mono_counters_register ("GC Map slots (pin)",
MONO_COUNTER_GC | MONO_COUNTER_INT, &stats.pin_slots);
mono_counters_register ("GC TLS Data size",
MONO_COUNTER_GC | MONO_COUNTER_INT, &stats.tlsdata_size);
mono_counters_register ("Stack space scanned (all)",
MONO_COUNTER_GC | MONO_COUNTER_INT, &stats.scanned_stacks);
mono_counters_register ("Stack space scanned (native)",
MONO_COUNTER_GC | MONO_COUNTER_INT, &stats.scanned_native);
mono_counters_register ("Stack space scanned (other)",
MONO_COUNTER_GC | MONO_COUNTER_INT, &stats.scanned_other);
mono_counters_register ("Stack space scanned (using GC Maps)",
MONO_COUNTER_GC | MONO_COUNTER_INT, &stats.scanned);
mono_counters_register ("Stack space scanned (precise)",
MONO_COUNTER_GC | MONO_COUNTER_INT, &stats.scanned_precisely);
mono_counters_register ("Stack space scanned (pin)",
MONO_COUNTER_GC | MONO_COUNTER_INT, &stats.scanned_conservatively);
mono_counters_register ("Stack space scanned (pin registers)",
MONO_COUNTER_GC | MONO_COUNTER_INT, &stats.scanned_registers);
}
#else
void
mini_gc_enable_gc_maps_for_aot (void)
{
}
void
mini_gc_init (void)
{
MonoGCCallbacks cb;
memset (&cb, 0, sizeof (cb));
cb.get_provenance_func = get_provenance_func;
if (mono_use_interpreter)
cb.interp_mark_func = mini_get_interp_callbacks ()->mark_stack;
mono_gc_set_gc_callbacks (&cb);
}
#ifndef DISABLE_JIT
static void
mini_gc_init_gc_map (MonoCompile *cfg)
{
}
void
mini_gc_create_gc_map (MonoCompile *cfg)
{
}
void
mini_gc_set_slot_type_from_fp (MonoCompile *cfg, int slot_offset, GCSlotType type)
{
}
void
mini_gc_set_slot_type_from_cfa (MonoCompile *cfg, int slot_offset, GCSlotType type)
{
}
#endif /* DISABLE_JIT */
#endif
#ifndef DISABLE_JIT
/*
* mini_gc_init_cfg:
*
* Set GC specific options in CFG.
*/
void
mini_gc_init_cfg (MonoCompile *cfg)
{
if (mono_gc_is_moving ()) {
cfg->disable_ref_noref_stack_slot_share = TRUE;
cfg->gen_write_barriers = TRUE;
}
mini_gc_init_gc_map (cfg);
}
#endif /* DISABLE_JIT */
/*
* Problems with the current code:
* - the stack walk is slow
* - vtypes/refs used in EH regions are treated conservatively
* - once this work is finished, less pinning will be done, causing problems because
* we promote all surviving objects to old-gen.
* - the unwind code can't handle a method stopped inside a finally region: it thinks the caller is
* another method, but in reality it is either the exception handling code or the CALL_HANDLER opcode.
* This manifests in "Unable to find ip offset x in callsite list" assertions.
* - the unwind code also can't handle frames which are in the epilog, since the unwind info is not
* precise there.
*/
/*
* Ideas for creating smaller GC maps:
* - remove empty columns from the bitmaps. This requires adding a mask bit array for
* each bitmap.
* - merge reg and stack slot bitmaps, so the unused bits at the end of the reg bitmap are
* not wasted.
* - if the bitmap width is not a multiple of 8, the remaining bits are wasted.
* - group ref and non-ref stack slots together in mono_allocate_stack_slots ().
* - add an index for the callsite table so that each entry can be encoded as a 1 byte difference
* from an index entry.
*/
| -1 |
dotnet/runtime | 66,213 | [mono] Put WeakAttribute support under an ifdef | This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | lambdageek | 2022-03-04T19:50:51Z | 2022-03-08T14:08:40Z | 34880aaeb463cc17268b44aecb80dda0914cfa01 | 8342a08b8bf0ff2ffb64cd5992e2d60c8d6d63c1 | [mono] Put WeakAttribute support under an ifdef. | ./src/coreclr/vm/ilinstrumentation.h | // Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
// ===========================================================================
// File: ILInstrumentation.h
//
// ===========================================================================
#ifndef IL_INSTRUMENTATION_H
#define IL_INSTRUMENTATION_H
// declare an array type of COR_IL_MAP entries
typedef ArrayDPTR(COR_IL_MAP) ARRAY_PTR_COR_IL_MAP;
//---------------------------------------------------------------------------------------
//
// A profiler may instrument a method by changing the IL. This is typically done when the profiler receives
// a JITCompilationStarted notification. The profiler also has the option to provide the runtime with
// a mapping between original IL offsets and instrumented IL offsets. This struct is a simple container
// for storing the mapping information. We store the mapping information on the Module class, where it can
// be accessed by the debugger from out-of-process.
//
class InstrumentedILOffsetMapping
{
public:
InstrumentedILOffsetMapping();
// Check whether there is any mapping information stored in this object.
BOOL IsNull() const;
#if !defined(DACCESS_COMPILE)
// Release the memory used by the array of COR_IL_MAPs.
void Clear();
void SetMappingInfo(SIZE_T cMap, COR_IL_MAP * rgMap);
#endif // !DACCESS_COMPILE
SIZE_T GetCount() const;
ARRAY_PTR_COR_IL_MAP GetOffsets() const;
private:
SIZE_T m_cMap; // the number of elements in m_rgMap
ARRAY_PTR_COR_IL_MAP m_rgMap; // an array of COR_IL_MAPs
};
//---------------------------------------------------------------------------------------
//
// Hash table entry for storing InstrumentedILOffsetMapping. This is keyed by the MethodDef token.
//
struct ILOffsetMappingEntry
{
ILOffsetMappingEntry()
{
LIMITED_METHOD_DAC_CONTRACT;
m_methodToken = mdMethodDefNil;
// No need to initialize m_mapping. The default ctor of InstrumentedILOffsetMapping does the job.
}
ILOffsetMappingEntry(mdMethodDef token, InstrumentedILOffsetMapping mapping)
{
LIMITED_METHOD_DAC_CONTRACT;
m_methodToken = token;
m_mapping = mapping;
}
mdMethodDef m_methodToken;
InstrumentedILOffsetMapping m_mapping;
};
//---------------------------------------------------------------------------------------
//
// This class is used to create the hash table for the instrumented IL offset mapping.
// It encapsulates the desired behaviour of the templated hash table and implements
// the various functions needed by the hash table.
//
class ILOffsetMappingTraits : public NoRemoveSHashTraits<DefaultSHashTraits<ILOffsetMappingEntry> >
{
public:
typedef mdMethodDef key_t;
static key_t GetKey(element_t e)
{
LIMITED_METHOD_DAC_CONTRACT;
return e.m_methodToken;
}
static BOOL Equals(key_t k1, key_t k2)
{
LIMITED_METHOD_DAC_CONTRACT;
return (k1 == k2);
}
static count_t Hash(key_t k)
{
LIMITED_METHOD_DAC_CONTRACT;
return (count_t)(size_t)k;
}
static const element_t Null()
{
LIMITED_METHOD_DAC_CONTRACT;
ILOffsetMappingEntry e;
return e;
}
static bool IsNull(const element_t &e) { LIMITED_METHOD_DAC_CONTRACT; return e.m_methodToken == mdMethodDefNil; }
};
// Hash table of profiler-provided instrumented IL offset mapping, keyed by the MethodDef token
typedef SHash<ILOffsetMappingTraits> ILOffsetMappingTable;
typedef DPTR(ILOffsetMappingTable) PTR_ILOffsetMappingTable;
#endif // IL_INSTRUMENTATION_H
| -1 |
dotnet/runtime | 66,213 | [mono] Put WeakAttribute support under an ifdef | This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | lambdageek | 2022-03-04T19:50:51Z | 2022-03-08T14:08:40Z | 34880aaeb463cc17268b44aecb80dda0914cfa01 | 8342a08b8bf0ff2ffb64cd5992e2d60c8d6d63c1 | [mono] Put WeakAttribute support under an ifdef. This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | ./src/coreclr/pal/inc/rt/wtsapi32.h | // Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
//
//
// ===========================================================================
// File: wtsapi32.h
//
// ===========================================================================
// dummy wtsapi32.h for PAL
#include "palrt.h"
| // Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
//
//
// ===========================================================================
// File: wtsapi32.h
//
// ===========================================================================
// dummy wtsapi32.h for PAL
#include "palrt.h"
| -1 |
dotnet/runtime | 66,213 | [mono] Put WeakAttribute support under an ifdef | This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | lambdageek | 2022-03-04T19:50:51Z | 2022-03-08T14:08:40Z | 34880aaeb463cc17268b44aecb80dda0914cfa01 | 8342a08b8bf0ff2ffb64cd5992e2d60c8d6d63c1 | [mono] Put WeakAttribute support under an ifdef. This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | ./src/mono/mono/mini/abcremoval.h | /**
* \file
* Array bounds check removal
*
* Author:
* Massimiliano Mantione ([email protected])
*
* (C) 2004 Ximian, Inc. http://www.ximian.com
*/
#ifndef __MONO_ABCREMOVAL_H__
#define __MONO_ABCREMOVAL_H__
#include <limits.h>
#include "mini.h"
typedef enum {
MONO_VALUE_MAYBE_NULL = 0,
MONO_VALUE_NOT_NULL = 1,
MONO_VALUE_NULLNESS_MASK = 1,
/*
* If this bit is set, and the enclosing MonoSummarizedValue is a
* MONO_VARIABLE_SUMMARIZED_VALUE, then the "nullness" value is related
* to the variable referenced in MonoSummarizedVariableValue. Otherwise,
* the "nullness" value is constant.
*/
MONO_VALUE_IS_VARIABLE = 2,
} MonoValueNullness;
/**
* All handled value types (expressions) in variable definitions and branch
* conditions:
* ANY: not handled
* CONSTANT: an integer constant
* VARIABLE: a reference to a variable, with an optional delta (can be zero)
* PHI: a PHI definition of the SSA representation
*/
typedef enum {
MONO_ANY_SUMMARIZED_VALUE,
MONO_CONSTANT_SUMMARIZED_VALUE,
MONO_VARIABLE_SUMMARIZED_VALUE,
MONO_PHI_SUMMARIZED_VALUE
} MonoSummarizedValueType;
/**
* A MONO_CONSTANT_SUMMARIZED_VALUE value.
* value: the value
*/
typedef struct MonoSummarizedConstantValue {
int value;
MonoValueNullness nullness;
} MonoSummarizedConstantValue;
/**
* A MONO_VARIABLE_SUMMARIZED_VALUE value
* variable: the variable index in the cfg
* delta: the delta (can be zero)
*/
typedef struct MonoSummarizedVariableValue {
int variable;
int delta;
MonoValueNullness nullness;
} MonoSummarizedVariableValue;
/**
* A MONO_PHI_SUMMARIZED_VALUE value.
* number_of_alternatives: the number of alternatives in the PHI definition
* phi_alternatives: an array of integers with the indexes of the variables
* which are the alternatives in this PHI definition
*/
typedef struct MonoSummarizedPhiValue {
int number_of_alternatives;
int *phi_alternatives;
} MonoSummarizedPhiValue;
/**
* A summarized value.
* In practice it is a "tagged union".
*/
typedef struct MonoSummarizedValue {
MonoSummarizedValueType type;
union {
MonoSummarizedConstantValue constant;
MonoSummarizedVariableValue variable;
MonoSummarizedPhiValue phi;
} value;
} MonoSummarizedValue;
/**
* A "relation" between two values.
* The enumeration is used as a bit field, with three significant bits.
* The degenerated cases are meaningful:
* MONO_ANY_RELATION: we know nothing of this relation
* MONO_NO_RELATION: no relation is possible (this code is unreachable)
*/
typedef enum {
MONO_EQ_RELATION = 1,
MONO_LT_RELATION = 2,
MONO_GT_RELATION = 4,
MONO_NE_RELATION = (MONO_LT_RELATION|MONO_GT_RELATION),
MONO_LE_RELATION = (MONO_LT_RELATION|MONO_EQ_RELATION),
MONO_GE_RELATION = (MONO_GT_RELATION|MONO_EQ_RELATION),
MONO_ANY_RELATION = (MONO_EQ_RELATION|MONO_LT_RELATION|MONO_GT_RELATION),
MONO_NO_RELATION = 0
} MonoValueRelation;
/**
* A "kind" of integer value.
* The enumeration is used as a bit field, with two fields.
* The first, four bits wide, is the "sizeof" in bytes.
* The second is a flag that is true if the value is unsigned.
*/
typedef enum {
MONO_INTEGER_VALUE_SIZE_1 = 1,
MONO_INTEGER_VALUE_SIZE_2 = 2,
MONO_INTEGER_VALUE_SIZE_4 = 4,
MONO_INTEGER_VALUE_SIZE_8 = 8,
MONO_INTEGER_VALUE_SIZE_BITMASK = 15,
MONO_UNSIGNED_VALUE_FLAG = 16,
MONO_UNSIGNED_INTEGER_VALUE_SIZE_1 = MONO_UNSIGNED_VALUE_FLAG|MONO_INTEGER_VALUE_SIZE_1,
MONO_UNSIGNED_INTEGER_VALUE_SIZE_2 = MONO_UNSIGNED_VALUE_FLAG|MONO_INTEGER_VALUE_SIZE_2,
MONO_UNSIGNED_INTEGER_VALUE_SIZE_4 = MONO_UNSIGNED_VALUE_FLAG|MONO_INTEGER_VALUE_SIZE_4,
MONO_UNSIGNED_INTEGER_VALUE_SIZE_8 = MONO_UNSIGNED_VALUE_FLAG|MONO_INTEGER_VALUE_SIZE_8,
MONO_UNKNOWN_INTEGER_VALUE = 0
} MonoIntegerValueKind;
/**
* A relation between variables (or a variable and a constant).
* The first variable (the one "on the left of the expression") is implicit.
* relation: the relation between the variable and the value
* related_value: the related value
* relation_is_static_definition: TRUE if the relation comes from a variable
* definition, FALSE if it comes from a branch
* condition
* next: pointer to the next relation of this variable in the evaluation area
* (relations are always kept in the evaluation area, one list for each
* variable)
*/
typedef struct MonoSummarizedValueRelation {
MonoValueRelation relation;
MonoSummarizedValue related_value;
gboolean relation_is_static_definition;
struct MonoSummarizedValueRelation *next;
} MonoSummarizedValueRelation;
/**
* The evaluation status for one variable.
* The enumeration is used as a bit field, because the status has two
* distinct sections.
* The first is the "main" one (bits 0, 1 and 2), which is actually a proper
* enumeration (the bits are mutually exclusive, and their meaning is obvious).
* The other section (the bits in the MONO_RELATIONS_EVALUATION_IS_RECURSIVE
* set) is used to mark an evaluation as recursive (while backtracking through
* the evaluation contexts), to state if the graph loop gives a value that is
* ascending, descending or indefinite.
* The bits are handled separately because the same evaluation context could
* belong to more than one loop, so that each loop would set its bits.
* After the backtracking, the bits are examined and a decision is taken.
*
*/
typedef enum {
MONO_RELATIONS_EVALUATION_NOT_STARTED = 0,
MONO_RELATIONS_EVALUATION_IN_PROGRESS = 1,
MONO_RELATIONS_EVALUATION_COMPLETED = 2,
MONO_RELATIONS_EVALUATION_IS_RECURSIVELY_ASCENDING = 4,
MONO_RELATIONS_EVALUATION_IS_RECURSIVELY_DESCENDING = 8,
MONO_RELATIONS_EVALUATION_IS_RECURSIVELY_INDEFINITE = 16,
MONO_RELATIONS_EVALUATION_IS_RECURSIVE = (MONO_RELATIONS_EVALUATION_IS_RECURSIVELY_ASCENDING|MONO_RELATIONS_EVALUATION_IS_RECURSIVELY_DESCENDING|MONO_RELATIONS_EVALUATION_IS_RECURSIVELY_INDEFINITE)
} MonoRelationsEvaluationStatus;
/**
* A range of values (ranges include their limits).
* A range from MIN_INT to MAX_INT is "indefinite" (any value).
* A range where upper < lower means unreachable code (some of the relations
* that generated the range are incompatible, like x = 0 and x > 0).
* lower: the lower limit
* upper: the upper limit
*/
typedef struct MonoRelationsEvaluationRange {
int lower;
int upper;
MonoValueNullness nullness;
} MonoRelationsEvaluationRange;
/**
* The two ranges that contain the result of a variable evaluation.
* zero: the range with respect to zero
* variable: the range with respect to the target variable in this evaluation
*/
typedef struct MonoRelationsEvaluationRanges {
MonoRelationsEvaluationRange zero;
MonoRelationsEvaluationRange variable;
} MonoRelationsEvaluationRanges;
/**
* The context of a variable evaluation.
* current_relation: the relation that is currently evaluated.
* ranges: the result of the evaluation.
* father: the context of the evaluation that invoked this one (used to
* perform the backtracking when loops are detected).
*/
typedef struct MonoRelationsEvaluationContext {
MonoSummarizedValueRelation *current_relation;
MonoRelationsEvaluationRanges ranges;
struct MonoRelationsEvaluationContext *father;
} MonoRelationsEvaluationContext;
/*
* Basic macros to initialize and check ranges.
*/
#define MONO_MAKE_RELATIONS_EVALUATION_RANGE_WEAK(r) do{\
(r).lower = INT_MIN;\
(r).upper = INT_MAX;\
(r).nullness = MONO_VALUE_MAYBE_NULL; \
} while (0)
#define MONO_MAKE_RELATIONS_EVALUATION_RANGES_WEAK(rs) do{\
MONO_MAKE_RELATIONS_EVALUATION_RANGE_WEAK ((rs).zero); \
MONO_MAKE_RELATIONS_EVALUATION_RANGE_WEAK ((rs).variable); \
} while (0)
#define MONO_MAKE_RELATIONS_EVALUATION_RANGE_IMPOSSIBLE(r) do{\
(r).lower = INT_MAX;\
(r).upper = INT_MIN;\
(r).nullness = MONO_VALUE_MAYBE_NULL; \
} while (0)
#define MONO_MAKE_RELATIONS_EVALUATION_RANGES_IMPOSSIBLE(rs) do{\
MONO_MAKE_RELATIONS_EVALUATION_RANGE_IMPOSSIBLE ((rs).zero); \
MONO_MAKE_RELATIONS_EVALUATION_RANGE_IMPOSSIBLE ((rs).variable); \
} while (0)
#define MONO_RELATIONS_EVALUATION_RANGE_IS_WEAK(r) (((r).lower==INT_MIN)&&((r).upper==INT_MAX))
#define MONO_RELATIONS_EVALUATION_RANGES_ARE_WEAK(rs) \
(MONO_RELATIONS_EVALUATION_RANGE_IS_WEAK((rs).zero) && \
MONO_RELATIONS_EVALUATION_RANGE_IS_WEAK((rs).variable))
#define MONO_RELATIONS_EVALUATION_RANGE_IS_IMPOSSIBLE(r) (((r).lower)>((r).upper))
#define MONO_RELATIONS_EVALUATION_RANGES_ARE_IMPOSSIBLE(rs) \
(MONO_RELATIONS_EVALUATION_RANGE_IS_IMPOSSIBLE((rs).zero) || \
MONO_RELATIONS_EVALUATION_RANGE_IS_IMPOSSIBLE((rs).variable))
/*
* The following macros are needed because ranges include their limits, but
* some relations explicitly exclude them (GT and LT).
*/
#define MONO_UPPER_EVALUATION_RANGE_NOT_EQUAL(ur) ((((ur)==INT_MIN)||((ur)==INT_MAX))?(ur):((ur)-1))
#define MONO_LOWER_EVALUATION_RANGE_NOT_EQUAL(lr) ((((lr)==INT_MIN)||((lr)==INT_MAX))?(lr):((lr)+1))
#define MONO_APPLY_INEQUALITY_TO_EVALUATION_RANGE(r) do{\
(r).lower = MONO_LOWER_EVALUATION_RANGE_NOT_EQUAL ((r).lower);\
(r).upper = MONO_UPPER_EVALUATION_RANGE_NOT_EQUAL ((r).upper);\
} while (0)
#define MONO_APPLY_INEQUALITY_TO_EVALUATION_RANGES(rs) do{\
MONO_APPLY_INEQUALITY_TO_EVALUATION_RANGE ((rs).zero); \
MONO_APPLY_INEQUALITY_TO_EVALUATION_RANGE ((rs).variable); \
} while (0)
/*
* The following macros perform union and intersection operations on ranges.
*/
#define MONO_LOWER_EVALUATION_RANGE_UNION(lr,other_lr) ((lr)=MIN(lr,other_lr))
#define MONO_UPPER_EVALUATION_RANGE_UNION(ur,other_ur) ((ur)=MAX(ur,other_ur))
#define MONO_LOWER_EVALUATION_RANGE_INTERSECTION(lr,other_lr) ((lr)=MAX(lr,other_lr))
#define MONO_UPPER_EVALUATION_RANGE_INTERSECTION(ur,other_ur) ((ur)=MIN(ur,other_ur))
#define MONO_RELATIONS_EVALUATION_RANGE_UNION(r,other_r) do{\
MONO_LOWER_EVALUATION_RANGE_UNION((r).lower,(other_r).lower);\
MONO_UPPER_EVALUATION_RANGE_UNION((r).upper,(other_r).upper);\
} while (0)
#define MONO_RELATIONS_EVALUATION_RANGE_INTERSECTION(r,other_r) do{\
MONO_LOWER_EVALUATION_RANGE_INTERSECTION((r).lower,(other_r).lower);\
MONO_UPPER_EVALUATION_RANGE_INTERSECTION((r).upper,(other_r).upper);\
} while (0)
#define MONO_RELATIONS_EVALUATION_RANGES_UNION(rs,other_rs) do{\
MONO_RELATIONS_EVALUATION_RANGE_UNION ((rs).zero,(other_rs).zero); \
MONO_RELATIONS_EVALUATION_RANGE_UNION ((rs).variable,(other_rs).variable); \
} while (0)
#define MONO_RELATIONS_EVALUATION_RANGES_INTERSECTION(rs,other_rs) do{\
MONO_RELATIONS_EVALUATION_RANGE_INTERSECTION ((rs).zero,(other_rs).zero); \
MONO_RELATIONS_EVALUATION_RANGE_INTERSECTION ((rs).variable,(other_rs).variable); \
} while (0)
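/*
 * Illustrative sketch (not part of the original header): starting from
 * a = [0,10] and b = [5,20] in each case,
 *
 *   MONO_RELATIONS_EVALUATION_RANGE_INTERSECTION (a, b);  // a becomes [5,10]
 *   MONO_RELATIONS_EVALUATION_RANGE_UNION (a, b);         // a becomes [0,20]
 */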
/*
* The following macros add or subtract "safely" (without over/under-flow) a
* delta (constant) value to or from a range.
*/
#define MONO_ADD_DELTA_SAFELY(v,d) do{\
if (((d) > 0) && ((v) != INT_MIN)) {\
(v) = (((v)+(d))>(v))?((v)+(d)):INT_MAX;\
} else if (((d) < 0) && ((v) != INT_MAX)) {\
(v) = (((v)+(d))<(v))?((v)+(d)):INT_MIN;\
}\
} while (0)
#define MONO_SUB_DELTA_SAFELY(v,d) do{\
if (((d) < 0) && ((v) != INT_MIN)) {\
(v) = (((v)-(d))>(v))?((v)-(d)):INT_MAX;\
} else if (((d) > 0) && ((v) != INT_MAX)) {\
(v) = (((v)-(d))<(v))?((v)-(d)):INT_MIN;\
}\
} while (0)
#define MONO_ADD_DELTA_SAFELY_TO_RANGE(r,d) do{\
MONO_ADD_DELTA_SAFELY((r).lower,(d));\
MONO_ADD_DELTA_SAFELY((r).upper,(d));\
} while (0)
#define MONO_SUB_DELTA_SAFELY_FROM_RANGE(r,d) do{\
MONO_SUB_DELTA_SAFELY((r).lower,(d));\
MONO_SUB_DELTA_SAFELY((r).upper,(d));\
} while (0)
#define MONO_ADD_DELTA_SAFELY_TO_RANGES(rs,d) do{\
MONO_ADD_DELTA_SAFELY_TO_RANGE((rs).zero,(d));\
MONO_ADD_DELTA_SAFELY_TO_RANGE((rs).variable,(d));\
} while (0)
#define MONO_SUB_DELTA_SAFELY_FROM_RANGES(rs,d) do{\
MONO_SUB_DELTA_SAFELY_FROM_RANGE((rs).zero,(d));\
MONO_SUB_DELTA_SAFELY_FROM_RANGE((rs).variable,(d));\
} while (0)
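/*
 * Illustrative sketch (not part of the original header): the delta macros
 * saturate at INT_MIN/INT_MAX instead of wrapping around.
 *
 *   int v = INT_MAX - 1;
 *   MONO_ADD_DELTA_SAFELY (v, 10);  // v == INT_MAX, not a wrapped negative
 *   v = INT_MIN + 1;
 *   MONO_SUB_DELTA_SAFELY (v, 10);  // v == INT_MIN
 */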
/**
* The main evaluation area.
* cfg: the cfg of the method that is being examined.
* relations: an array of relations, one for each method variable (each
* relation is the head of a list); this is the evaluation graph
* contexts: an array of evaluation contexts (one for each method variable)
* variable_value_kind: an array of MonoIntegerValueKind, one for each local
* variable (or argument)
* defs: maps each vreg to the instruction which defines it.
*/
typedef struct MonoVariableRelationsEvaluationArea {
MonoCompile *cfg;
MonoSummarizedValueRelation *relations;
/**
* statuses and contexts are parallel arrays. A given index into each refers to
* the same context. This is a performance optimization. Clean_context was
* coming to dominate the running time of abcremoval. By
* storing the statuses together, we can memset the entire
* region.
*/
MonoRelationsEvaluationStatus *statuses;
MonoRelationsEvaluationContext *contexts;
MonoIntegerValueKind *variable_value_kind;
MonoInst **defs;
} MonoVariableRelationsEvaluationArea;
/**
* Convenience structure to define an "additional" relation for the main
* evaluation graph.
* variable: the variable to which the relation is applied
* relation: the relation
* insertion_point: the point in the graph where the relation is inserted
* (useful for removing it from the list when backtracking
* in the traversal of the dominator tree)
*/
typedef struct MonoAdditionalVariableRelation {
int variable;
MonoSummarizedValueRelation relation;
MonoSummarizedValueRelation *insertion_point;
} MonoAdditionalVariableRelation;
/**
* Convenience structure that stores two additional relations.
* In the current code, each BB can add at most two relations to the main
* evaluation graph, so one of these structures is enough to hold all the
* modifications to the graph made examining one BB.
*/
typedef struct MonoAdditionalVariableRelationsForBB {
MonoAdditionalVariableRelation relation1;
MonoAdditionalVariableRelation relation2;
} MonoAdditionalVariableRelationsForBB;
#endif /* __MONO_ABCREMOVAL_H__ */
| /**
* \file
* Array bounds check removal
*
* Author:
* Massimiliano Mantione ([email protected])
*
* (C) 2004 Ximian, Inc. http://www.ximian.com
*/
#ifndef __MONO_ABCREMOVAL_H__
#define __MONO_ABCREMOVAL_H__
#include <limits.h>
#include "mini.h"
typedef enum {
MONO_VALUE_MAYBE_NULL = 0,
MONO_VALUE_NOT_NULL = 1,
MONO_VALUE_NULLNESS_MASK = 1,
/*
* If this bit is set, and the enclosing MonoSummarizedValue is a
* MONO_VARIABLE_SUMMARIZED_VALUE, then the "nullness" value is related
* to the variable referenced in MonoSummarizedVariableValue. Otherwise,
* the "nullness" value is constant.
*/
MONO_VALUE_IS_VARIABLE = 2,
} MonoValueNullness;
/**
* All handled value types (expressions) in variable definitions and branch
* conditions:
* ANY: not handled
* CONSTANT: an integer constant
* VARIABLE: a reference to a variable, with an optional delta (can be zero)
* PHI: a PHI definition of the SSA representation
*/
typedef enum {
MONO_ANY_SUMMARIZED_VALUE,
MONO_CONSTANT_SUMMARIZED_VALUE,
MONO_VARIABLE_SUMMARIZED_VALUE,
MONO_PHI_SUMMARIZED_VALUE
} MonoSummarizedValueType;
/**
* A MONO_CONSTANT_SUMMARIZED_VALUE value.
* value: the value
*/
typedef struct MonoSummarizedConstantValue {
int value;
MonoValueNullness nullness;
} MonoSummarizedConstantValue;
/**
* A MONO_VARIABLE_SUMMARIZED_VALUE value
* variable: the variable index in the cfg
* delta: the delta (can be zero)
*/
typedef struct MonoSummarizedVariableValue {
int variable;
int delta;
MonoValueNullness nullness;
} MonoSummarizedVariableValue;
/**
* A MONO_PHI_SUMMARIZED_VALUE value.
* number_of_alternatives: the number of alternatives in the PHI definition
* phi_alternatives: an array of integers with the indexes of the variables
* which are the alternatives in this PHI definition
*/
typedef struct MonoSummarizedPhiValue {
int number_of_alternatives;
int *phi_alternatives;
} MonoSummarizedPhiValue;
/**
* A summarized value.
* In practice it is a "tagged union".
*/
typedef struct MonoSummarizedValue {
MonoSummarizedValueType type;
union {
MonoSummarizedConstantValue constant;
MonoSummarizedVariableValue variable;
MonoSummarizedPhiValue phi;
} value;
} MonoSummarizedValue;
/**
* A "relation" between two values.
* The enumeration is used as a bit field, with three significant bits.
* The degenerated cases are meaningful:
* MONO_ANY_RELATION: we know nothing of this relation
* MONO_NO_RELATION: no relation is possible (this code is unreachable)
*/
typedef enum {
MONO_EQ_RELATION = 1,
MONO_LT_RELATION = 2,
MONO_GT_RELATION = 4,
MONO_NE_RELATION = (MONO_LT_RELATION|MONO_GT_RELATION),
MONO_LE_RELATION = (MONO_LT_RELATION|MONO_EQ_RELATION),
MONO_GE_RELATION = (MONO_GT_RELATION|MONO_EQ_RELATION),
MONO_ANY_RELATION = (MONO_EQ_RELATION|MONO_LT_RELATION|MONO_GT_RELATION),
MONO_NO_RELATION = 0
} MonoValueRelation;
/**
* A "kind" of integer value.
* The enumeration is used as a bit field, with two fields.
* The first, four bits wide, is the "sizeof" in bytes.
* The second is a flag that is true if the value is unsigned.
*/
typedef enum {
MONO_INTEGER_VALUE_SIZE_1 = 1,
MONO_INTEGER_VALUE_SIZE_2 = 2,
MONO_INTEGER_VALUE_SIZE_4 = 4,
MONO_INTEGER_VALUE_SIZE_8 = 8,
MONO_INTEGER_VALUE_SIZE_BITMASK = 15,
MONO_UNSIGNED_VALUE_FLAG = 16,
MONO_UNSIGNED_INTEGER_VALUE_SIZE_1 = MONO_UNSIGNED_VALUE_FLAG|MONO_INTEGER_VALUE_SIZE_1,
MONO_UNSIGNED_INTEGER_VALUE_SIZE_2 = MONO_UNSIGNED_VALUE_FLAG|MONO_INTEGER_VALUE_SIZE_2,
MONO_UNSIGNED_INTEGER_VALUE_SIZE_4 = MONO_UNSIGNED_VALUE_FLAG|MONO_INTEGER_VALUE_SIZE_4,
MONO_UNSIGNED_INTEGER_VALUE_SIZE_8 = MONO_UNSIGNED_VALUE_FLAG|MONO_INTEGER_VALUE_SIZE_8,
MONO_UNKNOWN_INTEGER_VALUE = 0
} MonoIntegerValueKind;
/**
* A relation between variables (or a variable and a constant).
* The first variable (the one "on the left of the expression") is implicit.
* relation: the relation between the variable and the value
* related_value: the related value
* relation_is_static_definition: TRUE if the relation comes from a variable
* definition, FALSE if it comes from a branch
* condition
* next: pointer to the next relation of this variable in the evaluation area
* (relations are always kept in the evaluation area, one list for each
* variable)
*/
typedef struct MonoSummarizedValueRelation {
MonoValueRelation relation;
MonoSummarizedValue related_value;
gboolean relation_is_static_definition;
struct MonoSummarizedValueRelation *next;
} MonoSummarizedValueRelation;
/**
* The evaluation status for one variable.
* The enumeration is used as a bit field, because the status has two
* distinct sections.
* The first is the "main" one (bits 0, 1 and 2), which is actually a proper
* enumeration (the bits are mutually exclusive, and their meaning is obvious).
* The other section (the bits in the MONO_RELATIONS_EVALUATION_IS_RECURSIVE
* set) is used to mark an evaluation as recursive (while backtracking through
* the evaluation contexts), to state if the graph loop gives a value that is
* ascending, descending or indefinite.
* The bits are handled separately because the same evaluation context could
* belong to more than one loop, so that each loop would set its bits.
* After the backtracking, the bits are examined and a decision is taken.
*
*/
typedef enum {
MONO_RELATIONS_EVALUATION_NOT_STARTED = 0,
MONO_RELATIONS_EVALUATION_IN_PROGRESS = 1,
MONO_RELATIONS_EVALUATION_COMPLETED = 2,
MONO_RELATIONS_EVALUATION_IS_RECURSIVELY_ASCENDING = 4,
MONO_RELATIONS_EVALUATION_IS_RECURSIVELY_DESCENDING = 8,
MONO_RELATIONS_EVALUATION_IS_RECURSIVELY_INDEFINITE = 16,
MONO_RELATIONS_EVALUATION_IS_RECURSIVE = (MONO_RELATIONS_EVALUATION_IS_RECURSIVELY_ASCENDING|MONO_RELATIONS_EVALUATION_IS_RECURSIVELY_DESCENDING|MONO_RELATIONS_EVALUATION_IS_RECURSIVELY_INDEFINITE)
} MonoRelationsEvaluationStatus;
/**
* A range of values (ranges include their limits).
* A range from MIN_INT to MAX_INT is "indefinite" (any value).
* A range where upper < lower means unreachable code (some of the relations
* that generated the range are incompatible, like x = 0 and x > 0).
* lower: the lower limit
* upper: the upper limit
*/
typedef struct MonoRelationsEvaluationRange {
int lower;
int upper;
MonoValueNullness nullness;
} MonoRelationsEvaluationRange;
/**
* The two ranges that contain the result of a variable evaluation.
* zero: the range with respect to zero
* variable: the range with respect to the target variable in this evaluation
*/
typedef struct MonoRelationsEvaluationRanges {
MonoRelationsEvaluationRange zero;
MonoRelationsEvaluationRange variable;
} MonoRelationsEvaluationRanges;
/**
* The context of a variable evaluation.
* current_relation: the relation that is currently evaluated.
* ranges: the result of the evaluation.
* father: the context of the evaluation that invoked this one (used to
* perform the backtracking when loops are detected).
*/
typedef struct MonoRelationsEvaluationContext {
MonoSummarizedValueRelation *current_relation;
MonoRelationsEvaluationRanges ranges;
struct MonoRelationsEvaluationContext *father;
} MonoRelationsEvaluationContext;
/*
* Basic macros to initialize and check ranges.
*/
#define MONO_MAKE_RELATIONS_EVALUATION_RANGE_WEAK(r) do{\
(r).lower = INT_MIN;\
(r).upper = INT_MAX;\
(r).nullness = MONO_VALUE_MAYBE_NULL; \
} while (0)
#define MONO_MAKE_RELATIONS_EVALUATION_RANGES_WEAK(rs) do{\
MONO_MAKE_RELATIONS_EVALUATION_RANGE_WEAK ((rs).zero); \
MONO_MAKE_RELATIONS_EVALUATION_RANGE_WEAK ((rs).variable); \
} while (0)
#define MONO_MAKE_RELATIONS_EVALUATION_RANGE_IMPOSSIBLE(r) do{\
(r).lower = INT_MAX;\
(r).upper = INT_MIN;\
(r).nullness = MONO_VALUE_MAYBE_NULL; \
} while (0)
#define MONO_MAKE_RELATIONS_EVALUATION_RANGES_IMPOSSIBLE(rs) do{\
MONO_MAKE_RELATIONS_EVALUATION_RANGE_IMPOSSIBLE ((rs).zero); \
MONO_MAKE_RELATIONS_EVALUATION_RANGE_IMPOSSIBLE ((rs).variable); \
} while (0)
#define MONO_RELATIONS_EVALUATION_RANGE_IS_WEAK(r) (((r).lower==INT_MIN)&&((r).upper==INT_MAX))
#define MONO_RELATIONS_EVALUATION_RANGES_ARE_WEAK(rs) \
(MONO_RELATIONS_EVALUATION_RANGE_IS_WEAK((rs).zero) && \
MONO_RELATIONS_EVALUATION_RANGE_IS_WEAK((rs).variable))
#define MONO_RELATIONS_EVALUATION_RANGE_IS_IMPOSSIBLE(r) (((r).lower)>((r).upper))
#define MONO_RELATIONS_EVALUATION_RANGES_ARE_IMPOSSIBLE(rs) \
(MONO_RELATIONS_EVALUATION_RANGE_IS_IMPOSSIBLE((rs).zero) || \
MONO_RELATIONS_EVALUATION_RANGE_IS_IMPOSSIBLE((rs).variable))
/*
* The following macros are needed because ranges include their limits, but
* some relations explicitly exclude them (GT and LT).
*/
#define MONO_UPPER_EVALUATION_RANGE_NOT_EQUAL(ur) ((((ur)==INT_MIN)||((ur)==INT_MAX))?(ur):((ur)-1))
#define MONO_LOWER_EVALUATION_RANGE_NOT_EQUAL(lr) ((((lr)==INT_MIN)||((lr)==INT_MAX))?(lr):((lr)+1))
#define MONO_APPLY_INEQUALITY_TO_EVALUATION_RANGE(r) do{\
(r).lower = MONO_LOWER_EVALUATION_RANGE_NOT_EQUAL ((r).lower);\
(r).upper = MONO_UPPER_EVALUATION_RANGE_NOT_EQUAL ((r).upper);\
} while (0)
#define MONO_APPLY_INEQUALITY_TO_EVALUATION_RANGES(rs) do{\
MONO_APPLY_INEQUALITY_TO_EVALUATION_RANGE ((rs).zero); \
MONO_APPLY_INEQUALITY_TO_EVALUATION_RANGE ((rs).variable); \
} while (0)
/*
* The following macros perform union and intersection operations on ranges.
*/
#define MONO_LOWER_EVALUATION_RANGE_UNION(lr,other_lr) ((lr)=MIN(lr,other_lr))
#define MONO_UPPER_EVALUATION_RANGE_UNION(ur,other_ur) ((ur)=MAX(ur,other_ur))
#define MONO_LOWER_EVALUATION_RANGE_INTERSECTION(lr,other_lr) ((lr)=MAX(lr,other_lr))
#define MONO_UPPER_EVALUATION_RANGE_INTERSECTION(ur,other_ur) ((ur)=MIN(ur,other_ur))
#define MONO_RELATIONS_EVALUATION_RANGE_UNION(r,other_r) do{\
MONO_LOWER_EVALUATION_RANGE_UNION((r).lower,(other_r).lower);\
MONO_UPPER_EVALUATION_RANGE_UNION((r).upper,(other_r).upper);\
} while (0)
#define MONO_RELATIONS_EVALUATION_RANGE_INTERSECTION(r,other_r) do{\
MONO_LOWER_EVALUATION_RANGE_INTERSECTION((r).lower,(other_r).lower);\
MONO_UPPER_EVALUATION_RANGE_INTERSECTION((r).upper,(other_r).upper);\
} while (0)
#define MONO_RELATIONS_EVALUATION_RANGES_UNION(rs,other_rs) do{\
MONO_RELATIONS_EVALUATION_RANGE_UNION ((rs).zero,(other_rs).zero); \
MONO_RELATIONS_EVALUATION_RANGE_UNION ((rs).variable,(other_rs).variable); \
} while (0)
#define MONO_RELATIONS_EVALUATION_RANGES_INTERSECTION(rs,other_rs) do{\
MONO_RELATIONS_EVALUATION_RANGE_INTERSECTION ((rs).zero,(other_rs).zero); \
MONO_RELATIONS_EVALUATION_RANGE_INTERSECTION ((rs).variable,(other_rs).variable); \
} while (0)
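/*
 * Illustrative sketch (not part of the original header): starting from
 * a = [0,10] and b = [5,20] in each case,
 *
 *   MONO_RELATIONS_EVALUATION_RANGE_INTERSECTION (a, b);  // a becomes [5,10]
 *   MONO_RELATIONS_EVALUATION_RANGE_UNION (a, b);         // a becomes [0,20]
 */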
/*
* The following macros add or subtract "safely" (without over/under-flow) a
* delta (constant) value to or from a range.
*/
#define MONO_ADD_DELTA_SAFELY(v,d) do{\
if (((d) > 0) && ((v) != INT_MIN)) {\
(v) = (((v)+(d))>(v))?((v)+(d)):INT_MAX;\
} else if (((d) < 0) && ((v) != INT_MAX)) {\
(v) = (((v)+(d))<(v))?((v)+(d)):INT_MIN;\
}\
} while (0)
#define MONO_SUB_DELTA_SAFELY(v,d) do{\
if (((d) < 0) && ((v) != INT_MIN)) {\
(v) = (((v)-(d))>(v))?((v)-(d)):INT_MAX;\
} else if (((d) > 0) && ((v) != INT_MAX)) {\
(v) = (((v)-(d))<(v))?((v)-(d)):INT_MIN;\
}\
} while (0)
#define MONO_ADD_DELTA_SAFELY_TO_RANGE(r,d) do{\
MONO_ADD_DELTA_SAFELY((r).lower,(d));\
MONO_ADD_DELTA_SAFELY((r).upper,(d));\
} while (0)
#define MONO_SUB_DELTA_SAFELY_FROM_RANGE(r,d) do{\
MONO_SUB_DELTA_SAFELY((r).lower,(d));\
MONO_SUB_DELTA_SAFELY((r).upper,(d));\
} while (0)
#define MONO_ADD_DELTA_SAFELY_TO_RANGES(rs,d) do{\
MONO_ADD_DELTA_SAFELY_TO_RANGE((rs).zero,(d));\
MONO_ADD_DELTA_SAFELY_TO_RANGE((rs).variable,(d));\
} while (0)
#define MONO_SUB_DELTA_SAFELY_FROM_RANGES(rs,d) do{\
MONO_SUB_DELTA_SAFELY_FROM_RANGE((rs).zero,(d));\
MONO_SUB_DELTA_SAFELY_FROM_RANGE((rs).variable,(d));\
} while (0)
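/*
 * Illustrative sketch (not part of the original header): the delta macros
 * saturate at INT_MIN/INT_MAX instead of wrapping around.
 *
 *   int v = INT_MAX - 1;
 *   MONO_ADD_DELTA_SAFELY (v, 10);  // v == INT_MAX, not a wrapped negative
 *   v = INT_MIN + 1;
 *   MONO_SUB_DELTA_SAFELY (v, 10);  // v == INT_MIN
 */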
/**
* The main evaluation area.
* cfg: the cfg of the method that is being examined.
* relations: an array of relations, one for each method variable (each
* relation is the head of a list); this is the evaluation graph
* contexts: an array of evaluation contexts (one for each method variable)
* variable_value_kind: an array of MonoIntegerValueKind, one for each local
* variable (or argument)
* defs: maps each vreg to the instruction which defines it.
*/
typedef struct MonoVariableRelationsEvaluationArea {
MonoCompile *cfg;
MonoSummarizedValueRelation *relations;
/**
* statuses and contexts are parallel arrays. A given index into each refers to
* the same context. This is a performance optimization. Clean_context was
* coming to dominate the running time of abcremoval. By
* storing the statuses together, we can memset the entire
* region.
*/
MonoRelationsEvaluationStatus *statuses;
MonoRelationsEvaluationContext *contexts;
MonoIntegerValueKind *variable_value_kind;
MonoInst **defs;
} MonoVariableRelationsEvaluationArea;
/**
* Convenience structure to define an "additional" relation for the main
* evaluation graph.
* variable: the variable to which the relation is applied
* relation: the relation
* insertion_point: the point in the graph where the relation is inserted
* (useful for removing it from the list when backtracking
* in the traversal of the dominator tree)
*/
typedef struct MonoAdditionalVariableRelation {
int variable;
MonoSummarizedValueRelation relation;
MonoSummarizedValueRelation *insertion_point;
} MonoAdditionalVariableRelation;
/**
* Convenience structure that stores two additional relations.
* In the current code, each BB can add at most two relations to the main
* evaluation graph, so one of these structures is enough to hold all the
* modifications to the graph made examining one BB.
*/
typedef struct MonoAdditionalVariableRelationsForBB {
MonoAdditionalVariableRelation relation1;
MonoAdditionalVariableRelation relation2;
} MonoAdditionalVariableRelationsForBB;
#endif /* __MONO_ABCREMOVAL_H__ */
| -1 |
dotnet/runtime | 66,213 | [mono] Put WeakAttribute support under an ifdef | This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | lambdageek | 2022-03-04T19:50:51Z | 2022-03-08T14:08:40Z | 34880aaeb463cc17268b44aecb80dda0914cfa01 | 8342a08b8bf0ff2ffb64cd5992e2d60c8d6d63c1 | [mono] Put WeakAttribute support under an ifdef. This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | ./src/coreclr/pal/src/libunwind/src/s390x/is_fpreg.c | /* libunwind - a platform-independent unwind library
Copyright (c) 2004-2005 Hewlett-Packard Development Company, L.P.
Contributed by David Mosberger-Tang <[email protected]>
Modified for x86_64 by Max Asbock <[email protected]>
Modified for s390x by Michael Munday <[email protected]>
This file is part of libunwind.
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. */
#include "libunwind_i.h"
int
unw_is_fpreg (int regnum)
{
/* vector registers? */
return regnum >= UNW_S390X_F0 && regnum <= UNW_S390X_F15;
}
| /* libunwind - a platform-independent unwind library
Copyright (c) 2004-2005 Hewlett-Packard Development Company, L.P.
Contributed by David Mosberger-Tang <[email protected]>
Modified for x86_64 by Max Asbock <[email protected]>
Modified for s390x by Michael Munday <[email protected]>
This file is part of libunwind.
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. */
#include "libunwind_i.h"
int
unw_is_fpreg (int regnum)
{
/* vector registers? */
return regnum >= UNW_S390X_F0 && regnum <= UNW_S390X_F15;
}
| -1 |
dotnet/runtime | 66,213 | [mono] Put WeakAttribute support under an ifdef | This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | lambdageek | 2022-03-04T19:50:51Z | 2022-03-08T14:08:40Z | 34880aaeb463cc17268b44aecb80dda0914cfa01 | 8342a08b8bf0ff2ffb64cd5992e2d60c8d6d63c1 | [mono] Put WeakAttribute support under an ifdef. This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | ./src/mono/mono/metadata/mono-endian.h | /**
* \file
*/
#ifndef _MONO_METADATA_ENDIAN_H_
#define _MONO_METADATA_ENDIAN_H_ 1
#include <glib.h>
#include <mono/utils/mono-compiler.h>
typedef union {
guint32 ival;
float fval;
} mono_rfloat;
typedef union {
guint64 ival;
double fval;
unsigned char cval [8];
} mono_rdouble;
#if defined(__s390x__)
#define read16(x) __builtin_bswap16(*((guint16 *)(x)))
#define read32(x) __builtin_bswap32(*((guint32 *)(x)))
#define read64(x) __builtin_bswap64(*((guint64 *)(x)))
#else
# if NO_UNALIGNED_ACCESS
MONO_COMPONENT_API guint16 mono_read16 (const unsigned char *x);
MONO_COMPONENT_API guint32 mono_read32 (const unsigned char *x);
MONO_COMPONENT_API guint64 mono_read64 (const unsigned char *x);
#define read16(x) (mono_read16 ((const unsigned char *)(x)))
#define read32(x) (mono_read32 ((const unsigned char *)(x)))
#define read64(x) (mono_read64 ((const unsigned char *)(x)))
# else
#define read16(x) GUINT16_FROM_LE (*((const guint16 *) (x)))
#define read32(x) GUINT32_FROM_LE (*((const guint32 *) (x)))
#define read64(x) GUINT64_FROM_LE (*((const guint64 *) (x)))
# endif
#endif
#define readr4(x,dest) \
do { \
mono_rfloat mf; \
mf.ival = read32 ((x)); \
*(dest) = mf.fval; \
} while (0)
#define readr8(x,dest) \
do { \
mono_rdouble mf; \
mf.ival = read64 ((x)); \
*(dest) = mf.fval; \
} while (0)
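/*
 * Illustrative sketch (not part of the original header): decoding
 * little-endian metadata bytes regardless of host endianness.
 *
 *   const unsigned char buf [8] = { 0x78, 0x56, 0x34, 0x12, 0, 0, 0, 0 };
 *   guint32 v = read32 (buf);  // 0x12345678 on every host
 *   double d;
 *   readr8 (buf, &d);          // reinterprets the 8 bytes as an IEEE double
 */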
#endif /* _MONO_METADATA_ENDIAN_H_ */
| /**
* \file
*/
#ifndef _MONO_METADATA_ENDIAN_H_
#define _MONO_METADATA_ENDIAN_H_ 1
#include <glib.h>
#include <mono/utils/mono-compiler.h>
typedef union {
guint32 ival;
float fval;
} mono_rfloat;
typedef union {
guint64 ival;
double fval;
unsigned char cval [8];
} mono_rdouble;
#if defined(__s390x__)
#define read16(x) __builtin_bswap16(*((guint16 *)(x)))
#define read32(x) __builtin_bswap32(*((guint32 *)(x)))
#define read64(x) __builtin_bswap64(*((guint64 *)(x)))
#else
# if NO_UNALIGNED_ACCESS
MONO_COMPONENT_API guint16 mono_read16 (const unsigned char *x);
MONO_COMPONENT_API guint32 mono_read32 (const unsigned char *x);
MONO_COMPONENT_API guint64 mono_read64 (const unsigned char *x);
#define read16(x) (mono_read16 ((const unsigned char *)(x)))
#define read32(x) (mono_read32 ((const unsigned char *)(x)))
#define read64(x) (mono_read64 ((const unsigned char *)(x)))
# else
#define read16(x) GUINT16_FROM_LE (*((const guint16 *) (x)))
#define read32(x) GUINT32_FROM_LE (*((const guint32 *) (x)))
#define read64(x) GUINT64_FROM_LE (*((const guint64 *) (x)))
# endif
#endif
#define readr4(x,dest) \
do { \
mono_rfloat mf; \
mf.ival = read32 ((x)); \
*(dest) = mf.fval; \
} while (0)
#define readr8(x,dest) \
do { \
mono_rdouble mf; \
mf.ival = read64 ((x)); \
*(dest) = mf.fval; \
} while (0)
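/*
 * Illustrative sketch (not part of the original header): decoding
 * little-endian metadata bytes regardless of host endianness.
 *
 *   const unsigned char buf [8] = { 0x78, 0x56, 0x34, 0x12, 0, 0, 0, 0 };
 *   guint32 v = read32 (buf);  // 0x12345678 on every host
 *   double d;
 *   readr8 (buf, &d);          // reinterprets the 8 bytes as an IEEE double
 */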
#endif /* _MONO_METADATA_ENDIAN_H_ */
| -1 |
dotnet/runtime | 66,213 | [mono] Put WeakAttribute support under an ifdef | This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | lambdageek | 2022-03-04T19:50:51Z | 2022-03-08T14:08:40Z | 34880aaeb463cc17268b44aecb80dda0914cfa01 | 8342a08b8bf0ff2ffb64cd5992e2d60c8d6d63c1 | [mono] Put WeakAttribute support under an ifdef. This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | ./src/mono/mono/metadata/sysmath.c | /**
* \file
* these are based on Bob Smith's C# routines
*
* Author:
* Mono Project (http://www.mono-project.com)
* Ludovic Henry ([email protected])
*
* Copyright 2001-2003 Ximian, Inc (http://www.ximian.com)
* Copyright 2004-2009 Novell, Inc (http://www.novell.com)
* Copyright 2015 Xamarin, Inc (https://www.xamarin.com)
* Licensed under the MIT license. See LICENSE file in the project root for full license information.
*/
//
// Copyright (c) Microsoft. All rights reserved.
// Licensed under the MIT license. See LICENSE file in the project root for full license information.
//
// Files:
// - src/classlibnative/float/floatnative.cpp
// - src/pal/src/cruntime/floatnative.cpp
//
// Ported from C++ to C and adjusted to Mono runtime
#define __USE_ISOC99
#include <math.h>
#include "utils/mono-compiler.h"
#include "utils/mono-math.h"
#include "icalls.h"
#include "icall-decl.h"
gdouble
ves_icall_System_Math_Floor (gdouble x)
{
return floor(x);
}
gdouble
ves_icall_System_Math_Round (gdouble x)
{
return mono_round_to_even (x);
}
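/*
 * Illustrative note (not part of the original source): mono_round_to_even
 * implements banker's rounding, so halfway cases go to the nearest even
 * integer, matching System.Math.Round's default MidpointRounding.ToEven:
 *
 *   ves_icall_System_Math_Round (2.5);  // == 2.0
 *   ves_icall_System_Math_Round (3.5);  // == 4.0
 */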
gdouble
ves_icall_System_Math_FMod (gdouble x, gdouble y)
{
return fmod (x, y);
}
gdouble
ves_icall_System_Math_ModF (gdouble x, gdouble *d)
{
return modf (x, d);
}
gdouble
ves_icall_System_Math_Sin (gdouble x)
{
return sin (x);
}
gdouble
ves_icall_System_Math_Cos (gdouble x)
{
return cos (x);
}
gdouble
ves_icall_System_Math_Cbrt (gdouble x)
{
return cbrt (x);
}
gdouble
ves_icall_System_Math_Tan (gdouble x)
{
return tan (x);
}
gdouble
ves_icall_System_Math_Sinh (gdouble x)
{
return sinh (x);
}
gdouble
ves_icall_System_Math_Cosh (gdouble x)
{
return cosh (x);
}
gdouble
ves_icall_System_Math_Tanh (gdouble x)
{
return tanh (x);
}
gdouble
ves_icall_System_Math_Acos (gdouble x)
{
return acos (x);
}
gdouble
ves_icall_System_Math_Acosh (gdouble x)
{
return acosh (x);
}
gdouble
ves_icall_System_Math_Asin (gdouble x)
{
return asin (x);
}
gdouble
ves_icall_System_Math_Asinh (gdouble x)
{
return asinh (x);
}
gdouble
ves_icall_System_Math_Atan (gdouble x)
{
return atan (x);
}
gdouble
ves_icall_System_Math_Atan2 (gdouble y, gdouble x)
{
return atan2 (y, x);
}
gdouble
ves_icall_System_Math_Atanh (gdouble x)
{
return atanh (x);
}
gdouble
ves_icall_System_Math_Exp (gdouble x)
{
return exp (x);
}
gdouble
ves_icall_System_Math_Log (gdouble x)
{
return log (x);
}
gdouble
ves_icall_System_Math_Log10 (gdouble x)
{
return log10 (x);
}
gdouble
ves_icall_System_Math_Pow (gdouble x, gdouble y)
{
return pow (x, y);
}
gdouble
ves_icall_System_Math_Sqrt (gdouble x)
{
return sqrt (x);
}
gdouble
ves_icall_System_Math_Ceiling (gdouble v)
{
return ceil (v);
}
gdouble
ves_icall_System_Math_Log2 (gdouble x)
{
return log2 (x);
}
gdouble
ves_icall_System_Math_FusedMultiplyAdd (gdouble x, gdouble y, gdouble z)
{
return fma (x, y, z);
}
float
ves_icall_System_MathF_Acos (float x)
{
return acosf (x);
}
float
ves_icall_System_MathF_Acosh (float x)
{
return acoshf (x);
}
float
ves_icall_System_MathF_Asin (float x)
{
return asinf (x);
}
float
ves_icall_System_MathF_Asinh (float x)
{
return asinhf (x);
}
float
ves_icall_System_MathF_Atan (float x)
{
return atanf (x);
}
float
ves_icall_System_MathF_Atan2 (float x, float y)
{
return atan2f (x, y);
}
float
ves_icall_System_MathF_Atanh (float x)
{
return atanhf (x);
}
float
ves_icall_System_MathF_Cbrt (float x)
{
return cbrtf (x);
}
float
ves_icall_System_MathF_Ceiling (float x)
{
return ceilf(x);
}
float
ves_icall_System_MathF_Cos (float x)
{
return cosf (x);
}
float
ves_icall_System_MathF_Cosh (float x)
{
return coshf (x);
}
float
ves_icall_System_MathF_Exp (float x)
{
return expf (x);
}
float
ves_icall_System_MathF_Floor (float x)
{
return floorf (x);
}
float
ves_icall_System_MathF_Log (float x)
{
return logf (x);
}
float
ves_icall_System_MathF_Log10 (float x)
{
return log10f (x);
}
float
ves_icall_System_MathF_Pow (float x, float y)
{
return powf (x, y);
}
float
ves_icall_System_MathF_Sin (float x)
{
return sinf (x);
}
float
ves_icall_System_MathF_Sinh (float x)
{
return sinhf (x);
}
float
ves_icall_System_MathF_Sqrt (float x)
{
return sqrtf (x);
}
float
ves_icall_System_MathF_Tan (float x)
{
return tanf (x);
}
float
ves_icall_System_MathF_Tanh (float x)
{
return tanhf (x);
}
float
ves_icall_System_MathF_FMod (float x, float y)
{
return fmodf (x, y);
}
float
ves_icall_System_MathF_ModF (float x, float *d)
{
return modff (x, d);
}
float
ves_icall_System_MathF_Log2 (float x)
{
return log2f (x);
}
float
ves_icall_System_MathF_FusedMultiplyAdd (float x, float y, float z)
{
return fmaf (x, y, z);
}
| /**
* \file
* these are based on Bob Smith's C# routines
*
* Author:
* Mono Project (http://www.mono-project.com)
* Ludovic Henry ([email protected])
*
* Copyright 2001-2003 Ximian, Inc (http://www.ximian.com)
* Copyright 2004-2009 Novell, Inc (http://www.novell.com)
* Copyright 2015 Xamarin, Inc (https://www.xamarin.com)
* Licensed under the MIT license. See LICENSE file in the project root for full license information.
*/
//
// Copyright (c) Microsoft. All rights reserved.
// Licensed under the MIT license. See LICENSE file in the project root for full license information.
//
// Files:
// - src/classlibnative/float/floatnative.cpp
// - src/pal/src/cruntime/floatnative.cpp
//
// Ported from C++ to C and adjusted to Mono runtime
#define __USE_ISOC99
#include <math.h>
#include "utils/mono-compiler.h"
#include "utils/mono-math.h"
#include "icalls.h"
#include "icall-decl.h"
gdouble
ves_icall_System_Math_Floor (gdouble x)
{
return floor(x);
}
gdouble
ves_icall_System_Math_Round (gdouble x)
{
return mono_round_to_even (x);
}
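/*
 * Illustrative note (not part of the original source): mono_round_to_even
 * implements banker's rounding, so halfway cases go to the nearest even
 * integer, matching System.Math.Round's default MidpointRounding.ToEven:
 *
 *   ves_icall_System_Math_Round (2.5);  // == 2.0
 *   ves_icall_System_Math_Round (3.5);  // == 4.0
 */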
gdouble
ves_icall_System_Math_FMod (gdouble x, gdouble y)
{
return fmod (x, y);
}
gdouble
ves_icall_System_Math_ModF (gdouble x, gdouble *d)
{
return modf (x, d);
}
gdouble
ves_icall_System_Math_Sin (gdouble x)
{
return sin (x);
}
gdouble
ves_icall_System_Math_Cos (gdouble x)
{
return cos (x);
}
gdouble
ves_icall_System_Math_Cbrt (gdouble x)
{
return cbrt (x);
}
gdouble
ves_icall_System_Math_Tan (gdouble x)
{
return tan (x);
}
gdouble
ves_icall_System_Math_Sinh (gdouble x)
{
return sinh (x);
}
gdouble
ves_icall_System_Math_Cosh (gdouble x)
{
return cosh (x);
}
gdouble
ves_icall_System_Math_Tanh (gdouble x)
{
return tanh (x);
}
gdouble
ves_icall_System_Math_Acos (gdouble x)
{
return acos (x);
}
gdouble
ves_icall_System_Math_Acosh (gdouble x)
{
return acosh (x);
}
gdouble
ves_icall_System_Math_Asin (gdouble x)
{
return asin (x);
}
gdouble
ves_icall_System_Math_Asinh (gdouble x)
{
return asinh (x);
}
gdouble
ves_icall_System_Math_Atan (gdouble x)
{
return atan (x);
}
gdouble
ves_icall_System_Math_Atan2 (gdouble y, gdouble x)
{
return atan2 (y, x);
}
gdouble
ves_icall_System_Math_Atanh (gdouble x)
{
return atanh (x);
}
gdouble
ves_icall_System_Math_Exp (gdouble x)
{
return exp (x);
}
gdouble
ves_icall_System_Math_Log (gdouble x)
{
return log (x);
}
gdouble
ves_icall_System_Math_Log10 (gdouble x)
{
return log10 (x);
}
gdouble
ves_icall_System_Math_Pow (gdouble x, gdouble y)
{
return pow (x, y);
}
gdouble
ves_icall_System_Math_Sqrt (gdouble x)
{
return sqrt (x);
}
gdouble
ves_icall_System_Math_Ceiling (gdouble v)
{
return ceil (v);
}
gdouble
ves_icall_System_Math_Log2 (gdouble x)
{
return log2 (x);
}
gdouble
ves_icall_System_Math_FusedMultiplyAdd (gdouble x, gdouble y, gdouble z)
{
return fma (x, y, z);
}
float
ves_icall_System_MathF_Acos (float x)
{
return acosf (x);
}
float
ves_icall_System_MathF_Acosh (float x)
{
return acoshf (x);
}
float
ves_icall_System_MathF_Asin (float x)
{
return asinf (x);
}
float
ves_icall_System_MathF_Asinh (float x)
{
return asinhf (x);
}
float
ves_icall_System_MathF_Atan (float x)
{
return atanf (x);
}
float
ves_icall_System_MathF_Atan2 (float x, float y)
{
return atan2f (x, y);
}
float
ves_icall_System_MathF_Atanh (float x)
{
return atanhf (x);
}
float
ves_icall_System_MathF_Cbrt (float x)
{
return cbrtf (x);
}
float
ves_icall_System_MathF_Ceiling (float x)
{
return ceilf(x);
}
float
ves_icall_System_MathF_Cos (float x)
{
return cosf (x);
}
float
ves_icall_System_MathF_Cosh (float x)
{
return coshf (x);
}
float
ves_icall_System_MathF_Exp (float x)
{
return expf (x);
}
float
ves_icall_System_MathF_Floor (float x)
{
return floorf (x);
}
float
ves_icall_System_MathF_Log (float x)
{
return logf (x);
}
float
ves_icall_System_MathF_Log10 (float x)
{
return log10f (x);
}
float
ves_icall_System_MathF_Pow (float x, float y)
{
return powf (x, y);
}
float
ves_icall_System_MathF_Sin (float x)
{
return sinf (x);
}
float
ves_icall_System_MathF_Sinh (float x)
{
return sinhf (x);
}
float
ves_icall_System_MathF_Sqrt (float x)
{
return sqrtf (x);
}
float
ves_icall_System_MathF_Tan (float x)
{
return tanf (x);
}
float
ves_icall_System_MathF_Tanh (float x)
{
return tanhf (x);
}
float
ves_icall_System_MathF_FMod (float x, float y)
{
return fmodf (x, y);
}
float
ves_icall_System_MathF_ModF (float x, float *d)
{
return modff (x, d);
}
float
ves_icall_System_MathF_Log2 (float x)
{
return log2f (x);
}
float
ves_icall_System_MathF_FusedMultiplyAdd (float x, float y, float z)
{
return fmaf (x, y, z);
}
| -1 |
dotnet/runtime | 66,213 | [mono] Put WeakAttribute support under an ifdef | This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | lambdageek | 2022-03-04T19:50:51Z | 2022-03-08T14:08:40Z | 34880aaeb463cc17268b44aecb80dda0914cfa01 | 8342a08b8bf0ff2ffb64cd5992e2d60c8d6d63c1 | [mono] Put WeakAttribute support under an ifdef. This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | ./src/native/public/mono/metadata/details/mono-private-unstable-types.h | // Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
//
/**
*
* Private unstable APIs.
*
* WARNING: The declarations and behavior of functions in this header are NOT STABLE and can be modified or removed at
* any time.
*
*/
#ifndef _MONO_METADATA_PRIVATE_UNSTABLE_TYPES_H
#define _MONO_METADATA_PRIVATE_UNSTABLE_TYPES_H
#include <mono/utils/details/mono-publib-types.h>
#include <mono/utils/mono-forward.h>
MONO_BEGIN_DECLS
typedef MonoGCHandle MonoAssemblyLoadContextGCHandle;
typedef MonoAssembly * (*MonoAssemblyPreLoadFuncV3) (MonoAssemblyLoadContextGCHandle alc_gchandle, MonoAssemblyName *aname, char **assemblies_path, void *user_data, MonoError *error);
typedef struct _MonoBundledSatelliteAssembly MonoBundledSatelliteAssembly;
typedef void * (*PInvokeOverrideFn) (const char *libraryName, const char *entrypointName);
MONO_END_DECLS
#endif /* _MONO_METADATA_PRIVATE_UNSTABLE_TYPES_H */
| // Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
//
/**
*
* Private unstable APIs.
*
* WARNING: The declarations and behavior of functions in this header are NOT STABLE and can be modified or removed at
* any time.
*
*/
#ifndef _MONO_METADATA_PRIVATE_UNSTABLE_TYPES_H
#define _MONO_METADATA_PRIVATE_UNSTABLE_TYPES_H
#include <mono/utils/details/mono-publib-types.h>
#include <mono/utils/mono-forward.h>
MONO_BEGIN_DECLS
typedef MonoGCHandle MonoAssemblyLoadContextGCHandle;
typedef MonoAssembly * (*MonoAssemblyPreLoadFuncV3) (MonoAssemblyLoadContextGCHandle alc_gchandle, MonoAssemblyName *aname, char **assemblies_path, void *user_data, MonoError *error);
typedef struct _MonoBundledSatelliteAssembly MonoBundledSatelliteAssembly;
typedef void * (*PInvokeOverrideFn) (const char *libraryName, const char *entrypointName);
MONO_END_DECLS
#endif /* _MONO_METADATA_PRIVATE_UNSTABLE_TYPES_H */
| -1 |
dotnet/runtime | 66,213 | [mono] Put WeakAttribute support under an ifdef | This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | lambdageek | 2022-03-04T19:50:51Z | 2022-03-08T14:08:40Z | 34880aaeb463cc17268b44aecb80dda0914cfa01 | 8342a08b8bf0ff2ffb64cd5992e2d60c8d6d63c1 | [mono] Put WeakAttribute support under an ifdef. This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | ./src/coreclr/nativeaot/Runtime/inc/stressLog.h | // Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
// ---------------------------------------------------------------------------
// StressLog.h
//
// StressLog infrastructure
//
// The StressLog is a binary, memory based circular queue of logging messages.
// It is intended to be used in retail builds during stress runs (activated
// by registry key), to help find bugs that only turn up during stress runs.
//
// Unlike the desktop implementation, the RH implementation of the stress
// log logs all facilities and filters only on logging level.
//
// The log has a very simple structure, and is meant to be dumped from an NTSD
// extension (e.g. strike).
//
// debug\rhsos\stresslogdump.cpp contains the dumper utility that parses this
// log.
// ---------------------------------------------------------------------------
#ifndef StressLog_h
#define StressLog_h 1
#ifdef _MSC_VER
#define SUPPRESS_WARNING_4127 \
__pragma(warning(push)) \
__pragma(warning(disable:4127)) /* conditional expression is constant*/
#define POP_WARNING_STATE \
__pragma(warning(pop))
#else // _MSC_VER
#define SUPPRESS_WARNING_4127
#define POP_WARNING_STATE
#endif // _MSC_VER
#define WHILE_0 \
SUPPRESS_WARNING_4127 \
while(0) \
POP_WARNING_STATE
// let's keep STRESS_LOG defined always...
#if !defined(STRESS_LOG) && !defined(NO_STRESS_LOG)
#define STRESS_LOG
#endif
#if defined(STRESS_LOG)
//
// Logging levels and facilities
//
#define DEFINE_LOG_FACILITY(logname, value) logname = value,
enum LogFacilitiesEnum: unsigned int {
#include "loglf.h"
LF_ALWAYS = 0x80000000u, // Log message irrespective of LogFacility (if the level matches)
LF_ALL = 0xFFFFFFFFu, // Used only to mask bits. Never use as LOG((LF_ALL, ...))
};
#define LL_EVERYTHING 10
#define LL_INFO1000000 9 // can be expected to generate 1,000,000 logs per small but not trivial run
#define LL_INFO100000 8 // can be expected to generate 100,000 logs per small but not trivial run
#define LL_INFO10000 7 // can be expected to generate 10,000 logs per small but not trivial run
#define LL_INFO1000 6 // can be expected to generate 1,000 logs per small but not trivial run
#define LL_INFO100 5 // can be expected to generate 100 logs per small but not trivial run
#define LL_INFO10 4 // can be expected to generate 10 logs per small but not trival run
#define LL_WARNING 3
#define LL_ERROR 2
#define LL_FATALERROR 1
#define LL_ALWAYS 0 // impossible to turn off (log level never negative)
//
//
//
#ifndef _ASSERTE
#define _ASSERTE(expr)
#endif
#ifndef DACCESS_COMPILE
//==========================================================================================
// The STRESS_LOG* macros
//
// The STRESS_LOG* macros work like printf. In fact they use printf in their implementation
// so all printf format specifications work. In addition the Stress log dumper knows
// about certain suffixes for the %p format specification (normally used to print a pointer)
//
// %pM // The pointer is a MethodInfo -- not supported yet (use %pK instead)
// %pT // The pointer is a type (MethodTable)
// %pV // The pointer is a C++ Vtable pointer
// %pK // The pointer is a code address (used for call stacks or method names)
//
// STRESS_LOG_VA was added to allow sending GC trace output to the stress log. msg must be enclosed
// in ()'s and contain a format string followed by 0 - 4 arguments. The arguments must be numbers or
// string literals. LogMsgOL is overloaded so that all of the possible sets of parameters are covered.
// This was done because GC Trace uses dprintf which doesn't contain info on how many arguments are
// getting passed in and using va_args would require parsing the format string during the GC
//
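// Illustrative usage (sketch, not from the original header):
//
//   STRESS_LOG2 (LF_GC, LL_INFO100, "object %p relocated to %p\n", oldRef, newRef);
//
// Each dataN argument is captured as a pointer-sized value, so integers and
// pointers can be mixed freely. The format string should be a string literal,
// since the log records the pointer rather than copying the text.
//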
#define STRESS_LOG_VA(msg) do { \
if (StressLog::StressLogOn(LF_GC, LL_ALWAYS)) \
StressLog::LogMsgOL msg; \
} WHILE_0
#define STRESS_LOG0(facility, level, msg) do { \
if (StressLog::StressLogOn(facility, level)) \
StressLog::LogMsg(facility, 0, msg); \
    } WHILE_0
#define STRESS_LOG1(facility, level, msg, data1) do { \
if (StressLog::StressLogOn(facility, level)) \
StressLog::LogMsg(facility, 1, msg, (void*)(size_t)(data1)); \
} WHILE_0
#define STRESS_LOG2(facility, level, msg, data1, data2) do { \
if (StressLog::StressLogOn(facility, level)) \
StressLog::LogMsg(facility, 2, msg, \
(void*)(size_t)(data1), (void*)(size_t)(data2)); \
} WHILE_0
#define STRESS_LOG3(facility, level, msg, data1, data2, data3) do { \
if (StressLog::StressLogOn(facility, level)) \
StressLog::LogMsg(facility, 3, msg, \
(void*)(size_t)(data1),(void*)(size_t)(data2),(void*)(size_t)(data3)); \
} WHILE_0
#define STRESS_LOG4(facility, level, msg, data1, data2, data3, data4) do { \
if (StressLog::StressLogOn(facility, level)) \
StressLog::LogMsg(facility, 4, msg, (void*)(size_t)(data1), \
(void*)(size_t)(data2),(void*)(size_t)(data3),(void*)(size_t)(data4)); \
} WHILE_0
#define STRESS_LOG5(facility, level, msg, data1, data2, data3, data4, data5) do { \
if (StressLog::StressLogOn(facility, level)) \
StressLog::LogMsg(facility, 5, msg, (void*)(size_t)(data1), \
(void*)(size_t)(data2),(void*)(size_t)(data3),(void*)(size_t)(data4), \
(void*)(size_t)(data5)); \
} WHILE_0
#define STRESS_LOG6(facility, level, msg, data1, data2, data3, data4, data5, data6) do { \
if (StressLog::StressLogOn(facility, level)) \
StressLog::LogMsg(facility, 6, msg, (void*)(size_t)(data1), \
(void*)(size_t)(data2),(void*)(size_t)(data3),(void*)(size_t)(data4), \
(void*)(size_t)(data5), (void*)(size_t)(data6)); \
} WHILE_0
#define STRESS_LOG7(facility, level, msg, data1, data2, data3, data4, data5, data6, data7) do { \
if (StressLog::StressLogOn(facility, level)) \
StressLog::LogMsg(facility, 7, msg, (void*)(size_t)(data1), \
(void*)(size_t)(data2),(void*)(size_t)(data3),(void*)(size_t)(data4), \
(void*)(size_t)(data5), (void*)(size_t)(data6), (void*)(size_t)(data7)); \
} WHILE_0
#define STRESS_LOG_COND0(facility, level, cond, msg) do {                      \
if (StressLog::StressLogOn(facility, level) && (cond)) \
StressLog::LogMsg(facility, 0, msg); \
} WHILE_0
#define STRESS_LOG_COND1(facility, level, cond, msg, data1) do { \
if (StressLog::StressLogOn(facility, level) && (cond)) \
StressLog::LogMsg(facility, 1, msg, (void*)(size_t)(data1)); \
} WHILE_0
#define STRESS_LOG_COND2(facility, level, cond, msg, data1, data2) do { \
if (StressLog::StressLogOn(facility, level) && (cond)) \
StressLog::LogMsg(facility, 2, msg, \
(void*)(size_t)(data1), (void*)(size_t)(data2)); \
} WHILE_0
#define STRESS_LOG_COND3(facility, level, cond, msg, data1, data2, data3) do { \
if (StressLog::StressLogOn(facility, level) && (cond)) \
StressLog::LogMsg(facility, 3, msg, \
(void*)(size_t)(data1),(void*)(size_t)(data2),(void*)(size_t)(data3)); \
} WHILE_0
#define STRESS_LOG_COND4(facility, level, cond, msg, data1, data2, data3, data4) do { \
if (StressLog::StressLogOn(facility, level) && (cond)) \
StressLog::LogMsg(facility, 4, msg, (void*)(size_t)(data1), \
(void*)(size_t)(data2),(void*)(size_t)(data3),(void*)(size_t)(data4)); \
} WHILE_0
#define STRESS_LOG_COND5(facility, level, cond, msg, data1, data2, data3, data4, data5) do { \
if (StressLog::StressLogOn(facility, level) && (cond)) \
StressLog::LogMsg(facility, 5, msg, (void*)(size_t)(data1), \
(void*)(size_t)(data2),(void*)(size_t)(data3),(void*)(size_t)(data4), \
(void*)(size_t)(data5)); \
} WHILE_0
#define STRESS_LOG_COND6(facility, level, cond, msg, data1, data2, data3, data4, data5, data6) do { \
if (StressLog::StressLogOn(facility, level) && (cond)) \
StressLog::LogMsg(facility, 6, msg, (void*)(size_t)(data1), \
(void*)(size_t)(data2),(void*)(size_t)(data3),(void*)(size_t)(data4), \
(void*)(size_t)(data5), (void*)(size_t)(data6)); \
} WHILE_0
#define STRESS_LOG_COND7(facility, level, cond, msg, data1, data2, data3, data4, data5, data6, data7) do { \
if (StressLog::StressLogOn(facility, level) && (cond)) \
StressLog::LogMsg(facility, 7, msg, (void*)(size_t)(data1), \
(void*)(size_t)(data2),(void*)(size_t)(data3),(void*)(size_t)(data4), \
(void*)(size_t)(data5), (void*)(size_t)(data6), (void*)(size_t)(data7)); \
} WHILE_0
#define STRESS_LOG_RESERVE_MEM(numChunks) do { \
if (StressLog::StressLogOn(LF_ALL, LL_ALWAYS)) \
{StressLog::ReserveStressLogChunks (numChunks);} \
} WHILE_0
// !!! WARNING !!!
// !!! DO NOT ADD STRESS_LOG8, as the stress log infrastructure supports a maximum of 7 arguments
// !!! WARNING !!!
#define STRESS_LOG_PLUG_MOVE(plug_start, plug_end, plug_delta) do { \
if (StressLog::StressLogOn(LF_GC, LL_INFO1000)) \
StressLog::LogMsg(LF_GC, 3, ThreadStressLog::gcPlugMoveMsg(), \
(void*)(size_t)(plug_start), (void*)(size_t)(plug_end), (void*)(size_t)(plug_delta)); \
} WHILE_0
#define STRESS_LOG_ROOT_PROMOTE(root_addr, objPtr, methodTable) do { \
if (StressLog::StressLogOn(LF_GC|LF_GCROOTS, LL_INFO1000)) \
StressLog::LogMsg(LF_GC|LF_GCROOTS, 3, ThreadStressLog::gcRootPromoteMsg(), \
(void*)(size_t)(root_addr), (void*)(size_t)(objPtr), (void*)(size_t)(methodTable)); \
} WHILE_0
#define STRESS_LOG_ROOT_RELOCATE(root_addr, old_value, new_value, methodTable) do { \
if (StressLog::StressLogOn(LF_GC|LF_GCROOTS, LL_INFO1000) && ((size_t)(old_value) != (size_t)(new_value))) \
StressLog::LogMsg(LF_GC|LF_GCROOTS, 4, ThreadStressLog::gcRootMsg(), \
(void*)(size_t)(root_addr), (void*)(size_t)(old_value), \
(void*)(size_t)(new_value), (void*)(size_t)(methodTable)); \
} WHILE_0
#define STRESS_LOG_GC_START(gcCount, Gen, collectClasses) do { \
if (StressLog::StressLogOn(LF_GCROOTS|LF_GC|LF_GCALLOC, LL_INFO10)) \
StressLog::LogMsg(LF_GCROOTS|LF_GC|LF_GCALLOC, 3, ThreadStressLog::gcStartMsg(), \
(void*)(size_t)(gcCount), (void*)(size_t)(Gen), (void*)(size_t)(collectClasses)); \
} WHILE_0
#define STRESS_LOG_GC_END(gcCount, Gen, collectClasses) do { \
if (StressLog::StressLogOn(LF_GCROOTS|LF_GC|LF_GCALLOC, LL_INFO10)) \
StressLog::LogMsg(LF_GCROOTS|LF_GC|LF_GCALLOC, 3, ThreadStressLog::gcEndMsg(),\
(void*)(size_t)(gcCount), (void*)(size_t)(Gen), (void*)(size_t)(collectClasses), 0);\
} WHILE_0
#if defined(_DEBUG)
#define MAX_CALL_STACK_TRACE 20
#define STRESS_LOG_OOM_STACK(size) do { \
if (StressLog::StressLogOn(LF_ALWAYS, LL_ALWAYS)) \
{ \
StressLog::LogMsgOL("OOM on alloc of size %x \n", (void*)(size_t)(size)); \
StressLog::LogCallStack ("OOM"); \
} \
} WHILE_0
#define STRESS_LOG_GC_STACK do { \
if (StressLog::StressLogOn(LF_GC |LF_GCINFO, LL_ALWAYS)) \
{ \
StressLog::LogMsgOL("GC is triggered \n"); \
StressLog::LogCallStack ("GC"); \
} \
} WHILE_0
#else //_DEBUG
#define STRESS_LOG_OOM_STACK(size)
#define STRESS_LOG_GC_STACK
#endif //_DEBUG
#endif // DACCESS_COMPILE
//
// forward declarations:
//
class CrstStatic;
class Thread;
typedef DPTR(Thread) PTR_Thread;
class StressLog;
typedef DPTR(StressLog) PTR_StressLog;
class ThreadStressLog;
typedef DPTR(ThreadStressLog) PTR_ThreadStressLog;
struct StressLogChunk;
typedef DPTR(StressLogChunk) PTR_StressLogChunk;
struct DacpStressLogEnumCBArgs;
extern "C" void PopulateDebugHeaders();
//==========================================================================================
// StressLog - per-thread circular queue of stresslog messages
//
class StressLog {
friend void PopulateDebugHeaders();
public:
// private:
unsigned facilitiesToLog; // Bitvector of facilities to log (see loglf.h)
unsigned levelToLog; // log level
unsigned MaxSizePerThread; // maximum number of bytes each thread should have before wrapping
unsigned MaxSizeTotal; // maximum memory allowed for stress log
int32_t totalChunk; // current number of total chunks allocated
PTR_ThreadStressLog logs; // the list of logs for every thread.
int32_t deadCount; // count of dead threads in the log
CrstStatic *pLock; // lock
unsigned __int64 tickFrequency; // number of ticks per second
unsigned __int64 startTimeStamp; // start time from when tick counter started
FILETIME startTime; // time the application started
size_t moduleOffset; // Used to compute format strings.
#ifndef DACCESS_COMPILE
public:
static void Initialize(unsigned facilities, unsigned level, unsigned maxBytesPerThread,
unsigned maxBytesTotal, HANDLE hMod);
// Called at DllMain THREAD_DETACH to recycle thread's logs
static void ThreadDetach(ThreadStressLog *msgs);
static long NewChunk () { return PalInterlockedIncrement (&theLog.totalChunk); }
static long ChunkDeleted () { return PalInterlockedDecrement (&theLog.totalChunk); }
    //the result is not 100% accurate. If multiple threads call this function at the same time,
    //we could allow the total size to be bigger than required. But the memory won't grow forever,
    //and this is not critical, so we don't try to fix the race
static bool AllowNewChunk (long numChunksInCurThread);
    //preallocate stress log chunks for the current thread. The memory we can preallocate is still
    //bounded by the per-thread size limit and the total size limit. If chunksToReserve is 0, we will try to
    //preallocate up to the per-thread size limit
static bool ReserveStressLogChunks (unsigned int chunksToReserve);
// private:
static ThreadStressLog* CreateThreadStressLog(Thread * pThread);
static ThreadStressLog* CreateThreadStressLogHelper(Thread * pThread);
#else // DACCESS_COMPILE
public:
bool Initialize();
// Can't refer to the types in sospriv.h because it drags in windows.h
void EnumerateStressMsgs(/*STRESSMSGCALLBACK*/ void* smcb, /*ENDTHREADLOGCALLBACK*/ void* etcb,
void *token);
void EnumStressLogMemRanges(/*STRESSLOGMEMRANGECALLBACK*/ void* slmrcb, void *token);
// Called while dumping logs after operations are completed, to ensure DAC-caches
// allow the stress logs to be dumped again
void ResetForRead();
ThreadStressLog* FindLatestThreadLog() const;
friend class ClrDataAccess;
#endif // DACCESS_COMPILE
#ifndef DACCESS_COMPILE
public:
FORCEINLINE static bool StressLogOn(unsigned /*facility*/, unsigned level)
{
#if defined(DACCESS_COMPILE)
UNREFERENCED_PARAMETER(level);
return FALSE;
#else
        // In Redhawk we have rationalized facility codes and have far
        // fewer than desktop; as such we'll log all facilities and
        // limit the filtering to the log level...
return
// (theLog.facilitiesToLog & facility)
// &&
(level <= theLog.levelToLog);
#endif
}
static void LogMsg(unsigned facility, int cArgs, const char* format, ... );
// Support functions for STRESS_LOG_VA
// We disable the warning "conversion from 'type' to 'type' of greater size" since everything will
// end up on the stack, and LogMsg will know the size of the variable based on the format string.
#ifdef _MSC_VER
#pragma warning( push )
#pragma warning( disable : 4312 )
#endif
static void LogMsgOL(const char* format)
{ LogMsg(LF_GC, 0, format); }
template < typename T1 >
static void LogMsgOL(const char* format, T1 data1)
{
C_ASSERT(sizeof(T1) <= sizeof(void*));
LogMsg(LF_GC, 1, format, (void*)(size_t)data1);
}
template < typename T1, typename T2 >
static void LogMsgOL(const char* format, T1 data1, T2 data2)
{
C_ASSERT(sizeof(T1) <= sizeof(void*) && sizeof(T2) <= sizeof(void*));
LogMsg(LF_GC, 2, format, (void*)(size_t)data1, (void*)(size_t)data2);
}
template < typename T1, typename T2, typename T3 >
static void LogMsgOL(const char* format, T1 data1, T2 data2, T3 data3)
{
C_ASSERT(sizeof(T1) <= sizeof(void*) && sizeof(T2) <= sizeof(void*) && sizeof(T3) <= sizeof(void*));
LogMsg(LF_GC, 3, format, (void*)(size_t)data1, (void*)(size_t)data2, (void*)(size_t)data3);
}
template < typename T1, typename T2, typename T3, typename T4 >
static void LogMsgOL(const char* format, T1 data1, T2 data2, T3 data3, T4 data4)
{
C_ASSERT(sizeof(T1) <= sizeof(void*) && sizeof(T2) <= sizeof(void*) && sizeof(T3) <= sizeof(void*) && sizeof(T4) <= sizeof(void*));
LogMsg(LF_GC, 4, format, (void*)(size_t)data1, (void*)(size_t)data2, (void*)(size_t)data3, (void*)(size_t)data4);
}
template < typename T1, typename T2, typename T3, typename T4, typename T5 >
static void LogMsgOL(const char* format, T1 data1, T2 data2, T3 data3, T4 data4, T5 data5)
{
C_ASSERT(sizeof(T1) <= sizeof(void*) && sizeof(T2) <= sizeof(void*) && sizeof(T3) <= sizeof(void*) && sizeof(T4) <= sizeof(void*) && sizeof(T5) <= sizeof(void*));
LogMsg(LF_GC, 5, format, (void*)(size_t)data1, (void*)(size_t)data2, (void*)(size_t)data3, (void*)(size_t)data4, (void*)(size_t)data5);
}
template < typename T1, typename T2, typename T3, typename T4, typename T5, typename T6 >
static void LogMsgOL(const char* format, T1 data1, T2 data2, T3 data3, T4 data4, T5 data5, T6 data6)
{
C_ASSERT(sizeof(T1) <= sizeof(void*) && sizeof(T2) <= sizeof(void*) && sizeof(T3) <= sizeof(void*) && sizeof(T4) <= sizeof(void*) && sizeof(T5) <= sizeof(void*) && sizeof(T6) <= sizeof(void*));
LogMsg(LF_GC, 6, format, (void*)(size_t)data1, (void*)(size_t)data2, (void*)(size_t)data3, (void*)(size_t)data4, (void*)(size_t)data5, (void*)(size_t)data6);
}
template < typename T1, typename T2, typename T3, typename T4, typename T5, typename T6, typename T7 >
static void LogMsgOL(const char* format, T1 data1, T2 data2, T3 data3, T4 data4, T5 data5, T6 data6, T7 data7)
{
C_ASSERT(sizeof(T1) <= sizeof(void*) && sizeof(T2) <= sizeof(void*) && sizeof(T3) <= sizeof(void*) && sizeof(T4) <= sizeof(void*) && sizeof(T5) <= sizeof(void*) && sizeof(T6) <= sizeof(void*) && sizeof(T7) <= sizeof(void*));
LogMsg(LF_GC, 7, format, (void*)(size_t)data1, (void*)(size_t)data2, (void*)(size_t)data3, (void*)(size_t)data4, (void*)(size_t)data5, (void*)(size_t)data6, (void*)(size_t)data7);
}
#ifdef _MSC_VER
#pragma warning( pop )
#endif
// We can only log the stacktrace on DEBUG builds!
#ifdef _DEBUG
static void LogCallStack(const char *const callTag);
#endif //_DEBUG
#endif // DACCESS_COMPILE
// private: // static variables
static StressLog theLog; // We only have one log, and this is it
};
//==========================================================================================
// Private classes
//
#if defined(_MSC_VER)
// don't warn about 0 sized array below or unnamed structures
#pragma warning(disable:4200 4201)
#endif
//==========================================================================================
// StressMsg
//
// The order of fields is important. Keep the prefix length as the first field.
// And make sure the timeStamp field is naturally aligned, so we don't waste
// space on 32-bit platforms
//
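// Illustrative size math, assuming a 64-bit build: the argument count and
// format offset pack into a single uint32_t, so the fixed header below is
// 4 + 4 + 8 = 16 bytes and maxMsgSize() = 16 + 7 * sizeof(void*) = 72 bytes.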
struct StressMsg {
union {
struct {
uint32_t numberOfArgs : 3; // at most 7 arguments
uint32_t formatOffset : 29; // offset of string in mscorwks
};
uint32_t fmtOffsCArgs; // for optimized access
};
uint32_t facility; // facility used to log the entry
    unsigned __int64 timeStamp;     // time when message was logged
void* args[0]; // size given by numberOfArgs
static const size_t maxArgCnt = 7;
static const size_t maxOffset = 0x20000000;
static size_t maxMsgSize ()
{ return sizeof(StressMsg) + maxArgCnt*sizeof(void*); }
friend void PopulateDebugHeaders();
friend class ThreadStressLog;
friend class StressLog;
};
#ifdef _WIN64
#define STRESSLOG_CHUNK_SIZE (32 * 1024)
#else //_WIN64
#define STRESSLOG_CHUNK_SIZE (16 * 1024)
#endif //_WIN64
#define GC_STRESSLOG_MULTIPLY (5)
//==========================================================================================
// StressLogChunk
//
// A chunk of contiguous memory containing instances of StressMsg
//
struct StressLogChunk
{
PTR_StressLogChunk prev;
PTR_StressLogChunk next;
char buf[STRESSLOG_CHUNK_SIZE];
uint32_t dwSig1;
uint32_t dwSig2;
#ifndef DACCESS_COMPILE
StressLogChunk (PTR_StressLogChunk p = NULL, PTR_StressLogChunk n = NULL)
:prev (p), next (n), dwSig1 (0xCFCFCFCF), dwSig2 (0xCFCFCFCF)
{}
#endif //!DACCESS_COMPILE
char * StartPtr ()
{
return buf;
}
char * EndPtr ()
{
return buf + STRESSLOG_CHUNK_SIZE;
}
bool IsValid () const
{
return dwSig1 == 0xCFCFCFCF && dwSig2 == 0xCFCFCFCF;
}
};
//==========================================================================================
// ThreadStressLog
//
// This class implements a circular stack of variable sized elements
// .The buffer between startPtr-endPtr is used in a circular manner
// to store instances of the variable-sized struct StressMsg.
// The StressMsgs are always aligned to endPtr, while the space
// left between startPtr and the last element is 0-padded.
// .curPtr points to the most recently written log message
// .readPtr points to the next log message to be dumped
// .hasWrapped is TRUE while dumping the log, if we had wrapped
// past the endPtr marker, back to startPtr
// The AdvanceRead/AdvanceWrite operations simply update the
// readPtr / curPtr fields; the caller is responsible for reading/writing
// to the corresponding field
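// Rough sketch of the chunk ring (illustrative): the chunks form a circular
// doubly-linked list,
//   chunkListHead <-> ... <-> chunkListTail <-> (back to chunkListHead)
// Within a chunk, messages are written from EndPtr down toward StartPtr; when
// the writer reaches StartPtr it tries to grow the list at the head, and
// otherwise wraps around and overwrites the oldest messages.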
class ThreadStressLog {
PTR_ThreadStressLog next; // we keep a linked list of these
uint64_t threadId; // the id for the thread using this buffer
bool isDead; // Is this thread dead
    bool readHasWrapped;            // set when read ptr has passed chunkListTail
    bool writeHasWrapped;           // set when write ptr has passed chunkListHead
StressMsg* curPtr; // where packets are being put on the queue
StressMsg* readPtr; // where we are reading off the queue (used during dumping)
PTR_StressLogChunk chunkListHead; //head of a list of stress log chunks
PTR_StressLogChunk chunkListTail; //tail of a list of stress log chunks
PTR_StressLogChunk curReadChunk; //the stress log chunk we are currently reading
PTR_StressLogChunk curWriteChunk; //the stress log chunk we are currently writing
long chunkListLength; // how many stress log chunks are in this stress log
PTR_Thread pThread; // thread associated with these stress logs
StressMsg * origCurPtr; // this holds the original curPtr before we start the dump
friend void PopulateDebugHeaders();
friend class StressLog;
#ifndef DACCESS_COMPILE
public:
inline ThreadStressLog ();
inline ~ThreadStressLog ();
void LogMsg ( uint32_t facility, int cArgs, const char* format, ... )
{
va_list Args;
va_start(Args, format);
LogMsg (facility, cArgs, format, Args);
}
void LogMsg ( uint32_t facility, int cArgs, const char* format, va_list Args);
private:
FORCEINLINE StressMsg* AdvanceWrite(int cArgs);
inline StressMsg* AdvWritePastBoundary(int cArgs);
FORCEINLINE bool GrowChunkList ();
#else // DACCESS_COMPILE
public:
friend class ClrDataAccess;
// Called while dumping. Returns true after all messages in log were dumped
FORCEINLINE bool CompletedDump ();
private:
FORCEINLINE bool IsReadyForRead() { return readPtr != NULL; }
FORCEINLINE StressMsg* AdvanceRead();
inline StressMsg* AdvReadPastBoundary();
#endif //!DACCESS_COMPILE
public:
void Activate (Thread * pThread);
bool IsValid () const
{
return chunkListHead != NULL && (!curWriteChunk || curWriteChunk->IsValid ());
}
static const char* gcStartMsg()
{
return "{ =========== BEGINGC %d, (requested generation = %lu, collect_classes = %lu) ==========\n";
}
static const char* gcEndMsg()
{
return "========== ENDGC %d (gen = %lu, collect_classes = %lu) ===========}\n";
}
static const char* gcRootMsg()
{
return " GC Root %p RELOCATED %p -> %p MT = %pT\n";
}
static const char* gcRootPromoteMsg()
{
return " GCHeap::Promote: Promote GC Root *%p = %p MT = %pT\n";
}
static const char* gcPlugMoveMsg()
{
return "GC_HEAP RELOCATING Objects in heap within range [%p %p) by -0x%x bytes\n";
}
};
//==========================================================================================
// Inline implementations:
//
#ifdef DACCESS_COMPILE
//------------------------------------------------------------------------------------------
// Called while dumping. Returns true after all messages in log were dumped
FORCEINLINE bool ThreadStressLog::CompletedDump ()
{
return readPtr->timeStamp == 0
//if read has passed end of list but write has not passed head of list yet, we are done
//if write has also wrapped, we are at the end if read pointer passed write pointer
|| (readHasWrapped &&
(!writeHasWrapped || (curReadChunk == curWriteChunk && readPtr >= curPtr)));
}
//------------------------------------------------------------------------------------------
// Called when dumping the log (by StressLog::Dump())
// Updates readPtr to point to the next stress message to be dumped
inline StressMsg* ThreadStressLog::AdvanceRead() {
// advance the marker
readPtr = (StressMsg*)((char*)readPtr + sizeof(StressMsg) + readPtr->numberOfArgs*sizeof(void*));
// wrap around if we need to
if (readPtr >= (StressMsg *)curReadChunk->EndPtr ())
{
AdvReadPastBoundary();
}
return readPtr;
}
//------------------------------------------------------------------------------------------
// The factored-out slow codepath for AdvanceRead(), only called by AdvanceRead().
// Updates readPtr to, and returns, the first stress message >= startPtr
inline StressMsg* ThreadStressLog::AdvReadPastBoundary() {
    //if we pass the boundary of the tail chunk, we need to set readHasWrapped
if (curReadChunk == chunkListTail)
{
readHasWrapped = true;
        //If write has not wrapped, we know the contents from the list head to
        //the cur pointer are garbage; we don't need to read them
if (!writeHasWrapped)
{
return readPtr;
}
}
curReadChunk = curReadChunk->next;
void** p = (void**)curReadChunk->StartPtr();
while (*p == NULL && (size_t)(p-(void**)curReadChunk->StartPtr ()) < (StressMsg::maxMsgSize()/sizeof(void*)))
{
++p;
}
    // if we failed to find a valid start of a StressMsg, fall back to startPtr (since timeStamp==0)
if (*p == NULL)
{
p = (void**) curReadChunk->StartPtr ();
}
readPtr = (StressMsg*)p;
return readPtr;
}
#else // DACCESS_COMPILE
//------------------------------------------------------------------------------------------
// Initialize a ThreadStressLog
inline ThreadStressLog::ThreadStressLog()
{
chunkListHead = chunkListTail = curWriteChunk = NULL;
StressLogChunk * newChunk = new (nothrow) StressLogChunk;
    //OOM or in a can't-allocate region
if (newChunk == NULL)
{
return;
}
StressLog::NewChunk ();
newChunk->prev = newChunk;
newChunk->next = newChunk;
chunkListHead = chunkListTail = newChunk;
next = NULL;
isDead = TRUE;
curPtr = NULL;
readPtr = NULL;
writeHasWrapped = FALSE;
curReadChunk = NULL;
curWriteChunk = NULL;
chunkListLength = 1;
origCurPtr = NULL;
}
inline ThreadStressLog::~ThreadStressLog ()
{
    //nothing to do if the list is empty (failed to initialize)
if (chunkListHead == NULL)
{
return;
}
StressLogChunk * chunk = chunkListHead;
do
{
StressLogChunk * tmp = chunk;
chunk = chunk->next;
delete tmp;
StressLog::ChunkDeleted ();
} while (chunk != chunkListHead);
}
//------------------------------------------------------------------------------------------
// Called when logging, checks if we can increase the number of stress log chunks associated
// with the current thread
FORCEINLINE bool ThreadStressLog::GrowChunkList ()
{
_ASSERTE (chunkListLength >= 1);
if (!StressLog::AllowNewChunk (chunkListLength))
{
return FALSE;
}
StressLogChunk * newChunk = new (nothrow) StressLogChunk (chunkListTail, chunkListHead);
if (newChunk == NULL)
{
return FALSE;
}
StressLog::NewChunk ();
chunkListLength++;
chunkListHead->prev = newChunk;
chunkListTail->next = newChunk;
chunkListHead = newChunk;
return TRUE;
}
//------------------------------------------------------------------------------------------
// Called at runtime when writing the log (by StressLog::LogMsg())
// Updates curPtr to point to the next spot in the log where we can write
// a stress message with cArgs arguments
// For convenience it returns a pointer to the empty slot where we can
// write the next stress message.
// cArgs is the number of arguments in the message to be written.
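// Illustrative arithmetic, assuming a 64-bit build where sizeof(StressMsg) == 16:
//   a 3-argument message occupies 16 + 3 * 8 = 40 bytes, so curPtr moves down
//   by 40 bytes and the message is written growing toward the chunk start.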
inline StressMsg* ThreadStressLog::AdvanceWrite(int cArgs) {
// _ASSERTE(cArgs <= StressMsg::maxArgCnt);
// advance the marker
StressMsg* p = (StressMsg*)((char*)curPtr - sizeof(StressMsg) - cArgs*sizeof(void*));
//past start of current chunk
//wrap around if we need to
if (p < (StressMsg*)curWriteChunk->StartPtr ())
{
curPtr = AdvWritePastBoundary(cArgs);
}
else
{
curPtr = p;
}
return curPtr;
}
//------------------------------------------------------------------------------------------
// This is the factored-out slow codepath for AdvanceWrite() and is only called by
// AdvanceWrite().
// Returns the stress message flushed against the chunk's EndPtr
// In addition it writes NULLs between startPtr and curPtr
inline StressMsg* ThreadStressLog::AdvWritePastBoundary(int cArgs) {
    //zero out the remaining buffer
memset (curWriteChunk->StartPtr (), 0, (char *)curPtr - (char *)curWriteChunk->StartPtr ());
//if we are already at head of the list, try to grow the list
if (curWriteChunk == chunkListHead)
{
GrowChunkList ();
}
curWriteChunk = curWriteChunk->prev;
if (curWriteChunk == chunkListTail)
{
writeHasWrapped = TRUE;
}
curPtr = (StressMsg*)((char*)curWriteChunk->EndPtr () - sizeof(StressMsg) - cArgs * sizeof(void*));
return curPtr;
}
#endif // DACCESS_COMPILE
#endif // STRESS_LOG
#ifndef __GCENV_BASE_INCLUDED__
#if !defined(STRESS_LOG) || defined(DACCESS_COMPILE)
#define STRESS_LOG_VA(msg) do { } WHILE_0
#define STRESS_LOG0(facility, level, msg) do { } WHILE_0
#define STRESS_LOG1(facility, level, msg, data1) do { } WHILE_0
#define STRESS_LOG2(facility, level, msg, data1, data2) do { } WHILE_0
#define STRESS_LOG3(facility, level, msg, data1, data2, data3) do { } WHILE_0
#define STRESS_LOG4(facility, level, msg, data1, data2, data3, data4) do { } WHILE_0
#define STRESS_LOG5(facility, level, msg, data1, data2, data3, data4, data5) do { } WHILE_0
#define STRESS_LOG6(facility, level, msg, data1, data2, data3, data4, data5, data6) do { } WHILE_0
#define STRESS_LOG7(facility, level, msg, data1, data2, data3, data4, data5, data6, data7) do { } WHILE_0
#define STRESS_LOG_PLUG_MOVE(plug_start, plug_end, plug_delta) do { } WHILE_0
#define STRESS_LOG_ROOT_PROMOTE(root_addr, objPtr, methodTable) do { } WHILE_0
#define STRESS_LOG_ROOT_RELOCATE(root_addr, old_value, new_value, methodTable) do { } WHILE_0
#define STRESS_LOG_GC_START(gcCount, Gen, collectClasses) do { } WHILE_0
#define STRESS_LOG_GC_END(gcCount, Gen, collectClasses) do { } WHILE_0
#define STRESS_LOG_OOM_STACK(size) do { } WHILE_0
#define STRESS_LOG_GC_STACK do { } WHILE_0
#define STRESS_LOG_RESERVE_MEM(numChunks) do { } WHILE_0
#endif // !STRESS_LOG || DACCESS_COMPILE
#endif // !__GCENV_BASE_INCLUDED__
#endif // StressLog_h
| // Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
// ---------------------------------------------------------------------------
// StressLog.h
//
// StressLog infrastructure
//
// The StressLog is a binary, memory based circular queue of logging messages.
// It is intended to be used in retail builds during stress runs (activated
// by registry key), to help find bugs that only turn up during stress runs.
//
// Unlike the desktop implementation, the RH implementation of the
// stress log logs all facilities and filters only on the logging level.
//
// The log has a very simple structure, and is meant to be dumped from an NTSD
// extension (e.g. strike).
//
// debug\rhsos\stresslogdump.cpp contains the dumper utility that parses this
// log.
// ---------------------------------------------------------------------------
#ifndef StressLog_h
#define StressLog_h 1
#ifdef _MSC_VER
#define SUPPRESS_WARNING_4127 \
__pragma(warning(push)) \
__pragma(warning(disable:4127)) /* conditional expression is constant*/
#define POP_WARNING_STATE \
__pragma(warning(pop))
#else // _MSC_VER
#define SUPPRESS_WARNING_4127
#define POP_WARNING_STATE
#endif // _MSC_VER
#define WHILE_0 \
SUPPRESS_WARNING_4127 \
while(0) \
    POP_WARNING_STATE
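// A quick illustration of why the do { ... } WHILE_0 wrapper is used: it makes
// each multi-statement logging macro behave as a single statement, so a call
// such as (names here are made up)
//   if (pObj == NULL)
//       STRESS_LOG0(LF_GC, LL_ERROR, "null object\n");
//   else
//       ...
// expands safely even when the if-body has no braces.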
// let's keep STRESS_LOG defined always...
#if !defined(STRESS_LOG) && !defined(NO_STRESS_LOG)
#define STRESS_LOG
#endif
#if defined(STRESS_LOG)
//
// Logging levels and facilities
//
#define DEFINE_LOG_FACILITY(logname, value) logname = value,
enum LogFacilitiesEnum: unsigned int {
#include "loglf.h"
    LF_ALWAYS  = 0x80000000u, // Log message irrespective of LogFacility (if the level matches)
LF_ALL = 0xFFFFFFFFu, // Used only to mask bits. Never use as LOG((LF_ALL, ...))
};
#define LL_EVERYTHING 10
#define LL_INFO1000000 9       // can be expected to generate 1,000,000 logs per small but not trivial run
#define LL_INFO100000  8       // can be expected to generate 100,000 logs per small but not trivial run
#define LL_INFO10000   7       // can be expected to generate 10,000 logs per small but not trivial run
#define LL_INFO1000    6       // can be expected to generate 1,000 logs per small but not trivial run
#define LL_INFO100     5       // can be expected to generate 100 logs per small but not trivial run
#define LL_INFO10      4       // can be expected to generate 10 logs per small but not trivial run
#define LL_WARNING 3
#define LL_ERROR 2
#define LL_FATALERROR 1
#define LL_ALWAYS 0 // impossible to turn off (log level never negative)
//
//
//
#ifndef _ASSERTE
#define _ASSERTE(expr)
#endif
#ifndef DACCESS_COMPILE
//==========================================================================================
// The STRESS_LOG* macros
//
// The STRESS_LOG* macros work like printf. In fact they use printf in their implementation
// so all printf format specifications work. In addition the Stress log dumper knows
// about certain suffixes for the %p format specification (normally used to print a pointer)
//
// %pM // The pointer is a MethodInfo -- not supported yet (use %pK instead)
// %pT // The pointer is a type (MethodTable)
// %pV // The pointer is a C++ Vtable pointer
// %pK // The pointer is a code address (used for call stacks or method names)
//
// STRESS_LOG_VA was added to allow sending GC trace output to the stress log. msg must be enclosed
// in ()'s and contain a format string followed by 0 - 4 arguments. The arguments must be numbers or
// string literals. LogMsgOL is overloaded so that all of the possible sets of parameters are covered.
// This was done because GC Trace uses dprintf, which doesn't carry info on how many arguments are
// being passed in, and using va_args would require parsing the format string during the GC
//
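// An illustrative call (the facility, level, and arguments here are made up):
//   STRESS_LOG2(LF_GC, LL_INFO100, "promoting object %p, MT = %pT\n", pObj, pMT);
// Only the raw argument values are stored in the log entry at runtime; the
// dumper later re-applies the format string, which it locates by its offset
// within the module.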
#define STRESS_LOG_VA(msg) do { \
if (StressLog::StressLogOn(LF_GC, LL_ALWAYS)) \
StressLog::LogMsgOL msg; \
} WHILE_0
#define STRESS_LOG0(facility, level, msg) do { \
if (StressLog::StressLogOn(facility, level)) \
StressLog::LogMsg(facility, 0, msg); \
    } WHILE_0
#define STRESS_LOG1(facility, level, msg, data1) do { \
if (StressLog::StressLogOn(facility, level)) \
StressLog::LogMsg(facility, 1, msg, (void*)(size_t)(data1)); \
} WHILE_0
#define STRESS_LOG2(facility, level, msg, data1, data2) do { \
if (StressLog::StressLogOn(facility, level)) \
StressLog::LogMsg(facility, 2, msg, \
(void*)(size_t)(data1), (void*)(size_t)(data2)); \
} WHILE_0
#define STRESS_LOG3(facility, level, msg, data1, data2, data3) do { \
if (StressLog::StressLogOn(facility, level)) \
StressLog::LogMsg(facility, 3, msg, \
(void*)(size_t)(data1),(void*)(size_t)(data2),(void*)(size_t)(data3)); \
} WHILE_0
#define STRESS_LOG4(facility, level, msg, data1, data2, data3, data4) do { \
if (StressLog::StressLogOn(facility, level)) \
StressLog::LogMsg(facility, 4, msg, (void*)(size_t)(data1), \
(void*)(size_t)(data2),(void*)(size_t)(data3),(void*)(size_t)(data4)); \
} WHILE_0
#define STRESS_LOG5(facility, level, msg, data1, data2, data3, data4, data5) do { \
if (StressLog::StressLogOn(facility, level)) \
StressLog::LogMsg(facility, 5, msg, (void*)(size_t)(data1), \
(void*)(size_t)(data2),(void*)(size_t)(data3),(void*)(size_t)(data4), \
(void*)(size_t)(data5)); \
} WHILE_0
#define STRESS_LOG6(facility, level, msg, data1, data2, data3, data4, data5, data6) do { \
if (StressLog::StressLogOn(facility, level)) \
StressLog::LogMsg(facility, 6, msg, (void*)(size_t)(data1), \
(void*)(size_t)(data2),(void*)(size_t)(data3),(void*)(size_t)(data4), \
(void*)(size_t)(data5), (void*)(size_t)(data6)); \
} WHILE_0
#define STRESS_LOG7(facility, level, msg, data1, data2, data3, data4, data5, data6, data7) do { \
if (StressLog::StressLogOn(facility, level)) \
StressLog::LogMsg(facility, 7, msg, (void*)(size_t)(data1), \
(void*)(size_t)(data2),(void*)(size_t)(data3),(void*)(size_t)(data4), \
(void*)(size_t)(data5), (void*)(size_t)(data6), (void*)(size_t)(data7)); \
} WHILE_0
#define STRESS_LOG_COND0(facility, level, cond, msg) do {                      \
if (StressLog::StressLogOn(facility, level) && (cond)) \
StressLog::LogMsg(facility, 0, msg); \
} WHILE_0
#define STRESS_LOG_COND1(facility, level, cond, msg, data1) do { \
if (StressLog::StressLogOn(facility, level) && (cond)) \
StressLog::LogMsg(facility, 1, msg, (void*)(size_t)(data1)); \
} WHILE_0
#define STRESS_LOG_COND2(facility, level, cond, msg, data1, data2) do { \
if (StressLog::StressLogOn(facility, level) && (cond)) \
StressLog::LogMsg(facility, 2, msg, \
(void*)(size_t)(data1), (void*)(size_t)(data2)); \
} WHILE_0
#define STRESS_LOG_COND3(facility, level, cond, msg, data1, data2, data3) do { \
if (StressLog::StressLogOn(facility, level) && (cond)) \
StressLog::LogMsg(facility, 3, msg, \
(void*)(size_t)(data1),(void*)(size_t)(data2),(void*)(size_t)(data3)); \
} WHILE_0
#define STRESS_LOG_COND4(facility, level, cond, msg, data1, data2, data3, data4) do { \
if (StressLog::StressLogOn(facility, level) && (cond)) \
StressLog::LogMsg(facility, 4, msg, (void*)(size_t)(data1), \
(void*)(size_t)(data2),(void*)(size_t)(data3),(void*)(size_t)(data4)); \
} WHILE_0
#define STRESS_LOG_COND5(facility, level, cond, msg, data1, data2, data3, data4, data5) do { \
if (StressLog::StressLogOn(facility, level) && (cond)) \
StressLog::LogMsg(facility, 5, msg, (void*)(size_t)(data1), \
(void*)(size_t)(data2),(void*)(size_t)(data3),(void*)(size_t)(data4), \
(void*)(size_t)(data5)); \
} WHILE_0
#define STRESS_LOG_COND6(facility, level, cond, msg, data1, data2, data3, data4, data5, data6) do { \
if (StressLog::StressLogOn(facility, level) && (cond)) \
StressLog::LogMsg(facility, 6, msg, (void*)(size_t)(data1), \
(void*)(size_t)(data2),(void*)(size_t)(data3),(void*)(size_t)(data4), \
(void*)(size_t)(data5), (void*)(size_t)(data6)); \
} WHILE_0
#define STRESS_LOG_COND7(facility, level, cond, msg, data1, data2, data3, data4, data5, data6, data7) do { \
if (StressLog::StressLogOn(facility, level) && (cond)) \
StressLog::LogMsg(facility, 7, msg, (void*)(size_t)(data1), \
(void*)(size_t)(data2),(void*)(size_t)(data3),(void*)(size_t)(data4), \
(void*)(size_t)(data5), (void*)(size_t)(data6), (void*)(size_t)(data7)); \
} WHILE_0
#define STRESS_LOG_RESERVE_MEM(numChunks) do { \
if (StressLog::StressLogOn(LF_ALL, LL_ALWAYS)) \
{StressLog::ReserveStressLogChunks (numChunks);} \
} WHILE_0
// !!! WARNING !!!
// !!! DO NOT ADD STRESS_LOG8, as the stress log infrastructure supports a maximum of 7 arguments
// !!! WARNING !!!
#define STRESS_LOG_PLUG_MOVE(plug_start, plug_end, plug_delta) do { \
if (StressLog::StressLogOn(LF_GC, LL_INFO1000)) \
StressLog::LogMsg(LF_GC, 3, ThreadStressLog::gcPlugMoveMsg(), \
(void*)(size_t)(plug_start), (void*)(size_t)(plug_end), (void*)(size_t)(plug_delta)); \
} WHILE_0
#define STRESS_LOG_ROOT_PROMOTE(root_addr, objPtr, methodTable) do { \
if (StressLog::StressLogOn(LF_GC|LF_GCROOTS, LL_INFO1000)) \
StressLog::LogMsg(LF_GC|LF_GCROOTS, 3, ThreadStressLog::gcRootPromoteMsg(), \
(void*)(size_t)(root_addr), (void*)(size_t)(objPtr), (void*)(size_t)(methodTable)); \
} WHILE_0
#define STRESS_LOG_ROOT_RELOCATE(root_addr, old_value, new_value, methodTable) do { \
if (StressLog::StressLogOn(LF_GC|LF_GCROOTS, LL_INFO1000) && ((size_t)(old_value) != (size_t)(new_value))) \
StressLog::LogMsg(LF_GC|LF_GCROOTS, 4, ThreadStressLog::gcRootMsg(), \
(void*)(size_t)(root_addr), (void*)(size_t)(old_value), \
(void*)(size_t)(new_value), (void*)(size_t)(methodTable)); \
} WHILE_0
#define STRESS_LOG_GC_START(gcCount, Gen, collectClasses) do { \
if (StressLog::StressLogOn(LF_GCROOTS|LF_GC|LF_GCALLOC, LL_INFO10)) \
StressLog::LogMsg(LF_GCROOTS|LF_GC|LF_GCALLOC, 3, ThreadStressLog::gcStartMsg(), \
(void*)(size_t)(gcCount), (void*)(size_t)(Gen), (void*)(size_t)(collectClasses)); \
} WHILE_0
#define STRESS_LOG_GC_END(gcCount, Gen, collectClasses) do { \
if (StressLog::StressLogOn(LF_GCROOTS|LF_GC|LF_GCALLOC, LL_INFO10)) \
StressLog::LogMsg(LF_GCROOTS|LF_GC|LF_GCALLOC, 3, ThreadStressLog::gcEndMsg(),\
(void*)(size_t)(gcCount), (void*)(size_t)(Gen), (void*)(size_t)(collectClasses), 0);\
} WHILE_0
#if defined(_DEBUG)
#define MAX_CALL_STACK_TRACE 20
#define STRESS_LOG_OOM_STACK(size) do { \
if (StressLog::StressLogOn(LF_ALWAYS, LL_ALWAYS)) \
{ \
StressLog::LogMsgOL("OOM on alloc of size %x \n", (void*)(size_t)(size)); \
StressLog::LogCallStack ("OOM"); \
} \
} WHILE_0
#define STRESS_LOG_GC_STACK do { \
if (StressLog::StressLogOn(LF_GC |LF_GCINFO, LL_ALWAYS)) \
{ \
StressLog::LogMsgOL("GC is triggered \n"); \
StressLog::LogCallStack ("GC"); \
} \
} WHILE_0
#else //_DEBUG
#define STRESS_LOG_OOM_STACK(size)
#define STRESS_LOG_GC_STACK
#endif //_DEBUG
#endif // DACCESS_COMPILE
//
// forward declarations:
//
class CrstStatic;
class Thread;
typedef DPTR(Thread) PTR_Thread;
class StressLog;
typedef DPTR(StressLog) PTR_StressLog;
class ThreadStressLog;
typedef DPTR(ThreadStressLog) PTR_ThreadStressLog;
struct StressLogChunk;
typedef DPTR(StressLogChunk) PTR_StressLogChunk;
struct DacpStressLogEnumCBArgs;
extern "C" void PopulateDebugHeaders();
//==========================================================================================
// StressLog - per-thread circular queue of stresslog messages
//
class StressLog {
friend void PopulateDebugHeaders();
public:
// private:
unsigned facilitiesToLog; // Bitvector of facilities to log (see loglf.h)
unsigned levelToLog; // log level
unsigned MaxSizePerThread; // maximum number of bytes each thread should have before wrapping
unsigned MaxSizeTotal; // maximum memory allowed for stress log
int32_t totalChunk; // current number of total chunks allocated
PTR_ThreadStressLog logs; // the list of logs for every thread.
int32_t deadCount; // count of dead threads in the log
CrstStatic *pLock; // lock
unsigned __int64 tickFrequency; // number of ticks per second
unsigned __int64 startTimeStamp; // start time from when tick counter started
FILETIME startTime; // time the application started
size_t moduleOffset; // Used to compute format strings.
#ifndef DACCESS_COMPILE
public:
static void Initialize(unsigned facilities, unsigned level, unsigned maxBytesPerThread,
unsigned maxBytesTotal, HANDLE hMod);
// Called at DllMain THREAD_DETACH to recycle thread's logs
static void ThreadDetach(ThreadStressLog *msgs);
static long NewChunk () { return PalInterlockedIncrement (&theLog.totalChunk); }
static long ChunkDeleted () { return PalInterlockedDecrement (&theLog.totalChunk); }
    //the result is not 100% accurate. If multiple threads call this function at the same time,
    //we could allow the total size to be bigger than required. But the memory won't grow forever,
    //and this is not critical, so we don't try to fix the race
static bool AllowNewChunk (long numChunksInCurThread);
    //preallocate stress log chunks for the current thread. The memory we can preallocate is still
    //bounded by the per-thread size limit and the total size limit. If chunksToReserve is 0, we will try to
    //preallocate up to the per-thread size limit
static bool ReserveStressLogChunks (unsigned int chunksToReserve);
// private:
static ThreadStressLog* CreateThreadStressLog(Thread * pThread);
static ThreadStressLog* CreateThreadStressLogHelper(Thread * pThread);
#else // DACCESS_COMPILE
public:
bool Initialize();
// Can't refer to the types in sospriv.h because it drags in windows.h
void EnumerateStressMsgs(/*STRESSMSGCALLBACK*/ void* smcb, /*ENDTHREADLOGCALLBACK*/ void* etcb,
void *token);
void EnumStressLogMemRanges(/*STRESSLOGMEMRANGECALLBACK*/ void* slmrcb, void *token);
// Called while dumping logs after operations are completed, to ensure DAC-caches
// allow the stress logs to be dumped again
void ResetForRead();
ThreadStressLog* FindLatestThreadLog() const;
friend class ClrDataAccess;
#endif // DACCESS_COMPILE
#ifndef DACCESS_COMPILE
public:
FORCEINLINE static bool StressLogOn(unsigned /*facility*/, unsigned level)
{
#if defined(DACCESS_COMPILE)
UNREFERENCED_PARAMETER(level);
return FALSE;
#else
        // In Redhawk we have rationalized facility codes and have far
        // fewer than desktop; as such we'll log all facilities and
        // limit the filtering to the log level...
return
// (theLog.facilitiesToLog & facility)
// &&
(level <= theLog.levelToLog);
#endif
}
static void LogMsg(unsigned facility, int cArgs, const char* format, ... );
// Support functions for STRESS_LOG_VA
// We disable the warning "conversion from 'type' to 'type' of greater size" since everything will
// end up on the stack, and LogMsg will know the size of the variable based on the format string.
#ifdef _MSC_VER
#pragma warning( push )
#pragma warning( disable : 4312 )
#endif
static void LogMsgOL(const char* format)
{ LogMsg(LF_GC, 0, format); }
template < typename T1 >
static void LogMsgOL(const char* format, T1 data1)
{
C_ASSERT(sizeof(T1) <= sizeof(void*));
LogMsg(LF_GC, 1, format, (void*)(size_t)data1);
}
template < typename T1, typename T2 >
static void LogMsgOL(const char* format, T1 data1, T2 data2)
{
C_ASSERT(sizeof(T1) <= sizeof(void*) && sizeof(T2) <= sizeof(void*));
LogMsg(LF_GC, 2, format, (void*)(size_t)data1, (void*)(size_t)data2);
}
template < typename T1, typename T2, typename T3 >
static void LogMsgOL(const char* format, T1 data1, T2 data2, T3 data3)
{
C_ASSERT(sizeof(T1) <= sizeof(void*) && sizeof(T2) <= sizeof(void*) && sizeof(T3) <= sizeof(void*));
LogMsg(LF_GC, 3, format, (void*)(size_t)data1, (void*)(size_t)data2, (void*)(size_t)data3);
}
template < typename T1, typename T2, typename T3, typename T4 >
static void LogMsgOL(const char* format, T1 data1, T2 data2, T3 data3, T4 data4)
{
C_ASSERT(sizeof(T1) <= sizeof(void*) && sizeof(T2) <= sizeof(void*) && sizeof(T3) <= sizeof(void*) && sizeof(T4) <= sizeof(void*));
LogMsg(LF_GC, 4, format, (void*)(size_t)data1, (void*)(size_t)data2, (void*)(size_t)data3, (void*)(size_t)data4);
}
template < typename T1, typename T2, typename T3, typename T4, typename T5 >
static void LogMsgOL(const char* format, T1 data1, T2 data2, T3 data3, T4 data4, T5 data5)
{
C_ASSERT(sizeof(T1) <= sizeof(void*) && sizeof(T2) <= sizeof(void*) && sizeof(T3) <= sizeof(void*) && sizeof(T4) <= sizeof(void*) && sizeof(T5) <= sizeof(void*));
LogMsg(LF_GC, 5, format, (void*)(size_t)data1, (void*)(size_t)data2, (void*)(size_t)data3, (void*)(size_t)data4, (void*)(size_t)data5);
}
template < typename T1, typename T2, typename T3, typename T4, typename T5, typename T6 >
static void LogMsgOL(const char* format, T1 data1, T2 data2, T3 data3, T4 data4, T5 data5, T6 data6)
{
C_ASSERT(sizeof(T1) <= sizeof(void*) && sizeof(T2) <= sizeof(void*) && sizeof(T3) <= sizeof(void*) && sizeof(T4) <= sizeof(void*) && sizeof(T5) <= sizeof(void*) && sizeof(T6) <= sizeof(void*));
LogMsg(LF_GC, 6, format, (void*)(size_t)data1, (void*)(size_t)data2, (void*)(size_t)data3, (void*)(size_t)data4, (void*)(size_t)data5, (void*)(size_t)data6);
}
template < typename T1, typename T2, typename T3, typename T4, typename T5, typename T6, typename T7 >
static void LogMsgOL(const char* format, T1 data1, T2 data2, T3 data3, T4 data4, T5 data5, T6 data6, T7 data7)
{
C_ASSERT(sizeof(T1) <= sizeof(void*) && sizeof(T2) <= sizeof(void*) && sizeof(T3) <= sizeof(void*) && sizeof(T4) <= sizeof(void*) && sizeof(T5) <= sizeof(void*) && sizeof(T6) <= sizeof(void*) && sizeof(T7) <= sizeof(void*));
LogMsg(LF_GC, 7, format, (void*)(size_t)data1, (void*)(size_t)data2, (void*)(size_t)data3, (void*)(size_t)data4, (void*)(size_t)data5, (void*)(size_t)data6, (void*)(size_t)data7);
}
#ifdef _MSC_VER
#pragma warning( pop )
#endif
// We can only log the stacktrace on DEBUG builds!
#ifdef _DEBUG
static void LogCallStack(const char *const callTag);
#endif //_DEBUG
#endif // DACCESS_COMPILE
// private: // static variables
static StressLog theLog; // We only have one log, and this is it
};
//==========================================================================================
// Private classes
//
#if defined(_MSC_VER)
// don't warn about 0 sized array below or unnamed structures
#pragma warning(disable:4200 4201)
#endif
//==========================================================================================
// StressMsg
//
// The order of fields is important. Keep the prefix length as the first field.
// And make sure the timeStamp field is naturally aligned, so we don't waste
// space on 32-bit platforms
//
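// Illustrative size math, assuming a 64-bit build: the argument count and
// format offset pack into a single uint32_t, so the fixed header below is
// 4 + 4 + 8 = 16 bytes and maxMsgSize() = 16 + 7 * sizeof(void*) = 72 bytes.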
struct StressMsg {
union {
struct {
uint32_t numberOfArgs : 3; // at most 7 arguments
uint32_t formatOffset : 29; // offset of string in mscorwks
};
uint32_t fmtOffsCArgs; // for optimized access
};
uint32_t facility; // facility used to log the entry
    unsigned __int64 timeStamp;     // time when message was logged
void* args[0]; // size given by numberOfArgs
static const size_t maxArgCnt = 7;
static const size_t maxOffset = 0x20000000;
static size_t maxMsgSize ()
{ return sizeof(StressMsg) + maxArgCnt*sizeof(void*); }
friend void PopulateDebugHeaders();
friend class ThreadStressLog;
friend class StressLog;
};
#ifdef _WIN64
#define STRESSLOG_CHUNK_SIZE (32 * 1024)
#else //_WIN64
#define STRESSLOG_CHUNK_SIZE (16 * 1024)
#endif //_WIN64
#define GC_STRESSLOG_MULTIPLY (5)
//==========================================================================================
// StressLogChunk
//
// A chunk of contiguous memory containing instances of StressMsg
//
struct StressLogChunk
{
PTR_StressLogChunk prev;
PTR_StressLogChunk next;
char buf[STRESSLOG_CHUNK_SIZE];
uint32_t dwSig1;
uint32_t dwSig2;
#ifndef DACCESS_COMPILE
StressLogChunk (PTR_StressLogChunk p = NULL, PTR_StressLogChunk n = NULL)
:prev (p), next (n), dwSig1 (0xCFCFCFCF), dwSig2 (0xCFCFCFCF)
{}
#endif //!DACCESS_COMPILE
char * StartPtr ()
{
return buf;
}
char * EndPtr ()
{
return buf + STRESSLOG_CHUNK_SIZE;
}
bool IsValid () const
{
return dwSig1 == 0xCFCFCFCF && dwSig2 == 0xCFCFCFCF;
}
};
//==========================================================================================
// ThreadStressLog
//
// This class implements a circular stack of variable sized elements
// .The buffer between startPtr-endPtr is used in a circular manner
// to store instances of the variable-sized struct StressMsg.
// The StressMsgs are always aligned to endPtr, while the space
// left between startPtr and the last element is 0-padded.
// .curPtr points to the most recently written log message
// .readPtr points to the next log message to be dumped
// .hasWrapped is TRUE while dumping the log, if we had wrapped
// past the endPtr marker, back to startPtr
// The AdvanceRead/AdvanceWrite operations simply update the
// readPtr / curPtr fields; the caller is responsible for reading/writing
// to the corresponding field
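// Rough sketch of the chunk ring (illustrative): the chunks form a circular
// doubly-linked list,
//   chunkListHead <-> ... <-> chunkListTail <-> (back to chunkListHead)
// Within a chunk, messages are written from EndPtr down toward StartPtr; when
// the writer reaches StartPtr it tries to grow the list at the head, and
// otherwise wraps around and overwrites the oldest messages.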
class ThreadStressLog {
PTR_ThreadStressLog next; // we keep a linked list of these
uint64_t threadId; // the id for the thread using this buffer
bool isDead; // Is this thread dead
    bool readHasWrapped;            // set when read ptr has passed chunkListTail
    bool writeHasWrapped;           // set when write ptr has passed chunkListHead
StressMsg* curPtr; // where packets are being put on the queue
StressMsg* readPtr; // where we are reading off the queue (used during dumping)
PTR_StressLogChunk chunkListHead; //head of a list of stress log chunks
PTR_StressLogChunk chunkListTail; //tail of a list of stress log chunks
PTR_StressLogChunk curReadChunk; //the stress log chunk we are currently reading
PTR_StressLogChunk curWriteChunk; //the stress log chunk we are currently writing
long chunkListLength; // how many stress log chunks are in this stress log
PTR_Thread pThread; // thread associated with these stress logs
StressMsg * origCurPtr; // this holds the original curPtr before we start the dump
friend void PopulateDebugHeaders();
friend class StressLog;
#ifndef DACCESS_COMPILE
public:
inline ThreadStressLog ();
inline ~ThreadStressLog ();
void LogMsg ( uint32_t facility, int cArgs, const char* format, ... )
{
va_list Args;
va_start(Args, format);
LogMsg (facility, cArgs, format, Args);
}
void LogMsg ( uint32_t facility, int cArgs, const char* format, va_list Args);
private:
FORCEINLINE StressMsg* AdvanceWrite(int cArgs);
inline StressMsg* AdvWritePastBoundary(int cArgs);
FORCEINLINE bool GrowChunkList ();
#else // DACCESS_COMPILE
public:
friend class ClrDataAccess;
// Called while dumping. Returns true after all messages in log were dumped
FORCEINLINE bool CompletedDump ();
private:
FORCEINLINE bool IsReadyForRead() { return readPtr != NULL; }
FORCEINLINE StressMsg* AdvanceRead();
inline StressMsg* AdvReadPastBoundary();
#endif //!DACCESS_COMPILE
public:
void Activate (Thread * pThread);
bool IsValid () const
{
return chunkListHead != NULL && (!curWriteChunk || curWriteChunk->IsValid ());
}
static const char* gcStartMsg()
{
return "{ =========== BEGINGC %d, (requested generation = %lu, collect_classes = %lu) ==========\n";
}
static const char* gcEndMsg()
{
return "========== ENDGC %d (gen = %lu, collect_classes = %lu) ===========}\n";
}
static const char* gcRootMsg()
{
return " GC Root %p RELOCATED %p -> %p MT = %pT\n";
}
static const char* gcRootPromoteMsg()
{
return " GCHeap::Promote: Promote GC Root *%p = %p MT = %pT\n";
}
static const char* gcPlugMoveMsg()
{
return "GC_HEAP RELOCATING Objects in heap within range [%p %p) by -0x%x bytes\n";
}
};
//==========================================================================================
// Inline implementations:
//
#ifdef DACCESS_COMPILE
//------------------------------------------------------------------------------------------
// Called while dumping. Returns true after all messages in log were dumped
FORCEINLINE bool ThreadStressLog::CompletedDump ()
{
return readPtr->timeStamp == 0
//if read has passed end of list but write has not passed head of list yet, we are done
//if write has also wrapped, we are at the end if read pointer passed write pointer
|| (readHasWrapped &&
(!writeHasWrapped || (curReadChunk == curWriteChunk && readPtr >= curPtr)));
}
//------------------------------------------------------------------------------------------
// Called when dumping the log (by StressLog::Dump())
// Updates readPtr to point to the next stress message to be dumped
inline StressMsg* ThreadStressLog::AdvanceRead() {
// advance the marker
readPtr = (StressMsg*)((char*)readPtr + sizeof(StressMsg) + readPtr->numberOfArgs*sizeof(void*));
// wrap around if we need to
if (readPtr >= (StressMsg *)curReadChunk->EndPtr ())
{
AdvReadPastBoundary();
}
return readPtr;
}
//------------------------------------------------------------------------------------------
// The factored-out slow codepath for AdvanceRead(), only called by AdvanceRead().
// Updates readPtr to, and returns, the first stress message >= startPtr
inline StressMsg* ThreadStressLog::AdvReadPastBoundary() {
    //if we pass the boundary of the tail chunk, we need to set readHasWrapped
if (curReadChunk == chunkListTail)
{
readHasWrapped = true;
        //If write has not wrapped, we know the contents from the list head to
        //the cur pointer are garbage; we don't need to read them
if (!writeHasWrapped)
{
return readPtr;
}
}
curReadChunk = curReadChunk->next;
void** p = (void**)curReadChunk->StartPtr();
while (*p == NULL && (size_t)(p-(void**)curReadChunk->StartPtr ()) < (StressMsg::maxMsgSize()/sizeof(void*)))
{
++p;
}
    // if we failed to find a valid start of a StressMsg, fall back to startPtr (since timeStamp==0)
if (*p == NULL)
{
p = (void**) curReadChunk->StartPtr ();
}
readPtr = (StressMsg*)p;
return readPtr;
}
#else // DACCESS_COMPILE
//------------------------------------------------------------------------------------------
// Initialize a ThreadStressLog
inline ThreadStressLog::ThreadStressLog()
{
chunkListHead = chunkListTail = curWriteChunk = NULL;
StressLogChunk * newChunk = new (nothrow) StressLogChunk;
    //OOM or in a can't-allocate region
if (newChunk == NULL)
{
return;
}
StressLog::NewChunk ();
newChunk->prev = newChunk;
newChunk->next = newChunk;
chunkListHead = chunkListTail = newChunk;
next = NULL;
isDead = TRUE;
curPtr = NULL;
readPtr = NULL;
writeHasWrapped = FALSE;
curReadChunk = NULL;
curWriteChunk = NULL;
chunkListLength = 1;
origCurPtr = NULL;
}
inline ThreadStressLog::~ThreadStressLog ()
{
    //nothing to do if the list is empty (failed to initialize)
if (chunkListHead == NULL)
{
return;
}
StressLogChunk * chunk = chunkListHead;
do
{
StressLogChunk * tmp = chunk;
chunk = chunk->next;
delete tmp;
StressLog::ChunkDeleted ();
} while (chunk != chunkListHead);
}
//------------------------------------------------------------------------------------------
// Called when logging, checks if we can increase the number of stress log chunks associated
// with the current thread
FORCEINLINE bool ThreadStressLog::GrowChunkList ()
{
_ASSERTE (chunkListLength >= 1);
if (!StressLog::AllowNewChunk (chunkListLength))
{
return FALSE;
}
StressLogChunk * newChunk = new (nothrow) StressLogChunk (chunkListTail, chunkListHead);
if (newChunk == NULL)
{
return FALSE;
}
StressLog::NewChunk ();
chunkListLength++;
chunkListHead->prev = newChunk;
chunkListTail->next = newChunk;
chunkListHead = newChunk;
return TRUE;
}
//------------------------------------------------------------------------------------------
// Called at runtime when writing the log (by StressLog::LogMsg())
// Updates curPtr to point to the next spot in the log where we can write
// a stress message with cArgs arguments
// For convenience it returns a pointer to the empty slot where we can
// write the next stress message.
// cArgs is the number of arguments in the message to be written.
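// Illustrative arithmetic, assuming a 64-bit build where sizeof(StressMsg) == 16:
//   a 3-argument message occupies 16 + 3 * 8 = 40 bytes, so curPtr moves down
//   by 40 bytes and the message is written growing toward the chunk start.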
inline StressMsg* ThreadStressLog::AdvanceWrite(int cArgs) {
// _ASSERTE(cArgs <= StressMsg::maxArgCnt);
// advance the marker
StressMsg* p = (StressMsg*)((char*)curPtr - sizeof(StressMsg) - cArgs*sizeof(void*));
//past start of current chunk
//wrap around if we need to
if (p < (StressMsg*)curWriteChunk->StartPtr ())
{
curPtr = AdvWritePastBoundary(cArgs);
}
else
{
curPtr = p;
}
return curPtr;
}
//------------------------------------------------------------------------------------------
// This is the factored-out slow codepath for AdvanceWrite() and is only called by
// AdvanceWrite().
// Returns the stress message flushed against the chunk's EndPtr
// In addition it writes NULLs between startPtr and curPtr
inline StressMsg* ThreadStressLog::AdvWritePastBoundary(int cArgs) {
    //zero out the remaining buffer
memset (curWriteChunk->StartPtr (), 0, (char *)curPtr - (char *)curWriteChunk->StartPtr ());
//if we are already at head of the list, try to grow the list
if (curWriteChunk == chunkListHead)
{
GrowChunkList ();
}
curWriteChunk = curWriteChunk->prev;
if (curWriteChunk == chunkListTail)
{
writeHasWrapped = TRUE;
}
curPtr = (StressMsg*)((char*)curWriteChunk->EndPtr () - sizeof(StressMsg) - cArgs * sizeof(void*));
return curPtr;
}
#endif // DACCESS_COMPILE
#endif // STRESS_LOG
#ifndef __GCENV_BASE_INCLUDED__
#if !defined(STRESS_LOG) || defined(DACCESS_COMPILE)
#define STRESS_LOG_VA(msg) do { } WHILE_0
#define STRESS_LOG0(facility, level, msg) do { } WHILE_0
#define STRESS_LOG1(facility, level, msg, data1) do { } WHILE_0
#define STRESS_LOG2(facility, level, msg, data1, data2) do { } WHILE_0
#define STRESS_LOG3(facility, level, msg, data1, data2, data3) do { } WHILE_0
#define STRESS_LOG4(facility, level, msg, data1, data2, data3, data4) do { } WHILE_0
#define STRESS_LOG5(facility, level, msg, data1, data2, data3, data4, data5) do { } WHILE_0
#define STRESS_LOG6(facility, level, msg, data1, data2, data3, data4, data5, data6) do { } WHILE_0
#define STRESS_LOG7(facility, level, msg, data1, data2, data3, data4, data5, data6, data7) do { } WHILE_0
#define STRESS_LOG_PLUG_MOVE(plug_start, plug_end, plug_delta) do { } WHILE_0
#define STRESS_LOG_ROOT_PROMOTE(root_addr, objPtr, methodTable) do { } WHILE_0
#define STRESS_LOG_ROOT_RELOCATE(root_addr, old_value, new_value, methodTable) do { } WHILE_0
#define STRESS_LOG_GC_START(gcCount, Gen, collectClasses) do { } WHILE_0
#define STRESS_LOG_GC_END(gcCount, Gen, collectClasses) do { } WHILE_0
#define STRESS_LOG_OOM_STACK(size) do { } WHILE_0
#define STRESS_LOG_GC_STACK do { } WHILE_0
#define STRESS_LOG_RESERVE_MEM(numChunks) do { } WHILE_0
#endif // !STRESS_LOG || DACCESS_COMPILE
#endif // !__GCENV_BASE_INCLUDED__
#endif // StressLog_h
| -1 |
dotnet/runtime | PR 66,213 | [mono] Put WeakAttribute support under an ifdef | lambdageek | created 2022-03-04T19:50:51Z | merged 2022-03-08T14:08:40Z | 34880aaeb463cc17268b44aecb80dda0914cfa01 -> 8342a08b8bf0ff2ffb64cd5992e2d60c8d6d63c1
This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension).
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc. are still there. It seemed unnecessary to perturb the AOT format.
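For orientation, a minimal sketch of the now-dead pattern this PR fences off. The attribute name comes from the PR text; the field-targeted usage and the class around it are assumptions inferred from the `has_weak_fields` / `weak_field_indexes` names, not something shown in the PR:

// Hypothetical pre-.NET 5, mono-only C#; System.WeakAttribute no longer exists in CoreLib.
class Cache
{
    [System.Weak] // the runtime treated a field marked this way as a weak reference
    public object Target;
}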
./src/coreclr/tools/Common/TypeSystem/CodeGen/TypeDesc.CodeGen.cs
// Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
namespace Internal.TypeSystem
{
partial class TypeDesc
{
/// <summary>
/// Gets a value indicating whether this is a type that needs to be treated
/// specially.
/// </summary>
public bool IsIntrinsic
{
get
{
return (GetTypeFlags(TypeFlags.IsIntrinsic | TypeFlags.AttributeCacheComputed) & TypeFlags.IsIntrinsic) != 0;
}
}
}
partial class InstantiatedType
{
partial void AddComputedIntrinsicFlag(ref TypeFlags flags)
{
if (_typeDef.IsIntrinsic)
flags |= TypeFlags.IsIntrinsic;
}
}
}
dotnet/runtime | PR 66,213 | [mono] Put WeakAttribute support under an ifdef | lambdageek
./src/libraries/Common/src/Interop/Windows/User32/Interop.GetWindowLong.cs
// Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
using System;
using System.Runtime.InteropServices;
internal static partial class Interop
{
internal static partial class User32
{
[GeneratedDllImport(Libraries.User32, EntryPoint = "GetWindowLongW")]
public static partial int GetWindowLong(IntPtr hWnd, int uCmd);
}
}
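A hedged usage sketch for the wrapper above; the window-handle helper is invented, and the call has to come from inside the same assembly because the wrapper is internal. The constant -16 is GWL_STYLE from the Windows SDK headers:

IntPtr hWnd = GetMainWindowHandle(); // hypothetical helper returning a valid HWND
const int GWL_STYLE = -16;           // index of the window-style value (WinUser.h)
int style = Interop.User32.GetWindowLong(hWnd, GWL_STYLE);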
dotnet/runtime | PR 66,213 | [mono] Put WeakAttribute support under an ifdef | lambdageek
./src/coreclr/utilcode/securityutil.cpp
// Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
#include "stdafx.h"
#include "securityutil.h"
#include "ex.h"
#include "securitywrapper.h"
// These are the rights that we will give to the global section and global events used
// in communicating between debugger and debuggee.
//
// SECTION_ALL_ACCESS is needed for the IPC block. Unfortunately, we DACL our events and
// IPC block identically; otherwise this particular right would not need to bleed into here.
//
#ifndef CLR_IPC_GENERIC_RIGHT
#define CLR_IPC_GENERIC_RIGHT (GENERIC_READ | GENERIC_WRITE | GENERIC_EXECUTE | STANDARD_RIGHTS_ALL | SECTION_ALL_ACCESS)
#endif
//*****************************************************************
// static helper function
//
// helper to form ACL that contains AllowedACE of users of current
// process and target process
//
// [IN] pid - target process id
// [OUT] ppACL - ACL for the process
//
// Clean up -
// Caller must remember to call FreeACL() on *ppACL
//*****************************************************************
HRESULT SecurityUtil::GetACLOfPid(DWORD pid, PACL *ppACL)
{
CONTRACTL
{
NOTHROW;
GC_NOTRIGGER;
}
CONTRACTL_END;
HRESULT hr = S_OK;
_ASSERTE(ppACL);
*ppACL = NULL;
PSID pCurrentProcessSid = NULL;
PSID pTargetProcessSid = NULL;
PSID pTargetProcessAppContainerSid = NULL;
DWORD cSid = 0;
DWORD dwAclSize = 0;
LOG((LF_CORDB, LL_INFO10000,
"SecurityUtil::GetACLOfPid on pid : 0x%08x\n",
pid));
SidBuffer sidCurrentProcess;
SidBuffer sidTargetProcess;
SidBuffer sidTargetProcessAppContainer;
// Get sid for current process.
if (SUCCEEDED(sidCurrentProcess.InitFromProcessNoThrow(GetCurrentProcessId())))
{
pCurrentProcessSid = sidCurrentProcess.GetSid().RawSid();
cSid++;
}
// Get sid for target process.
if (SUCCEEDED(sidTargetProcess.InitFromProcessNoThrow(pid)))
{
pTargetProcessSid = sidTargetProcess.GetSid().RawSid();
cSid++;
}
//FISHY: what is the scenario where only one of the above calls succeeds?
if (cSid == 0)
{
// failed to get any useful sid. Just return.
// need a better error.
//
hr = E_FAIL;
goto exit;
}
hr = sidTargetProcessAppContainer.InitFromProcessAppContainerSidNoThrow(pid);
if (FAILED(hr))
{
goto exit;
}
else if (hr == S_OK)
{
pTargetProcessAppContainerSid = sidTargetProcessAppContainer.GetSid().RawSid();
cSid++;
}
else if(hr == S_FALSE) //not an app container, no sid to add
{
hr = S_OK; // don't leak S_FALSE
}
LOG((LF_CORDB, LL_INFO10000,
"SecurityUtil::GetACLOfPid number of sid : 0x%08x\n",
cSid));
// Now allocate space for the ACL. First calculate the space needed to hold it.
dwAclSize = sizeof(ACL) + (sizeof(ACCESS_ALLOWED_ACE) - sizeof(DWORD)) * cSid;
if (pCurrentProcessSid)
{
dwAclSize += GetLengthSid(pCurrentProcessSid);
}
if (pTargetProcessSid)
{
dwAclSize += GetLengthSid(pTargetProcessSid);
}
if (pTargetProcessAppContainerSid)
{
dwAclSize += GetLengthSid(pTargetProcessAppContainerSid);
}
*ppACL = (PACL) new (nothrow) char[dwAclSize];
if (*ppACL == NULL)
{
hr = E_OUTOFMEMORY;
goto exit;
}
// Initialize ACL
// add each sid to the allowed ace list
if (!InitializeAcl(*ppACL, dwAclSize, ACL_REVISION))
{
hr = HRESULT_FROM_GetLastError();
goto exit;
}
if (pCurrentProcessSid)
{
// add the current process's sid into ACL if we have it
if (!AddAccessAllowedAce(*ppACL, ACL_REVISION, CLR_IPC_GENERIC_RIGHT, pCurrentProcessSid))
{
hr = HRESULT_FROM_GetLastError();
goto exit;
}
}
if (pTargetProcessSid)
{
// add the target process's sid into ACL if we have it
if (!AddAccessAllowedAce(*ppACL, ACL_REVISION, CLR_IPC_GENERIC_RIGHT, pTargetProcessSid))
{
hr = HRESULT_FROM_GetLastError();
goto exit;
}
}
if (pTargetProcessAppContainerSid)
{
// add the target process's AppContainer's sid into ACL if we have it
if (!AddAccessAllowedAce(*ppACL, ACL_REVISION, CLR_IPC_GENERIC_RIGHT, pTargetProcessAppContainerSid))
{
hr = HRESULT_FROM_GetLastError();
goto exit;
}
}
// we had better form a valid ACL to return
_ASSERTE(IsValidAcl(*ppACL));
exit:
if (FAILED(hr) && *ppACL)
{
// free the allocated buffer (*ppACL), not the address of the out-parameter, and clear it
delete [] (reinterpret_cast<char*>(*ppACL));
*ppACL = NULL;
}
return hr;
} // SecurityUtil::GetACLOfPid
//*****************************************************************
// static helper function
//
// free the ACL allocated by SecurityUtil::GetACLOfPid
//
// [IN] pACL - ACL to be freed
//
//*****************************************************************
void SecurityUtil::FreeACL(PACL pACL)
{
CONTRACTL
{
NOTHROW;
GC_NOTRIGGER;
}
CONTRACTL_END;
if (pACL)
{
delete [] (reinterpret_cast<char*>(pACL));
}
} // SecurityUtil::FreeACL
//*****************************************************************
// constructor
//
// [IN] pACL - ACL that this instance of SecurityUtil will hold on to
//
//*****************************************************************
SecurityUtil::SecurityUtil(PACL pACL)
{
CONTRACTL
{
NOTHROW;
GC_NOTRIGGER;
}
CONTRACTL_END;
m_pACL = pACL;
m_pSacl = NULL;
m_fInitialized = false;
}
//*****************************************************************
// destructor
//
// free the ACL that this instance of SecurityUtil holds on to
//
//*****************************************************************
SecurityUtil::~SecurityUtil()
{
CONTRACTL
{
NOTHROW;
GC_NOTRIGGER;
}
CONTRACTL_END;
FreeACL(m_pACL);
FreeACL(m_pSacl);
}
//*****************************************************************
// Initialization function
//
// form the SecurityDescriptor that will represent the m_pACL
//
//*****************************************************************
HRESULT SecurityUtil::Init()
{
CONTRACTL
{
NOTHROW;
GC_NOTRIGGER;
}
CONTRACTL_END;
HRESULT hr = S_OK;
if (m_pACL)
{
if (!InitializeSecurityDescriptor(&m_SD, SECURITY_DESCRIPTOR_REVISION))
{
hr = HRESULT_FROM_GetLastError();
return hr;
}
if (!SetSecurityDescriptorDacl(&m_SD, TRUE, m_pACL, FALSE))
{
hr = HRESULT_FROM_GetLastError();
return hr;
}
m_SA.nLength = sizeof(SECURITY_ATTRIBUTES);
m_SA.lpSecurityDescriptor = &m_SD;
m_SA.bInheritHandle = FALSE;
m_fInitialized = true;
}
return S_OK;
}
// ***************************************************************************
// Initialization function which will call the normal Init and add a
// mandatory label entry to the sacl
//
// Expects hProcess to be a valid handle to the process which has the desired
// mandatory label
// ***************************************************************************
HRESULT SecurityUtil::Init(HANDLE hProcess)
{
CONTRACTL
{
NOTHROW;
GC_NOTRIGGER;
}
CONTRACTL_END;
HRESULT hr = Init();
if (FAILED(hr))
{
return hr;
}
NewArrayHolder<BYTE> pLabel;
hr = GetMandatoryLabelFromProcess(hProcess, &pLabel);
if (FAILED(hr))
{
return hr;
}
TOKEN_MANDATORY_LABEL * ptml = (TOKEN_MANDATORY_LABEL *) pLabel.GetValue();
hr = SetSecurityDescriptorMandatoryLabel(ptml->Label.Sid);
return hr;
}
// ***************************************************************************
// Given a process, this will put the mandatory label into a buffer and point
// ppbLabel at the buffer.
//
// Caller must free ppbLabel via the array "delete []" operator
// ***************************************************************************
HRESULT SecurityUtil::GetMandatoryLabelFromProcess(HANDLE hProcess, LPBYTE * ppbLabel)
{
*ppbLabel = NULL;
DWORD dwSize = 0;
HandleHolder hToken;
DWORD err = 0;
if(!OpenProcessToken(hProcess, TOKEN_QUERY, &hToken))
{
return HRESULT_FROM_GetLastError();
}
if(!GetTokenInformation(hToken, (TOKEN_INFORMATION_CLASS)TokenIntegrityLevel, NULL, 0, &dwSize))
{
err = GetLastError();
}
// We need to make sure that GetTokenInformation failed in a predictable manner so we know that
// dwSize has the correct buffer size in it.
if (err != ERROR_INSUFFICIENT_BUFFER || dwSize == 0)
{
return HRESULT_FROM_WIN32(err);
}
NewArrayHolder<BYTE> pLabel = new (nothrow) BYTE[dwSize];
if (pLabel == NULL)
{
return E_OUTOFMEMORY;
}
if(!GetTokenInformation(hToken, (TOKEN_INFORMATION_CLASS)TokenIntegrityLevel, pLabel, dwSize, &dwSize))
{
return HRESULT_FROM_GetLastError();
}
// Our caller will be freeing the memory so use Extract
*ppbLabel = pLabel.Extract();
return S_OK;
}
//---------------------------------------------------------------------------------------
//
// Returns pointer inside the specified mandatory SID to the DWORD representing the
// integrity level of the process. This DWORD will be one of the
// SECURITY_MANDATORY_*_RID constants.
//
// Arguments:
// psidIntegrityLevelLabel - [in] PSID in which to find the integrity level.
//
// Return Value:
// Pointer to the RID stored in the specified SID. This RID represents the
// integrity level of the process
//
// static
DWORD * SecurityUtil::GetIntegrityLevelFromMandatorySID(PSID psidIntegrityLevelLabel)
{
CONTRACTL
{
NOTHROW;
GC_NOTRIGGER;
}
CONTRACTL_END;
return GetSidSubAuthority(psidIntegrityLevelLabel, (*GetSidSubAuthorityCount(psidIntegrityLevelLabel) - 1));
}
// Creates a mandatory label ace and sets it to be the entry in the
// security descriptor's sacl. This assumes there are no other entries
// in the sacl
HRESULT SecurityUtil::SetSecurityDescriptorMandatoryLabel(PSID psidIntegrityLevelLabel)
{
CONTRACTL
{
NOTHROW;
GC_NOTRIGGER;
}
CONTRACTL_END;
DWORD cbSid = GetLengthSid(psidIntegrityLevelLabel);
DWORD cbAceStart = offsetof(SYSTEM_MANDATORY_LABEL_ACE, SidStart);
// We are about to allocate memory for an ACL and an ACE so we need space for:
// 1) the ACL: sizeof(ACL)
// 2) the entry: the sid is of variable size, so the SYSTEM_MANDATORY_LABEL_ACE
// structure has only the first DWORD of the sid in its definition; to get the
// appropriate size we take the size without SidStart and add the actual size of the sid
DWORD cbSacl = sizeof(ACL) + cbAceStart + cbSid;
NewArrayHolder<BYTE> sacl = new (nothrow) BYTE[cbSacl];
m_pSacl = NULL;
if (sacl == NULL)
{
return E_OUTOFMEMORY;
}
ZeroMemory(sacl.GetValue(), cbSacl);
PACL pSacl = reinterpret_cast<ACL *>(sacl.GetValue());
SYSTEM_MANDATORY_LABEL_ACE * pLabelAce = reinterpret_cast<SYSTEM_MANDATORY_LABEL_ACE *>(sacl.GetValue() + sizeof(ACL));
PSID psid = reinterpret_cast<SID *>(&pLabelAce->SidStart);
// Our buffer now looks like this: sacl and pSacl point at the start of the
// allocation (the ACL header), pLabelAce begins immediately after the ACL header,
// and psid points at the SID embedded at the tail of the ACE.
DWORD dwIntegrityLevel = *(GetIntegrityLevelFromMandatorySID(psidIntegrityLevelLabel));
if (dwIntegrityLevel >= SECURITY_MANDATORY_MEDIUM_RID)
{
// No need to set the integrity level unless it's lower than medium
return S_OK;
}
if(!InitializeAcl(pSacl, cbSacl, ACL_REVISION))
{
return HRESULT_FROM_GetLastError();
}
pSacl->AceCount = 1;
pLabelAce->Header.AceType = SYSTEM_MANDATORY_LABEL_ACE_TYPE;
pLabelAce->Header.AceSize = WORD(cbAceStart + cbSid);
pLabelAce->Mask = SYSTEM_MANDATORY_LABEL_NO_WRITE_UP;
memcpy(psid, psidIntegrityLevelLabel, cbSid);
if(!SetSecurityDescriptorSacl(m_SA.lpSecurityDescriptor, TRUE, pSacl, FALSE))
{
return HRESULT_FROM_GetLastError();
}
// No need to delete the sacl buffer, it will be deleted in the
// destructor of this class
m_pSacl = (PACL)sacl.Extract();
return S_OK;
}
//*****************************************************************
// Return SECURITY_ATTRIBUTES that we form in the Init function
//
// No clean up is needed after calling this function. The destructor of the
// instance will do the right thing. Note that this is designed such that
// we minimize memory allocation, ie the SECURITY_DESCRIPTOR and
// SECURITY_ATTRIBUTES are embedded in the SecurityUtil instance.
//
// Caller should not modify the returned SECURITY_ATTRIBUTES!!!
//*****************************************************************
HRESULT SecurityUtil::GetSA(SECURITY_ATTRIBUTES **ppSA)
{
CONTRACTL
{
NOTHROW;
GC_NOTRIGGER;
}
CONTRACTL_END;
_ASSERTE(ppSA);
if (m_fInitialized == false)
{
_ASSERTE(!"Bad code path!");
*ppSA = NULL;
return E_FAIL;
}
*ppSA = &m_SA;
return S_OK;
}
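// Editorial usage sketch, assembled from the signatures in this file rather than
// from a real call site: obtain a DACL for a target pid, hand ownership to
// SecurityUtil, and let its destructor run FreeACL.
//
//   PACL pACL = NULL;
//   if (SUCCEEDED(SecurityUtil::GetACLOfPid(pid, &pACL)))
//   {
//       SecurityUtil util(pACL);   // takes ownership; ~SecurityUtil calls FreeACL
//       if (SUCCEEDED(util.Init()))
//       {
//           SECURITY_ATTRIBUTES * pSA = NULL;
//           util.GetSA(&pSA);      // pSA points at the embedded SECURITY_ATTRIBUTES
//       }
//   }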
dotnet/runtime | PR 66,213 | [mono] Put WeakAttribute support under an ifdef | lambdageek
./src/tests/JIT/Methodical/refany/gcreport_r.csproj
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<OutputType>Exe</OutputType>
</PropertyGroup>
<PropertyGroup>
<DebugType>None</DebugType>
<Optimize>False</Optimize>
</PropertyGroup>
<ItemGroup>
<Compile Include="gcreport.cs" />
</ItemGroup>
</Project>
dotnet/runtime | PR 66,213 | [mono] Put WeakAttribute support under an ifdef | lambdageek
./src/libraries/System.Private.Xml/src/System/Xml/Xsl/XPath/XPathOperator.cs
// Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
namespace System.Xml.Xsl.XPath
{
// order is important. We are using these values as indexes into the OperatorGroup, QilOperator & XPathOperatorToQilNodeType arrays
// (ValEq - Eq) == (ValGe - Ge)
internal enum XPathOperator
{
/*Unknown */
Unknown = 0,
// XPath 1.0 operators:
/*Logical */
Or,
And,
/*Equality */
Eq,
Ne,
/*Relational*/
Lt,
Le,
Gt,
Ge,
/*Arithmetic*/
Plus,
Minus,
Multiply,
Divide,
Modulo,
/*Negate */
UnaryMinus,
/*Union */
Union,
LastXPath1Operator = Union,
/* XQuery & XPath 2.0 Operators: */
UnaryPlus,
Idiv,
Is,
After,
Before,
Range,
Except,
Intersect,
ValEq,
ValNe,
ValLt,
ValLe,
ValGt,
ValGe
}
}
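A small illustration of the invariant called out in the enum's comment, i.e. (ValEq - Eq) == (ValGe - Ge); the helper below is invented for this sketch and does not exist in the file:

// Maps a general comparison operator to its value-comparison counterpart by a constant
// offset, which only works because the enum members are declared in matching order.
static XPathOperator ToValueComparison(XPathOperator op) =>
    op + (XPathOperator.ValEq - XPathOperator.Eq); // e.g. Lt -> ValLt, Ge -> ValGe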
dotnet/runtime | PR 66,213 | [mono] Put WeakAttribute support under an ifdef | lambdageek
./src/libraries/Microsoft.Win32.Registry/tests/RegistryKey/RegistryKeyCreateSubKeyTestsBase.cs
// Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
using System;
using Xunit;
namespace Microsoft.Win32.RegistryTests
{
public abstract class RegistryKeyCreateSubKeyTestsBase : RegistryTestsBase
{
protected void Verify_CreateSubKey_KeyExists_OpensKeyWithFixedUpName(string expected, Func<RegistryKey> createSubKey)
{
CreateTestRegistrySubKey(expected);
using (RegistryKey key = createSubKey())
{
Assert.NotNull(key);
Assert.Equal(1, TestRegistryKey.SubKeyCount);
Assert.Equal(TestRegistryKey.Name + @"\" + expected, key.Name);
}
}
protected void Verify_CreateSubKey_KeyDoesNotExist_CreatesKeyWithFixedUpName(string expected, Func<RegistryKey> createSubKey)
{
Assert.Null(TestRegistryKey.OpenSubKey(expected));
Assert.Equal(0, TestRegistryKey.SubKeyCount);
using (RegistryKey key = createSubKey())
{
Assert.NotNull(key);
Assert.Equal(1, TestRegistryKey.SubKeyCount);
Assert.Equal(TestRegistryKey.Name + @"\" + expected, key.Name);
}
}
}
}
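A hedged sketch of a derived test using the helper above; the class, test name, and subkey string are invented, while [Fact], TestRegistryKey, and CreateSubKey come from the visible base class and the standard Microsoft.Win32.RegistryKey API:

public class ExampleCreateSubKeyTests : RegistryKeyCreateSubKeyTestsBase
{
    [Fact]
    public void CreateSubKey_WithTrailingSeparator_OpensExistingKey()
    {
        // "Test\" is expected to be fixed up to "Test" when the key already exists.
        Verify_CreateSubKey_KeyExists_OpensKeyWithFixedUpName(
            "Test", () => TestRegistryKey.CreateSubKey("Test\\"));
    }
}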
dotnet/runtime | PR 66,213 | [mono] Put WeakAttribute support under an ifdef | lambdageek
./src/tests/JIT/Methodical/MDArray/DataTypes/char_cs_r.csproj
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<OutputType>Exe</OutputType>
<CLRTestPriority>1</CLRTestPriority>
</PropertyGroup>
<PropertyGroup>
<DebugType>None</DebugType>
<Optimize>False</Optimize>
</PropertyGroup>
<ItemGroup>
<Compile Include="char.cs" />
</ItemGroup>
</Project>
dotnet/runtime | PR 66,213 | [mono] Put WeakAttribute support under an ifdef | lambdageek
./src/coreclr/pal/src/libunwind/src/unwind/ForcedUnwind.c
/* libunwind - a platform-independent unwind library
Copyright (C) 2003-2004 Hewlett-Packard Co
Contributed by David Mosberger-Tang <[email protected]>
This file is part of libunwind.
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. */
#include "unwind-internal.h"
_Unwind_Reason_Code
_Unwind_ForcedUnwind (struct _Unwind_Exception *exception_object,
_Unwind_Stop_Fn stop, void *stop_parameter)
{
struct _Unwind_Context context;
unw_context_t uc;
/* We check "stop" here to tell the compiler's inliner that
exception_object->private_1 isn't NULL when calling
_Unwind_Phase2(). */
if (!stop)
return _URC_FATAL_PHASE2_ERROR;
if (_Unwind_InitContext (&context, &uc) < 0)
return _URC_FATAL_PHASE2_ERROR;
exception_object->private_1 = (unsigned long) stop;
exception_object->private_2 = (unsigned long) stop_parameter;
return _Unwind_Phase2 (exception_object, &context);
}
_Unwind_Reason_Code __libunwind_Unwind_ForcedUnwind (struct _Unwind_Exception*,
_Unwind_Stop_Fn, void *)
ALIAS (_Unwind_ForcedUnwind);
dotnet/runtime | PR 66,213 | [mono] Put WeakAttribute support under an ifdef | lambdageek
./src/libraries/System.ComponentModel.TypeConverter/src/System/ComponentModel/PropertyDescriptor.cs
// Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
using System.Collections;
using System.Collections.Generic;
using System.Diagnostics.CodeAnalysis;
using System.Reflection;
namespace System.ComponentModel
{
/// <summary>
/// Provides a description of a property.
/// </summary>
public abstract class PropertyDescriptor : MemberDescriptor
{
internal const string PropertyDescriptorPropertyTypeMessage = "PropertyDescriptor's PropertyType cannot be statically discovered.";
private TypeConverter? _converter;
private Dictionary<object, EventHandler?>? _valueChangedHandlers;
private object?[]? _editors;
private Type[]? _editorTypes;
private int _editorCount;
/// <summary>
/// Initializes a new instance of the <see cref='System.ComponentModel.PropertyDescriptor'/> class with the specified name and
/// attributes.
/// </summary>
protected PropertyDescriptor(string name, Attribute[]? attrs) : base(name, attrs)
{
}
/// <summary>
/// Initializes a new instance of the <see cref='System.ComponentModel.PropertyDescriptor'/> class with
/// the name and attributes in the specified <see cref='System.ComponentModel.MemberDescriptor'/>.
/// </summary>
protected PropertyDescriptor(MemberDescriptor descr) : base(descr)
{
}
/// <summary>
///
/// Initializes a new instance of the <see cref='System.ComponentModel.PropertyDescriptor'/> class with
/// the name in the specified <see cref='System.ComponentModel.MemberDescriptor'/> and the
/// attributes in both the <see cref='System.ComponentModel.MemberDescriptor'/> and the
/// <see cref='System.Attribute'/> array.
///
/// </summary>
protected PropertyDescriptor(MemberDescriptor descr, Attribute[]? attrs) : base(descr, attrs)
{
}
/// <summary>
/// When overridden in a derived class, gets the type of the
/// component this property is bound to.
/// </summary>
public abstract Type ComponentType { get; }
/// <summary>
/// Gets the type converter for this property.
/// </summary>
public virtual TypeConverter Converter
{
[RequiresUnreferencedCode(PropertyDescriptorPropertyTypeMessage)]
get
{
// Always grab the attribute collection first here, because if the metadata version
// changes it will invalidate our type converter cache.
AttributeCollection attrs = Attributes;
if (_converter == null)
{
TypeConverterAttribute attr = (TypeConverterAttribute)attrs[typeof(TypeConverterAttribute)]!;
if (attr.ConverterTypeName != null && attr.ConverterTypeName.Length > 0)
{
Type? converterType = GetTypeFromName(attr.ConverterTypeName);
if (converterType != null && typeof(TypeConverter).IsAssignableFrom(converterType))
{
_converter = (TypeConverter)CreateInstance(converterType)!;
}
}
if (_converter == null)
{
_converter = TypeDescriptor.GetConverter(PropertyType);
}
}
return _converter;
}
}
/// <summary>
/// Gets a value
/// indicating whether this property should be localized, as
/// specified in the <see cref='System.ComponentModel.LocalizableAttribute'/>.
/// </summary>
public virtual bool IsLocalizable => (LocalizableAttribute.Yes.Equals(Attributes[typeof(LocalizableAttribute)]));
/// <summary>
/// When overridden in a derived class, gets a value indicating whether this
/// property is read-only.
/// </summary>
public abstract bool IsReadOnly { get; }
/// <summary>
/// Gets a value indicating whether this property should be serialized as specified
/// in the <see cref='System.ComponentModel.DesignerSerializationVisibilityAttribute'/>.
/// </summary>
public DesignerSerializationVisibility SerializationVisibility
{
get
{
DesignerSerializationVisibilityAttribute attr = (DesignerSerializationVisibilityAttribute)Attributes[typeof(DesignerSerializationVisibilityAttribute)]!;
return attr.Visibility;
}
}
/// <summary>
/// When overridden in a derived class, gets the type of the property.
/// </summary>
public abstract Type PropertyType { get; }
/// <summary>
/// Allows interested objects to be notified when this property changes.
/// </summary>
public virtual void AddValueChanged(object component!!, EventHandler handler!!)
{
if (_valueChangedHandlers == null)
{
_valueChangedHandlers = new Dictionary<object, EventHandler?>();
}
EventHandler? h = _valueChangedHandlers.GetValueOrDefault(component, defaultValue: null);
_valueChangedHandlers[component] = (EventHandler?)Delegate.Combine(h, handler);
}
/// <summary>
/// When overridden in a derived class, indicates whether
/// resetting the <paramref name="component "/>will change the value of the
/// <paramref name="component"/>.
/// </summary>
public abstract bool CanResetValue(object component);
/// <summary>
/// Compares this to another <see cref='System.ComponentModel.PropertyDescriptor'/>
/// to see if they are equivalent.
/// NOTE: If you make a change here, you likely need to change GetHashCode() as well.
/// </summary>
public override bool Equals([NotNullWhen(true)] object? obj)
{
try
{
if (obj == this)
{
return true;
}
if (obj == null)
{
return false;
}
// Assume that 90% of the time we will only do a .Equals(...) for
// propertydescriptor vs. propertydescriptor... avoid the overhead
// of an instanceof call.
if (obj is PropertyDescriptor pd && pd.NameHashCode == NameHashCode
&& pd.PropertyType == PropertyType
&& pd.Name.Equals(Name))
{
return true;
}
}
catch { }
return false;
}
/// <summary>
/// Creates an instance of the specified type.
/// </summary>
protected object? CreateInstance(
[DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicConstructors)] Type type)
{
Type[] typeArgs = new Type[] { typeof(Type) };
ConstructorInfo? ctor = type.GetConstructor(typeArgs);
if (ctor != null)
{
return TypeDescriptor.CreateInstance(null, type, typeArgs, new object[] { PropertyType });
}
return TypeDescriptor.CreateInstance(null, type, null, null);
}
/// <summary>
/// In an inheriting class, adds the attributes of the inheriting class to the
/// specified list of attributes in the parent class. For duplicate attributes,
/// the last one added to the list will be kept.
/// </summary>
protected override void FillAttributes(IList attributeList)
{
// Each time we fill our attributes, we should clear our cached
// stuff.
_converter = null;
_editors = null;
_editorTypes = null;
_editorCount = 0;
base.FillAttributes(attributeList);
}
[RequiresUnreferencedCode(PropertyDescriptorPropertyTypeMessage)]
public PropertyDescriptorCollection GetChildProperties() => GetChildProperties(null, null);
[RequiresUnreferencedCode(PropertyDescriptorPropertyTypeMessage + " " + AttributeCollection.FilterRequiresUnreferencedCodeMessage)]
public PropertyDescriptorCollection GetChildProperties(Attribute[] filter) => GetChildProperties(null, filter);
[RequiresUnreferencedCode(PropertyDescriptorPropertyTypeMessage + " The Type of instance cannot be statically discovered.")]
public PropertyDescriptorCollection GetChildProperties(object instance) => GetChildProperties(instance, null);
/// <summary>
/// Retrieves the properties
/// </summary>
[RequiresUnreferencedCode(PropertyDescriptorPropertyTypeMessage + " The Type of instance cannot be statically discovered. " + AttributeCollection.FilterRequiresUnreferencedCodeMessage)]
public virtual PropertyDescriptorCollection GetChildProperties(object? instance, Attribute[]? filter)
{
if (instance == null)
{
return TypeDescriptor.GetProperties(PropertyType, filter);
}
else
{
return TypeDescriptor.GetProperties(instance, filter);
}
}
/// <summary>
/// Gets an editor of the specified type.
/// </summary>
[RequiresUnreferencedCode(TypeDescriptor.EditorRequiresUnreferencedCode + " " + PropertyDescriptorPropertyTypeMessage)]
public virtual object? GetEditor(Type editorBaseType)
{
object? editor = null;
// Always grab the attribute collection first here, because if the metadata version
// changes it will invalidate our editor cache.
AttributeCollection attrs = Attributes;
// Check the editors we've already created for this type.
if (_editorTypes != null)
{
for (int i = 0; i < _editorCount; i++)
{
if (_editorTypes[i] == editorBaseType)
{
return _editors![i];
}
}
}
// If one wasn't found, then we must go through the attributes.
if (editor == null)
{
for (int i = 0; i < attrs.Count; i++)
{
if (!(attrs[i] is EditorAttribute attr))
{
continue;
}
Type? editorType = GetTypeFromName(attr.EditorBaseTypeName);
if (editorBaseType == editorType)
{
Type? type = GetTypeFromName(attr.EditorTypeName);
if (type != null)
{
editor = CreateInstance(type);
break;
}
}
}
// Now, if we failed to find it in our own attributes, go to the
// component descriptor.
if (editor == null)
{
editor = TypeDescriptor.GetEditor(PropertyType, editorBaseType);
}
// Now, another slot in our editor cache for next time
if (_editorTypes == null)
{
_editorTypes = new Type[5];
_editors = new object[5];
}
if (_editorCount >= _editorTypes.Length)
{
Type[] newTypes = new Type[_editorTypes.Length * 2];
object[] newEditors = new object[_editors!.Length * 2];
Array.Copy(_editorTypes, newTypes, _editorTypes.Length);
Array.Copy(_editors, newEditors, _editors.Length);
_editorTypes = newTypes;
_editors = newEditors;
}
_editorTypes[_editorCount] = editorBaseType;
_editors![_editorCount++] = editor;
}
return editor;
}
/// <summary>
/// Try to keep this reasonable in sync with Equals(). Specifically,
/// if A.Equals(B) returns true, A and B should have the same hash code.
/// </summary>
public override int GetHashCode() => NameHashCode ^ PropertyType.GetHashCode();
/// <summary>
/// This method returns the object that should be used during invocation of members.
/// Normally the return value will be the same as the instance passed in. If
/// someone associated another object with this instance, or if the instance is a
/// custom type descriptor, GetInvocationTarget may return a different value.
/// </summary>
protected override object? GetInvocationTarget(Type type, object instance)
{
object? target = base.GetInvocationTarget(type, instance);
if (target is ICustomTypeDescriptor td)
{
target = td.GetPropertyOwner(this);
}
return target;
}
/// <summary>
/// Gets a type using its name.
/// </summary>
[RequiresUnreferencedCode("Calls ComponentType.Assembly.GetType on the non-fully qualified typeName, which the trimmer cannot recognize.")]
[return: DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicConstructors)]
protected Type? GetTypeFromName(
[DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicConstructors)] string? typeName)
{
if (typeName == null || typeName.Length == 0)
{
return null;
}
// try the generic method.
Type? typeFromGetType = Type.GetType(typeName);
// If we didn't get a type from the generic method, or if the assembly we found the type
// in is the same as our Component's assembly, use our Component's assembly instead. This is
// because the CLR may have cached an older version if the assembly's version number didn't change
Type? typeFromComponent = null;
if (ComponentType != null)
{
if ((typeFromGetType == null) ||
(ComponentType.Assembly.FullName!.Equals(typeFromGetType.Assembly.FullName)))
{
int comma = typeName.IndexOf(',');
if (comma != -1)
typeName = typeName.Substring(0, comma);
typeFromComponent = ComponentType.Assembly.GetType(typeName);
}
}
return typeFromComponent ?? typeFromGetType;
}
/// <summary>
/// When overridden in a derived class, gets the current value of the property on a component.
/// </summary>
public abstract object? GetValue(object? component);
/// <summary>
/// This should be called by your property descriptor implementation
/// when the property value has changed.
/// </summary>
protected virtual void OnValueChanged(object? component, EventArgs e)
{
if (component != null)
{
_valueChangedHandlers?.GetValueOrDefault(component, defaultValue: null)?.Invoke(component, e);
}
}
/// <summary>
/// Allows interested objects to be notified when this property changes.
/// </summary>
public virtual void RemoveValueChanged(object component!!, EventHandler handler!!)
{
if (_valueChangedHandlers != null)
{
EventHandler? h = _valueChangedHandlers.GetValueOrDefault(component, defaultValue: null);
h = (EventHandler?)Delegate.Remove(h, handler);
if (h != null)
{
_valueChangedHandlers[component] = h;
}
else
{
_valueChangedHandlers.Remove(component);
}
}
}
/// <summary>
/// Returns the current set of ValueChanged event handlers for a specific
/// component, in the form of a combined multicast event handler.
/// Returns null if no event handlers are currently assigned to the component.
/// </summary>
protected internal EventHandler? GetValueChangedHandler(object component)
{
if (component != null && _valueChangedHandlers != null)
{
return _valueChangedHandlers.GetValueOrDefault(component, defaultValue: null);
}
else
{
return null;
}
}
/// <summary>
/// When overridden in a derived class, resets the value for this property of the component.
/// </summary>
public abstract void ResetValue(object component);
/// <summary>
/// When overridden in a derived class, sets the value of
/// the component to a different value.
/// </summary>
public abstract void SetValue(object? component, object? value);
/// <summary>
/// When overridden in a derived class, indicates whether the
/// value of this property needs to be persisted.
/// </summary>
public abstract bool ShouldSerializeValue(object component);
/// <summary>
/// Indicates whether value change notifications for this property may originate from outside the property
/// descriptor, such as from the component itself (value=true), or whether notifications will only originate
/// from direct calls made to PropertyDescriptor.SetValue (value=false). For example, the component may
/// implement the INotifyPropertyChanged interface, or may have an explicit '{name}Changed' event for this property.
/// </summary>
public virtual bool SupportsChangeEvents => false;
}
}
| // Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
using System.Collections;
using System.Collections.Generic;
using System.Diagnostics.CodeAnalysis;
using System.Reflection;
namespace System.ComponentModel
{
/// <summary>
/// Provides a description of a property.
/// </summary>
public abstract class PropertyDescriptor : MemberDescriptor
{
internal const string PropertyDescriptorPropertyTypeMessage = "PropertyDescriptor's PropertyType cannot be statically discovered.";
private TypeConverter? _converter;
private Dictionary<object, EventHandler?>? _valueChangedHandlers;
private object?[]? _editors;
private Type[]? _editorTypes;
private int _editorCount;
/// <summary>
/// Initializes a new instance of the <see cref='System.ComponentModel.PropertyDescriptor'/> class with the specified name and
/// attributes.
/// </summary>
protected PropertyDescriptor(string name, Attribute[]? attrs) : base(name, attrs)
{
}
/// <summary>
/// Initializes a new instance of the <see cref='System.ComponentModel.PropertyDescriptor'/> class with
/// the name and attributes in the specified <see cref='System.ComponentModel.MemberDescriptor'/>.
/// </summary>
protected PropertyDescriptor(MemberDescriptor descr) : base(descr)
{
}
/// <summary>
///
/// Initializes a new instance of the <see cref='System.ComponentModel.PropertyDescriptor'/> class with
/// the name in the specified <see cref='System.ComponentModel.MemberDescriptor'/> and the
/// attributes in both the <see cref='System.ComponentModel.MemberDescriptor'/> and the
/// <see cref='System.Attribute'/> array.
///
/// </summary>
protected PropertyDescriptor(MemberDescriptor descr, Attribute[]? attrs) : base(descr, attrs)
{
}
/// <summary>
/// When overridden in a derived class, gets the type of the
/// component this property is bound to.
/// </summary>
public abstract Type ComponentType { get; }
/// <summary>
/// Gets the type converter for this property.
/// </summary>
public virtual TypeConverter Converter
{
[RequiresUnreferencedCode(PropertyDescriptorPropertyTypeMessage)]
get
{
// Always grab the attribute collection first here, because if the metadata version
// changes it will invalidate our type converter cache.
AttributeCollection attrs = Attributes;
if (_converter == null)
{
TypeConverterAttribute attr = (TypeConverterAttribute)attrs[typeof(TypeConverterAttribute)]!;
if (attr.ConverterTypeName != null && attr.ConverterTypeName.Length > 0)
{
Type? converterType = GetTypeFromName(attr.ConverterTypeName);
if (converterType != null && typeof(TypeConverter).IsAssignableFrom(converterType))
{
_converter = (TypeConverter)CreateInstance(converterType)!;
}
}
if (_converter == null)
{
_converter = TypeDescriptor.GetConverter(PropertyType);
}
}
return _converter;
}
}
/// <summary>
/// Gets a value
/// indicating whether this property should be localized, as
/// specified in the <see cref='System.ComponentModel.LocalizableAttribute'/>.
/// </summary>
public virtual bool IsLocalizable => (LocalizableAttribute.Yes.Equals(Attributes[typeof(LocalizableAttribute)]));
/// <summary>
/// When overridden in a derived class, gets a value indicating whether this
/// property is read-only.
/// </summary>
public abstract bool IsReadOnly { get; }
/// <summary>
/// Gets a value indicating whether this property should be serialized as specified
/// in the <see cref='System.ComponentModel.DesignerSerializationVisibilityAttribute'/>.
/// </summary>
public DesignerSerializationVisibility SerializationVisibility
{
get
{
DesignerSerializationVisibilityAttribute attr = (DesignerSerializationVisibilityAttribute)Attributes[typeof(DesignerSerializationVisibilityAttribute)]!;
return attr.Visibility;
}
}
/// <summary>
/// When overridden in a derived class, gets the type of the property.
/// </summary>
public abstract Type PropertyType { get; }
/// <summary>
/// Allows interested objects to be notified when this property changes.
/// </summary>
public virtual void AddValueChanged(object component!!, EventHandler handler!!)
{
if (_valueChangedHandlers == null)
{
_valueChangedHandlers = new Dictionary<object, EventHandler?>();
}
EventHandler? h = _valueChangedHandlers.GetValueOrDefault(component, defaultValue: null);
_valueChangedHandlers[component] = (EventHandler?)Delegate.Combine(h, handler);
}
/// <summary>
/// When overridden in a derived class, indicates whether
/// resetting the <paramref name="component"/> will change the value of the
/// <paramref name="component"/>.
/// </summary>
public abstract bool CanResetValue(object component);
/// <summary>
/// Compares this to another <see cref='System.ComponentModel.PropertyDescriptor'/>
/// to see if they are equivalent.
/// NOTE: If you make a change here, you likely need to change GetHashCode() as well.
/// </summary>
public override bool Equals([NotNullWhen(true)] object? obj)
{
try
{
if (obj == this)
{
return true;
}
if (obj == null)
{
return false;
}
// Assume that 90% of the time we will only do a .Equals(...) for
// propertydescriptor vs. propertydescriptor... avoid the overhead
// of an instanceof call.
if (obj is PropertyDescriptor pd && pd.NameHashCode == NameHashCode
&& pd.PropertyType == PropertyType
&& pd.Name.Equals(Name))
{
return true;
}
}
catch { }
return false;
}
/// <summary>
/// Creates an instance of the specified type.
/// </summary>
protected object? CreateInstance(
[DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicConstructors)] Type type)
{
Type[] typeArgs = new Type[] { typeof(Type) };
ConstructorInfo? ctor = type.GetConstructor(typeArgs);
if (ctor != null)
{
return TypeDescriptor.CreateInstance(null, type, typeArgs, new object[] { PropertyType });
}
return TypeDescriptor.CreateInstance(null, type, null, null);
}
/// <summary>
/// In an inheriting class, adds the attributes of the inheriting class to the
/// specified list of attributes in the parent class. For duplicate attributes,
/// the last one added to the list will be kept.
/// </summary>
protected override void FillAttributes(IList attributeList)
{
// Each time we fill our attributes, we should clear our cached
// stuff.
_converter = null;
_editors = null;
_editorTypes = null;
_editorCount = 0;
base.FillAttributes(attributeList);
}
[RequiresUnreferencedCode(PropertyDescriptorPropertyTypeMessage)]
public PropertyDescriptorCollection GetChildProperties() => GetChildProperties(null, null);
[RequiresUnreferencedCode(PropertyDescriptorPropertyTypeMessage + " " + AttributeCollection.FilterRequiresUnreferencedCodeMessage)]
public PropertyDescriptorCollection GetChildProperties(Attribute[] filter) => GetChildProperties(null, filter);
[RequiresUnreferencedCode(PropertyDescriptorPropertyTypeMessage + " The Type of instance cannot be statically discovered.")]
public PropertyDescriptorCollection GetChildProperties(object instance) => GetChildProperties(instance, null);
/// <summary>
/// Retrieves the properties
/// </summary>
[RequiresUnreferencedCode(PropertyDescriptorPropertyTypeMessage + " The Type of instance cannot be statically discovered. " + AttributeCollection.FilterRequiresUnreferencedCodeMessage)]
public virtual PropertyDescriptorCollection GetChildProperties(object? instance, Attribute[]? filter)
{
if (instance == null)
{
return TypeDescriptor.GetProperties(PropertyType, filter);
}
else
{
return TypeDescriptor.GetProperties(instance, filter);
}
}
/// <summary>
/// Gets an editor of the specified type.
/// </summary>
[RequiresUnreferencedCode(TypeDescriptor.EditorRequiresUnreferencedCode + " " + PropertyDescriptorPropertyTypeMessage)]
public virtual object? GetEditor(Type editorBaseType)
{
object? editor = null;
// Always grab the attribute collection first here, because if the metadata version
// changes it will invalidate our editor cache.
AttributeCollection attrs = Attributes;
// Check the editors we've already created for this type.
if (_editorTypes != null)
{
for (int i = 0; i < _editorCount; i++)
{
if (_editorTypes[i] == editorBaseType)
{
return _editors![i];
}
}
}
// If one wasn't found, then we must go through the attributes.
if (editor == null)
{
for (int i = 0; i < attrs.Count; i++)
{
if (!(attrs[i] is EditorAttribute attr))
{
continue;
}
Type? editorType = GetTypeFromName(attr.EditorBaseTypeName);
if (editorBaseType == editorType)
{
Type? type = GetTypeFromName(attr.EditorTypeName);
if (type != null)
{
editor = CreateInstance(type);
break;
}
}
}
// Now, if we failed to find it in our own attributes, go to the
// component descriptor.
if (editor == null)
{
editor = TypeDescriptor.GetEditor(PropertyType, editorBaseType);
}
// Now allocate another slot in our editor cache for next time.
if (_editorTypes == null)
{
_editorTypes = new Type[5];
_editors = new object[5];
}
if (_editorCount >= _editorTypes.Length)
{
Type[] newTypes = new Type[_editorTypes.Length * 2];
object[] newEditors = new object[_editors!.Length * 2];
Array.Copy(_editorTypes, newTypes, _editorTypes.Length);
Array.Copy(_editors, newEditors, _editors.Length);
_editorTypes = newTypes;
_editors = newEditors;
}
_editorTypes[_editorCount] = editorBaseType;
_editors![_editorCount++] = editor;
}
return editor;
}
/// <summary>
/// Try to keep this reasonably in sync with Equals(). Specifically,
/// if A.Equals(B) returns true, A and B should have the same hash code.
/// </summary>
public override int GetHashCode() => NameHashCode ^ PropertyType.GetHashCode();
/// <summary>
/// This method returns the object that should be used during invocation of members.
/// Normally the return value will be the same as the instance passed in. If
/// someone associated another object with this instance, or if the instance is a
/// custom type descriptor, GetInvocationTarget may return a different value.
/// </summary>
protected override object? GetInvocationTarget(Type type, object instance)
{
object? target = base.GetInvocationTarget(type, instance);
if (target is ICustomTypeDescriptor td)
{
target = td.GetPropertyOwner(this);
}
return target;
}
/// <summary>
/// Gets a type using its name.
/// </summary>
[RequiresUnreferencedCode("Calls ComponentType.Assembly.GetType on the non-fully qualified typeName, which the trimmer cannot recognize.")]
[return: DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicConstructors)]
protected Type? GetTypeFromName(
[DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicConstructors)] string? typeName)
{
if (typeName == null || typeName.Length == 0)
{
return null;
}
// Try the generic method first.
Type? typeFromGetType = Type.GetType(typeName);
// If we didn't get a type from the generic method, or if the assembly we found the type
// in is the same as our Component's assembly, use our Component's assembly instead. This is
// because the CLR may have cached an older version if the assembly's version number didn't change.
Type? typeFromComponent = null;
if (ComponentType != null)
{
if ((typeFromGetType == null) ||
(ComponentType.Assembly.FullName!.Equals(typeFromGetType.Assembly.FullName)))
{
int comma = typeName.IndexOf(',');
if (comma != -1)
typeName = typeName.Substring(0, comma);
typeFromComponent = ComponentType.Assembly.GetType(typeName);
}
}
return typeFromComponent ?? typeFromGetType;
}
/// <summary>
/// When overridden in a derived class, gets the current value of the property on a component.
/// </summary>
public abstract object? GetValue(object? component);
/// <summary>
/// This should be called by your property descriptor implementation
/// when the property value has changed.
/// </summary>
protected virtual void OnValueChanged(object? component, EventArgs e)
{
if (component != null)
{
_valueChangedHandlers?.GetValueOrDefault(component, defaultValue: null)?.Invoke(component, e);
}
}
/// <summary>
/// Allows interested objects to be notified when this property changes.
/// </summary>
public virtual void RemoveValueChanged(object component!!, EventHandler handler!!)
{
if (_valueChangedHandlers != null)
{
EventHandler? h = _valueChangedHandlers.GetValueOrDefault(component, defaultValue: null);
h = (EventHandler?)Delegate.Remove(h, handler);
if (h != null)
{
_valueChangedHandlers[component] = h;
}
else
{
_valueChangedHandlers.Remove(component);
}
}
}
/// <summary>
/// Returns the current set of ValueChanged event handlers for a specific
/// component, in the form of a combined multicast event handler.
/// Returns null if no event handlers are currently assigned to the component.
/// </summary>
protected internal EventHandler? GetValueChangedHandler(object component)
{
if (component != null && _valueChangedHandlers != null)
{
return _valueChangedHandlers.GetValueOrDefault(component, defaultValue: null);
}
else
{
return null;
}
}
/// <summary>
/// When overridden in a derived class, resets the value for this property of the component.
/// </summary>
public abstract void ResetValue(object component);
/// <summary>
/// When overridden in a derived class, sets the value of
/// the component to a different value.
/// </summary>
public abstract void SetValue(object? component, object? value);
/// <summary>
/// When overridden in a derived class, indicates whether the
/// value of this property needs to be persisted.
/// </summary>
public abstract bool ShouldSerializeValue(object component);
/// <summary>
/// Indicates whether value change notifications for this property may originate from outside the property
/// descriptor, such as from the component itself (value=true), or whether notifications will only originate
/// from direct calls made to PropertyDescriptor.SetValue (value=false). For example, the component may
/// implement the INotifyPropertyChanged interface, or may have an explicit '{name}Changed' event for this property.
/// </summary>
public virtual bool SupportsChangeEvents => false;
}
}
| -1 |
dotnet/runtime | 66,213 | [mono] Put WeakAttribute support under an ifdef | This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | lambdageek | 2022-03-04T19:50:51Z | 2022-03-08T14:08:40Z | 34880aaeb463cc17268b44aecb80dda0914cfa01 | 8342a08b8bf0ff2ffb64cd5992e2d60c8d6d63c1 | [mono] Put WeakAttribute support under an ifdef. This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | ./src/libraries/System.ServiceModel.Syndication/tests/TestFeeds/AtomFeeds/multiple-authors-source.xml | <!--
Description: a source with two atom:author elements produces no error
Expect: !Error
-->
<feed xmlns="http://www.w3.org/2005/Atom">
<title>Example Feed</title>
<link href="http://contoso.com/"/>
<updated>2003-12-13T18:30:02Z</updated>
<author>
<name>Author Name</name>
</author>
<id>urn:uuid:60a76c80-d399-11d9-b93C-0003939e0af6</id>
<entry>
<source>
<title>Source of all knowledge</title>
<id>urn:uuid:28213c50-f84c-11d9-8cd6-0800200c9a66</id>
<updated>2003-12-13T17:46:27Z</updated>
<author>
<name>Author Name</name>
</author>
<author>
<name>input name</name>
</author>
</source>
<title>Atom-Powered Robots Run Amok</title>
<link href="http://contoso.com/2003/12/13/atom03"/>
<id>urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a</id>
<updated>2003-12-13T18:30:02Z</updated>
<summary>Some text.</summary>
</entry>
</feed>
| <!--
Description: a source with two atom:author elements produces no error
Expect: !Error
-->
<feed xmlns="http://www.w3.org/2005/Atom">
<title>Example Feed</title>
<link href="http://contoso.com/"/>
<updated>2003-12-13T18:30:02Z</updated>
<author>
<name>Author Name</name>
</author>
<id>urn:uuid:60a76c80-d399-11d9-b93C-0003939e0af6</id>
<entry>
<source>
<title>Source of all knowledge</title>
<id>urn:uuid:28213c50-f84c-11d9-8cd6-0800200c9a66</id>
<updated>2003-12-13T17:46:27Z</updated>
<author>
<name>Author Name</name>
</author>
<author>
<name>input name</name>
</author>
</source>
<title>Atom-Powered Robots Run Amok</title>
<link href="http://contoso.com/2003/12/13/atom03"/>
<id>urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a</id>
<updated>2003-12-13T18:30:02Z</updated>
<summary>Some text.</summary>
</entry>
</feed>
| -1 |
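A minimal sketch of the ifdef pattern described in the PR metadata above; the guard macro ENABLE_WEAK_ATTRIBUTE and both function names are assumptions for illustration, not taken from the actual diff:
#ifdef ENABLE_WEAK_ATTRIBUTE
/* Hypothetical helper, compiled only when WeakAttribute support is enabled. */
static void
scan_weak_fields (MonoClass *klass)
{
/* Walk the class's fields and record the ones marked [Weak]. */
}
#endif
static void
init_class_weak_state (MonoClass *klass)
{
#ifdef ENABLE_WEAK_ATTRIBUTE
scan_weak_fields (klass);
#endif
/* Without the define, the has_weak_fields bit simply stays 0, matching the PR description. */
}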
dotnet/runtime | 66,213 | [mono] Put WeakAttribute support under an ifdef | This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | lambdageek | 2022-03-04T19:50:51Z | 2022-03-08T14:08:40Z | 34880aaeb463cc17268b44aecb80dda0914cfa01 | 8342a08b8bf0ff2ffb64cd5992e2d60c8d6d63c1 | [mono] Put WeakAttribute support under an ifdef. This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | ./src/libraries/System.Private.Xml/src/System/Xml/Xsl/QIL/QilCloneVisitor.cs | // Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Diagnostics.CodeAnalysis;
using System.Xml;
using System.Xml.Xsl;
namespace System.Xml.Xsl.Qil
{
// Create an exact replica of a QIL graph
internal class QilCloneVisitor : QilScopedVisitor
{
private readonly QilFactory _fac;
private readonly SubstitutionList _subs;
//-----------------------------------------------
// Constructors
//-----------------------------------------------
public QilCloneVisitor(QilFactory fac) : this(fac, new SubstitutionList())
{
}
public QilCloneVisitor(QilFactory fac, SubstitutionList subs)
{
_fac = fac;
_subs = subs;
}
//-----------------------------------------------
// Entry
//-----------------------------------------------
public QilNode Clone(QilNode node)
{
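// Guard against pathologically deep graphs before cloning, which recurses per node.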
QilDepthChecker.Check(node);
// Assume that iterator nodes at the top-level are references rather than definitions
return VisitAssumeReference(node);
}
//-----------------------------------------------
// QilVisitor overrides
//-----------------------------------------------
/// <summary>
/// Visit all children of "parent", replacing each child with a copy of each child.
/// </summary>
protected override QilNode Visit(QilNode oldNode)
{
QilNode? newNode = null;
if (oldNode == null)
return null!;
// ShallowClone any nodes which have not yet been cloned
if (oldNode is QilReference)
{
// Reference nodes may have been cloned previously and put into scope
newNode = FindClonedReference(oldNode);
}
if (newNode == null)
newNode = oldNode.ShallowClone(_fac);
return base.Visit(newNode);
}
/// <summary>
/// Visit all children of "parent", replacing each child with a copy of each child.
/// </summary>
protected override QilNode VisitChildren(QilNode parent)
{
// Visit children
for (int i = 0; i < parent.Count; i++)
{
QilNode child = parent[i];
// If child is a reference,
if (IsReference(parent, i))
{
// Visit the reference and substitute its copy
parent[i] = VisitReference(child);
// If no substitution found, then use the original child
if (parent[i] == null)
parent[i] = child;
}
else
{
// Otherwise, visit the node and substitute its copy
parent[i] = Visit(child)!;
}
}
return parent;
}
/// <summary>
/// If a cloned reference is in scope, replace "oldNode". Otherwise, return "oldNode".
/// </summary>
protected override QilNode VisitReference(QilNode oldNode)
{
QilNode? newNode = FindClonedReference(oldNode);
return base.VisitReference(newNode == null ? oldNode : newNode);
}
//-----------------------------------------------
// QilScopedVisitor methods
//-----------------------------------------------
/// <summary>
/// Push node and its shallow clone onto the substitution list.
/// </summary>
protected override void BeginScope(QilNode node)
{
_subs.AddSubstitutionPair(node, node.ShallowClone(_fac));
}
/// <summary>
/// Pop entry from substitution list.
/// </summary>
protected override void EndScope(QilNode node)
{
_subs.RemoveLastSubstitutionPair();
}
//-----------------------------------------------
// QilCloneVisitor methods
//-----------------------------------------------
/// <summary>
/// Find the clone of an in-scope reference.
/// </summary>
protected QilNode? FindClonedReference(QilNode node)
{
return _subs.FindReplacement(node);
}
}
}
| // Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Diagnostics.CodeAnalysis;
using System.Xml;
using System.Xml.Xsl;
namespace System.Xml.Xsl.Qil
{
// Create an exact replica of a QIL graph
internal class QilCloneVisitor : QilScopedVisitor
{
private readonly QilFactory _fac;
private readonly SubstitutionList _subs;
//-----------------------------------------------
// Constructors
//-----------------------------------------------
public QilCloneVisitor(QilFactory fac) : this(fac, new SubstitutionList())
{
}
public QilCloneVisitor(QilFactory fac, SubstitutionList subs)
{
_fac = fac;
_subs = subs;
}
//-----------------------------------------------
// Entry
//-----------------------------------------------
public QilNode Clone(QilNode node)
{
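// Guard against pathologically deep graphs before cloning, which recurses per node.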
QilDepthChecker.Check(node);
// Assume that iterator nodes at the top-level are references rather than definitions
return VisitAssumeReference(node);
}
//-----------------------------------------------
// QilVisitor overrides
//-----------------------------------------------
/// <summary>
/// Visit all children of "parent", replacing each child with a copy of each child.
/// </summary>
protected override QilNode Visit(QilNode oldNode)
{
QilNode? newNode = null;
if (oldNode == null)
return null!;
// ShallowClone any nodes which have not yet been cloned
if (oldNode is QilReference)
{
// Reference nodes may have been cloned previously and put into scope
newNode = FindClonedReference(oldNode);
}
if (newNode == null)
newNode = oldNode.ShallowClone(_fac);
return base.Visit(newNode);
}
/// <summary>
/// Visit all children of "parent", replacing each child with a copy of each child.
/// </summary>
protected override QilNode VisitChildren(QilNode parent)
{
// Visit children
for (int i = 0; i < parent.Count; i++)
{
QilNode child = parent[i];
// If child is a reference,
if (IsReference(parent, i))
{
// Visit the reference and substitute its copy
parent[i] = VisitReference(child);
// If no substitution found, then use the original child
if (parent[i] == null)
parent[i] = child;
}
else
{
// Otherwise, visit the node and substitute its copy
parent[i] = Visit(child)!;
}
}
return parent;
}
/// <summary>
/// If a cloned reference is in scope, replace "oldNode". Otherwise, return "oldNode".
/// </summary>
protected override QilNode VisitReference(QilNode oldNode)
{
QilNode? newNode = FindClonedReference(oldNode);
return base.VisitReference(newNode == null ? oldNode : newNode);
}
//-----------------------------------------------
// QilScopedVisitor methods
//-----------------------------------------------
/// <summary>
/// Push node and its shallow clone onto the substitution list.
/// </summary>
protected override void BeginScope(QilNode node)
{
_subs.AddSubstitutionPair(node, node.ShallowClone(_fac));
}
/// <summary>
/// Pop entry from substitution list.
/// </summary>
protected override void EndScope(QilNode node)
{
_subs.RemoveLastSubstitutionPair();
}
//-----------------------------------------------
// QilCloneVisitor methods
//-----------------------------------------------
/// <summary>
/// Find the clone of an in-scope reference.
/// </summary>
protected QilNode? FindClonedReference(QilNode node)
{
return _subs.FindReplacement(node);
}
}
}
| -1 |
dotnet/runtime | 66,213 | [mono] Put WeakAttribute support under an ifdef | This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | lambdageek | 2022-03-04T19:50:51Z | 2022-03-08T14:08:40Z | 34880aaeb463cc17268b44aecb80dda0914cfa01 | 8342a08b8bf0ff2ffb64cd5992e2d60c8d6d63c1 | [mono] Put WeakAttribute support under an ifdef. This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | ./src/libraries/System.Net.Requests/src/System/Net/TaskExtensions.cs | // Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
using System.Diagnostics;
using System.Threading;
using System.Threading.Tasks;
namespace System.Net
{
internal static class TaskExtensions
{
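/// <summary>
/// Bridges a task onto the APM (Begin/End) pattern: the returned
/// TaskCompletionSource carries the caller's state object and is completed,
/// faulted, or canceled to mirror the source task's outcome.
/// </summary>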
public static TaskCompletionSource<TResult> ToApm<TResult>(
this Task<TResult> task,
AsyncCallback? callback,
object? state)
{
TaskCompletionSource<TResult> tcs = new TaskCompletionSource<TResult>(state);
task.ContinueWith(completedTask =>
{
bool shouldInvokeCallback = false;
if (completedTask.IsFaulted)
{
shouldInvokeCallback = tcs.TrySetException(completedTask.Exception!.InnerExceptions);
}
else if (completedTask.IsCanceled)
{
shouldInvokeCallback = tcs.TrySetCanceled();
}
else
{
shouldInvokeCallback = tcs.TrySetResult(completedTask.Result);
}
// Only invoke the callback if it exists AND we were able to transition the TCS
// to the terminal state. If we couldn't transition the task, it is because it was
// already transitioned to a Canceled state via a previous HttpWebRequest.Abort() call.
if (shouldInvokeCallback)
{
if (callback != null)
{
callback(tcs.Task);
}
}
else
{
// Verify that the current status of the tcs.Task is 'Canceled'. This
// occurred due to a previous call of tcs.TrySetCanceled() from the
// HttpWebRequest.Abort() method.
Debug.Assert(tcs.Task.IsCanceled);
}
}, CancellationToken.None, TaskContinuationOptions.None, TaskScheduler.Default);
return tcs;
}
}
}
| // Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
using System.Diagnostics;
using System.Threading;
using System.Threading.Tasks;
namespace System.Net
{
internal static class TaskExtensions
{
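/// <summary>
/// Bridges a task onto the APM (Begin/End) pattern: the returned
/// TaskCompletionSource carries the caller's state object and is completed,
/// faulted, or canceled to mirror the source task's outcome.
/// </summary>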
public static TaskCompletionSource<TResult> ToApm<TResult>(
this Task<TResult> task,
AsyncCallback? callback,
object? state)
{
TaskCompletionSource<TResult> tcs = new TaskCompletionSource<TResult>(state);
task.ContinueWith(completedTask =>
{
bool shouldInvokeCallback = false;
if (completedTask.IsFaulted)
{
shouldInvokeCallback = tcs.TrySetException(completedTask.Exception!.InnerExceptions);
}
else if (completedTask.IsCanceled)
{
shouldInvokeCallback = tcs.TrySetCanceled();
}
else
{
shouldInvokeCallback = tcs.TrySetResult(completedTask.Result);
}
// Only invoke the callback if it exists AND we were able to transition the TCS
// to the terminal state. If we couldn't transition the task, it is because it was
// already transitioned to a Canceled state via a previous HttpWebRequest.Abort() call.
if (shouldInvokeCallback)
{
if (callback != null)
{
callback(tcs.Task);
}
}
else
{
// Verify that the current status of the tcs.Task is 'Canceled'. This
// occurred due to a previous call of tcs.TrySetCanceled() from the
// HttpWebRequest.Abort() method.
Debug.Assert(tcs.Task.IsCanceled);
}
}, CancellationToken.None, TaskContinuationOptions.None, TaskScheduler.Default);
return tcs;
}
}
}
| -1 |
dotnet/runtime | 66,213 | [mono] Put WeakAttribute support under an ifdef | This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | lambdageek | 2022-03-04T19:50:51Z | 2022-03-08T14:08:40Z | 34880aaeb463cc17268b44aecb80dda0914cfa01 | 8342a08b8bf0ff2ffb64cd5992e2d60c8d6d63c1 | [mono] Put WeakAttribute support under an ifdef. This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | ./src/libraries/System.Reflection.Metadata/src/System/Reflection/Internal/MemoryBlocks/ExternalMemoryBlockProvider.cs | // Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
using System.Diagnostics;
using System.IO;
namespace System.Reflection.Internal
{
/// <summary>
/// Represents raw memory owned by an external object.
/// </summary>
internal sealed unsafe class ExternalMemoryBlockProvider : MemoryBlockProvider
{
private byte* _memory;
private int _size;
public unsafe ExternalMemoryBlockProvider(byte* memory, int size)
{
_memory = memory;
_size = size;
}
public override int Size
{
get
{
return _size;
}
}
protected override AbstractMemoryBlock GetMemoryBlockImpl(int start, int size)
{
return new ExternalMemoryBlock(this, _memory + start, size);
}
public override Stream GetStream(out StreamConstraints constraints)
{
constraints = new StreamConstraints(null, 0, _size);
return new ReadOnlyUnmanagedMemoryStream(_memory, _size);
}
protected override void Dispose(bool disposing)
{
Debug.Assert(disposing);
// we don't own the memory, just null out the pointer.
_memory = null;
_size = 0;
}
public byte* Pointer
{
get
{
return _memory;
}
}
}
}
| // Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
using System.Diagnostics;
using System.IO;
namespace System.Reflection.Internal
{
/// <summary>
/// Represents raw memory owned by an external object.
/// </summary>
internal sealed unsafe class ExternalMemoryBlockProvider : MemoryBlockProvider
{
private byte* _memory;
private int _size;
public unsafe ExternalMemoryBlockProvider(byte* memory, int size)
{
_memory = memory;
_size = size;
}
public override int Size
{
get
{
return _size;
}
}
protected override AbstractMemoryBlock GetMemoryBlockImpl(int start, int size)
{
return new ExternalMemoryBlock(this, _memory + start, size);
}
public override Stream GetStream(out StreamConstraints constraints)
{
constraints = new StreamConstraints(null, 0, _size);
return new ReadOnlyUnmanagedMemoryStream(_memory, _size);
}
protected override void Dispose(bool disposing)
{
Debug.Assert(disposing);
// we don't own the memory, just null out the pointer.
_memory = null;
_size = 0;
}
public byte* Pointer
{
get
{
return _memory;
}
}
}
}
| -1 |
dotnet/runtime | 66,213 | [mono] Put WeakAttribute support under an ifdef | This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | lambdageek | 2022-03-04T19:50:51Z | 2022-03-08T14:08:40Z | 34880aaeb463cc17268b44aecb80dda0914cfa01 | 8342a08b8bf0ff2ffb64cd5992e2d60c8d6d63c1 | [mono] Put WeakAttribute support under an ifdef. This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | ./src/libraries/System.ComponentModel.TypeConverter/src/System/ComponentModel/Design/ComponentChangedEventArgs.cs | // Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
namespace System.ComponentModel.Design
{
/// <summary>
/// Provides data for the <see cref='System.ComponentModel.Design.IComponentChangeService.ComponentChanged'/> event.
/// </summary>
public sealed class ComponentChangedEventArgs : EventArgs
{
/// <summary>
/// Gets the component that is the cause of this event.
/// </summary>
public object? Component { get; }
/// <summary>
/// Gets the member that has changed.
/// </summary>
public MemberDescriptor? Member { get; }
/// <summary>
/// Gets the new value of the changed member.
/// </summary>
public object? NewValue { get; }
/// <summary>
/// Gets the old value of the changed member.
/// </summary>
public object? OldValue { get; }
/// <summary>
/// Initializes a new instance of the <see cref='System.ComponentModel.Design.ComponentChangedEventArgs'/> class.
/// </summary>
public ComponentChangedEventArgs(object? component, MemberDescriptor? member, object? oldValue, object? newValue)
{
Component = component;
Member = member;
OldValue = oldValue;
NewValue = newValue;
}
}
}
| // Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
namespace System.ComponentModel.Design
{
/// <summary>
/// Provides data for the <see cref='System.ComponentModel.Design.IComponentChangeService.ComponentChanged'/> event.
/// </summary>
public sealed class ComponentChangedEventArgs : EventArgs
{
/// <summary>
/// Gets the component that is the cause of this event.
/// </summary>
public object? Component { get; }
/// <summary>
/// Gets the member that has changed.
/// </summary>
public MemberDescriptor? Member { get; }
/// <summary>
/// Gets the new value of the changed member.
/// </summary>
public object? NewValue { get; }
/// <summary>
/// Gets the old value of the changed member.
/// </summary>
public object? OldValue { get; }
/// <summary>
/// Initializes a new instance of the <see cref='System.ComponentModel.Design.ComponentChangedEventArgs'/> class.
/// </summary>
public ComponentChangedEventArgs(object? component, MemberDescriptor? member, object? oldValue, object? newValue)
{
Component = component;
Member = member;
OldValue = oldValue;
NewValue = newValue;
}
}
}
| -1 |
dotnet/runtime | 66,213 | [mono] Put WeakAttribute support under an ifdef | This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | lambdageek | 2022-03-04T19:50:51Z | 2022-03-08T14:08:40Z | 34880aaeb463cc17268b44aecb80dda0914cfa01 | 8342a08b8bf0ff2ffb64cd5992e2d60c8d6d63c1 | [mono] Put WeakAttribute support under an ifdef. This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | ./src/mono/mono/mini/mini-llvm.c | /**
* \file
* llvm "Backend" for the mono JIT
*
* Copyright 2009-2011 Novell Inc (http://www.novell.com)
* Copyright 2011 Xamarin Inc (http://www.xamarin.com)
* Licensed under the MIT license. See LICENSE file in the project root for full license information.
*/
#include "config.h"
#include <mono/metadata/debug-helpers.h>
#include <mono/metadata/debug-internals.h>
#include <mono/metadata/mempool-internals.h>
#include <mono/metadata/environment.h>
#include <mono/metadata/object-internals.h>
#include <mono/metadata/abi-details.h>
#include <mono/metadata/tokentype.h>
#include <mono/utils/mono-tls.h>
#include <mono/utils/mono-dl.h>
#include <mono/utils/mono-time.h>
#include <mono/utils/freebsd-dwarf.h>
#ifndef __STDC_LIMIT_MACROS
#define __STDC_LIMIT_MACROS
#endif
#ifndef __STDC_CONSTANT_MACROS
#define __STDC_CONSTANT_MACROS
#endif
#include "llvm-c/BitWriter.h"
#include "llvm-c/Analysis.h"
#include "mini-llvm-cpp.h"
#include "llvm-jit.h"
#include "aot-compiler.h"
#include "mini-llvm.h"
#include "mini-runtime.h"
#include <mono/utils/mono-math.h>
#ifndef DISABLE_JIT
#if defined(TARGET_AMD64) && defined(TARGET_WIN32) && defined(HOST_WIN32) && defined(_MSC_VER)
#define TARGET_X86_64_WIN32_MSVC
#endif
#if defined(TARGET_X86_64_WIN32_MSVC)
#define TARGET_WIN32_MSVC
#endif
#if LLVM_API_VERSION < 900
#error "The version of the mono llvm repository is too old."
#endif
/*
* Information associated by mono with LLVM modules.
*/
typedef struct {
LLVMModuleRef lmodule;
LLVMValueRef throw_icall, rethrow, throw_corlib_exception;
GHashTable *llvm_types;
LLVMValueRef dummy_got_var;
const char *get_method_symbol;
const char *get_unbox_tramp_symbol;
const char *init_aotconst_symbol;
GHashTable *plt_entries;
GHashTable *plt_entries_ji;
GHashTable *method_to_lmethod;
GHashTable *method_to_call_info;
GHashTable *lvalue_to_lcalls;
GHashTable *direct_callables;
/* Maps got slot index -> LLVMValueRef */
GHashTable *aotconst_vars;
char **bb_names;
int bb_names_len;
GPtrArray *used;
LLVMTypeRef ptr_type;
GPtrArray *subprogram_mds;
MonoEERef *mono_ee;
LLVMExecutionEngineRef ee;
gboolean external_symbols;
gboolean emit_dwarf;
int max_got_offset;
LLVMValueRef personality;
gpointer gc_poll_cold_wrapper_compiled;
/* For AOT */
MonoAssembly *assembly;
char *global_prefix;
MonoAotFileInfo aot_info;
const char *eh_frame_symbol;
LLVMValueRef get_method, get_unbox_tramp, init_aotconst_func;
LLVMValueRef init_methods [AOT_INIT_METHOD_NUM];
LLVMValueRef code_start, code_end;
LLVMValueRef inited_var;
LLVMValueRef unbox_tramp_indexes;
LLVMValueRef unbox_trampolines;
LLVMValueRef gc_poll_cold_wrapper;
LLVMValueRef info_var;
LLVMTypeRef *info_var_eltypes;
int max_inited_idx, max_method_idx;
gboolean has_jitted_code;
gboolean static_link;
gboolean llvm_only;
gboolean interp;
GHashTable *idx_to_lmethod;
GHashTable *idx_to_unbox_tramp;
GPtrArray *callsite_list;
LLVMContextRef context;
LLVMValueRef sentinel_exception;
LLVMValueRef gc_safe_point_flag_var;
LLVMValueRef interrupt_flag_var;
void *di_builder, *cu;
GHashTable *objc_selector_to_var;
GPtrArray *cfgs;
int unbox_tramp_num, unbox_tramp_elemsize;
GHashTable *got_idx_to_type;
GHashTable *no_method_table_lmethods;
} MonoLLVMModule;
/*
* Information associated by the backend with mono basic blocks.
*/
typedef struct {
LLVMBasicBlockRef bblock, end_bblock;
LLVMValueRef finally_ind;
gboolean added, invoke_target;
/*
* If this bblock is the start of a finally clause, this is a list of bblocks it
* needs to branch to in ENDFINALLY.
*/
GSList *call_handler_return_bbs;
/*
* If this bblock is the start of a finally clause, this is the bblock that
* CALL_HANDLER needs to branch to.
*/
LLVMBasicBlockRef call_handler_target_bb;
/* The list of switch statements generated by ENDFINALLY instructions */
GSList *endfinally_switch_ins_list;
GSList *phi_nodes;
} BBInfo;
/*
* Structure containing emit state
*/
typedef struct {
MonoMemPool *mempool;
/* Maps method names to the corresponding LLVMValueRef */
GHashTable *emitted_method_decls;
MonoCompile *cfg;
LLVMValueRef lmethod;
MonoLLVMModule *module;
LLVMModuleRef lmodule;
BBInfo *bblocks;
int sindex, default_index, ex_index;
LLVMBuilderRef builder;
LLVMValueRef *values, *addresses;
MonoType **vreg_cli_types;
LLVMCallInfo *linfo;
MonoMethodSignature *sig;
GSList *builders;
GHashTable *region_to_handler;
GHashTable *clause_to_handler;
LLVMBuilderRef alloca_builder;
LLVMValueRef last_alloca;
LLVMValueRef rgctx_arg;
LLVMValueRef this_arg;
LLVMTypeRef *vreg_types;
gboolean *is_vphi;
LLVMTypeRef method_type;
LLVMBasicBlockRef init_bb, inited_bb;
gboolean *is_dead;
gboolean *unreachable;
gboolean llvm_only;
gboolean has_got_access;
gboolean is_linkonce;
gboolean emit_dummy_arg;
gboolean has_safepoints;
gboolean has_catch;
int this_arg_pindex, rgctx_arg_pindex;
LLVMValueRef imt_rgctx_loc;
GHashTable *llvm_types;
LLVMValueRef dbg_md;
MonoDebugMethodInfo *minfo;
/* For every clause, the clauses it is nested in */
GSList **nested_in;
LLVMValueRef ex_var;
GHashTable *exc_meta;
GPtrArray *callsite_list;
GPtrArray *phi_values;
GPtrArray *bblock_list;
char *method_name;
GHashTable *jit_callees;
LLVMValueRef long_bb_break_var;
int *gc_var_indexes;
LLVMValueRef gc_pin_area;
LLVMValueRef il_state;
LLVMValueRef il_state_ret;
} EmitContext;
typedef struct {
MonoBasicBlock *bb;
MonoInst *phi;
MonoBasicBlock *in_bb;
int sreg;
} PhiNode;
/*
* Instruction metadata
* This is the same as ins_info, but LREG != IREG.
*/
#ifdef MINI_OP
#undef MINI_OP
#endif
#ifdef MINI_OP3
#undef MINI_OP3
#endif
#define MINI_OP(a,b,dest,src1,src2) dest, src1, src2, ' ',
#define MINI_OP3(a,b,dest,src1,src2,src3) dest, src1, src2, src3,
#define NONE ' '
#define IREG 'i'
#define FREG 'f'
#define VREG 'v'
#define XREG 'x'
#define LREG 'l'
/* keep in sync with the enum in mini.h */
const char
mini_llvm_ins_info[] = {
#include "mini-ops.h"
};
#undef MINI_OP
#undef MINI_OP3
#if TARGET_SIZEOF_VOID_P == 4
#define GET_LONG_IMM(ins) ((ins)->inst_l)
#else
#define GET_LONG_IMM(ins) ((ins)->inst_imm)
#endif
#define LLVM_INS_INFO(opcode) (&mini_llvm_ins_info [((opcode) - OP_START - 1) * 4])
#if 0
#define TRACE_FAILURE(msg) do { printf ("%s\n", msg); } while (0)
#else
#define TRACE_FAILURE(msg)
#endif
#ifdef TARGET_X86
#define IS_TARGET_X86 1
#else
#define IS_TARGET_X86 0
#endif
#ifdef TARGET_AMD64
#define IS_TARGET_AMD64 1
#else
#define IS_TARGET_AMD64 0
#endif
#define ctx_ok(ctx) (!(ctx)->cfg->disable_llvm)
enum {
MAX_VECTOR_ELEMS = 32, // 2 vectors * 128 bits per vector / 8 bits per element
ARM64_MAX_VECTOR_ELEMS = 16,
};
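/* Identity shuffle mask: element i selects lane i (0, 1, 2, ...). */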
const int mask_0_incr_1 [] = {
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16,
17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31,
};
static LLVMIntPredicate cond_to_llvm_cond [] = {
LLVMIntEQ,
LLVMIntNE,
LLVMIntSLE,
LLVMIntSGE,
LLVMIntSLT,
LLVMIntSGT,
LLVMIntULE,
LLVMIntUGE,
LLVMIntULT,
LLVMIntUGT,
};
static LLVMRealPredicate fpcond_to_llvm_cond [] = {
LLVMRealOEQ,
LLVMRealUNE,
LLVMRealOLE,
LLVMRealOGE,
LLVMRealOLT,
LLVMRealOGT,
LLVMRealULE,
LLVMRealUGE,
LLVMRealULT,
LLVMRealUGT,
LLVMRealORD,
LLVMRealUNO
};
/* See Table 3-1 ("Comparison Predicate for CMPPD and CMPPS Instructions") in
* Vol. 2A of the Intel SDM.
*/
enum {
SSE_eq_ord_nosignal = 0,
SSE_lt_ord_signal = 1,
SSE_le_ord_signal = 2,
SSE_unord_nosignal = 3,
SSE_neq_unord_nosignal = 4,
SSE_nlt_unord_signal = 5,
SSE_nle_unord_signal = 6,
SSE_ord_nosignal = 7,
};
static MonoLLVMModule aot_module;
static GHashTable *intrins_id_to_intrins;
static LLVMTypeRef i1_t, i2_t, i4_t, i8_t, r4_t, r8_t;
static LLVMTypeRef sse_i1_t, sse_i2_t, sse_i4_t, sse_i8_t, sse_r4_t, sse_r8_t;
static LLVMTypeRef v64_i1_t, v64_i2_t, v64_i4_t, v64_i8_t, v64_r4_t, v64_r8_t;
static LLVMTypeRef v128_i1_t, v128_i2_t, v128_i4_t, v128_i8_t, v128_r4_t, v128_r8_t;
static LLVMTypeRef void_func_t;
static MonoLLVMModule *init_jit_module (void);
static void emit_dbg_loc (EmitContext *ctx, LLVMBuilderRef builder, const unsigned char *cil_code);
static void emit_default_dbg_loc (EmitContext *ctx, LLVMBuilderRef builder);
static LLVMValueRef emit_dbg_subprogram (EmitContext *ctx, MonoCompile *cfg, LLVMValueRef method, const char *name);
static void emit_dbg_info (MonoLLVMModule *module, const char *filename, const char *cu_name);
static void emit_cond_system_exception (EmitContext *ctx, MonoBasicBlock *bb, const char *exc_type, LLVMValueRef cmp, gboolean force_explicit);
static LLVMValueRef get_intrins (EmitContext *ctx, int id);
static LLVMValueRef get_intrins_from_module (LLVMModuleRef lmodule, int id);
static void llvm_jit_finalize_method (EmitContext *ctx);
static void mono_llvm_nonnull_state_update (EmitContext *ctx, LLVMValueRef lcall, MonoMethod *call_method, LLVMValueRef *args, int num_params);
static void mono_llvm_propagate_nonnull_final (GHashTable *all_specializable, MonoLLVMModule *module);
static void create_aot_info_var (MonoLLVMModule *module);
static void set_invariant_load_flag (LLVMValueRef v);
static void set_nonnull_load_flag (LLVMValueRef v);
enum {
INTRIN_scalar = 1 << 0,
INTRIN_vector64 = 1 << 1,
INTRIN_vector128 = 1 << 2,
INTRIN_vectorwidths = 3,
INTRIN_vectormask = 0x7,
INTRIN_int8 = 1 << 3,
INTRIN_int16 = 1 << 4,
INTRIN_int32 = 1 << 5,
INTRIN_int64 = 1 << 6,
INTRIN_float32 = 1 << 7,
INTRIN_float64 = 1 << 8,
INTRIN_elementwidths = 6,
};
typedef uint16_t llvm_ovr_tag_t;
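/* Cached LLVM types indexed by [vector width: 0=scalar, 1=64-bit, 2=128-bit][element: i8, i16, i32, i64, f32, f64]. */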
static LLVMTypeRef intrin_types [INTRIN_vectorwidths][INTRIN_elementwidths];
static const llvm_ovr_tag_t intrin_arm64_ovr [] = {
#define INTRINS(sym, ...) 0,
#define INTRINS_OVR(sym, ...) 0,
#define INTRINS_OVR_2_ARG(sym, ...) 0,
#define INTRINS_OVR_3_ARG(sym, ...) 0,
#define INTRINS_OVR_TAG(sym, _, arch, spec) spec,
#define INTRINS_OVR_TAG_KIND(sym, _, kind, arch, spec) spec,
#include "llvm-intrinsics.h"
};
enum {
INTRIN_kind_ftoi = 1,
INTRIN_kind_widen,
INTRIN_kind_widen_across,
INTRIN_kind_across,
INTRIN_kind_arm64_dot_prod,
};
static const uint8_t intrin_kind [] = {
#define INTRINS(sym, ...) 0,
#define INTRINS_OVR(sym, ...) 0,
#define INTRINS_OVR_2_ARG(sym, ...) 0,
#define INTRINS_OVR_3_ARG(sym, ...) 0,
#define INTRINS_OVR_TAG(sym, _, arch, spec) 0,
#define INTRINS_OVR_TAG_KIND(sym, _, arch, kind, spec) kind,
#include "llvm-intrinsics.h"
};
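/*
* An llvm_ovr_tag_t packs a vector-width mask (bits 0-2) and an element-width
* mask (bits 3-8); the helpers below rewrite one half of the tag while leaving
* the other intact.
*/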
static inline llvm_ovr_tag_t
ovr_tag_force_scalar (llvm_ovr_tag_t tag)
{
return (tag & ~INTRIN_vectormask) | INTRIN_scalar;
}
static inline llvm_ovr_tag_t
ovr_tag_smaller_vector (llvm_ovr_tag_t tag)
{
return (tag & ~INTRIN_vectormask) | ((tag & INTRIN_vectormask) >> 1);
}
static inline llvm_ovr_tag_t
ovr_tag_smaller_elements (llvm_ovr_tag_t tag)
{
return ((tag & ~INTRIN_vectormask) >> 1) | (tag & INTRIN_vectormask);
}
static inline llvm_ovr_tag_t
ovr_tag_corresponding_integer (llvm_ovr_tag_t tag)
{
return ((tag & ~INTRIN_vectormask) >> 2) | (tag & INTRIN_vectormask);
}
static LLVMTypeRef
ovr_tag_to_llvm_type (llvm_ovr_tag_t tag)
{
int vw = 0;
int ew = 0;
if (tag & INTRIN_vector64) vw = 1;
else if (tag & INTRIN_vector128) vw = 2;
if (tag & INTRIN_int16) ew = 1;
else if (tag & INTRIN_int32) ew = 2;
else if (tag & INTRIN_int64) ew = 3;
else if (tag & INTRIN_float32) ew = 4;
else if (tag & INTRIN_float64) ew = 5;
return intrin_types [vw][ew];
}
static int
key_from_id_and_tag (int id, llvm_ovr_tag_t ovr_tag)
{
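/* Pack the 16-bit overload tag above the intrinsic id; ids stay well below 1 << 23, so the two fields cannot collide. */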
return (((int) ovr_tag) << 23) | id;
}
static llvm_ovr_tag_t
ovr_tag_from_mono_vector_class (MonoClass *klass)
{
int size = mono_class_value_size (klass, NULL);
llvm_ovr_tag_t ret = 0;
switch (size) {
case 8: ret |= INTRIN_vector64; break;
case 16: ret |= INTRIN_vector128; break;
}
MonoType *etype = mono_class_get_context (klass)->class_inst->type_argv [0];
switch (etype->type) {
case MONO_TYPE_I1: case MONO_TYPE_U1: ret |= INTRIN_int8; break;
case MONO_TYPE_I2: case MONO_TYPE_U2: ret |= INTRIN_int16; break;
case MONO_TYPE_I4: case MONO_TYPE_U4: ret |= INTRIN_int32; break;
case MONO_TYPE_I8: case MONO_TYPE_U8: ret |= INTRIN_int64; break;
case MONO_TYPE_R4: ret |= INTRIN_float32; break;
case MONO_TYPE_R8: ret |= INTRIN_float64; break;
}
return ret;
}
static llvm_ovr_tag_t
ovr_tag_from_llvm_type (LLVMTypeRef type)
{
llvm_ovr_tag_t ret = 0;
LLVMTypeKind kind = LLVMGetTypeKind (type);
LLVMTypeRef elem_t = NULL;
switch (kind) {
case LLVMVectorTypeKind: {
elem_t = LLVMGetElementType (type);
unsigned int bits = mono_llvm_get_prim_size_bits (type);
switch (bits) {
case 64: ret |= INTRIN_vector64; break;
case 128: ret |= INTRIN_vector128; break;
default: g_assert_not_reached ();
}
break;
}
default:
g_assert_not_reached ();
}
if (elem_t == i1_t) ret |= INTRIN_int8;
if (elem_t == i2_t) ret |= INTRIN_int16;
if (elem_t == i4_t) ret |= INTRIN_int32;
if (elem_t == i8_t) ret |= INTRIN_int64;
if (elem_t == r4_t) ret |= INTRIN_float32;
if (elem_t == r8_t) ret |= INTRIN_float64;
return ret;
}
static inline void
set_failure (EmitContext *ctx, const char *message)
{
TRACE_FAILURE (message);
ctx->cfg->exception_message = g_strdup (message);
ctx->cfg->disable_llvm = TRUE;
}
static LLVMValueRef
const_int1 (int v)
{
return LLVMConstInt (LLVMInt1Type (), v ? 1 : 0, FALSE);
}
static LLVMValueRef
const_int8 (int v)
{
return LLVMConstInt (LLVMInt8Type (), v, FALSE);
}
static LLVMValueRef
const_int32 (int v)
{
return LLVMConstInt (LLVMInt32Type (), v, FALSE);
}
static LLVMValueRef
const_int64 (int64_t v)
{
return LLVMConstInt (LLVMInt64Type (), v, FALSE);
}
/*
* IntPtrType:
*
* The LLVM type with width == TARGET_SIZEOF_VOID_P
*/
static LLVMTypeRef
IntPtrType (void)
{
return TARGET_SIZEOF_VOID_P == 8 ? LLVMInt64Type () : LLVMInt32Type ();
}
static LLVMTypeRef
ObjRefType (void)
{
return TARGET_SIZEOF_VOID_P == 8 ? LLVMPointerType (LLVMInt64Type (), 0) : LLVMPointerType (LLVMInt32Type (), 0);
}
static LLVMTypeRef
ThisType (void)
{
return TARGET_SIZEOF_VOID_P == 8 ? LLVMPointerType (LLVMInt64Type (), 0) : LLVMPointerType (LLVMInt32Type (), 0);
}
typedef struct {
int32_t size;
uint32_t align;
} MonoSizeAlign;
/*
* get_vtype_size:
*
* Return the size of the LLVM representation of the vtype T.
*/
static MonoSizeAlign
get_vtype_size_align (MonoType *t)
{
uint32_t align = 0;
int32_t size = mono_class_value_size (mono_class_from_mono_type_internal (t), &align);
/* LLVMArgAsIArgs depends on this since it stores whole words */
while (size < 2 * TARGET_SIZEOF_VOID_P && mono_is_power_of_two (size) == -1)
size ++;
MonoSizeAlign ret = { size, align };
return ret;
}
/*
* simd_class_to_llvm_type:
*
* Return the LLVM type corresponding to the Mono.SIMD class KLASS
*/
static LLVMTypeRef
simd_class_to_llvm_type (EmitContext *ctx, MonoClass *klass)
{
const char *klass_name = m_class_get_name (klass);
if (!strcmp (klass_name, "Vector2d")) {
return LLVMVectorType (LLVMDoubleType (), 2);
} else if (!strcmp (klass_name, "Vector2l")) {
return LLVMVectorType (LLVMInt64Type (), 2);
} else if (!strcmp (klass_name, "Vector2ul")) {
return LLVMVectorType (LLVMInt64Type (), 2);
} else if (!strcmp (klass_name, "Vector4i")) {
return LLVMVectorType (LLVMInt32Type (), 4);
} else if (!strcmp (klass_name, "Vector4ui")) {
return LLVMVectorType (LLVMInt32Type (), 4);
} else if (!strcmp (klass_name, "Vector4f")) {
return LLVMVectorType (LLVMFloatType (), 4);
} else if (!strcmp (klass_name, "Vector8s")) {
return LLVMVectorType (LLVMInt16Type (), 8);
} else if (!strcmp (klass_name, "Vector8us")) {
return LLVMVectorType (LLVMInt16Type (), 8);
} else if (!strcmp (klass_name, "Vector16sb")) {
return LLVMVectorType (LLVMInt8Type (), 16);
} else if (!strcmp (klass_name, "Vector16b")) {
return LLVMVectorType (LLVMInt8Type (), 16);
} else if (!strcmp (klass_name, "Vector2")) {
/* System.Numerics */
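/* Vector2 and Vector3 are widened to 4 floats so they share Vector4's 128-bit layout. */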
return LLVMVectorType (LLVMFloatType (), 4);
} else if (!strcmp (klass_name, "Vector3")) {
return LLVMVectorType (LLVMFloatType (), 4);
} else if (!strcmp (klass_name, "Vector4")) {
return LLVMVectorType (LLVMFloatType (), 4);
} else if (!strcmp (klass_name, "Vector`1") || !strcmp (klass_name, "Vector64`1") || !strcmp (klass_name, "Vector128`1") || !strcmp (klass_name, "Vector256`1")) {
MonoType *etype = mono_class_get_generic_class (klass)->context.class_inst->type_argv [0];
int size = mono_class_value_size (klass, NULL);
switch (etype->type) {
case MONO_TYPE_I1:
case MONO_TYPE_U1:
return LLVMVectorType (LLVMInt8Type (), size);
case MONO_TYPE_I2:
case MONO_TYPE_U2:
return LLVMVectorType (LLVMInt16Type (), size / 2);
case MONO_TYPE_I4:
case MONO_TYPE_U4:
return LLVMVectorType (LLVMInt32Type (), size / 4);
case MONO_TYPE_I8:
case MONO_TYPE_U8:
return LLVMVectorType (LLVMInt64Type (), size / 8);
case MONO_TYPE_I:
case MONO_TYPE_U:
#if TARGET_SIZEOF_VOID_P == 8
return LLVMVectorType (LLVMInt64Type (), size / 8);
#else
return LLVMVectorType (LLVMInt32Type (), size / 4);
#endif
case MONO_TYPE_R4:
return LLVMVectorType (LLVMFloatType (), size / 4);
case MONO_TYPE_R8:
return LLVMVectorType (LLVMDoubleType (), size / 8);
default:
g_assert_not_reached ();
return NULL;
}
} else {
printf ("%s\n", klass_name);
NOT_IMPLEMENTED;
return NULL;
}
}
static LLVMTypeRef
simd_valuetuple_to_llvm_type (EmitContext *ctx, MonoClass *klass)
{
const char *klass_name = m_class_get_name (klass);
if (!strcmp (klass_name, "ValueTuple`2")) {
MonoType *etype = mono_class_get_generic_class (klass)->context.class_inst->type_argv [0];
if (etype->type != MONO_TYPE_GENERICINST)
g_assert_not_reached ();
MonoClass *eklass = etype->data.generic_class->cached_class;
LLVMTypeRef ltype = simd_class_to_llvm_type (ctx, eklass);
return LLVMArrayType (ltype, 2);
}
g_assert_not_reached ();
}
/* Return the 128 bit SIMD type corresponding to the mono type TYPE */
static inline G_GNUC_UNUSED LLVMTypeRef
type_to_sse_type (int type)
{
switch (type) {
case MONO_TYPE_I1:
case MONO_TYPE_U1:
return LLVMVectorType (LLVMInt8Type (), 16);
case MONO_TYPE_U2:
case MONO_TYPE_I2:
return LLVMVectorType (LLVMInt16Type (), 8);
case MONO_TYPE_U4:
case MONO_TYPE_I4:
return LLVMVectorType (LLVMInt32Type (), 4);
case MONO_TYPE_U8:
case MONO_TYPE_I8:
return LLVMVectorType (LLVMInt64Type (), 2);
case MONO_TYPE_I:
case MONO_TYPE_U:
#if TARGET_SIZEOF_VOID_P == 8
return LLVMVectorType (LLVMInt64Type (), 2);
#else
return LLVMVectorType (LLVMInt32Type (), 4);
#endif
case MONO_TYPE_R8:
return LLVMVectorType (LLVMDoubleType (), 2);
case MONO_TYPE_R4:
return LLVMVectorType (LLVMFloatType (), 4);
default:
g_assert_not_reached ();
return NULL;
}
}
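/*
 * E.g. MONO_TYPE_I2 maps to <8 x i16> and MONO_TYPE_R8 to <2 x double>, i.e.
 * whatever element count fills a 128 bit register.
 */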
static LLVMTypeRef
create_llvm_type_for_type (MonoLLVMModule *module, MonoClass *klass)
{
int i, size, nfields, esize;
LLVMTypeRef *eltypes;
char *name;
MonoType *t;
LLVMTypeRef ltype;
t = m_class_get_byval_arg (klass);
if (mini_type_is_hfa (t, &nfields, &esize)) {
/*
* This is needed on arm64 where HFAs are returned in
* registers.
*/
/* SIMD types have size 16 in mono_class_value_size () */
if (m_class_is_simd_type (klass))
nfields = 16 / esize;
size = nfields;
eltypes = g_new (LLVMTypeRef, size);
for (i = 0; i < size; ++i)
eltypes [i] = esize == 4 ? LLVMFloatType () : LLVMDoubleType ();
} else {
MonoSizeAlign size_align = get_vtype_size_align (t);
eltypes = g_new (LLVMTypeRef, size_align.size);
size = 0;
uint32_t bytes = 0;
uint32_t chunk = size_align.align < TARGET_SIZEOF_VOID_P ? size_align.align : TARGET_SIZEOF_VOID_P;
for (; chunk > 0; chunk = chunk >> 1) {
for (; (bytes + chunk) <= size_align.size; bytes += chunk) {
eltypes [size] = LLVMIntType (chunk * 8);
++size;
}
}
}
name = mono_type_full_name (m_class_get_byval_arg (klass));
ltype = LLVMStructCreateNamed (module->context, name);
LLVMStructSetBody (ltype, eltypes, size, FALSE);
g_free (eltypes);
g_free (name);
return ltype;
}
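/*
 * Sketch of the non-HFA decomposition above (assuming a 64-bit target): the
 * (already rounded) size from get_vtype_size_align () is split greedily into
 * integer chunks no larger than the alignment, so a 16 byte struct with align
 * 8 becomes { i64, i64 }, while a 12 byte struct with align 4 is first rounded
 * up to 16 and becomes { i32, i32, i32, i32 }.
 */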
static LLVMTypeRef
primitive_type_to_llvm_type (MonoTypeEnum type)
{
switch (type) {
case MONO_TYPE_I1:
case MONO_TYPE_U1:
return LLVMInt8Type ();
case MONO_TYPE_I2:
case MONO_TYPE_U2:
return LLVMInt16Type ();
case MONO_TYPE_I4:
case MONO_TYPE_U4:
return LLVMInt32Type ();
case MONO_TYPE_I8:
case MONO_TYPE_U8:
return LLVMInt64Type ();
case MONO_TYPE_R4:
return LLVMFloatType ();
case MONO_TYPE_R8:
return LLVMDoubleType ();
case MONO_TYPE_I:
case MONO_TYPE_U:
return IntPtrType ();
default:
return NULL;
}
}
static MonoTypeEnum
inst_c1_type (const MonoInst *ins)
{
return (MonoTypeEnum)ins->inst_c1;
}
/*
* type_to_llvm_type:
*
* Return the LLVM type corresponding to T.
*/
static LLVMTypeRef
type_to_llvm_type (EmitContext *ctx, MonoType *t)
{
if (m_type_is_byref (t))
return ThisType ();
t = mini_get_underlying_type (t);
LLVMTypeRef prim_llvm_type = primitive_type_to_llvm_type (t->type);
if (prim_llvm_type != NULL)
return prim_llvm_type;
switch (t->type) {
case MONO_TYPE_VOID:
return LLVMVoidType ();
case MONO_TYPE_OBJECT:
return ObjRefType ();
case MONO_TYPE_PTR:
case MONO_TYPE_FNPTR: {
MonoClass *klass = mono_class_from_mono_type_internal (t);
MonoClass *ptr_klass = m_class_get_element_class (klass);
MonoType *ptr_type = m_class_get_byval_arg (ptr_klass);
/* Handle primitive pointers */
switch (ptr_type->type) {
case MONO_TYPE_I1:
case MONO_TYPE_I2:
case MONO_TYPE_I4:
case MONO_TYPE_U1:
case MONO_TYPE_U2:
case MONO_TYPE_U4:
return LLVMPointerType (type_to_llvm_type (ctx, ptr_type), 0);
}
return ObjRefType ();
}
case MONO_TYPE_VAR:
case MONO_TYPE_MVAR:
/* Because of generic sharing */
return ObjRefType ();
case MONO_TYPE_GENERICINST:
if (!mono_type_generic_inst_is_valuetype (t))
return ObjRefType ();
/* Fall through */
case MONO_TYPE_VALUETYPE:
case MONO_TYPE_TYPEDBYREF: {
MonoClass *klass;
LLVMTypeRef ltype;
klass = mono_class_from_mono_type_internal (t);
if (MONO_CLASS_IS_SIMD (ctx->cfg, klass))
return simd_class_to_llvm_type (ctx, klass);
if (m_class_is_enumtype (klass))
return type_to_llvm_type (ctx, mono_class_enum_basetype_internal (klass));
ltype = (LLVMTypeRef)g_hash_table_lookup (ctx->module->llvm_types, klass);
if (!ltype) {
ltype = create_llvm_type_for_type (ctx->module, klass);
g_hash_table_insert (ctx->module->llvm_types, klass, ltype);
}
return ltype;
}
default:
printf ("X: %d\n", t->type);
ctx->cfg->exception_message = g_strdup_printf ("type %s", mono_type_full_name (t));
ctx->cfg->disable_llvm = TRUE;
return NULL;
}
}
static gboolean
primitive_type_is_unsigned (MonoTypeEnum t)
{
switch (t) {
case MONO_TYPE_U1:
case MONO_TYPE_U2:
case MONO_TYPE_CHAR:
case MONO_TYPE_U4:
case MONO_TYPE_U8:
case MONO_TYPE_U:
return TRUE;
default:
return FALSE;
}
}
/*
* type_is_unsigned:
*
* Return whether T is an unsigned int type.
*/
static gboolean
type_is_unsigned (EmitContext *ctx, MonoType *t)
{
t = mini_get_underlying_type (t);
if (m_type_is_byref (t))
return FALSE;
return primitive_type_is_unsigned (t->type);
}
/*
* type_to_llvm_arg_type:
*
* Same as type_to_llvm_type, but treat i8/i16 as i32.
*/
static LLVMTypeRef
type_to_llvm_arg_type (EmitContext *ctx, MonoType *t)
{
LLVMTypeRef ptype = type_to_llvm_type (ctx, t);
if (ctx->cfg->llvm_only)
return ptype;
/*
* This works on all ABIs except arm64/ios, which passes multiple
* arguments in one stack slot.
*/
#ifndef TARGET_ARM64
if (ptype == LLVMInt8Type () || ptype == LLVMInt16Type ()) {
/*
* LLVM generates code which only sets the lower bits, while JITted
* code expects all the bits to be set.
*/
ptype = LLVMInt32Type ();
}
#endif
return ptype;
}
/*
* llvm_type_to_stack_type:
*
* Return the LLVM type which needs to be used when a value of type TYPE is pushed
* on the IL stack.
*/
static G_GNUC_UNUSED LLVMTypeRef
llvm_type_to_stack_type (MonoCompile *cfg, LLVMTypeRef type)
{
if (type == NULL)
return NULL;
if (type == LLVMInt8Type ())
return LLVMInt32Type ();
else if (type == LLVMInt16Type ())
return LLVMInt32Type ();
else if (!cfg->r4fp && type == LLVMFloatType ())
return LLVMDoubleType ();
else
return type;
}
/*
* regtype_to_llvm_type:
*
* Return the LLVM type corresponding to the regtype C used in instruction
* descriptions.
*/
static LLVMTypeRef
regtype_to_llvm_type (char c)
{
switch (c) {
case 'i':
return LLVMInt32Type ();
case 'l':
return LLVMInt64Type ();
case 'f':
return LLVMDoubleType ();
default:
return NULL;
}
}
/*
* op_to_llvm_type:
*
* Return the LLVM type corresponding to the unary/binary opcode OPCODE.
*/
static LLVMTypeRef
op_to_llvm_type (int opcode)
{
switch (opcode) {
case OP_ICONV_TO_I1:
case OP_LCONV_TO_I1:
return LLVMInt8Type ();
case OP_ICONV_TO_U1:
case OP_LCONV_TO_U1:
return LLVMInt8Type ();
case OP_ICONV_TO_I2:
case OP_LCONV_TO_I2:
return LLVMInt16Type ();
case OP_ICONV_TO_U2:
case OP_LCONV_TO_U2:
return LLVMInt16Type ();
case OP_ICONV_TO_I4:
case OP_LCONV_TO_I4:
return LLVMInt32Type ();
case OP_ICONV_TO_U4:
case OP_LCONV_TO_U4:
return LLVMInt32Type ();
case OP_ICONV_TO_I8:
return LLVMInt64Type ();
case OP_ICONV_TO_R4:
return LLVMFloatType ();
case OP_ICONV_TO_R8:
return LLVMDoubleType ();
case OP_ICONV_TO_U8:
return LLVMInt64Type ();
case OP_FCONV_TO_I4:
return LLVMInt32Type ();
case OP_FCONV_TO_I8:
return LLVMInt64Type ();
case OP_FCONV_TO_I1:
case OP_FCONV_TO_U1:
case OP_RCONV_TO_I1:
case OP_RCONV_TO_U1:
return LLVMInt8Type ();
case OP_FCONV_TO_I2:
case OP_FCONV_TO_U2:
case OP_RCONV_TO_I2:
case OP_RCONV_TO_U2:
return LLVMInt16Type ();
case OP_FCONV_TO_U4:
case OP_RCONV_TO_U4:
return LLVMInt32Type ();
case OP_FCONV_TO_U8:
case OP_RCONV_TO_U8:
return LLVMInt64Type ();
case OP_FCONV_TO_I:
case OP_RCONV_TO_I:
return TARGET_SIZEOF_VOID_P == 8 ? LLVMInt64Type () : LLVMInt32Type ();
case OP_IADD_OVF:
case OP_IADD_OVF_UN:
case OP_ISUB_OVF:
case OP_ISUB_OVF_UN:
case OP_IMUL_OVF:
case OP_IMUL_OVF_UN:
return LLVMInt32Type ();
case OP_LADD_OVF:
case OP_LADD_OVF_UN:
case OP_LSUB_OVF:
case OP_LSUB_OVF_UN:
case OP_LMUL_OVF:
case OP_LMUL_OVF_UN:
return LLVMInt64Type ();
default:
printf ("%s\n", mono_inst_name (opcode));
g_assert_not_reached ();
return NULL;
}
}
#define CLAUSE_START(clause) ((clause)->try_offset)
#define CLAUSE_END(clause) (((clause))->try_offset + ((clause))->try_len)
/*
* load_store_to_llvm_type:
*
* Return the size/sign/zero extension corresponding to the load/store opcode
* OPCODE.
*/
static LLVMTypeRef
load_store_to_llvm_type (int opcode, int *size, gboolean *sext, gboolean *zext)
{
*sext = FALSE;
*zext = FALSE;
switch (opcode) {
case OP_LOADI1_MEMBASE:
case OP_STOREI1_MEMBASE_REG:
case OP_STOREI1_MEMBASE_IMM:
case OP_ATOMIC_LOAD_I1:
case OP_ATOMIC_STORE_I1:
*size = 1;
*sext = TRUE;
return LLVMInt8Type ();
case OP_LOADU1_MEMBASE:
case OP_LOADU1_MEM:
case OP_ATOMIC_LOAD_U1:
case OP_ATOMIC_STORE_U1:
*size = 1;
*zext = TRUE;
return LLVMInt8Type ();
case OP_LOADI2_MEMBASE:
case OP_STOREI2_MEMBASE_REG:
case OP_STOREI2_MEMBASE_IMM:
case OP_ATOMIC_LOAD_I2:
case OP_ATOMIC_STORE_I2:
*size = 2;
*sext = TRUE;
return LLVMInt16Type ();
case OP_LOADU2_MEMBASE:
case OP_LOADU2_MEM:
case OP_ATOMIC_LOAD_U2:
case OP_ATOMIC_STORE_U2:
*size = 2;
*zext = TRUE;
return LLVMInt16Type ();
case OP_LOADI4_MEMBASE:
case OP_LOADU4_MEMBASE:
case OP_LOADI4_MEM:
case OP_LOADU4_MEM:
case OP_STOREI4_MEMBASE_REG:
case OP_STOREI4_MEMBASE_IMM:
case OP_ATOMIC_LOAD_I4:
case OP_ATOMIC_STORE_I4:
case OP_ATOMIC_LOAD_U4:
case OP_ATOMIC_STORE_U4:
*size = 4;
return LLVMInt32Type ();
case OP_LOADI8_MEMBASE:
case OP_LOADI8_MEM:
case OP_STOREI8_MEMBASE_REG:
case OP_STOREI8_MEMBASE_IMM:
case OP_ATOMIC_LOAD_I8:
case OP_ATOMIC_STORE_I8:
case OP_ATOMIC_LOAD_U8:
case OP_ATOMIC_STORE_U8:
*size = 8;
return LLVMInt64Type ();
case OP_LOADR4_MEMBASE:
case OP_STORER4_MEMBASE_REG:
case OP_ATOMIC_LOAD_R4:
case OP_ATOMIC_STORE_R4:
*size = 4;
return LLVMFloatType ();
case OP_LOADR8_MEMBASE:
case OP_STORER8_MEMBASE_REG:
case OP_ATOMIC_LOAD_R8:
case OP_ATOMIC_STORE_R8:
*size = 8;
return LLVMDoubleType ();
case OP_LOAD_MEMBASE:
case OP_LOAD_MEM:
case OP_STORE_MEMBASE_REG:
case OP_STORE_MEMBASE_IMM:
*size = TARGET_SIZEOF_VOID_P;
return IntPtrType ();
default:
g_assert_not_reached ();
return NULL;
}
}
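/*
 * For example, OP_LOADI2_MEMBASE yields i16 with *size = 2 and *sext = TRUE,
 * telling the caller to sign extend the loaded value, while OP_LOADU2_MEMBASE
 * yields the same i16 but with *zext = TRUE instead.
 */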
/*
* ovf_op_to_intrins:
*
* Return the LLVM intrinsics corresponding to the overflow opcode OPCODE.
*/
static IntrinsicId
ovf_op_to_intrins (int opcode)
{
switch (opcode) {
case OP_IADD_OVF:
return INTRINS_SADD_OVF_I32;
case OP_IADD_OVF_UN:
return INTRINS_UADD_OVF_I32;
case OP_ISUB_OVF:
return INTRINS_SSUB_OVF_I32;
case OP_ISUB_OVF_UN:
return INTRINS_USUB_OVF_I32;
case OP_IMUL_OVF:
return INTRINS_SMUL_OVF_I32;
case OP_IMUL_OVF_UN:
return INTRINS_UMUL_OVF_I32;
case OP_LADD_OVF:
return INTRINS_SADD_OVF_I64;
case OP_LADD_OVF_UN:
return INTRINS_UADD_OVF_I64;
case OP_LSUB_OVF:
return INTRINS_SSUB_OVF_I64;
case OP_LSUB_OVF_UN:
return INTRINS_USUB_OVF_I64;
case OP_LMUL_OVF:
return INTRINS_SMUL_OVF_I64;
case OP_LMUL_OVF_UN:
return INTRINS_UMUL_OVF_I64;
default:
g_assert_not_reached ();
return (IntrinsicId)0;
}
}
static IntrinsicId
simd_ins_to_intrins (int opcode)
{
switch (opcode) {
#if defined(TARGET_X86) || defined(TARGET_AMD64)
case OP_CVTPD2DQ:
return INTRINS_SSE_CVTPD2DQ;
case OP_CVTPS2DQ:
return INTRINS_SSE_CVTPS2DQ;
case OP_CVTPD2PS:
return INTRINS_SSE_CVTPD2PS;
case OP_CVTTPD2DQ:
return INTRINS_SSE_CVTTPD2DQ;
case OP_CVTTPS2DQ:
return INTRINS_SSE_CVTTPS2DQ;
case OP_SSE_SQRTSS:
return INTRINS_SSE_SQRT_SS;
case OP_SSE2_SQRTSD:
return INTRINS_SSE_SQRT_SD;
#endif
default:
g_assert_not_reached ();
return (IntrinsicId)0;
}
}
static LLVMTypeRef
simd_op_to_llvm_type (int opcode)
{
#if defined(TARGET_X86) || defined(TARGET_AMD64)
switch (opcode) {
case OP_EXTRACT_R8:
case OP_EXPAND_R8:
return sse_r8_t;
case OP_EXTRACT_I8:
case OP_EXPAND_I8:
return sse_i8_t;
case OP_EXTRACT_I4:
case OP_EXPAND_I4:
return sse_i4_t;
case OP_EXTRACT_I2:
case OP_EXTRACTX_U2:
case OP_EXPAND_I2:
return sse_i2_t;
case OP_EXTRACT_I1:
case OP_EXPAND_I1:
return sse_i1_t;
case OP_EXTRACT_R4:
case OP_EXPAND_R4:
return sse_r4_t;
case OP_CVTPD2DQ:
case OP_CVTPD2PS:
case OP_CVTTPD2DQ:
return sse_r8_t;
case OP_CVTPS2DQ:
case OP_CVTTPS2DQ:
return sse_r4_t;
case OP_SQRTPS:
case OP_RSQRTPS:
case OP_DUPPS_LOW:
case OP_DUPPS_HIGH:
return sse_r4_t;
case OP_SQRTPD:
case OP_DUPPD:
return sse_r8_t;
default:
g_assert_not_reached ();
return NULL;
}
#else
return NULL;
#endif
}
static void
set_cold_cconv (LLVMValueRef func)
{
/*
* xcode10 (watchOS) and ARM/ARM64 don't seem to support preserveall; it fails with:
* fatal error: error in backend: Unsupported calling convention
*/
#if !defined(TARGET_WATCHOS) && !defined(TARGET_ARM) && !defined(TARGET_ARM64)
LLVMSetFunctionCallConv (func, LLVMColdCallConv);
#endif
}
static void
set_call_cold_cconv (LLVMValueRef func)
{
#if !defined(TARGET_WATCHOS) && !defined(TARGET_ARM) && !defined(TARGET_ARM64)
LLVMSetInstructionCallConv (func, LLVMColdCallConv);
#endif
}
/*
* get_bb:
*
* Return the LLVM basic block corresponding to BB.
*/
static LLVMBasicBlockRef
get_bb (EmitContext *ctx, MonoBasicBlock *bb)
{
char bb_name_buf [128];
char *bb_name;
if (ctx->bblocks [bb->block_num].bblock == NULL) {
if (bb->flags & BB_EXCEPTION_HANDLER) {
int clause_index = (mono_get_block_region_notry (ctx->cfg, bb->region) >> 8) - 1;
sprintf (bb_name_buf, "EH_CLAUSE%d_BB%d", clause_index, bb->block_num);
bb_name = bb_name_buf;
} else if (bb->block_num < 256) {
if (!ctx->module->bb_names) {
ctx->module->bb_names_len = 256;
ctx->module->bb_names = g_new0 (char*, ctx->module->bb_names_len);
}
if (!ctx->module->bb_names [bb->block_num]) {
char *n;
n = g_strdup_printf ("BB%d", bb->block_num);
mono_memory_barrier ();
ctx->module->bb_names [bb->block_num] = n;
}
bb_name = ctx->module->bb_names [bb->block_num];
} else {
sprintf (bb_name_buf, "BB%d", bb->block_num);
bb_name = bb_name_buf;
}
ctx->bblocks [bb->block_num].bblock = LLVMAppendBasicBlock (ctx->lmethod, bb_name);
ctx->bblocks [bb->block_num].end_bblock = ctx->bblocks [bb->block_num].bblock;
}
return ctx->bblocks [bb->block_num].bblock;
}
/*
* get_end_bb:
*
* Return the last LLVM bblock corresponding to BB.
* This might not be equal to the bb returned by get_bb () since we need to generate
* multiple LLVM bblocks for a mono bblock to handle throwing exceptions.
*/
static LLVMBasicBlockRef
get_end_bb (EmitContext *ctx, MonoBasicBlock *bb)
{
get_bb (ctx, bb);
return ctx->bblocks [bb->block_num].end_bblock;
}
static LLVMBasicBlockRef
gen_bb (EmitContext *ctx, const char *prefix)
{
char bb_name [128];
sprintf (bb_name, "%s%d", prefix, ++ ctx->ex_index);
return LLVMAppendBasicBlock (ctx->lmethod, bb_name);
}
/*
* resolve_patch:
*
* Return the target of the patch identified by TYPE and TARGET.
*/
static gpointer
resolve_patch (MonoCompile *cfg, MonoJumpInfoType type, gconstpointer target)
{
MonoJumpInfo ji;
ERROR_DECL (error);
gpointer res;
memset (&ji, 0, sizeof (ji));
ji.type = type;
ji.data.target = target;
res = mono_resolve_patch_target (cfg->method, NULL, &ji, FALSE, error);
mono_error_assert_ok (error);
return res;
}
/*
* convert_full:
*
* Emit code to convert the LLVM value V to DTYPE.
*/
static LLVMValueRef
convert_full (EmitContext *ctx, LLVMValueRef v, LLVMTypeRef dtype, gboolean is_unsigned)
{
LLVMTypeRef stype = LLVMTypeOf (v);
if (stype != dtype) {
gboolean ext = FALSE;
/* Extend */
if (dtype == LLVMInt64Type () && (stype == LLVMInt32Type () || stype == LLVMInt16Type () || stype == LLVMInt8Type ()))
ext = TRUE;
else if (dtype == LLVMInt32Type () && (stype == LLVMInt16Type () || stype == LLVMInt8Type ()))
ext = TRUE;
else if (dtype == LLVMInt16Type () && (stype == LLVMInt8Type ()))
ext = TRUE;
if (ext)
return is_unsigned ? LLVMBuildZExt (ctx->builder, v, dtype, "") : LLVMBuildSExt (ctx->builder, v, dtype, "");
if (dtype == LLVMDoubleType () && stype == LLVMFloatType ())
return LLVMBuildFPExt (ctx->builder, v, dtype, "");
/* Trunc */
if (stype == LLVMInt64Type () && (dtype == LLVMInt32Type () || dtype == LLVMInt16Type () || dtype == LLVMInt8Type ()))
return LLVMBuildTrunc (ctx->builder, v, dtype, "");
if (stype == LLVMInt32Type () && (dtype == LLVMInt16Type () || dtype == LLVMInt8Type ()))
return LLVMBuildTrunc (ctx->builder, v, dtype, "");
if (stype == LLVMInt16Type () && dtype == LLVMInt8Type ())
return LLVMBuildTrunc (ctx->builder, v, dtype, "");
if (stype == LLVMDoubleType () && dtype == LLVMFloatType ())
return LLVMBuildFPTrunc (ctx->builder, v, dtype, "");
if (LLVMGetTypeKind (stype) == LLVMPointerTypeKind && LLVMGetTypeKind (dtype) == LLVMPointerTypeKind)
return LLVMBuildBitCast (ctx->builder, v, dtype, "");
if (LLVMGetTypeKind (dtype) == LLVMPointerTypeKind)
return LLVMBuildIntToPtr (ctx->builder, v, dtype, "");
if (LLVMGetTypeKind (stype) == LLVMPointerTypeKind)
return LLVMBuildPtrToInt (ctx->builder, v, dtype, "");
if (mono_arch_is_soft_float ()) {
if (stype == LLVMInt32Type () && dtype == LLVMFloatType ())
return LLVMBuildBitCast (ctx->builder, v, dtype, "");
if (stype == LLVMInt32Type () && dtype == LLVMDoubleType ())
return LLVMBuildBitCast (ctx->builder, LLVMBuildZExt (ctx->builder, v, LLVMInt64Type (), ""), dtype, "");
}
if (LLVMGetTypeKind (stype) == LLVMVectorTypeKind && LLVMGetTypeKind (dtype) == LLVMVectorTypeKind) {
if (mono_llvm_get_prim_size_bits (stype) == mono_llvm_get_prim_size_bits (dtype))
return LLVMBuildBitCast (ctx->builder, v, dtype, "");
}
mono_llvm_dump_value (v);
mono_llvm_dump_type (dtype);
printf ("\n");
g_assert_not_reached ();
return NULL;
} else {
return v;
}
}
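/*
 * For example, converting an i8 value to i64 emits a zext when IS_UNSIGNED is
 * set and a sext otherwise, i64 to i32 emits a trunc, and pointer-to-pointer
 * conversions become plain bitcasts; combinations not handled above (e.g. int
 * to float, outside the soft float special case) assert.
 */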
static LLVMValueRef
convert (EmitContext *ctx, LLVMValueRef v, LLVMTypeRef dtype)
{
return convert_full (ctx, v, dtype, FALSE);
}
static void
emit_memset (EmitContext *ctx, LLVMBuilderRef builder, LLVMValueRef v, LLVMValueRef size, int alignment)
{
LLVMValueRef args [5];
int aindex = 0;
args [aindex ++] = v;
args [aindex ++] = LLVMConstInt (LLVMInt8Type (), 0, FALSE);
args [aindex ++] = size;
args [aindex ++] = LLVMConstInt (LLVMInt1Type (), 0, FALSE);
LLVMBuildCall (builder, get_intrins (ctx, INTRINS_MEMSET), args, aindex, "");
}
/*
* emit_volatile_load:
*
* If vreg is volatile, emit a load from its address.
*/
static LLVMValueRef
emit_volatile_load (EmitContext *ctx, int vreg)
{
MonoType *t;
LLVMValueRef v;
// On arm64, we pass the rgctx in a callee saved register (x15), and llvm
// might keep the value in that register even though the register is marked
// as 'reserved' inside llvm, so the load has to be volatile.
v = mono_llvm_build_load (ctx->builder, ctx->addresses [vreg], "", TRUE);
t = ctx->vreg_cli_types [vreg];
if (t && !m_type_is_byref (t)) {
/*
* Might have to zero extend since llvm doesn't have
* unsigned types.
*/
if (t->type == MONO_TYPE_U1 || t->type == MONO_TYPE_U2 || t->type == MONO_TYPE_CHAR || t->type == MONO_TYPE_BOOLEAN)
v = LLVMBuildZExt (ctx->builder, v, LLVMInt32Type (), "");
else if (t->type == MONO_TYPE_I1 || t->type == MONO_TYPE_I2)
v = LLVMBuildSExt (ctx->builder, v, LLVMInt32Type (), "");
else if (t->type == MONO_TYPE_U8)
v = LLVMBuildZExt (ctx->builder, v, LLVMInt64Type (), "");
}
return v;
}
/*
* emit_volatile_store:
*
* If VREG is volatile, emit a store from its value to its address.
*/
static void
emit_volatile_store (EmitContext *ctx, int vreg)
{
MonoInst *var = get_vreg_to_inst (ctx->cfg, vreg);
if (var && var->flags & (MONO_INST_VOLATILE|MONO_INST_INDIRECT)) {
g_assert (ctx->addresses [vreg]);
#ifdef TARGET_WASM
/* Need volatile stores, otherwise the compiler might move them */
mono_llvm_build_store (ctx->builder, convert (ctx, ctx->values [vreg], type_to_llvm_type (ctx, var->inst_vtype)), ctx->addresses [vreg], TRUE, LLVM_BARRIER_NONE);
#else
LLVMBuildStore (ctx->builder, convert (ctx, ctx->values [vreg], type_to_llvm_type (ctx, var->inst_vtype)), ctx->addresses [vreg]);
#endif
}
}
static LLVMTypeRef
sig_to_llvm_sig_no_cinfo (EmitContext *ctx, MonoMethodSignature *sig)
{
LLVMTypeRef ret_type;
LLVMTypeRef *param_types = NULL;
LLVMTypeRef res;
int i, pindex;
ret_type = type_to_llvm_type (ctx, sig->ret);
if (!ctx_ok (ctx))
return NULL;
param_types = g_new0 (LLVMTypeRef, (sig->param_count * 8) + 3);
pindex = 0;
if (sig->hasthis)
param_types [pindex ++] = ThisType ();
for (i = 0; i < sig->param_count; ++i)
param_types [pindex ++] = type_to_llvm_arg_type (ctx, sig->params [i]);
if (!ctx_ok (ctx)) {
g_free (param_types);
return NULL;
}
res = LLVMFunctionType (ret_type, param_types, pindex, FALSE);
g_free (param_types);
return res;
}
/*
* sig_to_llvm_sig_full:
*
* Return the LLVM signature corresponding to the mono signature SIG using the
* calling convention information in CINFO. Fill out the parameter mapping information in CINFO.
*/
static LLVMTypeRef
sig_to_llvm_sig_full (EmitContext *ctx, MonoMethodSignature *sig, LLVMCallInfo *cinfo)
{
LLVMTypeRef ret_type;
LLVMTypeRef *param_types = NULL;
LLVMTypeRef res;
int i, j, pindex, vret_arg_pindex = 0;
gboolean vretaddr = FALSE;
MonoType *rtype;
if (!cinfo)
return sig_to_llvm_sig_no_cinfo (ctx, sig);
ret_type = type_to_llvm_type (ctx, sig->ret);
if (!ctx_ok (ctx))
return NULL;
rtype = mini_get_underlying_type (sig->ret);
switch (cinfo->ret.storage) {
case LLVMArgVtypeInReg:
/* LLVM models this by returning an aggregate value */
if (cinfo->ret.pair_storage [0] == LLVMArgInIReg && cinfo->ret.pair_storage [1] == LLVMArgNone) {
LLVMTypeRef members [2];
members [0] = IntPtrType ();
ret_type = LLVMStructType (members, 1, FALSE);
} else if (cinfo->ret.pair_storage [0] == LLVMArgNone && cinfo->ret.pair_storage [1] == LLVMArgNone) {
/* Empty struct */
ret_type = LLVMVoidType ();
} else if (cinfo->ret.pair_storage [0] == LLVMArgInIReg && cinfo->ret.pair_storage [1] == LLVMArgInIReg) {
LLVMTypeRef members [2];
members [0] = IntPtrType ();
members [1] = IntPtrType ();
ret_type = LLVMStructType (members, 2, FALSE);
} else {
g_assert_not_reached ();
}
break;
case LLVMArgVtypeByVal:
/* Vtype returned normally by val */
break;
case LLVMArgVtypeAsScalar: {
int size = mono_class_value_size (mono_class_from_mono_type_internal (rtype), NULL);
/* LLVM models this by returning an int */
if (size < TARGET_SIZEOF_VOID_P) {
g_assert (cinfo->ret.nslots == 1);
ret_type = LLVMIntType (size * 8);
} else {
g_assert (cinfo->ret.nslots == 1 || cinfo->ret.nslots == 2);
ret_type = LLVMIntType (cinfo->ret.nslots * sizeof (target_mgreg_t) * 8);
}
break;
}
case LLVMArgAsIArgs:
ret_type = LLVMArrayType (IntPtrType (), cinfo->ret.nslots);
break;
case LLVMArgFpStruct: {
/* Vtype returned as a fp struct */
LLVMTypeRef members [16];
/* Have to create our own structure since we don't map fp structures to LLVM fp structures yet */
for (i = 0; i < cinfo->ret.nslots; ++i)
members [i] = cinfo->ret.esize == 8 ? LLVMDoubleType () : LLVMFloatType ();
ret_type = LLVMStructType (members, cinfo->ret.nslots, FALSE);
break;
}
case LLVMArgVtypeByRef:
/* Vtype returned using a hidden argument */
ret_type = LLVMVoidType ();
break;
case LLVMArgVtypeRetAddr:
case LLVMArgGsharedvtFixed:
case LLVMArgGsharedvtFixedVtype:
case LLVMArgGsharedvtVariable:
vretaddr = TRUE;
ret_type = LLVMVoidType ();
break;
case LLVMArgWasmVtypeAsScalar:
g_assert (cinfo->ret.esize);
ret_type = LLVMIntType (cinfo->ret.esize * 8);
break;
default:
break;
}
param_types = g_new0 (LLVMTypeRef, (sig->param_count * 8) + 3);
pindex = 0;
if (cinfo->ret.storage == LLVMArgVtypeByRef) {
/*
* Has to be the first argument because of the sret argument attribute
* FIXME: This might conflict with passing 'this' as the first argument, but
* this is only used on arm64 which has a dedicated struct return register.
*/
cinfo->vret_arg_pindex = pindex;
param_types [pindex] = type_to_llvm_arg_type (ctx, sig->ret);
if (!ctx_ok (ctx)) {
g_free (param_types);
return NULL;
}
param_types [pindex] = LLVMPointerType (param_types [pindex], 0);
pindex ++;
}
if (!ctx->llvm_only && cinfo->rgctx_arg) {
cinfo->rgctx_arg_pindex = pindex;
param_types [pindex] = ctx->module->ptr_type;
pindex ++;
}
if (cinfo->imt_arg) {
cinfo->imt_arg_pindex = pindex;
param_types [pindex] = ctx->module->ptr_type;
pindex ++;
}
if (vretaddr) {
/* Compute the index in the LLVM signature where the vret arg needs to be passed */
vret_arg_pindex = pindex;
if (cinfo->vret_arg_index == 1) {
/* Add the slots consumed by the first argument */
LLVMArgInfo *ainfo = &cinfo->args [0];
switch (ainfo->storage) {
case LLVMArgVtypeInReg:
for (j = 0; j < 2; ++j) {
if (ainfo->pair_storage [j] == LLVMArgInIReg)
vret_arg_pindex ++;
}
break;
default:
vret_arg_pindex ++;
}
}
cinfo->vret_arg_pindex = vret_arg_pindex;
}
if (vretaddr && vret_arg_pindex == pindex)
param_types [pindex ++] = IntPtrType ();
if (sig->hasthis) {
cinfo->this_arg_pindex = pindex;
param_types [pindex ++] = ThisType ();
cinfo->args [0].pindex = cinfo->this_arg_pindex;
}
if (vretaddr && vret_arg_pindex == pindex)
param_types [pindex ++] = IntPtrType ();
for (i = 0; i < sig->param_count; ++i) {
LLVMArgInfo *ainfo = &cinfo->args [i + sig->hasthis];
if (vretaddr && vret_arg_pindex == pindex)
param_types [pindex ++] = IntPtrType ();
ainfo->pindex = pindex;
switch (ainfo->storage) {
case LLVMArgVtypeInReg:
for (j = 0; j < 2; ++j) {
switch (ainfo->pair_storage [j]) {
case LLVMArgInIReg:
param_types [pindex ++] = LLVMIntType (TARGET_SIZEOF_VOID_P * 8);
break;
case LLVMArgNone:
break;
default:
g_assert_not_reached ();
}
}
break;
case LLVMArgVtypeByVal:
param_types [pindex] = type_to_llvm_arg_type (ctx, ainfo->type);
if (!ctx_ok (ctx))
break;
param_types [pindex] = LLVMPointerType (param_types [pindex], 0);
pindex ++;
break;
case LLVMArgAsIArgs:
if (ainfo->esize == 8)
param_types [pindex] = LLVMArrayType (LLVMInt64Type (), ainfo->nslots);
else
param_types [pindex] = LLVMArrayType (IntPtrType (), ainfo->nslots);
pindex ++;
break;
case LLVMArgVtypeAddr:
case LLVMArgVtypeByRef:
param_types [pindex] = type_to_llvm_arg_type (ctx, ainfo->type);
if (!ctx_ok (ctx))
break;
param_types [pindex] = LLVMPointerType (param_types [pindex], 0);
pindex ++;
break;
case LLVMArgAsFpArgs: {
int j;
/* Emit dummy fp arguments if needed so the rest is passed on the stack */
for (j = 0; j < ainfo->ndummy_fpargs; ++j)
param_types [pindex ++] = LLVMDoubleType ();
for (j = 0; j < ainfo->nslots; ++j)
param_types [pindex ++] = ainfo->esize == 8 ? LLVMDoubleType () : LLVMFloatType ();
break;
}
case LLVMArgVtypeAsScalar:
g_assert_not_reached ();
break;
case LLVMArgWasmVtypeAsScalar:
g_assert (ainfo->esize);
param_types [pindex ++] = LLVMIntType (ainfo->esize * 8);
break;
case LLVMArgGsharedvtFixed:
case LLVMArgGsharedvtFixedVtype:
param_types [pindex ++] = LLVMPointerType (type_to_llvm_arg_type (ctx, ainfo->type), 0);
break;
case LLVMArgGsharedvtVariable:
param_types [pindex ++] = LLVMPointerType (IntPtrType (), 0);
break;
default:
param_types [pindex ++] = type_to_llvm_arg_type (ctx, ainfo->type);
break;
}
}
if (!ctx_ok (ctx)) {
g_free (param_types);
return NULL;
}
if (vretaddr && vret_arg_pindex == pindex)
param_types [pindex ++] = IntPtrType ();
if (ctx->llvm_only && cinfo->rgctx_arg) {
/* Pass the rgctx as the last argument */
cinfo->rgctx_arg_pindex = pindex;
param_types [pindex] = ctx->module->ptr_type;
pindex ++;
} else if (ctx->llvm_only && cinfo->dummy_arg) {
/* Pass a dummy arg last */
cinfo->dummy_arg_pindex = pindex;
param_types [pindex] = ctx->module->ptr_type;
pindex ++;
}
res = LLVMFunctionType (ret_type, param_types, pindex, FALSE);
g_free (param_types);
return res;
}
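/*
 * A concrete example of the parameter ordering above: for an instance method
 * whose vtype return uses LLVMArgVtypeByRef, the LLVM signature starts with
 * the hidden sret pointer, followed by the rgctx/imt arguments (if any), the
 * 'this' pointer and the normal parameters; in llvm-only mode the rgctx
 * argument is instead appended after the last parameter.
 */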
static LLVMTypeRef
sig_to_llvm_sig (EmitContext *ctx, MonoMethodSignature *sig)
{
return sig_to_llvm_sig_full (ctx, sig, NULL);
}
/*
* LLVMFunctionType0:
*
* Create an LLVM function type from the arguments.
*/
static G_GNUC_UNUSED LLVMTypeRef
LLVMFunctionType0 (LLVMTypeRef ReturnType,
int IsVarArg)
{
return LLVMFunctionType (ReturnType, NULL, 0, IsVarArg);
}
/*
* LLVMFunctionType1:
*
* Create an LLVM function type from the arguments.
*/
static G_GNUC_UNUSED LLVMTypeRef
LLVMFunctionType1 (LLVMTypeRef ReturnType,
LLVMTypeRef ParamType1,
int IsVarArg)
{
LLVMTypeRef param_types [1];
param_types [0] = ParamType1;
return LLVMFunctionType (ReturnType, param_types, 1, IsVarArg);
}
/*
* LLVMFunctionType2:
*
* Create an LLVM function type from the arguments.
*/
static G_GNUC_UNUSED LLVMTypeRef
LLVMFunctionType2 (LLVMTypeRef ReturnType,
LLVMTypeRef ParamType1,
LLVMTypeRef ParamType2,
int IsVarArg)
{
LLVMTypeRef param_types [2];
param_types [0] = ParamType1;
param_types [1] = ParamType2;
return LLVMFunctionType (ReturnType, param_types, 2, IsVarArg);
}
/*
* LLVMFunctionType3:
*
* Create an LLVM function type from the arguments.
*/
static G_GNUC_UNUSED LLVMTypeRef
LLVMFunctionType3 (LLVMTypeRef ReturnType,
LLVMTypeRef ParamType1,
LLVMTypeRef ParamType2,
LLVMTypeRef ParamType3,
int IsVarArg)
{
LLVMTypeRef param_types [3];
param_types [0] = ParamType1;
param_types [1] = ParamType2;
param_types [2] = ParamType3;
return LLVMFunctionType (ReturnType, param_types, 3, IsVarArg);
}
static G_GNUC_UNUSED LLVMTypeRef
LLVMFunctionType4 (LLVMTypeRef ReturnType,
LLVMTypeRef ParamType1,
LLVMTypeRef ParamType2,
LLVMTypeRef ParamType3,
LLVMTypeRef ParamType4,
int IsVarArg)
{
LLVMTypeRef param_types [4];
param_types [0] = ParamType1;
param_types [1] = ParamType2;
param_types [2] = ParamType3;
param_types [3] = ParamType4;
return LLVMFunctionType (ReturnType, param_types, 4, IsVarArg);
}
static G_GNUC_UNUSED LLVMTypeRef
LLVMFunctionType5 (LLVMTypeRef ReturnType,
LLVMTypeRef ParamType1,
LLVMTypeRef ParamType2,
LLVMTypeRef ParamType3,
LLVMTypeRef ParamType4,
LLVMTypeRef ParamType5,
int IsVarArg)
{
LLVMTypeRef param_types [5];
param_types [0] = ParamType1;
param_types [1] = ParamType2;
param_types [2] = ParamType3;
param_types [3] = ParamType4;
param_types [4] = ParamType5;
return LLVMFunctionType (ReturnType, param_types, 5, IsVarArg);
}
/*
* create_builder:
*
* Create an LLVM builder and remember it so it can be freed later.
*/
static LLVMBuilderRef
create_builder (EmitContext *ctx)
{
LLVMBuilderRef builder = LLVMCreateBuilder ();
if (mono_use_fast_math)
mono_llvm_set_fast_math (builder);
ctx->builders = g_slist_prepend_mempool (ctx->cfg->mempool, ctx->builders, builder);
emit_default_dbg_loc (ctx, builder);
return builder;
}
static char*
get_aotconst_name (MonoJumpInfoType type, gconstpointer data, int got_offset)
{
char *name;
int len;
switch (type) {
case MONO_PATCH_INFO_JIT_ICALL_ID:
name = g_strdup_printf ("jit_icall_%s", mono_find_jit_icall_info ((MonoJitICallId)(gsize)data)->name);
break;
case MONO_PATCH_INFO_JIT_ICALL_ADDR_NOCALL:
name = g_strdup_printf ("jit_icall_addr_nocall_%s", mono_find_jit_icall_info ((MonoJitICallId)(gsize)data)->name);
break;
case MONO_PATCH_INFO_RGCTX_SLOT_INDEX: {
MonoJumpInfoRgctxEntry *entry = (MonoJumpInfoRgctxEntry*)data;
name = g_strdup_printf ("rgctx_slot_index_%s", mono_rgctx_info_type_to_str (entry->info_type));
break;
}
case MONO_PATCH_INFO_AOT_MODULE:
case MONO_PATCH_INFO_GC_SAFE_POINT_FLAG:
case MONO_PATCH_INFO_GC_CARD_TABLE_ADDR:
case MONO_PATCH_INFO_GC_NURSERY_START:
case MONO_PATCH_INFO_GC_NURSERY_BITS:
case MONO_PATCH_INFO_INTERRUPTION_REQUEST_FLAG:
name = g_strdup_printf ("%s", mono_ji_type_to_string (type));
len = strlen (name);
for (int i = 0; i < len; ++i)
name [i] = tolower (name [i]);
break;
default:
name = g_strdup_printf ("%s_%d", mono_ji_type_to_string (type), got_offset);
len = strlen (name);
for (int i = 0; i < len; ++i)
name [i] = tolower (name [i]);
break;
}
return name;
}
static int
compute_aot_got_offset (MonoLLVMModule *module, MonoJumpInfo *ji, LLVMTypeRef llvm_type)
{
guint32 got_offset = mono_aot_get_got_offset (ji);
LLVMTypeRef lookup_type = (LLVMTypeRef) g_hash_table_lookup (module->got_idx_to_type, GINT_TO_POINTER (got_offset));
if (!lookup_type) {
lookup_type = llvm_type;
} else if (llvm_type != lookup_type) {
lookup_type = module->ptr_type;
} else {
return got_offset;
}
g_hash_table_insert (module->got_idx_to_type, GINT_TO_POINTER (got_offset), lookup_type);
return got_offset;
}
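/*
 * Example: if the same GOT slot is first requested with an i8* type and later
 * with a function pointer type, the second lookup sees the mismatch and the
 * recorded type degrades to the generic module->ptr_type, so the emitted GOT
 * entry can serve both sites (the loads are bitcast to the expected type).
 */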
/* Allocate a GOT slot for TYPE/DATA, and emit IR to load it */
static LLVMValueRef
get_aotconst_module (MonoLLVMModule *module, LLVMBuilderRef builder, MonoJumpInfoType type, gconstpointer data, LLVMTypeRef llvm_type,
guint32 *out_got_offset, MonoJumpInfo **out_ji)
{
guint32 got_offset;
LLVMValueRef load;
MonoJumpInfo tmp_ji;
tmp_ji.type = type;
tmp_ji.data.target = data;
MonoJumpInfo *ji = mono_aot_patch_info_dup (&tmp_ji);
if (out_ji)
*out_ji = ji;
got_offset = compute_aot_got_offset (module, ji, llvm_type);
module->max_got_offset = MAX (module->max_got_offset, got_offset);
if (out_got_offset)
*out_got_offset = got_offset;
if (module->static_link && type == MONO_PATCH_INFO_GC_SAFE_POINT_FLAG) {
if (!module->gc_safe_point_flag_var) {
const char *symbol = "mono_polling_required";
module->gc_safe_point_flag_var = LLVMAddGlobal (module->lmodule, llvm_type, symbol);
LLVMSetLinkage (module->gc_safe_point_flag_var, LLVMExternalLinkage);
}
return module->gc_safe_point_flag_var;
}
if (module->static_link && type == MONO_PATCH_INFO_INTERRUPTION_REQUEST_FLAG) {
if (!module->interrupt_flag_var) {
const char *symbol = "mono_thread_interruption_request_flag";
module->interrupt_flag_var = LLVMAddGlobal (module->lmodule, llvm_type, symbol);
LLVMSetLinkage (module->interrupt_flag_var, LLVMExternalLinkage);
}
return module->interrupt_flag_var;
}
LLVMValueRef const_var = g_hash_table_lookup (module->aotconst_vars, GINT_TO_POINTER (got_offset));
if (!const_var) {
LLVMTypeRef type = llvm_type;
// FIXME:
char *name = get_aotconst_name (ji->type, ji->data.target, got_offset);
char *symbol = g_strdup_printf ("aotconst_%s", name);
g_free (name);
LLVMValueRef v = LLVMAddGlobal (module->lmodule, type, symbol);
LLVMSetVisibility (v, LLVMHiddenVisibility);
LLVMSetLinkage (v, LLVMInternalLinkage);
LLVMSetInitializer (v, LLVMConstNull (type));
// FIXME:
LLVMSetAlignment (v, 8);
g_hash_table_insert (module->aotconst_vars, GINT_TO_POINTER (got_offset), v);
const_var = v;
}
load = LLVMBuildLoad (builder, const_var, "");
if (mono_aot_is_shared_got_offset (got_offset))
set_invariant_load_flag (load);
if (type == MONO_PATCH_INFO_LDSTR)
set_nonnull_load_flag (load);
load = LLVMBuildBitCast (builder, load, llvm_type, "");
return load;
}
static LLVMValueRef
get_aotconst (EmitContext *ctx, MonoJumpInfoType type, gconstpointer data, LLVMTypeRef llvm_type)
{
MonoCompile *cfg;
guint32 got_offset;
MonoJumpInfo *ji;
LLVMValueRef load;
cfg = ctx->cfg;
load = get_aotconst_module (ctx->module, ctx->builder, type, data, llvm_type, &got_offset, &ji);
ji->next = cfg->patch_info;
cfg->patch_info = ji;
/*
* If the got slot is shared, it means it's initialized when the aot image is loaded, so we don't need to
* explicitly initialize it.
*/
if (!mono_aot_is_shared_got_offset (got_offset)) {
//mono_print_ji (ji);
//printf ("\n");
ctx->cfg->got_access_count ++;
}
return load;
}
static LLVMValueRef
get_dummy_aotconst (EmitContext *ctx, LLVMTypeRef llvm_type)
{
LLVMValueRef indexes [2];
LLVMValueRef got_entry_addr, load;
LLVMBuilderRef builder = ctx->builder;
indexes [0] = LLVMConstInt (LLVMInt32Type (), 0, FALSE);
indexes [1] = LLVMConstInt (LLVMInt32Type (), 0, FALSE);
got_entry_addr = LLVMBuildGEP (builder, ctx->module->dummy_got_var, indexes, 2, "");
load = LLVMBuildLoad (builder, got_entry_addr, "");
load = convert (ctx, load, llvm_type);
return load;
}
typedef struct {
MonoJumpInfo *ji;
MonoMethod *method;
LLVMValueRef load;
LLVMTypeRef type;
LLVMValueRef lmethod;
} CallSite;
static LLVMValueRef
get_callee_llvmonly (EmitContext *ctx, LLVMTypeRef llvm_sig, MonoJumpInfoType type, gconstpointer data)
{
LLVMValueRef callee;
char *callee_name = NULL;
if (ctx->module->static_link && ctx->module->assembly->image != mono_get_corlib ()) {
if (type == MONO_PATCH_INFO_JIT_ICALL_ID) {
MonoJitICallInfo * const info = mono_find_jit_icall_info ((MonoJitICallId)(gsize)data);
g_assert (info);
if (info->func != info->wrapper) {
type = MONO_PATCH_INFO_METHOD;
data = mono_icall_get_wrapper_method (info);
callee_name = mono_aot_get_mangled_method_name ((MonoMethod*)data);
}
} else if (type == MONO_PATCH_INFO_METHOD) {
MonoMethod *method = (MonoMethod*)data;
if (m_class_get_image (method->klass) != ctx->module->assembly->image && mono_aot_is_externally_callable (method))
callee_name = mono_aot_get_mangled_method_name (method);
}
}
if (!callee_name)
callee_name = mono_aot_get_direct_call_symbol (type, data);
if (callee_name) {
/* Directly callable */
// FIXME: Locking
callee = (LLVMValueRef)g_hash_table_lookup (ctx->module->direct_callables, callee_name);
if (!callee) {
callee = LLVMAddFunction (ctx->lmodule, callee_name, llvm_sig);
LLVMSetVisibility (callee, LLVMHiddenVisibility);
g_hash_table_insert (ctx->module->direct_callables, (char*)callee_name, callee);
} else {
/* LLVMTypeRef's are uniqued */
if (LLVMGetElementType (LLVMTypeOf (callee)) != llvm_sig)
return LLVMConstBitCast (callee, LLVMPointerType (llvm_sig, 0));
g_free (callee_name);
}
return callee;
}
/*
* Change references to icalls/pinvokes/jit icalls to their wrappers when in corlib, so
* they can be called directly.
*/
if (ctx->module->assembly->image == mono_get_corlib () && type == MONO_PATCH_INFO_JIT_ICALL_ID) {
MonoJitICallInfo * const info = mono_find_jit_icall_info ((MonoJitICallId)(gsize)data);
if (info->func != info->wrapper) {
type = MONO_PATCH_INFO_METHOD;
data = mono_icall_get_wrapper_method (info);
}
}
if (ctx->module->assembly->image == mono_get_corlib () && type == MONO_PATCH_INFO_METHOD) {
MonoMethod *method = (MonoMethod*)data;
if (m_method_is_icall (method) || m_method_is_pinvoke (method))
data = mono_marshal_get_native_wrapper (method, TRUE, TRUE);
}
/*
* Instead of emitting an indirect call through a got slot, emit a placeholder, and
* replace it with a direct call or an indirect call in mono_llvm_fixup_aot_module ()
* after all methods have been emitted.
*/
if (type == MONO_PATCH_INFO_METHOD) {
MonoMethod *method = (MonoMethod*)data;
if (m_class_get_image (method->klass)->assembly == ctx->module->assembly) {
MonoJumpInfo tmp_ji;
tmp_ji.type = type;
tmp_ji.data.target = method;
MonoJumpInfo *ji = mono_aot_patch_info_dup (&tmp_ji);
ji->next = ctx->cfg->patch_info;
ctx->cfg->patch_info = ji;
LLVMTypeRef llvm_type = LLVMPointerType (llvm_sig, 0);
ctx->cfg->got_access_count ++;
CallSite *info = g_new0 (CallSite, 1);
info->method = method;
info->ji = ji;
info->type = llvm_type;
/*
* Emit a dummy load to represent the callee, and later replace it with
* either a reference to the llvm method for the callee or a load from the
* GOT.
*/
LLVMValueRef load = get_dummy_aotconst (ctx, llvm_type);
info->load = load;
info->lmethod = ctx->lmethod;
g_ptr_array_add (ctx->callsite_list, info);
return load;
}
}
/*
* All other calls are made through the GOT.
*/
callee = get_aotconst (ctx, type, data, LLVMPointerType (llvm_sig, 0));
return callee;
}
/*
* get_callee:
*
* Return an llvm value representing the callee given by the arguments.
*/
static LLVMValueRef
get_callee (EmitContext *ctx, LLVMTypeRef llvm_sig, MonoJumpInfoType type, gconstpointer data)
{
LLVMValueRef callee;
char *callee_name;
MonoJumpInfo *ji = NULL;
if (ctx->llvm_only)
return get_callee_llvmonly (ctx, llvm_sig, type, data);
callee_name = NULL;
/* Cross-assembly direct calls */
if (type == MONO_PATCH_INFO_METHOD) {
MonoMethod *cmethod = (MonoMethod*)data;
if (m_class_get_image (cmethod->klass) != ctx->module->assembly->image) {
MonoJumpInfo tmp_ji;
memset (&tmp_ji, 0, sizeof (MonoJumpInfo));
tmp_ji.type = type;
tmp_ji.data.target = data;
if (mono_aot_is_direct_callable (&tmp_ji)) {
/*
* This will add a reference to cmethod's image so it will
* be loaded when the current AOT image is loaded, so
* the GOT slots used by the init method code are initialized.
*/
tmp_ji.type = MONO_PATCH_INFO_IMAGE;
tmp_ji.data.image = m_class_get_image (cmethod->klass);
ji = mono_aot_patch_info_dup (&tmp_ji);
mono_aot_get_got_offset (ji);
callee_name = mono_aot_get_mangled_method_name (cmethod);
callee = (LLVMValueRef)g_hash_table_lookup (ctx->module->direct_callables, callee_name);
if (!callee) {
callee = LLVMAddFunction (ctx->lmodule, callee_name, llvm_sig);
LLVMSetLinkage (callee, LLVMExternalLinkage);
g_hash_table_insert (ctx->module->direct_callables, callee_name, callee);
} else {
/* LLVMTypeRef's are uniqued */
if (LLVMGetElementType (LLVMTypeOf (callee)) != llvm_sig)
callee = LLVMConstBitCast (callee, LLVMPointerType (llvm_sig, 0));
g_free (callee_name);
}
return callee;
}
}
}
callee_name = mono_aot_get_plt_symbol (type, data);
if (!callee_name)
return NULL;
if (ctx->cfg->compile_aot)
/* Add a patch so referenced wrappers can be compiled in full aot mode */
mono_add_patch_info (ctx->cfg, 0, type, data);
// FIXME: Locking
callee = (LLVMValueRef)g_hash_table_lookup (ctx->module->plt_entries, callee_name);
if (!callee) {
callee = LLVMAddFunction (ctx->lmodule, callee_name, llvm_sig);
LLVMSetVisibility (callee, LLVMHiddenVisibility);
g_hash_table_insert (ctx->module->plt_entries, (char*)callee_name, callee);
}
if (ctx->cfg->compile_aot) {
ji = g_new0 (MonoJumpInfo, 1);
ji->type = type;
ji->data.target = data;
g_hash_table_insert (ctx->module->plt_entries_ji, ji, callee);
}
return callee;
}
static LLVMValueRef
get_jit_callee (EmitContext *ctx, const char *name, LLVMTypeRef llvm_sig, MonoJumpInfoType type, gconstpointer data)
{
gpointer target;
// This won't be patched so compile the wrapper immediately
if (type == MONO_PATCH_INFO_JIT_ICALL_ID) {
MonoJitICallInfo * const info = mono_find_jit_icall_info ((MonoJitICallId)(gsize)data);
target = (gpointer)mono_icall_get_wrapper_full (info, TRUE);
} else {
target = resolve_patch (ctx->cfg, type, data);
}
LLVMValueRef tramp_var = LLVMAddGlobal (ctx->lmodule, LLVMPointerType (llvm_sig, 0), name);
LLVMSetInitializer (tramp_var, LLVMConstIntToPtr (LLVMConstInt (LLVMInt64Type (), (guint64)(size_t)target, FALSE), LLVMPointerType (llvm_sig, 0)));
LLVMSetLinkage (tramp_var, LLVMExternalLinkage);
LLVMValueRef callee = LLVMBuildLoad (ctx->builder, tramp_var, "");
return callee;
}
static int
get_handler_clause (MonoCompile *cfg, MonoBasicBlock *bb)
{
MonoMethodHeader *header = cfg->header;
MonoExceptionClause *clause;
int i;
/* Directly */
if (bb->region != -1 && MONO_BBLOCK_IS_IN_REGION (bb, MONO_REGION_TRY))
return (bb->region >> 8) - 1;
/* Indirectly */
for (i = 0; i < header->num_clauses; ++i) {
clause = &header->clauses [i];
if (MONO_OFFSET_IN_CLAUSE (clause, bb->real_offset) && clause->flags == MONO_EXCEPTION_CLAUSE_NONE)
return i;
}
return -1;
}
static MonoExceptionClause *
get_most_deep_clause (MonoCompile *cfg, EmitContext *ctx, MonoBasicBlock *bb)
{
if (bb == cfg->bb_init)
return NULL;
// Since they're sorted by nesting, we just need
// the first one that the bb is a member of.
for (int i = 0; i < cfg->header->num_clauses; i++) {
MonoExceptionClause *curr = &cfg->header->clauses [i];
if (MONO_OFFSET_IN_CLAUSE (curr, bb->real_offset))
return curr;
}
return NULL;
}
static void
set_metadata_flag (LLVMValueRef v, const char *flag_name)
{
LLVMValueRef md_arg;
int md_kind;
md_kind = LLVMGetMDKindID (flag_name, strlen (flag_name));
md_arg = LLVMMDString ("mono", 4);
LLVMSetMetadata (v, md_kind, LLVMMDNode (&md_arg, 1));
}
static void
set_nonnull_load_flag (LLVMValueRef v)
{
LLVMValueRef md_arg;
int md_kind;
const char *flag_name;
flag_name = "nonnull";
md_kind = LLVMGetMDKindID (flag_name, strlen (flag_name));
md_arg = LLVMMDString ("<index>", strlen ("<index>"));
LLVMSetMetadata (v, md_kind, LLVMMDNode (&md_arg, 1));
}
static void
set_nontemporal_flag (LLVMValueRef v)
{
LLVMValueRef md_arg;
int md_kind;
const char *flag_name;
// FIXME: Cache this
flag_name = "nontemporal";
md_kind = LLVMGetMDKindID (flag_name, strlen (flag_name));
md_arg = const_int32 (1);
LLVMSetMetadata (v, md_kind, LLVMMDNode (&md_arg, 1));
}
static void
set_invariant_load_flag (LLVMValueRef v)
{
LLVMValueRef md_arg;
int md_kind;
const char *flag_name;
// FIXME: Cache this
flag_name = "invariant.load";
md_kind = LLVMGetMDKindID (flag_name, strlen (flag_name));
md_arg = LLVMMDString ("<index>", strlen ("<index>"));
LLVMSetMetadata (v, md_kind, LLVMMDNode (&md_arg, 1));
}
/*
* emit_call:
*
* Emit an LLVM call or invoke instruction depending on whether the call is inside
* a try region.
*/
static LLVMValueRef
emit_call (EmitContext *ctx, MonoBasicBlock *bb, LLVMBuilderRef *builder_ref, LLVMValueRef callee, LLVMValueRef *args, int pindex)
{
MonoCompile *cfg = ctx->cfg;
LLVMValueRef lcall = NULL;
LLVMBuilderRef builder = *builder_ref;
MonoExceptionClause *clause;
if (ctx->llvm_only) {
clause = bb ? get_most_deep_clause (cfg, ctx, bb) : NULL;
// FIXME: Use an invoke only for calls inside try-catch blocks
if (clause && (!cfg->deopt || ctx->has_catch)) {
/*
* Have to use an invoke instead of a call, branching to the
* handler bblock of the clause containing this bblock.
*/
intptr_t key = CLAUSE_END (clause);
LLVMBasicBlockRef lpad_bb = (LLVMBasicBlockRef)g_hash_table_lookup (ctx->exc_meta, (gconstpointer)key);
// FIXME: Find the one that has the lowest end bound for the right start address
// FIXME: Finally + nesting
if (lpad_bb) {
LLVMBasicBlockRef noex_bb = gen_bb (ctx, "CALL_NOEX_BB");
/* Use an invoke */
lcall = LLVMBuildInvoke (builder, callee, args, pindex, noex_bb, lpad_bb, "");
builder = ctx->builder = create_builder (ctx);
LLVMPositionBuilderAtEnd (ctx->builder, noex_bb);
ctx->bblocks [bb->block_num].end_bblock = noex_bb;
}
}
} else {
int clause_index = get_handler_clause (cfg, bb);
if (clause_index != -1) {
MonoMethodHeader *header = cfg->header;
MonoExceptionClause *ec = &header->clauses [clause_index];
MonoBasicBlock *tblock;
LLVMBasicBlockRef ex_bb, noex_bb;
/*
* Have to use an invoke instead of a call, branching to the
* handler bblock of the clause containing this bblock.
*/
g_assert (ec->flags == MONO_EXCEPTION_CLAUSE_NONE || ec->flags == MONO_EXCEPTION_CLAUSE_FINALLY || ec->flags == MONO_EXCEPTION_CLAUSE_FAULT);
tblock = cfg->cil_offset_to_bb [ec->handler_offset];
g_assert (tblock);
ctx->bblocks [tblock->block_num].invoke_target = TRUE;
ex_bb = get_bb (ctx, tblock);
noex_bb = gen_bb (ctx, "NOEX_BB");
/* Use an invoke */
lcall = LLVMBuildInvoke (builder, callee, args, pindex, noex_bb, ex_bb, "");
builder = ctx->builder = create_builder (ctx);
LLVMPositionBuilderAtEnd (ctx->builder, noex_bb);
ctx->bblocks [bb->block_num].end_bblock = noex_bb;
}
}
if (!lcall) {
lcall = LLVMBuildCall (builder, callee, args, pindex, "");
ctx->builder = builder;
}
if (builder_ref)
*builder_ref = ctx->builder;
return lcall;
}
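/*
 * Roughly, a call inside a try region is lowered to something like
 *   %r = invoke ... to label %NOEX_BB5 unwind label %EH_CLAUSE0_BB3
 * (block names illustrative), with code generation continuing in the fresh
 * NOEX_BB block, while calls outside any clause stay plain 'call'
 * instructions.
 */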
static LLVMValueRef
emit_load (EmitContext *ctx, MonoBasicBlock *bb, LLVMBuilderRef *builder_ref, int size, LLVMValueRef addr, LLVMValueRef base, const char *name, gboolean is_faulting, gboolean is_volatile, BarrierKind barrier)
{
LLVMValueRef res;
/*
* We emit volatile loads for loads which can fault, because otherwise
* LLVM will generate invalid code when encountering a load from a
* NULL address.
*/
if (barrier != LLVM_BARRIER_NONE)
res = mono_llvm_build_atomic_load (*builder_ref, addr, name, is_volatile, size, barrier);
else
res = mono_llvm_build_load (*builder_ref, addr, name, is_volatile);
return res;
}
static void
emit_store_general (EmitContext *ctx, MonoBasicBlock *bb, LLVMBuilderRef *builder_ref, int size, LLVMValueRef value, LLVMValueRef addr, LLVMValueRef base, gboolean is_faulting, gboolean is_volatile, BarrierKind barrier)
{
if (barrier != LLVM_BARRIER_NONE)
mono_llvm_build_aligned_store (*builder_ref, value, addr, barrier, size);
else
mono_llvm_build_store (*builder_ref, value, addr, is_volatile, barrier);
}
static void
emit_store (EmitContext *ctx, MonoBasicBlock *bb, LLVMBuilderRef *builder_ref, int size, LLVMValueRef value, LLVMValueRef addr, LLVMValueRef base, gboolean is_faulting, gboolean is_volatile)
{
emit_store_general (ctx, bb, builder_ref, size, value, addr, base, is_faulting, is_volatile, LLVM_BARRIER_NONE);
}
/*
* emit_cond_system_exception:
*
* Emit code to throw the exception EXC_TYPE if the condition CMP is true.
* Might set the ctx exception.
*/
static void
emit_cond_system_exception (EmitContext *ctx, MonoBasicBlock *bb, const char *exc_type, LLVMValueRef cmp, gboolean force_explicit)
{
LLVMBasicBlockRef ex_bb, ex2_bb = NULL, noex_bb;
LLVMBuilderRef builder;
MonoClass *exc_class;
LLVMValueRef args [2];
LLVMValueRef callee;
gboolean no_pc = FALSE;
static MonoClass *exc_classes [MONO_EXC_INTRINS_NUM];
if (IS_TARGET_AMD64)
/* Some platforms don't require the pc argument */
no_pc = TRUE;
int exc_id = mini_exception_id_by_name (exc_type);
if (!exc_classes [exc_id])
exc_classes [exc_id] = mono_class_load_from_name (mono_get_corlib (), "System", exc_type);
exc_class = exc_classes [exc_id];
ex_bb = gen_bb (ctx, "EX_BB");
if (ctx->llvm_only)
ex2_bb = gen_bb (ctx, "EX2_BB");
noex_bb = gen_bb (ctx, "NOEX_BB");
LLVMValueRef branch = LLVMBuildCondBr (ctx->builder, cmp, ex_bb, noex_bb);
if (exc_id == MONO_EXC_NULL_REF && !ctx->cfg->disable_llvm_implicit_null_checks && !force_explicit) {
mono_llvm_set_implicit_branch (ctx->builder, branch);
}
/* Emit exception throwing code */
ctx->builder = builder = create_builder (ctx);
LLVMPositionBuilderAtEnd (builder, ex_bb);
if (ctx->cfg->llvm_only) {
LLVMBuildBr (builder, ex2_bb);
ctx->builder = builder = create_builder (ctx);
LLVMPositionBuilderAtEnd (ctx->builder, ex2_bb);
if (exc_id == MONO_EXC_NULL_REF) {
static LLVMTypeRef sig;
if (!sig)
sig = LLVMFunctionType0 (LLVMVoidType (), FALSE);
/* Can't cache this */
callee = get_callee (ctx, sig, MONO_PATCH_INFO_JIT_ICALL_ADDR, GUINT_TO_POINTER (MONO_JIT_ICALL_mini_llvmonly_throw_nullref_exception));
emit_call (ctx, bb, &builder, callee, NULL, 0);
} else {
static LLVMTypeRef sig;
if (!sig)
sig = LLVMFunctionType1 (LLVMVoidType (), LLVMInt32Type (), FALSE);
callee = get_callee (ctx, sig, MONO_PATCH_INFO_JIT_ICALL_ADDR, GUINT_TO_POINTER (MONO_JIT_ICALL_mini_llvmonly_throw_corlib_exception));
args [0] = LLVMConstInt (LLVMInt32Type (), m_class_get_type_token (exc_class) - MONO_TOKEN_TYPE_DEF, FALSE);
emit_call (ctx, bb, &builder, callee, args, 1);
}
LLVMBuildUnreachable (builder);
ctx->builder = builder = create_builder (ctx);
LLVMPositionBuilderAtEnd (ctx->builder, noex_bb);
ctx->bblocks [bb->block_num].end_bblock = noex_bb;
ctx->ex_index ++;
return;
}
callee = ctx->module->throw_corlib_exception;
if (!callee) {
LLVMTypeRef sig;
if (no_pc)
sig = LLVMFunctionType1 (LLVMVoidType (), LLVMInt32Type (), FALSE);
else
sig = LLVMFunctionType2 (LLVMVoidType (), LLVMInt32Type (), LLVMPointerType (LLVMInt8Type (), 0), FALSE);
const MonoJitICallId icall_id = MONO_JIT_ICALL_mono_llvm_throw_corlib_exception_abs_trampoline;
if (ctx->cfg->compile_aot) {
callee = get_callee (ctx, sig, MONO_PATCH_INFO_JIT_ICALL_ID, GUINT_TO_POINTER (icall_id));
} else {
/*
* Differences between the LLVM/non-LLVM throw corlib exception trampoline:
* - On x86, LLVM generated code doesn't push the arguments
* - The trampoline takes the throw address as an argument, not a pc offset.
*/
callee = get_jit_callee (ctx, "llvm_throw_corlib_exception_trampoline", sig, MONO_PATCH_INFO_JIT_ICALL_ID, GUINT_TO_POINTER (icall_id));
/*
* Make sure that ex_bb starts with the invoke, so the block address points to it, and not to the load
* added by get_jit_callee ().
*/
ex2_bb = gen_bb (ctx, "EX2_BB");
LLVMBuildBr (builder, ex2_bb);
ex_bb = ex2_bb;
ctx->builder = builder = create_builder (ctx);
LLVMPositionBuilderAtEnd (ctx->builder, ex2_bb);
}
}
args [0] = LLVMConstInt (LLVMInt32Type (), m_class_get_type_token (exc_class) - MONO_TOKEN_TYPE_DEF, FALSE);
/*
* The LLVM mono branch contains changes so a block address can be passed as an
* argument to a call.
*/
if (no_pc) {
emit_call (ctx, bb, &builder, callee, args, 1);
} else {
args [1] = LLVMBlockAddress (ctx->lmethod, ex_bb);
emit_call (ctx, bb, &builder, callee, args, 2);
}
LLVMBuildUnreachable (builder);
ctx->builder = builder = create_builder (ctx);
LLVMPositionBuilderAtEnd (ctx->builder, noex_bb);
ctx->bblocks [bb->block_num].end_bblock = noex_bb;
ctx->ex_index ++;
return;
}
/*
* emit_args_to_vtype:
*
* Emit code to store the vtype in the arguments args to the address ADDRESS.
*/
static void
emit_args_to_vtype (EmitContext *ctx, LLVMBuilderRef builder, MonoType *t, LLVMValueRef address, LLVMArgInfo *ainfo, LLVMValueRef *args)
{
int j, size, nslots;
MonoClass *klass;
t = mini_get_underlying_type (t);
klass = mono_class_from_mono_type_internal (t);
size = mono_class_value_size (klass, NULL);
if (MONO_CLASS_IS_SIMD (ctx->cfg, klass))
address = LLVMBuildBitCast (ctx->builder, address, LLVMPointerType (LLVMInt8Type (), 0), "");
if (ainfo->storage == LLVMArgAsFpArgs)
nslots = ainfo->nslots;
else
nslots = 2;
for (j = 0; j < nslots; ++j) {
LLVMValueRef index [2], addr, daddr;
int part_size = size > TARGET_SIZEOF_VOID_P ? TARGET_SIZEOF_VOID_P : size;
LLVMTypeRef part_type;
while (part_size != 1 && part_size != 2 && part_size != 4 && part_size < 8)
part_size ++;
if (ainfo->pair_storage [j] == LLVMArgNone)
continue;
switch (ainfo->pair_storage [j]) {
case LLVMArgInIReg: {
part_type = LLVMIntType (part_size * 8);
if (MONO_CLASS_IS_SIMD (ctx->cfg, klass)) {
index [0] = LLVMConstInt (LLVMInt32Type (), j * TARGET_SIZEOF_VOID_P, FALSE);
addr = LLVMBuildGEP (builder, address, index, 1, "");
} else {
daddr = LLVMBuildBitCast (ctx->builder, address, LLVMPointerType (IntPtrType (), 0), "");
index [0] = LLVMConstInt (LLVMInt32Type (), j, FALSE);
addr = LLVMBuildGEP (builder, daddr, index, 1, "");
}
LLVMBuildStore (builder, convert (ctx, args [j], part_type), LLVMBuildBitCast (ctx->builder, addr, LLVMPointerType (part_type, 0), ""));
break;
}
case LLVMArgInFPReg: {
LLVMTypeRef arg_type;
if (ainfo->esize == 8)
arg_type = LLVMDoubleType ();
else
arg_type = LLVMFloatType ();
index [0] = LLVMConstInt (LLVMInt32Type (), j, FALSE);
daddr = LLVMBuildBitCast (ctx->builder, address, LLVMPointerType (arg_type, 0), "");
addr = LLVMBuildGEP (builder, daddr, index, 1, "");
LLVMBuildStore (builder, args [j], addr);
break;
}
case LLVMArgNone:
break;
default:
g_assert_not_reached ();
}
size -= TARGET_SIZEOF_VOID_P;
}
}
/*
* emit_vtype_to_args:
*
* Emit code to load a vtype at address ADDRESS into scalar arguments. Store the arguments
* into ARGS, and the number of arguments into NARGS.
*/
static void
emit_vtype_to_args (EmitContext *ctx, LLVMBuilderRef builder, MonoType *t, LLVMValueRef address, LLVMArgInfo *ainfo, LLVMValueRef *args, guint32 *nargs)
{
int pindex = 0;
int j, nslots;
LLVMTypeRef arg_type;
t = mini_get_underlying_type (t);
int32_t size = get_vtype_size_align (t).size;
if (MONO_CLASS_IS_SIMD (ctx->cfg, mono_class_from_mono_type_internal (t)))
address = LLVMBuildBitCast (ctx->builder, address, LLVMPointerType (LLVMInt8Type (), 0), "");
if (ainfo->storage == LLVMArgAsFpArgs)
nslots = ainfo->nslots;
else
nslots = 2;
for (j = 0; j < nslots; ++j) {
LLVMValueRef index [2], addr, daddr;
int partsize = size > TARGET_SIZEOF_VOID_P ? TARGET_SIZEOF_VOID_P : size;
if (ainfo->pair_storage [j] == LLVMArgNone)
continue;
switch (ainfo->pair_storage [j]) {
case LLVMArgInIReg:
if (MONO_CLASS_IS_SIMD (ctx->cfg, mono_class_from_mono_type_internal (t))) {
index [0] = LLVMConstInt (LLVMInt32Type (), j * TARGET_SIZEOF_VOID_P, FALSE);
addr = LLVMBuildGEP (builder, address, index, 1, "");
} else {
daddr = LLVMBuildBitCast (ctx->builder, address, LLVMPointerType (IntPtrType (), 0), "");
index [0] = LLVMConstInt (LLVMInt32Type (), j, FALSE);
addr = LLVMBuildGEP (builder, daddr, index, 1, "");
}
args [pindex ++] = convert (ctx, LLVMBuildLoad (builder, LLVMBuildBitCast (ctx->builder, addr, LLVMPointerType (LLVMIntType (partsize * 8), 0), ""), ""), IntPtrType ());
break;
case LLVMArgInFPReg:
if (ainfo->esize == 8)
arg_type = LLVMDoubleType ();
else
arg_type = LLVMFloatType ();
daddr = LLVMBuildBitCast (ctx->builder, address, LLVMPointerType (arg_type, 0), "");
index [0] = LLVMConstInt (LLVMInt32Type (), j, FALSE);
addr = LLVMBuildGEP (builder, daddr, index, 1, "");
args [pindex ++] = LLVMBuildLoad (builder, addr, "");
break;
case LLVMArgNone:
break;
default:
g_assert_not_reached ();
}
size -= TARGET_SIZEOF_VOID_P;
}
*nargs = pindex;
}
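/*
 * For example (64-bit): a 16 byte vtype passed in two integer registers is
 * loaded as two i64 words (args [0] covers bytes 0-7, args [1] bytes 8-15),
 * while a 4 byte vtype uses a single i32 load widened to IntPtrType (), the
 * second pair slot being LLVMArgNone.
 */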
static LLVMValueRef
build_alloca_llvm_type_name (EmitContext *ctx, LLVMTypeRef t, int align, const char *name)
{
/*
* Have to place all allocas at the end of the entry bb, since otherwise they would
* get executed every time control reaches them.
*/
LLVMPositionBuilder (ctx->alloca_builder, get_bb (ctx, ctx->cfg->bb_entry), ctx->last_alloca);
ctx->last_alloca = mono_llvm_build_alloca (ctx->alloca_builder, t, NULL, align, name);
return ctx->last_alloca;
}
static LLVMValueRef
build_alloca_llvm_type (EmitContext *ctx, LLVMTypeRef t, int align)
{
return build_alloca_llvm_type_name (ctx, t, align, "");
}
static LLVMValueRef
build_named_alloca (EmitContext *ctx, MonoType *t, char const *name)
{
MonoClass *k = mono_class_from_mono_type_internal (t);
int align;
g_assert (!mini_is_gsharedvt_variable_type (t));
if (MONO_CLASS_IS_SIMD (ctx->cfg, k))
align = mono_class_value_size (k, NULL);
else
align = mono_class_min_align (k);
/* Sometimes align is not a power of 2 */
while (mono_is_power_of_two (align) == -1)
align ++;
return build_alloca_llvm_type_name (ctx, type_to_llvm_type (ctx, t), align, name);
}
static LLVMValueRef
build_alloca (EmitContext *ctx, MonoType *t)
{
return build_named_alloca (ctx, t, "");
}
static LLVMValueRef
emit_gsharedvt_ldaddr (EmitContext *ctx, int vreg)
{
/*
* gsharedvt local.
* Compute the address of the local as gsharedvt_locals_var + gsharedvt_info_var->locals_offsets [idx].
*/
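	/*
	 * Schematically (illustrative pseudocode, not the emitted IR):
	 *
	 *   info   = gsharedvt_info_var;     // MonoGSharedVtMethodRuntimeInfo*
	 *   offset = info->entries [idx];    // loaded as an i32 below
	 *   addr   = (guint8*)gsharedvt_locals_var + offset;
	 *
	 * The loads/adds below build exactly this computation in LLVM IR.
	 */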
MonoCompile *cfg = ctx->cfg;
LLVMBuilderRef builder = ctx->builder;
LLVMValueRef offset, offset_var;
LLVMValueRef info_var = ctx->values [cfg->gsharedvt_info_var->dreg];
LLVMValueRef locals_var = ctx->values [cfg->gsharedvt_locals_var->dreg];
LLVMValueRef ptr;
char *name;
g_assert (info_var);
g_assert (locals_var);
int idx = cfg->gsharedvt_vreg_to_idx [vreg] - 1;
offset = LLVMConstInt (LLVMInt32Type (), MONO_STRUCT_OFFSET (MonoGSharedVtMethodRuntimeInfo, entries) + (idx * TARGET_SIZEOF_VOID_P), FALSE);
ptr = LLVMBuildAdd (builder, convert (ctx, info_var, IntPtrType ()), convert (ctx, offset, IntPtrType ()), "");
name = g_strdup_printf ("gsharedvt_local_%d_offset", vreg);
offset_var = LLVMBuildLoad (builder, convert (ctx, ptr, LLVMPointerType (LLVMInt32Type (), 0)), name);
return LLVMBuildAdd (builder, convert (ctx, locals_var, IntPtrType ()), convert (ctx, offset_var, IntPtrType ()), "");
}
/*
* Put the global into the 'llvm.used' array to prevent it from being optimized away.
*/
static void
mark_as_used (MonoLLVMModule *module, LLVMValueRef global)
{
if (!module->used)
module->used = g_ptr_array_sized_new (16);
g_ptr_array_add (module->used, global);
}
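
/*
 * The array emitted by emit_llvm_used () below looks roughly like this
 * (illustrative IR, names approximate):
 *
 *   @llvm.used = appending global [N x i8*]
 *       [i8* bitcast (... @g0 ...), ...], section "llvm.metadata"
 */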
static void
emit_llvm_used (MonoLLVMModule *module)
{
LLVMModuleRef lmodule = module->lmodule;
LLVMTypeRef used_type;
LLVMValueRef used, *used_elem;
int i;
if (!module->used)
return;
used_type = LLVMArrayType (LLVMPointerType (LLVMInt8Type (), 0), module->used->len);
used = LLVMAddGlobal (lmodule, used_type, "llvm.used");
used_elem = g_new0 (LLVMValueRef, module->used->len);
for (i = 0; i < module->used->len; ++i)
used_elem [i] = LLVMConstBitCast ((LLVMValueRef)g_ptr_array_index (module->used, i), LLVMPointerType (LLVMInt8Type (), 0));
LLVMSetInitializer (used, LLVMConstArray (LLVMPointerType (LLVMInt8Type (), 0), used_elem, module->used->len));
LLVMSetLinkage (used, LLVMAppendingLinkage);
LLVMSetSection (used, "llvm.metadata");
}
/*
* emit_get_method:
*
* Emit a function mapping method indexes to their code
*/
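/*
 * The generated function is roughly (illustrative):
 *
 *   gpointer <prefix>_get_method (int idx) {
 *       switch (idx) {
 *       case -1: return llvm_code_start;
 *       case -2: return llvm_code_end;
 *       case 0:  return method_0;
 *       ...
 *       default: return NULL;
 *       }
 *   }
 *
 * With emit_table, the switch body is replaced by a bounds-checked load from
 * a global table of method addresses.
 */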
static void
emit_get_method (MonoLLVMModule *module)
{
LLVMModuleRef lmodule = module->lmodule;
LLVMValueRef func, switch_ins, m;
LLVMBasicBlockRef entry_bb, fail_bb, bb, code_start_bb, code_end_bb, main_bb;
LLVMBasicBlockRef *bbs = NULL;
LLVMTypeRef rtype;
LLVMBuilderRef builder = LLVMCreateBuilder ();
LLVMValueRef table = NULL;
char *name;
int i;
gboolean emit_table = FALSE;
#ifdef TARGET_WASM
/*
* Emit a table of functions instead of a switch statement,
	 * it's very efficient on wasm. This might be usable on
* other platforms too.
*/
emit_table = TRUE;
#endif
rtype = LLVMPointerType (LLVMInt8Type (), 0);
int table_len = module->max_method_idx + 1;
if (emit_table) {
LLVMTypeRef table_type;
LLVMValueRef *table_elems;
char *table_name;
table_type = LLVMArrayType (rtype, table_len);
table_name = g_strdup_printf ("%s_method_table", module->global_prefix);
table = LLVMAddGlobal (lmodule, table_type, table_name);
table_elems = g_new0 (LLVMValueRef, table_len);
for (i = 0; i < table_len; ++i) {
m = (LLVMValueRef)g_hash_table_lookup (module->idx_to_lmethod, GINT_TO_POINTER (i));
if (m && !g_hash_table_lookup (module->no_method_table_lmethods, m))
table_elems [i] = LLVMBuildBitCast (builder, m, rtype, "");
else
table_elems [i] = LLVMConstNull (rtype);
}
LLVMSetInitializer (table, LLVMConstArray (LLVMPointerType (LLVMInt8Type (), 0), table_elems, table_len));
}
/*
	 * Emit a switch statement mapping method indexes to their code. Emitting a table of
	 * function addresses is smaller/faster, but generating the switch code seems safer.
*/
func = LLVMAddFunction (lmodule, module->get_method_symbol, LLVMFunctionType1 (rtype, LLVMInt32Type (), FALSE));
LLVMSetLinkage (func, LLVMExternalLinkage);
LLVMSetVisibility (func, LLVMHiddenVisibility);
mono_llvm_add_func_attr (func, LLVM_ATTR_NO_UNWIND);
module->get_method = func;
entry_bb = LLVMAppendBasicBlock (func, "ENTRY");
/*
* Return llvm_code_start/llvm_code_end when called with -1/-2.
* Hopefully, the toolchain doesn't reorder these functions. If it does,
* then we will have to find another solution.
*/
name = g_strdup_printf ("BB_CODE_START");
code_start_bb = LLVMAppendBasicBlock (func, name);
g_free (name);
LLVMPositionBuilderAtEnd (builder, code_start_bb);
LLVMBuildRet (builder, LLVMBuildBitCast (builder, module->code_start, rtype, ""));
name = g_strdup_printf ("BB_CODE_END");
code_end_bb = LLVMAppendBasicBlock (func, name);
g_free (name);
LLVMPositionBuilderAtEnd (builder, code_end_bb);
LLVMBuildRet (builder, LLVMBuildBitCast (builder, module->code_end, rtype, ""));
if (emit_table) {
/*
* Because table_len is computed using the method indexes available for us, it
* might not include methods which are not compiled because of AOT profiles.
* So table_len can be smaller than info->nmethods. Add a bounds check because
* of that.
* switch (index) {
* case -1: return code_start;
* case -2: return code_end;
		 * default: return index < table_len ? method_table [index] : 0;
		 * }
*/
fail_bb = LLVMAppendBasicBlock (func, "FAIL");
LLVMPositionBuilderAtEnd (builder, fail_bb);
LLVMBuildRet (builder, LLVMBuildIntToPtr (builder, LLVMConstInt (LLVMInt32Type (), 0, FALSE), rtype, ""));
main_bb = LLVMAppendBasicBlock (func, "MAIN");
LLVMPositionBuilderAtEnd (builder, main_bb);
LLVMValueRef base = table;
LLVMValueRef indexes [2];
indexes [0] = LLVMConstInt (LLVMInt32Type (), 0, FALSE);
indexes [1] = LLVMGetParam (func, 0);
LLVMValueRef addr = LLVMBuildGEP (builder, base, indexes, 2, "");
LLVMValueRef res = mono_llvm_build_load (builder, addr, "", FALSE);
LLVMBuildRet (builder, res);
LLVMBasicBlockRef default_bb = LLVMAppendBasicBlock (func, "DEFAULT");
LLVMPositionBuilderAtEnd (builder, default_bb);
LLVMValueRef cmp = LLVMBuildICmp (builder, LLVMIntSGE, LLVMGetParam (func, 0), LLVMConstInt (LLVMInt32Type (), table_len, FALSE), "");
LLVMBuildCondBr (builder, cmp, fail_bb, main_bb);
LLVMPositionBuilderAtEnd (builder, entry_bb);
switch_ins = LLVMBuildSwitch (builder, LLVMGetParam (func, 0), default_bb, 0);
LLVMAddCase (switch_ins, LLVMConstInt (LLVMInt32Type (), -1, FALSE), code_start_bb);
LLVMAddCase (switch_ins, LLVMConstInt (LLVMInt32Type (), -2, FALSE), code_end_bb);
} else {
bbs = g_new0 (LLVMBasicBlockRef, module->max_method_idx + 1);
for (i = 0; i < module->max_method_idx + 1; ++i) {
name = g_strdup_printf ("BB_%d", i);
bb = LLVMAppendBasicBlock (func, name);
g_free (name);
bbs [i] = bb;
LLVMPositionBuilderAtEnd (builder, bb);
m = (LLVMValueRef)g_hash_table_lookup (module->idx_to_lmethod, GINT_TO_POINTER (i));
if (m && !g_hash_table_lookup (module->no_method_table_lmethods, m))
LLVMBuildRet (builder, LLVMBuildBitCast (builder, m, rtype, ""));
else
LLVMBuildRet (builder, LLVMConstNull (rtype));
}
fail_bb = LLVMAppendBasicBlock (func, "FAIL");
LLVMPositionBuilderAtEnd (builder, fail_bb);
LLVMBuildRet (builder, LLVMConstNull (rtype));
LLVMPositionBuilderAtEnd (builder, entry_bb);
switch_ins = LLVMBuildSwitch (builder, LLVMGetParam (func, 0), fail_bb, 0);
LLVMAddCase (switch_ins, LLVMConstInt (LLVMInt32Type (), -1, FALSE), code_start_bb);
LLVMAddCase (switch_ins, LLVMConstInt (LLVMInt32Type (), -2, FALSE), code_end_bb);
for (i = 0; i < module->max_method_idx + 1; ++i) {
LLVMAddCase (switch_ins, LLVMConstInt (LLVMInt32Type (), i, FALSE), bbs [i]);
}
}
mark_as_used (module, func);
LLVMDisposeBuilder (builder);
}
/*
* emit_get_unbox_tramp:
*
* Emit a function mapping method indexes to their unbox trampoline
*/
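/*
 * With emit_table, two parallel arrays are emitted instead of a function
 * (illustrative layout):
 *
 *   <prefix>_unbox_tramp_indexes: [i16|i32]  sorted method indexes which have a tramp
 *   <prefix>_unbox_trampolines:   [i8*]      the corresponding trampolines
 *
 * The runtime binary-searches the index table and picks the matching slot.
 */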
static void
emit_get_unbox_tramp (MonoLLVMModule *module)
{
LLVMModuleRef lmodule = module->lmodule;
LLVMValueRef func, switch_ins, m;
LLVMBasicBlockRef entry_bb, fail_bb, bb;
LLVMBasicBlockRef *bbs;
LLVMTypeRef rtype;
LLVMBuilderRef builder = LLVMCreateBuilder ();
char *name;
int i;
gboolean emit_table = FALSE;
/* Similar to emit_get_method () */
#ifndef TARGET_WATCHOS
emit_table = TRUE;
#endif
rtype = LLVMPointerType (LLVMInt8Type (), 0);
if (emit_table) {
// About 10% of methods have an unbox tramp, so emit a table of indexes for them
		// that the runtime can search with a binary search
int len = 0;
for (i = 0; i < module->max_method_idx + 1; ++i) {
m = (LLVMValueRef)g_hash_table_lookup (module->idx_to_unbox_tramp, GINT_TO_POINTER (i));
if (m)
len ++;
}
LLVMTypeRef table_type, elemtype;
LLVMValueRef *table_elems;
LLVMValueRef table;
char *table_name;
int table_len;
int elemsize;
table_len = len;
elemsize = module->max_method_idx < 65000 ? 2 : 4;
// The index table
elemtype = elemsize == 2 ? LLVMInt16Type () : LLVMInt32Type ();
table_type = LLVMArrayType (elemtype, table_len);
table_name = g_strdup_printf ("%s_unbox_tramp_indexes", module->global_prefix);
table = LLVMAddGlobal (lmodule, table_type, table_name);
table_elems = g_new0 (LLVMValueRef, table_len);
int idx = 0;
for (i = 0; i < module->max_method_idx + 1; ++i) {
m = (LLVMValueRef)g_hash_table_lookup (module->idx_to_unbox_tramp, GINT_TO_POINTER (i));
if (m)
table_elems [idx ++] = LLVMConstInt (elemtype, i, FALSE);
}
LLVMSetInitializer (table, LLVMConstArray (elemtype, table_elems, table_len));
module->unbox_tramp_indexes = table;
// The trampoline table
elemtype = rtype;
table_type = LLVMArrayType (elemtype, table_len);
table_name = g_strdup_printf ("%s_unbox_trampolines", module->global_prefix);
table = LLVMAddGlobal (lmodule, table_type, table_name);
table_elems = g_new0 (LLVMValueRef, table_len);
idx = 0;
for (i = 0; i < module->max_method_idx + 1; ++i) {
m = (LLVMValueRef)g_hash_table_lookup (module->idx_to_unbox_tramp, GINT_TO_POINTER (i));
if (m)
table_elems [idx ++] = LLVMBuildBitCast (builder, m, rtype, "");
}
LLVMSetInitializer (table, LLVMConstArray (elemtype, table_elems, table_len));
module->unbox_trampolines = table;
module->unbox_tramp_num = table_len;
module->unbox_tramp_elemsize = elemsize;
return;
}
func = LLVMAddFunction (lmodule, module->get_unbox_tramp_symbol, LLVMFunctionType1 (rtype, LLVMInt32Type (), FALSE));
LLVMSetLinkage (func, LLVMExternalLinkage);
LLVMSetVisibility (func, LLVMHiddenVisibility);
mono_llvm_add_func_attr (func, LLVM_ATTR_NO_UNWIND);
module->get_unbox_tramp = func;
entry_bb = LLVMAppendBasicBlock (func, "ENTRY");
bbs = g_new0 (LLVMBasicBlockRef, module->max_method_idx + 1);
for (i = 0; i < module->max_method_idx + 1; ++i) {
m = (LLVMValueRef)g_hash_table_lookup (module->idx_to_unbox_tramp, GINT_TO_POINTER (i));
if (!m)
continue;
name = g_strdup_printf ("BB_%d", i);
bb = LLVMAppendBasicBlock (func, name);
g_free (name);
bbs [i] = bb;
LLVMPositionBuilderAtEnd (builder, bb);
LLVMBuildRet (builder, LLVMBuildBitCast (builder, m, rtype, ""));
}
fail_bb = LLVMAppendBasicBlock (func, "FAIL");
LLVMPositionBuilderAtEnd (builder, fail_bb);
LLVMBuildRet (builder, LLVMConstNull (rtype));
LLVMPositionBuilderAtEnd (builder, entry_bb);
switch_ins = LLVMBuildSwitch (builder, LLVMGetParam (func, 0), fail_bb, 0);
for (i = 0; i < module->max_method_idx + 1; ++i) {
m = (LLVMValueRef)g_hash_table_lookup (module->idx_to_unbox_tramp, GINT_TO_POINTER (i));
if (!m)
continue;
LLVMAddCase (switch_ins, LLVMConstInt (LLVMInt32Type (), i, FALSE), bbs [i]);
}
mark_as_used (module, func);
LLVMDisposeBuilder (builder);
}
/*
* emit_init_aotconst:
*
* Emit a function to initialize the aotconst_ variables. Called by the runtime.
*/
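/*
 * Schematically (illustrative):
 *
 *   void <prefix>_init_aotconst (int got_offset, gpointer value) {
 *       *aotconst_var [got_offset] = value;
 *   }
 *
 * implemented either as a switch over the GOT offsets, or, on wasm, as an
 * indexed store through a table of aotconst addresses.
 */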
static void
emit_init_aotconst (MonoLLVMModule *module)
{
LLVMModuleRef lmodule = module->lmodule;
LLVMValueRef func;
LLVMBasicBlockRef entry_bb;
LLVMBuilderRef builder = LLVMCreateBuilder ();
func = LLVMAddFunction (lmodule, module->init_aotconst_symbol, LLVMFunctionType2 (LLVMVoidType (), LLVMInt32Type (), IntPtrType (), FALSE));
LLVMSetLinkage (func, LLVMExternalLinkage);
LLVMSetVisibility (func, LLVMHiddenVisibility);
mono_llvm_add_func_attr (func, LLVM_ATTR_NO_UNWIND);
module->init_aotconst_func = func;
entry_bb = LLVMAppendBasicBlock (func, "ENTRY");
LLVMPositionBuilderAtEnd (builder, entry_bb);
#ifdef TARGET_WASM
/* Emit a table of aotconst addresses instead of a switch statement to save space */
LLVMValueRef aotconsts;
LLVMTypeRef aotconst_addr_type = LLVMPointerType (module->ptr_type, 0);
int table_size = module->max_got_offset + 1;
LLVMTypeRef aotconst_arr_type = LLVMArrayType (aotconst_addr_type, table_size);
LLVMValueRef aotconst_dummy = LLVMAddGlobal (module->lmodule, module->ptr_type, "aotconst_dummy");
LLVMSetInitializer (aotconst_dummy, LLVMConstNull (module->ptr_type));
LLVMSetVisibility (aotconst_dummy, LLVMHiddenVisibility);
LLVMSetLinkage (aotconst_dummy, LLVMInternalLinkage);
aotconsts = LLVMAddGlobal (module->lmodule, aotconst_arr_type, "aotconsts");
LLVMValueRef *aotconst_init = g_new0 (LLVMValueRef, table_size);
for (int i = 0; i < table_size; ++i) {
LLVMValueRef aotconst = (LLVMValueRef)g_hash_table_lookup (module->aotconst_vars, GINT_TO_POINTER (i));
if (aotconst)
aotconst_init [i] = LLVMConstBitCast (aotconst, aotconst_addr_type);
else
aotconst_init [i] = LLVMConstBitCast (aotconst_dummy, aotconst_addr_type);
}
LLVMSetInitializer (aotconsts, LLVMConstArray (aotconst_addr_type, aotconst_init, table_size));
LLVMSetVisibility (aotconsts, LLVMHiddenVisibility);
LLVMSetLinkage (aotconsts, LLVMInternalLinkage);
LLVMBasicBlockRef exit_bb = LLVMAppendBasicBlock (func, "EXIT_BB");
LLVMBasicBlockRef main_bb = LLVMAppendBasicBlock (func, "BB");
LLVMValueRef cmp = LLVMBuildICmp (builder, LLVMIntSGE, LLVMGetParam (func, 0), LLVMConstInt (LLVMInt32Type (), table_size, FALSE), "");
LLVMBuildCondBr (builder, cmp, exit_bb, main_bb);
LLVMPositionBuilderAtEnd (builder, main_bb);
LLVMValueRef indexes [2];
indexes [0] = LLVMConstInt (LLVMInt32Type (), 0, FALSE);
indexes [1] = LLVMGetParam (func, 0);
LLVMValueRef aotconst_addr = LLVMBuildLoad (builder, LLVMBuildGEP (builder, aotconsts, indexes, 2, ""), "");
LLVMBuildStore (builder, LLVMBuildIntToPtr (builder, LLVMGetParam (func, 1), module->ptr_type, ""), aotconst_addr);
LLVMBuildBr (builder, exit_bb);
LLVMPositionBuilderAtEnd (builder, exit_bb);
LLVMBuildRetVoid (builder);
#else
LLVMValueRef switch_ins;
LLVMBasicBlockRef fail_bb, bb;
LLVMBasicBlockRef *bbs = NULL;
char *name;
bbs = g_new0 (LLVMBasicBlockRef, module->max_got_offset + 1);
for (int i = 0; i < module->max_got_offset + 1; ++i) {
name = g_strdup_printf ("BB_%d", i);
bb = LLVMAppendBasicBlock (func, name);
g_free (name);
bbs [i] = bb;
LLVMPositionBuilderAtEnd (builder, bb);
LLVMValueRef var = g_hash_table_lookup (module->aotconst_vars, GINT_TO_POINTER (i));
if (var) {
LLVMValueRef addr = LLVMBuildBitCast (builder, var, LLVMPointerType (IntPtrType (), 0), "");
LLVMBuildStore (builder, LLVMGetParam (func, 1), addr);
}
LLVMBuildRetVoid (builder);
}
fail_bb = LLVMAppendBasicBlock (func, "FAIL");
LLVMPositionBuilderAtEnd (builder, fail_bb);
LLVMBuildRetVoid (builder);
LLVMPositionBuilderAtEnd (builder, entry_bb);
switch_ins = LLVMBuildSwitch (builder, LLVMGetParam (func, 0), fail_bb, 0);
for (int i = 0; i < module->max_got_offset + 1; ++i)
LLVMAddCase (switch_ins, LLVMConstInt (LLVMInt32Type (), i, FALSE), bbs [i]);
#endif
LLVMDisposeBuilder (builder);
}
/* Add a function to mark the beginning of LLVM code */
static void
emit_llvm_code_start (MonoLLVMModule *module)
{
LLVMModuleRef lmodule = module->lmodule;
LLVMValueRef func;
LLVMBasicBlockRef entry_bb;
LLVMBuilderRef builder;
func = LLVMAddFunction (lmodule, "llvm_code_start", LLVMFunctionType (LLVMVoidType (), NULL, 0, FALSE));
LLVMSetLinkage (func, LLVMInternalLinkage);
mono_llvm_add_func_attr (func, LLVM_ATTR_NO_UNWIND);
module->code_start = func;
entry_bb = LLVMAppendBasicBlock (func, "ENTRY");
builder = LLVMCreateBuilder ();
LLVMPositionBuilderAtEnd (builder, entry_bb);
LLVMBuildRetVoid (builder);
LLVMDisposeBuilder (builder);
}
/*
* emit_init_func:
*
* Emit functions to initialize LLVM methods.
* These are wrappers around the mini_llvm_init_method () JIT icall.
* The wrappers handle adding the 'amodule' argument, loading the vtable from different locations, and they have
* a cold calling convention.
*/
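/*
 * Schematically, the emitted wrapper is (illustrative pseudocode):
 *
 *   static void init_method (gpointer info, ...) {
 *       guint32 idx = *(guint32*)info;   // method_index prefix
 *       if (!inited [idx]) {
 *           mini_llvm_init_method (module_info, amodule, info, vtable);
 *           inited [idx] = 1;
 *       }
 *   }
 *
 * The gshared variants differ only in how the vtable argument is obtained
 * (passed directly, loaded from 'this', or loaded from the mrgctx).
 */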
static LLVMValueRef
emit_init_func (MonoLLVMModule *module, MonoAotInitSubtype subtype)
{
LLVMModuleRef lmodule = module->lmodule;
LLVMValueRef func, indexes [2], args [16], callee, info_var, index_var, inited_var, cmp;
LLVMBasicBlockRef entry_bb, inited_bb, notinited_bb;
LLVMBuilderRef builder;
LLVMTypeRef icall_sig;
const char *wrapper_name = mono_marshal_get_aot_init_wrapper_name (subtype);
LLVMTypeRef func_type = NULL;
LLVMTypeRef arg_type = module->ptr_type;
char *name = g_strdup_printf ("%s_%s", module->global_prefix, wrapper_name);
switch (subtype) {
case AOT_INIT_METHOD:
func_type = LLVMFunctionType1 (LLVMVoidType (), arg_type, FALSE);
break;
case AOT_INIT_METHOD_GSHARED_MRGCTX:
case AOT_INIT_METHOD_GSHARED_VTABLE:
func_type = LLVMFunctionType2 (LLVMVoidType (), arg_type, IntPtrType (), FALSE);
break;
case AOT_INIT_METHOD_GSHARED_THIS:
func_type = LLVMFunctionType2 (LLVMVoidType (), arg_type, ObjRefType (), FALSE);
break;
default:
g_assert_not_reached ();
}
func = LLVMAddFunction (lmodule, name, func_type);
info_var = LLVMGetParam (func, 0);
LLVMSetLinkage (func, LLVMInternalLinkage);
mono_llvm_add_func_attr (func, LLVM_ATTR_NO_INLINE);
set_cold_cconv (func);
entry_bb = LLVMAppendBasicBlock (func, "ENTRY");
builder = LLVMCreateBuilder ();
LLVMPositionBuilderAtEnd (builder, entry_bb);
/* Load method_index which is emitted at the start of the method info */
indexes [0] = const_int32 (0);
indexes [1] = const_int32 (0);
	// FIXME: Make sure it's aligned
index_var = LLVMBuildLoad (builder, LLVMBuildGEP (builder, LLVMBuildBitCast (builder, info_var, LLVMPointerType (LLVMInt32Type (), 0), ""), indexes, 1, ""), "method_index");
/* Check for is_inited here as well, since this can be called from JITted code which might not check it */
indexes [0] = const_int32 (0);
indexes [1] = index_var;
inited_var = LLVMBuildLoad (builder, LLVMBuildGEP (builder, module->inited_var, indexes, 2, ""), "is_inited");
cmp = LLVMBuildICmp (builder, LLVMIntEQ, inited_var, LLVMConstInt (LLVMTypeOf (inited_var), 0, FALSE), "");
inited_bb = LLVMAppendBasicBlock (func, "INITED");
notinited_bb = LLVMAppendBasicBlock (func, "NOT_INITED");
LLVMBuildCondBr (builder, cmp, notinited_bb, inited_bb);
LLVMPositionBuilderAtEnd (builder, notinited_bb);
LLVMValueRef amodule_var = get_aotconst_module (module, builder, MONO_PATCH_INFO_AOT_MODULE, NULL, LLVMPointerType (IntPtrType (), 0), NULL, NULL);
args [0] = LLVMBuildPtrToInt (builder, module->info_var, IntPtrType (), "");
args [1] = LLVMBuildPtrToInt (builder, amodule_var, IntPtrType (), "");
args [2] = info_var;
switch (subtype) {
case AOT_INIT_METHOD:
args [3] = LLVMConstNull (IntPtrType ());
break;
case AOT_INIT_METHOD_GSHARED_VTABLE:
args [3] = LLVMGetParam (func, 1);
break;
case AOT_INIT_METHOD_GSHARED_THIS:
/* Load this->vtable */
args [3] = LLVMBuildBitCast (builder, LLVMGetParam (func, 1), LLVMPointerType (IntPtrType (), 0), "");
indexes [0] = const_int32 (MONO_STRUCT_OFFSET (MonoObject, vtable) / SIZEOF_VOID_P);
args [3] = LLVMBuildLoad (builder, LLVMBuildGEP (builder, args [3], indexes, 1, ""), "vtable");
break;
case AOT_INIT_METHOD_GSHARED_MRGCTX:
/* Load mrgctx->vtable */
args [3] = LLVMBuildIntToPtr (builder, LLVMGetParam (func, 1), LLVMPointerType (IntPtrType (), 0), "");
indexes [0] = const_int32 (MONO_STRUCT_OFFSET (MonoMethodRuntimeGenericContext, class_vtable) / SIZEOF_VOID_P);
args [3] = LLVMBuildLoad (builder, LLVMBuildGEP (builder, args [3], indexes, 1, ""), "vtable");
break;
default:
g_assert_not_reached ();
break;
}
/* Call the mini_llvm_init_method JIT icall */
icall_sig = LLVMFunctionType4 (LLVMVoidType (), IntPtrType (), IntPtrType (), arg_type, IntPtrType (), FALSE);
callee = get_aotconst_module (module, builder, MONO_PATCH_INFO_JIT_ICALL_ID, GINT_TO_POINTER (MONO_JIT_ICALL_mini_llvm_init_method), LLVMPointerType (icall_sig, 0), NULL, NULL);
LLVMBuildCall (builder, callee, args, LLVMCountParamTypes (icall_sig), "");
/*
* Set the inited flag
	 * This is already done by the LLVM methods themselves, but it's needed by JITted methods.
*/
indexes [0] = const_int32 (0);
indexes [1] = index_var;
LLVMBuildStore (builder, LLVMConstInt (LLVMInt8Type (), 1, FALSE), LLVMBuildGEP (builder, module->inited_var, indexes, 2, ""));
LLVMBuildBr (builder, inited_bb);
LLVMPositionBuilderAtEnd (builder, inited_bb);
LLVMBuildRetVoid (builder);
LLVMVerifyFunction (func, LLVMAbortProcessAction);
LLVMDisposeBuilder (builder);
g_free (name);
return func;
}
/* Emit a wrapper around the parameterless JIT icall ICALL_ID with a cold calling convention */
static LLVMValueRef
emit_icall_cold_wrapper (MonoLLVMModule *module, LLVMModuleRef lmodule, MonoJitICallId icall_id, gboolean aot)
{
LLVMValueRef func, callee;
LLVMBasicBlockRef entry_bb;
LLVMBuilderRef builder;
LLVMTypeRef sig;
char *name;
name = g_strdup_printf ("%s_icall_cold_wrapper_%d", module->global_prefix, icall_id);
func = LLVMAddFunction (lmodule, name, LLVMFunctionType (LLVMVoidType (), NULL, 0, FALSE));
sig = LLVMFunctionType (LLVMVoidType (), NULL, 0, FALSE);
LLVMSetLinkage (func, LLVMInternalLinkage);
mono_llvm_add_func_attr (func, LLVM_ATTR_NO_INLINE);
set_cold_cconv (func);
entry_bb = LLVMAppendBasicBlock (func, "ENTRY");
builder = LLVMCreateBuilder ();
LLVMPositionBuilderAtEnd (builder, entry_bb);
if (aot) {
callee = get_aotconst_module (module, builder, MONO_PATCH_INFO_JIT_ICALL_ID, GUINT_TO_POINTER (icall_id), LLVMPointerType (sig, 0), NULL, NULL);
} else {
MonoJitICallInfo * const info = mono_find_jit_icall_info (icall_id);
gpointer target = (gpointer)mono_icall_get_wrapper_full (info, TRUE);
LLVMValueRef tramp_var = LLVMAddGlobal (lmodule, LLVMPointerType (sig, 0), name);
LLVMSetInitializer (tramp_var, LLVMConstIntToPtr (LLVMConstInt (LLVMInt64Type (), (guint64)(size_t)target, FALSE), LLVMPointerType (sig, 0)));
LLVMSetLinkage (tramp_var, LLVMExternalLinkage);
callee = LLVMBuildLoad (builder, tramp_var, "");
}
LLVMBuildCall (builder, callee, NULL, 0, "");
LLVMBuildRetVoid (builder);
LLVMVerifyFunction(func, LLVMAbortProcessAction);
LLVMDisposeBuilder (builder);
return func;
}
/*
* Emit wrappers around the C icalls used to initialize llvm methods, to
* make the calling code smaller and to enable usage of the llvm
* cold calling convention.
*/
static void
emit_init_funcs (MonoLLVMModule *module)
{
for (int i = 0; i < AOT_INIT_METHOD_NUM; ++i)
module->init_methods [i] = emit_init_func (module, i);
}
static LLVMValueRef
get_init_func (MonoLLVMModule *module, MonoAotInitSubtype subtype)
{
return module->init_methods [subtype];
}
static void
emit_gc_safepoint_poll (MonoLLVMModule *module, LLVMModuleRef lmodule, MonoCompile *cfg)
{
gboolean is_aot = cfg == NULL || cfg->compile_aot;
LLVMValueRef func = mono_llvm_get_or_insert_gc_safepoint_poll (lmodule);
mono_llvm_add_func_attr (func, LLVM_ATTR_NO_UNWIND);
if (is_aot) {
#if TARGET_WIN32
if (module->static_link) {
LLVMSetLinkage (func, LLVMInternalLinkage);
/* Prevent it from being optimized away, leading to asserts inside 'opt' */
mark_as_used (module, func);
} else {
LLVMSetLinkage (func, LLVMWeakODRLinkage);
}
#else
LLVMSetLinkage (func, LLVMWeakODRLinkage);
#endif
} else {
mono_llvm_add_func_attr (func, LLVM_ATTR_OPTIMIZE_NONE); // no need to waste time here, the function is already optimized and will be inlined.
mono_llvm_add_func_attr (func, LLVM_ATTR_NO_INLINE); // optnone attribute requires noinline (but it will be inlined anyway)
if (!module->gc_poll_cold_wrapper_compiled) {
ERROR_DECL (error);
/* Compiling a method here is a bit ugly, but it works */
MonoMethod *wrapper = mono_marshal_get_llvm_func_wrapper (LLVM_FUNC_WRAPPER_GC_POLL);
module->gc_poll_cold_wrapper_compiled = mono_jit_compile_method (wrapper, error);
mono_error_assert_ok (error);
}
}
LLVMBasicBlockRef entry_bb = LLVMAppendBasicBlock (func, "gc.safepoint_poll.entry");
LLVMBasicBlockRef poll_bb = LLVMAppendBasicBlock (func, "gc.safepoint_poll.poll");
LLVMBasicBlockRef exit_bb = LLVMAppendBasicBlock (func, "gc.safepoint_poll.exit");
LLVMTypeRef ptr_type = LLVMPointerType (IntPtrType (), 0);
LLVMBuilderRef builder = LLVMCreateBuilder ();
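	/*
	 * Generated control flow (illustrative):
	 *
	 *   entry: if (*gc_safe_point_flag == 0) goto exit; else goto poll;  // exit is the likely branch
	 *   poll:  mono_threads_state_poll ();  // cold call
	 *          goto exit;
	 *   exit:  ret void
	 */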
/* entry: */
LLVMPositionBuilderAtEnd (builder, entry_bb);
LLVMValueRef poll_val_ptr;
if (is_aot) {
poll_val_ptr = get_aotconst_module (module, builder, MONO_PATCH_INFO_GC_SAFE_POINT_FLAG, NULL, ptr_type, NULL, NULL);
} else {
LLVMValueRef poll_val_int = LLVMConstInt (IntPtrType (), (guint64) &mono_polling_required, FALSE);
poll_val_ptr = LLVMBuildIntToPtr (builder, poll_val_int, ptr_type, "");
}
LLVMValueRef poll_val_ptr_load = LLVMBuildLoad (builder, poll_val_ptr, ""); // probably needs to be volatile
LLVMValueRef poll_val = LLVMBuildPtrToInt (builder, poll_val_ptr_load, IntPtrType (), "");
LLVMValueRef poll_val_zero = LLVMConstNull (LLVMTypeOf (poll_val));
LLVMValueRef cmp = LLVMBuildICmp (builder, LLVMIntEQ, poll_val, poll_val_zero, "");
mono_llvm_build_weighted_branch (builder, cmp, exit_bb, poll_bb, 1000 /* weight for exit_bb */, 1 /* weight for poll_bb */);
/* poll: */
LLVMPositionBuilderAtEnd (builder, poll_bb);
LLVMValueRef call;
if (is_aot) {
LLVMValueRef icall_wrapper = emit_icall_cold_wrapper (module, lmodule, MONO_JIT_ICALL_mono_threads_state_poll, TRUE);
module->gc_poll_cold_wrapper = icall_wrapper;
call = LLVMBuildCall (builder, icall_wrapper, NULL, 0, "");
} else {
		// In JIT mode we have to emit the @gc.safepoint_poll function for each method (module).
		// It calls gc_poll_cold_wrapper_compiled through a global variable.
		// @gc.safepoint_poll will be inlined and can be deleted after the -place-safepoints pass.
LLVMTypeRef poll_sig = LLVMFunctionType0 (LLVMVoidType (), FALSE);
LLVMTypeRef poll_sig_ptr = LLVMPointerType (poll_sig, 0);
gpointer target = resolve_patch (cfg, MONO_PATCH_INFO_ABS, module->gc_poll_cold_wrapper_compiled);
LLVMValueRef tramp_var = LLVMAddGlobal (lmodule, poll_sig_ptr, "mono_threads_state_poll");
LLVMValueRef target_val = LLVMConstInt (LLVMInt64Type (), (guint64) target, FALSE);
LLVMSetInitializer (tramp_var, LLVMConstIntToPtr (target_val, poll_sig_ptr));
LLVMSetLinkage (tramp_var, LLVMExternalLinkage);
LLVMValueRef callee = LLVMBuildLoad (builder, tramp_var, "");
call = LLVMBuildCall (builder, callee, NULL, 0, "");
}
set_call_cold_cconv (call);
LLVMBuildBr (builder, exit_bb);
/* exit: */
LLVMPositionBuilderAtEnd (builder, exit_bb);
LLVMBuildRetVoid (builder);
LLVMDisposeBuilder (builder);
}
static void
emit_llvm_code_end (MonoLLVMModule *module)
{
LLVMModuleRef lmodule = module->lmodule;
LLVMValueRef func;
LLVMBasicBlockRef entry_bb;
LLVMBuilderRef builder;
func = LLVMAddFunction (lmodule, "llvm_code_end", LLVMFunctionType (LLVMVoidType (), NULL, 0, FALSE));
LLVMSetLinkage (func, LLVMInternalLinkage);
mono_llvm_add_func_attr (func, LLVM_ATTR_NO_UNWIND);
module->code_end = func;
entry_bb = LLVMAppendBasicBlock (func, "ENTRY");
builder = LLVMCreateBuilder ();
LLVMPositionBuilderAtEnd (builder, entry_bb);
LLVMBuildRetVoid (builder);
LLVMDisposeBuilder (builder);
}
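
/*
 * emit_div_check:
 *
 * Emit the checks required by the CLI before an integer division
 * (illustrative):
 *
 *   if (rhs == 0)                                     throw DivideByZeroException;
 *   if (signed && rhs == -1 && lhs == INT{32,64}_MIN) throw OverflowException;
 */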
static void
emit_div_check (EmitContext *ctx, LLVMBuilderRef builder, MonoBasicBlock *bb, MonoInst *ins, LLVMValueRef lhs, LLVMValueRef rhs)
{
gboolean need_div_check = ctx->cfg->backend->need_div_check;
if (bb->region)
		/* LLVM doesn't know that these instructions can throw an exception since they are not called through an intrinsic */
need_div_check = TRUE;
if (!need_div_check)
return;
switch (ins->opcode) {
case OP_IDIV:
case OP_LDIV:
case OP_IREM:
case OP_LREM:
case OP_IDIV_UN:
case OP_LDIV_UN:
case OP_IREM_UN:
case OP_LREM_UN:
case OP_IDIV_IMM:
case OP_LDIV_IMM:
case OP_IREM_IMM:
case OP_LREM_IMM:
case OP_IDIV_UN_IMM:
case OP_LDIV_UN_IMM:
case OP_IREM_UN_IMM:
case OP_LREM_UN_IMM: {
LLVMValueRef cmp;
gboolean is_signed = (ins->opcode == OP_IDIV || ins->opcode == OP_LDIV || ins->opcode == OP_IREM || ins->opcode == OP_LREM ||
ins->opcode == OP_IDIV_IMM || ins->opcode == OP_LDIV_IMM || ins->opcode == OP_IREM_IMM || ins->opcode == OP_LREM_IMM);
cmp = LLVMBuildICmp (builder, LLVMIntEQ, rhs, LLVMConstInt (LLVMTypeOf (rhs), 0, FALSE), "");
emit_cond_system_exception (ctx, bb, "DivideByZeroException", cmp, FALSE);
if (!ctx_ok (ctx))
break;
builder = ctx->builder;
		/* Overflow check: rhs == -1 && lhs == INT{32,64}_MIN */
if (is_signed) {
LLVMValueRef c = (LLVMTypeOf (lhs) == LLVMInt32Type ()) ? LLVMConstInt (LLVMTypeOf (lhs), 0x80000000, FALSE) : LLVMConstInt (LLVMTypeOf (lhs), 0x8000000000000000LL, FALSE);
LLVMValueRef cond1 = LLVMBuildICmp (builder, LLVMIntEQ, rhs, LLVMConstInt (LLVMTypeOf (rhs), -1, FALSE), "");
LLVMValueRef cond2 = LLVMBuildICmp (builder, LLVMIntEQ, lhs, c, "");
cmp = LLVMBuildICmp (builder, LLVMIntEQ, LLVMBuildAnd (builder, cond1, cond2, ""), LLVMConstInt (LLVMInt1Type (), 1, FALSE), "");
emit_cond_system_exception (ctx, bb, "OverflowException", cmp, FALSE);
if (!ctx_ok (ctx))
break;
builder = ctx->builder;
}
break;
}
default:
break;
}
}
/*
* emit_method_init:
*
* Emit code to initialize the GOT slots used by the method.
*/
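/*
 * The emitted check is roughly (illustrative pseudocode):
 *
 *   if (G_UNLIKELY (!inited [cfg->method_index])) {   // llvm.expect
 *       init_method_<subtype> (info, rgctx/this);     // cold call
 *       inited [cfg->method_index] = 1;
 *   }
 */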
static void
emit_method_init (EmitContext *ctx)
{
LLVMValueRef indexes [16], args [16];
LLVMValueRef inited_var, cmp, call;
LLVMBasicBlockRef inited_bb, notinited_bb;
LLVMBuilderRef builder = ctx->builder;
MonoCompile *cfg = ctx->cfg;
MonoAotInitSubtype subtype;
ctx->module->max_inited_idx = MAX (ctx->module->max_inited_idx, cfg->method_index);
indexes [0] = const_int32 (0);
indexes [1] = const_int32 (cfg->method_index);
inited_var = LLVMBuildLoad (builder, LLVMBuildGEP (builder, ctx->module->inited_var, indexes, 2, ""), "is_inited");
args [0] = inited_var;
args [1] = LLVMConstInt (LLVMInt8Type (), 1, FALSE);
inited_var = LLVMBuildCall (ctx->builder, get_intrins (ctx, INTRINS_EXPECT_I8), args, 2, "");
cmp = LLVMBuildICmp (builder, LLVMIntEQ, inited_var, LLVMConstInt (LLVMTypeOf (inited_var), 0, FALSE), "");
inited_bb = ctx->inited_bb;
notinited_bb = gen_bb (ctx, "NOTINITED_BB");
ctx->cfg->llvmonly_init_cond = LLVMBuildCondBr (ctx->builder, cmp, notinited_bb, inited_bb);
builder = ctx->builder = create_builder (ctx);
LLVMPositionBuilderAtEnd (ctx->builder, notinited_bb);
LLVMTypeRef type = LLVMArrayType (LLVMInt8Type (), 0);
char *symbol = g_strdup_printf ("info_dummy_%s", cfg->llvm_method_name);
LLVMValueRef info_var = LLVMAddGlobal (ctx->lmodule, type, symbol);
g_free (symbol);
cfg->llvm_dummy_info_var = info_var;
int nargs = 0;
args [nargs ++] = convert (ctx, info_var, ctx->module->ptr_type);
switch (cfg->rgctx_access) {
case MONO_RGCTX_ACCESS_MRGCTX:
if (ctx->rgctx_arg) {
args [nargs ++] = convert (ctx, ctx->rgctx_arg, IntPtrType ());
subtype = AOT_INIT_METHOD_GSHARED_MRGCTX;
} else {
g_assert (ctx->this_arg);
args [nargs ++] = convert (ctx, ctx->this_arg, ObjRefType ());
subtype = AOT_INIT_METHOD_GSHARED_THIS;
}
break;
case MONO_RGCTX_ACCESS_VTABLE:
args [nargs ++] = convert (ctx, ctx->rgctx_arg, IntPtrType ());
subtype = AOT_INIT_METHOD_GSHARED_VTABLE;
break;
case MONO_RGCTX_ACCESS_THIS:
args [nargs ++] = convert (ctx, ctx->this_arg, ObjRefType ());
subtype = AOT_INIT_METHOD_GSHARED_THIS;
break;
case MONO_RGCTX_ACCESS_NONE:
subtype = AOT_INIT_METHOD;
break;
default:
g_assert_not_reached ();
}
call = LLVMBuildCall (builder, ctx->module->init_methods [subtype], args, nargs, "");
/*
* This enables llvm to keep arguments in their original registers/
* scratch registers, since the call will not clobber them.
*/
set_call_cold_cconv (call);
// Set the inited flag
indexes [0] = const_int32 (0);
indexes [1] = const_int32 (cfg->method_index);
LLVMBuildStore (builder, LLVMConstInt (LLVMInt8Type (), 1, FALSE), LLVMBuildGEP (builder, ctx->module->inited_var, indexes, 2, ""));
LLVMBuildBr (builder, inited_bb);
ctx->bblocks [cfg->bb_entry->block_num].end_bblock = inited_bb;
builder = ctx->builder = create_builder (ctx);
LLVMPositionBuilderAtEnd (ctx->builder, inited_bb);
}
static void
emit_unbox_tramp (EmitContext *ctx, const char *method_name, LLVMTypeRef method_type, LLVMValueRef method, int method_index)
{
/*
* Emit unbox trampoline using a tailcall
*/
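	/*
	 * The trampoline is simply (illustrative):
	 *
	 *   ret_type ut_<name> (MonoObject *this, ...) {
	 *       return <name> ((char*)this + sizeof (MonoObject), ...);
	 *   }
	 *
	 * i.e. it unboxes the receiver by skipping the MonoObject header and
	 * (tail-)calls the real method.
	 */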
LLVMValueRef tramp, call, *args;
LLVMBuilderRef builder;
LLVMBasicBlockRef lbb;
LLVMCallInfo *linfo;
char *tramp_name;
int i, nargs;
tramp_name = g_strdup_printf ("ut_%s", method_name);
tramp = LLVMAddFunction (ctx->module->lmodule, tramp_name, method_type);
LLVMSetLinkage (tramp, LLVMInternalLinkage);
mono_llvm_add_func_attr (tramp, LLVM_ATTR_OPTIMIZE_FOR_SIZE);
//mono_llvm_add_func_attr (tramp, LLVM_ATTR_NO_UNWIND);
linfo = ctx->linfo;
// FIXME: Reduce code duplication with mono_llvm_compile_method () etc.
if (!ctx->llvm_only && ctx->rgctx_arg_pindex != -1)
mono_llvm_add_param_attr (LLVMGetParam (tramp, ctx->rgctx_arg_pindex), LLVM_ATTR_IN_REG);
if (ctx->cfg->vret_addr) {
LLVMSetValueName (LLVMGetParam (tramp, linfo->vret_arg_pindex), "vret");
if (linfo->ret.storage == LLVMArgVtypeByRef) {
mono_llvm_add_param_attr (LLVMGetParam (tramp, linfo->vret_arg_pindex), LLVM_ATTR_STRUCT_RET);
mono_llvm_add_param_attr (LLVMGetParam (tramp, linfo->vret_arg_pindex), LLVM_ATTR_NO_ALIAS);
}
}
lbb = LLVMAppendBasicBlock (tramp, "");
builder = LLVMCreateBuilder ();
LLVMPositionBuilderAtEnd (builder, lbb);
nargs = LLVMCountParamTypes (method_type);
args = g_new0 (LLVMValueRef, nargs);
for (i = 0; i < nargs; ++i) {
args [i] = LLVMGetParam (tramp, i);
if (i == ctx->this_arg_pindex) {
LLVMTypeRef arg_type = LLVMTypeOf (args [i]);
args [i] = LLVMBuildPtrToInt (builder, args [i], IntPtrType (), "");
args [i] = LLVMBuildAdd (builder, args [i], LLVMConstInt (IntPtrType (), MONO_ABI_SIZEOF (MonoObject), FALSE), "");
args [i] = LLVMBuildIntToPtr (builder, args [i], arg_type, "");
}
}
call = LLVMBuildCall (builder, method, args, nargs, "");
if (!ctx->llvm_only && ctx->rgctx_arg_pindex != -1)
mono_llvm_add_instr_attr (call, 1 + ctx->rgctx_arg_pindex, LLVM_ATTR_IN_REG);
if (linfo->ret.storage == LLVMArgVtypeByRef)
mono_llvm_add_instr_attr (call, 1 + linfo->vret_arg_pindex, LLVM_ATTR_STRUCT_RET);
// FIXME: This causes assertions in clang
//mono_llvm_set_must_tailcall (call);
if (LLVMGetReturnType (method_type) == LLVMVoidType ())
LLVMBuildRetVoid (builder);
else
LLVMBuildRet (builder, call);
g_hash_table_insert (ctx->module->idx_to_unbox_tramp, GINT_TO_POINTER (method_index), tramp);
LLVMDisposeBuilder (builder);
}
#ifdef TARGET_WASM
static void
emit_gc_pin (EmitContext *ctx, LLVMBuilderRef builder, int vreg)
{
LLVMValueRef index0 = LLVMConstInt (LLVMInt32Type (), 0, FALSE);
LLVMValueRef index1 = LLVMConstInt (LLVMInt32Type (), ctx->gc_var_indexes [vreg] - 1, FALSE);
LLVMValueRef indexes [] = { index0, index1 };
LLVMValueRef addr = LLVMBuildGEP (builder, ctx->gc_pin_area, indexes, 2, "");
mono_llvm_build_store (builder, convert (ctx, ctx->values [vreg], IntPtrType ()), addr, TRUE, LLVM_BARRIER_NONE);
}
#endif
/*
* emit_entry_bb:
*
* Emit code to load/convert arguments.
*/
static void
emit_entry_bb (EmitContext *ctx, LLVMBuilderRef builder)
{
int i, j, pindex;
MonoCompile *cfg = ctx->cfg;
MonoMethodSignature *sig = ctx->sig;
LLVMCallInfo *linfo = ctx->linfo;
MonoBasicBlock *bb;
char **names;
LLVMBuilderRef old_builder = ctx->builder;
ctx->builder = builder;
ctx->alloca_builder = create_builder (ctx);
#ifdef TARGET_WASM
/*
* For GC stack scanning to work, allocate an area on the stack and store
	 * every ref vreg into it after it's written. Because the stack is scanned
* conservatively, the objects will be pinned, so the vregs can directly
* reference the objects, there is no need to load them from the stack
* on every access.
*/
ctx->gc_var_indexes = g_new0 (int, cfg->next_vreg);
int ngc_vars = 0;
for (i = 0; i < cfg->next_vreg; ++i) {
if (vreg_is_ref (cfg, i)) {
ctx->gc_var_indexes [i] = ngc_vars + 1;
ngc_vars ++;
}
}
// FIXME: Count only live vregs
ctx->gc_pin_area = build_alloca_llvm_type_name (ctx, LLVMArrayType (IntPtrType (), ngc_vars), 0, "gc_pin");
#endif
/*
* Handle indirect/volatile variables by allocating memory for them
* using 'alloca', and storing their address in a temporary.
*/
for (i = 0; i < cfg->num_varinfo; ++i) {
MonoInst *var = cfg->varinfo [i];
if ((var->opcode == OP_GSHAREDVT_LOCAL || var->opcode == OP_GSHAREDVT_ARG_REGOFFSET))
continue;
if (var->flags & (MONO_INST_VOLATILE|MONO_INST_INDIRECT) || (mini_type_is_vtype (var->inst_vtype) && !MONO_CLASS_IS_SIMD (ctx->cfg, var->klass))) {
if (!ctx_ok (ctx))
return;
/* Could be already created by an OP_VPHI */
if (!ctx->addresses [var->dreg]) {
if (var->flags & MONO_INST_LMF) {
// FIXME: Allocate a smaller struct in the deopt case
int size = cfg->deopt ? MONO_ABI_SIZEOF (MonoLMFExt) : MONO_ABI_SIZEOF (MonoLMF);
ctx->addresses [var->dreg] = build_alloca_llvm_type_name (ctx, LLVMArrayType (LLVMInt8Type (), size), sizeof (target_mgreg_t), "lmf");
} else {
char *name = g_strdup_printf ("vreg_loc_%d", var->dreg);
ctx->addresses [var->dreg] = build_named_alloca (ctx, var->inst_vtype, name);
g_free (name);
}
}
ctx->vreg_cli_types [var->dreg] = var->inst_vtype;
}
}
names = g_new (char *, sig->param_count);
mono_method_get_param_names (cfg->method, (const char **) names);
for (i = 0; i < sig->param_count; ++i) {
LLVMArgInfo *ainfo = &linfo->args [i + sig->hasthis];
int reg = cfg->args [i + sig->hasthis]->dreg;
char *name;
pindex = ainfo->pindex;
LLVMValueRef arg = LLVMGetParam (ctx->lmethod, pindex);
switch (ainfo->storage) {
case LLVMArgVtypeInReg:
case LLVMArgAsFpArgs: {
LLVMValueRef args [8];
int j;
pindex += ainfo->ndummy_fpargs;
/* The argument is received as a set of int/fp arguments, store them into the real argument */
memset (args, 0, sizeof (args));
if (ainfo->storage == LLVMArgVtypeInReg) {
args [0] = LLVMGetParam (ctx->lmethod, pindex);
if (ainfo->pair_storage [1] != LLVMArgNone)
args [1] = LLVMGetParam (ctx->lmethod, pindex + 1);
} else {
g_assert (ainfo->nslots <= 8);
for (j = 0; j < ainfo->nslots; ++j)
args [j] = LLVMGetParam (ctx->lmethod, pindex + j);
}
ctx->addresses [reg] = build_alloca (ctx, ainfo->type);
emit_args_to_vtype (ctx, builder, ainfo->type, ctx->addresses [reg], ainfo, args);
break;
}
case LLVMArgVtypeByVal: {
ctx->addresses [reg] = LLVMGetParam (ctx->lmethod, pindex);
break;
}
case LLVMArgVtypeAddr:
case LLVMArgVtypeByRef: {
/* The argument is passed by ref */
ctx->addresses [reg] = LLVMGetParam (ctx->lmethod, pindex);
break;
}
case LLVMArgAsIArgs: {
LLVMValueRef arg = LLVMGetParam (ctx->lmethod, pindex);
int size;
MonoType *t = mini_get_underlying_type (ainfo->type);
/* The argument is received as an array of ints, store it into the real argument */
ctx->addresses [reg] = build_alloca (ctx, t);
size = mono_class_value_size (mono_class_from_mono_type_internal (t), NULL);
if (size == 0) {
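				/* Empty struct: nothing to store */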
} else if (size < TARGET_SIZEOF_VOID_P) {
/* The upper bits of the registers might not be valid */
LLVMValueRef val = LLVMBuildExtractValue (builder, arg, 0, "");
LLVMValueRef dest = convert (ctx, ctx->addresses [reg], LLVMPointerType (LLVMIntType (size * 8), 0));
LLVMBuildStore (ctx->builder, LLVMBuildTrunc (builder, val, LLVMIntType (size * 8), ""), dest);
} else {
LLVMBuildStore (ctx->builder, arg, convert (ctx, ctx->addresses [reg], LLVMPointerType (LLVMTypeOf (arg), 0)));
}
break;
}
case LLVMArgVtypeAsScalar:
g_assert_not_reached ();
break;
case LLVMArgWasmVtypeAsScalar: {
MonoType *t = mini_get_underlying_type (ainfo->type);
/* The argument is received as a scalar */
ctx->addresses [reg] = build_alloca (ctx, t);
LLVMValueRef dest = convert (ctx, ctx->addresses [reg], LLVMPointerType (LLVMIntType (ainfo->esize * 8), 0));
LLVMBuildStore (ctx->builder, arg, dest);
break;
}
case LLVMArgGsharedvtFixed: {
/* These are non-gsharedvt arguments passed by ref, the rest of the IR treats them as scalars */
LLVMValueRef arg = LLVMGetParam (ctx->lmethod, pindex);
if (names [i])
name = g_strdup_printf ("arg_%s", names [i]);
else
name = g_strdup_printf ("arg_%d", i);
ctx->values [reg] = LLVMBuildLoad (builder, convert (ctx, arg, LLVMPointerType (type_to_llvm_type (ctx, ainfo->type), 0)), name);
break;
}
case LLVMArgGsharedvtFixedVtype: {
LLVMValueRef arg = LLVMGetParam (ctx->lmethod, pindex);
if (names [i])
name = g_strdup_printf ("vtype_arg_%s", names [i]);
else
name = g_strdup_printf ("vtype_arg_%d", i);
/* Non-gsharedvt vtype argument passed by ref, the rest of the IR treats it as a vtype */
g_assert (ctx->addresses [reg]);
LLVMSetValueName (ctx->addresses [reg], name);
LLVMBuildStore (builder, LLVMBuildLoad (builder, convert (ctx, arg, LLVMPointerType (type_to_llvm_type (ctx, ainfo->type), 0)), ""), ctx->addresses [reg]);
break;
}
case LLVMArgGsharedvtVariable:
/* The IR treats these as variables with addresses */
if (!ctx->addresses [reg])
ctx->addresses [reg] = LLVMGetParam (ctx->lmethod, pindex);
break;
default: {
LLVMTypeRef t;
/* Needed to avoid phi argument mismatch errors since operations on pointers produce i32/i64 */
if (m_type_is_byref (ainfo->type))
t = IntPtrType ();
else
t = type_to_llvm_type (ctx, ainfo->type);
ctx->values [reg] = convert_full (ctx, ctx->values [reg], llvm_type_to_stack_type (cfg, t), type_is_unsigned (ctx, ainfo->type));
break;
}
}
switch (ainfo->storage) {
case LLVMArgVtypeInReg:
case LLVMArgVtypeByVal:
case LLVMArgAsIArgs:
// FIXME: Enabling this fails on windows
case LLVMArgVtypeAddr:
case LLVMArgVtypeByRef:
{
if (MONO_CLASS_IS_SIMD (ctx->cfg, mono_class_from_mono_type_internal (ainfo->type)))
/* Treat these as normal values */
ctx->values [reg] = LLVMBuildLoad (builder, ctx->addresses [reg], "simd_vtype");
break;
}
default:
break;
}
}
g_free (names);
if (sig->hasthis) {
/* Handle this arguments as inputs to phi nodes */
int reg = cfg->args [0]->dreg;
if (ctx->vreg_types [reg])
ctx->values [reg] = convert (ctx, ctx->values [reg], ctx->vreg_types [reg]);
}
if (cfg->vret_addr)
emit_volatile_store (ctx, cfg->vret_addr->dreg);
if (sig->hasthis)
emit_volatile_store (ctx, cfg->args [0]->dreg);
for (i = 0; i < sig->param_count; ++i)
if (!mini_type_is_vtype (sig->params [i]))
emit_volatile_store (ctx, cfg->args [i + sig->hasthis]->dreg);
if (sig->hasthis && !cfg->rgctx_var && cfg->gshared && !cfg->llvm_only) {
LLVMValueRef this_alloc;
/*
* The exception handling code needs the location where the this argument was
* stored for gshared methods. We create a separate alloca to hold it, and mark it
* with the "mono.this" custom metadata to tell llvm that it needs to save its
* location into the LSDA.
*/
this_alloc = mono_llvm_build_alloca (builder, ThisType (), LLVMConstInt (LLVMInt32Type (), 1, FALSE), 0, "");
/* This volatile store will keep the alloca alive */
mono_llvm_build_store (builder, ctx->values [cfg->args [0]->dreg], this_alloc, TRUE, LLVM_BARRIER_NONE);
set_metadata_flag (this_alloc, "mono.this");
}
if (cfg->rgctx_var) {
if (!(cfg->rgctx_var->flags & MONO_INST_VOLATILE)) {
/* FIXME: This could be volatile even in llvmonly mode if used inside a clause etc. */
g_assert (!ctx->addresses [cfg->rgctx_var->dreg]);
ctx->values [cfg->rgctx_var->dreg] = ctx->rgctx_arg;
} else {
LLVMValueRef rgctx_alloc, store;
/*
* We handle the rgctx arg similarly to the this pointer.
*/
g_assert (ctx->addresses [cfg->rgctx_var->dreg]);
rgctx_alloc = ctx->addresses [cfg->rgctx_var->dreg];
/* This volatile store will keep the alloca alive */
store = mono_llvm_build_store (builder, convert (ctx, ctx->rgctx_arg, IntPtrType ()), rgctx_alloc, TRUE, LLVM_BARRIER_NONE);
(void)store; /* unused */
set_metadata_flag (rgctx_alloc, "mono.this");
}
}
#ifdef TARGET_WASM
/*
* Store ref arguments to the pin area.
	 * FIXME: This might not be needed, since the caller already does it?
*/
for (i = 0; i < cfg->num_varinfo; ++i) {
MonoInst *var = cfg->varinfo [i];
if (var->opcode == OP_ARG && vreg_is_ref (cfg, var->dreg) && ctx->values [var->dreg])
emit_gc_pin (ctx, builder, var->dreg);
}
#endif
if (cfg->deopt) {
LLVMValueRef addr, index [2];
MonoMethodHeader *header = cfg->header;
int nfields = (sig->ret->type != MONO_TYPE_VOID ? 1 : 0) + sig->hasthis + sig->param_count + header->num_locals + 2;
LLVMTypeRef *types = g_alloca (nfields * sizeof (LLVMTypeRef));
int findex = 0;
/* method */
types [findex ++] = IntPtrType ();
/* il_offset */
types [findex ++] = LLVMInt32Type ();
int data_start = findex;
/* data */
if (sig->ret->type != MONO_TYPE_VOID)
types [findex ++] = IntPtrType ();
if (sig->hasthis)
types [findex ++] = IntPtrType ();
for (int i = 0; i < sig->param_count; ++i)
types [findex ++] = LLVMPointerType (type_to_llvm_type (ctx, sig->params [i]), 0);
for (int i = 0; i < header->num_locals; ++i)
types [findex ++] = LLVMPointerType (type_to_llvm_type (ctx, header->locals [i]), 0);
g_assert (findex == nfields);
char *name = g_strdup_printf ("%s_il_state", ctx->method_name);
LLVMTypeRef il_state_type = LLVMStructCreateNamed (ctx->module->context, name);
LLVMStructSetBody (il_state_type, types, nfields, FALSE);
g_free (name);
ctx->il_state = build_alloca_llvm_type_name (ctx, il_state_type, 0, "il_state");
g_assert (cfg->il_state_var);
ctx->addresses [cfg->il_state_var->dreg] = ctx->il_state;
/* Set il_state->il_offset = -1 */
index [0] = LLVMConstInt (LLVMInt32Type (), 0, FALSE);
index [1] = LLVMConstInt (LLVMInt32Type (), 1, FALSE);
addr = LLVMBuildGEP (builder, ctx->il_state, index, 2, "");
LLVMBuildStore (ctx->builder, LLVMConstInt (types [1], -1, FALSE), addr);
/*
* Set il_state->data [i] to either the address of the arg/local, or NULL.
* Because of mono_liveness_handle_exception_clauses (), all locals used/reachable from
* clauses are supposed to be volatile, so they have an address.
*/
findex = data_start;
if (sig->ret->type != MONO_TYPE_VOID) {
LLVMTypeRef ret_type = type_to_llvm_type (ctx, sig->ret);
ctx->il_state_ret = build_alloca_llvm_type_name (ctx, ret_type, 0, "il_state_ret");
index [0] = LLVMConstInt (LLVMInt32Type (), 0, FALSE);
index [1] = LLVMConstInt (LLVMInt32Type (), findex, FALSE);
addr = LLVMBuildGEP (builder, ctx->il_state, index, 2, "");
LLVMBuildStore (ctx->builder, ctx->il_state_ret, convert (ctx, addr, LLVMPointerType (LLVMTypeOf (ctx->il_state_ret), 0)));
findex ++;
}
for (int i = 0; i < sig->hasthis + sig->param_count; ++i) {
LLVMValueRef var_addr = ctx->addresses [cfg->args [i]->dreg];
index [0] = LLVMConstInt (LLVMInt32Type (), 0, FALSE);
index [1] = LLVMConstInt (LLVMInt32Type (), findex, FALSE);
addr = LLVMBuildGEP (builder, ctx->il_state, index, 2, "");
if (var_addr)
LLVMBuildStore (ctx->builder, var_addr, convert (ctx, addr, LLVMPointerType (LLVMTypeOf (var_addr), 0)));
else
LLVMBuildStore (ctx->builder, LLVMConstNull (types [findex]), addr);
findex ++;
}
for (int i = 0; i < header->num_locals; ++i) {
LLVMValueRef var_addr = ctx->addresses [cfg->locals [i]->dreg];
index [0] = LLVMConstInt (LLVMInt32Type (), 0, FALSE);
index [1] = LLVMConstInt (LLVMInt32Type (), findex, FALSE);
addr = LLVMBuildGEP (builder, ctx->il_state, index, 2, "");
if (var_addr)
LLVMBuildStore (ctx->builder, LLVMBuildBitCast (builder, var_addr, types [findex], ""), addr);
else
LLVMBuildStore (ctx->builder, LLVMConstNull (types [findex]), addr);
findex ++;
}
}
/* Initialize the method if needed */
if (cfg->compile_aot) {
/* Emit a location for the initialization code */
ctx->init_bb = gen_bb (ctx, "INIT_BB");
ctx->inited_bb = gen_bb (ctx, "INITED_BB");
LLVMBuildBr (ctx->builder, ctx->init_bb);
builder = ctx->builder = create_builder (ctx);
LLVMPositionBuilderAtEnd (ctx->builder, ctx->inited_bb);
ctx->bblocks [cfg->bb_entry->block_num].end_bblock = ctx->inited_bb;
}
/* Compute nesting between clauses */
ctx->nested_in = (GSList**)mono_mempool_alloc0 (cfg->mempool, sizeof (GSList*) * cfg->header->num_clauses);
for (i = 0; i < cfg->header->num_clauses; ++i) {
for (j = 0; j < cfg->header->num_clauses; ++j) {
MonoExceptionClause *clause1 = &cfg->header->clauses [i];
MonoExceptionClause *clause2 = &cfg->header->clauses [j];
if (i != j && clause1->try_offset >= clause2->try_offset && clause1->handler_offset <= clause2->handler_offset)
ctx->nested_in [i] = g_slist_prepend_mempool (cfg->mempool, ctx->nested_in [i], GINT_TO_POINTER (j));
}
}
/*
	 * For finally clauses, create an indicator variable telling OP_ENDFINALLY whether
	 * it needs to continue normally, or return to the exception handling system.
*/
for (bb = cfg->bb_entry; bb; bb = bb->next_bb) {
char name [128];
if (!(bb->region != -1 && (bb->flags & BB_EXCEPTION_HANDLER)))
continue;
if (bb->in_scount == 0) {
LLVMValueRef val;
sprintf (name, "finally_ind_bb%d", bb->block_num);
val = LLVMBuildAlloca (builder, LLVMInt32Type (), name);
LLVMBuildStore (builder, LLVMConstInt (LLVMInt32Type (), 0, FALSE), val);
ctx->bblocks [bb->block_num].finally_ind = val;
} else {
/* Create a variable to hold the exception var */
if (!ctx->ex_var)
ctx->ex_var = LLVMBuildAlloca (builder, ObjRefType (), "exvar");
}
}
ctx->builder = old_builder;
}
static gboolean
needs_extra_arg (EmitContext *ctx, MonoMethod *method)
{
WrapperInfo *info = NULL;
/*
	 * When targeting wasm, the caller and callee signatures have to match exactly. This means
	 * that every method which can be called indirectly needs an extra arg, since the caller
* will call it through an ftnptr and will pass an extra arg.
*/
if (!ctx->cfg->llvm_only || !ctx->emit_dummy_arg)
return FALSE;
if (method->wrapper_type)
info = mono_marshal_get_wrapper_info (method);
switch (method->wrapper_type) {
case MONO_WRAPPER_OTHER:
if (info->subtype == WRAPPER_SUBTYPE_GSHAREDVT_IN_SIG || info->subtype == WRAPPER_SUBTYPE_GSHAREDVT_OUT_SIG)
/* Already have an explicit extra arg */
return FALSE;
break;
case MONO_WRAPPER_MANAGED_TO_NATIVE:
if (strstr (method->name, "icall_wrapper"))
			/* These are JIT icall wrappers which are only called directly from JITted code */
return FALSE;
/* Normal icalls can be virtual methods which need an extra arg */
break;
case MONO_WRAPPER_RUNTIME_INVOKE:
case MONO_WRAPPER_ALLOC:
case MONO_WRAPPER_CASTCLASS:
case MONO_WRAPPER_WRITE_BARRIER:
case MONO_WRAPPER_NATIVE_TO_MANAGED:
return FALSE;
case MONO_WRAPPER_STELEMREF:
if (info->subtype != WRAPPER_SUBTYPE_VIRTUAL_STELEMREF)
return FALSE;
break;
case MONO_WRAPPER_MANAGED_TO_MANAGED:
if (info->subtype == WRAPPER_SUBTYPE_STRING_CTOR)
return FALSE;
break;
default:
break;
}
if (method->string_ctor)
return FALSE;
/* These are called from gsharedvt code with an indirect call which doesn't pass an extra arg */
if (method->klass == mono_get_string_class () && (strstr (method->name, "memcpy") || strstr (method->name, "bzero")))
return FALSE;
return TRUE;
}
static inline gboolean
is_supported_callconv (EmitContext *ctx, MonoCallInst *call)
{
#if defined(TARGET_WIN32) && defined(TARGET_AMD64)
gboolean result = (call->signature->call_convention == MONO_CALL_DEFAULT) ||
(call->signature->call_convention == MONO_CALL_C) ||
(call->signature->call_convention == MONO_CALL_STDCALL);
#else
gboolean result = (call->signature->call_convention == MONO_CALL_DEFAULT) || ((call->signature->call_convention == MONO_CALL_C) && ctx->llvm_only);
#endif
return result;
}
static void
process_call (EmitContext *ctx, MonoBasicBlock *bb, LLVMBuilderRef *builder_ref, MonoInst *ins)
{
MonoCompile *cfg = ctx->cfg;
LLVMValueRef *values = ctx->values;
LLVMValueRef *addresses = ctx->addresses;
MonoCallInst *call = (MonoCallInst*)ins;
MonoMethodSignature *sig = call->signature;
LLVMValueRef callee = NULL, lcall;
LLVMValueRef *args;
LLVMCallInfo *cinfo;
GSList *l;
int i, len, nargs;
gboolean vretaddr;
LLVMTypeRef llvm_sig;
gpointer target;
gboolean is_virtual, calli;
LLVMBuilderRef builder = *builder_ref;
/* If both imt and rgctx arg are required, only pass the imt arg, the rgctx trampoline will pass the rgctx */
if (call->imt_arg_reg)
call->rgctx_arg_reg = 0;
if (!is_supported_callconv (ctx, call)) {
set_failure (ctx, "non-default callconv");
return;
}
cinfo = call->cinfo;
g_assert (cinfo);
if (call->rgctx_arg_reg)
cinfo->rgctx_arg = TRUE;
if (call->imt_arg_reg)
cinfo->imt_arg = TRUE;
if (!call->rgctx_arg_reg && call->method && needs_extra_arg (ctx, call->method))
cinfo->dummy_arg = TRUE;
vretaddr = (cinfo->ret.storage == LLVMArgVtypeRetAddr || cinfo->ret.storage == LLVMArgVtypeByRef || cinfo->ret.storage == LLVMArgGsharedvtFixed || cinfo->ret.storage == LLVMArgGsharedvtVariable || cinfo->ret.storage == LLVMArgGsharedvtFixedVtype);
llvm_sig = sig_to_llvm_sig_full (ctx, sig, cinfo);
if (!ctx_ok (ctx))
return;
int const opcode = ins->opcode;
is_virtual = opcode == OP_VOIDCALL_MEMBASE || opcode == OP_CALL_MEMBASE
|| opcode == OP_VCALL_MEMBASE || opcode == OP_LCALL_MEMBASE
|| opcode == OP_FCALL_MEMBASE || opcode == OP_RCALL_MEMBASE
|| opcode == OP_TAILCALL_MEMBASE;
calli = !call->fptr_is_patch && (opcode == OP_VOIDCALL_REG || opcode == OP_CALL_REG
|| opcode == OP_VCALL_REG || opcode == OP_LCALL_REG || opcode == OP_FCALL_REG
|| opcode == OP_RCALL_REG || opcode == OP_TAILCALL_REG);
/* FIXME: Avoid creating duplicate methods */
if (ins->flags & MONO_INST_HAS_METHOD) {
if (is_virtual) {
callee = NULL;
} else {
if (cfg->compile_aot) {
callee = get_callee (ctx, llvm_sig, MONO_PATCH_INFO_METHOD, call->method);
if (!callee) {
set_failure (ctx, "can't encode patch");
return;
}
} else if (cfg->method == call->method) {
callee = ctx->lmethod;
} else {
ERROR_DECL (error);
static int tramp_index;
char *name;
name = g_strdup_printf ("[tramp_%d] %s", tramp_index, mono_method_full_name (call->method, TRUE));
tramp_index ++;
/*
* Use our trampoline infrastructure for lazy compilation instead of llvm's.
* Make all calls through a global. The address of the global will be saved in
* MonoJitDomainInfo.llvm_jit_callees and updated when the method it refers to is
* compiled.
*/
LLVMValueRef tramp_var = (LLVMValueRef)g_hash_table_lookup (ctx->jit_callees, call->method);
if (!tramp_var) {
target =
mono_create_jit_trampoline (call->method, error);
if (!is_ok (error)) {
set_failure (ctx, mono_error_get_message (error));
mono_error_cleanup (error);
return;
}
tramp_var = LLVMAddGlobal (ctx->lmodule, LLVMPointerType (llvm_sig, 0), name);
LLVMSetInitializer (tramp_var, LLVMConstIntToPtr (LLVMConstInt (LLVMInt64Type (), (guint64)(size_t)target, FALSE), LLVMPointerType (llvm_sig, 0)));
LLVMSetLinkage (tramp_var, LLVMExternalLinkage);
g_hash_table_insert (ctx->jit_callees, call->method, tramp_var);
}
callee = LLVMBuildLoad (builder, tramp_var, "");
}
}
if (!cfg->llvm_only && call->method && strstr (m_class_get_name (call->method->klass), "AsyncVoidMethodBuilder")) {
/* LLVM miscompiles async methods */
set_failure (ctx, "#13734");
return;
}
} else if (calli) {
} else {
const MonoJitICallId jit_icall_id = call->jit_icall_id;
if (jit_icall_id) {
if (cfg->compile_aot) {
callee = get_callee (ctx, llvm_sig, MONO_PATCH_INFO_JIT_ICALL_ID, GUINT_TO_POINTER (jit_icall_id));
if (!callee) {
set_failure (ctx, "can't encode patch");
return;
}
} else {
callee = get_jit_callee (ctx, "", llvm_sig, MONO_PATCH_INFO_JIT_ICALL_ID, GUINT_TO_POINTER (jit_icall_id));
}
} else {
if (cfg->compile_aot) {
callee = NULL;
if (cfg->abs_patches) {
MonoJumpInfo *abs_ji = (MonoJumpInfo*)g_hash_table_lookup (cfg->abs_patches, call->fptr);
if (abs_ji) {
callee = get_callee (ctx, llvm_sig, abs_ji->type, abs_ji->data.target);
if (!callee) {
set_failure (ctx, "can't encode patch");
return;
}
}
}
if (!callee) {
set_failure (ctx, "aot");
return;
}
} else {
if (cfg->abs_patches) {
MonoJumpInfo *abs_ji = (MonoJumpInfo*)g_hash_table_lookup (cfg->abs_patches, call->fptr);
if (abs_ji) {
ERROR_DECL (error);
target = mono_resolve_patch_target (cfg->method, NULL, abs_ji, FALSE, error);
mono_error_assert_ok (error);
callee = get_jit_callee (ctx, "", llvm_sig, abs_ji->type, abs_ji->data.target);
} else {
g_assert_not_reached ();
}
} else {
g_assert_not_reached ();
}
}
}
}
if (is_virtual) {
int size = TARGET_SIZEOF_VOID_P;
LLVMValueRef index;
g_assert (ins->inst_offset % size == 0);
index = LLVMConstInt (LLVMInt32Type (), ins->inst_offset / size, FALSE);
callee = convert (ctx, LLVMBuildLoad (builder, LLVMBuildGEP (builder, convert (ctx, values [ins->inst_basereg], LLVMPointerType (LLVMPointerType (IntPtrType (), 0), 0)), &index, 1, ""), ""), LLVMPointerType (llvm_sig, 0));
} else if (calli) {
callee = convert (ctx, values [ins->sreg1], LLVMPointerType (llvm_sig, 0));
} else {
if (ins->flags & MONO_INST_HAS_METHOD) {
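			/* Direct call: the callee was already resolved above */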
}
}
/*
* Collect and convert arguments
*/
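	/* Conservative upper bound: a single IL argument can expand into multiple IR-level arguments (e.g. a vtype split into scalars) */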
nargs = (sig->param_count * 16) + sig->hasthis + vretaddr + call->rgctx_reg + call->imt_arg_reg + call->cinfo->dummy_arg + 1;
len = sizeof (LLVMValueRef) * nargs;
args = g_newa (LLVMValueRef, nargs);
memset (args, 0, len);
l = call->out_ireg_args;
if (call->rgctx_arg_reg) {
g_assert (values [call->rgctx_arg_reg]);
g_assert (cinfo->rgctx_arg_pindex < nargs);
/*
* On ARM, the imt/rgctx argument is passed in a caller save register, but some of our trampolines etc. clobber it, leading to
		 * problems if LLVM moves the arg assignment earlier. To work around this, save the argument into a stack slot and load
* it using a volatile load.
*/
#ifdef TARGET_ARM
if (!ctx->imt_rgctx_loc)
ctx->imt_rgctx_loc = build_alloca_llvm_type (ctx, ctx->module->ptr_type, TARGET_SIZEOF_VOID_P);
LLVMBuildStore (builder, convert (ctx, ctx->values [call->rgctx_arg_reg], ctx->module->ptr_type), ctx->imt_rgctx_loc);
args [cinfo->rgctx_arg_pindex] = mono_llvm_build_load (builder, ctx->imt_rgctx_loc, "", TRUE);
#else
args [cinfo->rgctx_arg_pindex] = convert (ctx, values [call->rgctx_arg_reg], ctx->module->ptr_type);
#endif
}
if (call->imt_arg_reg) {
g_assert (!ctx->llvm_only);
g_assert (values [call->imt_arg_reg]);
g_assert (cinfo->imt_arg_pindex < nargs);
#ifdef TARGET_ARM
if (!ctx->imt_rgctx_loc)
ctx->imt_rgctx_loc = build_alloca_llvm_type (ctx, ctx->module->ptr_type, TARGET_SIZEOF_VOID_P);
LLVMBuildStore (builder, convert (ctx, ctx->values [call->imt_arg_reg], ctx->module->ptr_type), ctx->imt_rgctx_loc);
args [cinfo->imt_arg_pindex] = mono_llvm_build_load (builder, ctx->imt_rgctx_loc, "", TRUE);
#else
args [cinfo->imt_arg_pindex] = convert (ctx, values [call->imt_arg_reg], ctx->module->ptr_type);
#endif
}
switch (cinfo->ret.storage) {
case LLVMArgGsharedvtVariable: {
MonoInst *var = get_vreg_to_inst (cfg, call->inst.dreg);
if (var && var->opcode == OP_GSHAREDVT_LOCAL) {
args [cinfo->vret_arg_pindex] = convert (ctx, emit_gsharedvt_ldaddr (ctx, var->dreg), IntPtrType ());
} else {
g_assert (addresses [call->inst.dreg]);
args [cinfo->vret_arg_pindex] = convert (ctx, addresses [call->inst.dreg], IntPtrType ());
}
break;
}
default:
if (vretaddr) {
if (!addresses [call->inst.dreg])
addresses [call->inst.dreg] = build_alloca (ctx, sig->ret);
g_assert (cinfo->vret_arg_pindex < nargs);
if (cinfo->ret.storage == LLVMArgVtypeByRef)
args [cinfo->vret_arg_pindex] = addresses [call->inst.dreg];
else
args [cinfo->vret_arg_pindex] = LLVMBuildPtrToInt (builder, addresses [call->inst.dreg], IntPtrType (), "");
}
break;
}
/*
	 * Sometimes the same method is called with two different signatures (e.g. with and without 'this'), so
* use the real callee for argument type conversion.
*/
LLVMTypeRef callee_type = LLVMGetElementType (LLVMTypeOf (callee));
LLVMTypeRef *param_types = (LLVMTypeRef*)g_alloca (sizeof (LLVMTypeRef) * LLVMCountParamTypes (callee_type));
LLVMGetParamTypes (callee_type, param_types);
for (i = 0; i < sig->param_count + sig->hasthis; ++i) {
guint32 regpair;
int reg, pindex;
LLVMArgInfo *ainfo = &call->cinfo->args [i];
pindex = ainfo->pindex;
regpair = (guint32)(gssize)(l->data);
reg = regpair & 0xffffff;
args [pindex] = values [reg];
switch (ainfo->storage) {
case LLVMArgVtypeInReg:
case LLVMArgAsFpArgs: {
guint32 nargs;
int j;
for (j = 0; j < ainfo->ndummy_fpargs; ++j)
args [pindex + j] = LLVMConstNull (LLVMDoubleType ());
pindex += ainfo->ndummy_fpargs;
g_assert (addresses [reg]);
emit_vtype_to_args (ctx, builder, ainfo->type, addresses [reg], ainfo, args + pindex, &nargs);
pindex += nargs;
// FIXME: alignment
// FIXME: Get rid of the VMOVE
break;
}
case LLVMArgVtypeByVal:
g_assert (addresses [reg]);
args [pindex] = addresses [reg];
break;
case LLVMArgVtypeAddr :
case LLVMArgVtypeByRef: {
g_assert (addresses [reg]);
args [pindex] = convert (ctx, addresses [reg], LLVMPointerType (type_to_llvm_arg_type (ctx, ainfo->type), 0));
break;
}
case LLVMArgAsIArgs:
g_assert (addresses [reg]);
if (ainfo->esize == 8)
args [pindex] = LLVMBuildLoad (ctx->builder, convert (ctx, addresses [reg], LLVMPointerType (LLVMArrayType (LLVMInt64Type (), ainfo->nslots), 0)), "");
else
args [pindex] = LLVMBuildLoad (ctx->builder, convert (ctx, addresses [reg], LLVMPointerType (LLVMArrayType (IntPtrType (), ainfo->nslots), 0)), "");
break;
case LLVMArgVtypeAsScalar:
g_assert_not_reached ();
break;
case LLVMArgWasmVtypeAsScalar:
g_assert (addresses [reg]);
args [pindex] = LLVMBuildLoad (ctx->builder, convert (ctx, addresses [reg], LLVMPointerType (LLVMIntType (ainfo->esize * 8), 0)), "");
break;
case LLVMArgGsharedvtFixed:
case LLVMArgGsharedvtFixedVtype:
g_assert (addresses [reg]);
args [pindex] = convert (ctx, addresses [reg], LLVMPointerType (type_to_llvm_arg_type (ctx, ainfo->type), 0));
break;
case LLVMArgGsharedvtVariable:
g_assert (addresses [reg]);
args [pindex] = convert (ctx, addresses [reg], LLVMPointerType (IntPtrType (), 0));
break;
default:
g_assert (args [pindex]);
if (i == 0 && sig->hasthis)
args [pindex] = convert (ctx, args [pindex], param_types [pindex]);
else
args [pindex] = convert (ctx, args [pindex], type_to_llvm_arg_type (ctx, ainfo->type));
break;
}
g_assert (pindex <= nargs);
l = l->next;
}
if (call->cinfo->dummy_arg) {
g_assert (call->cinfo->dummy_arg_pindex < nargs);
args [call->cinfo->dummy_arg_pindex] = LLVMConstNull (ctx->module->ptr_type);
}
// FIXME: Align call sites
/*
* Emit the call
*/
lcall = emit_call (ctx, bb, &builder, callee, args, LLVMCountParamTypes (llvm_sig));
mono_llvm_nonnull_state_update (ctx, lcall, call->method, args, LLVMCountParamTypes (llvm_sig));
// If we just allocated an object, it's not null.
if (call->method && call->method->wrapper_type == MONO_WRAPPER_ALLOC) {
mono_llvm_set_call_nonnull_ret (lcall);
}
if (ins->opcode != OP_TAILCALL && ins->opcode != OP_TAILCALL_MEMBASE && LLVMGetInstructionOpcode (lcall) == LLVMCall)
mono_llvm_set_call_notailcall (lcall);
// Add original method name we are currently emitting as a custom string metadata (the only way to leave comments in LLVM IR)
if (mono_debug_enabled () && call && call->method)
mono_llvm_add_string_metadata (lcall, "managed_name", mono_method_full_name (call->method, TRUE));
// As per the LLVM docs, a function has a noalias return value if and only if
// it is an allocation function. This is an allocation function.
if (call->method && call->method->wrapper_type == MONO_WRAPPER_ALLOC) {
mono_llvm_set_call_noalias_ret (lcall);
// All objects are expected to be 8-byte aligned (SGEN_ALLOC_ALIGN)
mono_llvm_set_alignment_ret (lcall, 8);
}
/*
* Modify cconv and parameter attributes to pass rgctx/imt correctly.
*/
#if defined(MONO_ARCH_IMT_REG) && defined(MONO_ARCH_RGCTX_REG)
g_assert (MONO_ARCH_IMT_REG == MONO_ARCH_RGCTX_REG);
#endif
/* The two can't be used together, so use only one LLVM calling conv to pass them */
g_assert (!(call->rgctx_arg_reg && call->imt_arg_reg));
if (!sig->pinvoke && !cfg->llvm_only)
LLVMSetInstructionCallConv (lcall, LLVMMono1CallConv);
if (cinfo->ret.storage == LLVMArgVtypeByRef)
mono_llvm_add_instr_attr (lcall, 1 + cinfo->vret_arg_pindex, LLVM_ATTR_STRUCT_RET);
if (!ctx->llvm_only && call->rgctx_arg_reg)
mono_llvm_add_instr_attr (lcall, 1 + cinfo->rgctx_arg_pindex, LLVM_ATTR_IN_REG);
if (call->imt_arg_reg)
mono_llvm_add_instr_attr (lcall, 1 + cinfo->imt_arg_pindex, LLVM_ATTR_IN_REG);
/* Add byval attributes if needed */
for (i = 0; i < sig->param_count; ++i) {
LLVMArgInfo *ainfo = &call->cinfo->args [i + sig->hasthis];
if (ainfo && ainfo->storage == LLVMArgVtypeByVal)
mono_llvm_add_instr_attr (lcall, 1 + ainfo->pindex, LLVM_ATTR_BY_VAL);
#ifdef TARGET_WASM
if (ainfo && ainfo->storage == LLVMArgVtypeByRef)
			/* This causes llvm to make a copy of the value, which is what we need */
mono_llvm_add_instr_byval_attr (lcall, 1 + ainfo->pindex, LLVMGetElementType (param_types [ainfo->pindex]));
#endif
}
gboolean is_simd = MONO_CLASS_IS_SIMD (ctx->cfg, mono_class_from_mono_type_internal (sig->ret));
gboolean should_promote_to_value = FALSE;
const char *load_name = NULL;
/*
* Convert the result. Non-SIMD value types are manipulated via an
* indirection. SIMD value types are represented directly as LLVM vector
* values, and must have a corresponding LLVM value definition in
* `values`.
*/
switch (cinfo->ret.storage) {
case LLVMArgAsIArgs:
case LLVMArgFpStruct:
if (!addresses [call->inst.dreg])
addresses [call->inst.dreg] = build_alloca (ctx, sig->ret);
LLVMBuildStore (builder, lcall, convert_full (ctx, addresses [call->inst.dreg], LLVMPointerType (LLVMTypeOf (lcall), 0), FALSE));
break;
case LLVMArgVtypeByVal:
/*
* Only used by amd64 and x86. Only ever used when passing
* arguments; never used for return values.
*/
g_assert_not_reached ();
break;
case LLVMArgVtypeInReg: {
if (LLVMTypeOf (lcall) == LLVMVoidType ())
/* Empty struct */
break;
if (!addresses [ins->dreg])
addresses [ins->dreg] = build_alloca (ctx, sig->ret);
LLVMValueRef regs [2] = { 0 };
regs [0] = LLVMBuildExtractValue (builder, lcall, 0, "");
if (cinfo->ret.pair_storage [1] != LLVMArgNone)
regs [1] = LLVMBuildExtractValue (builder, lcall, 1, "");
emit_args_to_vtype (ctx, builder, sig->ret, addresses [ins->dreg], &cinfo->ret, regs);
load_name = "process_call_vtype_in_reg";
should_promote_to_value = is_simd;
break;
}
case LLVMArgVtypeAsScalar:
if (!addresses [call->inst.dreg])
addresses [call->inst.dreg] = build_alloca (ctx, sig->ret);
LLVMBuildStore (builder, lcall, convert_full (ctx, addresses [call->inst.dreg], LLVMPointerType (LLVMTypeOf (lcall), 0), FALSE));
load_name = "process_call_vtype_as_scalar";
should_promote_to_value = is_simd;
break;
case LLVMArgVtypeRetAddr:
case LLVMArgVtypeByRef:
load_name = "process_call_vtype_ret_addr";
should_promote_to_value = is_simd;
break;
case LLVMArgGsharedvtVariable:
break;
case LLVMArgGsharedvtFixed:
case LLVMArgGsharedvtFixedVtype:
values [ins->dreg] = LLVMBuildLoad (builder, convert_full (ctx, addresses [call->inst.dreg], LLVMPointerType (type_to_llvm_type (ctx, sig->ret), 0), FALSE), "");
break;
case LLVMArgWasmVtypeAsScalar:
if (!addresses [call->inst.dreg])
addresses [call->inst.dreg] = build_alloca (ctx, sig->ret);
LLVMBuildStore (builder, lcall, convert_full (ctx, addresses [call->inst.dreg], LLVMPointerType (LLVMTypeOf (lcall), 0), FALSE));
break;
default:
if (sig->ret->type != MONO_TYPE_VOID)
/* If the method returns an unsigned value, need to zext it */
values [ins->dreg] = convert_full (ctx, lcall, llvm_type_to_stack_type (cfg, type_to_llvm_type (ctx, sig->ret)), type_is_unsigned (ctx, sig->ret));
break;
}
if (should_promote_to_value) {
g_assert (addresses [call->inst.dreg]);
LLVMTypeRef addr_type = LLVMPointerType (type_to_llvm_type (ctx, sig->ret), 0);
LLVMValueRef addr = convert_full (ctx, addresses [call->inst.dreg], addr_type, FALSE);
values [ins->dreg] = LLVMBuildLoad (builder, addr, load_name);
}
*builder_ref = ctx->builder;
}
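
/*
 * emit_llvmonly_throw:
 *
 *   Emit a call to the mini_llvmonly_throw_exception/mini_llvmonly_rethrow_exception
 * icall followed by an unreachable terminator, then start a fresh builder for any
 * code after the throw.
 */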
static void
emit_llvmonly_throw (EmitContext *ctx, MonoBasicBlock *bb, gboolean rethrow, LLVMValueRef exc)
{
MonoJitICallId icall_id = rethrow ? MONO_JIT_ICALL_mini_llvmonly_rethrow_exception : MONO_JIT_ICALL_mini_llvmonly_throw_exception;
LLVMValueRef callee = rethrow ? ctx->module->rethrow : ctx->module->throw_icall;
LLVMTypeRef exc_type = type_to_llvm_type (ctx, m_class_get_byval_arg (mono_get_exception_class ()));
if (!callee) {
LLVMTypeRef fun_sig = LLVMFunctionType1 (LLVMVoidType (), exc_type, FALSE);
g_assert (ctx->cfg->compile_aot);
callee = get_callee (ctx, fun_sig, MONO_PATCH_INFO_JIT_ICALL_ADDR, GUINT_TO_POINTER (icall_id));
}
LLVMValueRef args [2];
args [0] = convert (ctx, exc, exc_type);
emit_call (ctx, bb, &ctx->builder, callee, args, 1);
LLVMBuildUnreachable (ctx->builder);
ctx->builder = create_builder (ctx);
}
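
/*
 * emit_throw:
 *
 *   Emit a (re)throw using the mono_arch_throw_exception/mono_arch_rethrow_exception
 * icalls, caching the callee in ctx->module. On x86 in JIT mode a different
 * trampoline is used since LLVM doesn't push the exception argument.
 */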
static void
emit_throw (EmitContext *ctx, MonoBasicBlock *bb, gboolean rethrow, LLVMValueRef exc)
{
MonoMethodSignature *throw_sig;
LLVMValueRef * const pcallee = rethrow ? &ctx->module->rethrow : &ctx->module->throw_icall;
LLVMValueRef callee = *pcallee;
char const * const icall_name = rethrow ? "mono_arch_rethrow_exception" : "mono_arch_throw_exception";
#ifndef TARGET_X86
const
#endif
MonoJitICallId icall_id = rethrow ? MONO_JIT_ICALL_mono_arch_rethrow_exception : MONO_JIT_ICALL_mono_arch_throw_exception;
if (!callee) {
throw_sig = mono_metadata_signature_alloc (mono_get_corlib (), 1);
throw_sig->ret = m_class_get_byval_arg (mono_get_void_class ());
throw_sig->params [0] = m_class_get_byval_arg (mono_get_object_class ());
if (ctx->cfg->compile_aot) {
callee = get_callee (ctx, sig_to_llvm_sig (ctx, throw_sig), MONO_PATCH_INFO_JIT_ICALL_ID, GUINT_TO_POINTER (icall_id));
} else {
#ifdef TARGET_X86
/*
* LLVM doesn't push the exception argument, so we need a different
* trampoline.
*/
icall_id = rethrow ? MONO_JIT_ICALL_mono_llvm_rethrow_exception_trampoline : MONO_JIT_ICALL_mono_llvm_throw_exception_trampoline;
#endif
callee = get_jit_callee (ctx, icall_name, sig_to_llvm_sig (ctx, throw_sig), MONO_PATCH_INFO_JIT_ICALL_ID, GUINT_TO_POINTER (icall_id));
}
mono_memory_barrier ();
}
LLVMValueRef arg;
arg = convert (ctx, exc, type_to_llvm_type (ctx, m_class_get_byval_arg (mono_get_object_class ())));
emit_call (ctx, bb, &ctx->builder, callee, &arg, 1);
}
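
/*
 * emit_resume_eh:
 *
 *   Emit a call to mini_llvmonly_resume_exception, which does not return, followed
 * by an unreachable terminator.
 */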
static void
emit_resume_eh (EmitContext *ctx, MonoBasicBlock *bb)
{
const MonoJitICallId icall_id = MONO_JIT_ICALL_mini_llvmonly_resume_exception;
LLVMValueRef callee;
LLVMTypeRef fun_sig = LLVMFunctionType0 (LLVMVoidType (), FALSE);
g_assert (ctx->cfg->compile_aot);
callee = get_callee (ctx, fun_sig, MONO_PATCH_INFO_JIT_ICALL_ID, GUINT_TO_POINTER (icall_id));
emit_call (ctx, bb, &ctx->builder, callee, NULL, 0);
LLVMBuildUnreachable (ctx->builder);
ctx->builder = create_builder (ctx);
}
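
/* Emit a call to the icall which clears the pending exception. */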
static LLVMValueRef
mono_llvm_emit_clear_exception_call (EmitContext *ctx, LLVMBuilderRef builder)
{
const MonoJitICallId icall_id = MONO_JIT_ICALL_mini_llvmonly_clear_exception;
LLVMTypeRef call_sig = LLVMFunctionType (LLVMVoidType (), NULL, 0, FALSE);
LLVMValueRef callee = NULL;
if (!callee) {
callee = get_callee (ctx, call_sig, MONO_PATCH_INFO_JIT_ICALL_ID, GUINT_TO_POINTER (icall_id));
}
g_assert (builder && callee);
return LLVMBuildCall (builder, callee, NULL, 0, "");
}
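
/* Emit a call to the icall which loads the pending exception object. */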
static LLVMValueRef
mono_llvm_emit_load_exception_call (EmitContext *ctx, LLVMBuilderRef builder)
{
const MonoJitICallId icall_id = MONO_JIT_ICALL_mini_llvmonly_load_exception;
LLVMTypeRef call_sig = LLVMFunctionType (ObjRefType (), NULL, 0, FALSE);
LLVMValueRef callee = NULL;
g_assert (ctx->cfg->compile_aot);
if (!callee) {
callee = get_callee (ctx, call_sig, MONO_PATCH_INFO_JIT_ICALL_ID, GUINT_TO_POINTER (icall_id));
}
g_assert (builder && callee);
return LLVMBuildCall (builder, callee, NULL, 0, "load_exception");
}
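
/*
 * mono_llvm_emit_match_exception_call:
 *
 *   Emit a call to mini_llvmonly_match_exception () to determine which clause, if
 * any, of the protected region [region_start, region_end] handles the pending
 * exception. The call returns the index of the matching clause, or -1 if none match.
 */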
static LLVMValueRef
mono_llvm_emit_match_exception_call (EmitContext *ctx, LLVMBuilderRef builder, gint32 region_start, gint32 region_end)
{
const char *icall_name = "mini_llvmonly_match_exception";
const MonoJitICallId icall_id = MONO_JIT_ICALL_mini_llvmonly_match_exception;
ctx->builder = builder;
LLVMValueRef args[5];
const int num_args = G_N_ELEMENTS (args);
args [0] = convert (ctx, get_aotconst (ctx, MONO_PATCH_INFO_AOT_JIT_INFO, GINT_TO_POINTER (ctx->cfg->method_index), LLVMPointerType (IntPtrType (), 0)), IntPtrType ());
args [1] = LLVMConstInt (LLVMInt32Type (), region_start, 0);
args [2] = LLVMConstInt (LLVMInt32Type (), region_end, 0);
if (ctx->cfg->rgctx_var) {
if (ctx->cfg->llvm_only) {
args [3] = convert (ctx, ctx->rgctx_arg, IntPtrType ());
} else {
LLVMValueRef rgctx_alloc = ctx->addresses [ctx->cfg->rgctx_var->dreg];
g_assert (rgctx_alloc);
args [3] = LLVMBuildLoad (builder, convert (ctx, rgctx_alloc, LLVMPointerType (IntPtrType (), 0)), "");
}
} else {
args [3] = LLVMConstInt (IntPtrType (), 0, 0);
}
if (ctx->this_arg)
args [4] = convert (ctx, ctx->this_arg, IntPtrType ());
else
args [4] = LLVMConstInt (IntPtrType (), 0, 0);
LLVMTypeRef match_sig = LLVMFunctionType5 (LLVMInt32Type (), IntPtrType (), LLVMInt32Type (), LLVMInt32Type (), IntPtrType (), IntPtrType (), FALSE);
LLVMValueRef callee;
g_assert (ctx->cfg->compile_aot);
ctx->builder = builder;
// get_callee expects ctx->builder to be the emitting builder
callee = get_callee (ctx, match_sig, MONO_PATCH_INFO_JIT_ICALL_ID, GUINT_TO_POINTER (icall_id));
g_assert (builder && callee);
g_assert (ctx->ex_var);
return LLVMBuildCall (builder, callee, args, num_args, icall_name);
}
// FIXME: This won't work because the code-finding makes this
// not a constant.
/*#define MONO_PERSONALITY_DEBUG*/
#ifdef MONO_PERSONALITY_DEBUG
static const gboolean use_mono_personality_debug = TRUE;
static const char *default_personality_name = "mono_debug_personality";
#else
static const gboolean use_mono_personality_debug = FALSE;
static const char *default_personality_name = "__gxx_personality_v0";
#endif
static LLVMTypeRef
default_cpp_lpad_exc_signature (void)
{
static LLVMTypeRef sig;
if (!sig) {
LLVMTypeRef signature [2];
signature [0] = LLVMPointerType (LLVMInt8Type (), 0);
signature [1] = LLVMInt32Type ();
sig = LLVMStructType (signature, 2, FALSE);
}
return sig;
}
static LLVMValueRef
get_mono_personality (EmitContext *ctx)
{
LLVMValueRef personality = NULL;
LLVMTypeRef personality_type = LLVMFunctionType (LLVMInt32Type (), NULL, 0, TRUE);
g_assert (ctx->cfg->compile_aot);
if (!use_mono_personality_debug) {
personality = LLVMGetNamedFunction (ctx->lmodule, default_personality_name);
} else {
personality = get_callee (ctx, personality_type, MONO_PATCH_INFO_JIT_ICALL_ID, GUINT_TO_POINTER (MONO_JIT_ICALL_mono_debug_personality));
}
g_assert (personality);
return personality;
}
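
/*
 * emit_landing_pad:
 *
 *   Emit a landing pad for the group of GROUP_SIZE exception clauses starting at
 * GROUP_INDEX. For groups consisting only of finally/fault clauses, control
 * branches directly to the handler; otherwise the pending exception is matched
 * against the clauses of the group and dispatched through a switch, resuming
 * exception handling if no clause matches.
 */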
static LLVMBasicBlockRef
emit_landing_pad (EmitContext *ctx, int group_index, int group_size)
{
MonoCompile *cfg = ctx->cfg;
LLVMBuilderRef old_builder = ctx->builder;
MonoExceptionClause *group_start = cfg->header->clauses + group_index;
LLVMBuilderRef lpadBuilder = create_builder (ctx);
ctx->builder = lpadBuilder;
MonoBasicBlock *handler_bb = cfg->cil_offset_to_bb [CLAUSE_START (group_start)];
g_assert (handler_bb);
// <resultval> = landingpad <somety> personality <type> <pers_fn> <clause>+
LLVMValueRef personality = get_mono_personality (ctx);
g_assert (personality);
char *bb_name = g_strdup_printf ("LPAD%d_BB", group_index);
LLVMBasicBlockRef lpad_bb = gen_bb (ctx, bb_name);
g_free (bb_name);
LLVMPositionBuilderAtEnd (lpadBuilder, lpad_bb);
LLVMValueRef landing_pad = LLVMBuildLandingPad (lpadBuilder, default_cpp_lpad_exc_signature (), personality, 0, "");
g_assert (landing_pad);
LLVMValueRef cast = LLVMBuildBitCast (lpadBuilder, ctx->module->sentinel_exception, LLVMPointerType (LLVMInt8Type (), 0), "int8TypeInfo");
LLVMAddClause (landing_pad, cast);
if (ctx->cfg->deopt) {
/*
* Call mini_llvmonly_resume_exception_il_state (lmf, il_state)
*
* The call will execute the catch clause and the rest of the method and store the return
* value into ctx->il_state_ret.
*/
if (!ctx->has_catch) {
/* Unused */
LLVMBuildUnreachable (lpadBuilder);
return lpad_bb;
}
const MonoJitICallId icall_id = MONO_JIT_ICALL_mini_llvmonly_resume_exception_il_state;
LLVMValueRef callee;
LLVMValueRef args [2];
LLVMTypeRef fun_sig = LLVMFunctionType2 (LLVMVoidType (), IntPtrType (), IntPtrType (), FALSE);
callee = get_callee (ctx, fun_sig, MONO_PATCH_INFO_JIT_ICALL_ID, GUINT_TO_POINTER (icall_id));
g_assert (ctx->cfg->lmf_var);
g_assert (ctx->addresses [ctx->cfg->lmf_var->dreg]);
args [0] = LLVMBuildPtrToInt (ctx->builder, ctx->addresses [ctx->cfg->lmf_var->dreg], IntPtrType (), "");
args [1] = LLVMBuildPtrToInt (ctx->builder, ctx->il_state, IntPtrType (), "");
emit_call (ctx, NULL, &ctx->builder, callee, args, 2);
/* Return the value set in ctx->il_state_ret */
LLVMTypeRef ret_type = LLVMGetReturnType (LLVMGetElementType (LLVMTypeOf (ctx->lmethod)));
LLVMBuilderRef builder = ctx->builder;
LLVMValueRef addr, retval, gep, indexes [2];
switch (ctx->linfo->ret.storage) {
case LLVMArgNone:
LLVMBuildRetVoid (builder);
break;
case LLVMArgNormal:
case LLVMArgWasmVtypeAsScalar:
case LLVMArgVtypeInReg: {
if (ctx->sig->ret->type == MONO_TYPE_VOID) {
LLVMBuildRetVoid (builder);
break;
}
addr = ctx->il_state_ret;
g_assert (addr);
addr = convert (ctx, ctx->il_state_ret, LLVMPointerType (ret_type, 0));
indexes [0] = LLVMConstInt (LLVMInt32Type (), 0, FALSE);
indexes [1] = LLVMConstInt (LLVMInt32Type (), 0, FALSE);
gep = LLVMBuildGEP (builder, addr, indexes, 1, "");
LLVMBuildRet (builder, LLVMBuildLoad (builder, gep, ""));
break;
}
case LLVMArgVtypeRetAddr: {
LLVMValueRef ret_addr;
g_assert (cfg->vret_addr);
ret_addr = ctx->values [cfg->vret_addr->dreg];
addr = ctx->il_state_ret;
g_assert (addr);
/* The ret value is in il_state_ret, copy it to the memory pointed to by the vret arg */
ret_type = type_to_llvm_type (ctx, ctx->sig->ret);
indexes [0] = LLVMConstInt (LLVMInt32Type (), 0, FALSE);
indexes [1] = LLVMConstInt (LLVMInt32Type (), 0, FALSE);
gep = LLVMBuildGEP (builder, addr, indexes, 1, "");
retval = convert (ctx, LLVMBuildLoad (builder, gep, ""), ret_type);
LLVMBuildStore (builder, retval, convert (ctx, ret_addr, LLVMPointerType (ret_type, 0)));
LLVMBuildRetVoid (builder);
break;
}
default:
g_assert_not_reached ();
break;
}
return lpad_bb;
}
LLVMBasicBlockRef resume_bb = gen_bb (ctx, "RESUME_BB");
LLVMBuilderRef resume_builder = create_builder (ctx);
ctx->builder = resume_builder;
LLVMPositionBuilderAtEnd (resume_builder, resume_bb);
emit_resume_eh (ctx, handler_bb);
// Build match
ctx->builder = lpadBuilder;
LLVMPositionBuilderAtEnd (lpadBuilder, lpad_bb);
gboolean finally_only = TRUE;
MonoExceptionClause *group_cursor = group_start;
for (int i = 0; i < group_size; i ++) {
if (!(group_cursor->flags & MONO_EXCEPTION_CLAUSE_FINALLY || group_cursor->flags & MONO_EXCEPTION_CLAUSE_FAULT))
finally_only = FALSE;
group_cursor++;
}
// FIXME:
// Handle landing pad inlining
if (!finally_only) {
// So at each level of the exception stack we will match the exception again.
// During that match, we need to compare against the handler types for the current
// protected region. We send the try start and end so that we can only check against
// handlers for this lexical protected region.
LLVMValueRef match = mono_llvm_emit_match_exception_call (ctx, lpadBuilder, group_start->try_offset, group_start->try_offset + group_start->try_len);
		// If the match call returns -1, resume the exception
LLVMValueRef switch_ins = LLVMBuildSwitch (lpadBuilder, match, resume_bb, group_size);
// else move to that target bb
for (int i = 0; i < group_size; i++) {
MonoExceptionClause *clause = group_start + i;
int clause_index = clause - cfg->header->clauses;
MonoBasicBlock *handler_bb = (MonoBasicBlock*)g_hash_table_lookup (ctx->clause_to_handler, GINT_TO_POINTER (clause_index));
g_assert (handler_bb);
g_assert (ctx->bblocks [handler_bb->block_num].call_handler_target_bb);
LLVMAddCase (switch_ins, LLVMConstInt (LLVMInt32Type (), clause_index, FALSE), ctx->bblocks [handler_bb->block_num].call_handler_target_bb);
}
} else {
int clause_index = group_start - cfg->header->clauses;
MonoBasicBlock *finally_bb = (MonoBasicBlock*)g_hash_table_lookup (ctx->clause_to_handler, GINT_TO_POINTER (clause_index));
g_assert (finally_bb);
LLVMBuildBr (ctx->builder, ctx->bblocks [finally_bb->block_num].call_handler_target_bb);
}
ctx->builder = old_builder;
return lpad_bb;
}
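
/* Helpers for constructing constant vectors, typically used as shuffle masks. */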
static LLVMValueRef
create_const_vector (LLVMTypeRef t, const int *vals, int count)
{
g_assert (count <= MAX_VECTOR_ELEMS);
LLVMValueRef llvm_vals [MAX_VECTOR_ELEMS];
for (int i = 0; i < count; i++)
llvm_vals [i] = LLVMConstInt (t, vals [i], FALSE);
return LLVMConstVector (llvm_vals, count);
}
static LLVMValueRef
create_const_vector_i32 (const int *mask, int count)
{
return create_const_vector (LLVMInt32Type (), mask, count);
}
static LLVMValueRef
create_const_vector_4_i32 (int v0, int v1, int v2, int v3)
{
LLVMValueRef mask [4];
mask [0] = LLVMConstInt (LLVMInt32Type (), v0, FALSE);
mask [1] = LLVMConstInt (LLVMInt32Type (), v1, FALSE);
mask [2] = LLVMConstInt (LLVMInt32Type (), v2, FALSE);
mask [3] = LLVMConstInt (LLVMInt32Type (), v3, FALSE);
return LLVMConstVector (mask, 4);
}
static LLVMValueRef
create_const_vector_2_i32 (int v0, int v1)
{
LLVMValueRef mask [2];
mask [0] = LLVMConstInt (LLVMInt32Type (), v0, FALSE);
mask [1] = LLVMConstInt (LLVMInt32Type (), v1, FALSE);
return LLVMConstVector (mask, 2);
}
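
/* Splat ELEM into a vector of COUNT elements via a zero shuffle mask. */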
static LLVMValueRef
broadcast_element (EmitContext *ctx, LLVMValueRef elem, int count)
{
LLVMTypeRef t = LLVMTypeOf (elem);
LLVMTypeRef init_vec_t = LLVMVectorType (t, 1);
LLVMValueRef undef = LLVMGetUndef (init_vec_t);
LLVMValueRef vec = LLVMBuildInsertElement (ctx->builder, undef, elem, const_int32 (0), "");
LLVMValueRef select_zero = LLVMConstNull (LLVMVectorType (LLVMInt32Type (), count));
return LLVMBuildShuffleVector (ctx->builder, vec, undef, select_zero, "broadcast");
}
static LLVMValueRef
broadcast_constant (int const_val, LLVMTypeRef elem_t, int count)
{
int vals [MAX_VECTOR_ELEMS];
for (int i = 0; i < count; ++i)
vals [i] = const_val;
return create_const_vector (elem_t, vals, count);
}
static LLVMValueRef
create_shift_vector (EmitContext *ctx, LLVMValueRef type_donor, LLVMValueRef shiftamt)
{
LLVMTypeRef t = LLVMTypeOf (type_donor);
unsigned int elems = LLVMGetVectorSize (t);
LLVMTypeRef elem_t = LLVMGetElementType (t);
shiftamt = convert_full (ctx, shiftamt, elem_t, TRUE);
shiftamt = broadcast_element (ctx, shiftamt, elems);
return shiftamt;
}
static LLVMTypeRef
to_integral_vector_type (LLVMTypeRef t)
{
unsigned int elems = LLVMGetVectorSize (t);
LLVMTypeRef elem_t = LLVMGetElementType (t);
unsigned int bits = mono_llvm_get_prim_size_bits (elem_t);
return LLVMVectorType (LLVMIntType (bits), elems);
}
static LLVMValueRef
bitcast_to_integral (EmitContext *ctx, LLVMValueRef vec)
{
LLVMTypeRef src_t = LLVMTypeOf (vec);
LLVMTypeRef dst_t = to_integral_vector_type (src_t);
if (dst_t != src_t)
return LLVMBuildBitCast (ctx->builder, vec, dst_t, "bc2i");
return vec;
}
static LLVMValueRef
extract_high_elements (EmitContext *ctx, LLVMValueRef src_vec)
{
LLVMTypeRef src_t = LLVMTypeOf (src_vec);
unsigned int src_elems = LLVMGetVectorSize (src_t);
unsigned int dst_elems = src_elems / 2;
int mask [MAX_VECTOR_ELEMS] = { 0 };
for (int i = 0; i < dst_elems; ++i)
mask [i] = dst_elems + i;
return LLVMBuildShuffleVector (ctx->builder, src_vec, LLVMGetUndef (src_t), create_const_vector_i32 (mask, dst_elems), "extract_high");
}
static LLVMValueRef
keep_lowest_element (EmitContext *ctx, LLVMTypeRef dst_t, LLVMValueRef vec)
{
LLVMTypeRef t = LLVMTypeOf (vec);
g_assert (LLVMGetElementType (dst_t) == LLVMGetElementType (t));
unsigned int elems = LLVMGetVectorSize (dst_t);
unsigned int src_elems = LLVMGetVectorSize (t);
int mask [MAX_VECTOR_ELEMS] = { 0 };
mask [0] = 0;
for (unsigned int i = 1; i < elems; ++i)
mask [i] = src_elems;
return LLVMBuildShuffleVector (ctx->builder, vec, LLVMConstNull (t), create_const_vector_i32 (mask, elems), "keep_lowest");
}
static LLVMValueRef
concatenate_vectors (EmitContext *ctx, LLVMValueRef xs, LLVMValueRef ys)
{
LLVMTypeRef t = LLVMTypeOf (xs);
unsigned int elems = LLVMGetVectorSize (t) * 2;
int mask [MAX_VECTOR_ELEMS] = { 0 };
for (int i = 0; i < elems; ++i)
mask [i] = i;
return LLVMBuildShuffleVector (ctx->builder, xs, ys, create_const_vector_i32 (mask, elems), "concat_vecs");
}
static LLVMValueRef
scalar_from_vector (EmitContext *ctx, LLVMValueRef xs)
{
return LLVMBuildExtractElement (ctx->builder, xs, const_int32 (0), "v2s");
}
static LLVMValueRef
vector_from_scalar (EmitContext *ctx, LLVMTypeRef type, LLVMValueRef x)
{
return LLVMBuildInsertElement (ctx->builder, LLVMConstNull (type), x, const_int32 (0), "s2v");
}
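
/*
 * The immediate_unroll_* helpers below turn an operation parametrized by a runtime
 * index into a switch over all possible constant values of that index: one case
 * bblock per constant, a default case, and a phi in the continuation bblock which
 * merges the per-case results. A sketch of the intended usage (emit_case () is
 * illustrative, not a real helper):
 *
 *   ImmediateUnrollCtx ictx = immediate_unroll_begin (ctx, bb, max, index, ret_t, "name");
 *   int i;
 *   while (immediate_unroll_next (&ictx, &i))
 *           immediate_unroll_commit (&ictx, i, emit_case (ctx, i));
 *   immediate_unroll_unreachable_default (&ictx);
 *   LLVMBasicBlockRef cont;
 *   LLVMValueRef result = immediate_unroll_end (&ictx, &cont);
 */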
typedef struct {
EmitContext *ctx;
MonoBasicBlock *bb;
LLVMBasicBlockRef continuation;
LLVMValueRef phi;
LLVMValueRef switch_ins;
LLVMBasicBlockRef tmp_block;
LLVMBasicBlockRef default_case;
LLVMTypeRef switch_index_type;
const char *name;
int max_cases;
int i;
} ImmediateUnrollCtx;
static ImmediateUnrollCtx
immediate_unroll_begin (
EmitContext *ctx, MonoBasicBlock *bb, int max_cases,
LLVMValueRef switch_index, LLVMTypeRef return_type, const char *name)
{
LLVMBasicBlockRef default_case = gen_bb (ctx, name);
LLVMBasicBlockRef continuation = gen_bb (ctx, name);
LLVMValueRef switch_ins = LLVMBuildSwitch (ctx->builder, switch_index, default_case, max_cases);
LLVMPositionBuilderAtEnd (ctx->builder, continuation);
LLVMValueRef phi = LLVMBuildPhi (ctx->builder, return_type, name);
ImmediateUnrollCtx ictx = { 0 };
ictx.ctx = ctx;
ictx.bb = bb;
ictx.continuation = continuation;
ictx.phi = phi;
ictx.switch_ins = switch_ins;
ictx.default_case = default_case;
ictx.switch_index_type = LLVMTypeOf (switch_index);
ictx.name = name;
ictx.max_cases = max_cases;
return ictx;
}
static gboolean
immediate_unroll_next (ImmediateUnrollCtx *ictx, int *i)
{
if (ictx->i >= ictx->max_cases)
return FALSE;
ictx->tmp_block = gen_bb (ictx->ctx, ictx->name);
LLVMPositionBuilderAtEnd (ictx->ctx->builder, ictx->tmp_block);
*i = ictx->i;
++ictx->i;
return TRUE;
}
static void
immediate_unroll_commit (ImmediateUnrollCtx *ictx, int switch_const, LLVMValueRef value)
{
LLVMBuildBr (ictx->ctx->builder, ictx->continuation);
LLVMAddCase (ictx->switch_ins, LLVMConstInt (ictx->switch_index_type, switch_const, FALSE), ictx->tmp_block);
LLVMAddIncoming (ictx->phi, &value, &ictx->tmp_block, 1);
}
static void
immediate_unroll_default (ImmediateUnrollCtx *ictx)
{
LLVMPositionBuilderAtEnd (ictx->ctx->builder, ictx->default_case);
}
static void
immediate_unroll_commit_default (ImmediateUnrollCtx *ictx, LLVMValueRef value)
{
LLVMBuildBr (ictx->ctx->builder, ictx->continuation);
LLVMAddIncoming (ictx->phi, &value, &ictx->default_case, 1);
}
static void
immediate_unroll_unreachable_default (ImmediateUnrollCtx *ictx)
{
immediate_unroll_default (ictx);
LLVMBuildUnreachable (ictx->ctx->builder);
}
static LLVMValueRef
immediate_unroll_end (ImmediateUnrollCtx *ictx, LLVMBasicBlockRef *continuation)
{
EmitContext *ctx = ictx->ctx;
LLVMBuilderRef builder = ctx->builder;
LLVMPositionBuilderAtEnd (builder, ictx->continuation);
*continuation = ictx->continuation;
ctx->bblocks [ictx->bb->block_num].end_bblock = ictx->continuation;
return ictx->phi;
}
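
/*
 * Helpers for emitting an operation on the lowest element of a vector. When a
 * scalar overload of the intrinsic exists, the arguments are demoted to scalars
 * and the result is promoted back to a vector; for 8/16-bit element types on
 * arm64 (which presumably lack scalar intrinsic variants) the full vector op is
 * used instead and all lanes of the result except the lowest are zeroed.
 */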
typedef struct {
EmitContext *ctx;
LLVMTypeRef intermediate_type;
LLVMTypeRef return_type;
gboolean needs_fake_scalar_op;
llvm_ovr_tag_t ovr_tag;
} ScalarOpFromVectorOpCtx;
static inline gboolean
check_needs_fake_scalar_op (MonoTypeEnum type)
{
#if defined(TARGET_ARM64)
switch (type) {
case MONO_TYPE_U1:
case MONO_TYPE_I1:
case MONO_TYPE_U2:
case MONO_TYPE_I2:
return TRUE;
}
#endif
return FALSE;
}
static ScalarOpFromVectorOpCtx
scalar_op_from_vector_op (EmitContext *ctx, LLVMTypeRef return_type, MonoInst *ins)
{
ScalarOpFromVectorOpCtx ret = { 0 };
ret.ctx = ctx;
ret.intermediate_type = return_type;
ret.return_type = return_type;
ret.needs_fake_scalar_op = check_needs_fake_scalar_op (inst_c1_type (ins));
ret.ovr_tag = ovr_tag_from_llvm_type (return_type);
if (!ret.needs_fake_scalar_op) {
ret.ovr_tag = ovr_tag_force_scalar (ret.ovr_tag);
ret.intermediate_type = ovr_tag_to_llvm_type (ret.ovr_tag);
}
return ret;
}
static void
scalar_op_from_vector_op_process_args (ScalarOpFromVectorOpCtx *sctx, LLVMValueRef *args, int num_args)
{
if (!sctx->needs_fake_scalar_op)
for (int i = 0; i < num_args; ++i)
args [i] = scalar_from_vector (sctx->ctx, args [i]);
}
static LLVMValueRef
scalar_op_from_vector_op_process_result (ScalarOpFromVectorOpCtx *sctx, LLVMValueRef result)
{
if (sctx->needs_fake_scalar_op)
return keep_lowest_element (sctx->ctx, LLVMTypeOf (result), result);
return vector_from_scalar (sctx->ctx, sctx->return_type, result);
}
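
/*
 * emit_llvmonly_handler_start:
 *
 *   Emit the start of an exception handler in llvm-only mode: for catch clauses,
 * load the pending exception into ctx->ex_var and clear it; on wasm, pop the LMF
 * inserted for stack walking; finally, make the handler code branch to CBB.
 */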
static void
emit_llvmonly_handler_start (EmitContext *ctx, MonoBasicBlock *bb, LLVMBasicBlockRef cbb)
{
int clause_index = MONO_REGION_CLAUSE_INDEX (bb->region);
MonoExceptionClause *clause = &ctx->cfg->header->clauses [clause_index];
// Make exception available to catch blocks
if (!(clause->flags & MONO_EXCEPTION_CLAUSE_FINALLY || clause->flags & MONO_EXCEPTION_CLAUSE_FAULT)) {
LLVMValueRef mono_exc = mono_llvm_emit_load_exception_call (ctx, ctx->builder);
g_assert (ctx->ex_var);
LLVMBuildStore (ctx->builder, LLVMBuildBitCast (ctx->builder, mono_exc, ObjRefType (), ""), ctx->ex_var);
if (bb->in_scount == 1) {
MonoInst *exvar = bb->in_stack [0];
g_assert (!ctx->values [exvar->dreg]);
g_assert (ctx->ex_var);
ctx->values [exvar->dreg] = LLVMBuildLoad (ctx->builder, ctx->ex_var, "save_exception");
emit_volatile_store (ctx, exvar->dreg);
}
mono_llvm_emit_clear_exception_call (ctx, ctx->builder);
}
#ifdef TARGET_WASM
if (ctx->cfg->lmf_var && !ctx->cfg->deopt) {
LLVMValueRef callee;
LLVMValueRef args [1];
LLVMTypeRef sig = LLVMFunctionType1 (LLVMVoidType (), ctx->module->ptr_type, FALSE);
/*
* There might be an LMF on the stack inserted to enable stack walking, see
* method_needs_stack_walk (). If an exception is thrown, the LMF popping code
* is not executed, so do it here.
*/
g_assert (ctx->addresses [ctx->cfg->lmf_var->dreg]);
callee = get_callee (ctx, sig, MONO_PATCH_INFO_JIT_ICALL_ADDR, GUINT_TO_POINTER (MONO_JIT_ICALL_mini_llvmonly_pop_lmf));
args [0] = convert (ctx, ctx->addresses [ctx->cfg->lmf_var->dreg], ctx->module->ptr_type);
emit_call (ctx, bb, &ctx->builder, callee, args, 1);
}
#endif
LLVMBuilderRef handler_builder = create_builder (ctx);
LLVMBasicBlockRef target_bb = ctx->bblocks [bb->block_num].call_handler_target_bb;
LLVMPositionBuilderAtEnd (handler_builder, target_bb);
// Make the handler code end with a jump to cbb
LLVMBuildBr (handler_builder, cbb);
}
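
/*
 * emit_handler_start:
 *
 *   Emit the start of an exception handler for the non-llvm-only case: emit a
 * landingpad instruction using a real or dummy personality function, store the
 * exception object into the exvar, and dispatch to nested clauses based on the
 * selector value returned by the landing pad.
 */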
static void
emit_handler_start (EmitContext *ctx, MonoBasicBlock *bb, LLVMBuilderRef builder)
{
MonoCompile *cfg = ctx->cfg;
LLVMValueRef *values = ctx->values;
LLVMModuleRef lmodule = ctx->lmodule;
BBInfo *bblocks = ctx->bblocks;
LLVMTypeRef i8ptr;
LLVMValueRef personality;
LLVMValueRef landing_pad;
LLVMBasicBlockRef target_bb;
MonoInst *exvar;
static int ti_generator;
char ti_name [128];
LLVMValueRef type_info;
int clause_index;
GSList *l;
// <resultval> = landingpad <somety> personality <type> <pers_fn> <clause>+
if (cfg->compile_aot) {
/* Use a dummy personality function */
personality = LLVMGetNamedFunction (lmodule, "mono_personality");
g_assert (personality);
} else {
/* Can't cache this as each method is in its own llvm module */
LLVMTypeRef personality_type = LLVMFunctionType (LLVMInt32Type (), NULL, 0, TRUE);
personality = LLVMAddFunction (ctx->lmodule, "mono_personality", personality_type);
mono_llvm_add_func_attr (personality, LLVM_ATTR_NO_UNWIND);
LLVMBasicBlockRef entry_bb = LLVMAppendBasicBlock (personality, "ENTRY");
LLVMBuilderRef builder2 = LLVMCreateBuilder ();
LLVMPositionBuilderAtEnd (builder2, entry_bb);
LLVMBuildRet (builder2, LLVMConstInt (LLVMInt32Type (), 0, FALSE));
LLVMDisposeBuilder (builder2);
}
i8ptr = LLVMPointerType (LLVMInt8Type (), 0);
clause_index = (mono_get_block_region_notry (cfg, bb->region) >> 8) - 1;
/*
* Create the type info
*/
sprintf (ti_name, "type_info_%d", ti_generator);
ti_generator ++;
if (cfg->compile_aot) {
/* decode_eh_frame () in aot-runtime.c will decode this */
type_info = LLVMAddGlobal (lmodule, LLVMInt32Type (), ti_name);
LLVMSetInitializer (type_info, LLVMConstInt (LLVMInt32Type (), clause_index, FALSE));
/*
		 * These symbols are not really used; the clause_index is embedded into the EH tables generated by DwarfMonoException in LLVM.
*/
LLVMSetLinkage (type_info, LLVMInternalLinkage);
} else {
type_info = LLVMAddGlobal (lmodule, LLVMInt32Type (), ti_name);
LLVMSetInitializer (type_info, LLVMConstInt (LLVMInt32Type (), clause_index, FALSE));
}
{
LLVMTypeRef members [2], ret_type;
members [0] = i8ptr;
members [1] = LLVMInt32Type ();
ret_type = LLVMStructType (members, 2, FALSE);
landing_pad = LLVMBuildLandingPad (builder, ret_type, personality, 1, "");
LLVMAddClause (landing_pad, type_info);
/* Store the exception into the exvar */
if (ctx->ex_var)
LLVMBuildStore (builder, convert (ctx, LLVMBuildExtractValue (builder, landing_pad, 0, "ex_obj"), ObjRefType ()), ctx->ex_var);
}
/*
	 * LLVM throw sites are associated with one landing pad, and LLVM-generated
* code expects control to be transferred to this landing pad even in the
* presence of nested clauses. The landing pad needs to branch to the landing
* pads belonging to nested clauses based on the selector value returned by
* the landing pad instruction, which is passed to the landing pad in a
* register by the EH code.
*/
target_bb = bblocks [bb->block_num].call_handler_target_bb;
g_assert (target_bb);
/*
* Branch to the correct landing pad
*/
LLVMValueRef ex_selector = LLVMBuildExtractValue (builder, landing_pad, 1, "ex_selector");
LLVMValueRef switch_ins = LLVMBuildSwitch (builder, ex_selector, target_bb, 0);
for (l = ctx->nested_in [clause_index]; l; l = l->next) {
int nesting_clause_index = GPOINTER_TO_INT (l->data);
MonoBasicBlock *handler_bb;
handler_bb = (MonoBasicBlock*)g_hash_table_lookup (ctx->clause_to_handler, GINT_TO_POINTER (nesting_clause_index));
g_assert (handler_bb);
g_assert (ctx->bblocks [handler_bb->block_num].call_handler_target_bb);
LLVMAddCase (switch_ins, LLVMConstInt (LLVMInt32Type (), nesting_clause_index, FALSE), ctx->bblocks [handler_bb->block_num].call_handler_target_bb);
}
/* Start a new bblock which CALL_HANDLER can branch to */
ctx->builder = builder = create_builder (ctx);
LLVMPositionBuilderAtEnd (ctx->builder, target_bb);
ctx->bblocks [bb->block_num].end_bblock = target_bb;
/* Store the exception into the IL level exvar */
if (bb->in_scount == 1) {
g_assert (bb->in_scount == 1);
exvar = bb->in_stack [0];
		// FIXME: Is this shared with filter clauses?
g_assert (!values [exvar->dreg]);
g_assert (ctx->ex_var);
values [exvar->dreg] = LLVMBuildLoad (builder, ctx->ex_var, "");
emit_volatile_store (ctx, exvar->dreg);
}
/* Make normal branches to the start of the clause branch to the new bblock */
bblocks [bb->block_num].bblock = target_bb;
}
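
/* Floating point constants. The wasm NaN canonicalization below is currently disabled. */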
static LLVMValueRef
get_double_const (MonoCompile *cfg, double val)
{
//#ifdef TARGET_WASM
#if 0
//Wasm requires us to canonicalize NaNs.
if (mono_isnan (val))
*(gint64 *)&val = 0x7FF8000000000000ll;
#endif
return LLVMConstReal (LLVMDoubleType (), val);
}
static LLVMValueRef
get_float_const (MonoCompile *cfg, float val)
{
//#ifdef TARGET_WASM
#if 0
if (mono_isnan (val))
*(int *)&val = 0x7FC00000;
#endif
if (cfg->r4fp)
return LLVMConstReal (LLVMFloatType (), val);
else
return LLVMConstFPExt (LLVMConstReal (LLVMFloatType (), val), LLVMDoubleType ());
}
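
/*
 * call_overloaded_intrins:
 *
 *   Emit a call to the overloaded intrinsic identified by ID and OVR_TAG,
 * converting each argument to the parameter type expected by the intrinsic.
 */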
static LLVMValueRef
call_overloaded_intrins (EmitContext *ctx, int id, llvm_ovr_tag_t ovr_tag, LLVMValueRef *args, const char *name)
{
int key = key_from_id_and_tag (id, ovr_tag);
LLVMValueRef intrins = get_intrins (ctx, key);
int nargs = LLVMCountParamTypes (LLVMGetElementType (LLVMTypeOf (intrins)));
for (int i = 0; i < nargs; ++i) {
LLVMTypeRef t1 = LLVMTypeOf (args [i]);
LLVMTypeRef t2 = LLVMTypeOf (LLVMGetParam (intrins, i));
if (t1 != t2)
args [i] = convert (ctx, args [i], t2);
}
return LLVMBuildCall (ctx->builder, intrins, args, nargs, name);
}
static LLVMValueRef
call_intrins (EmitContext *ctx, int id, LLVMValueRef *args, const char *name)
{
return call_overloaded_intrins (ctx, id, 0, args, name);
}
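
/*
 * process_bb:
 *
 *   Emit LLVM IR for the instructions in BB. Phi nodes are handled first since
 * they are grouped at the start of the bblock; very long bblocks are split into
 * multiple LLVM bblocks since some steps in llc are non-linear in bblock size.
 */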
static void
process_bb (EmitContext *ctx, MonoBasicBlock *bb)
{
MonoCompile *cfg = ctx->cfg;
MonoMethodSignature *sig = ctx->sig;
LLVMValueRef method = ctx->lmethod;
LLVMValueRef *values = ctx->values;
LLVMValueRef *addresses = ctx->addresses;
LLVMCallInfo *linfo = ctx->linfo;
BBInfo *bblocks = ctx->bblocks;
MonoInst *ins;
LLVMBasicBlockRef cbb;
LLVMBuilderRef builder;
gboolean has_terminator;
LLVMValueRef v;
LLVMValueRef lhs, rhs, arg3;
int nins = 0;
cbb = get_end_bb (ctx, bb);
builder = create_builder (ctx);
ctx->builder = builder;
LLVMPositionBuilderAtEnd (builder, cbb);
if (!ctx_ok (ctx))
return;
if (cfg->interp_entry_only && bb != cfg->bb_init && bb != cfg->bb_entry && bb != cfg->bb_exit) {
/* The interp entry code is in bb_entry, skip the rest as we might not be able to compile it */
LLVMBuildUnreachable (builder);
return;
}
if (bb->flags & BB_EXCEPTION_HANDLER) {
if (!ctx->llvm_only && !bblocks [bb->block_num].invoke_target) {
set_failure (ctx, "handler without invokes");
return;
}
if (ctx->llvm_only)
emit_llvmonly_handler_start (ctx, bb, cbb);
else
emit_handler_start (ctx, bb, builder);
if (!ctx_ok (ctx))
return;
builder = ctx->builder;
}
/* Handle PHI nodes first */
/* They should be grouped at the start of the bb */
for (ins = bb->code; ins; ins = ins->next) {
emit_dbg_loc (ctx, builder, ins->cil_code);
if (ins->opcode == OP_NOP)
continue;
if (!MONO_IS_PHI (ins))
break;
if (cfg->interp_entry_only)
break;
int i;
gboolean empty = TRUE;
/* Check that all input bblocks really branch to us */
for (i = 0; i < bb->in_count; ++i) {
if (bb->in_bb [i]->last_ins && bb->in_bb [i]->last_ins->opcode == OP_NOT_REACHED)
ins->inst_phi_args [i + 1] = -1;
else
empty = FALSE;
}
if (empty) {
/* LLVM doesn't like phi instructions with zero operands */
ctx->is_dead [ins->dreg] = TRUE;
continue;
}
/* Created earlier, insert it now */
LLVMInsertIntoBuilder (builder, values [ins->dreg]);
for (i = 0; i < ins->inst_phi_args [0]; i++) {
int sreg1 = ins->inst_phi_args [i + 1];
int count, j;
/*
* Count the number of times the incoming bblock branches to us,
* since llvm requires a separate entry for each.
*/
if (bb->in_bb [i]->last_ins && bb->in_bb [i]->last_ins->opcode == OP_SWITCH) {
MonoInst *switch_ins = bb->in_bb [i]->last_ins;
count = 0;
for (j = 0; j < GPOINTER_TO_UINT (switch_ins->klass); ++j) {
if (switch_ins->inst_many_bb [j] == bb)
count ++;
}
} else {
count = 1;
}
/* Remember for later */
for (j = 0; j < count; ++j) {
PhiNode *node = (PhiNode*)mono_mempool_alloc0 (ctx->mempool, sizeof (PhiNode));
node->bb = bb;
node->phi = ins;
node->in_bb = bb->in_bb [i];
node->sreg = sreg1;
bblocks [bb->in_bb [i]->block_num].phi_nodes = g_slist_prepend_mempool (ctx->mempool, bblocks [bb->in_bb [i]->block_num].phi_nodes, node);
}
}
}
// Add volatile stores for PHI nodes
// These need to be emitted after the PHI nodes
for (ins = bb->code; ins; ins = ins->next) {
const char *spec = LLVM_INS_INFO (ins->opcode);
if (ins->opcode == OP_NOP)
continue;
if (!MONO_IS_PHI (ins))
break;
if (spec [MONO_INST_DEST] != 'v')
emit_volatile_store (ctx, ins->dreg);
}
has_terminator = FALSE;
for (ins = bb->code; ins; ins = ins->next) {
const char *spec = LLVM_INS_INFO (ins->opcode);
char *dname = NULL;
char dname_buf [128];
emit_dbg_loc (ctx, builder, ins->cil_code);
nins ++;
if (nins > 1000) {
/*
* Some steps in llc are non-linear in the size of basic blocks, see #5714.
* Start a new bblock.
			 * Prevent the bblocks from being merged by doing a volatile load + cond branch
* from localloc-ed memory.
*/
if (!cfg->llvm_only)
;//set_failure (ctx, "basic block too long");
if (!ctx->long_bb_break_var) {
ctx->long_bb_break_var = build_alloca_llvm_type_name (ctx, LLVMInt32Type (), 0, "long_bb_break");
mono_llvm_build_store (ctx->alloca_builder, LLVMConstInt (LLVMInt32Type (), 0, FALSE), ctx->long_bb_break_var, TRUE, LLVM_BARRIER_NONE);
}
cbb = gen_bb (ctx, "CONT_LONG_BB");
LLVMBasicBlockRef dummy_bb = gen_bb (ctx, "CONT_LONG_BB_DUMMY");
LLVMValueRef load = mono_llvm_build_load (builder, ctx->long_bb_break_var, "", TRUE);
/*
* The long_bb_break_var is initialized to 0 in the prolog, so this branch will always go to 'cbb'
* but llvm doesn't know that, so the branch is not going to be eliminated.
*/
LLVMValueRef cmp = LLVMBuildICmp (builder, LLVMIntEQ, load, LLVMConstInt (LLVMInt32Type (), 0, FALSE), "");
LLVMBuildCondBr (builder, cmp, cbb, dummy_bb);
/* Emit a dummy false bblock which does nothing but contains a volatile store so it cannot be eliminated */
ctx->builder = builder = create_builder (ctx);
LLVMPositionBuilderAtEnd (builder, dummy_bb);
mono_llvm_build_store (builder, LLVMConstInt (LLVMInt32Type (), 1, FALSE), ctx->long_bb_break_var, TRUE, LLVM_BARRIER_NONE);
LLVMBuildBr (builder, cbb);
ctx->builder = builder = create_builder (ctx);
LLVMPositionBuilderAtEnd (builder, cbb);
ctx->bblocks [bb->block_num].end_bblock = cbb;
nins = 0;
emit_dbg_loc (ctx, builder, ins->cil_code);
}
if (has_terminator)
/* There could be instructions after a terminator, skip them */
break;
if (spec [MONO_INST_DEST] != ' ' && !MONO_IS_STORE_MEMBASE (ins)) {
sprintf (dname_buf, "t%d", ins->dreg);
dname = dname_buf;
}
if (spec [MONO_INST_SRC1] != ' ' && spec [MONO_INST_SRC1] != 'v') {
MonoInst *var = get_vreg_to_inst (cfg, ins->sreg1);
if (var && var->flags & (MONO_INST_VOLATILE|MONO_INST_INDIRECT) && var->opcode != OP_GSHAREDVT_ARG_REGOFFSET) {
lhs = emit_volatile_load (ctx, ins->sreg1);
} else {
/* It is ok for SETRET to have an uninitialized argument */
if (!values [ins->sreg1] && ins->opcode != OP_SETRET) {
set_failure (ctx, "sreg1");
return;
}
lhs = values [ins->sreg1];
}
} else {
lhs = NULL;
}
if (spec [MONO_INST_SRC2] != ' ' && spec [MONO_INST_SRC2] != 'v') {
MonoInst *var = get_vreg_to_inst (cfg, ins->sreg2);
if (var && var->flags & (MONO_INST_VOLATILE|MONO_INST_INDIRECT)) {
rhs = emit_volatile_load (ctx, ins->sreg2);
} else {
if (!values [ins->sreg2]) {
set_failure (ctx, "sreg2");
return;
}
rhs = values [ins->sreg2];
}
} else {
rhs = NULL;
}
if (spec [MONO_INST_SRC3] != ' ' && spec [MONO_INST_SRC3] != 'v') {
MonoInst *var = get_vreg_to_inst (cfg, ins->sreg3);
if (var && var->flags & (MONO_INST_VOLATILE|MONO_INST_INDIRECT)) {
arg3 = emit_volatile_load (ctx, ins->sreg3);
} else {
if (!values [ins->sreg3]) {
set_failure (ctx, "sreg3");
return;
}
arg3 = values [ins->sreg3];
}
} else {
arg3 = NULL;
}
//mono_print_ins (ins);
gboolean skip_volatile_store = FALSE;
switch (ins->opcode) {
case OP_NOP:
case OP_NOT_NULL:
case OP_LIVERANGE_START:
case OP_LIVERANGE_END:
break;
case OP_ICONST:
values [ins->dreg] = LLVMConstInt (LLVMInt32Type (), ins->inst_c0, FALSE);
break;
case OP_I8CONST:
#if TARGET_SIZEOF_VOID_P == 4
values [ins->dreg] = LLVMConstInt (LLVMInt64Type (), GET_LONG_IMM (ins), FALSE);
#else
values [ins->dreg] = LLVMConstInt (LLVMInt64Type (), (gint64)ins->inst_c0, FALSE);
#endif
break;
case OP_R8CONST:
values [ins->dreg] = get_double_const (cfg, *(double*)ins->inst_p0);
break;
case OP_R4CONST:
values [ins->dreg] = get_float_const (cfg, *(float*)ins->inst_p0);
break;
case OP_DUMMY_ICONST:
values [ins->dreg] = LLVMConstInt (LLVMInt32Type (), 0, FALSE);
break;
case OP_DUMMY_I8CONST:
values [ins->dreg] = LLVMConstInt (LLVMInt64Type (), 0, FALSE);
break;
case OP_DUMMY_R8CONST:
values [ins->dreg] = LLVMConstReal (LLVMDoubleType (), 0.0f);
break;
case OP_BR: {
LLVMBasicBlockRef target_bb = get_bb (ctx, ins->inst_target_bb);
LLVMBuildBr (builder, target_bb);
has_terminator = TRUE;
break;
}
case OP_SWITCH: {
int i;
LLVMValueRef v;
char bb_name [128];
LLVMBasicBlockRef new_bb;
LLVMBuilderRef new_builder;
// The default branch is already handled
// FIXME: Handle it here
/* Start new bblock */
sprintf (bb_name, "SWITCH_DEFAULT_BB%d", ctx->default_index ++);
new_bb = LLVMAppendBasicBlock (ctx->lmethod, bb_name);
lhs = convert (ctx, lhs, LLVMInt32Type ());
v = LLVMBuildSwitch (builder, lhs, new_bb, GPOINTER_TO_UINT (ins->klass));
for (i = 0; i < GPOINTER_TO_UINT (ins->klass); ++i) {
MonoBasicBlock *target_bb = ins->inst_many_bb [i];
LLVMAddCase (v, LLVMConstInt (LLVMInt32Type (), i, FALSE), get_bb (ctx, target_bb));
}
new_builder = create_builder (ctx);
LLVMPositionBuilderAtEnd (new_builder, new_bb);
LLVMBuildUnreachable (new_builder);
has_terminator = TRUE;
g_assert (!ins->next);
break;
}
case OP_SETRET:
switch (linfo->ret.storage) {
case LLVMArgNormal:
case LLVMArgVtypeInReg:
case LLVMArgVtypeAsScalar:
case LLVMArgWasmVtypeAsScalar: {
LLVMTypeRef ret_type = LLVMGetReturnType (LLVMGetElementType (LLVMTypeOf (method)));
LLVMValueRef retval = LLVMGetUndef (ret_type);
gboolean src_in_reg = FALSE;
gboolean is_simd = MONO_CLASS_IS_SIMD (ctx->cfg, mono_class_from_mono_type_internal (sig->ret));
switch (linfo->ret.storage) {
case LLVMArgNormal: src_in_reg = TRUE; break;
case LLVMArgVtypeInReg: case LLVMArgVtypeAsScalar: src_in_reg = is_simd; break;
}
if (src_in_reg && (!lhs || ctx->is_dead [ins->sreg1])) {
/*
* The method did not set its return value, probably because it
* ends with a throw.
*/
LLVMBuildRet (builder, retval);
break;
}
switch (linfo->ret.storage) {
case LLVMArgNormal:
retval = convert (ctx, lhs, type_to_llvm_type (ctx, sig->ret));
break;
case LLVMArgVtypeInReg:
if (is_simd) {
/* The return type is an LLVM aggregate type, so a bare bitcast cannot be used to do this conversion. */
int width = mono_type_size (sig->ret, NULL);
int elems = width / TARGET_SIZEOF_VOID_P;
/* The return value might not be set if there is a throw */
LLVMValueRef val = LLVMBuildBitCast (builder, lhs, LLVMVectorType (IntPtrType (), elems), "");
for (int i = 0; i < elems; ++i) {
LLVMValueRef element = LLVMBuildExtractElement (builder, val, const_int32 (i), "");
retval = LLVMBuildInsertValue (builder, retval, element, i, "setret_simd_vtype_in_reg");
}
} else {
LLVMValueRef addr = LLVMBuildBitCast (builder, addresses [ins->sreg1], LLVMPointerType (ret_type, 0), "");
for (int i = 0; i < 2; ++i) {
if (linfo->ret.pair_storage [i] == LLVMArgInIReg) {
LLVMValueRef indexes [2], part_addr;
indexes [0] = LLVMConstInt (LLVMInt32Type (), 0, FALSE);
indexes [1] = LLVMConstInt (LLVMInt32Type (), i, FALSE);
part_addr = LLVMBuildGEP (builder, addr, indexes, 2, "");
retval = LLVMBuildInsertValue (builder, retval, LLVMBuildLoad (builder, part_addr, ""), i, "");
} else {
g_assert (linfo->ret.pair_storage [i] == LLVMArgNone);
}
}
}
break;
case LLVMArgVtypeAsScalar:
if (is_simd) {
retval = LLVMBuildBitCast (builder, values [ins->sreg1], ret_type, "setret_simd_vtype_as_scalar");
} else {
g_assert (addresses [ins->sreg1]);
retval = LLVMBuildLoad (builder, LLVMBuildBitCast (builder, addresses [ins->sreg1], LLVMPointerType (ret_type, 0), ""), "");
}
break;
case LLVMArgWasmVtypeAsScalar:
g_assert (addresses [ins->sreg1]);
retval = LLVMBuildLoad (builder, LLVMBuildBitCast (builder, addresses [ins->sreg1], LLVMPointerType (ret_type, 0), ""), "");
break;
}
LLVMBuildRet (builder, retval);
break;
}
case LLVMArgVtypeByRef: {
LLVMBuildRetVoid (builder);
break;
}
case LLVMArgGsharedvtFixed: {
LLVMTypeRef ret_type = type_to_llvm_type (ctx, sig->ret);
/* The return value is in lhs, need to store to the vret argument */
/* sreg1 might not be set */
if (lhs) {
g_assert (cfg->vret_addr);
g_assert (values [cfg->vret_addr->dreg]);
LLVMBuildStore (builder, convert (ctx, lhs, ret_type), convert (ctx, values [cfg->vret_addr->dreg], LLVMPointerType (ret_type, 0)));
}
LLVMBuildRetVoid (builder);
break;
}
case LLVMArgGsharedvtFixedVtype: {
/* Already set */
LLVMBuildRetVoid (builder);
break;
}
case LLVMArgGsharedvtVariable: {
/* Already set */
LLVMBuildRetVoid (builder);
break;
}
case LLVMArgVtypeRetAddr: {
LLVMBuildRetVoid (builder);
break;
}
case LLVMArgAsIArgs:
case LLVMArgFpStruct: {
LLVMTypeRef ret_type = LLVMGetReturnType (LLVMGetElementType (LLVMTypeOf (method)));
LLVMValueRef retval;
g_assert (addresses [ins->sreg1]);
retval = LLVMBuildLoad (builder, convert (ctx, addresses [ins->sreg1], LLVMPointerType (ret_type, 0)), "");
LLVMBuildRet (builder, retval);
break;
}
case LLVMArgNone:
LLVMBuildRetVoid (builder);
break;
default:
g_assert_not_reached ();
break;
}
has_terminator = TRUE;
break;
case OP_ICOMPARE:
case OP_FCOMPARE:
case OP_RCOMPARE:
case OP_LCOMPARE:
case OP_COMPARE:
case OP_ICOMPARE_IMM:
case OP_LCOMPARE_IMM:
case OP_COMPARE_IMM: {
CompRelation rel;
LLVMValueRef cmp, args [16];
gboolean likely = (ins->flags & MONO_INST_LIKELY) != 0;
gboolean unlikely = FALSE;
if (MONO_IS_COND_BRANCH_OP (ins->next)) {
if (ins->next->inst_false_bb->out_of_line)
likely = TRUE;
else if (ins->next->inst_true_bb->out_of_line)
unlikely = TRUE;
}
if (ins->next->opcode == OP_NOP)
break;
if (ins->next->opcode == OP_BR)
/* The comparison result is not needed */
continue;
rel = mono_opcode_to_cond (ins->next->opcode);
if (ins->opcode == OP_ICOMPARE_IMM) {
lhs = convert (ctx, lhs, LLVMInt32Type ());
rhs = LLVMConstInt (LLVMInt32Type (), ins->inst_imm, FALSE);
}
if (ins->opcode == OP_LCOMPARE_IMM) {
lhs = convert (ctx, lhs, LLVMInt64Type ());
rhs = LLVMConstInt (LLVMInt64Type (), GET_LONG_IMM (ins), FALSE);
}
if (ins->opcode == OP_LCOMPARE) {
lhs = convert (ctx, lhs, LLVMInt64Type ());
rhs = convert (ctx, rhs, LLVMInt64Type ());
}
if (ins->opcode == OP_ICOMPARE) {
lhs = convert (ctx, lhs, LLVMInt32Type ());
rhs = convert (ctx, rhs, LLVMInt32Type ());
}
if (lhs && rhs) {
if (LLVMGetTypeKind (LLVMTypeOf (lhs)) == LLVMPointerTypeKind)
rhs = convert (ctx, rhs, LLVMTypeOf (lhs));
else if (LLVMGetTypeKind (LLVMTypeOf (rhs)) == LLVMPointerTypeKind)
lhs = convert (ctx, lhs, LLVMTypeOf (rhs));
}
/* We use COMPARE+SETcc/Bcc, llvm uses SETcc+br cond */
if (ins->opcode == OP_FCOMPARE) {
cmp = LLVMBuildFCmp (builder, fpcond_to_llvm_cond [rel], convert (ctx, lhs, LLVMDoubleType ()), convert (ctx, rhs, LLVMDoubleType ()), "");
} else if (ins->opcode == OP_RCOMPARE) {
cmp = LLVMBuildFCmp (builder, fpcond_to_llvm_cond [rel], convert (ctx, lhs, LLVMFloatType ()), convert (ctx, rhs, LLVMFloatType ()), "");
} else if (ins->opcode == OP_COMPARE_IMM) {
LLVMIntPredicate llvm_pred = cond_to_llvm_cond [rel];
if (LLVMGetTypeKind (LLVMTypeOf (lhs)) == LLVMPointerTypeKind && ins->inst_imm == 0) {
// We are emitting a NULL check for a pointer
gboolean nonnull = mono_llvm_is_nonnull (lhs);
if (nonnull && llvm_pred == LLVMIntEQ)
cmp = LLVMConstInt (LLVMInt1Type (), FALSE, FALSE);
else if (nonnull && llvm_pred == LLVMIntNE)
cmp = LLVMConstInt (LLVMInt1Type (), TRUE, FALSE);
else
cmp = LLVMBuildICmp (builder, llvm_pred, lhs, LLVMConstNull (LLVMTypeOf (lhs)), "");
} else {
cmp = LLVMBuildICmp (builder, llvm_pred, convert (ctx, lhs, IntPtrType ()), LLVMConstInt (IntPtrType (), ins->inst_imm, FALSE), "");
}
} else if (ins->opcode == OP_LCOMPARE_IMM) {
cmp = LLVMBuildICmp (builder, cond_to_llvm_cond [rel], lhs, rhs, "");
}
else if (ins->opcode == OP_COMPARE) {
if (LLVMGetTypeKind (LLVMTypeOf (lhs)) == LLVMPointerTypeKind && LLVMTypeOf (lhs) == LLVMTypeOf (rhs))
cmp = LLVMBuildICmp (builder, cond_to_llvm_cond [rel], lhs, rhs, "");
else
cmp = LLVMBuildICmp (builder, cond_to_llvm_cond [rel], convert (ctx, lhs, IntPtrType ()), convert (ctx, rhs, IntPtrType ()), "");
} else
cmp = LLVMBuildICmp (builder, cond_to_llvm_cond [rel], lhs, rhs, "");
if (likely || unlikely) {
args [0] = cmp;
args [1] = LLVMConstInt (LLVMInt1Type (), likely ? 1 : 0, FALSE);
cmp = call_intrins (ctx, INTRINS_EXPECT_I1, args, "");
}
if (MONO_IS_COND_BRANCH_OP (ins->next)) {
if (ins->next->inst_true_bb == ins->next->inst_false_bb) {
/*
* If the target bb contains PHI instructions, LLVM requires
* two PHI entries for this bblock, while we only generate one.
* So convert this to an unconditional bblock. (bxc #171).
*/
LLVMBuildBr (builder, get_bb (ctx, ins->next->inst_true_bb));
} else {
LLVMBuildCondBr (builder, cmp, get_bb (ctx, ins->next->inst_true_bb), get_bb (ctx, ins->next->inst_false_bb));
}
has_terminator = TRUE;
} else if (MONO_IS_SETCC (ins->next)) {
sprintf (dname_buf, "t%d", ins->next->dreg);
dname = dname_buf;
values [ins->next->dreg] = LLVMBuildZExt (builder, cmp, LLVMInt32Type (), dname);
/* Add stores for volatile variables */
emit_volatile_store (ctx, ins->next->dreg);
} else if (MONO_IS_COND_EXC (ins->next)) {
gboolean force_explicit_branch = FALSE;
if (bb->region != -1) {
/* Don't tag null check branches in exception-handling
* regions with `make.implicit`.
*/
force_explicit_branch = TRUE;
}
emit_cond_system_exception (ctx, bb, (const char*)ins->next->inst_p1, cmp, force_explicit_branch);
if (!ctx_ok (ctx))
break;
builder = ctx->builder;
} else {
set_failure (ctx, "next");
break;
}
ins = ins->next;
break;
}
case OP_FCEQ:
case OP_FCNEQ:
case OP_FCLT:
case OP_FCLT_UN:
case OP_FCGT:
case OP_FCGT_UN:
case OP_FCGE:
case OP_FCLE: {
CompRelation rel;
LLVMValueRef cmp;
rel = mono_opcode_to_cond (ins->opcode);
cmp = LLVMBuildFCmp (builder, fpcond_to_llvm_cond [rel], convert (ctx, lhs, LLVMDoubleType ()), convert (ctx, rhs, LLVMDoubleType ()), "");
values [ins->dreg] = LLVMBuildZExt (builder, cmp, LLVMInt32Type (), dname);
break;
}
case OP_RCEQ:
case OP_RCNEQ:
case OP_RCLT:
case OP_RCLT_UN:
case OP_RCGT:
case OP_RCGT_UN: {
CompRelation rel;
LLVMValueRef cmp;
rel = mono_opcode_to_cond (ins->opcode);
cmp = LLVMBuildFCmp (builder, fpcond_to_llvm_cond [rel], convert (ctx, lhs, LLVMFloatType ()), convert (ctx, rhs, LLVMFloatType ()), "");
values [ins->dreg] = LLVMBuildZExt (builder, cmp, LLVMInt32Type (), dname);
break;
}
case OP_PHI:
case OP_FPHI:
case OP_VPHI:
case OP_XPHI: {
// Handled above
skip_volatile_store = TRUE;
break;
}
case OP_MOVE:
case OP_LMOVE:
case OP_XMOVE:
case OP_SETFRET:
g_assert (lhs);
values [ins->dreg] = lhs;
break;
case OP_FMOVE:
case OP_RMOVE: {
MonoInst *var = get_vreg_to_inst (cfg, ins->dreg);
g_assert (lhs);
values [ins->dreg] = lhs;
if (var && m_class_get_byval_arg (var->klass)->type == MONO_TYPE_R4) {
/*
* This is added by the spilling pass in case of the JIT,
* but we have to do it ourselves.
*/
values [ins->dreg] = convert (ctx, values [ins->dreg], LLVMFloatType ());
}
break;
}
case OP_MOVE_F_TO_I4: {
values [ins->dreg] = LLVMBuildBitCast (builder, LLVMBuildFPTrunc (builder, lhs, LLVMFloatType (), ""), LLVMInt32Type (), "");
break;
}
case OP_MOVE_I4_TO_F: {
values [ins->dreg] = LLVMBuildFPExt (builder, LLVMBuildBitCast (builder, lhs, LLVMFloatType (), ""), LLVMDoubleType (), "");
break;
}
case OP_MOVE_F_TO_I8: {
values [ins->dreg] = LLVMBuildBitCast (builder, lhs, LLVMInt64Type (), "");
break;
}
case OP_MOVE_I8_TO_F: {
values [ins->dreg] = LLVMBuildBitCast (builder, lhs, LLVMDoubleType (), "");
break;
}
case OP_IADD:
case OP_ISUB:
case OP_IAND:
case OP_IMUL:
case OP_IDIV:
case OP_IDIV_UN:
case OP_IREM:
case OP_IREM_UN:
case OP_IOR:
case OP_IXOR:
case OP_ISHL:
case OP_ISHR:
case OP_ISHR_UN:
case OP_FADD:
case OP_FSUB:
case OP_FMUL:
case OP_FDIV:
case OP_LADD:
case OP_LSUB:
case OP_LMUL:
case OP_LDIV:
case OP_LDIV_UN:
case OP_LREM:
case OP_LREM_UN:
case OP_LAND:
case OP_LOR:
case OP_LXOR:
case OP_LSHL:
case OP_LSHR:
case OP_LSHR_UN:
lhs = convert (ctx, lhs, regtype_to_llvm_type (spec [MONO_INST_DEST]));
rhs = convert (ctx, rhs, regtype_to_llvm_type (spec [MONO_INST_DEST]));
emit_div_check (ctx, builder, bb, ins, lhs, rhs);
if (!ctx_ok (ctx))
break;
builder = ctx->builder;
switch (ins->opcode) {
case OP_IADD:
case OP_LADD:
values [ins->dreg] = LLVMBuildAdd (builder, lhs, rhs, dname);
break;
case OP_ISUB:
case OP_LSUB:
values [ins->dreg] = LLVMBuildSub (builder, lhs, rhs, dname);
break;
case OP_IMUL:
case OP_LMUL:
values [ins->dreg] = LLVMBuildMul (builder, lhs, rhs, dname);
break;
case OP_IREM:
case OP_LREM:
values [ins->dreg] = LLVMBuildSRem (builder, lhs, rhs, dname);
break;
case OP_IREM_UN:
case OP_LREM_UN:
values [ins->dreg] = LLVMBuildURem (builder, lhs, rhs, dname);
break;
case OP_IDIV:
case OP_LDIV:
values [ins->dreg] = LLVMBuildSDiv (builder, lhs, rhs, dname);
break;
case OP_IDIV_UN:
case OP_LDIV_UN:
values [ins->dreg] = LLVMBuildUDiv (builder, lhs, rhs, dname);
break;
case OP_FDIV:
case OP_RDIV:
values [ins->dreg] = LLVMBuildFDiv (builder, lhs, rhs, dname);
break;
case OP_IAND:
case OP_LAND:
values [ins->dreg] = LLVMBuildAnd (builder, lhs, rhs, dname);
break;
case OP_IOR:
case OP_LOR:
values [ins->dreg] = LLVMBuildOr (builder, lhs, rhs, dname);
break;
case OP_IXOR:
case OP_LXOR:
values [ins->dreg] = LLVMBuildXor (builder, lhs, rhs, dname);
break;
case OP_ISHL:
case OP_LSHL:
values [ins->dreg] = LLVMBuildShl (builder, lhs, rhs, dname);
break;
case OP_ISHR:
case OP_LSHR:
values [ins->dreg] = LLVMBuildAShr (builder, lhs, rhs, dname);
break;
case OP_ISHR_UN:
case OP_LSHR_UN:
values [ins->dreg] = LLVMBuildLShr (builder, lhs, rhs, dname);
break;
case OP_FADD:
values [ins->dreg] = LLVMBuildFAdd (builder, lhs, rhs, dname);
break;
case OP_FSUB:
values [ins->dreg] = LLVMBuildFSub (builder, lhs, rhs, dname);
break;
case OP_FMUL:
values [ins->dreg] = LLVMBuildFMul (builder, lhs, rhs, dname);
break;
default:
g_assert_not_reached ();
}
break;
case OP_RADD:
case OP_RSUB:
case OP_RMUL:
case OP_RDIV: {
lhs = convert (ctx, lhs, LLVMFloatType ());
rhs = convert (ctx, rhs, LLVMFloatType ());
switch (ins->opcode) {
case OP_RADD:
values [ins->dreg] = LLVMBuildFAdd (builder, lhs, rhs, dname);
break;
case OP_RSUB:
values [ins->dreg] = LLVMBuildFSub (builder, lhs, rhs, dname);
break;
case OP_RMUL:
values [ins->dreg] = LLVMBuildFMul (builder, lhs, rhs, dname);
break;
case OP_RDIV:
values [ins->dreg] = LLVMBuildFDiv (builder, lhs, rhs, dname);
break;
default:
g_assert_not_reached ();
break;
}
break;
}
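		/* Immediate variants: materialize inst_imm as a constant of the operand
		 * width, then reuse the same LLVM builders as the register forms. */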
case OP_IADD_IMM:
case OP_ISUB_IMM:
case OP_IMUL_IMM:
case OP_IREM_IMM:
case OP_IREM_UN_IMM:
case OP_IDIV_IMM:
case OP_IDIV_UN_IMM:
case OP_IAND_IMM:
case OP_IOR_IMM:
case OP_IXOR_IMM:
case OP_ISHL_IMM:
case OP_ISHR_IMM:
case OP_ISHR_UN_IMM:
case OP_LADD_IMM:
case OP_LSUB_IMM:
case OP_LMUL_IMM:
case OP_LREM_IMM:
case OP_LAND_IMM:
case OP_LOR_IMM:
case OP_LXOR_IMM:
case OP_LSHL_IMM:
case OP_LSHR_IMM:
case OP_LSHR_UN_IMM:
case OP_ADD_IMM:
case OP_AND_IMM:
case OP_MUL_IMM:
case OP_SHL_IMM:
case OP_SHR_IMM:
case OP_SHR_UN_IMM: {
LLVMValueRef imm;
if (spec [MONO_INST_SRC1] == 'l') {
imm = LLVMConstInt (LLVMInt64Type (), GET_LONG_IMM (ins), FALSE);
} else {
imm = LLVMConstInt (LLVMInt32Type (), ins->inst_imm, FALSE);
}
emit_div_check (ctx, builder, bb, ins, lhs, imm);
if (!ctx_ok (ctx))
break;
builder = ctx->builder;
#if TARGET_SIZEOF_VOID_P == 4
if (ins->opcode == OP_LSHL_IMM || ins->opcode == OP_LSHR_IMM || ins->opcode == OP_LSHR_UN_IMM)
imm = LLVMConstInt (LLVMInt32Type (), ins->inst_imm, FALSE);
#endif
if (LLVMGetTypeKind (LLVMTypeOf (lhs)) == LLVMPointerTypeKind)
lhs = convert (ctx, lhs, IntPtrType ());
imm = convert (ctx, imm, LLVMTypeOf (lhs));
switch (ins->opcode) {
case OP_IADD_IMM:
case OP_LADD_IMM:
case OP_ADD_IMM:
values [ins->dreg] = LLVMBuildAdd (builder, lhs, imm, dname);
break;
case OP_ISUB_IMM:
case OP_LSUB_IMM:
values [ins->dreg] = LLVMBuildSub (builder, lhs, imm, dname);
break;
case OP_IMUL_IMM:
case OP_MUL_IMM:
case OP_LMUL_IMM:
values [ins->dreg] = LLVMBuildMul (builder, lhs, imm, dname);
break;
case OP_IDIV_IMM:
case OP_LDIV_IMM:
values [ins->dreg] = LLVMBuildSDiv (builder, lhs, imm, dname);
break;
case OP_IDIV_UN_IMM:
case OP_LDIV_UN_IMM:
values [ins->dreg] = LLVMBuildUDiv (builder, lhs, imm, dname);
break;
case OP_IREM_IMM:
case OP_LREM_IMM:
values [ins->dreg] = LLVMBuildSRem (builder, lhs, imm, dname);
break;
case OP_IREM_UN_IMM:
values [ins->dreg] = LLVMBuildURem (builder, lhs, imm, dname);
break;
case OP_IAND_IMM:
case OP_LAND_IMM:
case OP_AND_IMM:
values [ins->dreg] = LLVMBuildAnd (builder, lhs, imm, dname);
break;
case OP_IOR_IMM:
case OP_LOR_IMM:
values [ins->dreg] = LLVMBuildOr (builder, lhs, imm, dname);
break;
case OP_IXOR_IMM:
case OP_LXOR_IMM:
values [ins->dreg] = LLVMBuildXor (builder, lhs, imm, dname);
break;
case OP_ISHL_IMM:
case OP_LSHL_IMM:
values [ins->dreg] = LLVMBuildShl (builder, lhs, imm, dname);
break;
case OP_SHL_IMM:
if (TARGET_SIZEOF_VOID_P == 8) {
/* The IL is not regular */
lhs = convert (ctx, lhs, LLVMInt64Type ());
imm = convert (ctx, imm, LLVMInt64Type ());
}
values [ins->dreg] = LLVMBuildShl (builder, lhs, imm, dname);
break;
case OP_ISHR_IMM:
case OP_LSHR_IMM:
case OP_SHR_IMM:
values [ins->dreg] = LLVMBuildAShr (builder, lhs, imm, dname);
break;
case OP_ISHR_UN_IMM:
/* This is used to implement conv.u4, so the lhs could be an i8 */
lhs = convert (ctx, lhs, LLVMInt32Type ());
imm = convert (ctx, imm, LLVMInt32Type ());
values [ins->dreg] = LLVMBuildLShr (builder, lhs, imm, dname);
break;
case OP_LSHR_UN_IMM:
case OP_SHR_UN_IMM:
values [ins->dreg] = LLVMBuildLShr (builder, lhs, imm, dname);
break;
default:
g_assert_not_reached ();
}
break;
}
case OP_INEG:
values [ins->dreg] = LLVMBuildSub (builder, LLVMConstInt (LLVMInt32Type (), 0, FALSE), convert (ctx, lhs, LLVMInt32Type ()), dname);
break;
case OP_LNEG:
if (LLVMTypeOf (lhs) != LLVMInt64Type ())
lhs = convert (ctx, lhs, LLVMInt64Type ());
values [ins->dreg] = LLVMBuildSub (builder, LLVMConstInt (LLVMInt64Type (), 0, FALSE), lhs, dname);
break;
case OP_FNEG:
lhs = convert (ctx, lhs, LLVMDoubleType ());
values [ins->dreg] = LLVMBuildFNeg (builder, lhs, dname);
break;
case OP_RNEG:
lhs = convert (ctx, lhs, LLVMFloatType ());
values [ins->dreg] = LLVMBuildFNeg (builder, lhs, dname);
break;
case OP_INOT: {
guint32 v = 0xffffffff;
values [ins->dreg] = LLVMBuildXor (builder, LLVMConstInt (LLVMInt32Type (), v, FALSE), convert (ctx, lhs, LLVMInt32Type ()), dname);
break;
}
case OP_LNOT: {
if (LLVMTypeOf (lhs) != LLVMInt64Type ())
lhs = convert (ctx, lhs, LLVMInt64Type ());
guint64 v = 0xffffffffffffffffLL;
values [ins->dreg] = LLVMBuildXor (builder, LLVMConstInt (LLVMInt64Type (), v, FALSE), lhs, dname);
break;
}
#if defined(TARGET_X86) || defined(TARGET_AMD64)
case OP_X86_LEA: {
LLVMValueRef v1, v2;
rhs = LLVMBuildSExt (builder, convert (ctx, rhs, LLVMInt32Type ()), LLVMInt64Type (), "");
v1 = LLVMBuildMul (builder, convert (ctx, rhs, IntPtrType ()), LLVMConstInt (IntPtrType (), ((unsigned long long)1 << ins->backend.shift_amount), FALSE), "");
v2 = LLVMBuildAdd (builder, convert (ctx, lhs, IntPtrType ()), v1, "");
values [ins->dreg] = LLVMBuildAdd (builder, v2, LLVMConstInt (IntPtrType (), ins->inst_imm, FALSE), dname);
break;
}
case OP_X86_BSF32:
case OP_X86_BSF64: {
LLVMValueRef args [] = {
lhs,
LLVMConstInt (LLVMInt1Type (), 1, TRUE),
};
int op = ins->opcode == OP_X86_BSF32 ? INTRINS_CTTZ_I32 : INTRINS_CTTZ_I64;
values [ins->dreg] = call_intrins (ctx, op, args, dname);
break;
}
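		/* BSR = index of the highest set bit. With ctlz in [0, width-1], xor-ing with
		 * width-1 computes (width-1) - ctlz, e.g. ctlz32 (5) == 29 and 29 ^ 31 == 2. */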
case OP_X86_BSR32:
case OP_X86_BSR64: {
LLVMValueRef args [] = {
lhs,
LLVMConstInt (LLVMInt1Type (), 1, TRUE),
};
int op = ins->opcode == OP_X86_BSR32 ? INTRINS_CTLZ_I32 : INTRINS_CTLZ_I64;
LLVMValueRef width = ins->opcode == OP_X86_BSR32 ? const_int32 (31) : const_int64 (63);
LLVMValueRef tz = call_intrins (ctx, op, args, "");
values [ins->dreg] = LLVMBuildXor (builder, tz, width, dname);
break;
}
#endif
case OP_ICONV_TO_I1:
case OP_ICONV_TO_I2:
case OP_ICONV_TO_I4:
case OP_ICONV_TO_U1:
case OP_ICONV_TO_U2:
case OP_ICONV_TO_U4:
case OP_LCONV_TO_I1:
case OP_LCONV_TO_I2:
case OP_LCONV_TO_U1:
case OP_LCONV_TO_U2:
case OP_LCONV_TO_U4: {
gboolean sign;
sign = (ins->opcode == OP_ICONV_TO_I1) || (ins->opcode == OP_ICONV_TO_I2) || (ins->opcode == OP_ICONV_TO_I4) || (ins->opcode == OP_LCONV_TO_I1) || (ins->opcode == OP_LCONV_TO_I2);
/* Have to do two casts since our vregs have type int */
v = LLVMBuildTrunc (builder, lhs, op_to_llvm_type (ins->opcode), "");
if (sign)
values [ins->dreg] = LLVMBuildSExt (builder, v, LLVMInt32Type (), dname);
else
values [ins->dreg] = LLVMBuildZExt (builder, v, LLVMInt32Type (), dname);
break;
}
case OP_ICONV_TO_I8:
values [ins->dreg] = LLVMBuildSExt (builder, lhs, LLVMInt64Type (), dname);
break;
case OP_ICONV_TO_U8:
values [ins->dreg] = LLVMBuildZExt (builder, lhs, LLVMInt64Type (), dname);
break;
case OP_FCONV_TO_I4:
case OP_RCONV_TO_I4:
values [ins->dreg] = LLVMBuildFPToSI (builder, lhs, LLVMInt32Type (), dname);
break;
case OP_FCONV_TO_I1:
case OP_RCONV_TO_I1:
values [ins->dreg] = LLVMBuildSExt (builder, LLVMBuildFPToSI (builder, lhs, LLVMInt8Type (), dname), LLVMInt32Type (), "");
break;
case OP_FCONV_TO_U1:
case OP_RCONV_TO_U1:
values [ins->dreg] = LLVMBuildZExt (builder, LLVMBuildTrunc (builder, LLVMBuildFPToUI (builder, lhs, IntPtrType (), dname), LLVMInt8Type (), ""), LLVMInt32Type (), "");
break;
case OP_FCONV_TO_I2:
case OP_RCONV_TO_I2:
values [ins->dreg] = LLVMBuildSExt (builder, LLVMBuildFPToSI (builder, lhs, LLVMInt16Type (), dname), LLVMInt32Type (), "");
break;
case OP_FCONV_TO_U2:
case OP_RCONV_TO_U2:
values [ins->dreg] = LLVMBuildZExt (builder, LLVMBuildFPToUI (builder, lhs, LLVMInt16Type (), dname), LLVMInt32Type (), "");
break;
case OP_FCONV_TO_U4:
case OP_RCONV_TO_U4:
values [ins->dreg] = LLVMBuildFPToUI (builder, lhs, LLVMInt32Type (), dname);
break;
case OP_FCONV_TO_U8:
case OP_RCONV_TO_U8:
values [ins->dreg] = LLVMBuildFPToUI (builder, lhs, LLVMInt64Type (), dname);
break;
case OP_FCONV_TO_I8:
case OP_RCONV_TO_I8:
values [ins->dreg] = LLVMBuildFPToSI (builder, lhs, LLVMInt64Type (), dname);
break;
case OP_FCONV_TO_I:
case OP_RCONV_TO_I:
values [ins->dreg] = LLVMBuildFPToSI (builder, lhs, IntPtrType (), dname);
break;
case OP_ICONV_TO_R8:
case OP_LCONV_TO_R8:
values [ins->dreg] = LLVMBuildSIToFP (builder, lhs, LLVMDoubleType (), dname);
break;
case OP_ICONV_TO_R_UN:
case OP_LCONV_TO_R_UN:
values [ins->dreg] = LLVMBuildUIToFP (builder, lhs, LLVMDoubleType (), dname);
break;
#if TARGET_SIZEOF_VOID_P == 4
case OP_LCONV_TO_U:
#endif
case OP_LCONV_TO_I4:
values [ins->dreg] = LLVMBuildTrunc (builder, lhs, LLVMInt32Type (), dname);
break;
case OP_ICONV_TO_R4:
case OP_LCONV_TO_R4:
v = LLVMBuildSIToFP (builder, lhs, LLVMFloatType (), "");
if (cfg->r4fp)
values [ins->dreg] = v;
else
values [ins->dreg] = LLVMBuildFPExt (builder, v, LLVMDoubleType (), dname);
break;
case OP_FCONV_TO_R4:
v = LLVMBuildFPTrunc (builder, lhs, LLVMFloatType (), "");
if (cfg->r4fp)
values [ins->dreg] = v;
else
values [ins->dreg] = LLVMBuildFPExt (builder, v, LLVMDoubleType (), dname);
break;
case OP_RCONV_TO_R8:
values [ins->dreg] = LLVMBuildFPExt (builder, lhs, LLVMDoubleType (), dname);
break;
case OP_RCONV_TO_R4:
values [ins->dreg] = lhs;
break;
case OP_SEXT_I4:
values [ins->dreg] = LLVMBuildSExt (builder, convert (ctx, lhs, LLVMInt32Type ()), LLVMInt64Type (), dname);
break;
case OP_ZEXT_I4:
values [ins->dreg] = LLVMBuildZExt (builder, convert (ctx, lhs, LLVMInt32Type ()), LLVMInt64Type (), dname);
break;
case OP_TRUNC_I4:
values [ins->dreg] = LLVMBuildTrunc (builder, lhs, LLVMInt32Type (), dname);
break;
case OP_LOCALLOC_IMM: {
LLVMValueRef v;
guint32 size = ins->inst_imm;
size = (size + (MONO_ARCH_FRAME_ALIGNMENT - 1)) & ~ (MONO_ARCH_FRAME_ALIGNMENT - 1);
v = mono_llvm_build_alloca (builder, LLVMInt8Type (), LLVMConstInt (LLVMInt32Type (), size, FALSE), MONO_ARCH_FRAME_ALIGNMENT, "");
if (ins->flags & MONO_INST_INIT)
emit_memset (ctx, builder, v, const_int32 (size), MONO_ARCH_FRAME_ALIGNMENT);
values [ins->dreg] = v;
break;
}
case OP_LOCALLOC: {
LLVMValueRef v, size;
size = LLVMBuildAnd (builder, LLVMBuildAdd (builder, convert (ctx, lhs, LLVMInt32Type ()), LLVMConstInt (LLVMInt32Type (), MONO_ARCH_FRAME_ALIGNMENT - 1, FALSE), ""), LLVMConstInt (LLVMInt32Type (), ~ (MONO_ARCH_FRAME_ALIGNMENT - 1), FALSE), "");
v = mono_llvm_build_alloca (builder, LLVMInt8Type (), size, MONO_ARCH_FRAME_ALIGNMENT, "");
if (ins->flags & MONO_INST_INIT)
emit_memset (ctx, builder, v, size, MONO_ARCH_FRAME_ALIGNMENT);
values [ins->dreg] = v;
break;
}
case OP_LOADI1_MEMBASE:
case OP_LOADU1_MEMBASE:
case OP_LOADI2_MEMBASE:
case OP_LOADU2_MEMBASE:
case OP_LOADI4_MEMBASE:
case OP_LOADU4_MEMBASE:
case OP_LOADI8_MEMBASE:
case OP_LOADR4_MEMBASE:
case OP_LOADR8_MEMBASE:
case OP_LOAD_MEMBASE:
case OP_LOADI8_MEM:
case OP_LOADU1_MEM:
case OP_LOADU2_MEM:
case OP_LOADI4_MEM:
case OP_LOADU4_MEM:
case OP_LOAD_MEM: {
int size = 8;
LLVMValueRef base, index, addr;
LLVMTypeRef t;
gboolean sext = FALSE, zext = FALSE;
gboolean is_faulting = (ins->flags & MONO_INST_FAULT) != 0;
gboolean is_volatile = (ins->flags & MONO_INST_VOLATILE) != 0;
gboolean is_unaligned = (ins->flags & MONO_INST_UNALIGNED) != 0;
t = load_store_to_llvm_type (ins->opcode, &size, &sext, &zext);
if (sext || zext)
dname = (char*)"";
if ((ins->opcode == OP_LOADI8_MEM) || (ins->opcode == OP_LOAD_MEM) || (ins->opcode == OP_LOADI4_MEM) || (ins->opcode == OP_LOADU4_MEM) || (ins->opcode == OP_LOADU1_MEM) || (ins->opcode == OP_LOADU2_MEM)) {
addr = LLVMConstInt (IntPtrType (), ins->inst_imm, FALSE);
base = addr;
} else {
/* _MEMBASE */
base = lhs;
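				/* Three addressing cases: a zero offset reuses the base pointer (folding
				 * it into a GEP when possible), a misaligned offset indexes byte-wise,
				 * and an aligned offset indexes in units of the element size. */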
if (ins->inst_offset == 0) {
LLVMValueRef gep_base, gep_offset;
if (mono_llvm_can_be_gep (base, &gep_base, &gep_offset)) {
addr = LLVMBuildGEP (builder, convert (ctx, gep_base, LLVMPointerType (LLVMInt8Type (), 0)), &gep_offset, 1, "");
} else {
addr = base;
}
} else if (ins->inst_offset % size != 0) {
/* Unaligned load */
index = LLVMConstInt (LLVMInt32Type (), ins->inst_offset, FALSE);
addr = LLVMBuildGEP (builder, convert (ctx, base, LLVMPointerType (LLVMInt8Type (), 0)), &index, 1, "");
} else {
index = LLVMConstInt (LLVMInt32Type (), ins->inst_offset / size, FALSE);
addr = LLVMBuildGEP (builder, convert (ctx, base, LLVMPointerType (t, 0)), &index, 1, "");
}
}
addr = convert (ctx, addr, LLVMPointerType (t, 0));
if (is_unaligned)
values [ins->dreg] = mono_llvm_build_aligned_load (builder, addr, dname, is_volatile, 1);
else
values [ins->dreg] = emit_load (ctx, bb, &builder, size, addr, base, dname, is_faulting, is_volatile, LLVM_BARRIER_NONE);
if (!(is_faulting || is_volatile) && (ins->flags & MONO_INST_INVARIANT_LOAD)) {
/*
* These will signal LLVM that these loads do not alias any stores, and
* they can't fail, allowing them to be hoisted out of loops.
*/
set_invariant_load_flag (values [ins->dreg]);
}
if (sext)
values [ins->dreg] = LLVMBuildSExt (builder, values [ins->dreg], LLVMInt32Type (), dname);
else if (zext)
values [ins->dreg] = LLVMBuildZExt (builder, values [ins->dreg], LLVMInt32Type (), dname);
else if (!cfg->r4fp && ins->opcode == OP_LOADR4_MEMBASE)
values [ins->dreg] = LLVMBuildFPExt (builder, values [ins->dreg], LLVMDoubleType (), dname);
break;
}
case OP_STOREI1_MEMBASE_REG:
case OP_STOREI2_MEMBASE_REG:
case OP_STOREI4_MEMBASE_REG:
case OP_STOREI8_MEMBASE_REG:
case OP_STORER4_MEMBASE_REG:
case OP_STORER8_MEMBASE_REG:
case OP_STORE_MEMBASE_REG: {
int size = 8;
LLVMValueRef index, addr, base;
LLVMTypeRef t;
gboolean sext = FALSE, zext = FALSE;
gboolean is_faulting = (ins->flags & MONO_INST_FAULT) != 0;
gboolean is_volatile = (ins->flags & MONO_INST_VOLATILE) != 0;
gboolean is_unaligned = (ins->flags & MONO_INST_UNALIGNED) != 0;
if (!values [ins->inst_destbasereg]) {
set_failure (ctx, "inst_destbasereg");
break;
}
t = load_store_to_llvm_type (ins->opcode, &size, &sext, &zext);
base = values [ins->inst_destbasereg];
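			/* Same addressing scheme as the loads above: fold into a GEP at offset 0,
			 * byte-wise GEP when misaligned, element-wise GEP otherwise. */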
LLVMValueRef gep_base, gep_offset;
if (ins->inst_offset == 0 && mono_llvm_can_be_gep (base, &gep_base, &gep_offset)) {
addr = LLVMBuildGEP (builder, convert (ctx, gep_base, LLVMPointerType (LLVMInt8Type (), 0)), &gep_offset, 1, "");
} else if (ins->inst_offset % size != 0) {
/* Unaligned store */
index = LLVMConstInt (LLVMInt32Type (), ins->inst_offset, FALSE);
addr = LLVMBuildGEP (builder, convert (ctx, base, LLVMPointerType (LLVMInt8Type (), 0)), &index, 1, "");
} else {
index = LLVMConstInt (LLVMInt32Type (), ins->inst_offset / size, FALSE);
addr = LLVMBuildGEP (builder, convert (ctx, base, LLVMPointerType (t, 0)), &index, 1, "");
}
			if (is_faulting && LLVMGetInstructionOpcode (base) == LLVMAlloca && !(ins->flags & MONO_INST_VOLATILE))
				/* Storing to an alloca cannot fail */
				is_faulting = FALSE;
LLVMValueRef srcval = convert (ctx, values [ins->sreg1], t);
LLVMValueRef ptrdst = convert (ctx, addr, LLVMPointerType (t, 0));
if (is_unaligned)
mono_llvm_build_aligned_store (builder, srcval, ptrdst, is_volatile, 1);
else
emit_store (ctx, bb, &builder, size, srcval, ptrdst, base, is_faulting, is_volatile);
break;
}
case OP_STOREI1_MEMBASE_IMM:
case OP_STOREI2_MEMBASE_IMM:
case OP_STOREI4_MEMBASE_IMM:
case OP_STOREI8_MEMBASE_IMM:
case OP_STORE_MEMBASE_IMM: {
int size = 8;
LLVMValueRef index, addr, base;
LLVMTypeRef t;
gboolean sext = FALSE, zext = FALSE;
gboolean is_faulting = (ins->flags & MONO_INST_FAULT) != 0;
gboolean is_volatile = (ins->flags & MONO_INST_VOLATILE) != 0;
gboolean is_unaligned = (ins->flags & MONO_INST_UNALIGNED) != 0;
t = load_store_to_llvm_type (ins->opcode, &size, &sext, &zext);
base = values [ins->inst_destbasereg];
LLVMValueRef gep_base, gep_offset;
if (ins->inst_offset == 0 && mono_llvm_can_be_gep (base, &gep_base, &gep_offset)) {
addr = LLVMBuildGEP (builder, convert (ctx, gep_base, LLVMPointerType (LLVMInt8Type (), 0)), &gep_offset, 1, "");
} else if (ins->inst_offset % size != 0) {
/* Unaligned store */
index = LLVMConstInt (LLVMInt32Type (), ins->inst_offset, FALSE);
addr = LLVMBuildGEP (builder, convert (ctx, base, LLVMPointerType (LLVMInt8Type (), 0)), &index, 1, "");
} else {
index = LLVMConstInt (LLVMInt32Type (), ins->inst_offset / size, FALSE);
addr = LLVMBuildGEP (builder, convert (ctx, base, LLVMPointerType (t, 0)), &index, 1, "");
}
LLVMValueRef srcval = convert (ctx, LLVMConstInt (IntPtrType (), ins->inst_imm, FALSE), t);
LLVMValueRef ptrdst = convert (ctx, addr, LLVMPointerType (t, 0));
if (is_unaligned)
mono_llvm_build_aligned_store (builder, srcval, ptrdst, is_volatile, 1);
else
emit_store (ctx, bb, &builder, size, srcval, ptrdst, base, is_faulting, is_volatile);
break;
}
case OP_CHECK_THIS:
emit_load (ctx, bb, &builder, TARGET_SIZEOF_VOID_P, convert (ctx, lhs, LLVMPointerType (IntPtrType (), 0)), lhs, "", TRUE, FALSE, LLVM_BARRIER_NONE);
break;
case OP_OUTARG_VTRETADDR:
break;
case OP_VOIDCALL:
case OP_CALL:
case OP_LCALL:
case OP_FCALL:
case OP_RCALL:
case OP_VCALL:
case OP_VOIDCALL_MEMBASE:
case OP_CALL_MEMBASE:
case OP_LCALL_MEMBASE:
case OP_FCALL_MEMBASE:
case OP_RCALL_MEMBASE:
case OP_VCALL_MEMBASE:
case OP_VOIDCALL_REG:
case OP_CALL_REG:
case OP_LCALL_REG:
case OP_FCALL_REG:
case OP_RCALL_REG:
case OP_VCALL_REG: {
process_call (ctx, bb, &builder, ins);
break;
}
case OP_AOTCONST: {
MonoJumpInfoType ji_type = ins->inst_c1;
gpointer ji_data = ins->inst_p0;
if (ji_type == MONO_PATCH_INFO_ICALL_ADDR) {
char *symbol = mono_aot_get_direct_call_symbol (MONO_PATCH_INFO_ICALL_ADDR_CALL, ji_data);
if (symbol) {
/*
* Avoid emitting a got entry for these since the method is directly called, and it might not be
* resolvable at runtime using dlsym ().
*/
g_free (symbol);
values [ins->dreg] = LLVMConstInt (IntPtrType (), 0, FALSE);
break;
}
}
values [ins->dreg] = get_aotconst (ctx, ji_type, ji_data, LLVMPointerType (IntPtrType (), 0));
break;
}
case OP_MEMMOVE: {
int argn = 0;
LLVMValueRef args [5];
args [argn++] = convert (ctx, values [ins->sreg1], LLVMPointerType (LLVMInt8Type (), 0));
args [argn++] = convert (ctx, values [ins->sreg2], LLVMPointerType (LLVMInt8Type (), 0));
args [argn++] = convert (ctx, values [ins->sreg3], LLVMInt64Type ());
args [argn++] = LLVMConstInt (LLVMInt1Type (), 0, FALSE); // is_volatile
call_intrins (ctx, INTRINS_MEMMOVE, args, "");
break;
}
case OP_NOT_REACHED:
LLVMBuildUnreachable (builder);
has_terminator = TRUE;
g_assert (bb->block_num < cfg->max_block_num);
ctx->unreachable [bb->block_num] = TRUE;
/* Might have instructions after this */
while (ins->next) {
MonoInst *next = ins->next;
/*
* FIXME: If later code uses the regs defined by these instructions,
* compilation will fail.
*/
const char *spec = INS_INFO (next->opcode);
if (spec [MONO_INST_DEST] == 'i' && !MONO_IS_STORE_MEMBASE (next))
ctx->values [next->dreg] = LLVMConstNull (LLVMInt32Type ());
MONO_DELETE_INS (bb, next);
}
break;
case OP_LDADDR: {
MonoInst *var = ins->inst_i0;
MonoClass *klass = var->klass;
if (var->opcode == OP_VTARG_ADDR && !MONO_CLASS_IS_SIMD(cfg, klass)) {
/* The variable contains the vtype address */
values [ins->dreg] = values [var->dreg];
} else if (var->opcode == OP_GSHAREDVT_LOCAL) {
values [ins->dreg] = emit_gsharedvt_ldaddr (ctx, var->dreg);
} else {
values [ins->dreg] = addresses [var->dreg];
}
break;
}
case OP_SIN: {
LLVMValueRef args [1];
args [0] = convert (ctx, lhs, LLVMDoubleType ());
values [ins->dreg] = call_intrins (ctx, INTRINS_SIN, args, dname);
break;
}
case OP_SINF: {
LLVMValueRef args [1];
args [0] = convert (ctx, lhs, LLVMFloatType ());
values [ins->dreg] = call_intrins (ctx, INTRINS_SINF, args, dname);
break;
}
case OP_EXP: {
LLVMValueRef args [1];
args [0] = convert (ctx, lhs, LLVMDoubleType ());
values [ins->dreg] = call_intrins (ctx, INTRINS_EXP, args, dname);
break;
}
case OP_EXPF: {
LLVMValueRef args [1];
args [0] = convert (ctx, lhs, LLVMFloatType ());
values [ins->dreg] = call_intrins (ctx, INTRINS_EXPF, args, dname);
break;
}
case OP_LOG2: {
LLVMValueRef args [1];
args [0] = convert (ctx, lhs, LLVMDoubleType ());
values [ins->dreg] = call_intrins (ctx, INTRINS_LOG2, args, dname);
break;
}
case OP_LOG2F: {
LLVMValueRef args [1];
args [0] = convert (ctx, lhs, LLVMFloatType ());
values [ins->dreg] = call_intrins (ctx, INTRINS_LOG2F, args, dname);
break;
}
case OP_LOG10: {
LLVMValueRef args [1];
args [0] = convert (ctx, lhs, LLVMDoubleType ());
values [ins->dreg] = call_intrins (ctx, INTRINS_LOG10, args, dname);
break;
}
case OP_LOG10F: {
LLVMValueRef args [1];
args [0] = convert (ctx, lhs, LLVMFloatType ());
values [ins->dreg] = call_intrins (ctx, INTRINS_LOG10F, args, dname);
break;
}
case OP_LOG: {
LLVMValueRef args [1];
args [0] = convert (ctx, lhs, LLVMDoubleType ());
values [ins->dreg] = call_intrins (ctx, INTRINS_LOG, args, dname);
break;
}
case OP_TRUNC: {
LLVMValueRef args [1];
args [0] = convert (ctx, lhs, LLVMDoubleType ());
values [ins->dreg] = call_intrins (ctx, INTRINS_TRUNC, args, dname);
break;
}
case OP_TRUNCF: {
LLVMValueRef args [1];
args [0] = convert (ctx, lhs, LLVMFloatType ());
values [ins->dreg] = call_intrins (ctx, INTRINS_TRUNCF, args, dname);
break;
}
case OP_COS: {
LLVMValueRef args [1];
args [0] = convert (ctx, lhs, LLVMDoubleType ());
values [ins->dreg] = call_intrins (ctx, INTRINS_COS, args, dname);
break;
}
case OP_COSF: {
LLVMValueRef args [1];
args [0] = convert (ctx, lhs, LLVMFloatType ());
values [ins->dreg] = call_intrins (ctx, INTRINS_COSF, args, dname);
break;
}
case OP_SQRT: {
LLVMValueRef args [1];
args [0] = convert (ctx, lhs, LLVMDoubleType ());
values [ins->dreg] = call_intrins (ctx, INTRINS_SQRT, args, dname);
break;
}
case OP_SQRTF: {
LLVMValueRef args [1];
args [0] = convert (ctx, lhs, LLVMFloatType ());
values [ins->dreg] = call_intrins (ctx, INTRINS_SQRTF, args, dname);
break;
}
case OP_FLOOR: {
LLVMValueRef args [1];
args [0] = convert (ctx, lhs, LLVMDoubleType ());
values [ins->dreg] = call_intrins (ctx, INTRINS_FLOOR, args, dname);
break;
}
case OP_FLOORF: {
LLVMValueRef args [1];
args [0] = convert (ctx, lhs, LLVMFloatType ());
values [ins->dreg] = call_intrins (ctx, INTRINS_FLOORF, args, dname);
break;
}
case OP_CEIL: {
LLVMValueRef args [1];
args [0] = convert (ctx, lhs, LLVMDoubleType ());
values [ins->dreg] = call_intrins (ctx, INTRINS_CEIL, args, dname);
break;
}
case OP_CEILF: {
LLVMValueRef args [1];
args [0] = convert (ctx, lhs, LLVMFloatType ());
values [ins->dreg] = call_intrins (ctx, INTRINS_CEILF, args, dname);
break;
}
case OP_FMA: {
LLVMValueRef args [3];
args [0] = convert (ctx, values [ins->sreg1], LLVMDoubleType ());
args [1] = convert (ctx, values [ins->sreg2], LLVMDoubleType ());
args [2] = convert (ctx, values [ins->sreg3], LLVMDoubleType ());
values [ins->dreg] = call_intrins (ctx, INTRINS_FMA, args, dname);
break;
}
case OP_FMAF: {
LLVMValueRef args [3];
args [0] = convert (ctx, values [ins->sreg1], LLVMFloatType ());
args [1] = convert (ctx, values [ins->sreg2], LLVMFloatType ());
args [2] = convert (ctx, values [ins->sreg3], LLVMFloatType ());
values [ins->dreg] = call_intrins (ctx, INTRINS_FMAF, args, dname);
break;
}
case OP_ABS: {
LLVMValueRef args [1];
args [0] = convert (ctx, lhs, LLVMDoubleType ());
values [ins->dreg] = call_intrins (ctx, INTRINS_FABS, args, dname);
break;
}
case OP_ABSF: {
LLVMValueRef args [1];
#ifdef TARGET_AMD64
args [0] = convert (ctx, lhs, LLVMFloatType ());
values [ins->dreg] = call_intrins (ctx, INTRINS_ABSF, args, dname);
#else
/* llvm.fabs not supported on all platforms */
args [0] = convert (ctx, lhs, LLVMDoubleType ());
values [ins->dreg] = call_intrins (ctx, INTRINS_FABS, args, dname);
values [ins->dreg] = convert (ctx, values [ins->dreg], LLVMFloatType ());
#endif
break;
}
case OP_RPOW: {
LLVMValueRef args [2];
args [0] = convert (ctx, lhs, LLVMFloatType ());
args [1] = convert (ctx, rhs, LLVMFloatType ());
values [ins->dreg] = call_intrins (ctx, INTRINS_POWF, args, dname);
break;
}
case OP_FPOW: {
LLVMValueRef args [2];
args [0] = convert (ctx, lhs, LLVMDoubleType ());
args [1] = convert (ctx, rhs, LLVMDoubleType ());
values [ins->dreg] = call_intrins (ctx, INTRINS_POW, args, dname);
break;
}
case OP_FCOPYSIGN: {
LLVMValueRef args [2];
args [0] = convert (ctx, lhs, LLVMDoubleType ());
args [1] = convert (ctx, rhs, LLVMDoubleType ());
values [ins->dreg] = call_intrins (ctx, INTRINS_COPYSIGN, args, dname);
break;
}
case OP_RCOPYSIGN: {
LLVMValueRef args [2];
args [0] = convert (ctx, lhs, LLVMFloatType ());
args [1] = convert (ctx, rhs, LLVMFloatType ());
values [ins->dreg] = call_intrins (ctx, INTRINS_COPYSIGNF, args, dname);
break;
}
case OP_IMIN:
case OP_LMIN:
case OP_IMAX:
case OP_LMAX:
case OP_IMIN_UN:
case OP_LMIN_UN:
case OP_IMAX_UN:
case OP_LMAX_UN:
case OP_FMIN:
case OP_FMAX:
case OP_RMIN:
case OP_RMAX: {
LLVMValueRef v;
lhs = convert (ctx, lhs, regtype_to_llvm_type (spec [MONO_INST_DEST]));
rhs = convert (ctx, rhs, regtype_to_llvm_type (spec [MONO_INST_DEST]));
switch (ins->opcode) {
case OP_IMIN:
case OP_LMIN:
v = LLVMBuildICmp (builder, LLVMIntSLE, lhs, rhs, "");
break;
case OP_IMAX:
case OP_LMAX:
v = LLVMBuildICmp (builder, LLVMIntSGE, lhs, rhs, "");
break;
case OP_IMIN_UN:
case OP_LMIN_UN:
v = LLVMBuildICmp (builder, LLVMIntULE, lhs, rhs, "");
break;
case OP_IMAX_UN:
case OP_LMAX_UN:
v = LLVMBuildICmp (builder, LLVMIntUGE, lhs, rhs, "");
break;
case OP_FMAX:
case OP_RMAX:
v = LLVMBuildFCmp (builder, LLVMRealUGE, lhs, rhs, "");
break;
case OP_FMIN:
case OP_RMIN:
v = LLVMBuildFCmp (builder, LLVMRealULE, lhs, rhs, "");
break;
default:
g_assert_not_reached ();
break;
}
values [ins->dreg] = LLVMBuildSelect (builder, v, lhs, rhs, dname);
break;
}
/*
* See the ARM64 comment in mono/utils/atomic.h for an explanation of why this
* hack is necessary (for now).
*/
#ifdef TARGET_ARM64
#define ARM64_ATOMIC_FENCE_FIX mono_llvm_build_fence (builder, LLVM_BARRIER_SEQ)
#else
#define ARM64_ATOMIC_FENCE_FIX
#endif
case OP_ATOMIC_EXCHANGE_I4:
case OP_ATOMIC_EXCHANGE_I8: {
LLVMValueRef args [2];
LLVMTypeRef t;
if (ins->opcode == OP_ATOMIC_EXCHANGE_I4)
t = LLVMInt32Type ();
else
t = LLVMInt64Type ();
g_assert (ins->inst_offset == 0);
args [0] = convert (ctx, lhs, LLVMPointerType (t, 0));
args [1] = convert (ctx, rhs, t);
ARM64_ATOMIC_FENCE_FIX;
values [ins->dreg] = mono_llvm_build_atomic_rmw (builder, LLVM_ATOMICRMW_OP_XCHG, args [0], args [1]);
ARM64_ATOMIC_FENCE_FIX;
break;
}
case OP_ATOMIC_ADD_I4:
case OP_ATOMIC_ADD_I8:
case OP_ATOMIC_AND_I4:
case OP_ATOMIC_AND_I8:
case OP_ATOMIC_OR_I4:
case OP_ATOMIC_OR_I8: {
LLVMValueRef args [2];
LLVMTypeRef t;
if (ins->type == STACK_I4)
t = LLVMInt32Type ();
else
t = LLVMInt64Type ();
g_assert (ins->inst_offset == 0);
args [0] = convert (ctx, lhs, LLVMPointerType (t, 0));
args [1] = convert (ctx, rhs, t);
ARM64_ATOMIC_FENCE_FIX;
if (ins->opcode == OP_ATOMIC_ADD_I4 || ins->opcode == OP_ATOMIC_ADD_I8)
				// Interlocked.Add returns the new value, while atomicrmw add returns the old
				// one, hence the additional Add here; see https://github.com/dotnet/runtime/pull/33102
values [ins->dreg] = LLVMBuildAdd (builder, mono_llvm_build_atomic_rmw (builder, LLVM_ATOMICRMW_OP_ADD, args [0], args [1]), args [1], dname);
else if (ins->opcode == OP_ATOMIC_AND_I4 || ins->opcode == OP_ATOMIC_AND_I8)
values [ins->dreg] = mono_llvm_build_atomic_rmw (builder, LLVM_ATOMICRMW_OP_AND, args [0], args [1]);
else if (ins->opcode == OP_ATOMIC_OR_I4 || ins->opcode == OP_ATOMIC_OR_I8)
values [ins->dreg] = mono_llvm_build_atomic_rmw (builder, LLVM_ATOMICRMW_OP_OR, args [0], args [1]);
else
g_assert_not_reached ();
ARM64_ATOMIC_FENCE_FIX;
break;
}
case OP_ATOMIC_CAS_I4:
case OP_ATOMIC_CAS_I8: {
LLVMValueRef args [3], val;
LLVMTypeRef t;
if (ins->opcode == OP_ATOMIC_CAS_I4)
t = LLVMInt32Type ();
else
t = LLVMInt64Type ();
args [0] = convert (ctx, lhs, LLVMPointerType (t, 0));
/* comparand */
args [1] = convert (ctx, values [ins->sreg3], t);
/* new value */
args [2] = convert (ctx, values [ins->sreg2], t);
ARM64_ATOMIC_FENCE_FIX;
val = mono_llvm_build_cmpxchg (builder, args [0], args [1], args [2]);
ARM64_ATOMIC_FENCE_FIX;
/* cmpxchg returns a pair */
values [ins->dreg] = LLVMBuildExtractValue (builder, val, 0, "");
break;
}
case OP_MEMORY_BARRIER: {
mono_llvm_build_fence (builder, (BarrierKind) ins->backend.memory_barrier_kind);
break;
}
case OP_ATOMIC_LOAD_I1:
case OP_ATOMIC_LOAD_I2:
case OP_ATOMIC_LOAD_I4:
case OP_ATOMIC_LOAD_I8:
case OP_ATOMIC_LOAD_U1:
case OP_ATOMIC_LOAD_U2:
case OP_ATOMIC_LOAD_U4:
case OP_ATOMIC_LOAD_U8:
case OP_ATOMIC_LOAD_R4:
case OP_ATOMIC_LOAD_R8: {
int size;
gboolean sext, zext;
LLVMTypeRef t;
gboolean is_faulting = (ins->flags & MONO_INST_FAULT) != 0;
gboolean is_volatile = (ins->flags & MONO_INST_VOLATILE) != 0;
BarrierKind barrier = (BarrierKind) ins->backend.memory_barrier_kind;
LLVMValueRef index, addr;
t = load_store_to_llvm_type (ins->opcode, &size, &sext, &zext);
if (sext || zext)
dname = (char *)"";
if (ins->inst_offset != 0) {
index = LLVMConstInt (LLVMInt32Type (), ins->inst_offset / size, FALSE);
addr = LLVMBuildGEP (builder, convert (ctx, lhs, LLVMPointerType (t, 0)), &index, 1, "");
} else {
addr = lhs;
}
addr = convert (ctx, addr, LLVMPointerType (t, 0));
ARM64_ATOMIC_FENCE_FIX;
values [ins->dreg] = emit_load (ctx, bb, &builder, size, addr, lhs, dname, is_faulting, is_volatile, barrier);
ARM64_ATOMIC_FENCE_FIX;
if (sext)
values [ins->dreg] = LLVMBuildSExt (builder, values [ins->dreg], LLVMInt32Type (), dname);
else if (zext)
values [ins->dreg] = LLVMBuildZExt (builder, values [ins->dreg], LLVMInt32Type (), dname);
break;
}
case OP_ATOMIC_STORE_I1:
case OP_ATOMIC_STORE_I2:
case OP_ATOMIC_STORE_I4:
case OP_ATOMIC_STORE_I8:
case OP_ATOMIC_STORE_U1:
case OP_ATOMIC_STORE_U2:
case OP_ATOMIC_STORE_U4:
case OP_ATOMIC_STORE_U8:
case OP_ATOMIC_STORE_R4:
case OP_ATOMIC_STORE_R8: {
int size;
gboolean sext, zext;
LLVMTypeRef t;
gboolean is_faulting = (ins->flags & MONO_INST_FAULT) != 0;
gboolean is_volatile = (ins->flags & MONO_INST_VOLATILE) != 0;
BarrierKind barrier = (BarrierKind) ins->backend.memory_barrier_kind;
LLVMValueRef index, addr, value, base;
if (!values [ins->inst_destbasereg]) {
set_failure (ctx, "inst_destbasereg");
break;
}
t = load_store_to_llvm_type (ins->opcode, &size, &sext, &zext);
base = values [ins->inst_destbasereg];
index = LLVMConstInt (LLVMInt32Type (), ins->inst_offset / size, FALSE);
addr = LLVMBuildGEP (builder, convert (ctx, base, LLVMPointerType (t, 0)), &index, 1, "");
value = convert (ctx, values [ins->sreg1], t);
ARM64_ATOMIC_FENCE_FIX;
emit_store_general (ctx, bb, &builder, size, value, addr, base, is_faulting, is_volatile, barrier);
ARM64_ATOMIC_FENCE_FIX;
break;
}
case OP_RELAXED_NOP: {
#if defined(TARGET_AMD64) || defined(TARGET_X86)
call_intrins (ctx, INTRINS_SSE_PAUSE, NULL, "");
break;
#else
break;
#endif
}
case OP_TLS_GET: {
#if (defined(TARGET_AMD64) || defined(TARGET_X86)) && defined(__linux__)
#ifdef TARGET_AMD64
// 257 == FS segment register
LLVMTypeRef ptrtype = LLVMPointerType (IntPtrType (), 257);
#else
// 256 == GS segment register
LLVMTypeRef ptrtype = LLVMPointerType (IntPtrType (), 256);
#endif
// FIXME: XEN
values [ins->dreg] = LLVMBuildLoad (builder, LLVMBuildIntToPtr (builder, LLVMConstInt (IntPtrType (), ins->inst_offset, TRUE), ptrtype, ""), "");
#elif defined(TARGET_AMD64) && defined(TARGET_OSX)
/* See mono_amd64_emit_tls_get () */
int offset = mono_amd64_get_tls_gs_offset () + (ins->inst_offset * 8);
// 256 == GS segment register
LLVMTypeRef ptrtype = LLVMPointerType (IntPtrType (), 256);
values [ins->dreg] = LLVMBuildLoad (builder, LLVMBuildIntToPtr (builder, LLVMConstInt (IntPtrType (), offset, TRUE), ptrtype, ""), "");
#else
set_failure (ctx, "opcode tls-get");
break;
#endif
break;
}
case OP_GC_SAFE_POINT: {
LLVMValueRef val, cmp, callee, call;
LLVMBasicBlockRef poll_bb, cont_bb;
LLVMValueRef args [2];
static LLVMTypeRef sig;
const char *icall_name = "mono_threads_state_poll";
/*
* Create the cold wrapper around the icall, along with a managed method for it so
* unwinding works.
*/
if (!cfg->compile_aot && !ctx->module->gc_poll_cold_wrapper_compiled) {
ERROR_DECL (error);
/* Compiling a method here is a bit ugly, but it works */
MonoMethod *wrapper = mono_marshal_get_llvm_func_wrapper (LLVM_FUNC_WRAPPER_GC_POLL);
ctx->module->gc_poll_cold_wrapper_compiled = mono_jit_compile_method (wrapper, error);
mono_error_assert_ok (error);
}
if (!sig)
sig = LLVMFunctionType0 (LLVMVoidType (), FALSE);
/*
* if (!*sreg1)
* mono_threads_state_poll ();
*/
val = mono_llvm_build_load (builder, convert (ctx, lhs, LLVMPointerType (IntPtrType (), 0)), "", TRUE);
cmp = LLVMBuildICmp (builder, LLVMIntEQ, val, LLVMConstNull (LLVMTypeOf (val)), "");
poll_bb = gen_bb (ctx, "POLL_BB");
cont_bb = gen_bb (ctx, "CONT_BB");
args [0] = cmp;
args [1] = LLVMConstInt (LLVMInt1Type (), 1, FALSE);
cmp = call_intrins (ctx, INTRINS_EXPECT_I1, args, "");
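			/* Heavily weight the branch towards cont_bb so LLVM treats the poll path
			 * as cold and lays it out out-of-line. */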
mono_llvm_build_weighted_branch (builder, cmp, cont_bb, poll_bb, 1000, 1);
ctx->builder = builder = create_builder (ctx);
LLVMPositionBuilderAtEnd (builder, poll_bb);
if (ctx->cfg->compile_aot) {
callee = get_callee (ctx, sig, MONO_PATCH_INFO_JIT_ICALL_ID, GUINT_TO_POINTER (MONO_JIT_ICALL_mono_threads_state_poll));
call = LLVMBuildCall (builder, callee, NULL, 0, "");
} else {
callee = get_jit_callee (ctx, icall_name, sig, MONO_PATCH_INFO_ABS, ctx->module->gc_poll_cold_wrapper_compiled);
call = LLVMBuildCall (builder, callee, NULL, 0, "");
set_call_cold_cconv (call);
}
LLVMBuildBr (builder, cont_bb);
ctx->builder = builder = create_builder (ctx);
LLVMPositionBuilderAtEnd (builder, cont_bb);
ctx->bblocks [bb->block_num].end_bblock = cont_bb;
break;
}
/*
* Overflow opcodes.
*/
case OP_IADD_OVF:
case OP_IADD_OVF_UN:
case OP_ISUB_OVF:
case OP_ISUB_OVF_UN:
case OP_IMUL_OVF:
case OP_IMUL_OVF_UN:
case OP_LADD_OVF:
case OP_LADD_OVF_UN:
case OP_LSUB_OVF:
case OP_LSUB_OVF_UN:
case OP_LMUL_OVF:
case OP_LMUL_OVF_UN: {
LLVMValueRef args [2], val, ovf;
IntrinsicId intrins;
args [0] = convert (ctx, lhs, op_to_llvm_type (ins->opcode));
args [1] = convert (ctx, rhs, op_to_llvm_type (ins->opcode));
intrins = ovf_op_to_intrins (ins->opcode);
val = call_intrins (ctx, intrins, args, "");
values [ins->dreg] = LLVMBuildExtractValue (builder, val, 0, dname);
ovf = LLVMBuildExtractValue (builder, val, 1, "");
emit_cond_system_exception (ctx, bb, ins->inst_exc_name, ovf, FALSE);
if (!ctx_ok (ctx))
break;
builder = ctx->builder;
break;
}
/*
* Valuetypes.
* We currently model them using arrays. Promotion to local vregs is
* disabled for them in mono_handle_global_vregs () in the LLVM case,
* so we always have an entry in cfg->varinfo for them.
* FIXME: Is this needed ?
*/
case OP_VZERO: {
MonoClass *klass = ins->klass;
if (!klass) {
// FIXME:
set_failure (ctx, "!klass");
break;
}
if (!addresses [ins->dreg])
addresses [ins->dreg] = build_named_alloca (ctx, m_class_get_byval_arg (klass), "vzero");
LLVMValueRef ptr = LLVMBuildBitCast (builder, addresses [ins->dreg], LLVMPointerType (LLVMInt8Type (), 0), "");
emit_memset (ctx, builder, ptr, const_int32 (mono_class_value_size (klass, NULL)), 0);
break;
}
case OP_DUMMY_VZERO:
break;
case OP_STOREV_MEMBASE:
case OP_LOADV_MEMBASE:
case OP_VMOVE: {
MonoClass *klass = ins->klass;
LLVMValueRef src = NULL, dst, args [5];
gboolean done = FALSE;
gboolean is_volatile = FALSE;
if (!klass) {
// FIXME:
set_failure (ctx, "!klass");
break;
}
if (mini_is_gsharedvt_klass (klass)) {
// FIXME:
set_failure (ctx, "gsharedvt");
break;
}
switch (ins->opcode) {
case OP_STOREV_MEMBASE:
if (cfg->gen_write_barriers && m_class_has_references (klass) && ins->inst_destbasereg != cfg->frame_reg &&
LLVMGetInstructionOpcode (values [ins->inst_destbasereg]) != LLVMAlloca) {
/* Decomposed earlier */
g_assert_not_reached ();
break;
}
if (!addresses [ins->sreg1]) {
/* SIMD */
g_assert (values [ins->sreg1]);
dst = convert (ctx, LLVMBuildAdd (builder, convert (ctx, values [ins->inst_destbasereg], IntPtrType ()), LLVMConstInt (IntPtrType (), ins->inst_offset, FALSE), ""), LLVMPointerType (type_to_llvm_type (ctx, m_class_get_byval_arg (klass)), 0));
LLVMBuildStore (builder, values [ins->sreg1], dst);
done = TRUE;
} else {
src = LLVMBuildBitCast (builder, addresses [ins->sreg1], LLVMPointerType (LLVMInt8Type (), 0), "");
dst = convert (ctx, LLVMBuildAdd (builder, convert (ctx, values [ins->inst_destbasereg], IntPtrType ()), LLVMConstInt (IntPtrType (), ins->inst_offset, FALSE), ""), LLVMPointerType (LLVMInt8Type (), 0));
}
break;
case OP_LOADV_MEMBASE:
if (!addresses [ins->dreg])
addresses [ins->dreg] = build_alloca (ctx, m_class_get_byval_arg (klass));
src = convert (ctx, LLVMBuildAdd (builder, convert (ctx, values [ins->inst_basereg], IntPtrType ()), LLVMConstInt (IntPtrType (), ins->inst_offset, FALSE), ""), LLVMPointerType (LLVMInt8Type (), 0));
dst = LLVMBuildBitCast (builder, addresses [ins->dreg], LLVMPointerType (LLVMInt8Type (), 0), "");
break;
case OP_VMOVE:
if (!addresses [ins->sreg1])
addresses [ins->sreg1] = build_alloca (ctx, m_class_get_byval_arg (klass));
if (!addresses [ins->dreg])
addresses [ins->dreg] = build_alloca (ctx, m_class_get_byval_arg (klass));
src = LLVMBuildBitCast (builder, addresses [ins->sreg1], LLVMPointerType (LLVMInt8Type (), 0), "");
dst = LLVMBuildBitCast (builder, addresses [ins->dreg], LLVMPointerType (LLVMInt8Type (), 0), "");
break;
default:
g_assert_not_reached ();
}
if (!ctx_ok (ctx))
break;
if (done)
break;
#ifdef TARGET_WASM
is_volatile = m_class_has_references (klass);
#endif
int aindex = 0;
args [aindex ++] = dst;
args [aindex ++] = src;
args [aindex ++] = LLVMConstInt (LLVMInt32Type (), mono_class_value_size (klass, NULL), FALSE);
args [aindex ++] = LLVMConstInt (LLVMInt1Type (), is_volatile ? 1 : 0, FALSE);
call_intrins (ctx, INTRINS_MEMCPY, args, "");
break;
}
case OP_LLVM_OUTARG_VT: {
LLVMArgInfo *ainfo = (LLVMArgInfo*)ins->inst_p0;
MonoType *t = mini_get_underlying_type (ins->inst_vtype);
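			/* Vtype outgoing argument: gsharedvt values are passed by address; for the
			 * other storage kinds make sure the value has an address, copying it when
			 * it is passed by ref so the callee cannot see caller-side mutations. */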
if (ainfo->storage == LLVMArgGsharedvtVariable) {
MonoInst *var = get_vreg_to_inst (cfg, ins->sreg1);
if (var && var->opcode == OP_GSHAREDVT_LOCAL) {
addresses [ins->dreg] = convert (ctx, emit_gsharedvt_ldaddr (ctx, var->dreg), LLVMPointerType (IntPtrType (), 0));
} else {
g_assert (addresses [ins->sreg1]);
addresses [ins->dreg] = addresses [ins->sreg1];
}
} else if (ainfo->storage == LLVMArgGsharedvtFixed) {
if (!addresses [ins->sreg1]) {
addresses [ins->sreg1] = build_alloca (ctx, t);
g_assert (values [ins->sreg1]);
}
LLVMBuildStore (builder, convert (ctx, values [ins->sreg1], LLVMGetElementType (LLVMTypeOf (addresses [ins->sreg1]))), addresses [ins->sreg1]);
addresses [ins->dreg] = addresses [ins->sreg1];
} else {
if (!addresses [ins->sreg1]) {
addresses [ins->sreg1] = build_named_alloca (ctx, t, "llvm_outarg_vt");
g_assert (values [ins->sreg1]);
LLVMBuildStore (builder, convert (ctx, values [ins->sreg1], type_to_llvm_type (ctx, t)), addresses [ins->sreg1]);
addresses [ins->dreg] = addresses [ins->sreg1];
} else if (ainfo->storage == LLVMArgVtypeAddr || values [ins->sreg1] == addresses [ins->sreg1]) {
/* LLVMArgVtypeByRef/LLVMArgVtypeAddr, have to make a copy */
addresses [ins->dreg] = build_alloca (ctx, t);
LLVMValueRef v = LLVMBuildLoad (builder, addresses [ins->sreg1], "llvm_outarg_vt_copy");
LLVMBuildStore (builder, convert (ctx, v, type_to_llvm_type (ctx, t)), addresses [ins->dreg]);
} else {
if (values [ins->sreg1]) {
LLVMTypeRef src_t = LLVMTypeOf (values [ins->sreg1]);
LLVMValueRef dst = convert (ctx, addresses [ins->sreg1], LLVMPointerType (src_t, 0));
LLVMBuildStore (builder, values [ins->sreg1], dst);
}
addresses [ins->dreg] = addresses [ins->sreg1];
}
}
break;
}
case OP_OBJC_GET_SELECTOR: {
const char *name = (const char*)ins->inst_p0;
LLVMValueRef var;
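			/* Emit the selector as Objective-C metadata: a per-module image-info global
			 * plus one methname/selref global pair per selector, cached in a hash table. */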
if (!ctx->module->objc_selector_to_var) {
ctx->module->objc_selector_to_var = g_hash_table_new_full (g_str_hash, g_str_equal, g_free, NULL);
LLVMValueRef info_var = LLVMAddGlobal (ctx->lmodule, LLVMArrayType (LLVMInt8Type (), 8), "@OBJC_IMAGE_INFO");
int32_t objc_imageinfo [] = { 0, 16 };
LLVMSetInitializer (info_var, mono_llvm_create_constant_data_array ((uint8_t *) &objc_imageinfo, 8));
LLVMSetLinkage (info_var, LLVMPrivateLinkage);
LLVMSetExternallyInitialized (info_var, TRUE);
LLVMSetSection (info_var, "__DATA, __objc_imageinfo,regular,no_dead_strip");
LLVMSetAlignment (info_var, sizeof (target_mgreg_t));
mark_as_used (ctx->module, info_var);
}
var = (LLVMValueRef)g_hash_table_lookup (ctx->module->objc_selector_to_var, name);
if (!var) {
LLVMValueRef indexes [16];
LLVMValueRef name_var = LLVMAddGlobal (ctx->lmodule, LLVMArrayType (LLVMInt8Type (), strlen (name) + 1), "@OBJC_METH_VAR_NAME_");
LLVMSetInitializer (name_var, mono_llvm_create_constant_data_array ((const uint8_t*)name, strlen (name) + 1));
LLVMSetLinkage (name_var, LLVMPrivateLinkage);
LLVMSetSection (name_var, "__TEXT,__objc_methname,cstring_literals");
mark_as_used (ctx->module, name_var);
LLVMValueRef ref_var = LLVMAddGlobal (ctx->lmodule, LLVMPointerType (LLVMInt8Type (), 0), "@OBJC_SELECTOR_REFERENCES_");
indexes [0] = LLVMConstInt (LLVMInt32Type (), 0, 0);
indexes [1] = LLVMConstInt (LLVMInt32Type (), 0, 0);
LLVMSetInitializer (ref_var, LLVMConstGEP (name_var, indexes, 2));
LLVMSetLinkage (ref_var, LLVMPrivateLinkage);
LLVMSetExternallyInitialized (ref_var, TRUE);
LLVMSetSection (ref_var, "__DATA, __objc_selrefs, literal_pointers, no_dead_strip");
LLVMSetAlignment (ref_var, sizeof (target_mgreg_t));
mark_as_used (ctx->module, ref_var);
g_hash_table_insert (ctx->module->objc_selector_to_var, g_strdup (name), ref_var);
var = ref_var;
}
values [ins->dreg] = LLVMBuildLoad (builder, var, "");
break;
}
#if defined(TARGET_X86) || defined(TARGET_AMD64) || defined(TARGET_ARM64) || defined(TARGET_WASM)
case OP_EXTRACTX_U2:
case OP_XEXTRACT_I1:
case OP_XEXTRACT_I2:
case OP_XEXTRACT_I4:
case OP_XEXTRACT_I8:
case OP_XEXTRACT_R4:
case OP_XEXTRACT_R8:
case OP_EXTRACT_I1:
case OP_EXTRACT_I2:
case OP_EXTRACT_I4:
case OP_EXTRACT_I8:
case OP_EXTRACT_R4:
case OP_EXTRACT_R8: {
MonoTypeEnum mono_elt_t = inst_c1_type (ins);
LLVMTypeRef elt_t = primitive_type_to_llvm_type (mono_elt_t);
gboolean sext = FALSE;
gboolean zext = FALSE;
switch (mono_elt_t) {
case MONO_TYPE_I1: case MONO_TYPE_I2: sext = TRUE; break;
case MONO_TYPE_U1: case MONO_TYPE_U2: zext = TRUE; break;
}
LLVMValueRef element_ix = NULL;
switch (ins->opcode) {
case OP_XEXTRACT_I1:
case OP_XEXTRACT_I2:
case OP_XEXTRACT_I4:
case OP_XEXTRACT_R4:
case OP_XEXTRACT_R8:
case OP_XEXTRACT_I8:
element_ix = rhs;
break;
default:
element_ix = const_int32 (ins->inst_c0);
}
LLVMTypeRef lhs_t = LLVMTypeOf (lhs);
int vec_width = mono_llvm_get_prim_size_bits (lhs_t);
int elem_width = mono_llvm_get_prim_size_bits (elt_t);
int elements = vec_width / elem_width;
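			/* Mask the index into range; extractelement is undefined for out-of-range
			 * indices in LLVM IR. */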
element_ix = LLVMBuildAnd (builder, element_ix, const_int32 (elements - 1), "extract");
LLVMTypeRef ret_t = LLVMVectorType (elt_t, elements);
LLVMValueRef src = LLVMBuildBitCast (builder, lhs, ret_t, "extract");
LLVMValueRef result = LLVMBuildExtractElement (builder, src, element_ix, "extract");
if (zext)
result = LLVMBuildZExt (builder, result, i4_t, "extract_zext");
else if (sext)
result = LLVMBuildSExt (builder, result, i4_t, "extract_sext");
values [ins->dreg] = result;
break;
}
case OP_XINSERT_I1:
case OP_XINSERT_I2:
case OP_XINSERT_I4:
case OP_XINSERT_I8:
case OP_XINSERT_R4:
case OP_XINSERT_R8: {
MonoTypeEnum primty = inst_c1_type (ins);
LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass);
LLVMTypeRef elem_t = LLVMGetElementType (ret_t);
int elements = LLVMGetVectorSize (ret_t);
LLVMValueRef element_ix = LLVMBuildAnd (builder, arg3, const_int32 (elements - 1), "xinsert");
LLVMValueRef vec = convert (ctx, lhs, ret_t);
LLVMValueRef val = convert_full (ctx, rhs, elem_t, primitive_type_is_unsigned (primty));
LLVMValueRef result = LLVMBuildInsertElement (builder, vec, val, element_ix, "xinsert");
values [ins->dreg] = result;
break;
}
case OP_EXPAND_I1:
case OP_EXPAND_I2:
case OP_EXPAND_I4:
case OP_EXPAND_I8:
case OP_EXPAND_R4:
case OP_EXPAND_R8: {
LLVMTypeRef t;
LLVMValueRef mask [MAX_VECTOR_ELEMS], v;
int i;
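			/* Broadcast: insert the scalar into lane 0, then shuffle with an all-zero
			 * mask so every lane reads lane 0. */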
t = simd_class_to_llvm_type (ctx, ins->klass);
for (i = 0; i < MAX_VECTOR_ELEMS; ++i)
mask [i] = LLVMConstInt (LLVMInt32Type (), 0, FALSE);
v = convert (ctx, values [ins->sreg1], LLVMGetElementType (t));
values [ins->dreg] = LLVMBuildInsertElement (builder, LLVMConstNull (t), v, LLVMConstInt (LLVMInt32Type (), 0, FALSE), "");
values [ins->dreg] = LLVMBuildShuffleVector (builder, values [ins->dreg], LLVMGetUndef (t), LLVMConstVector (mask, LLVMGetVectorSize (t)), "");
break;
}
case OP_XZERO: {
values [ins->dreg] = LLVMConstNull (type_to_llvm_type (ctx, m_class_get_byval_arg (ins->klass)));
break;
}
case OP_LOADX_MEMBASE: {
LLVMTypeRef t = type_to_llvm_type (ctx, m_class_get_byval_arg (ins->klass));
LLVMValueRef src;
src = convert (ctx, LLVMBuildAdd (builder, convert (ctx, values [ins->inst_basereg], IntPtrType ()), LLVMConstInt (IntPtrType (), ins->inst_offset, FALSE), ""), LLVMPointerType (t, 0));
values [ins->dreg] = mono_llvm_build_aligned_load (builder, src, "", FALSE, 1);
break;
}
case OP_STOREX_MEMBASE: {
LLVMTypeRef t = LLVMTypeOf (values [ins->sreg1]);
LLVMValueRef dest;
dest = convert (ctx, LLVMBuildAdd (builder, convert (ctx, values [ins->inst_destbasereg], IntPtrType ()), LLVMConstInt (IntPtrType (), ins->inst_offset, FALSE), ""), LLVMPointerType (t, 0));
mono_llvm_build_aligned_store (builder, values [ins->sreg1], dest, FALSE, 1);
break;
}
case OP_XBINOP:
case OP_XBINOP_SCALAR:
case OP_XBINOP_BYSCALAR: {
gboolean scalar = ins->opcode == OP_XBINOP_SCALAR;
gboolean byscalar = ins->opcode == OP_XBINOP_BYSCALAR;
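			/* scalar: operate on lane 0 only and re-wrap the result into a vector;
			 * byscalar: splat the rhs scalar across all lanes first. */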
LLVMValueRef result = NULL;
LLVMValueRef args [] = { lhs, rhs };
if (scalar)
for (int i = 0; i < 2; ++i)
args [i] = scalar_from_vector (ctx, args [i]);
if (byscalar) {
LLVMTypeRef t = LLVMTypeOf (args [0]);
unsigned int elems = LLVMGetVectorSize (t);
args [1] = broadcast_element (ctx, scalar_from_vector (ctx, args [1]), elems);
}
LLVMValueRef l = args [0];
LLVMValueRef r = args [1];
switch (ins->inst_c0) {
case OP_IADD:
result = LLVMBuildAdd (builder, l, r, "");
break;
case OP_ISUB:
result = LLVMBuildSub (builder, l, r, "");
break;
case OP_IMUL:
result = LLVMBuildMul (builder, l, r, "");
break;
case OP_IAND:
result = LLVMBuildAnd (builder, l, r, "");
break;
case OP_IOR:
result = LLVMBuildOr (builder, l, r, "");
break;
case OP_IXOR:
result = LLVMBuildXor (builder, l, r, "");
break;
case OP_FADD:
result = LLVMBuildFAdd (builder, l, r, "");
break;
case OP_FSUB:
result = LLVMBuildFSub (builder, l, r, "");
break;
case OP_FMUL:
result = LLVMBuildFMul (builder, l, r, "");
break;
case OP_FDIV:
result = LLVMBuildFDiv (builder, l, r, "");
break;
case OP_FMAX:
case OP_FMIN: {
LLVMValueRef args [] = { l, r };
#if defined(TARGET_X86) || defined(TARGET_AMD64)
LLVMTypeRef t = LLVMTypeOf (l);
LLVMTypeRef elem_t = LLVMGetElementType (t);
unsigned int elems = LLVMGetVectorSize (t);
unsigned int elem_bits = mono_llvm_get_prim_size_bits (elem_t);
unsigned int v_size = elems * elem_bits;
if (v_size == 128) {
gboolean is_r4 = ins->inst_c1 == MONO_TYPE_R4;
int iid = -1;
if (ins->inst_c0 == OP_FMAX) {
if (elems == 1)
iid = is_r4 ? INTRINS_SSE_MAXSS : INTRINS_SSE_MAXSD;
else
iid = is_r4 ? INTRINS_SSE_MAXPS : INTRINS_SSE_MAXPD;
} else {
if (elems == 1)
iid = is_r4 ? INTRINS_SSE_MINSS : INTRINS_SSE_MINSD;
else
iid = is_r4 ? INTRINS_SSE_MINPS : INTRINS_SSE_MINPD;
}
result = call_intrins (ctx, iid, args, dname);
} else {
LLVMRealPredicate op = ins->inst_c0 == OP_FMAX ? LLVMRealUGE : LLVMRealULE;
LLVMValueRef cmp = LLVMBuildFCmp (builder, op, l, r, "");
result = LLVMBuildSelect (builder, cmp, l, r, "");
}
#elif defined(TARGET_ARM64)
IntrinsicId iid = ins->inst_c0 == OP_FMAX ? INTRINS_AARCH64_ADV_SIMD_FMAX : INTRINS_AARCH64_ADV_SIMD_FMIN;
llvm_ovr_tag_t ovr_tag = ovr_tag_from_mono_vector_class (ins->klass);
result = call_overloaded_intrins (ctx, iid, ovr_tag, args, "");
#else
NOT_IMPLEMENTED;
#endif
break;
}
case OP_IMAX:
case OP_IMIN: {
gboolean is_unsigned = ins->inst_c1 == MONO_TYPE_U1 || ins->inst_c1 == MONO_TYPE_U2 || ins->inst_c1 == MONO_TYPE_U4 || ins->inst_c1 == MONO_TYPE_U8;
LLVMIntPredicate op;
switch (ins->inst_c0) {
case OP_IMAX:
op = is_unsigned ? LLVMIntUGT : LLVMIntSGT;
break;
case OP_IMIN:
op = is_unsigned ? LLVMIntULT : LLVMIntSLT;
break;
default:
g_assert_not_reached ();
}
#if defined(TARGET_ARM64)
if ((ins->inst_c1 == MONO_TYPE_U8) || (ins->inst_c1 == MONO_TYPE_I8)) {
LLVMValueRef cmp = LLVMBuildICmp (builder, op, l, r, "");
result = LLVMBuildSelect (builder, cmp, l, r, "");
} else {
IntrinsicId iid;
switch (ins->inst_c0) {
case OP_IMAX:
iid = is_unsigned ? INTRINS_AARCH64_ADV_SIMD_UMAX : INTRINS_AARCH64_ADV_SIMD_SMAX;
break;
case OP_IMIN:
iid = is_unsigned ? INTRINS_AARCH64_ADV_SIMD_UMIN : INTRINS_AARCH64_ADV_SIMD_SMIN;
break;
default:
g_assert_not_reached ();
}
LLVMValueRef args [] = { l, r };
llvm_ovr_tag_t ovr_tag = ovr_tag_from_mono_vector_class (ins->klass);
result = call_overloaded_intrins (ctx, iid, ovr_tag, args, "");
}
#else
LLVMValueRef cmp = LLVMBuildICmp (builder, op, l, r, "");
result = LLVMBuildSelect (builder, cmp, l, r, "");
#endif
break;
}
default:
g_assert_not_reached ();
}
if (scalar)
result = vector_from_scalar (ctx, LLVMTypeOf (lhs), result);
values [ins->dreg] = result;
break;
}
case OP_XBINOP_FORCEINT: {
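			/* Bitwise ops on (possibly FP) vectors: reinterpret both operands as
			 * integer vectors of the same width, operate, then bitcast back. */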
LLVMTypeRef t = LLVMTypeOf (lhs);
LLVMTypeRef elem_t = LLVMGetElementType (t);
unsigned int elems = LLVMGetVectorSize (t);
unsigned int elem_bits = mono_llvm_get_prim_size_bits (elem_t);
LLVMTypeRef intermediate_elem_t = LLVMIntType (elem_bits);
LLVMTypeRef intermediate_t = LLVMVectorType (intermediate_elem_t, elems);
LLVMValueRef lhs_int = convert (ctx, lhs, intermediate_t);
LLVMValueRef rhs_int = convert (ctx, rhs, intermediate_t);
LLVMValueRef result = NULL;
switch (ins->inst_c0) {
case XBINOP_FORCEINT_and:
result = LLVMBuildAnd (builder, lhs_int, rhs_int, "");
break;
case XBINOP_FORCEINT_or:
result = LLVMBuildOr (builder, lhs_int, rhs_int, "");
break;
case XBINOP_FORCEINT_ornot:
result = LLVMBuildNot (builder, rhs_int, "");
result = LLVMBuildOr (builder, result, lhs_int, "");
break;
case XBINOP_FORCEINT_xor:
result = LLVMBuildXor (builder, lhs_int, rhs_int, "");
break;
}
values [ins->dreg] = LLVMBuildBitCast (builder, result, t, "");
break;
}
case OP_CREATE_SCALAR:
case OP_CREATE_SCALAR_UNSAFE: {
MonoTypeEnum primty = inst_c1_type (ins);
LLVMTypeRef type = simd_class_to_llvm_type (ctx, ins->klass);
// use undef vector (most likely empty but may contain garbage values) for OP_CREATE_SCALAR_UNSAFE
// and zero one for OP_CREATE_SCALAR
LLVMValueRef vector = (ins->opcode == OP_CREATE_SCALAR) ? LLVMConstNull (type) : LLVMGetUndef (type);
LLVMValueRef val = convert_full (ctx, lhs, primitive_type_to_llvm_type (primty), primitive_type_is_unsigned (primty));
values [ins->dreg] = LLVMBuildInsertElement (builder, vector, val, const_int32 (0), "");
break;
}
case OP_INSERT_I1:
values [ins->dreg] = LLVMBuildInsertElement (builder, values [ins->sreg1], convert (ctx, values [ins->sreg2], LLVMInt8Type ()), LLVMConstInt (LLVMInt32Type (), ins->inst_c0, FALSE), dname);
break;
case OP_INSERT_I2:
values [ins->dreg] = LLVMBuildInsertElement (builder, values [ins->sreg1], convert (ctx, values [ins->sreg2], LLVMInt16Type ()), LLVMConstInt (LLVMInt32Type (), ins->inst_c0, FALSE), dname);
break;
case OP_INSERT_I4:
values [ins->dreg] = LLVMBuildInsertElement (builder, values [ins->sreg1], convert (ctx, values [ins->sreg2], LLVMInt32Type ()), LLVMConstInt (LLVMInt32Type (), ins->inst_c0, FALSE), dname);
break;
case OP_INSERT_I8:
values [ins->dreg] = LLVMBuildInsertElement (builder, values [ins->sreg1], convert (ctx, values [ins->sreg2], LLVMInt64Type ()), LLVMConstInt (LLVMInt32Type (), ins->inst_c0, FALSE), dname);
break;
case OP_INSERT_R4:
values [ins->dreg] = LLVMBuildInsertElement (builder, values [ins->sreg1], convert (ctx, values [ins->sreg2], LLVMFloatType ()), LLVMConstInt (LLVMInt32Type (), ins->inst_c0, FALSE), dname);
break;
case OP_INSERT_R8:
values [ins->dreg] = LLVMBuildInsertElement (builder, values [ins->sreg1], convert (ctx, values [ins->sreg2], LLVMDoubleType ()), LLVMConstInt (LLVMInt32Type (), ins->inst_c0, FALSE), dname);
break;
case OP_XCAST: {
LLVMTypeRef t = simd_class_to_llvm_type (ctx, ins->klass);
values [ins->dreg] = LLVMBuildBitCast (builder, lhs, t, "");
break;
}
case OP_XCONCAT: {
values [ins->dreg] = concatenate_vectors (ctx, lhs, rhs);
break;
}
case OP_XINSERT_LOWER:
case OP_XINSERT_UPPER: {
const char *oname = ins->opcode == OP_XINSERT_LOWER ? "xinsert_lower" : "xinsert_upper";
int ix = ins->opcode == OP_XINSERT_LOWER ? 0 : 1;
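			/* View the vector as two halves: bitcast to a 2-element integer vector,
			 * overwrite the selected half, then bitcast back to the source type. */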
LLVMTypeRef src_t = LLVMTypeOf (lhs);
unsigned int width = mono_llvm_get_prim_size_bits (src_t);
LLVMTypeRef int_t = LLVMIntType (width / 2);
LLVMTypeRef intvec_t = LLVMVectorType (int_t, 2);
LLVMValueRef insval = LLVMBuildBitCast (builder, rhs, int_t, oname);
LLVMValueRef val = LLVMBuildBitCast (builder, lhs, intvec_t, oname);
val = LLVMBuildInsertElement (builder, val, insval, const_int32 (ix), oname);
val = LLVMBuildBitCast (builder, val, src_t, oname);
values [ins->dreg] = val;
break;
}
case OP_XLOWER:
case OP_XUPPER: {
const char *oname = ins->opcode == OP_XLOWER ? "xlower" : "xupper";
LLVMTypeRef src_t = LLVMTypeOf (lhs);
unsigned int elems = LLVMGetVectorSize (src_t);
g_assert (elems >= 2 && elems <= MAX_VECTOR_ELEMS);
unsigned int ret_elems = elems / 2;
int startix = ins->opcode == OP_XLOWER ? 0 : ret_elems;
LLVMValueRef val = LLVMBuildShuffleVector (builder, lhs, LLVMGetUndef (src_t), create_const_vector_i32 (&mask_0_incr_1 [startix], ret_elems), oname);
values [ins->dreg] = val;
break;
}
case OP_XWIDEN:
case OP_XWIDEN_UNSAFE: {
const char *oname = ins->opcode == OP_XWIDEN ? "xwiden" : "xwiden_unsafe";
LLVMTypeRef src_t = LLVMTypeOf (lhs);
unsigned int elems = LLVMGetVectorSize (src_t);
g_assert (elems <= MAX_VECTOR_ELEMS / 2);
unsigned int ret_elems = elems * 2;
LLVMValueRef upper = ins->opcode == OP_XWIDEN ? LLVMConstNull (src_t) : LLVMGetUndef (src_t);
LLVMValueRef val = LLVMBuildShuffleVector (builder, lhs, upper, create_const_vector_i32 (mask_0_incr_1, ret_elems), oname);
values [ins->dreg] = val;
break;
}
#endif // defined(TARGET_X86) || defined(TARGET_AMD64) || defined(TARGET_ARM64) || defined(TARGET_WASM)
#if defined(TARGET_X86) || defined(TARGET_AMD64) || defined(TARGET_WASM)
case OP_PADDB:
case OP_PADDW:
case OP_PADDD:
case OP_PADDQ:
values [ins->dreg] = LLVMBuildAdd (builder, lhs, rhs, "");
break;
case OP_ADDPD:
case OP_ADDPS:
values [ins->dreg] = LLVMBuildFAdd (builder, lhs, rhs, "");
break;
case OP_PSUBB:
case OP_PSUBW:
case OP_PSUBD:
case OP_PSUBQ:
values [ins->dreg] = LLVMBuildSub (builder, lhs, rhs, "");
break;
case OP_SUBPD:
case OP_SUBPS:
values [ins->dreg] = LLVMBuildFSub (builder, lhs, rhs, "");
break;
case OP_MULPD:
case OP_MULPS:
values [ins->dreg] = LLVMBuildFMul (builder, lhs, rhs, "");
break;
case OP_DIVPD:
case OP_DIVPS:
values [ins->dreg] = LLVMBuildFDiv (builder, lhs, rhs, "");
break;
case OP_PAND:
values [ins->dreg] = LLVMBuildAnd (builder, lhs, rhs, "");
break;
case OP_POR:
values [ins->dreg] = LLVMBuildOr (builder, lhs, rhs, "");
break;
case OP_PXOR:
values [ins->dreg] = LLVMBuildXor (builder, lhs, rhs, "");
break;
case OP_PMULW:
case OP_PMULD:
values [ins->dreg] = LLVMBuildMul (builder, lhs, rhs, "");
break;
case OP_ANDPS:
case OP_ANDNPS:
case OP_ORPS:
case OP_XORPS:
case OP_ANDPD:
case OP_ANDNPD:
case OP_ORPD:
case OP_XORPD: {
LLVMTypeRef t, rt;
LLVMValueRef v = NULL;
switch (ins->opcode) {
case OP_ANDPS:
case OP_ANDNPS:
case OP_ORPS:
case OP_XORPS:
t = LLVMVectorType (LLVMInt32Type (), 4);
rt = LLVMVectorType (LLVMFloatType (), 4);
break;
case OP_ANDPD:
case OP_ANDNPD:
case OP_ORPD:
case OP_XORPD:
t = LLVMVectorType (LLVMInt64Type (), 2);
rt = LLVMVectorType (LLVMDoubleType (), 2);
break;
default:
t = LLVMInt32Type ();
rt = LLVMInt32Type ();
g_assert_not_reached ();
}
lhs = LLVMBuildBitCast (builder, lhs, t, "");
rhs = LLVMBuildBitCast (builder, rhs, t, "");
switch (ins->opcode) {
case OP_ANDPS:
case OP_ANDPD:
v = LLVMBuildAnd (builder, lhs, rhs, "");
break;
case OP_ORPS:
case OP_ORPD:
v = LLVMBuildOr (builder, lhs, rhs, "");
break;
case OP_XORPS:
case OP_XORPD:
v = LLVMBuildXor (builder, lhs, rhs, "");
break;
case OP_ANDNPS:
case OP_ANDNPD:
v = LLVMBuildAnd (builder, rhs, LLVMBuildNot (builder, lhs, ""), "");
break;
}
values [ins->dreg] = LLVMBuildBitCast (builder, v, rt, "");
break;
}
case OP_PMIND_UN:
case OP_PMINW_UN:
case OP_PMINB_UN: {
LLVMValueRef cmp = LLVMBuildICmp (builder, LLVMIntULT, lhs, rhs, "");
values [ins->dreg] = LLVMBuildSelect (builder, cmp, lhs, rhs, "");
break;
}
case OP_PMAXD_UN:
case OP_PMAXW_UN:
case OP_PMAXB_UN: {
LLVMValueRef cmp = LLVMBuildICmp (builder, LLVMIntUGT, lhs, rhs, "");
values [ins->dreg] = LLVMBuildSelect (builder, cmp, lhs, rhs, "");
break;
}
case OP_PMINW: {
LLVMValueRef cmp = LLVMBuildICmp (builder, LLVMIntSLT, lhs, rhs, "");
values [ins->dreg] = LLVMBuildSelect (builder, cmp, lhs, rhs, "");
break;
}
case OP_PMAXW: {
LLVMValueRef cmp = LLVMBuildICmp (builder, LLVMIntSGT, lhs, rhs, "");
values [ins->dreg] = LLVMBuildSelect (builder, cmp, lhs, rhs, "");
break;
}
case OP_PAVGB_UN:
case OP_PAVGW_UN: {
LLVMValueRef ones_vec;
LLVMValueRef ones [MAX_VECTOR_ELEMS];
int vector_size = LLVMGetVectorSize (LLVMTypeOf (lhs));
LLVMTypeRef ext_elem_type = vector_size == 16 ? LLVMInt16Type () : LLVMInt32Type ();
for (int i = 0; i < MAX_VECTOR_ELEMS; ++i)
ones [i] = LLVMConstInt (ext_elem_type, 1, FALSE);
ones_vec = LLVMConstVector (ones, vector_size);
LLVMValueRef val;
LLVMTypeRef ext_type = LLVMVectorType (ext_elem_type, vector_size);
/* Have to increase the vector element size to prevent overflows */
/* res = trunc ((zext (lhs) + zext (rhs) + 1) >> 1) */
val = LLVMBuildAdd (builder, LLVMBuildZExt (builder, lhs, ext_type, ""), LLVMBuildZExt (builder, rhs, ext_type, ""), "");
val = LLVMBuildAdd (builder, val, ones_vec, "");
val = LLVMBuildLShr (builder, val, ones_vec, "");
values [ins->dreg] = LLVMBuildTrunc (builder, val, LLVMTypeOf (lhs), "");
break;
}
case OP_PCMPEQB:
case OP_PCMPEQW:
case OP_PCMPEQD:
case OP_PCMPEQQ:
case OP_PCMPGTB: {
LLVMValueRef pcmp;
LLVMTypeRef retType;
LLVMIntPredicate cmpOp;
if (ins->opcode == OP_PCMPGTB)
cmpOp = LLVMIntSGT;
else
cmpOp = LLVMIntEQ;
if (LLVMTypeOf (lhs) == LLVMTypeOf (rhs)) {
pcmp = LLVMBuildICmp (builder, cmpOp, lhs, rhs, "");
retType = LLVMTypeOf (lhs);
} else {
LLVMTypeRef flatType = LLVMVectorType (LLVMInt8Type (), 16);
LLVMValueRef flatRHS = convert (ctx, rhs, flatType);
LLVMValueRef flatLHS = convert (ctx, lhs, flatType);
pcmp = LLVMBuildICmp (builder, cmpOp, flatLHS, flatRHS, "");
retType = flatType;
}
values [ins->dreg] = LLVMBuildSExt (builder, pcmp, retType, "");
break;
}
case OP_CVTDQ2PS: {
LLVMValueRef i4 = LLVMBuildBitCast (builder, lhs, sse_i4_t, "");
values [ins->dreg] = LLVMBuildSIToFP (builder, i4, sse_r4_t, dname);
break;
}
case OP_CVTDQ2PD: {
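		/* cvtdq2pd: convert the two low i32 lanes to doubles */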
LLVMValueRef indexes [16];
indexes [0] = LLVMConstInt (LLVMInt32Type (), 0, FALSE);
indexes [1] = LLVMConstInt (LLVMInt32Type (), 1, FALSE);
LLVMValueRef mask = LLVMConstVector (indexes, 2);
LLVMValueRef shuffle = LLVMBuildShuffleVector (builder, lhs, LLVMConstNull (LLVMTypeOf (lhs)), mask, "");
values [ins->dreg] = LLVMBuildSIToFP (builder, shuffle, LLVMVectorType (LLVMDoubleType (), 2), dname);
break;
}
case OP_SSE2_CVTSS2SD: {
LLVMValueRef rhs_elem = LLVMBuildExtractElement (builder, rhs, const_int32 (0), "");
LLVMValueRef fpext = LLVMBuildFPExt (builder, rhs_elem, LLVMDoubleType (), dname);
values [ins->dreg] = LLVMBuildInsertElement (builder, lhs, fpext, const_int32 (0), "");
break;
}
case OP_CVTPS2PD: {
LLVMValueRef indexes [16];
indexes [0] = LLVMConstInt (LLVMInt32Type (), 0, FALSE);
indexes [1] = LLVMConstInt (LLVMInt32Type (), 1, FALSE);
LLVMValueRef mask = LLVMConstVector (indexes, 2);
LLVMValueRef shuffle = LLVMBuildShuffleVector (builder, lhs, LLVMConstNull (LLVMTypeOf (lhs)), mask, "");
values [ins->dreg] = LLVMBuildFPExt (builder, shuffle, LLVMVectorType (LLVMDoubleType (), 2), dname);
break;
}
case OP_CVTTPS2DQ:
values [ins->dreg] = LLVMBuildFPToSI (builder, lhs, LLVMVectorType (LLVMInt32Type (), 4), dname);
break;
case OP_CVTPD2DQ:
case OP_CVTPS2DQ:
case OP_CVTPD2PS:
case OP_CVTTPD2DQ: {
LLVMValueRef v;
v = convert (ctx, values [ins->sreg1], simd_op_to_llvm_type (ins->opcode));
values [ins->dreg] = call_intrins (ctx, simd_ins_to_intrins (ins->opcode), &v, dname);
break;
}
case OP_COMPPS:
case OP_COMPPD: {
LLVMRealPredicate op;
switch (ins->inst_c0) {
case SIMD_COMP_EQ:
op = LLVMRealOEQ;
break;
case SIMD_COMP_LT:
op = LLVMRealOLT;
break;
case SIMD_COMP_LE:
op = LLVMRealOLE;
break;
case SIMD_COMP_UNORD:
op = LLVMRealUNO;
break;
case SIMD_COMP_NEQ:
op = LLVMRealUNE;
break;
case SIMD_COMP_NLT:
op = LLVMRealUGE;
break;
case SIMD_COMP_NLE:
op = LLVMRealUGT;
break;
case SIMD_COMP_ORD:
op = LLVMRealORD;
break;
default:
g_assert_not_reached ();
}
LLVMValueRef cmp = LLVMBuildFCmp (builder, op, lhs, rhs, "");
if (ins->opcode == OP_COMPPD)
values [ins->dreg] = LLVMBuildBitCast (builder, LLVMBuildSExt (builder, cmp, LLVMVectorType (LLVMInt64Type (), 2), ""), LLVMTypeOf (lhs), "");
else
values [ins->dreg] = LLVMBuildBitCast (builder, LLVMBuildSExt (builder, cmp, LLVMVectorType (LLVMInt32Type (), 4), ""), LLVMTypeOf (lhs), "");
break;
}
case OP_ICONV_TO_X:
			/* This is only used for implementing shifts by a non-immediate amount */
values [ins->dreg] = lhs;
break;
case OP_SHUFPS:
case OP_SHUFPD:
case OP_PSHUFLED:
case OP_PSHUFLEW_LOW:
case OP_PSHUFLEW_HIGH: {
int mask [16];
LLVMValueRef v1 = NULL, v2 = NULL, mask_values [16];
int i, mask_size = 0;
int imask = ins->inst_c0;
/* Convert the x86 shuffle mask to LLVM's */
switch (ins->opcode) {
case OP_SHUFPS:
mask_size = 4;
mask [0] = ((imask >> 0) & 3);
mask [1] = ((imask >> 2) & 3);
mask [2] = ((imask >> 4) & 3) + 4;
mask [3] = ((imask >> 6) & 3) + 4;
v1 = values [ins->sreg1];
v2 = values [ins->sreg2];
break;
case OP_SHUFPD:
mask_size = 2;
mask [0] = ((imask >> 0) & 1);
mask [1] = ((imask >> 1) & 1) + 2;
v1 = values [ins->sreg1];
v2 = values [ins->sreg2];
break;
case OP_PSHUFLEW_LOW:
mask_size = 8;
mask [0] = ((imask >> 0) & 3);
mask [1] = ((imask >> 2) & 3);
mask [2] = ((imask >> 4) & 3);
mask [3] = ((imask >> 6) & 3);
mask [4] = 4 + 0;
mask [5] = 4 + 1;
mask [6] = 4 + 2;
mask [7] = 4 + 3;
v1 = values [ins->sreg1];
v2 = LLVMGetUndef (LLVMTypeOf (v1));
break;
case OP_PSHUFLEW_HIGH:
mask_size = 8;
mask [0] = 0;
mask [1] = 1;
mask [2] = 2;
mask [3] = 3;
mask [4] = 4 + ((imask >> 0) & 3);
mask [5] = 4 + ((imask >> 2) & 3);
mask [6] = 4 + ((imask >> 4) & 3);
mask [7] = 4 + ((imask >> 6) & 3);
v1 = values [ins->sreg1];
v2 = LLVMGetUndef (LLVMTypeOf (v1));
break;
case OP_PSHUFLED:
mask_size = 4;
mask [0] = ((imask >> 0) & 3);
mask [1] = ((imask >> 2) & 3);
mask [2] = ((imask >> 4) & 3);
mask [3] = ((imask >> 6) & 3);
v1 = values [ins->sreg1];
v2 = LLVMGetUndef (LLVMTypeOf (v1));
break;
default:
g_assert_not_reached ();
}
for (i = 0; i < mask_size; ++i)
mask_values [i] = LLVMConstInt (LLVMInt32Type (), mask [i], FALSE);
values [ins->dreg] =
LLVMBuildShuffleVector (builder, v1, v2,
LLVMConstVector (mask_values, mask_size), dname);
break;
}
case OP_UNPACK_LOWB:
case OP_UNPACK_LOWW:
case OP_UNPACK_LOWD:
case OP_UNPACK_LOWQ:
case OP_UNPACK_LOWPS:
case OP_UNPACK_LOWPD:
case OP_UNPACK_HIGHB:
case OP_UNPACK_HIGHW:
case OP_UNPACK_HIGHD:
case OP_UNPACK_HIGHQ:
case OP_UNPACK_HIGHPS:
case OP_UNPACK_HIGHPD: {
int mask [16];
LLVMValueRef mask_values [16];
int i, mask_size = 0;
gboolean low = FALSE;
switch (ins->opcode) {
case OP_UNPACK_LOWB:
mask_size = 16;
low = TRUE;
break;
case OP_UNPACK_LOWW:
mask_size = 8;
low = TRUE;
break;
case OP_UNPACK_LOWD:
case OP_UNPACK_LOWPS:
mask_size = 4;
low = TRUE;
break;
case OP_UNPACK_LOWQ:
case OP_UNPACK_LOWPD:
mask_size = 2;
low = TRUE;
break;
case OP_UNPACK_HIGHB:
mask_size = 16;
break;
case OP_UNPACK_HIGHW:
mask_size = 8;
break;
case OP_UNPACK_HIGHD:
case OP_UNPACK_HIGHPS:
mask_size = 4;
break;
case OP_UNPACK_HIGHQ:
case OP_UNPACK_HIGHPD:
mask_size = 2;
break;
default:
g_assert_not_reached ();
}
if (low) {
for (i = 0; i < (mask_size / 2); ++i) {
mask [(i * 2)] = i;
mask [(i * 2) + 1] = mask_size + i;
}
} else {
for (i = 0; i < (mask_size / 2); ++i) {
mask [(i * 2)] = (mask_size / 2) + i;
mask [(i * 2) + 1] = mask_size + (mask_size / 2) + i;
}
}
for (i = 0; i < mask_size; ++i)
mask_values [i] = LLVMConstInt (LLVMInt32Type (), mask [i], FALSE);
values [ins->dreg] =
LLVMBuildShuffleVector (builder, values [ins->sreg1], values [ins->sreg2],
LLVMConstVector (mask_values, mask_size), dname);
break;
}
case OP_DUPPD: {
LLVMTypeRef t = simd_op_to_llvm_type (ins->opcode);
LLVMValueRef v, val;
v = LLVMBuildExtractElement (builder, lhs, LLVMConstInt (LLVMInt32Type (), 0, FALSE), "");
val = LLVMConstNull (t);
val = LLVMBuildInsertElement (builder, val, v, LLVMConstInt (LLVMInt32Type (), 0, FALSE), "");
val = LLVMBuildInsertElement (builder, val, v, LLVMConstInt (LLVMInt32Type (), 1, FALSE), dname);
values [ins->dreg] = val;
break;
}
case OP_DUPPS_LOW:
case OP_DUPPS_HIGH: {
LLVMTypeRef t = simd_op_to_llvm_type (ins->opcode);
LLVMValueRef v1, v2, val;
if (ins->opcode == OP_DUPPS_LOW) {
v1 = LLVMBuildExtractElement (builder, lhs, LLVMConstInt (LLVMInt32Type (), 0, FALSE), "");
v2 = LLVMBuildExtractElement (builder, lhs, LLVMConstInt (LLVMInt32Type (), 2, FALSE), "");
} else {
v1 = LLVMBuildExtractElement (builder, lhs, LLVMConstInt (LLVMInt32Type (), 1, FALSE), "");
v2 = LLVMBuildExtractElement (builder, lhs, LLVMConstInt (LLVMInt32Type (), 3, FALSE), "");
}
val = LLVMConstNull (t);
val = LLVMBuildInsertElement (builder, val, v1, LLVMConstInt (LLVMInt32Type (), 0, FALSE), "");
val = LLVMBuildInsertElement (builder, val, v1, LLVMConstInt (LLVMInt32Type (), 1, FALSE), "");
val = LLVMBuildInsertElement (builder, val, v2, LLVMConstInt (LLVMInt32Type (), 2, FALSE), "");
val = LLVMBuildInsertElement (builder, val, v2, LLVMConstInt (LLVMInt32Type (), 3, FALSE), "");
values [ins->dreg] = val;
break;
}
case OP_FCONV_TO_R8_X: {
values [ins->dreg] = LLVMBuildInsertElement (builder, LLVMConstNull (sse_r8_t), lhs, LLVMConstInt (LLVMInt32Type (), 0, FALSE), "");
break;
}
case OP_FCONV_TO_R4_X: {
values [ins->dreg] = LLVMBuildInsertElement (builder, LLVMConstNull (sse_r4_t), lhs, LLVMConstInt (LLVMInt32Type (), 0, FALSE), "");
break;
}
#if defined(TARGET_X86) || defined(TARGET_AMD64)
case OP_SSE_MOVMSK: {
LLVMValueRef args [1];
if (ins->inst_c1 == MONO_TYPE_R4) {
args [0] = lhs;
values [ins->dreg] = call_intrins (ctx, INTRINS_SSE_MOVMSK_PS, args, dname);
} else if (ins->inst_c1 == MONO_TYPE_R8) {
args [0] = lhs;
values [ins->dreg] = call_intrins (ctx, INTRINS_SSE_MOVMSK_PD, args, dname);
} else {
args [0] = convert (ctx, lhs, sse_i1_t);
values [ins->dreg] = call_intrins (ctx, INTRINS_SSE_PMOVMSKB, args, dname);
}
break;
}
case OP_SSE_MOVS:
case OP_SSE_MOVS2: {
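		/* movss/movsd: lane 0 comes from rhs, the remaining lanes from lhs; for I8/U8 the upper lane of lhs is zeroed (movq) */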
if (ins->inst_c1 == MONO_TYPE_R4)
values [ins->dreg] = LLVMBuildShuffleVector (builder, rhs, lhs, create_const_vector_4_i32 (0, 5, 6, 7), "");
else if (ins->inst_c1 == MONO_TYPE_R8)
values [ins->dreg] = LLVMBuildShuffleVector (builder, rhs, lhs, create_const_vector_2_i32 (0, 3), "");
else if (ins->inst_c1 == MONO_TYPE_I8 || ins->inst_c1 == MONO_TYPE_U8)
values [ins->dreg] = LLVMBuildInsertElement (builder, lhs,
LLVMConstInt (LLVMInt64Type (), 0, FALSE),
LLVMConstInt (LLVMInt32Type (), 1, FALSE), "");
else
g_assert_not_reached (); // will be needed for other types later
break;
}
case OP_SSE_MOVEHL: {
if (ins->inst_c1 == MONO_TYPE_R4)
values [ins->dreg] = LLVMBuildShuffleVector (builder, lhs, rhs, create_const_vector_4_i32 (6, 7, 2, 3), "");
else
g_assert_not_reached ();
break;
}
case OP_SSE_MOVELH: {
if (ins->inst_c1 == MONO_TYPE_R4)
values [ins->dreg] = LLVMBuildShuffleVector (builder, lhs, rhs, create_const_vector_4_i32 (0, 1, 4, 5), "");
else
g_assert_not_reached ();
break;
}
case OP_SSE_UNPACKLO: {
if (ins->inst_c1 == MONO_TYPE_R8 || ins->inst_c1 == MONO_TYPE_I8 || ins->inst_c1 == MONO_TYPE_U8) {
values [ins->dreg] = LLVMBuildShuffleVector (builder, lhs, rhs, create_const_vector_2_i32 (0, 2), "");
} else if (ins->inst_c1 == MONO_TYPE_R4 || ins->inst_c1 == MONO_TYPE_I4 || ins->inst_c1 == MONO_TYPE_U4) {
values [ins->dreg] = LLVMBuildShuffleVector (builder, lhs, rhs, create_const_vector_4_i32 (0, 4, 1, 5), "");
} else if (ins->inst_c1 == MONO_TYPE_I2 || ins->inst_c1 == MONO_TYPE_U2) {
const int mask_values [] = { 0, 8, 1, 9, 2, 10, 3, 11 };
LLVMValueRef shuffled = LLVMBuildShuffleVector (builder,
convert (ctx, lhs, sse_i2_t),
convert (ctx, rhs, sse_i2_t),
create_const_vector_i32 (mask_values, 8), "");
values [ins->dreg] = convert (ctx, shuffled, type_to_sse_type (ins->inst_c1));
} else if (ins->inst_c1 == MONO_TYPE_I1 || ins->inst_c1 == MONO_TYPE_U1) {
const int mask_values [] = { 0, 16, 1, 17, 2, 18, 3, 19, 4, 20, 5, 21, 6, 22, 7, 23 };
LLVMValueRef shuffled = LLVMBuildShuffleVector (builder,
convert (ctx, lhs, sse_i1_t),
convert (ctx, rhs, sse_i1_t),
create_const_vector_i32 (mask_values, 16), "");
values [ins->dreg] = convert (ctx, shuffled, type_to_sse_type (ins->inst_c1));
} else {
g_assert_not_reached ();
}
break;
}
case OP_SSE_UNPACKHI: {
if (ins->inst_c1 == MONO_TYPE_R8 || ins->inst_c1 == MONO_TYPE_I8 || ins->inst_c1 == MONO_TYPE_U8) {
values [ins->dreg] = LLVMBuildShuffleVector (builder, lhs, rhs, create_const_vector_2_i32 (1, 3), "");
} else if (ins->inst_c1 == MONO_TYPE_R4 || ins->inst_c1 == MONO_TYPE_I4 || ins->inst_c1 == MONO_TYPE_U4) {
values [ins->dreg] = LLVMBuildShuffleVector (builder, lhs, rhs, create_const_vector_4_i32 (2, 6, 3, 7), "");
} else if (ins->inst_c1 == MONO_TYPE_I2 || ins->inst_c1 == MONO_TYPE_U2) {
const int mask_values [] = { 4, 12, 5, 13, 6, 14, 7, 15 };
LLVMValueRef shuffled = LLVMBuildShuffleVector (builder,
convert (ctx, lhs, sse_i2_t),
convert (ctx, rhs, sse_i2_t),
create_const_vector_i32 (mask_values, 8), "");
values [ins->dreg] = convert (ctx, shuffled, type_to_sse_type (ins->inst_c1));
} else if (ins->inst_c1 == MONO_TYPE_I1 || ins->inst_c1 == MONO_TYPE_U1) {
const int mask_values [] = { 8, 24, 9, 25, 10, 26, 11, 27, 12, 28, 13, 29, 14, 30, 15, 31 };
LLVMValueRef shuffled = LLVMBuildShuffleVector (builder,
convert (ctx, lhs, sse_i1_t),
convert (ctx, rhs, sse_i1_t),
create_const_vector_i32 (mask_values, 16), "");
values [ins->dreg] = convert (ctx, shuffled, type_to_sse_type (ins->inst_c1));
} else {
g_assert_not_reached ();
}
break;
}
case OP_SSE_LOADU: {
LLVMValueRef dst_ptr = convert (ctx, lhs, LLVMPointerType (primitive_type_to_llvm_type (inst_c1_type (ins)), 0));
LLVMValueRef dst_vec = LLVMBuildBitCast (builder, dst_ptr, LLVMPointerType (type_to_sse_type (ins->inst_c1), 0), "");
values [ins->dreg] = mono_llvm_build_aligned_load (builder, dst_vec, "", FALSE, ins->inst_c0); // inst_c0 is alignment
break;
}
case OP_SSE_MOVSS: {
LLVMValueRef addr = convert (ctx, lhs, LLVMPointerType (LLVMFloatType (), 0));
LLVMValueRef val = mono_llvm_build_load (builder, addr, "", FALSE);
values [ins->dreg] = LLVMBuildInsertElement (builder, LLVMConstNull (type_to_sse_type (ins->inst_c1)), val, LLVMConstInt (LLVMInt32Type (), 0, FALSE), "");
break;
}
case OP_SSE_MOVSS_STORE: {
LLVMValueRef addr = convert (ctx, lhs, LLVMPointerType (LLVMFloatType (), 0));
LLVMValueRef val = LLVMBuildExtractElement (builder, rhs, LLVMConstInt (LLVMInt32Type (), 0, FALSE), "");
mono_llvm_build_store (builder, val, addr, FALSE, LLVM_BARRIER_NONE);
break;
}
case OP_SSE2_MOVD:
case OP_SSE2_MOVQ:
case OP_SSE2_MOVUPD: {
LLVMTypeRef rty = NULL;
switch (ins->opcode) {
case OP_SSE2_MOVD: rty = sse_i4_t; break;
case OP_SSE2_MOVQ: rty = sse_i8_t; break;
case OP_SSE2_MOVUPD: rty = sse_r8_t; break;
}
LLVMTypeRef srcty = LLVMGetElementType (rty);
LLVMValueRef zero = LLVMConstNull (rty);
LLVMValueRef addr = convert (ctx, lhs, LLVMPointerType (srcty, 0));
LLVMValueRef val = mono_llvm_build_aligned_load (builder, addr, "", FALSE, 1);
values [ins->dreg] = LLVMBuildInsertElement (builder, zero, val, const_int32 (0), dname);
break;
}
case OP_SSE_MOVLPS_LOAD:
case OP_SSE_MOVHPS_LOAD: {
LLVMTypeRef t = LLVMFloatType ();
int size = 4;
gboolean high = ins->opcode == OP_SSE_MOVHPS_LOAD;
/* Load two floats from rhs and store them in the low/high part of lhs */
LLVMValueRef addr = rhs;
LLVMValueRef addr1 = convert (ctx, addr, LLVMPointerType (t, 0));
LLVMValueRef addr2 = convert (ctx, LLVMBuildAdd (builder, convert (ctx, addr, IntPtrType ()), convert (ctx, LLVMConstInt (LLVMInt32Type (), size, FALSE), IntPtrType ()), ""), LLVMPointerType (t, 0));
LLVMValueRef val1 = mono_llvm_build_load (builder, addr1, "", FALSE);
LLVMValueRef val2 = mono_llvm_build_load (builder, addr2, "", FALSE);
int index1, index2;
		index1 = high ? 2 : 0;
index2 = high ? 3 : 1;
values [ins->dreg] = LLVMBuildInsertElement (builder, LLVMBuildInsertElement (builder, lhs, val1, LLVMConstInt (LLVMInt32Type (), index1, FALSE), ""), val2, LLVMConstInt (LLVMInt32Type (), index2, FALSE), "");
break;
}
case OP_SSE2_MOVLPD_LOAD:
case OP_SSE2_MOVHPD_LOAD: {
LLVMTypeRef t = LLVMDoubleType ();
LLVMValueRef addr = convert (ctx, rhs, LLVMPointerType (t, 0));
LLVMValueRef val = mono_llvm_build_load (builder, addr, "", FALSE);
int index = ins->opcode == OP_SSE2_MOVHPD_LOAD ? 1 : 0;
values [ins->dreg] = LLVMBuildInsertElement (builder, lhs, val, const_int32 (index), "");
break;
}
case OP_SSE_MOVLPS_STORE:
case OP_SSE_MOVHPS_STORE: {
		/* Store two floats from the low/high part of rhs into lhs */
LLVMValueRef addr = lhs;
LLVMValueRef addr1 = convert (ctx, addr, LLVMPointerType (LLVMFloatType (), 0));
LLVMValueRef addr2 = convert (ctx, LLVMBuildAdd (builder, convert (ctx, addr, IntPtrType ()), convert (ctx, LLVMConstInt (LLVMInt32Type (), 4, FALSE), IntPtrType ()), ""), LLVMPointerType (LLVMFloatType (), 0));
int index1 = ins->opcode == OP_SSE_MOVLPS_STORE ? 0 : 2;
int index2 = ins->opcode == OP_SSE_MOVLPS_STORE ? 1 : 3;
LLVMValueRef val1 = LLVMBuildExtractElement (builder, rhs, LLVMConstInt (LLVMInt32Type (), index1, FALSE), "");
LLVMValueRef val2 = LLVMBuildExtractElement (builder, rhs, LLVMConstInt (LLVMInt32Type (), index2, FALSE), "");
mono_llvm_build_store (builder, val1, addr1, FALSE, LLVM_BARRIER_NONE);
mono_llvm_build_store (builder, val2, addr2, FALSE, LLVM_BARRIER_NONE);
break;
}
case OP_SSE2_MOVLPD_STORE:
case OP_SSE2_MOVHPD_STORE: {
LLVMTypeRef t = LLVMDoubleType ();
LLVMValueRef addr = convert (ctx, lhs, LLVMPointerType (t, 0));
int index = ins->opcode == OP_SSE2_MOVHPD_STORE ? 1 : 0;
LLVMValueRef val = LLVMBuildExtractElement (builder, rhs, const_int32 (index), "");
mono_llvm_build_store (builder, val, addr, FALSE, LLVM_BARRIER_NONE);
break;
}
case OP_SSE_STORE: {
LLVMValueRef dst_vec = convert (ctx, lhs, LLVMPointerType (LLVMTypeOf (rhs), 0));
mono_llvm_build_aligned_store (builder, rhs, dst_vec, FALSE, ins->inst_c0);
break;
}
case OP_SSE_STORES: {
LLVMValueRef first_elem = LLVMBuildExtractElement (builder, rhs, LLVMConstInt (LLVMInt32Type (), 0, FALSE), "");
LLVMValueRef dst = convert (ctx, lhs, LLVMPointerType (LLVMTypeOf (first_elem), 0));
mono_llvm_build_aligned_store (builder, first_elem, dst, FALSE, 1);
break;
}
case OP_SSE_MOVNTPS: {
LLVMValueRef addr = convert (ctx, lhs, LLVMPointerType (LLVMTypeOf (rhs), 0));
LLVMValueRef store = mono_llvm_build_aligned_store (builder, rhs, addr, FALSE, ins->inst_c0);
set_nontemporal_flag (store);
break;
}
case OP_SSE_PREFETCHT0: {
LLVMValueRef addr = convert (ctx, lhs, LLVMPointerType (LLVMInt8Type (), 0));
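		/* llvm.prefetch (addr, rw=0 (read), locality=3 (T0), cachetype=1 (data)) */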
LLVMValueRef args [] = { addr, const_int32 (0), const_int32 (3), const_int32 (1) };
call_intrins (ctx, INTRINS_PREFETCH, args, "");
break;
}
case OP_SSE_PREFETCHT1: {
LLVMValueRef addr = convert (ctx, lhs, LLVMPointerType (LLVMInt8Type (), 0));
LLVMValueRef args [] = { addr, const_int32 (0), const_int32 (2), const_int32 (1) };
call_intrins (ctx, INTRINS_PREFETCH, args, "");
break;
}
case OP_SSE_PREFETCHT2: {
LLVMValueRef addr = convert (ctx, lhs, LLVMPointerType (LLVMInt8Type (), 0));
LLVMValueRef args [] = { addr, const_int32 (0), const_int32 (1), const_int32 (1) };
call_intrins (ctx, INTRINS_PREFETCH, args, "");
break;
}
case OP_SSE_PREFETCHNTA: {
LLVMValueRef addr = convert (ctx, lhs, LLVMPointerType (LLVMInt8Type (), 0));
LLVMValueRef args [] = { addr, const_int32 (0), const_int32 (0), const_int32 (1) };
call_intrins (ctx, INTRINS_PREFETCH, args, "");
break;
}
case OP_SSE_OR: {
LLVMValueRef vec_lhs_i64 = convert (ctx, lhs, sse_i8_t);
LLVMValueRef vec_rhs_i64 = convert (ctx, rhs, sse_i8_t);
LLVMValueRef vec_and = LLVMBuildOr (builder, vec_lhs_i64, vec_rhs_i64, "");
values [ins->dreg] = LLVMBuildBitCast (builder, vec_and, type_to_sse_type (ins->inst_c1), "");
break;
}
case OP_SSE_XOR: {
LLVMValueRef vec_lhs_i64 = convert (ctx, lhs, sse_i8_t);
LLVMValueRef vec_rhs_i64 = convert (ctx, rhs, sse_i8_t);
LLVMValueRef vec_and = LLVMBuildXor (builder, vec_lhs_i64, vec_rhs_i64, "");
values [ins->dreg] = LLVMBuildBitCast (builder, vec_and, type_to_sse_type (ins->inst_c1), "");
break;
}
case OP_SSE_AND: {
LLVMValueRef vec_lhs_i64 = convert (ctx, lhs, sse_i8_t);
LLVMValueRef vec_rhs_i64 = convert (ctx, rhs, sse_i8_t);
LLVMValueRef vec_and = LLVMBuildAnd (builder, vec_lhs_i64, vec_rhs_i64, "");
values [ins->dreg] = LLVMBuildBitCast (builder, vec_and, type_to_sse_type (ins->inst_c1), "");
break;
}
case OP_SSE_ANDN: {
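		/* andn: compute (~lhs) & rhs on the 64 bit lanes */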
LLVMValueRef minus_one [2];
minus_one [0] = LLVMConstInt (LLVMInt64Type (), -1, FALSE);
minus_one [1] = LLVMConstInt (LLVMInt64Type (), -1, FALSE);
LLVMValueRef vec_lhs_i64 = convert (ctx, lhs, sse_i8_t);
LLVMValueRef vec_xor = LLVMBuildXor (builder, vec_lhs_i64, LLVMConstVector (minus_one, 2), "");
LLVMValueRef vec_rhs_i64 = convert (ctx, rhs, sse_i8_t);
LLVMValueRef vec_and = LLVMBuildAnd (builder, vec_rhs_i64, vec_xor, "");
values [ins->dreg] = LLVMBuildBitCast (builder, vec_and, type_to_sse_type (ins->inst_c1), "");
break;
}
case OP_SSE_ADDSS:
case OP_SSE_SUBSS:
case OP_SSE_DIVSS:
case OP_SSE_MULSS:
case OP_SSE2_ADDSD:
case OP_SSE2_SUBSD:
case OP_SSE2_DIVSD:
case OP_SSE2_MULSD: {
LLVMValueRef v1 = LLVMBuildExtractElement (builder, lhs, LLVMConstInt (LLVMInt32Type (), 0, FALSE), "");
LLVMValueRef v2 = LLVMBuildExtractElement (builder, rhs, LLVMConstInt (LLVMInt32Type (), 0, FALSE), "");
LLVMValueRef v = NULL;
switch (ins->opcode) {
case OP_SSE_ADDSS:
case OP_SSE2_ADDSD:
v = LLVMBuildFAdd (builder, v1, v2, "");
break;
case OP_SSE_SUBSS:
case OP_SSE2_SUBSD:
v = LLVMBuildFSub (builder, v1, v2, "");
break;
case OP_SSE_DIVSS:
case OP_SSE2_DIVSD:
v = LLVMBuildFDiv (builder, v1, v2, "");
break;
case OP_SSE_MULSS:
case OP_SSE2_MULSD:
v = LLVMBuildFMul (builder, v1, v2, "");
break;
default:
g_assert_not_reached ();
}
values [ins->dreg] = LLVMBuildInsertElement (builder, lhs, v, LLVMConstInt (LLVMInt32Type (), 0, FALSE), "");
break;
}
case OP_SSE_CMPSS:
case OP_SSE2_CMPSD: {
int imm = -1;
gboolean swap = FALSE;
switch (ins->inst_c0) {
case CMP_EQ: imm = SSE_eq_ord_nosignal; break;
case CMP_GT: imm = SSE_lt_ord_signal; swap = TRUE; break;
case CMP_GE: imm = SSE_le_ord_signal; swap = TRUE; break;
case CMP_LT: imm = SSE_lt_ord_signal; break;
case CMP_LE: imm = SSE_le_ord_signal; break;
case CMP_GT_UN: imm = SSE_nle_unord_signal; break;
case CMP_GE_UN: imm = SSE_nlt_unord_signal; break;
case CMP_LT_UN: imm = SSE_nle_unord_signal; swap = TRUE; break;
case CMP_LE_UN: imm = SSE_nlt_unord_signal; swap = TRUE; break;
case CMP_NE: imm = SSE_neq_unord_nosignal; break;
case CMP_ORD: imm = SSE_ord_nosignal; break;
case CMP_UNORD: imm = SSE_unord_nosignal; break;
default: g_assert_not_reached (); break;
}
LLVMValueRef cmp = LLVMConstInt (LLVMInt8Type (), imm, FALSE);
LLVMValueRef args [] = { lhs, rhs, cmp };
if (swap) {
args [0] = rhs;
args [1] = lhs;
}
IntrinsicId id = (IntrinsicId) 0;
switch (ins->opcode) {
case OP_SSE_CMPSS: id = INTRINS_SSE_CMPSS; break;
case OP_SSE2_CMPSD: id = INTRINS_SSE_CMPSD; break;
default: g_assert_not_reached (); break;
}
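		/* Keep lane 0 of the compare result and take the upper lanes from lhs (needed when the operands were swapped) */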
int elements = LLVMGetVectorSize (LLVMTypeOf (lhs));
int mask_values [MAX_VECTOR_ELEMS] = { 0 };
for (int i = 1; i < elements; ++i) {
mask_values [i] = elements + i;
}
LLVMValueRef result = call_intrins (ctx, id, args, "");
result = LLVMBuildShuffleVector (builder, result, lhs, create_const_vector_i32 (mask_values, elements), "");
values [ins->dreg] = result;
break;
}
case OP_SSE_COMISS: {
LLVMValueRef args [] = { lhs, rhs };
IntrinsicId id = (IntrinsicId)0;
switch (ins->inst_c0) {
case CMP_EQ: id = INTRINS_SSE_COMIEQ_SS; break;
case CMP_GT: id = INTRINS_SSE_COMIGT_SS; break;
case CMP_GE: id = INTRINS_SSE_COMIGE_SS; break;
case CMP_LT: id = INTRINS_SSE_COMILT_SS; break;
case CMP_LE: id = INTRINS_SSE_COMILE_SS; break;
case CMP_NE: id = INTRINS_SSE_COMINEQ_SS; break;
default: g_assert_not_reached (); break;
}
values [ins->dreg] = call_intrins (ctx, id, args, "");
break;
}
case OP_SSE_UCOMISS: {
LLVMValueRef args [] = { lhs, rhs };
IntrinsicId id = (IntrinsicId)0;
switch (ins->inst_c0) {
case CMP_EQ: id = INTRINS_SSE_UCOMIEQ_SS; break;
case CMP_GT: id = INTRINS_SSE_UCOMIGT_SS; break;
case CMP_GE: id = INTRINS_SSE_UCOMIGE_SS; break;
case CMP_LT: id = INTRINS_SSE_UCOMILT_SS; break;
case CMP_LE: id = INTRINS_SSE_UCOMILE_SS; break;
case CMP_NE: id = INTRINS_SSE_UCOMINEQ_SS; break;
default: g_assert_not_reached (); break;
}
values [ins->dreg] = call_intrins (ctx, id, args, "");
break;
}
case OP_SSE2_COMISD: {
LLVMValueRef args [] = { lhs, rhs };
IntrinsicId id = (IntrinsicId)0;
switch (ins->inst_c0) {
case CMP_EQ: id = INTRINS_SSE_COMIEQ_SD; break;
case CMP_GT: id = INTRINS_SSE_COMIGT_SD; break;
case CMP_GE: id = INTRINS_SSE_COMIGE_SD; break;
case CMP_LT: id = INTRINS_SSE_COMILT_SD; break;
case CMP_LE: id = INTRINS_SSE_COMILE_SD; break;
case CMP_NE: id = INTRINS_SSE_COMINEQ_SD; break;
default: g_assert_not_reached (); break;
}
values [ins->dreg] = call_intrins (ctx, id, args, "");
break;
}
case OP_SSE2_UCOMISD: {
LLVMValueRef args [] = { lhs, rhs };
IntrinsicId id = (IntrinsicId)0;
switch (ins->inst_c0) {
case CMP_EQ: id = INTRINS_SSE_UCOMIEQ_SD; break;
case CMP_GT: id = INTRINS_SSE_UCOMIGT_SD; break;
case CMP_GE: id = INTRINS_SSE_UCOMIGE_SD; break;
case CMP_LT: id = INTRINS_SSE_UCOMILT_SD; break;
case CMP_LE: id = INTRINS_SSE_UCOMILE_SD; break;
case CMP_NE: id = INTRINS_SSE_UCOMINEQ_SD; break;
default: g_assert_not_reached (); break;
}
values [ins->dreg] = call_intrins (ctx, id, args, "");
break;
}
case OP_SSE_CVTSI2SS:
case OP_SSE_CVTSI2SS64:
case OP_SSE2_CVTSI2SD:
case OP_SSE2_CVTSI2SD64: {
LLVMTypeRef ty = LLVMFloatType ();
switch (ins->opcode) {
case OP_SSE2_CVTSI2SD:
case OP_SSE2_CVTSI2SD64:
ty = LLVMDoubleType ();
break;
}
LLVMValueRef fp = LLVMBuildSIToFP (builder, rhs, ty, "");
values [ins->dreg] = LLVMBuildInsertElement (builder, lhs, fp, const_int32 (0), dname);
break;
}
case OP_SSE2_PMULUDQ: {
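		/* pmuludq: zero the high 32 bits of each 64 bit lane, then multiply, giving an unsigned 32x32->64 product */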
LLVMValueRef i32_max = LLVMConstInt (LLVMInt64Type (), UINT32_MAX, FALSE);
LLVMValueRef maskvals [] = { i32_max, i32_max };
LLVMValueRef mask = LLVMConstVector (maskvals, 2);
LLVMValueRef l = LLVMBuildAnd (builder, convert (ctx, lhs, sse_i8_t), mask, "");
LLVMValueRef r = LLVMBuildAnd (builder, convert (ctx, rhs, sse_i8_t), mask, "");
values [ins->dreg] = LLVMBuildNUWMul (builder, l, r, dname);
break;
}
case OP_SSE_SQRTSS:
case OP_SSE2_SQRTSD: {
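		/* Compute the sqrt of lane 0 of sreg2 and insert it into lane 0 of sreg1 */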
LLVMValueRef upper = values [ins->sreg1];
LLVMValueRef lower = values [ins->sreg2];
LLVMValueRef scalar = LLVMBuildExtractElement (builder, lower, const_int32 (0), "");
LLVMValueRef result = call_intrins (ctx, simd_ins_to_intrins (ins->opcode), &scalar, dname);
values [ins->dreg] = LLVMBuildInsertElement (builder, upper, result, const_int32 (0), "");
break;
}
case OP_SSE_RCPSS:
case OP_SSE_RSQRTSS: {
IntrinsicId id = (IntrinsicId)0;
switch (ins->opcode) {
case OP_SSE_RCPSS: id = INTRINS_SSE_RCP_SS; break;
case OP_SSE_RSQRTSS: id = INTRINS_SSE_RSQRT_SS; break;
default: g_assert_not_reached (); break;
};
LLVMValueRef result = call_intrins (ctx, id, &rhs, dname);
const int mask[] = { 0, 5, 6, 7 };
LLVMValueRef shufmask = create_const_vector_i32 (mask, 4);
values [ins->dreg] = LLVMBuildShuffleVector (builder, result, lhs, shufmask, "");
break;
}
case OP_XOP: {
IntrinsicId id = (IntrinsicId)ins->inst_c0;
call_intrins (ctx, id, NULL, "");
break;
}
case OP_XOP_X_I:
case OP_XOP_X_X:
case OP_XOP_I4_X:
case OP_XOP_I8_X:
case OP_XOP_X_X_X:
case OP_XOP_X_X_I4:
case OP_XOP_X_X_I8: {
IntrinsicId id = (IntrinsicId)ins->inst_c0;
LLVMValueRef args [] = { lhs, rhs };
values [ins->dreg] = call_intrins (ctx, id, args, "");
break;
}
case OP_XOP_I4_X_X: {
gboolean to_i8_t = FALSE;
gboolean ret_bool = FALSE;
IntrinsicId id = (IntrinsicId)ins->inst_c0;
switch (ins->inst_c0) {
case INTRINS_SSE_TESTC: to_i8_t = TRUE; ret_bool = TRUE; break;
case INTRINS_SSE_TESTZ: to_i8_t = TRUE; ret_bool = TRUE; break;
case INTRINS_SSE_TESTNZ: to_i8_t = TRUE; ret_bool = TRUE; break;
default: g_assert_not_reached (); break;
}
LLVMValueRef args [] = { lhs, rhs };
if (to_i8_t) {
args [0] = convert (ctx, args [0], sse_i8_t);
args [1] = convert (ctx, args [1], sse_i8_t);
}
LLVMValueRef call = call_intrins (ctx, id, args, "");
if (ret_bool) {
// if return type is bool (it's still i32) we need to normalize it to 1/0
LLVMValueRef cmp_zero = LLVMBuildICmp (builder, LLVMIntNE, call, LLVMConstInt (LLVMInt32Type (), 0, FALSE), "");
values [ins->dreg] = LLVMBuildZExt (builder, cmp_zero, LLVMInt8Type (), "");
} else {
values [ins->dreg] = call;
}
break;
}
case OP_SSE2_MASKMOVDQU: {
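		/* maskmovdqu: byte-masked store of src to dstaddr; only the bytes whose mask high bit is set are written */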
LLVMTypeRef i8ptr = LLVMPointerType (LLVMInt8Type (), 0);
LLVMValueRef dstaddr = convert (ctx, values [ins->sreg3], i8ptr);
LLVMValueRef src = convert (ctx, lhs, sse_i1_t);
LLVMValueRef mask = convert (ctx, rhs, sse_i1_t);
LLVMValueRef args[] = { src, mask, dstaddr };
call_intrins (ctx, INTRINS_SSE_MASKMOVDQU, args, "");
break;
}
case OP_PADDB_SAT:
case OP_PADDW_SAT:
case OP_PSUBB_SAT:
case OP_PSUBW_SAT:
case OP_PADDB_SAT_UN:
case OP_PADDW_SAT_UN:
case OP_PSUBB_SAT_UN:
case OP_PSUBW_SAT_UN:
case OP_SSE2_ADDS:
case OP_SSE2_SUBS: {
IntrinsicId id = (IntrinsicId)0;
int type = 0;
gboolean is_add = TRUE;
switch (ins->opcode) {
case OP_PADDB_SAT: type = MONO_TYPE_I1; break;
case OP_PADDW_SAT: type = MONO_TYPE_I2; break;
case OP_PSUBB_SAT: type = MONO_TYPE_I1; is_add = FALSE; break;
case OP_PSUBW_SAT: type = MONO_TYPE_I2; is_add = FALSE; break;
case OP_PADDB_SAT_UN: type = MONO_TYPE_U1; break;
case OP_PADDW_SAT_UN: type = MONO_TYPE_U2; break;
case OP_PSUBB_SAT_UN: type = MONO_TYPE_U1; is_add = FALSE; break;
case OP_PSUBW_SAT_UN: type = MONO_TYPE_U2; is_add = FALSE; break;
case OP_SSE2_ADDS: type = ins->inst_c1; break;
case OP_SSE2_SUBS: type = ins->inst_c1; is_add = FALSE; break;
default: g_assert_not_reached ();
}
if (is_add) {
switch (type) {
case MONO_TYPE_I1: id = INTRINS_SSE_SADD_SATI8; break;
case MONO_TYPE_U1: id = INTRINS_SSE_UADD_SATI8; break;
case MONO_TYPE_I2: id = INTRINS_SSE_SADD_SATI16; break;
case MONO_TYPE_U2: id = INTRINS_SSE_UADD_SATI16; break;
default: g_assert_not_reached (); break;
}
} else {
switch (type) {
case MONO_TYPE_I1: id = INTRINS_SSE_SSUB_SATI8; break;
case MONO_TYPE_U1: id = INTRINS_SSE_USUB_SATI8; break;
case MONO_TYPE_I2: id = INTRINS_SSE_SSUB_SATI16; break;
case MONO_TYPE_U2: id = INTRINS_SSE_USUB_SATI16; break;
default: g_assert_not_reached (); break;
}
}
LLVMTypeRef vecty = type_to_sse_type (type);
LLVMValueRef args [] = { convert (ctx, lhs, vecty), convert (ctx, rhs, vecty) };
LLVMValueRef result = call_intrins (ctx, id, args, dname);
values [ins->dreg] = convert (ctx, result, vecty);
break;
}
case OP_SSE2_PACKUS: {
LLVMValueRef args [2];
args [0] = convert (ctx, lhs, sse_i2_t);
args [1] = convert (ctx, rhs, sse_i2_t);
values [ins->dreg] = convert (ctx,
call_intrins (ctx, INTRINS_SSE_PACKUSWB, args, dname),
type_to_sse_type (ins->inst_c1));
break;
}
case OP_SSE2_SRLI: {
LLVMValueRef args [] = { lhs, rhs };
values [ins->dreg] = convert (ctx,
call_intrins (ctx, INTRINS_SSE_PSRLI_W, args, dname),
type_to_sse_type (ins->inst_c1));
break;
}
case OP_SSE2_PSLLDQ:
case OP_SSE2_PSRLDQ: {
LLVMBasicBlockRef bbs [16 + 1];
LLVMValueRef switch_ins;
LLVMValueRef value = lhs;
LLVMValueRef index = rhs;
LLVMValueRef phi_values [16 + 1];
LLVMTypeRef t = sse_i1_t;
int nelems = 16;
int i;
gboolean shift_right = (ins->opcode == OP_SSE2_PSRLDQ);
value = convert (ctx, value, t);
// No corresponding LLVM intrinsics
// FIXME: Optimize const count
for (i = 0; i < nelems; ++i)
bbs [i] = gen_bb (ctx, "PSLLDQ_CASE_BB");
bbs [nelems] = gen_bb (ctx, "PSLLDQ_DEF_BB");
cbb = gen_bb (ctx, "PSLLDQ_COND_BB");
switch_ins = LLVMBuildSwitch (builder, index, bbs [nelems], 0);
for (i = 0; i < nelems; ++i) {
LLVMAddCase (switch_ins, LLVMConstInt (LLVMInt32Type (), i, FALSE), bbs [i]);
LLVMPositionBuilderAtEnd (builder, bbs [i]);
int mask_values [16];
// Implement shift using a shuffle
if (shift_right) {
for (int j = 0; j < nelems - i; ++j)
mask_values [j] = i + j;
				for (int j = nelems - i; j < nelems; ++j)
mask_values [j] = nelems;
} else {
for (int j = 0; j < i; ++j)
mask_values [j] = nelems;
for (int j = 0; j < nelems - i; ++j)
mask_values [j + i] = j;
}
phi_values [i] = LLVMBuildShuffleVector (builder, value, LLVMGetUndef (t), create_const_vector_i32 (mask_values, nelems), "");
LLVMBuildBr (builder, cbb);
}
/* Default case */
LLVMPositionBuilderAtEnd (builder, bbs [nelems]);
phi_values [nelems] = LLVMConstNull (t);
LLVMBuildBr (builder, cbb);
LLVMPositionBuilderAtEnd (builder, cbb);
values [ins->dreg] = LLVMBuildPhi (builder, LLVMTypeOf (phi_values [0]), "");
LLVMAddIncoming (values [ins->dreg], phi_values, bbs, nelems + 1);
values [ins->dreg] = convert (ctx, values [ins->dreg], type_to_sse_type (ins->inst_c1));
ctx->bblocks [bb->block_num].end_bblock = cbb;
break;
}
case OP_SSE2_PSRAW_IMM:
case OP_SSE2_PSRAD_IMM:
case OP_SSE2_PSRLW_IMM:
case OP_SSE2_PSRLD_IMM:
case OP_SSE2_PSRLQ_IMM: {
LLVMValueRef value = lhs;
LLVMValueRef index = rhs;
IntrinsicId id;
// FIXME: Optimize const index case
/* Use the non-immediate version */
switch (ins->opcode) {
case OP_SSE2_PSRAW_IMM: id = INTRINS_SSE_PSRA_W; break;
case OP_SSE2_PSRAD_IMM: id = INTRINS_SSE_PSRA_D; break;
case OP_SSE2_PSRLW_IMM: id = INTRINS_SSE_PSRL_W; break;
case OP_SSE2_PSRLD_IMM: id = INTRINS_SSE_PSRL_D; break;
case OP_SSE2_PSRLQ_IMM: id = INTRINS_SSE_PSRL_Q; break;
default: g_assert_not_reached (); break;
}
LLVMTypeRef t = LLVMTypeOf (value);
LLVMValueRef index_vect = LLVMBuildInsertElement (builder, LLVMConstNull (t), convert (ctx, index, LLVMGetElementType (t)), const_int32 (0), "");
LLVMValueRef args [] = { value, index_vect };
values [ins->dreg] = call_intrins (ctx, id, args, "");
break;
}
case OP_SSE_SHUFPS:
case OP_SSE2_SHUFPD:
case OP_SSE2_PSHUFD:
case OP_SSE2_PSHUFHW:
case OP_SSE2_PSHUFLW: {
LLVMTypeRef ret_t = LLVMTypeOf (lhs);
LLVMValueRef l = lhs;
LLVMValueRef r = rhs;
LLVMValueRef ctl = arg3;
const char *oname = "";
int ncases = 0;
switch (ins->opcode) {
case OP_SSE_SHUFPS: ncases = 256; break;
case OP_SSE2_SHUFPD: ncases = 4; break;
case OP_SSE2_PSHUFD: case OP_SSE2_PSHUFHW: case OP_SSE2_PSHUFLW: ncases = 256; r = lhs; ctl = rhs; break;
}
switch (ins->opcode) {
case OP_SSE_SHUFPS: oname = "sse_shufps"; break;
case OP_SSE2_SHUFPD: oname = "sse2_shufpd"; break;
case OP_SSE2_PSHUFD: oname = "sse2_pshufd"; break;
case OP_SSE2_PSHUFHW: oname = "sse2_pshufhw"; break;
case OP_SSE2_PSHUFLW: oname = "sse2_pshuflw"; break;
}
ctl = LLVMBuildAnd (builder, ctl, const_int32 (ncases - 1), "");
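		/* The control byte is only known at runtime, so emit a switch over all possible immediate values */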
ImmediateUnrollCtx ictx = immediate_unroll_begin (ctx, bb, ncases, ctl, ret_t, oname);
int mask_values [8];
int mask_len = 0;
int i = 0;
while (immediate_unroll_next (&ictx, &i)) {
switch (ins->opcode) {
case OP_SSE_SHUFPS:
mask_len = 4;
mask_values [0] = ((i >> 0) & 0x3) + 0; // take two elements from lhs
mask_values [1] = ((i >> 2) & 0x3) + 0;
mask_values [2] = ((i >> 4) & 0x3) + 4; // and two from rhs
mask_values [3] = ((i >> 6) & 0x3) + 4;
break;
case OP_SSE2_SHUFPD:
mask_len = 2;
mask_values [0] = ((i >> 0) & 0x1) + 0;
mask_values [1] = ((i >> 1) & 0x1) + 2;
break;
case OP_SSE2_PSHUFD:
/*
				 * Each 2 bits in the mask selects 1 dword from the source and copies it to the
* destination.
*/
mask_len = 4;
for (int j = 0; j < 4; ++j) {
int windex = (i >> (j * 2)) & 0x3;
mask_values [j] = windex;
}
break;
case OP_SSE2_PSHUFHW:
/*
* Each 2 bits in mask selects 1 word from the high quadword of the source and copies it to the
* high quadword of the destination.
*/
mask_len = 8;
/* The low quadword stays the same */
for (int j = 0; j < 4; ++j)
mask_values [j] = j;
for (int j = 0; j < 4; ++j) {
int windex = (i >> (j * 2)) & 0x3;
mask_values [j + 4] = 4 + windex;
}
break;
case OP_SSE2_PSHUFLW:
mask_len = 8;
/* The high quadword stays the same */
for (int j = 0; j < 4; ++j)
mask_values [j + 4] = j + 4;
for (int j = 0; j < 4; ++j) {
int windex = (i >> (j * 2)) & 0x3;
mask_values [j] = windex;
}
break;
}
LLVMValueRef mask = create_const_vector_i32 (mask_values, mask_len);
LLVMValueRef result = LLVMBuildShuffleVector (builder, l, r, mask, oname);
immediate_unroll_commit (&ictx, i, result);
}
immediate_unroll_default (&ictx);
immediate_unroll_commit_default (&ictx, LLVMGetUndef (ret_t));
values [ins->dreg] = immediate_unroll_end (&ictx, &cbb);
break;
}
case OP_SSE3_MOVDDUP: {
int mask [] = { 0, 0 };
values [ins->dreg] = LLVMBuildShuffleVector (builder, lhs,
LLVMGetUndef (LLVMTypeOf (lhs)),
create_const_vector_i32 (mask, 2), "");
break;
}
case OP_SSE3_MOVDDUP_MEM: {
LLVMValueRef undef = LLVMGetUndef (v128_r8_t);
LLVMValueRef addr = convert (ctx, lhs, LLVMPointerType (r8_t, 0));
LLVMValueRef elem = mono_llvm_build_aligned_load (builder, addr, "sse3_movddup_mem", FALSE, 1);
LLVMValueRef val = LLVMBuildInsertElement (builder, undef, elem, const_int32 (0), "sse3_movddup_mem");
values [ins->dreg] = LLVMBuildShuffleVector (builder, val, undef, LLVMConstNull (LLVMVectorType (i4_t, 2)), "sse3_movddup_mem");
break;
}
case OP_SSE3_MOVSHDUP: {
int mask [] = { 1, 1, 3, 3 };
values [ins->dreg] = LLVMBuildShuffleVector (builder, lhs, LLVMConstNull (LLVMTypeOf (lhs)), create_const_vector_i32 (mask, 4), "");
break;
}
case OP_SSE3_MOVSLDUP: {
int mask [] = { 0, 0, 2, 2 };
values [ins->dreg] = LLVMBuildShuffleVector (builder, lhs, LLVMConstNull (LLVMTypeOf (lhs)), create_const_vector_i32 (mask, 4), "");
break;
}
case OP_SSSE3_SHUFFLE: {
LLVMValueRef args [] = { lhs, rhs };
values [ins->dreg] = call_intrins (ctx, INTRINS_SSE_PSHUFB, args, dname);
break;
}
case OP_SSSE3_ABS: {
// %sub = sub <16 x i8> zeroinitializer, %arg
// %cmp = icmp sgt <16 x i8> %arg, zeroinitializer
// %abs = select <16 x i1> %cmp, <16 x i8> %arg, <16 x i8> %sub
LLVMTypeRef typ = type_to_sse_type (ins->inst_c1);
LLVMValueRef sub = LLVMBuildSub(builder, LLVMConstNull(typ), lhs, "");
LLVMValueRef cmp = LLVMBuildICmp(builder, LLVMIntSGT, lhs, LLVMConstNull(typ), "");
LLVMValueRef abs = LLVMBuildSelect (builder, cmp, lhs, sub, "");
values [ins->dreg] = convert (ctx, abs, typ);
break;
}
case OP_SSSE3_ALIGNR: {
LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass);
LLVMValueRef zero = LLVMConstNull (v128_i1_t);
LLVMValueRef hivec = convert (ctx, lhs, v128_i1_t);
LLVMValueRef lovec = convert (ctx, rhs, v128_i1_t);
LLVMValueRef rshift_amount = convert (ctx, arg3, i1_t);
ImmediateUnrollCtx ictx = immediate_unroll_begin (ctx, bb, 32, rshift_amount, v128_i1_t, "ssse3_alignr");
LLVMValueRef mask_values [16]; // 128-bit vector, 8-bit elements, 16 total elements
int i = 0;
while (immediate_unroll_next (&ictx, &i)) {
LLVMValueRef hi = NULL;
LLVMValueRef lo = NULL;
if (i <= 16) {
for (int j = 0; j < 16; j++)
mask_values [j] = const_int32 (i + j);
lo = lovec;
hi = hivec;
} else {
for (int j = 0; j < 16; j++)
mask_values [j] = const_int32 (i + j - 16);
lo = hivec;
hi = zero;
}
LLVMValueRef shuffled = LLVMBuildShuffleVector (builder, lo, hi, LLVMConstVector (mask_values, 16), "ssse3_alignr");
immediate_unroll_commit (&ictx, i, shuffled);
}
immediate_unroll_default (&ictx);
immediate_unroll_commit_default (&ictx, zero);
LLVMValueRef result = immediate_unroll_end (&ictx, &cbb);
values [ins->dreg] = convert (ctx, result, ret_t);
break;
}
case OP_SSE41_ROUNDP: {
LLVMValueRef args [] = { lhs, LLVMConstInt (LLVMInt32Type (), ins->inst_c0, FALSE) };
values [ins->dreg] = call_intrins (ctx, ins->inst_c1 == MONO_TYPE_R4 ? INTRINS_SSE_ROUNDPS : INTRINS_SSE_ROUNDPD, args, dname);
break;
}
case OP_SSE41_ROUNDS: {
LLVMValueRef args [3];
args [0] = lhs;
args [1] = rhs;
args [2] = LLVMConstInt (LLVMInt32Type (), ins->inst_c0, FALSE);
values [ins->dreg] = call_intrins (ctx, ins->inst_c1 == MONO_TYPE_R4 ? INTRINS_SSE_ROUNDSS : INTRINS_SSE_ROUNDSD, args, dname);
break;
}
case OP_SSE41_DPPS:
case OP_SSE41_DPPD: {
/* Bits 0, 1, 4, 5 are meaningful for the control mask
* in dppd; all bits are meaningful for dpps.
*/
LLVMTypeRef ret_t = NULL;
LLVMValueRef mask = NULL;
int mask_bits = 0;
int high_shift = 0;
int low_mask = 0;
IntrinsicId iid = (IntrinsicId) 0;
const char *oname = "";
switch (ins->opcode) {
case OP_SSE41_DPPS:
ret_t = v128_r4_t;
mask = const_int8 (0xff); // 0b11111111
mask_bits = 8;
high_shift = 4;
low_mask = 0xf;
iid = INTRINS_SSE_DPPS;
oname = "sse41_dpps";
break;
case OP_SSE41_DPPD:
ret_t = v128_r8_t;
mask = const_int8 (0x33); // 0b00110011
mask_bits = 4;
high_shift = 2;
low_mask = 0x3;
iid = INTRINS_SSE_DPPD;
oname = "sse41_dppd";
break;
}
LLVMValueRef args [] = { lhs, rhs, NULL };
LLVMValueRef index = LLVMBuildAnd (builder, convert (ctx, arg3, i1_t), mask, oname);
ImmediateUnrollCtx ictx = immediate_unroll_begin (ctx, bb, 1 << mask_bits, index, ret_t, oname);
int i = 0;
while (immediate_unroll_next (&ictx, &i)) {
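			/* Re-expand the packed selector bits into the control byte (a no-op for dpps; bits 0-1 and 4-5 for dppd) */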
int imm = ((i >> high_shift) << 4) | (i & low_mask);
args [2] = const_int8 (imm);
LLVMValueRef result = call_intrins (ctx, iid, args, dname);
immediate_unroll_commit (&ictx, imm, result);
}
immediate_unroll_default (&ictx);
immediate_unroll_commit_default (&ictx, LLVMGetUndef (ret_t));
values [ins->dreg] = immediate_unroll_end (&ictx, &cbb);
break;
}
case OP_SSE41_MPSADBW: {
LLVMValueRef args [] = {
convert (ctx, lhs, sse_i1_t),
convert (ctx, rhs, sse_i1_t),
NULL,
};
LLVMValueRef ctl = convert (ctx, arg3, i1_t);
// Only 3 bits (bits 0-2) are used by mpsadbw and llvm.x86.sse41.mpsadbw
int used_bits = 0x7;
ctl = LLVMBuildAnd (builder, ctl, const_int8 (used_bits), "sse41_mpsadbw");
ImmediateUnrollCtx ictx = immediate_unroll_begin (ctx, bb, used_bits + 1, ctl, v128_i2_t, "sse41_mpsadbw");
int i = 0;
while (immediate_unroll_next (&ictx, &i)) {
args [2] = const_int8 (i);
LLVMValueRef result = call_intrins (ctx, INTRINS_SSE_MPSADBW, args, "sse41_mpsadbw");
immediate_unroll_commit (&ictx, i, result);
}
immediate_unroll_unreachable_default (&ictx);
values [ins->dreg] = immediate_unroll_end (&ictx, &cbb);
break;
}
case OP_SSE41_INSERTPS: {
LLVMValueRef ctl = convert (ctx, arg3, i1_t);
LLVMValueRef args [] = { lhs, rhs, NULL };
ImmediateUnrollCtx ictx = immediate_unroll_begin (ctx, bb, 256, ctl, v128_r4_t, "sse41_insertps");
int i = 0;
while (immediate_unroll_next (&ictx, &i)) {
args [2] = const_int8 (i);
LLVMValueRef result = call_intrins (ctx, INTRINS_SSE_INSERTPS, args, dname);
immediate_unroll_commit (&ictx, i, result);
}
immediate_unroll_unreachable_default (&ictx);
values [ins->dreg] = immediate_unroll_end (&ictx, &cbb);
break;
}
case OP_SSE41_BLEND: {
LLVMTypeRef ret_t = LLVMTypeOf (lhs);
int nelem = LLVMGetVectorSize (ret_t);
g_assert (nelem >= 2 && nelem <= 8); // I2, U2, R4, R8
int unique_ctl_patterns = 1 << nelem;
int ctlmask = unique_ctl_patterns - 1;
LLVMValueRef ctl = convert (ctx, arg3, i1_t);
ctl = LLVMBuildAnd (builder, ctl, const_int8 (ctlmask), "sse41_blend");
ImmediateUnrollCtx ictx = immediate_unroll_begin (ctx, bb, unique_ctl_patterns, ctl, ret_t, "sse41_blend");
int i = 0;
int mask_values [MAX_VECTOR_ELEMS] = { 0 };
while (immediate_unroll_next (&ictx, &i)) {
for (int lane = 0; lane < nelem; ++lane) {
				// lane comes from rhs if the n-th bit of the control byte is set, otherwise from lhs
gboolean bit_set = (i & (1 << lane)) >> lane;
mask_values [lane] = lane + (bit_set ? nelem : 0);
}
LLVMValueRef mask = create_const_vector_i32 (mask_values, nelem);
LLVMValueRef result = LLVMBuildShuffleVector (builder, lhs, rhs, mask, "sse41_blend");
immediate_unroll_commit (&ictx, i, result);
}
immediate_unroll_default (&ictx);
immediate_unroll_commit_default (&ictx, LLVMGetUndef (ret_t));
values [ins->dreg] = immediate_unroll_end (&ictx, &cbb);
break;
}
case OP_SSE41_BLENDV: {
LLVMValueRef args [] = { lhs, rhs, values [ins->sreg3] };
if (ins->inst_c1 == MONO_TYPE_R4) {
values [ins->dreg] = call_intrins (ctx, INTRINS_SSE_BLENDVPS, args, dname);
} else if (ins->inst_c1 == MONO_TYPE_R8) {
values [ins->dreg] = call_intrins (ctx, INTRINS_SSE_BLENDVPD, args, dname);
} else {
			// for other non-fp types, just convert to <16 x i8> and pass to @llvm.x86.sse41.pblendvb
args [0] = LLVMBuildBitCast (ctx->builder, args [0], sse_i1_t, "");
args [1] = LLVMBuildBitCast (ctx->builder, args [1], sse_i1_t, "");
args [2] = LLVMBuildBitCast (ctx->builder, args [2], sse_i1_t, "");
values [ins->dreg] = call_intrins (ctx, INTRINS_SSE_PBLENDVB, args, dname);
}
break;
}
case OP_SSE_CVTII: {
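		/* pmovsx/pmovzx: load or bitcast the source, shuffle out the low elements, then sign/zero extend them to the destination width */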
gboolean is_signed = (ins->inst_c1 == MONO_TYPE_I1) ||
(ins->inst_c1 == MONO_TYPE_I2) || (ins->inst_c1 == MONO_TYPE_I4);
LLVMTypeRef vec_type;
if ((ins->inst_c1 == MONO_TYPE_I1) || (ins->inst_c1 == MONO_TYPE_U1))
vec_type = sse_i1_t;
else if ((ins->inst_c1 == MONO_TYPE_I2) || (ins->inst_c1 == MONO_TYPE_U2))
vec_type = sse_i2_t;
else
vec_type = sse_i4_t;
LLVMValueRef value;
if (LLVMGetTypeKind (LLVMTypeOf (lhs)) != LLVMVectorTypeKind) {
LLVMValueRef bitcasted = LLVMBuildBitCast (ctx->builder, lhs, LLVMPointerType (vec_type, 0), "");
value = mono_llvm_build_aligned_load (builder, bitcasted, "", FALSE, 1);
} else {
value = LLVMBuildBitCast (ctx->builder, lhs, vec_type, "");
}
LLVMValueRef mask_vec;
LLVMTypeRef dst_type;
if (ins->inst_c0 == MONO_TYPE_I2) {
mask_vec = create_const_vector_i32 (mask_0_incr_1, 8);
dst_type = sse_i2_t;
} else if (ins->inst_c0 == MONO_TYPE_I4) {
mask_vec = create_const_vector_i32 (mask_0_incr_1, 4);
dst_type = sse_i4_t;
} else {
g_assert (ins->inst_c0 == MONO_TYPE_I8);
mask_vec = create_const_vector_i32 (mask_0_incr_1, 2);
dst_type = sse_i8_t;
}
LLVMValueRef shuffled = LLVMBuildShuffleVector (builder, value,
LLVMGetUndef (vec_type), mask_vec, "");
if (is_signed)
values [ins->dreg] = LLVMBuildSExt (ctx->builder, shuffled, dst_type, "");
else
values [ins->dreg] = LLVMBuildZExt (ctx->builder, shuffled, dst_type, "");
break;
}
case OP_SSE41_LOADANT: {
LLVMValueRef dst_ptr = convert (ctx, lhs, LLVMPointerType (primitive_type_to_llvm_type (inst_c1_type (ins)), 0));
LLVMValueRef dst_vec = LLVMBuildBitCast (builder, dst_ptr, LLVMPointerType (type_to_sse_type (ins->inst_c1), 0), "");
LLVMValueRef load = mono_llvm_build_aligned_load (builder, dst_vec, "", FALSE, 16);
set_nontemporal_flag (load);
values [ins->dreg] = load;
break;
}
case OP_SSE41_MUL: {
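		/* pmuldq: sign extend the low 32 bits of each 64 bit lane using a shl/ashr pair, then do a signed multiply */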
const int shift_vals [] = { 32, 32 };
const LLVMValueRef args [] = {
convert (ctx, lhs, sse_i8_t),
convert (ctx, rhs, sse_i8_t),
};
LLVMValueRef mul_args [2] = { 0 };
LLVMValueRef shift_vec = create_const_vector (LLVMInt64Type (), shift_vals, 2);
for (int i = 0; i < 2; ++i) {
LLVMValueRef padded = LLVMBuildShl (builder, args [i], shift_vec, "");
mul_args[i] = mono_llvm_build_exact_ashr (builder, padded, shift_vec);
}
values [ins->dreg] = LLVMBuildNSWMul (builder, mul_args [0], mul_args [1], dname);
break;
}
case OP_SSE41_MULLO: {
values [ins->dreg] = LLVMBuildMul (ctx->builder, lhs, rhs, "");
break;
}
case OP_SSE42_CRC32:
case OP_SSE42_CRC64: {
LLVMValueRef args [2];
args [0] = lhs;
args [1] = convert (ctx, rhs, primitive_type_to_llvm_type (ins->inst_c0));
IntrinsicId id;
switch (ins->inst_c0) {
case MONO_TYPE_U1: id = INTRINS_SSE_CRC32_32_8; break;
case MONO_TYPE_U2: id = INTRINS_SSE_CRC32_32_16; break;
case MONO_TYPE_U4: id = INTRINS_SSE_CRC32_32_32; break;
case MONO_TYPE_U8: id = INTRINS_SSE_CRC32_64_64; break;
default: g_assert_not_reached (); break;
}
values [ins->dreg] = call_intrins (ctx, id, args, "");
break;
}
case OP_PCLMULQDQ: {
LLVMValueRef args [] = { lhs, rhs, NULL };
LLVMValueRef ctl = convert (ctx, arg3, i1_t);
// Only bits 0 and 4 of the immediate operand are used by PCLMULQDQ.
ctl = LLVMBuildAnd (builder, ctl, const_int8 (0x11), "pclmulqdq");
ImmediateUnrollCtx ictx = immediate_unroll_begin (ctx, bb, 1 << 2, ctl, v128_i8_t, "pclmulqdq");
int i = 0;
while (immediate_unroll_next (&ictx, &i)) {
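			/* Spread the two unrolled selector bits back out to immediate bits 0 and 4 */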
int imm = ((i & 0x2) << 3) | (i & 0x1);
args [2] = const_int8 (imm);
LLVMValueRef result = call_intrins (ctx, INTRINS_PCLMULQDQ, args, "pclmulqdq");
immediate_unroll_commit (&ictx, imm, result);
}
immediate_unroll_unreachable_default (&ictx);
values [ins->dreg] = immediate_unroll_end (&ictx, &cbb);
break;
}
case OP_AES_KEYGENASSIST: {
LLVMValueRef roundconstant = convert (ctx, rhs, i1_t);
LLVMValueRef args [] = { convert (ctx, lhs, v128_i8_t), NULL };
ImmediateUnrollCtx ictx = immediate_unroll_begin (ctx, bb, 256, roundconstant, v128_i8_t, "aes_keygenassist");
int i = 0;
while (immediate_unroll_next (&ictx, &i)) {
args [1] = const_int8 (i);
LLVMValueRef result = call_intrins (ctx, INTRINS_AESNI_AESKEYGENASSIST, args, "aes_keygenassist");
immediate_unroll_commit (&ictx, i, result);
}
immediate_unroll_unreachable_default (&ictx);
LLVMValueRef result = immediate_unroll_end (&ictx, &cbb);
values [ins->dreg] = convert (ctx, result, v128_i1_t);
break;
}
#endif
case OP_XCOMPARE_FP: {
LLVMRealPredicate pred = fpcond_to_llvm_cond [ins->inst_c0];
LLVMValueRef cmp = LLVMBuildFCmp (builder, pred, lhs, rhs, "");
int nelems = LLVMGetVectorSize (LLVMTypeOf (cmp));
g_assert (LLVMTypeOf (lhs) == LLVMTypeOf (rhs));
if (ins->inst_c1 == MONO_TYPE_R8)
values [ins->dreg] = LLVMBuildBitCast (builder, LLVMBuildSExt (builder, cmp, LLVMVectorType (LLVMInt64Type (), nelems), ""), LLVMTypeOf (lhs), "");
else
values [ins->dreg] = LLVMBuildBitCast (builder, LLVMBuildSExt (builder, cmp, LLVMVectorType (LLVMInt32Type (), nelems), ""), LLVMTypeOf (lhs), "");
break;
}
case OP_XCOMPARE: {
LLVMIntPredicate pred = cond_to_llvm_cond [ins->inst_c0];
LLVMValueRef cmp = LLVMBuildICmp (builder, pred, lhs, rhs, "");
g_assert (LLVMTypeOf (lhs) == LLVMTypeOf (rhs));
values [ins->dreg] = LLVMBuildSExt (builder, cmp, LLVMTypeOf (lhs), "");
break;
}
case OP_POPCNT32:
values [ins->dreg] = call_intrins (ctx, INTRINS_CTPOP_I32, &lhs, "");
break;
case OP_POPCNT64:
values [ins->dreg] = call_intrins (ctx, INTRINS_CTPOP_I64, &lhs, "");
break;
case OP_CTTZ32:
case OP_CTTZ64: {
LLVMValueRef args [2];
args [0] = lhs;
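		/* The i1 false argument makes cttz (0) well defined: it returns the operand width */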
args [1] = LLVMConstInt (LLVMInt1Type (), 0, FALSE);
values [ins->dreg] = call_intrins (ctx, ins->opcode == OP_CTTZ32 ? INTRINS_CTTZ_I32 : INTRINS_CTTZ_I64, args, "");
break;
}
case OP_BMI1_BEXTR32:
case OP_BMI1_BEXTR64: {
LLVMValueRef args [2];
args [0] = lhs;
args [1] = convert (ctx, rhs, ins->opcode == OP_BMI1_BEXTR32 ? i4_t : i8_t); // cast ushort to u32/u64
values [ins->dreg] = call_intrins (ctx, ins->opcode == OP_BMI1_BEXTR32 ? INTRINS_BEXTR_I32 : INTRINS_BEXTR_I64, args, "");
break;
}
case OP_BZHI32:
case OP_BZHI64: {
LLVMValueRef args [2];
args [0] = lhs;
args [1] = rhs;
values [ins->dreg] = call_intrins (ctx, ins->opcode == OP_BZHI32 ? INTRINS_BZHI_I32 : INTRINS_BZHI_I64, args, "");
break;
}
case OP_MULX_H32:
case OP_MULX_H64:
case OP_MULX_HL32:
case OP_MULX_HL64: {
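		/* mulx: widen both operands to i128 and multiply; the low half is optionally stored through arg3, the high half is returned */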
gboolean is_64 = ins->opcode == OP_MULX_H64 || ins->opcode == OP_MULX_HL64;
gboolean only_high = ins->opcode == OP_MULX_H32 || ins->opcode == OP_MULX_H64;
LLVMValueRef lx = LLVMBuildZExt (ctx->builder, lhs, LLVMInt128Type (), "");
LLVMValueRef rx = LLVMBuildZExt (ctx->builder, rhs, LLVMInt128Type (), "");
LLVMValueRef mulx = LLVMBuildMul (ctx->builder, lx, rx, "");
if (!only_high) {
LLVMValueRef addr = convert (ctx, arg3, LLVMPointerType (is_64 ? i8_t : i4_t, 0));
LLVMValueRef lowx = LLVMBuildTrunc (ctx->builder, mulx, is_64 ? LLVMInt64Type () : LLVMInt32Type (), "");
LLVMBuildStore (ctx->builder, lowx, addr);
}
LLVMValueRef shift = LLVMConstInt (LLVMInt128Type (), is_64 ? 64 : 32, FALSE);
LLVMValueRef highx = LLVMBuildLShr (ctx->builder, mulx, shift, "");
values [ins->dreg] = LLVMBuildTrunc (ctx->builder, highx, is_64 ? LLVMInt64Type () : LLVMInt32Type (), "");
break;
}
case OP_PEXT32:
case OP_PEXT64: {
LLVMValueRef args [2];
args [0] = lhs;
args [1] = rhs;
values [ins->dreg] = call_intrins (ctx, ins->opcode == OP_PEXT32 ? INTRINS_PEXT_I32 : INTRINS_PEXT_I64, args, "");
break;
}
case OP_PDEP32:
case OP_PDEP64: {
LLVMValueRef args [2];
args [0] = lhs;
args [1] = rhs;
values [ins->dreg] = call_intrins (ctx, ins->opcode == OP_PDEP32 ? INTRINS_PDEP_I32 : INTRINS_PDEP_I64, args, "");
break;
}
#endif /* defined(TARGET_X86) || defined(TARGET_AMD64) */
// Shared between ARM64 and X86
#if defined(TARGET_ARM64) || defined(TARGET_X86) || defined(TARGET_AMD64)
case OP_LZCNT32:
case OP_LZCNT64: {
IntrinsicId iid = ins->opcode == OP_LZCNT32 ? INTRINS_CTLZ_I32 : INTRINS_CTLZ_I64;
LLVMValueRef args [] = { lhs, const_int1 (FALSE) };
values [ins->dreg] = call_intrins (ctx, iid, args, "");
break;
}
#endif
#if defined(TARGET_ARM64) || defined(TARGET_X86) || defined(TARGET_AMD64) || defined(TARGET_WASM)
case OP_XEQUAL: {
LLVMTypeRef t;
LLVMValueRef cmp, mask [MAX_VECTOR_ELEMS], shuffle;
int nelems;
#if defined(TARGET_WASM)
/* The wasm code generator doesn't understand the shuffle/and code sequence below */
LLVMValueRef val;
if (LLVMIsNull (lhs) || LLVMIsNull (rhs)) {
val = LLVMIsNull (lhs) ? rhs : lhs;
nelems = LLVMGetVectorSize (LLVMTypeOf (lhs));
IntrinsicId intrins = (IntrinsicId)0;
switch (nelems) {
case 16:
intrins = INTRINS_WASM_ANYTRUE_V16;
break;
case 8:
intrins = INTRINS_WASM_ANYTRUE_V8;
break;
case 4:
intrins = INTRINS_WASM_ANYTRUE_V4;
break;
case 2:
intrins = INTRINS_WASM_ANYTRUE_V2;
break;
default:
g_assert_not_reached ();
}
/* res = !wasm.anytrue (val) */
values [ins->dreg] = call_intrins (ctx, intrins, &val, "");
values [ins->dreg] = LLVMBuildZExt (builder, LLVMBuildICmp (builder, LLVMIntEQ, values [ins->dreg], LLVMConstInt (LLVMInt32Type (), 0, FALSE), ""), LLVMInt32Type (), dname);
break;
}
#endif
LLVMTypeRef srcelemt = LLVMGetElementType (LLVMTypeOf (lhs));
		// %c = icmp eq <16 x i8> %a0, %a1
if (srcelemt == LLVMDoubleType () || srcelemt == LLVMFloatType ())
cmp = LLVMBuildFCmp (builder, LLVMRealOEQ, lhs, rhs, "");
else
cmp = LLVMBuildICmp (builder, LLVMIntEQ, lhs, rhs, "");
nelems = LLVMGetVectorSize (LLVMTypeOf (cmp));
LLVMTypeRef elemt;
if (srcelemt == LLVMDoubleType ())
elemt = LLVMInt64Type ();
else if (srcelemt == LLVMFloatType ())
elemt = LLVMInt32Type ();
else
elemt = srcelemt;
t = LLVMVectorType (elemt, nelems);
cmp = LLVMBuildSExt (builder, cmp, t, "");
// cmp is a <nelems x elemt> vector, each element is either 0xff... or 0
int half = nelems / 2;
while (half >= 1) {
			// AND the top and bottom halves into the bottom half
for (int i = 0; i < half; ++i)
mask [i] = LLVMConstInt (LLVMInt32Type (), half + i, FALSE);
for (int i = half; i < nelems; ++i)
mask [i] = LLVMConstInt (LLVMInt32Type (), 0, FALSE);
shuffle = LLVMBuildShuffleVector (builder, cmp, LLVMGetUndef (t), LLVMConstVector (mask, LLVMGetVectorSize (t)), "");
cmp = LLVMBuildAnd (builder, cmp, shuffle, "");
half = half / 2;
}
// Extract [0]
LLVMValueRef first_elem = LLVMBuildExtractElement (builder, cmp, LLVMConstInt (LLVMInt32Type (), 0, FALSE), "");
// convert to 0/1
LLVMValueRef cmp_zero = LLVMBuildICmp (builder, LLVMIntNE, first_elem, LLVMConstInt (elemt, 0, FALSE), "");
values [ins->dreg] = LLVMBuildZExt (builder, cmp_zero, LLVMInt8Type (), "");
break;
}
#endif
#if defined(TARGET_ARM64)
case OP_XOP_I4_I4:
case OP_XOP_I8_I8: {
IntrinsicId id = (IntrinsicId)ins->inst_c0;
values [ins->dreg] = call_intrins (ctx, id, &lhs, "");
break;
}
case OP_XOP_X_X_X:
case OP_XOP_I4_I4_I4:
case OP_XOP_I4_I4_I8: {
IntrinsicId id = (IntrinsicId)ins->inst_c0;
gboolean zext_last = FALSE, bitcast_result = FALSE, getElement = FALSE;
int element_idx = -1;
switch (id) {
case INTRINS_AARCH64_PMULL64:
getElement = TRUE;
bitcast_result = TRUE;
element_idx = ins->inst_c1;
break;
case INTRINS_AARCH64_CRC32B:
case INTRINS_AARCH64_CRC32H:
case INTRINS_AARCH64_CRC32W:
case INTRINS_AARCH64_CRC32CB:
case INTRINS_AARCH64_CRC32CH:
case INTRINS_AARCH64_CRC32CW:
zext_last = TRUE;
break;
default:
break;
}
LLVMValueRef arg1 = rhs;
if (zext_last)
arg1 = LLVMBuildZExt (ctx->builder, arg1, LLVMInt32Type (), "");
LLVMValueRef args [] = { lhs, arg1 };
if (getElement) {
args [0] = LLVMBuildExtractElement (ctx->builder, args [0], const_int32 (element_idx), "");
args [1] = LLVMBuildExtractElement (ctx->builder, args [1], const_int32 (element_idx), "");
}
values [ins->dreg] = call_intrins (ctx, id, args, "");
if (bitcast_result)
values [ins->dreg] = convert (ctx, values [ins->dreg], LLVMVectorType (LLVMInt64Type (), 2));
break;
}
case OP_XOP_X_X_X_X: {
IntrinsicId id = (IntrinsicId)ins->inst_c0;
gboolean getLowerElement = FALSE;
int arg_idx = -1;
switch (id) {
case INTRINS_AARCH64_SHA1C:
case INTRINS_AARCH64_SHA1M:
case INTRINS_AARCH64_SHA1P:
getLowerElement = TRUE;
arg_idx = 1;
break;
default:
break;
}
LLVMValueRef args [] = { lhs, rhs, arg3 };
if (getLowerElement)
args [arg_idx] = LLVMBuildExtractElement (ctx->builder, args [arg_idx], const_int32 (0), "");
values [ins->dreg] = call_intrins (ctx, id, args, "");
break;
}
case OP_XOP_X_X: {
IntrinsicId id = (IntrinsicId)ins->inst_c0;
LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass);
gboolean getLowerElement = FALSE;
switch (id) {
case INTRINS_AARCH64_SHA1H: getLowerElement = TRUE; break;
default: break;
}
LLVMValueRef arg0 = lhs;
if (getLowerElement)
arg0 = LLVMBuildExtractElement (ctx->builder, arg0, const_int32 (0), "");
LLVMValueRef result = call_intrins (ctx, id, &arg0, "");
if (getLowerElement)
result = vector_from_scalar (ctx, ret_t, result);
values [ins->dreg] = result;
break;
}
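/*
 * OP_XCOMPARE*: element-wise compares that yield a mask vector, i.e. each
 * lane is all-ones when the predicate holds and all-zeros otherwise
 * (fcmp/icmp produces an i1 per lane, which is then sign-extended).
 */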
case OP_XCOMPARE_FP_SCALAR:
case OP_XCOMPARE_FP: {
g_assert (LLVMTypeOf (lhs) == LLVMTypeOf (rhs));
gboolean scalar = ins->opcode == OP_XCOMPARE_FP_SCALAR;
LLVMRealPredicate pred = fpcond_to_llvm_cond [ins->inst_c0];
LLVMTypeRef ret_t = LLVMTypeOf (lhs);
LLVMTypeRef reti_t = to_integral_vector_type (ret_t);
LLVMValueRef args [] = { lhs, rhs };
if (scalar)
for (int i = 0; i < 2; ++i)
args [i] = scalar_from_vector (ctx, args [i]);
LLVMValueRef result = LLVMBuildFCmp (builder, pred, args [0], args [1], "xcompare_fp");
if (scalar)
result = vector_from_scalar (ctx, LLVMVectorType (LLVMIntType (1), LLVMGetVectorSize (reti_t)), result);
result = LLVMBuildSExt (builder, result, reti_t, "");
result = LLVMBuildBitCast (builder, result, ret_t, "");
values [ins->dreg] = result;
break;
}
case OP_XCOMPARE_SCALAR:
case OP_XCOMPARE: {
g_assert (LLVMTypeOf (lhs) == LLVMTypeOf (rhs));
gboolean scalar = ins->opcode == OP_XCOMPARE_SCALAR;
LLVMIntPredicate pred = cond_to_llvm_cond [ins->inst_c0];
LLVMTypeRef ret_t = LLVMTypeOf (lhs);
LLVMValueRef args [] = { lhs, rhs };
if (scalar)
for (int i = 0; i < 2; ++i)
args [i] = scalar_from_vector (ctx, args [i]);
LLVMValueRef result = LLVMBuildICmp (builder, pred, args [0], args [1], "xcompare");
if (scalar)
result = vector_from_scalar (ctx, LLVMVectorType (LLVMIntType (1), LLVMGetVectorSize (ret_t)), result);
values [ins->dreg] = LLVMBuildSExt (builder, result, ret_t, "");
break;
}
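/*
 * OP_ARM64_EXT: extract a vector from the concatenation of the two sources,
 * starting at a variable element index. EXT needs an immediate, so
 * immediate_unroll_* emits a switch over every possible index, each arm
 * using a constant shuffle mask.
 */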
case OP_ARM64_EXT: {
LLVMTypeRef ret_t = LLVMTypeOf (lhs);
unsigned int elems = LLVMGetVectorSize (ret_t);
g_assert (elems <= ARM64_MAX_VECTOR_ELEMS);
LLVMValueRef index = arg3;
LLVMValueRef default_value = lhs;
ImmediateUnrollCtx ictx = immediate_unroll_begin (ctx, bb, elems, index, ret_t, "arm64_ext");
int i = 0;
while (immediate_unroll_next (&ictx, &i)) {
LLVMValueRef mask = create_const_vector_i32 (&mask_0_incr_1 [i], elems);
LLVMValueRef result = LLVMBuildShuffleVector (builder, lhs, rhs, mask, "arm64_ext");
immediate_unroll_commit (&ictx, i, result);
}
immediate_unroll_default (&ictx);
immediate_unroll_commit_default (&ictx, default_value);
values [ins->dreg] = immediate_unroll_end (&ictx, &cbb);
break;
}
case OP_ARM64_MVN: {
LLVMTypeRef ret_t = LLVMTypeOf (lhs);
LLVMValueRef result = bitcast_to_integral (ctx, lhs);
result = LLVMBuildNot (builder, result, "arm64_mvn");
result = convert (ctx, result, ret_t);
values [ins->dreg] = result;
break;
}
case OP_ARM64_BIC: {
LLVMTypeRef ret_t = LLVMTypeOf (lhs);
LLVMValueRef result = bitcast_to_integral (ctx, lhs);
LLVMValueRef mask = bitcast_to_integral (ctx, rhs);
mask = LLVMBuildNot (builder, mask, "");
result = LLVMBuildAnd (builder, mask, result, "arm64_bic");
result = convert (ctx, result, ret_t);
values [ins->dreg] = result;
break;
}
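/* OP_ARM64_BSL: bitwise select, computed as (select & left) | (~select & right). */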
case OP_ARM64_BSL: {
LLVMTypeRef ret_t = LLVMTypeOf (rhs);
LLVMValueRef select = bitcast_to_integral (ctx, lhs);
LLVMValueRef left = bitcast_to_integral (ctx, rhs);
LLVMValueRef right = bitcast_to_integral (ctx, arg3);
LLVMValueRef result1 = LLVMBuildAnd (builder, select, left, "arm64_bsl");
LLVMValueRef result2 = LLVMBuildAnd (builder, LLVMBuildNot (builder, select, ""), right, "");
LLVMValueRef result = LLVMBuildOr (builder, result1, result2, "");
result = convert (ctx, result, ret_t);
values [ins->dreg] = result;
break;
}
case OP_ARM64_CMTST: {
LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass);
LLVMValueRef l = bitcast_to_integral (ctx, lhs);
LLVMValueRef r = bitcast_to_integral (ctx, rhs);
LLVMValueRef result = LLVMBuildAnd (builder, l, r, "arm64_cmtst");
LLVMTypeRef t = LLVMTypeOf (l);
result = LLVMBuildICmp (builder, LLVMIntNE, result, LLVMConstNull (t), "");
result = LLVMBuildSExt (builder, result, t, "");
result = convert (ctx, result, ret_t);
values [ins->dreg] = result;
break;
}
case OP_ARM64_FCVTL:
case OP_ARM64_FCVTL2: {
LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass);
gboolean high = ins->opcode == OP_ARM64_FCVTL2;
LLVMValueRef result = lhs;
if (high)
result = extract_high_elements (ctx, result);
result = LLVMBuildFPExt (builder, result, ret_t, "arm64_fcvtl");
values [ins->dreg] = result;
break;
}
case OP_ARM64_FCVTXN:
case OP_ARM64_FCVTXN2:
case OP_ARM64_FCVTN:
case OP_ARM64_FCVTN2: {
gboolean high = FALSE;
int iid = 0;
switch (ins->opcode) {
case OP_ARM64_FCVTXN2: high = TRUE; case OP_ARM64_FCVTXN: iid = INTRINS_AARCH64_ADV_SIMD_FCVTXN; break;
case OP_ARM64_FCVTN2: high = TRUE; break;
}
LLVMValueRef result = lhs;
if (high)
result = rhs;
if (iid)
result = call_intrins (ctx, iid, &result, "");
else
result = LLVMBuildFPTrunc (builder, result, v64_r4_t, "");
if (high)
result = concatenate_vectors (ctx, lhs, result);
values [ins->dreg] = result;
break;
}
case OP_ARM64_UCVTF:
case OP_ARM64_SCVTF:
case OP_ARM64_UCVTF_SCALAR:
case OP_ARM64_SCVTF_SCALAR: {
LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass);
gboolean scalar = FALSE;
gboolean is_unsigned = FALSE;
switch (ins->opcode) {
case OP_ARM64_UCVTF_SCALAR: scalar = TRUE; case OP_ARM64_UCVTF: is_unsigned = TRUE; break;
case OP_ARM64_SCVTF_SCALAR: scalar = TRUE; break;
}
LLVMValueRef result = lhs;
LLVMTypeRef cvt_t = ret_t;
if (scalar) {
result = scalar_from_vector (ctx, result);
cvt_t = LLVMGetElementType (ret_t);
}
if (is_unsigned)
result = LLVMBuildUIToFP (builder, result, cvt_t, "arm64_ucvtf");
else
result = LLVMBuildSIToFP (builder, result, cvt_t, "arm64_scvtf");
if (scalar)
result = vector_from_scalar (ctx, ret_t, result);
values [ins->dreg] = result;
break;
}
case OP_ARM64_FCVTZS:
case OP_ARM64_FCVTZS_SCALAR:
case OP_ARM64_FCVTZU:
case OP_ARM64_FCVTZU_SCALAR: {
LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass);
gboolean scalar = FALSE;
gboolean is_unsigned = FALSE;
switch (ins->opcode) {
case OP_ARM64_FCVTZU_SCALAR: scalar = TRUE; case OP_ARM64_FCVTZU: is_unsigned = TRUE; break;
case OP_ARM64_FCVTZS_SCALAR: scalar = TRUE; break;
}
LLVMValueRef result = lhs;
LLVMTypeRef cvt_t = ret_t;
if (scalar) {
result = scalar_from_vector (ctx, result);
cvt_t = LLVMGetElementType (ret_t);
}
if (is_unsigned)
result = LLVMBuildFPToUI (builder, result, cvt_t, "arm64_fcvtzu");
else
result = LLVMBuildFPToSI (builder, result, cvt_t, "arm64_fcvtzs");
if (scalar)
result = vector_from_scalar (ctx, ret_t, result);
values [ins->dreg] = result;
break;
}
case OP_ARM64_SELECT_SCALAR: {
LLVMValueRef result = LLVMBuildExtractElement (builder, lhs, rhs, "");
LLVMTypeRef elem_t = LLVMTypeOf (result);
unsigned int elem_bits = mono_llvm_get_prim_size_bits (elem_t);
LLVMTypeRef t = LLVMVectorType (elem_t, 64 / elem_bits);
result = vector_from_scalar (ctx, t, result);
values [ins->dreg] = result;
break;
}
case OP_ARM64_SELECT_QUAD: {
LLVMTypeRef src_type = simd_class_to_llvm_type (ctx, ins->data.op [1].klass);
LLVMTypeRef ret_type = simd_class_to_llvm_type (ctx, ins->klass);
unsigned int src_type_bits = mono_llvm_get_prim_size_bits (src_type);
unsigned int ret_type_bits = mono_llvm_get_prim_size_bits (ret_type);
unsigned int src_intermediate_elems = src_type_bits / 32;
unsigned int ret_intermediate_elems = ret_type_bits / 32;
LLVMTypeRef intermediate_type = LLVMVectorType (i4_t, src_intermediate_elems);
LLVMValueRef result = LLVMBuildBitCast (builder, lhs, intermediate_type, "arm64_select_quad");
result = LLVMBuildExtractElement (builder, result, rhs, "arm64_select_quad");
result = broadcast_element (ctx, result, ret_intermediate_elems);
result = LLVMBuildBitCast (builder, result, ret_type, "arm64_select_quad");
values [ins->dreg] = result;
break;
}
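/*
 * OP_LSCNT: count leading sign bits. x ^ (x >> 31) turns the sign bits into
 * zeros, and the shl/or makes the value nonzero, so ctlz of the result
 * equals the leading-sign count (see the IR sketch below).
 */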
case OP_LSCNT32:
case OP_LSCNT64: {
// %shr = ashr i32 %x, 31
// %xor = xor i32 %shr, %x
// %mul = shl i32 %xor, 1
// %add = or i32 %mul, 1
// %0 = tail call i32 @llvm.ctlz.i32(i32 %add, i1 false)
LLVMValueRef shr = LLVMBuildAShr (builder, lhs, ins->opcode == OP_LSCNT32 ?
LLVMConstInt (LLVMInt32Type (), 31, FALSE) :
LLVMConstInt (LLVMInt64Type (), 63, FALSE), "");
LLVMValueRef one = ins->opcode == OP_LSCNT32 ?
LLVMConstInt (LLVMInt32Type (), 1, FALSE) :
LLVMConstInt (LLVMInt64Type (), 1, FALSE);
LLVMValueRef xor = LLVMBuildXor (builder, shr, lhs, "");
LLVMValueRef mul = LLVMBuildShl (builder, xor, one, "");
LLVMValueRef add = LLVMBuildOr (builder, mul, one, "");
LLVMValueRef args [2];
args [0] = add;
args [1] = LLVMConstInt (LLVMInt1Type (), 0, FALSE);
values [ins->dreg] = LLVMBuildCall (builder, get_intrins (ctx, ins->opcode == OP_LSCNT32 ? INTRINS_CTLZ_I32 : INTRINS_CTLZ_I64), args, 2, "");
break;
}
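/*
 * SQRDMLAH/SQRDMLSH: no single intrinsic is used here; the operation is
 * composed from SQRDMULH followed by a saturating add/subtract with the
 * accumulator.
 */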
case OP_ARM64_SQRDMLAH:
case OP_ARM64_SQRDMLAH_BYSCALAR:
case OP_ARM64_SQRDMLAH_SCALAR:
case OP_ARM64_SQRDMLSH:
case OP_ARM64_SQRDMLSH_BYSCALAR:
case OP_ARM64_SQRDMLSH_SCALAR: {
gboolean byscalar = FALSE;
gboolean scalar = FALSE;
gboolean subtract = FALSE;
switch (ins->opcode) {
case OP_ARM64_SQRDMLAH_BYSCALAR: byscalar = TRUE; break;
case OP_ARM64_SQRDMLAH_SCALAR: scalar = TRUE; break;
case OP_ARM64_SQRDMLSH: subtract = TRUE; break;
case OP_ARM64_SQRDMLSH_BYSCALAR: subtract = TRUE; byscalar = TRUE; break;
case OP_ARM64_SQRDMLSH_SCALAR: subtract = TRUE; scalar = TRUE; break;
}
int acc_iid = subtract ? INTRINS_AARCH64_ADV_SIMD_SQSUB : INTRINS_AARCH64_ADV_SIMD_SQADD;
LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass);
llvm_ovr_tag_t ovr_tag = ovr_tag_from_llvm_type (ret_t);
ScalarOpFromVectorOpCtx sctx = scalar_op_from_vector_op (ctx, ret_t, ins);
LLVMValueRef args [] = { lhs, rhs, arg3 };
if (byscalar) {
unsigned int elems = LLVMGetVectorSize (ret_t);
args [2] = broadcast_element (ctx, scalar_from_vector (ctx, args [2]), elems);
}
if (scalar) {
ovr_tag = sctx.ovr_tag;
scalar_op_from_vector_op_process_args (&sctx, args, 3);
}
LLVMValueRef result = call_overloaded_intrins (ctx, INTRINS_AARCH64_ADV_SIMD_SQRDMULH, ovr_tag, &args [1], "arm64_sqrdmlxh");
args [1] = result;
result = call_overloaded_intrins (ctx, acc_iid, ovr_tag, &args [0], "arm64_sqrdmlxh");
if (scalar)
result = scalar_op_from_vector_op_process_result (&sctx, result);
values [ins->dreg] = result;
break;
}
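/*
 * OP_ARM64_SMULH/UMULH: return the high 64 bits of a 64x64 -> 128 bit
 * multiply, computed by extending both operands to i128.
 */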
case OP_ARM64_SMULH:
case OP_ARM64_UMULH: {
LLVMValueRef op1, op2;
if (ins->opcode == OP_ARM64_SMULH) {
op1 = LLVMBuildSExt (builder, lhs, LLVMInt128Type (), "");
op2 = LLVMBuildSExt (builder, rhs, LLVMInt128Type (), "");
} else {
op1 = LLVMBuildZExt (builder, lhs, LLVMInt128Type (), "");
op2 = LLVMBuildZExt (builder, rhs, LLVMInt128Type (), "");
}
LLVMValueRef mul = LLVMBuildMul (builder, op1, op2, "");
LLVMValueRef hi64 = LLVMBuildLShr (builder, mul,
LLVMConstInt (LLVMInt128Type (), 64, FALSE), "");
values [ins->dreg] = LLVMBuildTrunc (builder, hi64, LLVMInt64Type (), "");
break;
}
case OP_ARM64_XNARROW_SCALAR: {
// Unfortunately, @llvm.aarch64.neon.scalar.sqxtun isn't available for i8 or i16.
LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass);
llvm_ovr_tag_t ovr_tag = ovr_tag_from_llvm_type (ret_t);
LLVMTypeRef elem_t = LLVMGetElementType (ret_t);
LLVMValueRef result = NULL;
int iid = ins->inst_c0;
int scalar_iid = 0;
switch (iid) {
case INTRINS_AARCH64_ADV_SIMD_SQXTUN: scalar_iid = INTRINS_AARCH64_ADV_SIMD_SCALAR_SQXTUN; break;
case INTRINS_AARCH64_ADV_SIMD_SQXTN: scalar_iid = INTRINS_AARCH64_ADV_SIMD_SCALAR_SQXTN; break;
case INTRINS_AARCH64_ADV_SIMD_UQXTN: scalar_iid = INTRINS_AARCH64_ADV_SIMD_SCALAR_UQXTN; break;
default: g_assert_not_reached ();
}
if (elem_t == i4_t) {
LLVMValueRef arg = scalar_from_vector (ctx, lhs);
result = call_intrins (ctx, scalar_iid, &arg, "arm64_xnarrow_scalar");
result = vector_from_scalar (ctx, ret_t, result);
} else {
LLVMTypeRef arg_t = LLVMTypeOf (lhs);
LLVMTypeRef argelem_t = LLVMGetElementType (arg_t);
unsigned int argelems = LLVMGetVectorSize (arg_t);
LLVMValueRef arg = keep_lowest_element (ctx, LLVMVectorType (argelem_t, argelems * 2), lhs);
result = call_overloaded_intrins (ctx, iid, ovr_tag, &arg, "arm64_xnarrow_scalar");
result = keep_lowest_element (ctx, LLVMTypeOf (result), result);
}
values [ins->dreg] = result;
break;
}
case OP_ARM64_SQXTUN2:
case OP_ARM64_UQXTN2:
case OP_ARM64_SQXTN2:
case OP_ARM64_XTN:
case OP_ARM64_XTN2: {
llvm_ovr_tag_t ovr_tag = ovr_tag_from_mono_vector_class (ins->klass);
gboolean high = FALSE;
int iid = 0;
switch (ins->opcode) {
case OP_ARM64_SQXTUN2: high = TRUE; iid = INTRINS_AARCH64_ADV_SIMD_SQXTUN; break;
case OP_ARM64_UQXTN2: high = TRUE; iid = INTRINS_AARCH64_ADV_SIMD_UQXTN; break;
case OP_ARM64_SQXTN2: high = TRUE; iid = INTRINS_AARCH64_ADV_SIMD_SQXTN; break;
case OP_ARM64_XTN2: high = TRUE; break;
}
LLVMValueRef result = lhs;
if (high) {
result = rhs;
ovr_tag = ovr_tag_smaller_vector (ovr_tag);
}
LLVMTypeRef t = LLVMTypeOf (result);
LLVMTypeRef elem_t = LLVMGetElementType (t);
unsigned int elems = LLVMGetVectorSize (t);
unsigned int elem_bits = mono_llvm_get_prim_size_bits (elem_t);
LLVMTypeRef result_t = LLVMVectorType (LLVMIntType (elem_bits / 2), elems);
if (iid != 0)
result = call_overloaded_intrins (ctx, iid, ovr_tag, &result, "");
else
result = LLVMBuildTrunc (builder, result, result_t, "arm64_xtn");
if (high)
result = concatenate_vectors (ctx, lhs, result);
values [ins->dreg] = result;
break;
}
case OP_ARM64_CLZ: {
llvm_ovr_tag_t ovr_tag = ovr_tag_from_mono_vector_class (ins->klass);
LLVMValueRef args [] = { lhs, const_int1 (0) };
LLVMValueRef result = call_overloaded_intrins (ctx, INTRINS_AARCH64_ADV_SIMD_CLZ, ovr_tag, args, "");
values [ins->dreg] = result;
break;
}
case OP_ARM64_FMSUB:
case OP_ARM64_FMSUB_BYSCALAR:
case OP_ARM64_FMSUB_SCALAR:
case OP_ARM64_FNMSUB_SCALAR:
case OP_ARM64_FMADD:
case OP_ARM64_FMADD_BYSCALAR:
case OP_ARM64_FMADD_SCALAR:
case OP_ARM64_FNMADD_SCALAR: {
llvm_ovr_tag_t ovr_tag = ovr_tag_from_mono_vector_class (ins->klass);
gboolean scalar = FALSE;
gboolean negate = FALSE;
gboolean subtract = FALSE;
gboolean byscalar = FALSE;
switch (ins->opcode) {
case OP_ARM64_FMSUB: subtract = TRUE; break;
case OP_ARM64_FMSUB_BYSCALAR: subtract = TRUE; byscalar = TRUE; break;
case OP_ARM64_FMSUB_SCALAR: subtract = TRUE; scalar = TRUE; break;
case OP_ARM64_FNMSUB_SCALAR: subtract = TRUE; scalar = TRUE; negate = TRUE; break;
case OP_ARM64_FMADD: break;
case OP_ARM64_FMADD_BYSCALAR: byscalar = TRUE; break;
case OP_ARM64_FMADD_SCALAR: scalar = TRUE; break;
case OP_ARM64_FNMADD_SCALAR: scalar = TRUE; negate = TRUE; break;
}
// llvm.fma argument order: mulop1, mulop2, addend
LLVMValueRef args [] = { rhs, arg3, lhs };
if (byscalar) {
unsigned int elems = LLVMGetVectorSize (LLVMTypeOf (args [0]));
args [1] = broadcast_element (ctx, scalar_from_vector (ctx, args [1]), elems);
}
if (scalar) {
ovr_tag = ovr_tag_force_scalar (ovr_tag);
for (int i = 0; i < 3; ++i)
args [i] = scalar_from_vector (ctx, args [i]);
}
if (subtract)
args [0] = LLVMBuildFNeg (builder, args [0], "arm64_fma_sub");
if (negate) {
args [0] = LLVMBuildFNeg (builder, args [0], "arm64_fma_negate");
args [2] = LLVMBuildFNeg (builder, args [2], "arm64_fma_negate");
}
LLVMValueRef result = call_overloaded_intrins (ctx, INTRINS_AARCH64_ADV_SIMD_FMA, ovr_tag, args, "arm64_fma");
if (scalar)
result = vector_from_scalar (ctx, LLVMTypeOf (lhs), result);
values [ins->dreg] = result;
break;
}
case OP_ARM64_SQDMULL:
case OP_ARM64_SQDMULL_BYSCALAR:
case OP_ARM64_SQDMULL2:
case OP_ARM64_SQDMULL2_BYSCALAR:
case OP_ARM64_SQDMLAL:
case OP_ARM64_SQDMLAL_BYSCALAR:
case OP_ARM64_SQDMLAL2:
case OP_ARM64_SQDMLAL2_BYSCALAR:
case OP_ARM64_SQDMLSL:
case OP_ARM64_SQDMLSL_BYSCALAR:
case OP_ARM64_SQDMLSL2:
case OP_ARM64_SQDMLSL2_BYSCALAR: {
llvm_ovr_tag_t ovr_tag = ovr_tag_from_mono_vector_class (ins->klass);
gboolean scalar = FALSE;
gboolean add = FALSE;
gboolean subtract = FALSE;
gboolean high = FALSE;
switch (ins->opcode) {
case OP_ARM64_SQDMULL_BYSCALAR: scalar = TRUE; case OP_ARM64_SQDMULL: break;
case OP_ARM64_SQDMULL2_BYSCALAR: scalar = TRUE; case OP_ARM64_SQDMULL2: high = TRUE; break;
case OP_ARM64_SQDMLAL_BYSCALAR: scalar = TRUE; case OP_ARM64_SQDMLAL: add = TRUE; break;
case OP_ARM64_SQDMLAL2_BYSCALAR: scalar = TRUE; case OP_ARM64_SQDMLAL2: high = TRUE; add = TRUE; break;
case OP_ARM64_SQDMLSL_BYSCALAR: scalar = TRUE; case OP_ARM64_SQDMLSL: subtract = TRUE; break;
case OP_ARM64_SQDMLSL2_BYSCALAR: scalar = TRUE; case OP_ARM64_SQDMLSL2: high = TRUE; subtract = TRUE; break;
}
int iid = 0;
if (add)
iid = INTRINS_AARCH64_ADV_SIMD_SQADD;
else if (subtract)
iid = INTRINS_AARCH64_ADV_SIMD_SQSUB;
LLVMValueRef mul1 = lhs;
LLVMValueRef mul2 = rhs;
if (iid != 0) {
mul1 = rhs;
mul2 = arg3;
}
if (scalar) {
LLVMTypeRef t = LLVMTypeOf (mul1);
unsigned int elems = LLVMGetVectorSize (t);
mul2 = broadcast_element (ctx, scalar_from_vector (ctx, mul2), elems);
}
LLVMValueRef args [] = { mul1, mul2 };
if (high)
for (int i = 0; i < 2; ++i)
args [i] = extract_high_elements (ctx, args [i]);
LLVMValueRef result = call_overloaded_intrins (ctx, INTRINS_AARCH64_ADV_SIMD_SQDMULL, ovr_tag, args, "");
LLVMValueRef args2 [] = { lhs, result };
if (iid != 0)
result = call_overloaded_intrins (ctx, iid, ovr_tag, args2, "");
values [ins->dreg] = result;
break;
}
case OP_ARM64_SQDMULL_SCALAR:
case OP_ARM64_SQDMLAL_SCALAR:
case OP_ARM64_SQDMLSL_SCALAR: {
/*
* define dso_local i32 @__vqdmlslh_lane_s16(i32, i16, <4 x i16>, i32) local_unnamed_addr #0 {
* %5 = insertelement <4 x i16> undef, i16 %1, i64 0
* %6 = shufflevector <4 x i16> %2, <4 x i16> undef, <4 x i32> <i32 3, i32 undef, i32 undef, i32 undef>
* %7 = tail call <4 x i32> @llvm.aarch64.neon.sqdmull.v4i32(<4 x i16> %5, <4 x i16> %6)
* %8 = extractelement <4 x i32> %7, i64 0
* %9 = tail call i32 @llvm.aarch64.neon.sqsub.i32(i32 %0, i32 %8)
* ret i32 %9
* }
*
* define dso_local i64 @__vqdmlals_s32(i64, i32, i32) local_unnamed_addr #0 {
* %4 = tail call i64 @llvm.aarch64.neon.sqdmulls.scalar(i32 %1, i32 %2) #2
* %5 = tail call i64 @llvm.aarch64.neon.sqadd.i64(i64 %0, i64 %4) #2
* ret i64 %5
* }
*/
int mulid = INTRINS_AARCH64_ADV_SIMD_SQDMULL;
int iid = 0;
gboolean scalar_mul_result = FALSE;
gboolean scalar_acc_result = FALSE;
switch (ins->opcode) {
case OP_ARM64_SQDMLAL_SCALAR: iid = INTRINS_AARCH64_ADV_SIMD_SQADD; break;
case OP_ARM64_SQDMLSL_SCALAR: iid = INTRINS_AARCH64_ADV_SIMD_SQSUB; break;
}
LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass);
LLVMValueRef mularg = lhs;
LLVMValueRef selected_scalar = rhs;
if (iid != 0) {
mularg = rhs;
selected_scalar = arg3;
}
llvm_ovr_tag_t multag = ovr_tag_smaller_elements (ovr_tag_from_llvm_type (ret_t));
llvm_ovr_tag_t iidtag = ovr_tag_force_scalar (ovr_tag_from_llvm_type (ret_t));
LLVMTypeRef mularg_t = ovr_tag_to_llvm_type (multag);
if (multag & INTRIN_int32) {
/* The (i32, i32) -> i64 variant of aarch64_neon_sqdmull has
* a unique, non-overloaded name.
*/
mulid = INTRINS_AARCH64_ADV_SIMD_SQDMULL_SCALAR;
multag = 0;
iidtag = INTRIN_int64 | INTRIN_scalar;
scalar_mul_result = TRUE;
scalar_acc_result = TRUE;
} else if (multag & INTRIN_int16) {
/* We were passed a (<4 x i16>, <4 x i16>) but the
* widening multiplication intrinsic will yield a <4 x i32>.
*/
multag = INTRIN_int32 | INTRIN_vector128;
} else
g_assert_not_reached ();
if (scalar_mul_result) {
mularg = scalar_from_vector (ctx, mularg);
selected_scalar = scalar_from_vector (ctx, selected_scalar);
} else {
mularg = keep_lowest_element (ctx, mularg_t, mularg);
selected_scalar = keep_lowest_element (ctx, mularg_t, selected_scalar);
}
LLVMValueRef mulargs [] = { mularg, selected_scalar };
LLVMValueRef result = call_overloaded_intrins (ctx, mulid, multag, mulargs, "arm64_sqdmull_scalar");
if (iid != 0) {
LLVMValueRef acc = scalar_from_vector (ctx, lhs);
if (!scalar_mul_result)
result = scalar_from_vector (ctx, result);
LLVMValueRef subargs [] = { acc, result };
result = call_overloaded_intrins (ctx, iid, iidtag, subargs, "arm64_sqdmlxl_scalar");
scalar_acc_result = TRUE;
}
if (scalar_acc_result)
result = vector_from_scalar (ctx, ret_t, result);
else
result = keep_lowest_element (ctx, ret_t, result);
values [ins->dreg] = result;
break;
}
case OP_ARM64_FMUL_SEL: {
LLVMValueRef mul2 = LLVMBuildExtractElement (builder, rhs, arg3, "");
LLVMValueRef mul1 = scalar_from_vector (ctx, lhs);
LLVMValueRef result = LLVMBuildFMul (builder, mul1, mul2, "arm64_fmul_sel");
result = vector_from_scalar (ctx, LLVMTypeOf (lhs), result);
values [ins->dreg] = result;
break;
}
case OP_ARM64_MLA:
case OP_ARM64_MLA_SCALAR:
case OP_ARM64_MLS:
case OP_ARM64_MLS_SCALAR: {
gboolean scalar = FALSE;
gboolean add = FALSE;
switch (ins->opcode) {
case OP_ARM64_MLA_SCALAR: scalar = TRUE; case OP_ARM64_MLA: add = TRUE; break;
case OP_ARM64_MLS_SCALAR: scalar = TRUE; case OP_ARM64_MLS: break;
}
LLVMTypeRef mul_t = LLVMTypeOf (rhs);
unsigned int elems = LLVMGetVectorSize (mul_t);
LLVMValueRef mul2 = arg3;
if (scalar)
mul2 = broadcast_element (ctx, scalar_from_vector (ctx, mul2), elems);
LLVMValueRef result = LLVMBuildMul (builder, rhs, mul2, "");
if (add)
result = LLVMBuildAdd (builder, lhs, result, "");
else
result = LLVMBuildSub (builder, lhs, result, "");
values [ins->dreg] = result;
break;
}
case OP_ARM64_SMULL:
case OP_ARM64_SMULL_SCALAR:
case OP_ARM64_SMULL2:
case OP_ARM64_SMULL2_SCALAR:
case OP_ARM64_UMULL:
case OP_ARM64_UMULL_SCALAR:
case OP_ARM64_UMULL2:
case OP_ARM64_UMULL2_SCALAR:
case OP_ARM64_SMLAL:
case OP_ARM64_SMLAL_SCALAR:
case OP_ARM64_SMLAL2:
case OP_ARM64_SMLAL2_SCALAR:
case OP_ARM64_UMLAL:
case OP_ARM64_UMLAL_SCALAR:
case OP_ARM64_UMLAL2:
case OP_ARM64_UMLAL2_SCALAR:
case OP_ARM64_SMLSL:
case OP_ARM64_SMLSL_SCALAR:
case OP_ARM64_SMLSL2:
case OP_ARM64_SMLSL2_SCALAR:
case OP_ARM64_UMLSL:
case OP_ARM64_UMLSL_SCALAR:
case OP_ARM64_UMLSL2:
case OP_ARM64_UMLSL2_SCALAR: {
llvm_ovr_tag_t ovr_tag = ovr_tag_from_mono_vector_class (ins->klass);
gboolean is_unsigned = FALSE;
gboolean high = FALSE;
gboolean add = FALSE;
gboolean subtract = FALSE;
gboolean scalar = FALSE;
int opcode = ins->opcode;
switch (opcode) {
case OP_ARM64_SMULL_SCALAR: scalar = TRUE; opcode = OP_ARM64_SMULL; break;
case OP_ARM64_UMULL_SCALAR: scalar = TRUE; opcode = OP_ARM64_UMULL; break;
case OP_ARM64_SMLAL_SCALAR: scalar = TRUE; opcode = OP_ARM64_SMLAL; break;
case OP_ARM64_UMLAL_SCALAR: scalar = TRUE; opcode = OP_ARM64_UMLAL; break;
case OP_ARM64_SMLSL_SCALAR: scalar = TRUE; opcode = OP_ARM64_SMLSL; break;
case OP_ARM64_UMLSL_SCALAR: scalar = TRUE; opcode = OP_ARM64_UMLSL; break;
case OP_ARM64_SMULL2_SCALAR: scalar = TRUE; opcode = OP_ARM64_SMULL2; break;
case OP_ARM64_UMULL2_SCALAR: scalar = TRUE; opcode = OP_ARM64_UMULL2; break;
case OP_ARM64_SMLAL2_SCALAR: scalar = TRUE; opcode = OP_ARM64_SMLAL2; break;
case OP_ARM64_UMLAL2_SCALAR: scalar = TRUE; opcode = OP_ARM64_UMLAL2; break;
case OP_ARM64_SMLSL2_SCALAR: scalar = TRUE; opcode = OP_ARM64_SMLSL2; break;
case OP_ARM64_UMLSL2_SCALAR: scalar = TRUE; opcode = OP_ARM64_UMLSL2; break;
}
switch (opcode) {
case OP_ARM64_SMULL2: high = TRUE; case OP_ARM64_SMULL: break;
case OP_ARM64_UMULL2: high = TRUE; case OP_ARM64_UMULL: is_unsigned = TRUE; break;
case OP_ARM64_SMLAL2: high = TRUE; case OP_ARM64_SMLAL: add = TRUE; break;
case OP_ARM64_UMLAL2: high = TRUE; case OP_ARM64_UMLAL: add = TRUE; is_unsigned = TRUE; break;
case OP_ARM64_SMLSL2: high = TRUE; case OP_ARM64_SMLSL: subtract = TRUE; break;
case OP_ARM64_UMLSL2: high = TRUE; case OP_ARM64_UMLSL: subtract = TRUE; is_unsigned = TRUE; break;
}
int iid = is_unsigned ? INTRINS_AARCH64_ADV_SIMD_UMULL : INTRINS_AARCH64_ADV_SIMD_SMULL;
LLVMValueRef intrin_args [] = { lhs, rhs };
if (add || subtract) {
intrin_args [0] = rhs;
intrin_args [1] = arg3;
}
if (scalar) {
LLVMValueRef sarg = intrin_args [1];
LLVMTypeRef t = LLVMTypeOf (intrin_args [0]);
unsigned int elems = LLVMGetVectorSize (t);
sarg = broadcast_element (ctx, scalar_from_vector (ctx, sarg), elems);
intrin_args [1] = sarg;
}
if (high)
for (int i = 0; i < 2; ++i)
intrin_args [i] = extract_high_elements (ctx, intrin_args [i]);
LLVMValueRef result = call_overloaded_intrins (ctx, iid, ovr_tag, intrin_args, "");
if (add)
result = LLVMBuildAdd (builder, lhs, result, "");
if (subtract)
result = LLVMBuildSub (builder, lhs, result, "");
values [ins->dreg] = result;
break;
}
case OP_ARM64_XNEG:
case OP_ARM64_XNEG_SCALAR: {
gboolean scalar = ins->opcode == OP_ARM64_XNEG_SCALAR;
gboolean is_float = FALSE;
switch (inst_c1_type (ins)) {
case MONO_TYPE_R4: case MONO_TYPE_R8: is_float = TRUE; break;
}
LLVMValueRef result = lhs;
if (scalar)
result = scalar_from_vector (ctx, result);
if (is_float)
result = LLVMBuildFNeg (builder, result, "arm64_xneg");
else
result = LLVMBuildNeg (builder, result, "arm64_xneg");
if (scalar)
result = vector_from_scalar (ctx, LLVMTypeOf (lhs), result);
values [ins->dreg] = result;
break;
}
case OP_ARM64_PMULL:
case OP_ARM64_PMULL2: {
gboolean high = ins->opcode == OP_ARM64_PMULL2;
LLVMValueRef args [] = { lhs, rhs };
if (high)
for (int i = 0; i < 2; ++i)
args [i] = extract_high_elements (ctx, args [i]);
LLVMValueRef result = call_intrins (ctx, INTRINS_AARCH64_ADV_SIMD_PMULL, args, "arm64_pmull");
values [ins->dreg] = result;
break;
}
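/*
 * OP_ARM64_REVN: reverse the inst_c0-bit chunks inside each vector element,
 * done by bitcasting to a vector of chunk-sized lanes and applying a
 * constant reversing shuffle.
 */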
case OP_ARM64_REVN: {
LLVMTypeRef t = LLVMTypeOf (lhs);
LLVMTypeRef elem_t = LLVMGetElementType (t);
unsigned int group_bits = mono_llvm_get_prim_size_bits (elem_t);
unsigned int vec_bits = mono_llvm_get_prim_size_bits (t);
unsigned int tmp_bits = ins->inst_c0;
unsigned int tmp_elements = vec_bits / tmp_bits;
const int cycle8 [] = { 7, 6, 5, 4, 3, 2, 1, 0, 15, 14, 13, 12, 11, 10, 9, 8 };
const int cycle4 [] = { 3, 2, 1, 0, 7, 6, 5, 4, 11, 10, 9, 8, 15, 14, 13, 12 };
const int cycle2 [] = { 1, 0, 3, 2, 5, 4, 7, 6, 9, 8, 11, 10, 13, 12, 15, 14 };
const int *cycle = NULL;
switch (group_bits / tmp_bits) {
case 2: cycle = cycle2; break;
case 4: cycle = cycle4; break;
case 8: cycle = cycle8; break;
default: g_assert_not_reached ();
}
g_assert (tmp_elements <= ARM64_MAX_VECTOR_ELEMS);
LLVMTypeRef tmp_t = LLVMVectorType (LLVMIntType (tmp_bits), tmp_elements);
LLVMValueRef tmp = LLVMBuildBitCast (builder, lhs, tmp_t, "arm64_revn");
LLVMValueRef result = LLVMBuildShuffleVector (builder, tmp, LLVMGetUndef (tmp_t), create_const_vector_i32 (cycle, tmp_elements), "");
result = LLVMBuildBitCast (builder, result, t, "");
values [ins->dreg] = result;
break;
}
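/*
 * Vector shifts (with optional accumulate). The shift amount is turned into
 * a per-lane vector by create_shift_vector, then a plain shl/lshr/ashr is
 * emitted; the *SRA variants add the result to the accumulator.
 */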
case OP_ARM64_SHL:
case OP_ARM64_SSHR:
case OP_ARM64_SSRA:
case OP_ARM64_USHR:
case OP_ARM64_USRA: {
gboolean right = FALSE;
gboolean add = FALSE;
gboolean arith = FALSE;
switch (ins->opcode) {
case OP_ARM64_USHR: right = TRUE; break;
case OP_ARM64_USRA: right = TRUE; add = TRUE; break;
case OP_ARM64_SSHR: arith = TRUE; break;
case OP_ARM64_SSRA: arith = TRUE; add = TRUE; break;
}
LLVMValueRef shiftarg = lhs;
LLVMValueRef shift = rhs;
if (add) {
shiftarg = rhs;
shift = arg3;
}
shift = create_shift_vector (ctx, shiftarg, shift);
LLVMValueRef result = NULL;
if (right)
result = LLVMBuildLShr (builder, shiftarg, shift, "");
else if (arith)
result = LLVMBuildAShr (builder, shiftarg, shift, "");
else
result = LLVMBuildShl (builder, shiftarg, shift, "");
if (add)
result = LLVMBuildAdd (builder, lhs, result, "arm64_usra");
values [ins->dreg] = result;
break;
}
case OP_ARM64_SHRN:
case OP_ARM64_SHRN2: {
LLVMValueRef shiftarg = lhs;
LLVMValueRef shift = rhs;
gboolean high = ins->opcode == OP_ARM64_SHRN2;
if (high) {
shiftarg = rhs;
shift = arg3;
}
LLVMTypeRef arg_t = LLVMTypeOf (shiftarg);
LLVMTypeRef elem_t = LLVMGetElementType (arg_t);
unsigned int elems = LLVMGetVectorSize (arg_t);
unsigned int bits = mono_llvm_get_prim_size_bits (elem_t);
LLVMTypeRef trunc_t = LLVMVectorType (LLVMIntType (bits / 2), elems);
shift = create_shift_vector (ctx, shiftarg, shift);
LLVMValueRef result = LLVMBuildLShr (builder, shiftarg, shift, "shrn");
result = LLVMBuildTrunc (builder, result, trunc_t, "");
if (high) {
result = concatenate_vectors (ctx, lhs, result);
}
values [ins->dreg] = result;
break;
}
case OP_ARM64_SRSHR:
case OP_ARM64_SRSRA:
case OP_ARM64_URSHR:
case OP_ARM64_URSRA: {
llvm_ovr_tag_t ovr_tag = ovr_tag_from_mono_vector_class (ins->klass);
LLVMValueRef shiftarg = lhs;
LLVMValueRef shift = rhs;
gboolean right = FALSE;
gboolean add = FALSE;
switch (ins->opcode) {
case OP_ARM64_URSRA: add = TRUE; case OP_ARM64_URSHR: right = TRUE; break;
case OP_ARM64_SRSRA: add = TRUE; case OP_ARM64_SRSHR: right = TRUE; break;
}
int iid = 0;
switch (ins->opcode) {
case OP_ARM64_URSRA: case OP_ARM64_URSHR: iid = INTRINS_AARCH64_ADV_SIMD_URSHL; break;
case OP_ARM64_SRSRA: case OP_ARM64_SRSHR: iid = INTRINS_AARCH64_ADV_SIMD_SRSHL; break;
}
if (add) {
shiftarg = rhs;
shift = arg3;
}
if (right)
shift = LLVMBuildNeg (builder, shift, "");
shift = create_shift_vector (ctx, shiftarg, shift);
LLVMValueRef args [] = { shiftarg, shift };
LLVMValueRef result = call_overloaded_intrins (ctx, iid, ovr_tag, args, "");
if (add)
result = LLVMBuildAdd (builder, result, lhs, "");
values [ins->dreg] = result;
break;
}
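/*
 * Narrowing shifts: the underlying intrinsics require a constant shift
 * amount in [1, element_bits / 2], so immediate_unroll_* switches over the
 * legal range when the amount isn't known at compile time.
 */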
case OP_ARM64_XNSHIFT_SCALAR:
case OP_ARM64_XNSHIFT:
case OP_ARM64_XNSHIFT2: {
LLVMTypeRef intrin_result_t = simd_class_to_llvm_type (ctx, ins->klass);
llvm_ovr_tag_t ovr_tag = ovr_tag_from_llvm_type (intrin_result_t);
LLVMValueRef shift_arg = lhs;
LLVMValueRef shift_amount = rhs;
gboolean high = FALSE;
gboolean scalar = FALSE;
int iid = ins->inst_c0;
switch (ins->opcode) {
case OP_ARM64_XNSHIFT_SCALAR: scalar = TRUE; break;
case OP_ARM64_XNSHIFT2: high = TRUE; break;
}
if (high) {
shift_arg = rhs;
shift_amount = arg3;
ovr_tag = ovr_tag_smaller_vector (ovr_tag);
intrin_result_t = ovr_tag_to_llvm_type (ovr_tag);
}
LLVMTypeRef shift_arg_t = LLVMTypeOf (shift_arg);
LLVMTypeRef shift_arg_elem_t = LLVMGetElementType (shift_arg_t);
unsigned int element_bits = mono_llvm_get_prim_size_bits (shift_arg_elem_t);
int range_min = 1;
int range_max = element_bits / 2;
if (scalar) {
unsigned int elems = LLVMGetVectorSize (shift_arg_t);
LLVMValueRef lo = scalar_from_vector (ctx, shift_arg);
shift_arg = vector_from_scalar (ctx, LLVMVectorType (shift_arg_elem_t, elems * 2), lo);
}
int max_index = range_max - range_min + 1;
ImmediateUnrollCtx ictx = immediate_unroll_begin (ctx, bb, max_index, shift_amount, intrin_result_t, "arm64_xnshift");
int i = 0;
while (immediate_unroll_next (&ictx, &i)) {
int shift_const = i + range_min;
LLVMValueRef intrin_args [] = { shift_arg, const_int32 (shift_const) };
LLVMValueRef result = call_overloaded_intrins (ctx, iid, ovr_tag, intrin_args, "");
immediate_unroll_commit (&ictx, shift_const, result);
}
{
immediate_unroll_default (&ictx);
LLVMValueRef intrin_args [] = { shift_arg, const_int32 (range_max) };
LLVMValueRef result = call_overloaded_intrins (ctx, iid, ovr_tag, intrin_args, "");
immediate_unroll_commit_default (&ictx, result);
}
LLVMValueRef result = immediate_unroll_end (&ictx, &cbb);
if (high)
result = concatenate_vectors (ctx, lhs, result);
if (scalar)
result = keep_lowest_element (ctx, LLVMTypeOf (result), result);
values [ins->dreg] = result;
break;
}
case OP_ARM64_SQSHLU:
case OP_ARM64_SQSHLU_SCALAR: {
gboolean scalar = ins->opcode == OP_ARM64_SQSHLU_SCALAR;
LLVMTypeRef intrin_result_t = simd_class_to_llvm_type (ctx, ins->klass);
LLVMTypeRef elem_t = LLVMGetElementType (intrin_result_t);
unsigned int element_bits = mono_llvm_get_prim_size_bits (elem_t);
llvm_ovr_tag_t ovr_tag = ovr_tag_from_llvm_type (intrin_result_t);
int max_index = element_bits;
ScalarOpFromVectorOpCtx sctx = scalar_op_from_vector_op (ctx, intrin_result_t, ins);
intrin_result_t = scalar ? sctx.intermediate_type : intrin_result_t;
ovr_tag = scalar ? sctx.ovr_tag : ovr_tag;
ImmediateUnrollCtx ictx = immediate_unroll_begin (ctx, bb, max_index, rhs, intrin_result_t, "arm64_sqshlu");
int i = 0;
while (immediate_unroll_next (&ictx, &i)) {
int shift_const = i;
LLVMValueRef args [2] = { lhs, create_shift_vector (ctx, lhs, const_int32 (shift_const)) };
if (scalar)
scalar_op_from_vector_op_process_args (&sctx, args, 2);
LLVMValueRef result = call_overloaded_intrins (ctx, INTRINS_AARCH64_ADV_SIMD_SQSHLU, ovr_tag, args, "");
immediate_unroll_commit (&ictx, shift_const, result);
}
{
immediate_unroll_default (&ictx);
LLVMValueRef srcarg = lhs;
if (scalar)
scalar_op_from_vector_op_process_args (&sctx, &srcarg, 1);
immediate_unroll_commit_default (&ictx, srcarg);
}
LLVMValueRef result = immediate_unroll_end (&ictx, &cbb);
if (scalar)
result = scalar_op_from_vector_op_process_result (&sctx, result);
values [ins->dreg] = result;
break;
}
case OP_ARM64_SSHLL:
case OP_ARM64_SSHLL2:
case OP_ARM64_USHLL:
case OP_ARM64_USHLL2: {
LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass);
gboolean high = FALSE;
gboolean is_unsigned = FALSE;
switch (ins->opcode) {
case OP_ARM64_SSHLL2: high = TRUE; break;
case OP_ARM64_USHLL2: high = TRUE; case OP_ARM64_USHLL: is_unsigned = TRUE; break;
}
LLVMValueRef result = lhs;
if (high)
result = extract_high_elements (ctx, result);
if (is_unsigned)
result = LLVMBuildZExt (builder, result, ret_t, "arm64_ushll");
else
result = LLVMBuildSExt (builder, result, ret_t, "arm64_sshll");
result = LLVMBuildShl (builder, result, create_shift_vector (ctx, result, rhs), "");
values [ins->dreg] = result;
break;
}
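/*
 * OP_ARM64_SLI/SRI: shift-and-insert. As with the narrowing shifts above,
 * the shift amount must be an immediate, so the valid range is unrolled
 * into a switch; out-of-range amounts leave the first operand unchanged.
 */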
case OP_ARM64_SLI:
case OP_ARM64_SRI: {
LLVMTypeRef intrin_result_t = simd_class_to_llvm_type (ctx, ins->klass);
llvm_ovr_tag_t ovr_tag = ovr_tag_from_llvm_type (intrin_result_t);
unsigned int element_bits = mono_llvm_get_prim_size_bits (LLVMGetElementType (intrin_result_t));
int range_min = 0;
int range_max = element_bits - 1;
if (ins->opcode == OP_ARM64_SRI) {
++range_min;
++range_max;
}
int iid = ins->opcode == OP_ARM64_SRI ? INTRINS_AARCH64_ADV_SIMD_SRI : INTRINS_AARCH64_ADV_SIMD_SLI;
int max_index = range_max - range_min + 1;
ImmediateUnrollCtx ictx = immediate_unroll_begin (ctx, bb, max_index, arg3, intrin_result_t, "arm64_sli_sri");
LLVMValueRef intrin_args [3] = { lhs, rhs, arg3 };
int i = 0;
while (immediate_unroll_next (&ictx, &i)) {
int shift_const = i + range_min;
intrin_args [2] = const_int32 (shift_const);
LLVMValueRef result = call_overloaded_intrins (ctx, iid, ovr_tag, intrin_args, "");
immediate_unroll_commit (&ictx, shift_const, result);
}
immediate_unroll_default (&ictx);
immediate_unroll_commit_default (&ictx, lhs);
LLVMValueRef result = immediate_unroll_end (&ictx, &cbb);
values [ins->dreg] = result;
break;
}
case OP_ARM64_SQRT_SCALAR: {
int iid = ins->inst_c0 == MONO_TYPE_R8 ? INTRINS_SQRT : INTRINS_SQRTF;
LLVMTypeRef t = LLVMTypeOf (lhs);
LLVMValueRef scalar = LLVMBuildExtractElement (builder, lhs, const_int32 (0), "");
LLVMValueRef result = call_intrins (ctx, iid, &scalar, "arm64_sqrt_scalar");
values [ins->dreg] = LLVMBuildInsertElement (builder, LLVMGetUndef (t), result, const_int32 (0), "");
break;
}
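/*
 * Store-pair: combine the two source values into one wider vector and write
 * it with a single store; the STNP forms mark the store non-temporal.
 */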
case OP_ARM64_STP:
case OP_ARM64_STP_SCALAR:
case OP_ARM64_STNP:
case OP_ARM64_STNP_SCALAR: {
gboolean nontemporal = FALSE;
gboolean scalar = FALSE;
switch (ins->opcode) {
case OP_ARM64_STNP: nontemporal = TRUE; break;
case OP_ARM64_STNP_SCALAR: nontemporal = TRUE; scalar = TRUE; break;
case OP_ARM64_STP_SCALAR: scalar = TRUE; break;
}
LLVMTypeRef rhs_t = LLVMTypeOf (rhs);
LLVMValueRef val = NULL;
LLVMTypeRef dst_t = LLVMPointerType (rhs_t, 0);
if (scalar)
val = LLVMBuildShuffleVector (builder, rhs, arg3, create_const_vector_2_i32 (0, 2), "");
else {
unsigned int rhs_elems = LLVMGetVectorSize (rhs_t);
LLVMTypeRef rhs_elt_t = LLVMGetElementType (rhs_t);
dst_t = LLVMPointerType (LLVMVectorType (rhs_elt_t, rhs_elems * 2), 0);
val = concatenate_vectors (ctx, rhs, arg3);
}
LLVMValueRef address = convert (ctx, lhs, dst_t);
LLVMValueRef store = mono_llvm_build_store (builder, val, address, FALSE, LLVM_BARRIER_NONE);
if (nontemporal)
set_nontemporal_flag (store);
break;
}
case OP_ARM64_LD1_INSERT: {
LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass);
LLVMTypeRef elem_t = LLVMGetElementType (ret_t);
LLVMValueRef address = convert (ctx, arg3, LLVMPointerType (elem_t, 0));
unsigned int alignment = mono_llvm_get_prim_size_bits (ret_t) / 8;
LLVMValueRef result = mono_llvm_build_aligned_load (builder, address, "arm64_ld1_insert", FALSE, alignment);
result = LLVMBuildInsertElement (builder, lhs, result, rhs, "arm64_ld1_insert");
values [ins->dreg] = result;
break;
}
case OP_ARM64_LD1R:
case OP_ARM64_LD1: {
gboolean replicate = ins->opcode == OP_ARM64_LD1R;
LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass);
unsigned int alignment = mono_llvm_get_prim_size_bits (ret_t) / 8;
LLVMValueRef address = lhs;
LLVMTypeRef address_t = LLVMPointerType (ret_t, 0);
if (replicate) {
LLVMTypeRef elem_t = LLVMGetElementType (ret_t);
address_t = LLVMPointerType (elem_t, 0);
}
address = convert (ctx, address, address_t);
LLVMValueRef result = mono_llvm_build_aligned_load (builder, address, "arm64_ld1", FALSE, alignment);
if (replicate) {
unsigned int elems = LLVMGetVectorSize (ret_t);
result = broadcast_element (ctx, result, elems);
}
values [ins->dreg] = result;
break;
}
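/*
 * Load-pair: load two adjacent vectors (or scalars) through consecutive
 * pointers and pack them into the ValueTuple return value, which is
 * spilled to addresses [ins->dreg].
 */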
case OP_ARM64_LDNP:
case OP_ARM64_LDNP_SCALAR:
case OP_ARM64_LDP:
case OP_ARM64_LDP_SCALAR: {
const char *oname = NULL;
gboolean nontemporal = FALSE;
gboolean scalar = FALSE;
switch (ins->opcode) {
case OP_ARM64_LDNP: oname = "arm64_ldnp"; nontemporal = TRUE; break;
case OP_ARM64_LDNP_SCALAR: oname = "arm64_ldnp_scalar"; nontemporal = TRUE; scalar = TRUE; break;
case OP_ARM64_LDP: oname = "arm64_ldp"; break;
case OP_ARM64_LDP_SCALAR: oname = "arm64_ldp_scalar"; scalar = TRUE; break;
}
if (!addresses [ins->dreg])
addresses [ins->dreg] = build_named_alloca (ctx, m_class_get_byval_arg (ins->klass), oname);
LLVMTypeRef ret_t = simd_valuetuple_to_llvm_type (ctx, ins->klass);
LLVMTypeRef vec_t = LLVMGetElementType (ret_t);
LLVMValueRef ix = const_int32 (1);
LLVMTypeRef src_t = LLVMPointerType (scalar ? LLVMGetElementType (vec_t) : vec_t, 0);
LLVMValueRef src0 = convert (ctx, lhs, src_t);
LLVMValueRef src1 = LLVMBuildGEP (builder, src0, &ix, 1, oname);
LLVMValueRef vals [] = { src0, src1 };
for (int i = 0; i < 2; ++i) {
vals [i] = LLVMBuildLoad (builder, vals [i], oname);
if (nontemporal)
set_nontemporal_flag (vals [i]);
}
unsigned int vec_sz = mono_llvm_get_prim_size_bits (vec_t);
if (scalar) {
g_assert (vec_sz == 64);
LLVMValueRef undef = LLVMGetUndef (vec_t);
for (int i = 0; i < 2; ++i)
vals [i] = LLVMBuildInsertElement (builder, undef, vals [i], const_int32 (0), oname);
}
LLVMValueRef val = LLVMGetUndef (ret_t);
for (int i = 0; i < 2; ++i)
val = LLVMBuildInsertValue (builder, val, vals [i], i, oname);
LLVMTypeRef retptr_t = LLVMPointerType (ret_t, 0);
LLVMValueRef dst = convert (ctx, addresses [ins->dreg], retptr_t);
LLVMBuildStore (builder, val, dst);
values [ins->dreg] = vec_sz == 64 ? val : NULL;
break;
}
case OP_ARM64_ST1: {
LLVMTypeRef t = LLVMTypeOf (rhs);
LLVMValueRef address = convert (ctx, lhs, LLVMPointerType (t, 0));
unsigned int alignment = mono_llvm_get_prim_size_bits (t) / 8;
mono_llvm_build_aligned_store (builder, rhs, address, FALSE, alignment);
break;
}
case OP_ARM64_ST1_SCALAR: {
LLVMTypeRef t = LLVMGetElementType (LLVMTypeOf (rhs));
LLVMValueRef val = LLVMBuildExtractElement (builder, rhs, arg3, "arm64_st1_scalar");
LLVMValueRef address = convert (ctx, lhs, LLVMPointerType (t, 0));
unsigned int alignment = mono_llvm_get_prim_size_bits (t) / 8;
mono_llvm_build_aligned_store (builder, val, address, FALSE, alignment);
break;
}
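/*
 * (Rounding) add/subtract-high-narrow. The rounding forms map to
 * intrinsics; the plain forms are open-coded as an add/sub, a logical
 * shift right by half the element width, and a truncation.
 */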
case OP_ARM64_ADDHN:
case OP_ARM64_ADDHN2:
case OP_ARM64_SUBHN:
case OP_ARM64_SUBHN2:
case OP_ARM64_RADDHN:
case OP_ARM64_RADDHN2:
case OP_ARM64_RSUBHN:
case OP_ARM64_RSUBHN2: {
LLVMValueRef args [2] = { lhs, rhs };
gboolean high = FALSE;
gboolean subtract = FALSE;
int iid = 0;
switch (ins->opcode) {
case OP_ARM64_ADDHN2: high = TRUE; case OP_ARM64_ADDHN: break;
case OP_ARM64_SUBHN2: high = TRUE; case OP_ARM64_SUBHN: subtract = TRUE; break;
case OP_ARM64_RSUBHN2: high = TRUE; case OP_ARM64_RSUBHN: iid = INTRINS_AARCH64_ADV_SIMD_RSUBHN; break;
case OP_ARM64_RADDHN2: high = TRUE; case OP_ARM64_RADDHN: iid = INTRINS_AARCH64_ADV_SIMD_RADDHN; break;
}
llvm_ovr_tag_t ovr_tag = ovr_tag_from_mono_vector_class (ins->klass);
if (high) {
args [0] = rhs;
args [1] = arg3;
ovr_tag = ovr_tag_smaller_vector (ovr_tag);
}
LLVMValueRef result = NULL;
if (iid != 0)
result = call_overloaded_intrins (ctx, iid, ovr_tag, args, "");
else {
LLVMTypeRef t = LLVMTypeOf (args [0]);
LLVMTypeRef elt_t = LLVMGetElementType (t);
unsigned int elems = LLVMGetVectorSize (t);
unsigned int elem_bits = mono_llvm_get_prim_size_bits (elt_t);
if (subtract)
result = LLVMBuildSub (builder, args [0], args [1], "");
else
result = LLVMBuildAdd (builder, args [0], args [1], "");
result = LLVMBuildLShr (builder, result, broadcast_constant (elem_bits / 2, elt_t, elems), "");
result = LLVMBuildTrunc (builder, result, LLVMVectorType (LLVMIntType (elem_bits / 2), elems), "");
}
if (high)
result = concatenate_vectors (ctx, lhs, result);
values [ins->dreg] = result;
break;
}
case OP_ARM64_SADD:
case OP_ARM64_UADD:
case OP_ARM64_SADD2:
case OP_ARM64_UADD2:
case OP_ARM64_SSUB:
case OP_ARM64_USUB:
case OP_ARM64_SSUB2:
case OP_ARM64_USUB2: {
LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass);
gboolean is_unsigned = FALSE;
gboolean high = FALSE;
gboolean subtract = FALSE;
switch (ins->opcode) {
case OP_ARM64_SADD2: high = TRUE; case OP_ARM64_SADD: break;
case OP_ARM64_UADD2: high = TRUE; case OP_ARM64_UADD: is_unsigned = TRUE; break;
case OP_ARM64_SSUB2: high = TRUE; case OP_ARM64_SSUB: subtract = TRUE; break;
case OP_ARM64_USUB2: high = TRUE; case OP_ARM64_USUB: subtract = TRUE; is_unsigned = TRUE; break;
}
LLVMValueRef args [] = { lhs, rhs };
for (int i = 0; i < 2; ++i) {
LLVMValueRef arg = args [i];
LLVMTypeRef arg_t = LLVMTypeOf (arg);
if (high && arg_t != ret_t)
arg = extract_high_elements (ctx, arg);
if (is_unsigned)
arg = LLVMBuildZExt (builder, arg, ret_t, "");
else
arg = LLVMBuildSExt (builder, arg, ret_t, "");
args [i] = arg;
}
LLVMValueRef result = NULL;
if (subtract)
result = LLVMBuildSub (builder, args [0], args [1], "arm64_sub");
else
result = LLVMBuildAdd (builder, args [0], args [1], "arm64_add");
values [ins->dreg] = result;
break;
}
case OP_ARM64_SABAL:
case OP_ARM64_SABAL2:
case OP_ARM64_UABAL:
case OP_ARM64_UABAL2:
case OP_ARM64_SABDL:
case OP_ARM64_SABDL2:
case OP_ARM64_UABDL:
case OP_ARM64_UABDL2:
case OP_ARM64_SABA:
case OP_ARM64_UABA:
case OP_ARM64_SABD:
case OP_ARM64_UABD: {
LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass);
gboolean is_unsigned = FALSE;
gboolean high = FALSE;
gboolean add = FALSE;
gboolean widen = FALSE;
switch (ins->opcode) {
case OP_ARM64_SABAL2: high = TRUE; case OP_ARM64_SABAL: widen = TRUE; add = TRUE; break;
case OP_ARM64_UABAL2: high = TRUE; case OP_ARM64_UABAL: widen = TRUE; add = TRUE; is_unsigned = TRUE; break;
case OP_ARM64_SABDL2: high = TRUE; case OP_ARM64_SABDL: widen = TRUE; break;
case OP_ARM64_UABDL2: high = TRUE; case OP_ARM64_UABDL: widen = TRUE; is_unsigned = TRUE; break;
case OP_ARM64_SABA: add = TRUE; break;
case OP_ARM64_UABA: add = TRUE; is_unsigned = TRUE; break;
case OP_ARM64_UABD: is_unsigned = TRUE; break;
}
LLVMValueRef args [] = { lhs, rhs };
if (add) {
args [0] = rhs;
args [1] = arg3;
}
if (high)
for (int i = 0; i < 2; ++i)
args [i] = extract_high_elements (ctx, args [i]);
int iid = is_unsigned ? INTRINS_AARCH64_ADV_SIMD_UABD : INTRINS_AARCH64_ADV_SIMD_SABD;
llvm_ovr_tag_t ovr_tag = ovr_tag_from_llvm_type (LLVMTypeOf (args [0]));
LLVMValueRef result = call_overloaded_intrins (ctx, iid, ovr_tag, args, "");
if (widen)
result = LLVMBuildZExt (builder, result, ret_t, "");
if (add)
result = LLVMBuildAdd (builder, result, lhs, "");
values [ins->dreg] = result;
break;
}
case OP_ARM64_XHORIZ: {
gboolean truncate = FALSE;
LLVMTypeRef arg_t = LLVMTypeOf (lhs);
LLVMTypeRef elem_t = LLVMGetElementType (arg_t);
LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass);
llvm_ovr_tag_t ovr_tag = ovr_tag_from_llvm_type (arg_t);
if (elem_t == i1_t || elem_t == i2_t)
truncate = TRUE;
LLVMValueRef result = call_overloaded_intrins (ctx, ins->inst_c0, ovr_tag, &lhs, "");
if (truncate) {
// @llvm.aarch64.neon.saddv.i32.v8i16 ought to return an i16, but doesn't in LLVM 9.
result = LLVMBuildTrunc (builder, result, elem_t, "");
}
result = vector_from_scalar (ctx, ret_t, result);
values [ins->dreg] = result;
break;
}
case OP_ARM64_SADDLV:
case OP_ARM64_UADDLV: {
LLVMTypeRef arg_t = LLVMTypeOf (lhs);
LLVMTypeRef elem_t = LLVMGetElementType (arg_t);
LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass);
llvm_ovr_tag_t ovr_tag = ovr_tag_from_llvm_type (arg_t);
gboolean truncate = elem_t == i1_t;
int iid = ins->opcode == OP_ARM64_UADDLV ? INTRINS_AARCH64_ADV_SIMD_UADDLV : INTRINS_AARCH64_ADV_SIMD_SADDLV;
LLVMValueRef result = call_overloaded_intrins (ctx, iid, ovr_tag, &lhs, "");
if (truncate) {
// @llvm.aarch64.neon.saddlv.i32.v16i8 ought to return an i16, but doesn't in LLVM 9.
result = LLVMBuildTrunc (builder, result, i2_t, "");
}
result = vector_from_scalar (ctx, ret_t, result);
values [ins->dreg] = result;
break;
}
case OP_ARM64_UADALP:
case OP_ARM64_SADALP: {
llvm_ovr_tag_t ovr_tag = ovr_tag_from_mono_vector_class (ins->klass);
int iid = ins->opcode == OP_ARM64_UADALP ? INTRINS_AARCH64_ADV_SIMD_UADDLP : INTRINS_AARCH64_ADV_SIMD_SADDLP;
LLVMValueRef result = call_overloaded_intrins (ctx, iid, ovr_tag, &rhs, "");
result = LLVMBuildAdd (builder, result, lhs, "");
values [ins->dreg] = result;
break;
}
case OP_ARM64_ADDP_SCALAR: {
llvm_ovr_tag_t ovr_tag = INTRIN_vector128 | INTRIN_int64;
LLVMValueRef result = call_overloaded_intrins (ctx, INTRINS_AARCH64_ADV_SIMD_UADDV, ovr_tag, &lhs, "arm64_addp_scalar");
result = LLVMBuildInsertElement (builder, LLVMConstNull (v64_i8_t), result, const_int32 (0), "");
values [ins->dreg] = result;
break;
}
case OP_ARM64_FADDP_SCALAR: {
LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass);
LLVMValueRef hi = LLVMBuildExtractElement (builder, lhs, const_int32 (0), "");
LLVMValueRef lo = LLVMBuildExtractElement (builder, lhs, const_int32 (1), "");
LLVMValueRef result = LLVMBuildFAdd (builder, hi, lo, "arm64_faddp_scalar");
result = LLVMBuildInsertElement (builder, LLVMConstNull (ret_t), result, const_int32 (0), "");
values [ins->dreg] = result;
break;
}
case OP_ARM64_SXTL:
case OP_ARM64_SXTL2:
case OP_ARM64_UXTL:
case OP_ARM64_UXTL2: {
gboolean high = FALSE;
gboolean is_unsigned = FALSE;
switch (ins->opcode) {
case OP_ARM64_SXTL2: high = TRUE; break;
case OP_ARM64_UXTL2: high = TRUE; case OP_ARM64_UXTL: is_unsigned = TRUE; break;
}
LLVMTypeRef t = LLVMTypeOf (lhs);
unsigned int elem_bits = LLVMGetIntTypeWidth (LLVMGetElementType (t));
unsigned int src_elems = LLVMGetVectorSize (t);
unsigned int dst_elems = src_elems;
LLVMValueRef arg = lhs;
if (high) {
arg = extract_high_elements (ctx, lhs);
dst_elems = LLVMGetVectorSize (LLVMTypeOf (arg));
}
LLVMTypeRef result_t = LLVMVectorType (LLVMIntType (elem_bits * 2), dst_elems);
LLVMValueRef result = NULL;
if (is_unsigned)
result = LLVMBuildZExt (builder, arg, result_t, "arm64_uxtl");
else
result = LLVMBuildSExt (builder, arg, result_t, "arm64_sxtl");
values [ins->dreg] = result;
break;
}
case OP_ARM64_TRN1:
case OP_ARM64_TRN2: {
gboolean high = ins->opcode == OP_ARM64_TRN2;
LLVMTypeRef t = LLVMTypeOf (lhs);
unsigned int src_elems = LLVMGetVectorSize (t);
int mask [MAX_VECTOR_ELEMS] = { 0 };
int laneix = high ? 1 : 0;
for (unsigned int i = 0; i < src_elems; i += 2) {
mask [i] = laneix;
mask [i + 1] = laneix + src_elems;
laneix += 2;
}
values [ins->dreg] = LLVMBuildShuffleVector (builder, lhs, rhs, create_const_vector_i32 (mask, src_elems), "arm64_trn");
break;
}
case OP_ARM64_UZP1:
case OP_ARM64_UZP2: {
gboolean high = ins->opcode == OP_ARM64_UZP2;
LLVMTypeRef t = LLVMTypeOf (lhs);
unsigned int src_elems = LLVMGetVectorSize (t);
int mask [MAX_VECTOR_ELEMS] = { 0 };
int laneix = high ? 1 : 0;
for (unsigned int i = 0; i < src_elems; ++i) {
mask [i] = laneix;
laneix += 2;
}
values [ins->dreg] = LLVMBuildShuffleVector (builder, lhs, rhs, create_const_vector_i32 (mask, src_elems), "arm64_uzp");
break;
}
case OP_ARM64_ZIP1:
case OP_ARM64_ZIP2: {
gboolean high = ins->opcode == OP_ARM64_ZIP2;
LLVMTypeRef t = LLVMTypeOf (lhs);
unsigned int src_elems = LLVMGetVectorSize (t);
int mask [MAX_VECTOR_ELEMS] = { 0 };
int laneix = high ? src_elems / 2 : 0;
for (unsigned int i = 0; i < src_elems; i += 2) {
mask [i] = laneix;
mask [i + 1] = laneix + src_elems;
++laneix;
}
values [ins->dreg] = LLVMBuildShuffleVector (builder, lhs, rhs, create_const_vector_i32 (mask, src_elems), "arm64_zip");
break;
}
case OP_ARM64_ABSCOMPARE: {
IntrinsicId iid = (IntrinsicId) ins->inst_c0;
gboolean scalar = ins->inst_c1;
LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass);
LLVMTypeRef elem_t = LLVMGetElementType (ret_t);
llvm_ovr_tag_t ovr_tag = ovr_tag_from_mono_vector_class (ins->klass);
ovr_tag = ovr_tag_corresponding_integer (ovr_tag);
LLVMValueRef args [] = { lhs, rhs };
LLVMTypeRef result_t = ret_t;
if (scalar) {
ovr_tag = ovr_tag_force_scalar (ovr_tag);
result_t = elem_t;
for (int i = 0; i < 2; ++i)
args [i] = scalar_from_vector (ctx, args [i]);
}
LLVMValueRef result = call_overloaded_intrins (ctx, iid, ovr_tag, args, "");
result = LLVMBuildBitCast (builder, result, result_t, "");
if (scalar)
result = vector_from_scalar (ctx, ret_t, result);
values [ins->dreg] = result;
break;
}
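/*
 * OP_XOP_OVR_*: generic dispatch to an overloaded intrinsic; inst_c0 holds
 * the intrinsic id and the overload tag is derived from the vector class.
 */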
case OP_XOP_OVR_X_X: {
IntrinsicId iid = (IntrinsicId) ins->inst_c0;
llvm_ovr_tag_t ovr_tag = ovr_tag_from_mono_vector_class (ins->klass);
values [ins->dreg] = call_overloaded_intrins (ctx, iid, ovr_tag, &lhs, "");
break;
}
case OP_XOP_OVR_X_X_X: {
IntrinsicId iid = (IntrinsicId) ins->inst_c0;
llvm_ovr_tag_t ovr_tag = ovr_tag_from_mono_vector_class (ins->klass);
LLVMValueRef args [] = { lhs, rhs };
values [ins->dreg] = call_overloaded_intrins (ctx, iid, ovr_tag, args, "");
break;
}
case OP_XOP_OVR_X_X_X_X: {
IntrinsicId iid = (IntrinsicId) ins->inst_c0;
llvm_ovr_tag_t ovr_tag = ovr_tag_from_mono_vector_class (ins->klass);
LLVMValueRef args [] = { lhs, rhs, arg3 };
values [ins->dreg] = call_overloaded_intrins (ctx, iid, ovr_tag, args, "");
break;
}
case OP_XOP_OVR_BYSCALAR_X_X_X: {
IntrinsicId iid = (IntrinsicId) ins->inst_c0;
llvm_ovr_tag_t ovr_tag = ovr_tag_from_mono_vector_class (ins->klass);
LLVMTypeRef t = LLVMTypeOf (lhs);
unsigned int elems = LLVMGetVectorSize (t);
LLVMValueRef arg2 = broadcast_element (ctx, scalar_from_vector (ctx, rhs), elems);
LLVMValueRef args [] = { lhs, arg2 };
values [ins->dreg] = call_overloaded_intrins (ctx, iid, ovr_tag, args, "");
break;
}
case OP_XOP_OVR_SCALAR_X_X:
case OP_XOP_OVR_SCALAR_X_X_X:
case OP_XOP_OVR_SCALAR_X_X_X_X: {
int num_args = 0;
IntrinsicId iid = (IntrinsicId) ins->inst_c0;
LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass);
switch (ins->opcode) {
case OP_XOP_OVR_SCALAR_X_X: num_args = 1; break;
case OP_XOP_OVR_SCALAR_X_X_X: num_args = 2; break;
case OP_XOP_OVR_SCALAR_X_X_X_X: num_args = 3; break;
}
/* LLVM 9 NEON intrinsic functions have scalar overloads, but only for
* 32- and 64-bit integers and for floating point types; 8- and 16-bit
* integers are unsupported and fail during instruction selection. Work
* around this by performing the operation on vectors and then explicitly
* clearing the upper bits of the register.
*/
ScalarOpFromVectorOpCtx sctx = scalar_op_from_vector_op (ctx, ret_t, ins);
LLVMValueRef args [3] = { lhs, rhs, arg3 };
scalar_op_from_vector_op_process_args (&sctx, args, num_args);
LLVMValueRef result = call_overloaded_intrins (ctx, iid, sctx.ovr_tag, args, "");
result = scalar_op_from_vector_op_process_result (&sctx, result);
values [ins->dreg] = result;
break;
}
#endif
case OP_DUMMY_USE:
break;
/*
* EXCEPTION HANDLING
*/
case OP_IMPLICIT_EXCEPTION:
/* This marks a place where an implicit exception can happen */
if (bb->region != -1)
set_failure (ctx, "implicit-exception");
break;
case OP_THROW:
case OP_RETHROW: {
gboolean rethrow = (ins->opcode == OP_RETHROW);
if (ctx->llvm_only) {
emit_llvmonly_throw (ctx, bb, rethrow, lhs);
has_terminator = TRUE;
ctx->unreachable [bb->block_num] = TRUE;
} else {
emit_throw (ctx, bb, rethrow, lhs);
builder = ctx->builder;
}
break;
}
case OP_CALL_HANDLER: {
/*
* We don't 'call' handlers, but instead simply branch to them.
* The code generated by ENDFINALLY will branch back to us.
*/
LLVMBasicBlockRef noex_bb;
GSList *bb_list;
BBInfo *info = &bblocks [ins->inst_target_bb->block_num];
bb_list = info->call_handler_return_bbs;
/*
* Set the indicator variable for the finally clause.
*/
lhs = info->finally_ind;
g_assert (lhs);
LLVMBuildStore (builder, LLVMConstInt (LLVMInt32Type (), g_slist_length (bb_list) + 1, FALSE), lhs);
/* Branch to the finally clause */
LLVMBuildBr (builder, info->call_handler_target_bb);
noex_bb = gen_bb (ctx, "CALL_HANDLER_CONT_BB");
info->call_handler_return_bbs = g_slist_append_mempool (cfg->mempool, info->call_handler_return_bbs, noex_bb);
builder = ctx->builder = create_builder (ctx);
LLVMPositionBuilderAtEnd (ctx->builder, noex_bb);
bblocks [bb->block_num].end_bblock = noex_bb;
break;
}
case OP_START_HANDLER: {
break;
}
case OP_ENDFINALLY: {
LLVMBasicBlockRef resume_bb;
MonoBasicBlock *handler_bb;
LLVMValueRef val, switch_ins, callee;
GSList *bb_list;
BBInfo *info;
gboolean is_fault = MONO_REGION_FLAGS (bb->region) == MONO_EXCEPTION_CLAUSE_FAULT;
/*
* Fault clauses are like finally clauses, but they are only called if an exception is thrown.
*/
if (!is_fault) {
handler_bb = (MonoBasicBlock*)g_hash_table_lookup (ctx->region_to_handler, GUINT_TO_POINTER (mono_get_block_region_notry (cfg, bb->region)));
g_assert (handler_bb);
info = &bblocks [handler_bb->block_num];
lhs = info->finally_ind;
g_assert (lhs);
bb_list = info->call_handler_return_bbs;
resume_bb = gen_bb (ctx, "ENDFINALLY_RESUME_BB");
/* Load the finally variable */
val = LLVMBuildLoad (builder, lhs, "");
/* Reset the variable */
LLVMBuildStore (builder, LLVMConstInt (LLVMInt32Type (), 0, FALSE), lhs);
/* Branch to either resume_bb, or to the bblocks in bb_list */
switch_ins = LLVMBuildSwitch (builder, val, resume_bb, g_slist_length (bb_list));
/*
* The other targets are added at the end to handle OP_CALL_HANDLER
* opcodes processed later.
*/
info->endfinally_switch_ins_list = g_slist_append_mempool (cfg->mempool, info->endfinally_switch_ins_list, switch_ins);
builder = ctx->builder = create_builder (ctx);
LLVMPositionBuilderAtEnd (ctx->builder, resume_bb);
}
if (ctx->llvm_only) {
if (!cfg->deopt) {
emit_resume_eh (ctx, bb);
} else {
/* Not needed */
LLVMBuildUnreachable (builder);
}
} else {
LLVMTypeRef icall_sig = LLVMFunctionType (LLVMVoidType (), NULL, 0, FALSE);
if (ctx->cfg->compile_aot) {
callee = get_callee (ctx, icall_sig, MONO_PATCH_INFO_JIT_ICALL_ID, GUINT_TO_POINTER (MONO_JIT_ICALL_mono_llvm_resume_unwind_trampoline));
} else {
callee = get_jit_callee (ctx, "llvm_resume_unwind_trampoline", icall_sig, MONO_PATCH_INFO_JIT_ICALL_ID, GUINT_TO_POINTER (MONO_JIT_ICALL_mono_llvm_resume_unwind_trampoline));
}
LLVMBuildCall (builder, callee, NULL, 0, "");
LLVMBuildUnreachable (builder);
}
has_terminator = TRUE;
break;
}
case OP_ENDFILTER: {
g_assert (cfg->llvm_only && cfg->deopt);
LLVMBuildUnreachable (builder);
has_terminator = TRUE;
break;
}
case OP_IL_SEQ_POINT:
break;
default: {
char reason [128];
g_snprintf (reason, sizeof (reason), "opcode %s", mono_inst_name (ins->opcode));
set_failure (ctx, reason);
break;
}
}
if (!ctx_ok (ctx))
break;
/* Convert the value to the type required by phi nodes */
if (spec [MONO_INST_DEST] != ' ' && !MONO_IS_STORE_MEMBASE (ins) && ctx->vreg_types [ins->dreg]) {
if (ctx->is_vphi [ins->dreg])
/* vtypes */
values [ins->dreg] = addresses [ins->dreg];
else
values [ins->dreg] = convert (ctx, values [ins->dreg], ctx->vreg_types [ins->dreg]);
}
/* Add stores for volatile/ref variables */
if (spec [MONO_INST_DEST] != ' ' && spec [MONO_INST_DEST] != 'v' && !MONO_IS_STORE_MEMBASE (ins)) {
if (!skip_volatile_store)
emit_volatile_store (ctx, ins->dreg);
#ifdef TARGET_WASM
if (vreg_is_ref (cfg, ins->dreg) && ctx->values [ins->dreg])
emit_gc_pin (ctx, builder, ins->dreg);
#endif
}
}
if (!ctx_ok (ctx))
return;
if (!has_terminator && bb->next_bb && (bb == cfg->bb_entry || bb->in_count > 0)) {
LLVMBuildBr (builder, get_bb (ctx, bb->next_bb));
}
if (bb == cfg->bb_exit && sig->ret->type == MONO_TYPE_VOID) {
emit_dbg_loc (ctx, builder, cfg->header->code + cfg->header->code_size - 1);
LLVMBuildRetVoid (builder);
}
if (bb == cfg->bb_entry)
ctx->last_alloca = LLVMGetLastInstruction (get_bb (ctx, cfg->bb_entry));
}
/*
* mono_llvm_check_method_supported:
*
* Do some quick checks to decide whether cfg->method can be compiled by LLVM, to avoid
* compiling a method twice.
*/
void
mono_llvm_check_method_supported (MonoCompile *cfg)
{
int i, j;
#ifdef TARGET_WASM
if (mono_method_signature_internal (cfg->method)->call_convention == MONO_CALL_VARARG) {
cfg->exception_message = g_strdup ("vararg callconv");
cfg->disable_llvm = TRUE;
return;
}
#endif
if (cfg->llvm_only)
return;
if (cfg->method->save_lmf) {
cfg->exception_message = g_strdup ("lmf");
cfg->disable_llvm = TRUE;
}
if (cfg->disable_llvm)
return;
/*
* Nested clauses where one of the clauses is a finally clause are
* not supported, because LLVM can't figure out the control flow,
* probably because we resume exception handling by calling our
* own function instead of using the 'resume' llvm instruction.
*/
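/*
 * For example (a hypothetical IL shape that the check below rejects):
 *
 *   try {
 *       try { ... } finally { ... }
 *   } catch (Exception) { ... }
 *
 * Here the inner clause's try range is contained in the outer one's, so the
 * method falls back to the non-LLVM backend.
 */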
for (i = 0; i < cfg->header->num_clauses; ++i) {
for (j = 0; j < cfg->header->num_clauses; ++j) {
MonoExceptionClause *clause1 = &cfg->header->clauses [i];
MonoExceptionClause *clause2 = &cfg->header->clauses [j];
// FIXME: Nested try clauses fail in some cases too, i.e. #37273
if (i != j && clause1->try_offset >= clause2->try_offset && clause1->handler_offset <= clause2->handler_offset) {
//(clause1->flags == MONO_EXCEPTION_CLAUSE_FINALLY || clause2->flags == MONO_EXCEPTION_CLAUSE_FINALLY)) {
cfg->exception_message = g_strdup ("nested clauses");
cfg->disable_llvm = TRUE;
break;
}
}
}
if (cfg->disable_llvm)
return;
/* FIXME: */
if (cfg->method->dynamic) {
cfg->exception_message = g_strdup ("dynamic.");
cfg->disable_llvm = TRUE;
}
if (cfg->disable_llvm)
return;
}
static LLVMCallInfo*
get_llvm_call_info (MonoCompile *cfg, MonoMethodSignature *sig)
{
LLVMCallInfo *linfo;
int i;
if (cfg->gsharedvt && cfg->llvm_only && mini_is_gsharedvt_variable_signature (sig)) {
int i, n, pindex;
/*
* Gsharedvt methods have the following calling convention:
* - all arguments are passed by ref, even non generic ones
* - the return value is returned by ref too, using a vret
* argument passed after 'this'.
*/
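/*
 * For illustration (hypothetical instance method with a variable signature):
 *   T Get<T> (T t, int i)
 * would be lowered to roughly
 *   void Get (gpointer this, gpointer vret, gpointer p_t, gpointer p_i)
 * i.e. every argument and the return value is passed through a pointer.
 */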
n = sig->param_count + sig->hasthis;
linfo = (LLVMCallInfo*)mono_mempool_alloc0 (cfg->mempool, sizeof (LLVMCallInfo) + (sizeof (LLVMArgInfo) * n));
pindex = 0;
if (sig->hasthis)
linfo->args [pindex ++].storage = LLVMArgNormal;
if (sig->ret->type != MONO_TYPE_VOID) {
if (mini_is_gsharedvt_variable_type (sig->ret))
linfo->ret.storage = LLVMArgGsharedvtVariable;
else if (mini_type_is_vtype (sig->ret))
linfo->ret.storage = LLVMArgGsharedvtFixedVtype;
else
linfo->ret.storage = LLVMArgGsharedvtFixed;
linfo->vret_arg_index = pindex;
} else {
linfo->ret.storage = LLVMArgNone;
}
for (i = 0; i < sig->param_count; ++i) {
if (m_type_is_byref (sig->params [i]))
linfo->args [pindex].storage = LLVMArgNormal;
else if (mini_is_gsharedvt_variable_type (sig->params [i]))
linfo->args [pindex].storage = LLVMArgGsharedvtVariable;
else if (mini_type_is_vtype (sig->params [i]))
linfo->args [pindex].storage = LLVMArgGsharedvtFixedVtype;
else
linfo->args [pindex].storage = LLVMArgGsharedvtFixed;
linfo->args [pindex].type = sig->params [i];
pindex ++;
}
return linfo;
}
linfo = mono_arch_get_llvm_call_info (cfg, sig);
linfo->dummy_arg_pindex = -1;
for (i = 0; i < sig->param_count; ++i)
linfo->args [i + sig->hasthis].type = sig->params [i];
return linfo;
}
static void
emit_method_inner (EmitContext *ctx);
static void
free_ctx (EmitContext *ctx)
{
GSList *l;
g_free (ctx->values);
g_free (ctx->addresses);
g_free (ctx->vreg_types);
g_free (ctx->is_vphi);
g_free (ctx->vreg_cli_types);
g_free (ctx->is_dead);
g_free (ctx->unreachable);
g_free (ctx->gc_var_indexes);
g_ptr_array_free (ctx->phi_values, TRUE);
g_free (ctx->bblocks);
g_hash_table_destroy (ctx->region_to_handler);
g_hash_table_destroy (ctx->clause_to_handler);
g_hash_table_destroy (ctx->jit_callees);
g_ptr_array_free (ctx->callsite_list, TRUE);
g_free (ctx->method_name);
g_ptr_array_free (ctx->bblock_list, TRUE);
for (l = ctx->builders; l; l = l->next) {
LLVMBuilderRef builder = (LLVMBuilderRef)l->data;
LLVMDisposeBuilder (builder);
}
g_free (ctx);
}
static gboolean
is_linkonce_method (MonoMethod *method)
{
#ifdef TARGET_WASM
/*
* Under wasm, linkonce works, so use it instead of the dedup pass for wrappers at least.
* FIXME: Use for everything, i.e. can_dedup ().
* FIXME: Fails System.Core tests
* -> amodule->sorted_methods contains duplicates, screwing up jit tables.
*/
// FIXME: This works, but the aot data for the methods is still kept, so size still increases
#if 0
if (method->wrapper_type == MONO_WRAPPER_OTHER) {
WrapperInfo *info = mono_marshal_get_wrapper_info (method);
if (info->subtype == WRAPPER_SUBTYPE_GSHAREDVT_IN_SIG || info->subtype == WRAPPER_SUBTYPE_GSHAREDVT_OUT_SIG)
return TRUE;
}
#endif
#endif
return FALSE;
}
/*
* mono_llvm_emit_method:
*
* Emit LLVM IL from the mono IL, and compile it to native code using LLVM.
*/
void
mono_llvm_emit_method (MonoCompile *cfg)
{
EmitContext *ctx;
char *method_name;
gboolean is_linkonce = FALSE;
int i;
if (cfg->skip)
return;
/* The code below might acquire the loader lock, so use it for global locking */
mono_loader_lock ();
ctx = g_new0 (EmitContext, 1);
ctx->cfg = cfg;
ctx->mempool = cfg->mempool;
/*
* This maps vregs to the LLVM instruction defining them
*/
ctx->values = g_new0 (LLVMValueRef, cfg->next_vreg);
/*
* This maps vregs for volatile variables to the LLVM instruction defining their
* address.
*/
ctx->addresses = g_new0 (LLVMValueRef, cfg->next_vreg);
ctx->vreg_types = g_new0 (LLVMTypeRef, cfg->next_vreg);
ctx->is_vphi = g_new0 (gboolean, cfg->next_vreg);
ctx->vreg_cli_types = g_new0 (MonoType*, cfg->next_vreg);
ctx->phi_values = g_ptr_array_sized_new (256);
/*
* This signals whether the vreg was defined by a phi node with no input vars
* (i.e. all its input bblocks end with NOT_REACHABLE).
*/
ctx->is_dead = g_new0 (gboolean, cfg->next_vreg);
/* Whether the bblock is unreachable */
ctx->unreachable = g_new0 (gboolean, cfg->max_block_num);
ctx->bblock_list = g_ptr_array_sized_new (256);
ctx->region_to_handler = g_hash_table_new (NULL, NULL);
ctx->clause_to_handler = g_hash_table_new (NULL, NULL);
ctx->callsite_list = g_ptr_array_new ();
ctx->jit_callees = g_hash_table_new (NULL, NULL);
if (cfg->compile_aot) {
ctx->module = &aot_module;
/*
* Allow the linker to discard duplicate copies of wrappers, generic instances etc. by using the 'linkonce'
* linkage for them. This requires the following:
* - the method needs to have a unique mangled name
* - llvmonly mode, since the code in aot-runtime.c would initialize got slots in the wrong aot image etc.
*/
if (ctx->module->llvm_only && ctx->module->static_link && is_linkonce_method (cfg->method))
is_linkonce = TRUE;
if (is_linkonce || mono_aot_is_externally_callable (cfg->method))
method_name = mono_aot_get_mangled_method_name (cfg->method);
else
method_name = mono_aot_get_method_name (cfg);
cfg->llvm_method_name = g_strdup (method_name);
} else {
ctx->module = init_jit_module ();
method_name = mono_method_full_name (cfg->method, TRUE);
}
ctx->method_name = method_name;
ctx->is_linkonce = is_linkonce;
if (cfg->compile_aot) {
ctx->lmodule = ctx->module->lmodule;
} else {
ctx->lmodule = LLVMModuleCreateWithName (g_strdup_printf ("jit-module-%s", cfg->method->name));
}
ctx->llvm_only = ctx->module->llvm_only;
#ifdef TARGET_WASM
ctx->emit_dummy_arg = TRUE;
#endif
emit_method_inner (ctx);
if (!ctx_ok (ctx)) {
if (ctx->lmethod) {
/* Need to add unused phi nodes as they can be referenced by other values */
LLVMBasicBlockRef phi_bb = LLVMAppendBasicBlock (ctx->lmethod, "PHI_BB");
LLVMBuilderRef builder;
builder = create_builder (ctx);
LLVMPositionBuilderAtEnd (builder, phi_bb);
for (i = 0; i < ctx->phi_values->len; ++i) {
LLVMValueRef v = (LLVMValueRef)g_ptr_array_index (ctx->phi_values, i);
if (LLVMGetInstructionParent (v) == NULL)
LLVMInsertIntoBuilder (builder, v);
}
if (ctx->module->llvm_only && ctx->module->static_link && cfg->interp) {
/* The caller will retry compilation */
LLVMDeleteFunction (ctx->lmethod);
} else if (ctx->module->llvm_only && ctx->module->static_link) {
// Keep a stub for the function since it might be called directly
int nbbs = LLVMCountBasicBlocks (ctx->lmethod);
LLVMBasicBlockRef *bblocks = g_new0 (LLVMBasicBlockRef, nbbs);
LLVMGetBasicBlocks (ctx->lmethod, bblocks);
for (int i = 0; i < nbbs; ++i)
LLVMRemoveBasicBlockFromParent (bblocks [i]);
LLVMBasicBlockRef entry_bb = LLVMAppendBasicBlock (ctx->lmethod, "ENTRY");
builder = create_builder (ctx);
LLVMPositionBuilderAtEnd (builder, entry_bb);
ctx->builder = builder;
LLVMTypeRef sig = LLVMFunctionType0 (LLVMVoidType (), FALSE);
LLVMValueRef callee = get_callee (ctx, sig, MONO_PATCH_INFO_JIT_ICALL_ADDR, GUINT_TO_POINTER (MONO_JIT_ICALL_mini_llvmonly_throw_nullref_exception));
LLVMBuildCall (builder, callee, NULL, 0, "");
LLVMBuildUnreachable (builder);
/* Clean references to instructions inside the method */
for (int i = 0; i < ctx->callsite_list->len; ++i) {
CallSite *callsite = (CallSite*)g_ptr_array_index (ctx->callsite_list, i);
if (callsite->lmethod == ctx->lmethod)
callsite->load = NULL;
}
} else {
LLVMDeleteFunction (ctx->lmethod);
}
}
}
free_ctx (ctx);
mono_loader_unlock ();
}
static void
emit_method_inner (EmitContext *ctx)
{
MonoCompile *cfg = ctx->cfg;
MonoMethodSignature *sig;
MonoBasicBlock *bb;
LLVMTypeRef method_type;
LLVMValueRef method = NULL;
LLVMValueRef *values = ctx->values;
int i, max_block_num, bb_index;
gboolean llvmonly_fail = FALSE;
LLVMCallInfo *linfo;
LLVMModuleRef lmodule = ctx->lmodule;
BBInfo *bblocks;
GPtrArray *bblock_list = ctx->bblock_list;
MonoMethodHeader *header;
MonoExceptionClause *clause;
char **names;
LLVMBuilderRef entry_builder = NULL;
LLVMBasicBlockRef entry_bb = NULL;
if (cfg->gsharedvt && !cfg->llvm_only) {
set_failure (ctx, "gsharedvt");
return;
}
#if 0
{
static int count = 0;
count ++;
char *llvm_count_str = g_getenv ("LLVM_COUNT");
if (llvm_count_str) {
int lcount = atoi (llvm_count_str);
g_free (llvm_count_str);
if (count == lcount) {
printf ("LAST: %s\n", mono_method_full_name (cfg->method, TRUE));
fflush (stdout);
}
if (count > lcount) {
set_failure (ctx, "count");
return;
}
}
}
#endif
// If we come upon one of the init_method wrappers, we need to find
// the LLVM method we have already emitted for it and tell LLVM that the
// wrapper's managed method info is associated with that method, which
// we constructed ourselves from LLVM IR.
//
// This is necessary to unwind through the init_method, in the case that
// it has to run a static cctor that throws an exception
if (cfg->method->wrapper_type == MONO_WRAPPER_OTHER) {
WrapperInfo *info = mono_marshal_get_wrapper_info (cfg->method);
if (info->subtype == WRAPPER_SUBTYPE_AOT_INIT) {
method = get_init_func (ctx->module, info->d.aot_init.subtype);
ctx->lmethod = method;
ctx->module->max_method_idx = MAX (ctx->module->max_method_idx, cfg->method_index);
const char *init_name = mono_marshal_get_aot_init_wrapper_name (info->d.aot_init.subtype);
ctx->method_name = g_strdup_printf ("%s_%s", ctx->module->global_prefix, init_name);
ctx->cfg->asm_symbol = g_strdup (ctx->method_name);
if (!cfg->llvm_only && ctx->module->external_symbols) {
LLVMSetLinkage (method, LLVMExternalLinkage);
LLVMSetVisibility (method, LLVMHiddenVisibility);
}
/* Not looked up at runtime */
g_hash_table_insert (ctx->module->no_method_table_lmethods, method, method);
goto after_codegen;
} else if (info->subtype == WRAPPER_SUBTYPE_LLVM_FUNC) {
g_assert (info->d.llvm_func.subtype == LLVM_FUNC_WRAPPER_GC_POLL);
if (cfg->compile_aot) {
method = ctx->module->gc_poll_cold_wrapper;
g_assert (method);
} else {
method = emit_icall_cold_wrapper (ctx->module, lmodule, MONO_JIT_ICALL_mono_threads_state_poll, FALSE);
}
ctx->lmethod = method;
ctx->module->max_method_idx = MAX (ctx->module->max_method_idx, cfg->method_index);
ctx->method_name = g_strdup (LLVMGetValueName (method)); //g_strdup_printf ("%s_%s", ctx->module->global_prefix, LLVMGetValueName (method));
ctx->cfg->asm_symbol = g_strdup (ctx->method_name);
if (!cfg->llvm_only && ctx->module->external_symbols) {
LLVMSetLinkage (method, LLVMExternalLinkage);
LLVMSetVisibility (method, LLVMHiddenVisibility);
}
goto after_codegen;
}
}
sig = mono_method_signature_internal (cfg->method);
ctx->sig = sig;
linfo = get_llvm_call_info (cfg, sig);
ctx->linfo = linfo;
if (!ctx_ok (ctx))
return;
if (cfg->rgctx_var)
linfo->rgctx_arg = TRUE;
else if (needs_extra_arg (ctx, cfg->method))
linfo->dummy_arg = TRUE;
ctx->method_type = method_type = sig_to_llvm_sig_full (ctx, sig, linfo);
if (!ctx_ok (ctx))
return;
method = LLVMAddFunction (lmodule, ctx->method_name, method_type);
ctx->lmethod = method;
if (!cfg->llvm_only)
LLVMSetFunctionCallConv (method, LLVMMono1CallConv);
/* If the method contains
* (1) no calls (so it's a leaf method)
* (2) and no loops,
* we can skip the GC safepoint on method entry. */
gboolean requires_safepoint;
requires_safepoint = cfg->has_calls;
if (!requires_safepoint) {
for (bb = cfg->bb_entry->next_bb; bb; bb = bb->next_bb) {
if (bb->loop_body_start || (bb->flags & BB_EXCEPTION_HANDLER)) {
requires_safepoint = TRUE;
}
}
}
if (cfg->method->wrapper_type) {
if (cfg->method->wrapper_type == MONO_WRAPPER_ALLOC || cfg->method->wrapper_type == MONO_WRAPPER_WRITE_BARRIER) {
requires_safepoint = FALSE;
} else {
WrapperInfo *info = mono_marshal_get_wrapper_info (cfg->method);
switch (info->subtype) {
case WRAPPER_SUBTYPE_GSHAREDVT_IN:
case WRAPPER_SUBTYPE_GSHAREDVT_OUT:
case WRAPPER_SUBTYPE_GSHAREDVT_IN_SIG:
case WRAPPER_SUBTYPE_GSHAREDVT_OUT_SIG:
/* Arguments are not used after the call */
requires_safepoint = FALSE;
break;
}
}
}
ctx->has_safepoints = requires_safepoint;
if (!cfg->llvm_only && mono_threads_are_safepoints_enabled () && requires_safepoint) {
if (!cfg->compile_aot) {
LLVMSetGC (method, "coreclr");
emit_gc_safepoint_poll (ctx->module, ctx->lmodule, cfg);
} else {
LLVMSetGC (method, "coreclr");
}
}
LLVMSetLinkage (method, LLVMPrivateLinkage);
mono_llvm_add_func_attr (method, LLVM_ATTR_UW_TABLE);
if (cfg->disable_omit_fp)
mono_llvm_add_func_attr_nv (method, "frame-pointer", "all");
if (cfg->compile_aot) {
if (mono_aot_is_externally_callable (cfg->method)) {
LLVMSetLinkage (method, LLVMExternalLinkage);
} else {
LLVMSetLinkage (method, LLVMInternalLinkage);
//all methods have internal visibility when doing llvm_only
if (!cfg->llvm_only && ctx->module->external_symbols) {
LLVMSetLinkage (method, LLVMExternalLinkage);
LLVMSetVisibility (method, LLVMHiddenVisibility);
}
}
if (ctx->is_linkonce) {
LLVMSetLinkage (method, LLVMLinkOnceAnyLinkage);
LLVMSetVisibility (method, LLVMDefaultVisibility);
}
} else {
LLVMSetLinkage (method, LLVMExternalLinkage);
}
if (cfg->method->save_lmf && !cfg->llvm_only) {
set_failure (ctx, "lmf");
return;
}
if (sig->pinvoke && cfg->method->wrapper_type != MONO_WRAPPER_RUNTIME_INVOKE && !cfg->llvm_only) {
set_failure (ctx, "pinvoke signature");
return;
}
#ifdef TARGET_WASM
if (ctx->module->interp && cfg->header->code_size > 100000 && !cfg->interp_entry_only) {
/* Large methods slow down llvm too much */
set_failure (ctx, "il code too large.");
return;
}
#endif
header = cfg->header;
for (i = 0; i < header->num_clauses; ++i) {
clause = &header->clauses [i];
if (clause->flags != MONO_EXCEPTION_CLAUSE_FINALLY && clause->flags != MONO_EXCEPTION_CLAUSE_FAULT && clause->flags != MONO_EXCEPTION_CLAUSE_NONE) {
if (cfg->llvm_only) {
if (!cfg->deopt && !cfg->interp_entry_only)
llvmonly_fail = TRUE;
} else {
set_failure (ctx, "non-finally/catch/fault clause.");
return;
}
}
}
if (header->num_clauses || (cfg->method->iflags & METHOD_IMPL_ATTRIBUTE_NOINLINING) || cfg->no_inline)
/* We can't handle inlined methods with clauses */
mono_llvm_add_func_attr (method, LLVM_ATTR_NO_INLINE);
for (int i = 0; i < cfg->header->num_clauses; i++) {
MonoExceptionClause *clause = &cfg->header->clauses [i];
if (clause->flags == MONO_EXCEPTION_CLAUSE_NONE || clause->flags == MONO_EXCEPTION_CLAUSE_FILTER)
ctx->has_catch = TRUE;
}
if (linfo->rgctx_arg) {
ctx->rgctx_arg = LLVMGetParam (method, linfo->rgctx_arg_pindex);
ctx->rgctx_arg_pindex = linfo->rgctx_arg_pindex;
/*
* We mark the rgctx parameter with the inreg attribute, which is mapped to
* MONO_ARCH_RGCTX_REG in the Mono calling convention in llvm, i.e.
* CC_X86_64_Mono in X86CallingConv.td.
*/
if (!ctx->llvm_only)
mono_llvm_add_param_attr (ctx->rgctx_arg, LLVM_ATTR_IN_REG);
LLVMSetValueName (ctx->rgctx_arg, "rgctx");
} else {
ctx->rgctx_arg_pindex = -1;
}
if (cfg->vret_addr) {
values [cfg->vret_addr->dreg] = LLVMGetParam (method, linfo->vret_arg_pindex);
LLVMSetValueName (values [cfg->vret_addr->dreg], "vret");
if (linfo->ret.storage == LLVMArgVtypeByRef) {
mono_llvm_add_param_attr (LLVMGetParam (method, linfo->vret_arg_pindex), LLVM_ATTR_STRUCT_RET);
mono_llvm_add_param_attr (LLVMGetParam (method, linfo->vret_arg_pindex), LLVM_ATTR_NO_ALIAS);
}
}
if (sig->hasthis) {
ctx->this_arg_pindex = linfo->this_arg_pindex;
ctx->this_arg = LLVMGetParam (method, linfo->this_arg_pindex);
values [cfg->args [0]->dreg] = ctx->this_arg;
LLVMSetValueName (values [cfg->args [0]->dreg], "this");
}
if (linfo->dummy_arg)
LLVMSetValueName (LLVMGetParam (method, linfo->dummy_arg_pindex), "dummy_arg");
names = g_new (char *, sig->param_count);
mono_method_get_param_names (cfg->method, (const char **) names);
/* Set parameter names/attributes */
for (i = 0; i < sig->param_count; ++i) {
LLVMArgInfo *ainfo = &linfo->args [i + sig->hasthis];
char *name;
int pindex = ainfo->pindex + ainfo->ndummy_fpargs;
int j;
for (j = 0; j < ainfo->ndummy_fpargs; ++j) {
name = g_strdup_printf ("dummy_%d_%d", i, j);
LLVMSetValueName (LLVMGetParam (method, ainfo->pindex + j), name);
g_free (name);
}
if (ainfo->storage == LLVMArgVtypeInReg && ainfo->pair_storage [0] == LLVMArgNone && ainfo->pair_storage [1] == LLVMArgNone)
continue;
values [cfg->args [i + sig->hasthis]->dreg] = LLVMGetParam (method, pindex);
if (ainfo->storage == LLVMArgGsharedvtFixed || ainfo->storage == LLVMArgGsharedvtFixedVtype) {
if (names [i] && names [i][0] != '\0')
name = g_strdup_printf ("p_arg_%s", names [i]);
else
name = g_strdup_printf ("p_arg_%d", i);
} else {
if (names [i] && names [i][0] != '\0')
name = g_strdup_printf ("arg_%s", names [i]);
else
name = g_strdup_printf ("arg_%d", i);
}
LLVMSetValueName (LLVMGetParam (method, pindex), name);
g_free (name);
if (ainfo->storage == LLVMArgVtypeByVal)
mono_llvm_add_param_attr (LLVMGetParam (method, pindex), LLVM_ATTR_BY_VAL);
if (ainfo->storage == LLVMArgVtypeByRef || ainfo->storage == LLVMArgVtypeAddr) {
/* For OP_LDADDR */
cfg->args [i + sig->hasthis]->opcode = OP_VTARG_ADDR;
}
#ifdef TARGET_WASM
if (ainfo->storage == LLVMArgVtypeByRef) {
/* This causes llvm to make a copy of the value which is what we need */
mono_llvm_add_param_byval_attr (LLVMGetParam (method, pindex), LLVMGetElementType (LLVMTypeOf (LLVMGetParam (method, pindex))));
}
#endif
}
g_free (names);
if (ctx->module->emit_dwarf && cfg->compile_aot && mono_debug_enabled ()) {
ctx->minfo = mono_debug_lookup_method (cfg->method);
ctx->dbg_md = emit_dbg_subprogram (ctx, cfg, method, ctx->method_name);
}
max_block_num = 0;
for (bb = cfg->bb_entry; bb; bb = bb->next_bb)
max_block_num = MAX (max_block_num, bb->block_num);
ctx->bblocks = bblocks = g_new0 (BBInfo, max_block_num + 1);
/* Add branches between non-consecutive bblocks */
for (bb = cfg->bb_entry; bb; bb = bb->next_bb) {
if (bb->last_ins && MONO_IS_COND_BRANCH_OP (bb->last_ins) &&
bb->next_bb != bb->last_ins->inst_false_bb) {
MonoInst *inst = (MonoInst*)mono_mempool_alloc0 (cfg->mempool, sizeof (MonoInst));
inst->opcode = OP_BR;
inst->inst_target_bb = bb->last_ins->inst_false_bb;
mono_bblock_add_inst (bb, inst);
}
}
/*
* Make a first pass over the code to precreate PHI nodes/set INDIRECT flags.
*/
for (bb = cfg->bb_entry; bb; bb = bb->next_bb) {
MonoInst *ins;
LLVMBuilderRef builder;
char *dname;
char dname_buf[128];
builder = create_builder (ctx);
for (ins = bb->code; ins; ins = ins->next) {
switch (ins->opcode) {
case OP_PHI:
case OP_FPHI:
case OP_VPHI:
case OP_XPHI: {
LLVMTypeRef phi_type = llvm_type_to_stack_type (cfg, type_to_llvm_type (ctx, m_class_get_byval_arg (ins->klass)));
if (!ctx_ok (ctx))
return;
if (cfg->interp_entry_only)
break;
if (ins->opcode == OP_VPHI) {
/* Treat valuetype PHI nodes as operating on the address itself */
g_assert (ins->klass);
phi_type = LLVMPointerType (type_to_llvm_type (ctx, m_class_get_byval_arg (ins->klass)), 0);
}
/*
* Have to precreate these, as they can be referenced by
* earlier instructions.
*/
sprintf (dname_buf, "t%d", ins->dreg);
dname = dname_buf;
values [ins->dreg] = LLVMBuildPhi (builder, phi_type, dname);
if (ins->opcode == OP_VPHI)
ctx->addresses [ins->dreg] = values [ins->dreg];
g_ptr_array_add (ctx->phi_values, values [ins->dreg]);
/*
* Set the expected type of the incoming arguments since these have
* to have the same type.
*/
for (i = 0; i < ins->inst_phi_args [0]; i++) {
int sreg1 = ins->inst_phi_args [i + 1];
if (sreg1 != -1) {
if (ins->opcode == OP_VPHI)
ctx->is_vphi [sreg1] = TRUE;
ctx->vreg_types [sreg1] = phi_type;
}
}
break;
}
case OP_LDADDR:
((MonoInst*)ins->inst_p0)->flags |= MONO_INST_INDIRECT;
break;
default:
break;
}
}
}
/*
* Create an ordering for bblocks: use the depth-first order first, then
* put the exception handling bblocks last.
*/
for (bb_index = 0; bb_index < cfg->num_bblocks; ++bb_index) {
bb = cfg->bblocks [bb_index];
if (!(bb->region != -1 && !MONO_BBLOCK_IS_IN_REGION (bb, MONO_REGION_TRY))) {
g_ptr_array_add (bblock_list, bb);
bblocks [bb->block_num].added = TRUE;
}
}
for (bb = cfg->bb_entry; bb; bb = bb->next_bb) {
if (!bblocks [bb->block_num].added)
g_ptr_array_add (bblock_list, bb);
}
/*
* Second pass: generate code.
*/
// Emit entry point
entry_builder = create_builder (ctx);
entry_bb = get_bb (ctx, cfg->bb_entry);
LLVMPositionBuilderAtEnd (entry_builder, entry_bb);
emit_entry_bb (ctx, entry_builder);
if (llvmonly_fail)
/*
* In llvmonly mode, we want to emit an llvm method for every method even if it fails to compile,
* so direct calls can be made from outside the assembly.
*/
goto after_codegen_1;
for (bb = cfg->bb_entry; bb; bb = bb->next_bb) {
int clause_index;
char name [128];
if (ctx->cfg->interp_entry_only || !(bb->region != -1 && (bb->flags & BB_EXCEPTION_HANDLER)))
continue;
if (ctx->cfg->deopt && MONO_REGION_FLAGS (bb->region) == MONO_EXCEPTION_CLAUSE_FILTER)
continue;
clause_index = MONO_REGION_CLAUSE_INDEX (bb->region);
g_hash_table_insert (ctx->region_to_handler, GUINT_TO_POINTER (mono_get_block_region_notry (cfg, bb->region)), bb);
g_hash_table_insert (ctx->clause_to_handler, GINT_TO_POINTER (clause_index), bb);
/*
* Create a new bblock which CALL_HANDLER/landing pads can branch to, because branching to the
* LLVM bblock containing a landing pad causes problems for the
* LLVM optimizer passes.
*/
sprintf (name, "BB%d_CALL_HANDLER_TARGET", bb->block_num);
ctx->bblocks [bb->block_num].call_handler_target_bb = LLVMAppendBasicBlock (ctx->lmethod, name);
}
// Make landing pads first
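/*
 * Clauses protecting the same try range are grouped together and share one
 * landing pad; the pad is keyed in exc_meta by the end offset of the range.
 */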
ctx->exc_meta = g_hash_table_new_full (NULL, NULL, NULL, NULL);
if (ctx->llvm_only && !ctx->cfg->interp_entry_only) {
size_t group_index = 0;
while (group_index < cfg->header->num_clauses) {
if (cfg->clause_is_dead [group_index]) {
group_index ++;
continue;
}
int count = 0;
size_t cursor = group_index;
while (cursor < cfg->header->num_clauses &&
CLAUSE_START (&cfg->header->clauses [cursor]) == CLAUSE_START (&cfg->header->clauses [group_index]) &&
CLAUSE_END (&cfg->header->clauses [cursor]) == CLAUSE_END (&cfg->header->clauses [group_index])) {
count++;
cursor++;
}
LLVMBasicBlockRef lpad_bb = emit_landing_pad (ctx, group_index, count);
intptr_t key = CLAUSE_END (&cfg->header->clauses [group_index]);
g_hash_table_insert (ctx->exc_meta, (gpointer)key, lpad_bb);
group_index = cursor;
}
}
for (bb_index = 0; bb_index < bblock_list->len; ++bb_index) {
bb = (MonoBasicBlock*)g_ptr_array_index (bblock_list, bb_index);
// Prune unreachable mono BBs.
if (!(bb == cfg->bb_entry || bb->in_count > 0))
continue;
process_bb (ctx, bb);
if (!ctx_ok (ctx))
return;
}
g_hash_table_destroy (ctx->exc_meta);
mono_memory_barrier ();
/* Add incoming phi values */
for (bb = cfg->bb_entry; bb; bb = bb->next_bb) {
GSList *l, *ins_list;
ins_list = bblocks [bb->block_num].phi_nodes;
for (l = ins_list; l; l = l->next) {
PhiNode *node = (PhiNode*)l->data;
MonoInst *phi = node->phi;
int sreg1 = node->sreg;
LLVMBasicBlockRef in_bb;
if (sreg1 == -1)
continue;
in_bb = get_end_bb (ctx, node->in_bb);
if (ctx->unreachable [node->in_bb->block_num])
continue;
if (phi->opcode == OP_VPHI) {
g_assert (LLVMTypeOf (ctx->addresses [sreg1]) == LLVMTypeOf (values [phi->dreg]));
LLVMAddIncoming (values [phi->dreg], &ctx->addresses [sreg1], &in_bb, 1);
} else {
if (!values [sreg1]) {
/* Can happen with values in EH clauses */
set_failure (ctx, "incoming phi sreg1");
return;
}
if (LLVMTypeOf (values [sreg1]) != LLVMTypeOf (values [phi->dreg])) {
set_failure (ctx, "incoming phi arg type mismatch");
return;
}
g_assert (LLVMTypeOf (values [sreg1]) == LLVMTypeOf (values [phi->dreg]));
LLVMAddIncoming (values [phi->dreg], &values [sreg1], &in_bb, 1);
}
}
}
/* Nullify empty phi instructions */
for (bb = cfg->bb_entry; bb; bb = bb->next_bb) {
GSList *l, *ins_list;
ins_list = bblocks [bb->block_num].phi_nodes;
for (l = ins_list; l; l = l->next) {
PhiNode *node = (PhiNode*)l->data;
MonoInst *phi = node->phi;
LLVMValueRef phi_ins = values [phi->dreg];
if (!phi_ins)
/* Already removed */
continue;
if (LLVMCountIncoming (phi_ins) == 0) {
mono_llvm_replace_uses_of (phi_ins, LLVMConstNull (LLVMTypeOf (phi_ins)));
LLVMInstructionEraseFromParent (phi_ins);
values [phi->dreg] = NULL;
}
}
}
/* Create the SWITCH statements for ENDFINALLY instructions */
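/*
 * Case value i + 1 selects the continuation bblock of the i-th CALL_HANDLER
 * targeting this finally clause (matching the value stored into finally_ind
 * at the call site); the default case (0) resumes exception handling.
 */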
for (bb = cfg->bb_entry; bb; bb = bb->next_bb) {
BBInfo *info = &bblocks [bb->block_num];
GSList *l;
for (l = info->endfinally_switch_ins_list; l; l = l->next) {
LLVMValueRef switch_ins = (LLVMValueRef)l->data;
GSList *bb_list = info->call_handler_return_bbs;
GSList *bb_list_iter;
i = 0;
for (bb_list_iter = bb_list; bb_list_iter; bb_list_iter = g_slist_next (bb_list_iter)) {
LLVMAddCase (switch_ins, LLVMConstInt (LLVMInt32Type (), i + 1, FALSE), (LLVMBasicBlockRef)bb_list_iter->data);
i ++;
}
}
}
ctx->module->max_method_idx = MAX (ctx->module->max_method_idx, cfg->method_index);
after_codegen_1:
if (llvmonly_fail) {
/*
* FIXME: Maybe fallback to interpreter
*/
static LLVMTypeRef sig;
ctx->builder = create_builder (ctx);
LLVMPositionBuilderAtEnd (ctx->builder, ctx->inited_bb);
char *name = mono_method_get_full_name (cfg->method);
int len = strlen (name);
LLVMTypeRef type = LLVMArrayType (LLVMInt8Type (), len + 1);
LLVMValueRef name_var = LLVMAddGlobal (ctx->lmodule, type, "missing_method_name");
LLVMSetVisibility (name_var, LLVMHiddenVisibility);
LLVMSetLinkage (name_var, LLVMInternalLinkage);
LLVMSetInitializer (name_var, mono_llvm_create_constant_data_array ((guint8*)name, len + 1));
mono_llvm_set_is_constant (name_var);
g_free (name);
if (!sig)
sig = LLVMFunctionType1 (LLVMVoidType (), ctx->module->ptr_type, FALSE);
LLVMValueRef callee = get_callee (ctx, sig, MONO_PATCH_INFO_JIT_ICALL_ADDR, GUINT_TO_POINTER (MONO_JIT_ICALL_mini_llvmonly_throw_aot_failed_exception));
LLVMValueRef args [] = { convert (ctx, name_var, ctx->module->ptr_type) };
LLVMBuildCall (ctx->builder, callee, args, 1, "");
LLVMBuildUnreachable (ctx->builder);
}
/* Initialize the method if needed */
if (cfg->compile_aot) {
// FIXME: Add more shared got entries
ctx->builder = create_builder (ctx);
LLVMPositionBuilderAtEnd (ctx->builder, ctx->init_bb);
// FIXME: beforefieldinit
/*
* NATIVE_TO_MANAGED methods might be called on a thread not attached to the runtime, so they are initialized when loaded
* in load_method ().
*/
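/*
 * Roughly: init is needed when the method accesses GOT slots, or when
 * executing it may have to run its class cctor (static methods and
 * instance constructors), unless the method is the cctor itself.
 */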
gboolean needs_init = ctx->cfg->got_access_count > 0;
MonoMethod *cctor = NULL;
if (!needs_init && (cctor = mono_class_get_cctor (cfg->method->klass))) {
/* Needs init to run the cctor */
if (cfg->method->flags & METHOD_ATTRIBUTE_STATIC)
needs_init = TRUE;
if (cctor == cfg->method)
needs_init = FALSE;
// If we are a constructor, we need to init so the static
// constructor gets called.
if (!strcmp (cfg->method->name, ".ctor"))
needs_init = TRUE;
}
if (cfg->method->wrapper_type == MONO_WRAPPER_NATIVE_TO_MANAGED)
needs_init = FALSE;
if (needs_init)
emit_method_init (ctx);
else
LLVMBuildBr (ctx->builder, ctx->inited_bb);
// Was observing LLVM moving field accesses into the caller's method
// body before the init call (the inlined one), leading to NULL derefs
// after the init_method returns (GOT is filled out though)
if (needs_init)
mono_llvm_add_func_attr (method, LLVM_ATTR_NO_INLINE);
}
if (mini_get_debug_options ()->llvm_disable_inlining)
mono_llvm_add_func_attr (method, LLVM_ATTR_NO_INLINE);
after_codegen:
if (cfg->compile_aot)
g_ptr_array_add (ctx->module->cfgs, cfg);
if (cfg->llvm_only) {
/*
* Add the contents of ctx->callsite_list to module->callsite_list.
* We can't do this earlier, as it contains llvm instructions which can be
* freed if compilation fails.
* FIXME: Get rid of this when all methods can be llvm compiled.
*/
for (int i = 0; i < ctx->callsite_list->len; ++i)
g_ptr_array_add (ctx->module->callsite_list, g_ptr_array_index (ctx->callsite_list, i));
}
if (cfg->verbose_level > 1) {
g_print ("\n*** Unoptimized LLVM IR for %s ***\n", mono_method_full_name (cfg->method, TRUE));
if (cfg->compile_aot) {
mono_llvm_dump_value (method);
} else {
mono_llvm_dump_module (ctx->lmodule);
}
g_print ("***\n\n");
}
if (cfg->compile_aot && !cfg->llvm_only)
mark_as_used (ctx->module, method);
if (!cfg->llvm_only) {
LLVMValueRef md_args [16];
LLVMValueRef md_node;
int method_index;
if (cfg->compile_aot)
method_index = mono_aot_get_method_index (cfg->orig_method);
else
method_index = 1;
md_args [0] = LLVMMDString (ctx->method_name, strlen (ctx->method_name));
md_args [1] = LLVMConstInt (LLVMInt32Type (), method_index, FALSE);
md_node = LLVMMDNode (md_args, 2);
LLVMAddNamedMetadataOperand (lmodule, "mono.function_indexes", md_node);
//LLVMSetMetadata (method, md_kind, LLVMMDNode (&md_arg, 1));
}
if (cfg->compile_aot) {
/* Don't generate native code, keep the LLVM IR */
if (cfg->verbose_level) {
char *name = mono_method_get_full_name (cfg->method);
printf ("%s emitted as %s\n", name, ctx->method_name);
g_free (name);
}
#if 0
int err = LLVMVerifyFunction (ctx->lmethod, LLVMPrintMessageAction);
if (err != 0)
LLVMDumpValue (ctx->lmethod);
g_assert (err == 0);
#endif
} else {
//LLVMVerifyFunction (method, 0);
llvm_jit_finalize_method (ctx);
}
if (ctx->module->method_to_lmethod)
g_hash_table_insert (ctx->module->method_to_lmethod, cfg->method, ctx->lmethod);
if (ctx->module->idx_to_lmethod)
g_hash_table_insert (ctx->module->idx_to_lmethod, GINT_TO_POINTER (cfg->method_index), ctx->lmethod);
if (ctx->llvm_only && m_class_is_valuetype (cfg->orig_method->klass) && !(cfg->orig_method->flags & METHOD_ATTRIBUTE_STATIC))
emit_unbox_tramp (ctx, ctx->method_name, ctx->method_type, ctx->lmethod, cfg->method_index);
}
/*
* mono_llvm_create_vars:
*
* Same as mono_arch_create_vars () for LLVM.
*/
void
mono_llvm_create_vars (MonoCompile *cfg)
{
MonoMethodSignature *sig;
sig = mono_method_signature_internal (cfg->method);
if (cfg->gsharedvt && cfg->llvm_only) {
gboolean vretaddr = FALSE;
if (mini_is_gsharedvt_variable_signature (sig) && sig->ret->type != MONO_TYPE_VOID) {
vretaddr = TRUE;
} else {
MonoMethodSignature *sig = mono_method_signature_internal (cfg->method);
LLVMCallInfo *linfo;
linfo = get_llvm_call_info (cfg, sig);
vretaddr = (linfo->ret.storage == LLVMArgVtypeRetAddr || linfo->ret.storage == LLVMArgVtypeByRef || linfo->ret.storage == LLVMArgGsharedvtFixed || linfo->ret.storage == LLVMArgGsharedvtVariable || linfo->ret.storage == LLVMArgGsharedvtFixedVtype);
}
if (vretaddr) {
/*
* Creating vret_addr forces CEE_SETRET to store the result into it,
* so we don't have to generate any code in our OP_SETRET case.
*/
cfg->vret_addr = mono_compile_create_var (cfg, m_class_get_byval_arg (mono_get_intptr_class ()), OP_ARG);
if (G_UNLIKELY (cfg->verbose_level > 1)) {
printf ("vret_addr = ");
mono_print_ins (cfg->vret_addr);
}
}
} else {
mono_arch_create_vars (cfg);
}
cfg->lmf_ir = TRUE;
}
/*
* mono_llvm_emit_call:
*
* Same as mono_arch_emit_call () for LLVM.
*/
void
mono_llvm_emit_call (MonoCompile *cfg, MonoCallInst *call)
{
MonoInst *in;
MonoMethodSignature *sig;
int i, n;
LLVMArgInfo *ainfo;
sig = call->signature;
n = sig->param_count + sig->hasthis;
if (sig->call_convention == MONO_CALL_VARARG) {
cfg->exception_message = g_strdup ("varargs");
cfg->disable_llvm = TRUE;
return;
}
call->cinfo = get_llvm_call_info (cfg, sig);
if (cfg->disable_llvm)
return;
for (i = 0; i < n; ++i) {
MonoInst *ins;
ainfo = call->cinfo->args + i;
in = call->args [i];
/* Simply remember the arguments */
switch (ainfo->storage) {
case LLVMArgNormal: {
MonoType *t = (sig->hasthis && i == 0) ? m_class_get_byval_arg (mono_get_intptr_class ()) : ainfo->type;
int opcode;
opcode = mono_type_to_regmove (cfg, t);
if (opcode == OP_FMOVE) {
MONO_INST_NEW (cfg, ins, OP_FMOVE);
ins->dreg = mono_alloc_freg (cfg);
} else if (opcode == OP_LMOVE) {
MONO_INST_NEW (cfg, ins, OP_LMOVE);
ins->dreg = mono_alloc_lreg (cfg);
} else if (opcode == OP_RMOVE) {
MONO_INST_NEW (cfg, ins, OP_RMOVE);
ins->dreg = mono_alloc_freg (cfg);
} else {
MONO_INST_NEW (cfg, ins, OP_MOVE);
ins->dreg = mono_alloc_ireg (cfg);
}
ins->sreg1 = in->dreg;
break;
}
case LLVMArgVtypeByVal:
case LLVMArgVtypeByRef:
case LLVMArgVtypeInReg:
case LLVMArgVtypeAddr:
case LLVMArgVtypeAsScalar:
case LLVMArgAsIArgs:
case LLVMArgAsFpArgs:
case LLVMArgGsharedvtVariable:
case LLVMArgGsharedvtFixed:
case LLVMArgGsharedvtFixedVtype:
case LLVMArgWasmVtypeAsScalar:
MONO_INST_NEW (cfg, ins, OP_LLVM_OUTARG_VT);
ins->dreg = mono_alloc_ireg (cfg);
ins->sreg1 = in->dreg;
ins->inst_p0 = mono_mempool_alloc0 (cfg->mempool, sizeof (LLVMArgInfo));
memcpy (ins->inst_p0, ainfo, sizeof (LLVMArgInfo));
ins->inst_vtype = ainfo->type;
ins->klass = mono_class_from_mono_type_internal (ainfo->type);
break;
default:
cfg->exception_message = g_strdup ("ainfo->storage");
cfg->disable_llvm = TRUE;
return;
}
if (!cfg->disable_llvm) {
MONO_ADD_INS (cfg->cbb, ins);
mono_call_inst_add_outarg_reg (cfg, call, ins->dreg, 0, FALSE);
}
}
}
static inline void
add_func (LLVMModuleRef module, const char *name, LLVMTypeRef ret_type, LLVMTypeRef *param_types, int nparams)
{
LLVMAddFunction (module, name, LLVMFunctionType (ret_type, param_types, nparams, FALSE));
}
static LLVMValueRef
add_intrins (LLVMModuleRef module, IntrinsicId id, LLVMTypeRef *params, int nparams)
{
return mono_llvm_register_overloaded_intrinsic (module, id, params, nparams);
}
static LLVMValueRef
add_intrins1 (LLVMModuleRef module, IntrinsicId id, LLVMTypeRef param1)
{
return mono_llvm_register_overloaded_intrinsic (module, id, &param1, 1);
}
static LLVMValueRef
add_intrins2 (LLVMModuleRef module, IntrinsicId id, LLVMTypeRef param1, LLVMTypeRef param2)
{
LLVMTypeRef params [] = { param1, param2 };
return mono_llvm_register_overloaded_intrinsic (module, id, params, 2);
}
static LLVMValueRef
add_intrins3 (LLVMModuleRef module, IntrinsicId id, LLVMTypeRef param1, LLVMTypeRef param2, LLVMTypeRef param3)
{
LLVMTypeRef params [] = { param1, param2, param3 };
return mono_llvm_register_overloaded_intrinsic (module, id, params, 3);
}
static void
add_intrinsic (LLVMModuleRef module, int id)
{
/* Register simple intrinsics */
LLVMValueRef intrins = mono_llvm_register_intrinsic (module, (IntrinsicId)id);
if (intrins) {
g_hash_table_insert (intrins_id_to_intrins, GINT_TO_POINTER (id), intrins);
return;
}
if (intrin_arm64_ovr [id] != 0) {
llvm_ovr_tag_t spec = intrin_arm64_ovr [id];
for (int vw = 0; vw < INTRIN_vectorwidths; ++vw) {
for (int ew = 0; ew < INTRIN_elementwidths; ++ew) {
llvm_ovr_tag_t vec_bit = INTRIN_vector128 >> ((INTRIN_vectorwidths - 1) - vw);
llvm_ovr_tag_t elem_bit = INTRIN_int8 << ew;
llvm_ovr_tag_t test = vec_bit | elem_bit;
if ((spec & test) == test) {
uint8_t kind = intrin_kind [id];
LLVMTypeRef distinguishing_type = intrin_types [vw][ew];
if (kind == INTRIN_kind_ftoi && (elem_bit & (INTRIN_int32 | INTRIN_int64))) {
/*
* @llvm.aarch64.neon.fcvtas.v4i32.v4f32
* @llvm.aarch64.neon.fcvtas.v2i64.v2f64
*/
intrins = add_intrins2 (module, id, distinguishing_type, intrin_types [vw][ew + 2]);
} else if (kind == INTRIN_kind_widen) {
/*
* @llvm.aarch64.neon.saddlp.v2i64.v4i32
* @llvm.aarch64.neon.saddlp.v4i16.v8i8
*/
intrins = add_intrins2 (module, id, distinguishing_type, intrin_types [vw][ew - 1]);
} else if (kind == INTRIN_kind_widen_across) {
/*
* @llvm.aarch64.neon.saddlv.i64.v4i32
* @llvm.aarch64.neon.saddlv.i32.v8i16
* @llvm.aarch64.neon.saddlv.i32.v16i8
* i8/i16 return types for NEON intrinsics will make isel fail as of LLVM 9.
*/
int associated_prim = MAX(ew + 1, 2);
LLVMTypeRef associated_scalar_type = intrin_types [0][associated_prim];
intrins = add_intrins2 (module, id, associated_scalar_type, distinguishing_type);
} else if (kind == INTRIN_kind_across) {
/*
* @llvm.aarch64.neon.uaddv.i64.v4i64
* @llvm.aarch64.neon.uaddv.i32.v4i32
* @llvm.aarch64.neon.uaddv.i32.v8i16
* @llvm.aarch64.neon.uaddv.i32.v16i8
* i8/i16 return types for NEON intrinsics will make isel fail as of LLVM 9.
*/
int associated_prim = MAX(ew, 2);
LLVMTypeRef associated_scalar_type = intrin_types [0][associated_prim];
intrins = add_intrins2 (module, id, associated_scalar_type, distinguishing_type);
} else if (kind == INTRIN_kind_arm64_dot_prod) {
/*
* @llvm.aarch64.neon.sdot.v2i32.v8i8
* @llvm.aarch64.neon.sdot.v4i32.v16i8
*/
LLVMTypeRef associated_type = intrin_types [vw][0];
intrins = add_intrins2 (module, id, distinguishing_type, associated_type);
} else
intrins = add_intrins1 (module, id, distinguishing_type);
int key = key_from_id_and_tag (id, test);
g_hash_table_insert (intrins_id_to_intrins, GINT_TO_POINTER (key), intrins);
}
}
}
return;
}
/* Register overloaded intrinsics */
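/*
 * llvm-intrinsics.h is an X-macro table: the INTRINS_OVR* definitions below
 * expand each entry into a 'case' which registers the intrinsic with its
 * listed overload type(s); entries without overloads expand to nothing here.
 */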
switch (id) {
#define INTRINS(intrin_name, llvm_id, arch)
#define INTRINS_OVR(intrin_name, llvm_id, arch, llvm_type) case INTRINS_ ## intrin_name: intrins = add_intrins1(module, id, llvm_type); break;
#define INTRINS_OVR_2_ARG(intrin_name, llvm_id, arch, llvm_type1, llvm_type2) case INTRINS_ ## intrin_name: intrins = add_intrins2(module, id, llvm_type1, llvm_type2); break;
#define INTRINS_OVR_3_ARG(intrin_name, llvm_id, arch, llvm_type1, llvm_type2, llvm_type3) case INTRINS_ ## intrin_name: intrins = add_intrins3(module, id, llvm_type1, llvm_type2, llvm_type3); break;
#define INTRINS_OVR_TAG(...)
#define INTRINS_OVR_TAG_KIND(...)
#include "llvm-intrinsics.h"
default:
g_assert_not_reached ();
break;
}
g_assert (intrins);
g_hash_table_insert (intrins_id_to_intrins, GINT_TO_POINTER (id), intrins);
}
static LLVMValueRef
get_intrins_from_module (LLVMModuleRef lmodule, int id)
{
LLVMValueRef res;
res = (LLVMValueRef)g_hash_table_lookup (intrins_id_to_intrins, GINT_TO_POINTER (id));
g_assert (res);
return res;
}
static LLVMValueRef
get_intrins (EmitContext *ctx, int id)
{
return get_intrins_from_module (ctx->lmodule, id);
}
static void
add_intrinsics (LLVMModuleRef module)
{
int i;
/* Emit declarations of intrinsics */
/*
* It would be nicer to emit only the intrinsics actually used, but LLVM's Module
* type doesn't seem to do any locking.
*/
for (i = 0; i < INTRINS_NUM; ++i)
add_intrinsic (module, i);
/* EH intrinsics */
add_func (module, "mono_personality", LLVMVoidType (), NULL, 0);
add_func (module, "llvm_resume_unwind_trampoline", LLVMVoidType (), NULL, 0);
}
static void
add_types (MonoLLVMModule *module)
{
module->ptr_type = LLVMPointerType (TARGET_SIZEOF_VOID_P == 8 ? LLVMInt64Type () : LLVMInt32Type (), 0);
}
void
mono_llvm_init (gboolean enable_jit)
{
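/*
 * intrin_types [vw][ew] is indexed by vector width (0 = scalar, 1 = 64 bit,
 * 2 = 128 bit) and element width (i8/i16/i32/i64/r4/r8); add_intrinsic ()
 * uses it to pick the distinguishing types of overloaded intrinsics.
 */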
intrin_types [0][0] = i1_t = LLVMInt8Type ();
intrin_types [0][1] = i2_t = LLVMInt16Type ();
intrin_types [0][2] = i4_t = LLVMInt32Type ();
intrin_types [0][3] = i8_t = LLVMInt64Type ();
intrin_types [0][4] = r4_t = LLVMFloatType ();
intrin_types [0][5] = r8_t = LLVMDoubleType ();
intrin_types [1][0] = v64_i1_t = LLVMVectorType (LLVMInt8Type (), 8);
intrin_types [1][1] = v64_i2_t = LLVMVectorType (LLVMInt16Type (), 4);
intrin_types [1][2] = v64_i4_t = LLVMVectorType (LLVMInt32Type (), 2);
intrin_types [1][3] = v64_i8_t = LLVMVectorType (LLVMInt64Type (), 1);
intrin_types [1][4] = v64_r4_t = LLVMVectorType (LLVMFloatType (), 2);
intrin_types [1][5] = v64_r8_t = LLVMVectorType (LLVMDoubleType (), 1);
intrin_types [2][0] = v128_i1_t = sse_i1_t = type_to_sse_type (MONO_TYPE_I1);
intrin_types [2][1] = v128_i2_t = sse_i2_t = type_to_sse_type (MONO_TYPE_I2);
intrin_types [2][2] = v128_i4_t = sse_i4_t = type_to_sse_type (MONO_TYPE_I4);
intrin_types [2][3] = v128_i8_t = sse_i8_t = type_to_sse_type (MONO_TYPE_I8);
intrin_types [2][4] = v128_r4_t = sse_r4_t = type_to_sse_type (MONO_TYPE_R4);
intrin_types [2][5] = v128_r8_t = sse_r8_t = type_to_sse_type (MONO_TYPE_R8);
intrins_id_to_intrins = g_hash_table_new (NULL, NULL);
void_func_t = LLVMFunctionType0 (LLVMVoidType (), FALSE);
if (enable_jit)
mono_llvm_jit_init ();
}
void
mono_llvm_free_mem_manager (MonoJitMemoryManager *mem_manager)
{
MonoLLVMModule *module = (MonoLLVMModule*)mem_manager->llvm_module;
int i;
if (!module)
return;
g_hash_table_destroy (module->llvm_types);
mono_llvm_dispose_ee (module->mono_ee);
if (module->bb_names) {
for (i = 0; i < module->bb_names_len; ++i)
g_free (module->bb_names [i]);
g_free (module->bb_names);
}
//LLVMDisposeModule (module->module);
g_free (module);
mem_manager->llvm_module = NULL;
}
void
mono_llvm_create_aot_module (MonoAssembly *assembly, const char *global_prefix, int initial_got_size, LLVMModuleFlags flags)
{
MonoLLVMModule *module = &aot_module;
gboolean emit_dwarf = (flags & LLVM_MODULE_FLAG_DWARF) ? 1 : 0;
#ifdef TARGET_WIN32_MSVC
gboolean emit_codeview = (flags & LLVM_MODULE_FLAG_CODEVIEW) ? 1 : 0;
#endif
gboolean static_link = (flags & LLVM_MODULE_FLAG_STATIC) ? 1 : 0;
gboolean llvm_only = (flags & LLVM_MODULE_FLAG_LLVM_ONLY) ? 1 : 0;
gboolean interp = (flags & LLVM_MODULE_FLAG_INTERP) ? 1 : 0;
/* Delete previous module */
g_hash_table_destroy (module->plt_entries);
if (module->lmodule)
LLVMDisposeModule (module->lmodule);
memset (module, 0, sizeof (aot_module));
module->lmodule = LLVMModuleCreateWithName ("aot");
module->assembly = assembly;
module->global_prefix = g_strdup (global_prefix);
module->eh_frame_symbol = g_strdup_printf ("%s_eh_frame", global_prefix);
module->get_method_symbol = g_strdup_printf ("%s_get_method", global_prefix);
module->get_unbox_tramp_symbol = g_strdup_printf ("%s_get_unbox_tramp", global_prefix);
module->init_aotconst_symbol = g_strdup_printf ("%s_init_aotconst", global_prefix);
module->external_symbols = TRUE;
module->emit_dwarf = emit_dwarf;
module->static_link = static_link;
module->llvm_only = llvm_only;
module->interp = interp;
/* The first few entries are reserved */
module->max_got_offset = initial_got_size;
module->context = LLVMGetGlobalContext ();
module->cfgs = g_ptr_array_new ();
module->aotconst_vars = g_hash_table_new (NULL, NULL);
module->llvm_types = g_hash_table_new (NULL, NULL);
module->plt_entries = g_hash_table_new (g_str_hash, g_str_equal);
module->plt_entries_ji = g_hash_table_new (NULL, NULL);
module->direct_callables = g_hash_table_new (g_str_hash, g_str_equal);
module->idx_to_lmethod = g_hash_table_new (NULL, NULL);
module->method_to_lmethod = g_hash_table_new (NULL, NULL);
module->method_to_call_info = g_hash_table_new (NULL, NULL);
module->idx_to_unbox_tramp = g_hash_table_new (NULL, NULL);
module->no_method_table_lmethods = g_hash_table_new (NULL, NULL);
module->callsite_list = g_ptr_array_new ();
if (llvm_only)
/* clang ignores our debug info because it has an invalid version */
module->emit_dwarf = FALSE;
add_intrinsics (module->lmodule);
add_types (module);
#ifdef MONO_ARCH_LLVM_TARGET_LAYOUT
LLVMSetDataLayout (module->lmodule, MONO_ARCH_LLVM_TARGET_LAYOUT);
#else
g_assert_not_reached ();
#endif
#ifdef MONO_ARCH_LLVM_TARGET_TRIPLE
LLVMSetTarget (module->lmodule, MONO_ARCH_LLVM_TARGET_TRIPLE);
#endif
if (module->emit_dwarf) {
char *dir, *build_info, *s, *cu_name;
module->di_builder = mono_llvm_create_di_builder (module->lmodule);
// FIXME:
dir = g_strdup (".");
build_info = mono_get_runtime_build_info ();
s = g_strdup_printf ("Mono AOT Compiler %s (LLVM)", build_info);
cu_name = g_path_get_basename (assembly->image->name);
module->cu = mono_llvm_di_create_compile_unit (module->di_builder, cu_name, dir, s);
g_free (dir);
g_free (build_info);
g_free (s);
}
#ifdef TARGET_WIN32_MSVC
if (emit_codeview) {
LLVMValueRef codeview_option_args[3];
codeview_option_args[0] = LLVMConstInt (LLVMInt32Type (), 2, FALSE);
codeview_option_args[1] = LLVMMDString ("CodeView", 8);
codeview_option_args[2] = LLVMConstInt (LLVMInt32Type (), 1, FALSE);
LLVMAddNamedMetadataOperand (module->lmodule, "llvm.module.flags", LLVMMDNode (codeview_option_args, G_N_ELEMENTS (codeview_option_args)));
}
if (!static_link) {
/* Request the default CRT libraries via "llvm.linker.options" metadata */
const char *default_dynamic_lib_names[] = { "/DEFAULTLIB:msvcrt",
"/DEFAULTLIB:ucrt.lib",
"/DEFAULTLIB:vcruntime.lib" };
LLVMValueRef default_lib_args[G_N_ELEMENTS (default_dynamic_lib_names)];
for (int i = 0; i < G_N_ELEMENTS (default_dynamic_lib_names); ++i) {
const char *default_lib_name = default_dynamic_lib_names[i];
default_lib_args[i] = LLVMMDString (default_lib_name, strlen (default_lib_name));
}
LLVMAddNamedMetadataOperand (module->lmodule, "llvm.linker.options", LLVMMDNode (default_lib_args, G_N_ELEMENTS (default_lib_args)));
}
#endif
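/*
 * Emit a dummy GOT so GOT accesses can be generated before the final GOT
 * size is known; uses of it are presumably redirected to the real GOT once
 * all methods have been emitted and module->max_got_offset is final.
 */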
{
LLVMTypeRef got_type = LLVMArrayType (module->ptr_type, 16);
module->dummy_got_var = LLVMAddGlobal (module->lmodule, got_type, "dummy_got");
module->got_idx_to_type = g_hash_table_new (NULL, NULL);
LLVMSetInitializer (module->dummy_got_var, LLVMConstNull (got_type));
LLVMSetVisibility (module->dummy_got_var, LLVMHiddenVisibility);
LLVMSetLinkage (module->dummy_got_var, LLVMInternalLinkage);
}
/* Add initialization array */
LLVMTypeRef inited_type = LLVMArrayType (LLVMInt8Type (), 0);
module->inited_var = LLVMAddGlobal (aot_module.lmodule, inited_type, "mono_inited_tmp");
LLVMSetInitializer (module->inited_var, LLVMConstNull (inited_type));
create_aot_info_var (module);
emit_gc_safepoint_poll (module, module->lmodule, NULL);
emit_llvm_code_start (module);
// Needs idx_to_lmethod
emit_init_funcs (module);
/* Add a dummy personality function */
if (!use_mono_personality_debug) {
LLVMValueRef personality = LLVMAddFunction (module->lmodule, default_personality_name, LLVMFunctionType (LLVMInt32Type (), NULL, 0, TRUE));
LLVMSetLinkage (personality, LLVMExternalLinkage);
//EMCC chokes if the personality function is referenced in the 'used' array
#ifndef TARGET_WASM
mark_as_used (module, personality);
#endif
}
/* Add a reference to the c++ exception we throw/catch */
{
LLVMTypeRef exc = LLVMPointerType (LLVMInt8Type (), 0);
module->sentinel_exception = LLVMAddGlobal (module->lmodule, exc, "_ZTIPi");
LLVMSetLinkage (module->sentinel_exception, LLVMExternalLinkage);
mono_llvm_set_is_constant (module->sentinel_exception);
}
}
void
mono_llvm_fixup_aot_module (void)
{
MonoLLVMModule *module = &aot_module;
MonoMethod *method;
/*
* Replace GOT entries for directly callable methods with the methods themselves.
* It would be easier to implement this by predefining all methods before compiling
* their bodies, but that couldn't handle the case when a method fails to compile
* with llvm.
*/
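/*
 * Conceptually (illustrative, not the exact IR):
 *   %fn = load ptr, ptr <got slot>   ; the placeholder load recorded in callsite_list
 *   call void %fn (...)
 * becomes
 *   call void @method (...)
 * when the callee compiled successfully and can be called directly.
 */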
GHashTable *specializable = g_hash_table_new (NULL, NULL);
GHashTable *patches_to_null = g_hash_table_new (mono_patch_info_hash, mono_patch_info_equal);
for (int sindex = 0; sindex < module->callsite_list->len; ++sindex) {
CallSite *site = (CallSite*)g_ptr_array_index (module->callsite_list, sindex);
method = site->method;
LLVMValueRef lmethod = (LLVMValueRef)g_hash_table_lookup (module->method_to_lmethod, method);
LLVMValueRef placeholder = (LLVMValueRef)site->load;
LLVMValueRef load;
if (placeholder == NULL)
/* Method failed LLVM compilation */
continue;
gboolean can_direct_call = FALSE;
/* Replace sharable instances with their shared version */
if (!lmethod && method->is_inflated) {
if (mono_method_is_generic_sharable_full (method, FALSE, TRUE, FALSE)) {
ERROR_DECL (error);
MonoMethod *shared = mini_get_shared_method_full (method, SHARE_MODE_NONE, error);
if (is_ok (error)) {
lmethod = (LLVMValueRef)g_hash_table_lookup (module->method_to_lmethod, shared);
if (lmethod)
method = shared;
}
}
}
if (lmethod && !m_method_is_synchronized (method)) {
can_direct_call = TRUE;
} else if (m_method_is_wrapper (method) && !method->is_inflated) {
WrapperInfo *info = mono_marshal_get_wrapper_info (method);
/* This is a call from the synchronized wrapper to the real method */
if (info->subtype == WRAPPER_SUBTYPE_SYNCHRONIZED_INNER) {
method = info->d.synchronized.method;
lmethod = (LLVMValueRef)g_hash_table_lookup (module->method_to_lmethod, method);
if (lmethod)
can_direct_call = TRUE;
}
}
if (can_direct_call) {
mono_llvm_replace_uses_of (placeholder, lmethod);
if (mono_aot_can_specialize (method))
g_hash_table_insert (specializable, lmethod, method);
g_hash_table_insert (patches_to_null, site->ji, site->ji);
} else {
// FIXME:
LLVMBuilderRef builder = LLVMCreateBuilder ();
LLVMPositionBuilderBefore (builder, placeholder);
load = get_aotconst_module (module, builder, site->ji->type, site->ji->data.target, site->type, NULL, NULL);
LLVMReplaceAllUsesWith (placeholder, load);
}
g_free (site);
}
mono_llvm_propagate_nonnull_final (specializable, module);
g_hash_table_destroy (specializable);
for (int i = 0; i < module->cfgs->len; ++i) {
/*
* Nullify the patches pointing to direct calls. This is needed to
* avoid allocating extra got slots, which is a perf problem and it
* makes module->max_got_offset invalid.
* It would be better to just store the patch_info in CallSite, but
* cfg->patch_info is copied in aot-compiler.c.
*/
MonoCompile *cfg = (MonoCompile *)g_ptr_array_index (module->cfgs, i);
for (MonoJumpInfo *patch_info = cfg->patch_info; patch_info; patch_info = patch_info->next) {
if (patch_info->type == MONO_PATCH_INFO_METHOD) {
if (g_hash_table_lookup (patches_to_null, patch_info)) {
patch_info->type = MONO_PATCH_INFO_NONE;
/* Nullify the call to init_method () if possible */
g_assert (cfg->got_access_count);
cfg->got_access_count --;
if (cfg->got_access_count == 0) {
LLVMValueRef br = (LLVMValueRef)cfg->llvmonly_init_cond;
if (br)
LLVMSetSuccessor (br, 0, LLVMGetSuccessor (br, 1));
}
}
}
}
}
g_hash_table_destroy (patches_to_null);
}
static LLVMValueRef
llvm_array_from_uints (LLVMTypeRef el_type, guint32 *values, int nvalues)
{
int i;
LLVMValueRef res, *vals;
vals = g_new0 (LLVMValueRef, nvalues);
for (i = 0; i < nvalues; ++i)
vals [i] = LLVMConstInt (LLVMInt32Type (), values [i], FALSE);
res = LLVMConstArray (LLVMInt32Type (), vals, nvalues);
g_free (vals);
return res;
}
static LLVMValueRef
llvm_array_from_bytes (guint8 *values, int nvalues)
{
int i;
LLVMValueRef res, *vals;
vals = g_new0 (LLVMValueRef, nvalues);
for (i = 0; i < nvalues; ++i)
vals [i] = LLVMConstInt (LLVMInt8Type (), values [i], FALSE);
res = LLVMConstArray (LLVMInt8Type (), vals, nvalues);
g_free (vals);
return res;
}
/*
* mono_llvm_emit_aot_file_info:
*
* Emit the MonoAotFileInfo structure.
* Same as emit_aot_file_info () in aot-compiler.c.
*/
void
mono_llvm_emit_aot_file_info (MonoAotFileInfo *info, gboolean has_jitted_code)
{
MonoLLVMModule *module = &aot_module;
/* Save these for later */
memcpy (&module->aot_info, info, sizeof (MonoAotFileInfo));
module->has_jitted_code = has_jitted_code;
}
/*
* mono_llvm_emit_aot_data:
*
* Emit the binary data DATA pointed to by symbol SYMBOL.
* Return the LLVM variable for the data.
*/
gpointer
mono_llvm_emit_aot_data_aligned (const char *symbol, guint8 *data, int data_len, int align)
{
MonoLLVMModule *module = &aot_module;
LLVMTypeRef type;
LLVMValueRef d;
type = LLVMArrayType (LLVMInt8Type (), data_len);
d = LLVMAddGlobal (module->lmodule, type, symbol);
LLVMSetVisibility (d, LLVMHiddenVisibility);
LLVMSetLinkage (d, LLVMInternalLinkage);
LLVMSetInitializer (d, mono_llvm_create_constant_data_array (data, data_len));
if (align != 1)
LLVMSetAlignment (d, align);
mono_llvm_set_is_constant (d);
return d;
}
gpointer
mono_llvm_emit_aot_data (const char *symbol, guint8 *data, int data_len)
{
return mono_llvm_emit_aot_data_aligned (symbol, data, data_len, 8);
}
/* Add a reference to a global defined in JITted code */
static LLVMValueRef
AddJitGlobal (MonoLLVMModule *module, LLVMTypeRef type, const char *name)
{
char *s;
LLVMValueRef v;
s = g_strdup_printf ("%s%s", module->global_prefix, name);
v = LLVMAddGlobal (module->lmodule, LLVMInt8Type (), s);
LLVMSetVisibility (v, LLVMHiddenVisibility);
g_free (s);
return v;
}
#define FILE_INFO_NUM_HEADER_FIELDS 2
#define FILE_INFO_NUM_SCALAR_FIELDS 23
#define FILE_INFO_NUM_ARRAY_FIELDS 5
#define FILE_INFO_NUM_AOTID_FIELDS 1
#define FILE_INFO_NFIELDS (FILE_INFO_NUM_HEADER_FIELDS + MONO_AOT_FILE_INFO_NUM_SYMBOLS + FILE_INFO_NUM_SCALAR_FIELDS + FILE_INFO_NUM_ARRAY_FIELDS + FILE_INFO_NUM_AOTID_FIELDS)
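/*
 * These counts must mirror the field layout of MonoAotFileInfo (presumably
 * declared in the AOT runtime headers); adding a field there requires
 * updating the constants above so create_aot_info_var () stays in sync.
 */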
static void
create_aot_info_var (MonoLLVMModule *module)
{
LLVMTypeRef file_info_type;
LLVMTypeRef *eltypes;
LLVMValueRef info_var;
int i, nfields, tindex;
LLVMModuleRef lmodule = module->lmodule;
/* Create an LLVM type to represent MonoAotFileInfo */
nfields = FILE_INFO_NFIELDS;
eltypes = g_new (LLVMTypeRef, nfields);
tindex = 0;
eltypes [tindex ++] = LLVMInt32Type ();
eltypes [tindex ++] = LLVMInt32Type ();
/* Symbols */
for (i = 0; i < MONO_AOT_FILE_INFO_NUM_SYMBOLS; ++i)
eltypes [tindex ++] = LLVMPointerType (LLVMInt8Type (), 0);
/* Scalars */
for (i = 0; i < FILE_INFO_NUM_SCALAR_FIELDS; ++i)
eltypes [tindex ++] = LLVMInt32Type ();
/* Arrays */
eltypes [tindex ++] = LLVMArrayType (LLVMInt32Type (), MONO_AOT_TABLE_NUM);
for (i = 0; i < FILE_INFO_NUM_ARRAY_FIELDS - 1; ++i)
eltypes [tindex ++] = LLVMArrayType (LLVMInt32Type (), MONO_AOT_TRAMP_NUM);
eltypes [tindex ++] = LLVMArrayType (LLVMInt8Type (), 16);
g_assert (tindex == nfields);
file_info_type = LLVMStructCreateNamed (module->context, "MonoAotFileInfo");
LLVMStructSetBody (file_info_type, eltypes, nfields, FALSE);
info_var = LLVMAddGlobal (lmodule, file_info_type, "mono_aot_file_info");
module->info_var = info_var;
module->info_var_eltypes = eltypes;
}
static void
emit_aot_file_info (MonoLLVMModule *module)
{
LLVMTypeRef *eltypes, eltype;
LLVMValueRef info_var;
LLVMValueRef *fields;
int i, nfields, tindex;
MonoAotFileInfo *info;
LLVMModuleRef lmodule = module->lmodule;
info = &module->aot_info;
info_var = module->info_var;
eltypes = module->info_var_eltypes;
nfields = FILE_INFO_NFIELDS;
if (module->static_link) {
LLVMSetVisibility (info_var, LLVMHiddenVisibility);
LLVMSetLinkage (info_var, LLVMInternalLinkage);
}
#ifdef TARGET_WIN32
if (!module->static_link) {
LLVMSetDLLStorageClass (info_var, LLVMDLLExportStorageClass);
}
#endif
fields = g_new (LLVMValueRef, nfields);
tindex = 0;
fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->version, FALSE);
fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->dummy, FALSE);
/* Symbols */
/*
* We use LLVMGetNamedGlobal () for symbols which are defined in LLVM code, and LLVMAddGlobal ()
* for symbols defined in the .s file emitted by the aot compiler.
*/
eltype = eltypes [tindex];
if (module->llvm_only)
fields [tindex ++] = LLVMConstNull (eltype);
else
fields [tindex ++] = AddJitGlobal (module, eltype, "jit_got");
/* llc defines this directly */
if (!module->llvm_only) {
fields [tindex ++] = LLVMAddGlobal (lmodule, eltype, module->eh_frame_symbol);
fields [tindex ++] = LLVMConstNull (eltype);
fields [tindex ++] = LLVMConstNull (eltype);
} else {
fields [tindex ++] = LLVMConstNull (eltype);
fields [tindex ++] = module->get_method;
fields [tindex ++] = module->get_unbox_tramp ? module->get_unbox_tramp : LLVMConstNull (eltype);
}
fields [tindex ++] = module->init_aotconst_func;
if (module->has_jitted_code) {
fields [tindex ++] = AddJitGlobal (module, eltype, "jit_code_start");
fields [tindex ++] = AddJitGlobal (module, eltype, "jit_code_end");
} else {
fields [tindex ++] = LLVMConstNull (eltype);
fields [tindex ++] = LLVMConstNull (eltype);
}
if (!module->llvm_only)
fields [tindex ++] = AddJitGlobal (module, eltype, "method_addresses");
else
fields [tindex ++] = LLVMConstNull (eltype);
if (module->llvm_only && module->unbox_tramp_indexes) {
fields [tindex ++] = module->unbox_tramp_indexes;
fields [tindex ++] = module->unbox_trampolines;
} else {
fields [tindex ++] = LLVMConstNull (eltype);
fields [tindex ++] = LLVMConstNull (eltype);
}
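/*
 * When the AOT data lives in a separate file, the table symbols below are not
 * emitted; the runtime presumably locates the tables through the table_offsets
 * array emitted further down instead.
 */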
if (info->flags & MONO_AOT_FILE_FLAG_SEPARATE_DATA) {
for (i = 0; i < MONO_AOT_TABLE_NUM; ++i)
fields [tindex ++] = LLVMConstNull (eltype);
} else {
fields [tindex ++] = LLVMGetNamedGlobal (lmodule, "blob");
fields [tindex ++] = LLVMGetNamedGlobal (lmodule, "class_name_table");
fields [tindex ++] = LLVMGetNamedGlobal (lmodule, "class_info_offsets");
fields [tindex ++] = LLVMGetNamedGlobal (lmodule, "method_info_offsets");
fields [tindex ++] = LLVMGetNamedGlobal (lmodule, "ex_info_offsets");
fields [tindex ++] = LLVMGetNamedGlobal (lmodule, "extra_method_info_offsets");
fields [tindex ++] = LLVMGetNamedGlobal (lmodule, "extra_method_table");
fields [tindex ++] = LLVMGetNamedGlobal (lmodule, "got_info_offsets");
fields [tindex ++] = LLVMGetNamedGlobal (lmodule, "llvm_got_info_offsets");
fields [tindex ++] = LLVMGetNamedGlobal (lmodule, "image_table");
fields [tindex ++] = LLVMGetNamedGlobal (lmodule, "weak_field_indexes");
fields [tindex ++] = LLVMGetNamedGlobal (lmodule, "method_flags_table");
}
/* Not needed (mem_end) */
fields [tindex ++] = LLVMConstNull (eltype);
fields [tindex ++] = LLVMGetNamedGlobal (lmodule, "assembly_guid");
fields [tindex ++] = LLVMGetNamedGlobal (lmodule, "runtime_version");
if (info->trampoline_size [0]) {
fields [tindex ++] = AddJitGlobal (module, eltype, "specific_trampolines");
fields [tindex ++] = AddJitGlobal (module, eltype, "static_rgctx_trampolines");
fields [tindex ++] = AddJitGlobal (module, eltype, "imt_trampolines");
fields [tindex ++] = AddJitGlobal (module, eltype, "gsharedvt_arg_trampolines");
fields [tindex ++] = AddJitGlobal (module, eltype, "ftnptr_arg_trampolines");
fields [tindex ++] = AddJitGlobal (module, eltype, "unbox_arbitrary_trampolines");
} else {
fields [tindex ++] = LLVMConstNull (eltype);
fields [tindex ++] = LLVMConstNull (eltype);
fields [tindex ++] = LLVMConstNull (eltype);
fields [tindex ++] = LLVMConstNull (eltype);
fields [tindex ++] = LLVMConstNull (eltype);
fields [tindex ++] = LLVMConstNull (eltype);
}
if (module->static_link && !module->llvm_only)
fields [tindex ++] = AddJitGlobal (module, eltype, "globals");
else
fields [tindex ++] = LLVMConstNull (eltype);
fields [tindex ++] = LLVMGetNamedGlobal (lmodule, "assembly_name");
if (!module->llvm_only) {
fields [tindex ++] = AddJitGlobal (module, eltype, "plt");
fields [tindex ++] = AddJitGlobal (module, eltype, "plt_end");
fields [tindex ++] = AddJitGlobal (module, eltype, "unwind_info");
fields [tindex ++] = AddJitGlobal (module, eltype, "unbox_trampolines");
fields [tindex ++] = AddJitGlobal (module, eltype, "unbox_trampolines_end");
fields [tindex ++] = AddJitGlobal (module, eltype, "unbox_trampoline_addresses");
} else {
fields [tindex ++] = LLVMConstNull (eltype);
fields [tindex ++] = LLVMConstNull (eltype);
fields [tindex ++] = LLVMConstNull (eltype);
fields [tindex ++] = LLVMConstNull (eltype);
fields [tindex ++] = LLVMConstNull (eltype);
fields [tindex ++] = LLVMConstNull (eltype);
}
for (i = 0; i < MONO_AOT_FILE_INFO_NUM_SYMBOLS; ++i) {
g_assert (fields [FILE_INFO_NUM_HEADER_FIELDS + i]);
fields [FILE_INFO_NUM_HEADER_FIELDS + i] = LLVMConstBitCast (fields [FILE_INFO_NUM_HEADER_FIELDS + i], eltype);
}
/* Scalars */
fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->plt_got_offset_base, FALSE);
fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->plt_got_info_offset_base, FALSE);
fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->got_size, FALSE);
fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->llvm_got_size, FALSE);
fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->plt_size, FALSE);
fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->nmethods, FALSE);
fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->nextra_methods, FALSE);
fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->flags, FALSE);
fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->opts, FALSE);
fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->simd_opts, FALSE);
fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->gc_name_index, FALSE);
fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->num_rgctx_fetch_trampolines, FALSE);
fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->double_align, FALSE);
fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->long_align, FALSE);
fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->generic_tramp_num, FALSE);
fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->card_table_shift_bits, FALSE);
fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->card_table_mask, FALSE);
fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->tramp_page_size, FALSE);
fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->call_table_entry_size, FALSE);
fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->nshared_got_entries, FALSE);
fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->datafile_size, FALSE);
fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), module->unbox_tramp_num, FALSE);
fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), module->unbox_tramp_elemsize, FALSE);
/* Arrays */
fields [tindex ++] = llvm_array_from_uints (LLVMInt32Type (), info->table_offsets, MONO_AOT_TABLE_NUM);
fields [tindex ++] = llvm_array_from_uints (LLVMInt32Type (), info->num_trampolines, MONO_AOT_TRAMP_NUM);
fields [tindex ++] = llvm_array_from_uints (LLVMInt32Type (), info->trampoline_got_offset_base, MONO_AOT_TRAMP_NUM);
fields [tindex ++] = llvm_array_from_uints (LLVMInt32Type (), info->trampoline_size, MONO_AOT_TRAMP_NUM);
fields [tindex ++] = llvm_array_from_uints (LLVMInt32Type (), info->tramp_page_code_offsets, MONO_AOT_TRAMP_NUM);
fields [tindex ++] = llvm_array_from_bytes (info->aotid, 16);
g_assert (tindex == nfields);
LLVMSetInitializer (info_var, LLVMConstNamedStruct (LLVMGetElementType (LLVMTypeOf (info_var)), fields, nfields));
if (module->static_link) {
char *s, *p;
LLVMValueRef var;
s = g_strdup_printf ("mono_aot_module_%s_info", module->assembly->aname.name);
/* Get rid of characters which cannot occur in symbols */
for (p = s; *p; ++p) {
if (!(isalnum (*p) || *p == '_'))
*p = '_';
}
var = LLVMAddGlobal (module->lmodule, LLVMPointerType (LLVMInt8Type (), 0), s);
g_free (s);
LLVMSetInitializer (var, LLVMConstBitCast (LLVMGetNamedGlobal (module->lmodule, "mono_aot_file_info"), LLVMPointerType (LLVMInt8Type (), 0)));
LLVMSetLinkage (var, LLVMExternalLinkage);
}
}
typedef struct {
LLVMValueRef lmethod;
int argument;
} NonnullPropWorkItem;
static void
mono_llvm_nonnull_state_update (EmitContext *ctx, LLVMValueRef lcall, MonoMethod *call_method, LLVMValueRef *args, int num_params)
{
if (mono_aot_can_specialize (call_method)) {
int num_passed = LLVMGetNumArgOperands (lcall);
g_assert (num_params <= num_passed);
g_assert (ctx->module->method_to_call_info);
GArray *call_site_union = (GArray *) g_hash_table_lookup (ctx->module->method_to_call_info, call_method);
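// call_site_union [i] counts the call sites seen so far where argument i was
// not provably non-null; a count of 0 means every observed caller passed a
// non-null value for that argument.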
if (!call_site_union) {
call_site_union = g_array_sized_new (FALSE, TRUE, sizeof (gint32), num_params);
int zero = 0;
for (int i = 0; i < num_params; i++)
g_array_insert_val (call_site_union, i, zero);
}
for (int i = 0; i < num_params; i++) {
if (mono_llvm_is_nonnull (args [i])) {
g_assert (i < LLVMGetNumArgOperands (lcall));
mono_llvm_set_call_nonnull_arg (lcall, i);
} else {
gint32 *nullable_count = &g_array_index (call_site_union, gint32, i);
*nullable_count = *nullable_count + 1;
}
}
g_hash_table_insert (ctx->module->method_to_call_info, call_method, call_site_union);
}
}
static void
mono_llvm_propagate_nonnull_final (GHashTable *all_specializable, MonoLLVMModule *module)
{
// When we first traverse the mini IL, we mark the things that are
// nonnull (the roots). Then, for all of the methods that can be specialized, we
// see if their call sites have nonnull attributes.
// If so, we mark the function's param. This param has uses to propagate
// the attribute to. This propagation can trigger a need to mark more attributes
// non-null, and so on and so forth.
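// Example: if every observed call site of a specializable method passes a
// non-null value for parameter 0, that parameter's nullable count stays 0 and
// it is queued as a root; marking it nonnull can then drive the counts of
// callees it forwards the parameter to down to 0 as well.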
GSList *queue = NULL;
GHashTableIter iter;
LLVMValueRef lmethod;
MonoMethod *method;
g_hash_table_iter_init (&iter, all_specializable);
while (g_hash_table_iter_next (&iter, (void**)&lmethod, (void**)&method)) {
GArray *call_site_union = (GArray *) g_hash_table_lookup (module->method_to_call_info, method);
// Basic sanity checking
if (call_site_union)
g_assert (call_site_union->len == LLVMCountParams (lmethod));
// Add root to work queue
for (int i = 0; call_site_union && i < call_site_union->len; i++) {
if (g_array_index (call_site_union, gint32, i) == 0) {
NonnullPropWorkItem *item = g_malloc (sizeof (NonnullPropWorkItem));
item->lmethod = lmethod;
item->argument = i;
queue = g_slist_prepend (queue, item);
}
}
}
// This is essentially reference counting, and we are propagating
// the refcount decrement here. We have less work to do than we may otherwise
// because we are only working with a set of subgraphs of specializable functions.
//
// We rely on being able to see all of the references in the graph.
// This is ensured by the function mono_aot_can_specialize. Everything in
// all_specializable is a function that can be specialized, and is the resulting
// node in the graph after all of the substitutions are done.
//
// Anything disrupting the direct calls made with self-init will break this optimization.
while (queue) {
// Update the queue state, freeing the consumed list link as we go.
// Our only other per-iteration responsibility is then to free current
NonnullPropWorkItem *current = (NonnullPropWorkItem *) queue->data;
queue = g_slist_delete_link (queue, queue);
g_assert (current->argument < LLVMCountParams (current->lmethod));
// Does the actual leaf-node work here
// Mark the function argument as nonnull for LLVM
mono_llvm_set_func_nonnull_arg (current->lmethod, current->argument);
// The rest of this is for propagating forward nullability changes
// to calls that use the argument that is now known to be non-null.
// Get the actual LLVM value of the argument, so we can see which call instructions
// used that argument
LLVMValueRef caller_argument = LLVMGetParam (current->lmethod, current->argument);
// Iterate over the calls using the newly-non-nullable argument
GSList *calls = mono_llvm_calls_using (caller_argument);
for (GSList *cursor = calls; cursor != NULL; cursor = cursor->next) {
LLVMValueRef lcall = (LLVMValueRef) cursor->data;
LLVMValueRef callee_lmethod = LLVMGetCalledValue (lcall);
// If this wasn't a direct call for which mono_aot_can_specialize is true,
// this lookup won't find a MonoMethod.
MonoMethod *callee_method = (MonoMethod *) g_hash_table_lookup (all_specializable, callee_lmethod);
if (!callee_method)
continue;
// Decrement number of nullable refs at that func's arg offset
GArray *call_site_union = (GArray *) g_hash_table_lookup (module->method_to_call_info, callee_method);
// It has module-local callers and is specializable, should have seen this call site
// and inited this
g_assert (call_site_union);
// The function *definition* parameter arity should always be consistent
int max_params = LLVMCountParams (callee_lmethod);
if (call_site_union->len != max_params) {
mono_llvm_dump_value (callee_lmethod);
g_assert_not_reached ();
}
// Get the values that correspond to the parameters passed to the call
// that used our argument
LLVMValueRef *operands = mono_llvm_call_args (lcall);
for (int call_argument = 0; call_argument < max_params; call_argument++) {
// Every time we used the newly-non-nullable argument, decrement the nullable
// refcount for that function.
if (caller_argument == operands [call_argument]) {
gint32 *nullable_count = &g_array_index (call_site_union, gint32, call_argument);
g_assert (*nullable_count > 0);
*nullable_count = *nullable_count - 1;
// If we caused that callee's parameter to become newly non-nullable, add to work queue
if (*nullable_count == 0) {
NonnullPropWorkItem *item = g_malloc (sizeof (NonnullPropWorkItem));
item->lmethod = callee_lmethod;
item->argument = call_argument;
queue = g_slist_prepend (queue, item);
}
}
}
g_free (operands);
// Update nullability refcount information for the callee now
g_hash_table_insert (module->method_to_call_info, callee_method, call_site_union);
}
g_slist_free (calls);
g_free (current);
}
}
/*
* Emit the aot module into the LLVM bitcode file FILENAME.
*/
void
mono_llvm_emit_aot_module (const char *filename, const char *cu_name)
{
LLVMTypeRef inited_type;
LLVMValueRef real_inited;
MonoLLVMModule *module = &aot_module;
emit_llvm_code_end (module);
/*
* Create the real init_var and replace all uses of the dummy variable with
* the real one.
*/
inited_type = LLVMArrayType (LLVMInt8Type (), module->max_inited_idx + 1);
real_inited = LLVMAddGlobal (module->lmodule, inited_type, "mono_inited");
LLVMSetInitializer (real_inited, LLVMConstNull (inited_type));
LLVMSetLinkage (real_inited, LLVMInternalLinkage);
mono_llvm_replace_uses_of (module->inited_var, real_inited);
LLVMDeleteGlobal (module->inited_var);
/* Replace the dummy info_ variables with the real ones */
for (int i = 0; i < module->cfgs->len; ++i) {
MonoCompile *cfg = (MonoCompile *)g_ptr_array_index (module->cfgs, i);
// FIXME: Eliminate unused vars
// FIXME: Speed this up
if (cfg->llvm_dummy_info_var) {
if (cfg->llvm_info_var) {
mono_llvm_replace_uses_of (cfg->llvm_dummy_info_var, cfg->llvm_info_var);
LLVMDeleteGlobal (cfg->llvm_dummy_info_var);
} else {
// FIXME: How can this happen ?
LLVMSetInitializer (cfg->llvm_dummy_info_var, mono_llvm_create_constant_data_array (NULL, 0));
}
}
}
if (module->llvm_only) {
emit_get_method (&aot_module);
emit_get_unbox_tramp (&aot_module);
}
emit_init_aotconst (module);
emit_llvm_used (&aot_module);
emit_dbg_info (&aot_module, filename, cu_name);
emit_aot_file_info (&aot_module);
/* Replace PLT entries for directly callable methods with the methods themselves */
{
GHashTableIter iter;
MonoJumpInfo *ji;
LLVMValueRef callee;
GHashTable *specializable = g_hash_table_new (NULL, NULL);
g_hash_table_iter_init (&iter, module->plt_entries_ji);
while (g_hash_table_iter_next (&iter, (void**)&ji, (void**)&callee)) {
if (mono_aot_is_direct_callable (ji)) {
LLVMValueRef lmethod;
lmethod = (LLVMValueRef)g_hash_table_lookup (module->method_to_lmethod, ji->data.method);
/* The types might not match because the caller might pass an rgctx */
if (lmethod && LLVMTypeOf (callee) == LLVMTypeOf (lmethod)) {
mono_llvm_replace_uses_of (callee, lmethod);
if (mono_aot_can_specialize (ji->data.method))
g_hash_table_insert (specializable, lmethod, ji->data.method);
mono_aot_mark_unused_llvm_plt_entry (ji);
}
}
}
mono_llvm_propagate_nonnull_final (specializable, module);
g_hash_table_destroy (specializable);
}
#if 0
{
char *verifier_err;
if (LLVMVerifyModule (module->lmodule, LLVMReturnStatusAction, &verifier_err)) {
printf ("%s\n", verifier_err);
g_assert_not_reached ();
}
}
#endif
/* Note: you can still dump an invalid bitcode file for inspection with
 * `llvm-dis`: in a debugger, set a breakpoint on `LLVMVerifyModule` and fake
 * its result to 0 (indicating success) so the file below still gets written. */
LLVMWriteBitcodeToFile (module->lmodule, filename);
}
static LLVMValueRef
md_string (const char *s)
{
return LLVMMDString (s, strlen (s));
}
/* Debugging support */
static void
emit_dbg_info (MonoLLVMModule *module, const char *filename, const char *cu_name)
{
LLVMModuleRef lmodule = module->lmodule;
LLVMValueRef args [16], ver;
/*
* This can only be enabled when LLVM code is emitted into a separate object
* file, since the AOT compiler also emits dwarf info,
* and the abbrev indexes will not be correct since llvm has added its own
* abbrevs.
*/
if (!module->emit_dwarf)
return;
mono_llvm_di_builder_finalize (module->di_builder);
args [0] = LLVMConstInt (LLVMInt32Type (), 2, FALSE);
args [1] = LLVMMDString ("Dwarf Version", strlen ("Dwarf Version"));
args [2] = LLVMConstInt (LLVMInt32Type (), 2, FALSE);
ver = LLVMMDNode (args, 3);
LLVMAddNamedMetadataOperand (lmodule, "llvm.module.flags", ver);
args [0] = LLVMConstInt (LLVMInt32Type (), 2, FALSE);
args [1] = LLVMMDString ("Debug Info Version", strlen ("Debug Info Version"));
args [2] = LLVMConstInt (LLVMInt64Type (), 3, FALSE);
ver = LLVMMDNode (args, 3);
LLVMAddNamedMetadataOperand (lmodule, "llvm.module.flags", ver);
}
static LLVMValueRef
emit_dbg_subprogram (EmitContext *ctx, MonoCompile *cfg, LLVMValueRef method, const char *name)
{
MonoLLVMModule *module = ctx->module;
MonoDebugMethodInfo *minfo = ctx->minfo;
char *source_file, *dir, *filename;
MonoSymSeqPoint *sym_seq_points;
int n_seq_points;
if (!minfo)
return NULL;
mono_debug_get_seq_points (minfo, &source_file, NULL, NULL, &sym_seq_points, &n_seq_points);
if (!source_file)
source_file = g_strdup ("<unknown>");
dir = g_path_get_dirname (source_file);
filename = g_path_get_basename (source_file);
g_free (source_file);
return (LLVMValueRef)mono_llvm_di_create_function (module->di_builder, module->cu, method, cfg->method->name, name, dir, filename, n_seq_points ? sym_seq_points [0].line : 1);
}
static void
emit_dbg_loc (EmitContext *ctx, LLVMBuilderRef builder, const unsigned char *cil_code)
{
MonoCompile *cfg = ctx->cfg;
if (ctx->minfo && cil_code && cil_code >= cfg->header->code && cil_code < cfg->header->code + cfg->header->code_size) {
MonoDebugSourceLocation *loc;
LLVMValueRef loc_md;
loc = mono_debug_method_lookup_location (ctx->minfo, cil_code - cfg->header->code);
if (loc) {
loc_md = (LLVMValueRef)mono_llvm_di_create_location (ctx->module->di_builder, ctx->dbg_md, loc->row, loc->column);
mono_llvm_di_set_location (builder, loc_md);
mono_debug_free_source_location (loc);
}
}
}
static void
emit_default_dbg_loc (EmitContext *ctx, LLVMBuilderRef builder)
{
if (ctx->minfo) {
LLVMValueRef loc_md;
loc_md = (LLVMValueRef)mono_llvm_di_create_location (ctx->module->di_builder, ctx->dbg_md, 0, 0);
mono_llvm_di_set_location (builder, loc_md);
}
}
/*
DESIGN:
- Emit LLVM IR from the mono IR using the LLVM C API.
- The original arch specific code remains, so we can fall back to it if we run
into something we can't handle.
*/
/*
A partial list of issues:
- Handling of opcodes which can throw exceptions.
In the mono JIT, these are implemented using code like this:
method:
<compare>
throw_pos:
b<cond> ex_label
<rest of code>
ex_label:
push throw_pos - method
call <exception trampoline>
The problematic part is push throw_pos - method, which cannot be represented
in the LLVM IR, since it does not support label values.
-> this can be implemented in AOT mode using inline asm + labels, but cannot
be implemented in JIT mode ?
-> a possible but slower implementation would use the normal exception
throwing code but it would need to control the placement of the throw code
(it needs to be exactly after the compare+branch).
-> perhaps add a PC offset intrinsics ?
- efficient implementation of .ovf opcodes.
These are currently implemented as:
<ins which sets the condition codes>
b<cond> ex_label
Some overflow opcodes are now supported by LLVM SVN.
- exception handling, unwinding.
- SSA is disabled for methods with exception handlers
- How to obtain unwind info for LLVM compiled methods ?
-> this is now solved by converting the unwind info generated by LLVM
into our format.
- LLVM uses the C++ exception handling framework, while we use our home-grown
code, and couldn't use the C++ one:
- it's not supported under VC++ and other exotic platforms.
- it might be impossible to support filter clauses with it.
- trampolines.
The trampolines need a predictable call sequence, since they need to disasm
the calling code to obtain register numbers / offsets.
LLVM currently generates this code in non-JIT mode:
mov -0x98(%rax),%eax
callq *%rax
Here, the vtable pointer is lost.
-> solution: use one vtable trampoline per class.
- passing/receiving the IMT pointer/RGCTX.
-> solution: pass them as normal arguments ?
- argument passing.
LLVM does not allow the specification of argument registers etc. This means
that all calls are made according to the platform ABI.
- passing/receiving vtypes.
Vtypes passed/received in registers are handled by the front end by using
a signature with scalar arguments, and loading the parts of the vtype into those
arguments.
Vtypes passed on the stack are handled using the 'byval' attribute.
- ldaddr.
Supported through alloca; we need to emit the load/store code.
- types.
The mono JIT uses pointer sized iregs/double fregs, while LLVM uses precisely
typed registers, so we have to keep track of the precise LLVM type of each vreg.
This is made easier because the IR is already in SSA form.
An additional problem is that our IR is not consistent with types, i.e. i32/i64
types are frequently used incorrectly.
*/
/*
AOT SUPPORT:
Emit LLVM bytecode into a .bc file, compile it using llc into a .s file, then link
it with the file containing the methods emitted by the JIT and the AOT data
structures.
*/
/* FIXME: Normalize some aspects of the mono IR to allow easier translation, like:
* - each bblock should end with a branch
* - setting the return value, making cfg->ret non-volatile
* - avoid some transformations in the JIT which make it harder for us to generate
* code.
* - use pointer types to help optimizations.
*/
#else /* DISABLE_JIT */
void
mono_llvm_cleanup (void)
{
}
void
mono_llvm_free_mem_manager (MonoJitMemoryManager *mem_manager)
{
}
void
mono_llvm_init (gboolean enable_jit)
{
}
#endif /* DISABLE_JIT */
#if !defined(DISABLE_JIT) && !defined(MONO_CROSS_COMPILE)
/* LLVM JIT support */
/*
* decode_llvm_eh_info:
*
* Decode the EH table emitted by llvm in jit mode, and store
* the result into cfg.
*/
static void
decode_llvm_eh_info (EmitContext *ctx, gpointer eh_frame)
{
MonoCompile *cfg = ctx->cfg;
guint8 *cie, *fde;
int fde_len;
MonoLLVMFDEInfo info;
MonoJitExceptionInfo *ei;
guint8 *p = (guint8*)eh_frame;
int version, fde_count, fde_offset;
guint32 ei_len, i, nested_len;
gpointer *type_info;
gint32 *table;
guint8 *unw_info;
/*
* Decode the one element EH table emitted by the MonoException class
* in llvm.
*/
/* Similar to decode_llvm_mono_eh_frame () in aot-runtime.c */
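/*
 * Blob layout, as read below:
 *   guint8  version (== 3)
 *   guint8  unused
 *   <pad to 4 byte alignment>
 *   guint32 fde_count
 *   gint32  table [fde_count * 2 + 2]   (index/FDE-offset pairs plus a
 *                                        final code-length/end-offset pair)
 *   CIE data, with the FDE at the recorded offset from the blob start.
 */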
version = *p;
g_assert (version == 3);
p ++;
p ++;
p = (guint8 *)ALIGN_PTR_TO (p, 4);
fde_count = *(guint32*)p;
p += 4;
table = (gint32*)p;
g_assert (fde_count <= 2);
/* The first entry is the real method */
g_assert (table [0] == 1);
fde_offset = table [1];
table += fde_count * 2;
/* Extra entry */
cfg->code_len = table [0];
fde_len = table [1] - fde_offset;
table += 2;
fde = (guint8*)eh_frame + fde_offset;
cie = (guint8*)table;
/* Compute lengths */
mono_unwind_decode_llvm_mono_fde (fde, fde_len, cie, cfg->native_code, &info, NULL, NULL, NULL);
ei = (MonoJitExceptionInfo *)g_malloc0 (info.ex_info_len * sizeof (MonoJitExceptionInfo));
type_info = (gpointer *)g_malloc0 (info.ex_info_len * sizeof (gpointer));
unw_info = (guint8*)g_malloc0 (info.unw_info_len);
mono_unwind_decode_llvm_mono_fde (fde, fde_len, cie, cfg->native_code, &info, ei, type_info, unw_info);
cfg->encoded_unwind_ops = unw_info;
cfg->encoded_unwind_ops_len = info.unw_info_len;
if (cfg->verbose_level > 1)
mono_print_unwind_info (cfg->encoded_unwind_ops, cfg->encoded_unwind_ops_len);
if (info.this_reg != -1) {
cfg->llvm_this_reg = info.this_reg;
cfg->llvm_this_offset = info.this_offset;
}
ei_len = info.ex_info_len;
// Nested clauses are currently disabled
nested_len = 0;
cfg->llvm_ex_info = (MonoJitExceptionInfo*)mono_mempool_alloc0 (cfg->mempool, (ei_len + nested_len) * sizeof (MonoJitExceptionInfo));
cfg->llvm_ex_info_len = ei_len + nested_len;
memcpy (cfg->llvm_ex_info, ei, ei_len * sizeof (MonoJitExceptionInfo));
/* Fill the rest of the information from the type info */
for (i = 0; i < ei_len; ++i) {
gint32 clause_index = *(gint32*)type_info [i];
MonoExceptionClause *clause = &cfg->header->clauses [clause_index];
cfg->llvm_ex_info [i].flags = clause->flags;
cfg->llvm_ex_info [i].data.catch_class = clause->data.catch_class;
cfg->llvm_ex_info [i].clause_index = clause_index;
}
}
static MonoLLVMModule*
init_jit_module (void)
{
MonoJitMemoryManager *jit_mm;
MonoLLVMModule *module;
// FIXME:
jit_mm = get_default_jit_mm ();
if (jit_mm->llvm_module)
return (MonoLLVMModule*)jit_mm->llvm_module;
mono_loader_lock ();
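/* Double-checked: another thread may have created the module while we were
 * waiting for the loader lock. */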
if (jit_mm->llvm_module) {
mono_loader_unlock ();
return (MonoLLVMModule*)jit_mm->llvm_module;
}
module = g_new0 (MonoLLVMModule, 1);
module->context = LLVMGetGlobalContext ();
module->mono_ee = (MonoEERef*)mono_llvm_create_ee (&module->ee);
// This contains just the intrinsics
module->lmodule = LLVMModuleCreateWithName ("jit-global-module");
add_intrinsics (module->lmodule);
add_types (module);
module->llvm_types = g_hash_table_new (NULL, NULL);
mono_memory_barrier ();
jit_mm->llvm_module = module;
mono_loader_unlock ();
return (MonoLLVMModule*)jit_mm->llvm_module;
}
static void
llvm_jit_finalize_method (EmitContext *ctx)
{
MonoCompile *cfg = ctx->cfg;
int nvars = g_hash_table_size (ctx->jit_callees);
LLVMValueRef *callee_vars = g_new0 (LLVMValueRef, nvars);
gpointer *callee_addrs = g_new0 (gpointer, nvars);
GHashTableIter iter;
LLVMValueRef var;
MonoMethod *callee;
gpointer eh_frame;
int i;
/*
* Compute the addresses of the LLVM globals pointing to the
* methods called by the current method. Pass it to the trampoline
* code so it can update them after their corresponding method was
* compiled.
*/
g_hash_table_iter_init (&iter, ctx->jit_callees);
i = 0;
while (g_hash_table_iter_next (&iter, NULL, (void**)&var))
callee_vars [i ++] = var;
mono_llvm_optimize_method (ctx->lmethod);
if (cfg->verbose_level > 1) {
g_print ("\n*** Optimized LLVM IR for %s ***\n", mono_method_full_name (cfg->method, TRUE));
if (cfg->compile_aot) {
mono_llvm_dump_value (ctx->lmethod);
} else {
mono_llvm_dump_module (ctx->lmodule);
}
g_print ("***\n\n");
}
mono_codeman_enable_write ();
cfg->native_code = (guint8*)mono_llvm_compile_method (ctx->module->mono_ee, cfg, ctx->lmethod, nvars, callee_vars, callee_addrs, &eh_frame);
mono_llvm_remove_gc_safepoint_poll (ctx->lmodule);
mono_codeman_disable_write ();
decode_llvm_eh_info (ctx, eh_frame);
// FIXME:
MonoJitMemoryManager *jit_mm = get_default_jit_mm ();
jit_mm_lock (jit_mm);
if (!jit_mm->llvm_jit_callees)
jit_mm->llvm_jit_callees = g_hash_table_new (NULL, NULL);
g_hash_table_iter_init (&iter, ctx->jit_callees);
i = 0;
while (g_hash_table_iter_next (&iter, (void**)&callee, (void**)&var)) {
GSList *addrs = (GSList*)g_hash_table_lookup (jit_mm->llvm_jit_callees, callee);
addrs = g_slist_prepend (addrs, callee_addrs [i]);
g_hash_table_insert (jit_mm->llvm_jit_callees, callee, addrs);
i ++;
}
jit_mm_unlock (jit_mm);
}
#else
static MonoLLVMModule*
init_jit_module (void)
{
g_assert_not_reached ();
}
static void
llvm_jit_finalize_method (EmitContext *ctx)
{
g_assert_not_reached ();
}
#endif
static MonoCPUFeatures cpu_features;
MonoCPUFeatures mono_llvm_get_cpu_features (void)
{
static const CpuFeatureAliasFlag flags_map [] = {
#if defined(TARGET_X86) || defined(TARGET_AMD64)
{ "sse", MONO_CPU_X86_SSE },
{ "sse2", MONO_CPU_X86_SSE2 },
{ "pclmul", MONO_CPU_X86_PCLMUL },
{ "aes", MONO_CPU_X86_AES },
{ "sse2", MONO_CPU_X86_SSE2 },
{ "sse3", MONO_CPU_X86_SSE3 },
{ "ssse3", MONO_CPU_X86_SSSE3 },
{ "sse4.1", MONO_CPU_X86_SSE41 },
{ "sse4.2", MONO_CPU_X86_SSE42 },
{ "popcnt", MONO_CPU_X86_POPCNT },
{ "avx", MONO_CPU_X86_AVX },
{ "avx2", MONO_CPU_X86_AVX2 },
{ "fma", MONO_CPU_X86_FMA },
{ "lzcnt", MONO_CPU_X86_LZCNT },
{ "bmi", MONO_CPU_X86_BMI1 },
{ "bmi2", MONO_CPU_X86_BMI2 },
#endif
#if defined(TARGET_ARM64)
{ "crc", MONO_CPU_ARM64_CRC },
{ "crypto", MONO_CPU_ARM64_CRYPTO },
{ "neon", MONO_CPU_ARM64_NEON },
{ "rdm", MONO_CPU_ARM64_RDM },
{ "dotprod", MONO_CPU_ARM64_DP },
#endif
#if defined(TARGET_WASM)
{ "simd", MONO_CPU_WASM_SIMD },
#endif
// flags_map cannot be zero length in MSVC, so add a dummy entry for arm32
#if defined(TARGET_ARM) && defined(HOST_WIN32)
{ "inited", MONO_CPU_INITED},
#endif
};
if (!cpu_features)
cpu_features = MONO_CPU_INITED | (MonoCPUFeatures)mono_llvm_check_cpu_features (flags_map, G_N_ELEMENTS (flags_map));
return cpu_features;
}
/**
* \file
* llvm "Backend" for the mono JIT
*
* Copyright 2009-2011 Novell Inc (http://www.novell.com)
* Copyright 2011 Xamarin Inc (http://www.xamarin.com)
* Licensed under the MIT license. See LICENSE file in the project root for full license information.
*/
#include "config.h"
#include <mono/metadata/debug-helpers.h>
#include <mono/metadata/debug-internals.h>
#include <mono/metadata/mempool-internals.h>
#include <mono/metadata/environment.h>
#include <mono/metadata/object-internals.h>
#include <mono/metadata/abi-details.h>
#include <mono/metadata/tokentype.h>
#include <mono/utils/mono-tls.h>
#include <mono/utils/mono-dl.h>
#include <mono/utils/mono-time.h>
#include <mono/utils/freebsd-dwarf.h>
#ifndef __STDC_LIMIT_MACROS
#define __STDC_LIMIT_MACROS
#endif
#ifndef __STDC_CONSTANT_MACROS
#define __STDC_CONSTANT_MACROS
#endif
#include "llvm-c/BitWriter.h"
#include "llvm-c/Analysis.h"
#include "mini-llvm-cpp.h"
#include "llvm-jit.h"
#include "aot-compiler.h"
#include "mini-llvm.h"
#include "mini-runtime.h"
#include <mono/utils/mono-math.h>
#ifndef DISABLE_JIT
#if defined(TARGET_AMD64) && defined(TARGET_WIN32) && defined(HOST_WIN32) && defined(_MSC_VER)
#define TARGET_X86_64_WIN32_MSVC
#endif
#if defined(TARGET_X86_64_WIN32_MSVC)
#define TARGET_WIN32_MSVC
#endif
#if LLVM_API_VERSION < 900
#error "The version of the mono llvm repository is too old."
#endif
/*
* Information associated by mono with LLVM modules.
*/
typedef struct {
LLVMModuleRef lmodule;
LLVMValueRef throw_icall, rethrow, throw_corlib_exception;
GHashTable *llvm_types;
LLVMValueRef dummy_got_var;
const char *get_method_symbol;
const char *get_unbox_tramp_symbol;
const char *init_aotconst_symbol;
GHashTable *plt_entries;
GHashTable *plt_entries_ji;
GHashTable *method_to_lmethod;
GHashTable *method_to_call_info;
GHashTable *lvalue_to_lcalls;
GHashTable *direct_callables;
/* Maps got slot index -> LLVMValueRef */
GHashTable *aotconst_vars;
char **bb_names;
int bb_names_len;
GPtrArray *used;
LLVMTypeRef ptr_type;
GPtrArray *subprogram_mds;
MonoEERef *mono_ee;
LLVMExecutionEngineRef ee;
gboolean external_symbols;
gboolean emit_dwarf;
int max_got_offset;
LLVMValueRef personality;
gpointer gc_poll_cold_wrapper_compiled;
/* For AOT */
MonoAssembly *assembly;
char *global_prefix;
MonoAotFileInfo aot_info;
const char *eh_frame_symbol;
LLVMValueRef get_method, get_unbox_tramp, init_aotconst_func;
LLVMValueRef init_methods [AOT_INIT_METHOD_NUM];
LLVMValueRef code_start, code_end;
LLVMValueRef inited_var;
LLVMValueRef unbox_tramp_indexes;
LLVMValueRef unbox_trampolines;
LLVMValueRef gc_poll_cold_wrapper;
LLVMValueRef info_var;
LLVMTypeRef *info_var_eltypes;
int max_inited_idx, max_method_idx;
gboolean has_jitted_code;
gboolean static_link;
gboolean llvm_only;
gboolean interp;
GHashTable *idx_to_lmethod;
GHashTable *idx_to_unbox_tramp;
GPtrArray *callsite_list;
LLVMContextRef context;
LLVMValueRef sentinel_exception;
LLVMValueRef gc_safe_point_flag_var;
LLVMValueRef interrupt_flag_var;
void *di_builder, *cu;
GHashTable *objc_selector_to_var;
GPtrArray *cfgs;
int unbox_tramp_num, unbox_tramp_elemsize;
GHashTable *got_idx_to_type;
GHashTable *no_method_table_lmethods;
} MonoLLVMModule;
/*
* Information associated by the backend with mono basic blocks.
*/
typedef struct {
LLVMBasicBlockRef bblock, end_bblock;
LLVMValueRef finally_ind;
gboolean added, invoke_target;
/*
* If this bblock is the start of a finally clause, this is a list of bblocks it
* needs to branch to in ENDFINALLY.
*/
GSList *call_handler_return_bbs;
/*
* If this bblock is the start of a finally clause, this is the bblock that
* CALL_HANDLER needs to branch to.
*/
LLVMBasicBlockRef call_handler_target_bb;
/* The list of switch statements generated by ENDFINALLY instructions */
GSList *endfinally_switch_ins_list;
GSList *phi_nodes;
} BBInfo;
/*
* Structure containing emit state
*/
typedef struct {
MonoMemPool *mempool;
/* Maps method names to the corresponding LLVMValueRef */
GHashTable *emitted_method_decls;
MonoCompile *cfg;
LLVMValueRef lmethod;
MonoLLVMModule *module;
LLVMModuleRef lmodule;
BBInfo *bblocks;
int sindex, default_index, ex_index;
LLVMBuilderRef builder;
LLVMValueRef *values, *addresses;
MonoType **vreg_cli_types;
LLVMCallInfo *linfo;
MonoMethodSignature *sig;
GSList *builders;
GHashTable *region_to_handler;
GHashTable *clause_to_handler;
LLVMBuilderRef alloca_builder;
LLVMValueRef last_alloca;
LLVMValueRef rgctx_arg;
LLVMValueRef this_arg;
LLVMTypeRef *vreg_types;
gboolean *is_vphi;
LLVMTypeRef method_type;
LLVMBasicBlockRef init_bb, inited_bb;
gboolean *is_dead;
gboolean *unreachable;
gboolean llvm_only;
gboolean has_got_access;
gboolean is_linkonce;
gboolean emit_dummy_arg;
gboolean has_safepoints;
gboolean has_catch;
int this_arg_pindex, rgctx_arg_pindex;
LLVMValueRef imt_rgctx_loc;
GHashTable *llvm_types;
LLVMValueRef dbg_md;
MonoDebugMethodInfo *minfo;
/* For every clause, the clauses it is nested in */
GSList **nested_in;
LLVMValueRef ex_var;
GHashTable *exc_meta;
GPtrArray *callsite_list;
GPtrArray *phi_values;
GPtrArray *bblock_list;
char *method_name;
GHashTable *jit_callees;
LLVMValueRef long_bb_break_var;
int *gc_var_indexes;
LLVMValueRef gc_pin_area;
LLVMValueRef il_state;
LLVMValueRef il_state_ret;
} EmitContext;
typedef struct {
MonoBasicBlock *bb;
MonoInst *phi;
MonoBasicBlock *in_bb;
int sreg;
} PhiNode;
/*
* Instruction metadata
* This is the same as ins_info, but LREG != IREG.
*/
#ifdef MINI_OP
#undef MINI_OP
#endif
#ifdef MINI_OP3
#undef MINI_OP3
#endif
#define MINI_OP(a,b,dest,src1,src2) dest, src1, src2, ' ',
#define MINI_OP3(a,b,dest,src1,src2,src3) dest, src1, src2, src3,
#define NONE ' '
#define IREG 'i'
#define FREG 'f'
#define VREG 'v'
#define XREG 'x'
#define LREG 'l'
/* keep in sync with the enum in mini.h */
const char
mini_llvm_ins_info[] = {
#include "mini-ops.h"
};
#undef MINI_OP
#undef MINI_OP3
#if TARGET_SIZEOF_VOID_P == 4
#define GET_LONG_IMM(ins) ((ins)->inst_l)
#else
#define GET_LONG_IMM(ins) ((ins)->inst_imm)
#endif
#define LLVM_INS_INFO(opcode) (&mini_llvm_ins_info [((opcode) - OP_START - 1) * 4])
#if 0
#define TRACE_FAILURE(msg) do { printf ("%s\n", msg); } while (0)
#else
#define TRACE_FAILURE(msg)
#endif
#ifdef TARGET_X86
#define IS_TARGET_X86 1
#else
#define IS_TARGET_X86 0
#endif
#ifdef TARGET_AMD64
#define IS_TARGET_AMD64 1
#else
#define IS_TARGET_AMD64 0
#endif
#define ctx_ok(ctx) (!(ctx)->cfg->disable_llvm)
enum {
MAX_VECTOR_ELEMS = 32, // 2 vectors * 128 bits per vector / 8 bits per element
ARM64_MAX_VECTOR_ELEMS = 16,
};
const int mask_0_incr_1 [] = {
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16,
17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31,
};
static LLVMIntPredicate cond_to_llvm_cond [] = {
LLVMIntEQ,
LLVMIntNE,
LLVMIntSLE,
LLVMIntSGE,
LLVMIntSLT,
LLVMIntSGT,
LLVMIntULE,
LLVMIntUGE,
LLVMIntULT,
LLVMIntUGT,
};
static LLVMRealPredicate fpcond_to_llvm_cond [] = {
LLVMRealOEQ,
LLVMRealUNE,
LLVMRealOLE,
LLVMRealOGE,
LLVMRealOLT,
LLVMRealOGT,
LLVMRealULE,
LLVMRealUGE,
LLVMRealULT,
LLVMRealUGT,
LLVMRealORD,
LLVMRealUNO
};
/* See Table 3-1 ("Comparison Predicate for CMPPD and CMPPS Instructions") in
* Vol. 2A of the Intel SDM.
*/
enum {
SSE_eq_ord_nosignal = 0,
SSE_lt_ord_signal = 1,
SSE_le_ord_signal = 2,
SSE_unord_nosignal = 3,
SSE_neq_unord_nosignal = 4,
SSE_nlt_unord_signal = 5,
SSE_nle_unord_signal = 6,
SSE_ord_nosignal = 7,
};
static MonoLLVMModule aot_module;
static GHashTable *intrins_id_to_intrins;
static LLVMTypeRef i1_t, i2_t, i4_t, i8_t, r4_t, r8_t;
static LLVMTypeRef sse_i1_t, sse_i2_t, sse_i4_t, sse_i8_t, sse_r4_t, sse_r8_t;
static LLVMTypeRef v64_i1_t, v64_i2_t, v64_i4_t, v64_i8_t, v64_r4_t, v64_r8_t;
static LLVMTypeRef v128_i1_t, v128_i2_t, v128_i4_t, v128_i8_t, v128_r4_t, v128_r8_t;
static LLVMTypeRef void_func_t;
static MonoLLVMModule *init_jit_module (void);
static void emit_dbg_loc (EmitContext *ctx, LLVMBuilderRef builder, const unsigned char *cil_code);
static void emit_default_dbg_loc (EmitContext *ctx, LLVMBuilderRef builder);
static LLVMValueRef emit_dbg_subprogram (EmitContext *ctx, MonoCompile *cfg, LLVMValueRef method, const char *name);
static void emit_dbg_info (MonoLLVMModule *module, const char *filename, const char *cu_name);
static void emit_cond_system_exception (EmitContext *ctx, MonoBasicBlock *bb, const char *exc_type, LLVMValueRef cmp, gboolean force_explicit);
static LLVMValueRef get_intrins (EmitContext *ctx, int id);
static LLVMValueRef get_intrins_from_module (LLVMModuleRef lmodule, int id);
static void llvm_jit_finalize_method (EmitContext *ctx);
static void mono_llvm_nonnull_state_update (EmitContext *ctx, LLVMValueRef lcall, MonoMethod *call_method, LLVMValueRef *args, int num_params);
static void mono_llvm_propagate_nonnull_final (GHashTable *all_specializable, MonoLLVMModule *module);
static void create_aot_info_var (MonoLLVMModule *module);
static void set_invariant_load_flag (LLVMValueRef v);
static void set_nonnull_load_flag (LLVMValueRef v);
enum {
INTRIN_scalar = 1 << 0,
INTRIN_vector64 = 1 << 1,
INTRIN_vector128 = 1 << 2,
INTRIN_vectorwidths = 3,
INTRIN_vectormask = 0x7,
INTRIN_int8 = 1 << 3,
INTRIN_int16 = 1 << 4,
INTRIN_int32 = 1 << 5,
INTRIN_int64 = 1 << 6,
INTRIN_float32 = 1 << 7,
INTRIN_float64 = 1 << 8,
INTRIN_elementwidths = 6,
};
typedef uint16_t llvm_ovr_tag_t;
static LLVMTypeRef intrin_types [INTRIN_vectorwidths][INTRIN_elementwidths];
static const llvm_ovr_tag_t intrin_arm64_ovr [] = {
#define INTRINS(sym, ...) 0,
#define INTRINS_OVR(sym, ...) 0,
#define INTRINS_OVR_2_ARG(sym, ...) 0,
#define INTRINS_OVR_3_ARG(sym, ...) 0,
#define INTRINS_OVR_TAG(sym, _, arch, spec) spec,
#define INTRINS_OVR_TAG_KIND(sym, _, kind, arch, spec) spec,
#include "llvm-intrinsics.h"
};
enum {
INTRIN_kind_ftoi = 1,
INTRIN_kind_widen,
INTRIN_kind_widen_across,
INTRIN_kind_across,
INTRIN_kind_arm64_dot_prod,
};
static const uint8_t intrin_kind [] = {
#define INTRINS(sym, ...) 0,
#define INTRINS_OVR(sym, ...) 0,
#define INTRINS_OVR_2_ARG(sym, ...) 0,
#define INTRINS_OVR_3_ARG(sym, ...) 0,
#define INTRINS_OVR_TAG(sym, _, arch, spec) 0,
#define INTRINS_OVR_TAG_KIND(sym, _, arch, kind, spec) kind,
#include "llvm-intrinsics.h"
};
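/*
 * An llvm_ovr_tag_t packs one vector-width bit (bits 0-2) with one
 * element-type bit (bits 3-8). E.g. for a 128-bit vector of int32 the tag is
 * INTRIN_vector128 | INTRIN_int32; shifting the element bit right by one, as
 * ovr_tag_smaller_elements () does below, yields INTRIN_vector128 | INTRIN_int16.
 */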
static inline llvm_ovr_tag_t
ovr_tag_force_scalar (llvm_ovr_tag_t tag)
{
return (tag & ~INTRIN_vectormask) | INTRIN_scalar;
}
static inline llvm_ovr_tag_t
ovr_tag_smaller_vector (llvm_ovr_tag_t tag)
{
return (tag & ~INTRIN_vectormask) | ((tag & INTRIN_vectormask) >> 1);
}
static inline llvm_ovr_tag_t
ovr_tag_smaller_elements (llvm_ovr_tag_t tag)
{
return ((tag & ~INTRIN_vectormask) >> 1) | (tag & INTRIN_vectormask);
}
static inline llvm_ovr_tag_t
ovr_tag_corresponding_integer (llvm_ovr_tag_t tag)
{
return ((tag & ~INTRIN_vectormask) >> 2) | (tag & INTRIN_vectormask);
}
static LLVMTypeRef
ovr_tag_to_llvm_type (llvm_ovr_tag_t tag)
{
int vw = 0;
int ew = 0;
if (tag & INTRIN_vector64) vw = 1;
else if (tag & INTRIN_vector128) vw = 2;
if (tag & INTRIN_int16) ew = 1;
else if (tag & INTRIN_int32) ew = 2;
else if (tag & INTRIN_int64) ew = 3;
else if (tag & INTRIN_float32) ew = 4;
else if (tag & INTRIN_float64) ew = 5;
return intrin_types [vw][ew];
}
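/*
 * Combine an intrinsic id and an overload tag into a single hash key: the id
 * is assumed to fit in the low 23 bits, with the tag packed above it.
 */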
static int
key_from_id_and_tag (int id, llvm_ovr_tag_t ovr_tag)
{
return (((int) ovr_tag) << 23) | id;
}
static llvm_ovr_tag_t
ovr_tag_from_mono_vector_class (MonoClass *klass)
{
int size = mono_class_value_size (klass, NULL);
llvm_ovr_tag_t ret = 0;
switch (size) {
case 8: ret |= INTRIN_vector64; break;
case 16: ret |= INTRIN_vector128; break;
}
MonoType *etype = mono_class_get_context (klass)->class_inst->type_argv [0];
switch (etype->type) {
case MONO_TYPE_I1: case MONO_TYPE_U1: ret |= INTRIN_int8; break;
case MONO_TYPE_I2: case MONO_TYPE_U2: ret |= INTRIN_int16; break;
case MONO_TYPE_I4: case MONO_TYPE_U4: ret |= INTRIN_int32; break;
case MONO_TYPE_I8: case MONO_TYPE_U8: ret |= INTRIN_int64; break;
case MONO_TYPE_R4: ret |= INTRIN_float32; break;
case MONO_TYPE_R8: ret |= INTRIN_float64; break;
}
return ret;
}
static llvm_ovr_tag_t
ovr_tag_from_llvm_type (LLVMTypeRef type)
{
llvm_ovr_tag_t ret = 0;
LLVMTypeKind kind = LLVMGetTypeKind (type);
LLVMTypeRef elem_t = NULL;
switch (kind) {
case LLVMVectorTypeKind: {
elem_t = LLVMGetElementType (type);
unsigned int bits = mono_llvm_get_prim_size_bits (type);
switch (bits) {
case 64: ret |= INTRIN_vector64; break;
case 128: ret |= INTRIN_vector128; break;
default: g_assert_not_reached ();
}
break;
}
default:
g_assert_not_reached ();
}
if (elem_t == i1_t) ret |= INTRIN_int8;
if (elem_t == i2_t) ret |= INTRIN_int16;
if (elem_t == i4_t) ret |= INTRIN_int32;
if (elem_t == i8_t) ret |= INTRIN_int64;
if (elem_t == r4_t) ret |= INTRIN_float32;
if (elem_t == r8_t) ret |= INTRIN_float64;
return ret;
}
static inline void
set_failure (EmitContext *ctx, const char *message)
{
TRACE_FAILURE (message);
ctx->cfg->exception_message = g_strdup (message);
ctx->cfg->disable_llvm = TRUE;
}
static LLVMValueRef
const_int1 (int v)
{
return LLVMConstInt (LLVMInt1Type (), v ? 1 : 0, FALSE);
}
static LLVMValueRef
const_int8 (int v)
{
return LLVMConstInt (LLVMInt8Type (), v, FALSE);
}
static LLVMValueRef
const_int32 (int v)
{
return LLVMConstInt (LLVMInt32Type (), v, FALSE);
}
static LLVMValueRef
const_int64 (int64_t v)
{
return LLVMConstInt (LLVMInt64Type (), v, FALSE);
}
/*
* IntPtrType:
*
* The LLVM type with width == TARGET_SIZEOF_VOID_P
*/
static LLVMTypeRef
IntPtrType (void)
{
return TARGET_SIZEOF_VOID_P == 8 ? LLVMInt64Type () : LLVMInt32Type ();
}
static LLVMTypeRef
ObjRefType (void)
{
return TARGET_SIZEOF_VOID_P == 8 ? LLVMPointerType (LLVMInt64Type (), 0) : LLVMPointerType (LLVMInt32Type (), 0);
}
static LLVMTypeRef
ThisType (void)
{
return TARGET_SIZEOF_VOID_P == 8 ? LLVMPointerType (LLVMInt64Type (), 0) : LLVMPointerType (LLVMInt32Type (), 0);
}
typedef struct {
int32_t size;
uint32_t align;
} MonoSizeAlign;
/*
* get_vtype_size:
*
* Return the size of the LLVM representation of the vtype T.
*/
static MonoSizeAlign
get_vtype_size_align (MonoType *t)
{
uint32_t align = 0;
int32_t size = mono_class_value_size (mono_class_from_mono_type_internal (t), &align);
/* LLVMArgAsIArgs depends on this since it stores whole words */
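/* E.g. a size of 3 is rounded up to 4, and 6 to 8. */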
while (size < 2 * TARGET_SIZEOF_VOID_P && mono_is_power_of_two (size) == -1)
size ++;
MonoSizeAlign ret = { size, align };
return ret;
}
/*
* simd_class_to_llvm_type:
*
* Return the LLVM type corresponding to the Mono.SIMD class KLASS
*/
static LLVMTypeRef
simd_class_to_llvm_type (EmitContext *ctx, MonoClass *klass)
{
const char *klass_name = m_class_get_name (klass);
if (!strcmp (klass_name, "Vector2d")) {
return LLVMVectorType (LLVMDoubleType (), 2);
} else if (!strcmp (klass_name, "Vector2l")) {
return LLVMVectorType (LLVMInt64Type (), 2);
} else if (!strcmp (klass_name, "Vector2ul")) {
return LLVMVectorType (LLVMInt64Type (), 2);
} else if (!strcmp (klass_name, "Vector4i")) {
return LLVMVectorType (LLVMInt32Type (), 4);
} else if (!strcmp (klass_name, "Vector4ui")) {
return LLVMVectorType (LLVMInt32Type (), 4);
} else if (!strcmp (klass_name, "Vector4f")) {
return LLVMVectorType (LLVMFloatType (), 4);
} else if (!strcmp (klass_name, "Vector8s")) {
return LLVMVectorType (LLVMInt16Type (), 8);
} else if (!strcmp (klass_name, "Vector8us")) {
return LLVMVectorType (LLVMInt16Type (), 8);
} else if (!strcmp (klass_name, "Vector16sb")) {
return LLVMVectorType (LLVMInt8Type (), 16);
} else if (!strcmp (klass_name, "Vector16b")) {
return LLVMVectorType (LLVMInt8Type (), 16);
} else if (!strcmp (klass_name, "Vector2")) {
/* System.Numerics: Vector2/Vector3 are represented as full 4 x float vectors */
return LLVMVectorType (LLVMFloatType (), 4);
} else if (!strcmp (klass_name, "Vector3")) {
return LLVMVectorType (LLVMFloatType (), 4);
} else if (!strcmp (klass_name, "Vector4")) {
return LLVMVectorType (LLVMFloatType (), 4);
} else if (!strcmp (klass_name, "Vector`1") || !strcmp (klass_name, "Vector64`1") || !strcmp (klass_name, "Vector128`1") || !strcmp (klass_name, "Vector256`1")) {
MonoType *etype = mono_class_get_generic_class (klass)->context.class_inst->type_argv [0];
int size = mono_class_value_size (klass, NULL);
switch (etype->type) {
case MONO_TYPE_I1:
case MONO_TYPE_U1:
return LLVMVectorType (LLVMInt8Type (), size);
case MONO_TYPE_I2:
case MONO_TYPE_U2:
return LLVMVectorType (LLVMInt16Type (), size / 2);
case MONO_TYPE_I4:
case MONO_TYPE_U4:
return LLVMVectorType (LLVMInt32Type (), size / 4);
case MONO_TYPE_I8:
case MONO_TYPE_U8:
return LLVMVectorType (LLVMInt64Type (), size / 8);
case MONO_TYPE_I:
case MONO_TYPE_U:
#if TARGET_SIZEOF_VOID_P == 8
return LLVMVectorType (LLVMInt64Type (), size / 8);
#else
return LLVMVectorType (LLVMInt32Type (), size / 4);
#endif
case MONO_TYPE_R4:
return LLVMVectorType (LLVMFloatType (), size / 4);
case MONO_TYPE_R8:
return LLVMVectorType (LLVMDoubleType (), size / 8);
default:
g_assert_not_reached ();
return NULL;
}
} else {
printf ("%s\n", klass_name);
NOT_IMPLEMENTED;
return NULL;
}
}
static LLVMTypeRef
simd_valuetuple_to_llvm_type (EmitContext *ctx, MonoClass *klass)
{
const char *klass_name = m_class_get_name (klass);
if (!strcmp (klass_name, "ValueTuple`2")) {
MonoType *etype = mono_class_get_generic_class (klass)->context.class_inst->type_argv [0];
if (etype->type != MONO_TYPE_GENERICINST)
g_assert_not_reached ();
MonoClass *eklass = etype->data.generic_class->cached_class;
LLVMTypeRef ltype = simd_class_to_llvm_type (ctx, eklass);
return LLVMArrayType (ltype, 2);
}
g_assert_not_reached ();
}
/* Return the 128 bit SIMD type corresponding to the mono type TYPE */
static inline G_GNUC_UNUSED LLVMTypeRef
type_to_sse_type (int type)
{
switch (type) {
case MONO_TYPE_I1:
case MONO_TYPE_U1:
return LLVMVectorType (LLVMInt8Type (), 16);
case MONO_TYPE_U2:
case MONO_TYPE_I2:
return LLVMVectorType (LLVMInt16Type (), 8);
case MONO_TYPE_U4:
case MONO_TYPE_I4:
return LLVMVectorType (LLVMInt32Type (), 4);
case MONO_TYPE_U8:
case MONO_TYPE_I8:
return LLVMVectorType (LLVMInt64Type (), 2);
case MONO_TYPE_I:
case MONO_TYPE_U:
#if TARGET_SIZEOF_VOID_P == 8
return LLVMVectorType (LLVMInt64Type (), 2);
#else
return LLVMVectorType (LLVMInt32Type (), 4);
#endif
case MONO_TYPE_R8:
return LLVMVectorType (LLVMDoubleType (), 2);
case MONO_TYPE_R4:
return LLVMVectorType (LLVMFloatType (), 4);
default:
g_assert_not_reached ();
return NULL;
}
}
static LLVMTypeRef
create_llvm_type_for_type (MonoLLVMModule *module, MonoClass *klass)
{
int i, size, nfields, esize;
LLVMTypeRef *eltypes;
char *name;
MonoType *t;
LLVMTypeRef ltype;
t = m_class_get_byval_arg (klass);
if (mini_type_is_hfa (t, &nfields, &esize)) {
/*
* This is needed on arm64 where HFAs are returned in
* registers.
*/
/* SIMD types have size 16 in mono_class_value_size () */
if (m_class_is_simd_type (klass))
nfields = 16 / esize;
size = nfields;
eltypes = g_new (LLVMTypeRef, size);
for (i = 0; i < size; ++i)
eltypes [i] = esize == 4 ? LLVMFloatType () : LLVMDoubleType ();
} else {
MonoSizeAlign size_align = get_vtype_size_align (t);
eltypes = g_new (LLVMTypeRef, size_align.size);
size = 0;
uint32_t bytes = 0;
uint32_t chunk = size_align.align < TARGET_SIZEOF_VOID_P ? size_align.align : TARGET_SIZEOF_VOID_P;
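/* Decompose the type into the largest naturally-aligned integer chunks that
 * fit, e.g. size 12 with align 4 becomes three i32 fields. */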
for (; chunk > 0; chunk = chunk >> 1) {
for (; (bytes + chunk) <= size_align.size; bytes += chunk) {
eltypes [size] = LLVMIntType (chunk * 8);
++size;
}
}
}
name = mono_type_full_name (m_class_get_byval_arg (klass));
ltype = LLVMStructCreateNamed (module->context, name);
LLVMStructSetBody (ltype, eltypes, size, FALSE);
g_free (eltypes);
g_free (name);
return ltype;
}
static LLVMTypeRef
primitive_type_to_llvm_type (MonoTypeEnum type)
{
switch (type) {
case MONO_TYPE_I1:
case MONO_TYPE_U1:
return LLVMInt8Type ();
case MONO_TYPE_I2:
case MONO_TYPE_U2:
return LLVMInt16Type ();
case MONO_TYPE_I4:
case MONO_TYPE_U4:
return LLVMInt32Type ();
case MONO_TYPE_I8:
case MONO_TYPE_U8:
return LLVMInt64Type ();
case MONO_TYPE_R4:
return LLVMFloatType ();
case MONO_TYPE_R8:
return LLVMDoubleType ();
case MONO_TYPE_I:
case MONO_TYPE_U:
return IntPtrType ();
default:
return NULL;
}
}
static MonoTypeEnum
inst_c1_type (const MonoInst *ins)
{
return (MonoTypeEnum)ins->inst_c1;
}
/*
* type_to_llvm_type:
*
* Return the LLVM type corresponding to T.
*/
static LLVMTypeRef
type_to_llvm_type (EmitContext *ctx, MonoType *t)
{
if (m_type_is_byref (t))
return ThisType ();
t = mini_get_underlying_type (t);
LLVMTypeRef prim_llvm_type = primitive_type_to_llvm_type (t->type);
if (prim_llvm_type != NULL)
return prim_llvm_type;
switch (t->type) {
case MONO_TYPE_VOID:
return LLVMVoidType ();
case MONO_TYPE_OBJECT:
return ObjRefType ();
case MONO_TYPE_PTR:
case MONO_TYPE_FNPTR: {
MonoClass *klass = mono_class_from_mono_type_internal (t);
MonoClass *ptr_klass = m_class_get_element_class (klass);
MonoType *ptr_type = m_class_get_byval_arg (ptr_klass);
/* Handle primitive pointers */
switch (ptr_type->type) {
case MONO_TYPE_I1:
case MONO_TYPE_I2:
case MONO_TYPE_I4:
case MONO_TYPE_U1:
case MONO_TYPE_U2:
case MONO_TYPE_U4:
return LLVMPointerType (type_to_llvm_type (ctx, ptr_type), 0);
}
return ObjRefType ();
}
case MONO_TYPE_VAR:
case MONO_TYPE_MVAR:
/* Because of generic sharing */
return ObjRefType ();
case MONO_TYPE_GENERICINST:
if (!mono_type_generic_inst_is_valuetype (t))
return ObjRefType ();
/* Fall through */
case MONO_TYPE_VALUETYPE:
case MONO_TYPE_TYPEDBYREF: {
MonoClass *klass;
LLVMTypeRef ltype;
klass = mono_class_from_mono_type_internal (t);
if (MONO_CLASS_IS_SIMD (ctx->cfg, klass))
return simd_class_to_llvm_type (ctx, klass);
if (m_class_is_enumtype (klass))
return type_to_llvm_type (ctx, mono_class_enum_basetype_internal (klass));
ltype = (LLVMTypeRef)g_hash_table_lookup (ctx->module->llvm_types, klass);
if (!ltype) {
ltype = create_llvm_type_for_type (ctx->module, klass);
g_hash_table_insert (ctx->module->llvm_types, klass, ltype);
}
return ltype;
}
default:
printf ("X: %d\n", t->type);
ctx->cfg->exception_message = g_strdup_printf ("type %s", mono_type_full_name (t));
ctx->cfg->disable_llvm = TRUE;
return NULL;
}
}
static gboolean
primitive_type_is_unsigned (MonoTypeEnum t)
{
switch (t) {
case MONO_TYPE_U1:
case MONO_TYPE_U2:
case MONO_TYPE_CHAR:
case MONO_TYPE_U4:
case MONO_TYPE_U8:
case MONO_TYPE_U:
return TRUE;
default:
return FALSE;
}
}
/*
* type_is_unsigned:
*
* Return whether T is an unsigned int type.
*/
static gboolean
type_is_unsigned (EmitContext *ctx, MonoType *t)
{
t = mini_get_underlying_type (t);
if (m_type_is_byref (t))
return FALSE;
return primitive_type_is_unsigned (t->type);
}
/*
* type_to_llvm_arg_type:
*
* Same as type_to_llvm_type, but treat i8/i16 as i32.
*/
static LLVMTypeRef
type_to_llvm_arg_type (EmitContext *ctx, MonoType *t)
{
LLVMTypeRef ptype = type_to_llvm_type (ctx, t);
if (ctx->cfg->llvm_only)
return ptype;
/*
* This works on all abis except arm64/ios which passes multiple
* arguments in one stack slot.
*/
#ifndef TARGET_ARM64
if (ptype == LLVMInt8Type () || ptype == LLVMInt16Type ()) {
/*
* LLVM generates code which only sets the lower bits, while JITted
* code expects all the bits to be set.
*/
ptype = LLVMInt32Type ();
}
#endif
return ptype;
}
/*
* llvm_type_to_stack_type:
*
* Return the LLVM type which needs to be used when a value of type TYPE is pushed
* on the IL stack.
*/
static G_GNUC_UNUSED LLVMTypeRef
llvm_type_to_stack_type (MonoCompile *cfg, LLVMTypeRef type)
{
if (type == NULL)
return NULL;
if (type == LLVMInt8Type ())
return LLVMInt32Type ();
else if (type == LLVMInt16Type ())
return LLVMInt32Type ();
else if (!cfg->r4fp && type == LLVMFloatType ())
return LLVMDoubleType ();
else
return type;
}
/*
* regtype_to_llvm_type:
*
* Return the LLVM type corresponding to the regtype C used in instruction
* descriptions.
*/
static LLVMTypeRef
regtype_to_llvm_type (char c)
{
switch (c) {
case 'i':
return LLVMInt32Type ();
case 'l':
return LLVMInt64Type ();
case 'f':
return LLVMDoubleType ();
default:
return NULL;
}
}
/*
* op_to_llvm_type:
*
* Return the LLVM type corresponding to the unary/binary opcode OPCODE.
*/
static LLVMTypeRef
op_to_llvm_type (int opcode)
{
switch (opcode) {
case OP_ICONV_TO_I1:
case OP_LCONV_TO_I1:
return LLVMInt8Type ();
case OP_ICONV_TO_U1:
case OP_LCONV_TO_U1:
return LLVMInt8Type ();
case OP_ICONV_TO_I2:
case OP_LCONV_TO_I2:
return LLVMInt16Type ();
case OP_ICONV_TO_U2:
case OP_LCONV_TO_U2:
return LLVMInt16Type ();
case OP_ICONV_TO_I4:
case OP_LCONV_TO_I4:
return LLVMInt32Type ();
case OP_ICONV_TO_U4:
case OP_LCONV_TO_U4:
return LLVMInt32Type ();
case OP_ICONV_TO_I8:
return LLVMInt64Type ();
case OP_ICONV_TO_R4:
return LLVMFloatType ();
case OP_ICONV_TO_R8:
return LLVMDoubleType ();
case OP_ICONV_TO_U8:
return LLVMInt64Type ();
case OP_FCONV_TO_I4:
return LLVMInt32Type ();
case OP_FCONV_TO_I8:
return LLVMInt64Type ();
case OP_FCONV_TO_I1:
case OP_FCONV_TO_U1:
case OP_RCONV_TO_I1:
case OP_RCONV_TO_U1:
return LLVMInt8Type ();
case OP_FCONV_TO_I2:
case OP_FCONV_TO_U2:
case OP_RCONV_TO_I2:
case OP_RCONV_TO_U2:
return LLVMInt16Type ();
case OP_FCONV_TO_U4:
case OP_RCONV_TO_U4:
return LLVMInt32Type ();
case OP_FCONV_TO_U8:
case OP_RCONV_TO_U8:
return LLVMInt64Type ();
case OP_FCONV_TO_I:
case OP_RCONV_TO_I:
return TARGET_SIZEOF_VOID_P == 8 ? LLVMInt64Type () : LLVMInt32Type ();
case OP_IADD_OVF:
case OP_IADD_OVF_UN:
case OP_ISUB_OVF:
case OP_ISUB_OVF_UN:
case OP_IMUL_OVF:
case OP_IMUL_OVF_UN:
return LLVMInt32Type ();
case OP_LADD_OVF:
case OP_LADD_OVF_UN:
case OP_LSUB_OVF:
case OP_LSUB_OVF_UN:
case OP_LMUL_OVF:
case OP_LMUL_OVF_UN:
return LLVMInt64Type ();
default:
printf ("%s\n", mono_inst_name (opcode));
g_assert_not_reached ();
return NULL;
}
}
#define CLAUSE_START(clause) ((clause)->try_offset)
#define CLAUSE_END(clause) (((clause))->try_offset + ((clause))->try_len)
/*
* load_store_to_llvm_type:
*
* Return the size/sign/zero extension corresponding to the load/store opcode
* OPCODE.
*/
static LLVMTypeRef
load_store_to_llvm_type (int opcode, int *size, gboolean *sext, gboolean *zext)
{
*sext = FALSE;
*zext = FALSE;
switch (opcode) {
case OP_LOADI1_MEMBASE:
case OP_STOREI1_MEMBASE_REG:
case OP_STOREI1_MEMBASE_IMM:
case OP_ATOMIC_LOAD_I1:
case OP_ATOMIC_STORE_I1:
*size = 1;
*sext = TRUE;
return LLVMInt8Type ();
case OP_LOADU1_MEMBASE:
case OP_LOADU1_MEM:
case OP_ATOMIC_LOAD_U1:
case OP_ATOMIC_STORE_U1:
*size = 1;
*zext = TRUE;
return LLVMInt8Type ();
case OP_LOADI2_MEMBASE:
case OP_STOREI2_MEMBASE_REG:
case OP_STOREI2_MEMBASE_IMM:
case OP_ATOMIC_LOAD_I2:
case OP_ATOMIC_STORE_I2:
*size = 2;
*sext = TRUE;
return LLVMInt16Type ();
case OP_LOADU2_MEMBASE:
case OP_LOADU2_MEM:
case OP_ATOMIC_LOAD_U2:
case OP_ATOMIC_STORE_U2:
*size = 2;
*zext = TRUE;
return LLVMInt16Type ();
case OP_LOADI4_MEMBASE:
case OP_LOADU4_MEMBASE:
case OP_LOADI4_MEM:
case OP_LOADU4_MEM:
case OP_STOREI4_MEMBASE_REG:
case OP_STOREI4_MEMBASE_IMM:
case OP_ATOMIC_LOAD_I4:
case OP_ATOMIC_STORE_I4:
case OP_ATOMIC_LOAD_U4:
case OP_ATOMIC_STORE_U4:
*size = 4;
return LLVMInt32Type ();
case OP_LOADI8_MEMBASE:
case OP_LOADI8_MEM:
case OP_STOREI8_MEMBASE_REG:
case OP_STOREI8_MEMBASE_IMM:
case OP_ATOMIC_LOAD_I8:
case OP_ATOMIC_STORE_I8:
case OP_ATOMIC_LOAD_U8:
case OP_ATOMIC_STORE_U8:
*size = 8;
return LLVMInt64Type ();
case OP_LOADR4_MEMBASE:
case OP_STORER4_MEMBASE_REG:
case OP_ATOMIC_LOAD_R4:
case OP_ATOMIC_STORE_R4:
*size = 4;
return LLVMFloatType ();
case OP_LOADR8_MEMBASE:
case OP_STORER8_MEMBASE_REG:
case OP_ATOMIC_LOAD_R8:
case OP_ATOMIC_STORE_R8:
*size = 8;
return LLVMDoubleType ();
case OP_LOAD_MEMBASE:
case OP_LOAD_MEM:
case OP_STORE_MEMBASE_REG:
case OP_STORE_MEMBASE_IMM:
*size = TARGET_SIZEOF_VOID_P;
return IntPtrType ();
default:
g_assert_not_reached ();
return NULL;
}
}
/*
* ovf_op_to_intrins:
*
* Return the LLVM intrinsics corresponding to the overflow opcode OPCODE.
*/
static IntrinsicId
ovf_op_to_intrins (int opcode)
{
switch (opcode) {
case OP_IADD_OVF:
return INTRINS_SADD_OVF_I32;
case OP_IADD_OVF_UN:
return INTRINS_UADD_OVF_I32;
case OP_ISUB_OVF:
return INTRINS_SSUB_OVF_I32;
case OP_ISUB_OVF_UN:
return INTRINS_USUB_OVF_I32;
case OP_IMUL_OVF:
return INTRINS_SMUL_OVF_I32;
case OP_IMUL_OVF_UN:
return INTRINS_UMUL_OVF_I32;
case OP_LADD_OVF:
return INTRINS_SADD_OVF_I64;
case OP_LADD_OVF_UN:
return INTRINS_UADD_OVF_I64;
case OP_LSUB_OVF:
return INTRINS_SSUB_OVF_I64;
case OP_LSUB_OVF_UN:
return INTRINS_USUB_OVF_I64;
case OP_LMUL_OVF:
return INTRINS_SMUL_OVF_I64;
case OP_LMUL_OVF_UN:
return INTRINS_UMUL_OVF_I64;
default:
g_assert_not_reached ();
return (IntrinsicId)0;
}
}
static IntrinsicId
simd_ins_to_intrins (int opcode)
{
switch (opcode) {
#if defined(TARGET_X86) || defined(TARGET_AMD64)
case OP_CVTPD2DQ:
return INTRINS_SSE_CVTPD2DQ;
case OP_CVTPS2DQ:
return INTRINS_SSE_CVTPS2DQ;
case OP_CVTPD2PS:
return INTRINS_SSE_CVTPD2PS;
case OP_CVTTPD2DQ:
return INTRINS_SSE_CVTTPD2DQ;
case OP_CVTTPS2DQ:
return INTRINS_SSE_CVTTPS2DQ;
case OP_SSE_SQRTSS:
return INTRINS_SSE_SQRT_SS;
case OP_SSE2_SQRTSD:
return INTRINS_SSE_SQRT_SD;
#endif
default:
g_assert_not_reached ();
return (IntrinsicId)0;
}
}
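/*
 * simd_op_to_llvm_type:
 *
 *   Return the LLVM vector type operated on by the SIMD opcode OPCODE.
 */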
static LLVMTypeRef
simd_op_to_llvm_type (int opcode)
{
#if defined(TARGET_X86) || defined(TARGET_AMD64)
switch (opcode) {
case OP_EXTRACT_R8:
case OP_EXPAND_R8:
return sse_r8_t;
case OP_EXTRACT_I8:
case OP_EXPAND_I8:
return sse_i8_t;
case OP_EXTRACT_I4:
case OP_EXPAND_I4:
return sse_i4_t;
case OP_EXTRACT_I2:
case OP_EXTRACTX_U2:
case OP_EXPAND_I2:
return sse_i2_t;
case OP_EXTRACT_I1:
case OP_EXPAND_I1:
return sse_i1_t;
case OP_EXTRACT_R4:
case OP_EXPAND_R4:
return sse_r4_t;
case OP_CVTPD2DQ:
case OP_CVTPD2PS:
case OP_CVTTPD2DQ:
return sse_r8_t;
case OP_CVTPS2DQ:
case OP_CVTTPS2DQ:
return sse_r4_t;
case OP_SQRTPS:
case OP_RSQRTPS:
case OP_DUPPS_LOW:
case OP_DUPPS_HIGH:
return sse_r4_t;
case OP_SQRTPD:
case OP_DUPPD:
return sse_r8_t;
default:
g_assert_not_reached ();
return NULL;
}
#else
return NULL;
#endif
}
static void
set_cold_cconv (LLVMValueRef func)
{
/*
 * xcode10 (watchOS) and ARM/ARM64 don't seem to support preserveall, it fails with:
* fatal error: error in backend: Unsupported calling convention
*/
#if !defined(TARGET_WATCHOS) && !defined(TARGET_ARM) && !defined(TARGET_ARM64)
LLVMSetFunctionCallConv (func, LLVMColdCallConv);
#endif
}
static void
set_call_cold_cconv (LLVMValueRef func)
{
#if !defined(TARGET_WATCHOS) && !defined(TARGET_ARM) && !defined(TARGET_ARM64)
LLVMSetInstructionCallConv (func, LLVMColdCallConv);
#endif
}
/*
* get_bb:
*
* Return the LLVM basic block corresponding to BB.
*/
static LLVMBasicBlockRef
get_bb (EmitContext *ctx, MonoBasicBlock *bb)
{
char bb_name_buf [128];
char *bb_name;
if (ctx->bblocks [bb->block_num].bblock == NULL) {
if (bb->flags & BB_EXCEPTION_HANDLER) {
int clause_index = (mono_get_block_region_notry (ctx->cfg, bb->region) >> 8) - 1;
sprintf (bb_name_buf, "EH_CLAUSE%d_BB%d", clause_index, bb->block_num);
bb_name = bb_name_buf;
} else if (bb->block_num < 256) {
if (!ctx->module->bb_names) {
ctx->module->bb_names_len = 256;
ctx->module->bb_names = g_new0 (char*, ctx->module->bb_names_len);
}
if (!ctx->module->bb_names [bb->block_num]) {
char *n;
n = g_strdup_printf ("BB%d", bb->block_num);
mono_memory_barrier ();
ctx->module->bb_names [bb->block_num] = n;
}
bb_name = ctx->module->bb_names [bb->block_num];
} else {
sprintf (bb_name_buf, "BB%d", bb->block_num);
bb_name = bb_name_buf;
}
ctx->bblocks [bb->block_num].bblock = LLVMAppendBasicBlock (ctx->lmethod, bb_name);
ctx->bblocks [bb->block_num].end_bblock = ctx->bblocks [bb->block_num].bblock;
}
return ctx->bblocks [bb->block_num].bblock;
}
/*
* get_end_bb:
*
* Return the last LLVM bblock corresponding to BB.
* This might not be equal to the bb returned by get_bb () since we need to generate
* multiple LLVM bblocks for a mono bblock to handle throwing exceptions.
*/
static LLVMBasicBlockRef
get_end_bb (EmitContext *ctx, MonoBasicBlock *bb)
{
get_bb (ctx, bb);
return ctx->bblocks [bb->block_num].end_bblock;
}
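/*
 * gen_bb:
 *
 *   Append a new LLVM bblock to the current method, named PREFIX plus a unique index.
 */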
static LLVMBasicBlockRef
gen_bb (EmitContext *ctx, const char *prefix)
{
char bb_name [128];
sprintf (bb_name, "%s%d", prefix, ++ ctx->ex_index);
return LLVMAppendBasicBlock (ctx->lmethod, bb_name);
}
/*
* resolve_patch:
*
* Return the target of the patch identified by TYPE and TARGET.
*/
static gpointer
resolve_patch (MonoCompile *cfg, MonoJumpInfoType type, gconstpointer target)
{
MonoJumpInfo ji;
ERROR_DECL (error);
gpointer res;
memset (&ji, 0, sizeof (ji));
ji.type = type;
ji.data.target = target;
res = mono_resolve_patch_target (cfg->method, NULL, &ji, FALSE, error);
mono_error_assert_ok (error);
return res;
}
/*
* convert_full:
*
* Emit code to convert the LLVM value V to DTYPE.
*/
static LLVMValueRef
convert_full (EmitContext *ctx, LLVMValueRef v, LLVMTypeRef dtype, gboolean is_unsigned)
{
LLVMTypeRef stype = LLVMTypeOf (v);
if (stype != dtype) {
gboolean ext = FALSE;
/* Extend */
if (dtype == LLVMInt64Type () && (stype == LLVMInt32Type () || stype == LLVMInt16Type () || stype == LLVMInt8Type ()))
ext = TRUE;
else if (dtype == LLVMInt32Type () && (stype == LLVMInt16Type () || stype == LLVMInt8Type ()))
ext = TRUE;
else if (dtype == LLVMInt16Type () && (stype == LLVMInt8Type ()))
ext = TRUE;
if (ext)
return is_unsigned ? LLVMBuildZExt (ctx->builder, v, dtype, "") : LLVMBuildSExt (ctx->builder, v, dtype, "");
if (dtype == LLVMDoubleType () && stype == LLVMFloatType ())
return LLVMBuildFPExt (ctx->builder, v, dtype, "");
/* Trunc */
if (stype == LLVMInt64Type () && (dtype == LLVMInt32Type () || dtype == LLVMInt16Type () || dtype == LLVMInt8Type ()))
return LLVMBuildTrunc (ctx->builder, v, dtype, "");
if (stype == LLVMInt32Type () && (dtype == LLVMInt16Type () || dtype == LLVMInt8Type ()))
return LLVMBuildTrunc (ctx->builder, v, dtype, "");
if (stype == LLVMInt16Type () && dtype == LLVMInt8Type ())
return LLVMBuildTrunc (ctx->builder, v, dtype, "");
if (stype == LLVMDoubleType () && dtype == LLVMFloatType ())
return LLVMBuildFPTrunc (ctx->builder, v, dtype, "");
if (LLVMGetTypeKind (stype) == LLVMPointerTypeKind && LLVMGetTypeKind (dtype) == LLVMPointerTypeKind)
return LLVMBuildBitCast (ctx->builder, v, dtype, "");
if (LLVMGetTypeKind (dtype) == LLVMPointerTypeKind)
return LLVMBuildIntToPtr (ctx->builder, v, dtype, "");
if (LLVMGetTypeKind (stype) == LLVMPointerTypeKind)
return LLVMBuildPtrToInt (ctx->builder, v, dtype, "");
if (mono_arch_is_soft_float ()) {
if (stype == LLVMInt32Type () && dtype == LLVMFloatType ())
return LLVMBuildBitCast (ctx->builder, v, dtype, "");
if (stype == LLVMInt32Type () && dtype == LLVMDoubleType ())
return LLVMBuildBitCast (ctx->builder, LLVMBuildZExt (ctx->builder, v, LLVMInt64Type (), ""), dtype, "");
}
if (LLVMGetTypeKind (stype) == LLVMVectorTypeKind && LLVMGetTypeKind (dtype) == LLVMVectorTypeKind) {
if (mono_llvm_get_prim_size_bits (stype) == mono_llvm_get_prim_size_bits (dtype))
return LLVMBuildBitCast (ctx->builder, v, dtype, "");
}
mono_llvm_dump_value (v);
mono_llvm_dump_type (dtype);
printf ("\n");
g_assert_not_reached ();
return NULL;
} else {
return v;
}
}
static LLVMValueRef
convert (EmitContext *ctx, LLVMValueRef v, LLVMTypeRef dtype)
{
return convert_full (ctx, v, dtype, FALSE);
}
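/*
 * emit_memset:
 *
 *   Emit a call to the llvm.memset intrinsic to zero SIZE bytes starting at V.
 */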
static void
emit_memset (EmitContext *ctx, LLVMBuilderRef builder, LLVMValueRef v, LLVMValueRef size, int alignment)
{
LLVMValueRef args [5];
int aindex = 0;
args [aindex ++] = v;
args [aindex ++] = LLVMConstInt (LLVMInt8Type (), 0, FALSE);
args [aindex ++] = size;
args [aindex ++] = LLVMConstInt (LLVMInt1Type (), 0, FALSE);
LLVMBuildCall (builder, get_intrins (ctx, INTRINS_MEMSET), args, aindex, "");
}
/*
* emit_volatile_load:
*
* If vreg is volatile, emit a load from its address.
*/
static LLVMValueRef
emit_volatile_load (EmitContext *ctx, int vreg)
{
MonoType *t;
LLVMValueRef v;
	// On arm64, we pass the rgctx in a callee-saved register (x15), and llvm
	// might keep the value in that register even though the register is marked
	// as 'reserved' inside llvm, so use a volatile load.
v = mono_llvm_build_load (ctx->builder, ctx->addresses [vreg], "", TRUE);
t = ctx->vreg_cli_types [vreg];
if (t && !m_type_is_byref (t)) {
/*
* Might have to zero extend since llvm doesn't have
* unsigned types.
*/
if (t->type == MONO_TYPE_U1 || t->type == MONO_TYPE_U2 || t->type == MONO_TYPE_CHAR || t->type == MONO_TYPE_BOOLEAN)
v = LLVMBuildZExt (ctx->builder, v, LLVMInt32Type (), "");
else if (t->type == MONO_TYPE_I1 || t->type == MONO_TYPE_I2)
v = LLVMBuildSExt (ctx->builder, v, LLVMInt32Type (), "");
else if (t->type == MONO_TYPE_U8)
v = LLVMBuildZExt (ctx->builder, v, LLVMInt64Type (), "");
}
return v;
}
/*
* emit_volatile_store:
*
 * If VREG is volatile, emit a store of its value to its address.
*/
static void
emit_volatile_store (EmitContext *ctx, int vreg)
{
MonoInst *var = get_vreg_to_inst (ctx->cfg, vreg);
if (var && var->flags & (MONO_INST_VOLATILE|MONO_INST_INDIRECT)) {
g_assert (ctx->addresses [vreg]);
#ifdef TARGET_WASM
/* Need volatile stores otherwise the compiler might move them */
mono_llvm_build_store (ctx->builder, convert (ctx, ctx->values [vreg], type_to_llvm_type (ctx, var->inst_vtype)), ctx->addresses [vreg], TRUE, LLVM_BARRIER_NONE);
#else
LLVMBuildStore (ctx->builder, convert (ctx, ctx->values [vreg], type_to_llvm_type (ctx, var->inst_vtype)), ctx->addresses [vreg]);
#endif
}
}
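/*
 * sig_to_llvm_sig_no_cinfo:
 *
 *   Return the LLVM signature corresponding to the mono signature SIG when no
 * calling convention information is available.
 */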
static LLVMTypeRef
sig_to_llvm_sig_no_cinfo (EmitContext *ctx, MonoMethodSignature *sig)
{
LLVMTypeRef ret_type;
LLVMTypeRef *param_types = NULL;
LLVMTypeRef res;
int i, pindex;
ret_type = type_to_llvm_type (ctx, sig->ret);
if (!ctx_ok (ctx))
return NULL;
param_types = g_new0 (LLVMTypeRef, (sig->param_count * 8) + 3);
pindex = 0;
if (sig->hasthis)
param_types [pindex ++] = ThisType ();
for (i = 0; i < sig->param_count; ++i)
param_types [pindex ++] = type_to_llvm_arg_type (ctx, sig->params [i]);
if (!ctx_ok (ctx)) {
g_free (param_types);
return NULL;
}
res = LLVMFunctionType (ret_type, param_types, pindex, FALSE);
g_free (param_types);
return res;
}
/*
* sig_to_llvm_sig_full:
*
* Return the LLVM signature corresponding to the mono signature SIG using the
* calling convention information in CINFO. Fill out the parameter mapping information in CINFO.
*/
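/*
 * As a rough sketch of the layout computed below (the exact positions depend
 * on CINFO): the sret pointer for LLVMArgVtypeByRef returns comes first, then
 * the rgctx arg (when not in llvm-only mode), the imt arg, the vret address,
 * 'this', the explicit parameters, and finally the rgctx/dummy arg in
 * llvm-only mode.
 */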
static LLVMTypeRef
sig_to_llvm_sig_full (EmitContext *ctx, MonoMethodSignature *sig, LLVMCallInfo *cinfo)
{
LLVMTypeRef ret_type;
LLVMTypeRef *param_types = NULL;
LLVMTypeRef res;
int i, j, pindex, vret_arg_pindex = 0;
gboolean vretaddr = FALSE;
MonoType *rtype;
if (!cinfo)
return sig_to_llvm_sig_no_cinfo (ctx, sig);
ret_type = type_to_llvm_type (ctx, sig->ret);
if (!ctx_ok (ctx))
return NULL;
rtype = mini_get_underlying_type (sig->ret);
switch (cinfo->ret.storage) {
case LLVMArgVtypeInReg:
/* LLVM models this by returning an aggregate value */
if (cinfo->ret.pair_storage [0] == LLVMArgInIReg && cinfo->ret.pair_storage [1] == LLVMArgNone) {
LLVMTypeRef members [2];
members [0] = IntPtrType ();
ret_type = LLVMStructType (members, 1, FALSE);
} else if (cinfo->ret.pair_storage [0] == LLVMArgNone && cinfo->ret.pair_storage [1] == LLVMArgNone) {
/* Empty struct */
ret_type = LLVMVoidType ();
} else if (cinfo->ret.pair_storage [0] == LLVMArgInIReg && cinfo->ret.pair_storage [1] == LLVMArgInIReg) {
LLVMTypeRef members [2];
members [0] = IntPtrType ();
members [1] = IntPtrType ();
ret_type = LLVMStructType (members, 2, FALSE);
} else {
g_assert_not_reached ();
}
break;
case LLVMArgVtypeByVal:
/* Vtype returned normally by val */
break;
case LLVMArgVtypeAsScalar: {
int size = mono_class_value_size (mono_class_from_mono_type_internal (rtype), NULL);
/* LLVM models this by returning an int */
if (size < TARGET_SIZEOF_VOID_P) {
g_assert (cinfo->ret.nslots == 1);
ret_type = LLVMIntType (size * 8);
} else {
g_assert (cinfo->ret.nslots == 1 || cinfo->ret.nslots == 2);
ret_type = LLVMIntType (cinfo->ret.nslots * sizeof (target_mgreg_t) * 8);
}
break;
}
case LLVMArgAsIArgs:
ret_type = LLVMArrayType (IntPtrType (), cinfo->ret.nslots);
break;
case LLVMArgFpStruct: {
/* Vtype returned as a fp struct */
LLVMTypeRef members [16];
/* Have to create our own structure since we don't map fp structures to LLVM fp structures yet */
for (i = 0; i < cinfo->ret.nslots; ++i)
members [i] = cinfo->ret.esize == 8 ? LLVMDoubleType () : LLVMFloatType ();
ret_type = LLVMStructType (members, cinfo->ret.nslots, FALSE);
break;
}
case LLVMArgVtypeByRef:
/* Vtype returned using a hidden argument */
ret_type = LLVMVoidType ();
break;
case LLVMArgVtypeRetAddr:
case LLVMArgGsharedvtFixed:
case LLVMArgGsharedvtFixedVtype:
case LLVMArgGsharedvtVariable:
vretaddr = TRUE;
ret_type = LLVMVoidType ();
break;
case LLVMArgWasmVtypeAsScalar:
g_assert (cinfo->ret.esize);
ret_type = LLVMIntType (cinfo->ret.esize * 8);
break;
default:
break;
}
param_types = g_new0 (LLVMTypeRef, (sig->param_count * 8) + 3);
pindex = 0;
if (cinfo->ret.storage == LLVMArgVtypeByRef) {
/*
* Has to be the first argument because of the sret argument attribute
* FIXME: This might conflict with passing 'this' as the first argument, but
* this is only used on arm64 which has a dedicated struct return register.
*/
cinfo->vret_arg_pindex = pindex;
param_types [pindex] = type_to_llvm_arg_type (ctx, sig->ret);
if (!ctx_ok (ctx)) {
g_free (param_types);
return NULL;
}
param_types [pindex] = LLVMPointerType (param_types [pindex], 0);
pindex ++;
}
if (!ctx->llvm_only && cinfo->rgctx_arg) {
cinfo->rgctx_arg_pindex = pindex;
param_types [pindex] = ctx->module->ptr_type;
pindex ++;
}
if (cinfo->imt_arg) {
cinfo->imt_arg_pindex = pindex;
param_types [pindex] = ctx->module->ptr_type;
pindex ++;
}
if (vretaddr) {
/* Compute the index in the LLVM signature where the vret arg needs to be passed */
vret_arg_pindex = pindex;
if (cinfo->vret_arg_index == 1) {
/* Add the slots consumed by the first argument */
LLVMArgInfo *ainfo = &cinfo->args [0];
switch (ainfo->storage) {
case LLVMArgVtypeInReg:
for (j = 0; j < 2; ++j) {
if (ainfo->pair_storage [j] == LLVMArgInIReg)
vret_arg_pindex ++;
}
break;
default:
vret_arg_pindex ++;
}
}
cinfo->vret_arg_pindex = vret_arg_pindex;
}
if (vretaddr && vret_arg_pindex == pindex)
param_types [pindex ++] = IntPtrType ();
if (sig->hasthis) {
cinfo->this_arg_pindex = pindex;
param_types [pindex ++] = ThisType ();
cinfo->args [0].pindex = cinfo->this_arg_pindex;
}
if (vretaddr && vret_arg_pindex == pindex)
param_types [pindex ++] = IntPtrType ();
for (i = 0; i < sig->param_count; ++i) {
LLVMArgInfo *ainfo = &cinfo->args [i + sig->hasthis];
if (vretaddr && vret_arg_pindex == pindex)
param_types [pindex ++] = IntPtrType ();
ainfo->pindex = pindex;
switch (ainfo->storage) {
case LLVMArgVtypeInReg:
for (j = 0; j < 2; ++j) {
switch (ainfo->pair_storage [j]) {
case LLVMArgInIReg:
param_types [pindex ++] = LLVMIntType (TARGET_SIZEOF_VOID_P * 8);
break;
case LLVMArgNone:
break;
default:
g_assert_not_reached ();
}
}
break;
case LLVMArgVtypeByVal:
param_types [pindex] = type_to_llvm_arg_type (ctx, ainfo->type);
if (!ctx_ok (ctx))
break;
param_types [pindex] = LLVMPointerType (param_types [pindex], 0);
pindex ++;
break;
case LLVMArgAsIArgs:
if (ainfo->esize == 8)
param_types [pindex] = LLVMArrayType (LLVMInt64Type (), ainfo->nslots);
else
param_types [pindex] = LLVMArrayType (IntPtrType (), ainfo->nslots);
pindex ++;
break;
case LLVMArgVtypeAddr:
case LLVMArgVtypeByRef:
param_types [pindex] = type_to_llvm_arg_type (ctx, ainfo->type);
if (!ctx_ok (ctx))
break;
param_types [pindex] = LLVMPointerType (param_types [pindex], 0);
pindex ++;
break;
case LLVMArgAsFpArgs: {
int j;
/* Emit dummy fp arguments if needed so the rest is passed on the stack */
for (j = 0; j < ainfo->ndummy_fpargs; ++j)
param_types [pindex ++] = LLVMDoubleType ();
for (j = 0; j < ainfo->nslots; ++j)
param_types [pindex ++] = ainfo->esize == 8 ? LLVMDoubleType () : LLVMFloatType ();
break;
}
case LLVMArgVtypeAsScalar:
g_assert_not_reached ();
break;
case LLVMArgWasmVtypeAsScalar:
g_assert (ainfo->esize);
param_types [pindex ++] = LLVMIntType (ainfo->esize * 8);
break;
case LLVMArgGsharedvtFixed:
case LLVMArgGsharedvtFixedVtype:
param_types [pindex ++] = LLVMPointerType (type_to_llvm_arg_type (ctx, ainfo->type), 0);
break;
case LLVMArgGsharedvtVariable:
param_types [pindex ++] = LLVMPointerType (IntPtrType (), 0);
break;
default:
param_types [pindex ++] = type_to_llvm_arg_type (ctx, ainfo->type);
break;
}
}
if (!ctx_ok (ctx)) {
g_free (param_types);
return NULL;
}
if (vretaddr && vret_arg_pindex == pindex)
param_types [pindex ++] = IntPtrType ();
if (ctx->llvm_only && cinfo->rgctx_arg) {
/* Pass the rgctx as the last argument */
cinfo->rgctx_arg_pindex = pindex;
param_types [pindex] = ctx->module->ptr_type;
pindex ++;
} else if (ctx->llvm_only && cinfo->dummy_arg) {
/* Pass a dummy arg last */
cinfo->dummy_arg_pindex = pindex;
param_types [pindex] = ctx->module->ptr_type;
pindex ++;
}
res = LLVMFunctionType (ret_type, param_types, pindex, FALSE);
g_free (param_types);
return res;
}
static LLVMTypeRef
sig_to_llvm_sig (EmitContext *ctx, MonoMethodSignature *sig)
{
return sig_to_llvm_sig_full (ctx, sig, NULL);
}
/*
 * LLVMFunctionType0:
*
* Create an LLVM function type from the arguments.
*/
static G_GNUC_UNUSED LLVMTypeRef
LLVMFunctionType0 (LLVMTypeRef ReturnType,
int IsVarArg)
{
return LLVMFunctionType (ReturnType, NULL, 0, IsVarArg);
}
/*
* LLVMFunctionType1:
*
* Create an LLVM function type from the arguments.
*/
static G_GNUC_UNUSED LLVMTypeRef
LLVMFunctionType1 (LLVMTypeRef ReturnType,
LLVMTypeRef ParamType1,
int IsVarArg)
{
LLVMTypeRef param_types [1];
param_types [0] = ParamType1;
return LLVMFunctionType (ReturnType, param_types, 1, IsVarArg);
}
/*
* LLVMFunctionType2:
*
* Create an LLVM function type from the arguments.
*/
static G_GNUC_UNUSED LLVMTypeRef
LLVMFunctionType2 (LLVMTypeRef ReturnType,
LLVMTypeRef ParamType1,
LLVMTypeRef ParamType2,
int IsVarArg)
{
LLVMTypeRef param_types [2];
param_types [0] = ParamType1;
param_types [1] = ParamType2;
return LLVMFunctionType (ReturnType, param_types, 2, IsVarArg);
}
/*
* LLVMFunctionType3:
*
* Create an LLVM function type from the arguments.
*/
static G_GNUC_UNUSED LLVMTypeRef
LLVMFunctionType3 (LLVMTypeRef ReturnType,
LLVMTypeRef ParamType1,
LLVMTypeRef ParamType2,
LLVMTypeRef ParamType3,
int IsVarArg)
{
LLVMTypeRef param_types [3];
param_types [0] = ParamType1;
param_types [1] = ParamType2;
param_types [2] = ParamType3;
return LLVMFunctionType (ReturnType, param_types, 3, IsVarArg);
}
static G_GNUC_UNUSED LLVMTypeRef
LLVMFunctionType4 (LLVMTypeRef ReturnType,
LLVMTypeRef ParamType1,
LLVMTypeRef ParamType2,
LLVMTypeRef ParamType3,
LLVMTypeRef ParamType4,
int IsVarArg)
{
LLVMTypeRef param_types [4];
param_types [0] = ParamType1;
param_types [1] = ParamType2;
param_types [2] = ParamType3;
param_types [3] = ParamType4;
return LLVMFunctionType (ReturnType, param_types, 4, IsVarArg);
}
static G_GNUC_UNUSED LLVMTypeRef
LLVMFunctionType5 (LLVMTypeRef ReturnType,
LLVMTypeRef ParamType1,
LLVMTypeRef ParamType2,
LLVMTypeRef ParamType3,
LLVMTypeRef ParamType4,
LLVMTypeRef ParamType5,
int IsVarArg)
{
LLVMTypeRef param_types [5];
param_types [0] = ParamType1;
param_types [1] = ParamType2;
param_types [2] = ParamType3;
param_types [3] = ParamType4;
param_types [4] = ParamType5;
return LLVMFunctionType (ReturnType, param_types, 5, IsVarArg);
}
/*
* create_builder:
*
* Create an LLVM builder and remember it so it can be freed later.
*/
static LLVMBuilderRef
create_builder (EmitContext *ctx)
{
LLVMBuilderRef builder = LLVMCreateBuilder ();
if (mono_use_fast_math)
mono_llvm_set_fast_math (builder);
ctx->builders = g_slist_prepend_mempool (ctx->cfg->mempool, ctx->builders, builder);
emit_default_dbg_loc (ctx, builder);
return builder;
}
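/*
 * get_aotconst_name:
 *
 *   Compute a symbol name for the GOT slot identified by TYPE/DATA/GOT_OFFSET,
 * e.g. "jit_icall_<icall name>" or "<lowercased patch type>_<got offset>".
 */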
static char*
get_aotconst_name (MonoJumpInfoType type, gconstpointer data, int got_offset)
{
char *name;
int len;
switch (type) {
case MONO_PATCH_INFO_JIT_ICALL_ID:
name = g_strdup_printf ("jit_icall_%s", mono_find_jit_icall_info ((MonoJitICallId)(gsize)data)->name);
break;
case MONO_PATCH_INFO_JIT_ICALL_ADDR_NOCALL:
name = g_strdup_printf ("jit_icall_addr_nocall_%s", mono_find_jit_icall_info ((MonoJitICallId)(gsize)data)->name);
break;
case MONO_PATCH_INFO_RGCTX_SLOT_INDEX: {
MonoJumpInfoRgctxEntry *entry = (MonoJumpInfoRgctxEntry*)data;
name = g_strdup_printf ("rgctx_slot_index_%s", mono_rgctx_info_type_to_str (entry->info_type));
break;
}
case MONO_PATCH_INFO_AOT_MODULE:
case MONO_PATCH_INFO_GC_SAFE_POINT_FLAG:
case MONO_PATCH_INFO_GC_CARD_TABLE_ADDR:
case MONO_PATCH_INFO_GC_NURSERY_START:
case MONO_PATCH_INFO_GC_NURSERY_BITS:
case MONO_PATCH_INFO_INTERRUPTION_REQUEST_FLAG:
name = g_strdup_printf ("%s", mono_ji_type_to_string (type));
len = strlen (name);
for (int i = 0; i < len; ++i)
name [i] = tolower (name [i]);
break;
default:
name = g_strdup_printf ("%s_%d", mono_ji_type_to_string (type), got_offset);
len = strlen (name);
for (int i = 0; i < len; ++i)
name [i] = tolower (name [i]);
break;
}
return name;
}
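/*
 * compute_aot_got_offset:
 *
 *   Return the GOT offset assigned to JI, widening the recorded type of the
 * slot to a generic pointer if it is accessed with multiple types.
 */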
static int
compute_aot_got_offset (MonoLLVMModule *module, MonoJumpInfo *ji, LLVMTypeRef llvm_type)
{
guint32 got_offset = mono_aot_get_got_offset (ji);
LLVMTypeRef lookup_type = (LLVMTypeRef) g_hash_table_lookup (module->got_idx_to_type, GINT_TO_POINTER (got_offset));
if (!lookup_type) {
lookup_type = llvm_type;
} else if (llvm_type != lookup_type) {
lookup_type = module->ptr_type;
} else {
return got_offset;
}
g_hash_table_insert (module->got_idx_to_type, GINT_TO_POINTER (got_offset), lookup_type);
return got_offset;
}
/* Allocate a GOT slot for TYPE/DATA, and emit IR to load it */
static LLVMValueRef
get_aotconst_module (MonoLLVMModule *module, LLVMBuilderRef builder, MonoJumpInfoType type, gconstpointer data, LLVMTypeRef llvm_type,
guint32 *out_got_offset, MonoJumpInfo **out_ji)
{
guint32 got_offset;
LLVMValueRef load;
MonoJumpInfo tmp_ji;
tmp_ji.type = type;
tmp_ji.data.target = data;
MonoJumpInfo *ji = mono_aot_patch_info_dup (&tmp_ji);
if (out_ji)
*out_ji = ji;
got_offset = compute_aot_got_offset (module, ji, llvm_type);
module->max_got_offset = MAX (module->max_got_offset, got_offset);
if (out_got_offset)
*out_got_offset = got_offset;
if (module->static_link && type == MONO_PATCH_INFO_GC_SAFE_POINT_FLAG) {
if (!module->gc_safe_point_flag_var) {
const char *symbol = "mono_polling_required";
module->gc_safe_point_flag_var = LLVMAddGlobal (module->lmodule, llvm_type, symbol);
LLVMSetLinkage (module->gc_safe_point_flag_var, LLVMExternalLinkage);
}
return module->gc_safe_point_flag_var;
}
if (module->static_link && type == MONO_PATCH_INFO_INTERRUPTION_REQUEST_FLAG) {
if (!module->interrupt_flag_var) {
const char *symbol = "mono_thread_interruption_request_flag";
module->interrupt_flag_var = LLVMAddGlobal (module->lmodule, llvm_type, symbol);
LLVMSetLinkage (module->interrupt_flag_var, LLVMExternalLinkage);
}
return module->interrupt_flag_var;
}
LLVMValueRef const_var = g_hash_table_lookup (module->aotconst_vars, GINT_TO_POINTER (got_offset));
if (!const_var) {
LLVMTypeRef type = llvm_type;
// FIXME:
char *name = get_aotconst_name (ji->type, ji->data.target, got_offset);
char *symbol = g_strdup_printf ("aotconst_%s", name);
g_free (name);
LLVMValueRef v = LLVMAddGlobal (module->lmodule, type, symbol);
LLVMSetVisibility (v, LLVMHiddenVisibility);
LLVMSetLinkage (v, LLVMInternalLinkage);
LLVMSetInitializer (v, LLVMConstNull (type));
// FIXME:
LLVMSetAlignment (v, 8);
g_hash_table_insert (module->aotconst_vars, GINT_TO_POINTER (got_offset), v);
const_var = v;
}
load = LLVMBuildLoad (builder, const_var, "");
if (mono_aot_is_shared_got_offset (got_offset))
set_invariant_load_flag (load);
if (type == MONO_PATCH_INFO_LDSTR)
set_nonnull_load_flag (load);
load = LLVMBuildBitCast (builder, load, llvm_type, "");
return load;
}
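/* Like get_aotconst_module (), but also link the patch into the current cfg's patch list */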
static LLVMValueRef
get_aotconst (EmitContext *ctx, MonoJumpInfoType type, gconstpointer data, LLVMTypeRef llvm_type)
{
MonoCompile *cfg;
guint32 got_offset;
MonoJumpInfo *ji;
LLVMValueRef load;
cfg = ctx->cfg;
load = get_aotconst_module (ctx->module, ctx->builder, type, data, llvm_type, &got_offset, &ji);
ji->next = cfg->patch_info;
cfg->patch_info = ji;
/*
	 * If the got slot is shared, it means it's initialized when the aot image is loaded, so we don't need to
* explicitly initialize it.
*/
if (!mono_aot_is_shared_got_offset (got_offset)) {
//mono_print_ji (ji);
//printf ("\n");
ctx->cfg->got_access_count ++;
}
return load;
}
static LLVMValueRef
get_dummy_aotconst (EmitContext *ctx, LLVMTypeRef llvm_type)
{
LLVMValueRef indexes [2];
LLVMValueRef got_entry_addr, load;
LLVMBuilderRef builder = ctx->builder;
indexes [0] = LLVMConstInt (LLVMInt32Type (), 0, FALSE);
indexes [1] = LLVMConstInt (LLVMInt32Type (), 0, FALSE);
got_entry_addr = LLVMBuildGEP (builder, ctx->module->dummy_got_var, indexes, 2, "");
load = LLVMBuildLoad (builder, got_entry_addr, "");
load = convert (ctx, load, llvm_type);
return load;
}
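/* Information about a callsite which will be patched by mono_llvm_fixup_aot_module () */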
typedef struct {
MonoJumpInfo *ji;
MonoMethod *method;
LLVMValueRef load;
LLVMTypeRef type;
LLVMValueRef lmethod;
} CallSite;
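/*
 * get_callee_llvmonly:
 *
 *   Return an llvm value representing the callee given by TYPE/DATA in
 * llvm-only mode: either a directly callable function, a placeholder load
 * fixed up later, or a load from the GOT.
 */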
static LLVMValueRef
get_callee_llvmonly (EmitContext *ctx, LLVMTypeRef llvm_sig, MonoJumpInfoType type, gconstpointer data)
{
LLVMValueRef callee;
char *callee_name = NULL;
if (ctx->module->static_link && ctx->module->assembly->image != mono_get_corlib ()) {
if (type == MONO_PATCH_INFO_JIT_ICALL_ID) {
MonoJitICallInfo * const info = mono_find_jit_icall_info ((MonoJitICallId)(gsize)data);
g_assert (info);
if (info->func != info->wrapper) {
type = MONO_PATCH_INFO_METHOD;
data = mono_icall_get_wrapper_method (info);
callee_name = mono_aot_get_mangled_method_name ((MonoMethod*)data);
}
} else if (type == MONO_PATCH_INFO_METHOD) {
MonoMethod *method = (MonoMethod*)data;
if (m_class_get_image (method->klass) != ctx->module->assembly->image && mono_aot_is_externally_callable (method))
callee_name = mono_aot_get_mangled_method_name (method);
}
}
if (!callee_name)
callee_name = mono_aot_get_direct_call_symbol (type, data);
if (callee_name) {
/* Directly callable */
// FIXME: Locking
callee = (LLVMValueRef)g_hash_table_lookup (ctx->module->direct_callables, callee_name);
if (!callee) {
callee = LLVMAddFunction (ctx->lmodule, callee_name, llvm_sig);
LLVMSetVisibility (callee, LLVMHiddenVisibility);
g_hash_table_insert (ctx->module->direct_callables, (char*)callee_name, callee);
} else {
/* LLVMTypeRef's are uniqued */
if (LLVMGetElementType (LLVMTypeOf (callee)) != llvm_sig)
return LLVMConstBitCast (callee, LLVMPointerType (llvm_sig, 0));
g_free (callee_name);
}
return callee;
}
/*
* Change references to icalls/pinvokes/jit icalls to their wrappers when in corlib, so
* they can be called directly.
*/
if (ctx->module->assembly->image == mono_get_corlib () && type == MONO_PATCH_INFO_JIT_ICALL_ID) {
MonoJitICallInfo * const info = mono_find_jit_icall_info ((MonoJitICallId)(gsize)data);
if (info->func != info->wrapper) {
type = MONO_PATCH_INFO_METHOD;
data = mono_icall_get_wrapper_method (info);
}
}
if (ctx->module->assembly->image == mono_get_corlib () && type == MONO_PATCH_INFO_METHOD) {
MonoMethod *method = (MonoMethod*)data;
if (m_method_is_icall (method) || m_method_is_pinvoke (method))
data = mono_marshal_get_native_wrapper (method, TRUE, TRUE);
}
/*
* Instead of emitting an indirect call through a got slot, emit a placeholder, and
* replace it with a direct call or an indirect call in mono_llvm_fixup_aot_module ()
* after all methods have been emitted.
*/
if (type == MONO_PATCH_INFO_METHOD) {
MonoMethod *method = (MonoMethod*)data;
if (m_class_get_image (method->klass)->assembly == ctx->module->assembly) {
MonoJumpInfo tmp_ji;
tmp_ji.type = type;
tmp_ji.data.target = method;
MonoJumpInfo *ji = mono_aot_patch_info_dup (&tmp_ji);
ji->next = ctx->cfg->patch_info;
ctx->cfg->patch_info = ji;
LLVMTypeRef llvm_type = LLVMPointerType (llvm_sig, 0);
ctx->cfg->got_access_count ++;
CallSite *info = g_new0 (CallSite, 1);
info->method = method;
info->ji = ji;
info->type = llvm_type;
/*
			 * Emit a dummy load to represent the callee, and later replace it with
			 * either a reference to the llvm method for the callee, or a load from
			 * the GOT.
*/
LLVMValueRef load = get_dummy_aotconst (ctx, llvm_type);
info->load = load;
info->lmethod = ctx->lmethod;
g_ptr_array_add (ctx->callsite_list, info);
return load;
}
}
/*
* All other calls are made through the GOT.
*/
callee = get_aotconst (ctx, type, data, LLVMPointerType (llvm_sig, 0));
return callee;
}
/*
* get_callee:
*
* Return an llvm value representing the callee given by the arguments.
*/
static LLVMValueRef
get_callee (EmitContext *ctx, LLVMTypeRef llvm_sig, MonoJumpInfoType type, gconstpointer data)
{
LLVMValueRef callee;
char *callee_name;
MonoJumpInfo *ji = NULL;
if (ctx->llvm_only)
return get_callee_llvmonly (ctx, llvm_sig, type, data);
callee_name = NULL;
/* Cross-assembly direct calls */
if (type == MONO_PATCH_INFO_METHOD) {
MonoMethod *cmethod = (MonoMethod*)data;
if (m_class_get_image (cmethod->klass) != ctx->module->assembly->image) {
MonoJumpInfo tmp_ji;
memset (&tmp_ji, 0, sizeof (MonoJumpInfo));
tmp_ji.type = type;
tmp_ji.data.target = data;
if (mono_aot_is_direct_callable (&tmp_ji)) {
/*
* This will add a reference to cmethod's image so it will
				 * be loaded when the current AOT image is loaded, ensuring that
				 * the GOT slots used by the init method code are initialized.
*/
tmp_ji.type = MONO_PATCH_INFO_IMAGE;
tmp_ji.data.image = m_class_get_image (cmethod->klass);
ji = mono_aot_patch_info_dup (&tmp_ji);
mono_aot_get_got_offset (ji);
callee_name = mono_aot_get_mangled_method_name (cmethod);
callee = (LLVMValueRef)g_hash_table_lookup (ctx->module->direct_callables, callee_name);
if (!callee) {
callee = LLVMAddFunction (ctx->lmodule, callee_name, llvm_sig);
LLVMSetLinkage (callee, LLVMExternalLinkage);
g_hash_table_insert (ctx->module->direct_callables, callee_name, callee);
} else {
/* LLVMTypeRef's are uniqued */
if (LLVMGetElementType (LLVMTypeOf (callee)) != llvm_sig)
callee = LLVMConstBitCast (callee, LLVMPointerType (llvm_sig, 0));
g_free (callee_name);
}
return callee;
}
}
}
callee_name = mono_aot_get_plt_symbol (type, data);
if (!callee_name)
return NULL;
if (ctx->cfg->compile_aot)
/* Add a patch so referenced wrappers can be compiled in full aot mode */
mono_add_patch_info (ctx->cfg, 0, type, data);
// FIXME: Locking
callee = (LLVMValueRef)g_hash_table_lookup (ctx->module->plt_entries, callee_name);
if (!callee) {
callee = LLVMAddFunction (ctx->lmodule, callee_name, llvm_sig);
LLVMSetVisibility (callee, LLVMHiddenVisibility);
g_hash_table_insert (ctx->module->plt_entries, (char*)callee_name, callee);
}
if (ctx->cfg->compile_aot) {
ji = g_new0 (MonoJumpInfo, 1);
ji->type = type;
ji->data.target = data;
g_hash_table_insert (ctx->module->plt_entries_ji, ji, callee);
}
return callee;
}
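/*
 * get_jit_callee:
 *
 *   JIT mode version of get_callee (): return a load from a global variable
 * initialized with the native address of the target.
 */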
static LLVMValueRef
get_jit_callee (EmitContext *ctx, const char *name, LLVMTypeRef llvm_sig, MonoJumpInfoType type, gconstpointer data)
{
gpointer target;
// This won't be patched so compile the wrapper immediately
if (type == MONO_PATCH_INFO_JIT_ICALL_ID) {
MonoJitICallInfo * const info = mono_find_jit_icall_info ((MonoJitICallId)(gsize)data);
target = (gpointer)mono_icall_get_wrapper_full (info, TRUE);
} else {
target = resolve_patch (ctx->cfg, type, data);
}
LLVMValueRef tramp_var = LLVMAddGlobal (ctx->lmodule, LLVMPointerType (llvm_sig, 0), name);
LLVMSetInitializer (tramp_var, LLVMConstIntToPtr (LLVMConstInt (LLVMInt64Type (), (guint64)(size_t)target, FALSE), LLVMPointerType (llvm_sig, 0)));
LLVMSetLinkage (tramp_var, LLVMExternalLinkage);
LLVMValueRef callee = LLVMBuildLoad (ctx->builder, tramp_var, "");
return callee;
}
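/*
 * get_handler_clause:
 *
 *   Return the index of the exception clause whose try region covers BB, or -1.
 */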
static int
get_handler_clause (MonoCompile *cfg, MonoBasicBlock *bb)
{
MonoMethodHeader *header = cfg->header;
MonoExceptionClause *clause;
int i;
/* Directly */
if (bb->region != -1 && MONO_BBLOCK_IS_IN_REGION (bb, MONO_REGION_TRY))
return (bb->region >> 8) - 1;
/* Indirectly */
for (i = 0; i < header->num_clauses; ++i) {
clause = &header->clauses [i];
if (MONO_OFFSET_IN_CLAUSE (clause, bb->real_offset) && clause->flags == MONO_EXCEPTION_CLAUSE_NONE)
return i;
}
return -1;
}
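/*
 * get_most_deep_clause:
 *
 *   Return the most deeply nested clause containing BB, or NULL.
 */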
static MonoExceptionClause *
get_most_deep_clause (MonoCompile *cfg, EmitContext *ctx, MonoBasicBlock *bb)
{
if (bb == cfg->bb_init)
return NULL;
	// Since the clauses are sorted by nesting, we just need
	// the first one that the bb is a member of
for (int i = 0; i < cfg->header->num_clauses; i++) {
MonoExceptionClause *curr = &cfg->header->clauses [i];
if (MONO_OFFSET_IN_CLAUSE (curr, bb->real_offset))
return curr;
}
return NULL;
}
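/*
 * set_metadata_flag:
 *
 *   Attach a metadata node named FLAG_NAME to the instruction V.
 */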
static void
set_metadata_flag (LLVMValueRef v, const char *flag_name)
{
LLVMValueRef md_arg;
int md_kind;
md_kind = LLVMGetMDKindID (flag_name, strlen (flag_name));
md_arg = LLVMMDString ("mono", 4);
LLVMSetMetadata (v, md_kind, LLVMMDNode (&md_arg, 1));
}
static void
set_nonnull_load_flag (LLVMValueRef v)
{
LLVMValueRef md_arg;
int md_kind;
const char *flag_name;
flag_name = "nonnull";
md_kind = LLVMGetMDKindID (flag_name, strlen (flag_name));
md_arg = LLVMMDString ("<index>", strlen ("<index>"));
LLVMSetMetadata (v, md_kind, LLVMMDNode (&md_arg, 1));
}
static void
set_nontemporal_flag (LLVMValueRef v)
{
LLVMValueRef md_arg;
int md_kind;
const char *flag_name;
// FIXME: Cache this
flag_name = "nontemporal";
md_kind = LLVMGetMDKindID (flag_name, strlen (flag_name));
md_arg = const_int32 (1);
LLVMSetMetadata (v, md_kind, LLVMMDNode (&md_arg, 1));
}
static void
set_invariant_load_flag (LLVMValueRef v)
{
LLVMValueRef md_arg;
int md_kind;
const char *flag_name;
// FIXME: Cache this
flag_name = "invariant.load";
md_kind = LLVMGetMDKindID (flag_name, strlen (flag_name));
md_arg = LLVMMDString ("<index>", strlen ("<index>"));
LLVMSetMetadata (v, md_kind, LLVMMDNode (&md_arg, 1));
}
/*
* emit_call:
*
 * Emit an LLVM call or invoke instruction depending on whether the call is inside
* a try region.
*/
static LLVMValueRef
emit_call (EmitContext *ctx, MonoBasicBlock *bb, LLVMBuilderRef *builder_ref, LLVMValueRef callee, LLVMValueRef *args, int pindex)
{
MonoCompile *cfg = ctx->cfg;
LLVMValueRef lcall = NULL;
LLVMBuilderRef builder = *builder_ref;
MonoExceptionClause *clause;
if (ctx->llvm_only) {
clause = bb ? get_most_deep_clause (cfg, ctx, bb) : NULL;
// FIXME: Use an invoke only for calls inside try-catch blocks
if (clause && (!cfg->deopt || ctx->has_catch)) {
/*
* Have to use an invoke instead of a call, branching to the
* handler bblock of the clause containing this bblock.
*/
intptr_t key = CLAUSE_END (clause);
LLVMBasicBlockRef lpad_bb = (LLVMBasicBlockRef)g_hash_table_lookup (ctx->exc_meta, (gconstpointer)key);
// FIXME: Find the one that has the lowest end bound for the right start address
// FIXME: Finally + nesting
if (lpad_bb) {
LLVMBasicBlockRef noex_bb = gen_bb (ctx, "CALL_NOEX_BB");
/* Use an invoke */
lcall = LLVMBuildInvoke (builder, callee, args, pindex, noex_bb, lpad_bb, "");
builder = ctx->builder = create_builder (ctx);
LLVMPositionBuilderAtEnd (ctx->builder, noex_bb);
ctx->bblocks [bb->block_num].end_bblock = noex_bb;
}
}
} else {
int clause_index = get_handler_clause (cfg, bb);
if (clause_index != -1) {
MonoMethodHeader *header = cfg->header;
MonoExceptionClause *ec = &header->clauses [clause_index];
MonoBasicBlock *tblock;
LLVMBasicBlockRef ex_bb, noex_bb;
/*
* Have to use an invoke instead of a call, branching to the
* handler bblock of the clause containing this bblock.
*/
g_assert (ec->flags == MONO_EXCEPTION_CLAUSE_NONE || ec->flags == MONO_EXCEPTION_CLAUSE_FINALLY || ec->flags == MONO_EXCEPTION_CLAUSE_FAULT);
tblock = cfg->cil_offset_to_bb [ec->handler_offset];
g_assert (tblock);
ctx->bblocks [tblock->block_num].invoke_target = TRUE;
ex_bb = get_bb (ctx, tblock);
noex_bb = gen_bb (ctx, "NOEX_BB");
/* Use an invoke */
lcall = LLVMBuildInvoke (builder, callee, args, pindex, noex_bb, ex_bb, "");
builder = ctx->builder = create_builder (ctx);
LLVMPositionBuilderAtEnd (ctx->builder, noex_bb);
ctx->bblocks [bb->block_num].end_bblock = noex_bb;
}
}
if (!lcall) {
lcall = LLVMBuildCall (builder, callee, args, pindex, "");
ctx->builder = builder;
}
if (builder_ref)
*builder_ref = ctx->builder;
return lcall;
}
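/*
 * emit_load:
 *
 *   Emit an LLVM load from ADDR, using an atomic load if BARRIER is set.
 */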
static LLVMValueRef
emit_load (EmitContext *ctx, MonoBasicBlock *bb, LLVMBuilderRef *builder_ref, int size, LLVMValueRef addr, LLVMValueRef base, const char *name, gboolean is_faulting, gboolean is_volatile, BarrierKind barrier)
{
LLVMValueRef res;
/*
* We emit volatile loads for loads which can fault, because otherwise
* LLVM will generate invalid code when encountering a load from a
* NULL address.
*/
if (barrier != LLVM_BARRIER_NONE)
res = mono_llvm_build_atomic_load (*builder_ref, addr, name, is_volatile, size, barrier);
else
res = mono_llvm_build_load (*builder_ref, addr, name, is_volatile);
return res;
}
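/*
 * emit_store_general:
 *
 *   Emit an LLVM store of VALUE to ADDR, using a store with the given memory
 * barrier if BARRIER is set.
 */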
static void
emit_store_general (EmitContext *ctx, MonoBasicBlock *bb, LLVMBuilderRef *builder_ref, int size, LLVMValueRef value, LLVMValueRef addr, LLVMValueRef base, gboolean is_faulting, gboolean is_volatile, BarrierKind barrier)
{
if (barrier != LLVM_BARRIER_NONE)
mono_llvm_build_aligned_store (*builder_ref, value, addr, barrier, size);
else
mono_llvm_build_store (*builder_ref, value, addr, is_volatile, barrier);
}
static void
emit_store (EmitContext *ctx, MonoBasicBlock *bb, LLVMBuilderRef *builder_ref, int size, LLVMValueRef value, LLVMValueRef addr, LLVMValueRef base, gboolean is_faulting, gboolean is_volatile)
{
emit_store_general (ctx, bb, builder_ref, size, value, addr, base, is_faulting, is_volatile, LLVM_BARRIER_NONE);
}
/*
* emit_cond_system_exception:
*
 * Emit code to throw the exception EXC_TYPE if the condition CMP is true.
* Might set the ctx exception.
*/
static void
emit_cond_system_exception (EmitContext *ctx, MonoBasicBlock *bb, const char *exc_type, LLVMValueRef cmp, gboolean force_explicit)
{
LLVMBasicBlockRef ex_bb, ex2_bb = NULL, noex_bb;
LLVMBuilderRef builder;
MonoClass *exc_class;
LLVMValueRef args [2];
LLVMValueRef callee;
gboolean no_pc = FALSE;
static MonoClass *exc_classes [MONO_EXC_INTRINS_NUM];
if (IS_TARGET_AMD64)
/* Some platforms don't require the pc argument */
no_pc = TRUE;
int exc_id = mini_exception_id_by_name (exc_type);
if (!exc_classes [exc_id])
exc_classes [exc_id] = mono_class_load_from_name (mono_get_corlib (), "System", exc_type);
exc_class = exc_classes [exc_id];
ex_bb = gen_bb (ctx, "EX_BB");
if (ctx->llvm_only)
ex2_bb = gen_bb (ctx, "EX2_BB");
noex_bb = gen_bb (ctx, "NOEX_BB");
LLVMValueRef branch = LLVMBuildCondBr (ctx->builder, cmp, ex_bb, noex_bb);
if (exc_id == MONO_EXC_NULL_REF && !ctx->cfg->disable_llvm_implicit_null_checks && !force_explicit) {
mono_llvm_set_implicit_branch (ctx->builder, branch);
}
/* Emit exception throwing code */
ctx->builder = builder = create_builder (ctx);
LLVMPositionBuilderAtEnd (builder, ex_bb);
if (ctx->cfg->llvm_only) {
LLVMBuildBr (builder, ex2_bb);
ctx->builder = builder = create_builder (ctx);
LLVMPositionBuilderAtEnd (ctx->builder, ex2_bb);
if (exc_id == MONO_EXC_NULL_REF) {
static LLVMTypeRef sig;
if (!sig)
sig = LLVMFunctionType0 (LLVMVoidType (), FALSE);
/* Can't cache this */
callee = get_callee (ctx, sig, MONO_PATCH_INFO_JIT_ICALL_ADDR, GUINT_TO_POINTER (MONO_JIT_ICALL_mini_llvmonly_throw_nullref_exception));
emit_call (ctx, bb, &builder, callee, NULL, 0);
} else {
static LLVMTypeRef sig;
if (!sig)
sig = LLVMFunctionType1 (LLVMVoidType (), LLVMInt32Type (), FALSE);
callee = get_callee (ctx, sig, MONO_PATCH_INFO_JIT_ICALL_ADDR, GUINT_TO_POINTER (MONO_JIT_ICALL_mini_llvmonly_throw_corlib_exception));
args [0] = LLVMConstInt (LLVMInt32Type (), m_class_get_type_token (exc_class) - MONO_TOKEN_TYPE_DEF, FALSE);
emit_call (ctx, bb, &builder, callee, args, 1);
}
LLVMBuildUnreachable (builder);
ctx->builder = builder = create_builder (ctx);
LLVMPositionBuilderAtEnd (ctx->builder, noex_bb);
ctx->bblocks [bb->block_num].end_bblock = noex_bb;
ctx->ex_index ++;
return;
}
callee = ctx->module->throw_corlib_exception;
if (!callee) {
LLVMTypeRef sig;
if (no_pc)
sig = LLVMFunctionType1 (LLVMVoidType (), LLVMInt32Type (), FALSE);
else
sig = LLVMFunctionType2 (LLVMVoidType (), LLVMInt32Type (), LLVMPointerType (LLVMInt8Type (), 0), FALSE);
const MonoJitICallId icall_id = MONO_JIT_ICALL_mono_llvm_throw_corlib_exception_abs_trampoline;
if (ctx->cfg->compile_aot) {
callee = get_callee (ctx, sig, MONO_PATCH_INFO_JIT_ICALL_ID, GUINT_TO_POINTER (icall_id));
} else {
/*
* Differences between the LLVM/non-LLVM throw corlib exception trampoline:
* - On x86, LLVM generated code doesn't push the arguments
			 * - The trampoline takes the throw address as an argument, not a pc offset.
*/
callee = get_jit_callee (ctx, "llvm_throw_corlib_exception_trampoline", sig, MONO_PATCH_INFO_JIT_ICALL_ID, GUINT_TO_POINTER (icall_id));
/*
* Make sure that ex_bb starts with the invoke, so the block address points to it, and not to the load
* added by get_jit_callee ().
*/
ex2_bb = gen_bb (ctx, "EX2_BB");
LLVMBuildBr (builder, ex2_bb);
ex_bb = ex2_bb;
ctx->builder = builder = create_builder (ctx);
LLVMPositionBuilderAtEnd (ctx->builder, ex2_bb);
}
}
args [0] = LLVMConstInt (LLVMInt32Type (), m_class_get_type_token (exc_class) - MONO_TOKEN_TYPE_DEF, FALSE);
/*
* The LLVM mono branch contains changes so a block address can be passed as an
* argument to a call.
*/
if (no_pc) {
emit_call (ctx, bb, &builder, callee, args, 1);
} else {
args [1] = LLVMBlockAddress (ctx->lmethod, ex_bb);
emit_call (ctx, bb, &builder, callee, args, 2);
}
LLVMBuildUnreachable (builder);
ctx->builder = builder = create_builder (ctx);
LLVMPositionBuilderAtEnd (ctx->builder, noex_bb);
ctx->bblocks [bb->block_num].end_bblock = noex_bb;
ctx->ex_index ++;
return;
}
/*
* emit_args_to_vtype:
*
 * Emit code to store the vtype passed in the scalar arguments ARGS to the address ADDRESS.
*/
static void
emit_args_to_vtype (EmitContext *ctx, LLVMBuilderRef builder, MonoType *t, LLVMValueRef address, LLVMArgInfo *ainfo, LLVMValueRef *args)
{
int j, size, nslots;
MonoClass *klass;
t = mini_get_underlying_type (t);
klass = mono_class_from_mono_type_internal (t);
size = mono_class_value_size (klass, NULL);
if (MONO_CLASS_IS_SIMD (ctx->cfg, klass))
address = LLVMBuildBitCast (ctx->builder, address, LLVMPointerType (LLVMInt8Type (), 0), "");
if (ainfo->storage == LLVMArgAsFpArgs)
nslots = ainfo->nslots;
else
nslots = 2;
for (j = 0; j < nslots; ++j) {
LLVMValueRef index [2], addr, daddr;
int part_size = size > TARGET_SIZEOF_VOID_P ? TARGET_SIZEOF_VOID_P : size;
LLVMTypeRef part_type;
while (part_size != 1 && part_size != 2 && part_size != 4 && part_size < 8)
part_size ++;
if (ainfo->pair_storage [j] == LLVMArgNone)
continue;
switch (ainfo->pair_storage [j]) {
case LLVMArgInIReg: {
part_type = LLVMIntType (part_size * 8);
if (MONO_CLASS_IS_SIMD (ctx->cfg, klass)) {
index [0] = LLVMConstInt (LLVMInt32Type (), j * TARGET_SIZEOF_VOID_P, FALSE);
addr = LLVMBuildGEP (builder, address, index, 1, "");
} else {
daddr = LLVMBuildBitCast (ctx->builder, address, LLVMPointerType (IntPtrType (), 0), "");
index [0] = LLVMConstInt (LLVMInt32Type (), j, FALSE);
addr = LLVMBuildGEP (builder, daddr, index, 1, "");
}
LLVMBuildStore (builder, convert (ctx, args [j], part_type), LLVMBuildBitCast (ctx->builder, addr, LLVMPointerType (part_type, 0), ""));
break;
}
case LLVMArgInFPReg: {
LLVMTypeRef arg_type;
if (ainfo->esize == 8)
arg_type = LLVMDoubleType ();
else
arg_type = LLVMFloatType ();
index [0] = LLVMConstInt (LLVMInt32Type (), j, FALSE);
daddr = LLVMBuildBitCast (ctx->builder, address, LLVMPointerType (arg_type, 0), "");
addr = LLVMBuildGEP (builder, daddr, index, 1, "");
LLVMBuildStore (builder, args [j], addr);
break;
}
case LLVMArgNone:
break;
default:
g_assert_not_reached ();
}
size -= TARGET_SIZEOF_VOID_P;
}
}
/*
* emit_vtype_to_args:
*
* Emit code to load a vtype at address ADDRESS into scalar arguments. Store the arguments
* into ARGS, and the number of arguments into NARGS.
*/
static void
emit_vtype_to_args (EmitContext *ctx, LLVMBuilderRef builder, MonoType *t, LLVMValueRef address, LLVMArgInfo *ainfo, LLVMValueRef *args, guint32 *nargs)
{
int pindex = 0;
int j, nslots;
LLVMTypeRef arg_type;
t = mini_get_underlying_type (t);
int32_t size = get_vtype_size_align (t).size;
if (MONO_CLASS_IS_SIMD (ctx->cfg, mono_class_from_mono_type_internal (t)))
address = LLVMBuildBitCast (ctx->builder, address, LLVMPointerType (LLVMInt8Type (), 0), "");
if (ainfo->storage == LLVMArgAsFpArgs)
nslots = ainfo->nslots;
else
nslots = 2;
for (j = 0; j < nslots; ++j) {
LLVMValueRef index [2], addr, daddr;
int partsize = size > TARGET_SIZEOF_VOID_P ? TARGET_SIZEOF_VOID_P : size;
if (ainfo->pair_storage [j] == LLVMArgNone)
continue;
switch (ainfo->pair_storage [j]) {
case LLVMArgInIReg:
if (MONO_CLASS_IS_SIMD (ctx->cfg, mono_class_from_mono_type_internal (t))) {
index [0] = LLVMConstInt (LLVMInt32Type (), j * TARGET_SIZEOF_VOID_P, FALSE);
addr = LLVMBuildGEP (builder, address, index, 1, "");
} else {
daddr = LLVMBuildBitCast (ctx->builder, address, LLVMPointerType (IntPtrType (), 0), "");
index [0] = LLVMConstInt (LLVMInt32Type (), j, FALSE);
addr = LLVMBuildGEP (builder, daddr, index, 1, "");
}
args [pindex ++] = convert (ctx, LLVMBuildLoad (builder, LLVMBuildBitCast (ctx->builder, addr, LLVMPointerType (LLVMIntType (partsize * 8), 0), ""), ""), IntPtrType ());
break;
case LLVMArgInFPReg:
if (ainfo->esize == 8)
arg_type = LLVMDoubleType ();
else
arg_type = LLVMFloatType ();
daddr = LLVMBuildBitCast (ctx->builder, address, LLVMPointerType (arg_type, 0), "");
index [0] = LLVMConstInt (LLVMInt32Type (), j, FALSE);
addr = LLVMBuildGEP (builder, daddr, index, 1, "");
args [pindex ++] = LLVMBuildLoad (builder, addr, "");
break;
case LLVMArgNone:
break;
default:
g_assert_not_reached ();
}
size -= TARGET_SIZEOF_VOID_P;
}
*nargs = pindex;
}
static LLVMValueRef
build_alloca_llvm_type_name (EmitContext *ctx, LLVMTypeRef t, int align, const char *name)
{
/*
* Have to place all alloca's at the end of the entry bb, since otherwise they would
* get executed every time control reaches them.
*/
LLVMPositionBuilder (ctx->alloca_builder, get_bb (ctx, ctx->cfg->bb_entry), ctx->last_alloca);
ctx->last_alloca = mono_llvm_build_alloca (ctx->alloca_builder, t, NULL, align, name);
return ctx->last_alloca;
}
static LLVMValueRef
build_alloca_llvm_type (EmitContext *ctx, LLVMTypeRef t, int align)
{
return build_alloca_llvm_type_name (ctx, t, align, "");
}
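/*
 * build_named_alloca:
 *
 *   Allocate stack space for the mono type T in the entry bblock, aligned to
 * its class's alignment rounded up to a power of two.
 */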
static LLVMValueRef
build_named_alloca (EmitContext *ctx, MonoType *t, char const *name)
{
MonoClass *k = mono_class_from_mono_type_internal (t);
int align;
g_assert (!mini_is_gsharedvt_variable_type (t));
if (MONO_CLASS_IS_SIMD (ctx->cfg, k))
align = mono_class_value_size (k, NULL);
else
align = mono_class_min_align (k);
/* Sometimes align is not a power of 2 */
while (mono_is_power_of_two (align) == -1)
align ++;
return build_alloca_llvm_type_name (ctx, type_to_llvm_type (ctx, t), align, name);
}
static LLVMValueRef
build_alloca (EmitContext *ctx, MonoType *t)
{
return build_named_alloca (ctx, t, "");
}
static LLVMValueRef
emit_gsharedvt_ldaddr (EmitContext *ctx, int vreg)
{
/*
* gsharedvt local.
* Compute the address of the local as gsharedvt_locals_var + gsharedvt_info_var->locals_offsets [idx].
*/
MonoCompile *cfg = ctx->cfg;
LLVMBuilderRef builder = ctx->builder;
LLVMValueRef offset, offset_var;
LLVMValueRef info_var = ctx->values [cfg->gsharedvt_info_var->dreg];
LLVMValueRef locals_var = ctx->values [cfg->gsharedvt_locals_var->dreg];
LLVMValueRef ptr;
char *name;
g_assert (info_var);
g_assert (locals_var);
int idx = cfg->gsharedvt_vreg_to_idx [vreg] - 1;
offset = LLVMConstInt (LLVMInt32Type (), MONO_STRUCT_OFFSET (MonoGSharedVtMethodRuntimeInfo, entries) + (idx * TARGET_SIZEOF_VOID_P), FALSE);
ptr = LLVMBuildAdd (builder, convert (ctx, info_var, IntPtrType ()), convert (ctx, offset, IntPtrType ()), "");
name = g_strdup_printf ("gsharedvt_local_%d_offset", vreg);
offset_var = LLVMBuildLoad (builder, convert (ctx, ptr, LLVMPointerType (LLVMInt32Type (), 0)), name);
return LLVMBuildAdd (builder, convert (ctx, locals_var, IntPtrType ()), convert (ctx, offset_var, IntPtrType ()), "");
}
/*
* Put the global into the 'llvm.used' array to prevent it from being optimized away.
*/
static void
mark_as_used (MonoLLVMModule *module, LLVMValueRef global)
{
if (!module->used)
module->used = g_ptr_array_sized_new (16);
g_ptr_array_add (module->used, global);
}
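/*
 * emit_llvm_used:
 *
 *   Emit the 'llvm.used' array referencing the globals collected by mark_as_used ().
 */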
static void
emit_llvm_used (MonoLLVMModule *module)
{
LLVMModuleRef lmodule = module->lmodule;
LLVMTypeRef used_type;
LLVMValueRef used, *used_elem;
int i;
if (!module->used)
return;
used_type = LLVMArrayType (LLVMPointerType (LLVMInt8Type (), 0), module->used->len);
used = LLVMAddGlobal (lmodule, used_type, "llvm.used");
used_elem = g_new0 (LLVMValueRef, module->used->len);
for (i = 0; i < module->used->len; ++i)
used_elem [i] = LLVMConstBitCast ((LLVMValueRef)g_ptr_array_index (module->used, i), LLVMPointerType (LLVMInt8Type (), 0));
LLVMSetInitializer (used, LLVMConstArray (LLVMPointerType (LLVMInt8Type (), 0), used_elem, module->used->len));
LLVMSetLinkage (used, LLVMAppendingLinkage);
LLVMSetSection (used, "llvm.metadata");
}
/*
* emit_get_method:
*
* Emit a function mapping method indexes to their code
*/
static void
emit_get_method (MonoLLVMModule *module)
{
LLVMModuleRef lmodule = module->lmodule;
LLVMValueRef func, switch_ins, m;
LLVMBasicBlockRef entry_bb, fail_bb, bb, code_start_bb, code_end_bb, main_bb;
LLVMBasicBlockRef *bbs = NULL;
LLVMTypeRef rtype;
LLVMBuilderRef builder = LLVMCreateBuilder ();
LLVMValueRef table = NULL;
char *name;
int i;
gboolean emit_table = FALSE;
#ifdef TARGET_WASM
/*
* Emit a table of functions instead of a switch statement,
	 * it's very efficient on wasm. This might be usable on
* other platforms too.
*/
emit_table = TRUE;
#endif
rtype = LLVMPointerType (LLVMInt8Type (), 0);
int table_len = module->max_method_idx + 1;
if (emit_table) {
LLVMTypeRef table_type;
LLVMValueRef *table_elems;
char *table_name;
table_type = LLVMArrayType (rtype, table_len);
table_name = g_strdup_printf ("%s_method_table", module->global_prefix);
table = LLVMAddGlobal (lmodule, table_type, table_name);
table_elems = g_new0 (LLVMValueRef, table_len);
for (i = 0; i < table_len; ++i) {
m = (LLVMValueRef)g_hash_table_lookup (module->idx_to_lmethod, GINT_TO_POINTER (i));
if (m && !g_hash_table_lookup (module->no_method_table_lmethods, m))
table_elems [i] = LLVMBuildBitCast (builder, m, rtype, "");
else
table_elems [i] = LLVMConstNull (rtype);
}
LLVMSetInitializer (table, LLVMConstArray (LLVMPointerType (LLVMInt8Type (), 0), table_elems, table_len));
}
/*
* Emit a switch statement. Emitting a table of function addresses is smaller/faster,
* but generating code seems safer.
*/
func = LLVMAddFunction (lmodule, module->get_method_symbol, LLVMFunctionType1 (rtype, LLVMInt32Type (), FALSE));
LLVMSetLinkage (func, LLVMExternalLinkage);
LLVMSetVisibility (func, LLVMHiddenVisibility);
mono_llvm_add_func_attr (func, LLVM_ATTR_NO_UNWIND);
module->get_method = func;
entry_bb = LLVMAppendBasicBlock (func, "ENTRY");
/*
* Return llvm_code_start/llvm_code_end when called with -1/-2.
* Hopefully, the toolchain doesn't reorder these functions. If it does,
* then we will have to find another solution.
*/
name = g_strdup_printf ("BB_CODE_START");
code_start_bb = LLVMAppendBasicBlock (func, name);
g_free (name);
LLVMPositionBuilderAtEnd (builder, code_start_bb);
LLVMBuildRet (builder, LLVMBuildBitCast (builder, module->code_start, rtype, ""));
name = g_strdup_printf ("BB_CODE_END");
code_end_bb = LLVMAppendBasicBlock (func, name);
g_free (name);
LLVMPositionBuilderAtEnd (builder, code_end_bb);
LLVMBuildRet (builder, LLVMBuildBitCast (builder, module->code_end, rtype, ""));
if (emit_table) {
/*
		 * Because table_len is computed from the method indexes available to us, it
		 * does not cover methods which were not compiled because of AOT profiles.
* So table_len can be smaller than info->nmethods. Add a bounds check because
* of that.
* switch (index) {
* case -1: return code_start;
* case -2: return code_end;
		 * default: return index < table_len ? method_table [index] : 0;
		 * }
*/
fail_bb = LLVMAppendBasicBlock (func, "FAIL");
LLVMPositionBuilderAtEnd (builder, fail_bb);
LLVMBuildRet (builder, LLVMBuildIntToPtr (builder, LLVMConstInt (LLVMInt32Type (), 0, FALSE), rtype, ""));
main_bb = LLVMAppendBasicBlock (func, "MAIN");
LLVMPositionBuilderAtEnd (builder, main_bb);
LLVMValueRef base = table;
LLVMValueRef indexes [2];
indexes [0] = LLVMConstInt (LLVMInt32Type (), 0, FALSE);
indexes [1] = LLVMGetParam (func, 0);
LLVMValueRef addr = LLVMBuildGEP (builder, base, indexes, 2, "");
LLVMValueRef res = mono_llvm_build_load (builder, addr, "", FALSE);
LLVMBuildRet (builder, res);
LLVMBasicBlockRef default_bb = LLVMAppendBasicBlock (func, "DEFAULT");
LLVMPositionBuilderAtEnd (builder, default_bb);
LLVMValueRef cmp = LLVMBuildICmp (builder, LLVMIntSGE, LLVMGetParam (func, 0), LLVMConstInt (LLVMInt32Type (), table_len, FALSE), "");
LLVMBuildCondBr (builder, cmp, fail_bb, main_bb);
LLVMPositionBuilderAtEnd (builder, entry_bb);
switch_ins = LLVMBuildSwitch (builder, LLVMGetParam (func, 0), default_bb, 0);
LLVMAddCase (switch_ins, LLVMConstInt (LLVMInt32Type (), -1, FALSE), code_start_bb);
LLVMAddCase (switch_ins, LLVMConstInt (LLVMInt32Type (), -2, FALSE), code_end_bb);
} else {
bbs = g_new0 (LLVMBasicBlockRef, module->max_method_idx + 1);
for (i = 0; i < module->max_method_idx + 1; ++i) {
name = g_strdup_printf ("BB_%d", i);
bb = LLVMAppendBasicBlock (func, name);
g_free (name);
bbs [i] = bb;
LLVMPositionBuilderAtEnd (builder, bb);
m = (LLVMValueRef)g_hash_table_lookup (module->idx_to_lmethod, GINT_TO_POINTER (i));
if (m && !g_hash_table_lookup (module->no_method_table_lmethods, m))
LLVMBuildRet (builder, LLVMBuildBitCast (builder, m, rtype, ""));
else
LLVMBuildRet (builder, LLVMConstNull (rtype));
}
fail_bb = LLVMAppendBasicBlock (func, "FAIL");
LLVMPositionBuilderAtEnd (builder, fail_bb);
LLVMBuildRet (builder, LLVMConstNull (rtype));
LLVMPositionBuilderAtEnd (builder, entry_bb);
switch_ins = LLVMBuildSwitch (builder, LLVMGetParam (func, 0), fail_bb, 0);
LLVMAddCase (switch_ins, LLVMConstInt (LLVMInt32Type (), -1, FALSE), code_start_bb);
LLVMAddCase (switch_ins, LLVMConstInt (LLVMInt32Type (), -2, FALSE), code_end_bb);
for (i = 0; i < module->max_method_idx + 1; ++i) {
LLVMAddCase (switch_ins, LLVMConstInt (LLVMInt32Type (), i, FALSE), bbs [i]);
}
}
mark_as_used (module, func);
LLVMDisposeBuilder (builder);
}
/*
* emit_get_unbox_tramp:
*
* Emit a function mapping method indexes to their unbox trampoline
*/
static void
emit_get_unbox_tramp (MonoLLVMModule *module)
{
LLVMModuleRef lmodule = module->lmodule;
LLVMValueRef func, switch_ins, m;
LLVMBasicBlockRef entry_bb, fail_bb, bb;
LLVMBasicBlockRef *bbs;
LLVMTypeRef rtype;
LLVMBuilderRef builder = LLVMCreateBuilder ();
char *name;
int i;
gboolean emit_table = FALSE;
/* Similar to emit_get_method () */
#ifndef TARGET_WATCHOS
emit_table = TRUE;
#endif
rtype = LLVMPointerType (LLVMInt8Type (), 0);
if (emit_table) {
// About 10% of methods have an unbox tramp, so emit a table of indexes for them
// that the runtime can search using a binary search
int len = 0;
for (i = 0; i < module->max_method_idx + 1; ++i) {
m = (LLVMValueRef)g_hash_table_lookup (module->idx_to_unbox_tramp, GINT_TO_POINTER (i));
if (m)
len ++;
}
LLVMTypeRef table_type, elemtype;
LLVMValueRef *table_elems;
LLVMValueRef table;
char *table_name;
int table_len;
int elemsize;
table_len = len;
elemsize = module->max_method_idx < 65000 ? 2 : 4;
// The index table
elemtype = elemsize == 2 ? LLVMInt16Type () : LLVMInt32Type ();
table_type = LLVMArrayType (elemtype, table_len);
table_name = g_strdup_printf ("%s_unbox_tramp_indexes", module->global_prefix);
table = LLVMAddGlobal (lmodule, table_type, table_name);
table_elems = g_new0 (LLVMValueRef, table_len);
int idx = 0;
for (i = 0; i < module->max_method_idx + 1; ++i) {
m = (LLVMValueRef)g_hash_table_lookup (module->idx_to_unbox_tramp, GINT_TO_POINTER (i));
if (m)
table_elems [idx ++] = LLVMConstInt (elemtype, i, FALSE);
}
LLVMSetInitializer (table, LLVMConstArray (elemtype, table_elems, table_len));
module->unbox_tramp_indexes = table;
// The trampoline table
elemtype = rtype;
table_type = LLVMArrayType (elemtype, table_len);
table_name = g_strdup_printf ("%s_unbox_trampolines", module->global_prefix);
table = LLVMAddGlobal (lmodule, table_type, table_name);
table_elems = g_new0 (LLVMValueRef, table_len);
idx = 0;
for (i = 0; i < module->max_method_idx + 1; ++i) {
m = (LLVMValueRef)g_hash_table_lookup (module->idx_to_unbox_tramp, GINT_TO_POINTER (i));
if (m)
table_elems [idx ++] = LLVMBuildBitCast (builder, m, rtype, "");
}
LLVMSetInitializer (table, LLVMConstArray (elemtype, table_elems, table_len));
module->unbox_trampolines = table;
module->unbox_tramp_num = table_len;
module->unbox_tramp_elemsize = elemsize;
return;
}
func = LLVMAddFunction (lmodule, module->get_unbox_tramp_symbol, LLVMFunctionType1 (rtype, LLVMInt32Type (), FALSE));
LLVMSetLinkage (func, LLVMExternalLinkage);
LLVMSetVisibility (func, LLVMHiddenVisibility);
mono_llvm_add_func_attr (func, LLVM_ATTR_NO_UNWIND);
module->get_unbox_tramp = func;
entry_bb = LLVMAppendBasicBlock (func, "ENTRY");
bbs = g_new0 (LLVMBasicBlockRef, module->max_method_idx + 1);
for (i = 0; i < module->max_method_idx + 1; ++i) {
m = (LLVMValueRef)g_hash_table_lookup (module->idx_to_unbox_tramp, GINT_TO_POINTER (i));
if (!m)
continue;
name = g_strdup_printf ("BB_%d", i);
bb = LLVMAppendBasicBlock (func, name);
g_free (name);
bbs [i] = bb;
LLVMPositionBuilderAtEnd (builder, bb);
LLVMBuildRet (builder, LLVMBuildBitCast (builder, m, rtype, ""));
}
fail_bb = LLVMAppendBasicBlock (func, "FAIL");
LLVMPositionBuilderAtEnd (builder, fail_bb);
LLVMBuildRet (builder, LLVMConstNull (rtype));
LLVMPositionBuilderAtEnd (builder, entry_bb);
switch_ins = LLVMBuildSwitch (builder, LLVMGetParam (func, 0), fail_bb, 0);
for (i = 0; i < module->max_method_idx + 1; ++i) {
m = (LLVMValueRef)g_hash_table_lookup (module->idx_to_unbox_tramp, GINT_TO_POINTER (i));
if (!m)
continue;
LLVMAddCase (switch_ins, LLVMConstInt (LLVMInt32Type (), i, FALSE), bbs [i]);
}
mark_as_used (module, func);
LLVMDisposeBuilder (builder);
}
/*
* emit_init_aotconst:
*
* Emit a function to initialize the aotconst_ variables. Called by the runtime.
*/
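/*
 * Conceptually (a sketch, the exact IR is target dependent):
 *   void init_aotconst (int got_offset, gpointer addr) { *aotconst_var [got_offset] = addr; }
 * On WASM the variable address is looked up in a table to save space; elsewhere
 * a switch on got_offset selects the store.
 */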
static void
emit_init_aotconst (MonoLLVMModule *module)
{
LLVMModuleRef lmodule = module->lmodule;
LLVMValueRef func;
LLVMBasicBlockRef entry_bb;
LLVMBuilderRef builder = LLVMCreateBuilder ();
func = LLVMAddFunction (lmodule, module->init_aotconst_symbol, LLVMFunctionType2 (LLVMVoidType (), LLVMInt32Type (), IntPtrType (), FALSE));
LLVMSetLinkage (func, LLVMExternalLinkage);
LLVMSetVisibility (func, LLVMHiddenVisibility);
mono_llvm_add_func_attr (func, LLVM_ATTR_NO_UNWIND);
module->init_aotconst_func = func;
entry_bb = LLVMAppendBasicBlock (func, "ENTRY");
LLVMPositionBuilderAtEnd (builder, entry_bb);
#ifdef TARGET_WASM
/* Emit a table of aotconst addresses instead of a switch statement to save space */
LLVMValueRef aotconsts;
LLVMTypeRef aotconst_addr_type = LLVMPointerType (module->ptr_type, 0);
int table_size = module->max_got_offset + 1;
LLVMTypeRef aotconst_arr_type = LLVMArrayType (aotconst_addr_type, table_size);
LLVMValueRef aotconst_dummy = LLVMAddGlobal (module->lmodule, module->ptr_type, "aotconst_dummy");
LLVMSetInitializer (aotconst_dummy, LLVMConstNull (module->ptr_type));
LLVMSetVisibility (aotconst_dummy, LLVMHiddenVisibility);
LLVMSetLinkage (aotconst_dummy, LLVMInternalLinkage);
aotconsts = LLVMAddGlobal (module->lmodule, aotconst_arr_type, "aotconsts");
LLVMValueRef *aotconst_init = g_new0 (LLVMValueRef, table_size);
for (int i = 0; i < table_size; ++i) {
LLVMValueRef aotconst = (LLVMValueRef)g_hash_table_lookup (module->aotconst_vars, GINT_TO_POINTER (i));
if (aotconst)
aotconst_init [i] = LLVMConstBitCast (aotconst, aotconst_addr_type);
else
aotconst_init [i] = LLVMConstBitCast (aotconst_dummy, aotconst_addr_type);
}
LLVMSetInitializer (aotconsts, LLVMConstArray (aotconst_addr_type, aotconst_init, table_size));
LLVMSetVisibility (aotconsts, LLVMHiddenVisibility);
LLVMSetLinkage (aotconsts, LLVMInternalLinkage);
LLVMBasicBlockRef exit_bb = LLVMAppendBasicBlock (func, "EXIT_BB");
LLVMBasicBlockRef main_bb = LLVMAppendBasicBlock (func, "BB");
LLVMValueRef cmp = LLVMBuildICmp (builder, LLVMIntSGE, LLVMGetParam (func, 0), LLVMConstInt (LLVMInt32Type (), table_size, FALSE), "");
LLVMBuildCondBr (builder, cmp, exit_bb, main_bb);
LLVMPositionBuilderAtEnd (builder, main_bb);
LLVMValueRef indexes [2];
indexes [0] = LLVMConstInt (LLVMInt32Type (), 0, FALSE);
indexes [1] = LLVMGetParam (func, 0);
LLVMValueRef aotconst_addr = LLVMBuildLoad (builder, LLVMBuildGEP (builder, aotconsts, indexes, 2, ""), "");
LLVMBuildStore (builder, LLVMBuildIntToPtr (builder, LLVMGetParam (func, 1), module->ptr_type, ""), aotconst_addr);
LLVMBuildBr (builder, exit_bb);
LLVMPositionBuilderAtEnd (builder, exit_bb);
LLVMBuildRetVoid (builder);
#else
LLVMValueRef switch_ins;
LLVMBasicBlockRef fail_bb, bb;
LLVMBasicBlockRef *bbs = NULL;
char *name;
bbs = g_new0 (LLVMBasicBlockRef, module->max_got_offset + 1);
for (int i = 0; i < module->max_got_offset + 1; ++i) {
name = g_strdup_printf ("BB_%d", i);
bb = LLVMAppendBasicBlock (func, name);
g_free (name);
bbs [i] = bb;
LLVMPositionBuilderAtEnd (builder, bb);
LLVMValueRef var = g_hash_table_lookup (module->aotconst_vars, GINT_TO_POINTER (i));
if (var) {
LLVMValueRef addr = LLVMBuildBitCast (builder, var, LLVMPointerType (IntPtrType (), 0), "");
LLVMBuildStore (builder, LLVMGetParam (func, 1), addr);
}
LLVMBuildRetVoid (builder);
}
fail_bb = LLVMAppendBasicBlock (func, "FAIL");
LLVMPositionBuilderAtEnd (builder, fail_bb);
LLVMBuildRetVoid (builder);
LLVMPositionBuilderAtEnd (builder, entry_bb);
switch_ins = LLVMBuildSwitch (builder, LLVMGetParam (func, 0), fail_bb, 0);
for (int i = 0; i < module->max_got_offset + 1; ++i)
LLVMAddCase (switch_ins, LLVMConstInt (LLVMInt32Type (), i, FALSE), bbs [i]);
#endif
LLVMDisposeBuilder (builder);
}
/* Add a function to mark the beginning of LLVM code */
static void
emit_llvm_code_start (MonoLLVMModule *module)
{
LLVMModuleRef lmodule = module->lmodule;
LLVMValueRef func;
LLVMBasicBlockRef entry_bb;
LLVMBuilderRef builder;
func = LLVMAddFunction (lmodule, "llvm_code_start", LLVMFunctionType (LLVMVoidType (), NULL, 0, FALSE));
LLVMSetLinkage (func, LLVMInternalLinkage);
mono_llvm_add_func_attr (func, LLVM_ATTR_NO_UNWIND);
module->code_start = func;
entry_bb = LLVMAppendBasicBlock (func, "ENTRY");
builder = LLVMCreateBuilder ();
LLVMPositionBuilderAtEnd (builder, entry_bb);
LLVMBuildRetVoid (builder);
LLVMDisposeBuilder (builder);
}
/*
* emit_init_func:
*
* Emit functions to initialize LLVM methods.
* These are wrappers around the mini_llvm_init_method () JIT icall.
 * The wrappers handle adding the 'amodule' argument and loading the vtable from
 * different locations, and they use a cold calling convention.
*/
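/*
 * The generated wrapper is roughly (a sketch):
 *   void init_method (gpointer info [, extra]) {
 *       if (!inited [info->method_index]) {
 *           mini_llvm_init_method (module_info, amodule, info, vtable_or_null);
 *           inited [info->method_index] = 1;
 *       }
 *   }
 * where vtable_or_null is NULL, the extra vtable arg, this->vtable or
 * mrgctx->class_vtable, depending on SUBTYPE.
 */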
static LLVMValueRef
emit_init_func (MonoLLVMModule *module, MonoAotInitSubtype subtype)
{
LLVMModuleRef lmodule = module->lmodule;
LLVMValueRef func, indexes [2], args [16], callee, info_var, index_var, inited_var, cmp;
LLVMBasicBlockRef entry_bb, inited_bb, notinited_bb;
LLVMBuilderRef builder;
LLVMTypeRef icall_sig;
const char *wrapper_name = mono_marshal_get_aot_init_wrapper_name (subtype);
LLVMTypeRef func_type = NULL;
LLVMTypeRef arg_type = module->ptr_type;
char *name = g_strdup_printf ("%s_%s", module->global_prefix, wrapper_name);
switch (subtype) {
case AOT_INIT_METHOD:
func_type = LLVMFunctionType1 (LLVMVoidType (), arg_type, FALSE);
break;
case AOT_INIT_METHOD_GSHARED_MRGCTX:
case AOT_INIT_METHOD_GSHARED_VTABLE:
func_type = LLVMFunctionType2 (LLVMVoidType (), arg_type, IntPtrType (), FALSE);
break;
case AOT_INIT_METHOD_GSHARED_THIS:
func_type = LLVMFunctionType2 (LLVMVoidType (), arg_type, ObjRefType (), FALSE);
break;
default:
g_assert_not_reached ();
}
func = LLVMAddFunction (lmodule, name, func_type);
info_var = LLVMGetParam (func, 0);
LLVMSetLinkage (func, LLVMInternalLinkage);
mono_llvm_add_func_attr (func, LLVM_ATTR_NO_INLINE);
set_cold_cconv (func);
entry_bb = LLVMAppendBasicBlock (func, "ENTRY");
builder = LLVMCreateBuilder ();
LLVMPositionBuilderAtEnd (builder, entry_bb);
/* Load method_index which is emitted at the start of the method info */
indexes [0] = const_int32 (0);
indexes [1] = const_int32 (0);
// FIXME: Make sure it's aligned
index_var = LLVMBuildLoad (builder, LLVMBuildGEP (builder, LLVMBuildBitCast (builder, info_var, LLVMPointerType (LLVMInt32Type (), 0), ""), indexes, 1, ""), "method_index");
/* Check for is_inited here as well, since this can be called from JITted code which might not check it */
indexes [0] = const_int32 (0);
indexes [1] = index_var;
inited_var = LLVMBuildLoad (builder, LLVMBuildGEP (builder, module->inited_var, indexes, 2, ""), "is_inited");
cmp = LLVMBuildICmp (builder, LLVMIntEQ, inited_var, LLVMConstInt (LLVMTypeOf (inited_var), 0, FALSE), "");
inited_bb = LLVMAppendBasicBlock (func, "INITED");
notinited_bb = LLVMAppendBasicBlock (func, "NOT_INITED");
LLVMBuildCondBr (builder, cmp, notinited_bb, inited_bb);
LLVMPositionBuilderAtEnd (builder, notinited_bb);
LLVMValueRef amodule_var = get_aotconst_module (module, builder, MONO_PATCH_INFO_AOT_MODULE, NULL, LLVMPointerType (IntPtrType (), 0), NULL, NULL);
args [0] = LLVMBuildPtrToInt (builder, module->info_var, IntPtrType (), "");
args [1] = LLVMBuildPtrToInt (builder, amodule_var, IntPtrType (), "");
args [2] = info_var;
switch (subtype) {
case AOT_INIT_METHOD:
args [3] = LLVMConstNull (IntPtrType ());
break;
case AOT_INIT_METHOD_GSHARED_VTABLE:
args [3] = LLVMGetParam (func, 1);
break;
case AOT_INIT_METHOD_GSHARED_THIS:
/* Load this->vtable */
args [3] = LLVMBuildBitCast (builder, LLVMGetParam (func, 1), LLVMPointerType (IntPtrType (), 0), "");
indexes [0] = const_int32 (MONO_STRUCT_OFFSET (MonoObject, vtable) / SIZEOF_VOID_P);
args [3] = LLVMBuildLoad (builder, LLVMBuildGEP (builder, args [3], indexes, 1, ""), "vtable");
break;
case AOT_INIT_METHOD_GSHARED_MRGCTX:
/* Load mrgctx->vtable */
args [3] = LLVMBuildIntToPtr (builder, LLVMGetParam (func, 1), LLVMPointerType (IntPtrType (), 0), "");
indexes [0] = const_int32 (MONO_STRUCT_OFFSET (MonoMethodRuntimeGenericContext, class_vtable) / SIZEOF_VOID_P);
args [3] = LLVMBuildLoad (builder, LLVMBuildGEP (builder, args [3], indexes, 1, ""), "vtable");
break;
default:
g_assert_not_reached ();
break;
}
/* Call the mini_llvm_init_method JIT icall */
icall_sig = LLVMFunctionType4 (LLVMVoidType (), IntPtrType (), IntPtrType (), arg_type, IntPtrType (), FALSE);
callee = get_aotconst_module (module, builder, MONO_PATCH_INFO_JIT_ICALL_ID, GINT_TO_POINTER (MONO_JIT_ICALL_mini_llvm_init_method), LLVMPointerType (icall_sig, 0), NULL, NULL);
LLVMBuildCall (builder, callee, args, LLVMCountParamTypes (icall_sig), "");
/*
* Set the inited flag
 * This is already done by the LLVM methods themselves, but it's needed by JITted methods.
*/
indexes [0] = const_int32 (0);
indexes [1] = index_var;
LLVMBuildStore (builder, LLVMConstInt (LLVMInt8Type (), 1, FALSE), LLVMBuildGEP (builder, module->inited_var, indexes, 2, ""));
LLVMBuildBr (builder, inited_bb);
LLVMPositionBuilderAtEnd (builder, inited_bb);
LLVMBuildRetVoid (builder);
LLVMVerifyFunction (func, LLVMAbortProcessAction);
LLVMDisposeBuilder (builder);
g_free (name);
return func;
}
/* Emit a wrapper around the parameterless JIT icall ICALL_ID with a cold calling convention */
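/* Roughly (a sketch): static void <prefix>_icall_cold_wrapper_<id> (void) { <icall> (); } */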
static LLVMValueRef
emit_icall_cold_wrapper (MonoLLVMModule *module, LLVMModuleRef lmodule, MonoJitICallId icall_id, gboolean aot)
{
LLVMValueRef func, callee;
LLVMBasicBlockRef entry_bb;
LLVMBuilderRef builder;
LLVMTypeRef sig;
char *name;
name = g_strdup_printf ("%s_icall_cold_wrapper_%d", module->global_prefix, icall_id);
func = LLVMAddFunction (lmodule, name, LLVMFunctionType (LLVMVoidType (), NULL, 0, FALSE));
sig = LLVMFunctionType (LLVMVoidType (), NULL, 0, FALSE);
LLVMSetLinkage (func, LLVMInternalLinkage);
mono_llvm_add_func_attr (func, LLVM_ATTR_NO_INLINE);
set_cold_cconv (func);
entry_bb = LLVMAppendBasicBlock (func, "ENTRY");
builder = LLVMCreateBuilder ();
LLVMPositionBuilderAtEnd (builder, entry_bb);
if (aot) {
callee = get_aotconst_module (module, builder, MONO_PATCH_INFO_JIT_ICALL_ID, GUINT_TO_POINTER (icall_id), LLVMPointerType (sig, 0), NULL, NULL);
} else {
MonoJitICallInfo * const info = mono_find_jit_icall_info (icall_id);
gpointer target = (gpointer)mono_icall_get_wrapper_full (info, TRUE);
LLVMValueRef tramp_var = LLVMAddGlobal (lmodule, LLVMPointerType (sig, 0), name);
LLVMSetInitializer (tramp_var, LLVMConstIntToPtr (LLVMConstInt (LLVMInt64Type (), (guint64)(size_t)target, FALSE), LLVMPointerType (sig, 0)));
LLVMSetLinkage (tramp_var, LLVMExternalLinkage);
callee = LLVMBuildLoad (builder, tramp_var, "");
}
LLVMBuildCall (builder, callee, NULL, 0, "");
LLVMBuildRetVoid (builder);
LLVMVerifyFunction(func, LLVMAbortProcessAction);
LLVMDisposeBuilder (builder);
return func;
}
/*
* Emit wrappers around the C icalls used to initialize llvm methods, to
* make the calling code smaller and to enable usage of the llvm
* cold calling convention.
*/
static void
emit_init_funcs (MonoLLVMModule *module)
{
for (int i = 0; i < AOT_INIT_METHOD_NUM; ++i)
module->init_methods [i] = emit_init_func (module, i);
}
static LLVMValueRef
get_init_func (MonoLLVMModule *module, MonoAotInitSubtype subtype)
{
return module->init_methods [subtype];
}
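/*
 * emit_gc_safepoint_poll:
 *
 * Emit the gc.safepoint_poll helper. Roughly (a sketch):
 *   if (mono_polling_required) mono_threads_state_poll ();
 * with branch weights marking the poll path as very unlikely.
 */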
static void
emit_gc_safepoint_poll (MonoLLVMModule *module, LLVMModuleRef lmodule, MonoCompile *cfg)
{
gboolean is_aot = cfg == NULL || cfg->compile_aot;
LLVMValueRef func = mono_llvm_get_or_insert_gc_safepoint_poll (lmodule);
mono_llvm_add_func_attr (func, LLVM_ATTR_NO_UNWIND);
if (is_aot) {
#ifdef TARGET_WIN32
if (module->static_link) {
LLVMSetLinkage (func, LLVMInternalLinkage);
/* Prevent it from being optimized away, leading to asserts inside 'opt' */
mark_as_used (module, func);
} else {
LLVMSetLinkage (func, LLVMWeakODRLinkage);
}
#else
LLVMSetLinkage (func, LLVMWeakODRLinkage);
#endif
} else {
mono_llvm_add_func_attr (func, LLVM_ATTR_OPTIMIZE_NONE); // no need to waste time here, the function is already optimized and will be inlined.
mono_llvm_add_func_attr (func, LLVM_ATTR_NO_INLINE); // optnone attribute requires noinline (but it will be inlined anyway)
if (!module->gc_poll_cold_wrapper_compiled) {
ERROR_DECL (error);
/* Compiling a method here is a bit ugly, but it works */
MonoMethod *wrapper = mono_marshal_get_llvm_func_wrapper (LLVM_FUNC_WRAPPER_GC_POLL);
module->gc_poll_cold_wrapper_compiled = mono_jit_compile_method (wrapper, error);
mono_error_assert_ok (error);
}
}
LLVMBasicBlockRef entry_bb = LLVMAppendBasicBlock (func, "gc.safepoint_poll.entry");
LLVMBasicBlockRef poll_bb = LLVMAppendBasicBlock (func, "gc.safepoint_poll.poll");
LLVMBasicBlockRef exit_bb = LLVMAppendBasicBlock (func, "gc.safepoint_poll.exit");
LLVMTypeRef ptr_type = LLVMPointerType (IntPtrType (), 0);
LLVMBuilderRef builder = LLVMCreateBuilder ();
/* entry: */
LLVMPositionBuilderAtEnd (builder, entry_bb);
LLVMValueRef poll_val_ptr;
if (is_aot) {
poll_val_ptr = get_aotconst_module (module, builder, MONO_PATCH_INFO_GC_SAFE_POINT_FLAG, NULL, ptr_type, NULL, NULL);
} else {
LLVMValueRef poll_val_int = LLVMConstInt (IntPtrType (), (guint64) &mono_polling_required, FALSE);
poll_val_ptr = LLVMBuildIntToPtr (builder, poll_val_int, ptr_type, "");
}
LLVMValueRef poll_val_ptr_load = LLVMBuildLoad (builder, poll_val_ptr, ""); // probably needs to be volatile
LLVMValueRef poll_val = LLVMBuildPtrToInt (builder, poll_val_ptr_load, IntPtrType (), "");
LLVMValueRef poll_val_zero = LLVMConstNull (LLVMTypeOf (poll_val));
LLVMValueRef cmp = LLVMBuildICmp (builder, LLVMIntEQ, poll_val, poll_val_zero, "");
mono_llvm_build_weighted_branch (builder, cmp, exit_bb, poll_bb, 1000 /* weight for exit_bb */, 1 /* weight for poll_bb */);
/* poll: */
LLVMPositionBuilderAtEnd (builder, poll_bb);
LLVMValueRef call;
if (is_aot) {
LLVMValueRef icall_wrapper = emit_icall_cold_wrapper (module, lmodule, MONO_JIT_ICALL_mono_threads_state_poll, TRUE);
module->gc_poll_cold_wrapper = icall_wrapper;
call = LLVMBuildCall (builder, icall_wrapper, NULL, 0, "");
} else {
// In JIT mode we have to emit a @gc.safepoint_poll function for each method (module).
// This function calls gc_poll_cold_wrapper_compiled via a global variable.
// @gc.safepoint_poll will be inlined and can be deleted after the -place-safepoints pass.
LLVMTypeRef poll_sig = LLVMFunctionType0 (LLVMVoidType (), FALSE);
LLVMTypeRef poll_sig_ptr = LLVMPointerType (poll_sig, 0);
gpointer target = resolve_patch (cfg, MONO_PATCH_INFO_ABS, module->gc_poll_cold_wrapper_compiled);
LLVMValueRef tramp_var = LLVMAddGlobal (lmodule, poll_sig_ptr, "mono_threads_state_poll");
LLVMValueRef target_val = LLVMConstInt (LLVMInt64Type (), (guint64) target, FALSE);
LLVMSetInitializer (tramp_var, LLVMConstIntToPtr (target_val, poll_sig_ptr));
LLVMSetLinkage (tramp_var, LLVMExternalLinkage);
LLVMValueRef callee = LLVMBuildLoad (builder, tramp_var, "");
call = LLVMBuildCall (builder, callee, NULL, 0, "");
}
set_call_cold_cconv (call);
LLVMBuildBr (builder, exit_bb);
/* exit: */
LLVMPositionBuilderAtEnd (builder, exit_bb);
LLVMBuildRetVoid (builder);
LLVMDisposeBuilder (builder);
}
static void
emit_llvm_code_end (MonoLLVMModule *module)
{
LLVMModuleRef lmodule = module->lmodule;
LLVMValueRef func;
LLVMBasicBlockRef entry_bb;
LLVMBuilderRef builder;
func = LLVMAddFunction (lmodule, "llvm_code_end", LLVMFunctionType (LLVMVoidType (), NULL, 0, FALSE));
LLVMSetLinkage (func, LLVMInternalLinkage);
mono_llvm_add_func_attr (func, LLVM_ATTR_NO_UNWIND);
module->code_end = func;
entry_bb = LLVMAppendBasicBlock (func, "ENTRY");
builder = LLVMCreateBuilder ();
LLVMPositionBuilderAtEnd (builder, entry_bb);
LLVMBuildRetVoid (builder);
LLVMDisposeBuilder (builder);
}
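/*
 * emit_div_check:
 *
 * Emit division checks. Roughly (a sketch), for the div/rem opcodes handled below:
 *   if (rhs == 0) throw DivideByZeroException;
 *   if (is_signed && rhs == -1 && lhs == MIN_VALUE) throw OverflowException;
 */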
static void
emit_div_check (EmitContext *ctx, LLVMBuilderRef builder, MonoBasicBlock *bb, MonoInst *ins, LLVMValueRef lhs, LLVMValueRef rhs)
{
gboolean need_div_check = ctx->cfg->backend->need_div_check;
if (bb->region)
/* LLVM doesn't know that these can throw an exception since they are not called through an intrinsic */
need_div_check = TRUE;
if (!need_div_check)
return;
switch (ins->opcode) {
case OP_IDIV:
case OP_LDIV:
case OP_IREM:
case OP_LREM:
case OP_IDIV_UN:
case OP_LDIV_UN:
case OP_IREM_UN:
case OP_LREM_UN:
case OP_IDIV_IMM:
case OP_LDIV_IMM:
case OP_IREM_IMM:
case OP_LREM_IMM:
case OP_IDIV_UN_IMM:
case OP_LDIV_UN_IMM:
case OP_IREM_UN_IMM:
case OP_LREM_UN_IMM: {
LLVMValueRef cmp;
gboolean is_signed = (ins->opcode == OP_IDIV || ins->opcode == OP_LDIV || ins->opcode == OP_IREM || ins->opcode == OP_LREM ||
ins->opcode == OP_IDIV_IMM || ins->opcode == OP_LDIV_IMM || ins->opcode == OP_IREM_IMM || ins->opcode == OP_LREM_IMM);
cmp = LLVMBuildICmp (builder, LLVMIntEQ, rhs, LLVMConstInt (LLVMTypeOf (rhs), 0, FALSE), "");
emit_cond_system_exception (ctx, bb, "DivideByZeroException", cmp, FALSE);
if (!ctx_ok (ctx))
break;
builder = ctx->builder;
/* b == -1 && a == 0x80000000 */
if (is_signed) {
LLVMValueRef c = (LLVMTypeOf (lhs) == LLVMInt32Type ()) ? LLVMConstInt (LLVMTypeOf (lhs), 0x80000000, FALSE) : LLVMConstInt (LLVMTypeOf (lhs), 0x8000000000000000LL, FALSE);
LLVMValueRef cond1 = LLVMBuildICmp (builder, LLVMIntEQ, rhs, LLVMConstInt (LLVMTypeOf (rhs), -1, FALSE), "");
LLVMValueRef cond2 = LLVMBuildICmp (builder, LLVMIntEQ, lhs, c, "");
cmp = LLVMBuildICmp (builder, LLVMIntEQ, LLVMBuildAnd (builder, cond1, cond2, ""), LLVMConstInt (LLVMInt1Type (), 1, FALSE), "");
emit_cond_system_exception (ctx, bb, "OverflowException", cmp, FALSE);
if (!ctx_ok (ctx))
break;
builder = ctx->builder;
}
break;
}
default:
break;
}
}
/*
* emit_method_init:
*
* Emit code to initialize the GOT slots used by the method.
*/
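/*
 * Roughly (a sketch):
 *   if (!inited [method_index]) { init_methods [subtype] (info_dummy [, rgctx/this]); inited [method_index] = 1; }
 * The init call uses a cold calling convention so argument registers survive it.
 */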
static void
emit_method_init (EmitContext *ctx)
{
LLVMValueRef indexes [16], args [16];
LLVMValueRef inited_var, cmp, call;
LLVMBasicBlockRef inited_bb, notinited_bb;
LLVMBuilderRef builder = ctx->builder;
MonoCompile *cfg = ctx->cfg;
MonoAotInitSubtype subtype;
ctx->module->max_inited_idx = MAX (ctx->module->max_inited_idx, cfg->method_index);
indexes [0] = const_int32 (0);
indexes [1] = const_int32 (cfg->method_index);
inited_var = LLVMBuildLoad (builder, LLVMBuildGEP (builder, ctx->module->inited_var, indexes, 2, ""), "is_inited");
args [0] = inited_var;
args [1] = LLVMConstInt (LLVMInt8Type (), 1, FALSE);
inited_var = LLVMBuildCall (ctx->builder, get_intrins (ctx, INTRINS_EXPECT_I8), args, 2, "");
cmp = LLVMBuildICmp (builder, LLVMIntEQ, inited_var, LLVMConstInt (LLVMTypeOf (inited_var), 0, FALSE), "");
inited_bb = ctx->inited_bb;
notinited_bb = gen_bb (ctx, "NOTINITED_BB");
ctx->cfg->llvmonly_init_cond = LLVMBuildCondBr (ctx->builder, cmp, notinited_bb, inited_bb);
builder = ctx->builder = create_builder (ctx);
LLVMPositionBuilderAtEnd (ctx->builder, notinited_bb);
LLVMTypeRef type = LLVMArrayType (LLVMInt8Type (), 0);
char *symbol = g_strdup_printf ("info_dummy_%s", cfg->llvm_method_name);
LLVMValueRef info_var = LLVMAddGlobal (ctx->lmodule, type, symbol);
g_free (symbol);
cfg->llvm_dummy_info_var = info_var;
int nargs = 0;
args [nargs ++] = convert (ctx, info_var, ctx->module->ptr_type);
switch (cfg->rgctx_access) {
case MONO_RGCTX_ACCESS_MRGCTX:
if (ctx->rgctx_arg) {
args [nargs ++] = convert (ctx, ctx->rgctx_arg, IntPtrType ());
subtype = AOT_INIT_METHOD_GSHARED_MRGCTX;
} else {
g_assert (ctx->this_arg);
args [nargs ++] = convert (ctx, ctx->this_arg, ObjRefType ());
subtype = AOT_INIT_METHOD_GSHARED_THIS;
}
break;
case MONO_RGCTX_ACCESS_VTABLE:
args [nargs ++] = convert (ctx, ctx->rgctx_arg, IntPtrType ());
subtype = AOT_INIT_METHOD_GSHARED_VTABLE;
break;
case MONO_RGCTX_ACCESS_THIS:
args [nargs ++] = convert (ctx, ctx->this_arg, ObjRefType ());
subtype = AOT_INIT_METHOD_GSHARED_THIS;
break;
case MONO_RGCTX_ACCESS_NONE:
subtype = AOT_INIT_METHOD;
break;
default:
g_assert_not_reached ();
}
call = LLVMBuildCall (builder, ctx->module->init_methods [subtype], args, nargs, "");
/*
* This enables llvm to keep arguments in their original registers/
* scratch registers, since the call will not clobber them.
*/
set_call_cold_cconv (call);
// Set the inited flag
indexes [0] = const_int32 (0);
indexes [1] = const_int32 (cfg->method_index);
LLVMBuildStore (builder, LLVMConstInt (LLVMInt8Type (), 1, FALSE), LLVMBuildGEP (builder, ctx->module->inited_var, indexes, 2, ""));
LLVMBuildBr (builder, inited_bb);
ctx->bblocks [cfg->bb_entry->block_num].end_bblock = inited_bb;
builder = ctx->builder = create_builder (ctx);
LLVMPositionBuilderAtEnd (ctx->builder, inited_bb);
}
static void
emit_unbox_tramp (EmitContext *ctx, const char *method_name, LLVMTypeRef method_type, LLVMValueRef method, int method_index)
{
/*
* Emit unbox trampoline using a tailcall
*/
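/*
 * Conceptually (a sketch): ut_Foo (this, args...) returns
 * Foo ((char*)this + MONO_ABI_SIZEOF (MonoObject), args...), i.e. the receiver
 * is unboxed by skipping the object header.
 */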
LLVMValueRef tramp, call, *args;
LLVMBuilderRef builder;
LLVMBasicBlockRef lbb;
LLVMCallInfo *linfo;
char *tramp_name;
int i, nargs;
tramp_name = g_strdup_printf ("ut_%s", method_name);
tramp = LLVMAddFunction (ctx->module->lmodule, tramp_name, method_type);
LLVMSetLinkage (tramp, LLVMInternalLinkage);
mono_llvm_add_func_attr (tramp, LLVM_ATTR_OPTIMIZE_FOR_SIZE);
//mono_llvm_add_func_attr (tramp, LLVM_ATTR_NO_UNWIND);
linfo = ctx->linfo;
// FIXME: Reduce code duplication with mono_llvm_compile_method () etc.
if (!ctx->llvm_only && ctx->rgctx_arg_pindex != -1)
mono_llvm_add_param_attr (LLVMGetParam (tramp, ctx->rgctx_arg_pindex), LLVM_ATTR_IN_REG);
if (ctx->cfg->vret_addr) {
LLVMSetValueName (LLVMGetParam (tramp, linfo->vret_arg_pindex), "vret");
if (linfo->ret.storage == LLVMArgVtypeByRef) {
mono_llvm_add_param_attr (LLVMGetParam (tramp, linfo->vret_arg_pindex), LLVM_ATTR_STRUCT_RET);
mono_llvm_add_param_attr (LLVMGetParam (tramp, linfo->vret_arg_pindex), LLVM_ATTR_NO_ALIAS);
}
}
lbb = LLVMAppendBasicBlock (tramp, "");
builder = LLVMCreateBuilder ();
LLVMPositionBuilderAtEnd (builder, lbb);
nargs = LLVMCountParamTypes (method_type);
args = g_new0 (LLVMValueRef, nargs);
for (i = 0; i < nargs; ++i) {
args [i] = LLVMGetParam (tramp, i);
if (i == ctx->this_arg_pindex) {
LLVMTypeRef arg_type = LLVMTypeOf (args [i]);
args [i] = LLVMBuildPtrToInt (builder, args [i], IntPtrType (), "");
args [i] = LLVMBuildAdd (builder, args [i], LLVMConstInt (IntPtrType (), MONO_ABI_SIZEOF (MonoObject), FALSE), "");
args [i] = LLVMBuildIntToPtr (builder, args [i], arg_type, "");
}
}
call = LLVMBuildCall (builder, method, args, nargs, "");
if (!ctx->llvm_only && ctx->rgctx_arg_pindex != -1)
mono_llvm_add_instr_attr (call, 1 + ctx->rgctx_arg_pindex, LLVM_ATTR_IN_REG);
if (linfo->ret.storage == LLVMArgVtypeByRef)
mono_llvm_add_instr_attr (call, 1 + linfo->vret_arg_pindex, LLVM_ATTR_STRUCT_RET);
// FIXME: This causes assertions in clang
//mono_llvm_set_must_tailcall (call);
if (LLVMGetReturnType (method_type) == LLVMVoidType ())
LLVMBuildRetVoid (builder);
else
LLVMBuildRet (builder, call);
g_hash_table_insert (ctx->module->idx_to_unbox_tramp, GINT_TO_POINTER (method_index), tramp);
LLVMDisposeBuilder (builder);
}
#ifdef TARGET_WASM
static void
emit_gc_pin (EmitContext *ctx, LLVMBuilderRef builder, int vreg)
{
LLVMValueRef index0 = LLVMConstInt (LLVMInt32Type (), 0, FALSE);
LLVMValueRef index1 = LLVMConstInt (LLVMInt32Type (), ctx->gc_var_indexes [vreg] - 1, FALSE);
LLVMValueRef indexes [] = { index0, index1 };
LLVMValueRef addr = LLVMBuildGEP (builder, ctx->gc_pin_area, indexes, 2, "");
mono_llvm_build_store (builder, convert (ctx, ctx->values [vreg], IntPtrType ()), addr, TRUE, LLVM_BARRIER_NONE);
}
#endif
/*
* emit_entry_bb:
*
* Emit code to load/convert arguments.
*/
static void
emit_entry_bb (EmitContext *ctx, LLVMBuilderRef builder)
{
int i, j, pindex;
MonoCompile *cfg = ctx->cfg;
MonoMethodSignature *sig = ctx->sig;
LLVMCallInfo *linfo = ctx->linfo;
MonoBasicBlock *bb;
char **names;
LLVMBuilderRef old_builder = ctx->builder;
ctx->builder = builder;
ctx->alloca_builder = create_builder (ctx);
#ifdef TARGET_WASM
/*
* For GC stack scanning to work, allocate an area on the stack and store
 * every ref vreg into it after it's written. Because the stack is scanned
 * conservatively, the objects will be pinned, so the vregs can directly
 * reference the objects; there is no need to load them from the stack
* on every access.
*/
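/* E.g. with N ref vregs this reserves roughly 'intptr_t gc_pin [N]' on the stack (a sketch). */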
ctx->gc_var_indexes = g_new0 (int, cfg->next_vreg);
int ngc_vars = 0;
for (i = 0; i < cfg->next_vreg; ++i) {
if (vreg_is_ref (cfg, i)) {
ctx->gc_var_indexes [i] = ngc_vars + 1;
ngc_vars ++;
}
}
// FIXME: Count only live vregs
ctx->gc_pin_area = build_alloca_llvm_type_name (ctx, LLVMArrayType (IntPtrType (), ngc_vars), 0, "gc_pin");
#endif
/*
* Handle indirect/volatile variables by allocating memory for them
* using 'alloca', and storing their address in a temporary.
*/
for (i = 0; i < cfg->num_varinfo; ++i) {
MonoInst *var = cfg->varinfo [i];
if ((var->opcode == OP_GSHAREDVT_LOCAL || var->opcode == OP_GSHAREDVT_ARG_REGOFFSET))
continue;
if (var->flags & (MONO_INST_VOLATILE|MONO_INST_INDIRECT) || (mini_type_is_vtype (var->inst_vtype) && !MONO_CLASS_IS_SIMD (ctx->cfg, var->klass))) {
if (!ctx_ok (ctx))
return;
/* Could be already created by an OP_VPHI */
if (!ctx->addresses [var->dreg]) {
if (var->flags & MONO_INST_LMF) {
// FIXME: Allocate a smaller struct in the deopt case
int size = cfg->deopt ? MONO_ABI_SIZEOF (MonoLMFExt) : MONO_ABI_SIZEOF (MonoLMF);
ctx->addresses [var->dreg] = build_alloca_llvm_type_name (ctx, LLVMArrayType (LLVMInt8Type (), size), sizeof (target_mgreg_t), "lmf");
} else {
char *name = g_strdup_printf ("vreg_loc_%d", var->dreg);
ctx->addresses [var->dreg] = build_named_alloca (ctx, var->inst_vtype, name);
g_free (name);
}
}
ctx->vreg_cli_types [var->dreg] = var->inst_vtype;
}
}
names = g_new (char *, sig->param_count);
mono_method_get_param_names (cfg->method, (const char **) names);
for (i = 0; i < sig->param_count; ++i) {
LLVMArgInfo *ainfo = &linfo->args [i + sig->hasthis];
int reg = cfg->args [i + sig->hasthis]->dreg;
char *name;
pindex = ainfo->pindex;
LLVMValueRef arg = LLVMGetParam (ctx->lmethod, pindex);
switch (ainfo->storage) {
case LLVMArgVtypeInReg:
case LLVMArgAsFpArgs: {
LLVMValueRef args [8];
int j;
pindex += ainfo->ndummy_fpargs;
/* The argument is received as a set of int/fp arguments, store them into the real argument */
memset (args, 0, sizeof (args));
if (ainfo->storage == LLVMArgVtypeInReg) {
args [0] = LLVMGetParam (ctx->lmethod, pindex);
if (ainfo->pair_storage [1] != LLVMArgNone)
args [1] = LLVMGetParam (ctx->lmethod, pindex + 1);
} else {
g_assert (ainfo->nslots <= 8);
for (j = 0; j < ainfo->nslots; ++j)
args [j] = LLVMGetParam (ctx->lmethod, pindex + j);
}
ctx->addresses [reg] = build_alloca (ctx, ainfo->type);
emit_args_to_vtype (ctx, builder, ainfo->type, ctx->addresses [reg], ainfo, args);
break;
}
case LLVMArgVtypeByVal: {
ctx->addresses [reg] = LLVMGetParam (ctx->lmethod, pindex);
break;
}
case LLVMArgVtypeAddr:
case LLVMArgVtypeByRef: {
/* The argument is passed by ref */
ctx->addresses [reg] = LLVMGetParam (ctx->lmethod, pindex);
break;
}
case LLVMArgAsIArgs: {
LLVMValueRef arg = LLVMGetParam (ctx->lmethod, pindex);
int size;
MonoType *t = mini_get_underlying_type (ainfo->type);
/* The argument is received as an array of ints, store it into the real argument */
ctx->addresses [reg] = build_alloca (ctx, t);
size = mono_class_value_size (mono_class_from_mono_type_internal (t), NULL);
if (size == 0) {
/* Empty struct: nothing to store */
} else if (size < TARGET_SIZEOF_VOID_P) {
/* The upper bits of the registers might not be valid */
LLVMValueRef val = LLVMBuildExtractValue (builder, arg, 0, "");
LLVMValueRef dest = convert (ctx, ctx->addresses [reg], LLVMPointerType (LLVMIntType (size * 8), 0));
LLVMBuildStore (ctx->builder, LLVMBuildTrunc (builder, val, LLVMIntType (size * 8), ""), dest);
} else {
LLVMBuildStore (ctx->builder, arg, convert (ctx, ctx->addresses [reg], LLVMPointerType (LLVMTypeOf (arg), 0)));
}
break;
}
case LLVMArgVtypeAsScalar:
g_assert_not_reached ();
break;
case LLVMArgWasmVtypeAsScalar: {
MonoType *t = mini_get_underlying_type (ainfo->type);
/* The argument is received as a scalar */
ctx->addresses [reg] = build_alloca (ctx, t);
LLVMValueRef dest = convert (ctx, ctx->addresses [reg], LLVMPointerType (LLVMIntType (ainfo->esize * 8), 0));
LLVMBuildStore (ctx->builder, arg, dest);
break;
}
case LLVMArgGsharedvtFixed: {
/* These are non-gsharedvt arguments passed by ref, the rest of the IR treats them as scalars */
LLVMValueRef arg = LLVMGetParam (ctx->lmethod, pindex);
if (names [i])
name = g_strdup_printf ("arg_%s", names [i]);
else
name = g_strdup_printf ("arg_%d", i);
ctx->values [reg] = LLVMBuildLoad (builder, convert (ctx, arg, LLVMPointerType (type_to_llvm_type (ctx, ainfo->type), 0)), name);
break;
}
case LLVMArgGsharedvtFixedVtype: {
LLVMValueRef arg = LLVMGetParam (ctx->lmethod, pindex);
if (names [i])
name = g_strdup_printf ("vtype_arg_%s", names [i]);
else
name = g_strdup_printf ("vtype_arg_%d", i);
/* Non-gsharedvt vtype argument passed by ref, the rest of the IR treats it as a vtype */
g_assert (ctx->addresses [reg]);
LLVMSetValueName (ctx->addresses [reg], name);
LLVMBuildStore (builder, LLVMBuildLoad (builder, convert (ctx, arg, LLVMPointerType (type_to_llvm_type (ctx, ainfo->type), 0)), ""), ctx->addresses [reg]);
break;
}
case LLVMArgGsharedvtVariable:
/* The IR treats these as variables with addresses */
if (!ctx->addresses [reg])
ctx->addresses [reg] = LLVMGetParam (ctx->lmethod, pindex);
break;
default: {
LLVMTypeRef t;
/* Needed to avoid phi argument mismatch errors since operations on pointers produce i32/i64 */
if (m_type_is_byref (ainfo->type))
t = IntPtrType ();
else
t = type_to_llvm_type (ctx, ainfo->type);
ctx->values [reg] = convert_full (ctx, ctx->values [reg], llvm_type_to_stack_type (cfg, t), type_is_unsigned (ctx, ainfo->type));
break;
}
}
switch (ainfo->storage) {
case LLVMArgVtypeInReg:
case LLVMArgVtypeByVal:
case LLVMArgAsIArgs:
// FIXME: Enabling this fails on windows
case LLVMArgVtypeAddr:
case LLVMArgVtypeByRef:
{
if (MONO_CLASS_IS_SIMD (ctx->cfg, mono_class_from_mono_type_internal (ainfo->type)))
/* Treat these as normal values */
ctx->values [reg] = LLVMBuildLoad (builder, ctx->addresses [reg], "simd_vtype");
break;
}
default:
break;
}
}
g_free (names);
if (sig->hasthis) {
/* Handle this arguments as inputs to phi nodes */
int reg = cfg->args [0]->dreg;
if (ctx->vreg_types [reg])
ctx->values [reg] = convert (ctx, ctx->values [reg], ctx->vreg_types [reg]);
}
if (cfg->vret_addr)
emit_volatile_store (ctx, cfg->vret_addr->dreg);
if (sig->hasthis)
emit_volatile_store (ctx, cfg->args [0]->dreg);
for (i = 0; i < sig->param_count; ++i)
if (!mini_type_is_vtype (sig->params [i]))
emit_volatile_store (ctx, cfg->args [i + sig->hasthis]->dreg);
if (sig->hasthis && !cfg->rgctx_var && cfg->gshared && !cfg->llvm_only) {
LLVMValueRef this_alloc;
/*
* The exception handling code needs the location where the this argument was
* stored for gshared methods. We create a separate alloca to hold it, and mark it
* with the "mono.this" custom metadata to tell llvm that it needs to save its
* location into the LSDA.
*/
this_alloc = mono_llvm_build_alloca (builder, ThisType (), LLVMConstInt (LLVMInt32Type (), 1, FALSE), 0, "");
/* This volatile store will keep the alloca alive */
mono_llvm_build_store (builder, ctx->values [cfg->args [0]->dreg], this_alloc, TRUE, LLVM_BARRIER_NONE);
set_metadata_flag (this_alloc, "mono.this");
}
if (cfg->rgctx_var) {
if (!(cfg->rgctx_var->flags & MONO_INST_VOLATILE)) {
/* FIXME: This could be volatile even in llvmonly mode if used inside a clause etc. */
g_assert (!ctx->addresses [cfg->rgctx_var->dreg]);
ctx->values [cfg->rgctx_var->dreg] = ctx->rgctx_arg;
} else {
LLVMValueRef rgctx_alloc, store;
/*
* We handle the rgctx arg similarly to the this pointer.
*/
g_assert (ctx->addresses [cfg->rgctx_var->dreg]);
rgctx_alloc = ctx->addresses [cfg->rgctx_var->dreg];
/* This volatile store will keep the alloca alive */
store = mono_llvm_build_store (builder, convert (ctx, ctx->rgctx_arg, IntPtrType ()), rgctx_alloc, TRUE, LLVM_BARRIER_NONE);
(void)store; /* unused */
set_metadata_flag (rgctx_alloc, "mono.this");
}
}
#ifdef TARGET_WASM
/*
* Store ref arguments to the pin area.
 * FIXME: This might not be needed, since the caller already does it?
*/
for (i = 0; i < cfg->num_varinfo; ++i) {
MonoInst *var = cfg->varinfo [i];
if (var->opcode == OP_ARG && vreg_is_ref (cfg, var->dreg) && ctx->values [var->dreg])
emit_gc_pin (ctx, builder, var->dreg);
}
#endif
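/*
 * For deopt, allocate an IL state struct laid out roughly as (a sketch):
 *   struct { gpointer method; gint32 il_offset; <ret slot>; <this>; <param addrs>; <local addrs>; }
 * and fill in the address fields below.
 */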
if (cfg->deopt) {
LLVMValueRef addr, index [2];
MonoMethodHeader *header = cfg->header;
int nfields = (sig->ret->type != MONO_TYPE_VOID ? 1 : 0) + sig->hasthis + sig->param_count + header->num_locals + 2;
LLVMTypeRef *types = g_alloca (nfields * sizeof (LLVMTypeRef));
int findex = 0;
/* method */
types [findex ++] = IntPtrType ();
/* il_offset */
types [findex ++] = LLVMInt32Type ();
int data_start = findex;
/* data */
if (sig->ret->type != MONO_TYPE_VOID)
types [findex ++] = IntPtrType ();
if (sig->hasthis)
types [findex ++] = IntPtrType ();
for (int i = 0; i < sig->param_count; ++i)
types [findex ++] = LLVMPointerType (type_to_llvm_type (ctx, sig->params [i]), 0);
for (int i = 0; i < header->num_locals; ++i)
types [findex ++] = LLVMPointerType (type_to_llvm_type (ctx, header->locals [i]), 0);
g_assert (findex == nfields);
char *name = g_strdup_printf ("%s_il_state", ctx->method_name);
LLVMTypeRef il_state_type = LLVMStructCreateNamed (ctx->module->context, name);
LLVMStructSetBody (il_state_type, types, nfields, FALSE);
g_free (name);
ctx->il_state = build_alloca_llvm_type_name (ctx, il_state_type, 0, "il_state");
g_assert (cfg->il_state_var);
ctx->addresses [cfg->il_state_var->dreg] = ctx->il_state;
/* Set il_state->il_offset = -1 */
index [0] = LLVMConstInt (LLVMInt32Type (), 0, FALSE);
index [1] = LLVMConstInt (LLVMInt32Type (), 1, FALSE);
addr = LLVMBuildGEP (builder, ctx->il_state, index, 2, "");
LLVMBuildStore (ctx->builder, LLVMConstInt (types [1], -1, FALSE), addr);
/*
* Set il_state->data [i] to either the address of the arg/local, or NULL.
* Because of mono_liveness_handle_exception_clauses (), all locals used/reachable from
* clauses are supposed to be volatile, so they have an address.
*/
findex = data_start;
if (sig->ret->type != MONO_TYPE_VOID) {
LLVMTypeRef ret_type = type_to_llvm_type (ctx, sig->ret);
ctx->il_state_ret = build_alloca_llvm_type_name (ctx, ret_type, 0, "il_state_ret");
index [0] = LLVMConstInt (LLVMInt32Type (), 0, FALSE);
index [1] = LLVMConstInt (LLVMInt32Type (), findex, FALSE);
addr = LLVMBuildGEP (builder, ctx->il_state, index, 2, "");
LLVMBuildStore (ctx->builder, ctx->il_state_ret, convert (ctx, addr, LLVMPointerType (LLVMTypeOf (ctx->il_state_ret), 0)));
findex ++;
}
for (int i = 0; i < sig->hasthis + sig->param_count; ++i) {
LLVMValueRef var_addr = ctx->addresses [cfg->args [i]->dreg];
index [0] = LLVMConstInt (LLVMInt32Type (), 0, FALSE);
index [1] = LLVMConstInt (LLVMInt32Type (), findex, FALSE);
addr = LLVMBuildGEP (builder, ctx->il_state, index, 2, "");
if (var_addr)
LLVMBuildStore (ctx->builder, var_addr, convert (ctx, addr, LLVMPointerType (LLVMTypeOf (var_addr), 0)));
else
LLVMBuildStore (ctx->builder, LLVMConstNull (types [findex]), addr);
findex ++;
}
for (int i = 0; i < header->num_locals; ++i) {
LLVMValueRef var_addr = ctx->addresses [cfg->locals [i]->dreg];
index [0] = LLVMConstInt (LLVMInt32Type (), 0, FALSE);
index [1] = LLVMConstInt (LLVMInt32Type (), findex, FALSE);
addr = LLVMBuildGEP (builder, ctx->il_state, index, 2, "");
if (var_addr)
LLVMBuildStore (ctx->builder, LLVMBuildBitCast (builder, var_addr, types [findex], ""), addr);
else
LLVMBuildStore (ctx->builder, LLVMConstNull (types [findex]), addr);
findex ++;
}
}
/* Initialize the method if needed */
if (cfg->compile_aot) {
/* Emit a location for the initialization code */
ctx->init_bb = gen_bb (ctx, "INIT_BB");
ctx->inited_bb = gen_bb (ctx, "INITED_BB");
LLVMBuildBr (ctx->builder, ctx->init_bb);
builder = ctx->builder = create_builder (ctx);
LLVMPositionBuilderAtEnd (ctx->builder, ctx->inited_bb);
ctx->bblocks [cfg->bb_entry->block_num].end_bblock = ctx->inited_bb;
}
/* Compute nesting between clauses: clause i is nested in clause j when i's try/handler range is contained within j's */
ctx->nested_in = (GSList**)mono_mempool_alloc0 (cfg->mempool, sizeof (GSList*) * cfg->header->num_clauses);
for (i = 0; i < cfg->header->num_clauses; ++i) {
for (j = 0; j < cfg->header->num_clauses; ++j) {
MonoExceptionClause *clause1 = &cfg->header->clauses [i];
MonoExceptionClause *clause2 = &cfg->header->clauses [j];
if (i != j && clause1->try_offset >= clause2->try_offset && clause1->handler_offset <= clause2->handler_offset)
ctx->nested_in [i] = g_slist_prepend_mempool (cfg->mempool, ctx->nested_in [i], GINT_TO_POINTER (j));
}
}
/*
 * For finally clauses, create an indicator variable telling OP_ENDFINALLY whether
 * it needs to continue normally or return back to the exception handling system.
*/
for (bb = cfg->bb_entry; bb; bb = bb->next_bb) {
char name [128];
if (!(bb->region != -1 && (bb->flags & BB_EXCEPTION_HANDLER)))
continue;
if (bb->in_scount == 0) {
LLVMValueRef val;
sprintf (name, "finally_ind_bb%d", bb->block_num);
val = LLVMBuildAlloca (builder, LLVMInt32Type (), name);
LLVMBuildStore (builder, LLVMConstInt (LLVMInt32Type (), 0, FALSE), val);
ctx->bblocks [bb->block_num].finally_ind = val;
} else {
/* Create a variable to hold the exception var */
if (!ctx->ex_var)
ctx->ex_var = LLVMBuildAlloca (builder, ObjRefType (), "exvar");
}
}
ctx->builder = old_builder;
}
static gboolean
needs_extra_arg (EmitContext *ctx, MonoMethod *method)
{
WrapperInfo *info = NULL;
/*
 * When targeting wasm, the caller and callee signatures have to match exactly. This means
 * that every method which can be called indirectly needs an extra arg, since the caller
 * will call it through an ftnptr and will pass an extra arg.
*/
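/* E.g. the allocator, write barrier and string-ctor wrappers handled below are only called directly, so they skip the extra arg. */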
if (!ctx->cfg->llvm_only || !ctx->emit_dummy_arg)
return FALSE;
if (method->wrapper_type)
info = mono_marshal_get_wrapper_info (method);
switch (method->wrapper_type) {
case MONO_WRAPPER_OTHER:
if (info->subtype == WRAPPER_SUBTYPE_GSHAREDVT_IN_SIG || info->subtype == WRAPPER_SUBTYPE_GSHAREDVT_OUT_SIG)
/* Already have an explicit extra arg */
return FALSE;
break;
case MONO_WRAPPER_MANAGED_TO_NATIVE:
if (strstr (method->name, "icall_wrapper"))
/* These are JIT icall wrappers which are only called from JITted code directly */
return FALSE;
/* Normal icalls can be virtual methods which need an extra arg */
break;
case MONO_WRAPPER_RUNTIME_INVOKE:
case MONO_WRAPPER_ALLOC:
case MONO_WRAPPER_CASTCLASS:
case MONO_WRAPPER_WRITE_BARRIER:
case MONO_WRAPPER_NATIVE_TO_MANAGED:
return FALSE;
case MONO_WRAPPER_STELEMREF:
if (info->subtype != WRAPPER_SUBTYPE_VIRTUAL_STELEMREF)
return FALSE;
break;
case MONO_WRAPPER_MANAGED_TO_MANAGED:
if (info->subtype == WRAPPER_SUBTYPE_STRING_CTOR)
return FALSE;
break;
default:
break;
}
if (method->string_ctor)
return FALSE;
/* These are called from gsharedvt code with an indirect call which doesn't pass an extra arg */
if (method->klass == mono_get_string_class () && (strstr (method->name, "memcpy") || strstr (method->name, "bzero")))
return FALSE;
return TRUE;
}
static inline gboolean
is_supported_callconv (EmitContext *ctx, MonoCallInst *call)
{
#if defined(TARGET_WIN32) && defined(TARGET_AMD64)
gboolean result = (call->signature->call_convention == MONO_CALL_DEFAULT) ||
(call->signature->call_convention == MONO_CALL_C) ||
(call->signature->call_convention == MONO_CALL_STDCALL);
#else
gboolean result = (call->signature->call_convention == MONO_CALL_DEFAULT) || ((call->signature->call_convention == MONO_CALL_C) && ctx->llvm_only);
#endif
return result;
}
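/*
 * process_call:
 *
 * Emit a call described by INS: resolve the callee, marshal the arguments
 * according to the precomputed LLVMCallInfo, emit the call, and convert the
 * return value (a summary of the code below).
 */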
static void
process_call (EmitContext *ctx, MonoBasicBlock *bb, LLVMBuilderRef *builder_ref, MonoInst *ins)
{
MonoCompile *cfg = ctx->cfg;
LLVMValueRef *values = ctx->values;
LLVMValueRef *addresses = ctx->addresses;
MonoCallInst *call = (MonoCallInst*)ins;
MonoMethodSignature *sig = call->signature;
LLVMValueRef callee = NULL, lcall;
LLVMValueRef *args;
LLVMCallInfo *cinfo;
GSList *l;
int i, len, nargs;
gboolean vretaddr;
LLVMTypeRef llvm_sig;
gpointer target;
gboolean is_virtual, calli;
LLVMBuilderRef builder = *builder_ref;
/* If both the imt and rgctx args are required, only pass the imt arg; the rgctx trampoline will pass the rgctx */
if (call->imt_arg_reg)
call->rgctx_arg_reg = 0;
if (!is_supported_callconv (ctx, call)) {
set_failure (ctx, "non-default callconv");
return;
}
cinfo = call->cinfo;
g_assert (cinfo);
if (call->rgctx_arg_reg)
cinfo->rgctx_arg = TRUE;
if (call->imt_arg_reg)
cinfo->imt_arg = TRUE;
if (!call->rgctx_arg_reg && call->method && needs_extra_arg (ctx, call->method))
cinfo->dummy_arg = TRUE;
vretaddr = (cinfo->ret.storage == LLVMArgVtypeRetAddr || cinfo->ret.storage == LLVMArgVtypeByRef || cinfo->ret.storage == LLVMArgGsharedvtFixed || cinfo->ret.storage == LLVMArgGsharedvtVariable || cinfo->ret.storage == LLVMArgGsharedvtFixedVtype);
llvm_sig = sig_to_llvm_sig_full (ctx, sig, cinfo);
if (!ctx_ok (ctx))
return;
int const opcode = ins->opcode;
is_virtual = opcode == OP_VOIDCALL_MEMBASE || opcode == OP_CALL_MEMBASE
|| opcode == OP_VCALL_MEMBASE || opcode == OP_LCALL_MEMBASE
|| opcode == OP_FCALL_MEMBASE || opcode == OP_RCALL_MEMBASE
|| opcode == OP_TAILCALL_MEMBASE;
calli = !call->fptr_is_patch && (opcode == OP_VOIDCALL_REG || opcode == OP_CALL_REG
|| opcode == OP_VCALL_REG || opcode == OP_LCALL_REG || opcode == OP_FCALL_REG
|| opcode == OP_RCALL_REG || opcode == OP_TAILCALL_REG);
/* FIXME: Avoid creating duplicate methods */
if (ins->flags & MONO_INST_HAS_METHOD) {
if (is_virtual) {
callee = NULL;
} else {
if (cfg->compile_aot) {
callee = get_callee (ctx, llvm_sig, MONO_PATCH_INFO_METHOD, call->method);
if (!callee) {
set_failure (ctx, "can't encode patch");
return;
}
} else if (cfg->method == call->method) {
callee = ctx->lmethod;
} else {
ERROR_DECL (error);
static int tramp_index;
char *name;
name = g_strdup_printf ("[tramp_%d] %s", tramp_index, mono_method_full_name (call->method, TRUE));
tramp_index ++;
/*
* Use our trampoline infrastructure for lazy compilation instead of llvm's.
* Make all calls through a global. The address of the global will be saved in
* MonoJitDomainInfo.llvm_jit_callees and updated when the method it refers to is
* compiled.
*/
LLVMValueRef tramp_var = (LLVMValueRef)g_hash_table_lookup (ctx->jit_callees, call->method);
if (!tramp_var) {
target =
mono_create_jit_trampoline (call->method, error);
if (!is_ok (error)) {
set_failure (ctx, mono_error_get_message (error));
mono_error_cleanup (error);
return;
}
tramp_var = LLVMAddGlobal (ctx->lmodule, LLVMPointerType (llvm_sig, 0), name);
LLVMSetInitializer (tramp_var, LLVMConstIntToPtr (LLVMConstInt (LLVMInt64Type (), (guint64)(size_t)target, FALSE), LLVMPointerType (llvm_sig, 0)));
LLVMSetLinkage (tramp_var, LLVMExternalLinkage);
g_hash_table_insert (ctx->jit_callees, call->method, tramp_var);
}
callee = LLVMBuildLoad (builder, tramp_var, "");
}
}
if (!cfg->llvm_only && call->method && strstr (m_class_get_name (call->method->klass), "AsyncVoidMethodBuilder")) {
/* LLVM miscompiles async methods */
set_failure (ctx, "#13734");
return;
}
} else if (calli) {
/* Indirect call: the callee is loaded from sreg1 below */
} else {
const MonoJitICallId jit_icall_id = call->jit_icall_id;
if (jit_icall_id) {
if (cfg->compile_aot) {
callee = get_callee (ctx, llvm_sig, MONO_PATCH_INFO_JIT_ICALL_ID, GUINT_TO_POINTER (jit_icall_id));
if (!callee) {
set_failure (ctx, "can't encode patch");
return;
}
} else {
callee = get_jit_callee (ctx, "", llvm_sig, MONO_PATCH_INFO_JIT_ICALL_ID, GUINT_TO_POINTER (jit_icall_id));
}
} else {
if (cfg->compile_aot) {
callee = NULL;
if (cfg->abs_patches) {
MonoJumpInfo *abs_ji = (MonoJumpInfo*)g_hash_table_lookup (cfg->abs_patches, call->fptr);
if (abs_ji) {
callee = get_callee (ctx, llvm_sig, abs_ji->type, abs_ji->data.target);
if (!callee) {
set_failure (ctx, "can't encode patch");
return;
}
}
}
if (!callee) {
set_failure (ctx, "aot");
return;
}
} else {
if (cfg->abs_patches) {
MonoJumpInfo *abs_ji = (MonoJumpInfo*)g_hash_table_lookup (cfg->abs_patches, call->fptr);
if (abs_ji) {
ERROR_DECL (error);
target = mono_resolve_patch_target (cfg->method, NULL, abs_ji, FALSE, error);
mono_error_assert_ok (error);
callee = get_jit_callee (ctx, "", llvm_sig, abs_ji->type, abs_ji->data.target);
} else {
g_assert_not_reached ();
}
} else {
g_assert_not_reached ();
}
}
}
}
if (is_virtual) {
int size = TARGET_SIZEOF_VOID_P;
LLVMValueRef index;
g_assert (ins->inst_offset % size == 0);
index = LLVMConstInt (LLVMInt32Type (), ins->inst_offset / size, FALSE);
callee = convert (ctx, LLVMBuildLoad (builder, LLVMBuildGEP (builder, convert (ctx, values [ins->inst_basereg], LLVMPointerType (LLVMPointerType (IntPtrType (), 0), 0)), &index, 1, ""), ""), LLVMPointerType (llvm_sig, 0));
} else if (calli) {
callee = convert (ctx, values [ins->sreg1], LLVMPointerType (llvm_sig, 0));
} else {
if (ins->flags & MONO_INST_HAS_METHOD) {
/* Nothing to do: the callee was already resolved above */
}
}
/*
* Collect and convert arguments
*/
nargs = (sig->param_count * 16) + sig->hasthis + vretaddr + call->rgctx_reg + call->imt_arg_reg + call->cinfo->dummy_arg + 1;
len = sizeof (LLVMValueRef) * nargs;
args = g_newa (LLVMValueRef, nargs);
memset (args, 0, len);
l = call->out_ireg_args;
if (call->rgctx_arg_reg) {
g_assert (values [call->rgctx_arg_reg]);
g_assert (cinfo->rgctx_arg_pindex < nargs);
/*
 * On ARM, the imt/rgctx argument is passed in a caller-saved register, but some of our trampolines etc. clobber it, leading to
 * problems if LLVM moves the arg assignment earlier. To work around this, save the argument into a stack slot and load
* it using a volatile load.
*/
#ifdef TARGET_ARM
if (!ctx->imt_rgctx_loc)
ctx->imt_rgctx_loc = build_alloca_llvm_type (ctx, ctx->module->ptr_type, TARGET_SIZEOF_VOID_P);
LLVMBuildStore (builder, convert (ctx, ctx->values [call->rgctx_arg_reg], ctx->module->ptr_type), ctx->imt_rgctx_loc);
args [cinfo->rgctx_arg_pindex] = mono_llvm_build_load (builder, ctx->imt_rgctx_loc, "", TRUE);
#else
args [cinfo->rgctx_arg_pindex] = convert (ctx, values [call->rgctx_arg_reg], ctx->module->ptr_type);
#endif
}
if (call->imt_arg_reg) {
g_assert (!ctx->llvm_only);
g_assert (values [call->imt_arg_reg]);
g_assert (cinfo->imt_arg_pindex < nargs);
#ifdef TARGET_ARM
if (!ctx->imt_rgctx_loc)
ctx->imt_rgctx_loc = build_alloca_llvm_type (ctx, ctx->module->ptr_type, TARGET_SIZEOF_VOID_P);
LLVMBuildStore (builder, convert (ctx, ctx->values [call->imt_arg_reg], ctx->module->ptr_type), ctx->imt_rgctx_loc);
args [cinfo->imt_arg_pindex] = mono_llvm_build_load (builder, ctx->imt_rgctx_loc, "", TRUE);
#else
args [cinfo->imt_arg_pindex] = convert (ctx, values [call->imt_arg_reg], ctx->module->ptr_type);
#endif
}
switch (cinfo->ret.storage) {
case LLVMArgGsharedvtVariable: {
MonoInst *var = get_vreg_to_inst (cfg, call->inst.dreg);
if (var && var->opcode == OP_GSHAREDVT_LOCAL) {
args [cinfo->vret_arg_pindex] = convert (ctx, emit_gsharedvt_ldaddr (ctx, var->dreg), IntPtrType ());
} else {
g_assert (addresses [call->inst.dreg]);
args [cinfo->vret_arg_pindex] = convert (ctx, addresses [call->inst.dreg], IntPtrType ());
}
break;
}
default:
if (vretaddr) {
if (!addresses [call->inst.dreg])
addresses [call->inst.dreg] = build_alloca (ctx, sig->ret);
g_assert (cinfo->vret_arg_pindex < nargs);
if (cinfo->ret.storage == LLVMArgVtypeByRef)
args [cinfo->vret_arg_pindex] = addresses [call->inst.dreg];
else
args [cinfo->vret_arg_pindex] = LLVMBuildPtrToInt (builder, addresses [call->inst.dreg], IntPtrType (), "");
}
break;
}
/*
 * Sometimes the same method is called with two different signatures (e.g. with and without 'this'), so
* use the real callee for argument type conversion.
*/
LLVMTypeRef callee_type = LLVMGetElementType (LLVMTypeOf (callee));
LLVMTypeRef *param_types = (LLVMTypeRef*)g_alloca (sizeof (LLVMTypeRef) * LLVMCountParamTypes (callee_type));
LLVMGetParamTypes (callee_type, param_types);
for (i = 0; i < sig->param_count + sig->hasthis; ++i) {
guint32 regpair;
int reg, pindex;
LLVMArgInfo *ainfo = &call->cinfo->args [i];
pindex = ainfo->pindex;
regpair = (guint32)(gssize)(l->data);
reg = regpair & 0xffffff;
args [pindex] = values [reg];
switch (ainfo->storage) {
case LLVMArgVtypeInReg:
case LLVMArgAsFpArgs: {
guint32 nargs;
int j;
for (j = 0; j < ainfo->ndummy_fpargs; ++j)
args [pindex + j] = LLVMConstNull (LLVMDoubleType ());
pindex += ainfo->ndummy_fpargs;
g_assert (addresses [reg]);
emit_vtype_to_args (ctx, builder, ainfo->type, addresses [reg], ainfo, args + pindex, &nargs);
pindex += nargs;
// FIXME: alignment
// FIXME: Get rid of the VMOVE
break;
}
case LLVMArgVtypeByVal:
g_assert (addresses [reg]);
args [pindex] = addresses [reg];
break;
case LLVMArgVtypeAddr :
case LLVMArgVtypeByRef: {
g_assert (addresses [reg]);
args [pindex] = convert (ctx, addresses [reg], LLVMPointerType (type_to_llvm_arg_type (ctx, ainfo->type), 0));
break;
}
case LLVMArgAsIArgs:
g_assert (addresses [reg]);
if (ainfo->esize == 8)
args [pindex] = LLVMBuildLoad (ctx->builder, convert (ctx, addresses [reg], LLVMPointerType (LLVMArrayType (LLVMInt64Type (), ainfo->nslots), 0)), "");
else
args [pindex] = LLVMBuildLoad (ctx->builder, convert (ctx, addresses [reg], LLVMPointerType (LLVMArrayType (IntPtrType (), ainfo->nslots), 0)), "");
break;
case LLVMArgVtypeAsScalar:
g_assert_not_reached ();
break;
case LLVMArgWasmVtypeAsScalar:
g_assert (addresses [reg]);
args [pindex] = LLVMBuildLoad (ctx->builder, convert (ctx, addresses [reg], LLVMPointerType (LLVMIntType (ainfo->esize * 8), 0)), "");
break;
case LLVMArgGsharedvtFixed:
case LLVMArgGsharedvtFixedVtype:
g_assert (addresses [reg]);
args [pindex] = convert (ctx, addresses [reg], LLVMPointerType (type_to_llvm_arg_type (ctx, ainfo->type), 0));
break;
case LLVMArgGsharedvtVariable:
g_assert (addresses [reg]);
args [pindex] = convert (ctx, addresses [reg], LLVMPointerType (IntPtrType (), 0));
break;
default:
g_assert (args [pindex]);
if (i == 0 && sig->hasthis)
args [pindex] = convert (ctx, args [pindex], param_types [pindex]);
else
args [pindex] = convert (ctx, args [pindex], type_to_llvm_arg_type (ctx, ainfo->type));
break;
}
g_assert (pindex <= nargs);
l = l->next;
}
if (call->cinfo->dummy_arg) {
g_assert (call->cinfo->dummy_arg_pindex < nargs);
args [call->cinfo->dummy_arg_pindex] = LLVMConstNull (ctx->module->ptr_type);
}
// FIXME: Align call sites
/*
* Emit the call
*/
lcall = emit_call (ctx, bb, &builder, callee, args, LLVMCountParamTypes (llvm_sig));
mono_llvm_nonnull_state_update (ctx, lcall, call->method, args, LLVMCountParamTypes (llvm_sig));
// If we just allocated an object, it's not null.
if (call->method && call->method->wrapper_type == MONO_WRAPPER_ALLOC) {
mono_llvm_set_call_nonnull_ret (lcall);
}
if (ins->opcode != OP_TAILCALL && ins->opcode != OP_TAILCALL_MEMBASE && LLVMGetInstructionOpcode (lcall) == LLVMCall)
mono_llvm_set_call_notailcall (lcall);
// Add the original method name we are currently emitting as custom string metadata (the only way to leave comments in LLVM IR)
if (mono_debug_enabled () && call && call->method)
mono_llvm_add_string_metadata (lcall, "managed_name", mono_method_full_name (call->method, TRUE));
// As per the LLVM docs, a function has a noalias return value if and only if
// it is an allocation function. This is an allocation function.
if (call->method && call->method->wrapper_type == MONO_WRAPPER_ALLOC) {
mono_llvm_set_call_noalias_ret (lcall);
// All objects are expected to be 8-byte aligned (SGEN_ALLOC_ALIGN)
mono_llvm_set_alignment_ret (lcall, 8);
}
/*
* Modify cconv and parameter attributes to pass rgctx/imt correctly.
*/
#if defined(MONO_ARCH_IMT_REG) && defined(MONO_ARCH_RGCTX_REG)
g_assert (MONO_ARCH_IMT_REG == MONO_ARCH_RGCTX_REG);
#endif
/* The two can't be used together, so use only one LLVM calling conv to pass them */
g_assert (!(call->rgctx_arg_reg && call->imt_arg_reg));
if (!sig->pinvoke && !cfg->llvm_only)
LLVMSetInstructionCallConv (lcall, LLVMMono1CallConv);
if (cinfo->ret.storage == LLVMArgVtypeByRef)
mono_llvm_add_instr_attr (lcall, 1 + cinfo->vret_arg_pindex, LLVM_ATTR_STRUCT_RET);
if (!ctx->llvm_only && call->rgctx_arg_reg)
mono_llvm_add_instr_attr (lcall, 1 + cinfo->rgctx_arg_pindex, LLVM_ATTR_IN_REG);
if (call->imt_arg_reg)
mono_llvm_add_instr_attr (lcall, 1 + cinfo->imt_arg_pindex, LLVM_ATTR_IN_REG);
/* Add byval attributes if needed */
for (i = 0; i < sig->param_count; ++i) {
LLVMArgInfo *ainfo = &call->cinfo->args [i + sig->hasthis];
if (ainfo && ainfo->storage == LLVMArgVtypeByVal)
mono_llvm_add_instr_attr (lcall, 1 + ainfo->pindex, LLVM_ATTR_BY_VAL);
#ifdef TARGET_WASM
if (ainfo && ainfo->storage == LLVMArgVtypeByRef)
/* This causes llvm to make a copy of the value which is what we need */
mono_llvm_add_instr_byval_attr (lcall, 1 + ainfo->pindex, LLVMGetElementType (param_types [ainfo->pindex]));
#endif
}
gboolean is_simd = MONO_CLASS_IS_SIMD (ctx->cfg, mono_class_from_mono_type_internal (sig->ret));
gboolean should_promote_to_value = FALSE;
const char *load_name = NULL;
/*
* Convert the result. Non-SIMD value types are manipulated via an
* indirection. SIMD value types are represented directly as LLVM vector
* values, and must have a corresponding LLVM value definition in
* `values`.
*/
switch (cinfo->ret.storage) {
case LLVMArgAsIArgs:
case LLVMArgFpStruct:
if (!addresses [call->inst.dreg])
addresses [call->inst.dreg] = build_alloca (ctx, sig->ret);
LLVMBuildStore (builder, lcall, convert_full (ctx, addresses [call->inst.dreg], LLVMPointerType (LLVMTypeOf (lcall), 0), FALSE));
break;
case LLVMArgVtypeByVal:
/*
* Only used by amd64 and x86. Only ever used when passing
* arguments; never used for return values.
*/
g_assert_not_reached ();
break;
case LLVMArgVtypeInReg: {
if (LLVMTypeOf (lcall) == LLVMVoidType ())
/* Empty struct */
break;
if (!addresses [ins->dreg])
addresses [ins->dreg] = build_alloca (ctx, sig->ret);
LLVMValueRef regs [2] = { 0 };
regs [0] = LLVMBuildExtractValue (builder, lcall, 0, "");
if (cinfo->ret.pair_storage [1] != LLVMArgNone)
regs [1] = LLVMBuildExtractValue (builder, lcall, 1, "");
emit_args_to_vtype (ctx, builder, sig->ret, addresses [ins->dreg], &cinfo->ret, regs);
load_name = "process_call_vtype_in_reg";
should_promote_to_value = is_simd;
break;
}
case LLVMArgVtypeAsScalar:
if (!addresses [call->inst.dreg])
addresses [call->inst.dreg] = build_alloca (ctx, sig->ret);
LLVMBuildStore (builder, lcall, convert_full (ctx, addresses [call->inst.dreg], LLVMPointerType (LLVMTypeOf (lcall), 0), FALSE));
load_name = "process_call_vtype_as_scalar";
should_promote_to_value = is_simd;
break;
case LLVMArgVtypeRetAddr:
case LLVMArgVtypeByRef:
load_name = "process_call_vtype_ret_addr";
should_promote_to_value = is_simd;
break;
case LLVMArgGsharedvtVariable:
break;
case LLVMArgGsharedvtFixed:
case LLVMArgGsharedvtFixedVtype:
values [ins->dreg] = LLVMBuildLoad (builder, convert_full (ctx, addresses [call->inst.dreg], LLVMPointerType (type_to_llvm_type (ctx, sig->ret), 0), FALSE), "");
break;
case LLVMArgWasmVtypeAsScalar:
if (!addresses [call->inst.dreg])
addresses [call->inst.dreg] = build_alloca (ctx, sig->ret);
LLVMBuildStore (builder, lcall, convert_full (ctx, addresses [call->inst.dreg], LLVMPointerType (LLVMTypeOf (lcall), 0), FALSE));
break;
default:
if (sig->ret->type != MONO_TYPE_VOID)
/* If the method returns an unsigned value, need to zext it */
values [ins->dreg] = convert_full (ctx, lcall, llvm_type_to_stack_type (cfg, type_to_llvm_type (ctx, sig->ret)), type_is_unsigned (ctx, sig->ret));
break;
}
if (should_promote_to_value) {
g_assert (addresses [call->inst.dreg]);
LLVMTypeRef addr_type = LLVMPointerType (type_to_llvm_type (ctx, sig->ret), 0);
LLVMValueRef addr = convert_full (ctx, addresses [call->inst.dreg], addr_type, FALSE);
values [ins->dreg] = LLVMBuildLoad (builder, addr, load_name);
}
*builder_ref = ctx->builder;
}
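/*
 * emit_llvmonly_throw:
 *
 * Throw or rethrow EXC by calling the mini_llvmonly_throw_exception or
 * mini_llvmonly_rethrow_exception icall, then mark the code path unreachable.
 */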
static void
emit_llvmonly_throw (EmitContext *ctx, MonoBasicBlock *bb, gboolean rethrow, LLVMValueRef exc)
{
MonoJitICallId icall_id = rethrow ? MONO_JIT_ICALL_mini_llvmonly_rethrow_exception : MONO_JIT_ICALL_mini_llvmonly_throw_exception;
LLVMValueRef callee = rethrow ? ctx->module->rethrow : ctx->module->throw_icall;
LLVMTypeRef exc_type = type_to_llvm_type (ctx, m_class_get_byval_arg (mono_get_exception_class ()));
if (!callee) {
LLVMTypeRef fun_sig = LLVMFunctionType1 (LLVMVoidType (), exc_type, FALSE);
g_assert (ctx->cfg->compile_aot);
callee = get_callee (ctx, fun_sig, MONO_PATCH_INFO_JIT_ICALL_ADDR, GUINT_TO_POINTER (icall_id));
}
LLVMValueRef args [2];
args [0] = convert (ctx, exc, exc_type);
emit_call (ctx, bb, &ctx->builder, callee, args, 1);
LLVMBuildUnreachable (ctx->builder);
ctx->builder = create_builder (ctx);
}
static void
emit_throw (EmitContext *ctx, MonoBasicBlock *bb, gboolean rethrow, LLVMValueRef exc)
{
MonoMethodSignature *throw_sig;
LLVMValueRef * const pcallee = rethrow ? &ctx->module->rethrow : &ctx->module->throw_icall;
LLVMValueRef callee = *pcallee;
char const * const icall_name = rethrow ? "mono_arch_rethrow_exception" : "mono_arch_throw_exception";
#ifndef TARGET_X86
const
#endif
MonoJitICallId icall_id = rethrow ? MONO_JIT_ICALL_mono_arch_rethrow_exception : MONO_JIT_ICALL_mono_arch_throw_exception;
if (!callee) {
throw_sig = mono_metadata_signature_alloc (mono_get_corlib (), 1);
throw_sig->ret = m_class_get_byval_arg (mono_get_void_class ());
throw_sig->params [0] = m_class_get_byval_arg (mono_get_object_class ());
if (ctx->cfg->compile_aot) {
callee = get_callee (ctx, sig_to_llvm_sig (ctx, throw_sig), MONO_PATCH_INFO_JIT_ICALL_ID, GUINT_TO_POINTER (icall_id));
} else {
#ifdef TARGET_X86
/*
* LLVM doesn't push the exception argument, so we need a different
* trampoline.
*/
icall_id = rethrow ? MONO_JIT_ICALL_mono_llvm_rethrow_exception_trampoline : MONO_JIT_ICALL_mono_llvm_throw_exception_trampoline;
#endif
callee = get_jit_callee (ctx, icall_name, sig_to_llvm_sig (ctx, throw_sig), MONO_PATCH_INFO_JIT_ICALL_ID, GUINT_TO_POINTER (icall_id));
}
mono_memory_barrier ();
}
LLVMValueRef arg;
arg = convert (ctx, exc, type_to_llvm_type (ctx, m_class_get_byval_arg (mono_get_object_class ())));
emit_call (ctx, bb, &ctx->builder, callee, &arg, 1);
}
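
/*
 * emit_resume_eh:
 *
 *   Resume propagation of the pending exception by calling
 * mini_llvmonly_resume_exception (). The call does not return.
 */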
static void
emit_resume_eh (EmitContext *ctx, MonoBasicBlock *bb)
{
const MonoJitICallId icall_id = MONO_JIT_ICALL_mini_llvmonly_resume_exception;
LLVMValueRef callee;
LLVMTypeRef fun_sig = LLVMFunctionType0 (LLVMVoidType (), FALSE);
g_assert (ctx->cfg->compile_aot);
callee = get_callee (ctx, fun_sig, MONO_PATCH_INFO_JIT_ICALL_ID, GUINT_TO_POINTER (icall_id));
emit_call (ctx, bb, &ctx->builder, callee, NULL, 0);
LLVMBuildUnreachable (ctx->builder);
ctx->builder = create_builder (ctx);
}
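
/*
 * mono_llvm_emit_clear_exception_call:
 *
 *   Emit a call to mini_llvmonly_clear_exception (), which clears the
 * currently pending exception.
 */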
static LLVMValueRef
mono_llvm_emit_clear_exception_call (EmitContext *ctx, LLVMBuilderRef builder)
{
const MonoJitICallId icall_id = MONO_JIT_ICALL_mini_llvmonly_clear_exception;
LLVMTypeRef call_sig = LLVMFunctionType (LLVMVoidType (), NULL, 0, FALSE);
	LLVMValueRef callee = get_callee (ctx, call_sig, MONO_PATCH_INFO_JIT_ICALL_ID, GUINT_TO_POINTER (icall_id));
g_assert (builder && callee);
return LLVMBuildCall (builder, callee, NULL, 0, "");
}
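
/*
 * mono_llvm_emit_load_exception_call:
 *
 *   Emit a call to mini_llvmonly_load_exception (), which returns the
 * currently pending exception object.
 */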
static LLVMValueRef
mono_llvm_emit_load_exception_call (EmitContext *ctx, LLVMBuilderRef builder)
{
const MonoJitICallId icall_id = MONO_JIT_ICALL_mini_llvmonly_load_exception;
LLVMTypeRef call_sig = LLVMFunctionType (ObjRefType (), NULL, 0, FALSE);
	g_assert (ctx->cfg->compile_aot);
	LLVMValueRef callee = get_callee (ctx, call_sig, MONO_PATCH_INFO_JIT_ICALL_ID, GUINT_TO_POINTER (icall_id));
g_assert (builder && callee);
return LLVMBuildCall (builder, callee, NULL, 0, "load_exception");
}
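
/*
 * mono_llvm_emit_match_exception_call:
 *
 *   Emit a call to mini_llvmonly_match_exception (), which returns the index
 * of the clause in [REGION_START, REGION_END) handling the pending exception,
 * or -1 if none matches (see the switch built in emit_landing_pad ()).
 */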
static LLVMValueRef
mono_llvm_emit_match_exception_call (EmitContext *ctx, LLVMBuilderRef builder, gint32 region_start, gint32 region_end)
{
const char *icall_name = "mini_llvmonly_match_exception";
const MonoJitICallId icall_id = MONO_JIT_ICALL_mini_llvmonly_match_exception;
ctx->builder = builder;
LLVMValueRef args[5];
const int num_args = G_N_ELEMENTS (args);
args [0] = convert (ctx, get_aotconst (ctx, MONO_PATCH_INFO_AOT_JIT_INFO, GINT_TO_POINTER (ctx->cfg->method_index), LLVMPointerType (IntPtrType (), 0)), IntPtrType ());
args [1] = LLVMConstInt (LLVMInt32Type (), region_start, 0);
args [2] = LLVMConstInt (LLVMInt32Type (), region_end, 0);
if (ctx->cfg->rgctx_var) {
if (ctx->cfg->llvm_only) {
args [3] = convert (ctx, ctx->rgctx_arg, IntPtrType ());
} else {
LLVMValueRef rgctx_alloc = ctx->addresses [ctx->cfg->rgctx_var->dreg];
g_assert (rgctx_alloc);
args [3] = LLVMBuildLoad (builder, convert (ctx, rgctx_alloc, LLVMPointerType (IntPtrType (), 0)), "");
}
} else {
args [3] = LLVMConstInt (IntPtrType (), 0, 0);
}
if (ctx->this_arg)
args [4] = convert (ctx, ctx->this_arg, IntPtrType ());
else
args [4] = LLVMConstInt (IntPtrType (), 0, 0);
LLVMTypeRef match_sig = LLVMFunctionType5 (LLVMInt32Type (), IntPtrType (), LLVMInt32Type (), LLVMInt32Type (), IntPtrType (), IntPtrType (), FALSE);
LLVMValueRef callee;
g_assert (ctx->cfg->compile_aot);
ctx->builder = builder;
// get_callee expects ctx->builder to be the emitting builder
callee = get_callee (ctx, match_sig, MONO_PATCH_INFO_JIT_ICALL_ID, GUINT_TO_POINTER (icall_id));
g_assert (builder && callee);
g_assert (ctx->ex_var);
return LLVMBuildCall (builder, callee, args, num_args, icall_name);
}
// FIXME: This won't work because the code-finding makes this
// not a constant.
/*#define MONO_PERSONALITY_DEBUG*/
#ifdef MONO_PERSONALITY_DEBUG
static const gboolean use_mono_personality_debug = TRUE;
static const char *default_personality_name = "mono_debug_personality";
#else
static const gboolean use_mono_personality_debug = FALSE;
static const char *default_personality_name = "__gxx_personality_v0";
#endif
static LLVMTypeRef
default_cpp_lpad_exc_signature (void)
{
static LLVMTypeRef sig;
if (!sig) {
LLVMTypeRef signature [2];
signature [0] = LLVMPointerType (LLVMInt8Type (), 0);
signature [1] = LLVMInt32Type ();
sig = LLVMStructType (signature, 2, FALSE);
}
return sig;
}
static LLVMValueRef
get_mono_personality (EmitContext *ctx)
{
LLVMValueRef personality = NULL;
LLVMTypeRef personality_type = LLVMFunctionType (LLVMInt32Type (), NULL, 0, TRUE);
g_assert (ctx->cfg->compile_aot);
if (!use_mono_personality_debug) {
personality = LLVMGetNamedFunction (ctx->lmodule, default_personality_name);
} else {
personality = get_callee (ctx, personality_type, MONO_PATCH_INFO_JIT_ICALL_ID, GUINT_TO_POINTER (MONO_JIT_ICALL_mono_debug_personality));
}
g_assert (personality);
return personality;
}
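
/*
 * emit_landing_pad:
 *
 *   Emit the landing pad for the clause group starting at clause GROUP_INDEX
 * and containing GROUP_SIZE clauses. With deopt, the pad calls
 * mini_llvmonly_resume_exception_il_state () and returns the value stored into
 * ctx->il_state_ret. Otherwise, groups containing a catch clause call
 * mono_llvm_emit_match_exception_call () and switch to the matching handler,
 * while finally/fault-only groups branch directly to their handler.
 */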
static LLVMBasicBlockRef
emit_landing_pad (EmitContext *ctx, int group_index, int group_size)
{
MonoCompile *cfg = ctx->cfg;
LLVMBuilderRef old_builder = ctx->builder;
MonoExceptionClause *group_start = cfg->header->clauses + group_index;
LLVMBuilderRef lpadBuilder = create_builder (ctx);
ctx->builder = lpadBuilder;
MonoBasicBlock *handler_bb = cfg->cil_offset_to_bb [CLAUSE_START (group_start)];
g_assert (handler_bb);
// <resultval> = landingpad <somety> personality <type> <pers_fn> <clause>+
LLVMValueRef personality = get_mono_personality (ctx);
g_assert (personality);
char *bb_name = g_strdup_printf ("LPAD%d_BB", group_index);
LLVMBasicBlockRef lpad_bb = gen_bb (ctx, bb_name);
g_free (bb_name);
LLVMPositionBuilderAtEnd (lpadBuilder, lpad_bb);
LLVMValueRef landing_pad = LLVMBuildLandingPad (lpadBuilder, default_cpp_lpad_exc_signature (), personality, 0, "");
g_assert (landing_pad);
LLVMValueRef cast = LLVMBuildBitCast (lpadBuilder, ctx->module->sentinel_exception, LLVMPointerType (LLVMInt8Type (), 0), "int8TypeInfo");
LLVMAddClause (landing_pad, cast);
if (ctx->cfg->deopt) {
/*
* Call mini_llvmonly_resume_exception_il_state (lmf, il_state)
*
* The call will execute the catch clause and the rest of the method and store the return
* value into ctx->il_state_ret.
*/
if (!ctx->has_catch) {
			/* This landing pad is unused since the method has no catch clauses */
LLVMBuildUnreachable (lpadBuilder);
return lpad_bb;
}
const MonoJitICallId icall_id = MONO_JIT_ICALL_mini_llvmonly_resume_exception_il_state;
LLVMValueRef callee;
LLVMValueRef args [2];
LLVMTypeRef fun_sig = LLVMFunctionType2 (LLVMVoidType (), IntPtrType (), IntPtrType (), FALSE);
callee = get_callee (ctx, fun_sig, MONO_PATCH_INFO_JIT_ICALL_ID, GUINT_TO_POINTER (icall_id));
g_assert (ctx->cfg->lmf_var);
g_assert (ctx->addresses [ctx->cfg->lmf_var->dreg]);
args [0] = LLVMBuildPtrToInt (ctx->builder, ctx->addresses [ctx->cfg->lmf_var->dreg], IntPtrType (), "");
args [1] = LLVMBuildPtrToInt (ctx->builder, ctx->il_state, IntPtrType (), "");
emit_call (ctx, NULL, &ctx->builder, callee, args, 2);
/* Return the value set in ctx->il_state_ret */
LLVMTypeRef ret_type = LLVMGetReturnType (LLVMGetElementType (LLVMTypeOf (ctx->lmethod)));
LLVMBuilderRef builder = ctx->builder;
LLVMValueRef addr, retval, gep, indexes [2];
switch (ctx->linfo->ret.storage) {
case LLVMArgNone:
LLVMBuildRetVoid (builder);
break;
case LLVMArgNormal:
case LLVMArgWasmVtypeAsScalar:
case LLVMArgVtypeInReg: {
if (ctx->sig->ret->type == MONO_TYPE_VOID) {
LLVMBuildRetVoid (builder);
break;
}
addr = ctx->il_state_ret;
g_assert (addr);
addr = convert (ctx, ctx->il_state_ret, LLVMPointerType (ret_type, 0));
indexes [0] = LLVMConstInt (LLVMInt32Type (), 0, FALSE);
indexes [1] = LLVMConstInt (LLVMInt32Type (), 0, FALSE);
gep = LLVMBuildGEP (builder, addr, indexes, 1, "");
LLVMBuildRet (builder, LLVMBuildLoad (builder, gep, ""));
break;
}
case LLVMArgVtypeRetAddr: {
LLVMValueRef ret_addr;
g_assert (cfg->vret_addr);
ret_addr = ctx->values [cfg->vret_addr->dreg];
addr = ctx->il_state_ret;
g_assert (addr);
/* The ret value is in il_state_ret, copy it to the memory pointed to by the vret arg */
ret_type = type_to_llvm_type (ctx, ctx->sig->ret);
indexes [0] = LLVMConstInt (LLVMInt32Type (), 0, FALSE);
indexes [1] = LLVMConstInt (LLVMInt32Type (), 0, FALSE);
gep = LLVMBuildGEP (builder, addr, indexes, 1, "");
retval = convert (ctx, LLVMBuildLoad (builder, gep, ""), ret_type);
LLVMBuildStore (builder, retval, convert (ctx, ret_addr, LLVMPointerType (ret_type, 0)));
LLVMBuildRetVoid (builder);
break;
}
default:
g_assert_not_reached ();
break;
}
return lpad_bb;
}
LLVMBasicBlockRef resume_bb = gen_bb (ctx, "RESUME_BB");
LLVMBuilderRef resume_builder = create_builder (ctx);
ctx->builder = resume_builder;
LLVMPositionBuilderAtEnd (resume_builder, resume_bb);
emit_resume_eh (ctx, handler_bb);
// Build match
ctx->builder = lpadBuilder;
LLVMPositionBuilderAtEnd (lpadBuilder, lpad_bb);
gboolean finally_only = TRUE;
MonoExceptionClause *group_cursor = group_start;
for (int i = 0; i < group_size; i ++) {
if (!(group_cursor->flags & MONO_EXCEPTION_CLAUSE_FINALLY || group_cursor->flags & MONO_EXCEPTION_CLAUSE_FAULT))
finally_only = FALSE;
group_cursor++;
}
// FIXME:
// Handle landing pad inlining
if (!finally_only) {
// So at each level of the exception stack we will match the exception again.
// During that match, we need to compare against the handler types for the current
// protected region. We send the try start and end so that we can only check against
// handlers for this lexical protected region.
LLVMValueRef match = mono_llvm_emit_match_exception_call (ctx, lpadBuilder, group_start->try_offset, group_start->try_offset + group_start->try_len);
// if returns -1, resume
LLVMValueRef switch_ins = LLVMBuildSwitch (lpadBuilder, match, resume_bb, group_size);
// else move to that target bb
for (int i = 0; i < group_size; i++) {
MonoExceptionClause *clause = group_start + i;
int clause_index = clause - cfg->header->clauses;
MonoBasicBlock *handler_bb = (MonoBasicBlock*)g_hash_table_lookup (ctx->clause_to_handler, GINT_TO_POINTER (clause_index));
g_assert (handler_bb);
g_assert (ctx->bblocks [handler_bb->block_num].call_handler_target_bb);
LLVMAddCase (switch_ins, LLVMConstInt (LLVMInt32Type (), clause_index, FALSE), ctx->bblocks [handler_bb->block_num].call_handler_target_bb);
}
} else {
int clause_index = group_start - cfg->header->clauses;
MonoBasicBlock *finally_bb = (MonoBasicBlock*)g_hash_table_lookup (ctx->clause_to_handler, GINT_TO_POINTER (clause_index));
g_assert (finally_bb);
LLVMBuildBr (ctx->builder, ctx->bblocks [finally_bb->block_num].call_handler_target_bb);
}
ctx->builder = old_builder;
return lpad_bb;
}
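
/*
 * Constant vector helpers: e.g. create_const_vector_4_i32 (0, 1, 2, 3) yields
 * the LLVM constant <i32 0, i32 1, i32 2, i32 3>.
 */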
static LLVMValueRef
create_const_vector (LLVMTypeRef t, const int *vals, int count)
{
g_assert (count <= MAX_VECTOR_ELEMS);
LLVMValueRef llvm_vals [MAX_VECTOR_ELEMS];
for (int i = 0; i < count; i++)
llvm_vals [i] = LLVMConstInt (t, vals [i], FALSE);
return LLVMConstVector (llvm_vals, count);
}
static LLVMValueRef
create_const_vector_i32 (const int *mask, int count)
{
return create_const_vector (LLVMInt32Type (), mask, count);
}
static LLVMValueRef
create_const_vector_4_i32 (int v0, int v1, int v2, int v3)
{
LLVMValueRef mask [4];
mask [0] = LLVMConstInt (LLVMInt32Type (), v0, FALSE);
mask [1] = LLVMConstInt (LLVMInt32Type (), v1, FALSE);
mask [2] = LLVMConstInt (LLVMInt32Type (), v2, FALSE);
mask [3] = LLVMConstInt (LLVMInt32Type (), v3, FALSE);
return LLVMConstVector (mask, 4);
}
static LLVMValueRef
create_const_vector_2_i32 (int v0, int v1)
{
LLVMValueRef mask [2];
mask [0] = LLVMConstInt (LLVMInt32Type (), v0, FALSE);
mask [1] = LLVMConstInt (LLVMInt32Type (), v1, FALSE);
return LLVMConstVector (mask, 2);
}
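
/*
 * broadcast_element:
 *
 *   Splat ELEM into a vector of COUNT elements: insert it into lane 0 of a
 * 1-element vector, then shuffle with an all-zero mask.
 */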
static LLVMValueRef
broadcast_element (EmitContext *ctx, LLVMValueRef elem, int count)
{
LLVMTypeRef t = LLVMTypeOf (elem);
LLVMTypeRef init_vec_t = LLVMVectorType (t, 1);
LLVMValueRef undef = LLVMGetUndef (init_vec_t);
LLVMValueRef vec = LLVMBuildInsertElement (ctx->builder, undef, elem, const_int32 (0), "");
LLVMValueRef select_zero = LLVMConstNull (LLVMVectorType (LLVMInt32Type (), count));
return LLVMBuildShuffleVector (ctx->builder, vec, undef, select_zero, "broadcast");
}
static LLVMValueRef
broadcast_constant (int const_val, LLVMTypeRef elem_t, int count)
{
int vals [MAX_VECTOR_ELEMS];
for (int i = 0; i < count; ++i)
vals [i] = const_val;
return create_const_vector (elem_t, vals, count);
}
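
/*
 * create_shift_vector:
 *
 *   Build a per-lane shift amount vector shaped like TYPE_DONOR by converting
 * SHIFTAMT to the element type and broadcasting it.
 */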
static LLVMValueRef
create_shift_vector (EmitContext *ctx, LLVMValueRef type_donor, LLVMValueRef shiftamt)
{
LLVMTypeRef t = LLVMTypeOf (type_donor);
unsigned int elems = LLVMGetVectorSize (t);
LLVMTypeRef elem_t = LLVMGetElementType (t);
shiftamt = convert_full (ctx, shiftamt, elem_t, TRUE);
shiftamt = broadcast_element (ctx, shiftamt, elems);
return shiftamt;
}
static LLVMTypeRef
to_integral_vector_type (LLVMTypeRef t)
{
unsigned int elems = LLVMGetVectorSize (t);
LLVMTypeRef elem_t = LLVMGetElementType (t);
unsigned int bits = mono_llvm_get_prim_size_bits (elem_t);
return LLVMVectorType (LLVMIntType (bits), elems);
}
static LLVMValueRef
bitcast_to_integral (EmitContext *ctx, LLVMValueRef vec)
{
LLVMTypeRef src_t = LLVMTypeOf (vec);
LLVMTypeRef dst_t = to_integral_vector_type (src_t);
if (dst_t != src_t)
return LLVMBuildBitCast (ctx->builder, vec, dst_t, "bc2i");
return vec;
}
static LLVMValueRef
extract_high_elements (EmitContext *ctx, LLVMValueRef src_vec)
{
LLVMTypeRef src_t = LLVMTypeOf (src_vec);
unsigned int src_elems = LLVMGetVectorSize (src_t);
unsigned int dst_elems = src_elems / 2;
int mask [MAX_VECTOR_ELEMS] = { 0 };
	for (unsigned int i = 0; i < dst_elems; ++i)
mask [i] = dst_elems + i;
return LLVMBuildShuffleVector (ctx->builder, src_vec, LLVMGetUndef (src_t), create_const_vector_i32 (mask, dst_elems), "extract_high");
}
static LLVMValueRef
keep_lowest_element (EmitContext *ctx, LLVMTypeRef dst_t, LLVMValueRef vec)
{
LLVMTypeRef t = LLVMTypeOf (vec);
g_assert (LLVMGetElementType (dst_t) == LLVMGetElementType (t));
unsigned int elems = LLVMGetVectorSize (dst_t);
unsigned int src_elems = LLVMGetVectorSize (t);
int mask [MAX_VECTOR_ELEMS] = { 0 };
mask [0] = 0;
for (unsigned int i = 1; i < elems; ++i)
mask [i] = src_elems;
return LLVMBuildShuffleVector (ctx->builder, vec, LLVMConstNull (t), create_const_vector_i32 (mask, elems), "keep_lowest");
}
static LLVMValueRef
concatenate_vectors (EmitContext *ctx, LLVMValueRef xs, LLVMValueRef ys)
{
LLVMTypeRef t = LLVMTypeOf (xs);
unsigned int elems = LLVMGetVectorSize (t) * 2;
int mask [MAX_VECTOR_ELEMS] = { 0 };
	for (unsigned int i = 0; i < elems; ++i)
mask [i] = i;
return LLVMBuildShuffleVector (ctx->builder, xs, ys, create_const_vector_i32 (mask, elems), "concat_vecs");
}
static LLVMValueRef
scalar_from_vector (EmitContext *ctx, LLVMValueRef xs)
{
return LLVMBuildExtractElement (ctx->builder, xs, const_int32 (0), "v2s");
}
static LLVMValueRef
vector_from_scalar (EmitContext *ctx, LLVMTypeRef type, LLVMValueRef x)
{
return LLVMBuildInsertElement (ctx->builder, LLVMConstNull (type), x, const_int32 (0), "s2v");
}
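
/*
 * The immediate_unroll_* helpers expand an operation which needs a constant
 * operand but receives a runtime value into a switch over every possible
 * constant, merging the per-case results with a phi. Typical usage (sketch):
 *
 *   ImmediateUnrollCtx ictx = immediate_unroll_begin (ctx, bb, max, index, ret_t, "name");
 *   int i;
 *   while (immediate_unroll_next (&ictx, &i))
 *       immediate_unroll_commit (&ictx, i, <result emitted for constant i>);
 *   immediate_unroll_unreachable_default (&ictx);
 *   LLVMBasicBlockRef cont_bb;
 *   LLVMValueRef result = immediate_unroll_end (&ictx, &cont_bb);
 */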
typedef struct {
EmitContext *ctx;
MonoBasicBlock *bb;
LLVMBasicBlockRef continuation;
LLVMValueRef phi;
LLVMValueRef switch_ins;
LLVMBasicBlockRef tmp_block;
LLVMBasicBlockRef default_case;
LLVMTypeRef switch_index_type;
const char *name;
int max_cases;
int i;
} ImmediateUnrollCtx;
static ImmediateUnrollCtx
immediate_unroll_begin (
EmitContext *ctx, MonoBasicBlock *bb, int max_cases,
LLVMValueRef switch_index, LLVMTypeRef return_type, const char *name)
{
LLVMBasicBlockRef default_case = gen_bb (ctx, name);
LLVMBasicBlockRef continuation = gen_bb (ctx, name);
LLVMValueRef switch_ins = LLVMBuildSwitch (ctx->builder, switch_index, default_case, max_cases);
LLVMPositionBuilderAtEnd (ctx->builder, continuation);
LLVMValueRef phi = LLVMBuildPhi (ctx->builder, return_type, name);
ImmediateUnrollCtx ictx = { 0 };
ictx.ctx = ctx;
ictx.bb = bb;
ictx.continuation = continuation;
ictx.phi = phi;
ictx.switch_ins = switch_ins;
ictx.default_case = default_case;
ictx.switch_index_type = LLVMTypeOf (switch_index);
ictx.name = name;
ictx.max_cases = max_cases;
return ictx;
}
static gboolean
immediate_unroll_next (ImmediateUnrollCtx *ictx, int *i)
{
if (ictx->i >= ictx->max_cases)
return FALSE;
ictx->tmp_block = gen_bb (ictx->ctx, ictx->name);
LLVMPositionBuilderAtEnd (ictx->ctx->builder, ictx->tmp_block);
*i = ictx->i;
++ictx->i;
return TRUE;
}
static void
immediate_unroll_commit (ImmediateUnrollCtx *ictx, int switch_const, LLVMValueRef value)
{
LLVMBuildBr (ictx->ctx->builder, ictx->continuation);
LLVMAddCase (ictx->switch_ins, LLVMConstInt (ictx->switch_index_type, switch_const, FALSE), ictx->tmp_block);
LLVMAddIncoming (ictx->phi, &value, &ictx->tmp_block, 1);
}
static void
immediate_unroll_default (ImmediateUnrollCtx *ictx)
{
LLVMPositionBuilderAtEnd (ictx->ctx->builder, ictx->default_case);
}
static void
immediate_unroll_commit_default (ImmediateUnrollCtx *ictx, LLVMValueRef value)
{
LLVMBuildBr (ictx->ctx->builder, ictx->continuation);
LLVMAddIncoming (ictx->phi, &value, &ictx->default_case, 1);
}
static void
immediate_unroll_unreachable_default (ImmediateUnrollCtx *ictx)
{
immediate_unroll_default (ictx);
LLVMBuildUnreachable (ictx->ctx->builder);
}
static LLVMValueRef
immediate_unroll_end (ImmediateUnrollCtx *ictx, LLVMBasicBlockRef *continuation)
{
EmitContext *ctx = ictx->ctx;
LLVMBuilderRef builder = ctx->builder;
LLVMPositionBuilderAtEnd (builder, ictx->continuation);
*continuation = ictx->continuation;
ctx->bblocks [ictx->bb->block_num].end_bblock = ictx->continuation;
return ictx->phi;
}
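
/*
 * On some targets (arm64 with 8/16-bit elements, see
 * check_needs_fake_scalar_op ()) no scalar form of an intrinsic exists, so the
 * operation is performed on a full vector instead ("fake scalar op") and only
 * the lowest lane of the result is kept, with the upper lanes zeroed.
 */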
typedef struct {
EmitContext *ctx;
LLVMTypeRef intermediate_type;
LLVMTypeRef return_type;
gboolean needs_fake_scalar_op;
llvm_ovr_tag_t ovr_tag;
} ScalarOpFromVectorOpCtx;
static inline gboolean
check_needs_fake_scalar_op (MonoTypeEnum type)
{
#if defined(TARGET_ARM64)
switch (type) {
case MONO_TYPE_U1:
case MONO_TYPE_I1:
case MONO_TYPE_U2:
case MONO_TYPE_I2:
return TRUE;
}
#endif
return FALSE;
}
static ScalarOpFromVectorOpCtx
scalar_op_from_vector_op (EmitContext *ctx, LLVMTypeRef return_type, MonoInst *ins)
{
ScalarOpFromVectorOpCtx ret = { 0 };
ret.ctx = ctx;
ret.intermediate_type = return_type;
ret.return_type = return_type;
ret.needs_fake_scalar_op = check_needs_fake_scalar_op (inst_c1_type (ins));
ret.ovr_tag = ovr_tag_from_llvm_type (return_type);
if (!ret.needs_fake_scalar_op) {
ret.ovr_tag = ovr_tag_force_scalar (ret.ovr_tag);
ret.intermediate_type = ovr_tag_to_llvm_type (ret.ovr_tag);
}
return ret;
}
static void
scalar_op_from_vector_op_process_args (ScalarOpFromVectorOpCtx *sctx, LLVMValueRef *args, int num_args)
{
if (!sctx->needs_fake_scalar_op)
for (int i = 0; i < num_args; ++i)
args [i] = scalar_from_vector (sctx->ctx, args [i]);
}
static LLVMValueRef
scalar_op_from_vector_op_process_result (ScalarOpFromVectorOpCtx *sctx, LLVMValueRef result)
{
if (sctx->needs_fake_scalar_op)
return keep_lowest_element (sctx->ctx, LLVMTypeOf (result), result);
return vector_from_scalar (sctx->ctx, sctx->return_type, result);
}
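
/*
 * emit_llvmonly_handler_start:
 *
 *   Emit the start of an exception handler in llvm-only mode: for catch
 * clauses, load the pending exception into ctx->ex_var and clear it. On wasm,
 * also pop the LMF pushed for stack walking, since the normal popping code
 * does not run when an exception is thrown.
 */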
static void
emit_llvmonly_handler_start (EmitContext *ctx, MonoBasicBlock *bb, LLVMBasicBlockRef cbb)
{
int clause_index = MONO_REGION_CLAUSE_INDEX (bb->region);
MonoExceptionClause *clause = &ctx->cfg->header->clauses [clause_index];
// Make exception available to catch blocks
if (!(clause->flags & MONO_EXCEPTION_CLAUSE_FINALLY || clause->flags & MONO_EXCEPTION_CLAUSE_FAULT)) {
LLVMValueRef mono_exc = mono_llvm_emit_load_exception_call (ctx, ctx->builder);
g_assert (ctx->ex_var);
LLVMBuildStore (ctx->builder, LLVMBuildBitCast (ctx->builder, mono_exc, ObjRefType (), ""), ctx->ex_var);
if (bb->in_scount == 1) {
MonoInst *exvar = bb->in_stack [0];
g_assert (!ctx->values [exvar->dreg]);
g_assert (ctx->ex_var);
ctx->values [exvar->dreg] = LLVMBuildLoad (ctx->builder, ctx->ex_var, "save_exception");
emit_volatile_store (ctx, exvar->dreg);
}
mono_llvm_emit_clear_exception_call (ctx, ctx->builder);
}
#ifdef TARGET_WASM
if (ctx->cfg->lmf_var && !ctx->cfg->deopt) {
LLVMValueRef callee;
LLVMValueRef args [1];
LLVMTypeRef sig = LLVMFunctionType1 (LLVMVoidType (), ctx->module->ptr_type, FALSE);
/*
* There might be an LMF on the stack inserted to enable stack walking, see
* method_needs_stack_walk (). If an exception is thrown, the LMF popping code
* is not executed, so do it here.
*/
g_assert (ctx->addresses [ctx->cfg->lmf_var->dreg]);
callee = get_callee (ctx, sig, MONO_PATCH_INFO_JIT_ICALL_ADDR, GUINT_TO_POINTER (MONO_JIT_ICALL_mini_llvmonly_pop_lmf));
args [0] = convert (ctx, ctx->addresses [ctx->cfg->lmf_var->dreg], ctx->module->ptr_type);
emit_call (ctx, bb, &ctx->builder, callee, args, 1);
}
#endif
LLVMBuilderRef handler_builder = create_builder (ctx);
LLVMBasicBlockRef target_bb = ctx->bblocks [bb->block_num].call_handler_target_bb;
LLVMPositionBuilderAtEnd (handler_builder, target_bb);
// Make the handler code end with a jump to cbb
LLVMBuildBr (handler_builder, cbb);
}
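
/*
 * emit_handler_start:
 *
 *   Emit a landing pad for BB using the mono personality function, store the
 * exception object into ctx->ex_var, and switch on the selector value to
 * transfer control to the right handler when clauses are nested.
 */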
static void
emit_handler_start (EmitContext *ctx, MonoBasicBlock *bb, LLVMBuilderRef builder)
{
MonoCompile *cfg = ctx->cfg;
LLVMValueRef *values = ctx->values;
LLVMModuleRef lmodule = ctx->lmodule;
BBInfo *bblocks = ctx->bblocks;
LLVMTypeRef i8ptr;
LLVMValueRef personality;
LLVMValueRef landing_pad;
LLVMBasicBlockRef target_bb;
MonoInst *exvar;
static int ti_generator;
char ti_name [128];
LLVMValueRef type_info;
int clause_index;
GSList *l;
// <resultval> = landingpad <somety> personality <type> <pers_fn> <clause>+
if (cfg->compile_aot) {
/* Use a dummy personality function */
personality = LLVMGetNamedFunction (lmodule, "mono_personality");
g_assert (personality);
} else {
/* Can't cache this as each method is in its own llvm module */
LLVMTypeRef personality_type = LLVMFunctionType (LLVMInt32Type (), NULL, 0, TRUE);
personality = LLVMAddFunction (ctx->lmodule, "mono_personality", personality_type);
mono_llvm_add_func_attr (personality, LLVM_ATTR_NO_UNWIND);
LLVMBasicBlockRef entry_bb = LLVMAppendBasicBlock (personality, "ENTRY");
LLVMBuilderRef builder2 = LLVMCreateBuilder ();
LLVMPositionBuilderAtEnd (builder2, entry_bb);
LLVMBuildRet (builder2, LLVMConstInt (LLVMInt32Type (), 0, FALSE));
LLVMDisposeBuilder (builder2);
}
i8ptr = LLVMPointerType (LLVMInt8Type (), 0);
clause_index = (mono_get_block_region_notry (cfg, bb->region) >> 8) - 1;
/*
* Create the type info
*/
sprintf (ti_name, "type_info_%d", ti_generator);
ti_generator ++;
if (cfg->compile_aot) {
/* decode_eh_frame () in aot-runtime.c will decode this */
type_info = LLVMAddGlobal (lmodule, LLVMInt32Type (), ti_name);
LLVMSetInitializer (type_info, LLVMConstInt (LLVMInt32Type (), clause_index, FALSE));
/*
		 * These symbols are not really used; the clause_index is embedded into the EH tables generated by DwarfMonoException in LLVM.
*/
LLVMSetLinkage (type_info, LLVMInternalLinkage);
} else {
type_info = LLVMAddGlobal (lmodule, LLVMInt32Type (), ti_name);
LLVMSetInitializer (type_info, LLVMConstInt (LLVMInt32Type (), clause_index, FALSE));
}
{
LLVMTypeRef members [2], ret_type;
members [0] = i8ptr;
members [1] = LLVMInt32Type ();
ret_type = LLVMStructType (members, 2, FALSE);
landing_pad = LLVMBuildLandingPad (builder, ret_type, personality, 1, "");
LLVMAddClause (landing_pad, type_info);
/* Store the exception into the exvar */
if (ctx->ex_var)
LLVMBuildStore (builder, convert (ctx, LLVMBuildExtractValue (builder, landing_pad, 0, "ex_obj"), ObjRefType ()), ctx->ex_var);
}
/*
	 * LLVM throw sites are associated with one landing pad, and LLVM-generated
* code expects control to be transferred to this landing pad even in the
* presence of nested clauses. The landing pad needs to branch to the landing
* pads belonging to nested clauses based on the selector value returned by
* the landing pad instruction, which is passed to the landing pad in a
* register by the EH code.
*/
target_bb = bblocks [bb->block_num].call_handler_target_bb;
g_assert (target_bb);
/*
* Branch to the correct landing pad
*/
LLVMValueRef ex_selector = LLVMBuildExtractValue (builder, landing_pad, 1, "ex_selector");
LLVMValueRef switch_ins = LLVMBuildSwitch (builder, ex_selector, target_bb, 0);
for (l = ctx->nested_in [clause_index]; l; l = l->next) {
int nesting_clause_index = GPOINTER_TO_INT (l->data);
MonoBasicBlock *handler_bb;
handler_bb = (MonoBasicBlock*)g_hash_table_lookup (ctx->clause_to_handler, GINT_TO_POINTER (nesting_clause_index));
g_assert (handler_bb);
g_assert (ctx->bblocks [handler_bb->block_num].call_handler_target_bb);
LLVMAddCase (switch_ins, LLVMConstInt (LLVMInt32Type (), nesting_clause_index, FALSE), ctx->bblocks [handler_bb->block_num].call_handler_target_bb);
}
/* Start a new bblock which CALL_HANDLER can branch to */
ctx->builder = builder = create_builder (ctx);
LLVMPositionBuilderAtEnd (ctx->builder, target_bb);
ctx->bblocks [bb->block_num].end_bblock = target_bb;
/* Store the exception into the IL level exvar */
	if (bb->in_scount == 1) {
exvar = bb->in_stack [0];
// FIXME: This is shared with filter clauses ?
g_assert (!values [exvar->dreg]);
g_assert (ctx->ex_var);
values [exvar->dreg] = LLVMBuildLoad (builder, ctx->ex_var, "");
emit_volatile_store (ctx, exvar->dreg);
}
/* Make normal branches to the start of the clause branch to the new bblock */
bblocks [bb->block_num].bblock = target_bb;
}
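
/*
 * Fp constant helpers. NaN canonicalization for wasm is currently disabled
 * (#if 0). When cfg->r4fp is not set, r4 constants are widened to double.
 */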
static LLVMValueRef
get_double_const (MonoCompile *cfg, double val)
{
//#ifdef TARGET_WASM
#if 0
	// Wasm requires us to canonicalize NaNs.
if (mono_isnan (val))
*(gint64 *)&val = 0x7FF8000000000000ll;
#endif
return LLVMConstReal (LLVMDoubleType (), val);
}
static LLVMValueRef
get_float_const (MonoCompile *cfg, float val)
{
//#ifdef TARGET_WASM
#if 0
if (mono_isnan (val))
*(int *)&val = 0x7FC00000;
#endif
if (cfg->r4fp)
return LLVMConstReal (LLVMFloatType (), val);
else
return LLVMConstFPExt (LLVMConstReal (LLVMFloatType (), val), LLVMDoubleType ());
}
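
/*
 * call_overloaded_intrins:
 *
 *   Emit a call to the intrinsic identified by ID and the overload tag
 * OVR_TAG, converting each argument to the corresponding parameter type of
 * the intrinsic first.
 */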
static LLVMValueRef
call_overloaded_intrins (EmitContext *ctx, int id, llvm_ovr_tag_t ovr_tag, LLVMValueRef *args, const char *name)
{
int key = key_from_id_and_tag (id, ovr_tag);
LLVMValueRef intrins = get_intrins (ctx, key);
int nargs = LLVMCountParamTypes (LLVMGetElementType (LLVMTypeOf (intrins)));
for (int i = 0; i < nargs; ++i) {
LLVMTypeRef t1 = LLVMTypeOf (args [i]);
LLVMTypeRef t2 = LLVMTypeOf (LLVMGetParam (intrins, i));
if (t1 != t2)
args [i] = convert (ctx, args [i], t2);
}
return LLVMBuildCall (ctx->builder, intrins, args, nargs, name);
}
static LLVMValueRef
call_intrins (EmitContext *ctx, int id, LLVMValueRef *args, const char *name)
{
return call_overloaded_intrins (ctx, id, 0, args, name);
}
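
/*
 * process_bb:
 *
 *   Emit LLVM IR for the instructions in BB: handler prologues first, then
 * phi nodes (which must be grouped at the start of the bblock), then the
 * remaining instructions. Very long bblocks are split up, since some llc
 * passes are non-linear in the size of basic blocks.
 */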
static void
process_bb (EmitContext *ctx, MonoBasicBlock *bb)
{
MonoCompile *cfg = ctx->cfg;
MonoMethodSignature *sig = ctx->sig;
LLVMValueRef method = ctx->lmethod;
LLVMValueRef *values = ctx->values;
LLVMValueRef *addresses = ctx->addresses;
LLVMCallInfo *linfo = ctx->linfo;
BBInfo *bblocks = ctx->bblocks;
MonoInst *ins;
LLVMBasicBlockRef cbb;
LLVMBuilderRef builder;
gboolean has_terminator;
LLVMValueRef v;
LLVMValueRef lhs, rhs, arg3;
int nins = 0;
cbb = get_end_bb (ctx, bb);
builder = create_builder (ctx);
ctx->builder = builder;
LLVMPositionBuilderAtEnd (builder, cbb);
if (!ctx_ok (ctx))
return;
if (cfg->interp_entry_only && bb != cfg->bb_init && bb != cfg->bb_entry && bb != cfg->bb_exit) {
/* The interp entry code is in bb_entry, skip the rest as we might not be able to compile it */
LLVMBuildUnreachable (builder);
return;
}
if (bb->flags & BB_EXCEPTION_HANDLER) {
if (!ctx->llvm_only && !bblocks [bb->block_num].invoke_target) {
set_failure (ctx, "handler without invokes");
return;
}
if (ctx->llvm_only)
emit_llvmonly_handler_start (ctx, bb, cbb);
else
emit_handler_start (ctx, bb, builder);
if (!ctx_ok (ctx))
return;
builder = ctx->builder;
}
/* Handle PHI nodes first */
/* They should be grouped at the start of the bb */
for (ins = bb->code; ins; ins = ins->next) {
emit_dbg_loc (ctx, builder, ins->cil_code);
if (ins->opcode == OP_NOP)
continue;
if (!MONO_IS_PHI (ins))
break;
if (cfg->interp_entry_only)
break;
int i;
gboolean empty = TRUE;
/* Check that all input bblocks really branch to us */
for (i = 0; i < bb->in_count; ++i) {
if (bb->in_bb [i]->last_ins && bb->in_bb [i]->last_ins->opcode == OP_NOT_REACHED)
ins->inst_phi_args [i + 1] = -1;
else
empty = FALSE;
}
if (empty) {
/* LLVM doesn't like phi instructions with zero operands */
ctx->is_dead [ins->dreg] = TRUE;
continue;
}
/* Created earlier, insert it now */
LLVMInsertIntoBuilder (builder, values [ins->dreg]);
for (i = 0; i < ins->inst_phi_args [0]; i++) {
int sreg1 = ins->inst_phi_args [i + 1];
int count, j;
/*
* Count the number of times the incoming bblock branches to us,
* since llvm requires a separate entry for each.
*/
if (bb->in_bb [i]->last_ins && bb->in_bb [i]->last_ins->opcode == OP_SWITCH) {
MonoInst *switch_ins = bb->in_bb [i]->last_ins;
count = 0;
for (j = 0; j < GPOINTER_TO_UINT (switch_ins->klass); ++j) {
if (switch_ins->inst_many_bb [j] == bb)
count ++;
}
} else {
count = 1;
}
/* Remember for later */
for (j = 0; j < count; ++j) {
PhiNode *node = (PhiNode*)mono_mempool_alloc0 (ctx->mempool, sizeof (PhiNode));
node->bb = bb;
node->phi = ins;
node->in_bb = bb->in_bb [i];
node->sreg = sreg1;
bblocks [bb->in_bb [i]->block_num].phi_nodes = g_slist_prepend_mempool (ctx->mempool, bblocks [bb->in_bb [i]->block_num].phi_nodes, node);
}
}
}
// Add volatile stores for PHI nodes
// These need to be emitted after the PHI nodes
for (ins = bb->code; ins; ins = ins->next) {
const char *spec = LLVM_INS_INFO (ins->opcode);
if (ins->opcode == OP_NOP)
continue;
if (!MONO_IS_PHI (ins))
break;
if (spec [MONO_INST_DEST] != 'v')
emit_volatile_store (ctx, ins->dreg);
}
has_terminator = FALSE;
for (ins = bb->code; ins; ins = ins->next) {
const char *spec = LLVM_INS_INFO (ins->opcode);
char *dname = NULL;
char dname_buf [128];
emit_dbg_loc (ctx, builder, ins->cil_code);
nins ++;
if (nins > 1000) {
/*
* Some steps in llc are non-linear in the size of basic blocks, see #5714.
* Start a new bblock.
			 * Prevent the bblocks from being merged by doing a volatile load + cond branch
* from localloc-ed memory.
*/
if (!cfg->llvm_only)
;//set_failure (ctx, "basic block too long");
if (!ctx->long_bb_break_var) {
ctx->long_bb_break_var = build_alloca_llvm_type_name (ctx, LLVMInt32Type (), 0, "long_bb_break");
mono_llvm_build_store (ctx->alloca_builder, LLVMConstInt (LLVMInt32Type (), 0, FALSE), ctx->long_bb_break_var, TRUE, LLVM_BARRIER_NONE);
}
cbb = gen_bb (ctx, "CONT_LONG_BB");
LLVMBasicBlockRef dummy_bb = gen_bb (ctx, "CONT_LONG_BB_DUMMY");
LLVMValueRef load = mono_llvm_build_load (builder, ctx->long_bb_break_var, "", TRUE);
/*
* The long_bb_break_var is initialized to 0 in the prolog, so this branch will always go to 'cbb'
* but llvm doesn't know that, so the branch is not going to be eliminated.
*/
LLVMValueRef cmp = LLVMBuildICmp (builder, LLVMIntEQ, load, LLVMConstInt (LLVMInt32Type (), 0, FALSE), "");
LLVMBuildCondBr (builder, cmp, cbb, dummy_bb);
/* Emit a dummy false bblock which does nothing but contains a volatile store so it cannot be eliminated */
ctx->builder = builder = create_builder (ctx);
LLVMPositionBuilderAtEnd (builder, dummy_bb);
mono_llvm_build_store (builder, LLVMConstInt (LLVMInt32Type (), 1, FALSE), ctx->long_bb_break_var, TRUE, LLVM_BARRIER_NONE);
LLVMBuildBr (builder, cbb);
ctx->builder = builder = create_builder (ctx);
LLVMPositionBuilderAtEnd (builder, cbb);
ctx->bblocks [bb->block_num].end_bblock = cbb;
nins = 0;
emit_dbg_loc (ctx, builder, ins->cil_code);
}
if (has_terminator)
/* There could be instructions after a terminator, skip them */
break;
if (spec [MONO_INST_DEST] != ' ' && !MONO_IS_STORE_MEMBASE (ins)) {
sprintf (dname_buf, "t%d", ins->dreg);
dname = dname_buf;
}
if (spec [MONO_INST_SRC1] != ' ' && spec [MONO_INST_SRC1] != 'v') {
MonoInst *var = get_vreg_to_inst (cfg, ins->sreg1);
if (var && var->flags & (MONO_INST_VOLATILE|MONO_INST_INDIRECT) && var->opcode != OP_GSHAREDVT_ARG_REGOFFSET) {
lhs = emit_volatile_load (ctx, ins->sreg1);
} else {
/* It is ok for SETRET to have an uninitialized argument */
if (!values [ins->sreg1] && ins->opcode != OP_SETRET) {
set_failure (ctx, "sreg1");
return;
}
lhs = values [ins->sreg1];
}
} else {
lhs = NULL;
}
if (spec [MONO_INST_SRC2] != ' ' && spec [MONO_INST_SRC2] != 'v') {
MonoInst *var = get_vreg_to_inst (cfg, ins->sreg2);
if (var && var->flags & (MONO_INST_VOLATILE|MONO_INST_INDIRECT)) {
rhs = emit_volatile_load (ctx, ins->sreg2);
} else {
if (!values [ins->sreg2]) {
set_failure (ctx, "sreg2");
return;
}
rhs = values [ins->sreg2];
}
} else {
rhs = NULL;
}
if (spec [MONO_INST_SRC3] != ' ' && spec [MONO_INST_SRC3] != 'v') {
MonoInst *var = get_vreg_to_inst (cfg, ins->sreg3);
if (var && var->flags & (MONO_INST_VOLATILE|MONO_INST_INDIRECT)) {
arg3 = emit_volatile_load (ctx, ins->sreg3);
} else {
if (!values [ins->sreg3]) {
set_failure (ctx, "sreg3");
return;
}
arg3 = values [ins->sreg3];
}
} else {
arg3 = NULL;
}
//mono_print_ins (ins);
gboolean skip_volatile_store = FALSE;
switch (ins->opcode) {
case OP_NOP:
case OP_NOT_NULL:
case OP_LIVERANGE_START:
case OP_LIVERANGE_END:
break;
case OP_ICONST:
values [ins->dreg] = LLVMConstInt (LLVMInt32Type (), ins->inst_c0, FALSE);
break;
case OP_I8CONST:
#if TARGET_SIZEOF_VOID_P == 4
values [ins->dreg] = LLVMConstInt (LLVMInt64Type (), GET_LONG_IMM (ins), FALSE);
#else
values [ins->dreg] = LLVMConstInt (LLVMInt64Type (), (gint64)ins->inst_c0, FALSE);
#endif
break;
case OP_R8CONST:
values [ins->dreg] = get_double_const (cfg, *(double*)ins->inst_p0);
break;
case OP_R4CONST:
values [ins->dreg] = get_float_const (cfg, *(float*)ins->inst_p0);
break;
case OP_DUMMY_ICONST:
values [ins->dreg] = LLVMConstInt (LLVMInt32Type (), 0, FALSE);
break;
case OP_DUMMY_I8CONST:
values [ins->dreg] = LLVMConstInt (LLVMInt64Type (), 0, FALSE);
break;
case OP_DUMMY_R8CONST:
			values [ins->dreg] = LLVMConstReal (LLVMDoubleType (), 0.0);
break;
case OP_BR: {
LLVMBasicBlockRef target_bb = get_bb (ctx, ins->inst_target_bb);
LLVMBuildBr (builder, target_bb);
has_terminator = TRUE;
break;
}
case OP_SWITCH: {
int i;
LLVMValueRef v;
char bb_name [128];
LLVMBasicBlockRef new_bb;
LLVMBuilderRef new_builder;
// The default branch is already handled
// FIXME: Handle it here
/* Start new bblock */
sprintf (bb_name, "SWITCH_DEFAULT_BB%d", ctx->default_index ++);
new_bb = LLVMAppendBasicBlock (ctx->lmethod, bb_name);
lhs = convert (ctx, lhs, LLVMInt32Type ());
v = LLVMBuildSwitch (builder, lhs, new_bb, GPOINTER_TO_UINT (ins->klass));
for (i = 0; i < GPOINTER_TO_UINT (ins->klass); ++i) {
MonoBasicBlock *target_bb = ins->inst_many_bb [i];
LLVMAddCase (v, LLVMConstInt (LLVMInt32Type (), i, FALSE), get_bb (ctx, target_bb));
}
new_builder = create_builder (ctx);
LLVMPositionBuilderAtEnd (new_builder, new_bb);
LLVMBuildUnreachable (new_builder);
has_terminator = TRUE;
g_assert (!ins->next);
break;
}
case OP_SETRET:
switch (linfo->ret.storage) {
case LLVMArgNormal:
case LLVMArgVtypeInReg:
case LLVMArgVtypeAsScalar:
case LLVMArgWasmVtypeAsScalar: {
LLVMTypeRef ret_type = LLVMGetReturnType (LLVMGetElementType (LLVMTypeOf (method)));
LLVMValueRef retval = LLVMGetUndef (ret_type);
gboolean src_in_reg = FALSE;
gboolean is_simd = MONO_CLASS_IS_SIMD (ctx->cfg, mono_class_from_mono_type_internal (sig->ret));
switch (linfo->ret.storage) {
case LLVMArgNormal: src_in_reg = TRUE; break;
case LLVMArgVtypeInReg: case LLVMArgVtypeAsScalar: src_in_reg = is_simd; break;
}
if (src_in_reg && (!lhs || ctx->is_dead [ins->sreg1])) {
/*
* The method did not set its return value, probably because it
* ends with a throw.
*/
LLVMBuildRet (builder, retval);
break;
}
switch (linfo->ret.storage) {
case LLVMArgNormal:
retval = convert (ctx, lhs, type_to_llvm_type (ctx, sig->ret));
break;
case LLVMArgVtypeInReg:
if (is_simd) {
/* The return type is an LLVM aggregate type, so a bare bitcast cannot be used to do this conversion. */
int width = mono_type_size (sig->ret, NULL);
int elems = width / TARGET_SIZEOF_VOID_P;
/* The return value might not be set if there is a throw */
LLVMValueRef val = LLVMBuildBitCast (builder, lhs, LLVMVectorType (IntPtrType (), elems), "");
for (int i = 0; i < elems; ++i) {
LLVMValueRef element = LLVMBuildExtractElement (builder, val, const_int32 (i), "");
retval = LLVMBuildInsertValue (builder, retval, element, i, "setret_simd_vtype_in_reg");
}
} else {
LLVMValueRef addr = LLVMBuildBitCast (builder, addresses [ins->sreg1], LLVMPointerType (ret_type, 0), "");
for (int i = 0; i < 2; ++i) {
if (linfo->ret.pair_storage [i] == LLVMArgInIReg) {
LLVMValueRef indexes [2], part_addr;
indexes [0] = LLVMConstInt (LLVMInt32Type (), 0, FALSE);
indexes [1] = LLVMConstInt (LLVMInt32Type (), i, FALSE);
part_addr = LLVMBuildGEP (builder, addr, indexes, 2, "");
retval = LLVMBuildInsertValue (builder, retval, LLVMBuildLoad (builder, part_addr, ""), i, "");
} else {
g_assert (linfo->ret.pair_storage [i] == LLVMArgNone);
}
}
}
break;
case LLVMArgVtypeAsScalar:
if (is_simd) {
retval = LLVMBuildBitCast (builder, values [ins->sreg1], ret_type, "setret_simd_vtype_as_scalar");
} else {
g_assert (addresses [ins->sreg1]);
retval = LLVMBuildLoad (builder, LLVMBuildBitCast (builder, addresses [ins->sreg1], LLVMPointerType (ret_type, 0), ""), "");
}
break;
case LLVMArgWasmVtypeAsScalar:
g_assert (addresses [ins->sreg1]);
retval = LLVMBuildLoad (builder, LLVMBuildBitCast (builder, addresses [ins->sreg1], LLVMPointerType (ret_type, 0), ""), "");
break;
}
LLVMBuildRet (builder, retval);
break;
}
case LLVMArgVtypeByRef: {
LLVMBuildRetVoid (builder);
break;
}
case LLVMArgGsharedvtFixed: {
LLVMTypeRef ret_type = type_to_llvm_type (ctx, sig->ret);
/* The return value is in lhs, need to store to the vret argument */
/* sreg1 might not be set */
if (lhs) {
g_assert (cfg->vret_addr);
g_assert (values [cfg->vret_addr->dreg]);
LLVMBuildStore (builder, convert (ctx, lhs, ret_type), convert (ctx, values [cfg->vret_addr->dreg], LLVMPointerType (ret_type, 0)));
}
LLVMBuildRetVoid (builder);
break;
}
case LLVMArgGsharedvtFixedVtype: {
/* Already set */
LLVMBuildRetVoid (builder);
break;
}
case LLVMArgGsharedvtVariable: {
/* Already set */
LLVMBuildRetVoid (builder);
break;
}
case LLVMArgVtypeRetAddr: {
LLVMBuildRetVoid (builder);
break;
}
case LLVMArgAsIArgs:
case LLVMArgFpStruct: {
LLVMTypeRef ret_type = LLVMGetReturnType (LLVMGetElementType (LLVMTypeOf (method)));
LLVMValueRef retval;
g_assert (addresses [ins->sreg1]);
retval = LLVMBuildLoad (builder, convert (ctx, addresses [ins->sreg1], LLVMPointerType (ret_type, 0)), "");
LLVMBuildRet (builder, retval);
break;
}
case LLVMArgNone:
LLVMBuildRetVoid (builder);
break;
default:
g_assert_not_reached ();
break;
}
has_terminator = TRUE;
break;
case OP_ICOMPARE:
case OP_FCOMPARE:
case OP_RCOMPARE:
case OP_LCOMPARE:
case OP_COMPARE:
case OP_ICOMPARE_IMM:
case OP_LCOMPARE_IMM:
case OP_COMPARE_IMM: {
CompRelation rel;
LLVMValueRef cmp, args [16];
gboolean likely = (ins->flags & MONO_INST_LIKELY) != 0;
gboolean unlikely = FALSE;
if (MONO_IS_COND_BRANCH_OP (ins->next)) {
if (ins->next->inst_false_bb->out_of_line)
likely = TRUE;
else if (ins->next->inst_true_bb->out_of_line)
unlikely = TRUE;
}
if (ins->next->opcode == OP_NOP)
break;
if (ins->next->opcode == OP_BR)
/* The comparison result is not needed */
continue;
rel = mono_opcode_to_cond (ins->next->opcode);
if (ins->opcode == OP_ICOMPARE_IMM) {
lhs = convert (ctx, lhs, LLVMInt32Type ());
rhs = LLVMConstInt (LLVMInt32Type (), ins->inst_imm, FALSE);
}
if (ins->opcode == OP_LCOMPARE_IMM) {
lhs = convert (ctx, lhs, LLVMInt64Type ());
rhs = LLVMConstInt (LLVMInt64Type (), GET_LONG_IMM (ins), FALSE);
}
if (ins->opcode == OP_LCOMPARE) {
lhs = convert (ctx, lhs, LLVMInt64Type ());
rhs = convert (ctx, rhs, LLVMInt64Type ());
}
if (ins->opcode == OP_ICOMPARE) {
lhs = convert (ctx, lhs, LLVMInt32Type ());
rhs = convert (ctx, rhs, LLVMInt32Type ());
}
if (lhs && rhs) {
if (LLVMGetTypeKind (LLVMTypeOf (lhs)) == LLVMPointerTypeKind)
rhs = convert (ctx, rhs, LLVMTypeOf (lhs));
else if (LLVMGetTypeKind (LLVMTypeOf (rhs)) == LLVMPointerTypeKind)
lhs = convert (ctx, lhs, LLVMTypeOf (rhs));
}
/* We use COMPARE+SETcc/Bcc, llvm uses SETcc+br cond */
if (ins->opcode == OP_FCOMPARE) {
cmp = LLVMBuildFCmp (builder, fpcond_to_llvm_cond [rel], convert (ctx, lhs, LLVMDoubleType ()), convert (ctx, rhs, LLVMDoubleType ()), "");
} else if (ins->opcode == OP_RCOMPARE) {
cmp = LLVMBuildFCmp (builder, fpcond_to_llvm_cond [rel], convert (ctx, lhs, LLVMFloatType ()), convert (ctx, rhs, LLVMFloatType ()), "");
} else if (ins->opcode == OP_COMPARE_IMM) {
LLVMIntPredicate llvm_pred = cond_to_llvm_cond [rel];
if (LLVMGetTypeKind (LLVMTypeOf (lhs)) == LLVMPointerTypeKind && ins->inst_imm == 0) {
// We are emitting a NULL check for a pointer
gboolean nonnull = mono_llvm_is_nonnull (lhs);
if (nonnull && llvm_pred == LLVMIntEQ)
cmp = LLVMConstInt (LLVMInt1Type (), FALSE, FALSE);
else if (nonnull && llvm_pred == LLVMIntNE)
cmp = LLVMConstInt (LLVMInt1Type (), TRUE, FALSE);
else
cmp = LLVMBuildICmp (builder, llvm_pred, lhs, LLVMConstNull (LLVMTypeOf (lhs)), "");
} else {
cmp = LLVMBuildICmp (builder, llvm_pred, convert (ctx, lhs, IntPtrType ()), LLVMConstInt (IntPtrType (), ins->inst_imm, FALSE), "");
}
} else if (ins->opcode == OP_LCOMPARE_IMM) {
cmp = LLVMBuildICmp (builder, cond_to_llvm_cond [rel], lhs, rhs, "");
}
else if (ins->opcode == OP_COMPARE) {
if (LLVMGetTypeKind (LLVMTypeOf (lhs)) == LLVMPointerTypeKind && LLVMTypeOf (lhs) == LLVMTypeOf (rhs))
cmp = LLVMBuildICmp (builder, cond_to_llvm_cond [rel], lhs, rhs, "");
else
cmp = LLVMBuildICmp (builder, cond_to_llvm_cond [rel], convert (ctx, lhs, IntPtrType ()), convert (ctx, rhs, IntPtrType ()), "");
} else
cmp = LLVMBuildICmp (builder, cond_to_llvm_cond [rel], lhs, rhs, "");
if (likely || unlikely) {
args [0] = cmp;
args [1] = LLVMConstInt (LLVMInt1Type (), likely ? 1 : 0, FALSE);
cmp = call_intrins (ctx, INTRINS_EXPECT_I1, args, "");
}
if (MONO_IS_COND_BRANCH_OP (ins->next)) {
if (ins->next->inst_true_bb == ins->next->inst_false_bb) {
/*
* If the target bb contains PHI instructions, LLVM requires
* two PHI entries for this bblock, while we only generate one.
* So convert this to an unconditional bblock. (bxc #171).
*/
LLVMBuildBr (builder, get_bb (ctx, ins->next->inst_true_bb));
} else {
LLVMBuildCondBr (builder, cmp, get_bb (ctx, ins->next->inst_true_bb), get_bb (ctx, ins->next->inst_false_bb));
}
has_terminator = TRUE;
} else if (MONO_IS_SETCC (ins->next)) {
sprintf (dname_buf, "t%d", ins->next->dreg);
dname = dname_buf;
values [ins->next->dreg] = LLVMBuildZExt (builder, cmp, LLVMInt32Type (), dname);
/* Add stores for volatile variables */
emit_volatile_store (ctx, ins->next->dreg);
} else if (MONO_IS_COND_EXC (ins->next)) {
gboolean force_explicit_branch = FALSE;
if (bb->region != -1) {
/* Don't tag null check branches in exception-handling
* regions with `make.implicit`.
*/
force_explicit_branch = TRUE;
}
emit_cond_system_exception (ctx, bb, (const char*)ins->next->inst_p1, cmp, force_explicit_branch);
if (!ctx_ok (ctx))
break;
builder = ctx->builder;
} else {
set_failure (ctx, "next");
break;
}
ins = ins->next;
break;
}
case OP_FCEQ:
case OP_FCNEQ:
case OP_FCLT:
case OP_FCLT_UN:
case OP_FCGT:
case OP_FCGT_UN:
case OP_FCGE:
case OP_FCLE: {
CompRelation rel;
LLVMValueRef cmp;
rel = mono_opcode_to_cond (ins->opcode);
cmp = LLVMBuildFCmp (builder, fpcond_to_llvm_cond [rel], convert (ctx, lhs, LLVMDoubleType ()), convert (ctx, rhs, LLVMDoubleType ()), "");
values [ins->dreg] = LLVMBuildZExt (builder, cmp, LLVMInt32Type (), dname);
break;
}
case OP_RCEQ:
case OP_RCNEQ:
case OP_RCLT:
case OP_RCLT_UN:
case OP_RCGT:
case OP_RCGT_UN: {
CompRelation rel;
LLVMValueRef cmp;
rel = mono_opcode_to_cond (ins->opcode);
cmp = LLVMBuildFCmp (builder, fpcond_to_llvm_cond [rel], convert (ctx, lhs, LLVMFloatType ()), convert (ctx, rhs, LLVMFloatType ()), "");
values [ins->dreg] = LLVMBuildZExt (builder, cmp, LLVMInt32Type (), dname);
break;
}
case OP_PHI:
case OP_FPHI:
case OP_VPHI:
case OP_XPHI: {
// Handled above
skip_volatile_store = TRUE;
break;
}
case OP_MOVE:
case OP_LMOVE:
case OP_XMOVE:
case OP_SETFRET:
g_assert (lhs);
values [ins->dreg] = lhs;
break;
case OP_FMOVE:
case OP_RMOVE: {
MonoInst *var = get_vreg_to_inst (cfg, ins->dreg);
g_assert (lhs);
values [ins->dreg] = lhs;
if (var && m_class_get_byval_arg (var->klass)->type == MONO_TYPE_R4) {
/*
* This is added by the spilling pass in case of the JIT,
* but we have to do it ourselves.
*/
values [ins->dreg] = convert (ctx, values [ins->dreg], LLVMFloatType ());
}
break;
}
case OP_MOVE_F_TO_I4: {
values [ins->dreg] = LLVMBuildBitCast (builder, LLVMBuildFPTrunc (builder, lhs, LLVMFloatType (), ""), LLVMInt32Type (), "");
break;
}
case OP_MOVE_I4_TO_F: {
values [ins->dreg] = LLVMBuildFPExt (builder, LLVMBuildBitCast (builder, lhs, LLVMFloatType (), ""), LLVMDoubleType (), "");
break;
}
case OP_MOVE_F_TO_I8: {
values [ins->dreg] = LLVMBuildBitCast (builder, lhs, LLVMInt64Type (), "");
break;
}
case OP_MOVE_I8_TO_F: {
values [ins->dreg] = LLVMBuildBitCast (builder, lhs, LLVMDoubleType (), "");
break;
}
case OP_IADD:
case OP_ISUB:
case OP_IAND:
case OP_IMUL:
case OP_IDIV:
case OP_IDIV_UN:
case OP_IREM:
case OP_IREM_UN:
case OP_IOR:
case OP_IXOR:
case OP_ISHL:
case OP_ISHR:
case OP_ISHR_UN:
case OP_FADD:
case OP_FSUB:
case OP_FMUL:
case OP_FDIV:
case OP_LADD:
case OP_LSUB:
case OP_LMUL:
case OP_LDIV:
case OP_LDIV_UN:
case OP_LREM:
case OP_LREM_UN:
case OP_LAND:
case OP_LOR:
case OP_LXOR:
case OP_LSHL:
case OP_LSHR:
case OP_LSHR_UN:
lhs = convert (ctx, lhs, regtype_to_llvm_type (spec [MONO_INST_DEST]));
rhs = convert (ctx, rhs, regtype_to_llvm_type (spec [MONO_INST_DEST]));
emit_div_check (ctx, builder, bb, ins, lhs, rhs);
if (!ctx_ok (ctx))
break;
builder = ctx->builder;
switch (ins->opcode) {
case OP_IADD:
case OP_LADD:
values [ins->dreg] = LLVMBuildAdd (builder, lhs, rhs, dname);
break;
case OP_ISUB:
case OP_LSUB:
values [ins->dreg] = LLVMBuildSub (builder, lhs, rhs, dname);
break;
case OP_IMUL:
case OP_LMUL:
values [ins->dreg] = LLVMBuildMul (builder, lhs, rhs, dname);
break;
case OP_IREM:
case OP_LREM:
values [ins->dreg] = LLVMBuildSRem (builder, lhs, rhs, dname);
break;
case OP_IREM_UN:
case OP_LREM_UN:
values [ins->dreg] = LLVMBuildURem (builder, lhs, rhs, dname);
break;
case OP_IDIV:
case OP_LDIV:
values [ins->dreg] = LLVMBuildSDiv (builder, lhs, rhs, dname);
break;
case OP_IDIV_UN:
case OP_LDIV_UN:
values [ins->dreg] = LLVMBuildUDiv (builder, lhs, rhs, dname);
break;
case OP_FDIV:
case OP_RDIV:
values [ins->dreg] = LLVMBuildFDiv (builder, lhs, rhs, dname);
break;
case OP_IAND:
case OP_LAND:
values [ins->dreg] = LLVMBuildAnd (builder, lhs, rhs, dname);
break;
case OP_IOR:
case OP_LOR:
values [ins->dreg] = LLVMBuildOr (builder, lhs, rhs, dname);
break;
case OP_IXOR:
case OP_LXOR:
values [ins->dreg] = LLVMBuildXor (builder, lhs, rhs, dname);
break;
case OP_ISHL:
case OP_LSHL:
values [ins->dreg] = LLVMBuildShl (builder, lhs, rhs, dname);
break;
case OP_ISHR:
case OP_LSHR:
values [ins->dreg] = LLVMBuildAShr (builder, lhs, rhs, dname);
break;
case OP_ISHR_UN:
case OP_LSHR_UN:
values [ins->dreg] = LLVMBuildLShr (builder, lhs, rhs, dname);
break;
case OP_FADD:
values [ins->dreg] = LLVMBuildFAdd (builder, lhs, rhs, dname);
break;
case OP_FSUB:
values [ins->dreg] = LLVMBuildFSub (builder, lhs, rhs, dname);
break;
case OP_FMUL:
values [ins->dreg] = LLVMBuildFMul (builder, lhs, rhs, dname);
break;
default:
g_assert_not_reached ();
}
break;
case OP_RADD:
case OP_RSUB:
case OP_RMUL:
case OP_RDIV: {
lhs = convert (ctx, lhs, LLVMFloatType ());
rhs = convert (ctx, rhs, LLVMFloatType ());
switch (ins->opcode) {
case OP_RADD:
values [ins->dreg] = LLVMBuildFAdd (builder, lhs, rhs, dname);
break;
case OP_RSUB:
values [ins->dreg] = LLVMBuildFSub (builder, lhs, rhs, dname);
break;
case OP_RMUL:
values [ins->dreg] = LLVMBuildFMul (builder, lhs, rhs, dname);
break;
case OP_RDIV:
values [ins->dreg] = LLVMBuildFDiv (builder, lhs, rhs, dname);
break;
default:
g_assert_not_reached ();
break;
}
break;
}
case OP_IADD_IMM:
case OP_ISUB_IMM:
case OP_IMUL_IMM:
case OP_IREM_IMM:
case OP_IREM_UN_IMM:
case OP_IDIV_IMM:
case OP_IDIV_UN_IMM:
case OP_IAND_IMM:
case OP_IOR_IMM:
case OP_IXOR_IMM:
case OP_ISHL_IMM:
case OP_ISHR_IMM:
case OP_ISHR_UN_IMM:
case OP_LADD_IMM:
case OP_LSUB_IMM:
case OP_LMUL_IMM:
case OP_LREM_IMM:
case OP_LAND_IMM:
case OP_LOR_IMM:
case OP_LXOR_IMM:
case OP_LSHL_IMM:
case OP_LSHR_IMM:
case OP_LSHR_UN_IMM:
case OP_ADD_IMM:
case OP_AND_IMM:
case OP_MUL_IMM:
case OP_SHL_IMM:
case OP_SHR_IMM:
case OP_SHR_UN_IMM: {
LLVMValueRef imm;
if (spec [MONO_INST_SRC1] == 'l') {
imm = LLVMConstInt (LLVMInt64Type (), GET_LONG_IMM (ins), FALSE);
} else {
imm = LLVMConstInt (LLVMInt32Type (), ins->inst_imm, FALSE);
}
emit_div_check (ctx, builder, bb, ins, lhs, imm);
if (!ctx_ok (ctx))
break;
builder = ctx->builder;
#if TARGET_SIZEOF_VOID_P == 4
if (ins->opcode == OP_LSHL_IMM || ins->opcode == OP_LSHR_IMM || ins->opcode == OP_LSHR_UN_IMM)
imm = LLVMConstInt (LLVMInt32Type (), ins->inst_imm, FALSE);
#endif
if (LLVMGetTypeKind (LLVMTypeOf (lhs)) == LLVMPointerTypeKind)
lhs = convert (ctx, lhs, IntPtrType ());
imm = convert (ctx, imm, LLVMTypeOf (lhs));
switch (ins->opcode) {
case OP_IADD_IMM:
case OP_LADD_IMM:
case OP_ADD_IMM:
values [ins->dreg] = LLVMBuildAdd (builder, lhs, imm, dname);
break;
case OP_ISUB_IMM:
case OP_LSUB_IMM:
values [ins->dreg] = LLVMBuildSub (builder, lhs, imm, dname);
break;
case OP_IMUL_IMM:
case OP_MUL_IMM:
case OP_LMUL_IMM:
values [ins->dreg] = LLVMBuildMul (builder, lhs, imm, dname);
break;
case OP_IDIV_IMM:
case OP_LDIV_IMM:
values [ins->dreg] = LLVMBuildSDiv (builder, lhs, imm, dname);
break;
case OP_IDIV_UN_IMM:
case OP_LDIV_UN_IMM:
values [ins->dreg] = LLVMBuildUDiv (builder, lhs, imm, dname);
break;
case OP_IREM_IMM:
case OP_LREM_IMM:
values [ins->dreg] = LLVMBuildSRem (builder, lhs, imm, dname);
break;
case OP_IREM_UN_IMM:
values [ins->dreg] = LLVMBuildURem (builder, lhs, imm, dname);
break;
case OP_IAND_IMM:
case OP_LAND_IMM:
case OP_AND_IMM:
values [ins->dreg] = LLVMBuildAnd (builder, lhs, imm, dname);
break;
case OP_IOR_IMM:
case OP_LOR_IMM:
values [ins->dreg] = LLVMBuildOr (builder, lhs, imm, dname);
break;
case OP_IXOR_IMM:
case OP_LXOR_IMM:
values [ins->dreg] = LLVMBuildXor (builder, lhs, imm, dname);
break;
case OP_ISHL_IMM:
case OP_LSHL_IMM:
values [ins->dreg] = LLVMBuildShl (builder, lhs, imm, dname);
break;
case OP_SHL_IMM:
if (TARGET_SIZEOF_VOID_P == 8) {
/* The IL is not regular */
lhs = convert (ctx, lhs, LLVMInt64Type ());
imm = convert (ctx, imm, LLVMInt64Type ());
}
values [ins->dreg] = LLVMBuildShl (builder, lhs, imm, dname);
break;
case OP_ISHR_IMM:
case OP_LSHR_IMM:
case OP_SHR_IMM:
values [ins->dreg] = LLVMBuildAShr (builder, lhs, imm, dname);
break;
case OP_ISHR_UN_IMM:
/* This is used to implement conv.u4, so the lhs could be an i8 */
lhs = convert (ctx, lhs, LLVMInt32Type ());
imm = convert (ctx, imm, LLVMInt32Type ());
values [ins->dreg] = LLVMBuildLShr (builder, lhs, imm, dname);
break;
case OP_LSHR_UN_IMM:
case OP_SHR_UN_IMM:
values [ins->dreg] = LLVMBuildLShr (builder, lhs, imm, dname);
break;
default:
g_assert_not_reached ();
}
break;
}
case OP_INEG:
values [ins->dreg] = LLVMBuildSub (builder, LLVMConstInt (LLVMInt32Type (), 0, FALSE), convert (ctx, lhs, LLVMInt32Type ()), dname);
break;
case OP_LNEG:
if (LLVMTypeOf (lhs) != LLVMInt64Type ())
lhs = convert (ctx, lhs, LLVMInt64Type ());
values [ins->dreg] = LLVMBuildSub (builder, LLVMConstInt (LLVMInt64Type (), 0, FALSE), lhs, dname);
break;
case OP_FNEG:
lhs = convert (ctx, lhs, LLVMDoubleType ());
values [ins->dreg] = LLVMBuildFNeg (builder, lhs, dname);
break;
case OP_RNEG:
lhs = convert (ctx, lhs, LLVMFloatType ());
values [ins->dreg] = LLVMBuildFNeg (builder, lhs, dname);
break;
case OP_INOT: {
guint32 v = 0xffffffff;
values [ins->dreg] = LLVMBuildXor (builder, LLVMConstInt (LLVMInt32Type (), v, FALSE), convert (ctx, lhs, LLVMInt32Type ()), dname);
break;
}
case OP_LNOT: {
if (LLVMTypeOf (lhs) != LLVMInt64Type ())
lhs = convert (ctx, lhs, LLVMInt64Type ());
guint64 v = 0xffffffffffffffffLL;
values [ins->dreg] = LLVMBuildXor (builder, LLVMConstInt (LLVMInt64Type (), v, FALSE), lhs, dname);
break;
}
#if defined(TARGET_X86) || defined(TARGET_AMD64)
case OP_X86_LEA: {
LLVMValueRef v1, v2;
rhs = LLVMBuildSExt (builder, convert (ctx, rhs, LLVMInt32Type ()), LLVMInt64Type (), "");
v1 = LLVMBuildMul (builder, convert (ctx, rhs, IntPtrType ()), LLVMConstInt (IntPtrType (), ((unsigned long long)1 << ins->backend.shift_amount), FALSE), "");
v2 = LLVMBuildAdd (builder, convert (ctx, lhs, IntPtrType ()), v1, "");
values [ins->dreg] = LLVMBuildAdd (builder, v2, LLVMConstInt (IntPtrType (), ins->inst_imm, FALSE), dname);
break;
}
case OP_X86_BSF32:
case OP_X86_BSF64: {
LLVMValueRef args [] = {
lhs,
LLVMConstInt (LLVMInt1Type (), 1, TRUE),
};
int op = ins->opcode == OP_X86_BSF32 ? INTRINS_CTTZ_I32 : INTRINS_CTTZ_I64;
values [ins->dreg] = call_intrins (ctx, op, args, dname);
break;
}
case OP_X86_BSR32:
case OP_X86_BSR64: {
LLVMValueRef args [] = {
lhs,
LLVMConstInt (LLVMInt1Type (), 1, TRUE),
};
int op = ins->opcode == OP_X86_BSR32 ? INTRINS_CTLZ_I32 : INTRINS_CTLZ_I64;
LLVMValueRef width = ins->opcode == OP_X86_BSR32 ? const_int32 (31) : const_int64 (63);
LLVMValueRef tz = call_intrins (ctx, op, args, "");
values [ins->dreg] = LLVMBuildXor (builder, tz, width, dname);
break;
}
#endif
case OP_ICONV_TO_I1:
case OP_ICONV_TO_I2:
case OP_ICONV_TO_I4:
case OP_ICONV_TO_U1:
case OP_ICONV_TO_U2:
case OP_ICONV_TO_U4:
case OP_LCONV_TO_I1:
case OP_LCONV_TO_I2:
case OP_LCONV_TO_U1:
case OP_LCONV_TO_U2:
case OP_LCONV_TO_U4: {
gboolean sign;
sign = (ins->opcode == OP_ICONV_TO_I1) || (ins->opcode == OP_ICONV_TO_I2) || (ins->opcode == OP_ICONV_TO_I4) || (ins->opcode == OP_LCONV_TO_I1) || (ins->opcode == OP_LCONV_TO_I2);
/* Have to do two casts since our vregs have type int */
v = LLVMBuildTrunc (builder, lhs, op_to_llvm_type (ins->opcode), "");
if (sign)
values [ins->dreg] = LLVMBuildSExt (builder, v, LLVMInt32Type (), dname);
else
values [ins->dreg] = LLVMBuildZExt (builder, v, LLVMInt32Type (), dname);
break;
}
case OP_ICONV_TO_I8:
values [ins->dreg] = LLVMBuildSExt (builder, lhs, LLVMInt64Type (), dname);
break;
case OP_ICONV_TO_U8:
values [ins->dreg] = LLVMBuildZExt (builder, lhs, LLVMInt64Type (), dname);
break;
case OP_FCONV_TO_I4:
case OP_RCONV_TO_I4:
values [ins->dreg] = LLVMBuildFPToSI (builder, lhs, LLVMInt32Type (), dname);
break;
case OP_FCONV_TO_I1:
case OP_RCONV_TO_I1:
values [ins->dreg] = LLVMBuildSExt (builder, LLVMBuildFPToSI (builder, lhs, LLVMInt8Type (), dname), LLVMInt32Type (), "");
break;
case OP_FCONV_TO_U1:
case OP_RCONV_TO_U1:
values [ins->dreg] = LLVMBuildZExt (builder, LLVMBuildTrunc (builder, LLVMBuildFPToUI (builder, lhs, IntPtrType (), dname), LLVMInt8Type (), ""), LLVMInt32Type (), "");
break;
case OP_FCONV_TO_I2:
case OP_RCONV_TO_I2:
values [ins->dreg] = LLVMBuildSExt (builder, LLVMBuildFPToSI (builder, lhs, LLVMInt16Type (), dname), LLVMInt32Type (), "");
break;
case OP_FCONV_TO_U2:
case OP_RCONV_TO_U2:
values [ins->dreg] = LLVMBuildZExt (builder, LLVMBuildFPToUI (builder, lhs, LLVMInt16Type (), dname), LLVMInt32Type (), "");
break;
case OP_FCONV_TO_U4:
case OP_RCONV_TO_U4:
values [ins->dreg] = LLVMBuildFPToUI (builder, lhs, LLVMInt32Type (), dname);
break;
case OP_FCONV_TO_U8:
case OP_RCONV_TO_U8:
values [ins->dreg] = LLVMBuildFPToUI (builder, lhs, LLVMInt64Type (), dname);
break;
case OP_FCONV_TO_I8:
case OP_RCONV_TO_I8:
values [ins->dreg] = LLVMBuildFPToSI (builder, lhs, LLVMInt64Type (), dname);
break;
case OP_FCONV_TO_I:
case OP_RCONV_TO_I:
values [ins->dreg] = LLVMBuildFPToSI (builder, lhs, IntPtrType (), dname);
break;
case OP_ICONV_TO_R8:
case OP_LCONV_TO_R8:
values [ins->dreg] = LLVMBuildSIToFP (builder, lhs, LLVMDoubleType (), dname);
break;
case OP_ICONV_TO_R_UN:
case OP_LCONV_TO_R_UN:
values [ins->dreg] = LLVMBuildUIToFP (builder, lhs, LLVMDoubleType (), dname);
break;
#if TARGET_SIZEOF_VOID_P == 4
case OP_LCONV_TO_U:
#endif
case OP_LCONV_TO_I4:
values [ins->dreg] = LLVMBuildTrunc (builder, lhs, LLVMInt32Type (), dname);
break;
case OP_ICONV_TO_R4:
case OP_LCONV_TO_R4:
v = LLVMBuildSIToFP (builder, lhs, LLVMFloatType (), "");
if (cfg->r4fp)
values [ins->dreg] = v;
else
values [ins->dreg] = LLVMBuildFPExt (builder, v, LLVMDoubleType (), dname);
break;
case OP_FCONV_TO_R4:
v = LLVMBuildFPTrunc (builder, lhs, LLVMFloatType (), "");
if (cfg->r4fp)
values [ins->dreg] = v;
else
values [ins->dreg] = LLVMBuildFPExt (builder, v, LLVMDoubleType (), dname);
break;
case OP_RCONV_TO_R8:
values [ins->dreg] = LLVMBuildFPExt (builder, lhs, LLVMDoubleType (), dname);
break;
case OP_RCONV_TO_R4:
values [ins->dreg] = lhs;
break;
case OP_SEXT_I4:
values [ins->dreg] = LLVMBuildSExt (builder, convert (ctx, lhs, LLVMInt32Type ()), LLVMInt64Type (), dname);
break;
case OP_ZEXT_I4:
values [ins->dreg] = LLVMBuildZExt (builder, convert (ctx, lhs, LLVMInt32Type ()), LLVMInt64Type (), dname);
break;
case OP_TRUNC_I4:
values [ins->dreg] = LLVMBuildTrunc (builder, lhs, LLVMInt32Type (), dname);
break;
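/*
 * Stack allocation. The size is rounded up to MONO_ARCH_FRAME_ALIGNMENT,
 * and the block is cleared with emit_memset () when MONO_INST_INIT is set.
 */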
case OP_LOCALLOC_IMM: {
LLVMValueRef v;
guint32 size = ins->inst_imm;
size = (size + (MONO_ARCH_FRAME_ALIGNMENT - 1)) & ~ (MONO_ARCH_FRAME_ALIGNMENT - 1);
v = mono_llvm_build_alloca (builder, LLVMInt8Type (), LLVMConstInt (LLVMInt32Type (), size, FALSE), MONO_ARCH_FRAME_ALIGNMENT, "");
if (ins->flags & MONO_INST_INIT)
emit_memset (ctx, builder, v, const_int32 (size), MONO_ARCH_FRAME_ALIGNMENT);
values [ins->dreg] = v;
break;
}
case OP_LOCALLOC: {
LLVMValueRef v, size;
size = LLVMBuildAnd (builder, LLVMBuildAdd (builder, convert (ctx, lhs, LLVMInt32Type ()), LLVMConstInt (LLVMInt32Type (), MONO_ARCH_FRAME_ALIGNMENT - 1, FALSE), ""), LLVMConstInt (LLVMInt32Type (), ~ (MONO_ARCH_FRAME_ALIGNMENT - 1), FALSE), "");
v = mono_llvm_build_alloca (builder, LLVMInt8Type (), size, MONO_ARCH_FRAME_ALIGNMENT, "");
if (ins->flags & MONO_INST_INIT)
emit_memset (ctx, builder, v, size, MONO_ARCH_FRAME_ALIGNMENT);
values [ins->dreg] = v;
break;
}
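/*
 * Memory loads. The _MEM variants use an absolute address from inst_imm;
 * the _MEMBASE variants compute a GEP from the base reg, in units of the
 * element size when inst_offset is a multiple of it, otherwise as a byte
 * offset (unaligned access).
 */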
case OP_LOADI1_MEMBASE:
case OP_LOADU1_MEMBASE:
case OP_LOADI2_MEMBASE:
case OP_LOADU2_MEMBASE:
case OP_LOADI4_MEMBASE:
case OP_LOADU4_MEMBASE:
case OP_LOADI8_MEMBASE:
case OP_LOADR4_MEMBASE:
case OP_LOADR8_MEMBASE:
case OP_LOAD_MEMBASE:
case OP_LOADI8_MEM:
case OP_LOADU1_MEM:
case OP_LOADU2_MEM:
case OP_LOADI4_MEM:
case OP_LOADU4_MEM:
case OP_LOAD_MEM: {
int size = 8;
LLVMValueRef base, index, addr;
LLVMTypeRef t;
gboolean sext = FALSE, zext = FALSE;
gboolean is_faulting = (ins->flags & MONO_INST_FAULT) != 0;
gboolean is_volatile = (ins->flags & MONO_INST_VOLATILE) != 0;
gboolean is_unaligned = (ins->flags & MONO_INST_UNALIGNED) != 0;
t = load_store_to_llvm_type (ins->opcode, &size, &sext, &zext);
if (sext || zext)
dname = (char*)"";
if ((ins->opcode == OP_LOADI8_MEM) || (ins->opcode == OP_LOAD_MEM) || (ins->opcode == OP_LOADI4_MEM) || (ins->opcode == OP_LOADU4_MEM) || (ins->opcode == OP_LOADU1_MEM) || (ins->opcode == OP_LOADU2_MEM)) {
addr = LLVMConstInt (IntPtrType (), ins->inst_imm, FALSE);
base = addr;
} else {
/* _MEMBASE */
base = lhs;
if (ins->inst_offset == 0) {
LLVMValueRef gep_base, gep_offset;
if (mono_llvm_can_be_gep (base, &gep_base, &gep_offset)) {
addr = LLVMBuildGEP (builder, convert (ctx, gep_base, LLVMPointerType (LLVMInt8Type (), 0)), &gep_offset, 1, "");
} else {
addr = base;
}
} else if (ins->inst_offset % size != 0) {
/* Unaligned load */
index = LLVMConstInt (LLVMInt32Type (), ins->inst_offset, FALSE);
addr = LLVMBuildGEP (builder, convert (ctx, base, LLVMPointerType (LLVMInt8Type (), 0)), &index, 1, "");
} else {
index = LLVMConstInt (LLVMInt32Type (), ins->inst_offset / size, FALSE);
addr = LLVMBuildGEP (builder, convert (ctx, base, LLVMPointerType (t, 0)), &index, 1, "");
}
}
addr = convert (ctx, addr, LLVMPointerType (t, 0));
if (is_unaligned)
values [ins->dreg] = mono_llvm_build_aligned_load (builder, addr, dname, is_volatile, 1);
else
values [ins->dreg] = emit_load (ctx, bb, &builder, size, addr, base, dname, is_faulting, is_volatile, LLVM_BARRIER_NONE);
if (!(is_faulting || is_volatile) && (ins->flags & MONO_INST_INVARIANT_LOAD)) {
/*
 * Mark the load as invariant: it does not alias any stores and cannot
 * fail, which allows LLVM to hoist it out of loops.
 */
set_invariant_load_flag (values [ins->dreg]);
}
if (sext)
values [ins->dreg] = LLVMBuildSExt (builder, values [ins->dreg], LLVMInt32Type (), dname);
else if (zext)
values [ins->dreg] = LLVMBuildZExt (builder, values [ins->dreg], LLVMInt32Type (), dname);
else if (!cfg->r4fp && ins->opcode == OP_LOADR4_MEMBASE)
values [ins->dreg] = LLVMBuildFPExt (builder, values [ins->dreg], LLVMDoubleType (), dname);
break;
}
case OP_STOREI1_MEMBASE_REG:
case OP_STOREI2_MEMBASE_REG:
case OP_STOREI4_MEMBASE_REG:
case OP_STOREI8_MEMBASE_REG:
case OP_STORER4_MEMBASE_REG:
case OP_STORER8_MEMBASE_REG:
case OP_STORE_MEMBASE_REG: {
int size = 8;
LLVMValueRef index, addr, base;
LLVMTypeRef t;
gboolean sext = FALSE, zext = FALSE;
gboolean is_faulting = (ins->flags & MONO_INST_FAULT) != 0;
gboolean is_volatile = (ins->flags & MONO_INST_VOLATILE) != 0;
gboolean is_unaligned = (ins->flags & MONO_INST_UNALIGNED) != 0;
if (!values [ins->inst_destbasereg]) {
set_failure (ctx, "inst_destbasereg");
break;
}
t = load_store_to_llvm_type (ins->opcode, &size, &sext, &zext);
base = values [ins->inst_destbasereg];
LLVMValueRef gep_base, gep_offset;
if (ins->inst_offset == 0 && mono_llvm_can_be_gep (base, &gep_base, &gep_offset)) {
addr = LLVMBuildGEP (builder, convert (ctx, gep_base, LLVMPointerType (LLVMInt8Type (), 0)), &gep_offset, 1, "");
} else if (ins->inst_offset % size != 0) {
/* Unaligned store */
index = LLVMConstInt (LLVMInt32Type (), ins->inst_offset, FALSE);
addr = LLVMBuildGEP (builder, convert (ctx, base, LLVMPointerType (LLVMInt8Type (), 0)), &index, 1, "");
} else {
index = LLVMConstInt (LLVMInt32Type (), ins->inst_offset / size, FALSE);
addr = LLVMBuildGEP (builder, convert (ctx, base, LLVMPointerType (t, 0)), &index, 1, "");
}
if (is_faulting && LLVMGetInstructionOpcode (base) == LLVMAlloca && !(ins->flags & MONO_INST_VOLATILE))
/* Storing to an alloca cannot fail */
is_faulting = FALSE;
LLVMValueRef srcval = convert (ctx, values [ins->sreg1], t);
LLVMValueRef ptrdst = convert (ctx, addr, LLVMPointerType (t, 0));
if (is_unaligned)
mono_llvm_build_aligned_store (builder, srcval, ptrdst, is_volatile, 1);
else
emit_store (ctx, bb, &builder, size, srcval, ptrdst, base, is_faulting, is_volatile);
break;
}
case OP_STOREI1_MEMBASE_IMM:
case OP_STOREI2_MEMBASE_IMM:
case OP_STOREI4_MEMBASE_IMM:
case OP_STOREI8_MEMBASE_IMM:
case OP_STORE_MEMBASE_IMM: {
int size = 8;
LLVMValueRef index, addr, base;
LLVMTypeRef t;
gboolean sext = FALSE, zext = FALSE;
gboolean is_faulting = (ins->flags & MONO_INST_FAULT) != 0;
gboolean is_volatile = (ins->flags & MONO_INST_VOLATILE) != 0;
gboolean is_unaligned = (ins->flags & MONO_INST_UNALIGNED) != 0;
t = load_store_to_llvm_type (ins->opcode, &size, &sext, &zext);
base = values [ins->inst_destbasereg];
LLVMValueRef gep_base, gep_offset;
if (ins->inst_offset == 0 && mono_llvm_can_be_gep (base, &gep_base, &gep_offset)) {
addr = LLVMBuildGEP (builder, convert (ctx, gep_base, LLVMPointerType (LLVMInt8Type (), 0)), &gep_offset, 1, "");
} else if (ins->inst_offset % size != 0) {
/* Unaligned store */
index = LLVMConstInt (LLVMInt32Type (), ins->inst_offset, FALSE);
addr = LLVMBuildGEP (builder, convert (ctx, base, LLVMPointerType (LLVMInt8Type (), 0)), &index, 1, "");
} else {
index = LLVMConstInt (LLVMInt32Type (), ins->inst_offset / size, FALSE);
addr = LLVMBuildGEP (builder, convert (ctx, base, LLVMPointerType (t, 0)), &index, 1, "");
}
LLVMValueRef srcval = convert (ctx, LLVMConstInt (IntPtrType (), ins->inst_imm, FALSE), t);
LLVMValueRef ptrdst = convert (ctx, addr, LLVMPointerType (t, 0));
if (is_unaligned)
mono_llvm_build_aligned_store (builder, srcval, ptrdst, is_volatile, 1);
else
emit_store (ctx, bb, &builder, size, srcval, ptrdst, base, is_faulting, is_volatile);
break;
}
case OP_CHECK_THIS:
emit_load (ctx, bb, &builder, TARGET_SIZEOF_VOID_P, convert (ctx, lhs, LLVMPointerType (IntPtrType (), 0)), lhs, "", TRUE, FALSE, LLVM_BARRIER_NONE);
break;
case OP_OUTARG_VTRETADDR:
break;
case OP_VOIDCALL:
case OP_CALL:
case OP_LCALL:
case OP_FCALL:
case OP_RCALL:
case OP_VCALL:
case OP_VOIDCALL_MEMBASE:
case OP_CALL_MEMBASE:
case OP_LCALL_MEMBASE:
case OP_FCALL_MEMBASE:
case OP_RCALL_MEMBASE:
case OP_VCALL_MEMBASE:
case OP_VOIDCALL_REG:
case OP_CALL_REG:
case OP_LCALL_REG:
case OP_FCALL_REG:
case OP_RCALL_REG:
case OP_VCALL_REG: {
process_call (ctx, bb, &builder, ins);
break;
}
case OP_AOTCONST: {
MonoJumpInfoType ji_type = ins->inst_c1;
gpointer ji_data = ins->inst_p0;
if (ji_type == MONO_PATCH_INFO_ICALL_ADDR) {
char *symbol = mono_aot_get_direct_call_symbol (MONO_PATCH_INFO_ICALL_ADDR_CALL, ji_data);
if (symbol) {
/*
 * Avoid emitting a GOT entry for these since the method is called
 * directly, and it might not be resolvable at runtime using dlsym ().
 */
g_free (symbol);
values [ins->dreg] = LLVMConstInt (IntPtrType (), 0, FALSE);
break;
}
}
values [ins->dreg] = get_aotconst (ctx, ji_type, ji_data, LLVMPointerType (IntPtrType (), 0));
break;
}
case OP_MEMMOVE: {
int argn = 0;
LLVMValueRef args [5];
args [argn++] = convert (ctx, values [ins->sreg1], LLVMPointerType (LLVMInt8Type (), 0));
args [argn++] = convert (ctx, values [ins->sreg2], LLVMPointerType (LLVMInt8Type (), 0));
args [argn++] = convert (ctx, values [ins->sreg3], LLVMInt64Type ());
args [argn++] = LLVMConstInt (LLVMInt1Type (), 0, FALSE); // is_volatile
call_intrins (ctx, INTRINS_MEMMOVE, args, "");
break;
}
case OP_NOT_REACHED:
LLVMBuildUnreachable (builder);
has_terminator = TRUE;
g_assert (bb->block_num < cfg->max_block_num);
ctx->unreachable [bb->block_num] = TRUE;
/* Might have instructions after this */
while (ins->next) {
MonoInst *next = ins->next;
/*
* FIXME: If later code uses the regs defined by these instructions,
* compilation will fail.
*/
const char *spec = INS_INFO (next->opcode);
if (spec [MONO_INST_DEST] == 'i' && !MONO_IS_STORE_MEMBASE (next))
ctx->values [next->dreg] = LLVMConstNull (LLVMInt32Type ());
MONO_DELETE_INS (bb, next);
}
break;
case OP_LDADDR: {
MonoInst *var = ins->inst_i0;
MonoClass *klass = var->klass;
if (var->opcode == OP_VTARG_ADDR && !MONO_CLASS_IS_SIMD (cfg, klass)) {
/* The variable contains the vtype address */
values [ins->dreg] = values [var->dreg];
} else if (var->opcode == OP_GSHAREDVT_LOCAL) {
values [ins->dreg] = emit_gsharedvt_ldaddr (ctx, var->dreg);
} else {
values [ins->dreg] = addresses [var->dreg];
}
break;
}
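/*
 * Scalar math opcodes. These map directly to the corresponding LLVM
 * intrinsics; the argument is converted to double (or float for the *F
 * variants) first.
 */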
case OP_SIN: {
LLVMValueRef args [1];
args [0] = convert (ctx, lhs, LLVMDoubleType ());
values [ins->dreg] = call_intrins (ctx, INTRINS_SIN, args, dname);
break;
}
case OP_SINF: {
LLVMValueRef args [1];
args [0] = convert (ctx, lhs, LLVMFloatType ());
values [ins->dreg] = call_intrins (ctx, INTRINS_SINF, args, dname);
break;
}
case OP_EXP: {
LLVMValueRef args [1];
args [0] = convert (ctx, lhs, LLVMDoubleType ());
values [ins->dreg] = call_intrins (ctx, INTRINS_EXP, args, dname);
break;
}
case OP_EXPF: {
LLVMValueRef args [1];
args [0] = convert (ctx, lhs, LLVMFloatType ());
values [ins->dreg] = call_intrins (ctx, INTRINS_EXPF, args, dname);
break;
}
case OP_LOG2: {
LLVMValueRef args [1];
args [0] = convert (ctx, lhs, LLVMDoubleType ());
values [ins->dreg] = call_intrins (ctx, INTRINS_LOG2, args, dname);
break;
}
case OP_LOG2F: {
LLVMValueRef args [1];
args [0] = convert (ctx, lhs, LLVMFloatType ());
values [ins->dreg] = call_intrins (ctx, INTRINS_LOG2F, args, dname);
break;
}
case OP_LOG10: {
LLVMValueRef args [1];
args [0] = convert (ctx, lhs, LLVMDoubleType ());
values [ins->dreg] = call_intrins (ctx, INTRINS_LOG10, args, dname);
break;
}
case OP_LOG10F: {
LLVMValueRef args [1];
args [0] = convert (ctx, lhs, LLVMFloatType ());
values [ins->dreg] = call_intrins (ctx, INTRINS_LOG10F, args, dname);
break;
}
case OP_LOG: {
LLVMValueRef args [1];
args [0] = convert (ctx, lhs, LLVMDoubleType ());
values [ins->dreg] = call_intrins (ctx, INTRINS_LOG, args, dname);
break;
}
case OP_TRUNC: {
LLVMValueRef args [1];
args [0] = convert (ctx, lhs, LLVMDoubleType ());
values [ins->dreg] = call_intrins (ctx, INTRINS_TRUNC, args, dname);
break;
}
case OP_TRUNCF: {
LLVMValueRef args [1];
args [0] = convert (ctx, lhs, LLVMFloatType ());
values [ins->dreg] = call_intrins (ctx, INTRINS_TRUNCF, args, dname);
break;
}
case OP_COS: {
LLVMValueRef args [1];
args [0] = convert (ctx, lhs, LLVMDoubleType ());
values [ins->dreg] = call_intrins (ctx, INTRINS_COS, args, dname);
break;
}
case OP_COSF: {
LLVMValueRef args [1];
args [0] = convert (ctx, lhs, LLVMFloatType ());
values [ins->dreg] = call_intrins (ctx, INTRINS_COSF, args, dname);
break;
}
case OP_SQRT: {
LLVMValueRef args [1];
args [0] = convert (ctx, lhs, LLVMDoubleType ());
values [ins->dreg] = call_intrins (ctx, INTRINS_SQRT, args, dname);
break;
}
case OP_SQRTF: {
LLVMValueRef args [1];
args [0] = convert (ctx, lhs, LLVMFloatType ());
values [ins->dreg] = call_intrins (ctx, INTRINS_SQRTF, args, dname);
break;
}
case OP_FLOOR: {
LLVMValueRef args [1];
args [0] = convert (ctx, lhs, LLVMDoubleType ());
values [ins->dreg] = call_intrins (ctx, INTRINS_FLOOR, args, dname);
break;
}
case OP_FLOORF: {
LLVMValueRef args [1];
args [0] = convert (ctx, lhs, LLVMFloatType ());
values [ins->dreg] = call_intrins (ctx, INTRINS_FLOORF, args, dname);
break;
}
case OP_CEIL: {
LLVMValueRef args [1];
args [0] = convert (ctx, lhs, LLVMDoubleType ());
values [ins->dreg] = call_intrins (ctx, INTRINS_CEIL, args, dname);
break;
}
case OP_CEILF: {
LLVMValueRef args [1];
args [0] = convert (ctx, lhs, LLVMFloatType ());
values [ins->dreg] = call_intrins (ctx, INTRINS_CEILF, args, dname);
break;
}
case OP_FMA: {
LLVMValueRef args [3];
args [0] = convert (ctx, values [ins->sreg1], LLVMDoubleType ());
args [1] = convert (ctx, values [ins->sreg2], LLVMDoubleType ());
args [2] = convert (ctx, values [ins->sreg3], LLVMDoubleType ());
values [ins->dreg] = call_intrins (ctx, INTRINS_FMA, args, dname);
break;
}
case OP_FMAF: {
LLVMValueRef args [3];
args [0] = convert (ctx, values [ins->sreg1], LLVMFloatType ());
args [1] = convert (ctx, values [ins->sreg2], LLVMFloatType ());
args [2] = convert (ctx, values [ins->sreg3], LLVMFloatType ());
values [ins->dreg] = call_intrins (ctx, INTRINS_FMAF, args, dname);
break;
}
case OP_ABS: {
LLVMValueRef args [1];
args [0] = convert (ctx, lhs, LLVMDoubleType ());
values [ins->dreg] = call_intrins (ctx, INTRINS_FABS, args, dname);
break;
}
case OP_ABSF: {
LLVMValueRef args [1];
#ifdef TARGET_AMD64
args [0] = convert (ctx, lhs, LLVMFloatType ());
values [ins->dreg] = call_intrins (ctx, INTRINS_ABSF, args, dname);
#else
/* llvm.fabs not supported on all platforms */
args [0] = convert (ctx, lhs, LLVMDoubleType ());
values [ins->dreg] = call_intrins (ctx, INTRINS_FABS, args, dname);
values [ins->dreg] = convert (ctx, values [ins->dreg], LLVMFloatType ());
#endif
break;
}
case OP_RPOW: {
LLVMValueRef args [2];
args [0] = convert (ctx, lhs, LLVMFloatType ());
args [1] = convert (ctx, rhs, LLVMFloatType ());
values [ins->dreg] = call_intrins (ctx, INTRINS_POWF, args, dname);
break;
}
case OP_FPOW: {
LLVMValueRef args [2];
args [0] = convert (ctx, lhs, LLVMDoubleType ());
args [1] = convert (ctx, rhs, LLVMDoubleType ());
values [ins->dreg] = call_intrins (ctx, INTRINS_POW, args, dname);
break;
}
case OP_FCOPYSIGN: {
LLVMValueRef args [2];
args [0] = convert (ctx, lhs, LLVMDoubleType ());
args [1] = convert (ctx, rhs, LLVMDoubleType ());
values [ins->dreg] = call_intrins (ctx, INTRINS_COPYSIGN, args, dname);
break;
}
case OP_RCOPYSIGN: {
LLVMValueRef args [2];
args [0] = convert (ctx, lhs, LLVMFloatType ());
args [1] = convert (ctx, rhs, LLVMFloatType ());
values [ins->dreg] = call_intrins (ctx, INTRINS_COPYSIGNF, args, dname);
break;
}
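/*
 * Min/max opcodes are lowered to a compare followed by a select. The FP
 * variants use the unordered predicates (UGE/ULE), so if either operand
 * is a NaN the comparison is true and the first operand is selected.
 */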
case OP_IMIN:
case OP_LMIN:
case OP_IMAX:
case OP_LMAX:
case OP_IMIN_UN:
case OP_LMIN_UN:
case OP_IMAX_UN:
case OP_LMAX_UN:
case OP_FMIN:
case OP_FMAX:
case OP_RMIN:
case OP_RMAX: {
LLVMValueRef v;
lhs = convert (ctx, lhs, regtype_to_llvm_type (spec [MONO_INST_DEST]));
rhs = convert (ctx, rhs, regtype_to_llvm_type (spec [MONO_INST_DEST]));
switch (ins->opcode) {
case OP_IMIN:
case OP_LMIN:
v = LLVMBuildICmp (builder, LLVMIntSLE, lhs, rhs, "");
break;
case OP_IMAX:
case OP_LMAX:
v = LLVMBuildICmp (builder, LLVMIntSGE, lhs, rhs, "");
break;
case OP_IMIN_UN:
case OP_LMIN_UN:
v = LLVMBuildICmp (builder, LLVMIntULE, lhs, rhs, "");
break;
case OP_IMAX_UN:
case OP_LMAX_UN:
v = LLVMBuildICmp (builder, LLVMIntUGE, lhs, rhs, "");
break;
case OP_FMAX:
case OP_RMAX:
v = LLVMBuildFCmp (builder, LLVMRealUGE, lhs, rhs, "");
break;
case OP_FMIN:
case OP_RMIN:
v = LLVMBuildFCmp (builder, LLVMRealULE, lhs, rhs, "");
break;
default:
g_assert_not_reached ();
break;
}
values [ins->dreg] = LLVMBuildSelect (builder, v, lhs, rhs, dname);
break;
}
/*
* See the ARM64 comment in mono/utils/atomic.h for an explanation of why this
* hack is necessary (for now).
*/
#ifdef TARGET_ARM64
#define ARM64_ATOMIC_FENCE_FIX mono_llvm_build_fence (builder, LLVM_BARRIER_SEQ)
#else
#define ARM64_ATOMIC_FENCE_FIX
#endif
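/*
 * Atomic opcodes. These map to LLVM atomicrmw/cmpxchg instructions,
 * bracketed by seq-cst fences on ARM64 via the macro above.
 */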
case OP_ATOMIC_EXCHANGE_I4:
case OP_ATOMIC_EXCHANGE_I8: {
LLVMValueRef args [2];
LLVMTypeRef t;
if (ins->opcode == OP_ATOMIC_EXCHANGE_I4)
t = LLVMInt32Type ();
else
t = LLVMInt64Type ();
g_assert (ins->inst_offset == 0);
args [0] = convert (ctx, lhs, LLVMPointerType (t, 0));
args [1] = convert (ctx, rhs, t);
ARM64_ATOMIC_FENCE_FIX;
values [ins->dreg] = mono_llvm_build_atomic_rmw (builder, LLVM_ATOMICRMW_OP_XCHG, args [0], args [1]);
ARM64_ATOMIC_FENCE_FIX;
break;
}
case OP_ATOMIC_ADD_I4:
case OP_ATOMIC_ADD_I8:
case OP_ATOMIC_AND_I4:
case OP_ATOMIC_AND_I8:
case OP_ATOMIC_OR_I4:
case OP_ATOMIC_OR_I8: {
LLVMValueRef args [2];
LLVMTypeRef t;
if (ins->type == STACK_I4)
t = LLVMInt32Type ();
else
t = LLVMInt64Type ();
g_assert (ins->inst_offset == 0);
args [0] = convert (ctx, lhs, LLVMPointerType (t, 0));
args [1] = convert (ctx, rhs, t);
ARM64_ATOMIC_FENCE_FIX;
if (ins->opcode == OP_ATOMIC_ADD_I4 || ins->opcode == OP_ATOMIC_ADD_I8)
// Interlocked.Add returns the new value, but atomicrmw returns the old one,
// so emit an additional Add here. See https://github.com/dotnet/runtime/pull/33102
values [ins->dreg] = LLVMBuildAdd (builder, mono_llvm_build_atomic_rmw (builder, LLVM_ATOMICRMW_OP_ADD, args [0], args [1]), args [1], dname);
else if (ins->opcode == OP_ATOMIC_AND_I4 || ins->opcode == OP_ATOMIC_AND_I8)
values [ins->dreg] = mono_llvm_build_atomic_rmw (builder, LLVM_ATOMICRMW_OP_AND, args [0], args [1]);
else if (ins->opcode == OP_ATOMIC_OR_I4 || ins->opcode == OP_ATOMIC_OR_I8)
values [ins->dreg] = mono_llvm_build_atomic_rmw (builder, LLVM_ATOMICRMW_OP_OR, args [0], args [1]);
else
g_assert_not_reached ();
ARM64_ATOMIC_FENCE_FIX;
break;
}
case OP_ATOMIC_CAS_I4:
case OP_ATOMIC_CAS_I8: {
LLVMValueRef args [3], val;
LLVMTypeRef t;
if (ins->opcode == OP_ATOMIC_CAS_I4)
t = LLVMInt32Type ();
else
t = LLVMInt64Type ();
args [0] = convert (ctx, lhs, LLVMPointerType (t, 0));
/* comparand */
args [1] = convert (ctx, values [ins->sreg3], t);
/* new value */
args [2] = convert (ctx, values [ins->sreg2], t);
ARM64_ATOMIC_FENCE_FIX;
val = mono_llvm_build_cmpxchg (builder, args [0], args [1], args [2]);
ARM64_ATOMIC_FENCE_FIX;
/* cmpxchg returns a pair */
values [ins->dreg] = LLVMBuildExtractValue (builder, val, 0, "");
break;
}
case OP_MEMORY_BARRIER: {
mono_llvm_build_fence (builder, (BarrierKind) ins->backend.memory_barrier_kind);
break;
}
case OP_ATOMIC_LOAD_I1:
case OP_ATOMIC_LOAD_I2:
case OP_ATOMIC_LOAD_I4:
case OP_ATOMIC_LOAD_I8:
case OP_ATOMIC_LOAD_U1:
case OP_ATOMIC_LOAD_U2:
case OP_ATOMIC_LOAD_U4:
case OP_ATOMIC_LOAD_U8:
case OP_ATOMIC_LOAD_R4:
case OP_ATOMIC_LOAD_R8: {
int size;
gboolean sext, zext;
LLVMTypeRef t;
gboolean is_faulting = (ins->flags & MONO_INST_FAULT) != 0;
gboolean is_volatile = (ins->flags & MONO_INST_VOLATILE) != 0;
BarrierKind barrier = (BarrierKind) ins->backend.memory_barrier_kind;
LLVMValueRef index, addr;
t = load_store_to_llvm_type (ins->opcode, &size, &sext, &zext);
if (sext || zext)
dname = (char *)"";
if (ins->inst_offset != 0) {
index = LLVMConstInt (LLVMInt32Type (), ins->inst_offset / size, FALSE);
addr = LLVMBuildGEP (builder, convert (ctx, lhs, LLVMPointerType (t, 0)), &index, 1, "");
} else {
addr = lhs;
}
addr = convert (ctx, addr, LLVMPointerType (t, 0));
ARM64_ATOMIC_FENCE_FIX;
values [ins->dreg] = emit_load (ctx, bb, &builder, size, addr, lhs, dname, is_faulting, is_volatile, barrier);
ARM64_ATOMIC_FENCE_FIX;
if (sext)
values [ins->dreg] = LLVMBuildSExt (builder, values [ins->dreg], LLVMInt32Type (), dname);
else if (zext)
values [ins->dreg] = LLVMBuildZExt (builder, values [ins->dreg], LLVMInt32Type (), dname);
break;
}
case OP_ATOMIC_STORE_I1:
case OP_ATOMIC_STORE_I2:
case OP_ATOMIC_STORE_I4:
case OP_ATOMIC_STORE_I8:
case OP_ATOMIC_STORE_U1:
case OP_ATOMIC_STORE_U2:
case OP_ATOMIC_STORE_U4:
case OP_ATOMIC_STORE_U8:
case OP_ATOMIC_STORE_R4:
case OP_ATOMIC_STORE_R8: {
int size;
gboolean sext, zext;
LLVMTypeRef t;
gboolean is_faulting = (ins->flags & MONO_INST_FAULT) != 0;
gboolean is_volatile = (ins->flags & MONO_INST_VOLATILE) != 0;
BarrierKind barrier = (BarrierKind) ins->backend.memory_barrier_kind;
LLVMValueRef index, addr, value, base;
if (!values [ins->inst_destbasereg]) {
set_failure (ctx, "inst_destbasereg");
break;
}
t = load_store_to_llvm_type (ins->opcode, &size, &sext, &zext);
base = values [ins->inst_destbasereg];
index = LLVMConstInt (LLVMInt32Type (), ins->inst_offset / size, FALSE);
addr = LLVMBuildGEP (builder, convert (ctx, base, LLVMPointerType (t, 0)), &index, 1, "");
value = convert (ctx, values [ins->sreg1], t);
ARM64_ATOMIC_FENCE_FIX;
emit_store_general (ctx, bb, &builder, size, value, addr, base, is_faulting, is_volatile, barrier);
ARM64_ATOMIC_FENCE_FIX;
break;
}
case OP_RELAXED_NOP: {
#if defined(TARGET_AMD64) || defined(TARGET_X86)
call_intrins (ctx, INTRINS_SSE_PAUSE, NULL, "");
break;
#else
break;
#endif
}
case OP_TLS_GET: {
#if (defined(TARGET_AMD64) || defined(TARGET_X86)) && defined(__linux__)
#ifdef TARGET_AMD64
// 257 == FS segment register
LLVMTypeRef ptrtype = LLVMPointerType (IntPtrType (), 257);
#else
// 256 == GS segment register
LLVMTypeRef ptrtype = LLVMPointerType (IntPtrType (), 256);
#endif
// FIXME: XEN
values [ins->dreg] = LLVMBuildLoad (builder, LLVMBuildIntToPtr (builder, LLVMConstInt (IntPtrType (), ins->inst_offset, TRUE), ptrtype, ""), "");
#elif defined(TARGET_AMD64) && defined(TARGET_OSX)
/* See mono_amd64_emit_tls_get () */
int offset = mono_amd64_get_tls_gs_offset () + (ins->inst_offset * 8);
// 256 == GS segment register
LLVMTypeRef ptrtype = LLVMPointerType (IntPtrType (), 256);
values [ins->dreg] = LLVMBuildLoad (builder, LLVMBuildIntToPtr (builder, LLVMConstInt (IntPtrType (), offset, TRUE), ptrtype, ""), "");
#else
set_failure (ctx, "opcode tls-get");
break;
#endif
break;
}
case OP_GC_SAFE_POINT: {
LLVMValueRef val, cmp, callee, call;
LLVMBasicBlockRef poll_bb, cont_bb;
LLVMValueRef args [2];
static LLVMTypeRef sig;
const char *icall_name = "mono_threads_state_poll";
/*
* Create the cold wrapper around the icall, along with a managed method for it so
* unwinding works.
*/
if (!cfg->compile_aot && !ctx->module->gc_poll_cold_wrapper_compiled) {
ERROR_DECL (error);
/* Compiling a method here is a bit ugly, but it works */
MonoMethod *wrapper = mono_marshal_get_llvm_func_wrapper (LLVM_FUNC_WRAPPER_GC_POLL);
ctx->module->gc_poll_cold_wrapper_compiled = mono_jit_compile_method (wrapper, error);
mono_error_assert_ok (error);
}
if (!sig)
sig = LLVMFunctionType0 (LLVMVoidType (), FALSE);
/*
* if (!*sreg1)
* mono_threads_state_poll ();
*/
val = mono_llvm_build_load (builder, convert (ctx, lhs, LLVMPointerType (IntPtrType (), 0)), "", TRUE);
cmp = LLVMBuildICmp (builder, LLVMIntEQ, val, LLVMConstNull (LLVMTypeOf (val)), "");
poll_bb = gen_bb (ctx, "POLL_BB");
cont_bb = gen_bb (ctx, "CONT_BB");
args [0] = cmp;
args [1] = LLVMConstInt (LLVMInt1Type (), 1, FALSE);
cmp = call_intrins (ctx, INTRINS_EXPECT_I1, args, "");
mono_llvm_build_weighted_branch (builder, cmp, cont_bb, poll_bb, 1000, 1);
ctx->builder = builder = create_builder (ctx);
LLVMPositionBuilderAtEnd (builder, poll_bb);
if (ctx->cfg->compile_aot) {
callee = get_callee (ctx, sig, MONO_PATCH_INFO_JIT_ICALL_ID, GUINT_TO_POINTER (MONO_JIT_ICALL_mono_threads_state_poll));
call = LLVMBuildCall (builder, callee, NULL, 0, "");
} else {
callee = get_jit_callee (ctx, icall_name, sig, MONO_PATCH_INFO_ABS, ctx->module->gc_poll_cold_wrapper_compiled);
call = LLVMBuildCall (builder, callee, NULL, 0, "");
set_call_cold_cconv (call);
}
LLVMBuildBr (builder, cont_bb);
ctx->builder = builder = create_builder (ctx);
LLVMPositionBuilderAtEnd (builder, cont_bb);
ctx->bblocks [bb->block_num].end_bblock = cont_bb;
break;
}
/*
* Overflow opcodes.
*/
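/*
 * These use the llvm.*.with.overflow intrinsics, which return a
 * { result, i1 overflow } pair; the i1 is used to throw the exception
 * named by ins->inst_exc_name.
 */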
case OP_IADD_OVF:
case OP_IADD_OVF_UN:
case OP_ISUB_OVF:
case OP_ISUB_OVF_UN:
case OP_IMUL_OVF:
case OP_IMUL_OVF_UN:
case OP_LADD_OVF:
case OP_LADD_OVF_UN:
case OP_LSUB_OVF:
case OP_LSUB_OVF_UN:
case OP_LMUL_OVF:
case OP_LMUL_OVF_UN: {
LLVMValueRef args [2], val, ovf;
IntrinsicId intrins;
args [0] = convert (ctx, lhs, op_to_llvm_type (ins->opcode));
args [1] = convert (ctx, rhs, op_to_llvm_type (ins->opcode));
intrins = ovf_op_to_intrins (ins->opcode);
val = call_intrins (ctx, intrins, args, "");
values [ins->dreg] = LLVMBuildExtractValue (builder, val, 0, dname);
ovf = LLVMBuildExtractValue (builder, val, 1, "");
emit_cond_system_exception (ctx, bb, ins->inst_exc_name, ovf, FALSE);
if (!ctx_ok (ctx))
break;
builder = ctx->builder;
break;
}
/*
* Valuetypes.
* We currently model them using arrays. Promotion to local vregs is
* disabled for them in mono_handle_global_vregs () in the LLVM case,
* so we always have an entry in cfg->varinfo for them.
* FIXME: Is this needed?
*/
case OP_VZERO: {
MonoClass *klass = ins->klass;
if (!klass) {
// FIXME:
set_failure (ctx, "!klass");
break;
}
if (!addresses [ins->dreg])
addresses [ins->dreg] = build_named_alloca (ctx, m_class_get_byval_arg (klass), "vzero");
LLVMValueRef ptr = LLVMBuildBitCast (builder, addresses [ins->dreg], LLVMPointerType (LLVMInt8Type (), 0), "");
emit_memset (ctx, builder, ptr, const_int32 (mono_class_value_size (klass, NULL)), 0);
break;
}
case OP_DUMMY_VZERO:
break;
case OP_STOREV_MEMBASE:
case OP_LOADV_MEMBASE:
case OP_VMOVE: {
MonoClass *klass = ins->klass;
LLVMValueRef src = NULL, dst, args [5];
gboolean done = FALSE;
gboolean is_volatile = FALSE;
if (!klass) {
// FIXME:
set_failure (ctx, "!klass");
break;
}
if (mini_is_gsharedvt_klass (klass)) {
// FIXME:
set_failure (ctx, "gsharedvt");
break;
}
switch (ins->opcode) {
case OP_STOREV_MEMBASE:
if (cfg->gen_write_barriers && m_class_has_references (klass) && ins->inst_destbasereg != cfg->frame_reg &&
LLVMGetInstructionOpcode (values [ins->inst_destbasereg]) != LLVMAlloca) {
/* Decomposed earlier */
g_assert_not_reached ();
break;
}
if (!addresses [ins->sreg1]) {
/* SIMD */
g_assert (values [ins->sreg1]);
dst = convert (ctx, LLVMBuildAdd (builder, convert (ctx, values [ins->inst_destbasereg], IntPtrType ()), LLVMConstInt (IntPtrType (), ins->inst_offset, FALSE), ""), LLVMPointerType (type_to_llvm_type (ctx, m_class_get_byval_arg (klass)), 0));
LLVMBuildStore (builder, values [ins->sreg1], dst);
done = TRUE;
} else {
src = LLVMBuildBitCast (builder, addresses [ins->sreg1], LLVMPointerType (LLVMInt8Type (), 0), "");
dst = convert (ctx, LLVMBuildAdd (builder, convert (ctx, values [ins->inst_destbasereg], IntPtrType ()), LLVMConstInt (IntPtrType (), ins->inst_offset, FALSE), ""), LLVMPointerType (LLVMInt8Type (), 0));
}
break;
case OP_LOADV_MEMBASE:
if (!addresses [ins->dreg])
addresses [ins->dreg] = build_alloca (ctx, m_class_get_byval_arg (klass));
src = convert (ctx, LLVMBuildAdd (builder, convert (ctx, values [ins->inst_basereg], IntPtrType ()), LLVMConstInt (IntPtrType (), ins->inst_offset, FALSE), ""), LLVMPointerType (LLVMInt8Type (), 0));
dst = LLVMBuildBitCast (builder, addresses [ins->dreg], LLVMPointerType (LLVMInt8Type (), 0), "");
break;
case OP_VMOVE:
if (!addresses [ins->sreg1])
addresses [ins->sreg1] = build_alloca (ctx, m_class_get_byval_arg (klass));
if (!addresses [ins->dreg])
addresses [ins->dreg] = build_alloca (ctx, m_class_get_byval_arg (klass));
src = LLVMBuildBitCast (builder, addresses [ins->sreg1], LLVMPointerType (LLVMInt8Type (), 0), "");
dst = LLVMBuildBitCast (builder, addresses [ins->dreg], LLVMPointerType (LLVMInt8Type (), 0), "");
break;
default:
g_assert_not_reached ();
}
if (!ctx_ok (ctx))
break;
if (done)
break;
#ifdef TARGET_WASM
is_volatile = m_class_has_references (klass);
#endif
int aindex = 0;
args [aindex ++] = dst;
args [aindex ++] = src;
args [aindex ++] = LLVMConstInt (LLVMInt32Type (), mono_class_value_size (klass, NULL), FALSE);
args [aindex ++] = LLVMConstInt (LLVMInt1Type (), is_volatile ? 1 : 0, FALSE);
call_intrins (ctx, INTRINS_MEMCPY, args, "");
break;
}
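/*
 * Prepare a vtype argument for a call. Depending on ainfo->storage this
 * reuses the existing address, spills the value into a fresh alloca, or
 * (for LLVMArgVtypeByRef/LLVMArgVtypeAddr) makes a copy of the data.
 */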
case OP_LLVM_OUTARG_VT: {
LLVMArgInfo *ainfo = (LLVMArgInfo*)ins->inst_p0;
MonoType *t = mini_get_underlying_type (ins->inst_vtype);
if (ainfo->storage == LLVMArgGsharedvtVariable) {
MonoInst *var = get_vreg_to_inst (cfg, ins->sreg1);
if (var && var->opcode == OP_GSHAREDVT_LOCAL) {
addresses [ins->dreg] = convert (ctx, emit_gsharedvt_ldaddr (ctx, var->dreg), LLVMPointerType (IntPtrType (), 0));
} else {
g_assert (addresses [ins->sreg1]);
addresses [ins->dreg] = addresses [ins->sreg1];
}
} else if (ainfo->storage == LLVMArgGsharedvtFixed) {
if (!addresses [ins->sreg1]) {
addresses [ins->sreg1] = build_alloca (ctx, t);
g_assert (values [ins->sreg1]);
}
LLVMBuildStore (builder, convert (ctx, values [ins->sreg1], LLVMGetElementType (LLVMTypeOf (addresses [ins->sreg1]))), addresses [ins->sreg1]);
addresses [ins->dreg] = addresses [ins->sreg1];
} else {
if (!addresses [ins->sreg1]) {
addresses [ins->sreg1] = build_named_alloca (ctx, t, "llvm_outarg_vt");
g_assert (values [ins->sreg1]);
LLVMBuildStore (builder, convert (ctx, values [ins->sreg1], type_to_llvm_type (ctx, t)), addresses [ins->sreg1]);
addresses [ins->dreg] = addresses [ins->sreg1];
} else if (ainfo->storage == LLVMArgVtypeAddr || values [ins->sreg1] == addresses [ins->sreg1]) {
/* LLVMArgVtypeByRef/LLVMArgVtypeAddr, have to make a copy */
addresses [ins->dreg] = build_alloca (ctx, t);
LLVMValueRef v = LLVMBuildLoad (builder, addresses [ins->sreg1], "llvm_outarg_vt_copy");
LLVMBuildStore (builder, convert (ctx, v, type_to_llvm_type (ctx, t)), addresses [ins->dreg]);
} else {
if (values [ins->sreg1]) {
LLVMTypeRef src_t = LLVMTypeOf (values [ins->sreg1]);
LLVMValueRef dst = convert (ctx, addresses [ins->sreg1], LLVMPointerType (src_t, 0));
LLVMBuildStore (builder, values [ins->sreg1], dst);
}
addresses [ins->dreg] = addresses [ins->sreg1];
}
}
break;
}
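/*
 * Load an Objective-C selector. This emits the same __objc_imageinfo /
 * __objc_methname / __objc_selrefs entries that clang would, so the
 * selector references can be fixed up by the ObjC runtime at load time.
 */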
case OP_OBJC_GET_SELECTOR: {
const char *name = (const char*)ins->inst_p0;
LLVMValueRef var;
if (!ctx->module->objc_selector_to_var) {
ctx->module->objc_selector_to_var = g_hash_table_new_full (g_str_hash, g_str_equal, g_free, NULL);
LLVMValueRef info_var = LLVMAddGlobal (ctx->lmodule, LLVMArrayType (LLVMInt8Type (), 8), "@OBJC_IMAGE_INFO");
int32_t objc_imageinfo [] = { 0, 16 };
LLVMSetInitializer (info_var, mono_llvm_create_constant_data_array ((uint8_t *) &objc_imageinfo, 8));
LLVMSetLinkage (info_var, LLVMPrivateLinkage);
LLVMSetExternallyInitialized (info_var, TRUE);
LLVMSetSection (info_var, "__DATA, __objc_imageinfo,regular,no_dead_strip");
LLVMSetAlignment (info_var, sizeof (target_mgreg_t));
mark_as_used (ctx->module, info_var);
}
var = (LLVMValueRef)g_hash_table_lookup (ctx->module->objc_selector_to_var, name);
if (!var) {
LLVMValueRef indexes [16];
LLVMValueRef name_var = LLVMAddGlobal (ctx->lmodule, LLVMArrayType (LLVMInt8Type (), strlen (name) + 1), "@OBJC_METH_VAR_NAME_");
LLVMSetInitializer (name_var, mono_llvm_create_constant_data_array ((const uint8_t*)name, strlen (name) + 1));
LLVMSetLinkage (name_var, LLVMPrivateLinkage);
LLVMSetSection (name_var, "__TEXT,__objc_methname,cstring_literals");
mark_as_used (ctx->module, name_var);
LLVMValueRef ref_var = LLVMAddGlobal (ctx->lmodule, LLVMPointerType (LLVMInt8Type (), 0), "@OBJC_SELECTOR_REFERENCES_");
indexes [0] = LLVMConstInt (LLVMInt32Type (), 0, 0);
indexes [1] = LLVMConstInt (LLVMInt32Type (), 0, 0);
LLVMSetInitializer (ref_var, LLVMConstGEP (name_var, indexes, 2));
LLVMSetLinkage (ref_var, LLVMPrivateLinkage);
LLVMSetExternallyInitialized (ref_var, TRUE);
LLVMSetSection (ref_var, "__DATA, __objc_selrefs, literal_pointers, no_dead_strip");
LLVMSetAlignment (ref_var, sizeof (target_mgreg_t));
mark_as_used (ctx->module, ref_var);
g_hash_table_insert (ctx->module->objc_selector_to_var, g_strdup (name), ref_var);
var = ref_var;
}
values [ins->dreg] = LLVMBuildLoad (builder, var, "");
break;
}
#if defined(TARGET_X86) || defined(TARGET_AMD64) || defined(TARGET_ARM64) || defined(TARGET_WASM)
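/*
 * Vector element extract/insert. Variable element indices are masked with
 * (elements - 1), so out-of-range indices wrap around instead of producing
 * undefined extractelement/insertelement operations; this relies on the
 * vector length being a power of two.
 */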
case OP_EXTRACTX_U2:
case OP_XEXTRACT_I1:
case OP_XEXTRACT_I2:
case OP_XEXTRACT_I4:
case OP_XEXTRACT_I8:
case OP_XEXTRACT_R4:
case OP_XEXTRACT_R8:
case OP_EXTRACT_I1:
case OP_EXTRACT_I2:
case OP_EXTRACT_I4:
case OP_EXTRACT_I8:
case OP_EXTRACT_R4:
case OP_EXTRACT_R8: {
MonoTypeEnum mono_elt_t = inst_c1_type (ins);
LLVMTypeRef elt_t = primitive_type_to_llvm_type (mono_elt_t);
gboolean sext = FALSE;
gboolean zext = FALSE;
switch (mono_elt_t) {
case MONO_TYPE_I1: case MONO_TYPE_I2: sext = TRUE; break;
case MONO_TYPE_U1: case MONO_TYPE_U2: zext = TRUE; break;
}
LLVMValueRef element_ix = NULL;
switch (ins->opcode) {
case OP_XEXTRACT_I1:
case OP_XEXTRACT_I2:
case OP_XEXTRACT_I4:
case OP_XEXTRACT_R4:
case OP_XEXTRACT_R8:
case OP_XEXTRACT_I8:
element_ix = rhs;
break;
default:
element_ix = const_int32 (ins->inst_c0);
}
LLVMTypeRef lhs_t = LLVMTypeOf (lhs);
int vec_width = mono_llvm_get_prim_size_bits (lhs_t);
int elem_width = mono_llvm_get_prim_size_bits (elt_t);
int elements = vec_width / elem_width;
element_ix = LLVMBuildAnd (builder, element_ix, const_int32 (elements - 1), "extract");
LLVMTypeRef ret_t = LLVMVectorType (elt_t, elements);
LLVMValueRef src = LLVMBuildBitCast (builder, lhs, ret_t, "extract");
LLVMValueRef result = LLVMBuildExtractElement (builder, src, element_ix, "extract");
if (zext)
result = LLVMBuildZExt (builder, result, i4_t, "extract_zext");
else if (sext)
result = LLVMBuildSExt (builder, result, i4_t, "extract_sext");
values [ins->dreg] = result;
break;
}
case OP_XINSERT_I1:
case OP_XINSERT_I2:
case OP_XINSERT_I4:
case OP_XINSERT_I8:
case OP_XINSERT_R4:
case OP_XINSERT_R8: {
MonoTypeEnum primty = inst_c1_type (ins);
LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass);
LLVMTypeRef elem_t = LLVMGetElementType (ret_t);
int elements = LLVMGetVectorSize (ret_t);
LLVMValueRef element_ix = LLVMBuildAnd (builder, arg3, const_int32 (elements - 1), "xinsert");
LLVMValueRef vec = convert (ctx, lhs, ret_t);
LLVMValueRef val = convert_full (ctx, rhs, elem_t, primitive_type_is_unsigned (primty));
LLVMValueRef result = LLVMBuildInsertElement (builder, vec, val, element_ix, "xinsert");
values [ins->dreg] = result;
break;
}
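/*
 * Broadcast a scalar into every lane: insert it into lane 0, then shuffle
 * with an all-zero mask to splat it across the vector.
 */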
case OP_EXPAND_I1:
case OP_EXPAND_I2:
case OP_EXPAND_I4:
case OP_EXPAND_I8:
case OP_EXPAND_R4:
case OP_EXPAND_R8: {
LLVMTypeRef t;
LLVMValueRef mask [MAX_VECTOR_ELEMS], v;
int i;
t = simd_class_to_llvm_type (ctx, ins->klass);
for (i = 0; i < MAX_VECTOR_ELEMS; ++i)
mask [i] = LLVMConstInt (LLVMInt32Type (), 0, FALSE);
v = convert (ctx, values [ins->sreg1], LLVMGetElementType (t));
values [ins->dreg] = LLVMBuildInsertElement (builder, LLVMConstNull (t), v, LLVMConstInt (LLVMInt32Type (), 0, FALSE), "");
values [ins->dreg] = LLVMBuildShuffleVector (builder, values [ins->dreg], LLVMGetUndef (t), LLVMConstVector (mask, LLVMGetVectorSize (t)), "");
break;
}
case OP_XZERO: {
values [ins->dreg] = LLVMConstNull (type_to_llvm_type (ctx, m_class_get_byval_arg (ins->klass)));
break;
}
case OP_LOADX_MEMBASE: {
LLVMTypeRef t = type_to_llvm_type (ctx, m_class_get_byval_arg (ins->klass));
LLVMValueRef src;
src = convert (ctx, LLVMBuildAdd (builder, convert (ctx, values [ins->inst_basereg], IntPtrType ()), LLVMConstInt (IntPtrType (), ins->inst_offset, FALSE), ""), LLVMPointerType (t, 0));
values [ins->dreg] = mono_llvm_build_aligned_load (builder, src, "", FALSE, 1);
break;
}
case OP_STOREX_MEMBASE: {
LLVMTypeRef t = LLVMTypeOf (values [ins->sreg1]);
LLVMValueRef dest;
dest = convert (ctx, LLVMBuildAdd (builder, convert (ctx, values [ins->inst_destbasereg], IntPtrType ()), LLVMConstInt (IntPtrType (), ins->inst_offset, FALSE), ""), LLVMPointerType (t, 0));
mono_llvm_build_aligned_store (builder, values [ins->sreg1], dest, FALSE, 1);
break;
}
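/*
 * Generic vector binops. The _SCALAR variant extracts lane 0 of both
 * operands, performs the op as a scalar and rebuilds a vector from the
 * result; the _BYSCALAR variant broadcasts lane 0 of the second operand
 * before doing a full-width vector op.
 */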
case OP_XBINOP:
case OP_XBINOP_SCALAR:
case OP_XBINOP_BYSCALAR: {
gboolean scalar = ins->opcode == OP_XBINOP_SCALAR;
gboolean byscalar = ins->opcode == OP_XBINOP_BYSCALAR;
LLVMValueRef result = NULL;
LLVMValueRef args [] = { lhs, rhs };
if (scalar)
for (int i = 0; i < 2; ++i)
args [i] = scalar_from_vector (ctx, args [i]);
if (byscalar) {
LLVMTypeRef t = LLVMTypeOf (args [0]);
unsigned int elems = LLVMGetVectorSize (t);
args [1] = broadcast_element (ctx, scalar_from_vector (ctx, args [1]), elems);
}
LLVMValueRef l = args [0];
LLVMValueRef r = args [1];
switch (ins->inst_c0) {
case OP_IADD:
result = LLVMBuildAdd (builder, l, r, "");
break;
case OP_ISUB:
result = LLVMBuildSub (builder, l, r, "");
break;
case OP_IMUL:
result = LLVMBuildMul (builder, l, r, "");
break;
case OP_IAND:
result = LLVMBuildAnd (builder, l, r, "");
break;
case OP_IOR:
result = LLVMBuildOr (builder, l, r, "");
break;
case OP_IXOR:
result = LLVMBuildXor (builder, l, r, "");
break;
case OP_FADD:
result = LLVMBuildFAdd (builder, l, r, "");
break;
case OP_FSUB:
result = LLVMBuildFSub (builder, l, r, "");
break;
case OP_FMUL:
result = LLVMBuildFMul (builder, l, r, "");
break;
case OP_FDIV:
result = LLVMBuildFDiv (builder, l, r, "");
break;
case OP_FMAX:
case OP_FMIN: {
LLVMValueRef args [] = { l, r };
#if defined(TARGET_X86) || defined(TARGET_AMD64)
LLVMTypeRef t = LLVMTypeOf (l);
LLVMTypeRef elem_t = LLVMGetElementType (t);
unsigned int elems = LLVMGetVectorSize (t);
unsigned int elem_bits = mono_llvm_get_prim_size_bits (elem_t);
unsigned int v_size = elems * elem_bits;
if (v_size == 128) {
gboolean is_r4 = ins->inst_c1 == MONO_TYPE_R4;
int iid = -1;
if (ins->inst_c0 == OP_FMAX) {
if (elems == 1)
iid = is_r4 ? INTRINS_SSE_MAXSS : INTRINS_SSE_MAXSD;
else
iid = is_r4 ? INTRINS_SSE_MAXPS : INTRINS_SSE_MAXPD;
} else {
if (elems == 1)
iid = is_r4 ? INTRINS_SSE_MINSS : INTRINS_SSE_MINSD;
else
iid = is_r4 ? INTRINS_SSE_MINPS : INTRINS_SSE_MINPD;
}
result = call_intrins (ctx, iid, args, dname);
} else {
LLVMRealPredicate op = ins->inst_c0 == OP_FMAX ? LLVMRealUGE : LLVMRealULE;
LLVMValueRef cmp = LLVMBuildFCmp (builder, op, l, r, "");
result = LLVMBuildSelect (builder, cmp, l, r, "");
}
#elif defined(TARGET_ARM64)
IntrinsicId iid = ins->inst_c0 == OP_FMAX ? INTRINS_AARCH64_ADV_SIMD_FMAX : INTRINS_AARCH64_ADV_SIMD_FMIN;
llvm_ovr_tag_t ovr_tag = ovr_tag_from_mono_vector_class (ins->klass);
result = call_overloaded_intrins (ctx, iid, ovr_tag, args, "");
#else
NOT_IMPLEMENTED;
#endif
break;
}
case OP_IMAX:
case OP_IMIN: {
gboolean is_unsigned = ins->inst_c1 == MONO_TYPE_U1 || ins->inst_c1 == MONO_TYPE_U2 || ins->inst_c1 == MONO_TYPE_U4 || ins->inst_c1 == MONO_TYPE_U8;
LLVMIntPredicate op;
switch (ins->inst_c0) {
case OP_IMAX:
op = is_unsigned ? LLVMIntUGT : LLVMIntSGT;
break;
case OP_IMIN:
op = is_unsigned ? LLVMIntULT : LLVMIntSLT;
break;
default:
g_assert_not_reached ();
}
#if defined(TARGET_ARM64)
if ((ins->inst_c1 == MONO_TYPE_U8) || (ins->inst_c1 == MONO_TYPE_I8)) {
LLVMValueRef cmp = LLVMBuildICmp (builder, op, l, r, "");
result = LLVMBuildSelect (builder, cmp, l, r, "");
} else {
IntrinsicId iid;
switch (ins->inst_c0) {
case OP_IMAX:
iid = is_unsigned ? INTRINS_AARCH64_ADV_SIMD_UMAX : INTRINS_AARCH64_ADV_SIMD_SMAX;
break;
case OP_IMIN:
iid = is_unsigned ? INTRINS_AARCH64_ADV_SIMD_UMIN : INTRINS_AARCH64_ADV_SIMD_SMIN;
break;
default:
g_assert_not_reached ();
}
LLVMValueRef args [] = { l, r };
llvm_ovr_tag_t ovr_tag = ovr_tag_from_mono_vector_class (ins->klass);
result = call_overloaded_intrins (ctx, iid, ovr_tag, args, "");
}
#else
LLVMValueRef cmp = LLVMBuildICmp (builder, op, l, r, "");
result = LLVMBuildSelect (builder, cmp, l, r, "");
#endif
break;
}
default:
g_assert_not_reached ();
}
if (scalar)
result = vector_from_scalar (ctx, LLVMTypeOf (lhs), result);
values [ins->dreg] = result;
break;
}
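/*
 * Bitwise ops on vectors of any element type: bitcast to an integer
 * vector of the same overall width, do the op, then cast back.
 */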
case OP_XBINOP_FORCEINT: {
LLVMTypeRef t = LLVMTypeOf (lhs);
LLVMTypeRef elem_t = LLVMGetElementType (t);
unsigned int elems = LLVMGetVectorSize (t);
unsigned int elem_bits = mono_llvm_get_prim_size_bits (elem_t);
LLVMTypeRef intermediate_elem_t = LLVMIntType (elem_bits);
LLVMTypeRef intermediate_t = LLVMVectorType (intermediate_elem_t, elems);
LLVMValueRef lhs_int = convert (ctx, lhs, intermediate_t);
LLVMValueRef rhs_int = convert (ctx, rhs, intermediate_t);
LLVMValueRef result = NULL;
switch (ins->inst_c0) {
case XBINOP_FORCEINT_and:
result = LLVMBuildAnd (builder, lhs_int, rhs_int, "");
break;
case XBINOP_FORCEINT_or:
result = LLVMBuildOr (builder, lhs_int, rhs_int, "");
break;
case XBINOP_FORCEINT_ornot:
result = LLVMBuildNot (builder, rhs_int, "");
result = LLVMBuildOr (builder, result, lhs_int, "");
break;
case XBINOP_FORCEINT_xor:
result = LLVMBuildXor (builder, lhs_int, rhs_int, "");
break;
}
values [ins->dreg] = LLVMBuildBitCast (builder, result, t, "");
break;
}
case OP_CREATE_SCALAR:
case OP_CREATE_SCALAR_UNSAFE: {
MonoTypeEnum primty = inst_c1_type (ins);
LLVMTypeRef type = simd_class_to_llvm_type (ctx, ins->klass);
// OP_CREATE_SCALAR_UNSAFE starts from an undef vector (unspecified contents,
// possibly garbage), OP_CREATE_SCALAR from a zeroed one
LLVMValueRef vector = (ins->opcode == OP_CREATE_SCALAR) ? LLVMConstNull (type) : LLVMGetUndef (type);
LLVMValueRef val = convert_full (ctx, lhs, primitive_type_to_llvm_type (primty), primitive_type_is_unsigned (primty));
values [ins->dreg] = LLVMBuildInsertElement (builder, vector, val, const_int32 (0), "");
break;
}
case OP_INSERT_I1:
values [ins->dreg] = LLVMBuildInsertElement (builder, values [ins->sreg1], convert (ctx, values [ins->sreg2], LLVMInt8Type ()), LLVMConstInt (LLVMInt32Type (), ins->inst_c0, FALSE), dname);
break;
case OP_INSERT_I2:
values [ins->dreg] = LLVMBuildInsertElement (builder, values [ins->sreg1], convert (ctx, values [ins->sreg2], LLVMInt16Type ()), LLVMConstInt (LLVMInt32Type (), ins->inst_c0, FALSE), dname);
break;
case OP_INSERT_I4:
values [ins->dreg] = LLVMBuildInsertElement (builder, values [ins->sreg1], convert (ctx, values [ins->sreg2], LLVMInt32Type ()), LLVMConstInt (LLVMInt32Type (), ins->inst_c0, FALSE), dname);
break;
case OP_INSERT_I8:
values [ins->dreg] = LLVMBuildInsertElement (builder, values [ins->sreg1], convert (ctx, values [ins->sreg2], LLVMInt64Type ()), LLVMConstInt (LLVMInt32Type (), ins->inst_c0, FALSE), dname);
break;
case OP_INSERT_R4:
values [ins->dreg] = LLVMBuildInsertElement (builder, values [ins->sreg1], convert (ctx, values [ins->sreg2], LLVMFloatType ()), LLVMConstInt (LLVMInt32Type (), ins->inst_c0, FALSE), dname);
break;
case OP_INSERT_R8:
values [ins->dreg] = LLVMBuildInsertElement (builder, values [ins->sreg1], convert (ctx, values [ins->sreg2], LLVMDoubleType ()), LLVMConstInt (LLVMInt32Type (), ins->inst_c0, FALSE), dname);
break;
case OP_XCAST: {
LLVMTypeRef t = simd_class_to_llvm_type (ctx, ins->klass);
values [ins->dreg] = LLVMBuildBitCast (builder, lhs, t, "");
break;
}
case OP_XCONCAT: {
values [ins->dreg] = concatenate_vectors (ctx, lhs, rhs);
break;
}
case OP_XINSERT_LOWER:
case OP_XINSERT_UPPER: {
const char *oname = ins->opcode == OP_XINSERT_LOWER ? "xinsert_lower" : "xinsert_upper";
int ix = ins->opcode == OP_XINSERT_LOWER ? 0 : 1;
LLVMTypeRef src_t = LLVMTypeOf (lhs);
unsigned int width = mono_llvm_get_prim_size_bits (src_t);
LLVMTypeRef int_t = LLVMIntType (width / 2);
LLVMTypeRef intvec_t = LLVMVectorType (int_t, 2);
LLVMValueRef insval = LLVMBuildBitCast (builder, rhs, int_t, oname);
LLVMValueRef val = LLVMBuildBitCast (builder, lhs, intvec_t, oname);
val = LLVMBuildInsertElement (builder, val, insval, const_int32 (ix), oname);
val = LLVMBuildBitCast (builder, val, src_t, oname);
values [ins->dreg] = val;
break;
}
case OP_XLOWER:
case OP_XUPPER: {
const char *oname = ins->opcode == OP_XLOWER ? "xlower" : "xupper";
LLVMTypeRef src_t = LLVMTypeOf (lhs);
unsigned int elems = LLVMGetVectorSize (src_t);
g_assert (elems >= 2 && elems <= MAX_VECTOR_ELEMS);
unsigned int ret_elems = elems / 2;
int startix = ins->opcode == OP_XLOWER ? 0 : ret_elems;
LLVMValueRef val = LLVMBuildShuffleVector (builder, lhs, LLVMGetUndef (src_t), create_const_vector_i32 (&mask_0_incr_1 [startix], ret_elems), oname);
values [ins->dreg] = val;
break;
}
case OP_XWIDEN:
case OP_XWIDEN_UNSAFE: {
const char *oname = ins->opcode == OP_XWIDEN ? "xwiden" : "xwiden_unsafe";
LLVMTypeRef src_t = LLVMTypeOf (lhs);
unsigned int elems = LLVMGetVectorSize (src_t);
g_assert (elems <= MAX_VECTOR_ELEMS / 2);
unsigned int ret_elems = elems * 2;
LLVMValueRef upper = ins->opcode == OP_XWIDEN ? LLVMConstNull (src_t) : LLVMGetUndef (src_t);
LLVMValueRef val = LLVMBuildShuffleVector (builder, lhs, upper, create_const_vector_i32 (mask_0_incr_1, ret_elems), oname);
values [ins->dreg] = val;
break;
}
#endif // defined(TARGET_X86) || defined(TARGET_AMD64) || defined(TARGET_ARM64) || defined(TARGET_WASM)
#if defined(TARGET_X86) || defined(TARGET_AMD64) || defined(TARGET_WASM)
case OP_PADDB:
case OP_PADDW:
case OP_PADDD:
case OP_PADDQ:
values [ins->dreg] = LLVMBuildAdd (builder, lhs, rhs, "");
break;
case OP_ADDPD:
case OP_ADDPS:
values [ins->dreg] = LLVMBuildFAdd (builder, lhs, rhs, "");
break;
case OP_PSUBB:
case OP_PSUBW:
case OP_PSUBD:
case OP_PSUBQ:
values [ins->dreg] = LLVMBuildSub (builder, lhs, rhs, "");
break;
case OP_SUBPD:
case OP_SUBPS:
values [ins->dreg] = LLVMBuildFSub (builder, lhs, rhs, "");
break;
case OP_MULPD:
case OP_MULPS:
values [ins->dreg] = LLVMBuildFMul (builder, lhs, rhs, "");
break;
case OP_DIVPD:
case OP_DIVPS:
values [ins->dreg] = LLVMBuildFDiv (builder, lhs, rhs, "");
break;
case OP_PAND:
values [ins->dreg] = LLVMBuildAnd (builder, lhs, rhs, "");
break;
case OP_POR:
values [ins->dreg] = LLVMBuildOr (builder, lhs, rhs, "");
break;
case OP_PXOR:
values [ins->dreg] = LLVMBuildXor (builder, lhs, rhs, "");
break;
case OP_PMULW:
case OP_PMULD:
values [ins->dreg] = LLVMBuildMul (builder, lhs, rhs, "");
break;
case OP_ANDPS:
case OP_ANDNPS:
case OP_ORPS:
case OP_XORPS:
case OP_ANDPD:
case OP_ANDNPD:
case OP_ORPD:
case OP_XORPD: {
LLVMTypeRef t, rt;
LLVMValueRef v = NULL;
switch (ins->opcode) {
case OP_ANDPS:
case OP_ANDNPS:
case OP_ORPS:
case OP_XORPS:
t = LLVMVectorType (LLVMInt32Type (), 4);
rt = LLVMVectorType (LLVMFloatType (), 4);
break;
case OP_ANDPD:
case OP_ANDNPD:
case OP_ORPD:
case OP_XORPD:
t = LLVMVectorType (LLVMInt64Type (), 2);
rt = LLVMVectorType (LLVMDoubleType (), 2);
break;
default:
t = LLVMInt32Type ();
rt = LLVMInt32Type ();
g_assert_not_reached ();
}
lhs = LLVMBuildBitCast (builder, lhs, t, "");
rhs = LLVMBuildBitCast (builder, rhs, t, "");
switch (ins->opcode) {
case OP_ANDPS:
case OP_ANDPD:
v = LLVMBuildAnd (builder, lhs, rhs, "");
break;
case OP_ORPS:
case OP_ORPD:
v = LLVMBuildOr (builder, lhs, rhs, "");
break;
case OP_XORPS:
case OP_XORPD:
v = LLVMBuildXor (builder, lhs, rhs, "");
break;
case OP_ANDNPS:
case OP_ANDNPD:
v = LLVMBuildAnd (builder, rhs, LLVMBuildNot (builder, lhs, ""), "");
break;
}
values [ins->dreg] = LLVMBuildBitCast (builder, v, rt, "");
break;
}
case OP_PMIND_UN:
case OP_PMINW_UN:
case OP_PMINB_UN: {
LLVMValueRef cmp = LLVMBuildICmp (builder, LLVMIntULT, lhs, rhs, "");
values [ins->dreg] = LLVMBuildSelect (builder, cmp, lhs, rhs, "");
break;
}
case OP_PMAXD_UN:
case OP_PMAXW_UN:
case OP_PMAXB_UN: {
LLVMValueRef cmp = LLVMBuildICmp (builder, LLVMIntUGT, lhs, rhs, "");
values [ins->dreg] = LLVMBuildSelect (builder, cmp, lhs, rhs, "");
break;
}
case OP_PMINW: {
LLVMValueRef cmp = LLVMBuildICmp (builder, LLVMIntSLT, lhs, rhs, "");
values [ins->dreg] = LLVMBuildSelect (builder, cmp, lhs, rhs, "");
break;
}
case OP_PMAXW: {
LLVMValueRef cmp = LLVMBuildICmp (builder, LLVMIntSGT, lhs, rhs, "");
values [ins->dreg] = LLVMBuildSelect (builder, cmp, lhs, rhs, "");
break;
}
case OP_PAVGB_UN:
case OP_PAVGW_UN: {
LLVMValueRef ones_vec;
LLVMValueRef ones [MAX_VECTOR_ELEMS];
int vector_size = LLVMGetVectorSize (LLVMTypeOf (lhs));
LLVMTypeRef ext_elem_type = vector_size == 16 ? LLVMInt16Type () : LLVMInt32Type ();
for (int i = 0; i < MAX_VECTOR_ELEMS; ++i)
ones [i] = LLVMConstInt (ext_elem_type, 1, FALSE);
ones_vec = LLVMConstVector (ones, vector_size);
LLVMValueRef val;
LLVMTypeRef ext_type = LLVMVectorType (ext_elem_type, vector_size);
/* Have to increase the vector element size to prevent overflows */
/* res = trunc ((zext (lhs) + zext (rhs) + 1) >> 1) */
val = LLVMBuildAdd (builder, LLVMBuildZExt (builder, lhs, ext_type, ""), LLVMBuildZExt (builder, rhs, ext_type, ""), "");
val = LLVMBuildAdd (builder, val, ones_vec, "");
val = LLVMBuildLShr (builder, val, ones_vec, "");
values [ins->dreg] = LLVMBuildTrunc (builder, val, LLVMTypeOf (lhs), "");
break;
}
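/*
 * Packed compares. icmp yields a vector of i1s; sign extending turns each
 * true lane into all ones, matching the x86 pcmpeq/pcmpgt results.
 */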
case OP_PCMPEQB:
case OP_PCMPEQW:
case OP_PCMPEQD:
case OP_PCMPEQQ:
case OP_PCMPGTB: {
LLVMValueRef pcmp;
LLVMTypeRef retType;
LLVMIntPredicate cmpOp;
if (ins->opcode == OP_PCMPGTB)
cmpOp = LLVMIntSGT;
else
cmpOp = LLVMIntEQ;
if (LLVMTypeOf (lhs) == LLVMTypeOf (rhs)) {
pcmp = LLVMBuildICmp (builder, cmpOp, lhs, rhs, "");
retType = LLVMTypeOf (lhs);
} else {
LLVMTypeRef flatType = LLVMVectorType (LLVMInt8Type (), 16);
LLVMValueRef flatRHS = convert (ctx, rhs, flatType);
LLVMValueRef flatLHS = convert (ctx, lhs, flatType);
pcmp = LLVMBuildICmp (builder, cmpOp, flatLHS, flatRHS, "");
retType = flatType;
}
values [ins->dreg] = LLVMBuildSExt (builder, pcmp, retType, "");
break;
}
case OP_CVTDQ2PS: {
LLVMValueRef i4 = LLVMBuildBitCast (builder, lhs, sse_i4_t, "");
values [ins->dreg] = LLVMBuildSIToFP (builder, i4, sse_r4_t, dname);
break;
}
case OP_CVTDQ2PD: {
LLVMValueRef indexes [16];
indexes [0] = LLVMConstInt (LLVMInt32Type (), 0, FALSE);
indexes [1] = LLVMConstInt (LLVMInt32Type (), 1, FALSE);
LLVMValueRef mask = LLVMConstVector (indexes, 2);
LLVMValueRef shuffle = LLVMBuildShuffleVector (builder, lhs, LLVMConstNull (LLVMTypeOf (lhs)), mask, "");
values [ins->dreg] = LLVMBuildSIToFP (builder, shuffle, LLVMVectorType (LLVMDoubleType (), 2), dname);
break;
}
case OP_SSE2_CVTSS2SD: {
LLVMValueRef rhs_elem = LLVMBuildExtractElement (builder, rhs, const_int32 (0), "");
LLVMValueRef fpext = LLVMBuildFPExt (builder, rhs_elem, LLVMDoubleType (), dname);
values [ins->dreg] = LLVMBuildInsertElement (builder, lhs, fpext, const_int32 (0), "");
break;
}
case OP_CVTPS2PD: {
LLVMValueRef indexes [16];
indexes [0] = LLVMConstInt (LLVMInt32Type (), 0, FALSE);
indexes [1] = LLVMConstInt (LLVMInt32Type (), 1, FALSE);
LLVMValueRef mask = LLVMConstVector (indexes, 2);
LLVMValueRef shuffle = LLVMBuildShuffleVector (builder, lhs, LLVMConstNull (LLVMTypeOf (lhs)), mask, "");
values [ins->dreg] = LLVMBuildFPExt (builder, shuffle, LLVMVectorType (LLVMDoubleType (), 2), dname);
break;
}
case OP_CVTTPS2DQ:
values [ins->dreg] = LLVMBuildFPToSI (builder, lhs, LLVMVectorType (LLVMInt32Type (), 4), dname);
break;
case OP_CVTPD2DQ:
case OP_CVTPS2DQ:
case OP_CVTPD2PS:
case OP_CVTTPD2DQ: {
LLVMValueRef v;
v = convert (ctx, values [ins->sreg1], simd_op_to_llvm_type (ins->opcode));
values [ins->dreg] = call_intrins (ctx, simd_ins_to_intrins (ins->opcode), &v, dname);
break;
}
case OP_COMPPS:
case OP_COMPPD: {
LLVMRealPredicate op;
switch (ins->inst_c0) {
case SIMD_COMP_EQ:
op = LLVMRealOEQ;
break;
case SIMD_COMP_LT:
op = LLVMRealOLT;
break;
case SIMD_COMP_LE:
op = LLVMRealOLE;
break;
case SIMD_COMP_UNORD:
op = LLVMRealUNO;
break;
case SIMD_COMP_NEQ:
op = LLVMRealUNE;
break;
case SIMD_COMP_NLT:
op = LLVMRealUGE;
break;
case SIMD_COMP_NLE:
op = LLVMRealUGT;
break;
case SIMD_COMP_ORD:
op = LLVMRealORD;
break;
default:
g_assert_not_reached ();
}
LLVMValueRef cmp = LLVMBuildFCmp (builder, op, lhs, rhs, "");
if (ins->opcode == OP_COMPPD)
values [ins->dreg] = LLVMBuildBitCast (builder, LLVMBuildSExt (builder, cmp, LLVMVectorType (LLVMInt64Type (), 2), ""), LLVMTypeOf (lhs), "");
else
values [ins->dreg] = LLVMBuildBitCast (builder, LLVMBuildSExt (builder, cmp, LLVMVectorType (LLVMInt32Type (), 4), ""), LLVMTypeOf (lhs), "");
break;
}
case OP_ICONV_TO_X:
/* This is only used for implementing shifts by a non-immediate amount */
values [ins->dreg] = lhs;
break;
case OP_SHUFPS:
case OP_SHUFPD:
case OP_PSHUFLED:
case OP_PSHUFLEW_LOW:
case OP_PSHUFLEW_HIGH: {
int mask [16];
LLVMValueRef v1 = NULL, v2 = NULL, mask_values [16];
int i, mask_size = 0;
int imask = ins->inst_c0;
/* Convert the x86 shuffle mask to LLVM's */
switch (ins->opcode) {
case OP_SHUFPS:
mask_size = 4;
mask [0] = ((imask >> 0) & 3);
mask [1] = ((imask >> 2) & 3);
mask [2] = ((imask >> 4) & 3) + 4;
mask [3] = ((imask >> 6) & 3) + 4;
v1 = values [ins->sreg1];
v2 = values [ins->sreg2];
break;
case OP_SHUFPD:
mask_size = 2;
mask [0] = ((imask >> 0) & 1);
mask [1] = ((imask >> 1) & 1) + 2;
v1 = values [ins->sreg1];
v2 = values [ins->sreg2];
break;
case OP_PSHUFLEW_LOW:
mask_size = 8;
mask [0] = ((imask >> 0) & 3);
mask [1] = ((imask >> 2) & 3);
mask [2] = ((imask >> 4) & 3);
mask [3] = ((imask >> 6) & 3);
mask [4] = 4 + 0;
mask [5] = 4 + 1;
mask [6] = 4 + 2;
mask [7] = 4 + 3;
v1 = values [ins->sreg1];
v2 = LLVMGetUndef (LLVMTypeOf (v1));
break;
case OP_PSHUFLEW_HIGH:
mask_size = 8;
mask [0] = 0;
mask [1] = 1;
mask [2] = 2;
mask [3] = 3;
mask [4] = 4 + ((imask >> 0) & 3);
mask [5] = 4 + ((imask >> 2) & 3);
mask [6] = 4 + ((imask >> 4) & 3);
mask [7] = 4 + ((imask >> 6) & 3);
v1 = values [ins->sreg1];
v2 = LLVMGetUndef (LLVMTypeOf (v1));
break;
case OP_PSHUFLED:
mask_size = 4;
mask [0] = ((imask >> 0) & 3);
mask [1] = ((imask >> 2) & 3);
mask [2] = ((imask >> 4) & 3);
mask [3] = ((imask >> 6) & 3);
v1 = values [ins->sreg1];
v2 = LLVMGetUndef (LLVMTypeOf (v1));
break;
default:
g_assert_not_reached ();
}
for (i = 0; i < mask_size; ++i)
mask_values [i] = LLVMConstInt (LLVMInt32Type (), mask [i], FALSE);
values [ins->dreg] =
LLVMBuildShuffleVector (builder, v1, v2,
LLVMConstVector (mask_values, mask_size), dname);
break;
}
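/*
 * Unpack (interleave) opcodes. These build a shuffle mask which alternates
 * elements from the low (or high) halves of the two source vectors.
 */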
case OP_UNPACK_LOWB:
case OP_UNPACK_LOWW:
case OP_UNPACK_LOWD:
case OP_UNPACK_LOWQ:
case OP_UNPACK_LOWPS:
case OP_UNPACK_LOWPD:
case OP_UNPACK_HIGHB:
case OP_UNPACK_HIGHW:
case OP_UNPACK_HIGHD:
case OP_UNPACK_HIGHQ:
case OP_UNPACK_HIGHPS:
case OP_UNPACK_HIGHPD: {
int mask [16];
LLVMValueRef mask_values [16];
int i, mask_size = 0;
gboolean low = FALSE;
switch (ins->opcode) {
case OP_UNPACK_LOWB:
mask_size = 16;
low = TRUE;
break;
case OP_UNPACK_LOWW:
mask_size = 8;
low = TRUE;
break;
case OP_UNPACK_LOWD:
case OP_UNPACK_LOWPS:
mask_size = 4;
low = TRUE;
break;
case OP_UNPACK_LOWQ:
case OP_UNPACK_LOWPD:
mask_size = 2;
low = TRUE;
break;
case OP_UNPACK_HIGHB:
mask_size = 16;
break;
case OP_UNPACK_HIGHW:
mask_size = 8;
break;
case OP_UNPACK_HIGHD:
case OP_UNPACK_HIGHPS:
mask_size = 4;
break;
case OP_UNPACK_HIGHQ:
case OP_UNPACK_HIGHPD:
mask_size = 2;
break;
default:
g_assert_not_reached ();
}
if (low) {
for (i = 0; i < (mask_size / 2); ++i) {
mask [(i * 2)] = i;
mask [(i * 2) + 1] = mask_size + i;
}
} else {
for (i = 0; i < (mask_size / 2); ++i) {
mask [(i * 2)] = (mask_size / 2) + i;
mask [(i * 2) + 1] = mask_size + (mask_size / 2) + i;
}
}
for (i = 0; i < mask_size; ++i)
mask_values [i] = LLVMConstInt (LLVMInt32Type (), mask [i], FALSE);
values [ins->dreg] =
LLVMBuildShuffleVector (builder, values [ins->sreg1], values [ins->sreg2],
LLVMConstVector (mask_values, mask_size), dname);
break;
}
case OP_DUPPD: {
LLVMTypeRef t = simd_op_to_llvm_type (ins->opcode);
LLVMValueRef v, val;
v = LLVMBuildExtractElement (builder, lhs, LLVMConstInt (LLVMInt32Type (), 0, FALSE), "");
val = LLVMConstNull (t);
val = LLVMBuildInsertElement (builder, val, v, LLVMConstInt (LLVMInt32Type (), 0, FALSE), "");
val = LLVMBuildInsertElement (builder, val, v, LLVMConstInt (LLVMInt32Type (), 1, FALSE), dname);
values [ins->dreg] = val;
break;
}
case OP_DUPPS_LOW:
case OP_DUPPS_HIGH: {
LLVMTypeRef t = simd_op_to_llvm_type (ins->opcode);
LLVMValueRef v1, v2, val;
if (ins->opcode == OP_DUPPS_LOW) {
v1 = LLVMBuildExtractElement (builder, lhs, LLVMConstInt (LLVMInt32Type (), 0, FALSE), "");
v2 = LLVMBuildExtractElement (builder, lhs, LLVMConstInt (LLVMInt32Type (), 2, FALSE), "");
} else {
v1 = LLVMBuildExtractElement (builder, lhs, LLVMConstInt (LLVMInt32Type (), 1, FALSE), "");
v2 = LLVMBuildExtractElement (builder, lhs, LLVMConstInt (LLVMInt32Type (), 3, FALSE), "");
}
val = LLVMConstNull (t);
val = LLVMBuildInsertElement (builder, val, v1, LLVMConstInt (LLVMInt32Type (), 0, FALSE), "");
val = LLVMBuildInsertElement (builder, val, v1, LLVMConstInt (LLVMInt32Type (), 1, FALSE), "");
val = LLVMBuildInsertElement (builder, val, v2, LLVMConstInt (LLVMInt32Type (), 2, FALSE), "");
val = LLVMBuildInsertElement (builder, val, v2, LLVMConstInt (LLVMInt32Type (), 3, FALSE), "");
values [ins->dreg] = val;
break;
}
case OP_FCONV_TO_R8_X: {
values [ins->dreg] = LLVMBuildInsertElement (builder, LLVMConstNull (sse_r8_t), lhs, LLVMConstInt (LLVMInt32Type (), 0, FALSE), "");
break;
}
case OP_FCONV_TO_R4_X: {
values [ins->dreg] = LLVMBuildInsertElement (builder, LLVMConstNull (sse_r4_t), lhs, LLVMConstInt (LLVMInt32Type (), 0, FALSE), "");
break;
}
#if defined(TARGET_X86) || defined(TARGET_AMD64)
case OP_SSE_MOVMSK: {
LLVMValueRef args [1];
if (ins->inst_c1 == MONO_TYPE_R4) {
args [0] = lhs;
values [ins->dreg] = call_intrins (ctx, INTRINS_SSE_MOVMSK_PS, args, dname);
} else if (ins->inst_c1 == MONO_TYPE_R8) {
args [0] = lhs;
values [ins->dreg] = call_intrins (ctx, INTRINS_SSE_MOVMSK_PD, args, dname);
} else {
args [0] = convert (ctx, lhs, sse_i1_t);
values [ins->dreg] = call_intrins (ctx, INTRINS_SSE_PMOVMSKB, args, dname);
}
break;
}
case OP_SSE_MOVS:
case OP_SSE_MOVS2: {
if (ins->inst_c1 == MONO_TYPE_R4)
values [ins->dreg] = LLVMBuildShuffleVector (builder, rhs, lhs, create_const_vector_4_i32 (0, 5, 6, 7), "");
else if (ins->inst_c1 == MONO_TYPE_R8)
values [ins->dreg] = LLVMBuildShuffleVector (builder, rhs, lhs, create_const_vector_2_i32 (0, 3), "");
else if (ins->inst_c1 == MONO_TYPE_I8 || ins->inst_c1 == MONO_TYPE_U8)
values [ins->dreg] = LLVMBuildInsertElement (builder, lhs,
LLVMConstInt (LLVMInt64Type (), 0, FALSE),
LLVMConstInt (LLVMInt32Type (), 1, FALSE), "");
else
g_assert_not_reached (); // will be needed for other types later
break;
}
case OP_SSE_MOVEHL: {
if (ins->inst_c1 == MONO_TYPE_R4)
values [ins->dreg] = LLVMBuildShuffleVector (builder, lhs, rhs, create_const_vector_4_i32 (6, 7, 2, 3), "");
else
g_assert_not_reached ();
break;
}
case OP_SSE_MOVELH: {
if (ins->inst_c1 == MONO_TYPE_R4)
values [ins->dreg] = LLVMBuildShuffleVector (builder, lhs, rhs, create_const_vector_4_i32 (0, 1, 4, 5), "");
else
g_assert_not_reached ();
break;
}
case OP_SSE_UNPACKLO: {
if (ins->inst_c1 == MONO_TYPE_R8 || ins->inst_c1 == MONO_TYPE_I8 || ins->inst_c1 == MONO_TYPE_U8) {
values [ins->dreg] = LLVMBuildShuffleVector (builder, lhs, rhs, create_const_vector_2_i32 (0, 2), "");
} else if (ins->inst_c1 == MONO_TYPE_R4 || ins->inst_c1 == MONO_TYPE_I4 || ins->inst_c1 == MONO_TYPE_U4) {
values [ins->dreg] = LLVMBuildShuffleVector (builder, lhs, rhs, create_const_vector_4_i32 (0, 4, 1, 5), "");
} else if (ins->inst_c1 == MONO_TYPE_I2 || ins->inst_c1 == MONO_TYPE_U2) {
const int mask_values [] = { 0, 8, 1, 9, 2, 10, 3, 11 };
LLVMValueRef shuffled = LLVMBuildShuffleVector (builder,
convert (ctx, lhs, sse_i2_t),
convert (ctx, rhs, sse_i2_t),
create_const_vector_i32 (mask_values, 8), "");
values [ins->dreg] = convert (ctx, shuffled, type_to_sse_type (ins->inst_c1));
} else if (ins->inst_c1 == MONO_TYPE_I1 || ins->inst_c1 == MONO_TYPE_U1) {
const int mask_values [] = { 0, 16, 1, 17, 2, 18, 3, 19, 4, 20, 5, 21, 6, 22, 7, 23 };
LLVMValueRef shuffled = LLVMBuildShuffleVector (builder,
convert (ctx, lhs, sse_i1_t),
convert (ctx, rhs, sse_i1_t),
create_const_vector_i32 (mask_values, 16), "");
values [ins->dreg] = convert (ctx, shuffled, type_to_sse_type (ins->inst_c1));
} else {
g_assert_not_reached ();
}
break;
}
case OP_SSE_UNPACKHI: {
if (ins->inst_c1 == MONO_TYPE_R8 || ins->inst_c1 == MONO_TYPE_I8 || ins->inst_c1 == MONO_TYPE_U8) {
values [ins->dreg] = LLVMBuildShuffleVector (builder, lhs, rhs, create_const_vector_2_i32 (1, 3), "");
} else if (ins->inst_c1 == MONO_TYPE_R4 || ins->inst_c1 == MONO_TYPE_I4 || ins->inst_c1 == MONO_TYPE_U4) {
values [ins->dreg] = LLVMBuildShuffleVector (builder, lhs, rhs, create_const_vector_4_i32 (2, 6, 3, 7), "");
} else if (ins->inst_c1 == MONO_TYPE_I2 || ins->inst_c1 == MONO_TYPE_U2) {
const int mask_values [] = { 4, 12, 5, 13, 6, 14, 7, 15 };
LLVMValueRef shuffled = LLVMBuildShuffleVector (builder,
convert (ctx, lhs, sse_i2_t),
convert (ctx, rhs, sse_i2_t),
create_const_vector_i32 (mask_values, 8), "");
values [ins->dreg] = convert (ctx, shuffled, type_to_sse_type (ins->inst_c1));
} else if (ins->inst_c1 == MONO_TYPE_I1 || ins->inst_c1 == MONO_TYPE_U1) {
const int mask_values [] = { 8, 24, 9, 25, 10, 26, 11, 27, 12, 28, 13, 29, 14, 30, 15, 31 };
LLVMValueRef shuffled = LLVMBuildShuffleVector (builder,
convert (ctx, lhs, sse_i1_t),
convert (ctx, rhs, sse_i1_t),
create_const_vector_i32 (mask_values, 16), "");
values [ins->dreg] = convert (ctx, shuffled, type_to_sse_type (ins->inst_c1));
} else {
g_assert_not_reached ();
}
break;
}
case OP_SSE_LOADU: {
LLVMValueRef dst_ptr = convert (ctx, lhs, LLVMPointerType (primitive_type_to_llvm_type (inst_c1_type (ins)), 0));
LLVMValueRef dst_vec = LLVMBuildBitCast (builder, dst_ptr, LLVMPointerType (type_to_sse_type (ins->inst_c1), 0), "");
values [ins->dreg] = mono_llvm_build_aligned_load (builder, dst_vec, "", FALSE, ins->inst_c0); // inst_c0 is alignment
break;
}
case OP_SSE_MOVSS: {
LLVMValueRef addr = convert (ctx, lhs, LLVMPointerType (LLVMFloatType (), 0));
LLVMValueRef val = mono_llvm_build_load (builder, addr, "", FALSE);
values [ins->dreg] = LLVMBuildInsertElement (builder, LLVMConstNull (type_to_sse_type (ins->inst_c1)), val, LLVMConstInt (LLVMInt32Type (), 0, FALSE), "");
break;
}
case OP_SSE_MOVSS_STORE: {
LLVMValueRef addr = convert (ctx, lhs, LLVMPointerType (LLVMFloatType (), 0));
LLVMValueRef val = LLVMBuildExtractElement (builder, rhs, LLVMConstInt (LLVMInt32Type (), 0, FALSE), "");
mono_llvm_build_store (builder, val, addr, FALSE, LLVM_BARRIER_NONE);
break;
}
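/*
 * Scalar loads into a zeroed vector: load a single element of the given
 * width from memory and insert it into lane 0 of an all-zero vector
 * (movd/movq/movsd-style semantics).
 */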
case OP_SSE2_MOVD:
case OP_SSE2_MOVQ:
case OP_SSE2_MOVUPD: {
LLVMTypeRef rty = NULL;
switch (ins->opcode) {
case OP_SSE2_MOVD: rty = sse_i4_t; break;
case OP_SSE2_MOVQ: rty = sse_i8_t; break;
case OP_SSE2_MOVUPD: rty = sse_r8_t; break;
default: g_assert_not_reached (); break;
}
LLVMTypeRef srcty = LLVMGetElementType (rty);
LLVMValueRef zero = LLVMConstNull (rty);
LLVMValueRef addr = convert (ctx, lhs, LLVMPointerType (srcty, 0));
LLVMValueRef val = mono_llvm_build_aligned_load (builder, addr, "", FALSE, 1);
values [ins->dreg] = LLVMBuildInsertElement (builder, zero, val, const_int32 (0), dname);
break;
}
case OP_SSE_MOVLPS_LOAD:
case OP_SSE_MOVHPS_LOAD: {
LLVMTypeRef t = LLVMFloatType ();
int size = 4;
gboolean high = ins->opcode == OP_SSE_MOVHPS_LOAD;
/* Load two floats from rhs and store them in the low/high part of lhs */
LLVMValueRef addr = rhs;
LLVMValueRef addr1 = convert (ctx, addr, LLVMPointerType (t, 0));
LLVMValueRef addr2 = convert (ctx, LLVMBuildAdd (builder, convert (ctx, addr, IntPtrType ()), convert (ctx, LLVMConstInt (LLVMInt32Type (), size, FALSE), IntPtrType ()), ""), LLVMPointerType (t, 0));
LLVMValueRef val1 = mono_llvm_build_load (builder, addr1, "", FALSE);
LLVMValueRef val2 = mono_llvm_build_load (builder, addr2, "", FALSE);
int index1, index2;
index1 = high ? 2 : 0;
index2 = high ? 3 : 1;
values [ins->dreg] = LLVMBuildInsertElement (builder, LLVMBuildInsertElement (builder, lhs, val1, LLVMConstInt (LLVMInt32Type (), index1, FALSE), ""), val2, LLVMConstInt (LLVMInt32Type (), index2, FALSE), "");
break;
}
case OP_SSE2_MOVLPD_LOAD:
case OP_SSE2_MOVHPD_LOAD: {
LLVMTypeRef t = LLVMDoubleType ();
LLVMValueRef addr = convert (ctx, rhs, LLVMPointerType (t, 0));
LLVMValueRef val = mono_llvm_build_load (builder, addr, "", FALSE);
int index = ins->opcode == OP_SSE2_MOVHPD_LOAD ? 1 : 0;
values [ins->dreg] = LLVMBuildInsertElement (builder, lhs, val, const_int32 (index), "");
break;
}
case OP_SSE_MOVLPS_STORE:
case OP_SSE_MOVHPS_STORE: {
/* Store two floats from the low/high part of rhs into lhs */
LLVMValueRef addr = lhs;
LLVMValueRef addr1 = convert (ctx, addr, LLVMPointerType (LLVMFloatType (), 0));
LLVMValueRef addr2 = convert (ctx, LLVMBuildAdd (builder, convert (ctx, addr, IntPtrType ()), convert (ctx, LLVMConstInt (LLVMInt32Type (), 4, FALSE), IntPtrType ()), ""), LLVMPointerType (LLVMFloatType (), 0));
int index1 = ins->opcode == OP_SSE_MOVLPS_STORE ? 0 : 2;
int index2 = ins->opcode == OP_SSE_MOVLPS_STORE ? 1 : 3;
LLVMValueRef val1 = LLVMBuildExtractElement (builder, rhs, LLVMConstInt (LLVMInt32Type (), index1, FALSE), "");
LLVMValueRef val2 = LLVMBuildExtractElement (builder, rhs, LLVMConstInt (LLVMInt32Type (), index2, FALSE), "");
mono_llvm_build_store (builder, val1, addr1, FALSE, LLVM_BARRIER_NONE);
mono_llvm_build_store (builder, val2, addr2, FALSE, LLVM_BARRIER_NONE);
break;
}
case OP_SSE2_MOVLPD_STORE:
case OP_SSE2_MOVHPD_STORE: {
LLVMTypeRef t = LLVMDoubleType ();
LLVMValueRef addr = convert (ctx, lhs, LLVMPointerType (t, 0));
int index = ins->opcode == OP_SSE2_MOVHPD_STORE ? 1 : 0;
LLVMValueRef val = LLVMBuildExtractElement (builder, rhs, const_int32 (index), "");
mono_llvm_build_store (builder, val, addr, FALSE, LLVM_BARRIER_NONE);
break;
}
case OP_SSE_STORE: {
LLVMValueRef dst_vec = convert (ctx, lhs, LLVMPointerType (LLVMTypeOf (rhs), 0));
mono_llvm_build_aligned_store (builder, rhs, dst_vec, FALSE, ins->inst_c0);
break;
}
case OP_SSE_STORES: {
LLVMValueRef first_elem = LLVMBuildExtractElement (builder, rhs, LLVMConstInt (LLVMInt32Type (), 0, FALSE), "");
LLVMValueRef dst = convert (ctx, lhs, LLVMPointerType (LLVMTypeOf (first_elem), 0));
mono_llvm_build_aligned_store (builder, first_elem, dst, FALSE, 1);
break;
}
case OP_SSE_MOVNTPS: {
LLVMValueRef addr = convert (ctx, lhs, LLVMPointerType (LLVMTypeOf (rhs), 0));
LLVMValueRef store = mono_llvm_build_aligned_store (builder, rhs, addr, FALSE, ins->inst_c0);
set_nontemporal_flag (store);
break;
}
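/*
 * Prefetch hints map to llvm.prefetch (addr, rw, locality, cache type):
 * rw = 0 (read), cache type = 1 (data), and T0/T1/T2/NTA correspond to
 * locality 3/2/1/0.
 */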
case OP_SSE_PREFETCHT0: {
LLVMValueRef addr = convert (ctx, lhs, LLVMPointerType (LLVMInt8Type (), 0));
LLVMValueRef args [] = { addr, const_int32 (0), const_int32 (3), const_int32 (1) };
call_intrins (ctx, INTRINS_PREFETCH, args, "");
break;
}
case OP_SSE_PREFETCHT1: {
LLVMValueRef addr = convert (ctx, lhs, LLVMPointerType (LLVMInt8Type (), 0));
LLVMValueRef args [] = { addr, const_int32 (0), const_int32 (2), const_int32 (1) };
call_intrins (ctx, INTRINS_PREFETCH, args, "");
break;
}
case OP_SSE_PREFETCHT2: {
LLVMValueRef addr = convert (ctx, lhs, LLVMPointerType (LLVMInt8Type (), 0));
LLVMValueRef args [] = { addr, const_int32 (0), const_int32 (1), const_int32 (1) };
call_intrins (ctx, INTRINS_PREFETCH, args, "");
break;
}
case OP_SSE_PREFETCHNTA: {
LLVMValueRef addr = convert (ctx, lhs, LLVMPointerType (LLVMInt8Type (), 0));
LLVMValueRef args [] = { addr, const_int32 (0), const_int32 (0), const_int32 (1) };
call_intrins (ctx, INTRINS_PREFETCH, args, "");
break;
}
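/*
 * The bitwise ops are performed on <2 x i64> regardless of the element
 * type and the result is bitcast back to the requested vector type.
 */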
case OP_SSE_OR: {
LLVMValueRef vec_lhs_i64 = convert (ctx, lhs, sse_i8_t);
LLVMValueRef vec_rhs_i64 = convert (ctx, rhs, sse_i8_t);
LLVMValueRef vec_and = LLVMBuildOr (builder, vec_lhs_i64, vec_rhs_i64, "");
values [ins->dreg] = LLVMBuildBitCast (builder, vec_and, type_to_sse_type (ins->inst_c1), "");
break;
}
case OP_SSE_XOR: {
LLVMValueRef vec_lhs_i64 = convert (ctx, lhs, sse_i8_t);
LLVMValueRef vec_rhs_i64 = convert (ctx, rhs, sse_i8_t);
LLVMValueRef vec_and = LLVMBuildXor (builder, vec_lhs_i64, vec_rhs_i64, "");
values [ins->dreg] = LLVMBuildBitCast (builder, vec_and, type_to_sse_type (ins->inst_c1), "");
break;
}
case OP_SSE_AND: {
LLVMValueRef vec_lhs_i64 = convert (ctx, lhs, sse_i8_t);
LLVMValueRef vec_rhs_i64 = convert (ctx, rhs, sse_i8_t);
LLVMValueRef vec_and = LLVMBuildAnd (builder, vec_lhs_i64, vec_rhs_i64, "");
values [ins->dreg] = LLVMBuildBitCast (builder, vec_and, type_to_sse_type (ins->inst_c1), "");
break;
}
case OP_SSE_ANDN: {
LLVMValueRef minus_one [2];
minus_one [0] = LLVMConstInt (LLVMInt64Type (), -1, FALSE);
minus_one [1] = LLVMConstInt (LLVMInt64Type (), -1, FALSE);
LLVMValueRef vec_lhs_i64 = convert (ctx, lhs, sse_i8_t);
LLVMValueRef vec_xor = LLVMBuildXor (builder, vec_lhs_i64, LLVMConstVector (minus_one, 2), "");
LLVMValueRef vec_rhs_i64 = convert (ctx, rhs, sse_i8_t);
LLVMValueRef vec_and = LLVMBuildAnd (builder, vec_rhs_i64, vec_xor, "");
values [ins->dreg] = LLVMBuildBitCast (builder, vec_and, type_to_sse_type (ins->inst_c1), "");
break;
}
case OP_SSE_ADDSS:
case OP_SSE_SUBSS:
case OP_SSE_DIVSS:
case OP_SSE_MULSS:
case OP_SSE2_ADDSD:
case OP_SSE2_SUBSD:
case OP_SSE2_DIVSD:
case OP_SSE2_MULSD: {
LLVMValueRef v1 = LLVMBuildExtractElement (builder, lhs, LLVMConstInt (LLVMInt32Type (), 0, FALSE), "");
LLVMValueRef v2 = LLVMBuildExtractElement (builder, rhs, LLVMConstInt (LLVMInt32Type (), 0, FALSE), "");
LLVMValueRef v = NULL;
switch (ins->opcode) {
case OP_SSE_ADDSS:
case OP_SSE2_ADDSD:
v = LLVMBuildFAdd (builder, v1, v2, "");
break;
case OP_SSE_SUBSS:
case OP_SSE2_SUBSD:
v = LLVMBuildFSub (builder, v1, v2, "");
break;
case OP_SSE_DIVSS:
case OP_SSE2_DIVSD:
v = LLVMBuildFDiv (builder, v1, v2, "");
break;
case OP_SSE_MULSS:
case OP_SSE2_MULSD:
v = LLVMBuildFMul (builder, v1, v2, "");
break;
default:
g_assert_not_reached ();
}
values [ins->dreg] = LLVMBuildInsertElement (builder, lhs, v, LLVMConstInt (LLVMInt32Type (), 0, FALSE), "");
break;
}
case OP_SSE_CMPSS:
case OP_SSE2_CMPSD: {
int imm = -1;
gboolean swap = FALSE;
switch (ins->inst_c0) {
case CMP_EQ: imm = SSE_eq_ord_nosignal; break;
case CMP_GT: imm = SSE_lt_ord_signal; swap = TRUE; break;
case CMP_GE: imm = SSE_le_ord_signal; swap = TRUE; break;
case CMP_LT: imm = SSE_lt_ord_signal; break;
case CMP_LE: imm = SSE_le_ord_signal; break;
case CMP_GT_UN: imm = SSE_nle_unord_signal; break;
case CMP_GE_UN: imm = SSE_nlt_unord_signal; break;
case CMP_LT_UN: imm = SSE_nle_unord_signal; swap = TRUE; break;
case CMP_LE_UN: imm = SSE_nlt_unord_signal; swap = TRUE; break;
case CMP_NE: imm = SSE_neq_unord_nosignal; break;
case CMP_ORD: imm = SSE_ord_nosignal; break;
case CMP_UNORD: imm = SSE_unord_nosignal; break;
default: g_assert_not_reached (); break;
}
LLVMValueRef cmp = LLVMConstInt (LLVMInt8Type (), imm, FALSE);
LLVMValueRef args [] = { lhs, rhs, cmp };
if (swap) {
args [0] = rhs;
args [1] = lhs;
}
IntrinsicId id = (IntrinsicId) 0;
switch (ins->opcode) {
case OP_SSE_CMPSS: id = INTRINS_SSE_CMPSS; break;
case OP_SSE2_CMPSD: id = INTRINS_SSE_CMPSD; break;
default: g_assert_not_reached (); break;
}
int elements = LLVMGetVectorSize (LLVMTypeOf (lhs));
int mask_values [MAX_VECTOR_ELEMS] = { 0 };
for (int i = 1; i < elements; ++i) {
mask_values [i] = elements + i;
}
LLVMValueRef result = call_intrins (ctx, id, args, "");
result = LLVMBuildShuffleVector (builder, result, lhs, create_const_vector_i32 (mask_values, elements), "");
values [ins->dreg] = result;
break;
}
case OP_SSE_COMISS: {
LLVMValueRef args [] = { lhs, rhs };
IntrinsicId id = (IntrinsicId)0;
switch (ins->inst_c0) {
case CMP_EQ: id = INTRINS_SSE_COMIEQ_SS; break;
case CMP_GT: id = INTRINS_SSE_COMIGT_SS; break;
case CMP_GE: id = INTRINS_SSE_COMIGE_SS; break;
case CMP_LT: id = INTRINS_SSE_COMILT_SS; break;
case CMP_LE: id = INTRINS_SSE_COMILE_SS; break;
case CMP_NE: id = INTRINS_SSE_COMINEQ_SS; break;
default: g_assert_not_reached (); break;
}
values [ins->dreg] = call_intrins (ctx, id, args, "");
break;
}
case OP_SSE_UCOMISS: {
LLVMValueRef args [] = { lhs, rhs };
IntrinsicId id = (IntrinsicId)0;
switch (ins->inst_c0) {
case CMP_EQ: id = INTRINS_SSE_UCOMIEQ_SS; break;
case CMP_GT: id = INTRINS_SSE_UCOMIGT_SS; break;
case CMP_GE: id = INTRINS_SSE_UCOMIGE_SS; break;
case CMP_LT: id = INTRINS_SSE_UCOMILT_SS; break;
case CMP_LE: id = INTRINS_SSE_UCOMILE_SS; break;
case CMP_NE: id = INTRINS_SSE_UCOMINEQ_SS; break;
default: g_assert_not_reached (); break;
}
values [ins->dreg] = call_intrins (ctx, id, args, "");
break;
}
case OP_SSE2_COMISD: {
LLVMValueRef args [] = { lhs, rhs };
IntrinsicId id = (IntrinsicId)0;
switch (ins->inst_c0) {
case CMP_EQ: id = INTRINS_SSE_COMIEQ_SD; break;
case CMP_GT: id = INTRINS_SSE_COMIGT_SD; break;
case CMP_GE: id = INTRINS_SSE_COMIGE_SD; break;
case CMP_LT: id = INTRINS_SSE_COMILT_SD; break;
case CMP_LE: id = INTRINS_SSE_COMILE_SD; break;
case CMP_NE: id = INTRINS_SSE_COMINEQ_SD; break;
default: g_assert_not_reached (); break;
}
values [ins->dreg] = call_intrins (ctx, id, args, "");
break;
}
case OP_SSE2_UCOMISD: {
LLVMValueRef args [] = { lhs, rhs };
IntrinsicId id = (IntrinsicId)0;
switch (ins->inst_c0) {
case CMP_EQ: id = INTRINS_SSE_UCOMIEQ_SD; break;
case CMP_GT: id = INTRINS_SSE_UCOMIGT_SD; break;
case CMP_GE: id = INTRINS_SSE_UCOMIGE_SD; break;
case CMP_LT: id = INTRINS_SSE_UCOMILT_SD; break;
case CMP_LE: id = INTRINS_SSE_UCOMILE_SD; break;
case CMP_NE: id = INTRINS_SSE_UCOMINEQ_SD; break;
default: g_assert_not_reached (); break;
}
values [ins->dreg] = call_intrins (ctx, id, args, "");
break;
}
case OP_SSE_CVTSI2SS:
case OP_SSE_CVTSI2SS64:
case OP_SSE2_CVTSI2SD:
case OP_SSE2_CVTSI2SD64: {
LLVMTypeRef ty = LLVMFloatType ();
switch (ins->opcode) {
case OP_SSE2_CVTSI2SD:
case OP_SSE2_CVTSI2SD64:
ty = LLVMDoubleType ();
break;
}
LLVMValueRef fp = LLVMBuildSIToFP (builder, rhs, ty, "");
values [ins->dreg] = LLVMBuildInsertElement (builder, lhs, fp, const_int32 (0), dname);
break;
}
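/*
 * PMULUDQ multiplies the low 32 bits of each 64-bit lane as unsigned
 * integers. Masking both operands with 0xffffffff first makes the i64
 * multiply provably non-overflowing, hence the NUW flag.
 */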
case OP_SSE2_PMULUDQ: {
LLVMValueRef i32_max = LLVMConstInt (LLVMInt64Type (), UINT32_MAX, FALSE);
LLVMValueRef maskvals [] = { i32_max, i32_max };
LLVMValueRef mask = LLVMConstVector (maskvals, 2);
LLVMValueRef l = LLVMBuildAnd (builder, convert (ctx, lhs, sse_i8_t), mask, "");
LLVMValueRef r = LLVMBuildAnd (builder, convert (ctx, rhs, sse_i8_t), mask, "");
values [ins->dreg] = LLVMBuildNUWMul (builder, l, r, dname);
break;
}
case OP_SSE_SQRTSS:
case OP_SSE2_SQRTSD: {
LLVMValueRef upper = values [ins->sreg1];
LLVMValueRef lower = values [ins->sreg2];
LLVMValueRef scalar = LLVMBuildExtractElement (builder, lower, const_int32 (0), "");
LLVMValueRef result = call_intrins (ctx, simd_ins_to_intrins (ins->opcode), &scalar, dname);
values [ins->dreg] = LLVMBuildInsertElement (builder, upper, result, const_int32 (0), "");
break;
}
case OP_SSE_RCPSS:
case OP_SSE_RSQRTSS: {
IntrinsicId id = (IntrinsicId)0;
switch (ins->opcode) {
case OP_SSE_RCPSS: id = INTRINS_SSE_RCP_SS; break;
case OP_SSE_RSQRTSS: id = INTRINS_SSE_RSQRT_SS; break;
default: g_assert_not_reached (); break;
}
LLVMValueRef result = call_intrins (ctx, id, &rhs, dname);
const int mask [] = { 0, 5, 6, 7 };
LLVMValueRef shufmask = create_const_vector_i32 (mask, 4);
values [ins->dreg] = LLVMBuildShuffleVector (builder, result, lhs, shufmask, "");
break;
}
case OP_XOP: {
IntrinsicId id = (IntrinsicId)ins->inst_c0;
call_intrins (ctx, id, NULL, "");
break;
}
case OP_XOP_X_I:
case OP_XOP_X_X:
case OP_XOP_I4_X:
case OP_XOP_I8_X:
case OP_XOP_X_X_X:
case OP_XOP_X_X_I4:
case OP_XOP_X_X_I8: {
IntrinsicId id = (IntrinsicId)ins->inst_c0;
LLVMValueRef args [] = { lhs, rhs };
values [ins->dreg] = call_intrins (ctx, id, args, "");
break;
}
case OP_XOP_I4_X_X: {
gboolean to_i8_t = FALSE;
gboolean ret_bool = FALSE;
IntrinsicId id = (IntrinsicId)ins->inst_c0;
switch (ins->inst_c0) {
case INTRINS_SSE_TESTC: to_i8_t = TRUE; ret_bool = TRUE; break;
case INTRINS_SSE_TESTZ: to_i8_t = TRUE; ret_bool = TRUE; break;
case INTRINS_SSE_TESTNZ: to_i8_t = TRUE; ret_bool = TRUE; break;
default: g_assert_not_reached (); break;
}
LLVMValueRef args [] = { lhs, rhs };
if (to_i8_t) {
args [0] = convert (ctx, args [0], sse_i8_t);
args [1] = convert (ctx, args [1], sse_i8_t);
}
LLVMValueRef call = call_intrins (ctx, id, args, "");
if (ret_bool) {
// if return type is bool (it's still i32) we need to normalize it to 1/0
LLVMValueRef cmp_zero = LLVMBuildICmp (builder, LLVMIntNE, call, LLVMConstInt (LLVMInt32Type (), 0, FALSE), "");
values [ins->dreg] = LLVMBuildZExt (builder, cmp_zero, LLVMInt8Type (), "");
} else {
values [ins->dreg] = call;
}
break;
}
case OP_SSE2_MASKMOVDQU: {
LLVMTypeRef i8ptr = LLVMPointerType (LLVMInt8Type (), 0);
LLVMValueRef dstaddr = convert (ctx, values [ins->sreg3], i8ptr);
LLVMValueRef src = convert (ctx, lhs, sse_i1_t);
LLVMValueRef mask = convert (ctx, rhs, sse_i1_t);
LLVMValueRef args[] = { src, mask, dstaddr };
call_intrins (ctx, INTRINS_SSE_MASKMOVDQU, args, "");
break;
}
case OP_PADDB_SAT:
case OP_PADDW_SAT:
case OP_PSUBB_SAT:
case OP_PSUBW_SAT:
case OP_PADDB_SAT_UN:
case OP_PADDW_SAT_UN:
case OP_PSUBB_SAT_UN:
case OP_PSUBW_SAT_UN:
case OP_SSE2_ADDS:
case OP_SSE2_SUBS: {
IntrinsicId id = (IntrinsicId)0;
int type = 0;
gboolean is_add = TRUE;
switch (ins->opcode) {
case OP_PADDB_SAT: type = MONO_TYPE_I1; break;
case OP_PADDW_SAT: type = MONO_TYPE_I2; break;
case OP_PSUBB_SAT: type = MONO_TYPE_I1; is_add = FALSE; break;
case OP_PSUBW_SAT: type = MONO_TYPE_I2; is_add = FALSE; break;
case OP_PADDB_SAT_UN: type = MONO_TYPE_U1; break;
case OP_PADDW_SAT_UN: type = MONO_TYPE_U2; break;
case OP_PSUBB_SAT_UN: type = MONO_TYPE_U1; is_add = FALSE; break;
case OP_PSUBW_SAT_UN: type = MONO_TYPE_U2; is_add = FALSE; break;
case OP_SSE2_ADDS: type = ins->inst_c1; break;
case OP_SSE2_SUBS: type = ins->inst_c1; is_add = FALSE; break;
default: g_assert_not_reached ();
}
if (is_add) {
switch (type) {
case MONO_TYPE_I1: id = INTRINS_SSE_SADD_SATI8; break;
case MONO_TYPE_U1: id = INTRINS_SSE_UADD_SATI8; break;
case MONO_TYPE_I2: id = INTRINS_SSE_SADD_SATI16; break;
case MONO_TYPE_U2: id = INTRINS_SSE_UADD_SATI16; break;
default: g_assert_not_reached (); break;
}
} else {
switch (type) {
case MONO_TYPE_I1: id = INTRINS_SSE_SSUB_SATI8; break;
case MONO_TYPE_U1: id = INTRINS_SSE_USUB_SATI8; break;
case MONO_TYPE_I2: id = INTRINS_SSE_SSUB_SATI16; break;
case MONO_TYPE_U2: id = INTRINS_SSE_USUB_SATI16; break;
default: g_assert_not_reached (); break;
}
}
LLVMTypeRef vecty = type_to_sse_type (type);
LLVMValueRef args [] = { convert (ctx, lhs, vecty), convert (ctx, rhs, vecty) };
LLVMValueRef result = call_intrins (ctx, id, args, dname);
values [ins->dreg] = convert (ctx, result, vecty);
break;
}
case OP_SSE2_PACKUS: {
LLVMValueRef args [2];
args [0] = convert (ctx, lhs, sse_i2_t);
args [1] = convert (ctx, rhs, sse_i2_t);
values [ins->dreg] = convert (ctx,
call_intrins (ctx, INTRINS_SSE_PACKUSWB, args, dname),
type_to_sse_type (ins->inst_c1));
break;
}
case OP_SSE2_SRLI: {
LLVMValueRef args [] = { lhs, rhs };
values [ins->dreg] = convert (ctx,
call_intrins (ctx, INTRINS_SSE_PSRLI_W, args, dname),
type_to_sse_type (ins->inst_c1));
break;
}
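/*
 * Whole-register byte shifts. The shift count arrives in a register, so
 * emit a switch over all 16 possible counts, lower each case to a
 * constant-mask shuffle with undef, and merge the results with a phi.
 */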
case OP_SSE2_PSLLDQ:
case OP_SSE2_PSRLDQ: {
LLVMBasicBlockRef bbs [16 + 1];
LLVMValueRef switch_ins;
LLVMValueRef value = lhs;
LLVMValueRef index = rhs;
LLVMValueRef phi_values [16 + 1];
LLVMTypeRef t = sse_i1_t;
int nelems = 16;
int i;
gboolean shift_right = (ins->opcode == OP_SSE2_PSRLDQ);
value = convert (ctx, value, t);
// No corresponding LLVM intrinsics
// FIXME: Optimize const count
for (i = 0; i < nelems; ++i)
bbs [i] = gen_bb (ctx, "PSLLDQ_CASE_BB");
bbs [nelems] = gen_bb (ctx, "PSLLDQ_DEF_BB");
cbb = gen_bb (ctx, "PSLLDQ_COND_BB");
switch_ins = LLVMBuildSwitch (builder, index, bbs [nelems], 0);
for (i = 0; i < nelems; ++i) {
LLVMAddCase (switch_ins, LLVMConstInt (LLVMInt32Type (), i, FALSE), bbs [i]);
LLVMPositionBuilderAtEnd (builder, bbs [i]);
int mask_values [16];
// Implement shift using a shuffle
if (shift_right) {
for (int j = 0; j < nelems - i; ++j)
mask_values [j] = i + j;
for (int j = nelems - i; j < nelems; ++j)
mask_values [j] = nelems;
} else {
for (int j = 0; j < i; ++j)
mask_values [j] = nelems;
for (int j = 0; j < nelems - i; ++j)
mask_values [j + i] = j;
}
phi_values [i] = LLVMBuildShuffleVector (builder, value, LLVMGetUndef (t), create_const_vector_i32 (mask_values, nelems), "");
LLVMBuildBr (builder, cbb);
}
/* Default case */
LLVMPositionBuilderAtEnd (builder, bbs [nelems]);
phi_values [nelems] = LLVMConstNull (t);
LLVMBuildBr (builder, cbb);
LLVMPositionBuilderAtEnd (builder, cbb);
values [ins->dreg] = LLVMBuildPhi (builder, LLVMTypeOf (phi_values [0]), "");
LLVMAddIncoming (values [ins->dreg], phi_values, bbs, nelems + 1);
values [ins->dreg] = convert (ctx, values [ins->dreg], type_to_sse_type (ins->inst_c1));
ctx->bblocks [bb->block_num].end_bblock = cbb;
break;
}
case OP_SSE2_PSRAW_IMM:
case OP_SSE2_PSRAD_IMM:
case OP_SSE2_PSRLW_IMM:
case OP_SSE2_PSRLD_IMM:
case OP_SSE2_PSRLQ_IMM: {
LLVMValueRef value = lhs;
LLVMValueRef index = rhs;
IntrinsicId id;
// FIXME: Optimize const index case
/* Use the non-immediate version */
switch (ins->opcode) {
case OP_SSE2_PSRAW_IMM: id = INTRINS_SSE_PSRA_W; break;
case OP_SSE2_PSRAD_IMM: id = INTRINS_SSE_PSRA_D; break;
case OP_SSE2_PSRLW_IMM: id = INTRINS_SSE_PSRL_W; break;
case OP_SSE2_PSRLD_IMM: id = INTRINS_SSE_PSRL_D; break;
case OP_SSE2_PSRLQ_IMM: id = INTRINS_SSE_PSRL_Q; break;
default: g_assert_not_reached (); break;
}
LLVMTypeRef t = LLVMTypeOf (value);
LLVMValueRef index_vect = LLVMBuildInsertElement (builder, LLVMConstNull (t), convert (ctx, index, LLVMGetElementType (t)), const_int32 (0), "");
LLVMValueRef args [] = { value, index_vect };
values [ins->dreg] = call_intrins (ctx, id, args, "");
break;
}
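/*
 * Shuffles with a non-constant control byte: immediate_unroll_* emits a
 * switch over every possible control value, each case lowered to a
 * shufflevector with a constant mask.
 */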
case OP_SSE_SHUFPS:
case OP_SSE2_SHUFPD:
case OP_SSE2_PSHUFD:
case OP_SSE2_PSHUFHW:
case OP_SSE2_PSHUFLW: {
LLVMTypeRef ret_t = LLVMTypeOf (lhs);
LLVMValueRef l = lhs;
LLVMValueRef r = rhs;
LLVMValueRef ctl = arg3;
const char *oname = "";
int ncases = 0;
switch (ins->opcode) {
case OP_SSE_SHUFPS: ncases = 256; break;
case OP_SSE2_SHUFPD: ncases = 4; break;
case OP_SSE2_PSHUFD: case OP_SSE2_PSHUFHW: case OP_SSE2_PSHUFLW: ncases = 256; r = lhs; ctl = rhs; break;
}
switch (ins->opcode) {
case OP_SSE_SHUFPS: oname = "sse_shufps"; break;
case OP_SSE2_SHUFPD: oname = "sse2_shufpd"; break;
case OP_SSE2_PSHUFD: oname = "sse2_pshufd"; break;
case OP_SSE2_PSHUFHW: oname = "sse2_pshufhw"; break;
case OP_SSE2_PSHUFLW: oname = "sse2_pshuflw"; break;
}
ctl = LLVMBuildAnd (builder, ctl, const_int32 (ncases - 1), "");
ImmediateUnrollCtx ictx = immediate_unroll_begin (ctx, bb, ncases, ctl, ret_t, oname);
int mask_values [8];
int mask_len = 0;
int i = 0;
while (immediate_unroll_next (&ictx, &i)) {
switch (ins->opcode) {
case OP_SSE_SHUFPS:
mask_len = 4;
mask_values [0] = ((i >> 0) & 0x3) + 0; // take two elements from lhs
mask_values [1] = ((i >> 2) & 0x3) + 0;
mask_values [2] = ((i >> 4) & 0x3) + 4; // and two from rhs
mask_values [3] = ((i >> 6) & 0x3) + 4;
break;
case OP_SSE2_SHUFPD:
mask_len = 2;
mask_values [0] = ((i >> 0) & 0x1) + 0;
mask_values [1] = ((i >> 1) & 0x1) + 2;
break;
case OP_SSE2_PSHUFD:
/*
* Each 2 bits in the mask selects 1 dword from the source and copies it to the
* destination.
*/
mask_len = 4;
for (int j = 0; j < 4; ++j) {
int windex = (i >> (j * 2)) & 0x3;
mask_values [j] = windex;
}
break;
case OP_SSE2_PSHUFHW:
/*
* Each 2 bits in mask selects 1 word from the high quadword of the source and copies it to the
* high quadword of the destination.
*/
mask_len = 8;
/* The low quadword stays the same */
for (int j = 0; j < 4; ++j)
mask_values [j] = j;
for (int j = 0; j < 4; ++j) {
int windex = (i >> (j * 2)) & 0x3;
mask_values [j + 4] = 4 + windex;
}
break;
case OP_SSE2_PSHUFLW:
mask_len = 8;
/* The high quadword stays the same */
for (int j = 0; j < 4; ++j)
mask_values [j + 4] = j + 4;
for (int j = 0; j < 4; ++j) {
int windex = (i >> (j * 2)) & 0x3;
mask_values [j] = windex;
}
break;
}
LLVMValueRef mask = create_const_vector_i32 (mask_values, mask_len);
LLVMValueRef result = LLVMBuildShuffleVector (builder, l, r, mask, oname);
immediate_unroll_commit (&ictx, i, result);
}
immediate_unroll_default (&ictx);
immediate_unroll_commit_default (&ictx, LLVMGetUndef (ret_t));
values [ins->dreg] = immediate_unroll_end (&ictx, &cbb);
break;
}
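/*
 * The SSE3 duplicate ops: movddup broadcasts the low double,
 * movshdup/movsldup replicate the odd/even float lanes. All are constant
 * shuffles.
 */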
case OP_SSE3_MOVDDUP: {
int mask [] = { 0, 0 };
values [ins->dreg] = LLVMBuildShuffleVector (builder, lhs,
LLVMGetUndef (LLVMTypeOf (lhs)),
create_const_vector_i32 (mask, 2), "");
break;
}
case OP_SSE3_MOVDDUP_MEM: {
LLVMValueRef undef = LLVMGetUndef (v128_r8_t);
LLVMValueRef addr = convert (ctx, lhs, LLVMPointerType (r8_t, 0));
LLVMValueRef elem = mono_llvm_build_aligned_load (builder, addr, "sse3_movddup_mem", FALSE, 1);
LLVMValueRef val = LLVMBuildInsertElement (builder, undef, elem, const_int32 (0), "sse3_movddup_mem");
values [ins->dreg] = LLVMBuildShuffleVector (builder, val, undef, LLVMConstNull (LLVMVectorType (i4_t, 2)), "sse3_movddup_mem");
break;
}
case OP_SSE3_MOVSHDUP: {
int mask [] = { 1, 1, 3, 3 };
values [ins->dreg] = LLVMBuildShuffleVector (builder, lhs, LLVMConstNull (LLVMTypeOf (lhs)), create_const_vector_i32 (mask, 4), "");
break;
}
case OP_SSE3_MOVSLDUP: {
int mask [] = { 0, 0, 2, 2 };
values [ins->dreg] = LLVMBuildShuffleVector (builder, lhs, LLVMConstNull (LLVMTypeOf (lhs)), create_const_vector_i32 (mask, 4), "");
break;
}
case OP_SSSE3_SHUFFLE: {
LLVMValueRef args [] = { lhs, rhs };
values [ins->dreg] = call_intrins (ctx, INTRINS_SSE_PSHUFB, args, dname);
break;
}
case OP_SSSE3_ABS: {
// %sub = sub <16 x i8> zeroinitializer, %arg
// %cmp = icmp sgt <16 x i8> %arg, zeroinitializer
// %abs = select <16 x i1> %cmp, <16 x i8> %arg, <16 x i8> %sub
LLVMTypeRef typ = type_to_sse_type (ins->inst_c1);
LLVMValueRef sub = LLVMBuildSub(builder, LLVMConstNull(typ), lhs, "");
LLVMValueRef cmp = LLVMBuildICmp(builder, LLVMIntSGT, lhs, LLVMConstNull(typ), "");
LLVMValueRef abs = LLVMBuildSelect (builder, cmp, lhs, sub, "");
values [ins->dreg] = convert (ctx, abs, typ);
break;
}
case OP_SSSE3_ALIGNR: {
LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass);
LLVMValueRef zero = LLVMConstNull (v128_i1_t);
LLVMValueRef hivec = convert (ctx, lhs, v128_i1_t);
LLVMValueRef lovec = convert (ctx, rhs, v128_i1_t);
LLVMValueRef rshift_amount = convert (ctx, arg3, i1_t);
ImmediateUnrollCtx ictx = immediate_unroll_begin (ctx, bb, 32, rshift_amount, v128_i1_t, "ssse3_alignr");
LLVMValueRef mask_values [16]; // 128-bit vector, 8-bit elements, 16 total elements
int i = 0;
while (immediate_unroll_next (&ictx, &i)) {
LLVMValueRef hi = NULL;
LLVMValueRef lo = NULL;
if (i <= 16) {
for (int j = 0; j < 16; j++)
mask_values [j] = const_int32 (i + j);
lo = lovec;
hi = hivec;
} else {
for (int j = 0; j < 16; j++)
mask_values [j] = const_int32 (i + j - 16);
lo = hivec;
hi = zero;
}
LLVMValueRef shuffled = LLVMBuildShuffleVector (builder, lo, hi, LLVMConstVector (mask_values, 16), "ssse3_alignr");
immediate_unroll_commit (&ictx, i, shuffled);
}
immediate_unroll_default (&ictx);
immediate_unroll_commit_default (&ictx, zero);
LLVMValueRef result = immediate_unroll_end (&ictx, &cbb);
values [ins->dreg] = convert (ctx, result, ret_t);
break;
}
case OP_SSE41_ROUNDP: {
LLVMValueRef args [] = { lhs, LLVMConstInt (LLVMInt32Type (), ins->inst_c0, FALSE) };
values [ins->dreg] = call_intrins (ctx, ins->inst_c1 == MONO_TYPE_R4 ? INTRINS_SSE_ROUNDPS : INTRINS_SSE_ROUNDPD, args, dname);
break;
}
case OP_SSE41_ROUNDS: {
LLVMValueRef args [3];
args [0] = lhs;
args [1] = rhs;
args [2] = LLVMConstInt (LLVMInt32Type (), ins->inst_c0, FALSE);
values [ins->dreg] = call_intrins (ctx, ins->inst_c1 == MONO_TYPE_R4 ? INTRINS_SSE_ROUNDSS : INTRINS_SSE_ROUNDSD, args, dname);
break;
}
case OP_SSE41_DPPS:
case OP_SSE41_DPPD: {
/* Bits 0, 1, 4, 5 are meaningful for the control mask
* in dppd; all bits are meaningful for dpps.
*/
LLVMTypeRef ret_t = NULL;
LLVMValueRef mask = NULL;
int mask_bits = 0;
int high_shift = 0;
int low_mask = 0;
IntrinsicId iid = (IntrinsicId) 0;
const char *oname = "";
switch (ins->opcode) {
case OP_SSE41_DPPS:
ret_t = v128_r4_t;
mask = const_int8 (0xff); // 0b11111111
mask_bits = 8;
high_shift = 4;
low_mask = 0xf;
iid = INTRINS_SSE_DPPS;
oname = "sse41_dpps";
break;
case OP_SSE41_DPPD:
ret_t = v128_r8_t;
mask = const_int8 (0x33); // 0b00110011
mask_bits = 4;
high_shift = 2;
low_mask = 0x3;
iid = INTRINS_SSE_DPPD;
oname = "sse41_dppd";
break;
}
LLVMValueRef args [] = { lhs, rhs, NULL };
LLVMValueRef index = LLVMBuildAnd (builder, convert (ctx, arg3, i1_t), mask, oname);
ImmediateUnrollCtx ictx = immediate_unroll_begin (ctx, bb, 1 << mask_bits, index, ret_t, oname);
int i = 0;
while (immediate_unroll_next (&ictx, &i)) {
int imm = ((i >> high_shift) << 4) | (i & low_mask);
args [2] = const_int8 (imm);
LLVMValueRef result = call_intrins (ctx, iid, args, dname);
immediate_unroll_commit (&ictx, imm, result);
}
immediate_unroll_default (&ictx);
immediate_unroll_commit_default (&ictx, LLVMGetUndef (ret_t));
values [ins->dreg] = immediate_unroll_end (&ictx, &cbb);
break;
}
case OP_SSE41_MPSADBW: {
LLVMValueRef args [] = {
convert (ctx, lhs, sse_i1_t),
convert (ctx, rhs, sse_i1_t),
NULL,
};
LLVMValueRef ctl = convert (ctx, arg3, i1_t);
// Only 3 bits (bits 0-2) are used by mpsadbw and llvm.x86.sse41.mpsadbw
int used_bits = 0x7;
ctl = LLVMBuildAnd (builder, ctl, const_int8 (used_bits), "sse41_mpsadbw");
ImmediateUnrollCtx ictx = immediate_unroll_begin (ctx, bb, used_bits + 1, ctl, v128_i2_t, "sse41_mpsadbw");
int i = 0;
while (immediate_unroll_next (&ictx, &i)) {
args [2] = const_int8 (i);
LLVMValueRef result = call_intrins (ctx, INTRINS_SSE_MPSADBW, args, "sse41_mpsadbw");
immediate_unroll_commit (&ictx, i, result);
}
immediate_unroll_unreachable_default (&ictx);
values [ins->dreg] = immediate_unroll_end (&ictx, &cbb);
break;
}
case OP_SSE41_INSERTPS: {
LLVMValueRef ctl = convert (ctx, arg3, i1_t);
LLVMValueRef args [] = { lhs, rhs, NULL };
ImmediateUnrollCtx ictx = immediate_unroll_begin (ctx, bb, 256, ctl, v128_r4_t, "sse41_insertps");
int i = 0;
while (immediate_unroll_next (&ictx, &i)) {
args [2] = const_int8 (i);
LLVMValueRef result = call_intrins (ctx, INTRINS_SSE_INSERTPS, args, dname);
immediate_unroll_commit (&ictx, i, result);
}
immediate_unroll_unreachable_default (&ictx);
values [ins->dreg] = immediate_unroll_end (&ictx, &cbb);
break;
}
case OP_SSE41_BLEND: {
LLVMTypeRef ret_t = LLVMTypeOf (lhs);
int nelem = LLVMGetVectorSize (ret_t);
g_assert (nelem >= 2 && nelem <= 8); // I2, U2, R4, R8
int unique_ctl_patterns = 1 << nelem;
int ctlmask = unique_ctl_patterns - 1;
LLVMValueRef ctl = convert (ctx, arg3, i1_t);
ctl = LLVMBuildAnd (builder, ctl, const_int8 (ctlmask), "sse41_blend");
ImmediateUnrollCtx ictx = immediate_unroll_begin (ctx, bb, unique_ctl_patterns, ctl, ret_t, "sse41_blend");
int i = 0;
int mask_values [MAX_VECTOR_ELEMS] = { 0 };
while (immediate_unroll_next (&ictx, &i)) {
for (int lane = 0; lane < nelem; ++lane) {
// bit 'lane' of the control byte selects the rhs element over the lhs one
gboolean bit_set = (i & (1 << lane)) >> lane;
mask_values [lane] = lane + (bit_set ? nelem : 0);
}
LLVMValueRef mask = create_const_vector_i32 (mask_values, nelem);
LLVMValueRef result = LLVMBuildShuffleVector (builder, lhs, rhs, mask, "sse41_blend");
immediate_unroll_commit (&ictx, i, result);
}
immediate_unroll_default (&ictx);
immediate_unroll_commit_default (&ictx, LLVMGetUndef (ret_t));
values [ins->dreg] = immediate_unroll_end (&ictx, &cbb);
break;
}
case OP_SSE41_BLENDV: {
LLVMValueRef args [] = { lhs, rhs, values [ins->sreg3] };
if (ins->inst_c1 == MONO_TYPE_R4) {
values [ins->dreg] = call_intrins (ctx, INTRINS_SSE_BLENDVPS, args, dname);
} else if (ins->inst_c1 == MONO_TYPE_R8) {
values [ins->dreg] = call_intrins (ctx, INTRINS_SSE_BLENDVPD, args, dname);
} else {
// for other non-fp types just convert to <16 x i8> and pass to @llvm.x86.sse41.pblendvb
args [0] = LLVMBuildBitCast (ctx->builder, args [0], sse_i1_t, "");
args [1] = LLVMBuildBitCast (ctx->builder, args [1], sse_i1_t, "");
args [2] = LLVMBuildBitCast (ctx->builder, args [2], sse_i1_t, "");
values [ins->dreg] = call_intrins (ctx, INTRINS_SSE_PBLENDVB, args, dname);
}
break;
}
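/*
 * pmovsx/pmovzx-style widening conversions: if the source is a memory
 * operand, load it first; then shuffle out the low elements and sign- or
 * zero-extend them to the destination element width.
 */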
case OP_SSE_CVTII: {
gboolean is_signed = (ins->inst_c1 == MONO_TYPE_I1) ||
(ins->inst_c1 == MONO_TYPE_I2) || (ins->inst_c1 == MONO_TYPE_I4);
LLVMTypeRef vec_type;
if ((ins->inst_c1 == MONO_TYPE_I1) || (ins->inst_c1 == MONO_TYPE_U1))
vec_type = sse_i1_t;
else if ((ins->inst_c1 == MONO_TYPE_I2) || (ins->inst_c1 == MONO_TYPE_U2))
vec_type = sse_i2_t;
else
vec_type = sse_i4_t;
LLVMValueRef value;
if (LLVMGetTypeKind (LLVMTypeOf (lhs)) != LLVMVectorTypeKind) {
LLVMValueRef bitcasted = LLVMBuildBitCast (ctx->builder, lhs, LLVMPointerType (vec_type, 0), "");
value = mono_llvm_build_aligned_load (builder, bitcasted, "", FALSE, 1);
} else {
value = LLVMBuildBitCast (ctx->builder, lhs, vec_type, "");
}
LLVMValueRef mask_vec;
LLVMTypeRef dst_type;
if (ins->inst_c0 == MONO_TYPE_I2) {
mask_vec = create_const_vector_i32 (mask_0_incr_1, 8);
dst_type = sse_i2_t;
} else if (ins->inst_c0 == MONO_TYPE_I4) {
mask_vec = create_const_vector_i32 (mask_0_incr_1, 4);
dst_type = sse_i4_t;
} else {
g_assert (ins->inst_c0 == MONO_TYPE_I8);
mask_vec = create_const_vector_i32 (mask_0_incr_1, 2);
dst_type = sse_i8_t;
}
LLVMValueRef shuffled = LLVMBuildShuffleVector (builder, value,
LLVMGetUndef (vec_type), mask_vec, "");
if (is_signed)
values [ins->dreg] = LLVMBuildSExt (ctx->builder, shuffled, dst_type, "");
else
values [ins->dreg] = LLVMBuildZExt (ctx->builder, shuffled, dst_type, "");
break;
}
case OP_SSE41_LOADANT: {
LLVMValueRef dst_ptr = convert (ctx, lhs, LLVMPointerType (primitive_type_to_llvm_type (inst_c1_type (ins)), 0));
LLVMValueRef dst_vec = LLVMBuildBitCast (builder, dst_ptr, LLVMPointerType (type_to_sse_type (ins->inst_c1), 0), "");
LLVMValueRef load = mono_llvm_build_aligned_load (builder, dst_vec, "", FALSE, 16);
set_nontemporal_flag (load);
values [ins->dreg] = load;
break;
}
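/*
 * PMULDQ: sign-extend the low 32 bits of each 64-bit lane (shl 32
 * followed by an exact ashr 32), then multiply; the product of two 32-bit
 * values cannot overflow 64 bits, hence the NSW flag.
 */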
case OP_SSE41_MUL: {
const int shift_vals [] = { 32, 32 };
const LLVMValueRef args [] = {
convert (ctx, lhs, sse_i8_t),
convert (ctx, rhs, sse_i8_t),
};
LLVMValueRef mul_args [2] = { 0 };
LLVMValueRef shift_vec = create_const_vector (LLVMInt64Type (), shift_vals, 2);
for (int i = 0; i < 2; ++i) {
LLVMValueRef padded = LLVMBuildShl (builder, args [i], shift_vec, "");
mul_args[i] = mono_llvm_build_exact_ashr (builder, padded, shift_vec);
}
values [ins->dreg] = LLVMBuildNSWMul (builder, mul_args [0], mul_args [1], dname);
break;
}
case OP_SSE41_MULLO: {
values [ins->dreg] = LLVMBuildMul (ctx->builder, lhs, rhs, "");
break;
}
case OP_SSE42_CRC32:
case OP_SSE42_CRC64: {
LLVMValueRef args [2];
args [0] = lhs;
args [1] = convert (ctx, rhs, primitive_type_to_llvm_type (ins->inst_c0));
IntrinsicId id;
switch (ins->inst_c0) {
case MONO_TYPE_U1: id = INTRINS_SSE_CRC32_32_8; break;
case MONO_TYPE_U2: id = INTRINS_SSE_CRC32_32_16; break;
case MONO_TYPE_U4: id = INTRINS_SSE_CRC32_32_32; break;
case MONO_TYPE_U8: id = INTRINS_SSE_CRC32_64_64; break;
default: g_assert_not_reached (); break;
}
values [ins->dreg] = call_intrins (ctx, id, args, "");
break;
}
case OP_PCLMULQDQ: {
LLVMValueRef args [] = { lhs, rhs, NULL };
LLVMValueRef ctl = convert (ctx, arg3, i1_t);
// Only bits 0 and 4 of the immediate operand are used by PCLMULQDQ.
ctl = LLVMBuildAnd (builder, ctl, const_int8 (0x11), "pclmulqdq");
ImmediateUnrollCtx ictx = immediate_unroll_begin (ctx, bb, 1 << 2, ctl, v128_i8_t, "pclmulqdq");
int i = 0;
while (immediate_unroll_next (&ictx, &i)) {
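// spread the 2-bit unroll index onto immediate bits 0 and 4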
int imm = ((i & 0x2) << 3) | (i & 0x1);
args [2] = const_int8 (imm);
LLVMValueRef result = call_intrins (ctx, INTRINS_PCLMULQDQ, args, "pclmulqdq");
immediate_unroll_commit (&ictx, imm, result);
}
immediate_unroll_unreachable_default (&ictx);
values [ins->dreg] = immediate_unroll_end (&ictx, &cbb);
break;
}
case OP_AES_KEYGENASSIST: {
LLVMValueRef roundconstant = convert (ctx, rhs, i1_t);
LLVMValueRef args [] = { convert (ctx, lhs, v128_i8_t), NULL };
ImmediateUnrollCtx ictx = immediate_unroll_begin (ctx, bb, 256, roundconstant, v128_i8_t, "aes_keygenassist");
int i = 0;
while (immediate_unroll_next (&ictx, &i)) {
args [1] = const_int8 (i);
LLVMValueRef result = call_intrins (ctx, INTRINS_AESNI_AESKEYGENASSIST, args, "aes_keygenassist");
immediate_unroll_commit (&ictx, i, result);
}
immediate_unroll_unreachable_default (&ictx);
LLVMValueRef result = immediate_unroll_end (&ictx, &cbb);
values [ins->dreg] = convert (ctx, result, v128_i1_t);
break;
}
#endif
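/*
 * Vector compares produce an all-ones/all-zero mask per lane: do the
 * compare, sign-extend the i1 results to the lane width, and bitcast
 * back to the operand type.
 */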
case OP_XCOMPARE_FP: {
LLVMRealPredicate pred = fpcond_to_llvm_cond [ins->inst_c0];
LLVMValueRef cmp = LLVMBuildFCmp (builder, pred, lhs, rhs, "");
int nelems = LLVMGetVectorSize (LLVMTypeOf (cmp));
g_assert (LLVMTypeOf (lhs) == LLVMTypeOf (rhs));
if (ins->inst_c1 == MONO_TYPE_R8)
values [ins->dreg] = LLVMBuildBitCast (builder, LLVMBuildSExt (builder, cmp, LLVMVectorType (LLVMInt64Type (), nelems), ""), LLVMTypeOf (lhs), "");
else
values [ins->dreg] = LLVMBuildBitCast (builder, LLVMBuildSExt (builder, cmp, LLVMVectorType (LLVMInt32Type (), nelems), ""), LLVMTypeOf (lhs), "");
break;
}
case OP_XCOMPARE: {
LLVMIntPredicate pred = cond_to_llvm_cond [ins->inst_c0];
LLVMValueRef cmp = LLVMBuildICmp (builder, pred, lhs, rhs, "");
g_assert (LLVMTypeOf (lhs) == LLVMTypeOf (rhs));
values [ins->dreg] = LLVMBuildSExt (builder, cmp, LLVMTypeOf (lhs), "");
break;
}
case OP_POPCNT32:
values [ins->dreg] = call_intrins (ctx, INTRINS_CTPOP_I32, &lhs, "");
break;
case OP_POPCNT64:
values [ins->dreg] = call_intrins (ctx, INTRINS_CTPOP_I64, &lhs, "");
break;
case OP_CTTZ32:
case OP_CTTZ64: {
LLVMValueRef args [2];
args [0] = lhs;
args [1] = LLVMConstInt (LLVMInt1Type (), 0, FALSE);
values [ins->dreg] = call_intrins (ctx, ins->opcode == OP_CTTZ32 ? INTRINS_CTTZ_I32 : INTRINS_CTTZ_I64, args, "");
break;
}
case OP_BMI1_BEXTR32:
case OP_BMI1_BEXTR64: {
LLVMValueRef args [2];
args [0] = lhs;
args [1] = convert (ctx, rhs, ins->opcode == OP_BMI1_BEXTR32 ? i4_t : i8_t); // cast ushort to u32/u64
values [ins->dreg] = call_intrins (ctx, ins->opcode == OP_BMI1_BEXTR32 ? INTRINS_BEXTR_I32 : INTRINS_BEXTR_I64, args, "");
break;
}
case OP_BZHI32:
case OP_BZHI64: {
LLVMValueRef args [2];
args [0] = lhs;
args [1] = rhs;
values [ins->dreg] = call_intrins (ctx, ins->opcode == OP_BZHI32 ? INTRINS_BZHI_I32 : INTRINS_BZHI_I64, args, "");
break;
}
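/*
 * MULX: widen both operands to i128, multiply, optionally store the low
 * half through the third operand, and return the high half.
 */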
case OP_MULX_H32:
case OP_MULX_H64:
case OP_MULX_HL32:
case OP_MULX_HL64: {
gboolean is_64 = ins->opcode == OP_MULX_H64 || ins->opcode == OP_MULX_HL64;
gboolean only_high = ins->opcode == OP_MULX_H32 || ins->opcode == OP_MULX_H64;
LLVMValueRef lx = LLVMBuildZExt (ctx->builder, lhs, LLVMInt128Type (), "");
LLVMValueRef rx = LLVMBuildZExt (ctx->builder, rhs, LLVMInt128Type (), "");
LLVMValueRef mulx = LLVMBuildMul (ctx->builder, lx, rx, "");
if (!only_high) {
LLVMValueRef addr = convert (ctx, arg3, LLVMPointerType (is_64 ? i8_t : i4_t, 0));
LLVMValueRef lowx = LLVMBuildTrunc (ctx->builder, mulx, is_64 ? LLVMInt64Type () : LLVMInt32Type (), "");
LLVMBuildStore (ctx->builder, lowx, addr);
}
LLVMValueRef shift = LLVMConstInt (LLVMInt128Type (), is_64 ? 64 : 32, FALSE);
LLVMValueRef highx = LLVMBuildLShr (ctx->builder, mulx, shift, "");
values [ins->dreg] = LLVMBuildTrunc (ctx->builder, highx, is_64 ? LLVMInt64Type () : LLVMInt32Type (), "");
break;
}
case OP_PEXT32:
case OP_PEXT64: {
LLVMValueRef args [2];
args [0] = lhs;
args [1] = rhs;
values [ins->dreg] = call_intrins (ctx, ins->opcode == OP_PEXT32 ? INTRINS_PEXT_I32 : INTRINS_PEXT_I64, args, "");
break;
}
case OP_PDEP32:
case OP_PDEP64: {
LLVMValueRef args [2];
args [0] = lhs;
args [1] = rhs;
values [ins->dreg] = call_intrins (ctx, ins->opcode == OP_PDEP32 ? INTRINS_PDEP_I32 : INTRINS_PDEP_I64, args, "");
break;
}
#endif /* defined(TARGET_X86) || defined(TARGET_AMD64) */
// Shared between ARM64 and X86
#if defined(TARGET_ARM64) || defined(TARGET_X86) || defined(TARGET_AMD64)
case OP_LZCNT32:
case OP_LZCNT64: {
IntrinsicId iid = ins->opcode == OP_LZCNT32 ? INTRINS_CTLZ_I32 : INTRINS_CTLZ_I64;
LLVMValueRef args [] = { lhs, const_int1 (FALSE) };
values [ins->dreg] = call_intrins (ctx, iid, args, "");
break;
}
#endif
#if defined(TARGET_ARM64) || defined(TARGET_X86) || defined(TARGET_AMD64) || defined(TARGET_WASM)
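/* XEQUAL compares two vectors for full equality and yields a 0/1 scalar. */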
case OP_XEQUAL: {
LLVMTypeRef t;
LLVMValueRef cmp, mask [MAX_VECTOR_ELEMS], shuffle;
int nelems;
#if defined(TARGET_WASM)
/* The wasm code generator doesn't understand the shuffle/and code sequence below */
LLVMValueRef val;
if (LLVMIsNull (lhs) || LLVMIsNull (rhs)) {
val = LLVMIsNull (lhs) ? rhs : lhs;
nelems = LLVMGetVectorSize (LLVMTypeOf (lhs));
IntrinsicId intrins = (IntrinsicId)0;
switch (nelems) {
case 16:
intrins = INTRINS_WASM_ANYTRUE_V16;
break;
case 8:
intrins = INTRINS_WASM_ANYTRUE_V8;
break;
case 4:
intrins = INTRINS_WASM_ANYTRUE_V4;
break;
case 2:
intrins = INTRINS_WASM_ANYTRUE_V2;
break;
default:
g_assert_not_reached ();
}
/* res = !wasm.anytrue (val) */
values [ins->dreg] = call_intrins (ctx, intrins, &val, "");
values [ins->dreg] = LLVMBuildZExt (builder, LLVMBuildICmp (builder, LLVMIntEQ, values [ins->dreg], LLVMConstInt (LLVMInt32Type (), 0, FALSE), ""), LLVMInt32Type (), dname);
break;
}
#endif
LLVMTypeRef srcelemt = LLVMGetElementType (LLVMTypeOf (lhs));
// %c = icmp eq <nelems x elemt> %a0, %a1
if (srcelemt == LLVMDoubleType () || srcelemt == LLVMFloatType ())
cmp = LLVMBuildFCmp (builder, LLVMRealOEQ, lhs, rhs, "");
else
cmp = LLVMBuildICmp (builder, LLVMIntEQ, lhs, rhs, "");
nelems = LLVMGetVectorSize (LLVMTypeOf (cmp));
LLVMTypeRef elemt;
if (srcelemt == LLVMDoubleType ())
elemt = LLVMInt64Type ();
else if (srcelemt == LLVMFloatType ())
elemt = LLVMInt32Type ();
else
elemt = srcelemt;
t = LLVMVectorType (elemt, nelems);
cmp = LLVMBuildSExt (builder, cmp, t, "");
// cmp is a <nelems x elemt> vector, each element is either 0xff... or 0
int half = nelems / 2;
while (half >= 1) {
// AND the top and bottom halves into the bottom half
for (int i = 0; i < half; ++i)
mask [i] = LLVMConstInt (LLVMInt32Type (), half + i, FALSE);
for (int i = half; i < nelems; ++i)
mask [i] = LLVMConstInt (LLVMInt32Type (), 0, FALSE);
shuffle = LLVMBuildShuffleVector (builder, cmp, LLVMGetUndef (t), LLVMConstVector (mask, LLVMGetVectorSize (t)), "");
cmp = LLVMBuildAnd (builder, cmp, shuffle, "");
half = half / 2;
}
// Extract [0]
LLVMValueRef first_elem = LLVMBuildExtractElement (builder, cmp, LLVMConstInt (LLVMInt32Type (), 0, FALSE), "");
// convert to 0/1
LLVMValueRef cmp_zero = LLVMBuildICmp (builder, LLVMIntNE, first_elem, LLVMConstInt (elemt, 0, FALSE), "");
values [ins->dreg] = LLVMBuildZExt (builder, cmp_zero, LLVMInt8Type (), "");
break;
}
#endif
#if defined(TARGET_ARM64)
case OP_XOP_I4_I4:
case OP_XOP_I8_I8: {
IntrinsicId id = (IntrinsicId)ins->inst_c0;
values [ins->dreg] = call_intrins (ctx, id, &lhs, "");
break;
}
case OP_XOP_X_X_X:
case OP_XOP_I4_I4_I4:
case OP_XOP_I4_I4_I8: {
IntrinsicId id = (IntrinsicId)ins->inst_c0;
gboolean zext_last = FALSE, bitcast_result = FALSE, getElement = FALSE;
int element_idx = -1;
switch (id) {
case INTRINS_AARCH64_PMULL64:
getElement = TRUE;
bitcast_result = TRUE;
element_idx = ins->inst_c1;
break;
case INTRINS_AARCH64_CRC32B:
case INTRINS_AARCH64_CRC32H:
case INTRINS_AARCH64_CRC32W:
case INTRINS_AARCH64_CRC32CB:
case INTRINS_AARCH64_CRC32CH:
case INTRINS_AARCH64_CRC32CW:
zext_last = TRUE;
break;
default:
break;
}
LLVMValueRef arg1 = rhs;
if (zext_last)
arg1 = LLVMBuildZExt (ctx->builder, arg1, LLVMInt32Type (), "");
LLVMValueRef args [] = { lhs, arg1 };
if (getElement) {
args [0] = LLVMBuildExtractElement (ctx->builder, args [0], const_int32 (element_idx), "");
args [1] = LLVMBuildExtractElement (ctx->builder, args [1], const_int32 (element_idx), "");
}
values [ins->dreg] = call_intrins (ctx, id, args, "");
if (bitcast_result)
values [ins->dreg] = convert (ctx, values [ins->dreg], LLVMVectorType (LLVMInt64Type (), 2));
break;
}
case OP_XOP_X_X_X_X: {
IntrinsicId id = (IntrinsicId)ins->inst_c0;
gboolean getLowerElement = FALSE;
int arg_idx = -1;
switch (id) {
case INTRINS_AARCH64_SHA1C:
case INTRINS_AARCH64_SHA1M:
case INTRINS_AARCH64_SHA1P:
getLowerElement = TRUE;
arg_idx = 1;
break;
default:
break;
}
LLVMValueRef args [] = { lhs, rhs, arg3 };
if (getLowerElement)
args [arg_idx] = LLVMBuildExtractElement (ctx->builder, args [arg_idx], const_int32 (0), "");
values [ins->dreg] = call_intrins (ctx, id, args, "");
break;
}
case OP_XOP_X_X: {
IntrinsicId id = (IntrinsicId)ins->inst_c0;
LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass);
gboolean getLowerElement = FALSE;
switch (id) {
case INTRINS_AARCH64_SHA1H: getLowerElement = TRUE; break;
default: break;
}
LLVMValueRef arg0 = lhs;
if (getLowerElement)
arg0 = LLVMBuildExtractElement (ctx->builder, arg0, const_int32 (0), "");
LLVMValueRef result = call_intrins (ctx, id, &arg0, "");
if (getLowerElement)
result = vector_from_scalar (ctx, ret_t, result);
values [ins->dreg] = result;
break;
}
case OP_XCOMPARE_FP_SCALAR:
case OP_XCOMPARE_FP: {
g_assert (LLVMTypeOf (lhs) == LLVMTypeOf (rhs));
gboolean scalar = ins->opcode == OP_XCOMPARE_FP_SCALAR;
LLVMRealPredicate pred = fpcond_to_llvm_cond [ins->inst_c0];
LLVMTypeRef ret_t = LLVMTypeOf (lhs);
LLVMTypeRef reti_t = to_integral_vector_type (ret_t);
LLVMValueRef args [] = { lhs, rhs };
if (scalar)
for (int i = 0; i < 2; ++i)
args [i] = scalar_from_vector (ctx, args [i]);
LLVMValueRef result = LLVMBuildFCmp (builder, pred, args [0], args [1], "xcompare_fp");
if (scalar)
result = vector_from_scalar (ctx, LLVMVectorType (LLVMIntType (1), LLVMGetVectorSize (reti_t)), result);
result = LLVMBuildSExt (builder, result, reti_t, "");
result = LLVMBuildBitCast (builder, result, ret_t, "");
values [ins->dreg] = result;
break;
}
case OP_XCOMPARE_SCALAR:
case OP_XCOMPARE: {
g_assert (LLVMTypeOf (lhs) == LLVMTypeOf (rhs));
gboolean scalar = ins->opcode == OP_XCOMPARE_SCALAR;
LLVMIntPredicate pred = cond_to_llvm_cond [ins->inst_c0];
LLVMTypeRef ret_t = LLVMTypeOf (lhs);
LLVMValueRef args [] = { lhs, rhs };
if (scalar)
for (int i = 0; i < 2; ++i)
args [i] = scalar_from_vector (ctx, args [i]);
LLVMValueRef result = LLVMBuildICmp (builder, pred, args [0], args [1], "xcompare");
if (scalar)
result = vector_from_scalar (ctx, LLVMVectorType (LLVMIntType (1), LLVMGetVectorSize (ret_t)), result);
values [ins->dreg] = LLVMBuildSExt (builder, result, ret_t, "");
break;
}
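/*
 * EXT concatenates the two sources and extracts 'elems' consecutive
 * elements starting at the runtime index, unrolled into one
 * shufflevector per possible index value.
 */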
case OP_ARM64_EXT: {
LLVMTypeRef ret_t = LLVMTypeOf (lhs);
unsigned int elems = LLVMGetVectorSize (ret_t);
g_assert (elems <= ARM64_MAX_VECTOR_ELEMS);
LLVMValueRef index = arg3;
LLVMValueRef default_value = lhs;
ImmediateUnrollCtx ictx = immediate_unroll_begin (ctx, bb, elems, index, ret_t, "arm64_ext");
int i = 0;
while (immediate_unroll_next (&ictx, &i)) {
LLVMValueRef mask = create_const_vector_i32 (&mask_0_incr_1 [i], elems);
LLVMValueRef result = LLVMBuildShuffleVector (builder, lhs, rhs, mask, "arm64_ext");
immediate_unroll_commit (&ictx, i, result);
}
immediate_unroll_default (&ictx);
immediate_unroll_commit_default (&ictx, default_value);
values [ins->dreg] = immediate_unroll_end (&ictx, &cbb);
break;
}
case OP_ARM64_MVN: {
LLVMTypeRef ret_t = LLVMTypeOf (lhs);
LLVMValueRef result = bitcast_to_integral (ctx, lhs);
result = LLVMBuildNot (builder, result, "arm64_mvn");
result = convert (ctx, result, ret_t);
values [ins->dreg] = result;
break;
}
case OP_ARM64_BIC: {
LLVMTypeRef ret_t = LLVMTypeOf (lhs);
LLVMValueRef result = bitcast_to_integral (ctx, lhs);
LLVMValueRef mask = bitcast_to_integral (ctx, rhs);
mask = LLVMBuildNot (builder, mask, "");
result = LLVMBuildAnd (builder, mask, result, "arm64_bic");
result = convert (ctx, result, ret_t);
values [ins->dreg] = result;
break;
}
case OP_ARM64_BSL: {
LLVMTypeRef ret_t = LLVMTypeOf (rhs);
LLVMValueRef select = bitcast_to_integral (ctx, lhs);
LLVMValueRef left = bitcast_to_integral (ctx, rhs);
LLVMValueRef right = bitcast_to_integral (ctx, arg3);
LLVMValueRef result1 = LLVMBuildAnd (builder, select, left, "arm64_bsl");
LLVMValueRef result2 = LLVMBuildAnd (builder, LLVMBuildNot (builder, select, ""), right, "");
LLVMValueRef result = LLVMBuildOr (builder, result1, result2, "");
result = convert (ctx, result, ret_t);
values [ins->dreg] = result;
break;
}
case OP_ARM64_CMTST: {
LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass);
LLVMValueRef l = bitcast_to_integral (ctx, lhs);
LLVMValueRef r = bitcast_to_integral (ctx, rhs);
LLVMValueRef result = LLVMBuildAnd (builder, l, r, "arm64_cmtst");
LLVMTypeRef t = LLVMTypeOf (l);
result = LLVMBuildICmp (builder, LLVMIntNE, result, LLVMConstNull (t), "");
result = LLVMBuildSExt (builder, result, t, "");
result = convert (ctx, result, ret_t);
values [ins->dreg] = result;
break;
}
case OP_ARM64_FCVTL:
case OP_ARM64_FCVTL2: {
LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass);
gboolean high = ins->opcode == OP_ARM64_FCVTL2;
LLVMValueRef result = lhs;
if (high)
result = extract_high_elements (ctx, result);
result = LLVMBuildFPExt (builder, result, ret_t, "arm64_fcvtl");
values [ins->dreg] = result;
break;
}
case OP_ARM64_FCVTXN:
case OP_ARM64_FCVTXN2:
case OP_ARM64_FCVTN:
case OP_ARM64_FCVTN2: {
gboolean high = FALSE;
int iid = 0;
switch (ins->opcode) {
case OP_ARM64_FCVTXN2: high = TRUE; case OP_ARM64_FCVTXN: iid = INTRINS_AARCH64_ADV_SIMD_FCVTXN; break;
case OP_ARM64_FCVTN2: high = TRUE; break;
}
LLVMValueRef result = lhs;
if (high)
result = rhs;
if (iid)
result = call_intrins (ctx, iid, &result, "");
else
result = LLVMBuildFPTrunc (builder, result, v64_r4_t, "");
if (high)
result = concatenate_vectors (ctx, lhs, result);
values [ins->dreg] = result;
break;
}
case OP_ARM64_UCVTF:
case OP_ARM64_SCVTF:
case OP_ARM64_UCVTF_SCALAR:
case OP_ARM64_SCVTF_SCALAR: {
LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass);
gboolean scalar = FALSE;
gboolean is_unsigned = FALSE;
switch (ins->opcode) {
case OP_ARM64_UCVTF_SCALAR: scalar = TRUE; case OP_ARM64_UCVTF: is_unsigned = TRUE; break;
case OP_ARM64_SCVTF_SCALAR: scalar = TRUE; break;
}
LLVMValueRef result = lhs;
LLVMTypeRef cvt_t = ret_t;
if (scalar) {
result = scalar_from_vector (ctx, result);
cvt_t = LLVMGetElementType (ret_t);
}
if (is_unsigned)
result = LLVMBuildUIToFP (builder, result, cvt_t, "arm64_ucvtf");
else
result = LLVMBuildSIToFP (builder, result, cvt_t, "arm64_scvtf");
if (scalar)
result = vector_from_scalar (ctx, ret_t, result);
values [ins->dreg] = result;
break;
}
case OP_ARM64_FCVTZS:
case OP_ARM64_FCVTZS_SCALAR:
case OP_ARM64_FCVTZU:
case OP_ARM64_FCVTZU_SCALAR: {
LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass);
gboolean scalar = FALSE;
gboolean is_unsigned = FALSE;
switch (ins->opcode) {
case OP_ARM64_FCVTZU_SCALAR: scalar = TRUE; case OP_ARM64_FCVTZU: is_unsigned = TRUE; break;
case OP_ARM64_FCVTZS_SCALAR: scalar = TRUE; break;
}
LLVMValueRef result = lhs;
LLVMTypeRef cvt_t = ret_t;
if (scalar) {
result = scalar_from_vector (ctx, result);
cvt_t = LLVMGetElementType (ret_t);
}
if (is_unsigned)
result = LLVMBuildFPToUI (builder, result, cvt_t, "arm64_fcvtzu");
else
result = LLVMBuildFPToSI (builder, result, cvt_t, "arm64_fcvtzs");
if (scalar)
result = vector_from_scalar (ctx, ret_t, result);
values [ins->dreg] = result;
break;
}
case OP_ARM64_SELECT_SCALAR: {
LLVMValueRef result = LLVMBuildExtractElement (builder, lhs, rhs, "");
LLVMTypeRef elem_t = LLVMTypeOf (result);
unsigned int elem_bits = mono_llvm_get_prim_size_bits (elem_t);
LLVMTypeRef t = LLVMVectorType (elem_t, 64 / elem_bits);
result = vector_from_scalar (ctx, t, result);
values [ins->dreg] = result;
break;
}
case OP_ARM64_SELECT_QUAD: {
LLVMTypeRef src_type = simd_class_to_llvm_type (ctx, ins->data.op [1].klass);
LLVMTypeRef ret_type = simd_class_to_llvm_type (ctx, ins->klass);
unsigned int src_type_bits = mono_llvm_get_prim_size_bits (src_type);
unsigned int ret_type_bits = mono_llvm_get_prim_size_bits (ret_type);
unsigned int src_intermediate_elems = src_type_bits / 32;
unsigned int ret_intermediate_elems = ret_type_bits / 32;
LLVMTypeRef intermediate_type = LLVMVectorType (i4_t, src_intermediate_elems);
LLVMValueRef result = LLVMBuildBitCast (builder, lhs, intermediate_type, "arm64_select_quad");
result = LLVMBuildExtractElement (builder, result, rhs, "arm64_select_quad");
result = broadcast_element (ctx, result, ret_intermediate_elems);
result = LLVMBuildBitCast (builder, result, ret_type, "arm64_select_quad");
values [ins->dreg] = result;
break;
}
case OP_LSCNT32:
case OP_LSCNT64: {
// %shr = ashr i32 %x, 31
// %xor = xor i32 %shr, %x
// %mul = shl i32 %xor, 1
// %add = or i32 %mul, 1
// %0 = tail call i32 @llvm.ctlz.i32(i32 %add, i1 false)
LLVMValueRef shr = LLVMBuildAShr (builder, lhs, ins->opcode == OP_LSCNT32 ?
LLVMConstInt (LLVMInt32Type (), 31, FALSE) :
LLVMConstInt (LLVMInt64Type (), 63, FALSE), "");
LLVMValueRef one = ins->opcode == OP_LSCNT32 ?
LLVMConstInt (LLVMInt32Type (), 1, FALSE) :
LLVMConstInt (LLVMInt64Type (), 1, FALSE);
LLVMValueRef xor = LLVMBuildXor (builder, shr, lhs, "");
LLVMValueRef mul = LLVMBuildShl (builder, xor, one, "");
LLVMValueRef add = LLVMBuildOr (builder, mul, one, "");
LLVMValueRef args [2];
args [0] = add;
args [1] = LLVMConstInt (LLVMInt1Type (), 0, FALSE);
values [ins->dreg] = LLVMBuildCall (builder, get_intrins (ctx, ins->opcode == OP_LSCNT32 ? INTRINS_CTLZ_I32 : INTRINS_CTLZ_I64), args, 2, "");
break;
}
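/*
 * SQRDML[AS]H is emitted as SQRDMULH followed by a saturating
 * add/subtract with the accumulator.
 */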
case OP_ARM64_SQRDMLAH:
case OP_ARM64_SQRDMLAH_BYSCALAR:
case OP_ARM64_SQRDMLAH_SCALAR:
case OP_ARM64_SQRDMLSH:
case OP_ARM64_SQRDMLSH_BYSCALAR:
case OP_ARM64_SQRDMLSH_SCALAR: {
gboolean byscalar = FALSE;
gboolean scalar = FALSE;
gboolean subtract = FALSE;
switch (ins->opcode) {
case OP_ARM64_SQRDMLAH_BYSCALAR: byscalar = TRUE; break;
case OP_ARM64_SQRDMLAH_SCALAR: scalar = TRUE; break;
case OP_ARM64_SQRDMLSH: subtract = TRUE; break;
case OP_ARM64_SQRDMLSH_BYSCALAR: subtract = TRUE; byscalar = TRUE; break;
case OP_ARM64_SQRDMLSH_SCALAR: subtract = TRUE; scalar = TRUE; break;
}
int acc_iid = subtract ? INTRINS_AARCH64_ADV_SIMD_SQSUB : INTRINS_AARCH64_ADV_SIMD_SQADD;
LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass);
llvm_ovr_tag_t ovr_tag = ovr_tag_from_llvm_type (ret_t);
ScalarOpFromVectorOpCtx sctx = scalar_op_from_vector_op (ctx, ret_t, ins);
LLVMValueRef args [] = { lhs, rhs, arg3 };
if (byscalar) {
unsigned int elems = LLVMGetVectorSize (ret_t);
args [2] = broadcast_element (ctx, scalar_from_vector (ctx, args [2]), elems);
}
if (scalar) {
ovr_tag = sctx.ovr_tag;
scalar_op_from_vector_op_process_args (&sctx, args, 3);
}
LLVMValueRef result = call_overloaded_intrins (ctx, INTRINS_AARCH64_ADV_SIMD_SQRDMULH, ovr_tag, &args [1], "arm64_sqrdmlxh");
args [1] = result;
result = call_overloaded_intrins (ctx, acc_iid, ovr_tag, &args [0], "arm64_sqrdmlxh");
if (scalar)
result = scalar_op_from_vector_op_process_result (&sctx, result);
values [ins->dreg] = result;
break;
}
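/*
 * SMULH/UMULH: extend both 64-bit operands to i128, multiply, and take
 * bits [127:64] of the product.
 */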
case OP_ARM64_SMULH:
case OP_ARM64_UMULH: {
LLVMValueRef op1, op2;
if (ins->opcode == OP_ARM64_SMULH) {
op1 = LLVMBuildSExt (builder, lhs, LLVMInt128Type (), "");
op2 = LLVMBuildSExt (builder, rhs, LLVMInt128Type (), "");
} else {
op1 = LLVMBuildZExt (builder, lhs, LLVMInt128Type (), "");
op2 = LLVMBuildZExt (builder, rhs, LLVMInt128Type (), "");
}
LLVMValueRef mul = LLVMBuildMul (builder, op1, op2, "");
LLVMValueRef hi64 = LLVMBuildLShr (builder, mul,
LLVMConstInt (LLVMInt128Type (), 64, FALSE), "");
values [ins->dreg] = LLVMBuildTrunc (builder, hi64, LLVMInt64Type (), "");
break;
}
case OP_ARM64_XNARROW_SCALAR: {
// Unfortunately, @llvm.aarch64.neon.scalar.sqxtun isn't available for i8 or i16.
LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass);
llvm_ovr_tag_t ovr_tag = ovr_tag_from_llvm_type (ret_t);
LLVMTypeRef elem_t = LLVMGetElementType (ret_t);
LLVMValueRef result = NULL;
int iid = ins->inst_c0;
int scalar_iid = 0;
switch (iid) {
case INTRINS_AARCH64_ADV_SIMD_SQXTUN: scalar_iid = INTRINS_AARCH64_ADV_SIMD_SCALAR_SQXTUN; break;
case INTRINS_AARCH64_ADV_SIMD_SQXTN: scalar_iid = INTRINS_AARCH64_ADV_SIMD_SCALAR_SQXTN; break;
case INTRINS_AARCH64_ADV_SIMD_UQXTN: scalar_iid = INTRINS_AARCH64_ADV_SIMD_SCALAR_UQXTN; break;
default: g_assert_not_reached ();
}
if (elem_t == i4_t) {
LLVMValueRef arg = scalar_from_vector (ctx, lhs);
result = call_intrins (ctx, scalar_iid, &arg, "arm64_xnarrow_scalar");
result = vector_from_scalar (ctx, ret_t, result);
} else {
LLVMTypeRef arg_t = LLVMTypeOf (lhs);
LLVMTypeRef argelem_t = LLVMGetElementType (arg_t);
unsigned int argelems = LLVMGetVectorSize (arg_t);
LLVMValueRef arg = keep_lowest_element (ctx, LLVMVectorType (argelem_t, argelems * 2), lhs);
result = call_overloaded_intrins (ctx, iid, ovr_tag, &arg, "arm64_xnarrow_scalar");
result = keep_lowest_element (ctx, LLVMTypeOf (result), result);
}
values [ins->dreg] = result;
break;
}
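	/*
	 * Narrowing conversions. The saturating forms map to the qxtn
	 * intrinsics; plain XTN is just a trunc. The *2 ("high") variants
	 * narrow into the upper half of an existing lower-half result.
	 */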
case OP_ARM64_SQXTUN2:
case OP_ARM64_UQXTN2:
case OP_ARM64_SQXTN2:
case OP_ARM64_XTN:
case OP_ARM64_XTN2: {
llvm_ovr_tag_t ovr_tag = ovr_tag_from_mono_vector_class (ins->klass);
gboolean high = FALSE;
int iid = 0;
switch (ins->opcode) {
case OP_ARM64_SQXTUN2: high = TRUE; iid = INTRINS_AARCH64_ADV_SIMD_SQXTUN; break;
case OP_ARM64_UQXTN2: high = TRUE; iid = INTRINS_AARCH64_ADV_SIMD_UQXTN; break;
case OP_ARM64_SQXTN2: high = TRUE; iid = INTRINS_AARCH64_ADV_SIMD_SQXTN; break;
case OP_ARM64_XTN2: high = TRUE; break;
}
LLVMValueRef result = lhs;
if (high) {
result = rhs;
ovr_tag = ovr_tag_smaller_vector (ovr_tag);
}
LLVMTypeRef t = LLVMTypeOf (result);
LLVMTypeRef elem_t = LLVMGetElementType (t);
unsigned int elems = LLVMGetVectorSize (t);
unsigned int elem_bits = mono_llvm_get_prim_size_bits (elem_t);
LLVMTypeRef result_t = LLVMVectorType (LLVMIntType (elem_bits / 2), elems);
if (iid != 0)
result = call_overloaded_intrins (ctx, iid, ovr_tag, &result, "");
else
result = LLVMBuildTrunc (builder, result, result_t, "arm64_xtn");
if (high)
result = concatenate_vectors (ctx, lhs, result);
values [ins->dreg] = result;
break;
}
case OP_ARM64_CLZ: {
llvm_ovr_tag_t ovr_tag = ovr_tag_from_mono_vector_class (ins->klass);
LLVMValueRef args [] = { lhs, const_int1 (0) };
LLVMValueRef result = call_overloaded_intrins (ctx, INTRINS_AARCH64_ADV_SIMD_CLZ, ovr_tag, args, "");
values [ins->dreg] = result;
break;
}
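	/*
	 * Fused multiply-add family. Every variant funnels into the same FMA
	 * intrinsic; the subtract/negate forms are expressed by negating the
	 * appropriate operands up front.
	 */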
case OP_ARM64_FMSUB:
case OP_ARM64_FMSUB_BYSCALAR:
case OP_ARM64_FMSUB_SCALAR:
case OP_ARM64_FNMSUB_SCALAR:
case OP_ARM64_FMADD:
case OP_ARM64_FMADD_BYSCALAR:
case OP_ARM64_FMADD_SCALAR:
case OP_ARM64_FNMADD_SCALAR: {
llvm_ovr_tag_t ovr_tag = ovr_tag_from_mono_vector_class (ins->klass);
gboolean scalar = FALSE;
gboolean negate = FALSE;
gboolean subtract = FALSE;
gboolean byscalar = FALSE;
switch (ins->opcode) {
case OP_ARM64_FMSUB: subtract = TRUE; break;
case OP_ARM64_FMSUB_BYSCALAR: subtract = TRUE; byscalar = TRUE; break;
case OP_ARM64_FMSUB_SCALAR: subtract = TRUE; scalar = TRUE; break;
case OP_ARM64_FNMSUB_SCALAR: subtract = TRUE; scalar = TRUE; negate = TRUE; break;
case OP_ARM64_FMADD: break;
case OP_ARM64_FMADD_BYSCALAR: byscalar = TRUE; break;
case OP_ARM64_FMADD_SCALAR: scalar = TRUE; break;
case OP_ARM64_FNMADD_SCALAR: scalar = TRUE; negate = TRUE; break;
}
// llvm.fma argument order: mulop1, mulop2, addend
LLVMValueRef args [] = { rhs, arg3, lhs };
if (byscalar) {
unsigned int elems = LLVMGetVectorSize (LLVMTypeOf (args [0]));
args [1] = broadcast_element (ctx, scalar_from_vector (ctx, args [1]), elems);
}
if (scalar) {
ovr_tag = ovr_tag_force_scalar (ovr_tag);
for (int i = 0; i < 3; ++i)
args [i] = scalar_from_vector (ctx, args [i]);
}
if (subtract)
args [0] = LLVMBuildFNeg (builder, args [0], "arm64_fma_sub");
if (negate) {
args [0] = LLVMBuildFNeg (builder, args [0], "arm64_fma_negate");
args [2] = LLVMBuildFNeg (builder, args [2], "arm64_fma_negate");
}
LLVMValueRef result = call_overloaded_intrins (ctx, INTRINS_AARCH64_ADV_SIMD_FMA, ovr_tag, args, "arm64_fma");
if (scalar)
result = vector_from_scalar (ctx, LLVMTypeOf (lhs), result);
values [ins->dreg] = result;
break;
}
case OP_ARM64_SQDMULL:
case OP_ARM64_SQDMULL_BYSCALAR:
case OP_ARM64_SQDMULL2:
case OP_ARM64_SQDMULL2_BYSCALAR:
case OP_ARM64_SQDMLAL:
case OP_ARM64_SQDMLAL_BYSCALAR:
case OP_ARM64_SQDMLAL2:
case OP_ARM64_SQDMLAL2_BYSCALAR:
case OP_ARM64_SQDMLSL:
case OP_ARM64_SQDMLSL_BYSCALAR:
case OP_ARM64_SQDMLSL2:
case OP_ARM64_SQDMLSL2_BYSCALAR: {
llvm_ovr_tag_t ovr_tag = ovr_tag_from_mono_vector_class (ins->klass);
gboolean scalar = FALSE;
gboolean add = FALSE;
gboolean subtract = FALSE;
gboolean high = FALSE;
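		/* The BYSCALAR cases deliberately fall through: they set 'scalar'
		 * and then share the handling of their base opcode. */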
switch (ins->opcode) {
case OP_ARM64_SQDMULL_BYSCALAR: scalar = TRUE; case OP_ARM64_SQDMULL: break;
case OP_ARM64_SQDMULL2_BYSCALAR: scalar = TRUE; case OP_ARM64_SQDMULL2: high = TRUE; break;
case OP_ARM64_SQDMLAL_BYSCALAR: scalar = TRUE; case OP_ARM64_SQDMLAL: add = TRUE; break;
case OP_ARM64_SQDMLAL2_BYSCALAR: scalar = TRUE; case OP_ARM64_SQDMLAL2: high = TRUE; add = TRUE; break;
case OP_ARM64_SQDMLSL_BYSCALAR: scalar = TRUE; case OP_ARM64_SQDMLSL: subtract = TRUE; break;
case OP_ARM64_SQDMLSL2_BYSCALAR: scalar = TRUE; case OP_ARM64_SQDMLSL2: high = TRUE; subtract = TRUE; break;
}
int iid = 0;
if (add)
iid = INTRINS_AARCH64_ADV_SIMD_SQADD;
else if (subtract)
iid = INTRINS_AARCH64_ADV_SIMD_SQSUB;
LLVMValueRef mul1 = lhs;
LLVMValueRef mul2 = rhs;
if (iid != 0) {
mul1 = rhs;
mul2 = arg3;
}
if (scalar) {
LLVMTypeRef t = LLVMTypeOf (mul1);
unsigned int elems = LLVMGetVectorSize (t);
mul2 = broadcast_element (ctx, scalar_from_vector (ctx, mul2), elems);
}
LLVMValueRef args [] = { mul1, mul2 };
if (high)
for (int i = 0; i < 2; ++i)
args [i] = extract_high_elements (ctx, args [i]);
LLVMValueRef result = call_overloaded_intrins (ctx, INTRINS_AARCH64_ADV_SIMD_SQDMULL, ovr_tag, args, "");
LLVMValueRef args2 [] = { lhs, result };
if (iid != 0)
result = call_overloaded_intrins (ctx, iid, ovr_tag, args2, "");
values [ins->dreg] = result;
break;
}
case OP_ARM64_SQDMULL_SCALAR:
case OP_ARM64_SQDMLAL_SCALAR:
case OP_ARM64_SQDMLSL_SCALAR: {
/*
* define dso_local i32 @__vqdmlslh_lane_s16(i32, i16, <4 x i16>, i32) local_unnamed_addr #0 {
* %5 = insertelement <4 x i16> undef, i16 %1, i64 0
* %6 = shufflevector <4 x i16> %2, <4 x i16> undef, <4 x i32> <i32 3, i32 undef, i32 undef, i32 undef>
* %7 = tail call <4 x i32> @llvm.aarch64.neon.sqdmull.v4i32(<4 x i16> %5, <4 x i16> %6)
* %8 = extractelement <4 x i32> %7, i64 0
* %9 = tail call i32 @llvm.aarch64.neon.sqsub.i32(i32 %0, i32 %8)
* ret i32 %9
* }
*
* define dso_local i64 @__vqdmlals_s32(i64, i32, i32) local_unnamed_addr #0 {
* %4 = tail call i64 @llvm.aarch64.neon.sqdmulls.scalar(i32 %1, i32 %2) #2
* %5 = tail call i64 @llvm.aarch64.neon.sqadd.i64(i64 %0, i64 %4) #2
* ret i64 %5
* }
*/
int mulid = INTRINS_AARCH64_ADV_SIMD_SQDMULL;
int iid = 0;
gboolean scalar_mul_result = FALSE;
gboolean scalar_acc_result = FALSE;
switch (ins->opcode) {
case OP_ARM64_SQDMLAL_SCALAR: iid = INTRINS_AARCH64_ADV_SIMD_SQADD; break;
case OP_ARM64_SQDMLSL_SCALAR: iid = INTRINS_AARCH64_ADV_SIMD_SQSUB; break;
}
LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass);
LLVMValueRef mularg = lhs;
LLVMValueRef selected_scalar = rhs;
if (iid != 0) {
mularg = rhs;
selected_scalar = arg3;
}
llvm_ovr_tag_t multag = ovr_tag_smaller_elements (ovr_tag_from_llvm_type (ret_t));
llvm_ovr_tag_t iidtag = ovr_tag_force_scalar (ovr_tag_from_llvm_type (ret_t));
LLVMTypeRef mularg_t = ovr_tag_to_llvm_type (multag);
if (multag & INTRIN_int32) {
/* The (i32, i32) -> i64 variant of aarch64_neon_sqdmull has
* a unique, non-overloaded name.
*/
mulid = INTRINS_AARCH64_ADV_SIMD_SQDMULL_SCALAR;
multag = 0;
iidtag = INTRIN_int64 | INTRIN_scalar;
scalar_mul_result = TRUE;
scalar_acc_result = TRUE;
} else if (multag & INTRIN_int16) {
/* We were passed a (<4 x i16>, <4 x i16>) but the
* widening multiplication intrinsic will yield a <4 x i32>.
*/
multag = INTRIN_int32 | INTRIN_vector128;
} else
g_assert_not_reached ();
if (scalar_mul_result) {
mularg = scalar_from_vector (ctx, mularg);
selected_scalar = scalar_from_vector (ctx, selected_scalar);
} else {
mularg = keep_lowest_element (ctx, mularg_t, mularg);
selected_scalar = keep_lowest_element (ctx, mularg_t, selected_scalar);
}
LLVMValueRef mulargs [] = { mularg, selected_scalar };
LLVMValueRef result = call_overloaded_intrins (ctx, mulid, multag, mulargs, "arm64_sqdmull_scalar");
if (iid != 0) {
LLVMValueRef acc = scalar_from_vector (ctx, lhs);
if (!scalar_mul_result)
result = scalar_from_vector (ctx, result);
LLVMValueRef subargs [] = { acc, result };
result = call_overloaded_intrins (ctx, iid, iidtag, subargs, "arm64_sqdmlxl_scalar");
scalar_acc_result = TRUE;
}
if (scalar_acc_result)
result = vector_from_scalar (ctx, ret_t, result);
else
result = keep_lowest_element (ctx, ret_t, result);
values [ins->dreg] = result;
break;
}
case OP_ARM64_FMUL_SEL: {
LLVMValueRef mul2 = LLVMBuildExtractElement (builder, rhs, arg3, "");
LLVMValueRef mul1 = scalar_from_vector (ctx, lhs);
LLVMValueRef result = LLVMBuildFMul (builder, mul1, mul2, "arm64_fmul_sel");
result = vector_from_scalar (ctx, LLVMTypeOf (lhs), result);
values [ins->dreg] = result;
break;
}
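	/* Multiply-accumulate, by vector or by a broadcast scalar lane,
	 * expressed with plain IR mul + add/sub. */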
case OP_ARM64_MLA:
case OP_ARM64_MLA_SCALAR:
case OP_ARM64_MLS:
case OP_ARM64_MLS_SCALAR: {
gboolean scalar = FALSE;
gboolean add = FALSE;
switch (ins->opcode) {
case OP_ARM64_MLA_SCALAR: scalar = TRUE; case OP_ARM64_MLA: add = TRUE; break;
case OP_ARM64_MLS_SCALAR: scalar = TRUE; case OP_ARM64_MLS: break;
}
LLVMTypeRef mul_t = LLVMTypeOf (rhs);
unsigned int elems = LLVMGetVectorSize (mul_t);
LLVMValueRef mul2 = arg3;
if (scalar)
mul2 = broadcast_element (ctx, scalar_from_vector (ctx, mul2), elems);
LLVMValueRef result = LLVMBuildMul (builder, rhs, mul2, "");
if (add)
result = LLVMBuildAdd (builder, lhs, result, "");
else
result = LLVMBuildSub (builder, lhs, result, "");
values [ins->dreg] = result;
break;
}
case OP_ARM64_SMULL:
case OP_ARM64_SMULL_SCALAR:
case OP_ARM64_SMULL2:
case OP_ARM64_SMULL2_SCALAR:
case OP_ARM64_UMULL:
case OP_ARM64_UMULL_SCALAR:
case OP_ARM64_UMULL2:
case OP_ARM64_UMULL2_SCALAR:
case OP_ARM64_SMLAL:
case OP_ARM64_SMLAL_SCALAR:
case OP_ARM64_SMLAL2:
case OP_ARM64_SMLAL2_SCALAR:
case OP_ARM64_UMLAL:
case OP_ARM64_UMLAL_SCALAR:
case OP_ARM64_UMLAL2:
case OP_ARM64_UMLAL2_SCALAR:
case OP_ARM64_SMLSL:
case OP_ARM64_SMLSL_SCALAR:
case OP_ARM64_SMLSL2:
case OP_ARM64_SMLSL2_SCALAR:
case OP_ARM64_UMLSL:
case OP_ARM64_UMLSL_SCALAR:
case OP_ARM64_UMLSL2:
case OP_ARM64_UMLSL2_SCALAR: {
llvm_ovr_tag_t ovr_tag = ovr_tag_from_mono_vector_class (ins->klass);
gboolean is_unsigned = FALSE;
gboolean high = FALSE;
gboolean add = FALSE;
gboolean subtract = FALSE;
gboolean scalar = FALSE;
int opcode = ins->opcode;
switch (opcode) {
case OP_ARM64_SMULL_SCALAR: scalar = TRUE; opcode = OP_ARM64_SMULL; break;
case OP_ARM64_UMULL_SCALAR: scalar = TRUE; opcode = OP_ARM64_UMULL; break;
case OP_ARM64_SMLAL_SCALAR: scalar = TRUE; opcode = OP_ARM64_SMLAL; break;
case OP_ARM64_UMLAL_SCALAR: scalar = TRUE; opcode = OP_ARM64_UMLAL; break;
case OP_ARM64_SMLSL_SCALAR: scalar = TRUE; opcode = OP_ARM64_SMLSL; break;
case OP_ARM64_UMLSL_SCALAR: scalar = TRUE; opcode = OP_ARM64_UMLSL; break;
case OP_ARM64_SMULL2_SCALAR: scalar = TRUE; opcode = OP_ARM64_SMULL2; break;
case OP_ARM64_UMULL2_SCALAR: scalar = TRUE; opcode = OP_ARM64_UMULL2; break;
case OP_ARM64_SMLAL2_SCALAR: scalar = TRUE; opcode = OP_ARM64_SMLAL2; break;
case OP_ARM64_UMLAL2_SCALAR: scalar = TRUE; opcode = OP_ARM64_UMLAL2; break;
case OP_ARM64_SMLSL2_SCALAR: scalar = TRUE; opcode = OP_ARM64_SMLSL2; break;
case OP_ARM64_UMLSL2_SCALAR: scalar = TRUE; opcode = OP_ARM64_UMLSL2; break;
}
switch (opcode) {
case OP_ARM64_SMULL2: high = TRUE; case OP_ARM64_SMULL: break;
case OP_ARM64_UMULL2: high = TRUE; case OP_ARM64_UMULL: is_unsigned = TRUE; break;
case OP_ARM64_SMLAL2: high = TRUE; case OP_ARM64_SMLAL: add = TRUE; break;
case OP_ARM64_UMLAL2: high = TRUE; case OP_ARM64_UMLAL: add = TRUE; is_unsigned = TRUE; break;
case OP_ARM64_SMLSL2: high = TRUE; case OP_ARM64_SMLSL: subtract = TRUE; break;
case OP_ARM64_UMLSL2: high = TRUE; case OP_ARM64_UMLSL: subtract = TRUE; is_unsigned = TRUE; break;
}
int iid = is_unsigned ? INTRINS_AARCH64_ADV_SIMD_UMULL : INTRINS_AARCH64_ADV_SIMD_SMULL;
LLVMValueRef intrin_args [] = { lhs, rhs };
if (add || subtract) {
intrin_args [0] = rhs;
intrin_args [1] = arg3;
}
if (scalar) {
LLVMValueRef sarg = intrin_args [1];
LLVMTypeRef t = LLVMTypeOf (intrin_args [0]);
unsigned int elems = LLVMGetVectorSize (t);
sarg = broadcast_element (ctx, scalar_from_vector (ctx, sarg), elems);
intrin_args [1] = sarg;
}
if (high)
for (int i = 0; i < 2; ++i)
intrin_args [i] = extract_high_elements (ctx, intrin_args [i]);
LLVMValueRef result = call_overloaded_intrins (ctx, iid, ovr_tag, intrin_args, "");
if (add)
result = LLVMBuildAdd (builder, lhs, result, "");
if (subtract)
result = LLVMBuildSub (builder, lhs, result, "");
values [ins->dreg] = result;
break;
}
case OP_ARM64_XNEG:
case OP_ARM64_XNEG_SCALAR: {
gboolean scalar = ins->opcode == OP_ARM64_XNEG_SCALAR;
gboolean is_float = FALSE;
switch (inst_c1_type (ins)) {
		case MONO_TYPE_R4: case MONO_TYPE_R8: is_float = TRUE; break;
}
LLVMValueRef result = lhs;
if (scalar)
result = scalar_from_vector (ctx, result);
if (is_float)
result = LLVMBuildFNeg (builder, result, "arm64_xneg");
else
result = LLVMBuildNeg (builder, result, "arm64_xneg");
if (scalar)
result = vector_from_scalar (ctx, LLVMTypeOf (lhs), result);
values [ins->dreg] = result;
break;
}
case OP_ARM64_PMULL:
case OP_ARM64_PMULL2: {
gboolean high = ins->opcode == OP_ARM64_PMULL2;
LLVMValueRef args [] = { lhs, rhs };
if (high)
for (int i = 0; i < 2; ++i)
args [i] = extract_high_elements (ctx, args [i]);
LLVMValueRef result = call_intrins (ctx, INTRINS_AARCH64_ADV_SIMD_PMULL, args, "arm64_pmull");
values [ins->dreg] = result;
break;
}
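	/*
	 * Reverse the inst_c0-bit wide chunks inside each vector element:
	 * bitcast to a vector of inst_c0-bit integers, then shuffle with a
	 * precomputed per-group reversal mask.
	 */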
case OP_ARM64_REVN: {
LLVMTypeRef t = LLVMTypeOf (lhs);
LLVMTypeRef elem_t = LLVMGetElementType (t);
unsigned int group_bits = mono_llvm_get_prim_size_bits (elem_t);
unsigned int vec_bits = mono_llvm_get_prim_size_bits (t);
unsigned int tmp_bits = ins->inst_c0;
unsigned int tmp_elements = vec_bits / tmp_bits;
const int cycle8 [] = { 7, 6, 5, 4, 3, 2, 1, 0, 15, 14, 13, 12, 11, 10, 9, 8 };
const int cycle4 [] = { 3, 2, 1, 0, 7, 6, 5, 4, 11, 10, 9, 8, 15, 14, 13, 12 };
const int cycle2 [] = { 1, 0, 3, 2, 5, 4, 7, 6, 9, 8, 11, 10, 13, 12, 15, 14 };
const int *cycle = NULL;
switch (group_bits / tmp_bits) {
case 2: cycle = cycle2; break;
case 4: cycle = cycle4; break;
case 8: cycle = cycle8; break;
default: g_assert_not_reached ();
}
g_assert (tmp_elements <= ARM64_MAX_VECTOR_ELEMS);
LLVMTypeRef tmp_t = LLVMVectorType (LLVMIntType (tmp_bits), tmp_elements);
LLVMValueRef tmp = LLVMBuildBitCast (builder, lhs, tmp_t, "arm64_revn");
LLVMValueRef result = LLVMBuildShuffleVector (builder, tmp, LLVMGetUndef (tmp_t), create_const_vector_i32 (cycle, tmp_elements), "");
result = LLVMBuildBitCast (builder, result, t, "");
values [ins->dreg] = result;
break;
}
case OP_ARM64_SHL:
case OP_ARM64_SSHR:
case OP_ARM64_SSRA:
case OP_ARM64_USHR:
case OP_ARM64_USRA: {
gboolean right = FALSE;
gboolean add = FALSE;
gboolean arith = FALSE;
switch (ins->opcode) {
case OP_ARM64_USHR: right = TRUE; break;
case OP_ARM64_USRA: right = TRUE; add = TRUE; break;
case OP_ARM64_SSHR: arith = TRUE; break;
case OP_ARM64_SSRA: arith = TRUE; add = TRUE; break;
}
LLVMValueRef shiftarg = lhs;
LLVMValueRef shift = rhs;
if (add) {
shiftarg = rhs;
shift = arg3;
}
shift = create_shift_vector (ctx, shiftarg, shift);
LLVMValueRef result = NULL;
if (right)
result = LLVMBuildLShr (builder, shiftarg, shift, "");
else if (arith)
result = LLVMBuildAShr (builder, shiftarg, shift, "");
else
result = LLVMBuildShl (builder, shiftarg, shift, "");
if (add)
result = LLVMBuildAdd (builder, lhs, result, "arm64_usra");
values [ins->dreg] = result;
break;
}
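	/* Shift right and narrow: lshr by the splatted shift amount, then
	 * trunc each element to half width. The *2 variant places the result
	 * in the upper half of the destination. */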
case OP_ARM64_SHRN:
case OP_ARM64_SHRN2: {
LLVMValueRef shiftarg = lhs;
LLVMValueRef shift = rhs;
gboolean high = ins->opcode == OP_ARM64_SHRN2;
if (high) {
shiftarg = rhs;
shift = arg3;
}
LLVMTypeRef arg_t = LLVMTypeOf (shiftarg);
LLVMTypeRef elem_t = LLVMGetElementType (arg_t);
unsigned int elems = LLVMGetVectorSize (arg_t);
unsigned int bits = mono_llvm_get_prim_size_bits (elem_t);
LLVMTypeRef trunc_t = LLVMVectorType (LLVMIntType (bits / 2), elems);
shift = create_shift_vector (ctx, shiftarg, shift);
LLVMValueRef result = LLVMBuildLShr (builder, shiftarg, shift, "shrn");
result = LLVMBuildTrunc (builder, result, trunc_t, "");
if (high) {
result = concatenate_vectors (ctx, lhs, result);
}
values [ins->dreg] = result;
break;
}
case OP_ARM64_SRSHR:
case OP_ARM64_SRSRA:
case OP_ARM64_URSHR:
case OP_ARM64_URSRA: {
llvm_ovr_tag_t ovr_tag = ovr_tag_from_mono_vector_class (ins->klass);
LLVMValueRef shiftarg = lhs;
LLVMValueRef shift = rhs;
gboolean right = FALSE;
gboolean add = FALSE;
switch (ins->opcode) {
case OP_ARM64_URSRA: add = TRUE; case OP_ARM64_URSHR: right = TRUE; break;
case OP_ARM64_SRSRA: add = TRUE; case OP_ARM64_SRSHR: right = TRUE; break;
}
int iid = 0;
switch (ins->opcode) {
case OP_ARM64_URSRA: case OP_ARM64_URSHR: iid = INTRINS_AARCH64_ADV_SIMD_URSHL; break;
case OP_ARM64_SRSRA: case OP_ARM64_SRSHR: iid = INTRINS_AARCH64_ADV_SIMD_SRSHL; break;
}
if (add) {
shiftarg = rhs;
shift = arg3;
}
if (right)
shift = LLVMBuildNeg (builder, shift, "");
shift = create_shift_vector (ctx, shiftarg, shift);
LLVMValueRef args [] = { shiftarg, shift };
LLVMValueRef result = call_overloaded_intrins (ctx, iid, ovr_tag, args, "");
if (add)
result = LLVMBuildAdd (builder, result, lhs, "");
values [ins->dreg] = result;
break;
}
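	/*
	 * The narrowing shift intrinsics take an immediate shift amount. When
	 * the amount is not a compile-time constant, emit a switch over every
	 * valid immediate (the immediate_unroll_* helpers) and pick the
	 * matching intrinsic call at run time.
	 */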
case OP_ARM64_XNSHIFT_SCALAR:
case OP_ARM64_XNSHIFT:
case OP_ARM64_XNSHIFT2: {
LLVMTypeRef intrin_result_t = simd_class_to_llvm_type (ctx, ins->klass);
llvm_ovr_tag_t ovr_tag = ovr_tag_from_llvm_type (intrin_result_t);
LLVMValueRef shift_arg = lhs;
LLVMValueRef shift_amount = rhs;
gboolean high = FALSE;
gboolean scalar = FALSE;
int iid = ins->inst_c0;
switch (ins->opcode) {
case OP_ARM64_XNSHIFT_SCALAR: scalar = TRUE; break;
case OP_ARM64_XNSHIFT2: high = TRUE; break;
}
if (high) {
shift_arg = rhs;
shift_amount = arg3;
ovr_tag = ovr_tag_smaller_vector (ovr_tag);
intrin_result_t = ovr_tag_to_llvm_type (ovr_tag);
}
LLVMTypeRef shift_arg_t = LLVMTypeOf (shift_arg);
LLVMTypeRef shift_arg_elem_t = LLVMGetElementType (shift_arg_t);
unsigned int element_bits = mono_llvm_get_prim_size_bits (shift_arg_elem_t);
int range_min = 1;
int range_max = element_bits / 2;
if (scalar) {
unsigned int elems = LLVMGetVectorSize (shift_arg_t);
LLVMValueRef lo = scalar_from_vector (ctx, shift_arg);
shift_arg = vector_from_scalar (ctx, LLVMVectorType (shift_arg_elem_t, elems * 2), lo);
}
int max_index = range_max - range_min + 1;
ImmediateUnrollCtx ictx = immediate_unroll_begin (ctx, bb, max_index, shift_amount, intrin_result_t, "arm64_xnshift");
int i = 0;
while (immediate_unroll_next (&ictx, &i)) {
int shift_const = i + range_min;
LLVMValueRef intrin_args [] = { shift_arg, const_int32 (shift_const) };
LLVMValueRef result = call_overloaded_intrins (ctx, iid, ovr_tag, intrin_args, "");
immediate_unroll_commit (&ictx, shift_const, result);
}
{
immediate_unroll_default (&ictx);
LLVMValueRef intrin_args [] = { shift_arg, const_int32 (range_max) };
LLVMValueRef result = call_overloaded_intrins (ctx, iid, ovr_tag, intrin_args, "");
immediate_unroll_commit_default (&ictx, result);
}
LLVMValueRef result = immediate_unroll_end (&ictx, &cbb);
if (high)
result = concatenate_vectors (ctx, lhs, result);
if (scalar)
result = keep_lowest_element (ctx, LLVMTypeOf (result), result);
values [ins->dreg] = result;
break;
}
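	/* Saturating shift left unsigned: same immediate-unrolling scheme as
	 * the narrowing shifts above. */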
case OP_ARM64_SQSHLU:
case OP_ARM64_SQSHLU_SCALAR: {
gboolean scalar = ins->opcode == OP_ARM64_SQSHLU_SCALAR;
LLVMTypeRef intrin_result_t = simd_class_to_llvm_type (ctx, ins->klass);
LLVMTypeRef elem_t = LLVMGetElementType (intrin_result_t);
unsigned int element_bits = mono_llvm_get_prim_size_bits (elem_t);
llvm_ovr_tag_t ovr_tag = ovr_tag_from_llvm_type (intrin_result_t);
int max_index = element_bits;
ScalarOpFromVectorOpCtx sctx = scalar_op_from_vector_op (ctx, intrin_result_t, ins);
intrin_result_t = scalar ? sctx.intermediate_type : intrin_result_t;
ovr_tag = scalar ? sctx.ovr_tag : ovr_tag;
ImmediateUnrollCtx ictx = immediate_unroll_begin (ctx, bb, max_index, rhs, intrin_result_t, "arm64_sqshlu");
int i = 0;
while (immediate_unroll_next (&ictx, &i)) {
int shift_const = i;
LLVMValueRef args [2] = { lhs, create_shift_vector (ctx, lhs, const_int32 (shift_const)) };
if (scalar)
scalar_op_from_vector_op_process_args (&sctx, args, 2);
LLVMValueRef result = call_overloaded_intrins (ctx, INTRINS_AARCH64_ADV_SIMD_SQSHLU, ovr_tag, args, "");
immediate_unroll_commit (&ictx, shift_const, result);
}
{
immediate_unroll_default (&ictx);
LLVMValueRef srcarg = lhs;
if (scalar)
scalar_op_from_vector_op_process_args (&sctx, &srcarg, 1);
immediate_unroll_commit_default (&ictx, srcarg);
}
LLVMValueRef result = immediate_unroll_end (&ictx, &cbb);
if (scalar)
result = scalar_op_from_vector_op_process_result (&sctx, result);
values [ins->dreg] = result;
break;
}
case OP_ARM64_SSHLL:
case OP_ARM64_SSHLL2:
case OP_ARM64_USHLL:
case OP_ARM64_USHLL2: {
LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass);
gboolean high = FALSE;
gboolean is_unsigned = FALSE;
switch (ins->opcode) {
case OP_ARM64_SSHLL2: high = TRUE; break;
case OP_ARM64_USHLL2: high = TRUE; case OP_ARM64_USHLL: is_unsigned = TRUE; break;
}
LLVMValueRef result = lhs;
if (high)
result = extract_high_elements (ctx, result);
if (is_unsigned)
result = LLVMBuildZExt (builder, result, ret_t, "arm64_ushll");
else
			result = LLVMBuildSExt (builder, result, ret_t, "arm64_sshll");
result = LLVMBuildShl (builder, result, create_shift_vector (ctx, result, rhs), "");
values [ins->dreg] = result;
break;
}
case OP_ARM64_SLI:
case OP_ARM64_SRI: {
LLVMTypeRef intrin_result_t = simd_class_to_llvm_type (ctx, ins->klass);
llvm_ovr_tag_t ovr_tag = ovr_tag_from_llvm_type (intrin_result_t);
unsigned int element_bits = mono_llvm_get_prim_size_bits (LLVMGetElementType (intrin_result_t));
int range_min = 0;
int range_max = element_bits - 1;
if (ins->opcode == OP_ARM64_SRI) {
++range_min;
++range_max;
}
int iid = ins->opcode == OP_ARM64_SRI ? INTRINS_AARCH64_ADV_SIMD_SRI : INTRINS_AARCH64_ADV_SIMD_SLI;
int max_index = range_max - range_min + 1;
ImmediateUnrollCtx ictx = immediate_unroll_begin (ctx, bb, max_index, arg3, intrin_result_t, "arm64_ext");
LLVMValueRef intrin_args [3] = { lhs, rhs, arg3 };
int i = 0;
while (immediate_unroll_next (&ictx, &i)) {
int shift_const = i + range_min;
intrin_args [2] = const_int32 (shift_const);
LLVMValueRef result = call_overloaded_intrins (ctx, iid, ovr_tag, intrin_args, "");
immediate_unroll_commit (&ictx, shift_const, result);
}
immediate_unroll_default (&ictx);
immediate_unroll_commit_default (&ictx, lhs);
LLVMValueRef result = immediate_unroll_end (&ictx, &cbb);
values [ins->dreg] = result;
break;
}
case OP_ARM64_SQRT_SCALAR: {
int iid = ins->inst_c0 == MONO_TYPE_R8 ? INTRINS_SQRT : INTRINS_SQRTF;
LLVMTypeRef t = LLVMTypeOf (lhs);
LLVMValueRef scalar = LLVMBuildExtractElement (builder, lhs, const_int32 (0), "");
LLVMValueRef result = call_intrins (ctx, iid, &scalar, "arm64_sqrt_scalar");
values [ins->dreg] = LLVMBuildInsertElement (builder, LLVMGetUndef (t), result, const_int32 (0), "");
break;
}
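	/* Store a pair of vectors (or scalar halves) with a single wide
	 * store; the STNP forms additionally set the nontemporal flag on the
	 * store. */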
case OP_ARM64_STP:
case OP_ARM64_STP_SCALAR:
case OP_ARM64_STNP:
case OP_ARM64_STNP_SCALAR: {
gboolean nontemporal = FALSE;
gboolean scalar = FALSE;
switch (ins->opcode) {
case OP_ARM64_STNP: nontemporal = TRUE; break;
case OP_ARM64_STNP_SCALAR: nontemporal = TRUE; scalar = TRUE; break;
case OP_ARM64_STP_SCALAR: scalar = TRUE; break;
}
LLVMTypeRef rhs_t = LLVMTypeOf (rhs);
LLVMValueRef val = NULL;
LLVMTypeRef dst_t = LLVMPointerType (rhs_t, 0);
if (scalar)
val = LLVMBuildShuffleVector (builder, rhs, arg3, create_const_vector_2_i32 (0, 2), "");
else {
unsigned int rhs_elems = LLVMGetVectorSize (rhs_t);
LLVMTypeRef rhs_elt_t = LLVMGetElementType (rhs_t);
dst_t = LLVMPointerType (LLVMVectorType (rhs_elt_t, rhs_elems * 2), 0);
val = concatenate_vectors (ctx, rhs, arg3);
}
LLVMValueRef address = convert (ctx, lhs, dst_t);
LLVMValueRef store = mono_llvm_build_store (builder, val, address, FALSE, LLVM_BARRIER_NONE);
if (nontemporal)
set_nontemporal_flag (store);
break;
}
case OP_ARM64_LD1_INSERT: {
LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass);
LLVMTypeRef elem_t = LLVMGetElementType (ret_t);
LLVMValueRef address = convert (ctx, arg3, LLVMPointerType (elem_t, 0));
unsigned int alignment = mono_llvm_get_prim_size_bits (ret_t) / 8;
LLVMValueRef result = mono_llvm_build_aligned_load (builder, address, "arm64_ld1_insert", FALSE, alignment);
result = LLVMBuildInsertElement (builder, lhs, result, rhs, "arm64_ld1_insert");
values [ins->dreg] = result;
break;
}
case OP_ARM64_LD1R:
case OP_ARM64_LD1: {
gboolean replicate = ins->opcode == OP_ARM64_LD1R;
LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass);
unsigned int alignment = mono_llvm_get_prim_size_bits (ret_t) / 8;
LLVMValueRef address = lhs;
LLVMTypeRef address_t = LLVMPointerType (ret_t, 0);
if (replicate) {
LLVMTypeRef elem_t = LLVMGetElementType (ret_t);
address_t = LLVMPointerType (elem_t, 0);
}
address = convert (ctx, address, address_t);
LLVMValueRef result = mono_llvm_build_aligned_load (builder, address, "arm64_ld1", FALSE, alignment);
if (replicate) {
unsigned int elems = LLVMGetVectorSize (ret_t);
result = broadcast_element (ctx, result, elems);
}
values [ins->dreg] = result;
break;
}
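	/* Load a pair of adjacent vectors (or scalars) and package them into
	 * a value tuple, spilled to a stack slot for the vtype result. */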
case OP_ARM64_LDNP:
case OP_ARM64_LDNP_SCALAR:
case OP_ARM64_LDP:
case OP_ARM64_LDP_SCALAR: {
const char *oname = NULL;
gboolean nontemporal = FALSE;
gboolean scalar = FALSE;
switch (ins->opcode) {
case OP_ARM64_LDNP: oname = "arm64_ldnp"; nontemporal = TRUE; break;
case OP_ARM64_LDNP_SCALAR: oname = "arm64_ldnp_scalar"; nontemporal = TRUE; scalar = TRUE; break;
case OP_ARM64_LDP: oname = "arm64_ldp"; break;
case OP_ARM64_LDP_SCALAR: oname = "arm64_ldp_scalar"; scalar = TRUE; break;
}
if (!addresses [ins->dreg])
addresses [ins->dreg] = build_named_alloca (ctx, m_class_get_byval_arg (ins->klass), oname);
LLVMTypeRef ret_t = simd_valuetuple_to_llvm_type (ctx, ins->klass);
LLVMTypeRef vec_t = LLVMGetElementType (ret_t);
LLVMValueRef ix = const_int32 (1);
LLVMTypeRef src_t = LLVMPointerType (scalar ? LLVMGetElementType (vec_t) : vec_t, 0);
LLVMValueRef src0 = convert (ctx, lhs, src_t);
LLVMValueRef src1 = LLVMBuildGEP (builder, src0, &ix, 1, oname);
LLVMValueRef vals [] = { src0, src1 };
for (int i = 0; i < 2; ++i) {
vals [i] = LLVMBuildLoad (builder, vals [i], oname);
if (nontemporal)
set_nontemporal_flag (vals [i]);
}
unsigned int vec_sz = mono_llvm_get_prim_size_bits (vec_t);
if (scalar) {
g_assert (vec_sz == 64);
LLVMValueRef undef = LLVMGetUndef (vec_t);
for (int i = 0; i < 2; ++i)
vals [i] = LLVMBuildInsertElement (builder, undef, vals [i], const_int32 (0), oname);
}
LLVMValueRef val = LLVMGetUndef (ret_t);
for (int i = 0; i < 2; ++i)
val = LLVMBuildInsertValue (builder, val, vals [i], i, oname);
LLVMTypeRef retptr_t = LLVMPointerType (ret_t, 0);
LLVMValueRef dst = convert (ctx, addresses [ins->dreg], retptr_t);
LLVMBuildStore (builder, val, dst);
values [ins->dreg] = vec_sz == 64 ? val : NULL;
break;
}
case OP_ARM64_ST1: {
LLVMTypeRef t = LLVMTypeOf (rhs);
LLVMValueRef address = convert (ctx, lhs, LLVMPointerType (t, 0));
unsigned int alignment = mono_llvm_get_prim_size_bits (t) / 8;
mono_llvm_build_aligned_store (builder, rhs, address, FALSE, alignment);
break;
}
case OP_ARM64_ST1_SCALAR: {
LLVMTypeRef t = LLVMGetElementType (LLVMTypeOf (rhs));
LLVMValueRef val = LLVMBuildExtractElement (builder, rhs, arg3, "arm64_st1_scalar");
LLVMValueRef address = convert (ctx, lhs, LLVMPointerType (t, 0));
unsigned int alignment = mono_llvm_get_prim_size_bits (t) / 8;
mono_llvm_build_aligned_store (builder, val, address, FALSE, alignment);
break;
}
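	/*
	 * Add/subtract and keep the narrowed high half of each element. The
	 * rounding forms use the raddhn/rsubhn intrinsics; the plain forms
	 * are open-coded as add/sub, lshr by half the element width, trunc.
	 */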
case OP_ARM64_ADDHN:
case OP_ARM64_ADDHN2:
case OP_ARM64_SUBHN:
case OP_ARM64_SUBHN2:
case OP_ARM64_RADDHN:
case OP_ARM64_RADDHN2:
case OP_ARM64_RSUBHN:
case OP_ARM64_RSUBHN2: {
LLVMValueRef args [2] = { lhs, rhs };
gboolean high = FALSE;
gboolean subtract = FALSE;
int iid = 0;
switch (ins->opcode) {
case OP_ARM64_ADDHN2: high = TRUE; case OP_ARM64_ADDHN: break;
case OP_ARM64_SUBHN2: high = TRUE; case OP_ARM64_SUBHN: subtract = TRUE; break;
case OP_ARM64_RSUBHN2: high = TRUE; case OP_ARM64_RSUBHN: iid = INTRINS_AARCH64_ADV_SIMD_RSUBHN; break;
case OP_ARM64_RADDHN2: high = TRUE; case OP_ARM64_RADDHN: iid = INTRINS_AARCH64_ADV_SIMD_RADDHN; break;
}
llvm_ovr_tag_t ovr_tag = ovr_tag_from_mono_vector_class (ins->klass);
if (high) {
args [0] = rhs;
args [1] = arg3;
ovr_tag = ovr_tag_smaller_vector (ovr_tag);
}
LLVMValueRef result = NULL;
if (iid != 0)
result = call_overloaded_intrins (ctx, iid, ovr_tag, args, "");
else {
LLVMTypeRef t = LLVMTypeOf (args [0]);
LLVMTypeRef elt_t = LLVMGetElementType (t);
unsigned int elems = LLVMGetVectorSize (t);
unsigned int elem_bits = mono_llvm_get_prim_size_bits (elt_t);
if (subtract)
result = LLVMBuildSub (builder, args [0], args [1], "");
else
result = LLVMBuildAdd (builder, args [0], args [1], "");
result = LLVMBuildLShr (builder, result, broadcast_constant (elem_bits / 2, elt_t, elems), "");
result = LLVMBuildTrunc (builder, result, LLVMVectorType (LLVMIntType (elem_bits / 2), elems), "");
}
if (high)
result = concatenate_vectors (ctx, lhs, result);
values [ins->dreg] = result;
break;
}
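	/* Widening add/subtract: sign- or zero-extend both operands to the
	 * double-width return type and use a plain add/sub. The *2 forms take
	 * the high halves of their inputs. */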
case OP_ARM64_SADD:
case OP_ARM64_UADD:
case OP_ARM64_SADD2:
case OP_ARM64_UADD2:
case OP_ARM64_SSUB:
case OP_ARM64_USUB:
case OP_ARM64_SSUB2:
case OP_ARM64_USUB2: {
LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass);
gboolean is_unsigned = FALSE;
gboolean high = FALSE;
gboolean subtract = FALSE;
switch (ins->opcode) {
case OP_ARM64_SADD2: high = TRUE; case OP_ARM64_SADD: break;
case OP_ARM64_UADD2: high = TRUE; case OP_ARM64_UADD: is_unsigned = TRUE; break;
case OP_ARM64_SSUB2: high = TRUE; case OP_ARM64_SSUB: subtract = TRUE; break;
case OP_ARM64_USUB2: high = TRUE; case OP_ARM64_USUB: subtract = TRUE; is_unsigned = TRUE; break;
}
LLVMValueRef args [] = { lhs, rhs };
for (int i = 0; i < 2; ++i) {
LLVMValueRef arg = args [i];
LLVMTypeRef arg_t = LLVMTypeOf (arg);
if (high && arg_t != ret_t)
arg = extract_high_elements (ctx, arg);
if (is_unsigned)
arg = LLVMBuildZExt (builder, arg, ret_t, "");
else
arg = LLVMBuildSExt (builder, arg, ret_t, "");
args [i] = arg;
}
LLVMValueRef result = NULL;
if (subtract)
result = LLVMBuildSub (builder, args [0], args [1], "arm64_sub");
else
result = LLVMBuildAdd (builder, args [0], args [1], "arm64_add");
values [ins->dreg] = result;
break;
}
case OP_ARM64_SABAL:
case OP_ARM64_SABAL2:
case OP_ARM64_UABAL:
case OP_ARM64_UABAL2:
case OP_ARM64_SABDL:
case OP_ARM64_SABDL2:
case OP_ARM64_UABDL:
case OP_ARM64_UABDL2:
case OP_ARM64_SABA:
case OP_ARM64_UABA:
case OP_ARM64_SABD:
case OP_ARM64_UABD: {
LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass);
gboolean is_unsigned = FALSE;
gboolean high = FALSE;
gboolean add = FALSE;
gboolean widen = FALSE;
switch (ins->opcode) {
case OP_ARM64_SABAL2: high = TRUE; case OP_ARM64_SABAL: widen = TRUE; add = TRUE; break;
case OP_ARM64_UABAL2: high = TRUE; case OP_ARM64_UABAL: widen = TRUE; add = TRUE; is_unsigned = TRUE; break;
case OP_ARM64_SABDL2: high = TRUE; case OP_ARM64_SABDL: widen = TRUE; break;
case OP_ARM64_UABDL2: high = TRUE; case OP_ARM64_UABDL: widen = TRUE; is_unsigned = TRUE; break;
case OP_ARM64_SABA: add = TRUE; break;
case OP_ARM64_UABA: add = TRUE; is_unsigned = TRUE; break;
case OP_ARM64_UABD: is_unsigned = TRUE; break;
}
LLVMValueRef args [] = { lhs, rhs };
if (add) {
args [0] = rhs;
args [1] = arg3;
}
if (high)
for (int i = 0; i < 2; ++i)
args [i] = extract_high_elements (ctx, args [i]);
int iid = is_unsigned ? INTRINS_AARCH64_ADV_SIMD_UABD : INTRINS_AARCH64_ADV_SIMD_SABD;
llvm_ovr_tag_t ovr_tag = ovr_tag_from_llvm_type (LLVMTypeOf (args [0]));
LLVMValueRef result = call_overloaded_intrins (ctx, iid, ovr_tag, args, "");
if (widen)
result = LLVMBuildZExt (builder, result, ret_t, "");
if (add)
result = LLVMBuildAdd (builder, result, lhs, "");
values [ins->dreg] = result;
break;
}
case OP_ARM64_XHORIZ: {
gboolean truncate = FALSE;
LLVMTypeRef arg_t = LLVMTypeOf (lhs);
LLVMTypeRef elem_t = LLVMGetElementType (arg_t);
LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass);
llvm_ovr_tag_t ovr_tag = ovr_tag_from_llvm_type (arg_t);
if (elem_t == i1_t || elem_t == i2_t)
truncate = TRUE;
LLVMValueRef result = call_overloaded_intrins (ctx, ins->inst_c0, ovr_tag, &lhs, "");
if (truncate) {
// @llvm.aarch64.neon.saddv.i32.v8i16 ought to return an i16, but doesn't in LLVM 9.
result = LLVMBuildTrunc (builder, result, elem_t, "");
}
result = vector_from_scalar (ctx, ret_t, result);
values [ins->dreg] = result;
break;
}
case OP_ARM64_SADDLV:
case OP_ARM64_UADDLV: {
LLVMTypeRef arg_t = LLVMTypeOf (lhs);
LLVMTypeRef elem_t = LLVMGetElementType (arg_t);
LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass);
llvm_ovr_tag_t ovr_tag = ovr_tag_from_llvm_type (arg_t);
gboolean truncate = elem_t == i1_t;
int iid = ins->opcode == OP_ARM64_UADDLV ? INTRINS_AARCH64_ADV_SIMD_UADDLV : INTRINS_AARCH64_ADV_SIMD_SADDLV;
LLVMValueRef result = call_overloaded_intrins (ctx, iid, ovr_tag, &lhs, "");
if (truncate) {
// @llvm.aarch64.neon.saddlv.i32.v16i8 ought to return an i16, but doesn't in LLVM 9.
result = LLVMBuildTrunc (builder, result, i2_t, "");
}
result = vector_from_scalar (ctx, ret_t, result);
values [ins->dreg] = result;
break;
}
case OP_ARM64_UADALP:
case OP_ARM64_SADALP: {
llvm_ovr_tag_t ovr_tag = ovr_tag_from_mono_vector_class (ins->klass);
int iid = ins->opcode == OP_ARM64_UADALP ? INTRINS_AARCH64_ADV_SIMD_UADDLP : INTRINS_AARCH64_ADV_SIMD_SADDLP;
LLVMValueRef result = call_overloaded_intrins (ctx, iid, ovr_tag, &rhs, "");
result = LLVMBuildAdd (builder, result, lhs, "");
values [ins->dreg] = result;
break;
}
case OP_ARM64_ADDP_SCALAR: {
llvm_ovr_tag_t ovr_tag = INTRIN_vector128 | INTRIN_int64;
LLVMValueRef result = call_overloaded_intrins (ctx, INTRINS_AARCH64_ADV_SIMD_UADDV, ovr_tag, &lhs, "arm64_addp_scalar");
result = LLVMBuildInsertElement (builder, LLVMConstNull (v64_i8_t), result, const_int32 (0), "");
values [ins->dreg] = result;
break;
}
case OP_ARM64_FADDP_SCALAR: {
LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass);
LLVMValueRef hi = LLVMBuildExtractElement (builder, lhs, const_int32 (0), "");
LLVMValueRef lo = LLVMBuildExtractElement (builder, lhs, const_int32 (1), "");
LLVMValueRef result = LLVMBuildFAdd (builder, hi, lo, "arm64_faddp_scalar");
result = LLVMBuildInsertElement (builder, LLVMConstNull (ret_t), result, const_int32 (0), "");
values [ins->dreg] = result;
break;
}
case OP_ARM64_SXTL:
case OP_ARM64_SXTL2:
case OP_ARM64_UXTL:
case OP_ARM64_UXTL2: {
gboolean high = FALSE;
gboolean is_unsigned = FALSE;
switch (ins->opcode) {
case OP_ARM64_SXTL2: high = TRUE; break;
case OP_ARM64_UXTL2: high = TRUE; case OP_ARM64_UXTL: is_unsigned = TRUE; break;
}
LLVMTypeRef t = LLVMTypeOf (lhs);
unsigned int elem_bits = LLVMGetIntTypeWidth (LLVMGetElementType (t));
unsigned int src_elems = LLVMGetVectorSize (t);
unsigned int dst_elems = src_elems;
LLVMValueRef arg = lhs;
if (high) {
arg = extract_high_elements (ctx, lhs);
dst_elems = LLVMGetVectorSize (LLVMTypeOf (arg));
}
LLVMTypeRef result_t = LLVMVectorType (LLVMIntType (elem_bits * 2), dst_elems);
LLVMValueRef result = NULL;
if (is_unsigned)
result = LLVMBuildZExt (builder, arg, result_t, "arm64_uxtl");
else
result = LLVMBuildSExt (builder, arg, result_t, "arm64_sxtl");
values [ins->dreg] = result;
break;
}
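	/*
	 * Transpose primitives: interleave the even (TRN1) or odd (TRN2)
	 * lanes of the two sources. E.g. TRN1 on <4 x i32> is
	 *   shufflevector %a, %b, <0, 4, 2, 6>
	 */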
case OP_ARM64_TRN1:
case OP_ARM64_TRN2: {
gboolean high = ins->opcode == OP_ARM64_TRN2;
LLVMTypeRef t = LLVMTypeOf (lhs);
unsigned int src_elems = LLVMGetVectorSize (t);
int mask [MAX_VECTOR_ELEMS] = { 0 };
int laneix = high ? 1 : 0;
for (unsigned int i = 0; i < src_elems; i += 2) {
mask [i] = laneix;
mask [i + 1] = laneix + src_elems;
laneix += 2;
}
		values [ins->dreg] = LLVMBuildShuffleVector (builder, lhs, rhs, create_const_vector_i32 (mask, src_elems), "arm64_trn");
break;
}
case OP_ARM64_UZP1:
case OP_ARM64_UZP2: {
gboolean high = ins->opcode == OP_ARM64_UZP2;
LLVMTypeRef t = LLVMTypeOf (lhs);
unsigned int src_elems = LLVMGetVectorSize (t);
int mask [MAX_VECTOR_ELEMS] = { 0 };
int laneix = high ? 1 : 0;
for (unsigned int i = 0; i < src_elems; ++i) {
mask [i] = laneix;
laneix += 2;
}
values [ins->dreg] = LLVMBuildShuffleVector (builder, lhs, rhs, create_const_vector_i32 (mask, src_elems), "arm64_uzp");
break;
}
case OP_ARM64_ZIP1:
case OP_ARM64_ZIP2: {
gboolean high = ins->opcode == OP_ARM64_ZIP2;
LLVMTypeRef t = LLVMTypeOf (lhs);
unsigned int src_elems = LLVMGetVectorSize (t);
int mask [MAX_VECTOR_ELEMS] = { 0 };
int laneix = high ? src_elems / 2 : 0;
for (unsigned int i = 0; i < src_elems; i += 2) {
mask [i] = laneix;
mask [i + 1] = laneix + src_elems;
++laneix;
}
values [ins->dreg] = LLVMBuildShuffleVector (builder, lhs, rhs, create_const_vector_i32 (mask, src_elems), "arm64_zip");
break;
}
case OP_ARM64_ABSCOMPARE: {
IntrinsicId iid = (IntrinsicId) ins->inst_c0;
gboolean scalar = ins->inst_c1;
LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass);
LLVMTypeRef elem_t = LLVMGetElementType (ret_t);
llvm_ovr_tag_t ovr_tag = ovr_tag_from_mono_vector_class (ins->klass);
ovr_tag = ovr_tag_corresponding_integer (ovr_tag);
LLVMValueRef args [] = { lhs, rhs };
LLVMTypeRef result_t = ret_t;
if (scalar) {
ovr_tag = ovr_tag_force_scalar (ovr_tag);
result_t = elem_t;
for (int i = 0; i < 2; ++i)
args [i] = scalar_from_vector (ctx, args [i]);
}
LLVMValueRef result = call_overloaded_intrins (ctx, iid, ovr_tag, args, "");
result = LLVMBuildBitCast (builder, result, result_t, "");
if (scalar)
result = vector_from_scalar (ctx, ret_t, result);
values [ins->dreg] = result;
break;
}
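	/* Generic pass-through opcodes: call the overloaded intrinsic
	 * identified by inst_c0 directly on the operands. */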
case OP_XOP_OVR_X_X: {
IntrinsicId iid = (IntrinsicId) ins->inst_c0;
llvm_ovr_tag_t ovr_tag = ovr_tag_from_mono_vector_class (ins->klass);
values [ins->dreg] = call_overloaded_intrins (ctx, iid, ovr_tag, &lhs, "");
break;
}
case OP_XOP_OVR_X_X_X: {
IntrinsicId iid = (IntrinsicId) ins->inst_c0;
llvm_ovr_tag_t ovr_tag = ovr_tag_from_mono_vector_class (ins->klass);
LLVMValueRef args [] = { lhs, rhs };
values [ins->dreg] = call_overloaded_intrins (ctx, iid, ovr_tag, args, "");
break;
}
case OP_XOP_OVR_X_X_X_X: {
IntrinsicId iid = (IntrinsicId) ins->inst_c0;
llvm_ovr_tag_t ovr_tag = ovr_tag_from_mono_vector_class (ins->klass);
LLVMValueRef args [] = { lhs, rhs, arg3 };
values [ins->dreg] = call_overloaded_intrins (ctx, iid, ovr_tag, args, "");
break;
}
case OP_XOP_OVR_BYSCALAR_X_X_X: {
IntrinsicId iid = (IntrinsicId) ins->inst_c0;
llvm_ovr_tag_t ovr_tag = ovr_tag_from_mono_vector_class (ins->klass);
LLVMTypeRef t = LLVMTypeOf (lhs);
unsigned int elems = LLVMGetVectorSize (t);
LLVMValueRef arg2 = broadcast_element (ctx, scalar_from_vector (ctx, rhs), elems);
LLVMValueRef args [] = { lhs, arg2 };
values [ins->dreg] = call_overloaded_intrins (ctx, iid, ovr_tag, args, "");
break;
}
case OP_XOP_OVR_SCALAR_X_X:
case OP_XOP_OVR_SCALAR_X_X_X:
case OP_XOP_OVR_SCALAR_X_X_X_X: {
int num_args = 0;
IntrinsicId iid = (IntrinsicId) ins->inst_c0;
LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass);
switch (ins->opcode) {
case OP_XOP_OVR_SCALAR_X_X: num_args = 1; break;
case OP_XOP_OVR_SCALAR_X_X_X: num_args = 2; break;
case OP_XOP_OVR_SCALAR_X_X_X_X: num_args = 3; break;
}
/* LLVM 9 NEON intrinsic functions have scalar overloads. Unfortunately
* only overloads for 32 and 64-bit integers and floating point types are
* supported. 8 and 16-bit integers are unsupported, and will fail during
* instruction selection. This is worked around by using a vector
* operation and then explicitly clearing the upper bits of the register.
*/
ScalarOpFromVectorOpCtx sctx = scalar_op_from_vector_op (ctx, ret_t, ins);
LLVMValueRef args [3] = { lhs, rhs, arg3 };
scalar_op_from_vector_op_process_args (&sctx, args, num_args);
LLVMValueRef result = call_overloaded_intrins (ctx, iid, sctx.ovr_tag, args, "");
result = scalar_op_from_vector_op_process_result (&sctx, result);
values [ins->dreg] = result;
break;
}
#endif
case OP_DUMMY_USE:
break;
/*
* EXCEPTION HANDLING
*/
case OP_IMPLICIT_EXCEPTION:
/* This marks a place where an implicit exception can happen */
if (bb->region != -1)
set_failure (ctx, "implicit-exception");
break;
case OP_THROW:
case OP_RETHROW: {
gboolean rethrow = (ins->opcode == OP_RETHROW);
if (ctx->llvm_only) {
emit_llvmonly_throw (ctx, bb, rethrow, lhs);
has_terminator = TRUE;
ctx->unreachable [bb->block_num] = TRUE;
} else {
emit_throw (ctx, bb, rethrow, lhs);
builder = ctx->builder;
}
break;
}
case OP_CALL_HANDLER: {
/*
* We don't 'call' handlers, but instead simply branch to them.
* The code generated by ENDFINALLY will branch back to us.
*/
LLVMBasicBlockRef noex_bb;
GSList *bb_list;
BBInfo *info = &bblocks [ins->inst_target_bb->block_num];
bb_list = info->call_handler_return_bbs;
/*
* Set the indicator variable for the finally clause.
*/
lhs = info->finally_ind;
g_assert (lhs);
LLVMBuildStore (builder, LLVMConstInt (LLVMInt32Type (), g_slist_length (bb_list) + 1, FALSE), lhs);
/* Branch to the finally clause */
LLVMBuildBr (builder, info->call_handler_target_bb);
noex_bb = gen_bb (ctx, "CALL_HANDLER_CONT_BB");
info->call_handler_return_bbs = g_slist_append_mempool (cfg->mempool, info->call_handler_return_bbs, noex_bb);
builder = ctx->builder = create_builder (ctx);
LLVMPositionBuilderAtEnd (ctx->builder, noex_bb);
bblocks [bb->block_num].end_bblock = noex_bb;
break;
}
case OP_START_HANDLER: {
break;
}
case OP_ENDFINALLY: {
LLVMBasicBlockRef resume_bb;
MonoBasicBlock *handler_bb;
LLVMValueRef val, switch_ins, callee;
GSList *bb_list;
BBInfo *info;
gboolean is_fault = MONO_REGION_FLAGS (bb->region) == MONO_EXCEPTION_CLAUSE_FAULT;
/*
* Fault clauses are like finally clauses, but they are only called if an exception is thrown.
*/
if (!is_fault) {
handler_bb = (MonoBasicBlock*)g_hash_table_lookup (ctx->region_to_handler, GUINT_TO_POINTER (mono_get_block_region_notry (cfg, bb->region)));
g_assert (handler_bb);
info = &bblocks [handler_bb->block_num];
lhs = info->finally_ind;
g_assert (lhs);
bb_list = info->call_handler_return_bbs;
resume_bb = gen_bb (ctx, "ENDFINALLY_RESUME_BB");
/* Load the finally variable */
val = LLVMBuildLoad (builder, lhs, "");
/* Reset the variable */
LLVMBuildStore (builder, LLVMConstInt (LLVMInt32Type (), 0, FALSE), lhs);
/* Branch to either resume_bb, or to the bblocks in bb_list */
switch_ins = LLVMBuildSwitch (builder, val, resume_bb, g_slist_length (bb_list));
/*
* The other targets are added at the end to handle OP_CALL_HANDLER
* opcodes processed later.
*/
info->endfinally_switch_ins_list = g_slist_append_mempool (cfg->mempool, info->endfinally_switch_ins_list, switch_ins);
builder = ctx->builder = create_builder (ctx);
LLVMPositionBuilderAtEnd (ctx->builder, resume_bb);
}
if (ctx->llvm_only) {
if (!cfg->deopt) {
emit_resume_eh (ctx, bb);
} else {
/* Not needed */
LLVMBuildUnreachable (builder);
}
} else {
LLVMTypeRef icall_sig = LLVMFunctionType (LLVMVoidType (), NULL, 0, FALSE);
if (ctx->cfg->compile_aot) {
callee = get_callee (ctx, icall_sig, MONO_PATCH_INFO_JIT_ICALL_ID, GUINT_TO_POINTER (MONO_JIT_ICALL_mono_llvm_resume_unwind_trampoline));
} else {
callee = get_jit_callee (ctx, "llvm_resume_unwind_trampoline", icall_sig, MONO_PATCH_INFO_JIT_ICALL_ID, GUINT_TO_POINTER (MONO_JIT_ICALL_mono_llvm_resume_unwind_trampoline));
}
LLVMBuildCall (builder, callee, NULL, 0, "");
LLVMBuildUnreachable (builder);
}
has_terminator = TRUE;
break;
}
case OP_ENDFILTER: {
g_assert (cfg->llvm_only && cfg->deopt);
LLVMBuildUnreachable (builder);
has_terminator = TRUE;
break;
}
case OP_IL_SEQ_POINT:
break;
default: {
char reason [128];
			snprintf (reason, sizeof (reason), "opcode %s", mono_inst_name (ins->opcode));
set_failure (ctx, reason);
break;
}
}
if (!ctx_ok (ctx))
break;
/* Convert the value to the type required by phi nodes */
if (spec [MONO_INST_DEST] != ' ' && !MONO_IS_STORE_MEMBASE (ins) && ctx->vreg_types [ins->dreg]) {
if (ctx->is_vphi [ins->dreg])
/* vtypes */
values [ins->dreg] = addresses [ins->dreg];
else
values [ins->dreg] = convert (ctx, values [ins->dreg], ctx->vreg_types [ins->dreg]);
}
/* Add stores for volatile/ref variables */
if (spec [MONO_INST_DEST] != ' ' && spec [MONO_INST_DEST] != 'v' && !MONO_IS_STORE_MEMBASE (ins)) {
if (!skip_volatile_store)
emit_volatile_store (ctx, ins->dreg);
#ifdef TARGET_WASM
if (vreg_is_ref (cfg, ins->dreg) && ctx->values [ins->dreg])
emit_gc_pin (ctx, builder, ins->dreg);
#endif
}
}
if (!ctx_ok (ctx))
return;
if (!has_terminator && bb->next_bb && (bb == cfg->bb_entry || bb->in_count > 0)) {
LLVMBuildBr (builder, get_bb (ctx, bb->next_bb));
}
if (bb == cfg->bb_exit && sig->ret->type == MONO_TYPE_VOID) {
emit_dbg_loc (ctx, builder, cfg->header->code + cfg->header->code_size - 1);
LLVMBuildRetVoid (builder);
}
if (bb == cfg->bb_entry)
ctx->last_alloca = LLVMGetLastInstruction (get_bb (ctx, cfg->bb_entry));
}
/*
* mono_llvm_check_method_supported:
*
 * Do some quick checks to decide whether cfg->method can be compiled by LLVM, to avoid
* compiling a method twice.
*/
void
mono_llvm_check_method_supported (MonoCompile *cfg)
{
int i, j;
#ifdef TARGET_WASM
if (mono_method_signature_internal (cfg->method)->call_convention == MONO_CALL_VARARG) {
cfg->exception_message = g_strdup ("vararg callconv");
cfg->disable_llvm = TRUE;
return;
}
#endif
if (cfg->llvm_only)
return;
if (cfg->method->save_lmf) {
cfg->exception_message = g_strdup ("lmf");
cfg->disable_llvm = TRUE;
}
if (cfg->disable_llvm)
return;
/*
* Nested clauses where one of the clauses is a finally clause is
* not supported, because LLVM can't figure out the control flow,
* probably because we resume exception handling by calling our
* own function instead of using the 'resume' llvm instruction.
*/
for (i = 0; i < cfg->header->num_clauses; ++i) {
for (j = 0; j < cfg->header->num_clauses; ++j) {
MonoExceptionClause *clause1 = &cfg->header->clauses [i];
MonoExceptionClause *clause2 = &cfg->header->clauses [j];
// FIXME: Nested try clauses fail in some cases too, i.e. #37273
if (i != j && clause1->try_offset >= clause2->try_offset && clause1->handler_offset <= clause2->handler_offset) {
//(clause1->flags == MONO_EXCEPTION_CLAUSE_FINALLY || clause2->flags == MONO_EXCEPTION_CLAUSE_FINALLY)) {
cfg->exception_message = g_strdup ("nested clauses");
cfg->disable_llvm = TRUE;
break;
}
}
}
if (cfg->disable_llvm)
return;
/* FIXME: */
if (cfg->method->dynamic) {
cfg->exception_message = g_strdup ("dynamic.");
cfg->disable_llvm = TRUE;
}
if (cfg->disable_llvm)
return;
}
static LLVMCallInfo*
get_llvm_call_info (MonoCompile *cfg, MonoMethodSignature *sig)
{
LLVMCallInfo *linfo;
int i;
if (cfg->gsharedvt && cfg->llvm_only && mini_is_gsharedvt_variable_signature (sig)) {
int i, n, pindex;
/*
* Gsharedvt methods have the following calling convention:
* - all arguments are passed by ref, even non generic ones
* - the return value is returned by ref too, using a vret
* argument passed after 'this'.
*/
n = sig->param_count + sig->hasthis;
linfo = (LLVMCallInfo*)mono_mempool_alloc0 (cfg->mempool, sizeof (LLVMCallInfo) + (sizeof (LLVMArgInfo) * n));
pindex = 0;
if (sig->hasthis)
linfo->args [pindex ++].storage = LLVMArgNormal;
if (sig->ret->type != MONO_TYPE_VOID) {
if (mini_is_gsharedvt_variable_type (sig->ret))
linfo->ret.storage = LLVMArgGsharedvtVariable;
else if (mini_type_is_vtype (sig->ret))
linfo->ret.storage = LLVMArgGsharedvtFixedVtype;
else
linfo->ret.storage = LLVMArgGsharedvtFixed;
linfo->vret_arg_index = pindex;
} else {
linfo->ret.storage = LLVMArgNone;
}
for (i = 0; i < sig->param_count; ++i) {
if (m_type_is_byref (sig->params [i]))
linfo->args [pindex].storage = LLVMArgNormal;
else if (mini_is_gsharedvt_variable_type (sig->params [i]))
linfo->args [pindex].storage = LLVMArgGsharedvtVariable;
else if (mini_type_is_vtype (sig->params [i]))
linfo->args [pindex].storage = LLVMArgGsharedvtFixedVtype;
else
linfo->args [pindex].storage = LLVMArgGsharedvtFixed;
linfo->args [pindex].type = sig->params [i];
pindex ++;
}
return linfo;
}
linfo = mono_arch_get_llvm_call_info (cfg, sig);
linfo->dummy_arg_pindex = -1;
for (i = 0; i < sig->param_count; ++i)
linfo->args [i + sig->hasthis].type = sig->params [i];
return linfo;
}
static void
emit_method_inner (EmitContext *ctx);
static void
free_ctx (EmitContext *ctx)
{
GSList *l;
g_free (ctx->values);
g_free (ctx->addresses);
g_free (ctx->vreg_types);
g_free (ctx->is_vphi);
g_free (ctx->vreg_cli_types);
g_free (ctx->is_dead);
g_free (ctx->unreachable);
g_free (ctx->gc_var_indexes);
g_ptr_array_free (ctx->phi_values, TRUE);
g_free (ctx->bblocks);
g_hash_table_destroy (ctx->region_to_handler);
g_hash_table_destroy (ctx->clause_to_handler);
g_hash_table_destroy (ctx->jit_callees);
g_ptr_array_free (ctx->callsite_list, TRUE);
g_free (ctx->method_name);
g_ptr_array_free (ctx->bblock_list, TRUE);
for (l = ctx->builders; l; l = l->next) {
LLVMBuilderRef builder = (LLVMBuilderRef)l->data;
LLVMDisposeBuilder (builder);
}
g_free (ctx);
}
static gboolean
is_linkonce_method (MonoMethod *method)
{
#ifdef TARGET_WASM
/*
* Under wasm, linkonce works, so use it instead of the dedup pass for wrappers at least.
* FIXME: Use for everything, i.e. can_dedup ().
* FIXME: Fails System.Core tests
* -> amodule->sorted_methods contains duplicates, screwing up jit tables.
*/
// FIXME: This works, but the aot data for the methods is still kept, so size still increases
#if 0
if (method->wrapper_type == MONO_WRAPPER_OTHER) {
WrapperInfo *info = mono_marshal_get_wrapper_info (method);
if (info->subtype == WRAPPER_SUBTYPE_GSHAREDVT_IN_SIG || info->subtype == WRAPPER_SUBTYPE_GSHAREDVT_OUT_SIG)
return TRUE;
}
#endif
#endif
return FALSE;
}
/*
* mono_llvm_emit_method:
*
* Emit LLVM IL from the mono IL, and compile it to native code using LLVM.
*/
void
mono_llvm_emit_method (MonoCompile *cfg)
{
EmitContext *ctx;
char *method_name;
gboolean is_linkonce = FALSE;
int i;
if (cfg->skip)
return;
/* The code below might acquire the loader lock, so use it for global locking */
mono_loader_lock ();
ctx = g_new0 (EmitContext, 1);
ctx->cfg = cfg;
ctx->mempool = cfg->mempool;
/*
* This maps vregs to the LLVM instruction defining them
*/
ctx->values = g_new0 (LLVMValueRef, cfg->next_vreg);
/*
* This maps vregs for volatile variables to the LLVM instruction defining their
* address.
*/
ctx->addresses = g_new0 (LLVMValueRef, cfg->next_vreg);
ctx->vreg_types = g_new0 (LLVMTypeRef, cfg->next_vreg);
ctx->is_vphi = g_new0 (gboolean, cfg->next_vreg);
ctx->vreg_cli_types = g_new0 (MonoType*, cfg->next_vreg);
ctx->phi_values = g_ptr_array_sized_new (256);
/*
	 * This signals whether the vreg was defined by a phi node with no input vars
* (i.e. all its input bblocks end with NOT_REACHABLE).
*/
ctx->is_dead = g_new0 (gboolean, cfg->next_vreg);
	/* Whether the bblock is unreachable */
ctx->unreachable = g_new0 (gboolean, cfg->max_block_num);
ctx->bblock_list = g_ptr_array_sized_new (256);
ctx->region_to_handler = g_hash_table_new (NULL, NULL);
ctx->clause_to_handler = g_hash_table_new (NULL, NULL);
ctx->callsite_list = g_ptr_array_new ();
ctx->jit_callees = g_hash_table_new (NULL, NULL);
if (cfg->compile_aot) {
ctx->module = &aot_module;
/*
* Allow the linker to discard duplicate copies of wrappers, generic instances etc. by using the 'linkonce'
* linkage for them. This requires the following:
* - the method needs to have a unique mangled name
* - llvmonly mode, since the code in aot-runtime.c would initialize got slots in the wrong aot image etc.
*/
if (ctx->module->llvm_only && ctx->module->static_link && is_linkonce_method (cfg->method))
is_linkonce = TRUE;
if (is_linkonce || mono_aot_is_externally_callable (cfg->method))
method_name = mono_aot_get_mangled_method_name (cfg->method);
else
method_name = mono_aot_get_method_name (cfg);
cfg->llvm_method_name = g_strdup (method_name);
} else {
ctx->module = init_jit_module ();
method_name = mono_method_full_name (cfg->method, TRUE);
}
ctx->method_name = method_name;
ctx->is_linkonce = is_linkonce;
if (cfg->compile_aot) {
ctx->lmodule = ctx->module->lmodule;
} else {
ctx->lmodule = LLVMModuleCreateWithName (g_strdup_printf ("jit-module-%s", cfg->method->name));
}
ctx->llvm_only = ctx->module->llvm_only;
#ifdef TARGET_WASM
ctx->emit_dummy_arg = TRUE;
#endif
emit_method_inner (ctx);
if (!ctx_ok (ctx)) {
if (ctx->lmethod) {
/* Need to add unused phi nodes as they can be referenced by other values */
LLVMBasicBlockRef phi_bb = LLVMAppendBasicBlock (ctx->lmethod, "PHI_BB");
LLVMBuilderRef builder;
builder = create_builder (ctx);
LLVMPositionBuilderAtEnd (builder, phi_bb);
for (i = 0; i < ctx->phi_values->len; ++i) {
LLVMValueRef v = (LLVMValueRef)g_ptr_array_index (ctx->phi_values, i);
if (LLVMGetInstructionParent (v) == NULL)
LLVMInsertIntoBuilder (builder, v);
}
if (ctx->module->llvm_only && ctx->module->static_link && cfg->interp) {
/* The caller will retry compilation */
LLVMDeleteFunction (ctx->lmethod);
} else if (ctx->module->llvm_only && ctx->module->static_link) {
// Keep a stub for the function since it might be called directly
int nbbs = LLVMCountBasicBlocks (ctx->lmethod);
LLVMBasicBlockRef *bblocks = g_new0 (LLVMBasicBlockRef, nbbs);
LLVMGetBasicBlocks (ctx->lmethod, bblocks);
for (int i = 0; i < nbbs; ++i)
LLVMRemoveBasicBlockFromParent (bblocks [i]);
LLVMBasicBlockRef entry_bb = LLVMAppendBasicBlock (ctx->lmethod, "ENTRY");
builder = create_builder (ctx);
LLVMPositionBuilderAtEnd (builder, entry_bb);
ctx->builder = builder;
LLVMTypeRef sig = LLVMFunctionType0 (LLVMVoidType (), FALSE);
LLVMValueRef callee = get_callee (ctx, sig, MONO_PATCH_INFO_JIT_ICALL_ADDR, GUINT_TO_POINTER (MONO_JIT_ICALL_mini_llvmonly_throw_nullref_exception));
LLVMBuildCall (builder, callee, NULL, 0, "");
LLVMBuildUnreachable (builder);
/* Clean references to instructions inside the method */
for (int i = 0; i < ctx->callsite_list->len; ++i) {
CallSite *callsite = (CallSite*)g_ptr_array_index (ctx->callsite_list, i);
if (callsite->lmethod == ctx->lmethod)
callsite->load = NULL;
}
} else {
LLVMDeleteFunction (ctx->lmethod);
}
}
}
free_ctx (ctx);
mono_loader_unlock ();
}
static void
emit_method_inner (EmitContext *ctx)
{
MonoCompile *cfg = ctx->cfg;
MonoMethodSignature *sig;
MonoBasicBlock *bb;
LLVMTypeRef method_type;
LLVMValueRef method = NULL;
LLVMValueRef *values = ctx->values;
int i, max_block_num, bb_index;
gboolean llvmonly_fail = FALSE;
LLVMCallInfo *linfo;
LLVMModuleRef lmodule = ctx->lmodule;
BBInfo *bblocks;
GPtrArray *bblock_list = ctx->bblock_list;
MonoMethodHeader *header;
MonoExceptionClause *clause;
char **names;
LLVMBuilderRef entry_builder = NULL;
LLVMBasicBlockRef entry_bb = NULL;
if (cfg->gsharedvt && !cfg->llvm_only) {
set_failure (ctx, "gsharedvt");
return;
}
#if 0
{
static int count = 0;
count ++;
char *llvm_count_str = g_getenv ("LLVM_COUNT");
if (llvm_count_str) {
int lcount = atoi (llvm_count_str);
g_free (llvm_count_str);
if (count == lcount) {
printf ("LAST: %s\n", mono_method_full_name (cfg->method, TRUE));
fflush (stdout);
}
if (count > lcount) {
set_failure (ctx, "count");
return;
}
}
}
#endif
// If we come upon one of the init_method wrappers, we need to find
// the method that we have already emitted and tell LLVM that this
// managed method info for the wrapper is associated with this method
// we constructed ourselves from LLVM IR.
//
// This is necessary to unwind through the init_method, in the case that
// it has to run a static cctor that throws an exception
if (cfg->method->wrapper_type == MONO_WRAPPER_OTHER) {
WrapperInfo *info = mono_marshal_get_wrapper_info (cfg->method);
if (info->subtype == WRAPPER_SUBTYPE_AOT_INIT) {
method = get_init_func (ctx->module, info->d.aot_init.subtype);
ctx->lmethod = method;
ctx->module->max_method_idx = MAX (ctx->module->max_method_idx, cfg->method_index);
const char *init_name = mono_marshal_get_aot_init_wrapper_name (info->d.aot_init.subtype);
ctx->method_name = g_strdup_printf ("%s_%s", ctx->module->global_prefix, init_name);
ctx->cfg->asm_symbol = g_strdup (ctx->method_name);
if (!cfg->llvm_only && ctx->module->external_symbols) {
LLVMSetLinkage (method, LLVMExternalLinkage);
LLVMSetVisibility (method, LLVMHiddenVisibility);
}
/* Not looked up at runtime */
g_hash_table_insert (ctx->module->no_method_table_lmethods, method, method);
goto after_codegen;
} else if (info->subtype == WRAPPER_SUBTYPE_LLVM_FUNC) {
g_assert (info->d.llvm_func.subtype == LLVM_FUNC_WRAPPER_GC_POLL);
if (cfg->compile_aot) {
method = ctx->module->gc_poll_cold_wrapper;
g_assert (method);
} else {
method = emit_icall_cold_wrapper (ctx->module, lmodule, MONO_JIT_ICALL_mono_threads_state_poll, FALSE);
}
ctx->lmethod = method;
ctx->module->max_method_idx = MAX (ctx->module->max_method_idx, cfg->method_index);
ctx->method_name = g_strdup (LLVMGetValueName (method)); //g_strdup_printf ("%s_%s", ctx->module->global_prefix, LLVMGetValueName (method));
ctx->cfg->asm_symbol = g_strdup (ctx->method_name);
if (!cfg->llvm_only && ctx->module->external_symbols) {
LLVMSetLinkage (method, LLVMExternalLinkage);
LLVMSetVisibility (method, LLVMHiddenVisibility);
}
goto after_codegen;
}
}
sig = mono_method_signature_internal (cfg->method);
ctx->sig = sig;
linfo = get_llvm_call_info (cfg, sig);
ctx->linfo = linfo;
if (!ctx_ok (ctx))
return;
if (cfg->rgctx_var)
linfo->rgctx_arg = TRUE;
else if (needs_extra_arg (ctx, cfg->method))
linfo->dummy_arg = TRUE;
ctx->method_type = method_type = sig_to_llvm_sig_full (ctx, sig, linfo);
if (!ctx_ok (ctx))
return;
method = LLVMAddFunction (lmodule, ctx->method_name, method_type);
ctx->lmethod = method;
if (!cfg->llvm_only)
LLVMSetFunctionCallConv (method, LLVMMono1CallConv);
	/* If the method contains
	 * (1) no calls (so it's a leaf method)
	 * (2) and no loops,
	 * we can skip the GC safepoint on method entry. */
gboolean requires_safepoint;
requires_safepoint = cfg->has_calls;
if (!requires_safepoint) {
for (bb = cfg->bb_entry->next_bb; bb; bb = bb->next_bb) {
if (bb->loop_body_start || (bb->flags & BB_EXCEPTION_HANDLER)) {
requires_safepoint = TRUE;
}
}
}
if (cfg->method->wrapper_type) {
if (cfg->method->wrapper_type == MONO_WRAPPER_ALLOC || cfg->method->wrapper_type == MONO_WRAPPER_WRITE_BARRIER) {
requires_safepoint = FALSE;
} else {
WrapperInfo *info = mono_marshal_get_wrapper_info (cfg->method);
switch (info->subtype) {
case WRAPPER_SUBTYPE_GSHAREDVT_IN:
case WRAPPER_SUBTYPE_GSHAREDVT_OUT:
case WRAPPER_SUBTYPE_GSHAREDVT_IN_SIG:
case WRAPPER_SUBTYPE_GSHAREDVT_OUT_SIG:
/* Arguments are not used after the call */
requires_safepoint = FALSE;
break;
}
}
}
ctx->has_safepoints = requires_safepoint;
if (!cfg->llvm_only && mono_threads_are_safepoints_enabled () && requires_safepoint) {
if (!cfg->compile_aot) {
LLVMSetGC (method, "coreclr");
emit_gc_safepoint_poll (ctx->module, ctx->lmodule, cfg);
} else {
LLVMSetGC (method, "coreclr");
}
}
LLVMSetLinkage (method, LLVMPrivateLinkage);
mono_llvm_add_func_attr (method, LLVM_ATTR_UW_TABLE);
if (cfg->disable_omit_fp)
mono_llvm_add_func_attr_nv (method, "frame-pointer", "all");
if (cfg->compile_aot) {
if (mono_aot_is_externally_callable (cfg->method)) {
LLVMSetLinkage (method, LLVMExternalLinkage);
} else {
LLVMSetLinkage (method, LLVMInternalLinkage);
			// all methods have internal visibility when doing llvm_only
if (!cfg->llvm_only && ctx->module->external_symbols) {
LLVMSetLinkage (method, LLVMExternalLinkage);
LLVMSetVisibility (method, LLVMHiddenVisibility);
}
}
if (ctx->is_linkonce) {
LLVMSetLinkage (method, LLVMLinkOnceAnyLinkage);
LLVMSetVisibility (method, LLVMDefaultVisibility);
}
} else {
LLVMSetLinkage (method, LLVMExternalLinkage);
}
if (cfg->method->save_lmf && !cfg->llvm_only) {
set_failure (ctx, "lmf");
return;
}
if (sig->pinvoke && cfg->method->wrapper_type != MONO_WRAPPER_RUNTIME_INVOKE && !cfg->llvm_only) {
set_failure (ctx, "pinvoke signature");
return;
}
#ifdef TARGET_WASM
if (ctx->module->interp && cfg->header->code_size > 100000 && !cfg->interp_entry_only) {
/* Large methods slow down llvm too much */
set_failure (ctx, "il code too large.");
return;
}
#endif
header = cfg->header;
for (i = 0; i < header->num_clauses; ++i) {
clause = &header->clauses [i];
if (clause->flags != MONO_EXCEPTION_CLAUSE_FINALLY && clause->flags != MONO_EXCEPTION_CLAUSE_FAULT && clause->flags != MONO_EXCEPTION_CLAUSE_NONE) {
if (cfg->llvm_only) {
if (!cfg->deopt && !cfg->interp_entry_only)
llvmonly_fail = TRUE;
} else {
set_failure (ctx, "non-finally/catch/fault clause.");
return;
}
}
}
if (header->num_clauses || (cfg->method->iflags & METHOD_IMPL_ATTRIBUTE_NOINLINING) || cfg->no_inline)
/* We can't handle inlined methods with clauses */
mono_llvm_add_func_attr (method, LLVM_ATTR_NO_INLINE);
for (int i = 0; i < cfg->header->num_clauses; i++) {
MonoExceptionClause *clause = &cfg->header->clauses [i];
if (clause->flags == MONO_EXCEPTION_CLAUSE_NONE || clause->flags == MONO_EXCEPTION_CLAUSE_FILTER)
ctx->has_catch = TRUE;
}
if (linfo->rgctx_arg) {
ctx->rgctx_arg = LLVMGetParam (method, linfo->rgctx_arg_pindex);
ctx->rgctx_arg_pindex = linfo->rgctx_arg_pindex;
/*
* We mark the rgctx parameter with the inreg attribute, which is mapped to
* MONO_ARCH_RGCTX_REG in the Mono calling convention in llvm, i.e.
* CC_X86_64_Mono in X86CallingConv.td.
*/
if (!ctx->llvm_only)
mono_llvm_add_param_attr (ctx->rgctx_arg, LLVM_ATTR_IN_REG);
LLVMSetValueName (ctx->rgctx_arg, "rgctx");
} else {
ctx->rgctx_arg_pindex = -1;
}
if (cfg->vret_addr) {
values [cfg->vret_addr->dreg] = LLVMGetParam (method, linfo->vret_arg_pindex);
LLVMSetValueName (values [cfg->vret_addr->dreg], "vret");
if (linfo->ret.storage == LLVMArgVtypeByRef) {
mono_llvm_add_param_attr (LLVMGetParam (method, linfo->vret_arg_pindex), LLVM_ATTR_STRUCT_RET);
mono_llvm_add_param_attr (LLVMGetParam (method, linfo->vret_arg_pindex), LLVM_ATTR_NO_ALIAS);
}
}
if (sig->hasthis) {
ctx->this_arg_pindex = linfo->this_arg_pindex;
ctx->this_arg = LLVMGetParam (method, linfo->this_arg_pindex);
values [cfg->args [0]->dreg] = ctx->this_arg;
LLVMSetValueName (values [cfg->args [0]->dreg], "this");
}
if (linfo->dummy_arg)
LLVMSetValueName (LLVMGetParam (method, linfo->dummy_arg_pindex), "dummy_arg");
names = g_new (char *, sig->param_count);
mono_method_get_param_names (cfg->method, (const char **) names);
/* Set parameter names/attributes */
for (i = 0; i < sig->param_count; ++i) {
LLVMArgInfo *ainfo = &linfo->args [i + sig->hasthis];
char *name;
int pindex = ainfo->pindex + ainfo->ndummy_fpargs;
int j;
for (j = 0; j < ainfo->ndummy_fpargs; ++j) {
name = g_strdup_printf ("dummy_%d_%d", i, j);
LLVMSetValueName (LLVMGetParam (method, ainfo->pindex + j), name);
g_free (name);
}
if (ainfo->storage == LLVMArgVtypeInReg && ainfo->pair_storage [0] == LLVMArgNone && ainfo->pair_storage [1] == LLVMArgNone)
continue;
values [cfg->args [i + sig->hasthis]->dreg] = LLVMGetParam (method, pindex);
if (ainfo->storage == LLVMArgGsharedvtFixed || ainfo->storage == LLVMArgGsharedvtFixedVtype) {
if (names [i] && names [i][0] != '\0')
name = g_strdup_printf ("p_arg_%s", names [i]);
else
name = g_strdup_printf ("p_arg_%d", i);
} else {
if (names [i] && names [i][0] != '\0')
name = g_strdup_printf ("arg_%s", names [i]);
else
name = g_strdup_printf ("arg_%d", i);
}
LLVMSetValueName (LLVMGetParam (method, pindex), name);
g_free (name);
if (ainfo->storage == LLVMArgVtypeByVal)
mono_llvm_add_param_attr (LLVMGetParam (method, pindex), LLVM_ATTR_BY_VAL);
if (ainfo->storage == LLVMArgVtypeByRef || ainfo->storage == LLVMArgVtypeAddr) {
/* For OP_LDADDR */
cfg->args [i + sig->hasthis]->opcode = OP_VTARG_ADDR;
}
#ifdef TARGET_WASM
if (ainfo->storage == LLVMArgVtypeByRef) {
/* This causes llvm to make a copy of the value which is what we need */
mono_llvm_add_param_byval_attr (LLVMGetParam (method, pindex), LLVMGetElementType (LLVMTypeOf (LLVMGetParam (method, pindex))));
}
#endif
}
g_free (names);
if (ctx->module->emit_dwarf && cfg->compile_aot && mono_debug_enabled ()) {
ctx->minfo = mono_debug_lookup_method (cfg->method);
ctx->dbg_md = emit_dbg_subprogram (ctx, cfg, method, ctx->method_name);
}
max_block_num = 0;
for (bb = cfg->bb_entry; bb; bb = bb->next_bb)
max_block_num = MAX (max_block_num, bb->block_num);
ctx->bblocks = bblocks = g_new0 (BBInfo, max_block_num + 1);
/* Add branches between non-consecutive bblocks */
for (bb = cfg->bb_entry; bb; bb = bb->next_bb) {
if (bb->last_ins && MONO_IS_COND_BRANCH_OP (bb->last_ins) &&
bb->next_bb != bb->last_ins->inst_false_bb) {
MonoInst *inst = (MonoInst*)mono_mempool_alloc0 (cfg->mempool, sizeof (MonoInst));
inst->opcode = OP_BR;
inst->inst_target_bb = bb->last_ins->inst_false_bb;
mono_bblock_add_inst (bb, inst);
}
}
/*
* Make a first pass over the code to precreate PHI nodes/set INDIRECT flags.
*/
for (bb = cfg->bb_entry; bb; bb = bb->next_bb) {
MonoInst *ins;
LLVMBuilderRef builder;
char *dname;
char dname_buf[128];
builder = create_builder (ctx);
for (ins = bb->code; ins; ins = ins->next) {
switch (ins->opcode) {
case OP_PHI:
case OP_FPHI:
case OP_VPHI:
case OP_XPHI: {
LLVMTypeRef phi_type = llvm_type_to_stack_type (cfg, type_to_llvm_type (ctx, m_class_get_byval_arg (ins->klass)));
if (!ctx_ok (ctx))
return;
if (cfg->interp_entry_only)
break;
if (ins->opcode == OP_VPHI) {
/* Treat valuetype PHI nodes as operating on the address itself */
g_assert (ins->klass);
phi_type = LLVMPointerType (type_to_llvm_type (ctx, m_class_get_byval_arg (ins->klass)), 0);
}
/*
* Have to precreate these, as they can be referenced by
* earlier instructions.
*/
sprintf (dname_buf, "t%d", ins->dreg);
dname = dname_buf;
values [ins->dreg] = LLVMBuildPhi (builder, phi_type, dname);
if (ins->opcode == OP_VPHI)
ctx->addresses [ins->dreg] = values [ins->dreg];
g_ptr_array_add (ctx->phi_values, values [ins->dreg]);
/*
* Set the expected type of the incoming arguments since these have
* to have the same type.
*/
for (i = 0; i < ins->inst_phi_args [0]; i++) {
int sreg1 = ins->inst_phi_args [i + 1];
if (sreg1 != -1) {
if (ins->opcode == OP_VPHI)
ctx->is_vphi [sreg1] = TRUE;
ctx->vreg_types [sreg1] = phi_type;
}
}
break;
}
case OP_LDADDR:
((MonoInst*)ins->inst_p0)->flags |= MONO_INST_INDIRECT;
break;
default:
break;
}
}
}
	/*
	 * Create an ordering for bblocks: use the depth-first order first, then
	 * put the exception handling bblocks last.
	 */
for (bb_index = 0; bb_index < cfg->num_bblocks; ++bb_index) {
bb = cfg->bblocks [bb_index];
if (!(bb->region != -1 && !MONO_BBLOCK_IS_IN_REGION (bb, MONO_REGION_TRY))) {
g_ptr_array_add (bblock_list, bb);
bblocks [bb->block_num].added = TRUE;
}
}
for (bb = cfg->bb_entry; bb; bb = bb->next_bb) {
if (!bblocks [bb->block_num].added)
g_ptr_array_add (bblock_list, bb);
}
/*
* Second pass: generate code.
*/
// Emit entry point
entry_builder = create_builder (ctx);
entry_bb = get_bb (ctx, cfg->bb_entry);
LLVMPositionBuilderAtEnd (entry_builder, entry_bb);
emit_entry_bb (ctx, entry_builder);
if (llvmonly_fail)
/*
* In llvmonly mode, we want to emit an llvm method for every method even if it fails to compile,
* so direct calls can be made from outside the assembly.
*/
goto after_codegen_1;
for (bb = cfg->bb_entry; bb; bb = bb->next_bb) {
int clause_index;
char name [128];
if (ctx->cfg->interp_entry_only || !(bb->region != -1 && (bb->flags & BB_EXCEPTION_HANDLER)))
continue;
if (ctx->cfg->deopt && MONO_REGION_FLAGS (bb->region) == MONO_EXCEPTION_CLAUSE_FILTER)
continue;
clause_index = MONO_REGION_CLAUSE_INDEX (bb->region);
g_hash_table_insert (ctx->region_to_handler, GUINT_TO_POINTER (mono_get_block_region_notry (cfg, bb->region)), bb);
g_hash_table_insert (ctx->clause_to_handler, GINT_TO_POINTER (clause_index), bb);
/*
* Create a new bblock which CALL_HANDLER/landing pads can branch to, because branching to the
* LLVM bblock containing a landing pad causes problems for the
* LLVM optimizer passes.
*/
sprintf (name, "BB%d_CALL_HANDLER_TARGET", bb->block_num);
ctx->bblocks [bb->block_num].call_handler_target_bb = LLVMAppendBasicBlock (ctx->lmethod, name);
}
// Make landing pads first
ctx->exc_meta = g_hash_table_new_full (NULL, NULL, NULL, NULL);
if (ctx->llvm_only && !ctx->cfg->interp_entry_only) {
size_t group_index = 0;
while (group_index < cfg->header->num_clauses) {
if (cfg->clause_is_dead [group_index]) {
group_index ++;
continue;
}
int count = 0;
size_t cursor = group_index;
while (cursor < cfg->header->num_clauses &&
CLAUSE_START (&cfg->header->clauses [cursor]) == CLAUSE_START (&cfg->header->clauses [group_index]) &&
CLAUSE_END (&cfg->header->clauses [cursor]) == CLAUSE_END (&cfg->header->clauses [group_index])) {
count++;
cursor++;
}
LLVMBasicBlockRef lpad_bb = emit_landing_pad (ctx, group_index, count);
intptr_t key = CLAUSE_END (&cfg->header->clauses [group_index]);
g_hash_table_insert (ctx->exc_meta, (gpointer)key, lpad_bb);
group_index = cursor;
}
}
for (bb_index = 0; bb_index < bblock_list->len; ++bb_index) {
bb = (MonoBasicBlock*)g_ptr_array_index (bblock_list, bb_index);
// Prune unreachable mono BBs.
if (!(bb == cfg->bb_entry || bb->in_count > 0))
continue;
process_bb (ctx, bb);
if (!ctx_ok (ctx))
return;
}
g_hash_table_destroy (ctx->exc_meta);
mono_memory_barrier ();
/* Add incoming phi values */
for (bb = cfg->bb_entry; bb; bb = bb->next_bb) {
GSList *l, *ins_list;
ins_list = bblocks [bb->block_num].phi_nodes;
for (l = ins_list; l; l = l->next) {
PhiNode *node = (PhiNode*)l->data;
MonoInst *phi = node->phi;
int sreg1 = node->sreg;
LLVMBasicBlockRef in_bb;
if (sreg1 == -1)
continue;
in_bb = get_end_bb (ctx, node->in_bb);
if (ctx->unreachable [node->in_bb->block_num])
continue;
if (phi->opcode == OP_VPHI) {
g_assert (LLVMTypeOf (ctx->addresses [sreg1]) == LLVMTypeOf (values [phi->dreg]));
LLVMAddIncoming (values [phi->dreg], &ctx->addresses [sreg1], &in_bb, 1);
} else {
if (!values [sreg1]) {
/* Can happen with values in EH clauses */
set_failure (ctx, "incoming phi sreg1");
return;
}
if (LLVMTypeOf (values [sreg1]) != LLVMTypeOf (values [phi->dreg])) {
set_failure (ctx, "incoming phi arg type mismatch");
return;
}
g_assert (LLVMTypeOf (values [sreg1]) == LLVMTypeOf (values [phi->dreg]));
LLVMAddIncoming (values [phi->dreg], &values [sreg1], &in_bb, 1);
}
}
}
/* Nullify empty phi instructions */
for (bb = cfg->bb_entry; bb; bb = bb->next_bb) {
GSList *l, *ins_list;
ins_list = bblocks [bb->block_num].phi_nodes;
for (l = ins_list; l; l = l->next) {
PhiNode *node = (PhiNode*)l->data;
MonoInst *phi = node->phi;
LLVMValueRef phi_ins = values [phi->dreg];
if (!phi_ins)
/* Already removed */
continue;
if (LLVMCountIncoming (phi_ins) == 0) {
mono_llvm_replace_uses_of (phi_ins, LLVMConstNull (LLVMTypeOf (phi_ins)));
LLVMInstructionEraseFromParent (phi_ins);
values [phi->dreg] = NULL;
}
}
}
/* Create the SWITCH statements for ENDFINALLY instructions */
for (bb = cfg->bb_entry; bb; bb = bb->next_bb) {
BBInfo *info = &bblocks [bb->block_num];
GSList *l;
for (l = info->endfinally_switch_ins_list; l; l = l->next) {
LLVMValueRef switch_ins = (LLVMValueRef)l->data;
GSList *bb_list = info->call_handler_return_bbs;
GSList *bb_list_iter;
i = 0;
for (bb_list_iter = bb_list; bb_list_iter; bb_list_iter = g_slist_next (bb_list_iter)) {
LLVMAddCase (switch_ins, LLVMConstInt (LLVMInt32Type (), i + 1, FALSE), (LLVMBasicBlockRef)bb_list_iter->data);
i ++;
}
}
}
ctx->module->max_method_idx = MAX (ctx->module->max_method_idx, cfg->method_index);
after_codegen_1:
if (llvmonly_fail) {
/*
		 * FIXME: Maybe fall back to the interpreter
*/
static LLVMTypeRef sig;
ctx->builder = create_builder (ctx);
LLVMPositionBuilderAtEnd (ctx->builder, ctx->inited_bb);
char *name = mono_method_get_full_name (cfg->method);
int len = strlen (name);
LLVMTypeRef type = LLVMArrayType (LLVMInt8Type (), len + 1);
LLVMValueRef name_var = LLVMAddGlobal (ctx->lmodule, type, "missing_method_name");
LLVMSetVisibility (name_var, LLVMHiddenVisibility);
LLVMSetLinkage (name_var, LLVMInternalLinkage);
LLVMSetInitializer (name_var, mono_llvm_create_constant_data_array ((guint8*)name, len + 1));
mono_llvm_set_is_constant (name_var);
g_free (name);
if (!sig)
sig = LLVMFunctionType1 (LLVMVoidType (), ctx->module->ptr_type, FALSE);
LLVMValueRef callee = get_callee (ctx, sig, MONO_PATCH_INFO_JIT_ICALL_ADDR, GUINT_TO_POINTER (MONO_JIT_ICALL_mini_llvmonly_throw_aot_failed_exception));
LLVMValueRef args [] = { convert (ctx, name_var, ctx->module->ptr_type) };
LLVMBuildCall (ctx->builder, callee, args, 1, "");
LLVMBuildUnreachable (ctx->builder);
}
/* Initialize the method if needed */
if (cfg->compile_aot) {
// FIXME: Add more shared got entries
ctx->builder = create_builder (ctx);
LLVMPositionBuilderAtEnd (ctx->builder, ctx->init_bb);
// FIXME: beforefieldinit
/*
* NATIVE_TO_MANAGED methods might be called on a thread not attached to the runtime, so they are initialized when loaded
* in load_method ().
*/
gboolean needs_init = ctx->cfg->got_access_count > 0;
MonoMethod *cctor = NULL;
if (!needs_init && (cctor = mono_class_get_cctor (cfg->method->klass))) {
/* Needs init to run the cctor */
if (cfg->method->flags & METHOD_ATTRIBUTE_STATIC)
needs_init = TRUE;
if (cctor == cfg->method)
needs_init = FALSE;
// If we are a constructor, we need to init so the static
// constructor gets called.
if (!strcmp (cfg->method->name, ".ctor"))
needs_init = TRUE;
}
if (cfg->method->wrapper_type == MONO_WRAPPER_NATIVE_TO_MANAGED)
needs_init = FALSE;
if (needs_init)
emit_method_init (ctx);
else
LLVMBuildBr (ctx->builder, ctx->inited_bb);
		// We observed LLVM moving field accesses into the caller's method
		// body before the (inlined) init call, leading to NULL derefs
		// after init_method returns (even though the GOT is filled out)
if (needs_init)
mono_llvm_add_func_attr (method, LLVM_ATTR_NO_INLINE);
}
if (mini_get_debug_options ()->llvm_disable_inlining)
mono_llvm_add_func_attr (method, LLVM_ATTR_NO_INLINE);
after_codegen:
if (cfg->compile_aot)
g_ptr_array_add (ctx->module->cfgs, cfg);
if (cfg->llvm_only) {
/*
* Add the contents of ctx->callsite_list to module->callsite_list.
* We can't do this earlier, as it contains llvm instructions which can be
* freed if compilation fails.
* FIXME: Get rid of this when all methods can be llvm compiled.
*/
for (int i = 0; i < ctx->callsite_list->len; ++i)
g_ptr_array_add (ctx->module->callsite_list, g_ptr_array_index (ctx->callsite_list, i));
}
if (cfg->verbose_level > 1) {
g_print ("\n*** Unoptimized LLVM IR for %s ***\n", mono_method_full_name (cfg->method, TRUE));
if (cfg->compile_aot) {
mono_llvm_dump_value (method);
} else {
mono_llvm_dump_module (ctx->lmodule);
}
g_print ("***\n\n");
}
if (cfg->compile_aot && !cfg->llvm_only)
mark_as_used (ctx->module, method);
if (!cfg->llvm_only) {
LLVMValueRef md_args [16];
LLVMValueRef md_node;
int method_index;
if (cfg->compile_aot)
method_index = mono_aot_get_method_index (cfg->orig_method);
else
method_index = 1;
md_args [0] = LLVMMDString (ctx->method_name, strlen (ctx->method_name));
md_args [1] = LLVMConstInt (LLVMInt32Type (), method_index, FALSE);
md_node = LLVMMDNode (md_args, 2);
LLVMAddNamedMetadataOperand (lmodule, "mono.function_indexes", md_node);
//LLVMSetMetadata (method, md_kind, LLVMMDNode (&md_arg, 1));
}
if (cfg->compile_aot) {
/* Don't generate native code, keep the LLVM IR */
if (cfg->verbose_level) {
char *name = mono_method_get_full_name (cfg->method);
printf ("%s emitted as %s\n", name, ctx->method_name);
g_free (name);
}
#if 0
int err = LLVMVerifyFunction (ctx->lmethod, LLVMPrintMessageAction);
if (err != 0)
LLVMDumpValue (ctx->lmethod);
g_assert (err == 0);
#endif
} else {
//LLVMVerifyFunction (method, 0);
llvm_jit_finalize_method (ctx);
}
if (ctx->module->method_to_lmethod)
g_hash_table_insert (ctx->module->method_to_lmethod, cfg->method, ctx->lmethod);
if (ctx->module->idx_to_lmethod)
g_hash_table_insert (ctx->module->idx_to_lmethod, GINT_TO_POINTER (cfg->method_index), ctx->lmethod);
if (ctx->llvm_only && m_class_is_valuetype (cfg->orig_method->klass) && !(cfg->orig_method->flags & METHOD_ATTRIBUTE_STATIC))
emit_unbox_tramp (ctx, ctx->method_name, ctx->method_type, ctx->lmethod, cfg->method_index);
}
/*
* mono_llvm_create_vars:
*
* Same as mono_arch_create_vars () for LLVM.
*/
void
mono_llvm_create_vars (MonoCompile *cfg)
{
MonoMethodSignature *sig;
sig = mono_method_signature_internal (cfg->method);
if (cfg->gsharedvt && cfg->llvm_only) {
gboolean vretaddr = FALSE;
if (mini_is_gsharedvt_variable_signature (sig) && sig->ret->type != MONO_TYPE_VOID) {
vretaddr = TRUE;
} else {
MonoMethodSignature *sig = mono_method_signature_internal (cfg->method);
LLVMCallInfo *linfo;
linfo = get_llvm_call_info (cfg, sig);
vretaddr = (linfo->ret.storage == LLVMArgVtypeRetAddr || linfo->ret.storage == LLVMArgVtypeByRef || linfo->ret.storage == LLVMArgGsharedvtFixed || linfo->ret.storage == LLVMArgGsharedvtVariable || linfo->ret.storage == LLVMArgGsharedvtFixedVtype);
}
if (vretaddr) {
/*
* Creating vret_addr forces CEE_SETRET to store the result into it,
* so we don't have to generate any code in our OP_SETRET case.
*/
cfg->vret_addr = mono_compile_create_var (cfg, m_class_get_byval_arg (mono_get_intptr_class ()), OP_ARG);
if (G_UNLIKELY (cfg->verbose_level > 1)) {
printf ("vret_addr = ");
mono_print_ins (cfg->vret_addr);
}
}
} else {
mono_arch_create_vars (cfg);
}
cfg->lmf_ir = TRUE;
}
/*
* mono_llvm_emit_call:
*
* Same as mono_arch_emit_call () for LLVM.
*/
void
mono_llvm_emit_call (MonoCompile *cfg, MonoCallInst *call)
{
MonoInst *in;
MonoMethodSignature *sig;
int i, n;
LLVMArgInfo *ainfo;
sig = call->signature;
n = sig->param_count + sig->hasthis;
if (sig->call_convention == MONO_CALL_VARARG) {
cfg->exception_message = g_strdup ("varargs");
cfg->disable_llvm = TRUE;
return;
}
call->cinfo = get_llvm_call_info (cfg, sig);
if (cfg->disable_llvm)
return;
for (i = 0; i < n; ++i) {
MonoInst *ins;
ainfo = call->cinfo->args + i;
in = call->args [i];
/* Simply remember the arguments */
switch (ainfo->storage) {
case LLVMArgNormal: {
MonoType *t = (sig->hasthis && i == 0) ? m_class_get_byval_arg (mono_get_intptr_class ()) : ainfo->type;
int opcode;
opcode = mono_type_to_regmove (cfg, t);
if (opcode == OP_FMOVE) {
MONO_INST_NEW (cfg, ins, OP_FMOVE);
ins->dreg = mono_alloc_freg (cfg);
} else if (opcode == OP_LMOVE) {
MONO_INST_NEW (cfg, ins, OP_LMOVE);
ins->dreg = mono_alloc_lreg (cfg);
} else if (opcode == OP_RMOVE) {
MONO_INST_NEW (cfg, ins, OP_RMOVE);
ins->dreg = mono_alloc_freg (cfg);
} else {
MONO_INST_NEW (cfg, ins, OP_MOVE);
ins->dreg = mono_alloc_ireg (cfg);
}
ins->sreg1 = in->dreg;
break;
}
case LLVMArgVtypeByVal:
case LLVMArgVtypeByRef:
case LLVMArgVtypeInReg:
case LLVMArgVtypeAddr:
case LLVMArgVtypeAsScalar:
case LLVMArgAsIArgs:
case LLVMArgAsFpArgs:
case LLVMArgGsharedvtVariable:
case LLVMArgGsharedvtFixed:
case LLVMArgGsharedvtFixedVtype:
case LLVMArgWasmVtypeAsScalar:
MONO_INST_NEW (cfg, ins, OP_LLVM_OUTARG_VT);
ins->dreg = mono_alloc_ireg (cfg);
ins->sreg1 = in->dreg;
ins->inst_p0 = mono_mempool_alloc0 (cfg->mempool, sizeof (LLVMArgInfo));
memcpy (ins->inst_p0, ainfo, sizeof (LLVMArgInfo));
ins->inst_vtype = ainfo->type;
ins->klass = mono_class_from_mono_type_internal (ainfo->type);
break;
default:
cfg->exception_message = g_strdup ("ainfo->storage");
cfg->disable_llvm = TRUE;
return;
}
if (!cfg->disable_llvm) {
MONO_ADD_INS (cfg->cbb, ins);
mono_call_inst_add_outarg_reg (cfg, call, ins->dreg, 0, FALSE);
}
}
}
static inline void
add_func (LLVMModuleRef module, const char *name, LLVMTypeRef ret_type, LLVMTypeRef *param_types, int nparams)
{
LLVMAddFunction (module, name, LLVMFunctionType (ret_type, param_types, nparams, FALSE));
}
static LLVMValueRef
add_intrins (LLVMModuleRef module, IntrinsicId id, LLVMTypeRef *params, int nparams)
{
return mono_llvm_register_overloaded_intrinsic (module, id, params, nparams);
}
static LLVMValueRef
add_intrins1 (LLVMModuleRef module, IntrinsicId id, LLVMTypeRef param1)
{
	return mono_llvm_register_overloaded_intrinsic (module, id, &param1, 1);
}
static LLVMValueRef
add_intrins2 (LLVMModuleRef module, IntrinsicId id, LLVMTypeRef param1, LLVMTypeRef param2)
{
LLVMTypeRef params [] = { param1, param2 };
return mono_llvm_register_overloaded_intrinsic (module, id, params, 2);
}
static LLVMValueRef
add_intrins3 (LLVMModuleRef module, IntrinsicId id, LLVMTypeRef param1, LLVMTypeRef param2, LLVMTypeRef param3)
{
LLVMTypeRef params [] = { param1, param2, param3 };
return mono_llvm_register_overloaded_intrinsic (module, id, params, 3);
}
static void
add_intrinsic (LLVMModuleRef module, int id)
{
/* Register simple intrinsics */
LLVMValueRef intrins = mono_llvm_register_intrinsic (module, (IntrinsicId)id);
if (intrins) {
g_hash_table_insert (intrins_id_to_intrins, GINT_TO_POINTER (id), intrins);
return;
}
if (intrin_arm64_ovr [id] != 0) {
llvm_ovr_tag_t spec = intrin_arm64_ovr [id];
for (int vw = 0; vw < INTRIN_vectorwidths; ++vw) {
for (int ew = 0; ew < INTRIN_elementwidths; ++ew) {
llvm_ovr_tag_t vec_bit = INTRIN_vector128 >> ((INTRIN_vectorwidths - 1) - vw);
llvm_ovr_tag_t elem_bit = INTRIN_int8 << ew;
llvm_ovr_tag_t test = vec_bit | elem_bit;
if ((spec & test) == test) {
uint8_t kind = intrin_kind [id];
LLVMTypeRef distinguishing_type = intrin_types [vw][ew];
if (kind == INTRIN_kind_ftoi && (elem_bit & (INTRIN_int32 | INTRIN_int64))) {
/*
* @llvm.aarch64.neon.fcvtas.v4i32.v4f32
* @llvm.aarch64.neon.fcvtas.v2i64.v2f64
*/
intrins = add_intrins2 (module, id, distinguishing_type, intrin_types [vw][ew + 2]);
} else if (kind == INTRIN_kind_widen) {
/*
* @llvm.aarch64.neon.saddlp.v2i64.v4i32
* @llvm.aarch64.neon.saddlp.v4i16.v8i8
*/
intrins = add_intrins2 (module, id, distinguishing_type, intrin_types [vw][ew - 1]);
} else if (kind == INTRIN_kind_widen_across) {
/*
* @llvm.aarch64.neon.saddlv.i64.v4i32
* @llvm.aarch64.neon.saddlv.i32.v8i16
* @llvm.aarch64.neon.saddlv.i32.v16i8
* i8/i16 return types for NEON intrinsics will make isel fail as of LLVM 9.
*/
int associated_prim = MAX(ew + 1, 2);
LLVMTypeRef associated_scalar_type = intrin_types [0][associated_prim];
intrins = add_intrins2 (module, id, associated_scalar_type, distinguishing_type);
} else if (kind == INTRIN_kind_across) {
/*
* @llvm.aarch64.neon.uaddv.i64.v4i64
* @llvm.aarch64.neon.uaddv.i32.v4i32
* @llvm.aarch64.neon.uaddv.i32.v8i16
* @llvm.aarch64.neon.uaddv.i32.v16i8
* i8/i16 return types for NEON intrinsics will make isel fail as of LLVM 9.
*/
int associated_prim = MAX(ew, 2);
LLVMTypeRef associated_scalar_type = intrin_types [0][associated_prim];
intrins = add_intrins2 (module, id, associated_scalar_type, distinguishing_type);
} else if (kind == INTRIN_kind_arm64_dot_prod) {
/*
* @llvm.aarch64.neon.sdot.v2i32.v8i8
* @llvm.aarch64.neon.sdot.v4i32.v16i8
*/
LLVMTypeRef associated_type = intrin_types [vw][0];
intrins = add_intrins2 (module, id, distinguishing_type, associated_type);
} else
intrins = add_intrins1 (module, id, distinguishing_type);
int key = key_from_id_and_tag (id, test);
g_hash_table_insert (intrins_id_to_intrins, GINT_TO_POINTER (key), intrins);
}
}
}
return;
}
/* Register overloaded intrinsics */
switch (id) {
#define INTRINS(intrin_name, llvm_id, arch)
#define INTRINS_OVR(intrin_name, llvm_id, arch, llvm_type) case INTRINS_ ## intrin_name: intrins = add_intrins1(module, id, llvm_type); break;
#define INTRINS_OVR_2_ARG(intrin_name, llvm_id, arch, llvm_type1, llvm_type2) case INTRINS_ ## intrin_name: intrins = add_intrins2(module, id, llvm_type1, llvm_type2); break;
#define INTRINS_OVR_3_ARG(intrin_name, llvm_id, arch, llvm_type1, llvm_type2, llvm_type3) case INTRINS_ ## intrin_name: intrins = add_intrins3(module, id, llvm_type1, llvm_type2, llvm_type3); break;
#define INTRINS_OVR_TAG(...)
#define INTRINS_OVR_TAG_KIND(...)
#include "llvm-intrinsics.h"
default:
g_assert_not_reached ();
break;
}
g_assert (intrins);
g_hash_table_insert (intrins_id_to_intrins, GINT_TO_POINTER (id), intrins);
}
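
/*
 * A minimal sketch (illustration only, kept under #if 0) of how an overloaded
 * arm64 intrinsic registered above is looked up later: a vector-width bit and
 * an element-width bit are OR-ed into an overload tag, which is combined with
 * the intrinsic id via key_from_id_and_tag (). The id argument is assumed to
 * have an entry in intrin_arm64_ovr.
 */
#if 0
static LLVMValueRef
example_lookup_overloaded_intrin (int id)
{
	/* 128-bit vector with i32 elements */
	llvm_ovr_tag_t tag = INTRIN_vector128 | INTRIN_int32;
	int key = key_from_id_and_tag (id, tag);
	LLVMValueRef intrins = (LLVMValueRef)g_hash_table_lookup (intrins_id_to_intrins, GINT_TO_POINTER (key));
	g_assert (intrins);
	return intrins;
}
#endif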
static LLVMValueRef
get_intrins_from_module (LLVMModuleRef lmodule, int id)
{
LLVMValueRef res;
res = (LLVMValueRef)g_hash_table_lookup (intrins_id_to_intrins, GINT_TO_POINTER (id));
g_assert (res);
return res;
}
static LLVMValueRef
get_intrins (EmitContext *ctx, int id)
{
return get_intrins_from_module (ctx->lmodule, id);
}
static void
add_intrinsics (LLVMModuleRef module)
{
int i;
	/* Emit declarations of intrinsics */
/*
* It would be nicer to emit only the intrinsics actually used, but LLVM's Module
* type doesn't seem to do any locking.
*/
for (i = 0; i < INTRINS_NUM; ++i)
add_intrinsic (module, i);
/* EH intrinsics */
add_func (module, "mono_personality", LLVMVoidType (), NULL, 0);
add_func (module, "llvm_resume_unwind_trampoline", LLVMVoidType (), NULL, 0);
}
static void
add_types (MonoLLVMModule *module)
{
module->ptr_type = LLVMPointerType (TARGET_SIZEOF_VOID_P == 8 ? LLVMInt64Type () : LLVMInt32Type (), 0);
}
void
mono_llvm_init (gboolean enable_jit)
{
intrin_types [0][0] = i1_t = LLVMInt8Type ();
intrin_types [0][1] = i2_t = LLVMInt16Type ();
intrin_types [0][2] = i4_t = LLVMInt32Type ();
intrin_types [0][3] = i8_t = LLVMInt64Type ();
intrin_types [0][4] = r4_t = LLVMFloatType ();
intrin_types [0][5] = r8_t = LLVMDoubleType ();
intrin_types [1][0] = v64_i1_t = LLVMVectorType (LLVMInt8Type (), 8);
intrin_types [1][1] = v64_i2_t = LLVMVectorType (LLVMInt16Type (), 4);
intrin_types [1][2] = v64_i4_t = LLVMVectorType (LLVMInt32Type (), 2);
intrin_types [1][3] = v64_i8_t = LLVMVectorType (LLVMInt64Type (), 1);
intrin_types [1][4] = v64_r4_t = LLVMVectorType (LLVMFloatType (), 2);
intrin_types [1][5] = v64_r8_t = LLVMVectorType (LLVMDoubleType (), 1);
intrin_types [2][0] = v128_i1_t = sse_i1_t = type_to_sse_type (MONO_TYPE_I1);
intrin_types [2][1] = v128_i2_t = sse_i2_t = type_to_sse_type (MONO_TYPE_I2);
intrin_types [2][2] = v128_i4_t = sse_i4_t = type_to_sse_type (MONO_TYPE_I4);
intrin_types [2][3] = v128_i8_t = sse_i8_t = type_to_sse_type (MONO_TYPE_I8);
intrin_types [2][4] = v128_r4_t = sse_r4_t = type_to_sse_type (MONO_TYPE_R4);
intrin_types [2][5] = v128_r8_t = sse_r8_t = type_to_sse_type (MONO_TYPE_R8);
intrins_id_to_intrins = g_hash_table_new (NULL, NULL);
void_func_t = LLVMFunctionType0 (LLVMVoidType (), FALSE);
if (enable_jit)
mono_llvm_jit_init ();
}
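
/*
 * Illustration only (under #if 0): the intrin_types matrix initialized above
 * is indexed as intrin_types [vector width][element width], where the first
 * index is 0 = scalar, 1 = 64-bit vector, 2 = 128-bit vector, and the second
 * runs i1/i2/i4/i8/r4/r8. LLVM types are uniqued per context, so pointer
 * comparison against the cached aliases is meaningful.
 */
#if 0
static void
example_intrin_types (void)
{
	g_assert (intrin_types [0][2] == i4_t);      /* scalar i32 */
	g_assert (intrin_types [1][1] == v64_i2_t);  /* <4 x i16> */
	g_assert (intrin_types [2][4] == v128_r4_t); /* <4 x float> */
}
#endif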
void
mono_llvm_free_mem_manager (MonoJitMemoryManager *mem_manager)
{
MonoLLVMModule *module = (MonoLLVMModule*)mem_manager->llvm_module;
int i;
if (!module)
return;
g_hash_table_destroy (module->llvm_types);
mono_llvm_dispose_ee (module->mono_ee);
if (module->bb_names) {
for (i = 0; i < module->bb_names_len; ++i)
g_free (module->bb_names [i]);
g_free (module->bb_names);
}
//LLVMDisposeModule (module->module);
g_free (module);
mem_manager->llvm_module = NULL;
}
void
mono_llvm_create_aot_module (MonoAssembly *assembly, const char *global_prefix, int initial_got_size, LLVMModuleFlags flags)
{
MonoLLVMModule *module = &aot_module;
gboolean emit_dwarf = (flags & LLVM_MODULE_FLAG_DWARF) ? 1 : 0;
#ifdef TARGET_WIN32_MSVC
gboolean emit_codeview = (flags & LLVM_MODULE_FLAG_CODEVIEW) ? 1 : 0;
#endif
gboolean static_link = (flags & LLVM_MODULE_FLAG_STATIC) ? 1 : 0;
gboolean llvm_only = (flags & LLVM_MODULE_FLAG_LLVM_ONLY) ? 1 : 0;
gboolean interp = (flags & LLVM_MODULE_FLAG_INTERP) ? 1 : 0;
/* Delete previous module */
g_hash_table_destroy (module->plt_entries);
if (module->lmodule)
LLVMDisposeModule (module->lmodule);
memset (module, 0, sizeof (aot_module));
module->lmodule = LLVMModuleCreateWithName ("aot");
module->assembly = assembly;
module->global_prefix = g_strdup (global_prefix);
module->eh_frame_symbol = g_strdup_printf ("%s_eh_frame", global_prefix);
module->get_method_symbol = g_strdup_printf ("%s_get_method", global_prefix);
module->get_unbox_tramp_symbol = g_strdup_printf ("%s_get_unbox_tramp", global_prefix);
module->init_aotconst_symbol = g_strdup_printf ("%s_init_aotconst", global_prefix);
module->external_symbols = TRUE;
module->emit_dwarf = emit_dwarf;
module->static_link = static_link;
module->llvm_only = llvm_only;
module->interp = interp;
/* The first few entries are reserved */
module->max_got_offset = initial_got_size;
module->context = LLVMGetGlobalContext ();
module->cfgs = g_ptr_array_new ();
module->aotconst_vars = g_hash_table_new (NULL, NULL);
module->llvm_types = g_hash_table_new (NULL, NULL);
module->plt_entries = g_hash_table_new (g_str_hash, g_str_equal);
module->plt_entries_ji = g_hash_table_new (NULL, NULL);
module->direct_callables = g_hash_table_new (g_str_hash, g_str_equal);
module->idx_to_lmethod = g_hash_table_new (NULL, NULL);
module->method_to_lmethod = g_hash_table_new (NULL, NULL);
module->method_to_call_info = g_hash_table_new (NULL, NULL);
module->idx_to_unbox_tramp = g_hash_table_new (NULL, NULL);
module->no_method_table_lmethods = g_hash_table_new (NULL, NULL);
module->callsite_list = g_ptr_array_new ();
if (llvm_only)
/* clang ignores our debug info because it has an invalid version */
module->emit_dwarf = FALSE;
add_intrinsics (module->lmodule);
add_types (module);
#ifdef MONO_ARCH_LLVM_TARGET_LAYOUT
LLVMSetDataLayout (module->lmodule, MONO_ARCH_LLVM_TARGET_LAYOUT);
#else
g_assert_not_reached ();
#endif
#ifdef MONO_ARCH_LLVM_TARGET_TRIPLE
LLVMSetTarget (module->lmodule, MONO_ARCH_LLVM_TARGET_TRIPLE);
#endif
if (module->emit_dwarf) {
char *dir, *build_info, *s, *cu_name;
module->di_builder = mono_llvm_create_di_builder (module->lmodule);
// FIXME:
dir = g_strdup (".");
build_info = mono_get_runtime_build_info ();
s = g_strdup_printf ("Mono AOT Compiler %s (LLVM)", build_info);
cu_name = g_path_get_basename (assembly->image->name);
module->cu = mono_llvm_di_create_compile_unit (module->di_builder, cu_name, dir, s);
g_free (dir);
g_free (build_info);
g_free (s);
}
#ifdef TARGET_WIN32_MSVC
if (emit_codeview) {
LLVMValueRef codeview_option_args[3];
codeview_option_args[0] = LLVMConstInt (LLVMInt32Type (), 2, FALSE);
codeview_option_args[1] = LLVMMDString ("CodeView", 8);
codeview_option_args[2] = LLVMConstInt (LLVMInt32Type (), 1, FALSE);
LLVMAddNamedMetadataOperand (module->lmodule, "llvm.module.flags", LLVMMDNode (codeview_option_args, G_N_ELEMENTS (codeview_option_args)));
}
	if (!static_link) {
		const char *default_dynamic_lib_names[] = { "/DEFAULTLIB:msvcrt",
							"/DEFAULTLIB:ucrt.lib",
							"/DEFAULTLIB:vcruntime.lib" };
		LLVMValueRef default_lib_args[G_N_ELEMENTS (default_dynamic_lib_names)];
		for (int i = 0; i < G_N_ELEMENTS (default_dynamic_lib_names); ++i) {
			const char *default_lib_name = default_dynamic_lib_names[i];
			default_lib_args[i] = LLVMMDString (default_lib_name, strlen (default_lib_name));
		}
		LLVMAddNamedMetadataOperand (module->lmodule, "llvm.linker.options", LLVMMDNode (default_lib_args, G_N_ELEMENTS (default_lib_args)));
	}
#endif
{
LLVMTypeRef got_type = LLVMArrayType (module->ptr_type, 16);
module->dummy_got_var = LLVMAddGlobal (module->lmodule, got_type, "dummy_got");
module->got_idx_to_type = g_hash_table_new (NULL, NULL);
LLVMSetInitializer (module->dummy_got_var, LLVMConstNull (got_type));
LLVMSetVisibility (module->dummy_got_var, LLVMHiddenVisibility);
LLVMSetLinkage (module->dummy_got_var, LLVMInternalLinkage);
}
/* Add initialization array */
LLVMTypeRef inited_type = LLVMArrayType (LLVMInt8Type (), 0);
module->inited_var = LLVMAddGlobal (aot_module.lmodule, inited_type, "mono_inited_tmp");
LLVMSetInitializer (module->inited_var, LLVMConstNull (inited_type));
create_aot_info_var (module);
emit_gc_safepoint_poll (module, module->lmodule, NULL);
emit_llvm_code_start (module);
// Needs idx_to_lmethod
emit_init_funcs (module);
/* Add a dummy personality function */
if (!use_mono_personality_debug) {
LLVMValueRef personality = LLVMAddFunction (module->lmodule, default_personality_name, LLVMFunctionType (LLVMInt32Type (), NULL, 0, TRUE));
LLVMSetLinkage (personality, LLVMExternalLinkage);
		// EMCC chokes if the personality function is referenced in the 'used' array
#ifndef TARGET_WASM
mark_as_used (module, personality);
#endif
}
/* Add a reference to the c++ exception we throw/catch */
{
LLVMTypeRef exc = LLVMPointerType (LLVMInt8Type (), 0);
module->sentinel_exception = LLVMAddGlobal (module->lmodule, exc, "_ZTIPi");
LLVMSetLinkage (module->sentinel_exception, LLVMExternalLinkage);
mono_llvm_set_is_constant (module->sentinel_exception);
}
}
void
mono_llvm_fixup_aot_module (void)
{
MonoLLVMModule *module = &aot_module;
MonoMethod *method;
/*
* Replace GOT entries for directly callable methods with the methods themselves.
* It would be easier to implement this by predefining all methods before compiling
* their bodies, but that couldn't handle the case when a method fails to compile
* with llvm.
*/
GHashTable *specializable = g_hash_table_new (NULL, NULL);
GHashTable *patches_to_null = g_hash_table_new (mono_patch_info_hash, mono_patch_info_equal);
for (int sindex = 0; sindex < module->callsite_list->len; ++sindex) {
CallSite *site = (CallSite*)g_ptr_array_index (module->callsite_list, sindex);
method = site->method;
LLVMValueRef lmethod = (LLVMValueRef)g_hash_table_lookup (module->method_to_lmethod, method);
LLVMValueRef placeholder = (LLVMValueRef)site->load;
LLVMValueRef load;
if (placeholder == NULL)
/* Method failed LLVM compilation */
continue;
gboolean can_direct_call = FALSE;
/* Replace sharable instances with their shared version */
if (!lmethod && method->is_inflated) {
if (mono_method_is_generic_sharable_full (method, FALSE, TRUE, FALSE)) {
ERROR_DECL (error);
MonoMethod *shared = mini_get_shared_method_full (method, SHARE_MODE_NONE, error);
if (is_ok (error)) {
lmethod = (LLVMValueRef)g_hash_table_lookup (module->method_to_lmethod, shared);
if (lmethod)
method = shared;
}
}
}
if (lmethod && !m_method_is_synchronized (method)) {
can_direct_call = TRUE;
} else if (m_method_is_wrapper (method) && !method->is_inflated) {
WrapperInfo *info = mono_marshal_get_wrapper_info (method);
/* This is a call from the synchronized wrapper to the real method */
if (info->subtype == WRAPPER_SUBTYPE_SYNCHRONIZED_INNER) {
method = info->d.synchronized.method;
lmethod = (LLVMValueRef)g_hash_table_lookup (module->method_to_lmethod, method);
if (lmethod)
can_direct_call = TRUE;
}
}
if (can_direct_call) {
mono_llvm_replace_uses_of (placeholder, lmethod);
if (mono_aot_can_specialize (method))
g_hash_table_insert (specializable, lmethod, method);
g_hash_table_insert (patches_to_null, site->ji, site->ji);
} else {
// FIXME:
LLVMBuilderRef builder = LLVMCreateBuilder ();
LLVMPositionBuilderBefore (builder, placeholder);
load = get_aotconst_module (module, builder, site->ji->type, site->ji->data.target, site->type, NULL, NULL);
LLVMReplaceAllUsesWith (placeholder, load);
}
g_free (site);
}
mono_llvm_propagate_nonnull_final (specializable, module);
g_hash_table_destroy (specializable);
for (int i = 0; i < module->cfgs->len; ++i) {
/*
* Nullify the patches pointing to direct calls. This is needed to
* avoid allocating extra got slots, which is a perf problem and it
* makes module->max_got_offset invalid.
* It would be better to just store the patch_info in CallSite, but
* cfg->patch_info is copied in aot-compiler.c.
*/
MonoCompile *cfg = (MonoCompile *)g_ptr_array_index (module->cfgs, i);
for (MonoJumpInfo *patch_info = cfg->patch_info; patch_info; patch_info = patch_info->next) {
if (patch_info->type == MONO_PATCH_INFO_METHOD) {
if (g_hash_table_lookup (patches_to_null, patch_info)) {
patch_info->type = MONO_PATCH_INFO_NONE;
/* Nullify the call to init_method () if possible */
g_assert (cfg->got_access_count);
cfg->got_access_count --;
if (cfg->got_access_count == 0) {
LLVMValueRef br = (LLVMValueRef)cfg->llvmonly_init_cond;
if (br)
LLVMSetSuccessor (br, 0, LLVMGetSuccessor (br, 1));
}
}
}
}
}
g_hash_table_destroy (patches_to_null);
}
static LLVMValueRef
llvm_array_from_uints (LLVMTypeRef el_type, guint32 *values, int nvalues)
{
int i;
LLVMValueRef res, *vals;
vals = g_new0 (LLVMValueRef, nvalues);
for (i = 0; i < nvalues; ++i)
vals [i] = LLVMConstInt (LLVMInt32Type (), values [i], FALSE);
res = LLVMConstArray (LLVMInt32Type (), vals, nvalues);
g_free (vals);
return res;
}
static LLVMValueRef
llvm_array_from_bytes (guint8 *values, int nvalues)
{
int i;
LLVMValueRef res, *vals;
vals = g_new0 (LLVMValueRef, nvalues);
for (i = 0; i < nvalues; ++i)
vals [i] = LLVMConstInt (LLVMInt8Type (), values [i], FALSE);
res = LLVMConstArray (LLVMInt8Type (), vals, nvalues);
g_free (vals);
return res;
}
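
/*
 * Illustration only (under #if 0): how the two helpers above are meant to be
 * used -- turning runtime arrays into LLVM constant-array initializers, e.g.
 * for the array-valued fields of MonoAotFileInfo. The data is hypothetical.
 */
#if 0
static void
example_const_arrays (void)
{
	guint32 offsets [] = { 0, 16, 32 };
	guint8 aotid [16] = { 0 };
	LLVMValueRef offs = llvm_array_from_uints (LLVMInt32Type (), offsets, G_N_ELEMENTS (offsets));
	LLVMValueRef id = llvm_array_from_bytes (aotid, sizeof (aotid));
	(void)offs;
	(void)id;
}
#endif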
/*
* mono_llvm_emit_aot_file_info:
*
* Emit the MonoAotFileInfo structure.
* Same as emit_aot_file_info () in aot-compiler.c.
*/
void
mono_llvm_emit_aot_file_info (MonoAotFileInfo *info, gboolean has_jitted_code)
{
MonoLLVMModule *module = &aot_module;
/* Save these for later */
memcpy (&module->aot_info, info, sizeof (MonoAotFileInfo));
module->has_jitted_code = has_jitted_code;
}
/*
* mono_llvm_emit_aot_data:
*
* Emit the binary data DATA pointed to by symbol SYMBOL.
* Return the LLVM variable for the data.
*/
gpointer
mono_llvm_emit_aot_data_aligned (const char *symbol, guint8 *data, int data_len, int align)
{
MonoLLVMModule *module = &aot_module;
LLVMTypeRef type;
LLVMValueRef d;
type = LLVMArrayType (LLVMInt8Type (), data_len);
d = LLVMAddGlobal (module->lmodule, type, symbol);
LLVMSetVisibility (d, LLVMHiddenVisibility);
LLVMSetLinkage (d, LLVMInternalLinkage);
LLVMSetInitializer (d, mono_llvm_create_constant_data_array (data, data_len));
if (align != 1)
LLVMSetAlignment (d, align);
mono_llvm_set_is_constant (d);
return d;
}
gpointer
mono_llvm_emit_aot_data (const char *symbol, guint8 *data, int data_len)
{
return mono_llvm_emit_aot_data_aligned (symbol, data, data_len, 8);
}
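
/*
 * Usage sketch (illustration only, under #if 0): emitting a small binary blob
 * as a hidden, constant, internal global through the helper above. The symbol
 * name and data are hypothetical.
 */
#if 0
static void
example_emit_blob (void)
{
	guint8 blob [] = { 1, 2, 3, 4 };
	gpointer sym = mono_llvm_emit_aot_data ("example_blob", blob, sizeof (blob));
	(void)sym;
}
#endif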
/* Add a reference to a global defined in JITted code */
static LLVMValueRef
AddJitGlobal (MonoLLVMModule *module, LLVMTypeRef type, const char *name)
{
char *s;
LLVMValueRef v;
s = g_strdup_printf ("%s%s", module->global_prefix, name);
v = LLVMAddGlobal (module->lmodule, LLVMInt8Type (), s);
LLVMSetVisibility (v, LLVMHiddenVisibility);
g_free (s);
return v;
}
#define FILE_INFO_NUM_HEADER_FIELDS 2
#define FILE_INFO_NUM_SCALAR_FIELDS 23
#define FILE_INFO_NUM_ARRAY_FIELDS 5
#define FILE_INFO_NUM_AOTID_FIELDS 1
#define FILE_INFO_NFIELDS (FILE_INFO_NUM_HEADER_FIELDS + MONO_AOT_FILE_INFO_NUM_SYMBOLS + FILE_INFO_NUM_SCALAR_FIELDS + FILE_INFO_NUM_ARRAY_FIELDS + FILE_INFO_NUM_AOTID_FIELDS)
static void
create_aot_info_var (MonoLLVMModule *module)
{
LLVMTypeRef file_info_type;
LLVMTypeRef *eltypes;
LLVMValueRef info_var;
int i, nfields, tindex;
LLVMModuleRef lmodule = module->lmodule;
/* Create an LLVM type to represent MonoAotFileInfo */
nfields = FILE_INFO_NFIELDS;
eltypes = g_new (LLVMTypeRef, nfields);
tindex = 0;
eltypes [tindex ++] = LLVMInt32Type ();
eltypes [tindex ++] = LLVMInt32Type ();
/* Symbols */
for (i = 0; i < MONO_AOT_FILE_INFO_NUM_SYMBOLS; ++i)
eltypes [tindex ++] = LLVMPointerType (LLVMInt8Type (), 0);
/* Scalars */
for (i = 0; i < FILE_INFO_NUM_SCALAR_FIELDS; ++i)
eltypes [tindex ++] = LLVMInt32Type ();
/* Arrays */
eltypes [tindex ++] = LLVMArrayType (LLVMInt32Type (), MONO_AOT_TABLE_NUM);
for (i = 0; i < FILE_INFO_NUM_ARRAY_FIELDS - 1; ++i)
eltypes [tindex ++] = LLVMArrayType (LLVMInt32Type (), MONO_AOT_TRAMP_NUM);
eltypes [tindex ++] = LLVMArrayType (LLVMInt8Type (), 16);
g_assert (tindex == nfields);
file_info_type = LLVMStructCreateNamed (module->context, "MonoAotFileInfo");
LLVMStructSetBody (file_info_type, eltypes, nfields, FALSE);
info_var = LLVMAddGlobal (lmodule, file_info_type, "mono_aot_file_info");
module->info_var = info_var;
module->info_var_eltypes = eltypes;
}
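
/*
 * Illustration only (under #if 0): the pattern used by create_aot_info_var ()
 * above -- create a named LLVM struct type whose element types mirror a C
 * struct, then set its body and add a global of that type. The struct layout
 * here is hypothetical.
 */
#if 0
static LLVMValueRef
example_named_struct_global (LLVMModuleRef lmodule, LLVMContextRef context)
{
	LLVMTypeRef eltypes [2];
	eltypes [0] = LLVMInt32Type ();
	eltypes [1] = LLVMPointerType (LLVMInt8Type (), 0);
	LLVMTypeRef t = LLVMStructCreateNamed (context, "ExampleInfo");
	LLVMStructSetBody (t, eltypes, 2, FALSE);
	return LLVMAddGlobal (lmodule, t, "example_info");
}
#endif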
static void
emit_aot_file_info (MonoLLVMModule *module)
{
LLVMTypeRef *eltypes, eltype;
LLVMValueRef info_var;
LLVMValueRef *fields;
int i, nfields, tindex;
MonoAotFileInfo *info;
LLVMModuleRef lmodule = module->lmodule;
info = &module->aot_info;
info_var = module->info_var;
eltypes = module->info_var_eltypes;
nfields = FILE_INFO_NFIELDS;
if (module->static_link) {
LLVMSetVisibility (info_var, LLVMHiddenVisibility);
LLVMSetLinkage (info_var, LLVMInternalLinkage);
}
#ifdef TARGET_WIN32
if (!module->static_link) {
LLVMSetDLLStorageClass (info_var, LLVMDLLExportStorageClass);
}
#endif
fields = g_new (LLVMValueRef, nfields);
tindex = 0;
fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->version, FALSE);
fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->dummy, FALSE);
/* Symbols */
/*
	 * We use LLVMGetNamedGlobal () for symbols which are defined in LLVM code, and LLVMAddGlobal ()
* for symbols defined in the .s file emitted by the aot compiler.
*/
eltype = eltypes [tindex];
if (module->llvm_only)
fields [tindex ++] = LLVMConstNull (eltype);
else
fields [tindex ++] = AddJitGlobal (module, eltype, "jit_got");
/* llc defines this directly */
if (!module->llvm_only) {
fields [tindex ++] = LLVMAddGlobal (lmodule, eltype, module->eh_frame_symbol);
fields [tindex ++] = LLVMConstNull (eltype);
fields [tindex ++] = LLVMConstNull (eltype);
} else {
fields [tindex ++] = LLVMConstNull (eltype);
fields [tindex ++] = module->get_method;
fields [tindex ++] = module->get_unbox_tramp ? module->get_unbox_tramp : LLVMConstNull (eltype);
}
fields [tindex ++] = module->init_aotconst_func;
if (module->has_jitted_code) {
fields [tindex ++] = AddJitGlobal (module, eltype, "jit_code_start");
fields [tindex ++] = AddJitGlobal (module, eltype, "jit_code_end");
} else {
fields [tindex ++] = LLVMConstNull (eltype);
fields [tindex ++] = LLVMConstNull (eltype);
}
if (!module->llvm_only)
fields [tindex ++] = AddJitGlobal (module, eltype, "method_addresses");
else
fields [tindex ++] = LLVMConstNull (eltype);
if (module->llvm_only && module->unbox_tramp_indexes) {
fields [tindex ++] = module->unbox_tramp_indexes;
fields [tindex ++] = module->unbox_trampolines;
} else {
fields [tindex ++] = LLVMConstNull (eltype);
fields [tindex ++] = LLVMConstNull (eltype);
}
if (info->flags & MONO_AOT_FILE_FLAG_SEPARATE_DATA) {
for (i = 0; i < MONO_AOT_TABLE_NUM; ++i)
fields [tindex ++] = LLVMConstNull (eltype);
} else {
fields [tindex ++] = LLVMGetNamedGlobal (lmodule, "blob");
fields [tindex ++] = LLVMGetNamedGlobal (lmodule, "class_name_table");
fields [tindex ++] = LLVMGetNamedGlobal (lmodule, "class_info_offsets");
fields [tindex ++] = LLVMGetNamedGlobal (lmodule, "method_info_offsets");
fields [tindex ++] = LLVMGetNamedGlobal (lmodule, "ex_info_offsets");
fields [tindex ++] = LLVMGetNamedGlobal (lmodule, "extra_method_info_offsets");
fields [tindex ++] = LLVMGetNamedGlobal (lmodule, "extra_method_table");
fields [tindex ++] = LLVMGetNamedGlobal (lmodule, "got_info_offsets");
fields [tindex ++] = LLVMGetNamedGlobal (lmodule, "llvm_got_info_offsets");
fields [tindex ++] = LLVMGetNamedGlobal (lmodule, "image_table");
fields [tindex ++] = LLVMGetNamedGlobal (lmodule, "weak_field_indexes");
fields [tindex ++] = LLVMGetNamedGlobal (lmodule, "method_flags_table");
}
/* Not needed (mem_end) */
fields [tindex ++] = LLVMConstNull (eltype);
fields [tindex ++] = LLVMGetNamedGlobal (lmodule, "assembly_guid");
fields [tindex ++] = LLVMGetNamedGlobal (lmodule, "runtime_version");
if (info->trampoline_size [0]) {
fields [tindex ++] = AddJitGlobal (module, eltype, "specific_trampolines");
fields [tindex ++] = AddJitGlobal (module, eltype, "static_rgctx_trampolines");
fields [tindex ++] = AddJitGlobal (module, eltype, "imt_trampolines");
fields [tindex ++] = AddJitGlobal (module, eltype, "gsharedvt_arg_trampolines");
fields [tindex ++] = AddJitGlobal (module, eltype, "ftnptr_arg_trampolines");
fields [tindex ++] = AddJitGlobal (module, eltype, "unbox_arbitrary_trampolines");
} else {
fields [tindex ++] = LLVMConstNull (eltype);
fields [tindex ++] = LLVMConstNull (eltype);
fields [tindex ++] = LLVMConstNull (eltype);
fields [tindex ++] = LLVMConstNull (eltype);
fields [tindex ++] = LLVMConstNull (eltype);
fields [tindex ++] = LLVMConstNull (eltype);
}
if (module->static_link && !module->llvm_only)
fields [tindex ++] = AddJitGlobal (module, eltype, "globals");
else
fields [tindex ++] = LLVMConstNull (eltype);
fields [tindex ++] = LLVMGetNamedGlobal (lmodule, "assembly_name");
if (!module->llvm_only) {
fields [tindex ++] = AddJitGlobal (module, eltype, "plt");
fields [tindex ++] = AddJitGlobal (module, eltype, "plt_end");
fields [tindex ++] = AddJitGlobal (module, eltype, "unwind_info");
fields [tindex ++] = AddJitGlobal (module, eltype, "unbox_trampolines");
fields [tindex ++] = AddJitGlobal (module, eltype, "unbox_trampolines_end");
fields [tindex ++] = AddJitGlobal (module, eltype, "unbox_trampoline_addresses");
} else {
fields [tindex ++] = LLVMConstNull (eltype);
fields [tindex ++] = LLVMConstNull (eltype);
fields [tindex ++] = LLVMConstNull (eltype);
fields [tindex ++] = LLVMConstNull (eltype);
fields [tindex ++] = LLVMConstNull (eltype);
fields [tindex ++] = LLVMConstNull (eltype);
}
for (i = 0; i < MONO_AOT_FILE_INFO_NUM_SYMBOLS; ++i) {
g_assert (fields [FILE_INFO_NUM_HEADER_FIELDS + i]);
fields [FILE_INFO_NUM_HEADER_FIELDS + i] = LLVMConstBitCast (fields [FILE_INFO_NUM_HEADER_FIELDS + i], eltype);
}
/* Scalars */
fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->plt_got_offset_base, FALSE);
fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->plt_got_info_offset_base, FALSE);
fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->got_size, FALSE);
fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->llvm_got_size, FALSE);
fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->plt_size, FALSE);
fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->nmethods, FALSE);
fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->nextra_methods, FALSE);
fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->flags, FALSE);
fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->opts, FALSE);
fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->simd_opts, FALSE);
fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->gc_name_index, FALSE);
fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->num_rgctx_fetch_trampolines, FALSE);
fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->double_align, FALSE);
fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->long_align, FALSE);
fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->generic_tramp_num, FALSE);
fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->card_table_shift_bits, FALSE);
fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->card_table_mask, FALSE);
fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->tramp_page_size, FALSE);
fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->call_table_entry_size, FALSE);
fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->nshared_got_entries, FALSE);
fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->datafile_size, FALSE);
fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), module->unbox_tramp_num, FALSE);
fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), module->unbox_tramp_elemsize, FALSE);
/* Arrays */
fields [tindex ++] = llvm_array_from_uints (LLVMInt32Type (), info->table_offsets, MONO_AOT_TABLE_NUM);
fields [tindex ++] = llvm_array_from_uints (LLVMInt32Type (), info->num_trampolines, MONO_AOT_TRAMP_NUM);
fields [tindex ++] = llvm_array_from_uints (LLVMInt32Type (), info->trampoline_got_offset_base, MONO_AOT_TRAMP_NUM);
fields [tindex ++] = llvm_array_from_uints (LLVMInt32Type (), info->trampoline_size, MONO_AOT_TRAMP_NUM);
fields [tindex ++] = llvm_array_from_uints (LLVMInt32Type (), info->tramp_page_code_offsets, MONO_AOT_TRAMP_NUM);
fields [tindex ++] = llvm_array_from_bytes (info->aotid, 16);
g_assert (tindex == nfields);
LLVMSetInitializer (info_var, LLVMConstNamedStruct (LLVMGetElementType (LLVMTypeOf (info_var)), fields, nfields));
if (module->static_link) {
char *s, *p;
LLVMValueRef var;
s = g_strdup_printf ("mono_aot_module_%s_info", module->assembly->aname.name);
/* Get rid of characters which cannot occur in symbols */
		for (p = s; *p; ++p) {
if (!(isalnum (*p) || *p == '_'))
*p = '_';
}
var = LLVMAddGlobal (module->lmodule, LLVMPointerType (LLVMInt8Type (), 0), s);
g_free (s);
LLVMSetInitializer (var, LLVMConstBitCast (LLVMGetNamedGlobal (module->lmodule, "mono_aot_file_info"), LLVMPointerType (LLVMInt8Type (), 0)));
LLVMSetLinkage (var, LLVMExternalLinkage);
}
}
typedef struct {
LLVMValueRef lmethod;
int argument;
} NonnullPropWorkItem;
static void
mono_llvm_nonnull_state_update (EmitContext *ctx, LLVMValueRef lcall, MonoMethod *call_method, LLVMValueRef *args, int num_params)
{
if (mono_aot_can_specialize (call_method)) {
int num_passed = LLVMGetNumArgOperands (lcall);
g_assert (num_params <= num_passed);
g_assert (ctx->module->method_to_call_info);
GArray *call_site_union = (GArray *) g_hash_table_lookup (ctx->module->method_to_call_info, call_method);
if (!call_site_union) {
call_site_union = g_array_sized_new (FALSE, TRUE, sizeof (gint32), num_params);
int zero = 0;
for (int i = 0; i < num_params; i++)
g_array_insert_val (call_site_union, i, zero);
}
for (int i = 0; i < num_params; i++) {
if (mono_llvm_is_nonnull (args [i])) {
g_assert (i < LLVMGetNumArgOperands (lcall));
mono_llvm_set_call_nonnull_arg (lcall, i);
} else {
gint32 *nullable_count = &g_array_index (call_site_union, gint32, i);
*nullable_count = *nullable_count + 1;
}
}
g_hash_table_insert (ctx->module->method_to_call_info, call_method, call_site_union);
}
}
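
/*
 * A minimal sketch (illustration only, under #if 0) of the bookkeeping used
 * above: call_site_union holds one gint32 counter per formal parameter,
 * counting the call sites that pass a possibly-null argument. A parameter can
 * be marked nonnull once its counter drops to zero.
 */
#if 0
static gboolean
example_param_is_provably_nonnull (GArray *call_site_union, int param)
{
	return g_array_index (call_site_union, gint32, param) == 0;
}
#endif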
static void
mono_llvm_propagate_nonnull_final (GHashTable *all_specializable, MonoLLVMModule *module)
{
	// When we first traverse the mini IL, we mark the values that are
	// nonnull (the roots). Then, for every method that can be specialized,
	// we check whether all of its call sites pass nonnull arguments.
	// If so, we mark the corresponding function parameter as nonnull; the
	// parameter's uses then propagate the attribute further, which can in
	// turn make more parameters nonnull, and so on.
GSList *queue = NULL;
GHashTableIter iter;
LLVMValueRef lmethod;
MonoMethod *method;
g_hash_table_iter_init (&iter, all_specializable);
while (g_hash_table_iter_next (&iter, (void**)&lmethod, (void**)&method)) {
GArray *call_site_union = (GArray *) g_hash_table_lookup (module->method_to_call_info, method);
// Basic sanity checking
if (call_site_union)
g_assert (call_site_union->len == LLVMCountParams (lmethod));
// Add root to work queue
for (int i = 0; call_site_union && i < call_site_union->len; i++) {
if (g_array_index (call_site_union, gint32, i) == 0) {
NonnullPropWorkItem *item = g_malloc (sizeof (NonnullPropWorkItem));
item->lmethod = lmethod;
item->argument = i;
queue = g_slist_prepend (queue, item);
}
}
}
// This is essentially reference counting, and we are propagating
// the refcount decrement here. We have less work to do than we may otherwise
// because we are only working with a set of subgraphs of specializable functions.
//
// We rely on being able to see all of the references in the graph.
// This is ensured by the function mono_aot_can_specialize. Everything in
// all_specializable is a function that can be specialized, and is the resulting
	// node in the graph after all of the substitutions are done.
//
// Anything disrupting the direct calls made with self-init will break this optimization.
while (queue) {
		// Pop the head of the queue, freeing the list link; we still have
		// to free current itself once it has been processed
		NonnullPropWorkItem *current = (NonnullPropWorkItem *) queue->data;
		queue = g_slist_delete_link (queue, queue);
g_assert (current->argument < LLVMCountParams (current->lmethod));
// Does the actual leaf-node work here
// Mark the function argument as nonnull for LLVM
mono_llvm_set_func_nonnull_arg (current->lmethod, current->argument);
// The rest of this is for propagating forward nullability changes
// to calls that use the argument that is now nullable.
// Get the actual LLVM value of the argument, so we can see which call instructions
// used that argument
LLVMValueRef caller_argument = LLVMGetParam (current->lmethod, current->argument);
// Iterate over the calls using the newly-non-nullable argument
GSList *calls = mono_llvm_calls_using (caller_argument);
for (GSList *cursor = calls; cursor != NULL; cursor = cursor->next) {
LLVMValueRef lcall = (LLVMValueRef) cursor->data;
LLVMValueRef callee_lmethod = LLVMGetCalledValue (lcall);
// If this wasn't a direct call for which mono_aot_can_specialize is true,
// this lookup won't find a MonoMethod.
MonoMethod *callee_method = (MonoMethod *) g_hash_table_lookup (all_specializable, callee_lmethod);
if (!callee_method)
continue;
// Decrement number of nullable refs at that func's arg offset
GArray *call_site_union = (GArray *) g_hash_table_lookup (module->method_to_call_info, callee_method);
// It has module-local callers and is specializable, should have seen this call site
// and inited this
g_assert (call_site_union);
// The function *definition* parameter arity should always be consistent
int max_params = LLVMCountParams (callee_lmethod);
if (call_site_union->len != max_params) {
mono_llvm_dump_value (callee_lmethod);
g_assert_not_reached ();
}
// Get the values that correspond to the parameters passed to the call
// that used our argument
LLVMValueRef *operands = mono_llvm_call_args (lcall);
for (int call_argument = 0; call_argument < max_params; call_argument++) {
				// Each time the newly-nonnull argument is passed, decrement the
				// nullable refcount for the corresponding callee parameter.
if (caller_argument == operands [call_argument]) {
gint32 *nullable_count = &g_array_index (call_site_union, gint32, call_argument);
g_assert (*nullable_count > 0);
*nullable_count = *nullable_count - 1;
					// If the callee's parameter is now nonnull at every call site, add it to the work queue
if (*nullable_count == 0) {
NonnullPropWorkItem *item = g_malloc (sizeof (NonnullPropWorkItem));
item->lmethod = callee_lmethod;
item->argument = call_argument;
queue = g_slist_prepend (queue, item);
}
}
}
g_free (operands);
			// Store the updated refcount array back for the callee (it was mutated in place)
g_hash_table_insert (module->method_to_call_info, callee_method, call_site_union);
}
g_slist_free (calls);
g_free (current);
}
}
/*
* Emit the aot module into the LLVM bitcode file FILENAME.
*/
void
mono_llvm_emit_aot_module (const char *filename, const char *cu_name)
{
LLVMTypeRef inited_type;
LLVMValueRef real_inited;
MonoLLVMModule *module = &aot_module;
emit_llvm_code_end (module);
/*
* Create the real init_var and replace all uses of the dummy variable with
* the real one.
*/
inited_type = LLVMArrayType (LLVMInt8Type (), module->max_inited_idx + 1);
real_inited = LLVMAddGlobal (module->lmodule, inited_type, "mono_inited");
LLVMSetInitializer (real_inited, LLVMConstNull (inited_type));
LLVMSetLinkage (real_inited, LLVMInternalLinkage);
mono_llvm_replace_uses_of (module->inited_var, real_inited);
LLVMDeleteGlobal (module->inited_var);
/* Replace the dummy info_ variables with the real ones */
for (int i = 0; i < module->cfgs->len; ++i) {
MonoCompile *cfg = (MonoCompile *)g_ptr_array_index (module->cfgs, i);
// FIXME: Eliminate unused vars
// FIXME: Speed this up
if (cfg->llvm_dummy_info_var) {
if (cfg->llvm_info_var) {
mono_llvm_replace_uses_of (cfg->llvm_dummy_info_var, cfg->llvm_info_var);
LLVMDeleteGlobal (cfg->llvm_dummy_info_var);
} else {
// FIXME: How can this happen ?
LLVMSetInitializer (cfg->llvm_dummy_info_var, mono_llvm_create_constant_data_array (NULL, 0));
}
}
}
if (module->llvm_only) {
emit_get_method (&aot_module);
emit_get_unbox_tramp (&aot_module);
}
emit_init_aotconst (module);
emit_llvm_used (&aot_module);
emit_dbg_info (&aot_module, filename, cu_name);
emit_aot_file_info (&aot_module);
/* Replace PLT entries for directly callable methods with the methods themselves */
{
GHashTableIter iter;
MonoJumpInfo *ji;
LLVMValueRef callee;
GHashTable *specializable = g_hash_table_new (NULL, NULL);
g_hash_table_iter_init (&iter, module->plt_entries_ji);
while (g_hash_table_iter_next (&iter, (void**)&ji, (void**)&callee)) {
if (mono_aot_is_direct_callable (ji)) {
LLVMValueRef lmethod;
lmethod = (LLVMValueRef)g_hash_table_lookup (module->method_to_lmethod, ji->data.method);
/* The types might not match because the caller might pass an rgctx */
if (lmethod && LLVMTypeOf (callee) == LLVMTypeOf (lmethod)) {
mono_llvm_replace_uses_of (callee, lmethod);
if (mono_aot_can_specialize (ji->data.method))
g_hash_table_insert (specializable, lmethod, ji->data.method);
mono_aot_mark_unused_llvm_plt_entry (ji);
}
}
}
mono_llvm_propagate_nonnull_final (specializable, module);
g_hash_table_destroy (specializable);
}
#if 0
{
char *verifier_err;
if (LLVMVerifyModule (module->lmodule, LLVMReturnStatusAction, &verifier_err)) {
printf ("%s\n", verifier_err);
g_assert_not_reached ();
}
}
#endif
	/* Note: you can still dump an invalid bitcode file: run `llvm-dis`
	 * under a debugger, set a breakpoint on `LLVMVerifyModule`, and fake
	 * its result to 0 (indicating success). */
LLVMWriteBitcodeToFile (module->lmodule, filename);
}
static LLVMValueRef
md_string (const char *s)
{
return LLVMMDString (s, strlen (s));
}
/* Debugging support */
static void
emit_dbg_info (MonoLLVMModule *module, const char *filename, const char *cu_name)
{
LLVMModuleRef lmodule = module->lmodule;
LLVMValueRef args [16], ver;
/*
* This can only be enabled when LLVM code is emitted into a separate object
* file, since the AOT compiler also emits dwarf info,
	 * and the abbrev indexes would not be correct, because llvm has added
	 * its own abbrevs.
*/
if (!module->emit_dwarf)
return;
mono_llvm_di_builder_finalize (module->di_builder);
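	/* Add the llvm.module.flags entries LLVM expects: behavior 2 (Warning),
	 * "Dwarf Version" = 2 and "Debug Info Version" = 3. */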
args [0] = LLVMConstInt (LLVMInt32Type (), 2, FALSE);
args [1] = LLVMMDString ("Dwarf Version", strlen ("Dwarf Version"));
args [2] = LLVMConstInt (LLVMInt32Type (), 2, FALSE);
ver = LLVMMDNode (args, 3);
LLVMAddNamedMetadataOperand (lmodule, "llvm.module.flags", ver);
args [0] = LLVMConstInt (LLVMInt32Type (), 2, FALSE);
args [1] = LLVMMDString ("Debug Info Version", strlen ("Debug Info Version"));
args [2] = LLVMConstInt (LLVMInt64Type (), 3, FALSE);
ver = LLVMMDNode (args, 3);
LLVMAddNamedMetadataOperand (lmodule, "llvm.module.flags", ver);
}
static LLVMValueRef
emit_dbg_subprogram (EmitContext *ctx, MonoCompile *cfg, LLVMValueRef method, const char *name)
{
MonoLLVMModule *module = ctx->module;
MonoDebugMethodInfo *minfo = ctx->minfo;
char *source_file, *dir, *filename;
MonoSymSeqPoint *sym_seq_points;
int n_seq_points;
if (!minfo)
return NULL;
mono_debug_get_seq_points (minfo, &source_file, NULL, NULL, &sym_seq_points, &n_seq_points);
if (!source_file)
source_file = g_strdup ("<unknown>");
dir = g_path_get_dirname (source_file);
filename = g_path_get_basename (source_file);
g_free (source_file);
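	/* Fall back to line 1 when the method has no sequence points */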
return (LLVMValueRef)mono_llvm_di_create_function (module->di_builder, module->cu, method, cfg->method->name, name, dir, filename, n_seq_points ? sym_seq_points [0].line : 1);
}
static void
emit_dbg_loc (EmitContext *ctx, LLVMBuilderRef builder, const unsigned char *cil_code)
{
MonoCompile *cfg = ctx->cfg;
if (ctx->minfo && cil_code && cil_code >= cfg->header->code && cil_code < cfg->header->code + cfg->header->code_size) {
MonoDebugSourceLocation *loc;
LLVMValueRef loc_md;
loc = mono_debug_method_lookup_location (ctx->minfo, cil_code - cfg->header->code);
if (loc) {
loc_md = (LLVMValueRef)mono_llvm_di_create_location (ctx->module->di_builder, ctx->dbg_md, loc->row, loc->column);
mono_llvm_di_set_location (builder, loc_md);
mono_debug_free_source_location (loc);
}
}
}
static void
emit_default_dbg_loc (EmitContext *ctx, LLVMBuilderRef builder)
{
if (ctx->minfo) {
LLVMValueRef loc_md;
loc_md = (LLVMValueRef)mono_llvm_di_create_location (ctx->module->di_builder, ctx->dbg_md, 0, 0);
mono_llvm_di_set_location (builder, loc_md);
}
}
/*
DESIGN:
- Emit LLVM IR from the mono IR using the LLVM C API.
- The original arch specific code remains, so we can fall back to it if we run
into something we can't handle.
*/
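/*
 * Minimal sketch of the LLVM C API pattern described above (illustrative
 * only; the function below is hypothetical and intentionally compiled out):
 */
#if 0
static LLVMValueRef
example_emit_add_one (LLVMModuleRef lmodule)
{
	/* int32 add_one (int32 x) { return x + 1; } */
	LLVMTypeRef params [] = { LLVMInt32Type () };
	LLVMTypeRef sig = LLVMFunctionType (LLVMInt32Type (), params, 1, FALSE);
	LLVMValueRef func = LLVMAddFunction (lmodule, "add_one", sig);
	LLVMBasicBlockRef bb = LLVMAppendBasicBlock (func, "entry");
	LLVMBuilderRef builder = LLVMCreateBuilder ();
	LLVMPositionBuilderAtEnd (builder, bb);
	LLVMValueRef one = LLVMConstInt (LLVMInt32Type (), 1, FALSE);
	LLVMValueRef sum = LLVMBuildAdd (builder, LLVMGetParam (func, 0), one, "sum");
	LLVMBuildRet (builder, sum);
	LLVMDisposeBuilder (builder);
	return func;
}
#endif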
/*
A partial list of issues:
- Handling of opcodes which can throw exceptions.
In the mono JIT, these are implemented using code like this:
method:
<compare>
throw_pos:
b<cond> ex_label
<rest of code>
ex_label:
push throw_pos - method
call <exception trampoline>
The problematic part is push throw_pos - method, which cannot be represented
in the LLVM IR, since it does not support label values.
-> this can be implemented in AOT mode using inline asm + labels, but cannot
be implemented in JIT mode ?
-> a possible but slower implementation would use the normal exception
throwing code but it would need to control the placement of the throw code
(it needs to be exactly after the compare+branch).
  -> perhaps add a PC offset intrinsic ?
- efficient implementation of .ovf opcodes.
These are currently implemented as:
<ins which sets the condition codes>
b<cond> ex_label
Some overflow opcodes are now supported by LLVM SVN.
- exception handling, unwinding.
- SSA is disabled for methods with exception handlers
- How to obtain unwind info for LLVM compiled methods ?
-> this is now solved by converting the unwind info generated by LLVM
into our format.
- LLVM uses the C++ exception handling framework, while we use our home-grown
  code, and we couldn't use the C++ one:
  - it's not supported under VC++ and other exotic platforms.
- it might be impossible to support filter clauses with it.
- trampolines.
The trampolines need a predictable call sequence, since they need to disasm
the calling code to obtain register numbers / offsets.
LLVM currently generates this code in non-JIT mode:
mov -0x98(%rax),%eax
callq *%rax
Here, the vtable pointer is lost.
-> solution: use one vtable trampoline per class.
- passing/receiving the IMT pointer/RGCTX.
-> solution: pass them as normal arguments ?
- argument passing.
LLVM does not allow the specification of argument registers etc. This means
that all calls are made according to the platform ABI.
- passing/receiving vtypes.
Vtypes passed/received in registers are handled by the front end by using
a signature with scalar arguments, and loading the parts of the vtype into those
arguments.
Vtypes passed on the stack are handled using the 'byval' attribute.
- ldaddr.
  Supported through alloca; we need to emit the load/store code.
- types.
  The mono JIT uses pointer-sized iregs/double fregs, while LLVM uses precisely
typed registers, so we have to keep track of the precise LLVM type of each vreg.
This is made easier because the IR is already in SSA form.
An additional problem is that our IR is not consistent with types, i.e. i32/i64
types are frequently used incorrectly.
*/
/*
AOT SUPPORT:
Emit LLVM bitcode into a .bc file, compile it using llc into a .s file, then link
it with the file containing the methods emitted by the JIT and the AOT data
structures.
*/
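/* Illustrative pipeline (hypothetical file names and flags):
 *   mono --aot=llvm app.dll      # emits app.dll.bc via mono_llvm_emit_aot_module ()
 *   llc app.dll.bc -o app.dll.s  # compile the bitcode to native assembly
 *   <assemble and link app.dll.s with the JIT-emitted data structures>
 */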
/* FIXME: Normalize some aspects of the mono IR to allow easier translation, like:
* - each bblock should end with a branch
* - setting the return value, making cfg->ret non-volatile
* - avoid some transformations in the JIT which make it harder for us to generate
* code.
* - use pointer types to help optimizations.
*/
#else /* DISABLE_JIT */
void
mono_llvm_cleanup (void)
{
}
void
mono_llvm_free_mem_manager (MonoJitMemoryManager *mem_manager)
{
}
void
mono_llvm_init (gboolean enable_jit)
{
}
#endif /* DISABLE_JIT */
#if !defined(DISABLE_JIT) && !defined(MONO_CROSS_COMPILE)
/* LLVM JIT support */
/*
* decode_llvm_eh_info:
*
* Decode the EH table emitted by llvm in jit mode, and store
* the result into cfg.
*/
static void
decode_llvm_eh_info (EmitContext *ctx, gpointer eh_frame)
{
MonoCompile *cfg = ctx->cfg;
guint8 *cie, *fde;
int fde_len;
MonoLLVMFDEInfo info;
MonoJitExceptionInfo *ei;
guint8 *p = (guint8*)eh_frame;
int version, fde_count, fde_offset;
guint32 ei_len, i, nested_len;
gpointer *type_info;
gint32 *table;
guint8 *unw_info;
/*
	 * Decode the one-element EH table emitted by the MonoException class
* in llvm.
*/
/* Similar to decode_llvm_mono_eh_frame () in aot-runtime.c */
version = *p;
g_assert (version == 3);
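	/* Skip the version byte and the following header byte, then align to 4 */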
p ++;
p ++;
p = (guint8 *)ALIGN_PTR_TO (p, 4);
fde_count = *(guint32*)p;
p += 4;
table = (gint32*)p;
g_assert (fde_count <= 2);
/* The first entry is the real method */
g_assert (table [0] == 1);
fde_offset = table [1];
table += fde_count * 2;
/* Extra entry */
cfg->code_len = table [0];
fde_len = table [1] - fde_offset;
table += 2;
fde = (guint8*)eh_frame + fde_offset;
cie = (guint8*)table;
/* Compute lengths */
mono_unwind_decode_llvm_mono_fde (fde, fde_len, cie, cfg->native_code, &info, NULL, NULL, NULL);
ei = (MonoJitExceptionInfo *)g_malloc0 (info.ex_info_len * sizeof (MonoJitExceptionInfo));
type_info = (gpointer *)g_malloc0 (info.ex_info_len * sizeof (gpointer));
unw_info = (guint8*)g_malloc0 (info.unw_info_len);
mono_unwind_decode_llvm_mono_fde (fde, fde_len, cie, cfg->native_code, &info, ei, type_info, unw_info);
cfg->encoded_unwind_ops = unw_info;
cfg->encoded_unwind_ops_len = info.unw_info_len;
if (cfg->verbose_level > 1)
mono_print_unwind_info (cfg->encoded_unwind_ops, cfg->encoded_unwind_ops_len);
if (info.this_reg != -1) {
cfg->llvm_this_reg = info.this_reg;
cfg->llvm_this_offset = info.this_offset;
}
ei_len = info.ex_info_len;
// Nested clauses are currently disabled
nested_len = 0;
cfg->llvm_ex_info = (MonoJitExceptionInfo*)mono_mempool_alloc0 (cfg->mempool, (ei_len + nested_len) * sizeof (MonoJitExceptionInfo));
cfg->llvm_ex_info_len = ei_len + nested_len;
memcpy (cfg->llvm_ex_info, ei, ei_len * sizeof (MonoJitExceptionInfo));
/* Fill the rest of the information from the type info */
for (i = 0; i < ei_len; ++i) {
gint32 clause_index = *(gint32*)type_info [i];
MonoExceptionClause *clause = &cfg->header->clauses [clause_index];
cfg->llvm_ex_info [i].flags = clause->flags;
cfg->llvm_ex_info [i].data.catch_class = clause->data.catch_class;
cfg->llvm_ex_info [i].clause_index = clause_index;
}
}
static MonoLLVMModule*
init_jit_module (void)
{
MonoJitMemoryManager *jit_mm;
MonoLLVMModule *module;
// FIXME:
jit_mm = get_default_jit_mm ();
if (jit_mm->llvm_module)
return (MonoLLVMModule*)jit_mm->llvm_module;
mono_loader_lock ();
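	/* Double-checked locking: re-check after taking the loader lock */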
if (jit_mm->llvm_module) {
mono_loader_unlock ();
return (MonoLLVMModule*)jit_mm->llvm_module;
}
module = g_new0 (MonoLLVMModule, 1);
module->context = LLVMGetGlobalContext ();
module->mono_ee = (MonoEERef*)mono_llvm_create_ee (&module->ee);
// This contains just the intrinsics
module->lmodule = LLVMModuleCreateWithName ("jit-global-module");
add_intrinsics (module->lmodule);
add_types (module);
module->llvm_types = g_hash_table_new (NULL, NULL);
mono_memory_barrier ();
jit_mm->llvm_module = module;
mono_loader_unlock ();
return (MonoLLVMModule*)jit_mm->llvm_module;
}
static void
llvm_jit_finalize_method (EmitContext *ctx)
{
MonoCompile *cfg = ctx->cfg;
int nvars = g_hash_table_size (ctx->jit_callees);
LLVMValueRef *callee_vars = g_new0 (LLVMValueRef, nvars);
gpointer *callee_addrs = g_new0 (gpointer, nvars);
GHashTableIter iter;
LLVMValueRef var;
MonoMethod *callee;
gpointer eh_frame;
int i;
/*
* Compute the addresses of the LLVM globals pointing to the
	 * methods called by the current method. Pass them to the trampoline
	 * code so it can update the globals after their corresponding method
	 * has been compiled.
*/
g_hash_table_iter_init (&iter, ctx->jit_callees);
i = 0;
while (g_hash_table_iter_next (&iter, NULL, (void**)&var))
callee_vars [i ++] = var;
mono_llvm_optimize_method (ctx->lmethod);
if (cfg->verbose_level > 1) {
g_print ("\n*** Optimized LLVM IR for %s ***\n", mono_method_full_name (cfg->method, TRUE));
if (cfg->compile_aot) {
mono_llvm_dump_value (ctx->lmethod);
} else {
mono_llvm_dump_module (ctx->lmodule);
}
g_print ("***\n\n");
}
mono_codeman_enable_write ();
cfg->native_code = (guint8*)mono_llvm_compile_method (ctx->module->mono_ee, cfg, ctx->lmethod, nvars, callee_vars, callee_addrs, &eh_frame);
mono_llvm_remove_gc_safepoint_poll (ctx->lmodule);
mono_codeman_disable_write ();
decode_llvm_eh_info (ctx, eh_frame);
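	/* Record the address slots for each JIT callee so they can be patched
	 * once the corresponding method is compiled */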
// FIXME:
MonoJitMemoryManager *jit_mm = get_default_jit_mm ();
jit_mm_lock (jit_mm);
if (!jit_mm->llvm_jit_callees)
jit_mm->llvm_jit_callees = g_hash_table_new (NULL, NULL);
g_hash_table_iter_init (&iter, ctx->jit_callees);
i = 0;
while (g_hash_table_iter_next (&iter, (void**)&callee, (void**)&var)) {
GSList *addrs = (GSList*)g_hash_table_lookup (jit_mm->llvm_jit_callees, callee);
addrs = g_slist_prepend (addrs, callee_addrs [i]);
g_hash_table_insert (jit_mm->llvm_jit_callees, callee, addrs);
i ++;
}
jit_mm_unlock (jit_mm);
}
#else
static MonoLLVMModule*
init_jit_module (void)
{
g_assert_not_reached ();
}
static void
llvm_jit_finalize_method (EmitContext *ctx)
{
g_assert_not_reached ();
}
#endif
static MonoCPUFeatures cpu_features;
MonoCPUFeatures mono_llvm_get_cpu_features (void)
{
static const CpuFeatureAliasFlag flags_map [] = {
#if defined(TARGET_X86) || defined(TARGET_AMD64)
{ "sse", MONO_CPU_X86_SSE },
{ "sse2", MONO_CPU_X86_SSE2 },
{ "pclmul", MONO_CPU_X86_PCLMUL },
{ "aes", MONO_CPU_X86_AES },
{ "sse2", MONO_CPU_X86_SSE2 },
{ "sse3", MONO_CPU_X86_SSE3 },
{ "ssse3", MONO_CPU_X86_SSSE3 },
{ "sse4.1", MONO_CPU_X86_SSE41 },
{ "sse4.2", MONO_CPU_X86_SSE42 },
{ "popcnt", MONO_CPU_X86_POPCNT },
{ "avx", MONO_CPU_X86_AVX },
{ "avx2", MONO_CPU_X86_AVX2 },
{ "fma", MONO_CPU_X86_FMA },
{ "lzcnt", MONO_CPU_X86_LZCNT },
{ "bmi", MONO_CPU_X86_BMI1 },
{ "bmi2", MONO_CPU_X86_BMI2 },
#endif
#if defined(TARGET_ARM64)
{ "crc", MONO_CPU_ARM64_CRC },
{ "crypto", MONO_CPU_ARM64_CRYPTO },
{ "neon", MONO_CPU_ARM64_NEON },
{ "rdm", MONO_CPU_ARM64_RDM },
{ "dotprod", MONO_CPU_ARM64_DP },
#endif
#if defined(TARGET_WASM)
{ "simd", MONO_CPU_WASM_SIMD },
#endif
	// flags_map cannot be zero-length with MSVC, so add a dummy entry for arm32
#if defined(TARGET_ARM) && defined(HOST_WIN32)
{ "inited", MONO_CPU_INITED},
#endif
};
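	/* Lazily query LLVM for the host CPU features once; MONO_CPU_INITED
	 * marks the cached value as valid */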
if (!cpu_features)
cpu_features = MONO_CPU_INITED | (MonoCPUFeatures)mono_llvm_check_cpu_features (flags_map, G_N_ELEMENTS (flags_map));
return cpu_features;
}
./src/libraries/System.Data.Odbc/src/Common/System/Data/Common/SafeNativeMethods.cs
// Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
using System.Runtime.InteropServices;
namespace System.Data
{
internal static partial class SafeNativeMethods
{
internal static IntPtr LocalAlloc(IntPtr initialSize)
{
var handle = Marshal.AllocHGlobal(initialSize);
ZeroMemory(handle, (int)initialSize);
return handle;
}
internal static void LocalFree(IntPtr ptr)
{
Marshal.FreeHGlobal(ptr);
}
internal static void ZeroMemory(IntPtr ptr, int length)
{
var zeroes = new byte[length];
Marshal.Copy(zeroes, 0, ptr, length);
}
}
}
./src/libraries/System.CodeDom/src/System/CodeDom/CodeRemoveEventStatement.cs
// Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
namespace System.CodeDom
{
public class CodeRemoveEventStatement : CodeStatement
{
private CodeEventReferenceExpression _eventRef;
public CodeRemoveEventStatement() { }
public CodeRemoveEventStatement(CodeEventReferenceExpression eventRef, CodeExpression listener)
{
_eventRef = eventRef;
Listener = listener;
}
public CodeRemoveEventStatement(CodeExpression targetObject, string eventName, CodeExpression listener)
{
_eventRef = new CodeEventReferenceExpression(targetObject, eventName);
Listener = listener;
}
public CodeEventReferenceExpression Event
{
get => _eventRef ?? (_eventRef = new CodeEventReferenceExpression());
set => _eventRef = value;
}
public CodeExpression Listener { get; set; }
}
}
./src/coreclr/jit/disasm.h
// Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
/*XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
XX XX
XX DisAsm XX
XX XX
XX The dis-assembler to display the native code generated XX
XX XX
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
*/
/*****************************************************************************/
#ifndef _DIS_H_
#define _DIS_H_
/*****************************************************************************/
#ifdef LATE_DISASM
// free() is deprecated (we should only allocate and free memory through CLR hosting interfaces)
// and is redefined in clrhost.h to cause a compiler error.
// We don't call free(), but this function is mentioned in STL headers included by msvcdis.h
// (and free() is only called by STL functions that we don't use).
// To avoid the compiler error, but at the same time ensure that we don't accidentally use free(),
// free() is redefined to cause a runtime error instead of a compile time error.
#undef free
#ifdef DEBUG
#define free(x) assert(false && "Must not call free(). Use a ClrXXX function instead.")
#endif
#if CHECK_STRUCT_PADDING
#pragma warning(pop)
#endif // CHECK_STRUCT_PADDING
#define _OLD_IOSTREAMS
// This pragma is needed because public\vc\inc\xiosbase contains
// a static local variable
#pragma warning(disable : 4640)
#include "msvcdis.h"
#pragma warning(default : 4640)
#ifdef TARGET_XARCH
#include "disx86.h"
#elif defined(TARGET_ARM64)
#include "disarm64.h"
#else // TARGET*
#error Unsupported or unset target architecture
#endif
#if CHECK_STRUCT_PADDING
#pragma warning(push)
#pragma warning(default : 4820) // 'bytes' bytes padding added after construct 'member_name'
#endif // CHECK_STRUCT_PADDING
/*****************************************************************************/
#ifdef HOST_64BIT
template <typename T>
struct SizeTKeyFuncs : JitLargePrimitiveKeyFuncs<T>
{
};
#else // !HOST_64BIT
template <typename T>
struct SizeTKeyFuncs : JitSmallPrimitiveKeyFuncs<T>
{
};
#endif // HOST_64BIT
typedef JitHashTable<size_t, SizeTKeyFuncs<size_t>, CORINFO_METHOD_HANDLE> AddrToMethodHandleMap;
typedef JitHashTable<size_t, SizeTKeyFuncs<size_t>, size_t> AddrToAddrMap;
class Compiler;
class DisAssembler
{
public:
// Constructor
void disInit(Compiler* pComp);
// Initialize the class for the current method being generated.
void disOpenForLateDisAsm(const char* curMethodName, const char* curClassName, PCCOR_SIGNATURE sig);
// Disassemble a buffer: called after code for a method is generated.
void disAsmCode(BYTE* hotCodePtr, size_t hotCodeSize, BYTE* coldCodePtr, size_t coldCodeSize);
// Register an address to be associated with a method handle.
void disSetMethod(size_t addr, CORINFO_METHOD_HANDLE methHnd);
// Register a relocation address.
void disRecordRelocation(size_t relocAddr, size_t targetAddr);
private:
    /* Address of the hot and cold code blocks to disassemble */
size_t disHotCodeBlock;
size_t disColdCodeBlock;
    /* Size of the hot and cold code blocks to disassemble */
size_t disHotCodeSize;
size_t disColdCodeSize;
/* Total code size (simply cached version of disHotCodeSize + disColdCodeSize) */
size_t disTotalCodeSize;
/* Address where the code block is to be loaded */
size_t disStartAddr;
/* Current offset in the code block */
size_t disCurOffset;
/* Size (in bytes) of current dissasembled instruction */
size_t disInstSize;
/* Target address of a jump */
size_t disTarget;
/* temporary buffer for function names */
// TODO-Review: there is some issue here where this is never set!
char disFuncTempBuf[1024];
/* Method and class name to output */
const char* disCurMethodName;
const char* disCurClassName;
/* flag that signals when replacing a symbol name has been deferred for following callbacks */
// TODO-Review: there is some issue here where this is never set to 'true'!
bool disHasName;
/* An array of labels, for jumps, LEAs, etc. There is one element in the array for each byte in the generated code.
* That byte is zero if the corresponding byte of generated code is not a label. Otherwise, the value
* is a label number.
*/
BYTE* disLabels;
void DisasmBuffer(FILE* pfile, bool printit);
/* For the purposes of disassembly, we pretend that the hot and cold sections are linear, and not split.
* These functions create this model for the rest of the disassembly code.
*/
/* Given a linear offset into the code, find a pointer to the actual code (either in the hot or cold section) */
const BYTE* disGetLinearAddr(size_t offset);
/* Given a linear offset into the code, determine how many bytes are left in the hot or cold buffer the offset
* points to */
size_t disGetBufferSize(size_t offset);
// Map of instruction addresses to call target method handles for normal calls.
AddrToMethodHandleMap* disAddrToMethodHandleMap;
AddrToMethodHandleMap* GetAddrToMethodHandleMap();
// Map of instruction addresses to call target method handles for JIT helper calls.
AddrToMethodHandleMap* disHelperAddrToMethodHandleMap;
AddrToMethodHandleMap* GetHelperAddrToMethodHandleMap();
// Map of relocation addresses to relocation target.
AddrToAddrMap* disRelocationMap;
AddrToAddrMap* GetRelocationMap();
const char* disGetMethodFullName(size_t addr);
FILE* disAsmFile;
Compiler* disComp;
bool disDiffable; // 'true' if the output should be diffable (hide or obscure absolute addresses)
template <typename T>
T dspAddr(T addr)
{
// silence warning of cast to greater size. It is easier to silence than construct code the compiler is happy with, and
// it is safe in this case
#pragma warning(push)
#pragma warning(disable : 4312)
return (addr == 0) ? 0 : (disDiffable ? T(0xD1FFAB1E) : addr);
#pragma warning(pop)
}
/* Callbacks from msdis */
static size_t __stdcall disCchAddr(
const DIS* pdis, DIS::ADDR addr, _In_reads_(cchMax) wchar_t* wz, size_t cchMax, DWORDLONG* pdwDisp);
size_t disCchAddrMember(
const DIS* pdis, DIS::ADDR addr, _In_reads_(cchMax) wchar_t* wz, size_t cchMax, DWORDLONG* pdwDisp);
static size_t __stdcall disCchFixup(const DIS* pdis,
DIS::ADDR addr,
size_t size,
_In_reads_(cchMax) wchar_t* wz,
size_t cchMax,
DWORDLONG* pdwDisp);
size_t disCchFixupMember(const DIS* pdis,
DIS::ADDR addr,
size_t size,
_In_reads_(cchMax) wchar_t* wz,
size_t cchMax,
DWORDLONG* pdwDisp);
static size_t __stdcall disCchRegRel(
const DIS* pdis, DIS::REGA reg, DWORD disp, _In_reads_(cchMax) wchar_t* wz, size_t cchMax, DWORD* pdwDisp);
size_t disCchRegRelMember(
const DIS* pdis, DIS::REGA reg, DWORD disp, _In_reads_(cchMax) wchar_t* wz, size_t cchMax, DWORD* pdwDisp);
static size_t __stdcall disCchReg(const DIS* pdis, DIS::REGA reg, _In_reads_(cchMax) wchar_t* wz, size_t cchMax);
size_t disCchRegMember(const DIS* pdis, DIS::REGA reg, _In_reads_(cchMax) wchar_t* wz, size_t cchMax);
/* Disassemble helper */
size_t CbDisassemble(DIS* pdis,
size_t offs,
DIS::ADDR addr,
const BYTE* pb,
size_t cbMax,
FILE* pfile,
bool findLabels,
bool printit = false,
bool dispOffs = false,
bool dispCodeBytes = false);
};
/*****************************************************************************/
#endif // LATE_DISASM
/*****************************************************************************/
#endif // _DIS_H_
/*****************************************************************************/
./src/tests/Interop/IJW/NativeVarargs/NativeVarargsTest.cs
// Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
using System;
using System.Collections;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Reflection;
using System.Runtime.InteropServices;
using Xunit;
namespace NativeVarargsTest
{
class NativeVarargsTest
{
static int Main(string[] args)
{
if(Environment.OSVersion.Platform != PlatformID.Win32NT || TestLibrary.Utilities.IsWindows7 || TestLibrary.Utilities.IsWindowsNanoServer)
{
return 100;
}
// Use the same seed for consistency between runs.
int seed = 42;
try
{
Assembly ijwNativeDll = Assembly.Load("IjwNativeVarargs");
Type testType = ijwNativeDll.GetType("TestClass");
object testInstance = Activator.CreateInstance(testType);
MethodInfo testMethod = testType.GetMethod("RunTests");
IEnumerable failedTests = (IEnumerable)testMethod.Invoke(testInstance, BindingFlags.DoNotWrapExceptions, null, new object[] {seed}, null);
if (failedTests.OfType<object>().Any())
{
Console.WriteLine("Failed Varargs tests:");
foreach (var failedTest in failedTests)
{
Console.WriteLine($"\t{failedTest}");
}
return 102;
}
}
catch (Exception ex)
{
Console.WriteLine(ex);
return 101;
}
return 100;
}
}
}
./src/tests/GC/Scenarios/GCSimulator/GCSimulator_337.csproj
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<OutputType>Exe</OutputType>
<AllowUnsafeBlocks>true</AllowUnsafeBlocks>
<GCStressIncompatible>true</GCStressIncompatible>
<CLRTestExecutionArguments>-t 1 -tp 0 -dz 17 -sdz 8500 -dc 10000 -sdc 5000 -lt 3 -f -dp 0.0 -dw 0.0</CLRTestExecutionArguments>
<IsGCSimulatorTest>true</IsGCSimulatorTest>
<CLRTestProjectToRun>GCSimulator.csproj</CLRTestProjectToRun>
<CLRTestPriority>1</CLRTestPriority>
</PropertyGroup>
<ItemGroup>
<Compile Include="GCSimulator.cs" />
<Compile Include="lifetimefx.cs" />
</ItemGroup>
</Project>
./src/tests/JIT/Regression/JitBlue/Runtime_13669/Runtime_13669.csproj
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<OutputType>Exe</OutputType>
<CLRTestPriority>1</CLRTestPriority>
</PropertyGroup>
<PropertyGroup>
<DebugType />
<Optimize>True</Optimize>
</PropertyGroup>
<ItemGroup>
<Compile Include="$(MSBuildProjectName).cs" />
</ItemGroup>
</Project>
./src/libraries/System.Configuration.ConfigurationManager/ref/System.Configuration.ConfigurationManager.csproj
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<TargetFrameworks>$(NetCoreAppCurrent);$(NetCoreAppMinimum);netstandard2.0;$(NetFrameworkMinimum)</TargetFrameworks>
<NoWarn>$(NoWarn);CS0618</NoWarn>
</PropertyGroup>
  <ItemGroup>
<Compile Include="System.Configuration.ConfigurationManager.cs" Condition="'$(TargetFrameworkIdentifier)' != '.NETFramework'" />
<Compile Include="$(CommonPath)System\Obsoletions.cs" Link="Common\System\Obsoletions.cs" Condition="'$(TargetFrameworkIdentifier)' != '.NETFramework'" />
<Compile Include="System.Configuration.ConfigurationManager.netframework.cs" Condition="'$(TargetFrameworkIdentifier)' == '.NETFramework'" />
</ItemGroup>
<ItemGroup>
<ProjectReference Include="$(LibrariesProjectRoot)System.Security.Permissions\ref\System.Security.Permissions.csproj" />
</ItemGroup>
<ItemGroup Condition="'$(TargetFramework)' == '$(NetCoreAppCurrent)'">
<ProjectReference Include="$(LibrariesProjectRoot)System.Xml.ReaderWriter\ref\System.Xml.ReaderWriter.csproj" />
</ItemGroup>
<ItemGroup Condition="'$(TargetFrameworkIdentifier)' == '.NETCoreApp' and '$(TargetFramework)' != '$(NetCoreAppCurrent)'">
<Reference Include="System.Collections.NonGeneric" />
<Reference Include="System.Collections.Specialized" />
<Reference Include="System.ComponentModel" />
<Reference Include="System.ComponentModel.Primitives" />
<Reference Include="System.ComponentModel.TypeConverter" />
<Reference Include="System.ObjectModel" />
<Reference Include="System.Runtime" />
<Reference Include="System.Security.Cryptography.Algorithms" />
<Reference Include="System.Xml.ReaderWriter" />
</ItemGroup>
<ItemGroup Condition="'$(TargetFrameworkIdentifier)' == '.NETFramework'">
<Reference Include="System.Configuration" />
</ItemGroup>
</Project>
./src/coreclr/pal/tests/palsuite/threading/DuplicateHandle/test5/test5.cpp
// Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
/*=====================================================================
**
** Source: test5.c (DuplicateHandle)
**
** Purpose: Tests the PAL implementation of the DuplicateHandle function,
** with CreatePipe. This test will create a pipe and write to it,
** then duplicate the read handle and read what was written.
**
** Depends: WriteFile
** ReadFile
** memcmp
** CloseHandle
**
**
**===================================================================*/
#include <palsuite.h>
#define cTestString "one fish, two fish, read fish, blue fish."
PALTEST(threading_DuplicateHandle_test5_paltest_duplicatehandle_test5, "threading/DuplicateHandle/test5/paltest_duplicatehandle_test5")
{
HANDLE hReadPipe = NULL;
HANDLE hWritePipe = NULL;
HANDLE hDupPipe = NULL;
BOOL bRetVal = FALSE;
DWORD dwBytesWritten;
DWORD dwBytesRead;
char buffer[256];
SECURITY_ATTRIBUTES lpPipeAttributes;
/*Initialize the PAL*/
if ((PAL_Initialize(argc, argv)) != 0)
{
return (FAIL);
}
/*Setup SECURITY_ATTRIBUTES structure for CreatePipe*/
lpPipeAttributes.nLength = sizeof(lpPipeAttributes);
lpPipeAttributes.lpSecurityDescriptor = NULL;
lpPipeAttributes.bInheritHandle = TRUE;
/*Create a Pipe*/
bRetVal = CreatePipe(&hReadPipe, /* read handle*/
&hWritePipe, /* write handle */
&lpPipeAttributes,/* security attributes*/
0); /* pipe size*/
if (bRetVal == FALSE)
{
Fail("ERROR:%u:Unable to create pipe\n", GetLastError());
}
/*Write to the write pipe handle*/
bRetVal = WriteFile(hWritePipe, /* handle to write pipe*/
cTestString, /* buffer to write*/
strlen(cTestString),/* number of bytes to write*/
&dwBytesWritten, /* number of bytes written*/
NULL); /* overlapped buffer*/
if (bRetVal == FALSE)
{
Trace("ERROR:%u:unable to write to write pipe handle "
"hWritePipe=0x%lx\n", GetLastError(), hWritePipe);
CloseHandle(hReadPipe);
CloseHandle(hWritePipe);
Fail("");
}
/*Duplicate the pipe handle*/
if (!(DuplicateHandle(GetCurrentProcess(), /* source handle process*/
hReadPipe, /* handle to duplicate*/
GetCurrentProcess(), /* target process handle*/
&hDupPipe, /* duplicate handle*/
GENERIC_READ|GENERIC_WRITE,/* requested access*/
FALSE, /* handle inheritance*/
DUPLICATE_SAME_ACCESS))) /* optional actions*/
{
Trace("ERROR:%u:Fail to create the duplicate handle"
" to hReadPipe=0x%lx",
GetLastError(),
hReadPipe);
CloseHandle(hReadPipe);
CloseHandle(hWritePipe);
Fail("");
}
/*Read from the duplicated handle, 256 bytes, more bytes
than actually written. This will allow us to use the
      value that ReadFile returns for comparison.*/
bRetVal = ReadFile(hDupPipe, /* handle to read pipe*/
buffer, /* buffer to write to*/
256, /* number of bytes to read*/
&dwBytesRead, /* number of bytes read*/
NULL); /* overlapped buffer*/
if (bRetVal == FALSE)
{
Trace("ERROR:%u:unable read from the duplicated pipe "
"hDupPipe=0x%lx\n",
GetLastError(),
hDupPipe);
CloseHandle(hReadPipe);
CloseHandle(hWritePipe);
CloseHandle(hDupPipe);
Fail("");
}
/*Compare what was read with what was written.*/
if ((memcmp(cTestString, buffer, dwBytesRead)) != 0)
{
Trace("ERROR:%u: read \"%s\" expected \"%s\" \n",
GetLastError(),
buffer,
cTestString);
CloseHandle(hReadPipe);
CloseHandle(hWritePipe);
CloseHandle(hDupPipe);
Fail("");
}
/*Compare values returned from WriteFile and ReadFile.*/
if (dwBytesWritten != dwBytesRead)
{
Trace("ERROR:%u: WriteFile wrote \"%s\", but ReadFile read \"%s\","
" these should be the same\n",
GetLastError(),
buffer,
cTestString);
CloseHandle(hReadPipe);
CloseHandle(hWritePipe);
CloseHandle(hDupPipe);
Fail("");
}
/*Cleanup.*/
CloseHandle(hWritePipe);
CloseHandle(hReadPipe);
CloseHandle(hDupPipe);
PAL_Terminate();
return (PASS);
}
| // Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
/*=====================================================================
**
** Source: test5.c (DuplicateHandle)
**
** Purpose: Tests the PAL implementation of the DuplicateHandle function,
** with CreatePipe. This test will create a pipe and write to it,
** then duplicate the read handle and read what was written.
**
** Depends: WriteFile
** ReadFile
** memcmp
** CloseHandle
**
**
**===================================================================*/
#include <palsuite.h>
#define cTestString "one fish, two fish, read fish, blue fish."
PALTEST(threading_DuplicateHandle_test5_paltest_duplicatehandle_test5, "threading/DuplicateHandle/test5/paltest_duplicatehandle_test5")
{
HANDLE hReadPipe = NULL;
HANDLE hWritePipe = NULL;
HANDLE hDupPipe = NULL;
BOOL bRetVal = FALSE;
DWORD dwBytesWritten;
DWORD dwBytesRead;
char buffer[256];
SECURITY_ATTRIBUTES lpPipeAttributes;
/*Initialize the PAL*/
if ((PAL_Initialize(argc, argv)) != 0)
{
return (FAIL);
}
/*Setup SECURITY_ATTRIBUTES structure for CreatePipe*/
lpPipeAttributes.nLength = sizeof(lpPipeAttributes);
lpPipeAttributes.lpSecurityDescriptor = NULL;
lpPipeAttributes.bInheritHandle = TRUE;
/*Create a Pipe*/
bRetVal = CreatePipe(&hReadPipe, /* read handle*/
&hWritePipe, /* write handle */
&lpPipeAttributes,/* security attributes*/
0); /* pipe size*/
if (bRetVal == FALSE)
{
Fail("ERROR:%u:Unable to create pipe\n", GetLastError());
}
/*Write to the write pipe handle*/
bRetVal = WriteFile(hWritePipe, /* handle to write pipe*/
cTestString, /* buffer to write*/
strlen(cTestString),/* number of bytes to write*/
&dwBytesWritten, /* number of bytes written*/
NULL); /* overlapped buffer*/
if (bRetVal == FALSE)
{
Trace("ERROR:%u:unable to write to write pipe handle "
"hWritePipe=0x%lx\n", GetLastError(), hWritePipe);
CloseHandle(hReadPipe);
CloseHandle(hWritePipe);
Fail("");
}
/*Duplicate the pipe handle*/
if (!(DuplicateHandle(GetCurrentProcess(), /* source handle process*/
hReadPipe, /* handle to duplicate*/
GetCurrentProcess(), /* target process handle*/
&hDupPipe, /* duplicate handle*/
GENERIC_READ|GENERIC_WRITE,/* requested access*/
FALSE, /* handle inheritance*/
DUPLICATE_SAME_ACCESS))) /* optional actions*/
{
Trace("ERROR:%u:Fail to create the duplicate handle"
" to hReadPipe=0x%lx",
GetLastError(),
hReadPipe);
CloseHandle(hReadPipe);
CloseHandle(hWritePipe);
Fail("");
}
/*Read from the duplicated handle, 256 bytes, more bytes
than actually written. This will allow us to use the
      value that ReadFile returns for comparison.*/
bRetVal = ReadFile(hDupPipe, /* handle to read pipe*/
buffer, /* buffer to write to*/
256, /* number of bytes to read*/
&dwBytesRead, /* number of bytes read*/
NULL); /* overlapped buffer*/
if (bRetVal == FALSE)
{
Trace("ERROR:%u:unable read from the duplicated pipe "
"hDupPipe=0x%lx\n",
GetLastError(),
hDupPipe);
CloseHandle(hReadPipe);
CloseHandle(hWritePipe);
CloseHandle(hDupPipe);
Fail("");
}
/*Compare what was read with what was written.*/
if ((memcmp(cTestString, buffer, dwBytesRead)) != 0)
{
Trace("ERROR:%u: read \"%s\" expected \"%s\" \n",
GetLastError(),
buffer,
cTestString);
CloseHandle(hReadPipe);
CloseHandle(hWritePipe);
CloseHandle(hDupPipe);
Fail("");
}
/*Compare values returned from WriteFile and ReadFile.*/
if (dwBytesWritten != dwBytesRead)
{
Trace("ERROR:%u: WriteFile wrote \"%s\", but ReadFile read \"%s\","
" these should be the same\n",
GetLastError(),
buffer,
cTestString);
CloseHandle(hReadPipe);
CloseHandle(hWritePipe);
CloseHandle(hDupPipe);
Fail("");
}
/*Cleanup.*/
CloseHandle(hWritePipe);
CloseHandle(hReadPipe);
CloseHandle(hDupPipe);
PAL_Terminate();
return (PASS);
}
| -1 |
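A note on the row above: the PAL test drives CreatePipe, WriteFile, DuplicateHandle, and ReadFile directly. As a rough managed analogue of the same round-trip — write into one end, read the payload back through a second handle on the read end — here is a minimal C# sketch, assuming System.IO.Pipes rather than the PAL API under test:

// Sketch only: a managed analogue of the pipe round-trip in the PAL test above,
// using System.IO.Pipes instead of the raw handle APIs the test validates.
using System;
using System.IO.Pipes;
using System.Text;

class PipeRoundTripSketch
{
    static void Main()
    {
        const string payload = "one fish, two fish, red fish, blue fish.";
        byte[] written = Encoding.ASCII.GetBytes(payload);

        using var writeEnd = new AnonymousPipeServerStream(PipeDirection.Out);
        // Wrap a second stream over the same read end, loosely mirroring what
        // DuplicateHandle gives the PAL test.
        using var readEnd = new AnonymousPipeClientStream(PipeDirection.In,
                                                          writeEnd.ClientSafePipeHandle);

        writeEnd.Write(written, 0, written.Length);

        byte[] buffer = new byte[256];                  // ask for more than was written
        int read = readEnd.Read(buffer, 0, buffer.Length);

        bool ok = read == written.Length
               && payload == Encoding.ASCII.GetString(buffer, 0, read);
        Console.WriteLine(ok ? "passed" : "failed");
    }
}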
dotnet/runtime | 66,213 | [mono] Put WeakAttribute support under an ifdef | This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | lambdageek | 2022-03-04T19:50:51Z | 2022-03-08T14:08:40Z | 34880aaeb463cc17268b44aecb80dda0914cfa01 | 8342a08b8bf0ff2ffb64cd5992e2d60c8d6d63c1 | [mono] Put WeakAttribute support under an ifdef. This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | ./src/tests/JIT/HardwareIntrinsics/General/Vector128_1/op_Equality.UInt32.cs | // Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
/******************************************************************************
* This file is auto-generated from a template file by the GenerateTests.csx *
* script in tests\src\JIT\HardwareIntrinsics\X86\Shared. In order to make *
* changes, please update the corresponding template and run according to the *
* directions listed in the file. *
******************************************************************************/
using System;
using System.Runtime.CompilerServices;
using System.Runtime.InteropServices;
using System.Runtime.Intrinsics;
namespace JIT.HardwareIntrinsics.General
{
public static partial class Program
{
private static void op_EqualityUInt32()
{
var test = new VectorBooleanBinaryOpTest__op_EqualityUInt32();
// Validates basic functionality works, using Unsafe.Read
test.RunBasicScenario_UnsafeRead();
// Validates calling via reflection works, using Unsafe.Read
test.RunReflectionScenario_UnsafeRead();
// Validates passing a static member works
test.RunClsVarScenario();
// Validates passing a local works, using Unsafe.Read
test.RunLclVarScenario_UnsafeRead();
// Validates passing the field of a local class works
test.RunClassLclFldScenario();
// Validates passing an instance member of a class works
test.RunClassFldScenario();
// Validates passing the field of a local struct works
test.RunStructLclFldScenario();
// Validates passing an instance member of a struct works
test.RunStructFldScenario();
if (!test.Succeeded)
{
throw new Exception("One or more scenarios did not complete as expected.");
}
}
}
public sealed unsafe class VectorBooleanBinaryOpTest__op_EqualityUInt32
{
private struct DataTable
{
private byte[] inArray1;
private byte[] inArray2;
private GCHandle inHandle1;
private GCHandle inHandle2;
private ulong alignment;
public DataTable(UInt32[] inArray1, UInt32[] inArray2, int alignment)
{
int sizeOfinArray1 = inArray1.Length * Unsafe.SizeOf<UInt32>();
int sizeOfinArray2 = inArray2.Length * Unsafe.SizeOf<UInt32>();
if ((alignment != 32 && alignment != 16 && alignment != 8) || (alignment * 2) < sizeOfinArray1 || (alignment * 2) < sizeOfinArray2)
{
throw new ArgumentException("Invalid value of alignment");
}
this.inArray1 = new byte[alignment * 2];
this.inArray2 = new byte[alignment * 2];
this.inHandle1 = GCHandle.Alloc(this.inArray1, GCHandleType.Pinned);
this.inHandle2 = GCHandle.Alloc(this.inArray2, GCHandleType.Pinned);
this.alignment = (ulong)alignment;
Unsafe.CopyBlockUnaligned(ref Unsafe.AsRef<byte>(inArray1Ptr), ref Unsafe.As<UInt32, byte>(ref inArray1[0]), (uint)sizeOfinArray1);
Unsafe.CopyBlockUnaligned(ref Unsafe.AsRef<byte>(inArray2Ptr), ref Unsafe.As<UInt32, byte>(ref inArray2[0]), (uint)sizeOfinArray2);
}
public void* inArray1Ptr => Align((byte*)(inHandle1.AddrOfPinnedObject().ToPointer()), alignment);
public void* inArray2Ptr => Align((byte*)(inHandle2.AddrOfPinnedObject().ToPointer()), alignment);
public void Dispose()
{
inHandle1.Free();
inHandle2.Free();
}
private static unsafe void* Align(byte* buffer, ulong expectedAlignment)
{
return (void*)(((ulong)buffer + expectedAlignment - 1) & ~(expectedAlignment - 1));
}
}
private struct TestStruct
{
public Vector128<UInt32> _fld1;
public Vector128<UInt32> _fld2;
public static TestStruct Create()
{
var testStruct = new TestStruct();
for (var i = 0; i < Op1ElementCount; i++) { _data1[i] = TestLibrary.Generator.GetUInt32(); }
Unsafe.CopyBlockUnaligned(ref Unsafe.As<Vector128<UInt32>, byte>(ref testStruct._fld1), ref Unsafe.As<UInt32, byte>(ref _data1[0]), (uint)Unsafe.SizeOf<Vector128<UInt32>>());
for (var i = 0; i < Op2ElementCount; i++) { _data2[i] = TestLibrary.Generator.GetUInt32(); }
Unsafe.CopyBlockUnaligned(ref Unsafe.As<Vector128<UInt32>, byte>(ref testStruct._fld2), ref Unsafe.As<UInt32, byte>(ref _data2[0]), (uint)Unsafe.SizeOf<Vector128<UInt32>>());
return testStruct;
}
public void RunStructFldScenario(VectorBooleanBinaryOpTest__op_EqualityUInt32 testClass)
{
var result = _fld1 == _fld2;
testClass.ValidateResult(_fld1, _fld2, result);
}
}
private static readonly int LargestVectorSize = 16;
private static readonly int Op1ElementCount = Unsafe.SizeOf<Vector128<UInt32>>() / sizeof(UInt32);
private static readonly int Op2ElementCount = Unsafe.SizeOf<Vector128<UInt32>>() / sizeof(UInt32);
private static UInt32[] _data1 = new UInt32[Op1ElementCount];
private static UInt32[] _data2 = new UInt32[Op2ElementCount];
private static Vector128<UInt32> _clsVar1;
private static Vector128<UInt32> _clsVar2;
private Vector128<UInt32> _fld1;
private Vector128<UInt32> _fld2;
private DataTable _dataTable;
static VectorBooleanBinaryOpTest__op_EqualityUInt32()
{
for (var i = 0; i < Op1ElementCount; i++) { _data1[i] = TestLibrary.Generator.GetUInt32(); }
Unsafe.CopyBlockUnaligned(ref Unsafe.As<Vector128<UInt32>, byte>(ref _clsVar1), ref Unsafe.As<UInt32, byte>(ref _data1[0]), (uint)Unsafe.SizeOf<Vector128<UInt32>>());
for (var i = 0; i < Op2ElementCount; i++) { _data2[i] = TestLibrary.Generator.GetUInt32(); }
Unsafe.CopyBlockUnaligned(ref Unsafe.As<Vector128<UInt32>, byte>(ref _clsVar2), ref Unsafe.As<UInt32, byte>(ref _data2[0]), (uint)Unsafe.SizeOf<Vector128<UInt32>>());
}
public VectorBooleanBinaryOpTest__op_EqualityUInt32()
{
Succeeded = true;
for (var i = 0; i < Op1ElementCount; i++) { _data1[i] = TestLibrary.Generator.GetUInt32(); }
Unsafe.CopyBlockUnaligned(ref Unsafe.As<Vector128<UInt32>, byte>(ref _fld1), ref Unsafe.As<UInt32, byte>(ref _data1[0]), (uint)Unsafe.SizeOf<Vector128<UInt32>>());
for (var i = 0; i < Op2ElementCount; i++) { _data2[i] = TestLibrary.Generator.GetUInt32(); }
Unsafe.CopyBlockUnaligned(ref Unsafe.As<Vector128<UInt32>, byte>(ref _fld2), ref Unsafe.As<UInt32, byte>(ref _data2[0]), (uint)Unsafe.SizeOf<Vector128<UInt32>>());
for (var i = 0; i < Op1ElementCount; i++) { _data1[i] = TestLibrary.Generator.GetUInt32(); }
for (var i = 0; i < Op2ElementCount; i++) { _data2[i] = TestLibrary.Generator.GetUInt32(); }
_dataTable = new DataTable(_data1, _data2, LargestVectorSize);
}
public bool Succeeded { get; set; }
public void RunBasicScenario_UnsafeRead()
{
TestLibrary.TestFramework.BeginScenario(nameof(RunBasicScenario_UnsafeRead));
var result = Unsafe.Read<Vector128<UInt32>>(_dataTable.inArray1Ptr) == Unsafe.Read<Vector128<UInt32>>(_dataTable.inArray2Ptr);
ValidateResult(_dataTable.inArray1Ptr, _dataTable.inArray2Ptr, result);
}
public void RunReflectionScenario_UnsafeRead()
{
TestLibrary.TestFramework.BeginScenario(nameof(RunReflectionScenario_UnsafeRead));
var result = typeof(Vector128<UInt32>).GetMethod("op_Equality", new Type[] { typeof(Vector128<UInt32>), typeof(Vector128<UInt32>) })
.Invoke(null, new object[] {
Unsafe.Read<Vector128<UInt32>>(_dataTable.inArray1Ptr),
Unsafe.Read<Vector128<UInt32>>(_dataTable.inArray2Ptr)
});
ValidateResult(_dataTable.inArray1Ptr, _dataTable.inArray2Ptr, (bool)(result));
}
public void RunClsVarScenario()
{
TestLibrary.TestFramework.BeginScenario(nameof(RunClsVarScenario));
var result = _clsVar1 == _clsVar2;
ValidateResult(_clsVar1, _clsVar2, result);
}
public void RunLclVarScenario_UnsafeRead()
{
TestLibrary.TestFramework.BeginScenario(nameof(RunLclVarScenario_UnsafeRead));
var op1 = Unsafe.Read<Vector128<UInt32>>(_dataTable.inArray1Ptr);
var op2 = Unsafe.Read<Vector128<UInt32>>(_dataTable.inArray2Ptr);
var result = op1 == op2;
ValidateResult(op1, op2, result);
}
public void RunClassLclFldScenario()
{
TestLibrary.TestFramework.BeginScenario(nameof(RunClassLclFldScenario));
var test = new VectorBooleanBinaryOpTest__op_EqualityUInt32();
var result = test._fld1 == test._fld2;
ValidateResult(test._fld1, test._fld2, result);
}
public void RunClassFldScenario()
{
TestLibrary.TestFramework.BeginScenario(nameof(RunClassFldScenario));
var result = _fld1 == _fld2;
ValidateResult(_fld1, _fld2, result);
}
public void RunStructLclFldScenario()
{
TestLibrary.TestFramework.BeginScenario(nameof(RunStructLclFldScenario));
var test = TestStruct.Create();
var result = test._fld1 == test._fld2;
ValidateResult(test._fld1, test._fld2, result);
}
public void RunStructFldScenario()
{
TestLibrary.TestFramework.BeginScenario(nameof(RunStructFldScenario));
var test = TestStruct.Create();
test.RunStructFldScenario(this);
}
private void ValidateResult(Vector128<UInt32> op1, Vector128<UInt32> op2, bool result, [CallerMemberName] string method = "")
{
UInt32[] inArray1 = new UInt32[Op1ElementCount];
UInt32[] inArray2 = new UInt32[Op2ElementCount];
Unsafe.WriteUnaligned(ref Unsafe.As<UInt32, byte>(ref inArray1[0]), op1);
Unsafe.WriteUnaligned(ref Unsafe.As<UInt32, byte>(ref inArray2[0]), op2);
ValidateResult(inArray1, inArray2, result, method);
}
private void ValidateResult(void* op1, void* op2, bool result, [CallerMemberName] string method = "")
{
UInt32[] inArray1 = new UInt32[Op1ElementCount];
UInt32[] inArray2 = new UInt32[Op2ElementCount];
Unsafe.CopyBlockUnaligned(ref Unsafe.As<UInt32, byte>(ref inArray1[0]), ref Unsafe.AsRef<byte>(op1), (uint)Unsafe.SizeOf<Vector128<UInt32>>());
Unsafe.CopyBlockUnaligned(ref Unsafe.As<UInt32, byte>(ref inArray2[0]), ref Unsafe.AsRef<byte>(op2), (uint)Unsafe.SizeOf<Vector128<UInt32>>());
ValidateResult(inArray1, inArray2, result, method);
}
private void ValidateResult(UInt32[] left, UInt32[] right, bool result, [CallerMemberName] string method = "")
{
bool succeeded = true;
var expectedResult = true;
for (var i = 0; i < Op1ElementCount; i++)
{
expectedResult &= (left[i] == right[i]);
}
succeeded = (expectedResult == result);
if (!succeeded)
{
TestLibrary.TestFramework.LogInformation($"{nameof(Vector128)}.op_Equality<UInt32>(Vector128<UInt32>, Vector128<UInt32>): {method} failed:");
TestLibrary.TestFramework.LogInformation($" left: ({string.Join(", ", left)})");
TestLibrary.TestFramework.LogInformation($" right: ({string.Join(", ", right)})");
TestLibrary.TestFramework.LogInformation($" result: ({result})");
TestLibrary.TestFramework.LogInformation(string.Empty);
Succeeded = false;
}
}
}
}
| // Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
/******************************************************************************
* This file is auto-generated from a template file by the GenerateTests.csx *
* script in tests\src\JIT\HardwareIntrinsics\X86\Shared. In order to make *
* changes, please update the corresponding template and run according to the *
* directions listed in the file. *
******************************************************************************/
using System;
using System.Runtime.CompilerServices;
using System.Runtime.InteropServices;
using System.Runtime.Intrinsics;
namespace JIT.HardwareIntrinsics.General
{
public static partial class Program
{
private static void op_EqualityUInt32()
{
var test = new VectorBooleanBinaryOpTest__op_EqualityUInt32();
// Validates basic functionality works, using Unsafe.Read
test.RunBasicScenario_UnsafeRead();
// Validates calling via reflection works, using Unsafe.Read
test.RunReflectionScenario_UnsafeRead();
// Validates passing a static member works
test.RunClsVarScenario();
// Validates passing a local works, using Unsafe.Read
test.RunLclVarScenario_UnsafeRead();
// Validates passing the field of a local class works
test.RunClassLclFldScenario();
// Validates passing an instance member of a class works
test.RunClassFldScenario();
// Validates passing the field of a local struct works
test.RunStructLclFldScenario();
// Validates passing an instance member of a struct works
test.RunStructFldScenario();
if (!test.Succeeded)
{
throw new Exception("One or more scenarios did not complete as expected.");
}
}
}
public sealed unsafe class VectorBooleanBinaryOpTest__op_EqualityUInt32
{
private struct DataTable
{
private byte[] inArray1;
private byte[] inArray2;
private GCHandle inHandle1;
private GCHandle inHandle2;
private ulong alignment;
public DataTable(UInt32[] inArray1, UInt32[] inArray2, int alignment)
{
int sizeOfinArray1 = inArray1.Length * Unsafe.SizeOf<UInt32>();
int sizeOfinArray2 = inArray2.Length * Unsafe.SizeOf<UInt32>();
if ((alignment != 32 && alignment != 16 && alignment != 8) || (alignment * 2) < sizeOfinArray1 || (alignment * 2) < sizeOfinArray2)
{
throw new ArgumentException("Invalid value of alignment");
}
this.inArray1 = new byte[alignment * 2];
this.inArray2 = new byte[alignment * 2];
this.inHandle1 = GCHandle.Alloc(this.inArray1, GCHandleType.Pinned);
this.inHandle2 = GCHandle.Alloc(this.inArray2, GCHandleType.Pinned);
this.alignment = (ulong)alignment;
Unsafe.CopyBlockUnaligned(ref Unsafe.AsRef<byte>(inArray1Ptr), ref Unsafe.As<UInt32, byte>(ref inArray1[0]), (uint)sizeOfinArray1);
Unsafe.CopyBlockUnaligned(ref Unsafe.AsRef<byte>(inArray2Ptr), ref Unsafe.As<UInt32, byte>(ref inArray2[0]), (uint)sizeOfinArray2);
}
public void* inArray1Ptr => Align((byte*)(inHandle1.AddrOfPinnedObject().ToPointer()), alignment);
public void* inArray2Ptr => Align((byte*)(inHandle2.AddrOfPinnedObject().ToPointer()), alignment);
public void Dispose()
{
inHandle1.Free();
inHandle2.Free();
}
private static unsafe void* Align(byte* buffer, ulong expectedAlignment)
{
return (void*)(((ulong)buffer + expectedAlignment - 1) & ~(expectedAlignment - 1));
}
}
private struct TestStruct
{
public Vector128<UInt32> _fld1;
public Vector128<UInt32> _fld2;
public static TestStruct Create()
{
var testStruct = new TestStruct();
for (var i = 0; i < Op1ElementCount; i++) { _data1[i] = TestLibrary.Generator.GetUInt32(); }
Unsafe.CopyBlockUnaligned(ref Unsafe.As<Vector128<UInt32>, byte>(ref testStruct._fld1), ref Unsafe.As<UInt32, byte>(ref _data1[0]), (uint)Unsafe.SizeOf<Vector128<UInt32>>());
for (var i = 0; i < Op2ElementCount; i++) { _data2[i] = TestLibrary.Generator.GetUInt32(); }
Unsafe.CopyBlockUnaligned(ref Unsafe.As<Vector128<UInt32>, byte>(ref testStruct._fld2), ref Unsafe.As<UInt32, byte>(ref _data2[0]), (uint)Unsafe.SizeOf<Vector128<UInt32>>());
return testStruct;
}
public void RunStructFldScenario(VectorBooleanBinaryOpTest__op_EqualityUInt32 testClass)
{
var result = _fld1 == _fld2;
testClass.ValidateResult(_fld1, _fld2, result);
}
}
private static readonly int LargestVectorSize = 16;
private static readonly int Op1ElementCount = Unsafe.SizeOf<Vector128<UInt32>>() / sizeof(UInt32);
private static readonly int Op2ElementCount = Unsafe.SizeOf<Vector128<UInt32>>() / sizeof(UInt32);
private static UInt32[] _data1 = new UInt32[Op1ElementCount];
private static UInt32[] _data2 = new UInt32[Op2ElementCount];
private static Vector128<UInt32> _clsVar1;
private static Vector128<UInt32> _clsVar2;
private Vector128<UInt32> _fld1;
private Vector128<UInt32> _fld2;
private DataTable _dataTable;
static VectorBooleanBinaryOpTest__op_EqualityUInt32()
{
for (var i = 0; i < Op1ElementCount; i++) { _data1[i] = TestLibrary.Generator.GetUInt32(); }
Unsafe.CopyBlockUnaligned(ref Unsafe.As<Vector128<UInt32>, byte>(ref _clsVar1), ref Unsafe.As<UInt32, byte>(ref _data1[0]), (uint)Unsafe.SizeOf<Vector128<UInt32>>());
for (var i = 0; i < Op2ElementCount; i++) { _data2[i] = TestLibrary.Generator.GetUInt32(); }
Unsafe.CopyBlockUnaligned(ref Unsafe.As<Vector128<UInt32>, byte>(ref _clsVar2), ref Unsafe.As<UInt32, byte>(ref _data2[0]), (uint)Unsafe.SizeOf<Vector128<UInt32>>());
}
public VectorBooleanBinaryOpTest__op_EqualityUInt32()
{
Succeeded = true;
for (var i = 0; i < Op1ElementCount; i++) { _data1[i] = TestLibrary.Generator.GetUInt32(); }
Unsafe.CopyBlockUnaligned(ref Unsafe.As<Vector128<UInt32>, byte>(ref _fld1), ref Unsafe.As<UInt32, byte>(ref _data1[0]), (uint)Unsafe.SizeOf<Vector128<UInt32>>());
for (var i = 0; i < Op2ElementCount; i++) { _data2[i] = TestLibrary.Generator.GetUInt32(); }
Unsafe.CopyBlockUnaligned(ref Unsafe.As<Vector128<UInt32>, byte>(ref _fld2), ref Unsafe.As<UInt32, byte>(ref _data2[0]), (uint)Unsafe.SizeOf<Vector128<UInt32>>());
for (var i = 0; i < Op1ElementCount; i++) { _data1[i] = TestLibrary.Generator.GetUInt32(); }
for (var i = 0; i < Op2ElementCount; i++) { _data2[i] = TestLibrary.Generator.GetUInt32(); }
_dataTable = new DataTable(_data1, _data2, LargestVectorSize);
}
public bool Succeeded { get; set; }
public void RunBasicScenario_UnsafeRead()
{
TestLibrary.TestFramework.BeginScenario(nameof(RunBasicScenario_UnsafeRead));
var result = Unsafe.Read<Vector128<UInt32>>(_dataTable.inArray1Ptr) == Unsafe.Read<Vector128<UInt32>>(_dataTable.inArray2Ptr);
ValidateResult(_dataTable.inArray1Ptr, _dataTable.inArray2Ptr, result);
}
public void RunReflectionScenario_UnsafeRead()
{
TestLibrary.TestFramework.BeginScenario(nameof(RunReflectionScenario_UnsafeRead));
var result = typeof(Vector128<UInt32>).GetMethod("op_Equality", new Type[] { typeof(Vector128<UInt32>), typeof(Vector128<UInt32>) })
.Invoke(null, new object[] {
Unsafe.Read<Vector128<UInt32>>(_dataTable.inArray1Ptr),
Unsafe.Read<Vector128<UInt32>>(_dataTable.inArray2Ptr)
});
ValidateResult(_dataTable.inArray1Ptr, _dataTable.inArray2Ptr, (bool)(result));
}
public void RunClsVarScenario()
{
TestLibrary.TestFramework.BeginScenario(nameof(RunClsVarScenario));
var result = _clsVar1 == _clsVar2;
ValidateResult(_clsVar1, _clsVar2, result);
}
public void RunLclVarScenario_UnsafeRead()
{
TestLibrary.TestFramework.BeginScenario(nameof(RunLclVarScenario_UnsafeRead));
var op1 = Unsafe.Read<Vector128<UInt32>>(_dataTable.inArray1Ptr);
var op2 = Unsafe.Read<Vector128<UInt32>>(_dataTable.inArray2Ptr);
var result = op1 == op2;
ValidateResult(op1, op2, result);
}
public void RunClassLclFldScenario()
{
TestLibrary.TestFramework.BeginScenario(nameof(RunClassLclFldScenario));
var test = new VectorBooleanBinaryOpTest__op_EqualityUInt32();
var result = test._fld1 == test._fld2;
ValidateResult(test._fld1, test._fld2, result);
}
public void RunClassFldScenario()
{
TestLibrary.TestFramework.BeginScenario(nameof(RunClassFldScenario));
var result = _fld1 == _fld2;
ValidateResult(_fld1, _fld2, result);
}
public void RunStructLclFldScenario()
{
TestLibrary.TestFramework.BeginScenario(nameof(RunStructLclFldScenario));
var test = TestStruct.Create();
var result = test._fld1 == test._fld2;
ValidateResult(test._fld1, test._fld2, result);
}
public void RunStructFldScenario()
{
TestLibrary.TestFramework.BeginScenario(nameof(RunStructFldScenario));
var test = TestStruct.Create();
test.RunStructFldScenario(this);
}
private void ValidateResult(Vector128<UInt32> op1, Vector128<UInt32> op2, bool result, [CallerMemberName] string method = "")
{
UInt32[] inArray1 = new UInt32[Op1ElementCount];
UInt32[] inArray2 = new UInt32[Op2ElementCount];
Unsafe.WriteUnaligned(ref Unsafe.As<UInt32, byte>(ref inArray1[0]), op1);
Unsafe.WriteUnaligned(ref Unsafe.As<UInt32, byte>(ref inArray2[0]), op2);
ValidateResult(inArray1, inArray2, result, method);
}
private void ValidateResult(void* op1, void* op2, bool result, [CallerMemberName] string method = "")
{
UInt32[] inArray1 = new UInt32[Op1ElementCount];
UInt32[] inArray2 = new UInt32[Op2ElementCount];
Unsafe.CopyBlockUnaligned(ref Unsafe.As<UInt32, byte>(ref inArray1[0]), ref Unsafe.AsRef<byte>(op1), (uint)Unsafe.SizeOf<Vector128<UInt32>>());
Unsafe.CopyBlockUnaligned(ref Unsafe.As<UInt32, byte>(ref inArray2[0]), ref Unsafe.AsRef<byte>(op2), (uint)Unsafe.SizeOf<Vector128<UInt32>>());
ValidateResult(inArray1, inArray2, result, method);
}
private void ValidateResult(UInt32[] left, UInt32[] right, bool result, [CallerMemberName] string method = "")
{
bool succeeded = true;
var expectedResult = true;
for (var i = 0; i < Op1ElementCount; i++)
{
expectedResult &= (left[i] == right[i]);
}
succeeded = (expectedResult == result);
if (!succeeded)
{
TestLibrary.TestFramework.LogInformation($"{nameof(Vector128)}.op_Equality<UInt32>(Vector128<UInt32>, Vector128<UInt32>): {method} failed:");
TestLibrary.TestFramework.LogInformation($" left: ({string.Join(", ", left)})");
TestLibrary.TestFramework.LogInformation($" right: ({string.Join(", ", right)})");
TestLibrary.TestFramework.LogInformation($" result: ({result})");
TestLibrary.TestFramework.LogInformation(string.Empty);
Succeeded = false;
}
}
}
}
| -1 |
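The row above exercises `Vector128<T>.op_Equality`, which reduces a per-lane comparison to a single bool: true only when every element pair matches, exactly as the `expectedResult &=` loop in ValidateResult recomputes it. A minimal sketch with arbitrary lane values:

// Sketch: all-lanes semantics of Vector128<T> operator ==.
using System;
using System.Runtime.Intrinsics;

class Vector128EqualitySketch
{
    static void Main()
    {
        var a = Vector128.Create(1u, 2u, 3u, 4u);
        var b = Vector128.Create(1u, 2u, 3u, 4u);
        var c = Vector128.Create(1u, 2u, 3u, 9u);

        Console.WriteLine(a == b); // True: all four lanes compare equal
        Console.WriteLine(a == c); // False: the last lane differs
        Console.WriteLine(a != c); // True: op_Inequality is the negation
    }
}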
dotnet/runtime | 66,213 | [mono] Put WeakAttribute support under an ifdef | This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | lambdageek | 2022-03-04T19:50:51Z | 2022-03-08T14:08:40Z | 34880aaeb463cc17268b44aecb80dda0914cfa01 | 8342a08b8bf0ff2ffb64cd5992e2d60c8d6d63c1 | [mono] Put WeakAttribute support under an ifdef. This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | ./src/tests/JIT/jit64/rtchecks/overflow/overflow02.cs | // Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
using System;
internal class OVFTest
{
static public volatile bool rtv;
static OVFTest()
{
rtv = Environment.TickCount != 0;
}
private static sbyte Test_sbyte()
{
if (!rtv) return 0;
sbyte a = 1 + sbyte.MaxValue / 2;
checked
{
#if OP_DIV
return (sbyte)(a / 0.5);
#elif OP_ADD
return (sbyte)(a + a);
#elif OP_SUB
return (sbyte)(-1 - a - a);
#else
return (sbyte)(a * 2);
#endif
}
}
private static byte Test_byte()
{
if (!rtv) return 0;
byte a = 1 + byte.MaxValue / 2;
checked
{
#if OP_DIV
return (byte)(a / 0.5);
#elif OP_ADD
return (byte)(a + a);
#elif OP_SUB
return (byte)(0 - a - a);
#else
return (byte)(a * 2);
#endif
}
}
private static short Test_short()
{
if (!rtv) return 0;
short a = 1 + short.MaxValue / 2;
checked
{
#if OP_DIV
return (short)(a / 0.5);
#elif OP_ADD
return (short)(a + a);
#elif OP_SUB
return (short)(-1 - a - a);
#else
return (short)(a * 2);
#endif
}
}
private static ushort Test_ushort()
{
if (!rtv) return 0;
ushort a = 1 + ushort.MaxValue / 2;
checked
{
#if OP_DIV
return (ushort)(a / 0.5);
#elif OP_ADD
return (ushort)(a + a);
#elif OP_SUB
return (ushort)(0 - a - a);
#else
return (ushort)(a * 2);
#endif
}
}
private static int Test_int()
{
if (!rtv) return 0;
int a = 1 + int.MaxValue / 2;
checked
{
#if OP_DIV
return (int)(a / 0.5);
#elif OP_ADD
return a + a;
#elif OP_SUB
return -1 - a - a;
#else
return a * 2;
#endif
}
}
private static uint Test_uint()
{
if (!rtv) return 0;
uint a = 1U + uint.MaxValue / 2U;
checked
{
#if OP_DIV
return (uint)(a / 0.5);
#elif OP_ADD
return a + a;
#elif OP_SUB
return 0U - a - a;
#else
return a * 2;
#endif
}
}
private static long Test_long()
{
if (!rtv) return 0;
long a = 1L + long.MaxValue / 2L;
checked
{
#if OP_DIV
return (long)(a / 0.5);
#elif OP_ADD
return a + a;
#elif OP_SUB
return -1L - a - a;
#else
return a * 2;
#endif
}
}
private static ulong Test_ulong()
{
if (!rtv) return 0;
ulong a = 1UL + ulong.MaxValue / 2UL;
checked
{
#if OP_DIV
return (ulong)(a / 0.5);
#elif OP_ADD
return a + a;
#elif OP_SUB
return 0UL - a - a;
#else
return a * 2;
#endif
}
}
private static int Main(string[] args)
{
#if OP_DIV
const string op = "div.ovf";
#elif OP_ADD
const string op = "add.ovf";
#elif OP_SUB
const string op = "sub.ovf";
#else
const string op = "mul.ovf";
#endif
Console.WriteLine("Runtime Checks [OP: {0}]", op);
int check = 8;
try
{
Console.Write("Type 'byte' . . : ");
byte a = Test_byte();
Console.WriteLine("failed! - a = " + a);
}
catch (System.OverflowException)
{
Console.WriteLine("passed");
check--;
}
try
{
Console.Write("Type 'sbyte'. . : ");
sbyte a = Test_sbyte();
Console.WriteLine("failed! - a = " + a);
}
catch (System.OverflowException)
{
Console.WriteLine("passed");
check--;
}
try
{
Console.Write("Type 'short'. . : ");
short a = Test_short();
Console.WriteLine("failed! - a = " + a);
}
catch (System.OverflowException)
{
Console.WriteLine("passed");
check--;
}
try
{
Console.Write("Type 'ushort' . : ");
ushort a = Test_ushort();
Console.WriteLine("failed! - a = " + a);
}
catch (System.OverflowException)
{
Console.WriteLine("passed");
check--;
}
try
{
Console.Write("Type 'int'. . . : ");
int a = Test_int();
Console.WriteLine("failed! - a = " + a);
}
catch (System.OverflowException)
{
Console.WriteLine("passed");
check--;
}
try
{
Console.Write("Type 'uint' . . : ");
uint a = Test_uint();
Console.WriteLine("failed! - a = " + a);
}
catch (System.OverflowException)
{
Console.WriteLine("passed");
check--;
}
try
{
Console.Write("Type 'long' . . : ");
long a = Test_long();
Console.WriteLine("failed! - a = " + a);
}
catch (System.OverflowException)
{
Console.WriteLine("passed");
check--;
}
try
{
Console.Write("Type 'ulong'. . : ");
ulong a = Test_ulong();
Console.WriteLine("failed! - a = " + a);
}
catch (System.OverflowException)
{
Console.WriteLine("passed");
check--;
}
return check == 0 ? 100 : 1;
}
}
| // Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
using System;
internal class OVFTest
{
static public volatile bool rtv;
static OVFTest()
{
rtv = Environment.TickCount != 0;
}
private static sbyte Test_sbyte()
{
if (!rtv) return 0;
sbyte a = 1 + sbyte.MaxValue / 2;
checked
{
#if OP_DIV
return (sbyte)(a / 0.5);
#elif OP_ADD
return (sbyte)(a + a);
#elif OP_SUB
return (sbyte)(-1 - a - a);
#else
return (sbyte)(a * 2);
#endif
}
}
private static byte Test_byte()
{
if (!rtv) return 0;
byte a = 1 + byte.MaxValue / 2;
checked
{
#if OP_DIV
return (byte)(a / 0.5);
#elif OP_ADD
return (byte)(a + a);
#elif OP_SUB
return (byte)(0 - a - a);
#else
return (byte)(a * 2);
#endif
}
}
private static short Test_short()
{
if (!rtv) return 0;
short a = 1 + short.MaxValue / 2;
checked
{
#if OP_DIV
return (short)(a / 0.5);
#elif OP_ADD
return (short)(a + a);
#elif OP_SUB
return (short)(-1 - a - a);
#else
return (short)(a * 2);
#endif
}
}
private static ushort Test_ushort()
{
if (!rtv) return 0;
ushort a = 1 + ushort.MaxValue / 2;
checked
{
#if OP_DIV
return (ushort)(a / 0.5);
#elif OP_ADD
return (ushort)(a + a);
#elif OP_SUB
return (ushort)(0 - a - a);
#else
return (ushort)(a * 2);
#endif
}
}
private static int Test_int()
{
if (!rtv) return 0;
int a = 1 + int.MaxValue / 2;
checked
{
#if OP_DIV
return (int)(a / 0.5);
#elif OP_ADD
return a + a;
#elif OP_SUB
return -1 - a - a;
#else
return a * 2;
#endif
}
}
private static uint Test_uint()
{
if (!rtv) return 0;
uint a = 1U + uint.MaxValue / 2U;
checked
{
#if OP_DIV
return (uint)(a / 0.5);
#elif OP_ADD
return a + a;
#elif OP_SUB
return 0U - a - a;
#else
return a * 2;
#endif
}
}
private static long Test_long()
{
if (!rtv) return 0;
long a = 1L + long.MaxValue / 2L;
checked
{
#if OP_DIV
return (long)(a / 0.5);
#elif OP_ADD
return a + a;
#elif OP_SUB
return -1L - a - a;
#else
return a * 2;
#endif
}
}
private static ulong Test_ulong()
{
if (!rtv) return 0;
ulong a = 1UL + ulong.MaxValue / 2UL;
checked
{
#if OP_DIV
return (ulong)(a / 0.5);
#elif OP_ADD
return a + a;
#elif OP_SUB
return 0UL - a - a;
#else
return a * 2;
#endif
}
}
private static int Main(string[] args)
{
#if OP_DIV
const string op = "div.ovf";
#elif OP_ADD
const string op = "add.ovf";
#elif OP_SUB
const string op = "sub.ovf";
#else
const string op = "mul.ovf";
#endif
Console.WriteLine("Runtime Checks [OP: {0}]", op);
int check = 8;
try
{
Console.Write("Type 'byte' . . : ");
byte a = Test_byte();
Console.WriteLine("failed! - a = " + a);
}
catch (System.OverflowException)
{
Console.WriteLine("passed");
check--;
}
try
{
Console.Write("Type 'sbyte'. . : ");
sbyte a = Test_sbyte();
Console.WriteLine("failed! - a = " + a);
}
catch (System.OverflowException)
{
Console.WriteLine("passed");
check--;
}
try
{
Console.Write("Type 'short'. . : ");
short a = Test_short();
Console.WriteLine("failed! - a = " + a);
}
catch (System.OverflowException)
{
Console.WriteLine("passed");
check--;
}
try
{
Console.Write("Type 'ushort' . : ");
ushort a = Test_ushort();
Console.WriteLine("failed! - a = " + a);
}
catch (System.OverflowException)
{
Console.WriteLine("passed");
check--;
}
try
{
Console.Write("Type 'int'. . . : ");
int a = Test_int();
Console.WriteLine("failed! - a = " + a);
}
catch (System.OverflowException)
{
Console.WriteLine("passed");
check--;
}
try
{
Console.Write("Type 'uint' . . : ");
uint a = Test_uint();
Console.WriteLine("failed! - a = " + a);
}
catch (System.OverflowException)
{
Console.WriteLine("passed");
check--;
}
try
{
Console.Write("Type 'long' . . : ");
long a = Test_long();
Console.WriteLine("failed! - a = " + a);
}
catch (System.OverflowException)
{
Console.WriteLine("passed");
check--;
}
try
{
Console.Write("Type 'ulong'. . : ");
ulong a = Test_ulong();
Console.WriteLine("failed! - a = " + a);
}
catch (System.OverflowException)
{
Console.WriteLine("passed");
check--;
}
return check == 0 ? 100 : 1;
}
}
| -1 |
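The overflow row above hinges on one arithmetic fact: `a = 1 + MaxValue / 2` is just past half the type's range, so `a * 2` must exceed MaxValue, and under `checked` that becomes an OverflowException instead of a silent wrap. The same trap for `int`, distilled:

// Sketch: checked vs unchecked doubling of 1 + int.MaxValue / 2.
using System;

class CheckedOverflowSketch
{
    static void Main()
    {
        int a = 1 + int.MaxValue / 2;        // 1073741824, so a * 2 > int.MaxValue
        Console.WriteLine(unchecked(a * 2)); // wraps silently to -2147483648

        try
        {
            int r = checked(a * 2);          // emits mul.ovf, which must throw here
            Console.WriteLine("failed! - r = " + r);
        }
        catch (OverflowException)
        {
            Console.WriteLine("passed");
        }
    }
}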
dotnet/runtime | 66,213 | [mono] Put WeakAttribute support under an ifdef | This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | lambdageek | 2022-03-04T19:50:51Z | 2022-03-08T14:08:40Z | 34880aaeb463cc17268b44aecb80dda0914cfa01 | 8342a08b8bf0ff2ffb64cd5992e2d60c8d6d63c1 | [mono] Put WeakAttribute support under an ifdef. This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | ./src/tests/Regressions/coreclr/GitHub_41674/genericldtoken.ilproj | <Project Sdk="Microsoft.NET.Sdk.IL">
<PropertyGroup>
<OutputType>Exe</OutputType>
</PropertyGroup>
<ItemGroup>
<Compile Include="genericldtoken.il" />
</ItemGroup>
</Project>
| <Project Sdk="Microsoft.NET.Sdk.IL">
<PropertyGroup>
<OutputType>Exe</OutputType>
</PropertyGroup>
<ItemGroup>
<Compile Include="genericldtoken.il" />
</ItemGroup>
</Project>
| -1 |
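Every row in this slice repeats the PR 66213 description: WeakAttribute support moves behind a compile-time guard so the dead path stops shipping, while the surrounding structure (the `has_weak_fields` bit, the empty `weak_field_indexes` symbol) stays put. A loose sketch of that guard pattern, using C#'s preprocessor and an invented symbol name rather than mono's actual C define:

#define ENABLE_WEAK_ATTRIBUTE // invented guard; the real define lives in mono's C runtime
using System;

class WeakAttributeGuardSketch
{
    // Kept unconditionally, mirroring how MonoClass:has_weak_fields stays in the layout.
    static readonly bool HasWeakFields = false;

    static void Main()
    {
        Console.WriteLine("has_weak_fields bit still present: " + HasWeakFields);
#if ENABLE_WEAK_ATTRIBUTE
        Console.WriteLine("weak-field support compiled in");
#else
        Console.WriteLine("weak-field support compiled out; callers still build");
#endif
    }
}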
dotnet/runtime | 66,213 | [mono] Put WeakAttribute support under an ifdef | This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | lambdageek | 2022-03-04T19:50:51Z | 2022-03-08T14:08:40Z | 34880aaeb463cc17268b44aecb80dda0914cfa01 | 8342a08b8bf0ff2ffb64cd5992e2d60c8d6d63c1 | [mono] Put WeakAttribute support under an ifdef. This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension)
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc are still there. It seemed unnecessary to perturb the AOT format. | ./src/tests/JIT/HardwareIntrinsics/General/Vector256/GreaterThanOrEqualAll.Int16.cs | // Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
/******************************************************************************
* This file is auto-generated from a template file by the GenerateTests.csx *
* script in tests\src\JIT\HardwareIntrinsics\X86\Shared. In order to make *
* changes, please update the corresponding template and run according to the *
* directions listed in the file. *
******************************************************************************/
using System;
using System.Runtime.CompilerServices;
using System.Runtime.InteropServices;
using System.Runtime.Intrinsics;
namespace JIT.HardwareIntrinsics.General
{
public static partial class Program
{
private static void GreaterThanOrEqualAllInt16()
{
var test = new VectorBooleanBinaryOpTest__GreaterThanOrEqualAllInt16();
// Validates basic functionality works, using Unsafe.Read
test.RunBasicScenario_UnsafeRead();
// Validates calling via reflection works, using Unsafe.Read
test.RunReflectionScenario_UnsafeRead();
// Validates passing a static member works
test.RunClsVarScenario();
// Validates passing a local works, using Unsafe.Read
test.RunLclVarScenario_UnsafeRead();
// Validates passing the field of a local class works
test.RunClassLclFldScenario();
// Validates passing an instance member of a class works
test.RunClassFldScenario();
// Validates passing the field of a local struct works
test.RunStructLclFldScenario();
// Validates passing an instance member of a struct works
test.RunStructFldScenario();
if (!test.Succeeded)
{
throw new Exception("One or more scenarios did not complete as expected.");
}
}
}
public sealed unsafe class VectorBooleanBinaryOpTest__GreaterThanOrEqualAllInt16
{
private struct DataTable
{
private byte[] inArray1;
private byte[] inArray2;
private GCHandle inHandle1;
private GCHandle inHandle2;
private ulong alignment;
public DataTable(Int16[] inArray1, Int16[] inArray2, int alignment)
{
int sizeOfinArray1 = inArray1.Length * Unsafe.SizeOf<Int16>();
int sizeOfinArray2 = inArray2.Length * Unsafe.SizeOf<Int16>();
if ((alignment != 32 && alignment != 16 && alignment != 8) || (alignment * 2) < sizeOfinArray1 || (alignment * 2) < sizeOfinArray2)
{
throw new ArgumentException("Invalid value of alignment");
}
this.inArray1 = new byte[alignment * 2];
this.inArray2 = new byte[alignment * 2];
this.inHandle1 = GCHandle.Alloc(this.inArray1, GCHandleType.Pinned);
this.inHandle2 = GCHandle.Alloc(this.inArray2, GCHandleType.Pinned);
this.alignment = (ulong)alignment;
Unsafe.CopyBlockUnaligned(ref Unsafe.AsRef<byte>(inArray1Ptr), ref Unsafe.As<Int16, byte>(ref inArray1[0]), (uint)sizeOfinArray1);
Unsafe.CopyBlockUnaligned(ref Unsafe.AsRef<byte>(inArray2Ptr), ref Unsafe.As<Int16, byte>(ref inArray2[0]), (uint)sizeOfinArray2);
}
public void* inArray1Ptr => Align((byte*)(inHandle1.AddrOfPinnedObject().ToPointer()), alignment);
public void* inArray2Ptr => Align((byte*)(inHandle2.AddrOfPinnedObject().ToPointer()), alignment);
public void Dispose()
{
inHandle1.Free();
inHandle2.Free();
}
private static unsafe void* Align(byte* buffer, ulong expectedAlignment)
{
return (void*)(((ulong)buffer + expectedAlignment - 1) & ~(expectedAlignment - 1));
}
}
private struct TestStruct
{
public Vector256<Int16> _fld1;
public Vector256<Int16> _fld2;
public static TestStruct Create()
{
var testStruct = new TestStruct();
for (var i = 0; i < Op1ElementCount; i++) { _data1[i] = TestLibrary.Generator.GetInt16(); }
Unsafe.CopyBlockUnaligned(ref Unsafe.As<Vector256<Int16>, byte>(ref testStruct._fld1), ref Unsafe.As<Int16, byte>(ref _data1[0]), (uint)Unsafe.SizeOf<Vector256<Int16>>());
for (var i = 0; i < Op2ElementCount; i++) { _data2[i] = TestLibrary.Generator.GetInt16(); }
Unsafe.CopyBlockUnaligned(ref Unsafe.As<Vector256<Int16>, byte>(ref testStruct._fld2), ref Unsafe.As<Int16, byte>(ref _data2[0]), (uint)Unsafe.SizeOf<Vector256<Int16>>());
return testStruct;
}
public void RunStructFldScenario(VectorBooleanBinaryOpTest__GreaterThanOrEqualAllInt16 testClass)
{
var result = Vector256.GreaterThanOrEqualAll(_fld1, _fld2);
testClass.ValidateResult(_fld1, _fld2, result);
}
}
private static readonly int LargestVectorSize = 32;
private static readonly int Op1ElementCount = Unsafe.SizeOf<Vector256<Int16>>() / sizeof(Int16);
private static readonly int Op2ElementCount = Unsafe.SizeOf<Vector256<Int16>>() / sizeof(Int16);
private static Int16[] _data1 = new Int16[Op1ElementCount];
private static Int16[] _data2 = new Int16[Op2ElementCount];
private static Vector256<Int16> _clsVar1;
private static Vector256<Int16> _clsVar2;
private Vector256<Int16> _fld1;
private Vector256<Int16> _fld2;
private DataTable _dataTable;
static VectorBooleanBinaryOpTest__GreaterThanOrEqualAllInt16()
{
for (var i = 0; i < Op1ElementCount; i++) { _data1[i] = TestLibrary.Generator.GetInt16(); }
Unsafe.CopyBlockUnaligned(ref Unsafe.As<Vector256<Int16>, byte>(ref _clsVar1), ref Unsafe.As<Int16, byte>(ref _data1[0]), (uint)Unsafe.SizeOf<Vector256<Int16>>());
for (var i = 0; i < Op2ElementCount; i++) { _data2[i] = TestLibrary.Generator.GetInt16(); }
Unsafe.CopyBlockUnaligned(ref Unsafe.As<Vector256<Int16>, byte>(ref _clsVar2), ref Unsafe.As<Int16, byte>(ref _data2[0]), (uint)Unsafe.SizeOf<Vector256<Int16>>());
}
public VectorBooleanBinaryOpTest__GreaterThanOrEqualAllInt16()
{
Succeeded = true;
for (var i = 0; i < Op1ElementCount; i++) { _data1[i] = TestLibrary.Generator.GetInt16(); }
Unsafe.CopyBlockUnaligned(ref Unsafe.As<Vector256<Int16>, byte>(ref _fld1), ref Unsafe.As<Int16, byte>(ref _data1[0]), (uint)Unsafe.SizeOf<Vector256<Int16>>());
for (var i = 0; i < Op2ElementCount; i++) { _data2[i] = TestLibrary.Generator.GetInt16(); }
Unsafe.CopyBlockUnaligned(ref Unsafe.As<Vector256<Int16>, byte>(ref _fld2), ref Unsafe.As<Int16, byte>(ref _data2[0]), (uint)Unsafe.SizeOf<Vector256<Int16>>());
for (var i = 0; i < Op1ElementCount; i++) { _data1[i] = TestLibrary.Generator.GetInt16(); }
for (var i = 0; i < Op2ElementCount; i++) { _data2[i] = TestLibrary.Generator.GetInt16(); }
_dataTable = new DataTable(_data1, _data2, LargestVectorSize);
}
public bool Succeeded { get; set; }
public void RunBasicScenario_UnsafeRead()
{
TestLibrary.TestFramework.BeginScenario(nameof(RunBasicScenario_UnsafeRead));
var result = Vector256.GreaterThanOrEqualAll(
Unsafe.Read<Vector256<Int16>>(_dataTable.inArray1Ptr),
Unsafe.Read<Vector256<Int16>>(_dataTable.inArray2Ptr)
);
ValidateResult(_dataTable.inArray1Ptr, _dataTable.inArray2Ptr, result);
}
public void RunReflectionScenario_UnsafeRead()
{
TestLibrary.TestFramework.BeginScenario(nameof(RunReflectionScenario_UnsafeRead));
var method = typeof(Vector256).GetMethod(nameof(Vector256.GreaterThanOrEqualAll), new Type[] {
typeof(Vector256<Int16>),
typeof(Vector256<Int16>)
});
if (method is null)
{
method = typeof(Vector256).GetMethod(nameof(Vector256.GreaterThanOrEqualAll), 1, new Type[] {
typeof(Vector256<>).MakeGenericType(Type.MakeGenericMethodParameter(0)),
typeof(Vector256<>).MakeGenericType(Type.MakeGenericMethodParameter(0))
});
}
if (method.IsGenericMethodDefinition)
{
method = method.MakeGenericMethod(typeof(Int16));
}
var result = method.Invoke(null, new object[] {
Unsafe.Read<Vector256<Int16>>(_dataTable.inArray1Ptr),
Unsafe.Read<Vector256<Int16>>(_dataTable.inArray2Ptr)
});
ValidateResult(_dataTable.inArray1Ptr, _dataTable.inArray2Ptr, (bool)(result));
}
public void RunClsVarScenario()
{
TestLibrary.TestFramework.BeginScenario(nameof(RunClsVarScenario));
var result = Vector256.GreaterThanOrEqualAll(
_clsVar1,
_clsVar2
);
ValidateResult(_clsVar1, _clsVar2, result);
}
public void RunLclVarScenario_UnsafeRead()
{
TestLibrary.TestFramework.BeginScenario(nameof(RunLclVarScenario_UnsafeRead));
var op1 = Unsafe.Read<Vector256<Int16>>(_dataTable.inArray1Ptr);
var op2 = Unsafe.Read<Vector256<Int16>>(_dataTable.inArray2Ptr);
var result = Vector256.GreaterThanOrEqualAll(op1, op2);
ValidateResult(op1, op2, result);
}
public void RunClassLclFldScenario()
{
TestLibrary.TestFramework.BeginScenario(nameof(RunClassLclFldScenario));
var test = new VectorBooleanBinaryOpTest__GreaterThanOrEqualAllInt16();
var result = Vector256.GreaterThanOrEqualAll(test._fld1, test._fld2);
ValidateResult(test._fld1, test._fld2, result);
}
public void RunClassFldScenario()
{
TestLibrary.TestFramework.BeginScenario(nameof(RunClassFldScenario));
var result = Vector256.GreaterThanOrEqualAll(_fld1, _fld2);
ValidateResult(_fld1, _fld2, result);
}
public void RunStructLclFldScenario()
{
TestLibrary.TestFramework.BeginScenario(nameof(RunStructLclFldScenario));
var test = TestStruct.Create();
var result = Vector256.GreaterThanOrEqualAll(test._fld1, test._fld2);
ValidateResult(test._fld1, test._fld2, result);
}
public void RunStructFldScenario()
{
TestLibrary.TestFramework.BeginScenario(nameof(RunStructFldScenario));
var test = TestStruct.Create();
test.RunStructFldScenario(this);
}
private void ValidateResult(Vector256<Int16> op1, Vector256<Int16> op2, bool result, [CallerMemberName] string method = "")
{
Int16[] inArray1 = new Int16[Op1ElementCount];
Int16[] inArray2 = new Int16[Op2ElementCount];
Unsafe.WriteUnaligned(ref Unsafe.As<Int16, byte>(ref inArray1[0]), op1);
Unsafe.WriteUnaligned(ref Unsafe.As<Int16, byte>(ref inArray2[0]), op2);
ValidateResult(inArray1, inArray2, result, method);
}
private void ValidateResult(void* op1, void* op2, bool result, [CallerMemberName] string method = "")
{
Int16[] inArray1 = new Int16[Op1ElementCount];
Int16[] inArray2 = new Int16[Op2ElementCount];
Unsafe.CopyBlockUnaligned(ref Unsafe.As<Int16, byte>(ref inArray1[0]), ref Unsafe.AsRef<byte>(op1), (uint)Unsafe.SizeOf<Vector256<Int16>>());
Unsafe.CopyBlockUnaligned(ref Unsafe.As<Int16, byte>(ref inArray2[0]), ref Unsafe.AsRef<byte>(op2), (uint)Unsafe.SizeOf<Vector256<Int16>>());
ValidateResult(inArray1, inArray2, result, method);
}
private void ValidateResult(Int16[] left, Int16[] right, bool result, [CallerMemberName] string method = "")
{
bool succeeded = true;
var expectedResult = true;
for (var i = 0; i < Op1ElementCount; i++)
{
expectedResult &= (left[i] >= right[i]);
}
succeeded = (expectedResult == result);
if (!succeeded)
{
TestLibrary.TestFramework.LogInformation($"{nameof(Vector256)}.{nameof(Vector256.GreaterThanOrEqualAll)}<Int16>(Vector256<Int16>, Vector256<Int16>): {method} failed:");
TestLibrary.TestFramework.LogInformation($" left: ({string.Join(", ", left)})");
TestLibrary.TestFramework.LogInformation($" right: ({string.Join(", ", right)})");
TestLibrary.TestFramework.LogInformation($" result: ({result})");
TestLibrary.TestFramework.LogInformation(string.Empty);
Succeeded = false;
}
}
}
}
| // Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
/******************************************************************************
* This file is auto-generated from a template file by the GenerateTests.csx *
* script in tests\src\JIT\HardwareIntrinsics\X86\Shared. In order to make *
* changes, please update the corresponding template and run according to the *
* directions listed in the file. *
******************************************************************************/
using System;
using System.Runtime.CompilerServices;
using System.Runtime.InteropServices;
using System.Runtime.Intrinsics;
namespace JIT.HardwareIntrinsics.General
{
public static partial class Program
{
private static void GreaterThanOrEqualAllInt16()
{
var test = new VectorBooleanBinaryOpTest__GreaterThanOrEqualAllInt16();
// Validates basic functionality works, using Unsafe.Read
test.RunBasicScenario_UnsafeRead();
// Validates calling via reflection works, using Unsafe.Read
test.RunReflectionScenario_UnsafeRead();
// Validates passing a static member works
test.RunClsVarScenario();
// Validates passing a local works, using Unsafe.Read
test.RunLclVarScenario_UnsafeRead();
// Validates passing the field of a local class works
test.RunClassLclFldScenario();
// Validates passing an instance member of a class works
test.RunClassFldScenario();
// Validates passing the field of a local struct works
test.RunStructLclFldScenario();
// Validates passing an instance member of a struct works
test.RunStructFldScenario();
if (!test.Succeeded)
{
throw new Exception("One or more scenarios did not complete as expected.");
}
}
}
public sealed unsafe class VectorBooleanBinaryOpTest__GreaterThanOrEqualAllInt16
{
private struct DataTable
{
private byte[] inArray1;
private byte[] inArray2;
private GCHandle inHandle1;
private GCHandle inHandle2;
private ulong alignment;
public DataTable(Int16[] inArray1, Int16[] inArray2, int alignment)
{
int sizeOfinArray1 = inArray1.Length * Unsafe.SizeOf<Int16>();
int sizeOfinArray2 = inArray2.Length * Unsafe.SizeOf<Int16>();
if ((alignment != 32 && alignment != 16 && alignment != 8) || (alignment * 2) < sizeOfinArray1 || (alignment * 2) < sizeOfinArray2)
{
throw new ArgumentException("Invalid value of alignment");
}
this.inArray1 = new byte[alignment * 2];
this.inArray2 = new byte[alignment * 2];
this.inHandle1 = GCHandle.Alloc(this.inArray1, GCHandleType.Pinned);
this.inHandle2 = GCHandle.Alloc(this.inArray2, GCHandleType.Pinned);
this.alignment = (ulong)alignment;
Unsafe.CopyBlockUnaligned(ref Unsafe.AsRef<byte>(inArray1Ptr), ref Unsafe.As<Int16, byte>(ref inArray1[0]), (uint)sizeOfinArray1);
Unsafe.CopyBlockUnaligned(ref Unsafe.AsRef<byte>(inArray2Ptr), ref Unsafe.As<Int16, byte>(ref inArray2[0]), (uint)sizeOfinArray2);
}
public void* inArray1Ptr => Align((byte*)(inHandle1.AddrOfPinnedObject().ToPointer()), alignment);
public void* inArray2Ptr => Align((byte*)(inHandle2.AddrOfPinnedObject().ToPointer()), alignment);
public void Dispose()
{
inHandle1.Free();
inHandle2.Free();
}
private static unsafe void* Align(byte* buffer, ulong expectedAlignment)
{
return (void*)(((ulong)buffer + expectedAlignment - 1) & ~(expectedAlignment - 1));
}
}
private struct TestStruct
{
public Vector256<Int16> _fld1;
public Vector256<Int16> _fld2;
public static TestStruct Create()
{
var testStruct = new TestStruct();
for (var i = 0; i < Op1ElementCount; i++) { _data1[i] = TestLibrary.Generator.GetInt16(); }
Unsafe.CopyBlockUnaligned(ref Unsafe.As<Vector256<Int16>, byte>(ref testStruct._fld1), ref Unsafe.As<Int16, byte>(ref _data1[0]), (uint)Unsafe.SizeOf<Vector256<Int16>>());
for (var i = 0; i < Op2ElementCount; i++) { _data2[i] = TestLibrary.Generator.GetInt16(); }
Unsafe.CopyBlockUnaligned(ref Unsafe.As<Vector256<Int16>, byte>(ref testStruct._fld2), ref Unsafe.As<Int16, byte>(ref _data2[0]), (uint)Unsafe.SizeOf<Vector256<Int16>>());
return testStruct;
}
public void RunStructFldScenario(VectorBooleanBinaryOpTest__GreaterThanOrEqualAllInt16 testClass)
{
var result = Vector256.GreaterThanOrEqualAll(_fld1, _fld2);
testClass.ValidateResult(_fld1, _fld2, result);
}
}
private static readonly int LargestVectorSize = 32;
private static readonly int Op1ElementCount = Unsafe.SizeOf<Vector256<Int16>>() / sizeof(Int16);
private static readonly int Op2ElementCount = Unsafe.SizeOf<Vector256<Int16>>() / sizeof(Int16);
private static Int16[] _data1 = new Int16[Op1ElementCount];
private static Int16[] _data2 = new Int16[Op2ElementCount];
private static Vector256<Int16> _clsVar1;
private static Vector256<Int16> _clsVar2;
private Vector256<Int16> _fld1;
private Vector256<Int16> _fld2;
private DataTable _dataTable;
static VectorBooleanBinaryOpTest__GreaterThanOrEqualAllInt16()
{
for (var i = 0; i < Op1ElementCount; i++) { _data1[i] = TestLibrary.Generator.GetInt16(); }
Unsafe.CopyBlockUnaligned(ref Unsafe.As<Vector256<Int16>, byte>(ref _clsVar1), ref Unsafe.As<Int16, byte>(ref _data1[0]), (uint)Unsafe.SizeOf<Vector256<Int16>>());
for (var i = 0; i < Op2ElementCount; i++) { _data2[i] = TestLibrary.Generator.GetInt16(); }
Unsafe.CopyBlockUnaligned(ref Unsafe.As<Vector256<Int16>, byte>(ref _clsVar2), ref Unsafe.As<Int16, byte>(ref _data2[0]), (uint)Unsafe.SizeOf<Vector256<Int16>>());
}
public VectorBooleanBinaryOpTest__GreaterThanOrEqualAllInt16()
{
Succeeded = true;
for (var i = 0; i < Op1ElementCount; i++) { _data1[i] = TestLibrary.Generator.GetInt16(); }
Unsafe.CopyBlockUnaligned(ref Unsafe.As<Vector256<Int16>, byte>(ref _fld1), ref Unsafe.As<Int16, byte>(ref _data1[0]), (uint)Unsafe.SizeOf<Vector256<Int16>>());
for (var i = 0; i < Op2ElementCount; i++) { _data2[i] = TestLibrary.Generator.GetInt16(); }
Unsafe.CopyBlockUnaligned(ref Unsafe.As<Vector256<Int16>, byte>(ref _fld2), ref Unsafe.As<Int16, byte>(ref _data2[0]), (uint)Unsafe.SizeOf<Vector256<Int16>>());
for (var i = 0; i < Op1ElementCount; i++) { _data1[i] = TestLibrary.Generator.GetInt16(); }
for (var i = 0; i < Op2ElementCount; i++) { _data2[i] = TestLibrary.Generator.GetInt16(); }
_dataTable = new DataTable(_data1, _data2, LargestVectorSize);
}
public bool Succeeded { get; set; }
public void RunBasicScenario_UnsafeRead()
{
TestLibrary.TestFramework.BeginScenario(nameof(RunBasicScenario_UnsafeRead));
var result = Vector256.GreaterThanOrEqualAll(
Unsafe.Read<Vector256<Int16>>(_dataTable.inArray1Ptr),
Unsafe.Read<Vector256<Int16>>(_dataTable.inArray2Ptr)
);
ValidateResult(_dataTable.inArray1Ptr, _dataTable.inArray2Ptr, result);
}
public void RunReflectionScenario_UnsafeRead()
{
TestLibrary.TestFramework.BeginScenario(nameof(RunReflectionScenario_UnsafeRead));
var method = typeof(Vector256).GetMethod(nameof(Vector256.GreaterThanOrEqualAll), new Type[] {
typeof(Vector256<Int16>),
typeof(Vector256<Int16>)
});
if (method is null)
{
method = typeof(Vector256).GetMethod(nameof(Vector256.GreaterThanOrEqualAll), 1, new Type[] {
typeof(Vector256<>).MakeGenericType(Type.MakeGenericMethodParameter(0)),
typeof(Vector256<>).MakeGenericType(Type.MakeGenericMethodParameter(0))
});
}
if (method.IsGenericMethodDefinition)
{
method = method.MakeGenericMethod(typeof(Int16));
}
var result = method.Invoke(null, new object[] {
Unsafe.Read<Vector256<Int16>>(_dataTable.inArray1Ptr),
Unsafe.Read<Vector256<Int16>>(_dataTable.inArray2Ptr)
});
ValidateResult(_dataTable.inArray1Ptr, _dataTable.inArray2Ptr, (bool)(result));
}
public void RunClsVarScenario()
{
TestLibrary.TestFramework.BeginScenario(nameof(RunClsVarScenario));
var result = Vector256.GreaterThanOrEqualAll(
_clsVar1,
_clsVar2
);
ValidateResult(_clsVar1, _clsVar2, result);
}
public void RunLclVarScenario_UnsafeRead()
{
TestLibrary.TestFramework.BeginScenario(nameof(RunLclVarScenario_UnsafeRead));
var op1 = Unsafe.Read<Vector256<Int16>>(_dataTable.inArray1Ptr);
var op2 = Unsafe.Read<Vector256<Int16>>(_dataTable.inArray2Ptr);
var result = Vector256.GreaterThanOrEqualAll(op1, op2);
ValidateResult(op1, op2, result);
}
public void RunClassLclFldScenario()
{
TestLibrary.TestFramework.BeginScenario(nameof(RunClassLclFldScenario));
var test = new VectorBooleanBinaryOpTest__GreaterThanOrEqualAllInt16();
var result = Vector256.GreaterThanOrEqualAll(test._fld1, test._fld2);
ValidateResult(test._fld1, test._fld2, result);
}
public void RunClassFldScenario()
{
TestLibrary.TestFramework.BeginScenario(nameof(RunClassFldScenario));
var result = Vector256.GreaterThanOrEqualAll(_fld1, _fld2);
ValidateResult(_fld1, _fld2, result);
}
public void RunStructLclFldScenario()
{
TestLibrary.TestFramework.BeginScenario(nameof(RunStructLclFldScenario));
var test = TestStruct.Create();
var result = Vector256.GreaterThanOrEqualAll(test._fld1, test._fld2);
ValidateResult(test._fld1, test._fld2, result);
}
public void RunStructFldScenario()
{
TestLibrary.TestFramework.BeginScenario(nameof(RunStructFldScenario));
var test = TestStruct.Create();
test.RunStructFldScenario(this);
}
private void ValidateResult(Vector256<Int16> op1, Vector256<Int16> op2, bool result, [CallerMemberName] string method = "")
{
Int16[] inArray1 = new Int16[Op1ElementCount];
Int16[] inArray2 = new Int16[Op2ElementCount];
Unsafe.WriteUnaligned(ref Unsafe.As<Int16, byte>(ref inArray1[0]), op1);
Unsafe.WriteUnaligned(ref Unsafe.As<Int16, byte>(ref inArray2[0]), op2);
ValidateResult(inArray1, inArray2, result, method);
}
private void ValidateResult(void* op1, void* op2, bool result, [CallerMemberName] string method = "")
{
Int16[] inArray1 = new Int16[Op1ElementCount];
Int16[] inArray2 = new Int16[Op2ElementCount];
Unsafe.CopyBlockUnaligned(ref Unsafe.As<Int16, byte>(ref inArray1[0]), ref Unsafe.AsRef<byte>(op1), (uint)Unsafe.SizeOf<Vector256<Int16>>());
Unsafe.CopyBlockUnaligned(ref Unsafe.As<Int16, byte>(ref inArray2[0]), ref Unsafe.AsRef<byte>(op2), (uint)Unsafe.SizeOf<Vector256<Int16>>());
ValidateResult(inArray1, inArray2, result, method);
}
private void ValidateResult(Int16[] left, Int16[] right, bool result, [CallerMemberName] string method = "")
{
bool succeeded = true;
var expectedResult = true;
for (var i = 0; i < Op1ElementCount; i++)
{
expectedResult &= (left[i] >= right[i]);
}
succeeded = (expectedResult == result);
if (!succeeded)
{
TestLibrary.TestFramework.LogInformation($"{nameof(Vector256)}.{nameof(Vector256.GreaterThanOrEqualAll)}<Int16>(Vector256<Int16>, Vector256<Int16>): {method} failed:");
TestLibrary.TestFramework.LogInformation($" left: ({string.Join(", ", left)})");
TestLibrary.TestFramework.LogInformation($" right: ({string.Join(", ", right)})");
TestLibrary.TestFramework.LogInformation($" result: ({result})");
TestLibrary.TestFramework.LogInformation(string.Empty);
Succeeded = false;
}
}
}
}
| -1 |
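The reflection scenario in the record above binds the open generic overload of GreaterThanOrEqualAll by describing its signature with Type.MakeGenericMethodParameter. Below is a minimal, self-contained C# sketch of that lookup pattern; the class name and sample inputs are illustrative, not part of the test suite.

using System;
using System.Runtime.Intrinsics;

class GenericOverloadLookupSketch
{
    static void Main()
    {
        // A placeholder standing for the method's generic parameter T lets us
        // describe GreaterThanOrEqualAll<T>(Vector256<T>, Vector256<T>) without
        // closing it over a concrete element type yet.
        Type t = Type.MakeGenericMethodParameter(0);
        var open = typeof(Vector256).GetMethod(
            nameof(Vector256.GreaterThanOrEqualAll), 1,
            new[]
            {
                typeof(Vector256<>).MakeGenericType(t),
                typeof(Vector256<>).MakeGenericType(t)
            });

        // Close the definition over short and invoke it, just as
        // RunReflectionScenario_UnsafeRead does above.
        var closed = open.MakeGenericMethod(typeof(short));
        var result = (bool)closed.Invoke(null, new object[]
        {
            Vector256<short>.Zero,
            Vector256<short>.Zero
        });
        Console.WriteLine(result); // True: 0 >= 0 holds in every element
    }
}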
dotnet/runtime | 66,213 | [mono] Put WeakAttribute support under an ifdef | This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension).
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc. are still there. It seemed unnecessary to perturb the AOT format. | lambdageek | 2022-03-04T19:50:51Z | 2022-03-08T14:08:40Z | 34880aaeb463cc17268b44aecb80dda0914cfa01 | 8342a08b8bf0ff2ffb64cd5992e2d60c8d6d63c1 | [mono] Put WeakAttribute support under an ifdef. This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension).
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc. are still there. It seemed unnecessary to perturb the AOT format. | ./src/tests/JIT/HardwareIntrinsics/General/Shared/VectorAsTest.template | // Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
/******************************************************************************
* This file is auto-generated from a template file by the GenerateTests.csx *
* script in tests\src\JIT\HardwareIntrinsics\General\Shared. In order to make *
* changes, please update the corresponding template and run according to the *
* directions listed in the file. *
******************************************************************************/
using System;
using System.Runtime.CompilerServices;
using System.Runtime.InteropServices;
using System.Runtime.Intrinsics;
namespace JIT.HardwareIntrinsics.General
{
public static partial class Program
{
private static void {Method}{BaseType}()
{
var test = new VectorAs__{Method}{BaseType}();
// Validates basic functionality works
test.RunBasicScenario();
// Validates basic functionality works using the generic form, rather than the type-specific form of the method
test.RunGenericScenario();
// Validates calling via reflection works
test.RunReflectionScenario();
if (!test.Succeeded)
{
throw new Exception("One or more scenarios did not complete as expected.");
}
}
}
public sealed unsafe class VectorAs__{Method}{BaseType}
{
private static readonly int LargestVectorSize = {LargestVectorSize};
private static readonly int ElementCount = Unsafe.SizeOf<{VectorType}<{BaseType}>>() / sizeof({BaseType});
public bool Succeeded { get; set; } = true;
public void RunBasicScenario()
{
TestLibrary.TestFramework.BeginScenario(nameof(RunBasicScenario));
{VectorType}<{BaseType}> value;
value = {VectorType}.Create({NextValueOp});
{VectorType}<byte> byteResult = value.{Method}Byte();
ValidateResult(byteResult, value);
value = {VectorType}.Create({NextValueOp});
{VectorType}<double> doubleResult = value.{Method}Double();
ValidateResult(doubleResult, value);
value = {VectorType}.Create({NextValueOp});
{VectorType}<short> shortResult = value.{Method}Int16();
ValidateResult(shortResult, value);
value = {VectorType}.Create({NextValueOp});
{VectorType}<int> intResult = value.{Method}Int32();
ValidateResult(intResult, value);
value = {VectorType}.Create({NextValueOp});
{VectorType}<long> longResult = value.{Method}Int64();
ValidateResult(longResult, value);
value = {VectorType}.Create({NextValueOp});
{VectorType}<sbyte> sbyteResult = value.{Method}SByte();
ValidateResult(sbyteResult, value);
value = {VectorType}.Create({NextValueOp});
{VectorType}<float> floatResult = value.{Method}Single();
ValidateResult(floatResult, value);
value = {VectorType}.Create({NextValueOp});
{VectorType}<ushort> ushortResult = value.{Method}UInt16();
ValidateResult(ushortResult, value);
value = {VectorType}.Create({NextValueOp});
{VectorType}<uint> uintResult = value.{Method}UInt32();
ValidateResult(uintResult, value);
value = {VectorType}.Create({NextValueOp});
{VectorType}<ulong> ulongResult = value.{Method}UInt64();
ValidateResult(ulongResult, value);
}
public void RunGenericScenario()
{
TestLibrary.TestFramework.BeginScenario(nameof(RunGenericScenario));
{VectorType}<{BaseType}> value;
value = {VectorType}.Create({NextValueOp});
{VectorType}<byte> byteResult = value.{Method}<{BaseType}, byte>();
ValidateResult(byteResult, value);
value = {VectorType}.Create({NextValueOp});
{VectorType}<double> doubleResult = value.{Method}<{BaseType}, double>();
ValidateResult(doubleResult, value);
value = {VectorType}.Create({NextValueOp});
{VectorType}<short> shortResult = value.{Method}<{BaseType}, short>();
ValidateResult(shortResult, value);
value = {VectorType}.Create({NextValueOp});
{VectorType}<int> intResult = value.{Method}<{BaseType}, int>();
ValidateResult(intResult, value);
value = {VectorType}.Create({NextValueOp});
{VectorType}<long> longResult = value.{Method}<{BaseType}, long>();
ValidateResult(longResult, value);
value = {VectorType}.Create({NextValueOp});
{VectorType}<sbyte> sbyteResult = value.{Method}<{BaseType}, sbyte>();
ValidateResult(sbyteResult, value);
value = {VectorType}.Create({NextValueOp});
{VectorType}<float> floatResult = value.{Method}<{BaseType}, float>();
ValidateResult(floatResult, value);
value = {VectorType}.Create({NextValueOp});
{VectorType}<ushort> ushortResult = value.{Method}<{BaseType}, ushort>();
ValidateResult(ushortResult, value);
value = {VectorType}.Create({NextValueOp});
{VectorType}<uint> uintResult = value.{Method}<{BaseType}, uint>();
ValidateResult(uintResult, value);
value = {VectorType}.Create({NextValueOp});
{VectorType}<ulong> ulongResult = value.{Method}<{BaseType}, ulong>();
ValidateResult(ulongResult, value);
}
public void RunReflectionScenario()
{
TestLibrary.TestFramework.BeginScenario(nameof(RunReflectionScenario));
{VectorType}<{BaseType}> value;
value = {VectorType}.Create({NextValueOp});
object byteResult = typeof({VectorType})
.GetMethod(nameof({VectorType}.{Method}Byte))
.MakeGenericMethod(typeof({BaseType}))
.Invoke(null, new object[] { value });
ValidateResult(({VectorType}<byte>)(byteResult), value);
value = {VectorType}.Create({NextValueOp});
object doubleResult = typeof({VectorType})
.GetMethod(nameof({VectorType}.{Method}Double))
.MakeGenericMethod(typeof({BaseType}))
.Invoke(null, new object[] { value });
ValidateResult(({VectorType}<double>)(doubleResult), value);
value = {VectorType}.Create({NextValueOp});
object shortResult = typeof({VectorType})
.GetMethod(nameof({VectorType}.{Method}Int16))
.MakeGenericMethod(typeof({BaseType}))
.Invoke(null, new object[] { value });
ValidateResult(({VectorType}<short>)(shortResult), value);
value = {VectorType}.Create({NextValueOp});
object intResult = typeof({VectorType})
.GetMethod(nameof({VectorType}.{Method}Int32))
.MakeGenericMethod(typeof({BaseType}))
.Invoke(null, new object[] { value });
ValidateResult(({VectorType}<int>)(intResult), value);
value = {VectorType}.Create({NextValueOp});
object longResult = typeof({VectorType})
.GetMethod(nameof({VectorType}.{Method}Int64))
.MakeGenericMethod(typeof({BaseType}))
.Invoke(null, new object[] { value });
ValidateResult(({VectorType}<long>)(longResult), value);
value = {VectorType}.Create({NextValueOp});
object sbyteResult = typeof({VectorType})
.GetMethod(nameof({VectorType}.{Method}SByte))
.MakeGenericMethod(typeof({BaseType}))
.Invoke(null, new object[] { value });
ValidateResult(({VectorType}<sbyte>)(sbyteResult), value);
value = {VectorType}.Create({NextValueOp});
object floatResult = typeof({VectorType})
.GetMethod(nameof({VectorType}.{Method}Single))
.MakeGenericMethod(typeof({BaseType}))
.Invoke(null, new object[] { value });
ValidateResult(({VectorType}<float>)(floatResult), value);
value = {VectorType}.Create({NextValueOp});
object ushortResult = typeof({VectorType})
.GetMethod(nameof({VectorType}.{Method}UInt16))
.MakeGenericMethod(typeof({BaseType}))
.Invoke(null, new object[] { value });
ValidateResult(({VectorType}<ushort>)(ushortResult), value);
value = {VectorType}.Create({NextValueOp});
object uintResult = typeof({VectorType})
.GetMethod(nameof({VectorType}.{Method}UInt32))
.MakeGenericMethod(typeof({BaseType}))
.Invoke(null, new object[] { value });
ValidateResult(({VectorType}<uint>)(uintResult), value);
value = {VectorType}.Create({NextValueOp});
object ulongResult = typeof({VectorType})
.GetMethod(nameof({VectorType}.{Method}UInt64))
.MakeGenericMethod(typeof({BaseType}))
.Invoke(null, new object[] { value });
ValidateResult(({VectorType}<ulong>)(ulongResult), value);
}
private void ValidateResult<T>({VectorType}<T> result, {VectorType}<{BaseType}> value, [CallerMemberName] string method = "")
where T : struct
{
{BaseType}[] resultElements = new {BaseType}[ElementCount];
Unsafe.WriteUnaligned(ref Unsafe.As<{BaseType}, byte>(ref resultElements[0]), result);
{BaseType}[] valueElements = new {BaseType}[ElementCount];
Unsafe.WriteUnaligned(ref Unsafe.As<{BaseType}, byte>(ref valueElements[0]), value);
ValidateResult(resultElements, valueElements, typeof(T), method);
}
private void ValidateResult({BaseType}[] resultElements, {BaseType}[] valueElements, Type targetType, [CallerMemberName] string method = "")
{
bool succeeded = true;
for (var i = 0; i < ElementCount; i++)
{
if (resultElements[i] != valueElements[i])
{
succeeded = false;
break;
}
}
if (!succeeded)
{
TestLibrary.TestFramework.LogInformation($"{VectorType}<{BaseType}>.{Method}{targetType.Name}: {method} failed:");
TestLibrary.TestFramework.LogInformation($" value: ({string.Join(", ", valueElements)})");
TestLibrary.TestFramework.LogInformation($" result: ({string.Join(", ", resultElements)})");
TestLibrary.TestFramework.LogInformation(string.Empty);
Succeeded = false;
}
}
}
}
| // Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
/******************************************************************************
* This file is auto-generated from a template file by the GenerateTests.csx *
* script in tests\src\JIT\HardwareIntrinsics\General\Shared. In order to make *
* changes, please update the corresponding template and run according to the *
* directions listed in the file. *
******************************************************************************/
using System;
using System.Runtime.CompilerServices;
using System.Runtime.InteropServices;
using System.Runtime.Intrinsics;
namespace JIT.HardwareIntrinsics.General
{
public static partial class Program
{
private static void {Method}{BaseType}()
{
var test = new VectorAs__{Method}{BaseType}();
// Validates basic functionality works
test.RunBasicScenario();
// Validates basic functionality works using the generic form, rather than the type-specific form of the method
test.RunGenericScenario();
// Validates calling via reflection works
test.RunReflectionScenario();
if (!test.Succeeded)
{
throw new Exception("One or more scenarios did not complete as expected.");
}
}
}
public sealed unsafe class VectorAs__{Method}{BaseType}
{
private static readonly int LargestVectorSize = {LargestVectorSize};
private static readonly int ElementCount = Unsafe.SizeOf<{VectorType}<{BaseType}>>() / sizeof({BaseType});
public bool Succeeded { get; set; } = true;
public void RunBasicScenario()
{
TestLibrary.TestFramework.BeginScenario(nameof(RunBasicScenario));
{VectorType}<{BaseType}> value;
value = {VectorType}.Create({NextValueOp});
{VectorType}<byte> byteResult = value.{Method}Byte();
ValidateResult(byteResult, value);
value = {VectorType}.Create({NextValueOp});
{VectorType}<double> doubleResult = value.{Method}Double();
ValidateResult(doubleResult, value);
value = {VectorType}.Create({NextValueOp});
{VectorType}<short> shortResult = value.{Method}Int16();
ValidateResult(shortResult, value);
value = {VectorType}.Create({NextValueOp});
{VectorType}<int> intResult = value.{Method}Int32();
ValidateResult(intResult, value);
value = {VectorType}.Create({NextValueOp});
{VectorType}<long> longResult = value.{Method}Int64();
ValidateResult(longResult, value);
value = {VectorType}.Create({NextValueOp});
{VectorType}<sbyte> sbyteResult = value.{Method}SByte();
ValidateResult(sbyteResult, value);
value = {VectorType}.Create({NextValueOp});
{VectorType}<float> floatResult = value.{Method}Single();
ValidateResult(floatResult, value);
value = {VectorType}.Create({NextValueOp});
{VectorType}<ushort> ushortResult = value.{Method}UInt16();
ValidateResult(ushortResult, value);
value = {VectorType}.Create({NextValueOp});
{VectorType}<uint> uintResult = value.{Method}UInt32();
ValidateResult(uintResult, value);
value = {VectorType}.Create({NextValueOp});
{VectorType}<ulong> ulongResult = value.{Method}UInt64();
ValidateResult(ulongResult, value);
}
public void RunGenericScenario()
{
TestLibrary.TestFramework.BeginScenario(nameof(RunGenericScenario));
{VectorType}<{BaseType}> value;
value = {VectorType}.Create({NextValueOp});
{VectorType}<byte> byteResult = value.{Method}<{BaseType}, byte>();
ValidateResult(byteResult, value);
value = {VectorType}.Create({NextValueOp});
{VectorType}<double> doubleResult = value.{Method}<{BaseType}, double>();
ValidateResult(doubleResult, value);
value = {VectorType}.Create({NextValueOp});
{VectorType}<short> shortResult = value.{Method}<{BaseType}, short>();
ValidateResult(shortResult, value);
value = {VectorType}.Create({NextValueOp});
{VectorType}<int> intResult = value.{Method}<{BaseType}, int>();
ValidateResult(intResult, value);
value = {VectorType}.Create({NextValueOp});
{VectorType}<long> longResult = value.{Method}<{BaseType}, long>();
ValidateResult(longResult, value);
value = {VectorType}.Create({NextValueOp});
{VectorType}<sbyte> sbyteResult = value.{Method}<{BaseType}, sbyte>();
ValidateResult(sbyteResult, value);
value = {VectorType}.Create({NextValueOp});
{VectorType}<float> floatResult = value.{Method}<{BaseType}, float>();
ValidateResult(floatResult, value);
value = {VectorType}.Create({NextValueOp});
{VectorType}<ushort> ushortResult = value.{Method}<{BaseType}, ushort>();
ValidateResult(ushortResult, value);
value = {VectorType}.Create({NextValueOp});
{VectorType}<uint> uintResult = value.{Method}<{BaseType}, uint>();
ValidateResult(uintResult, value);
value = {VectorType}.Create({NextValueOp});
{VectorType}<ulong> ulongResult = value.{Method}<{BaseType}, ulong>();
ValidateResult(ulongResult, value);
}
public void RunReflectionScenario()
{
TestLibrary.TestFramework.BeginScenario(nameof(RunReflectionScenario));
{VectorType}<{BaseType}> value;
value = {VectorType}.Create({NextValueOp});
object byteResult = typeof({VectorType})
.GetMethod(nameof({VectorType}.{Method}Byte))
.MakeGenericMethod(typeof({BaseType}))
.Invoke(null, new object[] { value });
ValidateResult(({VectorType}<byte>)(byteResult), value);
value = {VectorType}.Create({NextValueOp});
object doubleResult = typeof({VectorType})
.GetMethod(nameof({VectorType}.{Method}Double))
.MakeGenericMethod(typeof({BaseType}))
.Invoke(null, new object[] { value });
ValidateResult(({VectorType}<double>)(doubleResult), value);
value = {VectorType}.Create({NextValueOp});
object shortResult = typeof({VectorType})
.GetMethod(nameof({VectorType}.{Method}Int16))
.MakeGenericMethod(typeof({BaseType}))
.Invoke(null, new object[] { value });
ValidateResult(({VectorType}<short>)(shortResult), value);
value = {VectorType}.Create({NextValueOp});
object intResult = typeof({VectorType})
.GetMethod(nameof({VectorType}.{Method}Int32))
.MakeGenericMethod(typeof({BaseType}))
.Invoke(null, new object[] { value });
ValidateResult(({VectorType}<int>)(intResult), value);
value = {VectorType}.Create({NextValueOp});
object longResult = typeof({VectorType})
.GetMethod(nameof({VectorType}.{Method}Int64))
.MakeGenericMethod(typeof({BaseType}))
.Invoke(null, new object[] { value });
ValidateResult(({VectorType}<long>)(longResult), value);
value = {VectorType}.Create({NextValueOp});
object sbyteResult = typeof({VectorType})
.GetMethod(nameof({VectorType}.{Method}SByte))
.MakeGenericMethod(typeof({BaseType}))
.Invoke(null, new object[] { value });
ValidateResult(({VectorType}<sbyte>)(sbyteResult), value);
value = {VectorType}.Create({NextValueOp});
object floatResult = typeof({VectorType})
.GetMethod(nameof({VectorType}.{Method}Single))
.MakeGenericMethod(typeof({BaseType}))
.Invoke(null, new object[] { value });
ValidateResult(({VectorType}<float>)(floatResult), value);
value = {VectorType}.Create({NextValueOp});
object ushortResult = typeof({VectorType})
.GetMethod(nameof({VectorType}.{Method}UInt16))
.MakeGenericMethod(typeof({BaseType}))
.Invoke(null, new object[] { value });
ValidateResult(({VectorType}<ushort>)(ushortResult), value);
value = {VectorType}.Create({NextValueOp});
object uintResult = typeof({VectorType})
.GetMethod(nameof({VectorType}.{Method}UInt32))
.MakeGenericMethod(typeof({BaseType}))
.Invoke(null, new object[] { value });
ValidateResult(({VectorType}<uint>)(uintResult), value);
value = {VectorType}.Create({NextValueOp});
object ulongResult = typeof({VectorType})
.GetMethod(nameof({VectorType}.{Method}UInt64))
.MakeGenericMethod(typeof({BaseType}))
.Invoke(null, new object[] { value });
ValidateResult(({VectorType}<ulong>)(ulongResult), value);
}
private void ValidateResult<T>({VectorType}<T> result, {VectorType}<{BaseType}> value, [CallerMemberName] string method = "")
where T : struct
{
{BaseType}[] resultElements = new {BaseType}[ElementCount];
Unsafe.WriteUnaligned(ref Unsafe.As<{BaseType}, byte>(ref resultElements[0]), result);
{BaseType}[] valueElements = new {BaseType}[ElementCount];
Unsafe.WriteUnaligned(ref Unsafe.As<{BaseType}, byte>(ref valueElements[0]), value);
ValidateResult(resultElements, valueElements, typeof(T), method);
}
private void ValidateResult({BaseType}[] resultElements, {BaseType}[] valueElements, Type targetType, [CallerMemberName] string method = "")
{
bool succeeded = true;
for (var i = 0; i < ElementCount; i++)
{
if (resultElements[i] != valueElements[i])
{
succeeded = false;
break;
}
}
if (!succeeded)
{
TestLibrary.TestFramework.LogInformation($"{VectorType}<{BaseType}>.{Method}{targetType.Name}: {method} failed:");
TestLibrary.TestFramework.LogInformation($" value: ({string.Join(", ", valueElements)})");
TestLibrary.TestFramework.LogInformation($" result: ({string.Join(", ", resultElements)})");
TestLibrary.TestFramework.LogInformation(string.Empty);
Succeeded = false;
}
}
}
}
| -1 |
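The template above exercises the vector As* reinterpret casts: they change how the same bytes are typed, never the bytes themselves, which is why its ValidateResult compares raw elements. A small illustrative sketch, shown here with Vector128 (one of the vector types the template is instantiated for), with an arbitrarily chosen value:

using System;
using System.Runtime.Intrinsics;

class AsReinterpretSketch
{
    static void Main()
    {
        // Broadcast 1.0 into both double lanes, then view the same 16 bytes as ulongs.
        Vector128<double> d = Vector128.Create(1.0);
        Vector128<ulong> bits = d.AsUInt64();

        // As* performs no numeric conversion: each lane now shows the IEEE 754
        // bit pattern of 1.0 (0x3FF0000000000000).
        Console.WriteLine(bits); // <4607182418800017408, 4607182418800017408>
    }
}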
dotnet/runtime | 66,213 | [mono] Put WeakAttribute support under an ifdef | This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension).
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc. are still there. It seemed unnecessary to perturb the AOT format. | lambdageek | 2022-03-04T19:50:51Z | 2022-03-08T14:08:40Z | 34880aaeb463cc17268b44aecb80dda0914cfa01 | 8342a08b8bf0ff2ffb64cd5992e2d60c8d6d63c1 | [mono] Put WeakAttribute support under an ifdef. This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension).
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc. are still there. It seemed unnecessary to perturb the AOT format. | ./src/tests/Loader/classloader/TypeGeneratorTests/TypeGeneratorTest85/Generated85.ilproj | <Project Sdk="Microsoft.NET.Sdk.IL">
<PropertyGroup>
<CLRTestPriority>1</CLRTestPriority>
</PropertyGroup>
<ItemGroup>
<Compile Include="Generated85.il" />
</ItemGroup>
<ItemGroup>
<ProjectReference Include="..\TestFramework\TestFramework.csproj" />
</ItemGroup>
</Project>
| <Project Sdk="Microsoft.NET.Sdk.IL">
<PropertyGroup>
<CLRTestPriority>1</CLRTestPriority>
</PropertyGroup>
<ItemGroup>
<Compile Include="Generated85.il" />
</ItemGroup>
<ItemGroup>
<ProjectReference Include="..\TestFramework\TestFramework.csproj" />
</ItemGroup>
</Project>
| -1 |
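The PR description attached to each record explains the change as compile-time gating: the feature body disappears behind an ifdef while the AOT compiler keeps emitting an empty weak_field_indexes table so the image format stays stable. The sketch below mirrors that pattern using C# conditional compilation; the symbol and member names are assumed for illustration only (the actual Mono change is in the C runtime):

using System;

static class WeakFieldSupportSketch
{
    public static int[] ReadWeakFieldIndexes()
    {
#if ENABLE_WEAK_ATTRIBUTE
        // Live path: decode the weak_field_indexes table from the image.
        return DecodeWeakFieldIndexes();
#else
        // Compiled-out path: the writer still emits a size-0 table so existing
        // image layouts are undisturbed, but nothing consumes it here.
        return Array.Empty<int>();
#endif
    }

#if ENABLE_WEAK_ATTRIBUTE
    private static int[] DecodeWeakFieldIndexes() =>
        throw new NotImplementedException(); // placeholder for the real decoder
#endif
}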
dotnet/runtime | 66,213 | [mono] Put WeakAttribute support under an ifdef | This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension).
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc. are still there. It seemed unnecessary to perturb the AOT format. | lambdageek | 2022-03-04T19:50:51Z | 2022-03-08T14:08:40Z | 34880aaeb463cc17268b44aecb80dda0914cfa01 | 8342a08b8bf0ff2ffb64cd5992e2d60c8d6d63c1 | [mono] Put WeakAttribute support under an ifdef. This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension).
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc. are still there. It seemed unnecessary to perturb the AOT format. | ./src/tests/reflection/ldtoken/byrefs.il | // Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
//
// This test makes sure that ByRef types are properly unified by the reflection stack
// and that a type constructed by reflection is equivalent to a type constructed through
// the LDTOKEN instruction.
//
.assembly extern mscorlib { }
.assembly extern xunit.core {}
.assembly extern System.Console
{
.publickeytoken = (B0 3F 5F 7F 11 D5 0A 3A )
.ver 4:0:0:0
}
.assembly byrefs
{
.custom instance void [mscorlib]System.Runtime.CompilerServices.CompilationRelaxationsAttribute::.ctor(int32) = ( 01 00 08 00 00 00 00 00 )
}
.class private auto ansi beforefieldinit Test_byrefs
extends [mscorlib]System.Object
{
.method private hidebysig static int32 Main() cil managed
{
.custom instance void [xunit.core]Xunit.FactAttribute::.ctor() = (
01 00 00 00
)
.entrypoint
.maxstack 2
call bool Test_byrefs::LdTokenEqualsMakeByRef()
brfalse Failed
call bool Test_byrefs::MakeByRefEqualsLdToken()
brfalse Failed
ldc.i4 100
ldstr "ByRefs look good"
br.s Done
Failed:
ldc.i4 666
ldstr "ByRefs are broken"
Done:
call void class [System.Console]System.Console::WriteLine(string)
ret
}
.method private hidebysig static bool LdTokenEqualsMakeByRef() cil managed
{
.maxstack 2
ldtoken valuetype MyType1&
call class [mscorlib]System.Type [mscorlib]System.Type::GetTypeFromHandle(valuetype [mscorlib]System.RuntimeTypeHandle)
ldtoken valuetype MyType1
call class [mscorlib]System.Type [mscorlib]System.Type::GetTypeFromHandle(valuetype [mscorlib]System.RuntimeTypeHandle)
callvirt instance class [mscorlib]System.Type [mscorlib]System.Type::MakeByRefType()
ceq
ret
}
.method private hidebysig static bool MakeByRefEqualsLdToken() cil managed
{
.maxstack 2
ldtoken valuetype MyType2
call class [mscorlib]System.Type [mscorlib]System.Type::GetTypeFromHandle(valuetype [mscorlib]System.RuntimeTypeHandle)
callvirt instance class [mscorlib]System.Type [mscorlib]System.Type::MakeByRefType()
ldtoken valuetype MyType2&
call class [mscorlib]System.Type [mscorlib]System.Type::GetTypeFromHandle(valuetype [mscorlib]System.RuntimeTypeHandle)
ceq
ret
}
}
.class private auto ansi beforefieldinit MyType1
extends [mscorlib]System.ValueType
{
}
.class private auto ansi beforefieldinit MyType2
extends [mscorlib]System.ValueType
{
}
| // Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
//
// This test makes sure that ByRef types are properly unified by the reflection stack
// and that a type constructed by reflection is equivalent to a type constructed through
// the LDTOKEN instruction.
//
.assembly extern mscorlib { }
.assembly extern xunit.core {}
.assembly extern System.Console
{
.publickeytoken = (B0 3F 5F 7F 11 D5 0A 3A )
.ver 4:0:0:0
}
.assembly byrefs
{
.custom instance void [mscorlib]System.Runtime.CompilerServices.CompilationRelaxationsAttribute::.ctor(int32) = ( 01 00 08 00 00 00 00 00 )
}
.class private auto ansi beforefieldinit Test_byrefs
extends [mscorlib]System.Object
{
.method private hidebysig static int32 Main() cil managed
{
.custom instance void [xunit.core]Xunit.FactAttribute::.ctor() = (
01 00 00 00
)
.entrypoint
.maxstack 2
call bool Test_byrefs::LdTokenEqualsMakeByRef()
brfalse Failed
call bool Test_byrefs::MakeByRefEqualsLdToken()
brfalse Failed
ldc.i4 100
ldstr "ByRefs look good"
br.s Done
Failed:
ldc.i4 666
ldstr "ByRefs are broken"
Done:
call void class [System.Console]System.Console::WriteLine(string)
ret
}
.method private hidebysig static bool LdTokenEqualsMakeByRef() cil managed
{
.maxstack 2
ldtoken valuetype MyType1&
call class [mscorlib]System.Type [mscorlib]System.Type::GetTypeFromHandle(valuetype [mscorlib]System.RuntimeTypeHandle)
ldtoken valuetype MyType1
call class [mscorlib]System.Type [mscorlib]System.Type::GetTypeFromHandle(valuetype [mscorlib]System.RuntimeTypeHandle)
callvirt instance class [mscorlib]System.Type [mscorlib]System.Type::MakeByRefType()
ceq
ret
}
.method private hidebysig static bool MakeByRefEqualsLdToken() cil managed
{
.maxstack 2
ldtoken valuetype MyType2
call class [mscorlib]System.Type [mscorlib]System.Type::GetTypeFromHandle(valuetype [mscorlib]System.RuntimeTypeHandle)
callvirt instance class [mscorlib]System.Type [mscorlib]System.Type::MakeByRefType()
ldtoken valuetype MyType2&
call class [mscorlib]System.Type [mscorlib]System.Type::GetTypeFromHandle(valuetype [mscorlib]System.RuntimeTypeHandle)
ceq
ret
}
}
.class private auto ansi beforefieldinit MyType1
extends [mscorlib]System.ValueType
{
}
.class private auto ansi beforefieldinit MyType2
extends [mscorlib]System.ValueType
{
}
| -1 |
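C# has no typeof syntax for a byref type, so the IL test above is the natural place to compare an ldtoken-loaded MyType1& against MakeByRefType. The unification it asserts can still be observed from C#; a small sketch follows (the struct and class names are arbitrary):

using System;

struct SampleType { }

class ByRefUnificationSketch
{
    static void Main()
    {
        Type a = typeof(SampleType).MakeByRefType();
        Type b = typeof(SampleType).MakeByRefType();

        // Constructed byref types are unified by the runtime, so repeated
        // construction yields the same Type identity.
        Console.WriteLine(a == b);                                   // True
        Console.WriteLine(a.IsByRef);                                // True
        Console.WriteLine(a.GetElementType() == typeof(SampleType)); // True
    }
}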
dotnet/runtime | 66,213 | [mono] Put WeakAttribute support under an ifdef | This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension).
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc. are still there. It seemed unnecessary to perturb the AOT format. | lambdageek | 2022-03-04T19:50:51Z | 2022-03-08T14:08:40Z | 34880aaeb463cc17268b44aecb80dda0914cfa01 | 8342a08b8bf0ff2ffb64cd5992e2d60c8d6d63c1 | [mono] Put WeakAttribute support under an ifdef. This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension).
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc. are still there. It seemed unnecessary to perturb the AOT format. | ./src/libraries/System.Security.Cryptography.Pkcs/src/Internal/Cryptography/KeyTransRecipientInfoPal.cs | // Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
using System;
using System.Text;
using System.Diagnostics;
using System.Security.Cryptography.Pkcs;
namespace Internal.Cryptography
{
internal abstract class KeyTransRecipientInfoPal : RecipientInfoPal
{
internal KeyTransRecipientInfoPal()
: base()
{
}
}
}
| // Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
using System;
using System.Text;
using System.Diagnostics;
using System.Security.Cryptography.Pkcs;
namespace Internal.Cryptography
{
internal abstract class KeyTransRecipientInfoPal : RecipientInfoPal
{
internal KeyTransRecipientInfoPal()
: base()
{
}
}
}
| -1 |
dotnet/runtime | 66,213 | [mono] Put WeakAttribute support under an ifdef | This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension).
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc. are still there. It seemed unnecessary to perturb the AOT format. | lambdageek | 2022-03-04T19:50:51Z | 2022-03-08T14:08:40Z | 34880aaeb463cc17268b44aecb80dda0914cfa01 | 8342a08b8bf0ff2ffb64cd5992e2d60c8d6d63c1 | [mono] Put WeakAttribute support under an ifdef. This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension).
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc. are still there. It seemed unnecessary to perturb the AOT format. | ./src/libraries/System.Drawing.Common/src/System/Drawing/Imaging/MetafileType.cs | // Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
namespace System.Drawing.Imaging
{
/// <summary>
/// Specifies the format of a <see cref='Metafile'/>.
/// </summary>
public enum MetafileType
{
/// <summary>
/// Specifies an invalid type.
/// </summary>
Invalid,
/// <summary>
/// Specifies a standard Windows metafile.
/// </summary>
Wmf,
/// <summary>
/// Specifies a Windows Placeable metafile.
/// </summary>
WmfPlaceable,
/// <summary>
/// Specifies a Windows enhanced metafile.
/// </summary>
Emf,
/// <summary>
/// Specifies a Windows enhanced metafile plus.
/// </summary>
EmfPlusOnly,
/// <summary>
/// Specifies both enhanced and enhanced plus commands in the same file.
/// </summary>
EmfPlusDual,
}
}
| // Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
namespace System.Drawing.Imaging
{
/// <summary>
/// Specifies the format of a <see cref='Metafile'/>.
/// </summary>
public enum MetafileType
{
/// <summary>
/// Specifies an invalid type.
/// </summary>
Invalid,
/// <summary>
/// Specifies a standard Windows metafile.
/// </summary>
Wmf,
/// <summary>
/// Specifies a Windows Placeable metafile.
/// </summary>
WmfPlaceable,
/// <summary>
/// Specifies a Windows enhanced metafile.
/// </summary>
Emf,
/// <summary>
/// Specifies a Windows enhanced metafile plus.
/// </summary>
EmfPlusOnly,
/// <summary>
/// Specifies both enhanced and enhanced plus commands in the same file.
/// </summary>
EmfPlusDual,
}
}
| -1 |
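A brief usage sketch for the enum above; the path argument is a placeholder, and the switch arms simply group the documented values (Metafile.GetMetafileHeader is the usual way to obtain a header whose Type property carries one of them):

using System.Drawing.Imaging;

static class MetafileTypeProbeSketch
{
    static string Describe(string path)
    {
        // Read only the header; the Type property reports which metafile
        // flavor the file contains.
        MetafileHeader header = Metafile.GetMetafileHeader(path);
        return header.Type switch
        {
            MetafileType.Wmf or MetafileType.WmfPlaceable => "classic Windows metafile",
            MetafileType.Emf => "enhanced metafile",
            MetafileType.EmfPlusOnly or MetafileType.EmfPlusDual => "enhanced metafile plus",
            _ => "invalid or unrecognized",
        };
    }
}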
dotnet/runtime | 66,213 | [mono] Put WeakAttribute support under an ifdef | This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension).
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc. are still there. It seemed unnecessary to perturb the AOT format. | lambdageek | 2022-03-04T19:50:51Z | 2022-03-08T14:08:40Z | 34880aaeb463cc17268b44aecb80dda0914cfa01 | 8342a08b8bf0ff2ffb64cd5992e2d60c8d6d63c1 | [mono] Put WeakAttribute support under an ifdef. This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension).
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc. are still there. It seemed unnecessary to perturb the AOT format. | ./src/tests/JIT/HardwareIntrinsics/General/Vector128/Negate.UInt64.cs | // Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
/******************************************************************************
* This file is auto-generated from a template file by the GenerateTests.csx *
* script in tests\src\JIT\HardwareIntrinsics\X86\Shared. In order to make *
* changes, please update the corresponding template and run according to the *
* directions listed in the file. *
******************************************************************************/
using System;
using System.Runtime.CompilerServices;
using System.Runtime.InteropServices;
using System.Runtime.Intrinsics;
namespace JIT.HardwareIntrinsics.General
{
public static partial class Program
{
private static void NegateUInt64()
{
var test = new VectorUnaryOpTest__NegateUInt64();
// Validates basic functionality works, using Unsafe.Read
test.RunBasicScenario_UnsafeRead();
// Validates calling via reflection works, using Unsafe.Read
test.RunReflectionScenario_UnsafeRead();
// Validates passing a static member works
test.RunClsVarScenario();
// Validates passing a local works, using Unsafe.Read
test.RunLclVarScenario_UnsafeRead();
// Validates passing the field of a local class works
test.RunClassLclFldScenario();
// Validates passing an instance member of a class works
test.RunClassFldScenario();
// Validates passing the field of a local struct works
test.RunStructLclFldScenario();
// Validates passing an instance member of a struct works
test.RunStructFldScenario();
if (!test.Succeeded)
{
throw new Exception("One or more scenarios did not complete as expected.");
}
}
}
public sealed unsafe class VectorUnaryOpTest__NegateUInt64
{
private struct DataTable
{
private byte[] inArray1;
private byte[] outArray;
private GCHandle inHandle1;
private GCHandle outHandle;
private ulong alignment;
public DataTable(UInt64[] inArray1, UInt64[] outArray, int alignment)
{
int sizeOfinArray1 = inArray1.Length * Unsafe.SizeOf<UInt64>();
int sizeOfoutArray = outArray.Length * Unsafe.SizeOf<UInt64>();
if ((alignment != 32 && alignment != 16 && alignment != 8) || (alignment * 2) < sizeOfinArray1 || (alignment * 2) < sizeOfoutArray)
{
throw new ArgumentException("Invalid value of alignment");
}
this.inArray1 = new byte[alignment * 2];
this.outArray = new byte[alignment * 2];
this.inHandle1 = GCHandle.Alloc(this.inArray1, GCHandleType.Pinned);
this.outHandle = GCHandle.Alloc(this.outArray, GCHandleType.Pinned);
this.alignment = (ulong)alignment;
Unsafe.CopyBlockUnaligned(ref Unsafe.AsRef<byte>(inArray1Ptr), ref Unsafe.As<UInt64, byte>(ref inArray1[0]), (uint)sizeOfinArray1);
}
public void* inArray1Ptr => Align((byte*)(inHandle1.AddrOfPinnedObject().ToPointer()), alignment);
public void* outArrayPtr => Align((byte*)(outHandle.AddrOfPinnedObject().ToPointer()), alignment);
public void Dispose()
{
inHandle1.Free();
outHandle.Free();
}
private static unsafe void* Align(byte* buffer, ulong expectedAlignment)
{
return (void*)(((ulong)buffer + expectedAlignment - 1) & ~(expectedAlignment - 1));
}
}
private struct TestStruct
{
public Vector128<UInt64> _fld1;
public static TestStruct Create()
{
var testStruct = new TestStruct();
for (var i = 0; i < Op1ElementCount; i++) { _data1[i] = TestLibrary.Generator.GetUInt64(); }
Unsafe.CopyBlockUnaligned(ref Unsafe.As<Vector128<UInt64>, byte>(ref testStruct._fld1), ref Unsafe.As<UInt64, byte>(ref _data1[0]), (uint)Unsafe.SizeOf<Vector128<UInt64>>());
return testStruct;
}
public void RunStructFldScenario(VectorUnaryOpTest__NegateUInt64 testClass)
{
var result = Vector128.Negate(_fld1);
Unsafe.Write(testClass._dataTable.outArrayPtr, result);
testClass.ValidateResult(_fld1, testClass._dataTable.outArrayPtr);
}
}
private static readonly int LargestVectorSize = 16;
private static readonly int Op1ElementCount = Unsafe.SizeOf<Vector128<UInt64>>() / sizeof(UInt64);
private static readonly int RetElementCount = Unsafe.SizeOf<Vector128<UInt64>>() / sizeof(UInt64);
private static UInt64[] _data1 = new UInt64[Op1ElementCount];
private static Vector128<UInt64> _clsVar1;
private Vector128<UInt64> _fld1;
private DataTable _dataTable;
static VectorUnaryOpTest__NegateUInt64()
{
for (var i = 0; i < Op1ElementCount; i++) { _data1[i] = TestLibrary.Generator.GetUInt64(); }
Unsafe.CopyBlockUnaligned(ref Unsafe.As<Vector128<UInt64>, byte>(ref _clsVar1), ref Unsafe.As<UInt64, byte>(ref _data1[0]), (uint)Unsafe.SizeOf<Vector128<UInt64>>());
}
public VectorUnaryOpTest__NegateUInt64()
{
Succeeded = true;
for (var i = 0; i < Op1ElementCount; i++) { _data1[i] = TestLibrary.Generator.GetUInt64(); }
Unsafe.CopyBlockUnaligned(ref Unsafe.As<Vector128<UInt64>, byte>(ref _fld1), ref Unsafe.As<UInt64, byte>(ref _data1[0]), (uint)Unsafe.SizeOf<Vector128<UInt64>>());
for (var i = 0; i < Op1ElementCount; i++) { _data1[i] = TestLibrary.Generator.GetUInt64(); }
_dataTable = new DataTable(_data1, new UInt64[RetElementCount], LargestVectorSize);
}
public bool Succeeded { get; set; }
public void RunBasicScenario_UnsafeRead()
{
TestLibrary.TestFramework.BeginScenario(nameof(RunBasicScenario_UnsafeRead));
var result = Vector128.Negate(
Unsafe.Read<Vector128<UInt64>>(_dataTable.inArray1Ptr)
);
Unsafe.Write(_dataTable.outArrayPtr, result);
ValidateResult(_dataTable.inArray1Ptr, _dataTable.outArrayPtr);
}
public void RunReflectionScenario_UnsafeRead()
{
TestLibrary.TestFramework.BeginScenario(nameof(RunReflectionScenario_UnsafeRead));
var method = typeof(Vector128).GetMethod(nameof(Vector128.Negate), new Type[] {
typeof(Vector128<UInt64>)
});
if (method is null)
{
method = typeof(Vector128).GetMethod(nameof(Vector128.Negate), 1, new Type[] {
typeof(Vector128<>).MakeGenericType(Type.MakeGenericMethodParameter(0))
});
}
if (method.IsGenericMethodDefinition)
{
method = method.MakeGenericMethod(typeof(UInt64));
}
var result = method.Invoke(null, new object[] {
Unsafe.Read<Vector128<UInt64>>(_dataTable.inArray1Ptr)
});
Unsafe.Write(_dataTable.outArrayPtr, (Vector128<UInt64>)(result));
ValidateResult(_dataTable.inArray1Ptr, _dataTable.outArrayPtr);
}
public void RunClsVarScenario()
{
TestLibrary.TestFramework.BeginScenario(nameof(RunClsVarScenario));
var result = Vector128.Negate(
_clsVar1
);
Unsafe.Write(_dataTable.outArrayPtr, result);
ValidateResult(_clsVar1, _dataTable.outArrayPtr);
}
public void RunLclVarScenario_UnsafeRead()
{
TestLibrary.TestFramework.BeginScenario(nameof(RunLclVarScenario_UnsafeRead));
var op1 = Unsafe.Read<Vector128<UInt64>>(_dataTable.inArray1Ptr);
var result = Vector128.Negate(op1);
Unsafe.Write(_dataTable.outArrayPtr, result);
ValidateResult(op1, _dataTable.outArrayPtr);
}
public void RunClassLclFldScenario()
{
TestLibrary.TestFramework.BeginScenario(nameof(RunClassLclFldScenario));
var test = new VectorUnaryOpTest__NegateUInt64();
var result = Vector128.Negate(test._fld1);
Unsafe.Write(_dataTable.outArrayPtr, result);
ValidateResult(test._fld1, _dataTable.outArrayPtr);
}
public void RunClassFldScenario()
{
TestLibrary.TestFramework.BeginScenario(nameof(RunClassFldScenario));
var result = Vector128.Negate(_fld1);
Unsafe.Write(_dataTable.outArrayPtr, result);
ValidateResult(_fld1, _dataTable.outArrayPtr);
}
public void RunStructLclFldScenario()
{
TestLibrary.TestFramework.BeginScenario(nameof(RunStructLclFldScenario));
var test = TestStruct.Create();
var result = Vector128.Negate(test._fld1);
Unsafe.Write(_dataTable.outArrayPtr, result);
ValidateResult(test._fld1, _dataTable.outArrayPtr);
}
public void RunStructFldScenario()
{
TestLibrary.TestFramework.BeginScenario(nameof(RunStructFldScenario));
var test = TestStruct.Create();
test.RunStructFldScenario(this);
}
private void ValidateResult(Vector128<UInt64> op1, void* result, [CallerMemberName] string method = "")
{
UInt64[] inArray1 = new UInt64[Op1ElementCount];
UInt64[] outArray = new UInt64[RetElementCount];
Unsafe.WriteUnaligned(ref Unsafe.As<UInt64, byte>(ref inArray1[0]), op1);
Unsafe.CopyBlockUnaligned(ref Unsafe.As<UInt64, byte>(ref outArray[0]), ref Unsafe.AsRef<byte>(result), (uint)Unsafe.SizeOf<Vector128<UInt64>>());
ValidateResult(inArray1, outArray, method);
}
private void ValidateResult(void* op1, void* result, [CallerMemberName] string method = "")
{
UInt64[] inArray1 = new UInt64[Op1ElementCount];
UInt64[] outArray = new UInt64[RetElementCount];
Unsafe.CopyBlockUnaligned(ref Unsafe.As<UInt64, byte>(ref inArray1[0]), ref Unsafe.AsRef<byte>(op1), (uint)Unsafe.SizeOf<Vector128<UInt64>>());
Unsafe.CopyBlockUnaligned(ref Unsafe.As<UInt64, byte>(ref outArray[0]), ref Unsafe.AsRef<byte>(result), (uint)Unsafe.SizeOf<Vector128<UInt64>>());
ValidateResult(inArray1, outArray, method);
}
private void ValidateResult(UInt64[] firstOp, UInt64[] result, [CallerMemberName] string method = "")
{
bool succeeded = true;
if (result[0] != (ulong)(0 - firstOp[0]))
{
succeeded = false;
}
else
{
for (var i = 1; i < RetElementCount; i++)
{
if (result[i] != (ulong)(0 - firstOp[i]))
{
succeeded = false;
break;
}
}
}
if (!succeeded)
{
TestLibrary.TestFramework.LogInformation($"{nameof(Vector128)}.{nameof(Vector128.Negate)}<UInt64>(Vector128<UInt64>): {method} failed:");
TestLibrary.TestFramework.LogInformation($" firstOp: ({string.Join(", ", firstOp)})");
TestLibrary.TestFramework.LogInformation($" result: ({string.Join(", ", result)})");
TestLibrary.TestFramework.LogInformation(string.Empty);
Succeeded = false;
}
}
}
}
| // Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
/******************************************************************************
* This file is auto-generated from a template file by the GenerateTests.csx *
* script in tests\src\JIT\HardwareIntrinsics\X86\Shared. In order to make *
* changes, please update the corresponding template and run according to the *
* directions listed in the file. *
******************************************************************************/
using System;
using System.Runtime.CompilerServices;
using System.Runtime.InteropServices;
using System.Runtime.Intrinsics;
namespace JIT.HardwareIntrinsics.General
{
public static partial class Program
{
private static void NegateUInt64()
{
var test = new VectorUnaryOpTest__NegateUInt64();
// Validates basic functionality works, using Unsafe.Read
test.RunBasicScenario_UnsafeRead();
// Validates calling via reflection works, using Unsafe.Read
test.RunReflectionScenario_UnsafeRead();
// Validates passing a static member works
test.RunClsVarScenario();
// Validates passing a local works, using Unsafe.Read
test.RunLclVarScenario_UnsafeRead();
// Validates passing the field of a local class works
test.RunClassLclFldScenario();
// Validates passing an instance member of a class works
test.RunClassFldScenario();
// Validates passing the field of a local struct works
test.RunStructLclFldScenario();
// Validates passing an instance member of a struct works
test.RunStructFldScenario();
if (!test.Succeeded)
{
throw new Exception("One or more scenarios did not complete as expected.");
}
}
}
public sealed unsafe class VectorUnaryOpTest__NegateUInt64
{
private struct DataTable
{
private byte[] inArray1;
private byte[] outArray;
private GCHandle inHandle1;
private GCHandle outHandle;
private ulong alignment;
public DataTable(UInt64[] inArray1, UInt64[] outArray, int alignment)
{
int sizeOfinArray1 = inArray1.Length * Unsafe.SizeOf<UInt64>();
int sizeOfoutArray = outArray.Length * Unsafe.SizeOf<UInt64>();
if ((alignment != 32 && alignment != 16 && alignment != 8) || (alignment * 2) < sizeOfinArray1 || (alignment * 2) < sizeOfoutArray)
{
throw new ArgumentException("Invalid value of alignment");
}
this.inArray1 = new byte[alignment * 2];
this.outArray = new byte[alignment * 2];
this.inHandle1 = GCHandle.Alloc(this.inArray1, GCHandleType.Pinned);
this.outHandle = GCHandle.Alloc(this.outArray, GCHandleType.Pinned);
this.alignment = (ulong)alignment;
Unsafe.CopyBlockUnaligned(ref Unsafe.AsRef<byte>(inArray1Ptr), ref Unsafe.As<UInt64, byte>(ref inArray1[0]), (uint)sizeOfinArray1);
}
public void* inArray1Ptr => Align((byte*)(inHandle1.AddrOfPinnedObject().ToPointer()), alignment);
public void* outArrayPtr => Align((byte*)(outHandle.AddrOfPinnedObject().ToPointer()), alignment);
public void Dispose()
{
inHandle1.Free();
outHandle.Free();
}
private static unsafe void* Align(byte* buffer, ulong expectedAlignment)
{
return (void*)(((ulong)buffer + expectedAlignment - 1) & ~(expectedAlignment - 1));
}
}
private struct TestStruct
{
public Vector128<UInt64> _fld1;
public static TestStruct Create()
{
var testStruct = new TestStruct();
for (var i = 0; i < Op1ElementCount; i++) { _data1[i] = TestLibrary.Generator.GetUInt64(); }
Unsafe.CopyBlockUnaligned(ref Unsafe.As<Vector128<UInt64>, byte>(ref testStruct._fld1), ref Unsafe.As<UInt64, byte>(ref _data1[0]), (uint)Unsafe.SizeOf<Vector128<UInt64>>());
return testStruct;
}
public void RunStructFldScenario(VectorUnaryOpTest__NegateUInt64 testClass)
{
var result = Vector128.Negate(_fld1);
Unsafe.Write(testClass._dataTable.outArrayPtr, result);
testClass.ValidateResult(_fld1, testClass._dataTable.outArrayPtr);
}
}
private static readonly int LargestVectorSize = 16;
private static readonly int Op1ElementCount = Unsafe.SizeOf<Vector128<UInt64>>() / sizeof(UInt64);
private static readonly int RetElementCount = Unsafe.SizeOf<Vector128<UInt64>>() / sizeof(UInt64);
private static UInt64[] _data1 = new UInt64[Op1ElementCount];
private static Vector128<UInt64> _clsVar1;
private Vector128<UInt64> _fld1;
private DataTable _dataTable;
static VectorUnaryOpTest__NegateUInt64()
{
for (var i = 0; i < Op1ElementCount; i++) { _data1[i] = TestLibrary.Generator.GetUInt64(); }
Unsafe.CopyBlockUnaligned(ref Unsafe.As<Vector128<UInt64>, byte>(ref _clsVar1), ref Unsafe.As<UInt64, byte>(ref _data1[0]), (uint)Unsafe.SizeOf<Vector128<UInt64>>());
}
public VectorUnaryOpTest__NegateUInt64()
{
Succeeded = true;
for (var i = 0; i < Op1ElementCount; i++) { _data1[i] = TestLibrary.Generator.GetUInt64(); }
Unsafe.CopyBlockUnaligned(ref Unsafe.As<Vector128<UInt64>, byte>(ref _fld1), ref Unsafe.As<UInt64, byte>(ref _data1[0]), (uint)Unsafe.SizeOf<Vector128<UInt64>>());
for (var i = 0; i < Op1ElementCount; i++) { _data1[i] = TestLibrary.Generator.GetUInt64(); }
_dataTable = new DataTable(_data1, new UInt64[RetElementCount], LargestVectorSize);
}
public bool Succeeded { get; set; }
public void RunBasicScenario_UnsafeRead()
{
TestLibrary.TestFramework.BeginScenario(nameof(RunBasicScenario_UnsafeRead));
var result = Vector128.Negate(
Unsafe.Read<Vector128<UInt64>>(_dataTable.inArray1Ptr)
);
Unsafe.Write(_dataTable.outArrayPtr, result);
ValidateResult(_dataTable.inArray1Ptr, _dataTable.outArrayPtr);
}
public void RunReflectionScenario_UnsafeRead()
{
TestLibrary.TestFramework.BeginScenario(nameof(RunReflectionScenario_UnsafeRead));
var method = typeof(Vector128).GetMethod(nameof(Vector128.Negate), new Type[] {
typeof(Vector128<UInt64>)
});
if (method is null)
{
method = typeof(Vector128).GetMethod(nameof(Vector128.Negate), 1, new Type[] {
typeof(Vector128<>).MakeGenericType(Type.MakeGenericMethodParameter(0))
});
}
if (method.IsGenericMethodDefinition)
{
method = method.MakeGenericMethod(typeof(UInt64));
}
var result = method.Invoke(null, new object[] {
Unsafe.Read<Vector128<UInt64>>(_dataTable.inArray1Ptr)
});
Unsafe.Write(_dataTable.outArrayPtr, (Vector128<UInt64>)(result));
ValidateResult(_dataTable.inArray1Ptr, _dataTable.outArrayPtr);
}
public void RunClsVarScenario()
{
TestLibrary.TestFramework.BeginScenario(nameof(RunClsVarScenario));
var result = Vector128.Negate(
_clsVar1
);
Unsafe.Write(_dataTable.outArrayPtr, result);
ValidateResult(_clsVar1, _dataTable.outArrayPtr);
}
public void RunLclVarScenario_UnsafeRead()
{
TestLibrary.TestFramework.BeginScenario(nameof(RunLclVarScenario_UnsafeRead));
var op1 = Unsafe.Read<Vector128<UInt64>>(_dataTable.inArray1Ptr);
var result = Vector128.Negate(op1);
Unsafe.Write(_dataTable.outArrayPtr, result);
ValidateResult(op1, _dataTable.outArrayPtr);
}
public void RunClassLclFldScenario()
{
TestLibrary.TestFramework.BeginScenario(nameof(RunClassLclFldScenario));
var test = new VectorUnaryOpTest__NegateUInt64();
var result = Vector128.Negate(test._fld1);
Unsafe.Write(_dataTable.outArrayPtr, result);
ValidateResult(test._fld1, _dataTable.outArrayPtr);
}
public void RunClassFldScenario()
{
TestLibrary.TestFramework.BeginScenario(nameof(RunClassFldScenario));
var result = Vector128.Negate(_fld1);
Unsafe.Write(_dataTable.outArrayPtr, result);
ValidateResult(_fld1, _dataTable.outArrayPtr);
}
public void RunStructLclFldScenario()
{
TestLibrary.TestFramework.BeginScenario(nameof(RunStructLclFldScenario));
var test = TestStruct.Create();
var result = Vector128.Negate(test._fld1);
Unsafe.Write(_dataTable.outArrayPtr, result);
ValidateResult(test._fld1, _dataTable.outArrayPtr);
}
public void RunStructFldScenario()
{
TestLibrary.TestFramework.BeginScenario(nameof(RunStructFldScenario));
var test = TestStruct.Create();
test.RunStructFldScenario(this);
}
private void ValidateResult(Vector128<UInt64> op1, void* result, [CallerMemberName] string method = "")
{
UInt64[] inArray1 = new UInt64[Op1ElementCount];
UInt64[] outArray = new UInt64[RetElementCount];
Unsafe.WriteUnaligned(ref Unsafe.As<UInt64, byte>(ref inArray1[0]), op1);
Unsafe.CopyBlockUnaligned(ref Unsafe.As<UInt64, byte>(ref outArray[0]), ref Unsafe.AsRef<byte>(result), (uint)Unsafe.SizeOf<Vector128<UInt64>>());
ValidateResult(inArray1, outArray, method);
}
private void ValidateResult(void* op1, void* result, [CallerMemberName] string method = "")
{
UInt64[] inArray1 = new UInt64[Op1ElementCount];
UInt64[] outArray = new UInt64[RetElementCount];
Unsafe.CopyBlockUnaligned(ref Unsafe.As<UInt64, byte>(ref inArray1[0]), ref Unsafe.AsRef<byte>(op1), (uint)Unsafe.SizeOf<Vector128<UInt64>>());
Unsafe.CopyBlockUnaligned(ref Unsafe.As<UInt64, byte>(ref outArray[0]), ref Unsafe.AsRef<byte>(result), (uint)Unsafe.SizeOf<Vector128<UInt64>>());
ValidateResult(inArray1, outArray, method);
}
private void ValidateResult(UInt64[] firstOp, UInt64[] result, [CallerMemberName] string method = "")
{
bool succeeded = true;
if (result[0] != (ulong)(0 - firstOp[0]))
{
succeeded = false;
}
else
{
for (var i = 1; i < RetElementCount; i++)
{
if (result[i] != (ulong)(0 - firstOp[i]))
{
succeeded = false;
break;
}
}
}
if (!succeeded)
{
TestLibrary.TestFramework.LogInformation($"{nameof(Vector128)}.{nameof(Vector128.Negate)}<UInt64>(Vector128<UInt64>): {method} failed:");
TestLibrary.TestFramework.LogInformation($" firstOp: ({string.Join(", ", firstOp)})");
TestLibrary.TestFramework.LogInformation($" result: ({string.Join(", ", result)})");
TestLibrary.TestFramework.LogInformation(string.Empty);
Succeeded = false;
}
}
}
}
| -1 |
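The validator above defines negation of an unsigned lane as (ulong)(0 - x), i.e. two's-complement wraparound rather than an arithmetic sign flip. A scalar and vector sketch of that reference semantics (sample values arbitrary):

using System;
using System.Runtime.Intrinsics;

class UnsignedNegateSketch
{
    static void Main()
    {
        ulong x = 5;
        ulong negated = unchecked(0 - x);   // wraps to 2^64 - 5
        Console.WriteLine(negated);         // 18446744073709551611

        // Vector128.Negate applies the same wraparound independently per lane.
        Vector128<ulong> v = Vector128.Create(5UL, 0UL);
        Console.WriteLine(Vector128.Negate(v)); // <18446744073709551611, 0>
    }
}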
dotnet/runtime | 66,213 | [mono] Put WeakAttribute support under an ifdef | This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension).
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc. are still there. It seemed unnecessary to perturb the AOT format. | lambdageek | 2022-03-04T19:50:51Z | 2022-03-08T14:08:40Z | 34880aaeb463cc17268b44aecb80dda0914cfa01 | 8342a08b8bf0ff2ffb64cd5992e2d60c8d6d63c1 | [mono] Put WeakAttribute support under an ifdef. This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension).
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc. are still there. It seemed unnecessary to perturb the AOT format. | ./src/tests/JIT/Methodical/eh/finallyexec/tryCatchFinallyThrow_nonlocalexit1_do.csproj | <Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<OutputType>Exe</OutputType>
<CLRTestPriority>1</CLRTestPriority>
</PropertyGroup>
<PropertyGroup>
<DebugType>Full</DebugType>
<Optimize>True</Optimize>
</PropertyGroup>
<ItemGroup>
<Compile Include="tryCatchFinallyThrow_nonlocalexit1.cs" />
</ItemGroup>
<ItemGroup>
<ProjectReference Include="..\..\..\common\eh_common.csproj" />
</ItemGroup>
</Project>
| <Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<OutputType>Exe</OutputType>
<CLRTestPriority>1</CLRTestPriority>
</PropertyGroup>
<PropertyGroup>
<DebugType>Full</DebugType>
<Optimize>True</Optimize>
</PropertyGroup>
<ItemGroup>
<Compile Include="tryCatchFinallyThrow_nonlocalexit1.cs" />
</ItemGroup>
<ItemGroup>
<ProjectReference Include="..\..\..\common\eh_common.csproj" />
</ItemGroup>
</Project>
| -1 |
dotnet/runtime | 66,213 | [mono] Put WeakAttribute support under an ifdef | This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension).
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc. are still there. It seemed unnecessary to perturb the AOT format. | lambdageek | 2022-03-04T19:50:51Z | 2022-03-08T14:08:40Z | 34880aaeb463cc17268b44aecb80dda0914cfa01 | 8342a08b8bf0ff2ffb64cd5992e2d60c8d6d63c1 | [mono] Put WeakAttribute support under an ifdef. This code has been effectively dead in .NET 6 on all platforms since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension).
Not everything is removed:
- `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise the AOT table definitions, etc. are still there. It seemed unnecessary to perturb the AOT format. | ./src/libraries/System.IO.Pipes/src/System.IO.Pipes.csproj | <Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<AllowUnsafeBlocks>true</AllowUnsafeBlocks>
<OmitTransitiveCompileReferences>true</OmitTransitiveCompileReferences>
<TargetFrameworks>$(NetCoreAppCurrent)-windows;$(NetCoreAppCurrent)-Unix;$(NetCoreAppCurrent)</TargetFrameworks>
<Nullable>enable</Nullable>
</PropertyGroup>
<!-- DesignTimeBuild requires all the TargetFramework Derived Properties to not be present in the first property group. -->
<PropertyGroup>
<TargetPlatformIdentifier>$([MSBuild]::GetTargetPlatformIdentifier('$(TargetFramework)'))</TargetPlatformIdentifier>
<GeneratePlatformNotSupportedAssemblyMessage Condition="'$(TargetPlatformIdentifier)' == ''">SR.Pipes_PlatformNotSupported</GeneratePlatformNotSupportedAssemblyMessage>
</PropertyGroup>
<!-- Compiled Source Files -->
<ItemGroup Condition="'$(TargetPlatformIdentifier)' != ''">
<Compile Include="Microsoft\Win32\SafeHandles\SafePipeHandle.cs" />
<Compile Include="System\IO\Error.cs" />
<Compile Include="System\IO\Pipes\AnonymousPipeClientStream.cs" />
<Compile Include="System\IO\Pipes\AnonymousPipeServerStream.cs" />
<Compile Include="System\IO\Pipes\NamedPipeClientStream.cs" />
<Compile Include="System\IO\Pipes\NamedPipeServerStream.cs" />
<Compile Include="System\IO\Pipes\PipeDirection.cs" />
<Compile Include="System\IO\Pipes\PipeOptions.cs" />
<Compile Include="System\IO\Pipes\PipeState.cs" />
<Compile Include="System\IO\Pipes\PipeStream.cs" />
<Compile Include="System\IO\Pipes\PipeTransmissionMode.cs" />
<Compile Include="$(CommonPath)DisableRuntimeMarshalling.cs"
Link="Common\DisableRuntimeMarshalling.cs" />
<Compile Include="$(CommonPath)System\Threading\Tasks\TaskToApm.cs"
Link="Common\System\Threading\Tasks\TaskToApm.cs" />
</ItemGroup>
<ItemGroup Condition="'$(TargetPlatformIdentifier)' == 'windows'">
<Compile Include="$(CommonPath)Interop\Windows\Interop.Libraries.cs"
Link="Common\Interop\Windows\Interop.Libraries.cs" />
<Compile Include="$(CommonPath)Interop\Windows\Kernel32\Interop.CloseHandle.cs"
Link="Common\Interop\Windows\Interop.CloseHandle.cs" />
<Compile Include="$(CommonPath)Interop\Windows\Interop.Errors.cs"
Link="Common\Interop\Windows\Interop.Errors.cs" />
<Compile Include="$(CommonPath)Interop\Windows\Kernel32\Interop.FormatMessage.cs"
Link="Common\Interop\Windows\Interop.FormatMessage.cs" />
<Compile Include="$(CommonPath)Interop\Windows\Kernel32\Interop.GenericOperations.cs"
Link="Interop\Windows\Interop.GenericOperations.cs" />
<Compile Include="$(CommonPath)Interop\Windows\Kernel32\Interop.SecurityOptions.cs"
Link="Common\CoreLib\Interop\Windows\Interop.SecurityOptions.cs" />
<Compile Include="$(CommonPath)Interop\Windows\Interop.BOOL.cs"
Link="Common\CoreLib\Interop\Windows\Interop.BOOL.cs" />
<Compile Include="$(CommonPath)Interop\Windows\Kernel32\Interop.SECURITY_ATTRIBUTES.cs"
Link="Common\CoreLib\Interop\Windows\Interop.SECURITY_ATTRIBUTES.cs" />
<Compile Include="$(CommonPath)Interop\Windows\Kernel32\Interop.HandleOptions.cs"
Link="Common\Interop\Windows\Interop.HandleOptions.cs" />
<Compile Include="$(CommonPath)Interop\Windows\Kernel32\Interop.PipeOptions.cs"
Link="Common\Interop\Windows\Interop.PipeOptions.cs" />
<Compile Include="$(CommonPath)Interop\Windows\Kernel32\Interop.FileOperations.cs"
Link="Common\Interop\Windows\Interop.FileOperations.cs" />
<Compile Include="$(CommonPath)Interop\Windows\Kernel32\Interop.FileTypes.cs"
Link="Common\Interop\Windows\Interop.FileTypes.cs" />
<Compile Include="$(CommonPath)Interop\Windows\Kernel32\Interop.GetCurrentProcess.cs"
Link="Common\Interop\Windows\Interop.GetCurrentProcess.cs" />
<Compile Include="$(CommonPath)Interop\Windows\Kernel32\Interop.DuplicateHandle_SafePipeHandle.cs"
Link="Common\Interop\Windows\Interop.DuplicateHandle_SafePipeHandle.cs" />
<Compile Include="$(CommonPath)Interop\Windows\Kernel32\Interop.GetFileType_SafeHandle.cs"
Link="Common\Interop\Windows\Interop.GetFileType.cs" />
<Compile Include="$(CommonPath)Interop\Windows\Kernel32\Interop.CreatePipe_SafePipeHandle.cs"
Link="Common\Interop\Windows\Interop.CreatePipe_SafePipeHandle.cs" />
<Compile Include="$(CommonPath)Interop\Windows\Kernel32\Interop.ConnectNamedPipe.cs"
Link="Common\Interop\Windows\Interop.ConnectNamedPipe.cs" />
<Compile Include="$(CommonPath)Interop\Windows\Kernel32\Interop.WaitNamedPipe.cs"
Link="Common\Interop\Windows\Interop.WaitNamedPipe.cs" />
<Compile Include="$(CommonPath)Interop\Windows\Kernel32\Interop.GetNamedPipeHandleState.cs"
Link="Common\Interop\Windows\Interop.GetNamedPipeHandleState.cs" />
<Compile Include="$(CommonPath)Interop\Windows\Kernel32\Interop.GetNamedPipeInfo.cs"
Link="Common\Interop\Windows\Interop.GetNamedPipeInfo.cs" />
<Compile Include="$(CommonPath)Interop\Windows\Kernel32\Interop.SetNamedPipeHandleState.cs"
Link="Common\Interop\Windows\Interop.SetNamedPipeHandleState.cs" />
<Compile Include="$(CommonPath)Interop\Windows\Kernel32\Interop.CancelIoEx.cs"
Link="Common\Interop\Windows\Interop.CancelIoEx.cs" />
<Compile Include="$(CommonPath)Interop\Windows\Kernel32\Interop.FlushFileBuffers.cs"
Link="Common\Interop\Windows\Interop.FlushFileBuffers.cs" />
<Compile Include="$(CommonPath)Interop\Windows\Kernel32\Interop.ReadFile_SafeHandle_IntPtr.cs"
Link="Common\Interop\Windows\Interop.ReadFile_IntPtr.cs" />
<Compile Include="$(CommonPath)Interop\Windows\Kernel32\Interop.ReadFile_SafeHandle_NativeOverlapped.cs"
Link="Common\Interop\Windows\Interop.ReadFile_NativeOverlapped.cs" />
<Compile Include="$(CommonPath)Interop\Windows\Kernel32\Interop.WriteFile_SafeHandle_IntPtr.cs"
Link="Common\Interop\Windows\Interop.WriteFile_IntPtr.cs" />
<Compile Include="$(CommonPath)Interop\Windows\Kernel32\Interop.WriteFile_SafeHandle_NativeOverlapped.cs"
Link="Common\Interop\Windows\Interop.WriteFile_NativeOverlapped.cs" />
<Compile Include="$(CommonPath)Interop\Windows\Kernel32\Interop.DisconnectNamedPipe.cs"
Link="Common\Interop\Windows\Interop.DisconnectNamedPipe.cs" />
<Compile Include="$(CommonPath)Interop\Windows\Kernel32\Interop.CreateNamedPipe.cs"
Link="Common\Interop\Windows\Interop.CreateNamedPipe.cs" />
<Compile Include="$(CommonPath)Interop\Windows\Kernel32\Interop.MaxLengths.cs"
Link="Common\Interop\Windows\Interop.MaxLengths.cs" />
<Compile Include="$(CommonPath)Interop\Windows\Advapi32\Interop.RevertToSelf.cs"
Link="Common\Interop\Windows\Interop.RevertToSelf.cs" />
<Compile Include="$(CommonPath)Interop\Windows\Advapi32\Interop.ImpersonateNamedPipeClient.cs"
Link="Common\Interop\Windows\Interop.ImpersonateNamedPipeClient.cs" />
<Compile Include="$(CommonPath)System\IO\Win32Marshal.cs"
Link="Common\CoreLib\System\IO\Win32Marshal.cs" />
<Compile Include="Microsoft\Win32\SafeHandles\SafePipeHandle.Windows.cs" />
<Compile Include="System\IO\Pipes\AnonymousPipeServerStreamAcl.cs" />
<Compile Include="System\IO\Pipes\AnonymousPipeServerStream.Windows.cs" />
<Compile Include="System\IO\Pipes\NamedPipeServerStreamAcl.cs" />
<Compile Include="System\IO\Pipes\NamedPipeClientStream.Windows.cs" />
<Compile Include="System\IO\Pipes\NamedPipeServerStream.Windows.cs" />
<Compile Include="System\IO\Pipes\PipeAccessRights.cs" />
<Compile Include="System\IO\Pipes\PipeAccessRule.cs" />
<Compile Include="System\IO\Pipes\PipeAuditRule.cs" />
<Compile Include="System\IO\Pipes\PipesAclExtensions.cs" />
<Compile Include="System\IO\Pipes\PipeSecurity.cs" />
<Compile Include="System\IO\Pipes\PipeStream.ValueTaskSource.cs" />
<Compile Include="System\IO\Pipes\PipeStream.Windows.cs" />
</ItemGroup>
<!-- Windows : Win32 only -->
<ItemGroup Condition="'$(TargetPlatformIdentifier)' == 'windows'">
<Compile Include="$(CommonPath)Interop\Windows\Kernel32\Interop.CreateNamedPipeClient.cs"
Link="Common\Interop\Windows\Interop.CreateNamedPipeClient.cs" />
<Compile Include="$(CommonPath)Interop\Windows\Kernel32\Interop.LoadLibraryEx_IntPtr.cs"
Link="Common\Interop\Windows\Interop.LoadLibraryEx_IntPtr.cs" />
<Compile Include="System\IO\Pipes\NamedPipeServerStream.Win32.cs" />
</ItemGroup>
<ItemGroup Condition="'$(TargetPlatformIdentifier)' == 'Unix'">
<Compile Include="Microsoft\Win32\SafeHandles\SafePipeHandle.Unix.cs" />
<Compile Include="System\IO\Pipes\AnonymousPipeServerStream.Unix.cs" />
<Compile Include="System\IO\Pipes\NamedPipeClientStream.Unix.cs" />
<Compile Include="System\IO\Pipes\NamedPipeServerStream.Unix.cs" />
<Compile Include="System\IO\Pipes\PipeStream.Unix.cs" />
<Compile Include="$(CommonPath)Interop\Unix\Interop.Libraries.cs"
Link="Common\Interop\Unix\Interop.Libraries.cs" />
<Compile Include="$(CommonPath)Interop\Unix\Interop.Errors.cs"
Link="Common\CoreLib\Interop\Unix\Interop.Errors.cs" />
<Compile Include="$(CommonPath)Interop\Unix\Interop.IOErrors.cs"
Link="Common\CoreLib\Interop\Unix\Interop.IOErrors.cs" />
<Compile Include="$(CommonPath)Interop\Unix\System.Native\Interop.Close.cs"
Link="Common\Interop\Unix\Interop.Close.cs" />
<Compile Include="$(CommonPath)Interop\Unix\System.Native\Interop.Fcntl.Pipe.cs"
Link="Common\Interop\Unix\Interop.Fcntl.Pipe.cs" />
<Compile Include="$(CommonPath)Interop\Unix\System.Native\Interop.Fcntl.cs"
Link="Common\Interop\Unix\Interop.Fcntl.cs" />
<Compile Include="$(CommonPath)Interop\Unix\System.Native\Interop.FLock.cs"
Link="Common\Interop\Unix\Interop.FLock.cs" />
<Compile Include="$(CommonPath)Interop\Unix\System.Native\Interop.GetHostName.cs"
Link="Common\Interop\Unix\Interop.GetHostName.cs" />
<Compile Include="$(CommonPath)Interop\Unix\System.Native\Interop.GetPeerUserName.cs"
Link="Common\Interop\Unix\Interop.GetPeerUserName.cs" />
<Compile Include="$(CommonPath)Interop\Unix\System.Native\Interop.Open.cs"
Link="Common\Interop\Unix\Interop.Open.cs" />
<Compile Include="$(CommonPath)Interop\Unix\System.Native\Interop.OpenFlags.cs"
Link="Common\Interop\Unix\Interop.OpenFlags.cs" />
<Compile Include="$(CommonPath)Interop\Unix\System.Native\Interop.Permissions.cs"
Link="Common\Interop\Unix\Interop.Permissions.cs" />
<Compile Include="$(CommonPath)Interop\Unix\System.Native\Interop.Pipe.cs"
Link="Common\Interop\Unix\Interop.Pipe.cs" />
<Compile Include="$(CommonPath)Interop\Unix\System.Native\Interop.Read.Pipe.cs"
Link="Common\Interop\Unix\Interop.Read.Pipe.cs" />
<Compile Include="$(CommonPath)Interop\Unix\System.Native\Interop.Unlink.cs"
Link="Common\Interop\Unix\Interop.Unlink.cs" />
<Compile Include="$(CommonPath)Interop\Unix\System.Native\Interop.Write.Pipe.cs"
Link="Common\Interop\Unix\Interop.Write.Pipe.cs" />
<Compile Include="$(CommonPath)Interop\Unix\System.Native\Interop.Stat.cs"
Link="Common\Interop\Unix\Interop.Stat.cs" />
<Compile Include="$(CommonPath)Interop\Unix\System.Native\Interop.Stat.Pipe.cs"
Link="Common\Interop\Unix\Interop.Stat.Pipe.cs" />
<Compile Include="$(CommonPath)Interop\Unix\System.Native\Interop.GetPeerID.cs"
Link="Common\Interop\Unix\Interop.GetPeerID.cs" />
<Compile Include="$(CommonPath)Interop\Unix\System.Native\Interop.GetEUid.cs"
Link="Common\Interop\Unix\Interop.GetEUid.cs" />
<Compile Include="$(CommonPath)Interop\Unix\System.Native\Interop.SetEUid.cs"
Link="Common\Interop\Unix\Interop.SetEUid.cs" />
</ItemGroup>
<ItemGroup>
<Reference Include="System.Collections" />
<Reference Include="System.Memory" />
<Reference Include="System.Runtime" />
<Reference Include="System.Runtime.Extensions" />
<Reference Include="System.Runtime.CompilerServices.Unsafe" />
<Reference Include="System.Runtime.InteropServices" />
<Reference Include="System.Security.AccessControl" />
<Reference Include="System.Security.Principal" />
<Reference Include="System.Security.Principal.Windows" />
<Reference Include="System.Threading" />
<Reference Include="System.Threading.Overlapped" />
</ItemGroup>
<ItemGroup Condition="'$(TargetPlatformIdentifier)' == 'windows'">
<Reference Include="System.Collections.NonGeneric" />
<Reference Include="System.Security.Claims" />
</ItemGroup>
<ItemGroup Condition="'$(TargetPlatformIdentifier)' == 'Unix'">
<Reference Include="Microsoft.Win32.Primitives" />
<Reference Include="System.IO.FileSystem" />
<Reference Include="System.Net.Primitives" />
<Reference Include="System.Net.Sockets" />
</ItemGroup>
</Project>
| <Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<AllowUnsafeBlocks>true</AllowUnsafeBlocks>
<OmitTransitiveCompileReferences>true</OmitTransitiveCompileReferences>
<TargetFrameworks>$(NetCoreAppCurrent)-windows;$(NetCoreAppCurrent)-Unix;$(NetCoreAppCurrent)</TargetFrameworks>
<Nullable>enable</Nullable>
</PropertyGroup>
<!-- DesignTimeBuild requires all the TargetFramework Derived Properties to not be present in the first property group. -->
<PropertyGroup>
<TargetPlatformIdentifier>$([MSBuild]::GetTargetPlatformIdentifier('$(TargetFramework)'))</TargetPlatformIdentifier>
<GeneratePlatformNotSupportedAssemblyMessage Condition="'$(TargetPlatformIdentifier)' == ''">SR.Pipes_PlatformNotSupported</GeneratePlatformNotSupportedAssemblyMessage>
</PropertyGroup>
<!-- Compiled Source Files -->
<ItemGroup Condition="'$(TargetPlatformIdentifier)' != ''">
<Compile Include="Microsoft\Win32\SafeHandles\SafePipeHandle.cs" />
<Compile Include="System\IO\Error.cs" />
<Compile Include="System\IO\Pipes\AnonymousPipeClientStream.cs" />
<Compile Include="System\IO\Pipes\AnonymousPipeServerStream.cs" />
<Compile Include="System\IO\Pipes\NamedPipeClientStream.cs" />
<Compile Include="System\IO\Pipes\NamedPipeServerStream.cs" />
<Compile Include="System\IO\Pipes\PipeDirection.cs" />
<Compile Include="System\IO\Pipes\PipeOptions.cs" />
<Compile Include="System\IO\Pipes\PipeState.cs" />
<Compile Include="System\IO\Pipes\PipeStream.cs" />
<Compile Include="System\IO\Pipes\PipeTransmissionMode.cs" />
<Compile Include="$(CommonPath)DisableRuntimeMarshalling.cs"
Link="Common\DisableRuntimeMarshalling.cs" />
<Compile Include="$(CommonPath)System\Threading\Tasks\TaskToApm.cs"
Link="Common\System\Threading\Tasks\TaskToApm.cs" />
</ItemGroup>
<ItemGroup Condition="'$(TargetPlatformIdentifier)' == 'windows'">
<Compile Include="$(CommonPath)Interop\Windows\Interop.Libraries.cs"
Link="Common\Interop\Windows\Interop.Libraries.cs" />
<Compile Include="$(CommonPath)Interop\Windows\Kernel32\Interop.CloseHandle.cs"
Link="Common\Interop\Windows\Interop.CloseHandle.cs" />
<Compile Include="$(CommonPath)Interop\Windows\Interop.Errors.cs"
Link="Common\Interop\Windows\Interop.Errors.cs" />
<Compile Include="$(CommonPath)Interop\Windows\Kernel32\Interop.FormatMessage.cs"
Link="Common\Interop\Windows\Interop.FormatMessage.cs" />
<Compile Include="$(CommonPath)Interop\Windows\Kernel32\Interop.GenericOperations.cs"
Link="Interop\Windows\Interop.GenericOperations.cs" />
<Compile Include="$(CommonPath)Interop\Windows\Kernel32\Interop.SecurityOptions.cs"
Link="Common\CoreLib\Interop\Windows\Interop.SecurityOptions.cs" />
<Compile Include="$(CommonPath)Interop\Windows\Interop.BOOL.cs"
Link="Common\CoreLib\Interop\Windows\Interop.BOOL.cs" />
<Compile Include="$(CommonPath)Interop\Windows\Kernel32\Interop.SECURITY_ATTRIBUTES.cs"
Link="Common\CoreLib\Interop\Windows\Interop.SECURITY_ATTRIBUTES.cs" />
<Compile Include="$(CommonPath)Interop\Windows\Kernel32\Interop.HandleOptions.cs"
Link="Common\Interop\Windows\Interop.HandleOptions.cs" />
<Compile Include="$(CommonPath)Interop\Windows\Kernel32\Interop.PipeOptions.cs"
Link="Common\Interop\Windows\Interop.PipeOptions.cs" />
<Compile Include="$(CommonPath)Interop\Windows\Kernel32\Interop.FileOperations.cs"
Link="Common\Interop\Windows\Interop.FileOperations.cs" />
<Compile Include="$(CommonPath)Interop\Windows\Kernel32\Interop.FileTypes.cs"
Link="Common\Interop\Windows\Interop.FileTypes.cs" />
<Compile Include="$(CommonPath)Interop\Windows\Kernel32\Interop.GetCurrentProcess.cs"
Link="Common\Interop\Windows\Interop.GetCurrentProcess.cs" />
<Compile Include="$(CommonPath)Interop\Windows\Kernel32\Interop.DuplicateHandle_SafePipeHandle.cs"
Link="Common\Interop\Windows\Interop.DuplicateHandle_SafePipeHandle.cs" />
<Compile Include="$(CommonPath)Interop\Windows\Kernel32\Interop.GetFileType_SafeHandle.cs"
Link="Common\Interop\Windows\Interop.GetFileType.cs" />
<Compile Include="$(CommonPath)Interop\Windows\Kernel32\Interop.CreatePipe_SafePipeHandle.cs"
Link="Common\Interop\Windows\Interop.CreatePipe_SafePipeHandle.cs" />
<Compile Include="$(CommonPath)Interop\Windows\Kernel32\Interop.ConnectNamedPipe.cs"
Link="Common\Interop\Windows\Interop.ConnectNamedPipe.cs" />
<Compile Include="$(CommonPath)Interop\Windows\Kernel32\Interop.WaitNamedPipe.cs"
Link="Common\Interop\Windows\Interop.WaitNamedPipe.cs" />
<Compile Include="$(CommonPath)Interop\Windows\Kernel32\Interop.GetNamedPipeHandleState.cs"
Link="Common\Interop\Windows\Interop.GetNamedPipeHandleState.cs" />
<Compile Include="$(CommonPath)Interop\Windows\Kernel32\Interop.GetNamedPipeInfo.cs"
Link="Common\Interop\Windows\Interop.GetNamedPipeInfo.cs" />
<Compile Include="$(CommonPath)Interop\Windows\Kernel32\Interop.SetNamedPipeHandleState.cs"
Link="Common\Interop\Windows\Interop.SetNamedPipeHandleState.cs" />
<Compile Include="$(CommonPath)Interop\Windows\Kernel32\Interop.CancelIoEx.cs"
Link="Common\Interop\Windows\Interop.CancelIoEx.cs" />
<Compile Include="$(CommonPath)Interop\Windows\Kernel32\Interop.FlushFileBuffers.cs"
Link="Common\Interop\Windows\Interop.FlushFileBuffers.cs" />
<Compile Include="$(CommonPath)Interop\Windows\Kernel32\Interop.ReadFile_SafeHandle_IntPtr.cs"
Link="Common\Interop\Windows\Interop.ReadFile_IntPtr.cs" />
<Compile Include="$(CommonPath)Interop\Windows\Kernel32\Interop.ReadFile_SafeHandle_NativeOverlapped.cs"
Link="Common\Interop\Windows\Interop.ReadFile_NativeOverlapped.cs" />
<Compile Include="$(CommonPath)Interop\Windows\Kernel32\Interop.WriteFile_SafeHandle_IntPtr.cs"
Link="Common\Interop\Windows\Interop.WriteFile_IntPtr.cs" />
<Compile Include="$(CommonPath)Interop\Windows\Kernel32\Interop.WriteFile_SafeHandle_NativeOverlapped.cs"
Link="Common\Interop\Windows\Interop.WriteFile_NativeOverlapped.cs" />
<Compile Include="$(CommonPath)Interop\Windows\Kernel32\Interop.DisconnectNamedPipe.cs"
Link="Common\Interop\Windows\Interop.DisconnectNamedPipe.cs" />
<Compile Include="$(CommonPath)Interop\Windows\Kernel32\Interop.CreateNamedPipe.cs"
Link="Common\Interop\Windows\Interop.CreateNamedPipe.cs" />
<Compile Include="$(CommonPath)Interop\Windows\Kernel32\Interop.MaxLengths.cs"
Link="Common\Interop\Windows\Interop.MaxLengths.cs" />
<Compile Include="$(CommonPath)Interop\Windows\Advapi32\Interop.RevertToSelf.cs"
Link="Common\Interop\Windows\Interop.RevertToSelf.cs" />
<Compile Include="$(CommonPath)Interop\Windows\Advapi32\Interop.ImpersonateNamedPipeClient.cs"
Link="Common\Interop\Windows\Interop.ImpersonateNamedPipeClient.cs" />
<Compile Include="$(CommonPath)System\IO\Win32Marshal.cs"
Link="Common\CoreLib\System\IO\Win32Marshal.cs" />
<Compile Include="Microsoft\Win32\SafeHandles\SafePipeHandle.Windows.cs" />
<Compile Include="System\IO\Pipes\AnonymousPipeServerStreamAcl.cs" />
<Compile Include="System\IO\Pipes\AnonymousPipeServerStream.Windows.cs" />
<Compile Include="System\IO\Pipes\NamedPipeServerStreamAcl.cs" />
<Compile Include="System\IO\Pipes\NamedPipeClientStream.Windows.cs" />
<Compile Include="System\IO\Pipes\NamedPipeServerStream.Windows.cs" />
<Compile Include="System\IO\Pipes\PipeAccessRights.cs" />
<Compile Include="System\IO\Pipes\PipeAccessRule.cs" />
<Compile Include="System\IO\Pipes\PipeAuditRule.cs" />
<Compile Include="System\IO\Pipes\PipesAclExtensions.cs" />
<Compile Include="System\IO\Pipes\PipeSecurity.cs" />
<Compile Include="System\IO\Pipes\PipeStream.ValueTaskSource.cs" />
<Compile Include="System\IO\Pipes\PipeStream.Windows.cs" />
</ItemGroup>
<!-- Windows : Win32 only -->
<ItemGroup Condition="'$(TargetPlatformIdentifier)' == 'windows'">
<Compile Include="$(CommonPath)Interop\Windows\Kernel32\Interop.CreateNamedPipeClient.cs"
Link="Common\Interop\Windows\Interop.CreateNamedPipeClient.cs" />
<Compile Include="$(CommonPath)Interop\Windows\Kernel32\Interop.LoadLibraryEx_IntPtr.cs"
Link="Common\Interop\Windows\Interop.LoadLibraryEx_IntPtr.cs" />
<Compile Include="System\IO\Pipes\NamedPipeServerStream.Win32.cs" />
</ItemGroup>
<ItemGroup Condition="'$(TargetPlatformIdentifier)' == 'Unix'">
<Compile Include="Microsoft\Win32\SafeHandles\SafePipeHandle.Unix.cs" />
<Compile Include="System\IO\Pipes\AnonymousPipeServerStream.Unix.cs" />
<Compile Include="System\IO\Pipes\NamedPipeClientStream.Unix.cs" />
<Compile Include="System\IO\Pipes\NamedPipeServerStream.Unix.cs" />
<Compile Include="System\IO\Pipes\PipeStream.Unix.cs" />
<Compile Include="$(CommonPath)Interop\Unix\Interop.Libraries.cs"
Link="Common\Interop\Unix\Interop.Libraries.cs" />
<Compile Include="$(CommonPath)Interop\Unix\Interop.Errors.cs"
Link="Common\CoreLib\Interop\Unix\Interop.Errors.cs" />
<Compile Include="$(CommonPath)Interop\Unix\Interop.IOErrors.cs"
Link="Common\CoreLib\Interop\Unix\Interop.IOErrors.cs" />
<Compile Include="$(CommonPath)Interop\Unix\System.Native\Interop.Close.cs"
Link="Common\Interop\Unix\Interop.Close.cs" />
<Compile Include="$(CommonPath)Interop\Unix\System.Native\Interop.Fcntl.Pipe.cs"
Link="Common\Interop\Unix\Interop.Fcntl.Pipe.cs" />
<Compile Include="$(CommonPath)Interop\Unix\System.Native\Interop.Fcntl.cs"
Link="Common\Interop\Unix\Interop.Fcntl.cs" />
<Compile Include="$(CommonPath)Interop\Unix\System.Native\Interop.FLock.cs"
Link="Common\Interop\Unix\Interop.FLock.cs" />
<Compile Include="$(CommonPath)Interop\Unix\System.Native\Interop.GetHostName.cs"
Link="Common\Interop\Unix\Interop.GetHostName.cs" />
<Compile Include="$(CommonPath)Interop\Unix\System.Native\Interop.GetPeerUserName.cs"
Link="Common\Interop\Unix\Interop.GetPeerUserName.cs" />
<Compile Include="$(CommonPath)Interop\Unix\System.Native\Interop.Open.cs"
Link="Common\Interop\Unix\Interop.Open.cs" />
<Compile Include="$(CommonPath)Interop\Unix\System.Native\Interop.OpenFlags.cs"
Link="Common\Interop\Unix\Interop.OpenFlags.cs" />
<Compile Include="$(CommonPath)Interop\Unix\System.Native\Interop.Permissions.cs"
Link="Common\Interop\Unix\Interop.Permissions.cs" />
<Compile Include="$(CommonPath)Interop\Unix\System.Native\Interop.Pipe.cs"
Link="Common\Interop\Unix\Interop.Pipe.cs" />
<Compile Include="$(CommonPath)Interop\Unix\System.Native\Interop.Read.Pipe.cs"
Link="Common\Interop\Unix\Interop.Read.Pipe.cs" />
<Compile Include="$(CommonPath)Interop\Unix\System.Native\Interop.Unlink.cs"
Link="Common\Interop\Unix\Interop.Unlink.cs" />
<Compile Include="$(CommonPath)Interop\Unix\System.Native\Interop.Write.Pipe.cs"
Link="Common\Interop\Unix\Interop.Write.Pipe.cs" />
<Compile Include="$(CommonPath)Interop\Unix\System.Native\Interop.Stat.cs"
Link="Common\Interop\Unix\Interop.Stat.cs" />
<Compile Include="$(CommonPath)Interop\Unix\System.Native\Interop.Stat.Pipe.cs"
Link="Common\Interop\Unix\Interop.Stat.Pipe.cs" />
<Compile Include="$(CommonPath)Interop\Unix\System.Native\Interop.GetPeerID.cs"
Link="Common\Interop\Unix\Interop.GetPeerID.cs" />
<Compile Include="$(CommonPath)Interop\Unix\System.Native\Interop.GetEUid.cs"
Link="Common\Interop\Unix\Interop.GetEUid.cs" />
<Compile Include="$(CommonPath)Interop\Unix\System.Native\Interop.SetEUid.cs"
Link="Common\Interop\Unix\Interop.SetEUid.cs" />
</ItemGroup>
<ItemGroup>
<Reference Include="System.Collections" />
<Reference Include="System.Memory" />
<Reference Include="System.Runtime" />
<Reference Include="System.Runtime.Extensions" />
<Reference Include="System.Runtime.CompilerServices.Unsafe" />
<Reference Include="System.Runtime.InteropServices" />
<Reference Include="System.Security.AccessControl" />
<Reference Include="System.Security.Principal" />
<Reference Include="System.Security.Principal.Windows" />
<Reference Include="System.Threading" />
<Reference Include="System.Threading.Overlapped" />
</ItemGroup>
<ItemGroup Condition="'$(TargetPlatformIdentifier)' == 'windows'">
<Reference Include="System.Collections.NonGeneric" />
<Reference Include="System.Security.Claims" />
</ItemGroup>
<ItemGroup Condition="'$(TargetPlatformIdentifier)' == 'Unix'">
<Reference Include="Microsoft.Win32.Primitives" />
<Reference Include="System.IO.FileSystem" />
<Reference Include="System.Net.Primitives" />
<Reference Include="System.Net.Sockets" />
</ItemGroup>
</Project>
| -1 |
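The same description also notes the reader side: the AOT compiler keeps emitting a size-0 weak_field_indexes symbol, and only the runtime code that would read it sits under the ifdef. The C sketch below models that shape under stated assumptions; the table layout, the guard macro, and the function name are illustrative, not Mono's real AOT format.

#include <stdio.h>

/* Stand-in for a symbol the compiler always emits; a leading count of 0
   models the "size 0" table mentioned in the description. */
static const unsigned int weak_field_indexes[] = { 0 /* count */ };

void demo_load_weak_field_table (void)
{
#ifdef ENABLE_WEAK_ATTRIBUTE /* hypothetical guard; the reader is compiled out otherwise */
    unsigned int count = weak_field_indexes[0];
    for (unsigned int i = 0; i < count; ++i)
        printf ("weak field index: %u\n", weak_field_indexes[1 + i]);
#endif
    (void) weak_field_indexes; /* silences unused warnings when the guard is off */
}

Emitting the symbol unconditionally while guarding only the reader is what makes it unnecessary to perturb the AOT format: images produced either way keep the same set of symbols.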
dotnet/runtime | 66,213 | [mono] Put WeakAttribute support under an ifdef | This code has been effectively dead in .NET 6 on all platforms, since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension).
Not everything is removed:
- the `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise, the AOT table definitions, etc. are still there. It seemed unnecessary to perturb the AOT format. | lambdageek | 2022-03-04T19:50:51Z | 2022-03-08T14:08:40Z | 34880aaeb463cc17268b44aecb80dda0914cfa01 | 8342a08b8bf0ff2ffb64cd5992e2d60c8d6d63c1 | [mono] Put WeakAttribute support under an ifdef. This code has been effectively dead in .NET 6 on all platforms, since `System.WeakAttribute` is not in CoreLib (it was a pre-.NET 5 mono-specific extension).
Not everything is removed:
- the `MonoClass:has_weak_fields` bit is still present (and always 0).
- the AOT compiler still emits a (size 0) `weak_field_indexes` symbol (but the AOT runtime reader code is under an ifdef). Likewise, the AOT table definitions, etc. are still there. It seemed unnecessary to perturb the AOT format. | ./src/libraries/System.Runtime.InteropServices/src/System/Runtime/InteropServices/ComAliasNameAttribute.cs | // Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
namespace System.Runtime.InteropServices
{
[AttributeUsage(AttributeTargets.Parameter | AttributeTargets.Field | AttributeTargets.Property | AttributeTargets.ReturnValue, Inherited = false)]
public sealed class ComAliasNameAttribute : Attribute
{
public ComAliasNameAttribute(string alias) => Value = alias;
public string Value { get; }
}
}
| // Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
namespace System.Runtime.InteropServices
{
[AttributeUsage(AttributeTargets.Parameter | AttributeTargets.Field | AttributeTargets.Property | AttributeTargets.ReturnValue, Inherited = false)]
public sealed class ComAliasNameAttribute : Attribute
{
public ComAliasNameAttribute(string alias) => Value = alias;
public string Value { get; }
}
}
| -1 |
dotnet/runtime | 66,211 | [mono] Remove SkipVerification support from the runtime | CAS support was removed in .NET Core. This allows us to remove a bunch of unused code, e.g. the dependency on libiconv. | akoeplinger | 2022-03-04T19:47:04Z | 2022-03-06T13:44:33Z | b463b1630dbf1be5b013208a9fa73e1ecd6c774c | be629f49a350d526de2c65981294734cee420b90 | [mono] Remove SkipVerification support from the runtime. CAS support was removed in .NET Core. This allows us to remove a bunch of unused code, e.g. the dependency on libiconv. | ./src/mono/cmake/config.h.in | #ifndef __MONO_CONFIG_H__
#define __MONO_CONFIG_H__
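/* Illustrative note (not part of the upstream template): this file is a CMake
 * template, not a finished header. At configure time, configure_file()
 * rewrites each "#cmakedefine FOO 1" line below to "#define FOO 1" when the
 * corresponding CMake variable is set, and to a commented-out undef when it
 * is not. Runtime sources then branch on the generated macros, e.g.:
 *
 *     #ifdef HAVE_SYS_STAT_H
 *     #include <sys/stat.h>
 *     #endif
 */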
#ifdef _MSC_VER
// FIXME This is all questionable but the logs are flooded and nothing else is fixing them.
#pragma warning(disable:4018) // signed/unsigned mismatch
#pragma warning(disable:4090) // const problem
#pragma warning(disable:4146) // unary minus operator applied to unsigned type, result still unsigned
#pragma warning(disable:4244) // integer conversion, possible loss of data
#pragma warning(disable:4267) // integer conversion, possible loss of data
// promote warnings to errors
#pragma warning( error:4013) // function undefined; assuming extern returning int
#pragma warning( error:4022) // call and prototype disagree
#pragma warning( error:4047) // differs in level of indirection
#pragma warning( error:4098) // void return returns a value
#pragma warning( error:4113) // call and prototype disagree
#pragma warning( error:4172) // returning address of local variable or temporary
#pragma warning( error:4197) // top-level volatile in cast is ignored
#pragma warning( error:4273) // inconsistent dll linkage
#pragma warning( error:4293) // shift count negative or too big, undefined behavior
#pragma warning( error:4312) // 'type cast': conversion from 'MonoNativeThreadId' to 'gpointer' of greater size
#pragma warning( error:4715) // 'keyword' not all control paths return a value
#include <SDKDDKVer.h>
#if _WIN32_WINNT < 0x0601
#error "Mono requires Windows 7 or later."
#endif /* _WIN32_WINNT < 0x0601 */
#ifndef HAVE_WINAPI_FAMILY_SUPPORT
#define HAVE_WINAPI_FAMILY_SUPPORT
/* WIN API Family support */
#include <winapifamily.h>
#if WINAPI_FAMILY_PARTITION(WINAPI_PARTITION_DESKTOP)
#define HAVE_CLASSIC_WINAPI_SUPPORT 1
#define HAVE_UWP_WINAPI_SUPPORT 0
#elif WINAPI_FAMILY_PARTITION(WINAPI_PARTITION_APP)
#define HAVE_CLASSIC_WINAPI_SUPPORT 0
#define HAVE_UWP_WINAPI_SUPPORT 1
#else
#define HAVE_CLASSIC_WINAPI_SUPPORT 0
#define HAVE_UWP_WINAPI_SUPPORT 0
#ifndef HAVE_EXTERN_DEFINED_WINAPI_SUPPORT
#error Unsupported WINAPI family
#endif
#endif
#endif
#endif
/* This platform does not support symlinks */
#cmakedefine HOST_NO_SYMLINKS 1
/* pthread is a pointer */
#cmakedefine PTHREAD_POINTER_ID 1
/* Targeting the Android platform */
#cmakedefine HOST_ANDROID 1
/* ... */
#cmakedefine TARGET_ANDROID 1
/* ... */
#cmakedefine USE_MACH_SEMA 1
/* Targeting the Fuchsia platform */
#cmakedefine HOST_FUCHSIA 1
/* Targeting the AIX and PASE platforms */
#cmakedefine HOST_AIX 1
/* Host Platform is Win32 */
#cmakedefine HOST_WIN32 1
/* Target Platform is Win32 */
#cmakedefine TARGET_WIN32 1
/* Host Platform is Darwin */
#cmakedefine HOST_DARWIN 1
/* Host Platform is iOS */
#cmakedefine HOST_IOS 1
/* Host Platform is tvOS */
#cmakedefine HOST_TVOS 1
/* Host Platform is Mac Catalyst */
#cmakedefine HOST_MACCAT 1
/* Use classic Windows API support */
#cmakedefine HAVE_CLASSIC_WINAPI_SUPPORT 1
/* Don't use UWP Windows API support */
#cmakedefine HAVE_UWP_WINAPI_SUPPORT 1
/* Define to 1 if you have the <sys/types.h> header file. */
#cmakedefine HAVE_SYS_TYPES_H 1
/* Define to 1 if you have the <sys/stat.h> header file. */
#cmakedefine HAVE_SYS_STAT_H 1
/* Define to 1 if you have the <strings.h> header file. */
#cmakedefine HAVE_STRINGS_H 1
/* Define to 1 if you have the <stdint.h> header file. */
#cmakedefine HAVE_STDINT_H 1
/* Define to 1 if you have the <unistd.h> header file. */
#cmakedefine HAVE_UNISTD_H 1
/* Define to 1 if you have the <signal.h> header file. */
#cmakedefine HAVE_SIGNAL_H 1
/* Define to 1 if you have the <setjmp.h> header file. */
#cmakedefine HAVE_SETJMP_H 1
/* Define to 1 if you have the <syslog.h> header file. */
#cmakedefine HAVE_SYSLOG_H 1
/* Define to 1 if `major', `minor', and `makedev' are declared in <mkdev.h>.
*/
#cmakedefine MAJOR_IN_MKDEV 1
/* Define to 1 if `major', `minor', and `makedev' are declared in
<sysmacros.h>. */
#cmakedefine MAJOR_IN_SYSMACROS 1
/* Define to 1 if you have the <sys/filio.h> header file. */
#cmakedefine HAVE_SYS_FILIO_H 1
/* Define to 1 if you have the <sys/sockio.h> header file. */
#cmakedefine HAVE_SYS_SOCKIO_H 1
/* Define to 1 if you have the <netdb.h> header file. */
#cmakedefine HAVE_NETDB_H 1
/* Define to 1 if you have the <utime.h> header file. */
#cmakedefine HAVE_UTIME_H 1
/* Define to 1 if you have the <sys/utime.h> header file. */
#cmakedefine HAVE_SYS_UTIME_H 1
/* Define to 1 if you have the <semaphore.h> header file. */
#cmakedefine HAVE_SEMAPHORE_H 1
/* Define to 1 if you have the <sys/un.h> header file. */
#cmakedefine HAVE_SYS_UN_H 1
/* Define to 1 if you have the <sys/syscall.h> header file. */
#cmakedefine HAVE_SYS_SYSCALL_H 1
/* Define to 1 if you have the <sys/uio.h> header file. */
#cmakedefine HAVE_SYS_UIO_H 1
/* Define to 1 if you have the <sys/param.h> header file. */
#cmakedefine HAVE_SYS_PARAM_H 1
/* Define to 1 if you have the <sys/sysctl.h> header file. */
#cmakedefine HAVE_SYS_SYSCTL_H 1
/* Define to 1 if you have the <libproc.h> header file. */
#cmakedefine HAVE_LIBPROC_H 1
/* Define to 1 if you have the <sys/prctl.h> header file. */
#cmakedefine HAVE_SYS_PRCTL_H 1
/* Define to 1 if you have the <gnu/lib-names.h> header file. */
#cmakedefine HAVE_GNU_LIB_NAMES_H 1
/* Define to 1 if you have the <sys/socket.h> header file. */
#cmakedefine HAVE_SYS_SOCKET_H 1
/* Define to 1 if you have the <sys/utsname.h> header file. */
#cmakedefine HAVE_SYS_UTSNAME_H 1
/* Define to 1 if you have the <alloca.h> header file. */
#cmakedefine HAVE_ALLOCA_H 1
/* Define to 1 if you have the <ucontext.h> header file. */
#cmakedefine HAVE_UCONTEXT_H 1
/* Define to 1 if you have the <pwd.h> header file. */
#cmakedefine HAVE_PWD_H 1
/* Define to 1 if you have the <sys/select.h> header file. */
#cmakedefine HAVE_SYS_SELECT_H 1
/* Define to 1 if you have the <netinet/tcp.h> header file. */
#cmakedefine HAVE_NETINET_TCP_H 1
/* Define to 1 if you have the <netinet/in.h> header file. */
#cmakedefine HAVE_NETINET_IN_H 1
/* Define to 1 if you have the <link.h> header file. */
#cmakedefine HAVE_LINK_H 1
/* Define to 1 if you have the <arpa/inet.h> header file. */
#cmakedefine HAVE_ARPA_INET_H 1
/* Define to 1 if you have the <unwind.h> header file. */
#cmakedefine HAVE_UNWIND_H 1
/* Define to 1 if you have the <sys/user.h> header file. */
#cmakedefine HAVE_SYS_USER_H 1
/* Use static ICU */
#cmakedefine STATIC_ICU 1
/* Use in-tree zlib */
#cmakedefine INTERNAL_ZLIB 1
/* Define to 1 if you have the <poll.h> header file. */
#cmakedefine HAVE_POLL_H 1
/* Define to 1 if you have the <sys/poll.h> header file. */
#cmakedefine HAVE_SYS_POLL_H 1
/* Define to 1 if you have the <sys/wait.h> header file. */
#cmakedefine HAVE_SYS_WAIT_H 1
/* Define to 1 if you have the <wchar.h> header file. */
#cmakedefine HAVE_WCHAR_H 1
/* Define to 1 if you have the <linux/magic.h> header file. */
#cmakedefine HAVE_LINUX_MAGIC_H 1
/* Define to 1 if you have the <android/legacy_signal_inlines.h> header file.
*/
#cmakedefine HAVE_ANDROID_LEGACY_SIGNAL_INLINES_H 1
/* Define to 1 if you have the <android/ndk-version.h> header file. */
#cmakedefine HAVE_ANDROID_NDK_VERSION_H 1
/* Whether Android NDK unified headers are used */
#cmakedefine ANDROID_UNIFIED_HEADERS 1
/* The size of `void *', as computed by sizeof. */
#define SIZEOF_VOID_P @SIZEOF_VOID_P@
/* The size of `long', as computed by sizeof. */
#define SIZEOF_LONG @SIZEOF_LONG@
/* The size of `int', as computed by sizeof. */
#define SIZEOF_INT @SIZEOF_INT@
/* The size of `long long', as computed by sizeof. */
#define SIZEOF_LONG_LONG @SIZEOF_LONG_LONG@
/* Xen-specific behaviour */
#cmakedefine MONO_XEN_OPT 1
/* Reduce runtime requirements (and capabilities) */
#cmakedefine MONO_SMALL_CONFIG 1
/* Make jemalloc assert for mono */
#cmakedefine MONO_JEMALLOC_ASSERT 1
/* Make jemalloc default for mono */
#cmakedefine MONO_JEMALLOC_DEFAULT 1
/* Enable jemalloc usage for mono */
#cmakedefine MONO_JEMALLOC_ENABLED 1
/* Do not include names of unmanaged functions in the crash dump */
#cmakedefine MONO_PRIVATE_CRASHES 1
/* Do not create structured crash files during unmanaged crashes */
#cmakedefine DISABLE_STRUCTURED_CRASH 1
/* String of disabled features */
#define DISABLED_FEATURES @DISABLED_FEATURES@
/* Disable AOT Compiler */
#cmakedefine DISABLE_AOT 1
/* Disable runtime debugging support */
#cmakedefine DISABLE_DEBUG 1
/* Disable reflection emit support */
#cmakedefine DISABLE_REFLECTION_EMIT 1
/* Disable support debug logging */
#cmakedefine DISABLE_LOGGING 1
/* Disable COM support */
#cmakedefine DISABLE_COM 1
/* Disable advanced SSA JIT optimizations */
#cmakedefine DISABLE_SSA 1
/* Disable the JIT, only full-aot mode or interpreter will be supported by the
runtime. */
#cmakedefine DISABLE_JIT 1
/* Disable the interpreter. */
#cmakedefine DISABLE_INTERPRETER 1
/* Some VES is available at runtime */
#cmakedefine ENABLE_ILGEN 1
/* Disable non-blittable marshalling */
#cmakedefine DISABLE_NONBLITTABLE
/* Disable SIMD intrinsics related optimizations. */
#cmakedefine DISABLE_SIMD 1
/* Disable Soft Debugger Agent. */
#cmakedefine DISABLE_DEBUGGER_AGENT 1
/* Disable Performance Counters. */
#cmakedefine DISABLE_PERFCOUNTERS 1
/* Disable shared perfcounters. */
#cmakedefine DISABLE_SHARED_PERFCOUNTERS 1
/* Disable support code for the LLDB plugin. */
#cmakedefine DISABLE_LLDB 1
/* Disable assertion messages. */
#cmakedefine DISABLE_ASSERT_MESSAGES 1
/* Disable concurrent gc support in SGEN. */
#cmakedefine DISABLE_SGEN_MAJOR_MARKSWEEP_CONC 1
/* Disable minor=split support in SGEN. */
#cmakedefine DISABLE_SGEN_SPLIT_NURSERY 1
/* Disable gc bridge support in SGEN. */
#cmakedefine DISABLE_SGEN_GC_BRIDGE 1
/* Disable debug helpers in SGEN. */
#cmakedefine DISABLE_SGEN_DEBUG_HELPERS 1
/* Disable sockets */
#cmakedefine DISABLE_SOCKETS 1
/* Disables use of DllMaps in MonoVM */
#cmakedefine DISABLE_DLLMAP 1
/* Disable Threads */
#cmakedefine DISABLE_THREADS 1
/* Disable perf counters */
#cmakedefine DISABLE_PERF_COUNTERS
/* Disable MONO_LOG_DEST */
#cmakedefine DISABLE_LOG_DEST
/* GC description */
#cmakedefine DEFAULT_GC_NAME 1
/* No GC support. */
#cmakedefine HAVE_NULL_GC 1
/* Length of zero length arrays */
#define MONO_ZERO_LEN_ARRAY @MONO_ZERO_LEN_ARRAY@
/* Define to 1 if you have the `sigaction' function. */
#cmakedefine HAVE_SIGACTION 1
/* Define to 1 if you have the `kill' function. */
#cmakedefine HAVE_KILL 1
/* CLOCK_MONOTONIC */
#cmakedefine HAVE_CLOCK_MONOTONIC 1
/* CLOCK_MONOTONIC_COARSE */
#cmakedefine HAVE_CLOCK_MONOTONIC_COARSE 1
/* clockid_t */
#cmakedefine HAVE_CLOCKID_T 1
/* mach_absolute_time */
#cmakedefine HAVE_MACH_ABSOLUTE_TIME 1
/* gethrtime */
#cmakedefine HAVE_GETHRTIME 1
/* read_real_time */
#cmakedefine HAVE_READ_REAL_TIME 1
/* Define to 1 if you have the `clock_nanosleep' function. */
#cmakedefine HAVE_CLOCK_NANOSLEEP 1
/* Does dlsym require leading underscore. */
#cmakedefine MONO_DL_NEED_USCORE 1
/* Define to 1 if you have the <execinfo.h> header file. */
#cmakedefine HAVE_EXECINFO_H 1
/* Define to 1 if you have the <sys/auxv.h> header file. */
#cmakedefine HAVE_SYS_AUXV_H 1
/* Define to 1 if you have the <sys/resource.h> header file. */
#cmakedefine HAVE_SYS_RESOURCE_H 1
/* kqueue */
#cmakedefine HAVE_KQUEUE 1
/* Define to 1 if you have the `backtrace_symbols' function. */
#cmakedefine HAVE_BACKTRACE_SYMBOLS 1
/* Define to 1 if you have the `mkstemp' function. */
#cmakedefine HAVE_MKSTEMP 1
/* Define to 1 if you have the `mmap' function. */
#cmakedefine HAVE_MMAP 1
/* Define to 1 if you have the `madvise' function. */
#cmakedefine HAVE_MADVISE 1
/* Define to 1 if you have the `getrusage' function. */
#cmakedefine HAVE_GETRUSAGE 1
/* Define to 1 if you have the `dladdr' function. */
#cmakedefine HAVE_DLADDR 1
/* Define to 1 if you have the `sysconf' function. */
#cmakedefine HAVE_SYSCONF 1
/* Define to 1 if you have the `getrlimit' function. */
#cmakedefine HAVE_GETRLIMIT 1
/* Define to 1 if you have the `prctl' function. */
#cmakedefine HAVE_PRCTL 1
/* Define to 1 if you have the `nl_langinfo' function. */
#cmakedefine HAVE_NL_LANGINFO 1
/* sched_getaffinity */
#cmakedefine HAVE_SCHED_GETAFFINITY 1
/* sched_setaffinity */
#cmakedefine HAVE_SCHED_SETAFFINITY 1
/* Define to 1 if you have the `sched_getcpu' function. */
#cmakedefine HAVE_SCHED_GETCPU 1
/* Define to 1 if you have the `getpwuid_r' function. */
#cmakedefine HAVE_GETPWUID_R 1
/* Define to 1 if you have the `readlink' function. */
#cmakedefine HAVE_READLINK 1
/* Define to 1 if you have the `chmod' function. */
#cmakedefine HAVE_CHMOD 1
/* Define to 1 if you have the `lstat' function. */
#cmakedefine HAVE_LSTAT 1
/* Define to 1 if you have the `getdtablesize' function. */
#cmakedefine HAVE_GETDTABLESIZE 1
/* Define to 1 if you have the `ftruncate' function. */
#cmakedefine HAVE_FTRUNCATE 1
/* Define to 1 if you have the `msync' function. */
#cmakedefine HAVE_MSYNC 1
/* Define to 1 if you have the `getpeername' function. */
#cmakedefine HAVE_GETPEERNAME 1
/* Define to 1 if you have the `utime' function. */
#cmakedefine HAVE_UTIME 1
/* Define to 1 if you have the `utimes' function. */
#cmakedefine HAVE_UTIMES 1
/* Define to 1 if you have the `openlog' function. */
#cmakedefine HAVE_OPENLOG 1
/* Define to 1 if you have the `closelog' function. */
#cmakedefine HAVE_CLOSELOG 1
/* Define to 1 if you have the `atexit' function. */
#cmakedefine HAVE_ATEXIT 1
/* Define to 1 if you have the `popen' function. */
#cmakedefine HAVE_POPEN 1
/* Define to 1 if you have the `strerror_r' function. */
#cmakedefine HAVE_STRERROR_R 1
/* Have GLIBC_BEFORE_2_3_4_SCHED_SETAFFINITY */
#cmakedefine GLIBC_BEFORE_2_3_4_SCHED_SETAFFINITY 1
/* GLIBC has CPU_COUNT macro in sched.h */
#cmakedefine HAVE_GNU_CPU_COUNT
/* Have large file support */
#cmakedefine HAVE_LARGE_FILE_SUPPORT 1
/* Have getaddrinfo */
#cmakedefine HAVE_GETADDRINFO 1
/* Have gethostbyname2 */
#cmakedefine HAVE_GETHOSTBYNAME2 1
/* Have gethostbyname */
#cmakedefine HAVE_GETHOSTBYNAME 1
/* Have getprotobyname */
#cmakedefine HAVE_GETPROTOBYNAME 1
/* Have getprotobyname_r */
#cmakedefine HAVE_GETPROTOBYNAME_R 1
/* Have getnameinfo */
#cmakedefine HAVE_GETNAMEINFO 1
/* Have inet_ntop */
#cmakedefine HAVE_INET_NTOP 1
/* Have inet_pton */
#cmakedefine HAVE_INET_PTON 1
/* Define to 1 if you have the `inet_aton' function. */
#cmakedefine HAVE_INET_ATON 1
/* Define to 1 if you have the <pthread.h> header file. */
#cmakedefine HAVE_PTHREAD_H 1
/* Define to 1 if you have the <pthread_np.h> header file. */
#cmakedefine HAVE_PTHREAD_NP_H 1
/* Define to 1 if you have the `pthread_mutex_timedlock' function. */
#cmakedefine HAVE_PTHREAD_MUTEX_TIMEDLOCK 1
/* Define to 1 if you have the `pthread_getattr_np' function. */
#cmakedefine HAVE_PTHREAD_GETATTR_NP 1
/* Define to 1 if you have the `pthread_attr_get_np' function. */
#cmakedefine HAVE_PTHREAD_ATTR_GET_NP 1
/* Define to 1 if you have the `pthread_getname_np' function. */
#cmakedefine HAVE_PTHREAD_GETNAME_NP 1
/* Define to 1 if you have the `pthread_setname_np' function. */
#cmakedefine HAVE_PTHREAD_SETNAME_NP 1
/* Define to 1 if you have the `pthread_cond_timedwait_relative_np' function.
*/
#cmakedefine HAVE_PTHREAD_COND_TIMEDWAIT_RELATIVE_NP 1
/* Define to 1 if you have the `pthread_kill' function. */
#cmakedefine HAVE_PTHREAD_KILL 1
/* Define to 1 if you have the `pthread_attr_setstacksize' function. */
#cmakedefine HAVE_PTHREAD_ATTR_SETSTACKSIZE 1
/* Define to 1 if you have the `pthread_get_stackaddr_np' function. */
#cmakedefine HAVE_PTHREAD_GET_STACKADDR_NP 1
/* Define to 1 if you have the `pthread_jit_write_protect_np' function. */
#cmakedefine HAVE_PTHREAD_JIT_WRITE_PROTECT_NP 1
#cmakedefine01 HAVE_GETAUXVAL
/* Define to 1 if you have the declaration of `pthread_mutexattr_setprotocol',
and to 0 if you don't. */
#cmakedefine HAVE_DECL_PTHREAD_MUTEXATTR_SETPROTOCOL 1
/* Have a working sigaltstack */
#cmakedefine HAVE_WORKING_SIGALTSTACK 1
/* Define to 1 if you have the `shm_open' function. */
#cmakedefine HAVE_SHM_OPEN 1
/* Define to 1 if you have the `poll' function. */
#cmakedefine HAVE_POLL 1
/* epoll_create1 */
#cmakedefine HAVE_EPOLL 1
/* Define to 1 if you have the <sys/ioctl.h> header file. */
#cmakedefine HAVE_SYS_IOCTL_H 1
/* Define to 1 if you have the <net/if.h> header file. */
#cmakedefine HAVE_NET_IF_H 1
/* Can get interface list */
#cmakedefine HAVE_SIOCGIFCONF 1
/* sockaddr_in has sin_len */
#cmakedefine HAVE_SOCKADDR_IN_SIN_LEN 1
/* sockaddr_in6 has sin6_len */
#cmakedefine HAVE_SOCKADDR_IN6_SIN_LEN 1
/* Have getifaddrs */
#cmakedefine HAVE_GETIFADDRS 1
/* Have struct ifaddrs */
#cmakedefine HAVE_IFADDRS 1
/* Have access */
#cmakedefine HAVE_ACCESS 1
/* Have getpid */
#cmakedefine HAVE_GETPID 1
/* Have mktemp */
#cmakedefine HAVE_MKTEMP 1
/* Define to 1 if you have the <sys/errno.h> header file. */
#cmakedefine HAVE_SYS_ERRNO_H 1
/* Define to 1 if you have the <sys/sendfile.h> header file. */
#cmakedefine HAVE_SYS_SENDFILE_H 1
/* Define to 1 if you have the <sys/statvfs.h> header file. */
#cmakedefine HAVE_SYS_STATVFS_H 1
/* Define to 1 if you have the <sys/statfs.h> header file. */
#cmakedefine HAVE_SYS_STATFS_H 1
/* Define to 1 if you have the <sys/mman.h> header file. */
#cmakedefine HAVE_SYS_MMAN_H 1
/* Define to 1 if you have the <sys/mount.h> header file. */
#cmakedefine HAVE_SYS_MOUNT_H 1
/* Define to 1 if you have the `getfsstat' function. */
#cmakedefine HAVE_GETFSSTAT 1
/* Define to 1 if you have the `mremap' function. */
#cmakedefine HAVE_MREMAP 1
/* Define to 1 if you have the `posix_fadvise' function. */
#cmakedefine HAVE_POSIX_FADVISE 1
/* Define to 1 if you have the `vsnprintf' function. */
#cmakedefine HAVE_VSNPRINTF 1
/* Define to 1 if you have the `sendfile' function. */
#cmakedefine HAVE_SENDFILE 1
/* struct statfs */
#cmakedefine HAVE_STATFS 1
/* Define to 1 if you have the `statvfs' function. */
#cmakedefine HAVE_STATVFS 1
/* Define to 1 if you have the `setpgid' function. */
#cmakedefine HAVE_SETPGID 1
/* Define to 1 if you have the `system' function. */
#ifdef _MSC_VER
#if HAVE_WINAPI_FAMILY_SUPPORT(HAVE_CLASSIC_WINAPI_SUPPORT)
#cmakedefine HAVE_SYSTEM 1
#endif
#else
#cmakedefine HAVE_SYSTEM 1
#endif
/* Define to 1 if you have the `fork' function. */
#cmakedefine HAVE_FORK 1
/* Define to 1 if you have the `execv' function. */
#cmakedefine HAVE_EXECV 1
/* Define to 1 if you have the `execve' function. */
#cmakedefine HAVE_EXECVE 1
/* Define to 1 if you have the `waitpid' function. */
#cmakedefine HAVE_WAITPID 1
/* Define to 1 if you have the `localtime_r' function. */
#cmakedefine HAVE_LOCALTIME_R 1
/* Define to 1 if you have the `mkdtemp' function. */
#cmakedefine HAVE_MKDTEMP 1
/* The size of `size_t', as computed by sizeof. */
#define SIZEOF_SIZE_T @SIZEOF_SIZE_T@
#cmakedefine01 HAVE_GNU_STRERROR_R
/* Define to 1 if the system has the type `struct sockaddr'. */
#cmakedefine HAVE_STRUCT_SOCKADDR 1
/* Define to 1 if the system has the type `struct sockaddr_in'. */
#cmakedefine HAVE_STRUCT_SOCKADDR_IN 1
/* Define to 1 if the system has the type `struct sockaddr_in6'. */
#cmakedefine HAVE_STRUCT_SOCKADDR_IN6 1
/* Define to 1 if the system has the type `struct stat'. */
#cmakedefine HAVE_STRUCT_STAT 1
/* Define to 1 if the system has the type `struct timeval'. */
#cmakedefine HAVE_STRUCT_TIMEVAL 1
/* Define to 1 if `st_atim' is a member of `struct stat'. */
#cmakedefine HAVE_STRUCT_STAT_ST_ATIM 1
/* Define to 1 if `st_atimespec' is a member of `struct stat'. */
#cmakedefine HAVE_STRUCT_STAT_ST_ATIMESPEC 1
/* Define to 1 if `kp_proc' is a member of `struct kinfo_proc'. */
#cmakedefine HAVE_STRUCT_KINFO_PROC_KP_PROC 1
/* Define to 1 if you have the <sys/time.h> header file. */
#cmakedefine HAVE_SYS_TIME_H 1
/* Define to 1 if you have the <dirent.h> header file. */
#cmakedefine HAVE_DIRENT_H 1
/* Define to 1 if you have the <CommonCrypto/CommonDigest.h> header file. */
#cmakedefine HAVE_COMMONCRYPTO_COMMONDIGEST_H 1
/* Define to 1 if you have the <sys/random.h> header file. */
#cmakedefine HAVE_SYS_RANDOM_H 1
/* Define to 1 if you have the `getrandom' function. */
#cmakedefine HAVE_GETRANDOM 1
/* Define to 1 if you have the `getentropy' function. */
#cmakedefine HAVE_GETENTROPY 1
/* Qp2getifaddrs */
#cmakedefine HAVE_QP2GETIFADDRS 1
/* Define to 1 if you have the `strlcpy' function. */
#cmakedefine HAVE_STRLCPY 1
/* Define to 1 if you have the <winternl.h> header file. */
#cmakedefine HAVE_WINTERNL_H 1
/* Have socklen_t */
#cmakedefine HAVE_SOCKLEN_T 1
/* Define to 1 if you have the `execvp' function. */
#cmakedefine HAVE_EXECVP 1
/* Name of /dev/random */
#define NAME_DEV_RANDOM @NAME_DEV_RANDOM@
/* Enable the allocation and indexing of arrays greater than Int32.MaxValue */
#cmakedefine MONO_BIG_ARRAYS 1
/* Enable DTrace probes */
#cmakedefine ENABLE_DTRACE 1
/* AOT cross offsets file */
#cmakedefine MONO_OFFSETS_FILE "@MONO_OFFSETS_FILE@"
/* Enable the LLVM back end */
#cmakedefine ENABLE_LLVM 1
/* Runtime support code for llvm enabled */
#cmakedefine ENABLE_LLVM_RUNTIME 1
/* 64 bit mode with 4 byte longs and pointers */
#cmakedefine MONO_ARCH_ILP32 1
/* The runtime is compiled for cross-compiling mode */
#cmakedefine MONO_CROSS_COMPILE 1
/* ... */
#cmakedefine TARGET_WASM 1
/* The JIT/AOT targets WatchOS */
#cmakedefine TARGET_WATCHOS 1
/* ... */
#cmakedefine TARGET_PS3 1
/* ... */
#cmakedefine __mono_ppc64__ 1
/* ... */
#cmakedefine TARGET_XBOX360 1
/* ... */
#cmakedefine TARGET_PS4 1
/* ... */
#cmakedefine DISABLE_HW_TRAPS 1
/* Target is RISC-V */
#cmakedefine TARGET_RISCV 1
/* Target is 32-bit RISC-V */
#cmakedefine TARGET_RISCV32 1
/* Target is 64-bit RISC-V */
#cmakedefine TARGET_RISCV64 1
/* ... */
#cmakedefine TARGET_X86 1
/* ... */
#cmakedefine TARGET_AMD64 1
/* ... */
#cmakedefine TARGET_ARM 1
/* ... */
#cmakedefine TARGET_ARM64 1
/* ... */
#cmakedefine TARGET_POWERPC 1
/* ... */
#cmakedefine TARGET_POWERPC64 1
/* ... */
#cmakedefine TARGET_S390X 1
/* ... */
#cmakedefine TARGET_MIPS 1
/* ... */
#cmakedefine TARGET_SPARC 1
/* ... */
#cmakedefine TARGET_SPARC64 1
/* ... */
#cmakedefine HOST_WASM 1
/* ... */
#cmakedefine HOST_BROWSER 1
/* ... */
#cmakedefine HOST_WASI 1
/* ... */
#cmakedefine HOST_X86 1
/* ... */
#cmakedefine HOST_AMD64 1
/* ... */
#cmakedefine HOST_ARM 1
/* ... */
#cmakedefine HOST_ARM64 1
/* ... */
#cmakedefine HOST_POWERPC 1
/* ... */
#cmakedefine HOST_POWERPC64 1
/* ... */
#cmakedefine HOST_S390X 1
/* ... */
#cmakedefine HOST_MIPS 1
/* ... */
#cmakedefine HOST_SPARC 1
/* ... */
#cmakedefine HOST_SPARC64 1
/* Host is RISC-V */
#cmakedefine HOST_RISCV 1
/* Host is 32-bit RISC-V */
#cmakedefine HOST_RISCV32 1
/* Host is 64-bit RISC-V */
#cmakedefine HOST_RISCV64 1
/* ... */
#cmakedefine USE_GCC_ATOMIC_OPS 1
/* The JIT/AOT targets iOS */
#cmakedefine TARGET_IOS 1
/* The JIT/AOT targets tvOS */
#cmakedefine TARGET_TVOS 1
/* The JIT/AOT targets Mac Catalyst */
#cmakedefine TARGET_MACCAT 1
/* The JIT/AOT targets OSX */
#cmakedefine TARGET_OSX 1
/* The JIT/AOT targets Apple platforms */
#cmakedefine TARGET_MACH 1
/* byte order of target */
#define TARGET_BYTE_ORDER @TARGET_BYTE_ORDER@
/* wordsize of target */
#define TARGET_SIZEOF_VOID_P @TARGET_SIZEOF_VOID_P@
/* size of target machine integer registers */
#define SIZEOF_REGISTER @SIZEOF_REGISTER@
/* host or target doesn't allow unaligned memory access */
#cmakedefine NO_UNALIGNED_ACCESS 1
/* Support for the visibility ("hidden") attribute */
#cmakedefine HAVE_VISIBILITY_HIDDEN 1
/* Support for the deprecated attribute */
#cmakedefine HAVE_DEPRECATED 1
/* Moving collector */
#cmakedefine HAVE_MOVING_COLLECTOR 1
/* Defaults to concurrent GC */
#cmakedefine HAVE_CONC_GC_AS_DEFAULT 1
/* Define to 1 if you have the `stpcpy' function. */
#cmakedefine HAVE_STPCPY 1
/* Define to 1 if you have the `strtok_r' function. */
#cmakedefine HAVE_STRTOK_R 1
/* Define to 1 if you have the `rewinddir' function. */
#cmakedefine HAVE_REWINDDIR 1
/* Define to 1 if you have the `vasprintf' function. */
#cmakedefine HAVE_VASPRINTF 1
/* Overridable allocator support enabled */
#cmakedefine ENABLE_OVERRIDABLE_ALLOCATORS 1
/* Define to 1 if you have the `strndup' function. */
#cmakedefine HAVE_STRNDUP 1
/* Define to 1 if you have the <getopt.h> header file. */
#cmakedefine HAVE_GETOPT_H 1
/* Define to 1 if you have the <iconv.h> header file. */
#cmakedefine HAVE_ICONV_H 1
/* Define to 1 if you have the `iconv' library (-liconv). */
#cmakedefine HAVE_LIBICONV 1
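/* Illustration only (not part of the upstream template): code that depends on
 * iconv is typically guarded on the two probes above, along the lines of
 *
 *     #ifdef HAVE_ICONV_H
 *     #include <iconv.h>
 *     #endif
 *
 * The change described in this row drops the libiconv dependency entirely.
 */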
/* Icall symbol map enabled */
#cmakedefine ENABLE_ICALL_SYMBOL_MAP 1
/* Icall export enabled */
#cmakedefine ENABLE_ICALL_EXPORT 1
/* Icall tables disabled */
#cmakedefine DISABLE_ICALL_TABLES 1
/* QCalls disabled */
#cmakedefine DISABLE_QCALLS 1
/* Embedded PDB support disabled */
#cmakedefine DISABLE_EMBEDDED_PDB
/* log profiler compressed output disabled */
#cmakedefine DISABLE_LOG_PROFILER_GZ
/* Have __thread keyword */
#cmakedefine MONO_KEYWORD_THREAD @MONO_KEYWORD_THREAD@
/* tls_model available */
#cmakedefine HAVE_TLS_MODEL_ATTR 1
/* ARM v5 */
#cmakedefine HAVE_ARMV5 1
/* ARM v6 */
#cmakedefine HAVE_ARMV6 1
/* ARM v7 */
#cmakedefine HAVE_ARMV7 1
/* RISC-V FPABI is double-precision */
#cmakedefine RISCV_FPABI_DOUBLE 1
/* RISC-V FPABI is single-precision */
#cmakedefine RISCV_FPABI_SINGLE 1
/* RISC-V FPABI is soft float */
#cmakedefine RISCV_FPABI_SOFT 1
/* Use malloc for each single mempool allocation */
#cmakedefine USE_MALLOC_FOR_MEMPOOLS 1
/* Enable lazy gc thread creation by the embedding host. */
#cmakedefine LAZY_GC_THREAD_CREATION 1
/* Enable cooperative stop-the-world garbage collection. */
#cmakedefine ENABLE_COOP_SUSPEND 1
/* Enable hybrid suspend for GC stop-the-world */
#cmakedefine ENABLE_HYBRID_SUSPEND 1
/* Enable feature experiments */
#cmakedefine ENABLE_EXPERIMENTS 1
/* Enable experiment 'null' */
#cmakedefine ENABLE_EXPERIMENT_null 1
/* Enable experiment 'Tiered Compilation' */
#cmakedefine ENABLE_EXPERIMENT_TIERED 1
/* Enable checked build */
#cmakedefine ENABLE_CHECKED_BUILD 1
/* Enable GC checked build */
#cmakedefine ENABLE_CHECKED_BUILD_GC 1
/* Enable metadata checked build */
#cmakedefine ENABLE_CHECKED_BUILD_METADATA 1
/* Enable thread checked build */
#cmakedefine ENABLE_CHECKED_BUILD_THREAD 1
/* Enable private types checked build */
#cmakedefine ENABLE_CHECKED_BUILD_PRIVATE_TYPES 1
/* Enable EventPipe library support */
#cmakedefine ENABLE_PERFTRACING 1
/* Define to 1 if you have /usr/include/malloc.h. */
#cmakedefine HAVE_USR_INCLUDE_MALLOC_H 1
/* The architecture this is running on */
#define MONO_ARCHITECTURE @MONO_ARCHITECTURE@
/* Disable banned functions from being used by the runtime */
#cmakedefine MONO_INSIDE_RUNTIME 1
/* Version number of package */
#define VERSION @VERSION@
/* Full version number of package */
#define FULL_VERSION @FULL_VERSION@
/* Define to 1 if you have the <dlfcn.h> header file. */
#cmakedefine HAVE_DLFCN_H 1
/* Enable lazy gc thread creation by the embedding host */
#cmakedefine LAZY_GC_THREAD_CREATION 1
/* Enable additional checks */
#cmakedefine ENABLE_CHECKED_BUILD 1
/* Enable compile time checking that getter functions are used */
#cmakedefine ENABLE_CHECKED_BUILD_PRIVATE_TYPES 1
/* Enable runtime GC Safe / Unsafe mode assertion checks (must set env var MONO_CHECK_MODE=gc) */
#cmakedefine ENABLE_CHECKED_BUILD_GC 1
/* Enable runtime history of per-thread coop state transitions (must set env var MONO_CHECK_MODE=thread) */
#cmakedefine ENABLE_CHECKED_BUILD_THREAD 1
/* Enable runtime checks of mempool references between metadata images (must set env var MONO_CHECK_MODE=metadata) */
#cmakedefine ENABLE_CHECKED_BUILD_METADATA 1
/* Enable static linking of mono runtime components */
#cmakedefine STATIC_COMPONENTS
/* Enable perf jit dump support */
#cmakedefine ENABLE_JIT_DUMP 1
#if defined(ENABLE_LLVM) && defined(HOST_WIN32) && defined(TARGET_WIN32) && (!defined(TARGET_AMD64) || !defined(_MSC_VER))
#error LLVM for host=Windows and target=Windows is only supported on x64 MSVC build.
#endif
#endif
| #ifndef __MONO_CONFIG_H__
#define __MONO_CONFIG_H__
#ifdef _MSC_VER
// FIXME This is all questionable but the logs are flooded and nothing else is fixing them.
#pragma warning(disable:4018) // signed/unsigned mismatch
#pragma warning(disable:4090) // const problem
#pragma warning(disable:4146) // unary minus operator applied to unsigned type, result still unsigned
#pragma warning(disable:4244) // integer conversion, possible loss of data
#pragma warning(disable:4267) // integer conversion, possible loss of data
// promote warnings to errors
#pragma warning( error:4013) // function undefined; assuming extern returning int
#pragma warning( error:4022) // call and prototype disagree
#pragma warning( error:4047) // differs in level of indirection
#pragma warning( error:4098) // void return returns a value
#pragma warning( error:4113) // call and prototype disagree
#pragma warning( error:4172) // returning address of local variable or temporary
#pragma warning( error:4197) // top-level volatile in cast is ignored
#pragma warning( error:4273) // inconsistent dll linkage
#pragma warning( error:4293) // shift count negative or too big, undefined behavior
#pragma warning( error:4312) // 'type cast': conversion from 'MonoNativeThreadId' to 'gpointer' of greater size
#pragma warning( error:4715) // 'keyword' not all control paths return a value
#include <SDKDDKVer.h>
#if _WIN32_WINNT < 0x0601
#error "Mono requires Windows 7 or later."
#endif /* _WIN32_WINNT < 0x0601 */
#ifndef HAVE_WINAPI_FAMILY_SUPPORT
#define HAVE_WINAPI_FAMILY_SUPPORT
/* WIN API Family support */
#include <winapifamily.h>
#if WINAPI_FAMILY_PARTITION(WINAPI_PARTITION_DESKTOP)
#define HAVE_CLASSIC_WINAPI_SUPPORT 1
#define HAVE_UWP_WINAPI_SUPPORT 0
#elif WINAPI_FAMILY_PARTITION(WINAPI_PARTITION_APP)
#define HAVE_CLASSIC_WINAPI_SUPPORT 0
#define HAVE_UWP_WINAPI_SUPPORT 1
#else
#define HAVE_CLASSIC_WINAPI_SUPPORT 0
#define HAVE_UWP_WINAPI_SUPPORT 0
#ifndef HAVE_EXTERN_DEFINED_WINAPI_SUPPORT
#error Unsupported WINAPI family
#endif
#endif
#endif
#endif
/* This platform does not support symlinks */
#cmakedefine HOST_NO_SYMLINKS 1
/* pthread is a pointer */
#cmakedefine PTHREAD_POINTER_ID 1
/* Targeting the Android platform */
#cmakedefine HOST_ANDROID 1
/* ... */
#cmakedefine TARGET_ANDROID 1
/* ... */
#cmakedefine USE_MACH_SEMA 1
/* Targeting the Fuchsia platform */
#cmakedefine HOST_FUCHSIA 1
/* Targeting the AIX and PASE platforms */
#cmakedefine HOST_AIX 1
/* Host Platform is Win32 */
#cmakedefine HOST_WIN32 1
/* Target Platform is Win32 */
#cmakedefine TARGET_WIN32 1
/* Host Platform is Darwin */
#cmakedefine HOST_DARWIN 1
/* Host Platform is iOS */
#cmakedefine HOST_IOS 1
/* Host Platform is tvOS */
#cmakedefine HOST_TVOS 1
/* Host Platform is Mac Catalyst */
#cmakedefine HOST_MACCAT 1
/* Use classic Windows API support */
#cmakedefine HAVE_CLASSIC_WINAPI_SUPPORT 1
/* Don't use UWP Windows API support */
#cmakedefine HAVE_UWP_WINAPI_SUPPORT 1
/* Define to 1 if you have the <sys/types.h> header file. */
#cmakedefine HAVE_SYS_TYPES_H 1
/* Define to 1 if you have the <sys/stat.h> header file. */
#cmakedefine HAVE_SYS_STAT_H 1
/* Define to 1 if you have the <strings.h> header file. */
#cmakedefine HAVE_STRINGS_H 1
/* Define to 1 if you have the <stdint.h> header file. */
#cmakedefine HAVE_STDINT_H 1
/* Define to 1 if you have the <unistd.h> header file. */
#cmakedefine HAVE_UNISTD_H 1
/* Define to 1 if you have the <signal.h> header file. */
#cmakedefine HAVE_SIGNAL_H 1
/* Define to 1 if you have the <setjmp.h> header file. */
#cmakedefine HAVE_SETJMP_H 1
/* Define to 1 if you have the <syslog.h> header file. */
#cmakedefine HAVE_SYSLOG_H 1
/* Define to 1 if `major', `minor', and `makedev' are declared in <mkdev.h>.
*/
#cmakedefine MAJOR_IN_MKDEV 1
/* Define to 1 if `major', `minor', and `makedev' are declared in
<sysmacros.h>. */
#cmakedefine MAJOR_IN_SYSMACROS 1
/* Define to 1 if you have the <sys/filio.h> header file. */
#cmakedefine HAVE_SYS_FILIO_H 1
/* Define to 1 if you have the <sys/sockio.h> header file. */
#cmakedefine HAVE_SYS_SOCKIO_H 1
/* Define to 1 if you have the <netdb.h> header file. */
#cmakedefine HAVE_NETDB_H 1
/* Define to 1 if you have the <utime.h> header file. */
#cmakedefine HAVE_UTIME_H 1
/* Define to 1 if you have the <sys/utime.h> header file. */
#cmakedefine HAVE_SYS_UTIME_H 1
/* Define to 1 if you have the <semaphore.h> header file. */
#cmakedefine HAVE_SEMAPHORE_H 1
/* Define to 1 if you have the <sys/un.h> header file. */
#cmakedefine HAVE_SYS_UN_H 1
/* Define to 1 if you have the <sys/syscall.h> header file. */
#cmakedefine HAVE_SYS_SYSCALL_H 1
/* Define to 1 if you have the <sys/uio.h> header file. */
#cmakedefine HAVE_SYS_UIO_H 1
/* Define to 1 if you have the <sys/param.h> header file. */
#cmakedefine HAVE_SYS_PARAM_H 1
/* Define to 1 if you have the <sys/sysctl.h> header file. */
#cmakedefine HAVE_SYS_SYSCTL_H 1
/* Define to 1 if you have the <libproc.h> header file. */
#cmakedefine HAVE_LIBPROC_H 1
/* Define to 1 if you have the <sys/prctl.h> header file. */
#cmakedefine HAVE_SYS_PRCTL_H 1
/* Define to 1 if you have the <gnu/lib-names.h> header file. */
#cmakedefine HAVE_GNU_LIB_NAMES_H 1
/* Define to 1 if you have the <sys/socket.h> header file. */
#cmakedefine HAVE_SYS_SOCKET_H 1
/* Define to 1 if you have the <sys/utsname.h> header file. */
#cmakedefine HAVE_SYS_UTSNAME_H 1
/* Define to 1 if you have the <alloca.h> header file. */
#cmakedefine HAVE_ALLOCA_H 1
/* Define to 1 if you have the <ucontext.h> header file. */
#cmakedefine HAVE_UCONTEXT_H 1
/* Define to 1 if you have the <pwd.h> header file. */
#cmakedefine HAVE_PWD_H 1
/* Define to 1 if you have the <sys/select.h> header file. */
#cmakedefine HAVE_SYS_SELECT_H 1
/* Define to 1 if you have the <netinet/tcp.h> header file. */
#cmakedefine HAVE_NETINET_TCP_H 1
/* Define to 1 if you have the <netinet/in.h> header file. */
#cmakedefine HAVE_NETINET_IN_H 1
/* Define to 1 if you have the <link.h> header file. */
#cmakedefine HAVE_LINK_H 1
/* Define to 1 if you have the <arpa/inet.h> header file. */
#cmakedefine HAVE_ARPA_INET_H 1
/* Define to 1 if you have the <unwind.h> header file. */
#cmakedefine HAVE_UNWIND_H 1
/* Define to 1 if you have the <sys/user.h> header file. */
#cmakedefine HAVE_SYS_USER_H 1
/* Use static ICU */
#cmakedefine STATIC_ICU 1
/* Use in-tree zlib */
#cmakedefine INTERNAL_ZLIB 1
/* Define to 1 if you have the <poll.h> header file. */
#cmakedefine HAVE_POLL_H 1
/* Define to 1 if you have the <sys/poll.h> header file. */
#cmakedefine HAVE_SYS_POLL_H 1
/* Define to 1 if you have the <sys/wait.h> header file. */
#cmakedefine HAVE_SYS_WAIT_H 1
/* Define to 1 if you have the <wchar.h> header file. */
#cmakedefine HAVE_WCHAR_H 1
/* Define to 1 if you have the <linux/magic.h> header file. */
#cmakedefine HAVE_LINUX_MAGIC_H 1
/* Define to 1 if you have the <android/legacy_signal_inlines.h> header file.
*/
#cmakedefine HAVE_ANDROID_LEGACY_SIGNAL_INLINES_H 1
/* Define to 1 if you have the <android/ndk-version.h> header file. */
#cmakedefine HAVE_ANDROID_NDK_VERSION_H 1
/* Whether Android NDK unified headers are used */
#cmakedefine ANDROID_UNIFIED_HEADERS 1
/* The size of `void *', as computed by sizeof. */
#define SIZEOF_VOID_P @SIZEOF_VOID_P@
/* The size of `long', as computed by sizeof. */
#define SIZEOF_LONG @SIZEOF_LONG@
/* The size of `int', as computed by sizeof. */
#define SIZEOF_INT @SIZEOF_INT@
/* The size of `long long', as computed by sizeof. */
#define SIZEOF_LONG_LONG @SIZEOF_LONG_LONG@
/* Xen-specific behaviour */
#cmakedefine MONO_XEN_OPT 1
/* Reduce runtime requirements (and capabilities) */
#cmakedefine MONO_SMALL_CONFIG 1
/* Make jemalloc assert for mono */
#cmakedefine MONO_JEMALLOC_ASSERT 1
/* Make jemalloc default for mono */
#cmakedefine MONO_JEMALLOC_DEFAULT 1
/* Enable jemalloc usage for mono */
#cmakedefine MONO_JEMALLOC_ENABLED 1
/* Do not include names of unmanaged functions in the crash dump */
#cmakedefine MONO_PRIVATE_CRASHES 1
/* Do not create structured crash files during unmanaged crashes */
#cmakedefine DISABLE_STRUCTURED_CRASH 1
/* String of disabled features */
#define DISABLED_FEATURES @DISABLED_FEATURES@
/* Disable AOT Compiler */
#cmakedefine DISABLE_AOT 1
/* Disable runtime debugging support */
#cmakedefine DISABLE_DEBUG 1
/* Disable reflection emit support */
#cmakedefine DISABLE_REFLECTION_EMIT 1
/* Disable debug logging support */
#cmakedefine DISABLE_LOGGING 1
/* Disable COM support */
#cmakedefine DISABLE_COM 1
/* Disable advanced SSA JIT optimizations */
#cmakedefine DISABLE_SSA 1
/* Disable the JIT; only full-aot mode or the interpreter will be supported by the
runtime. */
#cmakedefine DISABLE_JIT 1
/* Disable the interpreter. */
#cmakedefine DISABLE_INTERPRETER 1
/* Some VES is available at runtime */
#cmakedefine ENABLE_ILGEN 1
/* Disable non-blittable marshalling */
#cmakedefine DISABLE_NONBLITTABLE
/* Disable SIMD intrinsics related optimizations. */
#cmakedefine DISABLE_SIMD 1
/* Disable Soft Debugger Agent. */
#cmakedefine DISABLE_DEBUGGER_AGENT 1
/* Disable Performance Counters. */
#cmakedefine DISABLE_PERFCOUNTERS 1
/* Disable shared perfcounters. */
#cmakedefine DISABLE_SHARED_PERFCOUNTERS 1
/* Disable support code for the LLDB plugin. */
#cmakedefine DISABLE_LLDB 1
/* Disable assertion messages. */
#cmakedefine DISABLE_ASSERT_MESSAGES 1
/* Disable concurrent gc support in SGEN. */
#cmakedefine DISABLE_SGEN_MAJOR_MARKSWEEP_CONC 1
/* Disable minor=split support in SGEN. */
#cmakedefine DISABLE_SGEN_SPLIT_NURSERY 1
/* Disable gc bridge support in SGEN. */
#cmakedefine DISABLE_SGEN_GC_BRIDGE 1
/* Disable debug helpers in SGEN. */
#cmakedefine DISABLE_SGEN_DEBUG_HELPERS 1
/* Disable sockets */
#cmakedefine DISABLE_SOCKETS 1
/* Disables use of DllMaps in MonoVM */
#cmakedefine DISABLE_DLLMAP 1
/* Disable Threads */
#cmakedefine DISABLE_THREADS 1
/* Disable perf counters */
#cmakedefine DISABLE_PERF_COUNTERS
/* Disable MONO_LOG_DEST */
#cmakedefine DISABLE_LOG_DEST
/* GC description */
#cmakedefine DEFAULT_GC_NAME 1
/* No GC support. */
#cmakedefine HAVE_NULL_GC 1
/* Length of zero length arrays */
#define MONO_ZERO_LEN_ARRAY @MONO_ZERO_LEN_ARRAY@
/* Define to 1 if you have the `sigaction' function. */
#cmakedefine HAVE_SIGACTION 1
/* Define to 1 if you have the `kill' function. */
#cmakedefine HAVE_KILL 1
/* CLOCK_MONOTONIC */
#cmakedefine HAVE_CLOCK_MONOTONIC 1
/* CLOCK_MONOTONIC_COARSE */
#cmakedefine HAVE_CLOCK_MONOTONIC_COARSE 1
/* clockid_t */
#cmakedefine HAVE_CLOCKID_T 1
/* mach_absolute_time */
#cmakedefine HAVE_MACH_ABSOLUTE_TIME 1
/* gethrtime */
#cmakedefine HAVE_GETHRTIME 1
/* read_real_time */
#cmakedefine HAVE_READ_REAL_TIME 1
/* Define to 1 if you have the `clock_nanosleep' function. */
#cmakedefine HAVE_CLOCK_NANOSLEEP 1
/* Does dlsym require leading underscore. */
#cmakedefine MONO_DL_NEED_USCORE 1
/* Define to 1 if you have the <execinfo.h> header file. */
#cmakedefine HAVE_EXECINFO_H 1
/* Define to 1 if you have the <sys/auxv.h> header file. */
#cmakedefine HAVE_SYS_AUXV_H 1
/* Define to 1 if you have the <sys/resource.h> header file. */
#cmakedefine HAVE_SYS_RESOURCE_H 1
/* kqueue */
#cmakedefine HAVE_KQUEUE 1
/* Define to 1 if you have the `backtrace_symbols' function. */
#cmakedefine HAVE_BACKTRACE_SYMBOLS 1
/* Define to 1 if you have the `mkstemp' function. */
#cmakedefine HAVE_MKSTEMP 1
/* Define to 1 if you have the `mmap' function. */
#cmakedefine HAVE_MMAP 1
/* Define to 1 if you have the `madvise' function. */
#cmakedefine HAVE_MADVISE 1
/* Define to 1 if you have the `getrusage' function. */
#cmakedefine HAVE_GETRUSAGE 1
/* Define to 1 if you have the `dladdr' function. */
#cmakedefine HAVE_DLADDR 1
/* Define to 1 if you have the `sysconf' function. */
#cmakedefine HAVE_SYSCONF 1
/* Define to 1 if you have the `getrlimit' function. */
#cmakedefine HAVE_GETRLIMIT 1
/* Define to 1 if you have the `prctl' function. */
#cmakedefine HAVE_PRCTL 1
/* Define to 1 if you have the `nl_langinfo' function. */
#cmakedefine HAVE_NL_LANGINFO 1
/* sched_getaffinity */
#cmakedefine HAVE_SCHED_GETAFFINITY 1
/* sched_setaffinity */
#cmakedefine HAVE_SCHED_SETAFFINITY 1
/* Define to 1 if you have the `sched_getcpu' function. */
#cmakedefine HAVE_SCHED_GETCPU 1
/* Define to 1 if you have the `getpwuid_r' function. */
#cmakedefine HAVE_GETPWUID_R 1
/* Define to 1 if you have the `readlink' function. */
#cmakedefine HAVE_READLINK 1
/* Define to 1 if you have the `chmod' function. */
#cmakedefine HAVE_CHMOD 1
/* Define to 1 if you have the `lstat' function. */
#cmakedefine HAVE_LSTAT 1
/* Define to 1 if you have the `getdtablesize' function. */
#cmakedefine HAVE_GETDTABLESIZE 1
/* Define to 1 if you have the `ftruncate' function. */
#cmakedefine HAVE_FTRUNCATE 1
/* Define to 1 if you have the `msync' function. */
#cmakedefine HAVE_MSYNC 1
/* Define to 1 if you have the `getpeername' function. */
#cmakedefine HAVE_GETPEERNAME 1
/* Define to 1 if you have the `utime' function. */
#cmakedefine HAVE_UTIME 1
/* Define to 1 if you have the `utimes' function. */
#cmakedefine HAVE_UTIMES 1
/* Define to 1 if you have the `openlog' function. */
#cmakedefine HAVE_OPENLOG 1
/* Define to 1 if you have the `closelog' function. */
#cmakedefine HAVE_CLOSELOG 1
/* Define to 1 if you have the `atexit' function. */
#cmakedefine HAVE_ATEXIT 1
/* Define to 1 if you have the `popen' function. */
#cmakedefine HAVE_POPEN 1
/* Define to 1 if you have the `strerror_r' function. */
#cmakedefine HAVE_STRERROR_R 1
/* Have GLIBC_BEFORE_2_3_4_SCHED_SETAFFINITY */
#cmakedefine GLIBC_BEFORE_2_3_4_SCHED_SETAFFINITY 1
/* GLIBC has CPU_COUNT macro in sched.h */
#cmakedefine HAVE_GNU_CPU_COUNT
/* Have large file support */
#cmakedefine HAVE_LARGE_FILE_SUPPORT 1
/* Have getaddrinfo */
#cmakedefine HAVE_GETADDRINFO 1
/* Have gethostbyname2 */
#cmakedefine HAVE_GETHOSTBYNAME2 1
/* Have gethostbyname */
#cmakedefine HAVE_GETHOSTBYNAME 1
/* Have getprotobyname */
#cmakedefine HAVE_GETPROTOBYNAME 1
/* Have getprotobyname_r */
#cmakedefine HAVE_GETPROTOBYNAME_R 1
/* Have getnameinfo */
#cmakedefine HAVE_GETNAMEINFO 1
/* Have inet_ntop */
#cmakedefine HAVE_INET_NTOP 1
/* Have inet_pton */
#cmakedefine HAVE_INET_PTON 1
/* Define to 1 if you have the `inet_aton' function. */
#cmakedefine HAVE_INET_ATON 1
/* Define to 1 if you have the <pthread.h> header file. */
#cmakedefine HAVE_PTHREAD_H 1
/* Define to 1 if you have the <pthread_np.h> header file. */
#cmakedefine HAVE_PTHREAD_NP_H 1
/* Define to 1 if you have the `pthread_mutex_timedlock' function. */
#cmakedefine HAVE_PTHREAD_MUTEX_TIMEDLOCK 1
/* Define to 1 if you have the `pthread_getattr_np' function. */
#cmakedefine HAVE_PTHREAD_GETATTR_NP 1
/* Define to 1 if you have the `pthread_attr_get_np' function. */
#cmakedefine HAVE_PTHREAD_ATTR_GET_NP 1
/* Define to 1 if you have the `pthread_getname_np' function. */
#cmakedefine HAVE_PTHREAD_GETNAME_NP 1
/* Define to 1 if you have the `pthread_setname_np' function. */
#cmakedefine HAVE_PTHREAD_SETNAME_NP 1
/* Define to 1 if you have the `pthread_cond_timedwait_relative_np' function.
*/
#cmakedefine HAVE_PTHREAD_COND_TIMEDWAIT_RELATIVE_NP 1
/* Define to 1 if you have the `pthread_kill' function. */
#cmakedefine HAVE_PTHREAD_KILL 1
/* Define to 1 if you have the `pthread_attr_setstacksize' function. */
#cmakedefine HAVE_PTHREAD_ATTR_SETSTACKSIZE 1
/* Define to 1 if you have the `pthread_get_stackaddr_np' function. */
#cmakedefine HAVE_PTHREAD_GET_STACKADDR_NP 1
/* Define to 1 if you have the `pthread_jit_write_protect_np' function. */
#cmakedefine HAVE_PTHREAD_JIT_WRITE_PROTECT_NP 1
#cmakedefine01 HAVE_GETAUXVAL
/* Define to 1 if you have the declaration of `pthread_mutexattr_setprotocol',
and to 0 if you don't. */
#cmakedefine HAVE_DECL_PTHREAD_MUTEXATTR_SETPROTOCOL 1
/* Have a working sigaltstack */
#cmakedefine HAVE_WORKING_SIGALTSTACK 1
/* Define to 1 if you have the `shm_open' function. */
#cmakedefine HAVE_SHM_OPEN 1
/* Define to 1 if you have the `poll' function. */
#cmakedefine HAVE_POLL 1
/* epoll_create1 */
#cmakedefine HAVE_EPOLL 1
/* Define to 1 if you have the <sys/ioctl.h> header file. */
#cmakedefine HAVE_SYS_IOCTL_H 1
/* Define to 1 if you have the <net/if.h> header file. */
#cmakedefine HAVE_NET_IF_H 1
/* Can get interface list */
#cmakedefine HAVE_SIOCGIFCONF 1
/* sockaddr_in has sin_len */
#cmakedefine HAVE_SOCKADDR_IN_SIN_LEN 1
/* sockaddr_in6 has sin6_len */
#cmakedefine HAVE_SOCKADDR_IN6_SIN_LEN 1
/* Have getifaddrs */
#cmakedefine HAVE_GETIFADDRS 1
/* Have struct ifaddrs */
#cmakedefine HAVE_IFADDRS 1
/* Have access */
#cmakedefine HAVE_ACCESS 1
/* Have getpid */
#cmakedefine HAVE_GETPID 1
/* Have mktemp */
#cmakedefine HAVE_MKTEMP 1
/* Define to 1 if you have the <sys/errno.h> header file. */
#cmakedefine HAVE_SYS_ERRNO_H 1
/* Define to 1 if you have the <sys/sendfile.h> header file. */
#cmakedefine HAVE_SYS_SENDFILE_H 1
/* Define to 1 if you have the <sys/statvfs.h> header file. */
#cmakedefine HAVE_SYS_STATVFS_H 1
/* Define to 1 if you have the <sys/statfs.h> header file. */
#cmakedefine HAVE_SYS_STATFS_H 1
/* Define to 1 if you have the <sys/mman.h> header file. */
#cmakedefine HAVE_SYS_MMAN_H 1
/* Define to 1 if you have the <sys/mount.h> header file. */
#cmakedefine HAVE_SYS_MOUNT_H 1
/* Define to 1 if you have the `getfsstat' function. */
#cmakedefine HAVE_GETFSSTAT 1
/* Define to 1 if you have the `mremap' function. */
#cmakedefine HAVE_MREMAP 1
/* Define to 1 if you have the `posix_fadvise' function. */
#cmakedefine HAVE_POSIX_FADVISE 1
/* Define to 1 if you have the `vsnprintf' function. */
#cmakedefine HAVE_VSNPRINTF 1
/* Define to 1 if you have the `sendfile' function. */
#cmakedefine HAVE_SENDFILE 1
/* struct statfs */
#cmakedefine HAVE_STATFS 1
/* Define to 1 if you have the `statvfs' function. */
#cmakedefine HAVE_STATVFS 1
/* Define to 1 if you have the `setpgid' function. */
#cmakedefine HAVE_SETPGID 1
/* Define to 1 if you have the `system' function. */
#ifdef _MSC_VER
#if HAVE_WINAPI_FAMILY_SUPPORT(HAVE_CLASSIC_WINAPI_SUPPORT)
#cmakedefine HAVE_SYSTEM 1
#endif
#else
#cmakedefine HAVE_SYSTEM 1
#endif
/* Define to 1 if you have the `fork' function. */
#cmakedefine HAVE_FORK 1
/* Define to 1 if you have the `execv' function. */
#cmakedefine HAVE_EXECV 1
/* Define to 1 if you have the `execve' function. */
#cmakedefine HAVE_EXECVE 1
/* Define to 1 if you have the `waitpid' function. */
#cmakedefine HAVE_WAITPID 1
/* Define to 1 if you have the `localtime_r' function. */
#cmakedefine HAVE_LOCALTIME_R 1
/* Define to 1 if you have the `mkdtemp' function. */
#cmakedefine HAVE_MKDTEMP 1
/* The size of `size_t', as computed by sizeof. */
#define SIZEOF_SIZE_T @SIZEOF_SIZE_T@
#cmakedefine01 HAVE_GNU_STRERROR_R
/* Define to 1 if the system has the type `struct sockaddr'. */
#cmakedefine HAVE_STRUCT_SOCKADDR 1
/* Define to 1 if the system has the type `struct sockaddr_in'. */
#cmakedefine HAVE_STRUCT_SOCKADDR_IN 1
/* Define to 1 if the system has the type `struct sockaddr_in6'. */
#cmakedefine HAVE_STRUCT_SOCKADDR_IN6 1
/* Define to 1 if the system has the type `struct stat'. */
#cmakedefine HAVE_STRUCT_STAT 1
/* Define to 1 if the system has the type `struct timeval'. */
#cmakedefine HAVE_STRUCT_TIMEVAL 1
/* Define to 1 if `st_atim' is a member of `struct stat'. */
#cmakedefine HAVE_STRUCT_STAT_ST_ATIM 1
/* Define to 1 if `st_atimespec' is a member of `struct stat'. */
#cmakedefine HAVE_STRUCT_STAT_ST_ATIMESPEC 1
/* Define to 1 if `kp_proc' is a member of `struct kinfo_proc'. */
#cmakedefine HAVE_STRUCT_KINFO_PROC_KP_PROC 1
/* Define to 1 if you have the <sys/time.h> header file. */
#cmakedefine HAVE_SYS_TIME_H 1
/* Define to 1 if you have the <dirent.h> header file. */
#cmakedefine HAVE_DIRENT_H 1
/* Define to 1 if you have the <CommonCrypto/CommonDigest.h> header file. */
#cmakedefine HAVE_COMMONCRYPTO_COMMONDIGEST_H 1
/* Define to 1 if you have the <sys/random.h> header file. */
#cmakedefine HAVE_SYS_RANDOM_H 1
/* Define to 1 if you have the `getrandom' function. */
#cmakedefine HAVE_GETRANDOM 1
/* Define to 1 if you have the `getentropy' function. */
#cmakedefine HAVE_GETENTROPY 1
/* Qp2getifaddrs */
#cmakedefine HAVE_QP2GETIFADDRS 1
/* Define to 1 if you have the `strlcpy' function. */
#cmakedefine HAVE_STRLCPY 1
/* Define to 1 if you have the <winternl.h> header file. */
#cmakedefine HAVE_WINTERNL_H 1
/* Have socklen_t */
#cmakedefine HAVE_SOCKLEN_T 1
/* Define to 1 if you have the `execvp' function. */
#cmakedefine HAVE_EXECVP 1
/* Name of /dev/random */
#define NAME_DEV_RANDOM @NAME_DEV_RANDOM@
/* Enable the allocation and indexing of arrays greater than Int32.MaxValue */
#cmakedefine MONO_BIG_ARRAYS 1
/* Enable DTrace probes */
#cmakedefine ENABLE_DTRACE 1
/* AOT cross offsets file */
#cmakedefine MONO_OFFSETS_FILE "@MONO_OFFSETS_FILE@"
/* Enable the LLVM back end */
#cmakedefine ENABLE_LLVM 1
/* Runtime support code for llvm enabled */
#cmakedefine ENABLE_LLVM_RUNTIME 1
/* 64 bit mode with 4 byte longs and pointers */
#cmakedefine MONO_ARCH_ILP32 1
/* The runtime is compiled for cross-compiling mode */
#cmakedefine MONO_CROSS_COMPILE 1
/* ... */
#cmakedefine TARGET_WASM 1
/* The JIT/AOT targets WatchOS */
#cmakedefine TARGET_WATCHOS 1
/* ... */
#cmakedefine TARGET_PS3 1
/* ... */
#cmakedefine __mono_ppc64__ 1
/* ... */
#cmakedefine TARGET_XBOX360 1
/* ... */
#cmakedefine TARGET_PS4 1
/* ... */
#cmakedefine DISABLE_HW_TRAPS 1
/* Target is RISC-V */
#cmakedefine TARGET_RISCV 1
/* Target is 32-bit RISC-V */
#cmakedefine TARGET_RISCV32 1
/* Target is 64-bit RISC-V */
#cmakedefine TARGET_RISCV64 1
/* ... */
#cmakedefine TARGET_X86 1
/* ... */
#cmakedefine TARGET_AMD64 1
/* ... */
#cmakedefine TARGET_ARM 1
/* ... */
#cmakedefine TARGET_ARM64 1
/* ... */
#cmakedefine TARGET_POWERPC 1
/* ... */
#cmakedefine TARGET_POWERPC64 1
/* ... */
#cmakedefine TARGET_S390X 1
/* ... */
#cmakedefine TARGET_MIPS 1
/* ... */
#cmakedefine TARGET_SPARC 1
/* ... */
#cmakedefine TARGET_SPARC64 1
/* ... */
#cmakedefine HOST_WASM 1
/* ... */
#cmakedefine HOST_BROWSER 1
/* ... */
#cmakedefine HOST_WASI 1
/* ... */
#cmakedefine HOST_X86 1
/* ... */
#cmakedefine HOST_AMD64 1
/* ... */
#cmakedefine HOST_ARM 1
/* ... */
#cmakedefine HOST_ARM64 1
/* ... */
#cmakedefine HOST_POWERPC 1
/* ... */
#cmakedefine HOST_POWERPC64 1
/* ... */
#cmakedefine HOST_S390X 1
/* ... */
#cmakedefine HOST_MIPS 1
/* ... */
#cmakedefine HOST_SPARC 1
/* ... */
#cmakedefine HOST_SPARC64 1
/* Host is RISC-V */
#cmakedefine HOST_RISCV 1
/* Host is 32-bit RISC-V */
#cmakedefine HOST_RISCV32 1
/* Host is 64-bit RISC-V */
#cmakedefine HOST_RISCV64 1
/* ... */
#cmakedefine USE_GCC_ATOMIC_OPS 1
/* The JIT/AOT targets iOS */
#cmakedefine TARGET_IOS 1
/* The JIT/AOT targets tvOS */
#cmakedefine TARGET_TVOS 1
/* The JIT/AOT targets Mac Catalyst */
#cmakedefine TARGET_MACCAT 1
/* The JIT/AOT targets OSX */
#cmakedefine TARGET_OSX 1
/* The JIT/AOT targets Apple platforms */
#cmakedefine TARGET_MACH 1
/* byte order of target */
#define TARGET_BYTE_ORDER @TARGET_BYTE_ORDER@
/* wordsize of target */
#define TARGET_SIZEOF_VOID_P @TARGET_SIZEOF_VOID_P@
/* size of target machine integer registers */
#define SIZEOF_REGISTER @SIZEOF_REGISTER@
/* host or target doesn't allow unaligned memory access */
#cmakedefine NO_UNALIGNED_ACCESS 1
/* Support for the visibility ("hidden") attribute */
#cmakedefine HAVE_VISIBILITY_HIDDEN 1
/* Support for the deprecated attribute */
#cmakedefine HAVE_DEPRECATED 1
/* Moving collector */
#cmakedefine HAVE_MOVING_COLLECTOR 1
/* Defaults to concurrent GC */
#cmakedefine HAVE_CONC_GC_AS_DEFAULT 1
/* Define to 1 if you have the `stpcpy' function. */
#cmakedefine HAVE_STPCPY 1
/* Define to 1 if you have the `strtok_r' function. */
#cmakedefine HAVE_STRTOK_R 1
/* Define to 1 if you have the `rewinddir' function. */
#cmakedefine HAVE_REWINDDIR 1
/* Define to 1 if you have the `vasprintf' function. */
#cmakedefine HAVE_VASPRINTF 1
/* Overridable allocator support enabled */
#cmakedefine ENABLE_OVERRIDABLE_ALLOCATORS 1
/* Define to 1 if you have the `strndup' function. */
#cmakedefine HAVE_STRNDUP 1
/* Define to 1 if you have the <getopt.h> header file. */
#cmakedefine HAVE_GETOPT_H 1
/* Icall symbol map enabled */
#cmakedefine ENABLE_ICALL_SYMBOL_MAP 1
/* Icall export enabled */
#cmakedefine ENABLE_ICALL_EXPORT 1
/* Icall tables disabled */
#cmakedefine DISABLE_ICALL_TABLES 1
/* QCalls disabled */
#cmakedefine DISABLE_QCALLS 1
/* Embedded PDB support disabled */
#cmakedefine DISABLE_EMBEDDED_PDB
/* log profiler compressed output disabled */
#cmakedefine DISABLE_LOG_PROFILER_GZ
/* Have __thread keyword */
#cmakedefine MONO_KEYWORD_THREAD @MONO_KEYWORD_THREAD@
/* tls_model available */
#cmakedefine HAVE_TLS_MODEL_ATTR 1
/* ARM v5 */
#cmakedefine HAVE_ARMV5 1
/* ARM v6 */
#cmakedefine HAVE_ARMV6 1
/* ARM v7 */
#cmakedefine HAVE_ARMV7 1
/* RISC-V FPABI is double-precision */
#cmakedefine RISCV_FPABI_DOUBLE 1
/* RISC-V FPABI is single-precision */
#cmakedefine RISCV_FPABI_SINGLE 1
/* RISC-V FPABI is soft float */
#cmakedefine RISCV_FPABI_SOFT 1
/* Use malloc for each single mempool allocation */
#cmakedefine USE_MALLOC_FOR_MEMPOOLS 1
/* Enable lazy gc thread creation by the embedding host. */
#cmakedefine LAZY_GC_THREAD_CREATION 1
/* Enable cooperative stop-the-world garbage collection. */
#cmakedefine ENABLE_COOP_SUSPEND 1
/* Enable hybrid suspend for GC stop-the-world */
#cmakedefine ENABLE_HYBRID_SUSPEND 1
/* Enable feature experiments */
#cmakedefine ENABLE_EXPERIMENTS 1
/* Enable experiment 'null' */
#cmakedefine ENABLE_EXPERIMENT_null 1
/* Enable experiment 'Tiered Compilation' */
#cmakedefine ENABLE_EXPERIMENT_TIERED 1
/* Enable checked build */
#cmakedefine ENABLE_CHECKED_BUILD 1
/* Enable GC checked build */
#cmakedefine ENABLE_CHECKED_BUILD_GC 1
/* Enable metadata checked build */
#cmakedefine ENABLE_CHECKED_BUILD_METADATA 1
/* Enable thread checked build */
#cmakedefine ENABLE_CHECKED_BUILD_THREAD 1
/* Enable private types checked build */
#cmakedefine ENABLE_CHECKED_BUILD_PRIVATE_TYPES 1
/* Enable EventPipe library support */
#cmakedefine ENABLE_PERFTRACING 1
/* Define to 1 if you have /usr/include/malloc.h. */
#cmakedefine HAVE_USR_INCLUDE_MALLOC_H 1
/* The architecture this is running on */
#define MONO_ARCHITECTURE @MONO_ARCHITECTURE@
/* Disable banned functions from being used by the runtime */
#cmakedefine MONO_INSIDE_RUNTIME 1
/* Version number of package */
#define VERSION @VERSION@
/* Full version number of package */
#define FULL_VERSION @FULL_VERSION@
/* Define to 1 if you have the <dlfcn.h> header file. */
#cmakedefine HAVE_DLFCN_H 1
/* Enable lazy gc thread creation by the embedding host */
#cmakedefine LAZY_GC_THREAD_CREATION 1
/* Enable additional checks */
#cmakedefine ENABLE_CHECKED_BUILD 1
/* Enable compile time checking that getter functions are used */
#cmakedefine ENABLE_CHECKED_BUILD_PRIVATE_TYPES 1
/* Enable runtime GC Safe / Unsafe mode assertion checks (must set env var MONO_CHECK_MODE=gc) */
#cmakedefine ENABLE_CHECKED_BUILD_GC 1
/* Enable runtime history of per-thread coop state transitions (must set env var MONO_CHECK_MODE=thread) */
#cmakedefine ENABLE_CHECKED_BUILD_THREAD 1
/* Enable runtime checks of mempool references between metadata images (must set env var MONO_CHECK_MODE=metadata) */
#cmakedefine ENABLE_CHECKED_BUILD_METADATA 1
/* Enable static linking of mono runtime components */
#cmakedefine STATIC_COMPONENTS
/* Enable perf jit dump support */
#cmakedefine ENABLE_JIT_DUMP 1
#if defined(ENABLE_LLVM) && defined(HOST_WIN32) && defined(TARGET_WIN32) && (!defined(TARGET_AMD64) || !defined(_MSC_VER))
#error LLVM for host=Windows and target=Windows is only supported on x64 MSVC build.
#endif
#endif
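/* Illustrative sketch, not part of the original template: when CMake's
   configure_file() processes this header, each "#cmakedefine FOO 1" line becomes
   "#define FOO 1" if the variable FOO is set at configure time, and a
   commented-out "#undef FOO" otherwise, while "@VAR@" tokens such as
   @FULL_VERSION@ are replaced with the variable's literal value. */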
| 1 |
dotnet/runtime | 66,211 | [mono] Remove SkipVerification support from the runtime | CAS support was removed in .NET Core. This allows us to remove a bunch of unused code, e.g. the dependency on libiconv. | akoeplinger | 2022-03-04T19:47:04Z | 2022-03-06T13:44:33Z | b463b1630dbf1be5b013208a9fa73e1ecd6c774c | be629f49a350d526de2c65981294734cee420b90 | [mono] Remove SkipVerification support from the runtime. CAS support was removed in .NET Core. This allows us to remove a bunch of unused code, e.g. the dependency on libiconv. | ./src/mono/cmake/configure.cmake | #
# Configure checks
#
include(CheckCCompilerFlag)
include(CheckCSourceCompiles)
include(CheckIncludeFiles)
include(CheckStructHasMember)
include(CheckSymbolExists)
include(CheckTypeSize)
# Apple platforms like macOS/iOS allow targeting older operating system versions with a single SDK,
# the mere presence of a symbol in the SDK doesn't tell us whether the deployment target really supports it.
# The compiler raises a warning when using an unsupported API; turn that into an error so check_symbol_exists()
# can correctly identify whether the API is supported on the target.
check_c_compiler_flag("-Wunguarded-availability" "C_SUPPORTS_WUNGUARDED_AVAILABILITY")
if(C_SUPPORTS_WUNGUARDED_AVAILABILITY)
set(CMAKE_REQUIRED_FLAGS "${CMAKE_REQUIRED_FLAGS} -Werror=unguarded-availability")
endif()
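# Illustrative sketch, not part of the original file: with the flag above in effect,
# probing for a symbol that exists in the SDK but is newer than the deployment
# target fails instead of passing. For example, with a macOS 10.14 deployment
# target, a probe like
#   check_symbol_exists(pthread_jit_write_protect_np "pthread.h" HAVE_PTHREAD_JIT_WRITE_PROTECT_NP)
# reports failure, because the compiler's unguarded-availability warning for the
# newer API is promoted to an error.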
if(HOST_SOLARIS)
set(CMAKE_REQUIRED_DEFINITIONS "${CMAKE_REQUIRED_DEFINITIONS} -DGC_SOLARIS_THREADS -DGC_SOLARIS_PTHREADS -D_REENTRANT -D_POSIX_PTHREAD_SEMANTICS -DUSE_MMAP -DUSE_MUNMAP -DHOST_SOLARIS -D__EXTENSIONS__ -D_XPG4_2")
endif()
if(HOST_WASI)
set(CMAKE_REQUIRED_DEFINITIONS "${CMAKE_REQUIRED_DEFINITIONS} -D_WASI_EMULATED_SIGNAL -D_WASI_EMULATED_MMAN")
endif()
function(ac_check_headers)
foreach(arg ${ARGN})
check_include_file ("${arg}" FOUND_${arg})
string(TOUPPER "${arg}" var1)
string(REPLACE "/" "_" var2 ${var1})
string(REPLACE "." "_" var3 ${var2})
string(REPLACE "-" "_" var4 ${var3})
if (FOUND_${arg})
set(HAVE_${var4} 1 PARENT_SCOPE)
endif()
endforeach(arg)
endfunction()
function(ac_check_funcs)
foreach(arg ${ARGN})
check_function_exists ("${arg}" FOUND_${arg})
string(TOUPPER "${arg}" var1)
string(REPLACE "/" "_" var2 ${var1})
string(REPLACE "." "_" var3 ${var2})
if (FOUND_${arg})
set(HAVE_${var3} 1 PARENT_SCOPE)
endif()
endforeach(arg)
endfunction()
function(ac_check_type type suffix includes)
set(CMAKE_EXTRA_INCLUDE_FILES ${includes})
check_type_size(${type} AC_CHECK_TYPE_${suffix})
if (AC_CHECK_TYPE_${suffix})
string(TOUPPER "${type}" var1)
string(REPLACE "/" "_" var2 ${var1})
string(REPLACE "." "_" var3 ${var2})
string(REPLACE " " "_" var4 ${var3})
set(HAVE_${var4} 1 PARENT_SCOPE)
endif()
set(CMAKE_EXTRA_INCLUDE_FILES)
endfunction()
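# Usage sketch, not part of the original file: each helper upper-cases the name and
# maps separators ('/', '.', '-', and spaces, as applicable) to '_' before setting
# the HAVE_ variable in the parent scope, e.g.:
#   ac_check_headers(sys/stat.h)                               -> HAVE_SYS_STAT_H
#   ac_check_funcs(getrandom)                                  -> HAVE_GETRANDOM
#   ac_check_type("struct ip_mreqn" ip_mreqn "netinet/in.h")   -> HAVE_STRUCT_IP_MREQN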
ac_check_headers (
sys/types.h sys/stat.h sys/filio.h sys/sockio.h sys/utime.h sys/un.h sys/syscall.h sys/uio.h sys/param.h
sys/prctl.h sys/socket.h sys/utsname.h sys/select.h sys/poll.h sys/wait.h sys/auxv.h sys/resource.h
sys/ioctl.h sys/errno.h sys/sendfile.h sys/statvfs.h sys/statfs.h sys/mman.h sys/mount.h sys/time.h sys/random.h
strings.h stdint.h unistd.h signal.h setjmp.h syslog.h netdb.h utime.h semaphore.h libproc.h alloca.h ucontext.h pwd.h elf.h
gnu/lib-names.h netinet/tcp.h netinet/in.h link.h arpa/inet.h unwind.h poll.h wchar.h linux/magic.h
android/legacy_signal_inlines.h android/ndk-version.h execinfo.h pthread.h pthread_np.h net/if.h dirent.h
CommonCrypto/CommonDigest.h dlfcn.h getopt.h pwd.h iconv.h alloca.h
/usr/include/malloc.h)
ac_check_funcs (
sigaction kill clock_nanosleep kqueue backtrace_symbols mkstemp mmap
getrusage dladdr sysconf getrlimit prctl nl_langinfo
sched_getaffinity sched_setaffinity getpwuid_r readlink chmod lstat getdtablesize ftruncate msync
getpeername utime utimes openlog closelog atexit popen strerror_r inet_pton inet_aton
shm_open poll getfsstat mremap posix_fadvise vsnprintf sendfile statfs statvfs setpgid system
fork execv execve waitpid localtime_r mkdtemp getrandom execvp strlcpy stpcpy strtok_r rewinddir
vasprintf strndup getpwuid_r getprotobyname getprotobyname_r getaddrinfo mach_absolute_time
gethrtime read_real_time gethostbyname gethostbyname2 getnameinfo getifaddrs
access inet_ntop Qp2getifaddrs getpid mktemp)
if (HOST_LINUX)
# sysctl is deprecated on Linux
set(HAVE_SYS_SYSCTL_H 0)
else ()
check_include_files("sys/types.h;sys/sysctl.h" HAVE_SYS_SYSCTL_H)
endif()
check_include_files("sys/types.h;sys/user.h" HAVE_SYS_USER_H)
if(NOT HOST_DARWIN)
# getentropy was introduced in macOS 10.12 / iOS 10.0
ac_check_funcs (getentropy)
endif()
find_package(Threads)
# Needed to find pthread_ symbols
set(CMAKE_REQUIRED_LIBRARIES "${CMAKE_REQUIRED_LIBRARIES} ${CMAKE_THREAD_LIBS_INIT}")
ac_check_funcs(
pthread_getname_np pthread_setname_np pthread_cond_timedwait_relative_np pthread_kill
pthread_attr_setstacksize pthread_get_stackaddr_np
)
check_symbol_exists(madvise "sys/mman.h" HAVE_MADVISE)
check_symbol_exists(pthread_mutexattr_setprotocol "pthread.h" HAVE_DECL_PTHREAD_MUTEXATTR_SETPROTOCOL)
check_symbol_exists(CLOCK_MONOTONIC "time.h" HAVE_CLOCK_MONOTONIC)
check_symbol_exists(CLOCK_MONOTONIC_COARSE "time.h" HAVE_CLOCK_MONOTONIC_COARSE)
check_symbol_exists(sys_signame "signal.h" HAVE_SYSSIGNAME)
check_symbol_exists(pthread_jit_write_protect_np "pthread.h" HAVE_PTHREAD_JIT_WRITE_PROTECT_NP)
check_symbol_exists(getauxval sys/auxv.h HAVE_GETAUXVAL)
ac_check_type("struct sockaddr_in6" sockaddr_in6 "netinet/in.h")
ac_check_type("struct timeval" timeval "sys/time.h;sys/types.h;utime.h")
ac_check_type("socklen_t" socklen_t "sys/types.h;sys/socket.h")
ac_check_type("struct ip_mreqn" ip_mreqn "netinet/in.h")
ac_check_type("struct ip_mreq" ip_mreq "netinet/in.h")
ac_check_type("clockid_t" clockid_t "sys/types.h")
check_struct_has_member("struct kinfo_proc" kp_proc "sys/types.h;sys/param.h;sys/sysctl.h;sys/proc.h" HAVE_STRUCT_KINFO_PROC_KP_PROC)
check_struct_has_member("struct sockaddr_in" sin_len "netinet/in.h" HAVE_SOCKADDR_IN_SIN_LEN)
check_struct_has_member("struct sockaddr_in6" sin6_len "netinet/in.h" HAVE_SOCKADDR_IN6_SIN_LEN)
check_struct_has_member("struct stat" st_atim "sys/types.h;sys/stat.h;unistd.h" HAVE_STRUCT_STAT_ST_ATIM)
check_struct_has_member("struct stat" st_atimespec "sys/types.h;sys/stat.h;unistd.h" HAVE_STRUCT_STAT_ST_ATIMESPEC)
check_type_size("int" SIZEOF_INT)
check_type_size("void*" SIZEOF_VOID_P)
check_type_size("long" SIZEOF_LONG)
check_type_size("long long" SIZEOF_LONG_LONG)
check_type_size("size_t" SIZEOF_SIZE_T)
if (HOST_LINUX)
set(CMAKE_REQUIRED_DEFINITIONS -D_GNU_SOURCE)
endif()
check_c_source_compiles(
"
#include <string.h>
int main(void)
{
char buffer[1];
char c = *strerror_r(0, buffer, 0);
return 0;
}
"
HAVE_GNU_STRERROR_R)
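# Note (sketch, not part of the original file): the probe above compiles only against
# the GNU variant of strerror_r(), which returns a char * that can be dereferenced;
# the POSIX/XSI variant returns int, so `*strerror_r(...)` is a compile error and
# HAVE_GNU_STRERROR_R stays unset.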
check_c_source_compiles(
"
#include <sched.h>
int main(void)
{
CPU_COUNT((void *) 0);
return 0;
}
"
HAVE_GNU_CPU_COUNT)
if (HOST_LINUX OR HOST_ANDROID)
set(CMAKE_REQUIRED_DEFINITIONS)
endif()
# ICONV
set(ICONV_LIB)
find_library(LIBICONV_FOUND iconv)
if(NOT LIBICONV_FOUND STREQUAL "LIBICONV_FOUND-NOTFOUND")
set(ICONV_LIB "iconv")
endif()
if(HOST_WIN32)
# checking for this doesn't work for some reason, hardcode result
set(HAVE_WINTERNL_H 1)
set(HAVE_CRYPT_RNG 1)
set(HAVE_GETADDRINFO 1)
set(HAVE_GETNAMEINFO 1)
set(HAVE_GETPROTOBYNAME 1)
set(HAVE_INET_NTOP 1)
set(HAVE_INET_PTON 1)
set(HAVE_STRUCT_SOCKADDR_IN6 1)
set(HAVE_STRTOK_R 1)
set(HAVE_EXECVP 0)
elseif(HOST_IOS)
set(HAVE_SYSTEM 0)
set(HAVE_GETPWUID_R 0)
set(HAVE_SYS_USER_H 0)
set(HAVE_GETENTROPY 0)
if(HOST_TVOS)
set(HAVE_PTHREAD_KILL 0)
set(HAVE_KILL 0)
set(HAVE_SIGACTION 0)
set(HAVE_FORK 0)
set(HAVE_EXECV 0)
set(HAVE_EXECVE 0)
set(HAVE_EXECVP 0)
endif()
elseif(HOST_MACCAT)
set(HAVE_SYSTEM 0)
elseif(HOST_BROWSER)
set(HAVE_FORK 0)
elseif(HOST_SOLARIS)
set(HAVE_GETPROTOBYNAME 1)
set(HAVE_NETINET_TCP_H 1)
set(HAVE_GETADDRINFO 1)
elseif(HOST_WASI)
# Redirected to errno.h
set(SYS_ERRNO_H 0)
# Some headers exist, but don't compile (wasi sdk 12.0)
set(HAVE_SYS_SOCKET_H 0)
set(HAVE_SYS_UN_H 0)
set(HAVE_NETINET_TCP_H 0)
set(HAVE_ARPA_INET_H 0)
set(HAVE_GETPWUID_R 0)
set(HAVE_MKDTEMP 0)
set(HAVE_EXECVE 0)
set(HAVE_FORK 0)
set(HAVE_GETRLIMIT 0)
set(HAVE_GETDTABLESIZE 0)
set(HAVE_MKSTEMP 0)
set(HAVE_BACKTRACE_SYMBOLS 0)
set(HAVE_GETPID 0)
set(HAVE_MACH_ABSOLUTE_TIME 0)
set(HAVE_GETHRTIME 0)
set(HAVE_READ_REAL_TIME 0)
set(HAVE_SCHED_GETAFFINITY 0)
set(HAVE_SCHED_SETAFFINITY 0)
set(HAVE_GETIFADDRS 0)
set(HAVE_QP2GETIFADDRS 0)
set(HAVE_GETADDRINFO 0)
set(HAVE_GETHOSTBYNAME 0)
set(HAVE_GETHOSTBYNAME2 0)
set(HAVE_GETPROTOBYNAME 0)
set(HAVE_GETNAMEINFO 0)
set(HAVE_INET_NTOP 0)
set(HAVE_SYS_ICU 0)
set(HAVE_EXECVP 0)
set(HAVE_MMAP 1)
set(DISABLE_PROFILER 1)
set(ENABLE_INTERP_LIB 1)
endif()
| #
# Configure checks
#
include(CheckCCompilerFlag)
include(CheckCSourceCompiles)
include(CheckIncludeFiles)
include(CheckStructHasMember)
include(CheckSymbolExists)
include(CheckTypeSize)
# Apple platforms like macOS/iOS allow targeting older operating system versions with a single SDK,
# the mere presence of a symbol in the SDK doesn't tell us whether the deployment target really supports it.
# The compiler raises a warning when using an unsupported API; turn that into an error so check_symbol_exists()
# can correctly identify whether the API is supported on the target.
check_c_compiler_flag("-Wunguarded-availability" "C_SUPPORTS_WUNGUARDED_AVAILABILITY")
if(C_SUPPORTS_WUNGUARDED_AVAILABILITY)
set(CMAKE_REQUIRED_FLAGS "${CMAKE_REQUIRED_FLAGS} -Werror=unguarded-availability")
endif()
if(HOST_SOLARIS)
set(CMAKE_REQUIRED_DEFINITIONS "${CMAKE_REQUIRED_DEFINITIONS} -DGC_SOLARIS_THREADS -DGC_SOLARIS_PTHREADS -D_REENTRANT -D_POSIX_PTHREAD_SEMANTICS -DUSE_MMAP -DUSE_MUNMAP -DHOST_SOLARIS -D__EXTENSIONS__ -D_XPG4_2")
endif()
if(HOST_WASI)
set(CMAKE_REQUIRED_DEFINITIONS "${CMAKE_REQUIRED_DEFINITIONS} -D_WASI_EMULATED_SIGNAL -D_WASI_EMULATED_MMAN")
endif()
function(ac_check_headers)
foreach(arg ${ARGN})
check_include_file ("${arg}" FOUND_${arg})
string(TOUPPER "${arg}" var1)
string(REPLACE "/" "_" var2 ${var1})
string(REPLACE "." "_" var3 ${var2})
string(REPLACE "-" "_" var4 ${var3})
if (FOUND_${arg})
set(HAVE_${var4} 1 PARENT_SCOPE)
endif()
endforeach(arg)
endfunction()
function(ac_check_funcs)
foreach(arg ${ARGN})
check_function_exists ("${arg}" FOUND_${arg})
string(TOUPPER "${arg}" var1)
string(REPLACE "/" "_" var2 ${var1})
string(REPLACE "." "_" var3 ${var2})
if (FOUND_${arg})
set(HAVE_${var3} 1 PARENT_SCOPE)
endif()
endforeach(arg)
endfunction()
function(ac_check_type type suffix includes)
set(CMAKE_EXTRA_INCLUDE_FILES ${includes})
check_type_size(${type} AC_CHECK_TYPE_${suffix})
if (AC_CHECK_TYPE_${suffix})
string(TOUPPER "${type}" var1)
string(REPLACE "/" "_" var2 ${var1})
string(REPLACE "." "_" var3 ${var2})
string(REPLACE " " "_" var4 ${var3})
set(HAVE_${var4} 1 PARENT_SCOPE)
endif()
set(CMAKE_EXTRA_INCLUDE_FILES)
endfunction()
ac_check_headers (
sys/types.h sys/stat.h sys/filio.h sys/sockio.h sys/utime.h sys/un.h sys/syscall.h sys/uio.h sys/param.h
sys/prctl.h sys/socket.h sys/utsname.h sys/select.h sys/poll.h sys/wait.h sys/auxv.h sys/resource.h
sys/ioctl.h sys/errno.h sys/sendfile.h sys/statvfs.h sys/statfs.h sys/mman.h sys/mount.h sys/time.h sys/random.h
strings.h stdint.h unistd.h signal.h setjmp.h syslog.h netdb.h utime.h semaphore.h libproc.h alloca.h ucontext.h pwd.h elf.h
gnu/lib-names.h netinet/tcp.h netinet/in.h link.h arpa/inet.h unwind.h poll.h wchar.h linux/magic.h
android/legacy_signal_inlines.h android/ndk-version.h execinfo.h pthread.h pthread_np.h net/if.h dirent.h
CommonCrypto/CommonDigest.h dlfcn.h getopt.h pwd.h alloca.h
/usr/include/malloc.h)
ac_check_funcs (
sigaction kill clock_nanosleep kqueue backtrace_symbols mkstemp mmap
getrusage dladdr sysconf getrlimit prctl nl_langinfo
sched_getaffinity sched_setaffinity getpwuid_r readlink chmod lstat getdtablesize ftruncate msync
getpeername utime utimes openlog closelog atexit popen strerror_r inet_pton inet_aton
shm_open poll getfsstat mremap posix_fadvise vsnprintf sendfile statfs statvfs setpgid system
fork execv execve waitpid localtime_r mkdtemp getrandom execvp strlcpy stpcpy strtok_r rewinddir
vasprintf strndup getpwuid_r getprotobyname getprotobyname_r getaddrinfo mach_absolute_time
gethrtime read_real_time gethostbyname gethostbyname2 getnameinfo getifaddrs
access inet_ntop Qp2getifaddrs getpid mktemp)
if (HOST_LINUX)
# sysctl is deprecated on Linux
set(HAVE_SYS_SYSCTL_H 0)
else ()
check_include_files("sys/types.h;sys/sysctl.h" HAVE_SYS_SYSCTL_H)
endif()
check_include_files("sys/types.h;sys/user.h" HAVE_SYS_USER_H)
if(NOT HOST_DARWIN)
# getentropy was introduced in macOS 10.12 / iOS 10.0
ac_check_funcs (getentropy)
endif()
find_package(Threads)
# Needed to find pthread_ symbols
set(CMAKE_REQUIRED_LIBRARIES "${CMAKE_REQUIRED_LIBRARIES} ${CMAKE_THREAD_LIBS_INIT}")
ac_check_funcs(
pthread_getname_np pthread_setname_np pthread_cond_timedwait_relative_np pthread_kill
pthread_attr_setstacksize pthread_get_stackaddr_np
)
check_symbol_exists(madvise "sys/mman.h" HAVE_MADVISE)
check_symbol_exists(pthread_mutexattr_setprotocol "pthread.h" HAVE_DECL_PTHREAD_MUTEXATTR_SETPROTOCOL)
check_symbol_exists(CLOCK_MONOTONIC "time.h" HAVE_CLOCK_MONOTONIC)
check_symbol_exists(CLOCK_MONOTONIC_COARSE "time.h" HAVE_CLOCK_MONOTONIC_COARSE)
check_symbol_exists(sys_signame "signal.h" HAVE_SYSSIGNAME)
check_symbol_exists(pthread_jit_write_protect_np "pthread.h" HAVE_PTHREAD_JIT_WRITE_PROTECT_NP)
check_symbol_exists(getauxval sys/auxv.h HAVE_GETAUXVAL)
ac_check_type("struct sockaddr_in6" sockaddr_in6 "netinet/in.h")
ac_check_type("struct timeval" timeval "sys/time.h;sys/types.h;utime.h")
ac_check_type("socklen_t" socklen_t "sys/types.h;sys/socket.h")
ac_check_type("struct ip_mreqn" ip_mreqn "netinet/in.h")
ac_check_type("struct ip_mreq" ip_mreq "netinet/in.h")
ac_check_type("clockid_t" clockid_t "sys/types.h")
check_struct_has_member("struct kinfo_proc" kp_proc "sys/types.h;sys/param.h;sys/sysctl.h;sys/proc.h" HAVE_STRUCT_KINFO_PROC_KP_PROC)
check_struct_has_member("struct sockaddr_in" sin_len "netinet/in.h" HAVE_SOCKADDR_IN_SIN_LEN)
check_struct_has_member("struct sockaddr_in6" sin6_len "netinet/in.h" HAVE_SOCKADDR_IN6_SIN_LEN)
check_struct_has_member("struct stat" st_atim "sys/types.h;sys/stat.h;unistd.h" HAVE_STRUCT_STAT_ST_ATIM)
check_struct_has_member("struct stat" st_atimespec "sys/types.h;sys/stat.h;unistd.h" HAVE_STRUCT_STAT_ST_ATIMESPEC)
check_type_size("int" SIZEOF_INT)
check_type_size("void*" SIZEOF_VOID_P)
check_type_size("long" SIZEOF_LONG)
check_type_size("long long" SIZEOF_LONG_LONG)
check_type_size("size_t" SIZEOF_SIZE_T)
if (HOST_LINUX)
set(CMAKE_REQUIRED_DEFINITIONS -D_GNU_SOURCE)
endif()
check_c_source_compiles(
"
#include <string.h>
int main(void)
{
char buffer[1];
char c = *strerror_r(0, buffer, 0);
return 0;
}
"
HAVE_GNU_STRERROR_R)
check_c_source_compiles(
"
#include <sched.h>
int main(void)
{
CPU_COUNT((void *) 0);
return 0;
}
"
HAVE_GNU_CPU_COUNT)
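# Note (sketch, not part of the original file): CPU_COUNT is a GNU extension macro in
# <sched.h> that counts the bits set in a cpu_set_t; the probe only compiles where
# the macro is provided, so HAVE_GNU_CPU_COUNT records whether the libc offers it.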
if (HOST_LINUX OR HOST_ANDROID)
set(CMAKE_REQUIRED_DEFINITIONS)
endif()
if(HOST_WIN32)
# checking for this doesn't work for some reason, hardcode result
set(HAVE_WINTERNL_H 1)
set(HAVE_CRYPT_RNG 1)
set(HAVE_GETADDRINFO 1)
set(HAVE_GETNAMEINFO 1)
set(HAVE_GETPROTOBYNAME 1)
set(HAVE_INET_NTOP 1)
set(HAVE_INET_PTON 1)
set(HAVE_STRUCT_SOCKADDR_IN6 1)
set(HAVE_STRTOK_R 1)
set(HAVE_EXECVP 0)
elseif(HOST_IOS)
set(HAVE_SYSTEM 0)
set(HAVE_GETPWUID_R 0)
set(HAVE_SYS_USER_H 0)
set(HAVE_GETENTROPY 0)
if(HOST_TVOS)
set(HAVE_PTHREAD_KILL 0)
set(HAVE_KILL 0)
set(HAVE_SIGACTION 0)
set(HAVE_FORK 0)
set(HAVE_EXECV 0)
set(HAVE_EXECVE 0)
set(HAVE_EXECVP 0)
endif()
elseif(HOST_MACCAT)
set(HAVE_SYSTEM 0)
elseif(HOST_BROWSER)
set(HAVE_FORK 0)
elseif(HOST_SOLARIS)
set(HAVE_GETPROTOBYNAME 1)
set(HAVE_NETINET_TCP_H 1)
set(HAVE_GETADDRINFO 1)
elseif(HOST_WASI)
# Redirected to errno.h
set(SYS_ERRNO_H 0)
# Some headers exist, but don't compile (wasi sdk 12.0)
set(HAVE_SYS_SOCKET_H 0)
set(HAVE_SYS_UN_H 0)
set(HAVE_NETINET_TCP_H 0)
set(HAVE_ARPA_INET_H 0)
set(HAVE_GETPWUID_R 0)
set(HAVE_MKDTEMP 0)
set(HAVE_EXECVE 0)
set(HAVE_FORK 0)
set(HAVE_GETRLIMIT 0)
set(HAVE_GETDTABLESIZE 0)
set(HAVE_MKSTEMP 0)
set(HAVE_BACKTRACE_SYMBOLS 0)
set(HAVE_GETPID 0)
set(HAVE_MACH_ABSOLUTE_TIME 0)
set(HAVE_GETHRTIME 0)
set(HAVE_READ_REAL_TIME 0)
set(HAVE_SCHED_GETAFFINITY 0)
set(HAVE_SCHED_SETAFFINITY 0)
set(HAVE_GETIFADDRS 0)
set(HAVE_QP2GETIFADDRS 0)
set(HAVE_GETADDRINFO 0)
set(HAVE_GETHOSTBYNAME 0)
set(HAVE_GETHOSTBYNAME2 0)
set(HAVE_GETPROTOBYNAME 0)
set(HAVE_GETNAMEINFO 0)
set(HAVE_INET_NTOP 0)
set(HAVE_SYS_ICU 0)
set(HAVE_EXECVP 0)
set(HAVE_MMAP 1)
set(DISABLE_PROFILER 1)
set(ENABLE_INTERP_LIB 1)
endif()
| 1 |
dotnet/runtime | 66,211 | [mono] Remove SkipVerification support from the runtime | CAS support was removed in .NET Core. This allows us to remove a bunch of unused code, e.g. the dependency on libiconv. | akoeplinger | 2022-03-04T19:47:04Z | 2022-03-06T13:44:33Z | b463b1630dbf1be5b013208a9fa73e1ecd6c774c | be629f49a350d526de2c65981294734cee420b90 | [mono] Remove SkipVerification support from the runtime. CAS support was removed in .NET Core. This allows us to remove a bunch of unused code, e.g. the dependency on libiconv. | ./src/mono/cmake/defines-todo.cmake | #
# Defines not yet supported by the CMake build
#
#option (MAJOR_IN_MKDEV "Define to 1 if `major', `minor', and `makedev' are declared in <mkdev.h>.")
#option (MAJOR_IN_SYSMACROS "Define to 1 if `major', `minor', and `makedev' are declared in <sysmacros.h>.")
#option (HAVE_LIBICONV "Define to 1 if you have the `iconv' library (-liconv).")
#option (ANDROID_UNIFIED_HEADERS "Whether Android NDK unified headers are used")
#option (MONO_DL_NEED_USCORE "Does dlsym require leading underscore.")
#option (GLIBC_BEFORE_2_3_4_SCHED_SETAFFINITY "Have GLIBC_BEFORE_2_3_4_SCHED_SETAFFINITY")
#option (GLIBC_HAS_CPU_COUNT "GLIBC has CPU_COUNT macro in sched.h")
#option (HAVE_LARGE_FILE_SUPPORT "Have large file support")
#option (HAVE_WORKING_SIGALTSTACK "Have a working sigaltstack")
#option (HAVE_EPOLL "epoll_create1")
#option (USE_KQUEUE_FOR_THREADPOOL "Use kqueue for the threadpool")
#option (HAVE_SIOCGIFCONF "Can get interface list")
#option (HOST_NO_SYMLINKS "This platform does not support symlinks")
#option (NEED_LINK_UNLINK "Define if Unix sockets cannot be created in an anonymous namespace")
#option (HAVE_CLASSIC_WINAPI_SUPPORT "Use classic Windows API support")
#option (HAVE_UWP_WINAPI_SUPPORT "Don't use UWP Windows API support")
#option (MONO_XEN_OPT "Xen-specific behaviour")
#option (MONO_SMALL_CONFIG "Reduce runtime requirements (and capabilities)")
#option (AC_APPLE_UNIVERSAL_BUILD "Define if building universal (internal helper macro)")
#option (MONO_JEMALLOC_ASSERT "Make jemalloc assert for mono")
#option (MONO_JEMALLOC_DEFAULT "Make jemalloc default for mono")
#option (MONO_JEMALLOC_ENABLED "Enable jemalloc usage for mono")
#option (MONO_PRIVATE_CRASHES "Do not include names of unmanaged functions in the crash dump")
#option (DISABLE_STRUCTURED_CRASH "Do not create structured crash files during unmanaged crashes")
#option (ENABLE_MONODROID "Enable runtime support for Monodroid (Xamarin.Android)")
#option (ENABLE_MONOTOUCH "Enable runtime support for Monotouch (Xamarin.iOS and Xamarin.Mac)")
#option (DISABLED_FEATURES "String of disabled features")
#option (ENABLE_ILGEN "Some VES is available at runtime")
#option (ENABLE_EXTENSION_MODULE "Extension module enabled")
#option (DEFAULT_GC_NAME "GC description")
#option (HAVE_NULL_GC "No GC support.")
#option (MONO_ZERO_LEN_ARRAY "Length of zero length arrays")
#option (KEVENT_HAS_VOID_UDATA "kevent with void *data")
#option (BIND_ADDRLEN_UNSIGNED "bind with unsigned addrlen")
#option (IPV6MR_INTERFACE_UNSIGNED "struct ipv6_mreq with unsigned ipv6mr_interface")
#option (INOTIFY_RM_WATCH_WD_UNSIGNED "inotify_rm_watch with unsigned wd")
#option (PRIORITY_REQUIRES_INT_WHO "getpriority with int who")
#option (KEVENT_REQUIRES_INT_PARAMS "kevent with int parameters")
#option (ENABLE_GSS "ENABLE_GSS")
#option (NAME_DEV_RANDOM "Name of /dev/random")
#option (MONO_BIG_ARRAYS "Enable the allocation and indexing of arrays greater than Int32.MaxValue")
#option (ENABLE_DTRACE "Enable DTrace probes")
#option (MONO_OFFSETS_FILE "AOT cross offsets file")
#option (ENABLE_LLVM "Enable the LLVM back end")
#option (ENABLE_LLVM_MSVC_ONLY "Enable the LLVM back end")
#option (INTERNAL_LLVM "LLVM used is being build during mono build")
#option (INTERNAL_LLVM_MSVC_ONLY "LLVM used is being build during mono build")
#option (INTERNAL_LLVM_ASSERTS "Build LLVM with assertions")
#option (INTERNAL_LLVM_ASSERTS_MSVC_ONLY "Build LLVM with assertions")
#option (MONO_LLVM_LOADED "The LLVM back end is dynamically loaded")
#option (ENABLE_LLVM_RUNTIME "Runtime support code for llvm enabled")
#option (ENABLE_LLVM_RUNTIME_MSVC_ONLY "Runtime support code for llvm enabled")
#option (MONO_ARCH_ILP32 "64 bit mode with 4 byte longs and pointers")
#option (MONO_CROSS_COMPILE "The runtime is compiled for cross-compiling mode")
#option (__mono_ppc64__ "...")
#option (DISABLE_HW_TRAPS "...")
#option (HAVE_VISIBILITY_HIDDEN "Support for the visibility (hidden) attribute")
#option (ENABLE_OVERRIDABLE_ALLOCATORS "Overridable allocator support enabled")
#option (ENABLE_ICALL_SYMBOL_MAP "Icall symbol map enabled")
#option (ENABLE_ICALL_EXPORT "Icall export enabled")
#option (DISABLE_ICALL_TABLES "Icall tables disabled")
#option (MONO_KEYWORD_THREAD "Have __thread keyword")
#option (HAVE_TLS_MODEL_ATTR "tls_model available")
#option (HAVE_ARMV5 "ARM v5")
#option (HAVE_ARMV6 "ARM v6")
#option (HAVE_ARMV7 "ARM v7")
#option (RISCV_FPABI_DOUBLE "RISC-V FPABI is double-precision")
#option (RISCV_FPABI_SINGLE "RISC-V FPABI is single-precision")
#option (RISCV_FPABI_SOFT "RISC-V FPABI is soft float")
#option (USE_MALLOC_FOR_MEMPOOLS "Use malloc for each single mempool allocation")
#option (LAZY_GC_THREAD_CREATION "Enable lazy gc thread creation by the embedding host.")
#option (ENABLE_COOP_SUSPEND "Enable cooperative stop-the-world garbage collection.")
#option (ENABLE_HYBRID_SUSPEND "Enable hybrid suspend for GC stop-the-world")
#option (ENABLE_EXPERIMENTS "Enable feature experiments")
#option (ENABLE_EXPERIMENT_null "Enable experiment 'null'")
#option (ENABLE_EXPERIMENT_TIERED "Enable experiment 'Tiered Compilation'")
#option (ENABLE_CHECKED_BUILD "Enable checked build")
#option (ENABLE_CHECKED_BUILD_GC "Enable GC checked build")
#option (ENABLE_CHECKED_BUILD_METADATA "Enable metadata checked build")
#option (ENABLE_CHECKED_BUILD_THREAD "Enable thread checked build")
#option (ENABLE_CHECKED_BUILD_PRIVATE_TYPES "Enable private types checked build")
#option (HAVE_BTLS "BoringTls is supported")
#option (ENABLE_JIT_DUMP "Enable jit dump support on Linux")
#option (ENABLE_CXX)
#option (STATIC_ICU)
| #
# Defines not yet supported by the CMake build
#
#option (MAJOR_IN_MKDEV "Define to 1 if `major', `minor', and `makedev' are declared in <mkdev.h>.")
#option (MAJOR_IN_SYSMACROS "Define to 1 if `major', `minor', and `makedev' are declared in <sysmacros.h>.")
#option (ANDROID_UNIFIED_HEADERS "Whether Android NDK unified headers are used")
#option (MONO_DL_NEED_USCORE "Does dlsym require leading underscore.")
#option (GLIBC_BEFORE_2_3_4_SCHED_SETAFFINITY "Have GLIBC_BEFORE_2_3_4_SCHED_SETAFFINITY")
#option (GLIBC_HAS_CPU_COUNT "GLIBC has CPU_COUNT macro in sched.h")
#option (HAVE_LARGE_FILE_SUPPORT "Have large file support")
#option (HAVE_WORKING_SIGALTSTACK "Have a working sigaltstack")
#option (HAVE_EPOLL "epoll_create1")
#option (USE_KQUEUE_FOR_THREADPOOL "Use kqueue for the threadpool")
#option (HAVE_SIOCGIFCONF "Can get interface list")
#option (HOST_NO_SYMLINKS "This platform does not support symlinks")
#option (NEED_LINK_UNLINK "Define if Unix sockets cannot be created in an anonymous namespace")
#option (HAVE_CLASSIC_WINAPI_SUPPORT "Use classic Windows API support")
#option (HAVE_UWP_WINAPI_SUPPORT "Don't use UWP Windows API support")
#option (MONO_XEN_OPT "Xen-specific behaviour")
#option (MONO_SMALL_CONFIG "Reduce runtime requirements (and capabilities)")
#option (AC_APPLE_UNIVERSAL_BUILD "Define if building universal (internal helper macro)")
#option (MONO_JEMALLOC_ASSERT "Make jemalloc assert for mono")
#option (MONO_JEMALLOC_DEFAULT "Make jemalloc default for mono")
#option (MONO_JEMALLOC_ENABLED "Enable jemalloc usage for mono")
#option (MONO_PRIVATE_CRASHES "Do not include names of unmanaged functions in the crash dump")
#option (DISABLE_STRUCTURED_CRASH "Do not create structured crash files during unmanaged crashes")
#option (ENABLE_MONODROID "Enable runtime support for Monodroid (Xamarin.Android)")
#option (ENABLE_MONOTOUCH "Enable runtime support for Monotouch (Xamarin.iOS and Xamarin.Mac)")
#option (DISABLED_FEATURES "String of disabled features")
#option (ENABLE_ILGEN "Some VES is available at runtime")
#option (ENABLE_EXTENSION_MODULE "Extension module enabled")
#option (DEFAULT_GC_NAME "GC description")
#option (HAVE_NULL_GC "No GC support.")
#option (MONO_ZERO_LEN_ARRAY "Length of zero length arrays")
#option (KEVENT_HAS_VOID_UDATA "kevent with void *data")
#option (BIND_ADDRLEN_UNSIGNED "bind with unsigned addrlen")
#option (IPV6MR_INTERFACE_UNSIGNED "struct ipv6_mreq with unsigned ipv6mr_interface")
#option (INOTIFY_RM_WATCH_WD_UNSIGNED "inotify_rm_watch with unsigned wd")
#option (PRIORITY_REQUIRES_INT_WHO "getpriority with int who")
#option (KEVENT_REQUIRES_INT_PARAMS "kevent with int parameters")
#option (ENABLE_GSS "ENABLE_GSS")
#option (NAME_DEV_RANDOM "Name of /dev/random")
#option (MONO_BIG_ARRAYS "Enable the allocation and indexing of arrays greater than Int32.MaxValue")
#option (ENABLE_DTRACE "Enable DTrace probes")
#option (MONO_OFFSETS_FILE "AOT cross offsets file")
#option (ENABLE_LLVM "Enable the LLVM back end")
#option (ENABLE_LLVM_MSVC_ONLY "Enable the LLVM back end")
#option (INTERNAL_LLVM "LLVM used is being build during mono build")
#option (INTERNAL_LLVM_MSVC_ONLY "LLVM used is being build during mono build")
#option (INTERNAL_LLVM_ASSERTS "Build LLVM with assertions")
#option (INTERNAL_LLVM_ASSERTS_MSVC_ONLY "Build LLVM with assertions")
#option (MONO_LLVM_LOADED "The LLVM back end is dynamically loaded")
#option (ENABLE_LLVM_RUNTIME "Runtime support code for llvm enabled")
#option (ENABLE_LLVM_RUNTIME_MSVC_ONLY "Runtime support code for llvm enabled")
#option (MONO_ARCH_ILP32 "64 bit mode with 4 byte longs and pointers")
#option (MONO_CROSS_COMPILE "The runtime is compiled for cross-compiling mode")
#option (__mono_ppc64__ "...")
#option (DISABLE_HW_TRAPS "...")
#option (HAVE_VISIBILITY_HIDDEN "Support for the visibility (hidden) attribute")
#option (ENABLE_OVERRIDABLE_ALLOCATORS "Overridable allocator support enabled")
#option (ENABLE_ICALL_SYMBOL_MAP "Icall symbol map enabled")
#option (ENABLE_ICALL_EXPORT "Icall export enabled")
#option (DISABLE_ICALL_TABLES "Icall tables disabled")
#option (MONO_KEYWORD_THREAD "Have __thread keyword")
#option (HAVE_TLS_MODEL_ATTR "tls_model available")
#option (HAVE_ARMV5 "ARM v5")
#option (HAVE_ARMV6 "ARM v6")
#option (HAVE_ARMV7 "ARM v7")
#option (RISCV_FPABI_DOUBLE "RISC-V FPABI is double-precision")
#option (RISCV_FPABI_SINGLE "RISC-V FPABI is single-precision")
#option (RISCV_FPABI_SOFT "RISC-V FPABI is soft float")
#option (USE_MALLOC_FOR_MEMPOOLS "Use malloc for each single mempool allocation")
#option (LAZY_GC_THREAD_CREATION "Enable lazy gc thread creation by the embedding host.")
#option (ENABLE_COOP_SUSPEND "Enable cooperative stop-the-world garbage collection.")
#option (ENABLE_HYBRID_SUSPEND "Enable hybrid suspend for GC stop-the-world")
#option (ENABLE_EXPERIMENTS "Enable feature experiments")
#option (ENABLE_EXPERIMENT_null "Enable experiment 'null'")
#option (ENABLE_EXPERIMENT_TIERED "Enable experiment 'Tiered Compilation'")
#option (ENABLE_CHECKED_BUILD "Enable checked build")
#option (ENABLE_CHECKED_BUILD_GC "Enable GC checked build")
#option (ENABLE_CHECKED_BUILD_METADATA "Enable metadata checked build")
#option (ENABLE_CHECKED_BUILD_THREAD "Enable thread checked build")
#option (ENABLE_CHECKED_BUILD_PRIVATE_TYPES "Enable private types checked build")
#option (HAVE_BTLS "BoringTls is supported")
#option (ENABLE_JIT_DUMP "Enable jit dump support on Linux")
#option (ENABLE_CXX)
#option (STATIC_ICU)
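# Usage sketch, not part of the original file: wiring one of these up would mean an
# ordinary cache option here plus a matching #cmakedefine in the config header, e.g.:
#   option(ENABLE_COOP_SUSPEND "Enable cooperative stop-the-world garbage collection." OFF)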
| 1 |
dotnet/runtime | 66,211 | [mono] Remove SkipVerification support from the runtime | CAS support was removed in .NET Core. This allows us to remove a bunch of unused code, e.g. the dependency on libiconv. | akoeplinger | 2022-03-04T19:47:04Z | 2022-03-06T13:44:33Z | b463b1630dbf1be5b013208a9fa73e1ecd6c774c | be629f49a350d526de2c65981294734cee420b90 | [mono] Remove SkipVerification support from the runtime. CAS support was removed in .NET Core. This allows us to remove a bunch of unused code, e.g. the dependency on libiconv. | ./src/mono/mono/eglib/eglib-remap.h | #undef g_malloc
#undef g_realloc
#undef g_malloc0
#undef g_calloc
#undef g_try_malloc
#undef g_try_realloc
#undef g_memdup
#define g_array_append monoeg_g_array_append
#define g_array_append_vals monoeg_g_array_append_vals
#define g_array_free monoeg_g_array_free
#define g_array_insert_vals monoeg_g_array_insert_vals
#define g_array_new monoeg_g_array_new
#define g_array_remove_index monoeg_g_array_remove_index
#define g_array_remove_index_fast monoeg_g_array_remove_index_fast
#define g_array_set_size monoeg_g_array_set_size
#define g_array_sized_new monoeg_g_array_sized_new
#define g_ascii_strdown monoeg_g_ascii_strdown
#define g_ascii_strdown_no_alloc monoeg_g_ascii_strdown_no_alloc
#define g_ascii_strncasecmp monoeg_g_ascii_strncasecmp
#define g_ascii_tolower monoeg_g_ascii_tolower
#define g_ascii_xdigit_value monoeg_g_ascii_xdigit_value
#define g_build_path monoeg_g_build_path
#define g_byte_array_append monoeg_g_byte_array_append
#define g_byte_array_free monoeg_g_byte_array_free
#define g_byte_array_new monoeg_g_byte_array_new
#define g_byte_array_set_size monoeg_g_byte_array_set_size
#define g_calloc monoeg_g_calloc
#define g_clear_error monoeg_g_clear_error
#define g_convert monoeg_g_convert
#define g_convert_error_quark monoeg_g_convert_error_quark
#define g_fixed_buffer_custom_allocator monoeg_g_fixed_buffer_custom_allocator
#define g_dir_close monoeg_g_dir_close
#define g_dir_open monoeg_g_dir_open
#define g_dir_read_name monoeg_g_dir_read_name
#define g_dir_rewind monoeg_g_dir_rewind
#define g_mkdir_with_parents monoeg_g_mkdir_with_parents
#define g_direct_equal monoeg_g_direct_equal
#define g_direct_hash monoeg_g_direct_hash
#define g_ensure_directory_exists monoeg_g_ensure_directory_exists
#define g_error_free monoeg_g_error_free
#define g_error_new monoeg_g_error_new
#define g_error_vnew monoeg_g_error_vnew
#define g_file_error_quark monoeg_g_file_error_quark
#define g_file_error_from_errno monoeg_g_file_error_from_errno
#define g_file_get_contents monoeg_g_file_get_contents
#define g_file_set_contents monoeg_g_file_set_contents
#define g_file_open_tmp monoeg_g_file_open_tmp
#define g_file_test monoeg_g_file_test
#define g_find_program_in_path monoeg_g_find_program_in_path
#define g_fprintf monoeg_g_fprintf
#define g_free monoeg_g_free
#define g_get_current_dir monoeg_g_get_current_dir
#define g_get_current_time monoeg_g_get_current_time
#define g_get_home_dir monoeg_g_get_home_dir
#define g_get_prgname monoeg_g_get_prgname
#define g_get_tmp_dir monoeg_g_get_tmp_dir
#define g_get_user_name monoeg_g_get_user_name
#define g_getenv monoeg_g_getenv
#define g_hasenv monoeg_g_hasenv
#define g_hash_table_destroy monoeg_g_hash_table_destroy
#define g_hash_table_find monoeg_g_hash_table_find
#define g_hash_table_foreach monoeg_g_hash_table_foreach
#define g_hash_table_foreach_remove monoeg_g_hash_table_foreach_remove
#define g_hash_table_foreach_steal monoeg_g_hash_table_foreach_steal
#define g_hash_table_get_keys monoeg_g_hash_table_get_keys
#define g_hash_table_get_values monoeg_g_hash_table_get_values
#define g_hash_table_contains monoeg_g_hash_table_contains
#define g_hash_table_insert_replace monoeg_g_hash_table_insert_replace
#define g_hash_table_lookup monoeg_g_hash_table_lookup
#define g_hash_table_lookup_extended monoeg_g_hash_table_lookup_extended
#define g_hash_table_new monoeg_g_hash_table_new
#define g_hash_table_new_full monoeg_g_hash_table_new_full
#define g_hash_table_remove monoeg_g_hash_table_remove
#define g_hash_table_steal monoeg_g_hash_table_steal
#define g_hash_table_size monoeg_g_hash_table_size
#define g_hash_table_print_stats monoeg_g_hash_table_print_stats
#define g_hash_table_remove_all monoeg_g_hash_table_remove_all
#define g_hash_table_iter_init monoeg_g_hash_table_iter_init
#define g_hash_table_iter_next monoeg_g_hash_table_iter_next
#define g_iconv monoeg_g_iconv
#define g_iconv_close monoeg_g_iconv_close
#define g_iconv_open monoeg_g_iconv_open
#define g_int_equal monoeg_g_int_equal
#define g_int_hash monoeg_g_int_hash
#define g_list_alloc monoeg_g_list_alloc
#define g_list_append monoeg_g_list_append
#define g_list_concat monoeg_g_list_concat
#define g_list_copy monoeg_g_list_copy
#define g_list_delete_link monoeg_g_list_delete_link
#define g_list_find monoeg_g_list_find
#define g_list_find_custom monoeg_g_list_find_custom
#define g_list_first monoeg_g_list_first
#define g_list_foreach monoeg_g_list_foreach
#define g_list_free monoeg_g_list_free
#define g_list_free_1 monoeg_g_list_free_1
#define g_list_index monoeg_g_list_index
#define g_list_insert_before monoeg_g_list_insert_before
#define g_list_insert_sorted monoeg_g_list_insert_sorted
#define g_list_last monoeg_g_list_last
#define g_list_length monoeg_g_list_length
#define g_list_nth monoeg_g_list_nth
#define g_list_nth_data monoeg_g_list_nth_data
#define g_list_prepend monoeg_g_list_prepend
#define g_list_remove monoeg_g_list_remove
#define g_list_remove_all monoeg_g_list_remove_all
#define g_list_remove_link monoeg_g_list_remove_link
#define g_list_reverse monoeg_g_list_reverse
#define g_list_sort monoeg_g_list_sort
#define g_log monoeg_g_log
#define g_log_set_always_fatal monoeg_g_log_set_always_fatal
#define g_log_set_fatal_mask monoeg_g_log_set_fatal_mask
#define g_logv monoeg_g_logv
#define g_memdup monoeg_g_memdup
#define g_mem_set_vtable monoeg_g_mem_set_vtable
#define g_mem_get_vtable monoeg_g_mem_get_vtable
#define g_mkdtemp monoeg_g_mkdtemp
#define g_module_address monoeg_g_module_address
#define g_module_build_path monoeg_g_module_build_path
#define g_module_close monoeg_g_module_close
#define g_module_error monoeg_g_module_error
#define g_module_open monoeg_g_module_open
#define g_module_symbol monoeg_g_module_symbol
#define g_path_get_basename monoeg_g_path_get_basename
#define g_path_get_dirname monoeg_g_path_get_dirname
#define g_path_is_absolute monoeg_g_path_is_absolute
#define g_async_safe_fgets monoeg_g_async_safe_fgets
#define g_async_safe_fprintf monoeg_g_async_safe_fprintf
#define g_async_safe_vfprintf monoeg_g_async_safe_vfprintf
#define g_async_safe_printf monoeg_g_async_safe_printf
#define g_async_safe_vprintf monoeg_g_async_safe_vprintf
#define g_print monoeg_g_print
#define g_printf monoeg_g_printf
#define g_printv monoeg_g_printv
#define g_printerr monoeg_g_printerr
#define g_propagate_error monoeg_g_propagate_error
#define g_ptr_array_add monoeg_g_ptr_array_add
#define g_ptr_array_capacity monoeg_g_ptr_array_capacity
#define g_ptr_array_foreach monoeg_g_ptr_array_foreach
#define g_ptr_array_free monoeg_g_ptr_array_free
#define g_ptr_array_new monoeg_g_ptr_array_new
#define g_ptr_array_remove monoeg_g_ptr_array_remove
#define g_ptr_array_remove_fast monoeg_g_ptr_array_remove_fast
#define g_ptr_array_remove_index monoeg_g_ptr_array_remove_index
#define g_ptr_array_remove_index_fast monoeg_g_ptr_array_remove_index_fast
#define g_ptr_array_set_size monoeg_g_ptr_array_set_size
#define g_ptr_array_sized_new monoeg_g_ptr_array_sized_new
#define g_ptr_array_sort monoeg_g_ptr_array_sort
#define g_ptr_array_find monoeg_g_ptr_array_find
#define g_queue_free monoeg_g_queue_free
#define g_queue_is_empty monoeg_g_queue_is_empty
#define g_queue_foreach monoeg_g_queue_foreach
#define g_queue_new monoeg_g_queue_new
#define g_queue_pop_head monoeg_g_queue_pop_head
#define g_queue_push_head monoeg_g_queue_push_head
#define g_queue_push_tail monoeg_g_queue_push_tail
#define g_set_error monoeg_g_set_error
#define g_set_prgname monoeg_g_set_prgname
#define g_setenv monoeg_g_setenv
#define g_slist_alloc monoeg_g_slist_alloc
#define g_slist_append monoeg_g_slist_append
#define g_slist_concat monoeg_g_slist_concat
#define g_slist_copy monoeg_g_slist_copy
#define g_slist_delete_link monoeg_g_slist_delete_link
#define g_slist_find monoeg_g_slist_find
#define g_slist_find_custom monoeg_g_slist_find_custom
#define g_slist_foreach monoeg_g_slist_foreach
#define g_slist_free monoeg_g_slist_free
#define g_slist_free_1 monoeg_g_slist_free_1
#define g_slist_index monoeg_g_slist_index
#define g_slist_insert_before monoeg_g_slist_insert_before
#define g_slist_insert_sorted monoeg_g_slist_insert_sorted
#define g_slist_last monoeg_g_slist_last
#define g_slist_length monoeg_g_slist_length
#define g_slist_nth monoeg_g_slist_nth
#define g_slist_nth_data monoeg_g_slist_nth_data
#define g_slist_prepend monoeg_g_slist_prepend
#define g_slist_remove monoeg_g_slist_remove
#define g_slist_remove_all monoeg_g_slist_remove_all
#define g_slist_remove_link monoeg_g_slist_remove_link
#define g_slist_reverse monoeg_g_slist_reverse
#define g_slist_sort monoeg_g_slist_sort
#define g_snprintf monoeg_g_snprintf
#define g_spaced_primes_closest monoeg_g_spaced_primes_closest
#define g_spawn_async_with_pipes monoeg_g_spawn_async_with_pipes
#define g_sprintf monoeg_g_sprintf
#define g_stpcpy monoeg_g_stpcpy
#define g_str_equal monoeg_g_str_equal
#define g_str_has_prefix monoeg_g_str_has_prefix
#define g_str_has_suffix monoeg_g_str_has_suffix
#define g_str_hash monoeg_g_str_hash
#define g_strchomp monoeg_g_strchomp
#define g_strchug monoeg_g_strchug
#define g_strconcat monoeg_g_strconcat
#define g_strdelimit monoeg_g_strdelimit
#define g_strdup_printf monoeg_g_strdup_printf
#define g_strdup_vprintf monoeg_g_strdup_vprintf
#define g_strerror monoeg_g_strerror
#define g_strfreev monoeg_g_strfreev
#define g_strdupv monoeg_g_strdupv
#define g_string_append monoeg_g_string_append
#define g_string_append_c monoeg_g_string_append_c
#define g_string_append_len monoeg_g_string_append_len
#define g_string_append_unichar monoeg_g_string_append_unichar
#define g_string_append_printf monoeg_g_string_append_printf
#define g_string_append_vprintf monoeg_g_string_append_vprintf
#define g_string_free monoeg_g_string_free
#define g_string_new monoeg_g_string_new
#define g_string_new_len monoeg_g_string_new_len
#define g_string_printf monoeg_g_string_printf
#define g_string_set_size monoeg_g_string_set_size
#define g_string_sized_new monoeg_g_string_sized_new
#define g_string_truncate monoeg_g_string_truncate
#define g_strjoin monoeg_g_strjoin
#define g_strjoinv monoeg_g_strjoinv
#define g_strlcpy monoeg_g_strlcpy
#define g_strndup monoeg_g_strndup
#define g_strnfill monoeg_g_strnfill
#define g_strnlen monoeg_g_strnlen
#define g_str_from_file_region monoeg_g_str_from_file_region
#define g_strreverse monoeg_g_strreverse
#define g_strsplit monoeg_g_strsplit
#define g_strsplit_set monoeg_g_strsplit_set
#define g_strv_length monoeg_g_strv_length
#define g_timer_destroy monoeg_g_timer_destroy
#define g_timer_elapsed monoeg_g_timer_elapsed
#define g_timer_new monoeg_g_timer_new
#define g_timer_start monoeg_g_timer_start
#define g_timer_stop monoeg_g_timer_stop
#define g_trailingBytesForUTF8 monoeg_g_trailingBytesForUTF8
#define g_ucs4_to_utf8 monoeg_g_ucs4_to_utf8
#define g_ucs4_to_utf16 monoeg_g_ucs4_to_utf16
#define g_usleep monoeg_g_usleep
#define g_utf16_to_ucs4 monoeg_g_utf16_to_ucs4
#define g_utf16_to_utf8 monoeg_g_utf16_to_utf8
#define g_utf16_to_utf8_custom_alloc monoeg_g_utf16_to_utf8_custom_alloc
#define g_utf16_ascii_equal monoeg_g_utf16_ascii_equal
#define g_utf16_asciiz_equal monoeg_g_utf16_asciiz_equal
#define g_utf8_jump_table monoeg_g_utf8_jump_table
#define g_utf8_get_char monoeg_g_utf8_get_char
#define g_utf8_strlen monoeg_g_utf8_strlen
#define g_utf8_to_utf16 monoeg_g_utf8_to_utf16
#define g_utf8_to_utf16_custom_alloc monoeg_g_utf8_to_utf16_custom_alloc
#define g_utf8_validate monoeg_g_utf8_validate
#define g_unichar_to_utf8 monoeg_g_unichar_to_utf8
#define g_utf8_offset_to_pointer monoeg_g_utf8_offset_to_pointer
#define g_utf8_pointer_to_offset monoeg_g_utf8_pointer_to_offset
#define g_utf8_to_ucs4_fast monoeg_g_utf8_to_ucs4_fast
#define g_vasprintf monoeg_g_vasprintf
#define g_win32_getlocale monoeg_g_win32_getlocale
#define g_assertion_disable_global monoeg_assertion_disable_global
#define g_assert_abort monoeg_assert_abort
#define g_assertion_message monoeg_assertion_message
#define g_get_assertion_message monoeg_get_assertion_message
#define g_malloc monoeg_malloc
#define g_malloc0 monoeg_malloc0
#define g_ptr_array_grow monoeg_ptr_array_grow
#define g_realloc monoeg_realloc
#define g_try_malloc monoeg_try_malloc
#define g_try_realloc monoeg_try_realloc
#define g_strdup monoeg_strdup
#define g_ucs4_to_utf16_len monoeg_ucs4_to_utf16_len
#define g_utf16_to_ucs4_len monoeg_utf16_to_ucs4_len
#define g_utf16_len monoeg_utf16_len
#define g_ascii_strcasecmp monoeg_ascii_strcasecmp
#define g_ascii_strup monoeg_ascii_strup
#define g_ascii_toupper monoeg_ascii_toupper
#define g_unichar_to_utf16 monoeg_unichar_to_utf16
#define g_utf8_get_char_validated monoeg_utf8_get_char_validated
#define g_utf8_to_ucs4 monoeg_utf8_to_ucs4
#define g_log_default_handler monoeg_log_default_handler
#define g_log_set_default_handler monoeg_log_set_default_handler
#define g_set_print_handler monoeg_set_print_handler
#define g_set_printerr_handler monoeg_set_printerr_handler
#define g_size_to_int monoeg_size_to_int
#define g_ascii_charcmp monoeg_ascii_charcmp
#define g_ascii_charcasecmp monoeg_ascii_charcasecmp
#define g_warning_d monoeg_warning_d
#ifdef HAVE_CLOCK_NANOSLEEP
#define g_clock_nanosleep monoeg_clock_nanosleep
#endif
| #undef g_malloc
#undef g_realloc
#undef g_malloc0
#undef g_calloc
#undef g_try_malloc
#undef g_try_realloc
#undef g_memdup
#define g_array_append monoeg_g_array_append
#define g_array_append_vals monoeg_g_array_append_vals
#define g_array_free monoeg_g_array_free
#define g_array_insert_vals monoeg_g_array_insert_vals
#define g_array_new monoeg_g_array_new
#define g_array_remove_index monoeg_g_array_remove_index
#define g_array_remove_index_fast monoeg_g_array_remove_index_fast
#define g_array_set_size monoeg_g_array_set_size
#define g_array_sized_new monoeg_g_array_sized_new
#define g_ascii_strdown monoeg_g_ascii_strdown
#define g_ascii_strdown_no_alloc monoeg_g_ascii_strdown_no_alloc
#define g_ascii_strncasecmp monoeg_g_ascii_strncasecmp
#define g_ascii_tolower monoeg_g_ascii_tolower
#define g_ascii_xdigit_value monoeg_g_ascii_xdigit_value
#define g_build_path monoeg_g_build_path
#define g_byte_array_append monoeg_g_byte_array_append
#define g_byte_array_free monoeg_g_byte_array_free
#define g_byte_array_new monoeg_g_byte_array_new
#define g_byte_array_set_size monoeg_g_byte_array_set_size
#define g_calloc monoeg_g_calloc
#define g_clear_error monoeg_g_clear_error
#define g_convert_error_quark monoeg_g_convert_error_quark
#define g_fixed_buffer_custom_allocator monoeg_g_fixed_buffer_custom_allocator
#define g_dir_close monoeg_g_dir_close
#define g_dir_open monoeg_g_dir_open
#define g_dir_read_name monoeg_g_dir_read_name
#define g_dir_rewind monoeg_g_dir_rewind
#define g_mkdir_with_parents monoeg_g_mkdir_with_parents
#define g_direct_equal monoeg_g_direct_equal
#define g_direct_hash monoeg_g_direct_hash
#define g_ensure_directory_exists monoeg_g_ensure_directory_exists
#define g_error_free monoeg_g_error_free
#define g_error_new monoeg_g_error_new
#define g_error_vnew monoeg_g_error_vnew
#define g_file_error_quark monoeg_g_file_error_quark
#define g_file_error_from_errno monoeg_g_file_error_from_errno
#define g_file_get_contents monoeg_g_file_get_contents
#define g_file_set_contents monoeg_g_file_set_contents
#define g_file_open_tmp monoeg_g_file_open_tmp
#define g_file_test monoeg_g_file_test
#define g_find_program_in_path monoeg_g_find_program_in_path
#define g_fprintf monoeg_g_fprintf
#define g_free monoeg_g_free
#define g_get_current_dir monoeg_g_get_current_dir
#define g_get_current_time monoeg_g_get_current_time
#define g_get_home_dir monoeg_g_get_home_dir
#define g_get_prgname monoeg_g_get_prgname
#define g_get_tmp_dir monoeg_g_get_tmp_dir
#define g_get_user_name monoeg_g_get_user_name
#define g_getenv monoeg_g_getenv
#define g_hasenv monoeg_g_hasenv
#define g_hash_table_destroy monoeg_g_hash_table_destroy
#define g_hash_table_find monoeg_g_hash_table_find
#define g_hash_table_foreach monoeg_g_hash_table_foreach
#define g_hash_table_foreach_remove monoeg_g_hash_table_foreach_remove
#define g_hash_table_foreach_steal monoeg_g_hash_table_foreach_steal
#define g_hash_table_get_keys monoeg_g_hash_table_get_keys
#define g_hash_table_get_values monoeg_g_hash_table_get_values
#define g_hash_table_contains monoeg_g_hash_table_contains
#define g_hash_table_insert_replace monoeg_g_hash_table_insert_replace
#define g_hash_table_lookup monoeg_g_hash_table_lookup
#define g_hash_table_lookup_extended monoeg_g_hash_table_lookup_extended
#define g_hash_table_new monoeg_g_hash_table_new
#define g_hash_table_new_full monoeg_g_hash_table_new_full
#define g_hash_table_remove monoeg_g_hash_table_remove
#define g_hash_table_steal monoeg_g_hash_table_steal
#define g_hash_table_size monoeg_g_hash_table_size
#define g_hash_table_print_stats monoeg_g_hash_table_print_stats
#define g_hash_table_remove_all monoeg_g_hash_table_remove_all
#define g_hash_table_iter_init monoeg_g_hash_table_iter_init
#define g_hash_table_iter_next monoeg_g_hash_table_iter_next
#define g_int_equal monoeg_g_int_equal
#define g_int_hash monoeg_g_int_hash
#define g_list_alloc monoeg_g_list_alloc
#define g_list_append monoeg_g_list_append
#define g_list_concat monoeg_g_list_concat
#define g_list_copy monoeg_g_list_copy
#define g_list_delete_link monoeg_g_list_delete_link
#define g_list_find monoeg_g_list_find
#define g_list_find_custom monoeg_g_list_find_custom
#define g_list_first monoeg_g_list_first
#define g_list_foreach monoeg_g_list_foreach
#define g_list_free monoeg_g_list_free
#define g_list_free_1 monoeg_g_list_free_1
#define g_list_index monoeg_g_list_index
#define g_list_insert_before monoeg_g_list_insert_before
#define g_list_insert_sorted monoeg_g_list_insert_sorted
#define g_list_last monoeg_g_list_last
#define g_list_length monoeg_g_list_length
#define g_list_nth monoeg_g_list_nth
#define g_list_nth_data monoeg_g_list_nth_data
#define g_list_prepend monoeg_g_list_prepend
#define g_list_remove monoeg_g_list_remove
#define g_list_remove_all monoeg_g_list_remove_all
#define g_list_remove_link monoeg_g_list_remove_link
#define g_list_reverse monoeg_g_list_reverse
#define g_list_sort monoeg_g_list_sort
#define g_log monoeg_g_log
#define g_log_set_always_fatal monoeg_g_log_set_always_fatal
#define g_log_set_fatal_mask monoeg_g_log_set_fatal_mask
#define g_logv monoeg_g_logv
#define g_memdup monoeg_g_memdup
#define g_mem_set_vtable monoeg_g_mem_set_vtable
#define g_mem_get_vtable monoeg_g_mem_get_vtable
#define g_mkdtemp monoeg_g_mkdtemp
#define g_module_address monoeg_g_module_address
#define g_module_build_path monoeg_g_module_build_path
#define g_module_close monoeg_g_module_close
#define g_module_error monoeg_g_module_error
#define g_module_open monoeg_g_module_open
#define g_module_symbol monoeg_g_module_symbol
#define g_path_get_basename monoeg_g_path_get_basename
#define g_path_get_dirname monoeg_g_path_get_dirname
#define g_path_is_absolute monoeg_g_path_is_absolute
#define g_async_safe_fgets monoeg_g_async_safe_fgets
#define g_async_safe_fprintf monoeg_g_async_safe_fprintf
#define g_async_safe_vfprintf monoeg_g_async_safe_vfprintf
#define g_async_safe_printf monoeg_g_async_safe_printf
#define g_async_safe_vprintf monoeg_g_async_safe_vprintf
#define g_print monoeg_g_print
#define g_printf monoeg_g_printf
#define g_printv monoeg_g_printv
#define g_printerr monoeg_g_printerr
#define g_propagate_error monoeg_g_propagate_error
#define g_ptr_array_add monoeg_g_ptr_array_add
#define g_ptr_array_capacity monoeg_g_ptr_array_capacity
#define g_ptr_array_foreach monoeg_g_ptr_array_foreach
#define g_ptr_array_free monoeg_g_ptr_array_free
#define g_ptr_array_new monoeg_g_ptr_array_new
#define g_ptr_array_remove monoeg_g_ptr_array_remove
#define g_ptr_array_remove_fast monoeg_g_ptr_array_remove_fast
#define g_ptr_array_remove_index monoeg_g_ptr_array_remove_index
#define g_ptr_array_remove_index_fast monoeg_g_ptr_array_remove_index_fast
#define g_ptr_array_set_size monoeg_g_ptr_array_set_size
#define g_ptr_array_sized_new monoeg_g_ptr_array_sized_new
#define g_ptr_array_sort monoeg_g_ptr_array_sort
#define g_ptr_array_find monoeg_g_ptr_array_find
#define g_queue_free monoeg_g_queue_free
#define g_queue_is_empty monoeg_g_queue_is_empty
#define g_queue_foreach monoeg_g_queue_foreach
#define g_queue_new monoeg_g_queue_new
#define g_queue_pop_head monoeg_g_queue_pop_head
#define g_queue_push_head monoeg_g_queue_push_head
#define g_queue_push_tail monoeg_g_queue_push_tail
#define g_set_error monoeg_g_set_error
#define g_set_prgname monoeg_g_set_prgname
#define g_setenv monoeg_g_setenv
#define g_slist_alloc monoeg_g_slist_alloc
#define g_slist_append monoeg_g_slist_append
#define g_slist_concat monoeg_g_slist_concat
#define g_slist_copy monoeg_g_slist_copy
#define g_slist_delete_link monoeg_g_slist_delete_link
#define g_slist_find monoeg_g_slist_find
#define g_slist_find_custom monoeg_g_slist_find_custom
#define g_slist_foreach monoeg_g_slist_foreach
#define g_slist_free monoeg_g_slist_free
#define g_slist_free_1 monoeg_g_slist_free_1
#define g_slist_index monoeg_g_slist_index
#define g_slist_insert_before monoeg_g_slist_insert_before
#define g_slist_insert_sorted monoeg_g_slist_insert_sorted
#define g_slist_last monoeg_g_slist_last
#define g_slist_length monoeg_g_slist_length
#define g_slist_nth monoeg_g_slist_nth
#define g_slist_nth_data monoeg_g_slist_nth_data
#define g_slist_prepend monoeg_g_slist_prepend
#define g_slist_remove monoeg_g_slist_remove
#define g_slist_remove_all monoeg_g_slist_remove_all
#define g_slist_remove_link monoeg_g_slist_remove_link
#define g_slist_reverse monoeg_g_slist_reverse
#define g_slist_sort monoeg_g_slist_sort
#define g_snprintf monoeg_g_snprintf
#define g_spaced_primes_closest monoeg_g_spaced_primes_closest
#define g_spawn_async_with_pipes monoeg_g_spawn_async_with_pipes
#define g_sprintf monoeg_g_sprintf
#define g_stpcpy monoeg_g_stpcpy
#define g_str_equal monoeg_g_str_equal
#define g_str_has_prefix monoeg_g_str_has_prefix
#define g_str_has_suffix monoeg_g_str_has_suffix
#define g_str_hash monoeg_g_str_hash
#define g_strchomp monoeg_g_strchomp
#define g_strchug monoeg_g_strchug
#define g_strconcat monoeg_g_strconcat
#define g_strdelimit monoeg_g_strdelimit
#define g_strdup_printf monoeg_g_strdup_printf
#define g_strdup_vprintf monoeg_g_strdup_vprintf
#define g_strerror monoeg_g_strerror
#define g_strfreev monoeg_g_strfreev
#define g_strdupv monoeg_g_strdupv
#define g_string_append monoeg_g_string_append
#define g_string_append_c monoeg_g_string_append_c
#define g_string_append_len monoeg_g_string_append_len
#define g_string_append_unichar monoeg_g_string_append_unichar
#define g_string_append_printf monoeg_g_string_append_printf
#define g_string_append_vprintf monoeg_g_string_append_vprintf
#define g_string_free monoeg_g_string_free
#define g_string_new monoeg_g_string_new
#define g_string_new_len monoeg_g_string_new_len
#define g_string_printf monoeg_g_string_printf
#define g_string_set_size monoeg_g_string_set_size
#define g_string_sized_new monoeg_g_string_sized_new
#define g_string_truncate monoeg_g_string_truncate
#define g_strjoin monoeg_g_strjoin
#define g_strjoinv monoeg_g_strjoinv
#define g_strlcpy monoeg_g_strlcpy
#define g_strndup monoeg_g_strndup
#define g_strnfill monoeg_g_strnfill
#define g_strnlen monoeg_g_strnlen
#define g_str_from_file_region monoeg_g_str_from_file_region
#define g_strreverse monoeg_g_strreverse
#define g_strsplit monoeg_g_strsplit
#define g_strsplit_set monoeg_g_strsplit_set
#define g_strv_length monoeg_g_strv_length
#define g_timer_destroy monoeg_g_timer_destroy
#define g_timer_elapsed monoeg_g_timer_elapsed
#define g_timer_new monoeg_g_timer_new
#define g_timer_start monoeg_g_timer_start
#define g_timer_stop monoeg_g_timer_stop
#define g_trailingBytesForUTF8 monoeg_g_trailingBytesForUTF8
#define g_ucs4_to_utf8 monoeg_g_ucs4_to_utf8
#define g_ucs4_to_utf16 monoeg_g_ucs4_to_utf16
#define g_usleep monoeg_g_usleep
#define g_utf16_to_ucs4 monoeg_g_utf16_to_ucs4
#define g_utf16_to_utf8 monoeg_g_utf16_to_utf8
#define g_utf16_to_utf8_custom_alloc monoeg_g_utf16_to_utf8_custom_alloc
#define g_utf16_ascii_equal monoeg_g_utf16_ascii_equal
#define g_utf16_asciiz_equal monoeg_g_utf16_asciiz_equal
#define g_utf8_jump_table monoeg_g_utf8_jump_table
#define g_utf8_get_char monoeg_g_utf8_get_char
#define g_utf8_strlen monoeg_g_utf8_strlen
#define g_utf8_to_utf16 monoeg_g_utf8_to_utf16
#define g_utf8_to_utf16_custom_alloc monoeg_g_utf8_to_utf16_custom_alloc
#define g_utf8_validate monoeg_g_utf8_validate
#define g_unichar_to_utf8 monoeg_g_unichar_to_utf8
#define g_utf8_offset_to_pointer monoeg_g_utf8_offset_to_pointer
#define g_utf8_pointer_to_offset monoeg_g_utf8_pointer_to_offset
#define g_utf8_to_ucs4_fast monoeg_g_utf8_to_ucs4_fast
#define g_vasprintf monoeg_g_vasprintf
#define g_win32_getlocale monoeg_g_win32_getlocale
#define g_assertion_disable_global monoeg_assertion_disable_global
#define g_assert_abort monoeg_assert_abort
#define g_assertion_message monoeg_assertion_message
#define g_get_assertion_message monoeg_get_assertion_message
#define g_malloc monoeg_malloc
#define g_malloc0 monoeg_malloc0
#define g_ptr_array_grow monoeg_ptr_array_grow
#define g_realloc monoeg_realloc
#define g_try_malloc monoeg_try_malloc
#define g_try_realloc monoeg_try_realloc
#define g_strdup monoeg_strdup
#define g_ucs4_to_utf16_len monoeg_ucs4_to_utf16_len
#define g_utf16_to_ucs4_len monoeg_utf16_to_ucs4_len
#define g_utf16_len monoeg_utf16_len
#define g_ascii_strcasecmp monoeg_ascii_strcasecmp
#define g_ascii_strup monoeg_ascii_strup
#define g_ascii_toupper monoeg_ascii_toupper
#define g_unichar_to_utf16 monoeg_unichar_to_utf16
#define g_utf8_get_char_validated monoeg_utf8_get_char_validated
#define g_utf8_to_ucs4 monoeg_utf8_to_ucs4
#define g_log_default_handler monoeg_log_default_handler
#define g_log_set_default_handler monoeg_log_set_default_handler
#define g_set_print_handler monoeg_set_print_handler
#define g_set_printerr_handler monoeg_set_printerr_handler
#define g_size_to_int monoeg_size_to_int
#define g_ascii_charcmp monoeg_ascii_charcmp
#define g_ascii_charcasecmp monoeg_ascii_charcasecmp
#define g_warning_d monoeg_warning_d
#ifdef HAVE_CLOCK_NANOSLEEP
#define g_clock_nanosleep monoeg_clock_nanosleep
#endif
| 1 |
dotnet/runtime | 66,211 | [mono] Remove SkipVerification support from the runtime | CAS support was removed in .NET Core. This allows us removing a bunch of code that is unused, e.g. the dependency on libiconv. | akoeplinger | 2022-03-04T19:47:04Z | 2022-03-06T13:44:33Z | b463b1630dbf1be5b013208a9fa73e1ecd6c774c | be629f49a350d526de2c65981294734cee420b90 | [mono] Remove SkipVerification support from the runtime. CAS support was removed in .NET Core. This allows us removing a bunch of code that is unused, e.g. the dependency on libiconv. | ./src/mono/mono/eglib/giconv.c | /* -*- Mode: C; tab-width: 8; indent-tabs-mode: t; c-basic-offset: 8 -*- */
/*
* Copyright (C) 2011 Jeffrey Stedfast
*
* Permission is hereby granted, free of charge, to any person
* obtaining a copy of this software and associated documentation
* files (the "Software"), to deal in the Software without
* restriction, including without limitation the rights to use, copy,
* modify, merge, publish, distribute, sublicense, and/or sell copies
* of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be
* included in all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
* HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
* WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#include <config.h>
#include <glib.h>
#include <string.h>
#ifdef HAVE_ICONV_H
#include <iconv.h>
#endif
#include <errno.h>
#include "../utils/mono-errno.h"
#ifdef _MSC_VER
#define FORCE_INLINE(RET_TYPE) __forceinline RET_TYPE
#else
#define FORCE_INLINE(RET_TYPE) inline RET_TYPE __attribute__((always_inline))
#endif
#define UNROLL_DECODE_UTF8 0
#define UNROLL_ENCODE_UTF8 0
typedef int (* Decoder) (char *inbuf, size_t inleft, gunichar *outchar);
typedef int (* Encoder) (gunichar c, char *outbuf, size_t outleft);
struct _GIConv {
Decoder decode;
Encoder encode;
gunichar c;
#ifdef HAVE_LIBICONV
iconv_t cd;
#endif
};
static int decode_utf32be (char *inbuf, size_t inleft, gunichar *outchar);
static int encode_utf32be (gunichar c, char *outbuf, size_t outleft);
static int decode_utf32le (char *inbuf, size_t inleft, gunichar *outchar);
static int encode_utf32le (gunichar c, char *outbuf, size_t outleft);
static int decode_utf16be (char *inbuf, size_t inleft, gunichar *outchar);
static int encode_utf16be (gunichar c, char *outbuf, size_t outleft);
static int decode_utf16le (char *inbuf, size_t inleft, gunichar *outchar);
static int encode_utf16le (gunichar c, char *outbuf, size_t outleft);
static FORCE_INLINE (int) decode_utf8 (char *inbuf, size_t inleft, gunichar *outchar);
static int encode_utf8 (gunichar c, char *outbuf, size_t outleft);
static int decode_latin1 (char *inbuf, size_t inleft, gunichar *outchar);
static int encode_latin1 (gunichar c, char *outbuf, size_t outleft);
#if G_BYTE_ORDER == G_LITTLE_ENDIAN
#define decode_utf32 decode_utf32le
#define encode_utf32 encode_utf32le
#define decode_utf16 decode_utf16le
#define encode_utf16 encode_utf16le
#else
#define decode_utf32 decode_utf32be
#define encode_utf32 encode_utf32be
#define decode_utf16 decode_utf16be
#define encode_utf16 encode_utf16be
#endif
static struct {
const char *name;
Decoder decoder;
Encoder encoder;
} charsets[] = {
{ "ISO-8859-1", decode_latin1, encode_latin1 },
{ "ISO8859-1", decode_latin1, encode_latin1 },
{ "UTF-32BE", decode_utf32be, encode_utf32be },
{ "UTF-32LE", decode_utf32le, encode_utf32le },
{ "UTF-16BE", decode_utf16be, encode_utf16be },
{ "UTF-16LE", decode_utf16le, encode_utf16le },
{ "UTF-32", decode_utf32, encode_utf32 },
{ "UTF-16", decode_utf16, encode_utf16 },
{ "UTF-8", decode_utf8, encode_utf8 },
{ "US-ASCII", decode_latin1, encode_latin1 },
{ "Latin1", decode_latin1, encode_latin1 },
{ "ASCII", decode_latin1, encode_latin1 },
{ "UTF32", decode_utf32, encode_utf32 },
{ "UTF16", decode_utf16, encode_utf16 },
{ "UTF8", decode_utf8, encode_utf8 },
};
GIConv
g_iconv_open (const char *to_charset, const char *from_charset)
{
#ifdef HAVE_LIBICONV
iconv_t icd = (iconv_t) -1;
#endif
Decoder decoder = NULL;
Encoder encoder = NULL;
GIConv cd;
guint i;
if (!to_charset || !from_charset || !to_charset[0] || !from_charset[0]) {
mono_set_errno (EINVAL);
return (GIConv) -1;
}
for (i = 0; i < G_N_ELEMENTS (charsets); i++) {
if (!g_ascii_strcasecmp (charsets[i].name, from_charset))
decoder = charsets[i].decoder;
if (!g_ascii_strcasecmp (charsets[i].name, to_charset))
encoder = charsets[i].encoder;
}
if (!encoder || !decoder) {
#ifdef HAVE_LIBICONV
if ((icd = iconv_open (to_charset, from_charset)) == (iconv_t) -1)
return (GIConv) -1;
#else
mono_set_errno (EINVAL);
return (GIConv) -1;
#endif
}
cd = (GIConv) g_malloc (sizeof (struct _GIConv));
cd->decode = decoder;
cd->encode = encoder;
cd->c = -1;
#ifdef HAVE_LIBICONV
cd->cd = icd;
#endif
return cd;
}
int
g_iconv_close (GIConv cd)
{
#ifdef HAVE_LIBICONV
if (cd->cd != (iconv_t) -1)
iconv_close (cd->cd);
#endif
g_free (cd);
return 0;
}
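/*
 * Illustrative usage sketch (not from an original caller; all names here
 * are hypothetical): a one-shot UTF-8 to UTF-16LE conversion with the
 * streaming API above.
 */
static void
example_iconv_usage (void)
{
	char in[] = "hello";
	char out[32];
	gchar *inptr = in, *outptr = out;
	gsize inleft = sizeof (in) - 1, outleft = sizeof (out);
	GIConv cd = g_iconv_open ("UTF-16LE", "UTF-8");

	if (cd == (GIConv) -1)
		return;
	/* on failure g_iconv returns (gsize) -1 with errno set to E2BIG, EINVAL or EILSEQ */
	if (g_iconv (cd, &inptr, &inleft, &outptr, &outleft) != (gsize) -1) {
		/* outptr - out bytes of UTF-16LE now sit in out[] */
	}
	g_iconv_close (cd);
}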
gsize
g_iconv (GIConv cd, gchar **inbytes, gsize *inbytesleft,
gchar **outbytes, gsize *outbytesleft)
{
gsize inleft, outleft;
char *inptr, *outptr;
gunichar c;
int rc = 0;
#ifdef HAVE_LIBICONV
if (cd->cd != (iconv_t) -1) {
/* Note: gsize may have a different size than size_t, so we need to
remap inbytesleft and outbytesleft to size_t's. */
size_t *outleftptr, *inleftptr;
size_t n_outleft, n_inleft;
if (inbytesleft) {
n_inleft = *inbytesleft;
inleftptr = &n_inleft;
} else {
inleftptr = NULL;
}
if (outbytesleft) {
n_outleft = *outbytesleft;
outleftptr = &n_outleft;
} else {
outleftptr = NULL;
}
// AIX needs this for C++ and GNU iconv
#if defined(__NetBSD__) || defined(_AIX)
return iconv (cd->cd, (const gchar **)inbytes, inleftptr, outbytes, outleftptr);
#else
return iconv (cd->cd, inbytes, inleftptr, outbytes, outleftptr);
#endif
}
#endif
if (outbytes == NULL || outbytesleft == NULL) {
/* reset converter */
cd->c = -1;
return 0;
}
inleft = inbytesleft ? *inbytesleft : 0;
inptr = inbytes ? *inbytes : NULL;
outleft = *outbytesleft;
outptr = *outbytes;
if ((c = cd->c) != (gunichar) -1)
goto encode;
while (inleft > 0) {
if ((rc = cd->decode (inptr, inleft, &c)) < 0)
break;
inleft -= rc;
inptr += rc;
encode:
if ((rc = cd->encode (c, outptr, outleft)) < 0)
break;
c = (gunichar) -1;
outleft -= rc;
outptr += rc;
}
if (inbytesleft)
*inbytesleft = inleft;
if (inbytes)
*inbytes = inptr;
*outbytesleft = outleft;
*outbytes = outptr;
cd->c = c;
return rc < 0 ? -1 : 0;
}
/*
* Unicode encoders and decoders
*/
static FORCE_INLINE (uint32_t)
read_uint32_endian (unsigned char *inptr, unsigned endian)
{
if (endian == G_LITTLE_ENDIAN)
return (inptr[3] << 24) | (inptr[2] << 16) | (inptr[1] << 8) | inptr[0];
return (inptr[0] << 24) | (inptr[1] << 16) | (inptr[2] << 8) | inptr[3];
}
static int
decode_utf32_endian (char *inbuf, size_t inleft, gunichar *outchar, unsigned endian)
{
unsigned char *inptr = (unsigned char *) inbuf;
gunichar c;
if (inleft < 4) {
mono_set_errno (EINVAL);
return -1;
}
c = read_uint32_endian (inptr, endian);
if (c >= 0xd800 && c < 0xe000) {
mono_set_errno (EILSEQ);
return -1;
} else if (c >= 0x110000) {
mono_set_errno (EILSEQ);
return -1;
}
*outchar = c;
return 4;
}
static int
decode_utf32be (char *inbuf, size_t inleft, gunichar *outchar)
{
return decode_utf32_endian (inbuf, inleft, outchar, G_BIG_ENDIAN);
}
static int
decode_utf32le (char *inbuf, size_t inleft, gunichar *outchar)
{
return decode_utf32_endian (inbuf, inleft, outchar, G_LITTLE_ENDIAN);
}
static int
encode_utf32be (gunichar c, char *outbuf, size_t outleft)
{
unsigned char *outptr = (unsigned char *) outbuf;
if (outleft < 4) {
mono_set_errno (E2BIG);
return -1;
}
outptr[0] = (c >> 24) & 0xff;
outptr[1] = (c >> 16) & 0xff;
outptr[2] = (c >> 8) & 0xff;
outptr[3] = c & 0xff;
return 4;
}
static int
encode_utf32le (gunichar c, char *outbuf, size_t outleft)
{
unsigned char *outptr = (unsigned char *) outbuf;
if (outleft < 4) {
mono_set_errno (E2BIG);
return -1;
}
outptr[0] = c & 0xff;
outptr[1] = (c >> 8) & 0xff;
outptr[2] = (c >> 16) & 0xff;
outptr[3] = (c >> 24) & 0xff;
return 4;
}
static FORCE_INLINE (uint16_t)
read_uint16_endian (unsigned char *inptr, unsigned endian)
{
if (endian == G_LITTLE_ENDIAN)
return (inptr[1] << 8) | inptr[0];
return (inptr[0] << 8) | inptr[1];
}
static FORCE_INLINE (int)
decode_utf16_endian (char *inbuf, size_t inleft, gunichar *outchar, unsigned endian)
{
unsigned char *inptr = (unsigned char *) inbuf;
gunichar2 c;
gunichar u;
if (inleft < 2) {
		mono_set_errno (EINVAL); /* incomplete input, not an output-space problem */
return -1;
}
u = read_uint16_endian (inptr, endian);
if (u < 0xd800) {
/* 0x0000 -> 0xd7ff */
*outchar = u;
return 2;
} else if (u < 0xdc00) {
/* 0xd800 -> 0xdbff */
if (inleft < 4) {
mono_set_errno (EINVAL);
return -2;
}
c = read_uint16_endian (inptr + 2, endian);
if (c < 0xdc00 || c > 0xdfff) {
mono_set_errno (EILSEQ);
return -2;
}
u = ((u - 0xd800) << 10) + (c - 0xdc00) + 0x0010000UL;
*outchar = u;
return 4;
} else if (u < 0xe000) {
/* 0xdc00 -> 0xdfff */
mono_set_errno (EILSEQ);
return -1;
} else {
/* 0xe000 -> 0xffff */
*outchar = u;
return 2;
}
}
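/*
 * Worked example for the surrogate branch above: U+1F600 arrives as the
 * pair 0xd83d 0xde00, and the recombination is
 *
 *     ((0xd83d - 0xd800) << 10) + (0xde00 - 0xdc00) + 0x10000 = 0x1f600
 */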
static int
decode_utf16be (char *inbuf, size_t inleft, gunichar *outchar)
{
return decode_utf16_endian (inbuf, inleft, outchar, G_BIG_ENDIAN);
}
static int
decode_utf16le (char *inbuf, size_t inleft, gunichar *outchar)
{
return decode_utf16_endian (inbuf, inleft, outchar, G_LITTLE_ENDIAN);
}
static FORCE_INLINE (void)
write_uint16_endian (unsigned char *outptr, uint16_t c, unsigned endian)
{
if (endian == G_LITTLE_ENDIAN) {
outptr[0] = c & 0xff;
outptr[1] = (c >> 8) & 0xff;
return;
}
outptr[0] = (c >> 8) & 0xff;
outptr[1] = c & 0xff;
}
static FORCE_INLINE (int)
encode_utf16_endian (gunichar c, char *outbuf, size_t outleft, unsigned endian)
{
unsigned char *outptr = (unsigned char *) outbuf;
gunichar2 ch;
gunichar c2;
if (c < 0x10000) {
if (outleft < 2) {
mono_set_errno (E2BIG);
return -1;
}
write_uint16_endian (outptr, c, endian);
return 2;
} else {
if (outleft < 4) {
mono_set_errno (E2BIG);
return -1;
}
c2 = c - 0x10000;
ch = (gunichar2) ((c2 >> 10) + 0xd800);
write_uint16_endian (outptr, ch, endian);
ch = (gunichar2) ((c2 & 0x3ff) + 0xdc00);
write_uint16_endian (outptr + 2, ch, endian);
return 4;
}
}
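/*
 * The encoder above inverts that mapping: for c = 0x1f600, c2 = 0xf600,
 * the high surrogate is (c2 >> 10) + 0xd800 = 0xd83d and the low
 * surrogate is (c2 & 0x3ff) + 0xdc00 = 0xde00.
 */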
static int
encode_utf16be (gunichar c, char *outbuf, size_t outleft)
{
return encode_utf16_endian (c, outbuf, outleft, G_BIG_ENDIAN);
}
static int
encode_utf16le (gunichar c, char *outbuf, size_t outleft)
{
return encode_utf16_endian (c, outbuf, outleft, G_LITTLE_ENDIAN);
}
static FORCE_INLINE (int)
decode_utf8 (char *inbuf, size_t inleft, gunichar *outchar)
{
unsigned char *inptr = (unsigned char *) inbuf;
gunichar u;
int n, i;
u = *inptr;
if (u < 0x80) {
/* simple ascii case */
*outchar = u;
return 1;
} else if (u < 0xc2) {
mono_set_errno (EILSEQ);
return -1;
} else if (u < 0xe0) {
u &= 0x1f;
n = 2;
} else if (u < 0xf0) {
u &= 0x0f;
n = 3;
} else if (u < 0xf8) {
u &= 0x07;
n = 4;
} else if (u < 0xfc) {
u &= 0x03;
n = 5;
} else if (u < 0xfe) {
u &= 0x01;
n = 6;
} else {
mono_set_errno (EILSEQ);
return -1;
}
if (n > inleft) {
mono_set_errno (EINVAL);
return -1;
}
#if UNROLL_DECODE_UTF8
switch (n) {
case 6: u = (u << 6) | (*++inptr ^ 0x80);
case 5: u = (u << 6) | (*++inptr ^ 0x80);
case 4: u = (u << 6) | (*++inptr ^ 0x80);
case 3: u = (u << 6) | (*++inptr ^ 0x80);
case 2: u = (u << 6) | (*++inptr ^ 0x80);
}
#else
for (i = 1; i < n; i++)
u = (u << 6) | (*++inptr ^ 0x80);
#endif
*outchar = u;
return n;
}
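/*
 * Example: the two-byte sequence 0xc3 0xa9 ("é") takes the u < 0xe0
 * branch, so u starts as 0xc3 & 0x1f = 0x03 and the continuation byte
 * folds in as (0x03 << 6) | (0xa9 ^ 0x80) = 0xe9, i.e. U+00E9.
 */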
static int
encode_utf8 (gunichar c, char *outbuf, size_t outleft)
{
unsigned char *outptr = (unsigned char *) outbuf;
int base, n, i;
if (c < 0x80) {
outptr[0] = c;
return 1;
} else if (c < 0x800) {
base = 192;
n = 2;
} else if (c < 0x10000) {
base = 224;
n = 3;
} else if (c < 0x200000) {
base = 240;
n = 4;
} else if (c < 0x4000000) {
base = 248;
n = 5;
} else {
base = 252;
n = 6;
}
if (outleft < n) {
mono_set_errno (E2BIG);
return -1;
}
#if UNROLL_ENCODE_UTF8
switch (n) {
case 6: outptr[5] = (c & 0x3f) | 0x80; c >>= 6;
case 5: outptr[4] = (c & 0x3f) | 0x80; c >>= 6;
case 4: outptr[3] = (c & 0x3f) | 0x80; c >>= 6;
case 3: outptr[2] = (c & 0x3f) | 0x80; c >>= 6;
case 2: outptr[1] = (c & 0x3f) | 0x80; c >>= 6;
case 1: outptr[0] = c | base;
}
#else
for (i = n - 1; i > 0; i--) {
outptr[i] = (c & 0x3f) | 0x80;
c >>= 6;
}
outptr[0] = c | base;
#endif
return n;
}
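/*
 * Encoding U+00E9 reverses this: n = 2 and base = 192, so the trailing
 * byte is (0xe9 & 0x3f) | 0x80 = 0xa9 and the lead byte is
 * (0xe9 >> 6) | 192 = 0xc3, reproducing the sequence 0xc3 0xa9.
 */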
static int
decode_latin1 (char *inbuf, size_t inleft, gunichar *outchar)
{
*outchar = (unsigned char) *inbuf;
return 1;
}
static int
encode_latin1 (gunichar c, char *outbuf, size_t outleft)
{
if (outleft < 1) {
mono_set_errno (E2BIG);
return -1;
}
if (c > 0xff) {
mono_set_errno (EILSEQ);
return -1;
}
*outbuf = (char) c;
return 1;
}
/*
* Simple conversion API
*/
static gpointer error_quark = (gpointer)"ConvertError";
gpointer
g_convert_error_quark (void)
{
return error_quark;
}
gchar *
g_convert (const gchar *str, gssize len, const gchar *to_charset, const gchar *from_charset,
gsize *bytes_read, gsize *bytes_written, GError **err)
{
gsize outsize, outused, outleft, inleft, grow, rc;
char *result, *outbuf, *inbuf;
gboolean flush = FALSE;
gboolean done = FALSE;
GIConv cd;
g_return_val_if_fail (str != NULL, NULL);
g_return_val_if_fail (to_charset != NULL, NULL);
g_return_val_if_fail (from_charset != NULL, NULL);
if ((cd = g_iconv_open (to_charset, from_charset)) == (GIConv) -1) {
g_set_error (err, G_CONVERT_ERROR, G_CONVERT_ERROR_NO_CONVERSION,
"Conversion from %s to %s not supported.",
from_charset, to_charset);
if (bytes_written)
*bytes_written = 0;
if (bytes_read)
*bytes_read = 0;
return NULL;
}
inleft = len < 0 ? strlen (str) : len;
inbuf = (char *) str;
outleft = outsize = MAX (inleft, 8);
outbuf = result = g_malloc (outsize + 4);
do {
if (!flush)
rc = g_iconv (cd, &inbuf, &inleft, &outbuf, &outleft);
else
rc = g_iconv (cd, NULL, NULL, &outbuf, &outleft);
if (rc == (gsize) -1) {
switch (errno) {
case E2BIG:
/* grow our result buffer */
grow = MAX (inleft, 8) << 1;
outused = outbuf - result;
outsize += grow;
outleft += grow;
result = g_realloc (result, outsize + 4);
outbuf = result + outused;
break;
case EINVAL:
/* incomplete input, stop converting and terminate here */
if (flush)
done = TRUE;
else
flush = TRUE;
break;
case EILSEQ:
/* illegal sequence in the input */
g_set_error (err, G_CONVERT_ERROR, G_CONVERT_ERROR_ILLEGAL_SEQUENCE, "%s", g_strerror (errno));
if (bytes_read) {
/* save offset of the illegal input sequence */
*bytes_read = (inbuf - str);
}
if (bytes_written)
*bytes_written = 0;
g_iconv_close (cd);
g_free (result);
return NULL;
default:
/* unknown errno */
g_set_error (err, G_CONVERT_ERROR, G_CONVERT_ERROR_FAILED, "%s", g_strerror (errno));
if (bytes_written)
*bytes_written = 0;
if (bytes_read)
*bytes_read = 0;
g_iconv_close (cd);
g_free (result);
return NULL;
}
} else if (flush) {
/* input has been converted and output has been flushed */
break;
} else {
/* input has been converted, need to flush the output */
flush = TRUE;
}
} while (!done);
g_iconv_close (cd);
/* Note: not all charsets can be null-terminated with a single
null byte. UCS2, for example, needs 2 null bytes and UCS4
needs 4. I hope that 4 null bytes is enough to terminate all
multibyte charsets? */
/* null-terminate the result */
memset (outbuf, 0, 4);
if (bytes_written)
*bytes_written = outbuf - result;
if (bytes_read)
*bytes_read = inbuf - str;
return result;
}
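/*
 * Illustrative usage sketch (hypothetical names, not an original caller):
 * the one-shot wrapper above with GError-based error reporting.
 */
static void
example_convert_usage (void)
{
	gsize bytes_read = 0, bytes_written = 0;
	GError *error = NULL;
	gchar *utf16 = g_convert ("hello", -1, "UTF-16LE", "UTF-8",
				  &bytes_read, &bytes_written, &error);

	if (!utf16) {
		g_error_free (error);
		return;
	}
	/* bytes_written bytes of converted data, null-terminated by g_convert */
	g_free (utf16);
}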
/*
* Unicode conversion
*/
/**
* An explanation of the conversion can be found at:
* http://home.tiscali.nl/t876506/utf8tbl.html
*
**/
gint
g_unichar_to_utf8 (gunichar c, gchar *outbuf)
{
int base, n, i;
if (c < 0x80) {
base = 0;
n = 1;
} else if (c < 0x800) {
base = 192;
n = 2;
} else if (c < 0x10000) {
base = 224;
n = 3;
} else if (c < 0x200000) {
base = 240;
n = 4;
} else if (c < 0x4000000) {
base = 248;
n = 5;
} else if (c < 0x80000000) {
base = 252;
n = 6;
} else {
return -1;
}
if (outbuf != NULL) {
for (i = n - 1; i > 0; i--) {
/* mask off 6 bits worth and add 128 */
outbuf[i] = (c & 0x3f) | 0x80;
c >>= 6;
}
/* first character has a different base */
outbuf[0] = c | base;
}
return n;
}
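/*
 * Passing outbuf == NULL turns this into a pure length query; the
 * two-pass converters below rely on that to size their result buffers
 * before the second, writing pass.
 */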
static FORCE_INLINE (int)
g_unichar_to_utf16 (gunichar c, gunichar2 *outbuf)
{
gunichar c2;
if (c < 0xd800) {
if (outbuf)
*outbuf = (gunichar2) c;
return 1;
} else if (c < 0xe000) {
return -1;
} else if (c < 0x10000) {
if (outbuf)
*outbuf = (gunichar2) c;
return 1;
} else if (c < 0x110000) {
if (outbuf) {
c2 = c - 0x10000;
outbuf[0] = (gunichar2) ((c2 >> 10) + 0xd800);
outbuf[1] = (gunichar2) ((c2 & 0x3ff) + 0xdc00);
}
return 2;
} else {
return -1;
}
}
gunichar *
g_utf8_to_ucs4_fast (const gchar *str, glong len, glong *items_written)
{
gunichar *outbuf, *outptr;
char *inptr;
glong n, i;
g_return_val_if_fail (str != NULL, NULL);
n = g_utf8_strlen (str, len);
if (items_written)
*items_written = n;
outptr = outbuf = g_malloc ((n + 1) * sizeof (gunichar));
inptr = (char *) str;
for (i = 0; i < n; i++) {
*outptr++ = g_utf8_get_char (inptr);
inptr = g_utf8_next_char (inptr);
}
*outptr = 0;
return outbuf;
}
static gunichar2 *
eg_utf8_to_utf16_general (const gchar *str, glong len, glong *items_read, glong *items_written, gboolean include_nuls, gboolean replace_invalid_codepoints, GCustomAllocator custom_alloc_func, gpointer custom_alloc_data, GError **err)
{
gunichar2 *outbuf, *outptr;
size_t outlen = 0;
size_t inleft;
char *inptr;
gunichar c;
int u, n;
g_return_val_if_fail (str != NULL, NULL);
if (len < 0) {
if (include_nuls) {
g_set_error (err, G_CONVERT_ERROR, G_CONVERT_ERROR_FAILED, "Conversions with embedded nulls must pass the string length");
return NULL;
}
len = strlen (str);
}
inptr = (char *) str;
inleft = len;
while (inleft > 0) {
if ((n = decode_utf8 (inptr, inleft, &c)) < 0)
goto error;
if (c == 0 && !include_nuls)
break;
if ((u = g_unichar_to_utf16 (c, NULL)) < 0) {
if (replace_invalid_codepoints) {
u = 2;
} else {
mono_set_errno (EILSEQ);
goto error;
}
}
outlen += u;
inleft -= n;
inptr += n;
}
if (items_read)
*items_read = inptr - str;
if (items_written)
*items_written = outlen;
if (G_LIKELY (!custom_alloc_func))
outptr = outbuf = g_malloc ((outlen + 1) * sizeof (gunichar2));
else
outptr = outbuf = (gunichar2 *)custom_alloc_func ((outlen + 1) * sizeof (gunichar2), custom_alloc_data);
if (G_UNLIKELY (custom_alloc_func && !outbuf)) {
mono_set_errno (ENOMEM);
goto error;
}
inptr = (char *) str;
inleft = len;
while (inleft > 0) {
if ((n = decode_utf8 (inptr, inleft, &c)) < 0)
break;
if (c == 0 && !include_nuls)
break;
u = g_unichar_to_utf16 (c, outptr);
if ((u < 0) && replace_invalid_codepoints) {
outptr[0] = 0xFFFD;
outptr[1] = 0xFFFD;
u = 2;
}
outptr += u;
inleft -= n;
inptr += n;
}
*outptr = '\0';
return outbuf;
error:
if (errno == ENOMEM) {
g_set_error (err, G_CONVERT_ERROR, G_CONVERT_ERROR_NO_MEMORY,
"Allocation failed.");
} else if (errno == EILSEQ) {
g_set_error (err, G_CONVERT_ERROR, G_CONVERT_ERROR_ILLEGAL_SEQUENCE,
"Illegal byte sequence encounted in the input.");
} else if (items_read) {
/* partial input is ok if we can let our caller know... */
} else {
g_set_error (err, G_CONVERT_ERROR, G_CONVERT_ERROR_PARTIAL_INPUT,
"Partial byte sequence encountered in the input.");
}
if (items_read)
*items_read = inptr - str;
if (items_written)
*items_written = 0;
return NULL;
}
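/*
 * Design note: the converters in this file make two passes over the
 * input (one to validate and measure, one to write), so a single
 * exact-size allocation suffices and a custom allocator learns the
 * required size before any bytes are written.
 */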
gunichar2 *
g_utf8_to_utf16 (const gchar *str, glong len, glong *items_read, glong *items_written, GError **err)
{
return eg_utf8_to_utf16_general (str, len, items_read, items_written, FALSE, FALSE, NULL, NULL, err);
}
gunichar2 *
g_utf8_to_utf16_custom_alloc (const gchar *str, glong len, glong *items_read, glong *items_written, GCustomAllocator custom_alloc_func, gpointer custom_alloc_data, GError **err)
{
return eg_utf8_to_utf16_general (str, len, items_read, items_written, FALSE, FALSE, custom_alloc_func, custom_alloc_data, err);
}
gunichar2 *
eg_utf8_to_utf16_with_nuls (const gchar *str, glong len, glong *items_read, glong *items_written, GError **err)
{
return eg_utf8_to_utf16_general (str, len, items_read, items_written, TRUE, FALSE, NULL, NULL, err);
}
gunichar2 *
eg_wtf8_to_utf16 (const gchar *str, glong len, glong *items_read, glong *items_written, GError **err)
{
return eg_utf8_to_utf16_general (str, len, items_read, items_written, TRUE, TRUE, NULL, NULL, err);
}
gunichar *
g_utf8_to_ucs4 (const gchar *str, glong len, glong *items_read, glong *items_written, GError **err)
{
gunichar *outbuf, *outptr;
size_t outlen = 0;
size_t inleft;
char *inptr;
gunichar c;
int n;
g_return_val_if_fail (str != NULL, NULL);
if (len < 0)
len = strlen (str);
inptr = (char *) str;
inleft = len;
while (inleft > 0) {
if ((n = decode_utf8 (inptr, inleft, &c)) < 0) {
if (errno == EILSEQ) {
g_set_error (err, G_CONVERT_ERROR, G_CONVERT_ERROR_ILLEGAL_SEQUENCE,
"Illegal byte sequence encounted in the input.");
} else if (items_read) {
/* partial input is ok if we can let our caller know... */
break;
} else {
g_set_error (err, G_CONVERT_ERROR, G_CONVERT_ERROR_PARTIAL_INPUT,
"Partial byte sequence encountered in the input.");
}
if (items_read)
*items_read = inptr - str;
if (items_written)
*items_written = 0;
return NULL;
} else if (c == 0)
break;
outlen += 4;
inleft -= n;
inptr += n;
}
if (items_written)
*items_written = outlen / 4;
if (items_read)
*items_read = inptr - str;
outptr = outbuf = g_malloc (outlen + 4);
inptr = (char *) str;
inleft = len;
while (inleft > 0) {
if ((n = decode_utf8 (inptr, inleft, &c)) < 0)
break;
else if (c == 0)
break;
*outptr++ = c;
inleft -= n;
inptr += n;
}
*outptr = 0;
return outbuf;
}
static
gchar *
eg_utf16_to_utf8_general (const gunichar2 *str, glong len, glong *items_read, glong *items_written, GCustomAllocator custom_alloc_func, gpointer custom_alloc_data, GError **err)
{
char *inptr, *outbuf, *outptr;
size_t outlen = 0;
size_t inleft;
gunichar c;
int n;
g_return_val_if_fail (str != NULL, NULL);
if (len < 0) {
len = 0;
while (str[len])
len++;
}
inptr = (char *) str;
inleft = len * 2;
while (inleft > 0) {
if ((n = decode_utf16 (inptr, inleft, &c)) < 0) {
if (n == -2 && inleft > 2) {
				/* This means that the first UTF-16 char was read, but the second failed */
inleft -= 2;
inptr += 2;
}
if (errno == EILSEQ) {
g_set_error (err, G_CONVERT_ERROR, G_CONVERT_ERROR_ILLEGAL_SEQUENCE,
"Illegal byte sequence encounted in the input.");
} else if (items_read) {
/* partial input is ok if we can let our caller know... */
break;
} else {
g_set_error (err, G_CONVERT_ERROR, G_CONVERT_ERROR_PARTIAL_INPUT,
"Partial byte sequence encountered in the input.");
}
if (items_read)
*items_read = (inptr - (char *) str) / 2;
if (items_written)
*items_written = 0;
return NULL;
} else if (c == 0)
break;
outlen += g_unichar_to_utf8 (c, NULL);
inleft -= n;
inptr += n;
}
if (items_read)
*items_read = (inptr - (char *) str) / 2;
if (items_written)
*items_written = outlen;
if (G_LIKELY (!custom_alloc_func))
outptr = outbuf = g_malloc (outlen + 1);
else
outptr = outbuf = (char *)custom_alloc_func (outlen + 1, custom_alloc_data);
if (G_UNLIKELY (custom_alloc_func && !outbuf)) {
g_set_error (err, G_CONVERT_ERROR, G_CONVERT_ERROR_NO_MEMORY, "Allocation failed.");
if (items_written)
*items_written = 0;
return NULL;
}
inptr = (char *) str;
inleft = len * 2;
while (inleft > 0) {
if ((n = decode_utf16 (inptr, inleft, &c)) < 0)
break;
else if (c == 0)
break;
outptr += g_unichar_to_utf8 (c, outptr);
inleft -= n;
inptr += n;
}
*outptr = '\0';
return outbuf;
}
gchar *
g_utf16_to_utf8 (const gunichar2 *str, glong len, glong *items_read, glong *items_written, GError **err)
{
return eg_utf16_to_utf8_general (str, len, items_read, items_written, NULL, NULL, err);
}
gchar *
g_utf16_to_utf8_custom_alloc (const gunichar2 *str, glong len, glong *items_read, glong *items_written, GCustomAllocator custom_alloc_func, gpointer custom_alloc_data, GError **err)
{
return eg_utf16_to_utf8_general (str, len, items_read, items_written, custom_alloc_func, custom_alloc_data, err);
}
gunichar *
g_utf16_to_ucs4 (const gunichar2 *str, glong len, glong *items_read, glong *items_written, GError **err)
{
gunichar *outbuf, *outptr;
size_t outlen = 0;
size_t inleft;
char *inptr;
gunichar c;
int n;
g_return_val_if_fail (str != NULL, NULL);
if (len < 0) {
len = 0;
while (str[len])
len++;
}
inptr = (char *) str;
inleft = len * 2;
while (inleft > 0) {
if ((n = decode_utf16 (inptr, inleft, &c)) < 0) {
if (n == -2 && inleft > 2) {
				/* This means that the first UTF-16 char was read, but the second failed */
inleft -= 2;
inptr += 2;
}
if (errno == EILSEQ) {
g_set_error (err, G_CONVERT_ERROR, G_CONVERT_ERROR_ILLEGAL_SEQUENCE,
"Illegal byte sequence encounted in the input.");
} else if (items_read) {
/* partial input is ok if we can let our caller know... */
break;
} else {
g_set_error (err, G_CONVERT_ERROR, G_CONVERT_ERROR_PARTIAL_INPUT,
"Partial byte sequence encountered in the input.");
}
if (items_read)
*items_read = (inptr - (char *) str) / 2;
if (items_written)
*items_written = 0;
return NULL;
} else if (c == 0)
break;
outlen += 4;
inleft -= n;
inptr += n;
}
if (items_read)
*items_read = (inptr - (char *) str) / 2;
if (items_written)
*items_written = outlen / 4;
outptr = outbuf = g_malloc (outlen + 4);
inptr = (char *) str;
inleft = len * 2;
while (inleft > 0) {
if ((n = decode_utf16 (inptr, inleft, &c)) < 0)
break;
else if (c == 0)
break;
*outptr++ = c;
inleft -= n;
inptr += n;
}
*outptr = 0;
return outbuf;
}
gchar *
g_ucs4_to_utf8 (const gunichar *str, glong len, glong *items_read, glong *items_written, GError **err)
{
char *outbuf, *outptr;
size_t outlen = 0;
glong i;
int n;
g_return_val_if_fail (str != NULL, NULL);
if (len < 0) {
for (i = 0; str[i] != 0; i++) {
if ((n = g_unichar_to_utf8 (str[i], NULL)) < 0) {
g_set_error (err, G_CONVERT_ERROR, G_CONVERT_ERROR_ILLEGAL_SEQUENCE,
"Illegal byte sequence encounted in the input.");
if (items_written)
*items_written = 0;
if (items_read)
*items_read = i;
return NULL;
}
outlen += n;
}
} else {
for (i = 0; i < len && str[i] != 0; i++) {
if ((n = g_unichar_to_utf8 (str[i], NULL)) < 0) {
g_set_error (err, G_CONVERT_ERROR, G_CONVERT_ERROR_ILLEGAL_SEQUENCE,
"Illegal byte sequence encounted in the input.");
if (items_written)
*items_written = 0;
if (items_read)
*items_read = i;
return NULL;
}
outlen += n;
}
}
len = i;
outptr = outbuf = g_malloc (outlen + 1);
for (i = 0; i < len; i++)
outptr += g_unichar_to_utf8 (str[i], outptr);
*outptr = 0;
if (items_written)
*items_written = outlen;
if (items_read)
*items_read = i;
return outbuf;
}
gunichar2 *
g_ucs4_to_utf16 (const gunichar *str, glong len, glong *items_read, glong *items_written, GError **err)
{
gunichar2 *outbuf, *outptr;
size_t outlen = 0;
glong i;
int n;
g_return_val_if_fail (str != NULL, NULL);
if (len < 0) {
for (i = 0; str[i] != 0; i++) {
if ((n = g_unichar_to_utf16 (str[i], NULL)) < 0) {
g_set_error (err, G_CONVERT_ERROR, G_CONVERT_ERROR_ILLEGAL_SEQUENCE,
"Illegal byte sequence encounted in the input.");
if (items_written)
*items_written = 0;
if (items_read)
*items_read = i;
return NULL;
}
outlen += n;
}
} else {
for (i = 0; i < len && str[i] != 0; i++) {
if ((n = g_unichar_to_utf16 (str[i], NULL)) < 0) {
g_set_error (err, G_CONVERT_ERROR, G_CONVERT_ERROR_ILLEGAL_SEQUENCE,
"Illegal byte sequence encounted in the input.");
if (items_written)
*items_written = 0;
if (items_read)
*items_read = i;
return NULL;
}
outlen += n;
}
}
len = i;
outptr = outbuf = g_malloc ((outlen + 1) * sizeof (gunichar2));
for (i = 0; i < len; i++)
outptr += g_unichar_to_utf16 (str[i], outptr);
*outptr = 0;
if (items_written)
*items_written = outlen;
if (items_read)
*items_read = i;
return outbuf;
}
gpointer
g_fixed_buffer_custom_allocator (gsize req_size, gpointer custom_alloc_data)
{
GFixedBufferCustomAllocatorData *fixed_buffer_custom_alloc_data = (GFixedBufferCustomAllocatorData *)custom_alloc_data;
if (!fixed_buffer_custom_alloc_data)
return NULL;
fixed_buffer_custom_alloc_data->req_buffer_size = req_size;
if (req_size > fixed_buffer_custom_alloc_data->buffer_size)
return NULL;
return fixed_buffer_custom_alloc_data->buffer;
}
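/*
 * Illustrative usage sketch (hypothetical names): converting into a
 * caller-owned buffer via the allocator above. When the buffer is too
 * small the allocator returns NULL and req_buffer_size reports the size
 * that was actually required.
 */
static void
example_fixed_buffer_usage (const gunichar2 *str)
{
	char buf[256];
	GFixedBufferCustomAllocatorData data;
	gchar *res;

	data.buffer = buf;
	data.buffer_size = sizeof (buf);
	data.req_buffer_size = 0;

	res = g_utf16_to_utf8_custom_alloc (str, -1, NULL, NULL,
					    g_fixed_buffer_custom_allocator, &data, NULL);
	if (!res && data.req_buffer_size > sizeof (buf))
		return; /* buf was too small; data.req_buffer_size has the needed size */
}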
| /* -*- Mode: C; tab-width: 8; indent-tabs-mode: t; c-basic-offset: 8 -*- */
/*
* Copyright (C) 2011 Jeffrey Stedfast
*
* Permission is hereby granted, free of charge, to any person
* obtaining a copy of this software and associated documentation
* files (the "Software"), to deal in the Software without
* restriction, including without limitation the rights to use, copy,
* modify, merge, publish, distribute, sublicense, and/or sell copies
* of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be
* included in all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
* HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
* WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#include <config.h>
#include <glib.h>
#include <string.h>
#include <errno.h>
#include "../utils/mono-errno.h"
#ifdef _MSC_VER
#define FORCE_INLINE(RET_TYPE) __forceinline RET_TYPE
#else
#define FORCE_INLINE(RET_TYPE) inline RET_TYPE __attribute__((always_inline))
#endif
#define UNROLL_DECODE_UTF8 0
#define UNROLL_ENCODE_UTF8 0
static int decode_utf32be (char *inbuf, size_t inleft, gunichar *outchar);
static int encode_utf32be (gunichar c, char *outbuf, size_t outleft);
static int decode_utf32le (char *inbuf, size_t inleft, gunichar *outchar);
static int encode_utf32le (gunichar c, char *outbuf, size_t outleft);
static int decode_utf16be (char *inbuf, size_t inleft, gunichar *outchar);
static int encode_utf16be (gunichar c, char *outbuf, size_t outleft);
static int decode_utf16le (char *inbuf, size_t inleft, gunichar *outchar);
static int encode_utf16le (gunichar c, char *outbuf, size_t outleft);
static FORCE_INLINE (int) decode_utf8 (char *inbuf, size_t inleft, gunichar *outchar);
static int encode_utf8 (gunichar c, char *outbuf, size_t outleft);
static int decode_latin1 (char *inbuf, size_t inleft, gunichar *outchar);
static int encode_latin1 (gunichar c, char *outbuf, size_t outleft);
#if G_BYTE_ORDER == G_LITTLE_ENDIAN
#define decode_utf32 decode_utf32le
#define encode_utf32 encode_utf32le
#define decode_utf16 decode_utf16le
#define encode_utf16 encode_utf16le
#else
#define decode_utf32 decode_utf32be
#define encode_utf32 encode_utf32be
#define decode_utf16 decode_utf16be
#define encode_utf16 encode_utf16be
#endif
/*
* Unicode encoders and decoders
*/
static FORCE_INLINE (uint32_t)
read_uint32_endian (unsigned char *inptr, unsigned endian)
{
if (endian == G_LITTLE_ENDIAN)
return (inptr[3] << 24) | (inptr[2] << 16) | (inptr[1] << 8) | inptr[0];
return (inptr[0] << 24) | (inptr[1] << 16) | (inptr[2] << 8) | inptr[3];
}
static int
decode_utf32_endian (char *inbuf, size_t inleft, gunichar *outchar, unsigned endian)
{
unsigned char *inptr = (unsigned char *) inbuf;
gunichar c;
if (inleft < 4) {
mono_set_errno (EINVAL);
return -1;
}
c = read_uint32_endian (inptr, endian);
if (c >= 0xd800 && c < 0xe000) {
mono_set_errno (EILSEQ);
return -1;
} else if (c >= 0x110000) {
mono_set_errno (EILSEQ);
return -1;
}
*outchar = c;
return 4;
}
static int
decode_utf32be (char *inbuf, size_t inleft, gunichar *outchar)
{
return decode_utf32_endian (inbuf, inleft, outchar, G_BIG_ENDIAN);
}
static int
decode_utf32le (char *inbuf, size_t inleft, gunichar *outchar)
{
return decode_utf32_endian (inbuf, inleft, outchar, G_LITTLE_ENDIAN);
}
static int
encode_utf32be (gunichar c, char *outbuf, size_t outleft)
{
unsigned char *outptr = (unsigned char *) outbuf;
if (outleft < 4) {
mono_set_errno (E2BIG);
return -1;
}
outptr[0] = (c >> 24) & 0xff;
outptr[1] = (c >> 16) & 0xff;
outptr[2] = (c >> 8) & 0xff;
outptr[3] = c & 0xff;
return 4;
}
static int
encode_utf32le (gunichar c, char *outbuf, size_t outleft)
{
unsigned char *outptr = (unsigned char *) outbuf;
if (outleft < 4) {
mono_set_errno (E2BIG);
return -1;
}
outptr[0] = c & 0xff;
outptr[1] = (c >> 8) & 0xff;
outptr[2] = (c >> 16) & 0xff;
outptr[3] = (c >> 24) & 0xff;
return 4;
}
static FORCE_INLINE (uint16_t)
read_uint16_endian (unsigned char *inptr, unsigned endian)
{
if (endian == G_LITTLE_ENDIAN)
return (inptr[1] << 8) | inptr[0];
return (inptr[0] << 8) | inptr[1];
}
static FORCE_INLINE (int)
decode_utf16_endian (char *inbuf, size_t inleft, gunichar *outchar, unsigned endian)
{
unsigned char *inptr = (unsigned char *) inbuf;
gunichar2 c;
gunichar u;
if (inleft < 2) {
		mono_set_errno (EINVAL); /* incomplete input, not an output-space problem */
return -1;
}
u = read_uint16_endian (inptr, endian);
if (u < 0xd800) {
/* 0x0000 -> 0xd7ff */
*outchar = u;
return 2;
} else if (u < 0xdc00) {
/* 0xd800 -> 0xdbff */
if (inleft < 4) {
mono_set_errno (EINVAL);
return -2;
}
c = read_uint16_endian (inptr + 2, endian);
if (c < 0xdc00 || c > 0xdfff) {
mono_set_errno (EILSEQ);
return -2;
}
u = ((u - 0xd800) << 10) + (c - 0xdc00) + 0x0010000UL;
*outchar = u;
return 4;
} else if (u < 0xe000) {
/* 0xdc00 -> 0xdfff */
mono_set_errno (EILSEQ);
return -1;
} else {
/* 0xe000 -> 0xffff */
*outchar = u;
return 2;
}
}
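/*
 * Worked example (editor's note): the pair 0xd83d 0xde00 decodes to
 * U+1F600. With u = 0xd83d and c = 0xde00:
 *   ((0xd83d - 0xd800) << 10) + (0xde00 - 0xdc00) + 0x10000
 *     = 0xf400 + 0x200 + 0x10000 = 0x1f600
 */
#if 0 /* editor's illustrative sketch only, not part of the original file */
static void
example_decode_utf16 (void)
{
	/* U+1F600 as a little-endian surrogate pair */
	char seq [4] = { 0x3d, (char) 0xd8, 0x00, (char) 0xde };
	gunichar ch = 0;
	/* both units are consumed, so the return value is 4 bytes */
	g_assert (decode_utf16le (seq, 4, &ch) == 4 && ch == 0x1f600);
}
#endif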
static int
decode_utf16be (char *inbuf, size_t inleft, gunichar *outchar)
{
return decode_utf16_endian (inbuf, inleft, outchar, G_BIG_ENDIAN);
}
static int
decode_utf16le (char *inbuf, size_t inleft, gunichar *outchar)
{
return decode_utf16_endian (inbuf, inleft, outchar, G_LITTLE_ENDIAN);
}
static FORCE_INLINE (void)
write_uint16_endian (unsigned char *outptr, uint16_t c, unsigned endian)
{
if (endian == G_LITTLE_ENDIAN) {
outptr[0] = c & 0xff;
outptr[1] = (c >> 8) & 0xff;
return;
}
outptr[0] = (c >> 8) & 0xff;
outptr[1] = c & 0xff;
}
static FORCE_INLINE (int)
encode_utf16_endian (gunichar c, char *outbuf, size_t outleft, unsigned endian)
{
unsigned char *outptr = (unsigned char *) outbuf;
gunichar2 ch;
gunichar c2;
if (c < 0x10000) {
if (outleft < 2) {
mono_set_errno (E2BIG);
return -1;
}
write_uint16_endian (outptr, c, endian);
return 2;
} else {
if (outleft < 4) {
mono_set_errno (E2BIG);
return -1;
}
c2 = c - 0x10000;
ch = (gunichar2) ((c2 >> 10) + 0xd800);
write_uint16_endian (outptr, ch, endian);
ch = (gunichar2) ((c2 & 0x3ff) + 0xdc00);
write_uint16_endian (outptr + 2, ch, endian);
return 4;
}
}
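/*
 * Worked example (editor's note): encoding U+1F600 splits it into the
 * surrogate pair 0xd83d 0xde00. With c2 = 0x1f600 - 0x10000 = 0xf600:
 *   high surrogate = (0xf600 >> 10) + 0xd800 = 0xd83d
 *   low surrogate  = (0xf600 & 0x3ff) + 0xdc00 = 0xde00
 */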
static int
encode_utf16be (gunichar c, char *outbuf, size_t outleft)
{
return encode_utf16_endian (c, outbuf, outleft, G_BIG_ENDIAN);
}
static int
encode_utf16le (gunichar c, char *outbuf, size_t outleft)
{
return encode_utf16_endian (c, outbuf, outleft, G_LITTLE_ENDIAN);
}
static FORCE_INLINE (int)
decode_utf8 (char *inbuf, size_t inleft, gunichar *outchar)
{
unsigned char *inptr = (unsigned char *) inbuf;
gunichar u;
int n, i;
u = *inptr;
if (u < 0x80) {
/* simple ascii case */
*outchar = u;
return 1;
} else if (u < 0xc2) {
mono_set_errno (EILSEQ);
return -1;
} else if (u < 0xe0) {
u &= 0x1f;
n = 2;
} else if (u < 0xf0) {
u &= 0x0f;
n = 3;
} else if (u < 0xf8) {
u &= 0x07;
n = 4;
} else if (u < 0xfc) {
u &= 0x03;
n = 5;
} else if (u < 0xfe) {
u &= 0x01;
n = 6;
} else {
mono_set_errno (EILSEQ);
return -1;
}
if (n > inleft) {
mono_set_errno (EINVAL);
return -1;
}
#if UNROLL_DECODE_UTF8
switch (n) {
case 6: u = (u << 6) | (*++inptr ^ 0x80);
case 5: u = (u << 6) | (*++inptr ^ 0x80);
case 4: u = (u << 6) | (*++inptr ^ 0x80);
case 3: u = (u << 6) | (*++inptr ^ 0x80);
case 2: u = (u << 6) | (*++inptr ^ 0x80);
}
#else
for (i = 1; i < n; i++)
u = (u << 6) | (*++inptr ^ 0x80);
#endif
*outchar = u;
return n;
}
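/*
 * Worked example (editor's note): the sequence 0xc3 0xa9 decodes to U+00E9.
 * The lead byte yields u = 0xc3 & 0x1f = 0x03 and n = 2; folding in the
 * continuation byte gives (0x03 << 6) | (0xa9 ^ 0x80) = 0xe9. The u < 0xc2
 * check above also rejects the overlong lead bytes 0xc0 and 0xc1.
 */
#if 0 /* editor's illustrative sketch only, not part of the original file */
static void
example_decode_utf8 (void)
{
	char seq [2] = { (char) 0xc3, (char) 0xa9 };
	gunichar ch = 0;
	g_assert (decode_utf8 (seq, 2, &ch) == 2 && ch == 0xe9);
}
#endif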
static int
encode_utf8 (gunichar c, char *outbuf, size_t outleft)
{
unsigned char *outptr = (unsigned char *) outbuf;
int base, n, i;
if (c < 0x80) {
outptr[0] = c;
return 1;
} else if (c < 0x800) {
base = 192;
n = 2;
} else if (c < 0x10000) {
base = 224;
n = 3;
} else if (c < 0x200000) {
base = 240;
n = 4;
} else if (c < 0x4000000) {
base = 248;
n = 5;
} else {
base = 252;
n = 6;
}
if (outleft < n) {
mono_set_errno (E2BIG);
return -1;
}
#if UNROLL_ENCODE_UTF8
switch (n) {
case 6: outptr[5] = (c & 0x3f) | 0x80; c >>= 6;
case 5: outptr[4] = (c & 0x3f) | 0x80; c >>= 6;
case 4: outptr[3] = (c & 0x3f) | 0x80; c >>= 6;
case 3: outptr[2] = (c & 0x3f) | 0x80; c >>= 6;
case 2: outptr[1] = (c & 0x3f) | 0x80; c >>= 6;
case 1: outptr[0] = c | base;
}
#else
for (i = n - 1; i > 0; i--) {
outptr[i] = (c & 0x3f) | 0x80;
c >>= 6;
}
outptr[0] = c | base;
#endif
return n;
}
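/*
 * Worked example (editor's note): U+20AC (the euro sign) takes base = 224
 * and n = 3, producing the bytes 0xe2 0x82 0xac:
 *   outptr[2] = (0x20ac & 0x3f) | 0x80 = 0xac
 *   outptr[1] = (0x82 & 0x3f) | 0x80 = 0x82
 *   outptr[0] = 0x02 | 0xe0 = 0xe2
 */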
static int
decode_latin1 (char *inbuf, size_t inleft, gunichar *outchar)
{
*outchar = (unsigned char) *inbuf;
return 1;
}
static int
encode_latin1 (gunichar c, char *outbuf, size_t outleft)
{
if (outleft < 1) {
mono_set_errno (E2BIG);
return -1;
}
if (c > 0xff) {
mono_set_errno (EILSEQ);
return -1;
}
*outbuf = (char) c;
return 1;
}
/*
* Simple conversion API
*/
static gpointer error_quark = (gpointer)"ConvertError";
gpointer
g_convert_error_quark (void)
{
return error_quark;
}
/*
* Unicode conversion
*/
/**
 * An explanation of the UTF-8 conversion performed below can be found at:
 * http://home.tiscali.nl/t876506/utf8tbl.html
 **/
gint
g_unichar_to_utf8 (gunichar c, gchar *outbuf)
{
int base, n, i;
if (c < 0x80) {
base = 0;
n = 1;
} else if (c < 0x800) {
base = 192;
n = 2;
} else if (c < 0x10000) {
base = 224;
n = 3;
} else if (c < 0x200000) {
base = 240;
n = 4;
} else if (c < 0x4000000) {
base = 248;
n = 5;
} else if (c < 0x80000000) {
base = 252;
n = 6;
} else {
return -1;
}
if (outbuf != NULL) {
for (i = n - 1; i > 0; i--) {
/* mask off 6 bits worth and add 128 */
outbuf[i] = (c & 0x3f) | 0x80;
c >>= 6;
}
/* first character has a different base */
outbuf[0] = c | base;
}
return n;
}
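/*
 * Usage sketch (editor's illustration; the helper below is hypothetical):
 * a NULL outbuf measures the encoded length, so callers can size a buffer
 * before a second, writing call.
 */
#if 0 /* editor's illustrative sketch only, not part of the original file */
static char *
example_encode_one (gunichar c)
{
	gint n = g_unichar_to_utf8 (c, NULL); /* measuring pass */
	if (n < 0)
		return NULL;
	char *buf = g_new0 (char, n + 1);
	g_unichar_to_utf8 (c, buf); /* writing pass */
	return buf;
}
#endif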
static FORCE_INLINE (int)
g_unichar_to_utf16 (gunichar c, gunichar2 *outbuf)
{
gunichar c2;
if (c < 0xd800) {
if (outbuf)
*outbuf = (gunichar2) c;
return 1;
} else if (c < 0xe000) {
return -1;
} else if (c < 0x10000) {
if (outbuf)
*outbuf = (gunichar2) c;
return 1;
} else if (c < 0x110000) {
if (outbuf) {
c2 = c - 0x10000;
outbuf[0] = (gunichar2) ((c2 >> 10) + 0xd800);
outbuf[1] = (gunichar2) ((c2 & 0x3ff) + 0xdc00);
}
return 2;
} else {
return -1;
}
}
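/*
 * Editor's note: the return value is the number of UTF-16 code units (1 or
 * 2), or -1 for a lone surrogate (U+D800-U+DFFF) or a value above U+10FFFF.
 * The converters below rely on the NULL-outbuf form to size their output
 * in a first pass before the second, writing pass.
 */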
gunichar *
g_utf8_to_ucs4_fast (const gchar *str, glong len, glong *items_written)
{
gunichar *outbuf, *outptr;
char *inptr;
glong n, i;
g_return_val_if_fail (str != NULL, NULL);
n = g_utf8_strlen (str, len);
if (items_written)
*items_written = n;
outptr = outbuf = g_malloc ((n + 1) * sizeof (gunichar));
inptr = (char *) str;
for (i = 0; i < n; i++) {
*outptr++ = g_utf8_get_char (inptr);
inptr = g_utf8_next_char (inptr);
}
*outptr = 0;
return outbuf;
}
static gunichar2 *
eg_utf8_to_utf16_general (const gchar *str, glong len, glong *items_read, glong *items_written, gboolean include_nuls, gboolean replace_invalid_codepoints, GCustomAllocator custom_alloc_func, gpointer custom_alloc_data, GError **err)
{
gunichar2 *outbuf, *outptr;
size_t outlen = 0;
size_t inleft;
char *inptr;
gunichar c;
int u, n;
g_return_val_if_fail (str != NULL, NULL);
if (len < 0) {
if (include_nuls) {
g_set_error (err, G_CONVERT_ERROR, G_CONVERT_ERROR_FAILED, "Conversions with embedded nulls must pass the string length");
return NULL;
}
len = strlen (str);
}
inptr = (char *) str;
inleft = len;
while (inleft > 0) {
if ((n = decode_utf8 (inptr, inleft, &c)) < 0)
goto error;
if (c == 0 && !include_nuls)
break;
if ((u = g_unichar_to_utf16 (c, NULL)) < 0) {
if (replace_invalid_codepoints) {
u = 2;
} else {
mono_set_errno (EILSEQ);
goto error;
}
}
outlen += u;
inleft -= n;
inptr += n;
}
if (items_read)
*items_read = inptr - str;
if (items_written)
*items_written = outlen;
if (G_LIKELY (!custom_alloc_func))
outptr = outbuf = g_malloc ((outlen + 1) * sizeof (gunichar2));
else
outptr = outbuf = (gunichar2 *)custom_alloc_func ((outlen + 1) * sizeof (gunichar2), custom_alloc_data);
if (G_UNLIKELY (custom_alloc_func && !outbuf)) {
mono_set_errno (ENOMEM);
goto error;
}
inptr = (char *) str;
inleft = len;
while (inleft > 0) {
if ((n = decode_utf8 (inptr, inleft, &c)) < 0)
break;
if (c == 0 && !include_nuls)
break;
u = g_unichar_to_utf16 (c, outptr);
if ((u < 0) && replace_invalid_codepoints) {
outptr[0] = 0xFFFD;
outptr[1] = 0xFFFD;
u = 2;
}
outptr += u;
inleft -= n;
inptr += n;
}
*outptr = '\0';
return outbuf;
error:
if (errno == ENOMEM) {
g_set_error (err, G_CONVERT_ERROR, G_CONVERT_ERROR_NO_MEMORY,
"Allocation failed.");
} else if (errno == EILSEQ) {
g_set_error (err, G_CONVERT_ERROR, G_CONVERT_ERROR_ILLEGAL_SEQUENCE,
"Illegal byte sequence encounted in the input.");
} else if (items_read) {
/* partial input is ok if we can let our caller know... */
} else {
g_set_error (err, G_CONVERT_ERROR, G_CONVERT_ERROR_PARTIAL_INPUT,
"Partial byte sequence encountered in the input.");
}
if (items_read)
*items_read = inptr - str;
if (items_written)
*items_written = 0;
return NULL;
}
gunichar2 *
g_utf8_to_utf16 (const gchar *str, glong len, glong *items_read, glong *items_written, GError **err)
{
return eg_utf8_to_utf16_general (str, len, items_read, items_written, FALSE, FALSE, NULL, NULL, err);
}
gunichar2 *
g_utf8_to_utf16_custom_alloc (const gchar *str, glong len, glong *items_read, glong *items_written, GCustomAllocator custom_alloc_func, gpointer custom_alloc_data, GError **err)
{
return eg_utf8_to_utf16_general (str, len, items_read, items_written, FALSE, FALSE, custom_alloc_func, custom_alloc_data, err);
}
gunichar2 *
eg_utf8_to_utf16_with_nuls (const gchar *str, glong len, glong *items_read, glong *items_written, GError **err)
{
return eg_utf8_to_utf16_general (str, len, items_read, items_written, TRUE, FALSE, NULL, NULL, err);
}
gunichar2 *
eg_wtf8_to_utf16 (const gchar *str, glong len, glong *items_read, glong *items_written, GError **err)
{
return eg_utf8_to_utf16_general (str, len, items_read, items_written, TRUE, TRUE, NULL, NULL, err);
}
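/*
 * Usage sketch (editor's illustration; the helper below is hypothetical):
 * the common wrapper with len = -1 converts a NUL-terminated string and
 * reports failures through GError.
 */
#if 0 /* editor's illustrative sketch only, not part of the original file */
static void
example_utf8_to_utf16 (void)
{
	GError *error = NULL;
	glong written = 0;
	gunichar2 *utf16 = g_utf8_to_utf16 ("caf\xc3\xa9", -1, NULL, &written, &error);
	if (!utf16) {
		g_error_free (error);
		return;
	}
	/* "café" is four code points, so written == 4 UTF-16 units */
	g_free (utf16);
}
#endif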
gunichar *
g_utf8_to_ucs4 (const gchar *str, glong len, glong *items_read, glong *items_written, GError **err)
{
gunichar *outbuf, *outptr;
size_t outlen = 0;
size_t inleft;
char *inptr;
gunichar c;
int n;
g_return_val_if_fail (str != NULL, NULL);
if (len < 0)
len = strlen (str);
inptr = (char *) str;
inleft = len;
while (inleft > 0) {
if ((n = decode_utf8 (inptr, inleft, &c)) < 0) {
if (errno == EILSEQ) {
g_set_error (err, G_CONVERT_ERROR, G_CONVERT_ERROR_ILLEGAL_SEQUENCE,
"Illegal byte sequence encounted in the input.");
} else if (items_read) {
/* partial input is ok if we can let our caller know... */
break;
} else {
g_set_error (err, G_CONVERT_ERROR, G_CONVERT_ERROR_PARTIAL_INPUT,
"Partial byte sequence encountered in the input.");
}
if (items_read)
*items_read = inptr - str;
if (items_written)
*items_written = 0;
return NULL;
} else if (c == 0)
break;
outlen += 4;
inleft -= n;
inptr += n;
}
if (items_written)
*items_written = outlen / 4;
if (items_read)
*items_read = inptr - str;
outptr = outbuf = g_malloc (outlen + 4);
inptr = (char *) str;
inleft = len;
while (inleft > 0) {
if ((n = decode_utf8 (inptr, inleft, &c)) < 0)
break;
else if (c == 0)
break;
*outptr++ = c;
inleft -= n;
inptr += n;
}
*outptr = 0;
return outbuf;
}
static
gchar *
eg_utf16_to_utf8_general (const gunichar2 *str, glong len, glong *items_read, glong *items_written, GCustomAllocator custom_alloc_func, gpointer custom_alloc_data, GError **err)
{
char *inptr, *outbuf, *outptr;
size_t outlen = 0;
size_t inleft;
gunichar c;
int n;
g_return_val_if_fail (str != NULL, NULL);
if (len < 0) {
len = 0;
while (str[len])
len++;
}
inptr = (char *) str;
inleft = len * 2;
while (inleft > 0) {
if ((n = decode_utf16 (inptr, inleft, &c)) < 0) {
if (n == -2 && inleft > 2) {
/* This means that the first UTF-16 unit was read, but the second one failed */
inleft -= 2;
inptr += 2;
}
if (errno == EILSEQ) {
g_set_error (err, G_CONVERT_ERROR, G_CONVERT_ERROR_ILLEGAL_SEQUENCE,
"Illegal byte sequence encounted in the input.");
} else if (items_read) {
/* partial input is ok if we can let our caller know... */
break;
} else {
g_set_error (err, G_CONVERT_ERROR, G_CONVERT_ERROR_PARTIAL_INPUT,
"Partial byte sequence encountered in the input.");
}
if (items_read)
*items_read = (inptr - (char *) str) / 2;
if (items_written)
*items_written = 0;
return NULL;
} else if (c == 0)
break;
outlen += g_unichar_to_utf8 (c, NULL);
inleft -= n;
inptr += n;
}
if (items_read)
*items_read = (inptr - (char *) str) / 2;
if (items_written)
*items_written = outlen;
if (G_LIKELY (!custom_alloc_func))
outptr = outbuf = g_malloc (outlen + 1);
else
outptr = outbuf = (char *)custom_alloc_func (outlen + 1, custom_alloc_data);
if (G_UNLIKELY (custom_alloc_func && !outbuf)) {
g_set_error (err, G_CONVERT_ERROR, G_CONVERT_ERROR_NO_MEMORY, "Allocation failed.");
if (items_written)
*items_written = 0;
return NULL;
}
inptr = (char *) str;
inleft = len * 2;
while (inleft > 0) {
if ((n = decode_utf16 (inptr, inleft, &c)) < 0)
break;
else if (c == 0)
break;
outptr += g_unichar_to_utf8 (c, outptr);
inleft -= n;
inptr += n;
}
*outptr = '\0';
return outbuf;
}
gchar *
g_utf16_to_utf8 (const gunichar2 *str, glong len, glong *items_read, glong *items_written, GError **err)
{
return eg_utf16_to_utf8_general (str, len, items_read, items_written, NULL, NULL, err);
}
gchar *
g_utf16_to_utf8_custom_alloc (const gunichar2 *str, glong len, glong *items_read, glong *items_written, GCustomAllocator custom_alloc_func, gpointer custom_alloc_data, GError **err)
{
return eg_utf16_to_utf8_general (str, len, items_read, items_written, custom_alloc_func, custom_alloc_data, err);
}
gunichar *
g_utf16_to_ucs4 (const gunichar2 *str, glong len, glong *items_read, glong *items_written, GError **err)
{
gunichar *outbuf, *outptr;
size_t outlen = 0;
size_t inleft;
char *inptr;
gunichar c;
int n;
g_return_val_if_fail (str != NULL, NULL);
if (len < 0) {
len = 0;
while (str[len])
len++;
}
inptr = (char *) str;
inleft = len * 2;
while (inleft > 0) {
if ((n = decode_utf16 (inptr, inleft, &c)) < 0) {
if (n == -2 && inleft > 2) {
/* This means that the first UTF-16 unit was read, but the second one failed */
inleft -= 2;
inptr += 2;
}
if (errno == EILSEQ) {
g_set_error (err, G_CONVERT_ERROR, G_CONVERT_ERROR_ILLEGAL_SEQUENCE,
"Illegal byte sequence encounted in the input.");
} else if (items_read) {
/* partial input is ok if we can let our caller know... */
break;
} else {
g_set_error (err, G_CONVERT_ERROR, G_CONVERT_ERROR_PARTIAL_INPUT,
"Partial byte sequence encountered in the input.");
}
if (items_read)
*items_read = (inptr - (char *) str) / 2;
if (items_written)
*items_written = 0;
return NULL;
} else if (c == 0)
break;
outlen += 4;
inleft -= n;
inptr += n;
}
if (items_read)
*items_read = (inptr - (char *) str) / 2;
if (items_written)
*items_written = outlen / 4;
outptr = outbuf = g_malloc (outlen + 4);
inptr = (char *) str;
inleft = len * 2;
while (inleft > 0) {
if ((n = decode_utf16 (inptr, inleft, &c)) < 0)
break;
else if (c == 0)
break;
*outptr++ = c;
inleft -= n;
inptr += n;
}
*outptr = 0;
return outbuf;
}
gchar *
g_ucs4_to_utf8 (const gunichar *str, glong len, glong *items_read, glong *items_written, GError **err)
{
char *outbuf, *outptr;
size_t outlen = 0;
glong i;
int n;
g_return_val_if_fail (str != NULL, NULL);
if (len < 0) {
for (i = 0; str[i] != 0; i++) {
if ((n = g_unichar_to_utf8 (str[i], NULL)) < 0) {
g_set_error (err, G_CONVERT_ERROR, G_CONVERT_ERROR_ILLEGAL_SEQUENCE,
"Illegal byte sequence encounted in the input.");
if (items_written)
*items_written = 0;
if (items_read)
*items_read = i;
return NULL;
}
outlen += n;
}
} else {
for (i = 0; i < len && str[i] != 0; i++) {
if ((n = g_unichar_to_utf8 (str[i], NULL)) < 0) {
g_set_error (err, G_CONVERT_ERROR, G_CONVERT_ERROR_ILLEGAL_SEQUENCE,
"Illegal byte sequence encounted in the input.");
if (items_written)
*items_written = 0;
if (items_read)
*items_read = i;
return NULL;
}
outlen += n;
}
}
len = i;
outptr = outbuf = g_malloc (outlen + 1);
for (i = 0; i < len; i++)
outptr += g_unichar_to_utf8 (str[i], outptr);
*outptr = 0;
if (items_written)
*items_written = outlen;
if (items_read)
*items_read = i;
return outbuf;
}
gunichar2 *
g_ucs4_to_utf16 (const gunichar *str, glong len, glong *items_read, glong *items_written, GError **err)
{
gunichar2 *outbuf, *outptr;
size_t outlen = 0;
glong i;
int n;
g_return_val_if_fail (str != NULL, NULL);
if (len < 0) {
for (i = 0; str[i] != 0; i++) {
if ((n = g_unichar_to_utf16 (str[i], NULL)) < 0) {
g_set_error (err, G_CONVERT_ERROR, G_CONVERT_ERROR_ILLEGAL_SEQUENCE,
"Illegal byte sequence encounted in the input.");
if (items_written)
*items_written = 0;
if (items_read)
*items_read = i;
return NULL;
}
outlen += n;
}
} else {
for (i = 0; i < len && str[i] != 0; i++) {
if ((n = g_unichar_to_utf16 (str[i], NULL)) < 0) {
g_set_error (err, G_CONVERT_ERROR, G_CONVERT_ERROR_ILLEGAL_SEQUENCE,
"Illegal byte sequence encounted in the input.");
if (items_written)
*items_written = 0;
if (items_read)
*items_read = i;
return NULL;
}
outlen += n;
}
}
len = i;
outptr = outbuf = g_malloc ((outlen + 1) * sizeof (gunichar2));
for (i = 0; i < len; i++)
outptr += g_unichar_to_utf16 (str[i], outptr);
*outptr = 0;
if (items_written)
*items_written = outlen;
if (items_read)
*items_read = i;
return outbuf;
}
gpointer
g_fixed_buffer_custom_allocator (gsize req_size, gpointer custom_alloc_data)
{
GFixedBufferCustomAllocatorData *fixed_buffer_custom_alloc_data = (GFixedBufferCustomAllocatorData *)custom_alloc_data;
if (!fixed_buffer_custom_alloc_data)
return NULL;
fixed_buffer_custom_alloc_data->req_buffer_size = req_size;
if (req_size > fixed_buffer_custom_alloc_data->buffer_size)
return NULL;
return fixed_buffer_custom_alloc_data->buffer;
}
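/*
 * Usage sketch (editor's illustration; the helper below is hypothetical):
 * pairing this allocator with the custom-alloc converters avoids a heap
 * allocation when the result fits in a caller-provided buffer, and
 * req_buffer_size reports the size that was needed when it does not.
 */
#if 0 /* editor's illustrative sketch only, not part of the original file */
static void
example_fixed_buffer_convert (const char *str)
{
	gunichar2 stack_buf [128];
	GFixedBufferCustomAllocatorData data;
	data.buffer = stack_buf;
	data.buffer_size = sizeof (stack_buf);
	data.req_buffer_size = 0;
	gunichar2 *res = g_utf8_to_utf16_custom_alloc (str, -1, NULL, NULL,
		g_fixed_buffer_custom_allocator, &data, NULL);
	if (!res) {
		/* data.req_buffer_size now holds the byte count that was required */
	}
}
#endif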
| 1 |
dotnet/runtime | 66,211 | [mono] Remove SkipVerification support from the runtime | CAS support was removed in .NET Core. This allows us removing a bunch of code that is unused, e.g. the dependency on libiconv. | akoeplinger | 2022-03-04T19:47:04Z | 2022-03-06T13:44:33Z | b463b1630dbf1be5b013208a9fa73e1ecd6c774c | be629f49a350d526de2c65981294734cee420b90 | [mono] Remove SkipVerification support from the runtime. CAS support was removed in .NET Core. This allows us removing a bunch of code that is unused, e.g. the dependency on libiconv. | ./src/mono/mono/eglib/glib.h | #ifndef __GLIB_H
#define __GLIB_H
// Ask stdint.h and inttypes.h for the full C99 features for CentOS 6 g++ 4.4, Android, etc.
// See for example:
// $HOME/android-toolchain/toolchains/armeabi-v7a-clang/sysroot/usr/include/inttypes.h
// $HOME/android-toolchain/toolchains/armeabi-v7a-clang/sysroot/usr/include/stdint.h
#ifdef __cplusplus
#ifndef __STDC_LIMIT_MACROS
#define __STDC_LIMIT_MACROS
#endif
#ifndef __STDC_CONSTANT_MACROS
#define __STDC_CONSTANT_MACROS
#endif
#ifndef __STDC_FORMAT_MACROS
#define __STDC_FORMAT_MACROS
#endif
#endif // __cplusplus
#include <stdarg.h>
#include <stdlib.h>
#include <string.h>
#include <stdio.h>
#include <stddef.h>
#include <ctype.h>
#include <limits.h>
#include <stdint.h>
#include <inttypes.h>
#include <eglib-config.h>
#include <minipal/utils.h>
#include <time.h>
// - Pointers should only be converted to or from pointer-sized integers.
// - Any size integer can be converted to any other size integer.
// - Therefore a pointer-sized integer is the intermediary between
// a pointer and any integer type.
#define GPOINTER_TO_INT(ptr) ((gint)(gssize)(ptr))
#define GPOINTER_TO_UINT(ptr) ((guint)(gsize)(ptr))
#define GINT_TO_POINTER(v) ((gpointer)(gssize)(v))
#define GUINT_TO_POINTER(v) ((gpointer)(gsize)(v))
#ifndef EGLIB_NO_REMAP
#include <eglib-remap.h>
#endif
#ifdef G_HAVE_ALLOCA_H
#include <alloca.h>
#endif
#ifdef WIN32
/* For alloca */
#include <malloc.h>
#endif
#ifdef G_HAVE_UNISTD_H
#include <unistd.h>
#endif
#ifndef offsetof
# define offsetof(s_name,n_name) (size_t)(char *)&(((s_name*)0)->n_name)
#endif
#ifdef __cplusplus
#define G_BEGIN_DECLS extern "C" {
#define G_END_DECLS }
#define G_EXTERN_C extern "C"
#else
#define G_BEGIN_DECLS /* nothing */
#define G_END_DECLS /* nothing */
#define G_EXTERN_C /* nothing */
#endif
#ifdef __cplusplus
#define g_cast monoeg_g_cast // in case not inlined (see eglib-remap.h)
// g_cast converts void* to T*.
// e.g. #define malloc(x) (g_cast (malloc (x)))
// FIXME It used to do more. Rename?
struct g_cast
{
private:
void * const x;
public:
explicit g_cast (void volatile *y) : x((void*)y) { }
// Lack of rvalue constructor inhibits ternary operator.
// Either don't use ternary, or cast each side.
// sa = (salen <= 128) ? g_alloca (salen) : g_malloc (salen);
// w32socket.c:1045:24: error: call to deleted constructor of 'monoeg_g_cast'
//g_cast (g_cast&& y) : x(y.x) { }
g_cast (g_cast&&) = delete;
g_cast () = delete;
g_cast (const g_cast&) = delete;
template <typename TTo>
operator TTo* () const
{
return (TTo*)x;
}
};
#else
// FIXME? Parens are omitted to preserve prior meaning.
#define g_cast(x) x
#endif
// G++4.4 breaks opeq below without this.
#if defined (__GNUC__) || defined (__clang__)
#define G_MAY_ALIAS __attribute__((__may_alias__))
#else
#define G_MAY_ALIAS /* nothing */
#endif
#ifdef __cplusplus
// Provide for bit operations on enums, but not all integer operations.
// This alleviates a fair number of casts in porting C to C++.
// Forward declare template with no generic implementation.
template <size_t> struct g_size_to_int;
// Template specializations.
template <> struct g_size_to_int<1> { typedef int8_t type; };
template <> struct g_size_to_int<2> { typedef int16_t type; };
template <> struct g_size_to_int<4> { typedef int32_t type; };
template <> struct g_size_to_int<8> { typedef int64_t type; };
// g++4.4 does not accept:
//template <typename T>
//using g_size_to_int_t = typename g_size_to_int <sizeof (T)>::type;
#define g_size_to_int_t(x) g_size_to_int <sizeof (x)>::type
#define G_ENUM_BINOP(Enum, op, opeq) \
inline Enum \
operator op (Enum a, Enum b) \
{ \
typedef g_size_to_int_t (Enum) type; \
return static_cast<Enum>(static_cast<type>(a) op b); \
} \
\
inline Enum& \
operator opeq (Enum& a, Enum b) \
{ \
typedef g_size_to_int_t (Enum) G_MAY_ALIAS type; \
return (Enum&)((type&)a opeq b); \
} \
#define G_ENUM_FUNCTIONS(Enum) \
extern "C++" { /* in case within extern "C" */ \
inline Enum \
operator~ (Enum a) \
{ \
typedef g_size_to_int_t (Enum) type; \
return static_cast<Enum>(~static_cast<type>(a)); \
} \
\
G_ENUM_BINOP (Enum, |, |=) \
G_ENUM_BINOP (Enum, &, &=) \
G_ENUM_BINOP (Enum, ^, ^=) \
\
} /* extern "C++" */
#else
#define G_ENUM_FUNCTIONS(Enum) /* nothing */
#endif
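/*
 * Editor's illustration: after G_ENUM_FUNCTIONS (GFileTest) (the enum is
 * declared later in this header), C++ code can combine flag values without
 * casts:
 *
 *   GFileTest t = G_FILE_TEST_EXISTS | G_FILE_TEST_IS_DIR;
 *   t &= ~G_FILE_TEST_IS_DIR;
 */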
G_BEGIN_DECLS
/*
* Basic data types
*/
typedef int gint;
typedef unsigned int guint;
typedef short gshort;
typedef unsigned short gushort;
typedef long glong;
typedef unsigned long gulong;
typedef void * gpointer;
typedef const void * gconstpointer;
typedef char gchar;
typedef unsigned char guchar;
/* Types defined in terms of the stdint.h */
typedef int8_t gint8;
typedef uint8_t guint8;
typedef int16_t gint16;
typedef uint16_t guint16;
typedef int32_t gint32;
typedef uint32_t guint32;
typedef int64_t gint64;
typedef uint64_t guint64;
typedef float gfloat;
typedef double gdouble;
typedef int32_t gboolean;
#if defined (HOST_WIN32) || defined (_WIN32)
G_END_DECLS
#include <wchar.h>
typedef wchar_t gunichar2;
G_BEGIN_DECLS
#else
typedef guint16 gunichar2;
#endif
typedef guint32 gunichar;
/*
* Macros
*/
#define G_N_ELEMENTS(s) ARRAY_SIZE(s)
#define FALSE 0
#define TRUE 1
#define G_MINSHORT SHRT_MIN
#define G_MAXSHORT SHRT_MAX
#define G_MAXUSHORT USHRT_MAX
#define G_MAXINT INT_MAX
#define G_MININT INT_MIN
#define G_MAXINT8 INT8_MAX
#define G_MAXUINT8 UINT8_MAX
#define G_MININT8 INT8_MIN
#define G_MAXINT16 INT16_MAX
#define G_MAXUINT16 UINT16_MAX
#define G_MININT16 INT16_MIN
#define G_MAXINT32 INT32_MAX
#define G_MAXUINT32 UINT32_MAX
#define G_MININT32 INT32_MIN
#define G_MININT64 INT64_MIN
#define G_MAXINT64 INT64_MAX
#define G_MAXUINT64 UINT64_MAX
#define G_LITTLE_ENDIAN 1234
#define G_BIG_ENDIAN 4321
#define G_STMT_START do
#define G_STMT_END while (0)
#define G_USEC_PER_SEC 1000000
#ifndef ABS
#define ABS(a) ((a) > 0 ? (a) : -(a))
#endif
#define ALIGN_TO(val,align) ((((gssize)val) + (gssize)((align) - 1)) & (~((gssize)(align - 1))))
#define ALIGN_DOWN_TO(val,align) (((gssize)val) & (~((gssize)(align - 1))))
#define ALIGN_PTR_TO(ptr,align) (gpointer)((((gssize)(ptr)) + (gssize)(align - 1)) & (~((gssize)(align - 1))))
#define G_STRUCT_OFFSET(p_type,field) offsetof(p_type,field)
#define EGLIB_STRINGIFY(x) #x
#define EGLIB_TOSTRING(x) EGLIB_STRINGIFY(x)
#define G_STRLOC __FILE__ ":" EGLIB_TOSTRING(__LINE__) ":"
#define G_CONST_RETURN const
#define G_GUINT64_FORMAT PRIu64
#define G_GINT64_FORMAT PRIi64
#define G_GUINT32_FORMAT PRIu32
#define G_GINT32_FORMAT PRIi32
#ifdef __GNUC__
#define G_ATTR_FORMAT_PRINTF(fmt_pos,arg_pos) __attribute__((__format__(__printf__,fmt_pos,arg_pos)))
#else
#define G_ATTR_FORMAT_PRINTF(fmt_pos,arg_pos)
#endif
/*
* Allocation
*/
G_EXTERN_C // Used by MonoPosixHelper or MonoSupportW, at least.
void g_free (void *ptr);
G_EXTERN_C // Used by MonoPosixHelper or MonoSupportW, at least.
gpointer g_realloc (gpointer obj, gsize size);
G_EXTERN_C // Used by MonoPosixHelper or MonoSupportW, at least.
gpointer g_malloc (gsize x);
G_EXTERN_C // Used by MonoPosixHelper or MonoSupportW, at least.
gpointer g_malloc0 (gsize x);
G_EXTERN_C // Used by profilers, at least.
gpointer g_calloc (gsize n, gsize x);
gpointer g_try_malloc (gsize x);
gpointer g_try_realloc (gpointer obj, gsize size);
#define g_new(type,size) ((type *) g_malloc (sizeof (type) * (size)))
#define g_new0(type,size) ((type *) g_malloc0 (sizeof (type)* (size)))
#define g_newa(type,size) ((type *) alloca (sizeof (type) * (size)))
#define g_newa0(type,size) ((type *) memset (alloca (sizeof (type) * (size)), 0, sizeof (type) * (size)))
#define g_memmove(dest,src,len) memmove (dest, src, len)
#define g_renew(struct_type, mem, n_structs) ((struct_type*)g_realloc (mem, sizeof (struct_type) * n_structs))
#define g_alloca(size) (g_cast (alloca (size)))
G_EXTERN_C // Used by libtest, at least.
gpointer g_memdup (gconstpointer mem, guint byte_size);
static inline gchar *g_strdup (const gchar *str) { if (str) { return (gchar*) g_memdup (str, (guint)strlen (str) + 1); } return NULL; }
gchar **g_strdupv (gchar **str_array);
typedef struct {
gpointer (*malloc) (gsize n_bytes);
gpointer (*realloc) (gpointer mem, gsize n_bytes);
void (*free) (gpointer mem);
gpointer (*calloc) (gsize n_blocks, gsize n_block_bytes);
} GMemVTable;
void g_mem_set_vtable (GMemVTable* vtable);
void g_mem_get_vtable (GMemVTable* vtable);
struct _GMemChunk {
guint alloc_size;
};
typedef struct _GMemChunk GMemChunk;
/*
* Misc.
*/
gboolean g_hasenv(const gchar *variable);
gchar * g_getenv(const gchar *variable);
G_EXTERN_C // sdks/wasm/driver.c is C and uses this
gboolean g_setenv(const gchar *variable, const gchar *value, gboolean overwrite);
gchar* g_win32_getlocale(void);
/*
* Precondition macros
*/
#define g_warn_if_fail(x) G_STMT_START { if (!(x)) { g_warning ("%s:%d: assertion '%s' failed\n", __FILE__, __LINE__, #x); } } G_STMT_END
#define g_return_if_fail(x) G_STMT_START { if (!(x)) { g_critical ("%s:%d: assertion '%s' failed\n", __FILE__, __LINE__, #x); return; } } G_STMT_END
#define g_return_val_if_fail(x,e) G_STMT_START { if (!(x)) { g_critical ("%s:%d: assertion '%s' failed\n", __FILE__, __LINE__, #x); return (e); } } G_STMT_END
/*
* Errors
*/
typedef struct {
/* In the real glib, this is a GQuark, but we dont use/need that */
gpointer domain;
gint code;
gchar *message;
} GError;
void g_clear_error (GError **gerror);
void g_error_free (GError *gerror);
GError *g_error_new (gpointer domain, gint code, const char *format, ...);
void g_set_error (GError **err, gpointer domain, gint code, const gchar *format, ...);
void g_propagate_error (GError **dest, GError *src);
/*
* Strings utility
*/
G_EXTERN_C // Used by libtest, at least.
gchar *g_strdup_printf (const gchar *format, ...) G_ATTR_FORMAT_PRINTF(1, 2);
gchar *g_strdup_vprintf (const gchar *format, va_list args);
gchar *g_strndup (const gchar *str, gsize n);
const gchar *g_strerror (gint errnum);
void g_strfreev (gchar **str_array);
gchar *g_strconcat (const gchar *first, ...);
gchar **g_strsplit (const gchar *string, const gchar *delimiter, gint max_tokens);
gchar **g_strsplit_set (const gchar *string, const gchar *delimiter, gint max_tokens);
gchar *g_strreverse (gchar *str);
gboolean g_str_has_prefix (const gchar *str, const gchar *prefix);
gboolean g_str_has_suffix (const gchar *str, const gchar *suffix);
guint g_strv_length (gchar **str_array);
gchar *g_strjoin (const gchar *separator, ...);
gchar *g_strjoinv (const gchar *separator, gchar **str_array);
gchar *g_strchug (gchar *str);
gchar *g_strchomp (gchar *str);
gchar *g_strnfill (gsize length, gchar fill_char);
gsize g_strnlen (const char*, gsize);
char *g_str_from_file_region (int fd, guint64 offset, gsize size);
void g_strdelimit (char *string, char delimiter, char new_delimiter);
gint g_printf (gchar const *format, ...) G_ATTR_FORMAT_PRINTF(1, 2);
gint g_fprintf (FILE *file, gchar const *format, ...) G_ATTR_FORMAT_PRINTF(2, 3);
gint g_sprintf (gchar *string, gchar const *format, ...) G_ATTR_FORMAT_PRINTF(2, 3);
gint g_snprintf (gchar *string, gulong n, gchar const *format, ...) G_ATTR_FORMAT_PRINTF(3, 4);
gint g_vasprintf (gchar **ret, const gchar *fmt, va_list ap);
#define g_vprintf vprintf
#define g_vfprintf vfprintf
#define g_vsprintf vsprintf
#define g_vsnprintf vsnprintf
gsize g_strlcpy (gchar *dest, const gchar *src, gsize dest_size);
gchar *g_stpcpy (gchar *dest, const char *src);
gchar g_ascii_tolower (gchar c);
gchar g_ascii_toupper (gchar c);
gchar *g_ascii_strdown (const gchar *str, gssize len);
void g_ascii_strdown_no_alloc (char* dst, const char* src, gsize len);
gchar *g_ascii_strup (const gchar *str, gssize len);
gint g_ascii_strncasecmp (const gchar *s1, const gchar *s2, gsize n);
gint g_ascii_strcasecmp (const gchar *s1, const gchar *s2);
gint g_ascii_xdigit_value (gchar c);
#define g_ascii_isspace(c) (isspace (c) != 0)
#define g_ascii_isalpha(c) (isalpha (c) != 0)
#define g_ascii_isprint(c) (isprint (c) != 0)
#define g_ascii_isxdigit(c) (isxdigit (c) != 0)
gboolean g_utf16_ascii_equal (const gunichar2 *utf16, size_t ulen, const char *ascii, size_t alen);
gboolean g_utf16_asciiz_equal (const gunichar2 *utf16, const char *ascii);
static inline
gboolean g_ascii_equal (const char *s1, gsize len1, const char *s2, gsize len2)
{
return len1 == len2 && (s1 == s2 || memcmp (s1, s2, len1) == 0);
}
static inline
gboolean g_asciiz_equal (const char *s1, const char *s2)
{
return s1 == s2 || strcmp (s1, s2) == 0;
}
static inline
gboolean
g_ascii_equal_caseinsensitive (const char *s1, gsize len1, const char *s2, gsize len2)
{
return len1 == len2 && (s1 == s2 || g_ascii_strncasecmp (s1, s2, len1) == 0);
}
static inline
gboolean
g_asciiz_equal_caseinsensitive (const char *s1, const char *s2)
{
return s1 == s2 || g_ascii_strcasecmp (s1, s2) == 0;
}
/* FIXME: g_strcasecmp supports utf8 unicode stuff */
#ifdef _MSC_VER
#define g_strcasecmp _stricmp
#define g_strncasecmp _strnicmp
#define g_strstrip(a) g_strchug (g_strchomp (a))
#else
#define g_strcasecmp strcasecmp
#define g_ascii_strtoull strtoull
#define g_strncasecmp strncasecmp
#define g_strstrip(a) g_strchug (g_strchomp (a))
#endif
#define g_ascii_strdup strdup
/*
* String type
*/
typedef struct {
char *str;
gsize len;
gsize allocated_len;
} GString;
GString *g_string_new (const gchar *init);
GString *g_string_new_len (const gchar *init, gssize len);
GString *g_string_sized_new (gsize default_size);
gchar *g_string_free (GString *string, gboolean free_segment);
GString *g_string_append (GString *string, const gchar *val);
void g_string_printf (GString *string, const gchar *format, ...) G_ATTR_FORMAT_PRINTF(2, 3);
void g_string_append_printf (GString *string, const gchar *format, ...) G_ATTR_FORMAT_PRINTF(2, 3);
void g_string_append_vprintf (GString *string, const gchar *format, va_list args);
GString *g_string_append_unichar (GString *string, gunichar c);
GString *g_string_append_c (GString *string, gchar c);
GString *g_string_append_len (GString *string, const gchar *val, gssize len);
GString *g_string_truncate (GString *string, gsize len);
GString *g_string_set_size (GString *string, gsize len);
#define g_string_sprintfa g_string_append_printf
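/*
 * Usage sketch (editor's illustration; the helper below is hypothetical):
 * the usual build-then-detach pattern; g_string_free (s, FALSE) releases
 * the GString wrapper but hands back its character buffer.
 */
#if 0 /* editor's illustrative sketch only, not part of the original file */
static char *
example_gstring (int count)
{
	GString *s = g_string_new ("items: ");
	g_string_append_printf (s, "%d", count);
	return g_string_free (s, FALSE); /* caller g_frees the returned buffer */
}
#endif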
typedef void (*GFunc) (gpointer data, gpointer user_data);
typedef gint (*GCompareFunc) (gconstpointer a, gconstpointer b);
typedef gint (*GCompareDataFunc) (gconstpointer a, gconstpointer b, gpointer user_data);
typedef void (*GHFunc) (gpointer key, gpointer value, gpointer user_data);
typedef gboolean (*GHRFunc) (gpointer key, gpointer value, gpointer user_data);
typedef void (*GDestroyNotify) (gpointer data);
typedef guint (*GHashFunc) (gconstpointer key);
typedef gboolean (*GEqualFunc) (gconstpointer a, gconstpointer b);
typedef void (*GFreeFunc) (gpointer data);
/*
* Lists
*/
typedef struct _GSList GSList;
struct _GSList {
gpointer data;
GSList *next;
};
GSList *g_slist_alloc (void);
GSList *g_slist_append (GSList *list,
gpointer data);
GSList *g_slist_prepend (GSList *list,
gpointer data);
void g_slist_free (GSList *list);
void g_slist_free_1 (GSList *list);
GSList *g_slist_copy (GSList *list);
GSList *g_slist_concat (GSList *list1,
GSList *list2);
void g_slist_foreach (GSList *list,
GFunc func,
gpointer user_data);
GSList *g_slist_last (GSList *list);
GSList *g_slist_find (GSList *list,
gconstpointer data);
GSList *g_slist_find_custom (GSList *list,
gconstpointer data,
GCompareFunc func);
GSList *g_slist_remove (GSList *list,
gconstpointer data);
GSList *g_slist_remove_all (GSList *list,
gconstpointer data);
GSList *g_slist_reverse (GSList *list);
guint g_slist_length (GSList *list);
GSList *g_slist_remove_link (GSList *list,
GSList *link);
GSList *g_slist_delete_link (GSList *list,
GSList *link);
GSList *g_slist_insert_sorted (GSList *list,
gpointer data,
GCompareFunc func);
GSList *g_slist_insert_before (GSList *list,
GSList *sibling,
gpointer data);
GSList *g_slist_sort (GSList *list,
GCompareFunc func);
gint g_slist_index (GSList *list,
gconstpointer data);
GSList *g_slist_nth (GSList *list,
guint n);
gpointer g_slist_nth_data (GSList *list,
guint n);
#define g_slist_next(slist) ((slist) ? (((GSList *) (slist))->next) : NULL)
typedef struct _GList GList;
struct _GList {
gpointer data;
GList *next;
GList *prev;
};
#define g_list_next(list) ((list) ? (((GList *) (list))->next) : NULL)
#define g_list_previous(list) ((list) ? (((GList *) (list))->prev) : NULL)
GList *g_list_alloc (void);
GList *g_list_append (GList *list,
gpointer data);
GList *g_list_prepend (GList *list,
gpointer data);
void g_list_free (GList *list);
void g_list_free_1 (GList *list);
GList *g_list_copy (GList *list);
guint g_list_length (GList *list);
gint g_list_index (GList *list,
gconstpointer data);
GList *g_list_nth (GList *list,
guint n);
gpointer g_list_nth_data (GList *list,
guint n);
GList *g_list_last (GList *list);
GList *g_list_concat (GList *list1,
GList *list2);
void g_list_foreach (GList *list,
GFunc func,
gpointer user_data);
GList *g_list_first (GList *list);
GList *g_list_find (GList *list,
gconstpointer data);
GList *g_list_find_custom (GList *list,
gconstpointer data,
GCompareFunc func);
GList *g_list_remove (GList *list,
gconstpointer data);
GList *g_list_remove_all (GList *list,
gconstpointer data);
GList *g_list_reverse (GList *list);
GList *g_list_remove_link (GList *list,
GList *link);
GList *g_list_delete_link (GList *list,
GList *link);
GList *g_list_insert_sorted (GList *list,
gpointer data,
GCompareFunc func);
GList *g_list_insert_before (GList *list,
GList *sibling,
gpointer data);
GList *g_list_sort (GList *sort,
GCompareFunc func);
/*
* Hashtables
*/
typedef struct _GHashTable GHashTable;
typedef struct _GHashTableIter GHashTableIter;
/* Private, but needed for stack allocation */
struct _GHashTableIter
{
gpointer dummy [8];
};
G_EXTERN_C // Used by MonoPosixHelper or MonoSupportW, at least.
GHashTable *g_hash_table_new (GHashFunc hash_func, GEqualFunc key_equal_func);
GHashTable *g_hash_table_new_full (GHashFunc hash_func, GEqualFunc key_equal_func,
GDestroyNotify key_destroy_func, GDestroyNotify value_destroy_func);
G_EXTERN_C // Used by MonoPosixHelper or MonoSupportW, at least.
void g_hash_table_insert_replace (GHashTable *hash, gpointer key, gpointer value, gboolean replace);
guint g_hash_table_size (GHashTable *hash);
GList *g_hash_table_get_keys (GHashTable *hash);
GList *g_hash_table_get_values (GHashTable *hash);
gboolean g_hash_table_contains (GHashTable *hash, gconstpointer key);
G_EXTERN_C // Used by MonoPosixHelper or MonoSupportW, at least.
gpointer g_hash_table_lookup (GHashTable *hash, gconstpointer key);
gboolean g_hash_table_lookup_extended (GHashTable *hash, gconstpointer key, gpointer *orig_key, gpointer *value);
G_EXTERN_C // Used by MonoPosixHelper or MonoSupportW, at least.
void g_hash_table_foreach (GHashTable *hash, GHFunc func, gpointer user_data);
gpointer g_hash_table_find (GHashTable *hash, GHRFunc predicate, gpointer user_data);
G_EXTERN_C // Used by MonoPosixHelper or MonoSupportW, at least.
gboolean g_hash_table_remove (GHashTable *hash, gconstpointer key);
gboolean g_hash_table_steal (GHashTable *hash, gconstpointer key);
void g_hash_table_remove_all (GHashTable *hash);
guint g_hash_table_foreach_remove (GHashTable *hash, GHRFunc func, gpointer user_data);
guint g_hash_table_foreach_steal (GHashTable *hash, GHRFunc func, gpointer user_data);
G_EXTERN_C // Used by MonoPosixHelper or MonoSupportW, at least.
void g_hash_table_destroy (GHashTable *hash);
void g_hash_table_print_stats (GHashTable *table);
void g_hash_table_iter_init (GHashTableIter *iter, GHashTable *hash_table);
gboolean g_hash_table_iter_next (GHashTableIter *iter, gpointer *key, gpointer *value);
guint g_spaced_primes_closest (guint x);
#define g_hash_table_insert(h,k,v) g_hash_table_insert_replace ((h),(k),(v),FALSE)
#define g_hash_table_replace(h,k,v) g_hash_table_insert_replace ((h),(k),(v),TRUE)
#define g_hash_table_add(h,k) g_hash_table_insert_replace ((h),(k),(k),TRUE)
G_EXTERN_C // Used by MonoPosixHelper or MonoSupportW, at least.
gboolean g_direct_equal (gconstpointer v1, gconstpointer v2);
G_EXTERN_C // Used by MonoPosixHelper or MonoSupportW, at least.
guint g_direct_hash (gconstpointer v1);
gboolean g_int_equal (gconstpointer v1, gconstpointer v2);
guint g_int_hash (gconstpointer v1);
gboolean g_str_equal (gconstpointer v1, gconstpointer v2);
guint g_str_hash (gconstpointer v1);
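/*
 * Usage sketch (editor's illustration; the helper below is hypothetical):
 * a string-keyed table that owns its keys; g_hash_table_new_full installs
 * the destructors run when entries are removed or the table is destroyed.
 */
#if 0 /* editor's illustrative sketch only, not part of the original file */
static void
example_hash_table (void)
{
	GHashTable *t = g_hash_table_new_full (g_str_hash, g_str_equal, g_free, NULL);
	g_hash_table_insert (t, g_strdup ("answer"), GINT_TO_POINTER (42));
	int v = GPOINTER_TO_INT (g_hash_table_lookup (t, "answer"));
	g_assert (v == 42);
	g_hash_table_destroy (t);
}
#endif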
/*
* ByteArray
*/
typedef struct _GByteArray GByteArray;
struct _GByteArray {
guint8 *data;
gint len;
};
GByteArray *g_byte_array_new (void);
GByteArray* g_byte_array_append (GByteArray *array, const guint8 *data, guint len);
guint8* g_byte_array_free (GByteArray *array, gboolean free_segment);
void g_byte_array_set_size (GByteArray *array, gint length);
/*
* Array
*/
typedef struct _GArray GArray;
struct _GArray {
gchar *data;
gint len;
};
GArray *g_array_new (gboolean zero_terminated, gboolean clear_, guint element_size);
GArray *g_array_sized_new (gboolean zero_terminated, gboolean clear_, guint element_size, guint reserved_size);
gchar* g_array_free (GArray *array, gboolean free_segment);
GArray *g_array_append_vals (GArray *array, gconstpointer data, guint len);
GArray* g_array_insert_vals (GArray *array, guint index_, gconstpointer data, guint len);
GArray* g_array_remove_index (GArray *array, guint index_);
GArray* g_array_remove_index_fast (GArray *array, guint index_);
void g_array_set_size (GArray *array, gint length);
#define g_array_append_val(a,v) (g_array_append_vals((a),&(v),1))
#define g_array_insert_val(a,i,v) (g_array_insert_vals((a),(i),&(v),1))
#define g_array_index(a,t,i) (*(t*)(((a)->data) + sizeof(t) * (i)))
/*
* Pointer Array
*/
typedef struct _GPtrArray GPtrArray;
struct _GPtrArray {
gpointer *pdata;
guint len;
};
GPtrArray *g_ptr_array_new (void);
GPtrArray *g_ptr_array_sized_new (guint reserved_size);
void g_ptr_array_add (GPtrArray *array, gpointer data);
gboolean g_ptr_array_remove (GPtrArray *array, gpointer data);
gpointer g_ptr_array_remove_index (GPtrArray *array, guint index);
gboolean g_ptr_array_remove_fast (GPtrArray *array, gpointer data);
gpointer g_ptr_array_remove_index_fast (GPtrArray *array, guint index);
void g_ptr_array_sort (GPtrArray *array, GCompareFunc compare_func);
void g_ptr_array_set_size (GPtrArray *array, gint length);
gpointer *g_ptr_array_free (GPtrArray *array, gboolean free_seg);
void g_ptr_array_foreach (GPtrArray *array, GFunc func, gpointer user_data);
guint g_ptr_array_capacity (GPtrArray *array);
gboolean g_ptr_array_find (GPtrArray *array, gconstpointer needle, guint *index);
#define g_ptr_array_index(array,index) ((array)->pdata[(index)])
/*
* Queues
*/
typedef struct {
GList *head;
GList *tail;
guint length;
} GQueue;
gpointer g_queue_pop_head (GQueue *queue);
void g_queue_push_head (GQueue *queue,
gpointer data);
void g_queue_push_tail (GQueue *queue,
gpointer data);
gboolean g_queue_is_empty (GQueue *queue);
GQueue *g_queue_new (void);
void g_queue_free (GQueue *queue);
void g_queue_foreach (GQueue *queue, GFunc func, gpointer user_data);
/*
* Messages
*/
#ifndef G_LOG_DOMAIN
#define G_LOG_DOMAIN ((gchar*) 0)
#endif
typedef enum {
G_LOG_FLAG_RECURSION = 1 << 0,
G_LOG_FLAG_FATAL = 1 << 1,
G_LOG_LEVEL_ERROR = 1 << 2,
G_LOG_LEVEL_CRITICAL = 1 << 3,
G_LOG_LEVEL_WARNING = 1 << 4,
G_LOG_LEVEL_MESSAGE = 1 << 5,
G_LOG_LEVEL_INFO = 1 << 6,
G_LOG_LEVEL_DEBUG = 1 << 7,
G_LOG_LEVEL_MASK = ~(G_LOG_FLAG_RECURSION | G_LOG_FLAG_FATAL)
} GLogLevelFlags;
G_ENUM_FUNCTIONS (GLogLevelFlags)
gint g_printv (const gchar *format, va_list args);
void g_print (const gchar *format, ...);
void g_printerr (const gchar *format, ...);
GLogLevelFlags g_log_set_always_fatal (GLogLevelFlags fatal_mask);
GLogLevelFlags g_log_set_fatal_mask (const gchar *log_domain, GLogLevelFlags fatal_mask);
void g_logv (const gchar *log_domain, GLogLevelFlags log_level, const gchar *format, va_list args);
G_EXTERN_C // Used by MonoPosixHelper or MonoSupportW, at least.
void g_log (const gchar *log_domain, GLogLevelFlags log_level, const gchar *format, ...);
void g_log_disabled (const gchar *log_domain, GLogLevelFlags log_level, const char *file, int line);
G_EXTERN_C // Used by MonoPosixHelper or MonoSupportW, at least.
void g_assertion_message (const gchar *format, ...) G_GNUC_NORETURN;
void mono_assertion_message_disabled (const char *file, int line) G_GNUC_NORETURN;
void mono_assertion_message (const char *file, int line, const char *condition) G_GNUC_NORETURN;
void mono_assertion_message_unreachable (const char *file, int line) G_GNUC_NORETURN;
const char * g_get_assertion_message (void);
#ifndef DISABLE_ASSERT_MESSAGES
/* The for (;;) tells the compiler that g_error () doesn't return, avoiding warnings */
#define g_error(...) do { g_log (G_LOG_DOMAIN, G_LOG_LEVEL_ERROR, __VA_ARGS__); for (;;); } while (0)
#define g_critical(...) g_log (G_LOG_DOMAIN, G_LOG_LEVEL_CRITICAL, __VA_ARGS__)
#define g_warning(...) g_log (G_LOG_DOMAIN, G_LOG_LEVEL_WARNING, __VA_ARGS__)
#define g_message(...) g_log (G_LOG_DOMAIN, G_LOG_LEVEL_MESSAGE, __VA_ARGS__)
#define g_debug(...) g_log (G_LOG_DOMAIN, G_LOG_LEVEL_DEBUG, __VA_ARGS__)
#else
#define g_error(...) do { g_log_disabled (G_LOG_DOMAIN, G_LOG_LEVEL_ERROR, __FILE__, __LINE__); for (;;); } while (0)
#define g_critical(...) g_log_disabled (G_LOG_DOMAIN, G_LOG_LEVEL_CRITICAL, __FILE__, __LINE__)
#define g_warning(...) g_log_disabled (G_LOG_DOMAIN, G_LOG_LEVEL_WARNING, __FILE__, __LINE__)
#define g_message(...) g_log_disabled (G_LOG_DOMAIN, G_LOG_LEVEL_MESSAGE, __FILE__, __LINE__)
#define g_debug(...) g_log_disabled (G_LOG_DOMAIN, G_LOG_LEVEL_DEBUG, __FILE__, __LINE__)
#endif
typedef void (*GLogFunc) (const gchar *log_domain, GLogLevelFlags log_level, const gchar *message, gpointer user_data);
typedef void (*GPrintFunc) (const gchar *string);
typedef void (*GAbortFunc) (void);
void g_assertion_disable_global (GAbortFunc func);
void g_assert_abort (void);
void g_log_default_handler (const gchar *log_domain, GLogLevelFlags log_level, const gchar *message, gpointer unused_data);
GLogFunc g_log_set_default_handler (GLogFunc log_func, gpointer user_data);
GPrintFunc g_set_print_handler (GPrintFunc func);
GPrintFunc g_set_printerr_handler (GPrintFunc func);
/*
* Conversions
*/
gpointer g_convert_error_quark(void);
#ifndef MAX
#define MAX(a,b) (((a)>(b)) ? (a) : (b))
#endif
#ifndef MIN
#define MIN(a,b) (((a)<(b)) ? (a) : (b))
#endif
#ifndef CLAMP
#define CLAMP(a,low,high) (((a) < (low)) ? (low) : (((a) > (high)) ? (high) : (a)))
#endif
#if defined(__GNUC__) && (__GNUC__ > 2)
#define G_LIKELY(expr) (__builtin_expect ((expr) != 0, 1))
#define G_UNLIKELY(expr) (__builtin_expect ((expr) != 0, 0))
#else
#define G_LIKELY(x) (x)
#define G_UNLIKELY(x) (x)
#endif
#if defined(_MSC_VER)
#define eg_unreachable() __assume(0)
#elif defined(__GNUC__) && ((__GNUC__ > 4) || (__GNUC__ == 4 && (__GNUC_MINOR__ >= 5)))
#define eg_unreachable() __builtin_unreachable()
#else
#define eg_unreachable()
#endif
/* g_assert is a boolean expression; the precise value is not preserved, just true or false. */
#ifdef DISABLE_ASSERT_MESSAGES
// This is smaller than the equivalent mono_assertion_message (..."disabled");
#define g_assert(x) (G_LIKELY((x)) ? 1 : (mono_assertion_message_disabled (__FILE__, __LINE__), 0))
#else
#define g_assert(x) (G_LIKELY((x)) ? 1 : (mono_assertion_message (__FILE__, __LINE__, #x), 0))
#endif
#ifdef __cplusplus
#define g_static_assert(x) static_assert (x, "")
#else
#define g_static_assert(x) g_assert (x)
#endif
#define g_assert_not_reached() G_STMT_START { mono_assertion_message_unreachable (__FILE__, __LINE__); eg_unreachable(); } G_STMT_END
/* f is format -- like printf and scanf
* Where you might have said:
* if (!(expr))
* g_error("%s invalid bar:%d", __func__, bar)
*
* You can say:
* g_assertf(expr, "bar:%d", bar);
*
* The usual assertion text of file/line/expr/newline are builtin, and __func__.
*
* g_assertf is a boolean expression -- the precise value is not preserved, just true or false.
*
* Other than expr, the parameters are not evaluated unless expr is false.
*
* format must be a string literal, in order to be concatenated.
* If this is too restrictive, g_error remains.
*/
#ifdef DISABLE_ASSERT_MESSAGES
#define g_assertf(x, format, ...) (G_LIKELY((x)) ? 1 : (mono_assertion_message_disabled (__FILE__, __LINE__), 0))
#elif defined(_MSC_VER) && (_MSC_VER < 1910)
#define g_assertf(x, format, ...) (G_LIKELY((x)) ? 1 : (g_assertion_message ("* Assertion at %s:%d, condition `%s' not met, function:%s, " format "\n", __FILE__, __LINE__, #x, __func__, __VA_ARGS__), 0))
#else
#define g_assertf(x, format, ...) (G_LIKELY((x)) ? 1 : (g_assertion_message ("* Assertion at %s:%d, condition `%s' not met, function:%s, " format "\n", __FILE__, __LINE__, #x, __func__, ##__VA_ARGS__), 0))
#endif
/*
* Unicode conversion
*/
#define G_CONVERT_ERROR g_convert_error_quark()
typedef enum {
G_CONVERT_ERROR_NO_CONVERSION,
G_CONVERT_ERROR_ILLEGAL_SEQUENCE,
G_CONVERT_ERROR_FAILED,
G_CONVERT_ERROR_PARTIAL_INPUT,
G_CONVERT_ERROR_BAD_URI,
G_CONVERT_ERROR_NOT_ABSOLUTE_PATH,
G_CONVERT_ERROR_NO_MEMORY
} GConvertError;
gint g_unichar_to_utf8 (gunichar c, gchar *outbuf);
gunichar *g_utf8_to_ucs4_fast (const gchar *str, glong len, glong *items_written);
gunichar *g_utf8_to_ucs4 (const gchar *str, glong len, glong *items_read, glong *items_written, GError **err);
G_EXTERN_C // Used by libtest, at least.
gunichar2 *g_utf8_to_utf16 (const gchar *str, glong len, glong *items_read, glong *items_written, GError **err);
gunichar2 *eg_utf8_to_utf16_with_nuls (const gchar *str, glong len, glong *items_read, glong *items_written, GError **err);
gunichar2 *eg_wtf8_to_utf16 (const gchar *str, glong len, glong *items_read, glong *items_written, GError **err);
G_EXTERN_C // Used by libtest, at least.
gchar *g_utf16_to_utf8 (const gunichar2 *str, glong len, glong *items_read, glong *items_written, GError **err);
gunichar *g_utf16_to_ucs4 (const gunichar2 *str, glong len, glong *items_read, glong *items_written, GError **err);
gchar *g_ucs4_to_utf8 (const gunichar *str, glong len, glong *items_read, glong *items_written, GError **err);
gunichar2 *g_ucs4_to_utf16 (const gunichar *str, glong len, glong *items_read, glong *items_written, GError **err);
size_t g_utf16_len (const gunichar2 *);
#define u8to16(str) g_utf8_to_utf16(str, (glong)strlen(str), NULL, NULL, NULL)
#ifdef G_OS_WIN32
#define u16to8(str) g_utf16_to_utf8((gunichar2 *) (str), (glong)wcslen((wchar_t *) (str)), NULL, NULL, NULL)
#else
/* length measured in UTF-16 units; plain strlen would stop at the first zero byte */
#define u16to8(str) g_utf16_to_utf8((gunichar2 *) (str), (glong)g_utf16_len((gunichar2 *) (str)), NULL, NULL, NULL)
#endif
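/*
 * Usage sketch (editor's illustration; the helper below is hypothetical):
 * both macros allocate, so free the results with g_free; a round trip
 * preserves the text.
 */
#if 0 /* editor's illustrative sketch only, not part of the original file */
static void
example_round_trip (void)
{
	gunichar2 *wide = u8to16 ("hello");
	gchar *narrow = u16to8 (wide);
	g_assert (g_asciiz_equal (narrow, "hello"));
	g_free (wide);
	g_free (narrow);
}
#endif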
typedef gpointer (*GCustomAllocator) (gsize req_size, gpointer custom_alloc_data);
typedef struct {
gpointer buffer;
gsize buffer_size;
gsize req_buffer_size;
} GFixedBufferCustomAllocatorData;
gpointer
g_fixed_buffer_custom_allocator (gsize req_size, gpointer custom_alloc_data);
gunichar2 *g_utf8_to_utf16_custom_alloc (const gchar *str, glong len, glong *items_read, glong *items_written, GCustomAllocator custom_alloc_func, gpointer custom_alloc_data, GError **err);
gchar *g_utf16_to_utf8_custom_alloc (const gunichar2 *str, glong len, glong *items_read, glong *items_written, GCustomAllocator custom_alloc_func, gpointer custom_alloc_data, GError **err);
/*
* Path
*/
gchar *g_build_path (const gchar *separator, const gchar *first_element, ...);
#define g_build_filename(x, ...) g_build_path(G_DIR_SEPARATOR_S, x, __VA_ARGS__)
gchar *g_path_get_dirname (const gchar *filename);
gchar *g_path_get_basename (const char *filename);
gchar *g_find_program_in_path (const gchar *program);
gchar *g_get_current_dir (void);
gboolean g_path_is_absolute (const char *filename);
const gchar *g_get_home_dir (void);
const gchar *g_get_tmp_dir (void);
const gchar *g_get_user_name (void);
gchar *g_get_prgname (void);
void g_set_prgname (const gchar *prgname);
gboolean g_ensure_directory_exists (const gchar *filename);
#ifndef G_OS_WIN32 // Spawn could be implemented but is not.
int eg_getdtablesize (void);
#if !defined (HAVE_FORK) || !defined (HAVE_EXECVE)
#define HAVE_G_SPAWN 0
#else
#define HAVE_G_SPAWN 1
/*
* Spawn
*/
typedef enum {
G_SPAWN_LEAVE_DESCRIPTORS_OPEN = 1,
G_SPAWN_DO_NOT_REAP_CHILD = 1 << 1,
G_SPAWN_SEARCH_PATH = 1 << 2,
G_SPAWN_STDOUT_TO_DEV_NULL = 1 << 3,
G_SPAWN_STDERR_TO_DEV_NULL = 1 << 4,
G_SPAWN_CHILD_INHERITS_STDIN = 1 << 5,
G_SPAWN_FILE_AND_ARGV_ZERO = 1 << 6
} GSpawnFlags;
typedef void (*GSpawnChildSetupFunc) (gpointer user_data);
gboolean g_spawn_async_with_pipes (const gchar *working_directory, gchar **argv, gchar **envp, GSpawnFlags flags, GSpawnChildSetupFunc child_setup,
gpointer user_data, GPid *child_pid, gint *standard_input, gint *standard_output, gint *standard_error, GError **gerror);
#endif
#endif
/*
* Timer
*/
typedef struct _GTimer GTimer;
GTimer *g_timer_new (void);
void g_timer_destroy (GTimer *timer);
gdouble g_timer_elapsed (GTimer *timer, gulong *microseconds);
void g_timer_stop (GTimer *timer);
void g_timer_start (GTimer *timer);
/*
* Date and time
*/
typedef struct {
glong tv_sec;
glong tv_usec;
} GTimeVal;
void g_get_current_time (GTimeVal *result);
void g_usleep (gulong microseconds);
/*
* File
*/
gpointer g_file_error_quark (void);
#define G_FILE_ERROR g_file_error_quark ()
typedef enum {
G_FILE_ERROR_EXIST,
G_FILE_ERROR_ISDIR,
G_FILE_ERROR_ACCES,
G_FILE_ERROR_NAMETOOLONG,
G_FILE_ERROR_NOENT,
G_FILE_ERROR_NOTDIR,
G_FILE_ERROR_NXIO,
G_FILE_ERROR_NODEV,
G_FILE_ERROR_ROFS,
G_FILE_ERROR_TXTBSY,
G_FILE_ERROR_FAULT,
G_FILE_ERROR_LOOP,
G_FILE_ERROR_NOSPC,
G_FILE_ERROR_NOMEM,
G_FILE_ERROR_MFILE,
G_FILE_ERROR_NFILE,
G_FILE_ERROR_BADF,
G_FILE_ERROR_INVAL,
G_FILE_ERROR_PIPE,
G_FILE_ERROR_AGAIN,
G_FILE_ERROR_INTR,
G_FILE_ERROR_IO,
G_FILE_ERROR_PERM,
G_FILE_ERROR_NOSYS,
G_FILE_ERROR_FAILED
} GFileError;
typedef enum {
G_FILE_TEST_IS_REGULAR = 1 << 0,
G_FILE_TEST_IS_SYMLINK = 1 << 1,
G_FILE_TEST_IS_DIR = 1 << 2,
G_FILE_TEST_IS_EXECUTABLE = 1 << 3,
G_FILE_TEST_EXISTS = 1 << 4
} GFileTest;
G_ENUM_FUNCTIONS (GFileTest)
gboolean g_file_set_contents (const gchar *filename, const gchar *contents, gssize length, GError **gerror);
gboolean g_file_get_contents (const gchar *filename, gchar **contents, gsize *length, GError **gerror);
GFileError g_file_error_from_errno (gint err_no);
gint g_file_open_tmp (const gchar *tmpl, gchar **name_used, GError **gerror);
gboolean g_file_test (const gchar *filename, GFileTest test);
#ifdef G_OS_WIN32
#define g_open _open
#else
#define g_open open
#endif
#define g_rename rename
#define g_stat stat
#ifdef G_OS_WIN32
#define g_access _access
#else
#define g_access access
#endif
#ifdef G_OS_WIN32
#define g_mktemp _mktemp
#else
#define g_mktemp mktemp
#endif
#ifdef G_OS_WIN32
#define g_unlink _unlink
#else
#define g_unlink unlink
#endif
#ifdef G_OS_WIN32
#define g_write _write
#else
#define g_write write
#endif
#ifdef G_OS_WIN32
#define g_read _read
#else
#define g_read read
#endif
#define g_fopen fopen
#define g_lstat lstat
#define g_rmdir rmdir
#define g_mkstemp mkstemp
#define g_ascii_isdigit isdigit
#define g_ascii_strtod strtod
#define g_ascii_isalnum isalnum
gchar *g_mkdtemp (gchar *tmpl);
/*
* Low-level write-based printing functions
*/
static inline int
g_async_safe_fgets (char *str, int num, int handle, gboolean *newline)
{
memset (str, 0, num);
// Make sure we don't overwrite the last index so that we are
// guaranteed to be NULL-terminated
int without_padding = num - 1;
int i=0;
while (i < without_padding && g_read (handle, &str [i], sizeof(char))) {
if (str [i] == '\n') {
str [i] = '\0';
*newline = TRUE;
}
if (!isprint (str [i]))
str [i] = '\0';
if (str [i] == '\0')
break;
i++;
}
return i;
}
static inline gint
g_async_safe_vfprintf (int handle, gchar const *format, va_list args)
{
char print_buff [1024];
print_buff [0] = '\0';
g_vsnprintf (print_buff, sizeof(print_buff), format, args);
int ret = g_write (handle, print_buff, (guint32) strlen (print_buff));
return ret;
}
static inline gint
g_async_safe_fprintf (int handle, gchar const *format, ...)
{
va_list args;
va_start (args, format);
int ret = g_async_safe_vfprintf (handle, format, args);
va_end (args);
return ret;
}
static inline gint
g_async_safe_vprintf (gchar const *format, va_list args)
{
return g_async_safe_vfprintf (1, format, args);
}
static inline gint
g_async_safe_printf (gchar const *format, ...)
{
va_list args;
va_start (args, format);
int ret = g_async_safe_vfprintf (1, format, args);
va_end (args);
return ret;
}
/*
* Directory
*/
typedef struct _GDir GDir;
GDir *g_dir_open (const gchar *path, guint flags, GError **gerror);
const gchar *g_dir_read_name (GDir *dir);
void g_dir_rewind (GDir *dir);
void g_dir_close (GDir *dir);
int g_mkdir_with_parents (const gchar *pathname, int mode);
#define g_mkdir mkdir
/*
* Character set conversion
*/
typedef struct _GIConv *GIConv;
gsize g_iconv (GIConv cd, gchar **inbytes, gsize *inbytesleft, gchar **outbytes, gsize *outbytesleft);
GIConv g_iconv_open (const gchar *to_charset, const gchar *from_charset);
int g_iconv_close (GIConv cd);
gchar *g_convert (const gchar *str, gssize len,
const gchar *to_codeset, const gchar *from_codeset,
gsize *bytes_read, gsize *bytes_written, GError **gerror);
/*
* Unicode manipulation
*/
extern const guchar g_utf8_jump_table[256];
gboolean g_utf8_validate (const gchar *str, gssize max_len, const gchar **end);
gunichar g_utf8_get_char_validated (const gchar *str, gssize max_len);
#define g_utf8_next_char(p) ((p) + g_utf8_jump_table[(guchar)(*p)])
gunichar g_utf8_get_char (const gchar *src);
glong g_utf8_strlen (const gchar *str, gssize max);
gchar *g_utf8_offset_to_pointer (const gchar *str, glong offset);
glong g_utf8_pointer_to_offset (const gchar *str, const gchar *pos);
/*
* priorities
*/
#define G_PRIORITY_DEFAULT 0
#define G_PRIORITY_DEFAULT_IDLE 200
#define GUINT16_SWAP_LE_BE_CONSTANT(x) ((((guint16) x) >> 8) | ((((guint16) x) << 8)))
#define GUINT16_SWAP_LE_BE(x) ((guint16) (((guint16) x) >> 8) | ((((guint16)(x)) & 0xff) << 8))
#define GUINT32_SWAP_LE_BE(x) ((guint32) \
( (((guint32) (x)) << 24)| \
((((guint32) (x)) & 0xff0000) >> 8) | \
((((guint32) (x)) & 0xff00) << 8) | \
(((guint32) (x)) >> 24)) )
#define GUINT64_SWAP_LE_BE(x) ((guint64) (((guint64)(GUINT32_SWAP_LE_BE(((guint64)x) & 0xffffffff))) << 32) | \
GUINT32_SWAP_LE_BE(((guint64)x) >> 32))
#if G_BYTE_ORDER == G_LITTLE_ENDIAN
# define GUINT64_FROM_BE(x) GUINT64_SWAP_LE_BE(x)
# define GUINT32_FROM_BE(x) GUINT32_SWAP_LE_BE(x)
# define GUINT16_FROM_BE(x) GUINT16_SWAP_LE_BE(x)
# define GUINT_FROM_BE(x) GUINT32_SWAP_LE_BE(x)
# define GUINT64_FROM_LE(x) (x)
# define GUINT32_FROM_LE(x) (x)
# define GUINT16_FROM_LE(x) (x)
# define GUINT_FROM_LE(x) (x)
# define GUINT64_TO_BE(x) GUINT64_SWAP_LE_BE(x)
# define GUINT32_TO_BE(x) GUINT32_SWAP_LE_BE(x)
# define GUINT16_TO_BE(x) GUINT16_SWAP_LE_BE(x)
# define GUINT_TO_BE(x) GUINT32_SWAP_LE_BE(x)
# define GUINT64_TO_LE(x) (x)
# define GUINT32_TO_LE(x) (x)
# define GUINT16_TO_LE(x) (x)
# define GUINT_TO_LE(x) (x)
#else
# define GUINT64_FROM_BE(x) (x)
# define GUINT32_FROM_BE(x) (x)
# define GUINT16_FROM_BE(x) (x)
# define GUINT_FROM_BE(x) (x)
# define GUINT64_FROM_LE(x) GUINT64_SWAP_LE_BE(x)
# define GUINT32_FROM_LE(x) GUINT32_SWAP_LE_BE(x)
# define GUINT16_FROM_LE(x) GUINT16_SWAP_LE_BE(x)
# define GUINT_FROM_LE(x) GUINT32_SWAP_LE_BE(x)
# define GUINT64_TO_BE(x) (x)
# define GUINT32_TO_BE(x) (x)
# define GUINT16_TO_BE(x) (x)
# define GUINT_TO_BE(x) (x)
# define GUINT64_TO_LE(x) GUINT64_SWAP_LE_BE(x)
# define GUINT32_TO_LE(x) GUINT32_SWAP_LE_BE(x)
# define GUINT16_TO_LE(x) GUINT16_SWAP_LE_BE(x)
# define GUINT_TO_LE(x) GUINT32_SWAP_LE_BE(x)
#endif
#define GINT64_FROM_BE(x) (GUINT64_TO_BE (x))
#define GINT32_FROM_BE(x) (GUINT32_TO_BE (x))
#define GINT16_FROM_BE(x) (GUINT16_TO_BE (x))
#define GINT64_FROM_LE(x) (GUINT64_TO_LE (x))
#define GINT32_FROM_LE(x) (GUINT32_TO_LE (x))
#define GINT16_FROM_LE(x) (GUINT16_TO_LE (x))
#define _EGLIB_MAJOR 2
#define _EGLIB_MIDDLE 4
#define _EGLIB_MINOR 0
#define GLIB_CHECK_VERSION(a,b,c) ((a < _EGLIB_MAJOR) || (a == _EGLIB_MAJOR && (b < _EGLIB_MIDDLE || (b == _EGLIB_MIDDLE && c <= _EGLIB_MINOR))))
#define G_HAVE_API_SUPPORT(x) (x)
#define G_UNSUPPORTED_API "%s:%d: '%s' not supported.", __FILE__, __LINE__
#define g_unsupported_api(name) G_STMT_START { g_debug (G_UNSUPPORTED_API, name); } G_STMT_END
#if _WIN32
// g_free the result
// No MAX_PATH limit.
gboolean
mono_get_module_filename (gpointer mod, gunichar2 **pstr, guint32 *plength);
// g_free the result
// No MAX_PATH limit.
gboolean
mono_get_module_filename_ex (gpointer process, gpointer mod, gunichar2 **pstr, guint32 *plength);
// g_free the result
// No MAX_PATH limit.
gboolean
mono_get_module_basename (gpointer process, gpointer mod, gunichar2 **pstr, guint32 *plength);
// g_free the result
// No MAX_PATH limit.
gboolean
mono_get_current_directory (gunichar2 **pstr, guint32 *plength);
#endif
G_END_DECLS // FIXME: There is more extern C than there should be.
static inline
void
mono_qsort (void* base, size_t num, size_t size, int (*compare)(const void*, const void*))
{
g_assert (compare);
g_assert (size);
if (num < 2 || !size || !base)
return;
qsort (base, num, size, compare);
}
#define MONO_DECL_CALLBACK(prefix, ret, name, sig) ret (*name) sig;
#define MONO_INIT_CALLBACK(prefix, ret, name, sig) prefix ## _ ## name,
// Wrap each allocator, i.e. each function returning a gpointer that needs a cast.
// Macros do not recurse, so naming a macro after its function is ok.
// However these names are already macros (via eglib-remap.h), hence the #undefs below.
#undef g_malloc
#undef g_realloc
#undef g_malloc0
#undef g_calloc
#undef g_try_malloc
#undef g_try_realloc
#undef g_memdup
#define g_malloc(x) (g_cast (monoeg_malloc (x)))
#define g_realloc(obj, size) (g_cast (monoeg_realloc ((obj), (size))))
#define g_malloc0(x) (g_cast (monoeg_malloc0 (x)))
#define g_calloc(x, y) (g_cast (monoeg_g_calloc ((x), (y))))
#define g_try_malloc(x) (g_cast (monoeg_try_malloc (x)))
#define g_try_realloc(obj, size) (g_cast (monoeg_try_realloc ((obj), (size))))
#define g_memdup(mem, size) (g_cast (monoeg_g_memdup ((mem), (size))))
/*
* Clock Nanosleep
*/
#ifdef HAVE_CLOCK_NANOSLEEP
gint
g_clock_nanosleep (clockid_t clockid, gint flags, const struct timespec *request, struct timespec *remain);
#endif
#endif // __GLIB_H
| #ifndef __GLIB_H
#define __GLIB_H
// Ask stdint.h and inttypes.h for the full C99 features for CentOS 6 g++ 4.4, Android, etc.
// See for example:
// $HOME/android-toolchain/toolchains/armeabi-v7a-clang/sysroot/usr/include/inttypes.h
// $HOME/android-toolchain/toolchains/armeabi-v7a-clang/sysroot/usr/include/stdint.h
#ifdef __cplusplus
#ifndef __STDC_LIMIT_MACROS
#define __STDC_LIMIT_MACROS
#endif
#ifndef __STDC_CONSTANT_MACROS
#define __STDC_CONSTANT_MACROS
#endif
#ifndef __STDC_FORMAT_MACROS
#define __STDC_FORMAT_MACROS
#endif
#endif // __cplusplus
#include <stdarg.h>
#include <stdlib.h>
#include <string.h>
#include <stdio.h>
#include <stddef.h>
#include <ctype.h>
#include <limits.h>
#include <stdint.h>
#include <inttypes.h>
#include <eglib-config.h>
#include <minipal/utils.h>
#include <time.h>
// - Pointers should only be converted to or from pointer-sized integers.
// - Any size integer can be converted to any other size integer.
// - Therefore a pointer-sized integer is the intermediary between
// a pointer and any integer type.
#define GPOINTER_TO_INT(ptr) ((gint)(gssize)(ptr))
#define GPOINTER_TO_UINT(ptr) ((guint)(gsize)(ptr))
#define GINT_TO_POINTER(v) ((gpointer)(gssize)(v))
#define GUINT_TO_POINTER(v) ((gpointer)(gsize)(v))
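/* Illustrative round-trip (a sketch, not part of the API surface): a value that
   fits in an int survives boxing through a pointer-sized slot.
     gpointer boxed = GINT_TO_POINTER (42);
     gint unboxed = GPOINTER_TO_INT (boxed);   // unboxed == 42
*/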
#ifndef EGLIB_NO_REMAP
#include <eglib-remap.h>
#endif
#ifdef G_HAVE_ALLOCA_H
#include <alloca.h>
#endif
#ifdef WIN32
/* For alloca */
#include <malloc.h>
#endif
#ifdef G_HAVE_UNISTD_H
#include <unistd.h>
#endif
#ifndef offsetof
# define offsetof(s_name,n_name) (size_t)(char *)&(((s_name*)0)->n_name)
#endif
#ifdef __cplusplus
#define G_BEGIN_DECLS extern "C" {
#define G_END_DECLS }
#define G_EXTERN_C extern "C"
#else
#define G_BEGIN_DECLS /* nothing */
#define G_END_DECLS /* nothing */
#define G_EXTERN_C /* nothing */
#endif
#ifdef __cplusplus
#define g_cast monoeg_g_cast // in case not inlined (see eglib-remap.h)
// g_cast converts void* to T*.
// e.g. #define malloc(x) (g_cast (malloc (x)))
// FIXME It used to do more. Rename?
struct g_cast
{
private:
void * const x;
public:
explicit g_cast (void volatile *y) : x((void*)y) { }
// Lack of rvalue constructor inhibits ternary operator.
// Either don't use ternary, or cast each side.
// sa = (salen <= 128) ? g_alloca (salen) : g_malloc (salen);
// w32socket.c:1045:24: error: call to deleted constructor of 'monoeg_g_cast'
//g_cast (g_cast&& y) : x(y.x) { }
g_cast (g_cast&&) = delete;
g_cast () = delete;
g_cast (const g_cast&) = delete;
template <typename TTo>
operator TTo* () const
{
return (TTo*)x;
}
};
#else
// FIXME? Parens are omitted to preserve prior meaning.
#define g_cast(x) x
#endif
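/* Illustrative use of g_cast (a sketch): it lets void* results assign to typed
   pointers without a cast at each call site, in both C and C++ builds:
     int *p = g_cast (malloc (4 * sizeof (int)));
   In C builds g_cast is a no-op and the implicit void* conversion applies. */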
// G++4.4 breaks opeq below without this.
#if defined (__GNUC__) || defined (__clang__)
#define G_MAY_ALIAS __attribute__((__may_alias__))
#else
#define G_MAY_ALIAS /* nothing */
#endif
#ifdef __cplusplus
// Provide for bit operations on enums, but not all integer operations.
// This alleviates a fair number of casts in porting C to C++.
// Forward declare template with no generic implementation.
template <size_t> struct g_size_to_int;
// Template specializations.
template <> struct g_size_to_int<1> { typedef int8_t type; };
template <> struct g_size_to_int<2> { typedef int16_t type; };
template <> struct g_size_to_int<4> { typedef int32_t type; };
template <> struct g_size_to_int<8> { typedef int64_t type; };
// g++4.4 does not accept:
//template <typename T>
//using g_size_to_int_t = typename g_size_to_int <sizeof (T)>::type;
#define g_size_to_int_t(x) g_size_to_int <sizeof (x)>::type
#define G_ENUM_BINOP(Enum, op, opeq) \
inline Enum \
operator op (Enum a, Enum b) \
{ \
typedef g_size_to_int_t (Enum) type; \
return static_cast<Enum>(static_cast<type>(a) op b); \
} \
\
inline Enum& \
operator opeq (Enum& a, Enum b) \
{ \
typedef g_size_to_int_t (Enum) G_MAY_ALIAS type; \
return (Enum&)((type&)a opeq b); \
}
#define G_ENUM_FUNCTIONS(Enum) \
extern "C++" { /* in case within extern "C" */ \
inline Enum \
operator~ (Enum a) \
{ \
typedef g_size_to_int_t (Enum) type; \
return static_cast<Enum>(~static_cast<type>(a)); \
} \
\
G_ENUM_BINOP (Enum, |, |=) \
G_ENUM_BINOP (Enum, &, &=) \
G_ENUM_BINOP (Enum, ^, ^=) \
\
} /* extern "C++" */
#else
#define G_ENUM_FUNCTIONS(Enum) /* nothing */
#endif
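/* Illustrative (see G_ENUM_FUNCTIONS (GFileTest) below): with these operators
   defined, flag enums compose without casts even in C++:
     GFileTest t = G_FILE_TEST_EXISTS | G_FILE_TEST_IS_DIR;
     t &= ~G_FILE_TEST_IS_DIR;
*/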
G_BEGIN_DECLS
/*
* Basic data types
*/
typedef int gint;
typedef unsigned int guint;
typedef short gshort;
typedef unsigned short gushort;
typedef long glong;
typedef unsigned long gulong;
typedef void * gpointer;
typedef const void * gconstpointer;
typedef char gchar;
typedef unsigned char guchar;
/* Types defined in terms of stdint.h */
typedef int8_t gint8;
typedef uint8_t guint8;
typedef int16_t gint16;
typedef uint16_t guint16;
typedef int32_t gint32;
typedef uint32_t guint32;
typedef int64_t gint64;
typedef uint64_t guint64;
typedef float gfloat;
typedef double gdouble;
typedef int32_t gboolean;
#if defined (HOST_WIN32) || defined (_WIN32)
G_END_DECLS
#include <wchar.h>
typedef wchar_t gunichar2;
G_BEGIN_DECLS
#else
typedef guint16 gunichar2;
#endif
typedef guint32 gunichar;
/*
* Macros
*/
#define G_N_ELEMENTS(s) ARRAY_SIZE(s)
#define FALSE 0
#define TRUE 1
#define G_MINSHORT SHRT_MIN
#define G_MAXSHORT SHRT_MAX
#define G_MAXUSHORT USHRT_MAX
#define G_MAXINT INT_MAX
#define G_MININT INT_MIN
#define G_MAXINT8 INT8_MAX
#define G_MAXUINT8 UINT8_MAX
#define G_MININT8 INT8_MIN
#define G_MAXINT16 INT16_MAX
#define G_MAXUINT16 UINT16_MAX
#define G_MININT16 INT16_MIN
#define G_MAXINT32 INT32_MAX
#define G_MAXUINT32 UINT32_MAX
#define G_MININT32 INT32_MIN
#define G_MININT64 INT64_MIN
#define G_MAXINT64 INT64_MAX
#define G_MAXUINT64 UINT64_MAX
#define G_LITTLE_ENDIAN 1234
#define G_BIG_ENDIAN 4321
#define G_STMT_START do
#define G_STMT_END while (0)
#define G_USEC_PER_SEC 1000000
#ifndef ABS
#define ABS(a) ((a) > 0 ? (a) : -(a))
#endif
#define ALIGN_TO(val,align) ((((gssize)val) + (gssize)((align) - 1)) & (~((gssize)(align - 1))))
#define ALIGN_DOWN_TO(val,align) (((gssize)val) & (~((gssize)(align - 1))))
#define ALIGN_PTR_TO(ptr,align) (gpointer)((((gssize)(ptr)) + (gssize)(align - 1)) & (~((gssize)(align - 1))))
#define G_STRUCT_OFFSET(p_type,field) offsetof(p_type,field)
#define EGLIB_STRINGIFY(x) #x
#define EGLIB_TOSTRING(x) EGLIB_STRINGIFY(x)
#define G_STRLOC __FILE__ ":" EGLIB_TOSTRING(__LINE__) ":"
#define G_CONST_RETURN const
#define G_GUINT64_FORMAT PRIu64
#define G_GINT64_FORMAT PRIi64
#define G_GUINT32_FORMAT PRIu32
#define G_GINT32_FORMAT PRIi32
#ifdef __GNUC__
#define G_ATTR_FORMAT_PRINTF(fmt_pos,arg_pos) __attribute__((__format__(__printf__,fmt_pos,arg_pos)))
#else
#define G_ATTR_FORMAT_PRINTF(fmt_pos,arg_pos)
#endif
/*
* Allocation
*/
G_EXTERN_C // Used by MonoPosixHelper or MonoSupportW, at least.
void g_free (void *ptr);
G_EXTERN_C // Used by MonoPosixHelper or MonoSupportW, at least.
gpointer g_realloc (gpointer obj, gsize size);
G_EXTERN_C // Used by MonoPosixHelper or MonoSupportW, at least.
gpointer g_malloc (gsize x);
G_EXTERN_C // Used by MonoPosixHelper or MonoSupportW, at least.
gpointer g_malloc0 (gsize x);
G_EXTERN_C // Used by profilers, at least.
gpointer g_calloc (gsize n, gsize x);
gpointer g_try_malloc (gsize x);
gpointer g_try_realloc (gpointer obj, gsize size);
#define g_new(type,size) ((type *) g_malloc (sizeof (type) * (size)))
#define g_new0(type,size) ((type *) g_malloc0 (sizeof (type)* (size)))
#define g_newa(type,size) ((type *) alloca (sizeof (type) * (size)))
#define g_newa0(type,size) ((type *) memset (alloca (sizeof (type) * (size)), 0, sizeof (type) * (size)))
#define g_memmove(dest,src,len) memmove (dest, src, len)
#define g_renew(struct_type, mem, n_structs) ((struct_type*)g_realloc (mem, sizeof (struct_type) * n_structs))
#define g_alloca(size) (g_cast (alloca (size)))
G_EXTERN_C // Used by libtest, at least.
gpointer g_memdup (gconstpointer mem, guint byte_size);
static inline gchar *g_strdup (const gchar *str) { if (str) { return (gchar*) g_memdup (str, (guint)strlen (str) + 1); } return NULL; }
gchar **g_strdupv (gchar **str_array);
typedef struct {
gpointer (*malloc) (gsize n_bytes);
gpointer (*realloc) (gpointer mem, gsize n_bytes);
void (*free) (gpointer mem);
gpointer (*calloc) (gsize n_blocks, gsize n_block_bytes);
} GMemVTable;
void g_mem_set_vtable (GMemVTable* vtable);
void g_mem_get_vtable (GMemVTable* vtable);
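/* Illustrative: an embedder can route all eglib allocations through custom
   hooks (a sketch; the counting_* functions are hypothetical):
     GMemVTable vt = { counting_malloc, counting_realloc, counting_free, counting_calloc };
     g_mem_set_vtable (&vt);
*/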
struct _GMemChunk {
guint alloc_size;
};
typedef struct _GMemChunk GMemChunk;
/*
* Misc.
*/
gboolean g_hasenv(const gchar *variable);
gchar * g_getenv(const gchar *variable);
G_EXTERN_C // sdks/wasm/driver.c is C and uses this
gboolean g_setenv(const gchar *variable, const gchar *value, gboolean overwrite);
gchar* g_win32_getlocale(void);
/*
* Precondition macros
*/
#define g_warn_if_fail(x) G_STMT_START { if (!(x)) { g_warning ("%s:%d: assertion '%s' failed\n", __FILE__, __LINE__, #x); } } G_STMT_END
#define g_return_if_fail(x) G_STMT_START { if (!(x)) { g_critical ("%s:%d: assertion '%s' failed\n", __FILE__, __LINE__, #x); return; } } G_STMT_END
#define g_return_val_if_fail(x,e) G_STMT_START { if (!(x)) { g_critical ("%s:%d: assertion '%s' failed\n", __FILE__, __LINE__, #x); return (e); } } G_STMT_END
/*
* Errors
*/
typedef struct {
/* In the real glib, this is a GQuark, but we don't use/need that */
gpointer domain;
gint code;
gchar *message;
} GError;
void g_clear_error (GError **gerror);
void g_error_free (GError *gerror);
GError *g_error_new (gpointer domain, gint code, const char *format, ...);
void g_set_error (GError **err, gpointer domain, gint code, const gchar *format, ...);
void g_propagate_error (GError **dest, GError *src);
/*
* Strings utility
*/
G_EXTERN_C // Used by libtest, at least.
gchar *g_strdup_printf (const gchar *format, ...) G_ATTR_FORMAT_PRINTF(1, 2);
gchar *g_strdup_vprintf (const gchar *format, va_list args);
gchar *g_strndup (const gchar *str, gsize n);
const gchar *g_strerror (gint errnum);
void g_strfreev (gchar **str_array);
gchar *g_strconcat (const gchar *first, ...);
gchar **g_strsplit (const gchar *string, const gchar *delimiter, gint max_tokens);
gchar **g_strsplit_set (const gchar *string, const gchar *delimiter, gint max_tokens);
gchar *g_strreverse (gchar *str);
gboolean g_str_has_prefix (const gchar *str, const gchar *prefix);
gboolean g_str_has_suffix (const gchar *str, const gchar *suffix);
guint g_strv_length (gchar **str_array);
gchar *g_strjoin (const gchar *separator, ...);
gchar *g_strjoinv (const gchar *separator, gchar **str_array);
gchar *g_strchug (gchar *str);
gchar *g_strchomp (gchar *str);
gchar *g_strnfill (gsize length, gchar fill_char);
gsize g_strnlen (const char*, gsize);
char *g_str_from_file_region (int fd, guint64 offset, gsize size);
void g_strdelimit (char *string, char delimiter, char new_delimiter);
gint g_printf (gchar const *format, ...) G_ATTR_FORMAT_PRINTF(1, 2);
gint g_fprintf (FILE *file, gchar const *format, ...) G_ATTR_FORMAT_PRINTF(2, 3);
gint g_sprintf (gchar *string, gchar const *format, ...) G_ATTR_FORMAT_PRINTF(2, 3);
gint g_snprintf (gchar *string, gulong n, gchar const *format, ...) G_ATTR_FORMAT_PRINTF(3, 4);
gint g_vasprintf (gchar **ret, const gchar *fmt, va_list ap);
#define g_vprintf vprintf
#define g_vfprintf vfprintf
#define g_vsprintf vsprintf
#define g_vsnprintf vsnprintf
gsize g_strlcpy (gchar *dest, const gchar *src, gsize dest_size);
gchar *g_stpcpy (gchar *dest, const char *src);
gchar g_ascii_tolower (gchar c);
gchar g_ascii_toupper (gchar c);
gchar *g_ascii_strdown (const gchar *str, gssize len);
void g_ascii_strdown_no_alloc (char* dst, const char* src, gsize len);
gchar *g_ascii_strup (const gchar *str, gssize len);
gint g_ascii_strncasecmp (const gchar *s1, const gchar *s2, gsize n);
gint g_ascii_strcasecmp (const gchar *s1, const gchar *s2);
gint g_ascii_xdigit_value (gchar c);
#define g_ascii_isspace(c) (isspace (c) != 0)
#define g_ascii_isalpha(c) (isalpha (c) != 0)
#define g_ascii_isprint(c) (isprint (c) != 0)
#define g_ascii_isxdigit(c) (isxdigit (c) != 0)
gboolean g_utf16_ascii_equal (const gunichar2 *utf16, size_t ulen, const char *ascii, size_t alen);
gboolean g_utf16_asciiz_equal (const gunichar2 *utf16, const char *ascii);
static inline
gboolean g_ascii_equal (const char *s1, gsize len1, const char *s2, gsize len2)
{
return len1 == len2 && (s1 == s2 || memcmp (s1, s2, len1) == 0);
}
static inline
gboolean g_asciiz_equal (const char *s1, const char *s2)
{
return s1 == s2 || strcmp (s1, s2) == 0;
}
static inline
gboolean
g_ascii_equal_caseinsensitive (const char *s1, gsize len1, const char *s2, gsize len2)
{
return len1 == len2 && (s1 == s2 || g_ascii_strncasecmp (s1, s2, len1) == 0);
}
static inline
gboolean
g_asciiz_equal_caseinsensitive (const char *s1, const char *s2)
{
return s1 == s2 || g_ascii_strcasecmp (s1, s2) == 0;
}
/* FIXME: g_strcasecmp supports utf8 unicode stuff */
#ifdef _MSC_VER
#define g_strcasecmp _stricmp
#define g_strncasecmp _strnicmp
#define g_strstrip(a) g_strchug (g_strchomp (a))
#else
#define g_strcasecmp strcasecmp
#define g_ascii_strtoull strtoull
#define g_strncasecmp strncasecmp
#define g_strstrip(a) g_strchug (g_strchomp (a))
#endif
#define g_ascii_strdup strdup
/*
* String type
*/
typedef struct {
char *str;
gsize len;
gsize allocated_len;
} GString;
GString *g_string_new (const gchar *init);
GString *g_string_new_len (const gchar *init, gssize len);
GString *g_string_sized_new (gsize default_size);
gchar *g_string_free (GString *string, gboolean free_segment);
GString *g_string_append (GString *string, const gchar *val);
void g_string_printf (GString *string, const gchar *format, ...) G_ATTR_FORMAT_PRINTF(2, 3);
void g_string_append_printf (GString *string, const gchar *format, ...) G_ATTR_FORMAT_PRINTF(2, 3);
void g_string_append_vprintf (GString *string, const gchar *format, va_list args);
GString *g_string_append_unichar (GString *string, gunichar c);
GString *g_string_append_c (GString *string, gchar c);
GString *g_string_append_len (GString *string, const gchar *val, gssize len);
GString *g_string_truncate (GString *string, gsize len);
GString *g_string_set_size (GString *string, gsize len);
#define g_string_sprintfa g_string_append_printf
typedef void (*GFunc) (gpointer data, gpointer user_data);
typedef gint (*GCompareFunc) (gconstpointer a, gconstpointer b);
typedef gint (*GCompareDataFunc) (gconstpointer a, gconstpointer b, gpointer user_data);
typedef void (*GHFunc) (gpointer key, gpointer value, gpointer user_data);
typedef gboolean (*GHRFunc) (gpointer key, gpointer value, gpointer user_data);
typedef void (*GDestroyNotify) (gpointer data);
typedef guint (*GHashFunc) (gconstpointer key);
typedef gboolean (*GEqualFunc) (gconstpointer a, gconstpointer b);
typedef void (*GFreeFunc) (gpointer data);
/*
* Lists
*/
typedef struct _GSList GSList;
struct _GSList {
gpointer data;
GSList *next;
};
GSList *g_slist_alloc (void);
GSList *g_slist_append (GSList *list,
gpointer data);
GSList *g_slist_prepend (GSList *list,
gpointer data);
void g_slist_free (GSList *list);
void g_slist_free_1 (GSList *list);
GSList *g_slist_copy (GSList *list);
GSList *g_slist_concat (GSList *list1,
GSList *list2);
void g_slist_foreach (GSList *list,
GFunc func,
gpointer user_data);
GSList *g_slist_last (GSList *list);
GSList *g_slist_find (GSList *list,
gconstpointer data);
GSList *g_slist_find_custom (GSList *list,
gconstpointer data,
GCompareFunc func);
GSList *g_slist_remove (GSList *list,
gconstpointer data);
GSList *g_slist_remove_all (GSList *list,
gconstpointer data);
GSList *g_slist_reverse (GSList *list);
guint g_slist_length (GSList *list);
GSList *g_slist_remove_link (GSList *list,
GSList *link);
GSList *g_slist_delete_link (GSList *list,
GSList *link);
GSList *g_slist_insert_sorted (GSList *list,
gpointer data,
GCompareFunc func);
GSList *g_slist_insert_before (GSList *list,
GSList *sibling,
gpointer data);
GSList *g_slist_sort (GSList *list,
GCompareFunc func);
gint g_slist_index (GSList *list,
gconstpointer data);
GSList *g_slist_nth (GSList *list,
guint n);
gpointer g_slist_nth_data (GSList *list,
guint n);
#define g_slist_next(slist) ((slist) ? (((GSList *) (slist))->next) : NULL)
typedef struct _GList GList;
struct _GList {
gpointer data;
GList *next;
GList *prev;
};
#define g_list_next(list) ((list) ? (((GList *) (list))->next) : NULL)
#define g_list_previous(list) ((list) ? (((GList *) (list))->prev) : NULL)
GList *g_list_alloc (void);
GList *g_list_append (GList *list,
gpointer data);
GList *g_list_prepend (GList *list,
gpointer data);
void g_list_free (GList *list);
void g_list_free_1 (GList *list);
GList *g_list_copy (GList *list);
guint g_list_length (GList *list);
gint g_list_index (GList *list,
gconstpointer data);
GList *g_list_nth (GList *list,
guint n);
gpointer g_list_nth_data (GList *list,
guint n);
GList *g_list_last (GList *list);
GList *g_list_concat (GList *list1,
GList *list2);
void g_list_foreach (GList *list,
GFunc func,
gpointer user_data);
GList *g_list_first (GList *list);
GList *g_list_find (GList *list,
gconstpointer data);
GList *g_list_find_custom (GList *list,
gconstpointer data,
GCompareFunc func);
GList *g_list_remove (GList *list,
gconstpointer data);
GList *g_list_remove_all (GList *list,
gconstpointer data);
GList *g_list_reverse (GList *list);
GList *g_list_remove_link (GList *list,
GList *link);
GList *g_list_delete_link (GList *list,
GList *link);
GList *g_list_insert_sorted (GList *list,
gpointer data,
GCompareFunc func);
GList *g_list_insert_before (GList *list,
GList *sibling,
gpointer data);
GList *g_list_sort (GList *sort,
GCompareFunc func);
/*
* Hashtables
*/
typedef struct _GHashTable GHashTable;
typedef struct _GHashTableIter GHashTableIter;
/* Private, but needed for stack allocation */
struct _GHashTableIter
{
gpointer dummy [8];
};
G_EXTERN_C // Used by MonoPosixHelper or MonoSupportW, at least.
GHashTable *g_hash_table_new (GHashFunc hash_func, GEqualFunc key_equal_func);
GHashTable *g_hash_table_new_full (GHashFunc hash_func, GEqualFunc key_equal_func,
GDestroyNotify key_destroy_func, GDestroyNotify value_destroy_func);
G_EXTERN_C // Used by MonoPosixHelper or MonoSupportW, at least.
void g_hash_table_insert_replace (GHashTable *hash, gpointer key, gpointer value, gboolean replace);
guint g_hash_table_size (GHashTable *hash);
GList *g_hash_table_get_keys (GHashTable *hash);
GList *g_hash_table_get_values (GHashTable *hash);
gboolean g_hash_table_contains (GHashTable *hash, gconstpointer key);
G_EXTERN_C // Used by MonoPosixHelper or MonoSupportW, at least.
gpointer g_hash_table_lookup (GHashTable *hash, gconstpointer key);
gboolean g_hash_table_lookup_extended (GHashTable *hash, gconstpointer key, gpointer *orig_key, gpointer *value);
G_EXTERN_C // Used by MonoPosixHelper or MonoSupportW, at least.
void g_hash_table_foreach (GHashTable *hash, GHFunc func, gpointer user_data);
gpointer g_hash_table_find (GHashTable *hash, GHRFunc predicate, gpointer user_data);
G_EXTERN_C // Used by MonoPosixHelper or MonoSupportW, at least.
gboolean g_hash_table_remove (GHashTable *hash, gconstpointer key);
gboolean g_hash_table_steal (GHashTable *hash, gconstpointer key);
void g_hash_table_remove_all (GHashTable *hash);
guint g_hash_table_foreach_remove (GHashTable *hash, GHRFunc func, gpointer user_data);
guint g_hash_table_foreach_steal (GHashTable *hash, GHRFunc func, gpointer user_data);
G_EXTERN_C // Used by MonoPosixHelper or MonoSupportW, at least.
void g_hash_table_destroy (GHashTable *hash);
void g_hash_table_print_stats (GHashTable *table);
void g_hash_table_iter_init (GHashTableIter *iter, GHashTable *hash_table);
gboolean g_hash_table_iter_next (GHashTableIter *iter, gpointer *key, gpointer *value);
guint g_spaced_primes_closest (guint x);
#define g_hash_table_insert(h,k,v) g_hash_table_insert_replace ((h),(k),(v),FALSE)
#define g_hash_table_replace(h,k,v) g_hash_table_insert_replace ((h),(k),(v),TRUE)
#define g_hash_table_add(h,k) g_hash_table_insert_replace ((h),(k),(k),TRUE)
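// Semantics sketch (GLib-style; inferred from the replace flag above): g_hash_table_insert
// keeps the existing key while updating its value, g_hash_table_replace swaps in the
// new key as well, and g_hash_table_add stores the key as its own value (set semantics).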
G_EXTERN_C // Used by MonoPosixHelper or MonoSupportW, at least.
gboolean g_direct_equal (gconstpointer v1, gconstpointer v2);
G_EXTERN_C // Used by MonoPosixHelper or MonoSupportW, at least.
guint g_direct_hash (gconstpointer v1);
gboolean g_int_equal (gconstpointer v1, gconstpointer v2);
guint g_int_hash (gconstpointer v1);
gboolean g_str_equal (gconstpointer v1, gconstpointer v2);
guint g_str_hash (gconstpointer v1);
/*
* ByteArray
*/
typedef struct _GByteArray GByteArray;
struct _GByteArray {
guint8 *data;
gint len;
};
GByteArray *g_byte_array_new (void);
GByteArray* g_byte_array_append (GByteArray *array, const guint8 *data, guint len);
guint8* g_byte_array_free (GByteArray *array, gboolean free_segment);
void g_byte_array_set_size (GByteArray *array, gint length);
/*
* Array
*/
typedef struct _GArray GArray;
struct _GArray {
gchar *data;
gint len;
};
GArray *g_array_new (gboolean zero_terminated, gboolean clear_, guint element_size);
GArray *g_array_sized_new (gboolean zero_terminated, gboolean clear_, guint element_size, guint reserved_size);
gchar* g_array_free (GArray *array, gboolean free_segment);
GArray *g_array_append_vals (GArray *array, gconstpointer data, guint len);
GArray* g_array_insert_vals (GArray *array, guint index_, gconstpointer data, guint len);
GArray* g_array_remove_index (GArray *array, guint index_);
GArray* g_array_remove_index_fast (GArray *array, guint index_);
void g_array_set_size (GArray *array, gint length);
#define g_array_append_val(a,v) (g_array_append_vals((a),&(v),1))
#define g_array_insert_val(a,i,v) (g_array_insert_vals((a),(i),&(v),1))
#define g_array_index(a,t,i) *(t*)(((a)->data) + sizeof(t) * (i))
//FIXME previous missing parens
/*
* Pointer Array
*/
typedef struct _GPtrArray GPtrArray;
struct _GPtrArray {
gpointer *pdata;
guint len;
};
GPtrArray *g_ptr_array_new (void);
GPtrArray *g_ptr_array_sized_new (guint reserved_size);
void g_ptr_array_add (GPtrArray *array, gpointer data);
gboolean g_ptr_array_remove (GPtrArray *array, gpointer data);
gpointer g_ptr_array_remove_index (GPtrArray *array, guint index);
gboolean g_ptr_array_remove_fast (GPtrArray *array, gpointer data);
gpointer g_ptr_array_remove_index_fast (GPtrArray *array, guint index);
void g_ptr_array_sort (GPtrArray *array, GCompareFunc compare_func);
void g_ptr_array_set_size (GPtrArray *array, gint length);
gpointer *g_ptr_array_free (GPtrArray *array, gboolean free_seg);
void g_ptr_array_foreach (GPtrArray *array, GFunc func, gpointer user_data);
guint g_ptr_array_capacity (GPtrArray *array);
gboolean g_ptr_array_find (GPtrArray *array, gconstpointer needle, guint *index);
#define g_ptr_array_index(array,index) (array)->pdata[(index)]
//FIXME previous missing parens
/*
* Queues
*/
typedef struct {
GList *head;
GList *tail;
guint length;
} GQueue;
gpointer g_queue_pop_head (GQueue *queue);
void g_queue_push_head (GQueue *queue,
gpointer data);
void g_queue_push_tail (GQueue *queue,
gpointer data);
gboolean g_queue_is_empty (GQueue *queue);
GQueue *g_queue_new (void);
void g_queue_free (GQueue *queue);
void g_queue_foreach (GQueue *queue, GFunc func, gpointer user_data);
/*
* Messages
*/
#ifndef G_LOG_DOMAIN
#define G_LOG_DOMAIN ((gchar*) 0)
#endif
typedef enum {
G_LOG_FLAG_RECURSION = 1 << 0,
G_LOG_FLAG_FATAL = 1 << 1,
G_LOG_LEVEL_ERROR = 1 << 2,
G_LOG_LEVEL_CRITICAL = 1 << 3,
G_LOG_LEVEL_WARNING = 1 << 4,
G_LOG_LEVEL_MESSAGE = 1 << 5,
G_LOG_LEVEL_INFO = 1 << 6,
G_LOG_LEVEL_DEBUG = 1 << 7,
G_LOG_LEVEL_MASK = ~(G_LOG_FLAG_RECURSION | G_LOG_FLAG_FATAL)
} GLogLevelFlags;
G_ENUM_FUNCTIONS (GLogLevelFlags)
gint g_printv (const gchar *format, va_list args);
void g_print (const gchar *format, ...);
void g_printerr (const gchar *format, ...);
GLogLevelFlags g_log_set_always_fatal (GLogLevelFlags fatal_mask);
GLogLevelFlags g_log_set_fatal_mask (const gchar *log_domain, GLogLevelFlags fatal_mask);
void g_logv (const gchar *log_domain, GLogLevelFlags log_level, const gchar *format, va_list args);
G_EXTERN_C // Used by MonoPosixHelper or MonoSupportW, at least.
void g_log (const gchar *log_domain, GLogLevelFlags log_level, const gchar *format, ...);
void g_log_disabled (const gchar *log_domain, GLogLevelFlags log_level, const char *file, int line);
G_EXTERN_C // Used by MonoPosixHelper or MonoSupportW, at least.
void g_assertion_message (const gchar *format, ...) G_GNUC_NORETURN;
void mono_assertion_message_disabled (const char *file, int line) G_GNUC_NORETURN;
void mono_assertion_message (const char *file, int line, const char *condition) G_GNUC_NORETURN;
void mono_assertion_message_unreachable (const char *file, int line) G_GNUC_NORETURN;
const char * g_get_assertion_message (void);
#ifndef DISABLE_ASSERT_MESSAGES
/* The for (;;) tells gcc that g_error () doesn't return, avoiding warnings */
#define g_error(...) do { g_log (G_LOG_DOMAIN, G_LOG_LEVEL_ERROR, __VA_ARGS__); for (;;); } while (0)
#define g_critical(...) g_log (G_LOG_DOMAIN, G_LOG_LEVEL_CRITICAL, __VA_ARGS__)
#define g_warning(...) g_log (G_LOG_DOMAIN, G_LOG_LEVEL_WARNING, __VA_ARGS__)
#define g_message(...) g_log (G_LOG_DOMAIN, G_LOG_LEVEL_MESSAGE, __VA_ARGS__)
#define g_debug(...) g_log (G_LOG_DOMAIN, G_LOG_LEVEL_DEBUG, __VA_ARGS__)
#else
#define g_error(...) do { g_log_disabled (G_LOG_DOMAIN, G_LOG_LEVEL_ERROR, __FILE__, __LINE__); for (;;); } while (0)
#define g_critical(...) g_log_disabled (G_LOG_DOMAIN, G_LOG_LEVEL_CRITICAL, __FILE__, __LINE__)
#define g_warning(...) g_log_disabled (G_LOG_DOMAIN, G_LOG_LEVEL_WARNING, __FILE__, __LINE__)
#define g_message(...) g_log_disabled (G_LOG_DOMAIN, G_LOG_LEVEL_MESSAGE, __FILE__, __LINE__)
#define g_debug(...) g_log_disabled (G_LOG_DOMAIN, G_LOG_LEVEL_DEBUG, __FILE__, __LINE__)
#endif
typedef void (*GLogFunc) (const gchar *log_domain, GLogLevelFlags log_level, const gchar *message, gpointer user_data);
typedef void (*GPrintFunc) (const gchar *string);
typedef void (*GAbortFunc) (void);
void g_assertion_disable_global (GAbortFunc func);
void g_assert_abort (void);
void g_log_default_handler (const gchar *log_domain, GLogLevelFlags log_level, const gchar *message, gpointer unused_data);
GLogFunc g_log_set_default_handler (GLogFunc log_func, gpointer user_data);
GPrintFunc g_set_print_handler (GPrintFunc func);
GPrintFunc g_set_printerr_handler (GPrintFunc func);
/*
* Conversions
*/
gpointer g_convert_error_quark(void);
#ifndef MAX
#define MAX(a,b) (((a)>(b)) ? (a) : (b))
#endif
#ifndef MIN
#define MIN(a,b) (((a)<(b)) ? (a) : (b))
#endif
#ifndef CLAMP
#define CLAMP(a,low,high) (((a) < (low)) ? (low) : (((a) > (high)) ? (high) : (a)))
#endif
#if defined(__GNUC__) && (__GNUC__ > 2)
#define G_LIKELY(expr) (__builtin_expect ((expr) != 0, 1))
#define G_UNLIKELY(expr) (__builtin_expect ((expr) != 0, 0))
#else
#define G_LIKELY(x) (x)
#define G_UNLIKELY(x) (x)
#endif
#if defined(_MSC_VER)
#define eg_unreachable() __assume(0)
#elif defined(__GNUC__) && ((__GNUC__ > 4) || (__GNUC__ == 4 && (__GNUC_MINOR__ >= 5)))
#define eg_unreachable() __builtin_unreachable()
#else
#define eg_unreachable()
#endif
/* g_assert is a boolean expression; the precise value is not preserved, just true or false. */
#ifdef DISABLE_ASSERT_MESSAGES
// This is smaller than the equivalent mono_assertion_message (..."disabled");
#define g_assert(x) (G_LIKELY((x)) ? 1 : (mono_assertion_message_disabled (__FILE__, __LINE__), 0))
#else
#define g_assert(x) (G_LIKELY((x)) ? 1 : (mono_assertion_message (__FILE__, __LINE__, #x), 0))
#endif
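/* Illustrative: because g_assert is an expression, it can be sequenced inside
   other expressions, e.g. (a sketch; validate is hypothetical)
     g_assert (p), validate (p);
   though plain statement use, g_assert (p);, remains typical. */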
#ifdef __cplusplus
#define g_static_assert(x) static_assert (x, "")
#else
#define g_static_assert(x) g_assert (x)
#endif
#define g_assert_not_reached() G_STMT_START { mono_assertion_message_unreachable (__FILE__, __LINE__); eg_unreachable(); } G_STMT_END
/* f is format -- like printf and scanf
* Where you might have said:
* if (!(expr))
* g_error("%s invalid bar:%d", __func__, bar)
*
* You can say:
* g_assertf(expr, "bar:%d", bar);
*
* The usual assertion text (file/line/expr/newline) is built in, as is __func__.
*
* g_assertf is a boolean expression -- the precise value is not preserved, just true or false.
*
* Other than expr, the parameters are not evaluated unless expr is false.
*
* format must be a string literal, in order to be concatenated.
* If this is too restrictive, g_error remains.
*/
#ifdef DISABLE_ASSERT_MESSAGES
#define g_assertf(x, format, ...) (G_LIKELY((x)) ? 1 : (mono_assertion_message_disabled (__FILE__, __LINE__), 0))
#elif defined(_MSC_VER) && (_MSC_VER < 1910)
#define g_assertf(x, format, ...) (G_LIKELY((x)) ? 1 : (g_assertion_message ("* Assertion at %s:%d, condition `%s' not met, function:%s, " format "\n", __FILE__, __LINE__, #x, __func__, __VA_ARGS__), 0))
#else
#define g_assertf(x, format, ...) (G_LIKELY((x)) ? 1 : (g_assertion_message ("* Assertion at %s:%d, condition `%s' not met, function:%s, " format "\n", __FILE__, __LINE__, #x, __func__, ##__VA_ARGS__), 0))
#endif
/*
* Unicode conversion
*/
#define G_CONVERT_ERROR g_convert_error_quark()
typedef enum {
G_CONVERT_ERROR_NO_CONVERSION,
G_CONVERT_ERROR_ILLEGAL_SEQUENCE,
G_CONVERT_ERROR_FAILED,
G_CONVERT_ERROR_PARTIAL_INPUT,
G_CONVERT_ERROR_BAD_URI,
G_CONVERT_ERROR_NOT_ABSOLUTE_PATH,
G_CONVERT_ERROR_NO_MEMORY
} GConvertError;
gint g_unichar_to_utf8 (gunichar c, gchar *outbuf);
gunichar *g_utf8_to_ucs4_fast (const gchar *str, glong len, glong *items_written);
gunichar *g_utf8_to_ucs4 (const gchar *str, glong len, glong *items_read, glong *items_written, GError **err);
G_EXTERN_C // Used by libtest, at least.
gunichar2 *g_utf8_to_utf16 (const gchar *str, glong len, glong *items_read, glong *items_written, GError **err);
gunichar2 *eg_utf8_to_utf16_with_nuls (const gchar *str, glong len, glong *items_read, glong *items_written, GError **err);
gunichar2 *eg_wtf8_to_utf16 (const gchar *str, glong len, glong *items_read, glong *items_written, GError **err);
G_EXTERN_C // Used by libtest, at least.
gchar *g_utf16_to_utf8 (const gunichar2 *str, glong len, glong *items_read, glong *items_written, GError **err);
gunichar *g_utf16_to_ucs4 (const gunichar2 *str, glong len, glong *items_read, glong *items_written, GError **err);
gchar *g_ucs4_to_utf8 (const gunichar *str, glong len, glong *items_read, glong *items_written, GError **err);
gunichar2 *g_ucs4_to_utf16 (const gunichar *str, glong len, glong *items_read, glong *items_written, GError **err);
size_t g_utf16_len (const gunichar2 *);
#define u8to16(str) g_utf8_to_utf16(str, (glong)strlen(str), NULL, NULL, NULL)
#ifdef G_OS_WIN32
#define u16to8(str) g_utf16_to_utf8((gunichar2 *) (str), (glong)wcslen((wchar_t *) (str)), NULL, NULL, NULL)
#else
/* Use g_utf16_len rather than strlen: strlen stops at the first zero byte, which most UTF-16 code units contain. */
#define u16to8(str) g_utf16_to_utf8((gunichar2 *) (str), (glong)g_utf16_len((gunichar2 *) (str)), NULL, NULL, NULL)
#endif
typedef gpointer (*GCustomAllocator) (gsize req_size, gpointer custom_alloc_data);
typedef struct {
gpointer buffer;
gsize buffer_size;
gsize req_buffer_size;
} GFixedBufferCustomAllocatorData;
gpointer
g_fixed_buffer_custom_allocator (gsize req_size, gpointer custom_alloc_data);
gunichar2 *g_utf8_to_utf16_custom_alloc (const gchar *str, glong len, glong *items_read, glong *items_written, GCustomAllocator custom_alloc_func, gpointer custom_alloc_data, GError **err);
gchar *g_utf16_to_utf8_custom_alloc (const gunichar2 *str, glong len, glong *items_read, glong *items_written, GCustomAllocator custom_alloc_func, gpointer custom_alloc_data, GError **err);
/*
* Path
*/
gchar *g_build_path (const gchar *separator, const gchar *first_element, ...);
#define g_build_filename(x, ...) g_build_path(G_DIR_SEPARATOR_S, x, __VA_ARGS__)
gchar *g_path_get_dirname (const gchar *filename);
gchar *g_path_get_basename (const char *filename);
gchar *g_find_program_in_path (const gchar *program);
gchar *g_get_current_dir (void);
gboolean g_path_is_absolute (const char *filename);
const gchar *g_get_home_dir (void);
const gchar *g_get_tmp_dir (void);
const gchar *g_get_user_name (void);
gchar *g_get_prgname (void);
void g_set_prgname (const gchar *prgname);
gboolean g_ensure_directory_exists (const gchar *filename);
#ifndef G_OS_WIN32 // Spawn could be implemented but is not.
int eg_getdtablesize (void);
#if !defined (HAVE_FORK) || !defined (HAVE_EXECVE)
#define HAVE_G_SPAWN 0
#else
#define HAVE_G_SPAWN 1
/*
* Spawn
*/
typedef enum {
G_SPAWN_LEAVE_DESCRIPTORS_OPEN = 1,
G_SPAWN_DO_NOT_REAP_CHILD = 1 << 1,
G_SPAWN_SEARCH_PATH = 1 << 2,
G_SPAWN_STDOUT_TO_DEV_NULL = 1 << 3,
G_SPAWN_STDERR_TO_DEV_NULL = 1 << 4,
G_SPAWN_CHILD_INHERITS_STDIN = 1 << 5,
G_SPAWN_FILE_AND_ARGV_ZERO = 1 << 6
} GSpawnFlags;
typedef void (*GSpawnChildSetupFunc) (gpointer user_data);
gboolean g_spawn_async_with_pipes (const gchar *working_directory, gchar **argv, gchar **envp, GSpawnFlags flags, GSpawnChildSetupFunc child_setup,
gpointer user_data, GPid *child_pid, gint *standard_input, gint *standard_output, gint *standard_error, GError **gerror);
#endif
#endif
/*
* Timer
*/
typedef struct _GTimer GTimer;
GTimer *g_timer_new (void);
void g_timer_destroy (GTimer *timer);
gdouble g_timer_elapsed (GTimer *timer, gulong *microseconds);
void g_timer_stop (GTimer *timer);
void g_timer_start (GTimer *timer);
/*
* Date and time
*/
typedef struct {
glong tv_sec;
glong tv_usec;
} GTimeVal;
void g_get_current_time (GTimeVal *result);
void g_usleep (gulong microseconds);
/*
* File
*/
gpointer g_file_error_quark (void);
#define G_FILE_ERROR g_file_error_quark ()
typedef enum {
G_FILE_ERROR_EXIST,
G_FILE_ERROR_ISDIR,
G_FILE_ERROR_ACCES,
G_FILE_ERROR_NAMETOOLONG,
G_FILE_ERROR_NOENT,
G_FILE_ERROR_NOTDIR,
G_FILE_ERROR_NXIO,
G_FILE_ERROR_NODEV,
G_FILE_ERROR_ROFS,
G_FILE_ERROR_TXTBSY,
G_FILE_ERROR_FAULT,
G_FILE_ERROR_LOOP,
G_FILE_ERROR_NOSPC,
G_FILE_ERROR_NOMEM,
G_FILE_ERROR_MFILE,
G_FILE_ERROR_NFILE,
G_FILE_ERROR_BADF,
G_FILE_ERROR_INVAL,
G_FILE_ERROR_PIPE,
G_FILE_ERROR_AGAIN,
G_FILE_ERROR_INTR,
G_FILE_ERROR_IO,
G_FILE_ERROR_PERM,
G_FILE_ERROR_NOSYS,
G_FILE_ERROR_FAILED
} GFileError;
typedef enum {
G_FILE_TEST_IS_REGULAR = 1 << 0,
G_FILE_TEST_IS_SYMLINK = 1 << 1,
G_FILE_TEST_IS_DIR = 1 << 2,
G_FILE_TEST_IS_EXECUTABLE = 1 << 3,
G_FILE_TEST_EXISTS = 1 << 4
} GFileTest;
G_ENUM_FUNCTIONS (GFileTest)
gboolean g_file_set_contents (const gchar *filename, const gchar *contents, gssize length, GError **gerror);
gboolean g_file_get_contents (const gchar *filename, gchar **contents, gsize *length, GError **gerror);
GFileError g_file_error_from_errno (gint err_no);
gint g_file_open_tmp (const gchar *tmpl, gchar **name_used, GError **gerror);
gboolean g_file_test (const gchar *filename, GFileTest test);
#ifdef G_OS_WIN32
#define g_open _open
#else
#define g_open open
#endif
#define g_rename rename
#define g_stat stat
#ifdef G_OS_WIN32
#define g_access _access
#else
#define g_access access
#endif
#ifdef G_OS_WIN32
#define g_mktemp _mktemp
#else
#define g_mktemp mktemp
#endif
#ifdef G_OS_WIN32
#define g_unlink _unlink
#else
#define g_unlink unlink
#endif
#ifdef G_OS_WIN32
#define g_write _write
#else
#define g_write write
#endif
#ifdef G_OS_WIN32
#define g_read _read
#else
#define g_read read
#endif
#define g_fopen fopen
#define g_lstat lstat
#define g_rmdir rmdir
#define g_mkstemp mkstemp
#define g_ascii_isdigit isdigit
#define g_ascii_strtod strtod
#define g_ascii_isalnum isalnum
gchar *g_mkdtemp (gchar *tmpl);
/*
* Low-level write-based printing functions
*/
static inline int
g_async_safe_fgets (char *str, int num, int handle, gboolean *newline)
{
memset (str, 0, num);
// Make sure we don't overwrite the last index so that we are
// guaranteed to be NULL-terminated
int without_padding = num - 1;
int i=0;
while (i < without_padding && g_read (handle, &str [i], sizeof(char))) {
if (str [i] == '\n') {
str [i] = '\0';
*newline = TRUE;
}
if (!isprint (str [i]))
str [i] = '\0';
if (str [i] == '\0')
break;
i++;
}
return i;
}
static inline gint
g_async_safe_vfprintf (int handle, gchar const *format, va_list args)
{
char print_buff [1024];
print_buff [0] = '\0';
g_vsnprintf (print_buff, sizeof(print_buff), format, args);
int ret = g_write (handle, print_buff, (guint32) strlen (print_buff));
return ret;
}
static inline gint
g_async_safe_fprintf (int handle, gchar const *format, ...)
{
va_list args;
va_start (args, format);
int ret = g_async_safe_vfprintf (handle, format, args);
va_end (args);
return ret;
}
static inline gint
g_async_safe_vprintf (gchar const *format, va_list args)
{
return g_async_safe_vfprintf (1, format, args);
}
static inline gint
g_async_safe_printf (gchar const *format, ...)
{
va_list args;
va_start (args, format);
int ret = g_async_safe_vfprintf (1, format, args);
va_end (args);
return ret;
}
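/* Illustrative: these helpers format into a fixed stack buffer and write()
   directly, bypassing stdio, so they suit contexts where stdio may be unsafe,
   e.g. crash reporting (fd and signo are hypothetical):
     g_async_safe_fprintf (fd, "caught signal %d\n", signo);
*/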
/*
* Directory
*/
typedef struct _GDir GDir;
GDir *g_dir_open (const gchar *path, guint flags, GError **gerror);
const gchar *g_dir_read_name (GDir *dir);
void g_dir_rewind (GDir *dir);
void g_dir_close (GDir *dir);
int g_mkdir_with_parents (const gchar *pathname, int mode);
#define g_mkdir mkdir
/*
* Unicode manipulation
*/
extern const guchar g_utf8_jump_table[256];
gboolean g_utf8_validate (const gchar *str, gssize max_len, const gchar **end);
gunichar g_utf8_get_char_validated (const gchar *str, gssize max_len);
#define g_utf8_next_char(p) ((p) + g_utf8_jump_table[(guchar)(*p)])
gunichar g_utf8_get_char (const gchar *src);
glong g_utf8_strlen (const gchar *str, gssize max);
gchar *g_utf8_offset_to_pointer (const gchar *str, glong offset);
glong g_utf8_pointer_to_offset (const gchar *str, const gchar *pos);
/*
* priorities
*/
#define G_PRIORITY_DEFAULT 0
#define G_PRIORITY_DEFAULT_IDLE 200
#define GUINT16_SWAP_LE_BE_CONSTANT(x) ((((guint16) x) >> 8) | ((((guint16) x) << 8)))
#define GUINT16_SWAP_LE_BE(x) ((guint16) (((guint16) x) >> 8) | ((((guint16)(x)) & 0xff) << 8))
#define GUINT32_SWAP_LE_BE(x) ((guint32) \
( (((guint32) (x)) << 24)| \
((((guint32) (x)) & 0xff0000) >> 8) | \
((((guint32) (x)) & 0xff00) << 8) | \
(((guint32) (x)) >> 24)) )
#define GUINT64_SWAP_LE_BE(x) ((guint64) (((guint64)(GUINT32_SWAP_LE_BE(((guint64)x) & 0xffffffff))) << 32) | \
GUINT32_SWAP_LE_BE(((guint64)x) >> 32))
#if G_BYTE_ORDER == G_LITTLE_ENDIAN
# define GUINT64_FROM_BE(x) GUINT64_SWAP_LE_BE(x)
# define GUINT32_FROM_BE(x) GUINT32_SWAP_LE_BE(x)
# define GUINT16_FROM_BE(x) GUINT16_SWAP_LE_BE(x)
# define GUINT_FROM_BE(x) GUINT32_SWAP_LE_BE(x)
# define GUINT64_FROM_LE(x) (x)
# define GUINT32_FROM_LE(x) (x)
# define GUINT16_FROM_LE(x) (x)
# define GUINT_FROM_LE(x) (x)
# define GUINT64_TO_BE(x) GUINT64_SWAP_LE_BE(x)
# define GUINT32_TO_BE(x) GUINT32_SWAP_LE_BE(x)
# define GUINT16_TO_BE(x) GUINT16_SWAP_LE_BE(x)
# define GUINT_TO_BE(x) GUINT32_SWAP_LE_BE(x)
# define GUINT64_TO_LE(x) (x)
# define GUINT32_TO_LE(x) (x)
# define GUINT16_TO_LE(x) (x)
# define GUINT_TO_LE(x) (x)
#else
# define GUINT64_FROM_BE(x) (x)
# define GUINT32_FROM_BE(x) (x)
# define GUINT16_FROM_BE(x) (x)
# define GUINT_FROM_BE(x) (x)
# define GUINT64_FROM_LE(x) GUINT64_SWAP_LE_BE(x)
# define GUINT32_FROM_LE(x) GUINT32_SWAP_LE_BE(x)
# define GUINT16_FROM_LE(x) GUINT16_SWAP_LE_BE(x)
# define GUINT_FROM_LE(x) GUINT32_SWAP_LE_BE(x)
# define GUINT64_TO_BE(x) (x)
# define GUINT32_TO_BE(x) (x)
# define GUINT16_TO_BE(x) (x)
# define GUINT_TO_BE(x) (x)
# define GUINT64_TO_LE(x) GUINT64_SWAP_LE_BE(x)
# define GUINT32_TO_LE(x) GUINT32_SWAP_LE_BE(x)
# define GUINT16_TO_LE(x) GUINT16_SWAP_LE_BE(x)
# define GUINT_TO_LE(x) GUINT32_SWAP_LE_BE(x)
#endif
#define GINT64_FROM_BE(x) (GUINT64_TO_BE (x))
#define GINT32_FROM_BE(x) (GUINT32_TO_BE (x))
#define GINT16_FROM_BE(x) (GUINT16_TO_BE (x))
#define GINT64_FROM_LE(x) (GUINT64_TO_LE (x))
#define GINT32_FROM_LE(x) (GUINT32_TO_LE (x))
#define GINT16_FROM_LE(x) (GUINT16_TO_LE (x))
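/* Illustrative: decode a little-endian 32-bit field portably on either host
   byte order (a sketch; buf is a hypothetical byte buffer):
     guint32 raw;
     memcpy (&raw, buf, sizeof raw);
     guint32 len = GUINT32_FROM_LE (raw);
*/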
#define _EGLIB_MAJOR 2
#define _EGLIB_MIDDLE 4
#define _EGLIB_MINOR 0
#define GLIB_CHECK_VERSION(a,b,c) ((a < _EGLIB_MAJOR) || (a == _EGLIB_MAJOR && (b < _EGLIB_MIDDLE || (b == _EGLIB_MIDDLE && c <= _EGLIB_MINOR))))
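// Example: GLIB_CHECK_VERSION (2, 4, 0) holds, i.e. the requested version is at
// most this eglib's 2.4.0.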
#define G_HAVE_API_SUPPORT(x) (x)
#define G_UNSUPPORTED_API "%s:%d: '%s' not supported.", __FILE__, __LINE__
#define g_unsupported_api(name) G_STMT_START { g_debug (G_UNSUPPORTED_API, name); } G_STMT_END
#if _WIN32
// g_free the result
// No MAX_PATH limit.
gboolean
mono_get_module_filename (gpointer mod, gunichar2 **pstr, guint32 *plength);
// g_free the result
// No MAX_PATH limit.
gboolean
mono_get_module_filename_ex (gpointer process, gpointer mod, gunichar2 **pstr, guint32 *plength);
// g_free the result
// No MAX_PATH limit.
gboolean
mono_get_module_basename (gpointer process, gpointer mod, gunichar2 **pstr, guint32 *plength);
// g_free the result
// No MAX_PATH limit.
gboolean
mono_get_current_directory (gunichar2 **pstr, guint32 *plength);
#endif
G_END_DECLS // FIXME: There is more extern C than there should be.
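// Defensive qsort wrapper: asserts compare and size, returns early for fewer
// than two elements or a NULL base, and only then defers to qsort (avoiding
// undefined behavior on a NULL base).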
static inline
void
mono_qsort (void* base, size_t num, size_t size, int (*compare)(const void*, const void*))
{
g_assert (compare);
g_assert (size);
if (num < 2 || !size || !base)
return;
qsort (base, num, size, compare);
}
#define MONO_DECL_CALLBACK(prefix, ret, name, sig) ret (*name) sig;
#define MONO_INIT_CALLBACK(prefix, ret, name, sig) prefix ## _ ## name,
// Wrap each allocator, i.e. each function returning a gpointer that needs a cast.
// Macros do not recurse, so naming a macro after its function is ok.
// However these names are already macros (via eglib-remap.h), hence the #undefs below.
#undef g_malloc
#undef g_realloc
#undef g_malloc0
#undef g_calloc
#undef g_try_malloc
#undef g_try_realloc
#undef g_memdup
#define g_malloc(x) (g_cast (monoeg_malloc (x)))
#define g_realloc(obj, size) (g_cast (monoeg_realloc ((obj), (size))))
#define g_malloc0(x) (g_cast (monoeg_malloc0 (x)))
#define g_calloc(x, y) (g_cast (monoeg_g_calloc ((x), (y))))
#define g_try_malloc(x) (g_cast (monoeg_try_malloc (x)))
#define g_try_realloc(obj, size) (g_cast (monoeg_try_realloc ((obj), (size))))
#define g_memdup(mem, size) (g_cast (monoeg_g_memdup ((mem), (size))))
/*
* Clock Nanosleep
*/
#ifdef HAVE_CLOCK_NANOSLEEP
gint
g_clock_nanosleep (clockid_t clockid, gint flags, const struct timespec *request, struct timespec *remain);
#endif
#endif // __GLIB_H
| 1 |
dotnet/runtime | 66,211 | [mono] Remove SkipVerification support from the runtime | CAS support was removed in .NET Core. This allows us removing a bunch of code that is unused, e.g. the dependency on libiconv. | akoeplinger | 2022-03-04T19:47:04Z | 2022-03-06T13:44:33Z | b463b1630dbf1be5b013208a9fa73e1ecd6c774c | be629f49a350d526de2c65981294734cee420b90 | [mono] Remove SkipVerification support from the runtime. CAS support was removed in .NET Core. This allows us removing a bunch of code that is unused, e.g. the dependency on libiconv. | ./src/mono/mono/eglib/test/utf8.c | #include <stdlib.h>
#include "test.h"
/*
* g_utf16_to_utf8
*/
static glong
compare_strings_utf8_pos (const gchar *expected, const gchar *actual, glong size)
{
int i;
for (i = 0; i < size; i++)
if (expected [i] != actual [i])
return i;
return -1;
}
static RESULT
compare_strings_utf8_RESULT (const gchar *expected, const gchar *actual, glong size)
{
glong ret;
ret = compare_strings_utf8_pos (expected, actual, size);
if (ret < 0)
return OK;
return FAILED ("Incorrect output: expected '%s' but was '%s', differ at %d\n", expected, actual, ret);
}
static void
gchar_to_gunichar2 (gunichar2 ret[], const gchar *src)
{
int i;
for (i = 0; src [i]; i++)
ret [i] = src [i];
ret [i] = 0;
}
static RESULT
compare_utf16_to_utf8_explicit (const gchar *expected, const gunichar2 *utf16, glong len_in, glong len_out, glong size_spec)
{
GError *gerror;
gchar* ret;
RESULT result;
glong in_read, out_read;
result = NULL;
gerror = NULL;
ret = g_utf16_to_utf8 (utf16, size_spec, &in_read, &out_read, &gerror);
if (gerror) {
result = FAILED ("The error is %d %s\n", gerror->code, gerror->message);
g_error_free (gerror);
if (ret)
g_free (ret);
return result;
}
if (in_read != len_in)
result = FAILED ("Read size is incorrect: expected %d but was %d\n", len_in, in_read);
else if (out_read != len_out)
result = FAILED ("Converted size is incorrect: expected %d but was %d\n", len_out, out_read);
else
result = compare_strings_utf8_RESULT (expected, ret, len_out);
g_free (ret);
if (result)
return result;
return OK;
}
static RESULT
compare_utf16_to_utf8 (const gchar *expected, const gunichar2 *utf16, glong len_in, glong len_out)
{
RESULT result;
result = compare_utf16_to_utf8_explicit (expected, utf16, len_in, len_out, -1);
if (result != OK)
return result;
return compare_utf16_to_utf8_explicit (expected, utf16, len_in, len_out, len_in);
}
static RESULT
test_utf16_to_utf8 (void)
{
const gchar *src0 = "", *src1 = "ABCDE", *src2 = "\xE5\xB9\xB4\x27", *src3 = "\xEF\xBC\xA1", *src4 = "\xEF\xBD\x81", *src5 = "\xF0\x90\x90\x80";
gunichar2 str0 [] = {0}, str1 [6], str2 [] = {0x5E74, 39, 0}, str3 [] = {0xFF21, 0}, str4 [] = {0xFF41, 0}, str5 [] = {0xD801, 0xDC00, 0};
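/* Note: src5/str5 both encode U+10400 (DESERET CAPITAL LETTER LONG I): the
   four UTF-8 bytes F0 90 90 80 map to the UTF-16 surrogate pair D801 DC00. */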
RESULT result;
gchar_to_gunichar2 (str1, src1);
/* empty string */
result = compare_utf16_to_utf8 (src0, str0, 0, 0);
if (result != OK)
return result;
result = compare_utf16_to_utf8 (src1, str1, 5, 5);
if (result != OK)
return result;
result = compare_utf16_to_utf8 (src2, str2, 2, 4);
if (result != OK)
return result;
result = compare_utf16_to_utf8 (src3, str3, 1, 3);
if (result != OK)
return result;
result = compare_utf16_to_utf8 (src4, str4, 1, 3);
if (result != OK)
return result;
result = compare_utf16_to_utf8 (src5, str5, 2, 4);
if (result != OK)
return result;
return OK;
}
/*
* g_utf8_to_utf16
*/
static glong
compare_strings_utf16_pos (const gunichar2 *expected, const gunichar2 *actual, glong size)
{
int i;
for (i = 0; i < size; i++)
if (expected [i] != actual [i])
return i;
return -1;
}
static RESULT
compare_strings_utf16_RESULT (const gunichar2 *expected, const gunichar2 *actual, glong size)
{
glong ret;
ret = compare_strings_utf16_pos (expected, actual, size);
if (ret < 0)
return OK;
return FAILED ("Incorrect output: expected '%s' but was '%s', differ at %d ('%c' x '%c')\n", expected, actual, ret, expected [ret], actual [ret]);
}
#if !defined(EGLIB_TESTS)
#define eg_utf8_to_utf16_with_nuls g_utf8_to_utf16
#endif
static RESULT
compare_utf8_to_utf16_explicit (const gunichar2 *expected, const gchar *utf8, glong len_in, glong len_out, glong size_spec, gboolean include_nuls)
{
GError *gerror;
gunichar2* ret;
RESULT result;
glong in_read, out_read;
result = NULL;
gerror = NULL;
if (include_nuls)
ret = eg_utf8_to_utf16_with_nuls (utf8, size_spec, &in_read, &out_read, &gerror);
else
ret = g_utf8_to_utf16 (utf8, size_spec, &in_read, &out_read, &gerror);
if (gerror) {
result = FAILED ("The error is %d %s\n", gerror->code, gerror->message);
g_error_free (gerror);
if (ret)
g_free (ret);
return result;
}
if (in_read != len_in)
result = FAILED ("Read size is incorrect: expected %d but was %d\n", len_in, in_read);
else if (out_read != len_out)
result = FAILED ("Converted size is incorrect: expected %d but was %d\n", len_out, out_read);
else
result = compare_strings_utf16_RESULT (expected, ret, len_out);
g_free (ret);
if (result)
return result;
return OK;
}
static RESULT
compare_utf8_to_utf16_general (const gunichar2 *expected, const gchar *utf8, glong len_in, glong len_out, gboolean include_nuls)
{
RESULT result;
result = compare_utf8_to_utf16_explicit (expected, utf8, len_in, len_out, -1, include_nuls);
if (result != OK)
return result;
return compare_utf8_to_utf16_explicit (expected, utf8, len_in, len_out, len_in, include_nuls);
}
static RESULT
compare_utf8_to_utf16 (const gunichar2 *expected, const gchar *utf8, glong len_in, glong len_out)
{
return compare_utf8_to_utf16_general (expected, utf8, len_in, len_out, FALSE);
}
static RESULT
compare_utf8_to_utf16_with_nuls (const gunichar2 *expected, const gchar *utf8, glong len_in, glong len_out)
{
return compare_utf8_to_utf16_explicit (expected, utf8, len_in, len_out, len_in, TRUE);
}
static RESULT
test_utf8_seq (void)
{
const gchar *src = "\xE5\xB9\xB4\x27";
glong in_read, out_read;
//gunichar2 expected [6];
GError *gerror = NULL;
gunichar2 *dst;
//printf ("got: %s\n", src);
dst = g_utf8_to_utf16 (src, (glong)strlen (src), &in_read, &out_read, &gerror);
if (gerror != NULL){
return gerror->message;
}
if (in_read != 4) {
return FAILED ("in_read is expected to be 4 but was %d\n", in_read);
}
if (out_read != 2) {
return FAILED ("out_read is expected to be 2 but was %d\n", out_read);
}
g_free (dst);
return OK;
}
static RESULT
test_utf8_to_utf16 (void)
{
const gchar *src0 = "", *src1 = "ABCDE", *src2 = "\xE5\xB9\xB4\x27", *src3 = "\xEF\xBC\xA1", *src4 = "\xEF\xBD\x81";
gunichar2 str0 [] = {0}, str1 [6], str2 [] = {0x5E74, 39, 0}, str3 [] = {0xFF21, 0}, str4 [] = {0xFF41, 0};
RESULT result;
gchar_to_gunichar2 (str1, src1);
/* empty string */
result = compare_utf8_to_utf16 (str0, src0, 0, 0);
if (result != OK)
return result;
result = compare_utf8_to_utf16 (str1, src1, 5, 5);
if (result != OK)
return result;
result = compare_utf8_to_utf16 (str2, src2, 4, 2);
if (result != OK)
return result;
result = compare_utf8_to_utf16 (str3, src3, 3, 1);
if (result != OK)
return result;
result = compare_utf8_to_utf16 (str4, src4, 3, 1);
if (result != OK)
return result;
return OK;
}
static RESULT
test_utf8_to_utf16_with_nuls (void)
{
const gchar *src0 = "", *src1 = "AB\0DE", *src2 = "\xE5\xB9\xB4\x27", *src3 = "\xEF\xBC\xA1", *src4 = "\xEF\xBD\x81";
gunichar2 str0 [] = {0}, str1 [] = {'A', 'B', 0, 'D', 'E', 0}, str2 [] = {0x5E74, 39, 0}, str3 [] = {0xFF21, 0}, str4 [] = {0xFF41, 0};
RESULT result;
#if !defined(EGLIB_TESTS)
return OK;
#endif
/* implicit length is forbidden */
if (eg_utf8_to_utf16_with_nuls (src1, -1, NULL, NULL, NULL) != NULL)
return FAILED ("explicit nulls must fail with -1 length\n");
/* empty string */
result = compare_utf8_to_utf16_with_nuls (str0, src0, 0, 0);
if (result != OK)
return result;
result = compare_utf8_to_utf16_with_nuls (str1, src1, 5, 5);
if (result != OK)
return result;
result = compare_utf8_to_utf16_with_nuls (str2, src2, 4, 2);
if (result != OK)
return result;
result = compare_utf8_to_utf16_with_nuls (str3, src3, 3, 1);
if (result != OK)
return result;
result = compare_utf8_to_utf16_with_nuls (str4, src4, 3, 1);
if (result != OK)
return result;
return OK;
}
typedef struct {
char *content;
size_t length;
} convert_result_t;
static RESULT
test_convert (void)
{
static const char *charsets[] = { "UTF-8", "UTF-16LE", "UTF-16BE", "UTF-32LE", "UTF-32BE" };
gsize length, converted_length, n;
char *content, *converted, *path;
convert_result_t **expected;
GError *err = NULL;
const char *srcdir;
gboolean loaded;
guint i, j, k;
char c;
if (!(srcdir = getenv ("srcdir")) && !(srcdir = getenv ("PWD")))
return FAILED ("srcdir not defined!");
expected = g_malloc (sizeof (convert_result_t *) * G_N_ELEMENTS (charsets));
/* first load all our test samples... */
for (i = 0; i < G_N_ELEMENTS (charsets); i++) {
path = g_strdup_printf ("%s%c%s.txt", srcdir, G_DIR_SEPARATOR, charsets[i]);
loaded = g_file_get_contents (path, &content, &length, &err);
g_free (path);
if (!loaded) {
for (j = 0; j < i; j++) {
g_free (expected[j]->content);
g_free (expected[j]);
}
g_free (expected);
return FAILED ("Failed to load content for %s: %s", charsets[i], err->message);
}
expected[i] = g_malloc (sizeof (convert_result_t));
expected[i]->content = content;
expected[i]->length = length;
}
/* test conversion from every charset to every other charset */
for (i = 0; i < G_N_ELEMENTS (charsets); i++) {
for (j = 0; j < G_N_ELEMENTS (charsets); j++) {
converted = g_convert (expected[i]->content, expected[i]->length, charsets[j],
charsets[i], NULL, &converted_length, NULL);
if (converted == NULL) {
for (k = 0; k < G_N_ELEMENTS (charsets); k++) {
g_free (expected[k]->content);
g_free (expected[k]);
}
g_free (expected);
return FAILED ("Failed to convert from %s to %s: NULL", charsets[i], charsets[j]);
}
if (converted_length != expected[j]->length) {
length = expected[j]->length;
for (k = 0; k < G_N_ELEMENTS (charsets); k++) {
g_free (expected[k]->content);
g_free (expected[k]);
}
g_free (converted);
g_free (expected);
return FAILED ("Failed to convert from %s to %s: expected %u bytes, got %u",
charsets[i], charsets[j], length, converted_length);
}
for (n = 0; n < converted_length; n++) {
if (converted[n] != expected[j]->content[n]) {
c = expected[j]->content[n];
for (k = 0; k < G_N_ELEMENTS (charsets); k++) {
g_free (expected[k]->content);
g_free (expected[k]);
}
g_free (converted);
g_free (expected);
return FAILED ("Failed to convert from %s to %s: expected 0x%x at offset %u, got 0x%x",
charsets[i], charsets[j], c, n, converted[n]);
}
}
g_free (converted);
}
}
for (k = 0; k < G_N_ELEMENTS (charsets); k++) {
g_free (expected[k]->content);
g_free (expected[k]);
}
g_free (expected);
return OK;
}
static RESULT
ucs4_to_utf16_check_result (const gunichar2 *result_str, const gunichar2 *expected_str,
glong result_items_read, glong expected_items_read,
glong result_items_written, glong expected_items_written,
GError* result_error, gboolean expect_error)
{
glong i;
if (result_items_read != expected_items_read)
return FAILED("Incorrect number of items read; expected %d, got %d", expected_items_read, result_items_read);
if (result_items_written != expected_items_written)
return FAILED("Incorrect number of items written; expected %d, got %d", expected_items_written, result_items_written);
if (result_error && !expect_error)
return FAILED("There should not be an error code.");
if (!result_error && expect_error)
return FAILED("Unexpected error object.");
if (expect_error && result_str)
return FAILED("NULL should be returned when an error occurs.");
if (!expect_error && !result_str)
return FAILED("When no error occurs NULL should not be returned.");
for (i=0; i<expected_items_written;i++) {
if (result_str [i] != expected_str [i])
return FAILED("Incorrect value %d at index %d", result_str [i], i);
}
if (result_str && result_str[expected_items_written] != '\0')
return FAILED("Null termination not found at the end of the string.");
return OK;
}
static RESULT
test_ucs4_to_utf16 (void)
{
static gunichar str1[12] = {'H','e','l','l','o',' ','W','o','r','l','d','\0'};
static gunichar2 exp1[12] = {'H','e','l','l','o',' ','W','o','r','l','d','\0'};
static gunichar str2[3] = {'h',0x80000000,'\0'};
static gunichar2 exp2[2] = {'h','\0'};
static gunichar str3[3] = {'h',0xDA00,'\0'};
static gunichar str4[3] = {'h',0x10FFFF,'\0'};
static gunichar2 exp4[4] = {'h',0xdbff,0xdfff,'\0'};
static gunichar str5[7] = {0xD7FF,0xD800,0xDFFF,0xE000,0x110000,0x10FFFF,'\0'};
static gunichar2 exp5[5] = {0xD7FF,0xE000,0xdbff,0xdfff,'\0'};
static gunichar str6[2] = {0x10400, '\0'};
static gunichar2 exp6[3] = {0xD801, 0xDC00, '\0'};
static glong read_write[12] = {1,1,0,0,0,0,1,1,0,0,1,2};
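/* read_write holds (items_read, items_written) pairs for the single-character
   conversions in the bounds loop below: lone surrogates (0xD800, 0xDFFF) and
   the out-of-range 0x110000 are rejected (0,0); 0x10FFFF needs a surrogate
   pair (1,2). */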
gunichar2* res;
glong items_read, items_written, current_write_index;
GError* err=0;
RESULT check_result;
glong i;
res = g_ucs4_to_utf16 (str1, 12, &items_read, &items_written, &err);
check_result = ucs4_to_utf16_check_result (res, exp1, items_read, 11, items_written, 11, err, FALSE);
if (check_result) return check_result;
g_free (res);
items_read = items_written = 0;
res = g_ucs4_to_utf16 (str2, 0, &items_read, &items_written, &err);
check_result = ucs4_to_utf16_check_result (res, exp2, items_read, 0, items_written, 0, err, FALSE);
if (check_result) return check_result;
g_free (res);
items_read = items_written = 0;
res = g_ucs4_to_utf16 (str2, 1, &items_read, &items_written, &err);
check_result = ucs4_to_utf16_check_result (res, exp2, items_read, 1, items_written, 1, err, FALSE);
if (check_result) return check_result;
g_free (res);
items_read = items_written = 0;
res = g_ucs4_to_utf16 (str2, 2, &items_read, &items_written, &err);
check_result = ucs4_to_utf16_check_result (res, 0, items_read, 1, items_written, 0, err, TRUE);
g_free (res);
if (check_result) return check_result;
items_read = items_written = 0;
err = 0;
res = g_ucs4_to_utf16 (str3, 2, &items_read, &items_written, &err);
check_result = ucs4_to_utf16_check_result (res, 0, items_read, 1, items_written, 0, err, TRUE);
if (check_result) return check_result;
g_free (res);
items_read = items_written = 0;
err = 0;
res = g_ucs4_to_utf16 (str4, 5, &items_read, &items_written, &err);
check_result = ucs4_to_utf16_check_result (res, exp4, items_read, 2, items_written, 3, err, FALSE);
if (check_result) return check_result;
g_free (res);
// This loop tests the bounds of the conversion algorithm
current_write_index = 0;
for (i=0;i<6;i++) {
items_read = items_written = 0;
err = 0;
res = g_ucs4_to_utf16 (&str5[i], 1, &items_read, &items_written, &err);
check_result = ucs4_to_utf16_check_result (res, &exp5[current_write_index],
items_read, read_write[i*2], items_written, read_write[(i*2)+1], err, !read_write[(i*2)+1]);
if (check_result) return check_result;
g_free (res);
current_write_index += items_written;
}
items_read = items_written = 0;
err = 0;
res = g_ucs4_to_utf16 (str6, 1, &items_read, &items_written, &err);
check_result = ucs4_to_utf16_check_result (res, exp6, items_read, 1, items_written, 2, err, FALSE);
if (check_result) return check_result;
g_free (res);
return OK;
}
static RESULT
utf16_to_ucs4_check_result (const gunichar *result_str, const gunichar *expected_str,
glong result_items_read, glong expected_items_read,
glong result_items_written, glong expected_items_written,
GError* result_error, gboolean expect_error)
{
glong i;
if (result_items_read != expected_items_read)
return FAILED("Incorrect number of items read; expected %d, got %d", expected_items_read, result_items_read);
if (result_items_written != expected_items_written)
return FAILED("Incorrect number of items written; expected %d, got %d", expected_items_written, result_items_written);
if (result_error && !expect_error)
return FAILED("There should not be an error code.");
if (!result_error && expect_error)
return FAILED("Unexpected error object.");
if (expect_error && result_str)
return FAILED("NULL should be returned when an error occurs.");
if (!expect_error && !result_str)
return FAILED("When no error occurs NULL should not be returned.");
for (i=0; i<expected_items_written;i++) {
if (result_str [i] != expected_str [i])
return FAILED("Incorrect value %d at index %d", result_str [i], i);
}
if (result_str && result_str[expected_items_written] != '\0')
return FAILED("Null termination not found at the end of the string.");
return OK;
}
static RESULT
test_utf16_to_ucs4 (void)
{
static gunichar2 str1[12] = {'H','e','l','l','o',' ','W','o','r','l','d','\0'};
static gunichar exp1[12] = {'H','e','l','l','o',' ','W','o','r','l','d','\0'};
static gunichar2 str2[7] = {'H', 0xD800, 0xDC01,0xD800,0xDBFF,'l','\0'};
static gunichar exp2[3] = {'H',0x00010001,'\0'};
static gunichar2 str3[4] = {'H', 0xDC00 ,'l','\0'};
static gunichar exp3[2] = {'H','\0'};
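	// str4 mixes unpaired surrogates with valid pairs at the edges of the high/low surrogate ranges.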
static gunichar2 str4[20] = {0xDC00,0xDFFF,0xDFF,0xD800,0xDBFF,0xD800,0xDC00,0xD800,0xDFFF,
0xD800,0xE000,0xDBFF,0xDBFF,0xDBFF,0xDC00,0xDBFF,0xDFFF,0xDBFF,0xE000,'\0'};
static gunichar exp4[6] = {0xDFF,0x10000,0x103ff,0x10fc00,0x10FFFF,'\0'};
static gunichar2 str5[3] = {0xD801, 0xDC00, 0};
static gunichar exp5[2] = {0x10400, 0};
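	// Triples of (input length, expected items_read, expected items_written) for each step through str4; a written count of 0 marks an expected error.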
static glong read_write[33] = {1,0,0,1,0,0,1,1,1,2,1,0,2,2,1,2,2,1,2,1,0,2,1,0,2,2,1,2,2,1,2,1,0};
gunichar* res;
glong items_read, items_written, current_read_index,current_write_index;
GError* err=0;
RESULT check_result;
glong i;
res = g_utf16_to_ucs4 (str1, 12, &items_read, &items_written, &err);
check_result = utf16_to_ucs4_check_result (res, exp1, items_read, 11, items_written, 11, err, FALSE);
if (check_result) return check_result;
g_free (res);
items_read = items_written = 0;
res = g_utf16_to_ucs4 (str2, 0, &items_read, &items_written, &err);
check_result = utf16_to_ucs4_check_result (res, exp2, items_read, 0, items_written, 0, err, FALSE);
if (check_result) return check_result;
g_free (res);
items_read = items_written = 0;
res = g_utf16_to_ucs4 (str2, 1, &items_read, &items_written, &err);
check_result = utf16_to_ucs4_check_result (res, exp2, items_read, 1, items_written, 1, err, FALSE);
if (check_result) return check_result;
g_free (res);
items_read = items_written = 0;
res = g_utf16_to_ucs4 (str2, 2, &items_read, &items_written, &err);
check_result = utf16_to_ucs4_check_result (res, exp2, items_read, 1, items_written, 1, err, FALSE);
if (check_result) return check_result;
g_free (res);
items_read = items_written = 0;
res = g_utf16_to_ucs4 (str2, 3, &items_read, &items_written, &err);
check_result = utf16_to_ucs4_check_result (res, exp2, items_read, 3, items_written, 2, err, FALSE);
if (check_result) return check_result;
g_free (res);
items_read = items_written = 0;
res = g_utf16_to_ucs4 (str2, 4, &items_read, &items_written, &err);
check_result = utf16_to_ucs4_check_result (res, exp2, items_read, 3, items_written, 2, err, FALSE);
if (check_result) return check_result;
g_free (res);
items_read = items_written = 0;
res = g_utf16_to_ucs4 (str2, 5, &items_read, &items_written, &err);
check_result = utf16_to_ucs4_check_result (res, exp2, items_read, 4, items_written, 0, err, TRUE);
if (check_result) return check_result;
g_free (res);
items_read = items_written = 0;
err = 0;
res = g_utf16_to_ucs4 (str3, 5, &items_read, &items_written, &err);
check_result = utf16_to_ucs4_check_result (res, exp3, items_read, 1, items_written, 0, err, TRUE);
if (check_result) return check_result;
g_free (res);
// This loop tests the bounds of the conversion algorithm
current_read_index = current_write_index = 0;
for (i=0;i<11;i++) {
items_read = items_written = 0;
err = 0;
res = g_utf16_to_ucs4 (&str4[current_read_index], read_write[i*3], &items_read, &items_written, &err);
check_result = utf16_to_ucs4_check_result (res, &exp4[current_write_index], items_read,
read_write[(i*3)+1], items_written, read_write[(i*3)+2], err,
!read_write[(i*3)+2]);
if (check_result) return check_result;
g_free (res);
current_read_index += read_write[i*3];
current_write_index += items_written;
}
items_read = items_written = 0;
err = 0;
res = g_utf16_to_ucs4 (str5, 2, &items_read, &items_written, &err);
check_result = utf16_to_ucs4_check_result (res, exp5, items_read, 2, items_written, 1, err, FALSE);
if (check_result) return check_result;
g_free (res);
return OK;
}
static RESULT
test_utf8_strlen (void)
{
gchar word1 [] = {0xC2, 0x82,0x45,0xE1, 0x81, 0x83,0x58,0xF1, 0x82, 0x82, 0x82,'\0'};//Valid, len = 5
gchar word2 [] = {0xF1, 0x82, 0x82, 0x82,0xC2, 0x82,0x45,0xE1, 0x81, 0x83,0x58,'\0'};//Valid, len = 5
gchar word3 [] = {'h','e',0xC2, 0x82,0x45,'\0'}; //Valid, len = 4
gchar word4 [] = {0x62,0xC2, 0x82,0x45,0xE1, 0x81, 0x83,0x58,'\0'}; //Valid, len = 5
glong len = 0;
//Test word1
len = g_utf8_strlen (word1,-1);
if (len != 5)
return FAILED ("Word1 expected length of 5, but was %i", len);
//Do tests with different values for max parameter.
len = g_utf8_strlen (word1,1);
if (len != 0)
return FAILED ("Word1, max = 1, expected length of 0, but was %i", len);
len = g_utf8_strlen (word1,2);
if (len != 1)
return FAILED ("Word1, max = 1, expected length of 1, but was %i", len);
len = g_utf8_strlen (word1,3);
if (len != 2)
return FAILED ("Word1, max = 2, expected length of 2, but was %i", len);
//Test word2
len = g_utf8_strlen (word2,-1);
if (len != 5)
return FAILED ("Word2 expected length of 5, but was %i", len);
//Test word3
len = g_utf8_strlen (word3,-1);
if (len != 4)
return FAILED ("Word3 expected length of 4, but was %i", len);
//Test word4
len = g_utf8_strlen (word4,-1);
if (len != 5)
return FAILED ("Word4 expected length of 5, but was %i", len);
//Test null case
len = g_utf8_strlen(NULL,0);
if (len != 0)
return FAILED ("Expected passing null to result in a length of 0");
return OK;
}
static RESULT
test_utf8_get_char (void)
{
gchar word1 [] = {0xC2, 0x82,0x45,0xE1, 0x81, 0x83,0x58,0xF1, 0x82, 0x82, 0x82,'\0'}; //Valid, len = 5
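	// word1 decodes to U+0082, 'E' (0x45), U+1043, 'X' (0x58), and U+42082.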
gunichar value = g_utf8_get_char (&word1 [0]);
if (value != 0x82UL)
return FAILED ("Expected value of 0x82, but was %x", value);
value = g_utf8_get_char (&word1 [2]);
if (value != 0x45UL)
return FAILED ("Expected value of 0x45, but was %x", value);
value = g_utf8_get_char (&word1 [3]);
if (value != 0x1043UL)
return FAILED ("Expected value of 0x1043, but was %x", value);
value = g_utf8_get_char (&word1 [6]);
if (value != 0x58UL)
return FAILED ("Expected value of 0x58, but was %x", value);
value = g_utf8_get_char (&word1 [7]);
if (value != 0x42082UL)
return FAILED ("Expected value of 0x42082, but was %x", value);
return OK;
}
static RESULT
test_utf8_next_char (void)
{
gchar word1 [] = {0xC2, 0x82,0x45,0xE1, 0x81, 0x83,0x58,0xF1, 0x82, 0x82, 0x82,'\0'}; //Valid, len = 5
gchar word2 [] = {0xF1, 0x82, 0x82, 0x82,0xC2, 0x82,0x45,0xE1, 0x81, 0x83,0x58,'\0'}; //Valid, len = 5
gchar word1ExpectedValues [] = {0xC2, 0x45,0xE1, 0x58, 0xF1};
gchar word2ExpectedValues [] = {0xF1, 0xC2, 0x45, 0xE1, 0x58};
gchar* ptr = word1;
gint count = 0;
//Test word1
while (*ptr != 0) {
if (count > 4)
return FAILED ("Word1 has gone past its expected length");
if (*ptr != word1ExpectedValues[count])
return FAILED ("Word1 has an incorrect next_char at index %i", count);
ptr = g_utf8_next_char (ptr);
count++;
}
//Test word2
count = 0;
ptr = word2;
while (*ptr != 0) {
if (count > 4)
return FAILED ("Word2 has gone past its expected length");
if (*ptr != word2ExpectedValues[count])
return FAILED ("Word2 has an incorrect next_char at index %i", count);
ptr = g_utf8_next_char (ptr);
count++;
}
return OK;
}
static RESULT
test_utf8_validate (void)
{
	gchar invalidWord1 [] = {0xC3, 0x82, 0xC1,0x90,'\0'}; //Invalid, 1st oct of a sequence can't be 0xC0 or 0xC1
gchar invalidWord2 [] = {0xC1, 0x89, 0x60, '\0'}; //Invalid, 1st oct can not be 0xC1
	gchar invalidWord3 [] = {0xC2, 0x45,0xE1, 0x81, 0x83,0x58,'\0'}; //Invalid, oct after 0xC2 must be >= 0x80
gchar validWord1 [] = {0xC2, 0x82, 0xC3,0xA0,'\0'}; //Valid
gchar validWord2 [] = {0xC2, 0x82,0x45,0xE1, 0x81, 0x83,0x58,0xF1, 0x82, 0x82, 0x82,'\0'}; //Valid
const gchar* end;
gboolean retVal = g_utf8_validate (invalidWord1, -1, &end);
if (retVal != FALSE)
return FAILED ("Expected invalidWord1 to be invalid");
if (end != &invalidWord1 [2])
return FAILED ("Expected end parameter to be pointing to invalidWord1[2]");
end = NULL;
retVal = g_utf8_validate (invalidWord2, -1, &end);
if (retVal != FALSE)
return FAILED ("Expected invalidWord2 to be invalid");
if (end != &invalidWord2 [0])
return FAILED ("Expected end parameter to be pointing to invalidWord2[0]");
end = NULL;
retVal = g_utf8_validate (invalidWord3, -1, &end);
if (retVal != FALSE)
return FAILED ("Expected invalidWord3 to be invalid");
if (end != &invalidWord3 [0])
return FAILED ("Expected end parameter to be pointing to invalidWord3[1]");
end = NULL;
retVal = g_utf8_validate (validWord1, -1, &end);
if (retVal != TRUE)
return FAILED ("Expected validWord1 to be valid");
if (end != &validWord1 [4])
return FAILED ("Expected end parameter to be pointing to validWord1[4]");
end = NULL;
retVal = g_utf8_validate (validWord2, -1, &end);
if (retVal != TRUE)
return FAILED ("Expected validWord2 to be valid");
if (end != &validWord2 [11])
return FAILED ("Expected end parameter to be pointing to validWord2[11]");
return OK;
}
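/* Returns the number of bytes before the terminating NUL (equivalent to strlen). */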
static glong
utf8_byteslen (const gchar *src)
{
int i = 0;
do {
if (src [i] == '\0')
return i;
i++;
} while (TRUE);
}
/*
* test initialization
*/
static Test utf8_tests [] = {
{"g_utf16_to_utf8", test_utf16_to_utf8},
{"g_utf8_to_utf16", test_utf8_to_utf16},
{"g_utf8_to_utf16_with_nuls", test_utf8_to_utf16_with_nuls},
{"g_utf8_seq", test_utf8_seq},
{"g_convert", test_convert },
{"g_ucs4_to_utf16", test_ucs4_to_utf16 },
{"g_utf16_to_ucs4", test_utf16_to_ucs4 },
{"g_utf8_strlen", test_utf8_strlen },
{"g_utf8_get_char", test_utf8_get_char },
{"g_utf8_next_char", test_utf8_next_char },
{"g_utf8_validate", test_utf8_validate },
{NULL, NULL}
};
DEFINE_TEST_GROUP_INIT(utf8_tests_init, utf8_tests)
| #include <stdlib.h>
#include "test.h"
/*
* g_utf16_to_utf8
*/
static glong
compare_strings_utf8_pos (const gchar *expected, const gchar *actual, glong size)
{
int i;
for (i = 0; i < size; i++)
if (expected [i] != actual [i])
return i;
return -1;
}
static RESULT
compare_strings_utf8_RESULT (const gchar *expected, const gchar *actual, glong size)
{
glong ret;
ret = compare_strings_utf8_pos (expected, actual, size);
if (ret < 0)
return OK;
return FAILED ("Incorrect output: expected '%s' but was '%s', differ at %d\n", expected, actual, ret);
}
static void
gchar_to_gunichar2 (gunichar2 ret[], const gchar *src)
{
int i;
for (i = 0; src [i]; i++)
ret [i] = src [i];
ret [i] = 0;
}
static RESULT
compare_utf16_to_utf8_explicit (const gchar *expected, const gunichar2 *utf16, glong len_in, glong len_out, glong size_spec)
{
GError *gerror;
gchar* ret;
RESULT result;
glong in_read, out_read;
result = NULL;
gerror = NULL;
ret = g_utf16_to_utf8 (utf16, size_spec, &in_read, &out_read, &gerror);
if (gerror) {
result = FAILED ("The error is %d %s\n", gerror->code, gerror->message);
g_error_free (gerror);
if (ret)
g_free (ret);
return result;
}
if (in_read != len_in)
result = FAILED ("Read size is incorrect: expected %d but was %d\n", len_in, in_read);
else if (out_read != len_out)
result = FAILED ("Converted size is incorrect: expected %d but was %d\n", len_out, out_read);
else
result = compare_strings_utf8_RESULT (expected, ret, len_out);
g_free (ret);
if (result)
return result;
return OK;
}
static RESULT
compare_utf16_to_utf8 (const gchar *expected, const gunichar2 *utf16, glong len_in, glong len_out)
{
RESULT result;
result = compare_utf16_to_utf8_explicit (expected, utf16, len_in, len_out, -1);
if (result != OK)
return result;
return compare_utf16_to_utf8_explicit (expected, utf16, len_in, len_out, len_in);
}
static RESULT
test_utf16_to_utf8 (void)
{
const gchar *src0 = "", *src1 = "ABCDE", *src2 = "\xE5\xB9\xB4\x27", *src3 = "\xEF\xBC\xA1", *src4 = "\xEF\xBD\x81", *src5 = "\xF0\x90\x90\x80";
gunichar2 str0 [] = {0}, str1 [6], str2 [] = {0x5E74, 39, 0}, str3 [] = {0xFF21, 0}, str4 [] = {0xFF41, 0}, str5 [] = {0xD801, 0xDC00, 0};
RESULT result;
gchar_to_gunichar2 (str1, src1);
/* empty string */
result = compare_utf16_to_utf8 (src0, str0, 0, 0);
if (result != OK)
return result;
result = compare_utf16_to_utf8 (src1, str1, 5, 5);
if (result != OK)
return result;
result = compare_utf16_to_utf8 (src2, str2, 2, 4);
if (result != OK)
return result;
result = compare_utf16_to_utf8 (src3, str3, 1, 3);
if (result != OK)
return result;
result = compare_utf16_to_utf8 (src4, str4, 1, 3);
if (result != OK)
return result;
result = compare_utf16_to_utf8 (src5, str5, 2, 4);
if (result != OK)
return result;
return OK;
}
/*
* g_utf8_to_utf16
*/
static glong
compare_strings_utf16_pos (const gunichar2 *expected, const gunichar2 *actual, glong size)
{
int i;
for (i = 0; i < size; i++)
if (expected [i] != actual [i])
return i;
return -1;
}
static RESULT
compare_strings_utf16_RESULT (const gunichar2 *expected, const gunichar2 *actual, glong size)
{
glong ret;
ret = compare_strings_utf16_pos (expected, actual, size);
if (ret < 0)
return OK;
return FAILED ("Incorrect output: expected '%s' but was '%s', differ at %d ('%c' x '%c')\n", expected, actual, ret, expected [ret], actual [ret]);
}
#if !defined(EGLIB_TESTS)
#define eg_utf8_to_utf16_with_nuls g_utf8_to_utf16
#endif
static RESULT
compare_utf8_to_utf16_explicit (const gunichar2 *expected, const gchar *utf8, glong len_in, glong len_out, glong size_spec, gboolean include_nuls)
{
GError *gerror;
gunichar2* ret;
RESULT result;
glong in_read, out_read;
result = NULL;
gerror = NULL;
if (include_nuls)
ret = eg_utf8_to_utf16_with_nuls (utf8, size_spec, &in_read, &out_read, &gerror);
else
ret = g_utf8_to_utf16 (utf8, size_spec, &in_read, &out_read, &gerror);
if (gerror) {
result = FAILED ("The error is %d %s\n", gerror->code, gerror->message);
g_error_free (gerror);
if (ret)
g_free (ret);
return result;
}
if (in_read != len_in)
result = FAILED ("Read size is incorrect: expected %d but was %d\n", len_in, in_read);
else if (out_read != len_out)
result = FAILED ("Converted size is incorrect: expected %d but was %d\n", len_out, out_read);
else
result = compare_strings_utf16_RESULT (expected, ret, len_out);
g_free (ret);
if (result)
return result;
return OK;
}
static RESULT
compare_utf8_to_utf16_general (const gunichar2 *expected, const gchar *utf8, glong len_in, glong len_out, gboolean include_nuls)
{
RESULT result;
result = compare_utf8_to_utf16_explicit (expected, utf8, len_in, len_out, -1, include_nuls);
if (result != OK)
return result;
return compare_utf8_to_utf16_explicit (expected, utf8, len_in, len_out, len_in, include_nuls);
}
static RESULT
compare_utf8_to_utf16 (const gunichar2 *expected, const gchar *utf8, glong len_in, glong len_out)
{
return compare_utf8_to_utf16_general (expected, utf8, len_in, len_out, FALSE);
}
static RESULT
compare_utf8_to_utf16_with_nuls (const gunichar2 *expected, const gchar *utf8, glong len_in, glong len_out)
{
return compare_utf8_to_utf16_explicit (expected, utf8, len_in, len_out, len_in, TRUE);
}
static RESULT
test_utf8_seq (void)
{
const gchar *src = "\xE5\xB9\xB4\x27";
glong in_read, out_read;
//gunichar2 expected [6];
GError *gerror = NULL;
gunichar2 *dst;
//printf ("got: %s\n", src);
dst = g_utf8_to_utf16 (src, (glong)strlen (src), &in_read, &out_read, &gerror);
if (gerror != NULL){
return gerror->message;
}
if (in_read != 4) {
return FAILED ("in_read is expected to be 4 but was %d\n", in_read);
}
if (out_read != 2) {
return FAILED ("out_read is expected to be 2 but was %d\n", out_read);
}
g_free (dst);
return OK;
}
static RESULT
test_utf8_to_utf16 (void)
{
const gchar *src0 = "", *src1 = "ABCDE", *src2 = "\xE5\xB9\xB4\x27", *src3 = "\xEF\xBC\xA1", *src4 = "\xEF\xBD\x81";
gunichar2 str0 [] = {0}, str1 [6], str2 [] = {0x5E74, 39, 0}, str3 [] = {0xFF21, 0}, str4 [] = {0xFF41, 0};
RESULT result;
gchar_to_gunichar2 (str1, src1);
/* empty string */
result = compare_utf8_to_utf16 (str0, src0, 0, 0);
if (result != OK)
return result;
result = compare_utf8_to_utf16 (str1, src1, 5, 5);
if (result != OK)
return result;
result = compare_utf8_to_utf16 (str2, src2, 4, 2);
if (result != OK)
return result;
result = compare_utf8_to_utf16 (str3, src3, 3, 1);
if (result != OK)
return result;
result = compare_utf8_to_utf16 (str4, src4, 3, 1);
if (result != OK)
return result;
return OK;
}
static RESULT
test_utf8_to_utf16_with_nuls (void)
{
const gchar *src0 = "", *src1 = "AB\0DE", *src2 = "\xE5\xB9\xB4\x27", *src3 = "\xEF\xBC\xA1", *src4 = "\xEF\xBD\x81";
gunichar2 str0 [] = {0}, str1 [] = {'A', 'B', 0, 'D', 'E', 0}, str2 [] = {0x5E74, 39, 0}, str3 [] = {0xFF21, 0}, str4 [] = {0xFF41, 0};
RESULT result;
#if !defined(EGLIB_TESTS)
return OK;
#endif
/* implicit length is forbidden */
if (eg_utf8_to_utf16_with_nuls (src1, -1, NULL, NULL, NULL) != NULL)
return FAILED ("explicit nulls must fail with -1 length\n");
/* empty string */
result = compare_utf8_to_utf16_with_nuls (str0, src0, 0, 0);
if (result != OK)
return result;
result = compare_utf8_to_utf16_with_nuls (str1, src1, 5, 5);
if (result != OK)
return result;
result = compare_utf8_to_utf16_with_nuls (str2, src2, 4, 2);
if (result != OK)
return result;
result = compare_utf8_to_utf16_with_nuls (str3, src3, 3, 1);
if (result != OK)
return result;
result = compare_utf8_to_utf16_with_nuls (str4, src4, 3, 1);
if (result != OK)
return result;
return OK;
}
static RESULT
ucs4_to_utf16_check_result (const gunichar2 *result_str, const gunichar2 *expected_str,
glong result_items_read, glong expected_items_read,
glong result_items_written, glong expected_items_written,
GError* result_error, gboolean expect_error)
{
glong i;
if (result_items_read != expected_items_read)
return FAILED("Incorrect number of items read; expected %d, got %d", expected_items_read, result_items_read);
if (result_items_written != expected_items_written)
return FAILED("Incorrect number of items written; expected %d, got %d", expected_items_written, result_items_written);
if (result_error && !expect_error)
return FAILED("There should not be an error code.");
if (!result_error && expect_error)
return FAILED("Unexpected error object.");
if (expect_error && result_str)
return FAILED("NULL should be returned when an error occurs.");
if (!expect_error && !result_str)
return FAILED("When no error occurs NULL should not be returned.");
for (i=0; i<expected_items_written;i++) {
if (result_str [i] != expected_str [i])
return FAILED("Incorrect value %d at index %d", result_str [i], i);
}
if (result_str && result_str[expected_items_written] != '\0')
return FAILED("Null termination not found at the end of the string.");
return OK;
}
static RESULT
test_ucs4_to_utf16 (void)
{
static gunichar str1[12] = {'H','e','l','l','o',' ','W','o','r','l','d','\0'};
static gunichar2 exp1[12] = {'H','e','l','l','o',' ','W','o','r','l','d','\0'};
static gunichar str2[3] = {'h',0x80000000,'\0'};
static gunichar2 exp2[2] = {'h','\0'};
static gunichar str3[3] = {'h',0xDA00,'\0'};
static gunichar str4[3] = {'h',0x10FFFF,'\0'};
static gunichar2 exp4[4] = {'h',0xdbff,0xdfff,'\0'};
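	// str5 holds boundary code points: just below/inside/above the surrogate range, one past the Unicode maximum, and the maximum itself.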
static gunichar str5[7] = {0xD7FF,0xD800,0xDFFF,0xE000,0x110000,0x10FFFF,'\0'};
static gunichar2 exp5[5] = {0xD7FF,0xE000,0xdbff,0xdfff,'\0'};
static gunichar str6[2] = {0x10400, '\0'};
static gunichar2 exp6[3] = {0xD801, 0xDC00, '\0'};
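	// Expected (items_read, items_written) pairs for each single-character conversion of str5; a written count of 0 marks an expected error.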
static glong read_write[12] = {1,1,0,0,0,0,1,1,0,0,1,2};
gunichar2* res;
glong items_read, items_written, current_write_index;
GError* err=0;
RESULT check_result;
glong i;
res = g_ucs4_to_utf16 (str1, 12, &items_read, &items_written, &err);
check_result = ucs4_to_utf16_check_result (res, exp1, items_read, 11, items_written, 11, err, FALSE);
if (check_result) return check_result;
g_free (res);
items_read = items_written = 0;
res = g_ucs4_to_utf16 (str2, 0, &items_read, &items_written, &err);
check_result = ucs4_to_utf16_check_result (res, exp2, items_read, 0, items_written, 0, err, FALSE);
if (check_result) return check_result;
g_free (res);
items_read = items_written = 0;
res = g_ucs4_to_utf16 (str2, 1, &items_read, &items_written, &err);
check_result = ucs4_to_utf16_check_result (res, exp2, items_read, 1, items_written, 1, err, FALSE);
if (check_result) return check_result;
g_free (res);
items_read = items_written = 0;
res = g_ucs4_to_utf16 (str2, 2, &items_read, &items_written, &err);
check_result = ucs4_to_utf16_check_result (res, 0, items_read, 1, items_written, 0, err, TRUE);
g_free (res);
if (check_result) return check_result;
items_read = items_written = 0;
err = 0;
res = g_ucs4_to_utf16 (str3, 2, &items_read, &items_written, &err);
check_result = ucs4_to_utf16_check_result (res, 0, items_read, 1, items_written, 0, err, TRUE);
if (check_result) return check_result;
g_free (res);
items_read = items_written = 0;
err = 0;
res = g_ucs4_to_utf16 (str4, 5, &items_read, &items_written, &err);
check_result = ucs4_to_utf16_check_result (res, exp4, items_read, 2, items_written, 3, err, FALSE);
if (check_result) return check_result;
g_free (res);
// This loop tests the bounds of the conversion algorithm
current_write_index = 0;
for (i=0;i<6;i++) {
items_read = items_written = 0;
err = 0;
res = g_ucs4_to_utf16 (&str5[i], 1, &items_read, &items_written, &err);
check_result = ucs4_to_utf16_check_result (res, &exp5[current_write_index],
items_read, read_write[i*2], items_written, read_write[(i*2)+1], err, !read_write[(i*2)+1]);
if (check_result) return check_result;
g_free (res);
current_write_index += items_written;
}
items_read = items_written = 0;
err = 0;
res = g_ucs4_to_utf16 (str6, 1, &items_read, &items_written, &err);
check_result = ucs4_to_utf16_check_result (res, exp6, items_read, 1, items_written, 2, err, FALSE);
if (check_result) return check_result;
g_free (res);
return OK;
}
static RESULT
utf16_to_ucs4_check_result (const gunichar *result_str, const gunichar *expected_str,
glong result_items_read, glong expected_items_read,
glong result_items_written, glong expected_items_written,
GError* result_error, gboolean expect_error)
{
glong i;
if (result_items_read != expected_items_read)
return FAILED("Incorrect number of items read; expected %d, got %d", expected_items_read, result_items_read);
if (result_items_written != expected_items_written)
return FAILED("Incorrect number of items written; expected %d, got %d", expected_items_written, result_items_written);
if (result_error && !expect_error)
return FAILED("There should not be an error code.");
if (!result_error && expect_error)
return FAILED("Unexpected error object.");
if (expect_error && result_str)
return FAILED("NULL should be returned when an error occurs.");
if (!expect_error && !result_str)
return FAILED("When no error occurs NULL should not be returned.");
for (i=0; i<expected_items_written;i++) {
if (result_str [i] != expected_str [i])
return FAILED("Incorrect value %d at index %d", result_str [i], i);
}
if (result_str && result_str[expected_items_written] != '\0')
return FAILED("Null termination not found at the end of the string.");
return OK;
}
static RESULT
test_utf16_to_ucs4 (void)
{
static gunichar2 str1[12] = {'H','e','l','l','o',' ','W','o','r','l','d','\0'};
static gunichar exp1[12] = {'H','e','l','l','o',' ','W','o','r','l','d','\0'};
static gunichar2 str2[7] = {'H', 0xD800, 0xDC01,0xD800,0xDBFF,'l','\0'};
static gunichar exp2[3] = {'H',0x00010001,'\0'};
static gunichar2 str3[4] = {'H', 0xDC00 ,'l','\0'};
static gunichar exp3[2] = {'H','\0'};
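	// str4 mixes unpaired surrogates with valid pairs at the edges of the high/low surrogate ranges.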
static gunichar2 str4[20] = {0xDC00,0xDFFF,0xDFF,0xD800,0xDBFF,0xD800,0xDC00,0xD800,0xDFFF,
0xD800,0xE000,0xDBFF,0xDBFF,0xDBFF,0xDC00,0xDBFF,0xDFFF,0xDBFF,0xE000,'\0'};
static gunichar exp4[6] = {0xDFF,0x10000,0x103ff,0x10fc00,0x10FFFF,'\0'};
static gunichar2 str5[3] = {0xD801, 0xDC00, 0};
static gunichar exp5[2] = {0x10400, 0};
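	// Triples of (input length, expected items_read, expected items_written) for each step through str4; a written count of 0 marks an expected error.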
static glong read_write[33] = {1,0,0,1,0,0,1,1,1,2,1,0,2,2,1,2,2,1,2,1,0,2,1,0,2,2,1,2,2,1,2,1,0};
gunichar* res;
glong items_read, items_written, current_read_index,current_write_index;
GError* err=0;
RESULT check_result;
glong i;
res = g_utf16_to_ucs4 (str1, 12, &items_read, &items_written, &err);
check_result = utf16_to_ucs4_check_result (res, exp1, items_read, 11, items_written, 11, err, FALSE);
if (check_result) return check_result;
g_free (res);
items_read = items_written = 0;
res = g_utf16_to_ucs4 (str2, 0, &items_read, &items_written, &err);
check_result = utf16_to_ucs4_check_result (res, exp2, items_read, 0, items_written, 0, err, FALSE);
if (check_result) return check_result;
g_free (res);
items_read = items_written = 0;
res = g_utf16_to_ucs4 (str2, 1, &items_read, &items_written, &err);
check_result = utf16_to_ucs4_check_result (res, exp2, items_read, 1, items_written, 1, err, FALSE);
if (check_result) return check_result;
g_free (res);
items_read = items_written = 0;
res = g_utf16_to_ucs4 (str2, 2, &items_read, &items_written, &err);
check_result = utf16_to_ucs4_check_result (res, exp2, items_read, 1, items_written, 1, err, FALSE);
if (check_result) return check_result;
g_free (res);
items_read = items_written = 0;
res = g_utf16_to_ucs4 (str2, 3, &items_read, &items_written, &err);
check_result = utf16_to_ucs4_check_result (res, exp2, items_read, 3, items_written, 2, err, FALSE);
if (check_result) return check_result;
g_free (res);
items_read = items_written = 0;
res = g_utf16_to_ucs4 (str2, 4, &items_read, &items_written, &err);
check_result = utf16_to_ucs4_check_result (res, exp2, items_read, 3, items_written, 2, err, FALSE);
if (check_result) return check_result;
g_free (res);
items_read = items_written = 0;
res = g_utf16_to_ucs4 (str2, 5, &items_read, &items_written, &err);
check_result = utf16_to_ucs4_check_result (res, exp2, items_read, 4, items_written, 0, err, TRUE);
if (check_result) return check_result;
g_free (res);
items_read = items_written = 0;
err = 0;
res = g_utf16_to_ucs4 (str3, 5, &items_read, &items_written, &err);
check_result = utf16_to_ucs4_check_result (res, exp3, items_read, 1, items_written, 0, err, TRUE);
if (check_result) return check_result;
g_free (res);
// This loop tests the bounds of the conversion algorithm
current_read_index = current_write_index = 0;
for (i=0;i<11;i++) {
items_read = items_written = 0;
err = 0;
res = g_utf16_to_ucs4 (&str4[current_read_index], read_write[i*3], &items_read, &items_written, &err);
check_result = utf16_to_ucs4_check_result (res, &exp4[current_write_index], items_read,
read_write[(i*3)+1], items_written, read_write[(i*3)+2], err,
!read_write[(i*3)+2]);
if (check_result) return check_result;
g_free (res);
current_read_index += read_write[i*3];
current_write_index += items_written;
}
items_read = items_written = 0;
err = 0;
res = g_utf16_to_ucs4 (str5, 2, &items_read, &items_written, &err);
check_result = utf16_to_ucs4_check_result (res, exp5, items_read, 2, items_written, 1, err, FALSE);
if (check_result) return check_result;
g_free (res);
return OK;
}
static RESULT
test_utf8_strlen (void)
{
gchar word1 [] = {0xC2, 0x82,0x45,0xE1, 0x81, 0x83,0x58,0xF1, 0x82, 0x82, 0x82,'\0'};//Valid, len = 5
gchar word2 [] = {0xF1, 0x82, 0x82, 0x82,0xC2, 0x82,0x45,0xE1, 0x81, 0x83,0x58,'\0'};//Valid, len = 5
gchar word3 [] = {'h','e',0xC2, 0x82,0x45,'\0'}; //Valid, len = 4
gchar word4 [] = {0x62,0xC2, 0x82,0x45,0xE1, 0x81, 0x83,0x58,'\0'}; //Valid, len = 5
glong len = 0;
//Test word1
len = g_utf8_strlen (word1,-1);
if (len != 5)
return FAILED ("Word1 expected length of 5, but was %i", len);
//Do tests with different values for max parameter.
len = g_utf8_strlen (word1,1);
if (len != 0)
return FAILED ("Word1, max = 1, expected length of 0, but was %i", len);
len = g_utf8_strlen (word1,2);
if (len != 1)
return FAILED ("Word1, max = 1, expected length of 1, but was %i", len);
len = g_utf8_strlen (word1,3);
if (len != 2)
return FAILED ("Word1, max = 2, expected length of 2, but was %i", len);
//Test word2
len = g_utf8_strlen (word2,-1);
if (len != 5)
return FAILED ("Word2 expected length of 5, but was %i", len);
//Test word3
len = g_utf8_strlen (word3,-1);
if (len != 4)
return FAILED ("Word3 expected length of 4, but was %i", len);
//Test word4
len = g_utf8_strlen (word4,-1);
if (len != 5)
return FAILED ("Word4 expected length of 5, but was %i", len);
//Test null case
len = g_utf8_strlen(NULL,0);
if (len != 0)
return FAILED ("Expected passing null to result in a length of 0");
return OK;
}
static RESULT
test_utf8_get_char (void)
{
gchar word1 [] = {0xC2, 0x82,0x45,0xE1, 0x81, 0x83,0x58,0xF1, 0x82, 0x82, 0x82,'\0'}; //Valid, len = 5
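	// word1 decodes to U+0082, 'E' (0x45), U+1043, 'X' (0x58), and U+42082.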
gunichar value = g_utf8_get_char (&word1 [0]);
if (value != 0x82UL)
return FAILED ("Expected value of 0x82, but was %x", value);
value = g_utf8_get_char (&word1 [2]);
if (value != 0x45UL)
return FAILED ("Expected value of 0x45, but was %x", value);
value = g_utf8_get_char (&word1 [3]);
if (value != 0x1043UL)
return FAILED ("Expected value of 0x1043, but was %x", value);
value = g_utf8_get_char (&word1 [6]);
if (value != 0x58UL)
return FAILED ("Expected value of 0x58, but was %x", value);
value = g_utf8_get_char (&word1 [7]);
if (value != 0x42082UL)
return FAILED ("Expected value of 0x42082, but was %x", value);
return OK;
}
static RESULT
test_utf8_next_char (void)
{
gchar word1 [] = {0xC2, 0x82,0x45,0xE1, 0x81, 0x83,0x58,0xF1, 0x82, 0x82, 0x82,'\0'}; //Valid, len = 5
gchar word2 [] = {0xF1, 0x82, 0x82, 0x82,0xC2, 0x82,0x45,0xE1, 0x81, 0x83,0x58,'\0'}; //Valid, len = 5
gchar word1ExpectedValues [] = {0xC2, 0x45,0xE1, 0x58, 0xF1};
gchar word2ExpectedValues [] = {0xF1, 0xC2, 0x45, 0xE1, 0x58};
gchar* ptr = word1;
gint count = 0;
//Test word1
while (*ptr != 0) {
if (count > 4)
return FAILED ("Word1 has gone past its expected length");
if (*ptr != word1ExpectedValues[count])
return FAILED ("Word1 has an incorrect next_char at index %i", count);
ptr = g_utf8_next_char (ptr);
count++;
}
//Test word2
count = 0;
ptr = word2;
while (*ptr != 0) {
if (count > 4)
return FAILED ("Word2 has gone past its expected length");
if (*ptr != word2ExpectedValues[count])
return FAILED ("Word2 has an incorrect next_char at index %i", count);
ptr = g_utf8_next_char (ptr);
count++;
}
return OK;
}
static RESULT
test_utf8_validate (void)
{
	gchar invalidWord1 [] = {0xC3, 0x82, 0xC1,0x90,'\0'}; //Invalid, 1st oct of a sequence can't be 0xC0 or 0xC1
gchar invalidWord2 [] = {0xC1, 0x89, 0x60, '\0'}; //Invalid, 1st oct can not be 0xC1
	gchar invalidWord3 [] = {0xC2, 0x45,0xE1, 0x81, 0x83,0x58,'\0'}; //Invalid, oct after 0xC2 must be >= 0x80
gchar validWord1 [] = {0xC2, 0x82, 0xC3,0xA0,'\0'}; //Valid
gchar validWord2 [] = {0xC2, 0x82,0x45,0xE1, 0x81, 0x83,0x58,0xF1, 0x82, 0x82, 0x82,'\0'}; //Valid
const gchar* end;
gboolean retVal = g_utf8_validate (invalidWord1, -1, &end);
if (retVal != FALSE)
return FAILED ("Expected invalidWord1 to be invalid");
if (end != &invalidWord1 [2])
return FAILED ("Expected end parameter to be pointing to invalidWord1[2]");
end = NULL;
retVal = g_utf8_validate (invalidWord2, -1, &end);
if (retVal != FALSE)
return FAILED ("Expected invalidWord2 to be invalid");
if (end != &invalidWord2 [0])
return FAILED ("Expected end parameter to be pointing to invalidWord2[0]");
end = NULL;
retVal = g_utf8_validate (invalidWord3, -1, &end);
if (retVal != FALSE)
return FAILED ("Expected invalidWord3 to be invalid");
if (end != &invalidWord3 [0])
return FAILED ("Expected end parameter to be pointing to invalidWord3[1]");
end = NULL;
retVal = g_utf8_validate (validWord1, -1, &end);
if (retVal != TRUE)
return FAILED ("Expected validWord1 to be valid");
if (end != &validWord1 [4])
return FAILED ("Expected end parameter to be pointing to validWord1[4]");
end = NULL;
retVal = g_utf8_validate (validWord2, -1, &end);
if (retVal != TRUE)
return FAILED ("Expected validWord2 to be valid");
if (end != &validWord2 [11])
return FAILED ("Expected end parameter to be pointing to validWord2[11]");
return OK;
}
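/* Returns the number of bytes before the terminating NUL (equivalent to strlen). */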
static glong
utf8_byteslen (const gchar *src)
{
int i = 0;
do {
if (src [i] == '\0')
return i;
i++;
} while (TRUE);
}
/*
* test initialization
*/
static Test utf8_tests [] = {
{"g_utf16_to_utf8", test_utf16_to_utf8},
{"g_utf8_to_utf16", test_utf8_to_utf16},
{"g_utf8_to_utf16_with_nuls", test_utf8_to_utf16_with_nuls},
{"g_utf8_seq", test_utf8_seq},
{"g_ucs4_to_utf16", test_ucs4_to_utf16 },
{"g_utf16_to_ucs4", test_utf16_to_ucs4 },
{"g_utf8_strlen", test_utf8_strlen },
{"g_utf8_get_char", test_utf8_get_char },
{"g_utf8_next_char", test_utf8_next_char },
{"g_utf8_validate", test_utf8_validate },
{NULL, NULL}
};
DEFINE_TEST_GROUP_INIT(utf8_tests_init, utf8_tests)
| 1 |
dotnet/runtime | 66,211 | [mono] Remove SkipVerification support from the runtime | CAS support was removed in .NET Core. This allows us removing a bunch of code that is unused, e.g. the dependency on libiconv. | akoeplinger | 2022-03-04T19:47:04Z | 2022-03-06T13:44:33Z | b463b1630dbf1be5b013208a9fa73e1ecd6c774c | be629f49a350d526de2c65981294734cee420b90 | [mono] Remove SkipVerification support from the runtime. CAS support was removed in .NET Core. This allows us removing a bunch of code that is unused, e.g. the dependency on libiconv. | ./src/mono/mono/eventpipe/test/CMakeLists.txt | if(ENABLE_PERFTRACING)
# TODO: Add support for dynamic components once package/deploy have been resolved.
if(ENABLE_EVENTPIPE_TEST AND STATIC_COMPONENTS AND (NOT DISABLE_COMPONENTS) AND (NOT DISABLE_LIBS) AND (NOT DISABLE_EXECUTABLES))
set(EVENTPIPE_TEST_SOURCES "")
set(EVENTPIPE_TEST_HEADERS "")
list(APPEND EVENTPIPE_TEST_SOURCES
ep-buffer-manager-tests.c
ep-buffer-tests.c
ep-fake-tests.c
ep-fastserializer-tests.c
ep-file-tests.c
ep-provider-callback-dataqueue-tests.c
ep-rt-tests.c
ep-session-tests.c
ep-setup-tests.c
ep-teardown-tests.c
ep-tests.c
ep-test-runner.c
ep-test-driver.c
ep-thread-tests.c
)
list(APPEND EVENTPIPE_TEST_HEADERS
ep-tests.h
ep-tests-debug.h
)
addprefix(EVENTPIPE_TEST_SOURCES ${MONO_EVENTPIPE_TEST_SOURCE_PATH} "${EVENTPIPE_TEST_SOURCES}")
addprefix(EVENTPIPE_TEST_HEADERS ${MONO_EVENTPIPE_TEST_SOURCE_PATH} "${EVENTPIPE_TEST_HEADERS}")
set(CMAKE_SKIP_RPATH 1)
add_executable(ep-test ${EVENTPIPE_TEST_SOURCES} ${EVENTPIPE_TEST_HEADERS})
target_sources(ep-test PRIVATE "${mono-components-objects}")
target_link_libraries(ep-test monosgen-static ${OS_LIBS} ${ICONV_LIB} ${LLVM_LIBS} ${ICU_LIBS})
if(ICU_LDFLAGS)
set_target_properties(ep-test PROPERTIES LINK_FLAGS ${ICU_LDFLAGS})
endif()
install(TARGETS ep-test RUNTIME)
else(ENABLE_EVENTPIPE_TEST AND STATIC_COMPONENTS AND (NOT DISABLE_COMPONENTS) AND (NOT DISABLE_LIBS) AND (NOT DISABLE_EXECUTABLES))
message(VERBOSE "Skip building native EventPipe library test runner.")
endif(ENABLE_EVENTPIPE_TEST AND STATIC_COMPONENTS AND (NOT DISABLE_COMPONENTS) AND (NOT DISABLE_LIBS) AND (NOT DISABLE_EXECUTABLES))
endif(ENABLE_PERFTRACING)
| if(ENABLE_PERFTRACING)
# TODO: Add support for dynamic components once package/deploy have been resolved.
if(ENABLE_EVENTPIPE_TEST AND STATIC_COMPONENTS AND (NOT DISABLE_COMPONENTS) AND (NOT DISABLE_LIBS) AND (NOT DISABLE_EXECUTABLES))
set(EVENTPIPE_TEST_SOURCES "")
set(EVENTPIPE_TEST_HEADERS "")
list(APPEND EVENTPIPE_TEST_SOURCES
ep-buffer-manager-tests.c
ep-buffer-tests.c
ep-fake-tests.c
ep-fastserializer-tests.c
ep-file-tests.c
ep-provider-callback-dataqueue-tests.c
ep-rt-tests.c
ep-session-tests.c
ep-setup-tests.c
ep-teardown-tests.c
ep-tests.c
ep-test-runner.c
ep-test-driver.c
ep-thread-tests.c
)
list(APPEND EVENTPIPE_TEST_HEADERS
ep-tests.h
ep-tests-debug.h
)
addprefix(EVENTPIPE_TEST_SOURCES ${MONO_EVENTPIPE_TEST_SOURCE_PATH} "${EVENTPIPE_TEST_SOURCES}")
addprefix(EVENTPIPE_TEST_HEADERS ${MONO_EVENTPIPE_TEST_SOURCE_PATH} "${EVENTPIPE_TEST_HEADERS}")
set(CMAKE_SKIP_RPATH 1)
add_executable(ep-test ${EVENTPIPE_TEST_SOURCES} ${EVENTPIPE_TEST_HEADERS})
target_sources(ep-test PRIVATE "${mono-components-objects}")
target_link_libraries(ep-test monosgen-static ${OS_LIBS} ${LLVM_LIBS} ${ICU_LIBS})
if(ICU_LDFLAGS)
set_target_properties(ep-test PROPERTIES LINK_FLAGS ${ICU_LDFLAGS})
endif()
install(TARGETS ep-test RUNTIME)
else(ENABLE_EVENTPIPE_TEST AND STATIC_COMPONENTS AND (NOT DISABLE_COMPONENTS) AND (NOT DISABLE_LIBS) AND (NOT DISABLE_EXECUTABLES))
message(VERBOSE "Skip building native EventPipe library test runner.")
endif(ENABLE_EVENTPIPE_TEST AND STATIC_COMPONENTS AND (NOT DISABLE_COMPONENTS) AND (NOT DISABLE_LIBS) AND (NOT DISABLE_EXECUTABLES))
endif(ENABLE_PERFTRACING)
| 1 |
dotnet/runtime | 66,211 | [mono] Remove SkipVerification support from the runtime | CAS support was removed in .NET Core. This allows us removing a bunch of code that is unused, e.g. the dependency on libiconv. | akoeplinger | 2022-03-04T19:47:04Z | 2022-03-06T13:44:33Z | b463b1630dbf1be5b013208a9fa73e1ecd6c774c | be629f49a350d526de2c65981294734cee420b90 | [mono] Remove SkipVerification support from the runtime. CAS support was removed in .NET Core. This allows us removing a bunch of code that is unused, e.g. the dependency on libiconv. | ./src/mono/mono/metadata/assembly.c | /**
* \file
* Routines for loading assemblies.
*
* Author:
* Miguel de Icaza ([email protected])
*
* Copyright 2001-2003 Ximian, Inc (http://www.ximian.com)
* Copyright 2004-2009 Novell, Inc (http://www.novell.com)
* Copyright 2011 Xamarin, Inc (http://www.xamarin.com)
* Licensed under the MIT license. See LICENSE file in the project root for full license information.
*/
#include <config.h>
#include <stdio.h>
#include <glib.h>
#include <errno.h>
#include <string.h>
#include <stdlib.h>
#include <mono/metadata/assembly.h>
#include "assembly-internals.h"
#include <mono/metadata/image.h>
#include "image-internals.h"
#include "object-internals.h"
#include <mono/metadata/loader.h>
#include <mono/metadata/tabledefs.h>
#include <mono/metadata/custom-attrs-internals.h>
#include <mono/metadata/metadata-internals.h>
#include <mono/metadata/profiler-private.h>
#include <mono/metadata/class-internals.h>
#include <mono/metadata/domain-internals.h>
#include <mono/metadata/exception-internals.h>
#include <mono/metadata/reflection-internals.h>
#include <mono/metadata/mono-endian.h>
#include <mono/metadata/mono-debug.h>
#include <mono/utils/mono-uri.h>
#include <mono/metadata/mono-config.h>
#include <mono/metadata/mono-config-internals.h>
#include <mono/metadata/mono-config-dirs.h>
#include <mono/utils/mono-digest.h>
#include <mono/utils/mono-logger-internals.h>
#include <mono/utils/mono-path.h>
#include <mono/utils/mono-proclib.h>
#include <mono/metadata/reflection.h>
#include <mono/metadata/coree.h>
#include <mono/metadata/cil-coff.h>
#include <mono/utils/atomic.h>
#include <mono/utils/mono-os-mutex.h>
#include <mono/metadata/mono-private-unstable.h>
#include <minipal/getexepath.h>
#ifndef HOST_WIN32
#include <sys/types.h>
#include <unistd.h>
#include <sys/stat.h>
#endif
#ifdef HOST_DARWIN
#include <mach-o/dyld.h>
#endif
/* the default search path is empty, the first slot is replaced with the computed value */
static char*
default_path [] = {
NULL,
NULL,
NULL
};
/* Contains the list of directories to be searched for assemblies (MONO_PATH) */
static char **assemblies_path = NULL;
/* keeps track of loaded assemblies, excluding dynamic ones */
static GList *loaded_assemblies = NULL;
static guint32 loaded_assembly_count = 0;
static MonoAssembly *corlib;
static char* unquote (const char *str);
// This protects loaded_assemblies
static mono_mutex_t assemblies_mutex;
static inline void
mono_assemblies_lock (void)
{
mono_os_mutex_lock (&assemblies_mutex);
}
static inline void
mono_assemblies_unlock (void)
{
mono_os_mutex_unlock (&assemblies_mutex);
}
/* If defined, points to the bundled assembly information */
static const MonoBundledAssembly **bundles;
static const MonoBundledSatelliteAssembly **satellite_bundles;
/* Class lazy loading functions */
static GENERATE_TRY_GET_CLASS_WITH_CACHE (debuggable_attribute, "System.Diagnostics", "DebuggableAttribute")
static GENERATE_TRY_GET_CLASS_WITH_CACHE (internals_visible, "System.Runtime.CompilerServices", "InternalsVisibleToAttribute")
static MonoAssembly*
mono_assembly_invoke_search_hook_internal (MonoAssemblyLoadContext *alc, MonoAssembly *requesting, MonoAssemblyName *aname, gboolean postload);
static MonoAssembly *
invoke_assembly_preload_hook (MonoAssemblyLoadContext *alc, MonoAssemblyName *aname, gchar **apath);
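/* Hex-encodes a public key token into a newly allocated lowercase string. */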
static gchar*
encode_public_tok (const guchar *token, gint32 len)
{
const static gchar allowed [] = { '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', 'a', 'b', 'c', 'd', 'e', 'f' };
gchar *res;
int i;
res = (gchar *)g_malloc (len * 2 + 1);
for (i = 0; i < len; i++) {
res [i * 2] = allowed [token [i] >> 4];
res [i * 2 + 1] = allowed [token [i] & 0xF];
}
res [len * 2] = 0;
return res;
}
/**
* mono_public_tokens_are_equal:
* \param pubt1 first public key token
* \param pubt2 second public key token
*
* Compare two public key tokens and return TRUE is they are equal and FALSE
* otherwise.
*/
gboolean
mono_public_tokens_are_equal (const unsigned char *pubt1, const unsigned char *pubt2)
{
return g_ascii_strncasecmp ((const char*) pubt1, (const char*) pubt2, 16) == 0;
}
/**
* mono_set_assemblies_path:
* \param path list of paths that contain directories where Mono will look for assemblies
*
* Use this method to override the standard assembly lookup system and
* override any assemblies coming from the GAC. This is the method
* that supports the \c MONO_PATH variable.
*
 * Notice that \c MONO_PATH and this method are really a very bad idea, as
 * they prevent the GAC and the standard resolution mechanisms from
 * working. Nonetheless, for some debugging situations and bootstrapping
 * setups, this is useful to have.
*/
void
mono_set_assemblies_path (const char* path)
{
char **splitted, **dest;
splitted = g_strsplit (path, G_SEARCHPATH_SEPARATOR_S, 1000);
if (assemblies_path)
g_strfreev (assemblies_path);
assemblies_path = dest = splitted;
while (*splitted) {
char *tmp = *splitted;
if (*tmp)
*dest++ = mono_path_canonicalize (tmp);
g_free (tmp);
splitted++;
}
*dest = *splitted;
if (g_hasenv ("MONO_DEBUG"))
return;
splitted = assemblies_path;
while (*splitted) {
if (**splitted && !g_file_test (*splitted, G_FILE_TEST_IS_DIR))
g_warning ("'%s' in MONO_PATH doesn't exist or has wrong permissions.", *splitted);
splitted++;
}
}
void
mono_set_assemblies_path_direct (char **path)
{
g_strfreev (assemblies_path);
assemblies_path = path;
}
static void
check_path_env (void)
{
if (assemblies_path != NULL)
return;
char* path = g_getenv ("MONO_PATH");
if (!path)
return;
mono_set_assemblies_path(path);
g_free (path);
}
static void
mono_assembly_binding_info_free (MonoAssemblyBindingInfo *info)
{
if (!info)
return;
g_free (info->name);
g_free (info->culture);
}
/**
* mono_assembly_names_equal:
* \param l first assembly
* \param r second assembly.
*
* Compares two \c MonoAssemblyName instances and returns whether they are equal.
*
* This compares the names, the cultures, the release version and their
* public tokens.
*
* \returns TRUE if both assembly names are equal.
*/
gboolean
mono_assembly_names_equal (MonoAssemblyName *l, MonoAssemblyName *r)
{
return mono_assembly_names_equal_flags (l, r, MONO_ANAME_EQ_NONE);
}
/**
* mono_assembly_names_equal_flags:
* \param l first assembly name
* \param r second assembly name
* \param flags flags that affect what is compared.
*
* Compares two \c MonoAssemblyName instances and returns whether they are equal.
*
* This compares the simple names and cultures and optionally the versions and
* public key tokens, depending on the \c flags.
*
* \returns TRUE if both assembly names are equal.
*/
gboolean
mono_assembly_names_equal_flags (MonoAssemblyName *l, MonoAssemblyName *r, MonoAssemblyNameEqFlags flags)
{
g_assert (l != NULL);
g_assert (r != NULL);
if (!l->name || !r->name)
return FALSE;
if ((flags & MONO_ANAME_EQ_IGNORE_CASE) != 0 && g_strcasecmp (l->name, r->name))
return FALSE;
if ((flags & MONO_ANAME_EQ_IGNORE_CASE) == 0 && strcmp (l->name, r->name))
return FALSE;
if (l->culture && r->culture && strcmp (l->culture, r->culture))
return FALSE;
if ((l->major != r->major || l->minor != r->minor ||
l->build != r->build || l->revision != r->revision) &&
(flags & MONO_ANAME_EQ_IGNORE_VERSION) == 0)
if (! ((l->major == 0 && l->minor == 0 && l->build == 0 && l->revision == 0) || (r->major == 0 && r->minor == 0 && r->build == 0 && r->revision == 0)))
return FALSE;
if (!l->public_key_token [0] || !r->public_key_token [0] || (flags & MONO_ANAME_EQ_IGNORE_PUBKEY) != 0)
return TRUE;
if (!mono_public_tokens_are_equal (l->public_key_token, r->public_key_token))
return FALSE;
return TRUE;
}
/**
* assembly_names_compare_versions:
* \param l left assembly name
* \param r right assembly name
* \param maxcomps how many version components to compare, or -1 to compare all.
*
* \returns a negative if \p l is a lower version than \p r; a positive value
* if \p r is a lower version than \p l, or zero if \p l and \p r are equal
 * versions (comparing up to \p maxcomps components).
*
* Components are \c major, \c minor, \c revision, and \c build. \p maxcomps 1 means just compare
 * majors; 2 means majors then minors; and so on.
*/
static int
assembly_names_compare_versions (MonoAssemblyName *l, MonoAssemblyName *r, int maxcomps)
{
int i = 0;
if (maxcomps < 0) maxcomps = 4;
#define CMP(field) do { \
if (l-> field < r-> field && i < maxcomps) return -1; \
if (l-> field > r-> field && i < maxcomps) return 1; \
} while (0)
CMP (major);
++i;
CMP (minor);
++i;
CMP (revision);
++i;
CMP (build);
#undef CMP
return 0;
}
/**
* mono_assembly_request_prepare_load:
* \param req the load request to be initialized
* \param alc the AssemblyLoadContext in netcore
*
* Initialize an assembly loader request. Its state will be reset and the assembly context kind will be prefilled with \p asmctx.
*/
void
mono_assembly_request_prepare_load (MonoAssemblyLoadRequest *req, MonoAssemblyLoadContext *alc)
{
memset (req, 0, sizeof (MonoAssemblyLoadRequest));
req->alc = alc;
}
/**
* mono_assembly_request_prepare_open:
* \param req the open request to be initialized
* \param alc the AssemblyLoadContext in netcore
*
* Initialize an assembly loader request intended to be used for open operations. Its state will be reset and the assembly context kind will be prefilled with \p asmctx.
*/
void
mono_assembly_request_prepare_open (MonoAssemblyOpenRequest *req, MonoAssemblyLoadContext *alc)
{
memset (req, 0, sizeof (MonoAssemblyOpenRequest));
req->request.alc = alc;
}
/**
* mono_assembly_request_prepare_byname:
* \param req the byname request to be initialized
* \param alc the AssemblyLoadContext in netcore
*
* Initialize an assembly load by name request. Its state will be reset and the assembly context kind will be prefilled with \p asmctx.
*/
void
mono_assembly_request_prepare_byname (MonoAssemblyByNameRequest *req, MonoAssemblyLoadContext *alc)
{
memset (req, 0, sizeof (MonoAssemblyByNameRequest));
req->request.alc = alc;
}
static MonoAssembly *
load_in_path (const char *basename, const char** search_path, const MonoAssemblyOpenRequest *req, MonoImageOpenStatus *status)
{
int i;
char *fullpath;
MonoAssembly *result;
for (i = 0; search_path [i]; ++i) {
fullpath = g_build_filename (search_path [i], basename, (const char*)NULL);
result = mono_assembly_request_open (fullpath, req, status);
g_free (fullpath);
if (result)
return result;
}
return NULL;
}
/**
* mono_assembly_setrootdir:
* \param root_dir The pathname of the root directory where we will locate assemblies
*
* This routine sets the internal default root directory for looking up
* assemblies.
*
* This is used by Windows installations to compute dynamically the
* place where the Mono assemblies are located.
*
*/
void
mono_assembly_setrootdir (const char *root_dir)
{
/*
* Override the MONO_ASSEMBLIES directory configured at compile time.
*/
if (default_path [0])
g_free (default_path [0]);
default_path [0] = g_strdup (root_dir);
}
/**
* mono_assembly_getrootdir:
*
* Obtains the root directory used for looking up assemblies.
*
* Returns: a string with the directory, this string should not be freed.
*/
G_CONST_RETURN gchar *
mono_assembly_getrootdir (void)
{
return default_path [0];
}
/**
* mono_native_getrootdir:
*
* Obtains the root directory used for looking up native libs (.so, .dylib).
*
* Returns: a string with the directory, this string should be freed by
* the caller.
*/
gchar *
mono_native_getrootdir (void)
{
gchar* fullpath = g_build_path (G_DIR_SEPARATOR_S, mono_assembly_getrootdir (), mono_config_get_reloc_lib_dir(), (const char*)NULL);
return fullpath;
}
/**
* mono_set_dirs:
* \param assembly_dir the base directory for assemblies
* \param config_dir the base directory for configuration files
*
* This routine is used internally and by developers embedding
* the runtime into their own applications.
*
* There are a number of cases to consider: Mono as a system-installed
* package that is available on the location preconfigured or Mono in
* a relocated location.
*
* If you are using a system-installed Mono, you can pass NULL
* to both parameters. If you are not, you should compute both
* directory values and call this routine.
*
* The values for a given PREFIX are:
*
* assembly_dir: PREFIX/lib
* config_dir: PREFIX/etc
*
* Notice that embedders that use Mono in a relocated way must
* compute the location at runtime, as they will be in control
* of where Mono is installed.
*/
void
mono_set_dirs (const char *assembly_dir, const char *config_dir)
{
if (assembly_dir == NULL)
assembly_dir = mono_config_get_assemblies_dir ();
if (config_dir == NULL)
config_dir = mono_config_get_cfg_dir ();
mono_assembly_setrootdir (assembly_dir);
mono_set_config_dir (config_dir);
}
#ifndef HOST_WIN32
static char *
compute_base (char *path)
{
char *p = strrchr (path, '/');
if (p == NULL)
return NULL;
	/* Not a well known Mono executable, we are embedded, can't guess the base */
if (strcmp (p, "/mono") && strcmp (p, "/mono-boehm") && strcmp (p, "/mono-sgen") && strcmp (p, "/pedump") && strcmp (p, "/monodis"))
return NULL;
*p = 0;
p = strrchr (path, '/');
if (p == NULL)
return NULL;
if (strcmp (p, "/bin") != 0)
return NULL;
*p = 0;
return path;
}
static void
fallback (void)
{
mono_set_dirs (mono_config_get_assemblies_dir (), mono_config_get_cfg_dir ());
}
static G_GNUC_UNUSED void
set_dirs (char *exe)
{
char *base;
char *config, *lib, *mono;
struct stat buf;
const char *bindir;
/*
* Only /usr prefix is treated specially
*/
bindir = mono_config_get_bin_dir ();
g_assert (bindir);
if (strncmp (exe, bindir, strlen (bindir)) == 0 || (base = compute_base (exe)) == NULL){
fallback ();
return;
}
config = g_build_filename (base, "etc", (const char*)NULL);
lib = g_build_filename (base, "lib", (const char*)NULL);
mono = g_build_filename (lib, "mono/4.5", (const char*)NULL); // FIXME: stop hardcoding 4.5 here
if (stat (mono, &buf) == -1)
fallback ();
else {
mono_set_dirs (lib, config);
}
g_free (config);
g_free (lib);
g_free (mono);
}
#endif /* HOST_WIN32 */
/**
* mono_set_rootdir:
*
 * Registers the root directory for the Mono runtime. On Linux and Solaris 10,
* this auto-detects the prefix where Mono was installed.
*/
void
mono_set_rootdir (void)
{
char *path = minipal_getexepath();
if (path == NULL) {
#ifndef HOST_WIN32
fallback ();
#endif
return;
}
#if defined(HOST_WIN32) || (defined(HOST_DARWIN) && !defined(TARGET_ARM))
gchar *bindir, *installdir, *root, *config;
bindir = g_path_get_dirname (path);
installdir = g_path_get_dirname (bindir);
root = g_build_path (G_DIR_SEPARATOR_S, installdir, "lib", (const char*)NULL);
config = g_build_filename (root, "..", "etc", (const char*)NULL);
#ifdef HOST_WIN32
mono_set_dirs (root, config);
#else
if (g_file_test (root, G_FILE_TEST_EXISTS) && g_file_test (config, G_FILE_TEST_EXISTS))
mono_set_dirs (root, config);
else
fallback ();
#endif
g_free (config);
g_free (root);
g_free (installdir);
g_free (bindir);
g_free (path);
#elif defined(DISABLE_MONO_AUTODETECTION)
fallback ();
#else
set_dirs (path);
return;
#endif
}
/**
* mono_assemblies_init:
*
* Initialize global variables used by this module.
*/
void
mono_assemblies_init (void)
{
/*
* Initialize our internal paths if we have not been initialized yet.
* This happens when embedders use Mono.
*/
if (mono_assembly_getrootdir () == NULL)
mono_set_rootdir ();
check_path_env ();
mono_os_mutex_init_recursive (&assemblies_mutex);
}
gboolean
mono_assembly_fill_assembly_name_full (MonoImage *image, MonoAssemblyName *aname, gboolean copyBlobs)
{
MonoTableInfo *t = &image->tables [MONO_TABLE_ASSEMBLY];
guint32 cols [MONO_ASSEMBLY_SIZE];
gint32 machine, flags;
if (!table_info_get_rows (t))
return FALSE;
mono_metadata_decode_row (t, 0, cols, MONO_ASSEMBLY_SIZE);
aname->hash_len = 0;
aname->hash_value = NULL;
aname->name = mono_metadata_string_heap (image, cols [MONO_ASSEMBLY_NAME]);
if (copyBlobs)
aname->name = g_strdup (aname->name);
aname->culture = mono_metadata_string_heap (image, cols [MONO_ASSEMBLY_CULTURE]);
if (copyBlobs)
aname->culture = g_strdup (aname->culture);
aname->flags = cols [MONO_ASSEMBLY_FLAGS];
aname->major = cols [MONO_ASSEMBLY_MAJOR_VERSION];
aname->minor = cols [MONO_ASSEMBLY_MINOR_VERSION];
aname->build = cols [MONO_ASSEMBLY_BUILD_NUMBER];
aname->revision = cols [MONO_ASSEMBLY_REV_NUMBER];
aname->hash_alg = cols [MONO_ASSEMBLY_HASH_ALG];
if (cols [MONO_ASSEMBLY_PUBLIC_KEY]) {
guchar* token = (guchar *)g_malloc (8);
gchar* encoded;
const gchar* pkey;
int len;
pkey = mono_metadata_blob_heap (image, cols [MONO_ASSEMBLY_PUBLIC_KEY]);
len = mono_metadata_decode_blob_size (pkey, &pkey);
aname->public_key = (guchar*)pkey;
mono_digest_get_public_token (token, aname->public_key, len);
encoded = encode_public_tok (token, 8);
g_strlcpy ((char*)aname->public_key_token, encoded, MONO_PUBLIC_KEY_TOKEN_LENGTH);
g_free (encoded);
g_free (token);
}
else {
aname->public_key = NULL;
memset (aname->public_key_token, 0, MONO_PUBLIC_KEY_TOKEN_LENGTH);
}
if (cols [MONO_ASSEMBLY_PUBLIC_KEY]) {
aname->public_key = (guchar*)mono_metadata_blob_heap (image, cols [MONO_ASSEMBLY_PUBLIC_KEY]);
if (copyBlobs) {
const gchar *pkey_end;
int len = mono_metadata_decode_blob_size ((const gchar*) aname->public_key, &pkey_end);
pkey_end += len; /* move to end */
size_t size = pkey_end - (const gchar*)aname->public_key;
guchar *tmp = g_new (guchar, size);
memcpy (tmp, aname->public_key, size);
aname->public_key = tmp;
}
}
else
aname->public_key = 0;
machine = image->image_info->cli_header.coff.coff_machine;
flags = image->image_info->cli_cli_header.ch_flags;
switch (machine) {
case COFF_MACHINE_I386:
/* https://bugzilla.xamarin.com/show_bug.cgi?id=17632 */
if (flags & (CLI_FLAGS_32BITREQUIRED|CLI_FLAGS_PREFERRED32BIT))
aname->arch = MONO_PROCESSOR_ARCHITECTURE_X86;
else if ((flags & 0x70) == 0x70)
aname->arch = MONO_PROCESSOR_ARCHITECTURE_NONE;
else
aname->arch = MONO_PROCESSOR_ARCHITECTURE_MSIL;
break;
case COFF_MACHINE_IA64:
aname->arch = MONO_PROCESSOR_ARCHITECTURE_IA64;
break;
case COFF_MACHINE_AMD64:
aname->arch = MONO_PROCESSOR_ARCHITECTURE_AMD64;
break;
case COFF_MACHINE_ARM:
aname->arch = MONO_PROCESSOR_ARCHITECTURE_ARM;
break;
default:
break;
}
return TRUE;
}
/**
* mono_assembly_fill_assembly_name:
* \param image Image
* \param aname Name
* \returns TRUE if successful
*/
gboolean
mono_assembly_fill_assembly_name (MonoImage *image, MonoAssemblyName *aname)
{
return mono_assembly_fill_assembly_name_full (image, aname, FALSE);
}
/**
* mono_stringify_assembly_name:
* \param aname the assembly name.
*
* Convert \p aname into its string format. The returned string is dynamically
* allocated and should be freed by the caller.
*
* \returns a newly allocated string with a string representation of
* the assembly name.
*/
char*
mono_stringify_assembly_name (MonoAssemblyName *aname)
{
const char *quote = (aname->name && g_ascii_isspace (aname->name [0])) ? "\"" : "";
GString *str;
str = g_string_new (NULL);
g_string_append_printf (str, "%s%s%s", quote, aname->name, quote);
if (!aname->without_version)
g_string_append_printf (str, ", Version=%d.%d.%d.%d", aname->major, aname->minor, aname->build, aname->revision);
if (!aname->without_culture) {
if (aname->culture && *aname->culture)
g_string_append_printf (str, ", Culture=%s", aname->culture);
else
g_string_append_printf (str, ", Culture=%s", "neutral");
}
if (!aname->without_public_key_token) {
if (aname->public_key_token [0])
g_string_append_printf (str,", PublicKeyToken=%s%s", (char *)aname->public_key_token, (aname->flags & ASSEMBLYREF_RETARGETABLE_FLAG) ? ", Retargetable=Yes" : "");
else g_string_append_printf (str,", PublicKeyToken=%s%s", "null", (aname->flags & ASSEMBLYREF_RETARGETABLE_FLAG) ? ", Retargetable=Yes" : "");
}
	/* Hand the accumulated buffer to the caller, who must g_free it */
	return g_string_free (str, FALSE);
}
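/*
 * Illustrative sketch (compiled out, not part of the runtime): how an
 * embedder might use mono_stringify_assembly_name. The example_* helper is
 * hypothetical; the point is that the returned string is owned by the
 * caller and must be released with g_free.
 */
#if 0
static void
example_dump_assembly_name (MonoAssemblyName *aname)
{
	char *display_name = mono_stringify_assembly_name (aname);
	g_print ("assembly: %s\n", display_name);
	/* the stringified name is newly allocated, so the caller frees it */
	g_free (display_name);
}
#endif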
static gchar*
assemblyref_public_tok (MonoImage *image, guint32 key_index, guint32 flags)
{
const gchar *public_tok;
int len;
public_tok = mono_metadata_blob_heap (image, key_index);
len = mono_metadata_decode_blob_size (public_tok, &public_tok);
if (flags & ASSEMBLYREF_FULL_PUBLIC_KEY_FLAG) {
guchar token [8];
mono_digest_get_public_token (token, (guchar*)public_tok, len);
return encode_public_tok (token, 8);
}
return encode_public_tok ((guchar*)public_tok, len);
}
static gchar*
assemblyref_public_tok_checked (MonoImage *image, guint32 key_index, guint32 flags, MonoError *error)
{
const gchar *public_tok;
int len;
public_tok = mono_metadata_blob_heap_checked (image, key_index, error);
return_val_if_nok (error, NULL);
if (!public_tok) {
mono_error_set_bad_image (error, image, "expected public key token (index = %d) in assembly reference, but the Blob heap is NULL", key_index);
return NULL;
}
len = mono_metadata_decode_blob_size (public_tok, &public_tok);
if (flags & ASSEMBLYREF_FULL_PUBLIC_KEY_FLAG) {
guchar token [8];
mono_digest_get_public_token (token, (guchar*)public_tok, len);
return encode_public_tok (token, 8);
}
return encode_public_tok ((guchar*)public_tok, len);
}
/**
* mono_assembly_addref:
* \param assembly the assembly to reference
*
* This routine increments the reference count on a MonoAssembly.
* The reference count is reduced every time the method mono_assembly_close() is
* invoked.
*/
gint32
mono_assembly_addref (MonoAssembly *assembly)
{
return mono_atomic_inc_i32 (&assembly->ref_count);
}
gint32
mono_assembly_decref (MonoAssembly *assembly)
{
return mono_atomic_dec_i32 (&assembly->ref_count);
}
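/*
 * Illustrative sketch (compiled out): balancing the reference count. Each
 * mono_assembly_addref must eventually be matched by mono_assembly_close,
 * which drops the reference. The example_* helper is hypothetical.
 */
#if 0
static void
example_pin_assembly_while_working (MonoAssembly *assembly)
{
	mono_assembly_addref (assembly); /* keep the assembly alive */
	/* ... use the assembly ... */
	mono_assembly_close (assembly);  /* release the reference taken above */
}
#endif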
/*
* CAUTION: This table must be kept in sync with
* ivkm/reflect/Fusion.cs
*/
#define SILVERLIGHT_KEY "7cec85d7bea7798e"
#define WINFX_KEY "31bf3856ad364e35"
#define ECMA_KEY "b77a5c561934e089"
#define MSFINAL_KEY "b03f5f7f11d50a3a"
#define COMPACTFRAMEWORK_KEY "969db8053d3322ac"
typedef struct {
const char *name;
const char *from;
const char *to;
} KeyRemapEntry;
static KeyRemapEntry key_remap_table[] = {
{ "CustomMarshalers", COMPACTFRAMEWORK_KEY, MSFINAL_KEY },
{ "Microsoft.CSharp", WINFX_KEY, MSFINAL_KEY },
{ "Microsoft.VisualBasic", COMPACTFRAMEWORK_KEY, MSFINAL_KEY },
{ "System", SILVERLIGHT_KEY, ECMA_KEY },
{ "System", COMPACTFRAMEWORK_KEY, ECMA_KEY },
{ "System.ComponentModel.Composition", WINFX_KEY, ECMA_KEY },
{ "System.ComponentModel.DataAnnotations", "ddd0da4d3e678217", WINFX_KEY },
{ "System.Core", SILVERLIGHT_KEY, ECMA_KEY },
{ "System.Core", COMPACTFRAMEWORK_KEY, ECMA_KEY },
{ "System.Data", COMPACTFRAMEWORK_KEY, ECMA_KEY },
{ "System.Data.DataSetExtensions", COMPACTFRAMEWORK_KEY, ECMA_KEY },
{ "System.Drawing", COMPACTFRAMEWORK_KEY, MSFINAL_KEY },
{ "System.Messaging", COMPACTFRAMEWORK_KEY, MSFINAL_KEY },
// FIXME: MS uses MSFINAL_KEY for .NET 4.5
{ "System.Net", SILVERLIGHT_KEY, MSFINAL_KEY },
{ "System.Numerics", WINFX_KEY, ECMA_KEY },
{ "System.Runtime.Serialization", SILVERLIGHT_KEY, ECMA_KEY },
{ "System.Runtime.Serialization", COMPACTFRAMEWORK_KEY, ECMA_KEY },
{ "System.ServiceModel", WINFX_KEY, ECMA_KEY },
{ "System.ServiceModel", COMPACTFRAMEWORK_KEY, ECMA_KEY },
{ "System.ServiceModel.Web", SILVERLIGHT_KEY, WINFX_KEY },
{ "System.Web.Services", COMPACTFRAMEWORK_KEY, MSFINAL_KEY },
{ "System.Windows", SILVERLIGHT_KEY, MSFINAL_KEY },
{ "System.Windows.Forms", COMPACTFRAMEWORK_KEY, ECMA_KEY },
{ "System.Xml", SILVERLIGHT_KEY, ECMA_KEY },
{ "System.Xml", COMPACTFRAMEWORK_KEY, ECMA_KEY },
{ "System.Xml.Linq", WINFX_KEY, ECMA_KEY },
{ "System.Xml.Linq", COMPACTFRAMEWORK_KEY, ECMA_KEY },
{ "System.Xml.Serialization", WINFX_KEY, ECMA_KEY }
};
static void
remap_keys (MonoAssemblyName *aname)
{
int i;
for (i = 0; i < G_N_ELEMENTS (key_remap_table); i++) {
const KeyRemapEntry *entry = &key_remap_table [i];
if (strcmp (aname->name, entry->name) ||
!mono_public_tokens_are_equal (aname->public_key_token, (const unsigned char*) entry->from))
continue;
memcpy (aname->public_key_token, entry->to, MONO_PUBLIC_KEY_TOKEN_LENGTH);
mono_trace (G_LOG_LEVEL_DEBUG, MONO_TRACE_ASSEMBLY,
"Remapped public key token of retargetable assembly %s from %s to %s",
aname->name, entry->from, entry->to);
return;
}
}
static MonoAssemblyName *
mono_assembly_remap_version (MonoAssemblyName *aname, MonoAssemblyName *dest_aname)
{
const MonoRuntimeInfo *current_runtime;
if (aname->name == NULL) return aname;
current_runtime = mono_get_runtime_info ();
if (aname->flags & ASSEMBLYREF_RETARGETABLE_FLAG) {
const AssemblyVersionSet* vset;
/* Remap to current runtime */
		vset = &current_runtime->version_sets [0];
memcpy (dest_aname, aname, sizeof(MonoAssemblyName));
dest_aname->major = vset->major;
dest_aname->minor = vset->minor;
dest_aname->build = vset->build;
dest_aname->revision = vset->revision;
dest_aname->flags &= ~ASSEMBLYREF_RETARGETABLE_FLAG;
/* Remap assembly name */
if (!strcmp (aname->name, "System.Net"))
dest_aname->name = g_strdup ("System");
remap_keys (dest_aname);
mono_trace (G_LOG_LEVEL_DEBUG, MONO_TRACE_ASSEMBLY,
"The request to load the retargetable assembly %s v%d.%d.%d.%d was remapped to %s v%d.%d.%d.%d",
aname->name,
aname->major, aname->minor, aname->build, aname->revision,
dest_aname->name,
vset->major, vset->minor, vset->build, vset->revision
);
return dest_aname;
}
return aname;
}
/**
* mono_assembly_get_assemblyref:
* \param image pointer to the \c MonoImage to extract the information from.
* \param index index to the assembly reference in the image.
* \param aname pointer to a \c MonoAssemblyName that will hold the returned value.
*
* Fills out the \p aname with the assembly name of the \p index assembly reference in \p image.
*/
void
mono_assembly_get_assemblyref (MonoImage *image, int index, MonoAssemblyName *aname)
{
MonoTableInfo *t;
guint32 cols [MONO_ASSEMBLYREF_SIZE];
const char *hash;
t = &image->tables [MONO_TABLE_ASSEMBLYREF];
mono_metadata_decode_row (t, index, cols, MONO_ASSEMBLYREF_SIZE);
// ECMA-335: II.22.5 - AssemblyRef
// HashValue can be null or non-null. If non-null it's an index into the blob heap
// Sometimes ILasm can create an image without a Blob heap.
hash = mono_metadata_blob_heap_null_ok (image, cols [MONO_ASSEMBLYREF_HASH_VALUE]);
if (hash) {
aname->hash_len = mono_metadata_decode_blob_size (hash, &hash);
aname->hash_value = hash;
} else {
aname->hash_len = 0;
aname->hash_value = NULL;
}
aname->name = mono_metadata_string_heap (image, cols [MONO_ASSEMBLYREF_NAME]);
aname->culture = mono_metadata_string_heap (image, cols [MONO_ASSEMBLYREF_CULTURE]);
aname->flags = cols [MONO_ASSEMBLYREF_FLAGS];
aname->major = cols [MONO_ASSEMBLYREF_MAJOR_VERSION];
aname->minor = cols [MONO_ASSEMBLYREF_MINOR_VERSION];
aname->build = cols [MONO_ASSEMBLYREF_BUILD_NUMBER];
aname->revision = cols [MONO_ASSEMBLYREF_REV_NUMBER];
if (cols [MONO_ASSEMBLYREF_PUBLIC_KEY]) {
gchar *token = assemblyref_public_tok (image, cols [MONO_ASSEMBLYREF_PUBLIC_KEY], aname->flags);
g_strlcpy ((char*)aname->public_key_token, token, MONO_PUBLIC_KEY_TOKEN_LENGTH);
g_free (token);
} else {
memset (aname->public_key_token, 0, MONO_PUBLIC_KEY_TOKEN_LENGTH);
}
}
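/*
 * Illustrative sketch (compiled out): enumerating the AssemblyRef table of an
 * image with mono_assembly_get_assemblyref. No blobs are copied, so the name
 * fields point into the image metadata and need no freeing. example_* is a
 * hypothetical name.
 */
#if 0
static void
example_list_assembly_refs (MonoImage *image)
{
	MonoTableInfo *t = &image->tables [MONO_TABLE_ASSEMBLYREF];
	int rows = table_info_get_rows (t);
	for (int i = 0; i < rows; ++i) {
		MonoAssemblyName aname;
		mono_assembly_get_assemblyref (image, i, &aname);
		g_print ("ref %d: %s %d.%d.%d.%d\n", i, aname.name,
			 aname.major, aname.minor, aname.build, aname.revision);
	}
}
#endif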
static MonoAssembly *
search_bundle_for_assembly (MonoAssemblyLoadContext *alc, MonoAssemblyName *aname)
{
if (bundles == NULL && satellite_bundles == NULL)
return NULL;
MonoImageOpenStatus status;
MonoImage *image;
MonoAssemblyLoadRequest req;
image = mono_assembly_open_from_bundle (alc, aname->name, &status, aname->culture);
if (!image && !g_str_has_suffix (aname->name, ".dll")) {
char *name = g_strdup_printf ("%s.dll", aname->name);
image = mono_assembly_open_from_bundle (alc, name, &status, aname->culture);
}
if (image) {
mono_assembly_request_prepare_load (&req, alc);
return mono_assembly_request_load_from (image, aname->name, &req, &status);
}
return NULL;
}
static MonoAssembly*
netcore_load_reference (MonoAssemblyName *aname, MonoAssemblyLoadContext *alc, MonoAssembly *requesting, gboolean postload)
{
g_assert (alc != NULL);
MonoAssemblyName mapped_aname;
aname = mono_assembly_remap_version (aname, &mapped_aname);
MonoAssembly *reference = NULL;
gboolean is_satellite = !mono_assembly_name_culture_is_neutral (aname);
gboolean is_default = mono_alc_is_default (alc);
/*
* Try these until one of them succeeds (by returning a non-NULL reference):
* 1. Check if it's already loaded by the ALC.
*
* 2. If it's a non-default ALC, call the Load() method.
*
* 3. If the ALC is not the default and this is not a satellite request,
* check if it's already loaded by the default ALC.
*
* 4. If we have a bundle registered and this is not a satellite request,
* search the images for a matching name.
*
* 5. If we have a satellite bundle registered and this is a satellite request,
* find the parent ALC and search the images for a matching name and culture.
*
* 6. If the ALC is the default or this is not a satellite request,
* check the TPA list, APP_PATHS, and ApplicationBase.
*
* 7. If this is a satellite request, call the ALC ResolveSatelliteAssembly method.
*
* 8. Call the ALC Resolving event. If the ALC is not the default and this is not
* a satellite request, call the Resolving event in the default ALC first.
*
* 9. Call the ALC AssemblyResolve event (except for corlib satellite assemblies).
*
* 10. Return NULL.
*/
reference = mono_assembly_loaded_internal (alc, aname);
if (reference) {
mono_trace (G_LOG_LEVEL_DEBUG, MONO_TRACE_ASSEMBLY, "Assembly already loaded in the active ALC: '%s'.", aname->name);
goto leave;
}
if (!is_default) {
reference = mono_alc_invoke_resolve_using_load_nofail (alc, aname);
if (reference) {
mono_trace (G_LOG_LEVEL_DEBUG, MONO_TRACE_ASSEMBLY, "Assembly found using Load method: '%s'.", aname->name);
goto leave;
}
}
if (!is_default && !is_satellite) {
reference = mono_assembly_loaded_internal (mono_alc_get_default (), aname);
if (reference) {
mono_trace (G_LOG_LEVEL_DEBUG, MONO_TRACE_ASSEMBLY, "Assembly already loaded in the default ALC: '%s'.", aname->name);
goto leave;
}
}
if (bundles != NULL && !is_satellite) {
reference = search_bundle_for_assembly (mono_alc_get_default (), aname);
if (reference) {
mono_trace (G_LOG_LEVEL_DEBUG, MONO_TRACE_ASSEMBLY, "Assembly found in the bundle: '%s'.", aname->name);
goto leave;
}
}
if (satellite_bundles != NULL && is_satellite) {
// Satellite assembly byname requests should be loaded in the same ALC as their parent assembly
size_t name_len = strlen (aname->name);
char *parent_name = NULL;
MonoAssemblyLoadContext *parent_alc = NULL;
		if (g_str_has_suffix (aname->name, MONO_ASSEMBLY_RESOURCE_SUFFIX)) {
			/* Strip the resource suffix and append ".dll" to form the parent assembly's file name */
			char *base_name = g_strndup (aname->name, name_len - strlen (MONO_ASSEMBLY_RESOURCE_SUFFIX));
			parent_name = g_strdup_printf ("%s.dll", base_name);
			g_free (base_name);
		}
		if (parent_name) {
			MonoAssemblyOpenRequest req;
			mono_assembly_request_prepare_open (&req, alc);
			MonoAssembly *parent_assembly = mono_assembly_request_open (parent_name, &req, NULL);
			if (parent_assembly)
				parent_alc = mono_assembly_get_alc (parent_assembly);
			g_free (parent_name);
		}
if (parent_alc)
reference = search_bundle_for_assembly (parent_alc, aname);
if (reference) {
mono_trace (G_LOG_LEVEL_DEBUG, MONO_TRACE_ASSEMBLY, "Assembly found in the satellite bundle: '%s'.", aname->name);
goto leave;
}
}
if (is_default || !is_satellite) {
reference = invoke_assembly_preload_hook (mono_alc_get_default (), aname, assemblies_path);
if (reference) {
mono_trace (G_LOG_LEVEL_DEBUG, MONO_TRACE_ASSEMBLY, "Assembly found with the filesystem probing logic: '%s'.", aname->name);
goto leave;
}
}
if (is_satellite) {
reference = mono_alc_invoke_resolve_using_resolve_satellite_nofail (alc, aname);
if (reference) {
mono_trace (G_LOG_LEVEL_DEBUG, MONO_TRACE_ASSEMBLY, "Assembly found with ResolveSatelliteAssembly method: '%s'.", aname->name);
goto leave;
}
}
// For compatibility with CoreCLR, invoke the Resolving event in the default ALC first whenever loading
// a non-satellite assembly into a non-default ALC. See: https://github.com/dotnet/runtime/issues/54814
if (!is_default && !is_satellite) {
reference = mono_alc_invoke_resolve_using_resolving_event_nofail (mono_alc_get_default (), aname);
if (reference) {
mono_trace (G_LOG_LEVEL_DEBUG, MONO_TRACE_ASSEMBLY, "Assembly found with the Resolving event (default ALC): '%s'.", aname->name);
goto leave;
}
}
reference = mono_alc_invoke_resolve_using_resolving_event_nofail (alc, aname);
if (reference) {
mono_trace (G_LOG_LEVEL_DEBUG, MONO_TRACE_ASSEMBLY, "Assembly found with the Resolving event: '%s'.", aname->name);
goto leave;
}
// Looking up corlib resources here can cause an infinite loop
// See: https://github.com/dotnet/coreclr/blob/0a762eb2f3a299489c459da1ddeb69e042008f07/src/vm/appdomain.cpp#L5178-L5239
if (!(strcmp (aname->name, MONO_ASSEMBLY_CORLIB_RESOURCE_NAME) == 0 && is_satellite) && postload) {
reference = mono_assembly_invoke_search_hook_internal (alc, requesting, aname, TRUE);
if (reference) {
mono_trace (G_LOG_LEVEL_DEBUG, MONO_TRACE_ASSEMBLY, "Assembly found with AssemblyResolve event: '%s'.", aname->name);
goto leave;
}
}
leave:
return reference;
}
/**
* mono_assembly_get_assemblyref_checked:
* \param image pointer to the \c MonoImage to extract the information from.
* \param index index to the assembly reference in the image.
* \param aname pointer to a \c MonoAssemblyName that will hold the returned value.
* \param error set on error
*
* Fills out the \p aname with the assembly name of the \p index assembly reference in \p image.
*
* \returns TRUE on success, otherwise sets \p error and returns FALSE
*/
gboolean
mono_assembly_get_assemblyref_checked (MonoImage *image, int index, MonoAssemblyName *aname, MonoError *error)
{
guint32 cols [MONO_ASSEMBLYREF_SIZE];
const char *hash;
if (image_is_dynamic (image)) {
MonoDynamicTable *t = &(((MonoDynamicImage*) image)->tables [MONO_TABLE_ASSEMBLYREF]);
if (!mono_metadata_decode_row_dynamic_checked ((MonoDynamicImage*)image, t, index, cols, MONO_ASSEMBLYREF_SIZE, error))
return FALSE;
}
else {
MonoTableInfo *t = &image->tables [MONO_TABLE_ASSEMBLYREF];
if (!mono_metadata_decode_row_checked (image, t, index, cols, MONO_ASSEMBLYREF_SIZE, error))
return FALSE;
}
// ECMA-335: II.22.5 - AssemblyRef
// HashValue can be null or non-null. If non-null it's an index into the blob heap
// Sometimes ILasm can create an image without a Blob heap.
hash = mono_metadata_blob_heap_checked (image, cols [MONO_ASSEMBLYREF_HASH_VALUE], error);
return_val_if_nok (error, FALSE);
if (hash) {
aname->hash_len = mono_metadata_decode_blob_size (hash, &hash);
aname->hash_value = hash;
} else {
aname->hash_len = 0;
aname->hash_value = NULL;
}
aname->name = mono_metadata_string_heap_checked (image, cols [MONO_ASSEMBLYREF_NAME], error);
return_val_if_nok (error, FALSE);
aname->culture = mono_metadata_string_heap_checked (image, cols [MONO_ASSEMBLYREF_CULTURE], error);
return_val_if_nok (error, FALSE);
aname->flags = cols [MONO_ASSEMBLYREF_FLAGS];
aname->major = cols [MONO_ASSEMBLYREF_MAJOR_VERSION];
aname->minor = cols [MONO_ASSEMBLYREF_MINOR_VERSION];
aname->build = cols [MONO_ASSEMBLYREF_BUILD_NUMBER];
aname->revision = cols [MONO_ASSEMBLYREF_REV_NUMBER];
if (cols [MONO_ASSEMBLYREF_PUBLIC_KEY]) {
gchar *token = assemblyref_public_tok_checked (image, cols [MONO_ASSEMBLYREF_PUBLIC_KEY], aname->flags, error);
return_val_if_nok (error, FALSE);
g_strlcpy ((char*)aname->public_key_token, token, MONO_PUBLIC_KEY_TOKEN_LENGTH);
g_free (token);
} else {
memset (aname->public_key_token, 0, MONO_PUBLIC_KEY_TOKEN_LENGTH);
}
return TRUE;
}
/**
* mono_assembly_load_reference:
*/
void
mono_assembly_load_reference (MonoImage *image, int index)
{
MonoAssembly *reference;
MonoAssemblyName aname;
MonoImageOpenStatus status = MONO_IMAGE_OK;
memset (&aname, 0, sizeof (MonoAssemblyName));
/*
* image->references is shared between threads, so we need to access
* it inside a critical section.
*/
mono_image_lock (image);
if (!image->references) {
MonoTableInfo *t = &image->tables [MONO_TABLE_ASSEMBLYREF];
int n = table_info_get_rows (t);
image->references = g_new0 (MonoAssembly *, n + 1);
image->nreferences = n;
}
reference = image->references [index];
mono_image_unlock (image);
if (reference)
return;
mono_trace (G_LOG_LEVEL_DEBUG, MONO_TRACE_ASSEMBLY, "Requesting loading reference %d (of %d) of %s", index, image->nreferences, image->name);
ERROR_DECL (local_error);
mono_assembly_get_assemblyref_checked (image, index, &aname, local_error);
if (!is_ok (local_error)) {
mono_trace (G_LOG_LEVEL_WARNING, MONO_TRACE_ASSEMBLY, "Decoding assembly reference %d (of %d) of %s failed due to: %s", index, image->nreferences, image->name, mono_error_get_message (local_error));
mono_error_cleanup (local_error);
goto commit_reference;
}
if (image->assembly) {
if (mono_trace_is_traced (G_LOG_LEVEL_INFO, MONO_TRACE_ASSEMBLY)) {
char *aname_str = mono_stringify_assembly_name (&aname);
mono_trace (G_LOG_LEVEL_DEBUG, MONO_TRACE_ASSEMBLY, "Loading reference %d of %s (%s), looking for %s",
index, image->name, mono_alc_is_default (mono_image_get_alc (image)) ? "default ALC" : "custom ALC" ,
aname_str);
g_free (aname_str);
}
MonoAssemblyByNameRequest req;
mono_assembly_request_prepare_byname (&req, mono_image_get_alc (image));
req.requesting_assembly = image->assembly;
//req.no_postload_search = TRUE; // FIXME: should this be set?
reference = mono_assembly_request_byname (&aname, &req, NULL);
} else {
g_assertf (image->assembly, "While loading reference %d MonoImage %s doesn't have a MonoAssembly", index, image->name);
}
if (reference == NULL){
char *extra_msg;
if (status == MONO_IMAGE_ERROR_ERRNO && errno == ENOENT) {
extra_msg = g_strdup_printf ("The assembly was not found in the Global Assembly Cache, a path listed in the MONO_PATH environment variable, or in the location of the executing assembly (%s).\n", image->assembly != NULL ? image->assembly->basedir : "" );
} else if (status == MONO_IMAGE_ERROR_ERRNO) {
extra_msg = g_strdup_printf ("System error: %s\n", strerror (errno));
} else if (status == MONO_IMAGE_MISSING_ASSEMBLYREF) {
extra_msg = g_strdup ("Cannot find an assembly referenced from this one.\n");
} else if (status == MONO_IMAGE_IMAGE_INVALID) {
extra_msg = g_strdup ("The file exists but is not a valid assembly.\n");
} else {
extra_msg = g_strdup ("");
}
mono_trace (G_LOG_LEVEL_WARNING, MONO_TRACE_ASSEMBLY, "The following assembly referenced from %s could not be loaded:\n"
" Assembly: %s (assemblyref_index=%d)\n"
" Version: %d.%d.%d.%d\n"
" Public Key: %s\n%s",
image->name, aname.name, index,
aname.major, aname.minor, aname.build, aname.revision,
strlen ((char*)aname.public_key_token) == 0 ? "(none)" : (char*)aname.public_key_token, extra_msg);
g_free (extra_msg);
}
commit_reference:
mono_image_lock (image);
if (reference == NULL) {
/* Flag as not found */
reference = (MonoAssembly *)REFERENCE_MISSING;
}
if (!image->references [index]) {
if (reference != REFERENCE_MISSING){
mono_assembly_addref (reference);
if (image->assembly)
mono_trace (G_LOG_LEVEL_DEBUG, MONO_TRACE_ASSEMBLY, "Assembly Ref addref %s[%p] -> %s[%p]: %d",
image->assembly->aname.name, image->assembly, reference->aname.name, reference, reference->ref_count);
} else {
if (image->assembly)
mono_trace (G_LOG_LEVEL_INFO, MONO_TRACE_ASSEMBLY, "Failed to load assembly %s[%p].",
image->assembly->aname.name, image->assembly);
}
image->references [index] = reference;
}
mono_image_unlock (image);
if (image->references [index] != reference) {
/* Somebody loaded it before us */
mono_assembly_close (reference);
}
}
/**
* mono_assembly_load_references:
* \param image
* \param status
 * \deprecated There is no reason to use this method anymore; it does nothing
 *
 * This method is now a no-op: it does nothing other than setting \p status to \c MONO_IMAGE_OK.
*/
void
mono_assembly_load_references (MonoImage *image, MonoImageOpenStatus *status)
{
/* This is a no-op now but it is part of the embedding API so we can't remove it */
if (status)
*status = MONO_IMAGE_OK;
}
typedef struct AssemblyLoadHook AssemblyLoadHook;
struct AssemblyLoadHook {
AssemblyLoadHook *next;
union {
MonoAssemblyLoadFunc v1;
MonoAssemblyLoadFuncV2 v2;
} func;
int version;
gpointer user_data;
};
static AssemblyLoadHook *assembly_load_hook = NULL;
void
mono_assembly_invoke_load_hook_internal (MonoAssemblyLoadContext *alc, MonoAssembly *ass)
{
AssemblyLoadHook *hook;
for (hook = assembly_load_hook; hook; hook = hook->next) {
if (hook->version == 1) {
hook->func.v1 (ass, hook->user_data);
} else {
ERROR_DECL (hook_error);
g_assert (hook->version == 2);
hook->func.v2 (alc, ass, hook->user_data, hook_error);
mono_error_assert_ok (hook_error); /* FIXME: proper error handling */
}
}
}
/**
* mono_assembly_invoke_load_hook:
*/
void
mono_assembly_invoke_load_hook (MonoAssembly *ass)
{
mono_assembly_invoke_load_hook_internal (mono_alc_get_default (), ass);
}
static void
mono_install_assembly_load_hook_v1 (MonoAssemblyLoadFunc func, gpointer user_data)
{
AssemblyLoadHook *hook;
g_return_if_fail (func != NULL);
hook = g_new0 (AssemblyLoadHook, 1);
hook->version = 1;
hook->func.v1 = func;
hook->user_data = user_data;
hook->next = assembly_load_hook;
assembly_load_hook = hook;
}
void
mono_install_assembly_load_hook_v2 (MonoAssemblyLoadFuncV2 func, gpointer user_data, gboolean append)
{
g_return_if_fail (func != NULL);
AssemblyLoadHook *hook = g_new0 (AssemblyLoadHook, 1);
hook->version = 2;
hook->func.v2 = func;
hook->user_data = user_data;
if (append && assembly_load_hook != NULL) { // If we don't have any installed hooks, append vs prepend is irrelevant
AssemblyLoadHook *old = assembly_load_hook;
while (old->next != NULL)
old = old->next;
old->next = hook;
} else {
hook->next = assembly_load_hook;
assembly_load_hook = hook;
}
}
/**
* mono_install_assembly_load_hook:
*/
void
mono_install_assembly_load_hook (MonoAssemblyLoadFunc func, gpointer user_data)
{
mono_install_assembly_load_hook_v1 (func, user_data);
}
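/*
 * Illustrative sketch (compiled out): a v1 load hook that logs each assembly
 * as it is loaded. The hook receives every successfully loaded assembly; the
 * example_* names are hypothetical, the signature matches MonoAssemblyLoadFunc.
 */
#if 0
static void
example_on_assembly_load (MonoAssembly *assembly, gpointer user_data)
{
	g_print ("loaded: %s\n", assembly->aname.name);
}

/* during embedder startup:
 *   mono_install_assembly_load_hook (example_on_assembly_load, NULL);
 */
#endif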
typedef struct AssemblySearchHook AssemblySearchHook;
struct AssemblySearchHook {
AssemblySearchHook *next;
union {
MonoAssemblySearchFunc v1;
MonoAssemblySearchFuncV2 v2;
} func;
gboolean postload;
int version;
gpointer user_data;
};
static AssemblySearchHook *assembly_search_hook = NULL;
static MonoAssembly*
mono_assembly_invoke_search_hook_internal (MonoAssemblyLoadContext *alc, MonoAssembly *requesting, MonoAssemblyName *aname, gboolean postload)
{
AssemblySearchHook *hook;
for (hook = assembly_search_hook; hook; hook = hook->next) {
if (hook->postload == postload) {
MonoAssembly *ass;
if (hook->version == 1) {
ass = hook->func.v1 (aname, hook->user_data);
} else {
ERROR_DECL (hook_error);
g_assert (hook->version == 2);
ass = hook->func.v2 (alc, requesting, aname, postload, hook->user_data, hook_error);
mono_error_assert_ok (hook_error); /* FIXME: proper error handling */
}
if (ass)
return ass;
}
}
return NULL;
}
/**
* mono_assembly_invoke_search_hook:
*/
MonoAssembly*
mono_assembly_invoke_search_hook (MonoAssemblyName *aname)
{
return mono_assembly_invoke_search_hook_internal (NULL, NULL, aname, FALSE);
}
static void
mono_install_assembly_search_hook_internal_v1 (MonoAssemblySearchFunc func, gpointer user_data, gboolean postload)
{
AssemblySearchHook *hook;
g_return_if_fail (func != NULL);
hook = g_new0 (AssemblySearchHook, 1);
hook->version = 1;
hook->func.v1 = func;
hook->user_data = user_data;
hook->postload = postload;
hook->next = assembly_search_hook;
assembly_search_hook = hook;
}
void
mono_install_assembly_search_hook_v2 (MonoAssemblySearchFuncV2 func, gpointer user_data, gboolean postload, gboolean append)
{
if (func == NULL)
return;
AssemblySearchHook *hook = g_new0 (AssemblySearchHook, 1);
hook->version = 2;
hook->func.v2 = func;
hook->user_data = user_data;
hook->postload = postload;
if (append && assembly_search_hook != NULL) { // If we don't have any installed hooks, append vs prepend is irrelevant
AssemblySearchHook *old = assembly_search_hook;
while (old->next != NULL)
old = old->next;
old->next = hook;
} else {
hook->next = assembly_search_hook;
assembly_search_hook = hook;
}
}
/**
* mono_install_assembly_search_hook:
*/
void
mono_install_assembly_search_hook (MonoAssemblySearchFunc func, gpointer user_data)
{
mono_install_assembly_search_hook_internal_v1 (func, user_data, FALSE);
}
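/*
 * Illustrative sketch (compiled out): a v1 search hook that resolves a name
 * against a single assembly the embedder keeps around. Returning NULL lets
 * the remaining hooks run. The example_* variable is hypothetical.
 */
#if 0
static MonoAssembly *example_cached_assembly; /* set elsewhere by the embedder */

static MonoAssembly *
example_search_by_name (MonoAssemblyName *aname, gpointer user_data)
{
	if (example_cached_assembly && !strcmp (aname->name, example_cached_assembly->aname.name))
		return example_cached_assembly;
	return NULL; /* not ours: fall through to the next hook */
}
#endif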
/**
* mono_install_assembly_refonly_search_hook:
*/
void
mono_install_assembly_refonly_search_hook (MonoAssemblySearchFunc func, gpointer user_data)
{
	/* Ignore refonly hooks, they will never fire */
}
/**
* mono_install_assembly_postload_search_hook:
*/
void
mono_install_assembly_postload_search_hook (MonoAssemblySearchFunc func, gpointer user_data)
{
mono_install_assembly_search_hook_internal_v1 (func, user_data, TRUE);
}
void
mono_install_assembly_postload_refonly_search_hook (MonoAssemblySearchFunc func, gpointer user_data)
{
	/* Ignore refonly hooks, they will never fire */
}
typedef struct AssemblyPreLoadHook AssemblyPreLoadHook;
struct AssemblyPreLoadHook {
AssemblyPreLoadHook *next;
union {
MonoAssemblyPreLoadFunc v1; // legacy internal use
MonoAssemblyPreLoadFuncV2 v2; // current internal use
MonoAssemblyPreLoadFuncV3 v3; // netcore external use
} func;
gpointer user_data;
gint32 version;
};
static AssemblyPreLoadHook *assembly_preload_hook = NULL;
static MonoAssembly *
invoke_assembly_preload_hook (MonoAssemblyLoadContext *alc, MonoAssemblyName *aname, gchar **apath)
{
AssemblyPreLoadHook *hook;
MonoAssembly *assembly;
for (hook = assembly_preload_hook; hook; hook = hook->next) {
if (hook->version == 1)
assembly = hook->func.v1 (aname, apath, hook->user_data);
else {
ERROR_DECL (error);
g_assert (hook->version == 2 || hook->version == 3);
if (hook->version == 2)
assembly = hook->func.v2 (alc, aname, apath, hook->user_data, error);
else { // v3
/*
* For the default ALC, pass the globally known gchandle (since it's never collectible, it's always a strong handle).
* For other ALCs, make a new strong handle that is passed to the caller.
			 * Early in startup the default ALC exists but its managed object doesn't yet, so the default ALC gchandle points to null.
*/
gboolean needs_free = TRUE;
MonoGCHandle strong_gchandle;
if (mono_alc_is_default (alc)) {
needs_free = FALSE;
strong_gchandle = alc->gchandle;
} else
strong_gchandle = mono_gchandle_from_handle (mono_gchandle_get_target_handle (alc->gchandle), TRUE);
assembly = hook->func.v3 (strong_gchandle, aname, apath, hook->user_data, error);
if (needs_free)
mono_gchandle_free_internal (strong_gchandle);
}
			/* TODO: propagate error out to callers */
mono_error_assert_ok (error);
}
if (assembly != NULL)
return assembly;
}
return NULL;
}
/**
* mono_install_assembly_preload_hook:
*/
void
mono_install_assembly_preload_hook (MonoAssemblyPreLoadFunc func, gpointer user_data)
{
AssemblyPreLoadHook *hook;
g_return_if_fail (func != NULL);
hook = g_new0 (AssemblyPreLoadHook, 1);
hook->version = 1;
hook->func.v1 = func;
hook->user_data = user_data;
hook->next = assembly_preload_hook;
assembly_preload_hook = hook;
}
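/*
 * Illustrative sketch (compiled out): a v1 preload hook that probes one extra
 * directory before the normal search logic runs. The directory and example_*
 * names are hypothetical; returning NULL hands resolution back to the
 * remaining hooks and default probing.
 */
#if 0
static MonoAssembly *
example_preload_from_dir (MonoAssemblyName *aname, char **assemblies_path, gpointer user_data)
{
	const char *dir = (const char *)user_data; /* supplied at install time */
	char *dll = g_strdup_printf ("%s.dll", aname->name);
	char *path = g_build_filename (dir, dll, (const char*)NULL);
	MonoAssembly *result = NULL;
	if (g_file_test (path, G_FILE_TEST_EXISTS))
		result = mono_assembly_open (path, NULL);
	g_free (path);
	g_free (dll);
	return result; /* NULL means "not found here, keep searching" */
}

/* mono_install_assembly_preload_hook (example_preload_from_dir, (gpointer)"/opt/extra-assemblies"); */
#endif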
/**
* mono_install_assembly_refonly_preload_hook:
*/
void
mono_install_assembly_refonly_preload_hook (MonoAssemblyPreLoadFunc func, gpointer user_data)
{
/* Ignore refonly hooks, they never fire */
}
void
mono_install_assembly_preload_hook_v2 (MonoAssemblyPreLoadFuncV2 func, gpointer user_data, gboolean append)
{
AssemblyPreLoadHook *hook;
g_return_if_fail (func != NULL);
AssemblyPreLoadHook **hooks = &assembly_preload_hook;
hook = g_new0 (AssemblyPreLoadHook, 1);
hook->version = 2;
hook->func.v2 = func;
hook->user_data = user_data;
if (append && *hooks != NULL) { // If we don't have any installed hooks, append vs prepend is irrelevant
AssemblyPreLoadHook *old = *hooks;
while (old->next != NULL)
old = old->next;
old->next = hook;
} else {
hook->next = *hooks;
*hooks = hook;
}
}
void
mono_install_assembly_preload_hook_v3 (MonoAssemblyPreLoadFuncV3 func, gpointer user_data, gboolean append)
{
AssemblyPreLoadHook *hook;
g_return_if_fail (func != NULL);
hook = g_new0 (AssemblyPreLoadHook, 1);
hook->version = 3;
hook->func.v3 = func;
hook->user_data = user_data;
if (append && assembly_preload_hook != NULL) {
AssemblyPreLoadHook *old = assembly_preload_hook;
while (old->next != NULL)
old = old->next;
old->next = hook;
} else {
hook->next = assembly_preload_hook;
assembly_preload_hook = hook;
}
}
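/*
 * absolute_dir:
 *
 * Returns a newly allocated directory string for the given filename, ending
 * in a directory separator. Relative paths are resolved against the current
 * working directory, and "." and ".." components are normalized. As a sketch
 * of the intent (assuming the cwd is /tmp): absolute_dir ("a/../b/c.dll")
 * yields "/tmp/b/".
 */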
static gchar *
absolute_dir (const gchar *filename)
{
gchar *cwd;
gchar *mixed;
gchar **parts;
gchar *part;
GList *list, *tmp;
GString *result;
gchar *res;
gint i;
if (g_path_is_absolute (filename)) {
part = g_path_get_dirname (filename);
res = g_strconcat (part, G_DIR_SEPARATOR_S, (const char*)NULL);
g_free (part);
return res;
}
cwd = g_get_current_dir ();
mixed = g_build_filename (cwd, filename, (const char*)NULL);
parts = g_strsplit (mixed, G_DIR_SEPARATOR_S, 0);
g_free (mixed);
g_free (cwd);
list = NULL;
for (i = 0; (part = parts [i]) != NULL; i++) {
if (!strcmp (part, "."))
continue;
if (!strcmp (part, "..")) {
if (list && list->next) /* Don't remove root */
list = g_list_delete_link (list, list);
} else {
list = g_list_prepend (list, part);
}
}
result = g_string_new ("");
list = g_list_reverse (list);
	/* Skip the last element, which should be the filename */
for (tmp = list; tmp && tmp->next != NULL; tmp = tmp->next){
if (tmp->data)
g_string_append_printf (result, "%s%c", (char *) tmp->data,
G_DIR_SEPARATOR);
}
res = result->str;
g_string_free (result, FALSE);
g_list_free (list);
g_strfreev (parts);
if (*res == '\0') {
g_free (res);
return g_strdup (".");
}
return res;
}
static MonoImage *
open_from_bundle_internal (MonoAssemblyLoadContext *alc, const char *filename, MonoImageOpenStatus *status, gboolean is_satellite)
{
if (!bundles)
return NULL;
MonoImage *image = NULL;
char *name = is_satellite ? g_strdup (filename) : g_path_get_basename (filename);
for (int i = 0; !image && bundles [i]; ++i) {
if (strcmp (bundles [i]->name, name) == 0) {
// Since bundled images don't exist on disk, don't give them a legit filename
image = mono_image_open_from_data_internal (alc, (char*)bundles [i]->data, bundles [i]->size, FALSE, status, FALSE, name, NULL);
break;
}
}
g_free (name);
return image;
}
static MonoImage *
open_from_satellite_bundle (MonoAssemblyLoadContext *alc, const char *filename, MonoImageOpenStatus *status, const char *culture)
{
if (!satellite_bundles)
return NULL;
MonoImage *image = NULL;
char *name = g_strdup (filename);
for (int i = 0; !image && satellite_bundles [i]; ++i) {
if (strcmp (satellite_bundles [i]->name, name) == 0 && strcmp (satellite_bundles [i]->culture, culture) == 0) {
char *bundle_name = g_strconcat (culture, "/", name, (const char *)NULL);
image = mono_image_open_from_data_internal (alc, (char *)satellite_bundles [i]->data, satellite_bundles [i]->size, FALSE, status, FALSE, bundle_name, NULL);
g_free (bundle_name);
break;
}
}
g_free (name);
return image;
}
/**
* mono_assembly_open_from_bundle:
* \param filename Filename requested
* \param status return status code
*
* This routine tries to open the assembly specified by \p filename from the
* defined bundles, if found, returns the MonoImage for it, if not found
* returns NULL
*/
MonoImage *
mono_assembly_open_from_bundle (MonoAssemblyLoadContext *alc, const char *filename, MonoImageOpenStatus *status, const char *culture)
{
/*
	 * We do a very simple search for bundled assemblies: it's not a
	 * general-purpose assembly loading mechanism.
*/
MonoImage *image = NULL;
gboolean is_satellite = culture && culture [0] != 0;
if (is_satellite)
image = open_from_satellite_bundle (alc, filename, status, culture);
else
image = open_from_bundle_internal (alc, filename, status, FALSE);
if (image) {
mono_image_addref (image);
mono_trace (G_LOG_LEVEL_DEBUG, MONO_TRACE_ASSEMBLY, "Assembly Loader loaded assembly from bundle: '%s'.", filename);
}
return image;
}
/**
* mono_assembly_open_full:
* \param filename the file to load
* \param status return status code
* \param refonly Whether this assembly is being opened in "reflection-only" mode.
*
* This loads an assembly from the specified \p filename. The \p filename allows
* a local URL (starting with a \c file:// prefix). If a file prefix is used, the
* filename is interpreted as a URL, and the filename is URL-decoded. Otherwise the file
* is treated as a local path.
*
* First, an attempt is made to load the assembly from the bundled executable (for those
* deployments that have been done with the \c mkbundle tool or for scenarios where the
* assembly has been registered as an embedded assembly). If this is not the case, then
* the assembly is loaded from disk using `api:mono_image_open_full`.
*
* If \p refonly is set to true, then the assembly is loaded purely for inspection with
* the \c System.Reflection API.
*
* \returns NULL on error, with the \p status set to an error code, or a pointer
* to the assembly.
*/
MonoAssembly *
mono_assembly_open_full (const char *filename, MonoImageOpenStatus *status, gboolean refonly)
{
if (refonly) {
if (status)
*status = MONO_IMAGE_IMAGE_INVALID;
return NULL;
}
MonoAssembly *res;
MONO_ENTER_GC_UNSAFE;
MonoAssemblyOpenRequest req;
mono_assembly_request_prepare_open (&req, mono_alc_get_default ());
res = mono_assembly_request_open (filename, &req, status);
MONO_EXIT_GC_UNSAFE;
return res;
}
MonoAssembly *
mono_assembly_request_open (const char *filename, const MonoAssemblyOpenRequest *open_req,
MonoImageOpenStatus *status)
{
MonoImage *image;
MonoAssembly *ass;
MonoImageOpenStatus def_status;
gchar *fname;
gboolean loaded_from_bundle;
MonoAssemblyLoadRequest load_req;
/* we will be overwriting the load request's asmctx.*/
memcpy (&load_req, &open_req->request, sizeof (load_req));
g_return_val_if_fail (filename != NULL, NULL);
if (!status)
status = &def_status;
*status = MONO_IMAGE_OK;
fname = g_strdup (filename);
mono_trace (G_LOG_LEVEL_DEBUG, MONO_TRACE_ASSEMBLY,
"Assembly Loader probing location: '%s'.", fname);
image = NULL;
// If VM built with mkbundle
loaded_from_bundle = FALSE;
if (bundles != NULL || satellite_bundles != NULL) {
/* We don't know the culture of the filename we're loading here, so this call is not culture aware. */
image = mono_assembly_open_from_bundle (load_req.alc, fname, status, NULL);
loaded_from_bundle = image != NULL;
}
if (!image)
image = mono_image_open_a_lot (load_req.alc, fname, status);
if (!image){
if (*status == MONO_IMAGE_OK)
*status = MONO_IMAGE_ERROR_ERRNO;
g_free (fname);
return NULL;
}
if (image->assembly) {
/* We want to return the MonoAssembly that's already loaded,
* but if we're using the strict assembly loader, we also need
* to check that the previously loaded assembly matches the
* predicate. It could be that we previously loaded a
* different version that happens to have the filename that
* we're currently probing. */
if (mono_loader_get_strict_assembly_name_check () &&
load_req.predicate && !load_req.predicate (image->assembly, load_req.predicate_ud)) {
mono_image_close (image);
g_free (fname);
return NULL;
} else {
/* Already loaded by another appdomain */
mono_assembly_invoke_load_hook_internal (load_req.alc, image->assembly);
mono_image_close (image);
g_free (fname);
return image->assembly;
}
}
ass = mono_assembly_request_load_from (image, fname, &load_req, status);
if (ass) {
if (!loaded_from_bundle)
mono_trace (G_LOG_LEVEL_DEBUG, MONO_TRACE_ASSEMBLY,
"Assembly Loader loaded assembly from location: '%s'.", filename);
}
/* Clear the reference added by mono_image_open */
mono_image_close (image);
g_free (fname);
return ass;
}
static void
free_assembly_name_item (gpointer val, gpointer user_data)
{
mono_assembly_name_free_internal ((MonoAssemblyName *)val);
g_free (val);
}
/**
* mono_assembly_load_friends:
* \param ass an assembly
*
* Load the list of friend assemblies that are allowed to access
* the assembly's internal types and members. They are stored as assembly
* names in custom attributes.
*
 * This is an internal method; we need it because when we load mscorlib
 * the InternalsVisibleTo custom attributes are not loaded yet,
 * so we load the friend lists after we initialize the runtime.
*
* LOCKING: Acquires the assemblies lock plus the loader lock.
*/
void
mono_assembly_load_friends (MonoAssembly* ass)
{
ERROR_DECL (error);
int i;
MonoCustomAttrInfo* attrs;
if (ass->friend_assembly_names_inited)
return;
attrs = mono_custom_attrs_from_assembly_checked (ass, FALSE, error);
mono_error_assert_ok (error);
if (!attrs) {
mono_assemblies_lock ();
ass->friend_assembly_names_inited = TRUE;
mono_assemblies_unlock ();
return;
}
mono_assemblies_lock ();
if (ass->friend_assembly_names_inited) {
mono_assemblies_unlock ();
return;
}
mono_assemblies_unlock ();
GSList *visible_list = NULL;
GSList *ignores_list = NULL;
/*
	 * We build the lists outside the assemblies lock; the worst that can happen
	 * is that we'll need to free the allocated lists.
*/
for (i = 0; i < attrs->num_attrs; ++i) {
MonoCustomAttrEntry *attr = &attrs->attrs [i];
MonoAssemblyName *aname;
const gchar *data;
uint32_t data_length;
gchar *data_with_terminator;
/* Do some sanity checking */
if (!attr->ctor)
continue;
gboolean has_visible = FALSE;
gboolean has_ignores = FALSE;
has_visible = attr->ctor->klass == mono_class_try_get_internals_visible_class ();
/* IgnoresAccessChecksToAttribute is dynamically generated, so it's not necessarily in CoreLib */
/* FIXME: should we only check for it in dynamic modules? */
has_ignores = (!strcmp ("IgnoresAccessChecksToAttribute", m_class_get_name (attr->ctor->klass)) &&
!strcmp ("System.Runtime.CompilerServices", m_class_get_name_space (attr->ctor->klass)));
if (!has_visible && !has_ignores)
continue;
if (attr->data_size < 4)
continue;
data = (const char*)attr->data;
/* 0xFF means null string, see custom attr format */
if (data [0] != 1 || data [1] != 0 || (data [2] & 0xFF) == 0xFF)
continue;
data_length = mono_metadata_decode_value (data + 2, &data);
data_with_terminator = (char *)g_memdup (data, data_length + 1);
data_with_terminator[data_length] = 0;
aname = g_new0 (MonoAssemblyName, 1);
/*g_print ("friend ass: %s\n", data);*/
if (mono_assembly_name_parse_full (data_with_terminator, aname, TRUE, NULL, NULL)) {
if (has_visible)
visible_list = g_slist_prepend (visible_list, aname);
if (has_ignores)
ignores_list = g_slist_prepend (ignores_list, aname);
} else {
g_free (aname);
}
g_free (data_with_terminator);
}
mono_custom_attrs_free (attrs);
mono_assemblies_lock ();
if (ass->friend_assembly_names_inited) {
mono_assemblies_unlock ();
g_slist_foreach (visible_list, free_assembly_name_item, NULL);
g_slist_free (visible_list);
g_slist_foreach (ignores_list, free_assembly_name_item, NULL);
g_slist_free (ignores_list);
return;
}
ass->friend_assembly_names = visible_list;
ass->ignores_checks_assembly_names = ignores_list;
	/* Because of the double-checked locking pattern above */
mono_memory_barrier ();
ass->friend_assembly_names_inited = TRUE;
mono_assemblies_unlock ();
}
struct HasReferenceAssemblyAttributeIterData {
gboolean has_attr;
};
static gboolean
has_reference_assembly_attribute_iterator (MonoImage *image, guint32 typeref_scope_token, const char *nspace, const char *name, guint32 method_token, gpointer user_data)
{
gboolean stop_scanning = FALSE;
struct HasReferenceAssemblyAttributeIterData *iter_data = (struct HasReferenceAssemblyAttributeIterData*)user_data;
if (!strcmp (name, "ReferenceAssemblyAttribute") && !strcmp (nspace, "System.Runtime.CompilerServices")) {
/* Note we don't check the assembly name, same as coreCLR. */
iter_data->has_attr = TRUE;
stop_scanning = TRUE;
}
return stop_scanning;
}
/**
* mono_assembly_has_reference_assembly_attribute:
* \param assembly a MonoAssembly
* \param error set on error.
*
* \returns TRUE if \p assembly has the \c System.Runtime.CompilerServices.ReferenceAssemblyAttribute set.
* On error returns FALSE and sets \p error.
*/
gboolean
mono_assembly_has_reference_assembly_attribute (MonoAssembly *assembly, MonoError *error)
{
g_assert (assembly && assembly->image);
/* .NET Framework appears to ignore the attribute on dynamic
* assemblies, so don't call this function for dynamic assemblies. */
g_assert (!image_is_dynamic (assembly->image));
error_init (error);
/*
* This might be called during assembly loading, so do everything using the low-level
* metadata APIs.
*/
struct HasReferenceAssemblyAttributeIterData iter_data = { FALSE };
mono_assembly_metadata_foreach_custom_attr (assembly, &has_reference_assembly_attribute_iterator, &iter_data);
return iter_data.has_attr;
}
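/*
 * Illustrative sketch (compiled out): the MonoError convention used with the
 * checked API above. ERROR_DECL declares and initializes the error; a failed
 * call is reported and cleaned up. The example_* name is hypothetical.
 */
#if 0
static gboolean
example_is_reference_assembly (MonoAssembly *assembly)
{
	ERROR_DECL (error);
	gboolean has_attr = mono_assembly_has_reference_assembly_attribute (assembly, error);
	if (!is_ok (error)) {
		g_print ("check failed: %s\n", mono_error_get_message (error));
		mono_error_cleanup (error);
		return FALSE;
	}
	return has_attr;
}
#endif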
/**
* mono_assembly_open:
* \param filename Opens the assembly pointed out by this name
* \param status return status code
*
* This loads an assembly from the specified \p filename. The \p filename allows
* a local URL (starting with a \c file:// prefix). If a file prefix is used, the
* filename is interpreted as a URL, and the filename is URL-decoded. Otherwise the file
* is treated as a local path.
*
* First, an attempt is made to load the assembly from the bundled executable (for those
* deployments that have been done with the \c mkbundle tool or for scenarios where the
* assembly has been registered as an embedded assembly). If this is not the case, then
* the assembly is loaded from disk using `api:mono_image_open_full`.
*
* \returns a pointer to the \c MonoAssembly if \p filename contains a valid
* assembly or NULL on error. Details about the error are stored in the
* \p status variable.
*/
MonoAssembly *
mono_assembly_open (const char *filename, MonoImageOpenStatus *status)
{
MonoAssembly *res;
MONO_ENTER_GC_UNSAFE;
MonoAssemblyOpenRequest req;
mono_assembly_request_prepare_open (&req, mono_alc_get_default ());
res = mono_assembly_request_open (filename, &req, status);
MONO_EXIT_GC_UNSAFE;
return res;
}
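/*
 * Illustrative sketch (compiled out): opening an assembly by path and
 * inspecting the status code on failure. The path is hypothetical; only the
 * public calls documented above are used.
 */
#if 0
static MonoAssembly *
example_open_or_report (void)
{
	MonoImageOpenStatus status = MONO_IMAGE_OK;
	MonoAssembly *assembly = mono_assembly_open ("/opt/app/App.dll", &status);
	if (!assembly)
		g_print ("open failed, status=%d\n", (int)status);
	return assembly;
}
#endif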
/**
* mono_assembly_load_from_full:
* \param image Image to load the assembly from
* \param fname assembly name to associate with the assembly
* \param status returns the status condition
* \param refonly Whether this assembly is being opened in "reflection-only" mode.
*
* If the provided \p image has an assembly reference, it will process the given
* image as an assembly with the given name.
*
* Most likely you want to use the `api:mono_assembly_load_full` method instead.
*
 * \returns A valid pointer to a \c MonoAssembly* on success, with \p status
 * set to \c MONO_IMAGE_OK; or NULL on error.
 *
 * If there is an error loading the assembly, \p status indicates the
 * reason, with \p status set to \c MONO_IMAGE_IMAGE_INVALID if the
 * image did not contain an assembly reference table.
*/
MonoAssembly *
mono_assembly_load_from_full (MonoImage *image, const char*fname,
MonoImageOpenStatus *status, gboolean refonly)
{
if (refonly) {
if (status)
*status = MONO_IMAGE_IMAGE_INVALID;
return NULL;
}
MonoAssembly *res;
MONO_ENTER_GC_UNSAFE;
MonoAssemblyLoadRequest req;
MonoImageOpenStatus def_status;
if (!status)
status = &def_status;
mono_assembly_request_prepare_load (&req, mono_alc_get_default ());
res = mono_assembly_request_load_from (image, fname, &req, status);
MONO_EXIT_GC_UNSAFE;
return res;
}
MonoAssembly *
mono_assembly_request_load_from (MonoImage *image, const char *fname,
const MonoAssemblyLoadRequest *req,
MonoImageOpenStatus *status)
{
MonoAssemblyCandidatePredicate predicate;
gpointer user_data;
MonoAssembly *ass, *ass2;
char *base_dir;
g_assert (status != NULL);
predicate = req->predicate;
user_data = req->predicate_ud;
if (!table_info_get_rows (&image->tables [MONO_TABLE_ASSEMBLY])) {
/* 'image' doesn't have a manifest -- maybe someone is trying to Assembly.Load a .netmodule */
*status = MONO_IMAGE_IMAGE_INVALID;
return NULL;
}
#if defined (HOST_WIN32)
{
gchar *tmp_fn;
int i;
tmp_fn = g_strdup (fname);
for (i = strlen (tmp_fn) - 1; i >= 0; i--) {
if (tmp_fn [i] == '/')
tmp_fn [i] = '\\';
}
base_dir = absolute_dir (tmp_fn);
g_free (tmp_fn);
}
#else
base_dir = absolute_dir (fname);
#endif
/*
* Create assembly struct, and enter it into the assembly cache
*/
ass = g_new0 (MonoAssembly, 1);
ass->basedir = base_dir;
ass->context.no_managed_load_event = req->no_managed_load_event;
ass->image = image;
MONO_PROFILER_RAISE (assembly_loading, (ass));
mono_assembly_fill_assembly_name (image, &ass->aname);
if (mono_defaults.corlib && strcmp (ass->aname.name, MONO_ASSEMBLY_CORLIB_NAME) == 0) {
// MS.NET doesn't support loading other mscorlibs
g_free (ass);
g_free (base_dir);
mono_image_addref (mono_defaults.corlib);
*status = MONO_IMAGE_OK;
return mono_defaults.corlib->assembly;
}
/* Add a non-temporary reference because of ass->image */
mono_image_addref (image);
mono_trace (G_LOG_LEVEL_DEBUG, MONO_TRACE_ASSEMBLY, "Image addref %s[%p] (%s) -> %s[%p]: %d", ass->aname.name, ass, mono_alc_is_default (mono_image_get_alc (image)) ? "default ALC" : "custom ALC", image->name, image, image->ref_count);
/*
* The load hooks might take locks so we can't call them while holding the
* assemblies lock.
*/
if (ass->aname.name && !req->no_invoke_search_hook) {
/* FIXME: I think individual context should probably also look for an existing MonoAssembly here, we just need to pass the asmctx to the search hook so that it does a filename match (I guess?) */
ass2 = mono_assembly_invoke_search_hook_internal (req->alc, NULL, &ass->aname, FALSE);
if (ass2) {
mono_trace (G_LOG_LEVEL_DEBUG, MONO_TRACE_ASSEMBLY, "Image %s[%p] reusing existing assembly %s[%p]", ass->aname.name, ass, ass2->aname.name, ass2);
g_free (ass);
g_free (base_dir);
mono_image_close (image);
*status = MONO_IMAGE_OK;
return ass2;
}
}
/* We need to check for ReferenceAssemblyAttribute before we
* mark the assembly as loaded and before we fire the load
* hook. Otherwise mono_domain_fire_assembly_load () in
* appdomain.c will cache a mapping from the assembly name to
* this image and we won't be able to look for a different
* candidate. */
{
ERROR_DECL (refasm_error);
if (mono_assembly_has_reference_assembly_attribute (ass, refasm_error)) {
mono_trace (G_LOG_LEVEL_DEBUG, MONO_TRACE_ASSEMBLY, "Image for assembly '%s' (%s) has ReferenceAssemblyAttribute, skipping", ass->aname.name, image->name);
g_free (ass);
g_free (base_dir);
mono_image_close (image);
*status = MONO_IMAGE_IMAGE_INVALID;
return NULL;
}
mono_error_cleanup (refasm_error);
}
if (predicate && !predicate (ass, user_data)) {
mono_trace (G_LOG_LEVEL_DEBUG, MONO_TRACE_ASSEMBLY, "Predicate returned FALSE, skipping '%s' (%s)\n", ass->aname.name, image->name);
g_free (ass);
g_free (base_dir);
mono_image_close (image);
*status = MONO_IMAGE_IMAGE_INVALID;
return NULL;
}
mono_assemblies_lock ();
/* If an assembly is loaded into an individual context, always return a
* new MonoAssembly, even if another assembly with the same name has
* already been loaded.
*/
if (image->assembly && !req->no_invoke_search_hook) {
/*
* This means another thread has already loaded the assembly, but not yet
* called the load hooks so the search hook can't find the assembly.
*/
mono_assemblies_unlock ();
ass2 = image->assembly;
g_free (ass);
g_free (base_dir);
mono_image_close (image);
*status = MONO_IMAGE_OK;
return ass2;
}
mono_trace (G_LOG_LEVEL_DEBUG, MONO_TRACE_ASSEMBLY, "Prepared to set up assembly '%s' (%s)", ass->aname.name, image->name);
/* If asmctx is INDIVIDUAL, image->assembly might not be NULL, so don't
* overwrite it. */
if (image->assembly == NULL)
image->assembly = ass;
loaded_assemblies = g_list_prepend (loaded_assemblies, ass);
loaded_assembly_count++;
mono_assemblies_unlock ();
#ifdef HOST_WIN32
if (m_image_is_module_handle (image))
mono_image_fixup_vtable (image);
#endif
mono_assembly_invoke_load_hook_internal (req->alc, ass);
MONO_PROFILER_RAISE (assembly_loaded, (ass));
return ass;
}
/**
* mono_assembly_load_from:
* \param image Image to load the assembly from
* \param fname assembly name to associate with the assembly
* \param status return status code
*
* If the provided \p image has an assembly reference, it will process the given
* image as an assembly with the given name.
*
* Most likely you want to use the `api:mono_assembly_load_full` method instead.
*
* This is equivalent to calling `api:mono_assembly_load_from_full` with the
* \p refonly parameter set to FALSE.
 * \returns A valid pointer to a \c MonoAssembly* on success, with \p status
 * set to \c MONO_IMAGE_OK; or NULL on error.
 *
 * If there is an error loading the assembly, \p status indicates the
 * reason, with \p status set to \c MONO_IMAGE_IMAGE_INVALID if the
 * image did not contain an assembly reference table.
*/
MonoAssembly *
mono_assembly_load_from (MonoImage *image, const char *fname,
MonoImageOpenStatus *status)
{
MonoAssembly *res;
MONO_ENTER_GC_UNSAFE;
MonoAssemblyLoadRequest req;
MonoImageOpenStatus def_status;
if (!status)
status = &def_status;
mono_assembly_request_prepare_load (&req, mono_alc_get_default ());
res = mono_assembly_request_load_from (image, fname, &req, status);
MONO_EXIT_GC_UNSAFE;
return res;
}
/**
* mono_assembly_name_free_internal:
* \param aname assembly name to free
*
* Frees the provided assembly name object.
 * (It does not free the object itself, only the name members.)
*/
void
mono_assembly_name_free_internal (MonoAssemblyName *aname)
{
MONO_REQ_GC_UNSAFE_MODE;
if (aname == NULL)
return;
g_free ((void *) aname->name);
g_free ((void *) aname->culture);
g_free ((void *) aname->hash_value);
g_free ((guint8*) aname->public_key);
}
static gboolean
parse_public_key (const gchar *key, gchar** pubkey, gboolean *is_ecma)
{
const gchar *pkey;
gchar header [16], val, *arr, *endp;
gint i, j, offset, bitlen, keylen, pkeylen;
	// both pubkey and is_ecma are required arguments
g_assert (pubkey && is_ecma);
keylen = strlen (key) >> 1;
if (keylen < 1)
return FALSE;
/* allow the ECMA standard key */
if (strcmp (key, "00000000000000000400000000000000") == 0) {
*pubkey = NULL;
*is_ecma = TRUE;
return TRUE;
}
*is_ecma = FALSE;
val = g_ascii_xdigit_value (key [0]) << 4;
val |= g_ascii_xdigit_value (key [1]);
switch (val) {
case 0x00:
if (keylen < 13)
return FALSE;
val = g_ascii_xdigit_value (key [24]);
val |= g_ascii_xdigit_value (key [25]);
if (val != 0x06)
return FALSE;
pkey = key + 24;
break;
case 0x06:
pkey = key;
break;
default:
return FALSE;
}
/* We need the first 16 bytes
* to check whether this key is valid or not */
pkeylen = strlen (pkey) >> 1;
if (pkeylen < 16)
return FALSE;
for (i = 0, j = 0; i < 16; i++) {
header [i] = g_ascii_xdigit_value (pkey [j++]) << 4;
header [i] |= g_ascii_xdigit_value (pkey [j++]);
}
if (header [0] != 0x06 || /* PUBLICKEYBLOB (0x06) */
header [1] != 0x02 || /* Version (0x02) */
header [2] != 0x00 || /* Reserved (word) */
header [3] != 0x00 ||
(guint)(read32 (header + 8)) != 0x31415352) /* DWORD magic = RSA1 */
return FALSE;
	/* Based on this bit length, we _should_ be able to tell whether the key length is right */
bitlen = read32 (header + 12) >> 3;
if ((bitlen + 16 + 4) != pkeylen)
return FALSE;
arr = (gchar *)g_malloc (keylen + 4);
/* Encode the size of the blob */
mono_metadata_encode_value (keylen, &arr[0], &endp);
offset = (gint)(endp-arr);
for (i = offset, j = 0; i < keylen + offset; i++) {
arr [i] = g_ascii_xdigit_value (key [j++]) << 4;
arr [i] |= g_ascii_xdigit_value (key [j++]);
}
*pubkey = arr;
return TRUE;
}
static gboolean
build_assembly_name (const char *name, const char *version, const char *culture, const char *token, const char *key, guint32 flags, guint32 arch, MonoAssemblyName *aname, gboolean save_public_key)
{
gint len;
gint version_parts;
gchar *pkeyptr, *encoded, tok [8];
memset (aname, 0, sizeof (MonoAssemblyName));
if (version) {
int parts [4];
int i;
int part_len;
parts [2] = -1;
parts [3] = -1;
const char *s = version;
version_parts = 0;
for (i = 0; i < 4; ++i) {
int n = sscanf (s, "%u%n", &parts [i], &part_len);
if (n != 1)
return FALSE;
if (parts [i] < 0 || parts [i] > 65535)
return FALSE;
if (i < 2 && parts [i] == 65535)
return FALSE;
version_parts ++;
s += part_len;
if (s [0] == '\0')
break;
if (i < 3) {
if (s [0] != '.')
return FALSE;
s ++;
}
}
if (s [0] != '\0')
return FALSE;
if (version_parts < 2 || version_parts > 4)
return FALSE;
aname->major = parts [0];
aname->minor = parts [1];
if (version_parts >= 3)
aname->build = parts [2];
else
aname->build = -1;
if (version_parts == 4)
aname->revision = parts [3];
else
aname->revision = -1;
}
aname->flags = flags;
aname->arch = arch;
aname->name = g_strdup (name);
if (culture) {
if (g_ascii_strcasecmp (culture, "neutral") == 0)
aname->culture = g_strdup ("");
else
aname->culture = g_strdup (culture);
}
if (token && strncmp (token, "null", 4) != 0) {
char *lower;
/* the constant includes the ending NULL, hence the -1 */
if (strlen (token) != (MONO_PUBLIC_KEY_TOKEN_LENGTH - 1)) {
mono_assembly_name_free_internal (aname);
return FALSE;
}
lower = g_ascii_strdown (token, MONO_PUBLIC_KEY_TOKEN_LENGTH);
g_strlcpy ((char*)aname->public_key_token, lower, MONO_PUBLIC_KEY_TOKEN_LENGTH);
g_free (lower);
}
if (key) {
gboolean is_ecma = FALSE;
gchar *pkey = NULL;
if (strcmp (key, "null") == 0 || !parse_public_key (key, &pkey, &is_ecma)) {
mono_assembly_name_free_internal (aname);
return FALSE;
}
if (is_ecma) {
g_assert (pkey == NULL);
aname->public_key = NULL;
g_strlcpy ((gchar*)aname->public_key_token, "b77a5c561934e089", MONO_PUBLIC_KEY_TOKEN_LENGTH);
return TRUE;
}
len = mono_metadata_decode_blob_size ((const gchar *) pkey, (const gchar **) &pkeyptr);
// We also need to generate the key token
mono_digest_get_public_token ((guchar*) tok, (guint8*) pkeyptr, len);
encoded = encode_public_tok ((guchar*) tok, 8);
g_strlcpy ((gchar*)aname->public_key_token, encoded, MONO_PUBLIC_KEY_TOKEN_LENGTH);
g_free (encoded);
if (save_public_key)
aname->public_key = (guint8*) pkey;
else
g_free (pkey);
}
return TRUE;
}
static gboolean
split_key_value (const gchar *pair, gchar **key, guint32 *keylen, gchar **value)
{
char *eqsign = (char*)strchr (pair, '=');
if (!eqsign) {
*key = NULL;
*keylen = 0;
*value = NULL;
return FALSE;
}
*key = (gchar*)pair;
*keylen = eqsign - *key;
while (*keylen > 0 && g_ascii_isspace ((*key) [*keylen - 1]))
(*keylen)--;
*value = g_strstrip (eqsign + 1);
return TRUE;
}
gboolean
mono_assembly_name_parse_full (const char *name, MonoAssemblyName *aname, gboolean save_public_key, gboolean *is_version_defined, gboolean *is_token_defined)
{
gchar *dllname;
gchar *dllname_uq;
gchar *version = NULL;
gchar *version_uq;
gchar *culture = NULL;
gchar *culture_uq;
gchar *token = NULL;
gchar *token_uq;
gchar *key = NULL;
gchar *key_uq;
gchar *retargetable = NULL;
gchar *retargetable_uq;
gchar *procarch = NULL;
gchar *procarch_uq;
gboolean res;
gchar *value, *part_name;
guint32 part_name_len;
gchar **parts;
gchar **tmp;
gboolean version_defined;
gboolean token_defined;
guint32 flags = 0;
guint32 arch = MONO_PROCESSOR_ARCHITECTURE_NONE;
if (!is_version_defined)
is_version_defined = &version_defined;
*is_version_defined = FALSE;
if (!is_token_defined)
is_token_defined = &token_defined;
*is_token_defined = FALSE;
parts = tmp = g_strsplit (name, ",", 6);
if (!tmp || !*tmp) {
goto cleanup_and_fail;
}
dllname = g_strstrip (*tmp);
// Simple name cannot be empty
if (!*dllname) {
goto cleanup_and_fail;
}
// Characters /, :, and \ not allowed in simple names
while (*dllname) {
gchar tmp_char = *dllname;
if (tmp_char == '/' || tmp_char == ':' || tmp_char == '\\')
goto cleanup_and_fail;
dllname++;
}
dllname = *tmp;
tmp++;
while (*tmp) {
if (!split_key_value (g_strstrip (*tmp), &part_name, &part_name_len, &value))
goto cleanup_and_fail;
if (part_name_len == 7 && !g_ascii_strncasecmp (part_name, "Version", part_name_len)) {
*is_version_defined = TRUE;
if (version != NULL || strlen (value) == 0) {
goto cleanup_and_fail;
}
version = value;
tmp++;
continue;
}
if (part_name_len == 7 && !g_ascii_strncasecmp (part_name, "Culture", part_name_len)) {
if (culture != NULL || strlen (value) == 0) {
goto cleanup_and_fail;
}
culture = value;
tmp++;
continue;
}
if (part_name_len == 14 && !g_ascii_strncasecmp (part_name, "PublicKeyToken", part_name_len)) {
*is_token_defined = TRUE;
if (token != NULL || key != NULL || strlen (value) == 0) {
goto cleanup_and_fail;
}
token = value;
tmp++;
continue;
}
if (part_name_len == 9 && !g_ascii_strncasecmp (part_name, "PublicKey", part_name_len)) {
if (token != NULL || key != NULL || strlen (value) == 0) {
goto cleanup_and_fail;
}
key = value;
tmp++;
continue;
}
if (part_name_len == 12 && !g_ascii_strncasecmp (part_name, "Retargetable", part_name_len)) {
if (retargetable != NULL) {
goto cleanup_and_fail;
}
retargetable = value;
retargetable_uq = unquote (retargetable);
if (retargetable_uq != NULL)
retargetable = retargetable_uq;
if (!g_ascii_strcasecmp (retargetable, "yes")) {
flags |= ASSEMBLYREF_RETARGETABLE_FLAG;
} else if (g_ascii_strcasecmp (retargetable, "no")) {
g_free (retargetable_uq);
goto cleanup_and_fail;
}
g_free (retargetable_uq);
tmp++;
continue;
}
if (part_name_len == 21 && !g_ascii_strncasecmp (part_name, "ProcessorArchitecture", part_name_len)) {
if (procarch != NULL) {
goto cleanup_and_fail;
}
procarch = value;
procarch_uq = unquote (procarch);
if (procarch_uq != NULL)
procarch = procarch_uq;
if (!g_ascii_strcasecmp (procarch, "MSIL"))
arch = MONO_PROCESSOR_ARCHITECTURE_MSIL;
else if (!g_ascii_strcasecmp (procarch, "X86"))
arch = MONO_PROCESSOR_ARCHITECTURE_X86;
else if (!g_ascii_strcasecmp (procarch, "IA64"))
arch = MONO_PROCESSOR_ARCHITECTURE_IA64;
else if (!g_ascii_strcasecmp (procarch, "AMD64"))
arch = MONO_PROCESSOR_ARCHITECTURE_AMD64;
else if (!g_ascii_strcasecmp (procarch, "ARM"))
arch = MONO_PROCESSOR_ARCHITECTURE_ARM;
else {
g_free (procarch_uq);
goto cleanup_and_fail;
}
flags |= arch << 4;
g_free (procarch_uq);
tmp++;
continue;
}
// compat: If we got here, the attribute name is unknown to us. Ignore it.
tmp++;
}
/* if retargetable flag is set, then we must have a fully qualified name */
if (retargetable != NULL && (version == NULL || culture == NULL || (key == NULL && token == NULL))) {
goto cleanup_and_fail;
}
dllname_uq = unquote (dllname);
version_uq = unquote (version);
culture_uq = unquote (culture);
token_uq = unquote (token);
key_uq = unquote (key);
res = build_assembly_name (
dllname_uq == NULL ? dllname : dllname_uq,
version_uq == NULL ? version : version_uq,
culture_uq == NULL ? culture : culture_uq,
token_uq == NULL ? token : token_uq,
key_uq == NULL ? key : key_uq,
flags, arch, aname, save_public_key);
g_free (dllname_uq);
g_free (version_uq);
g_free (culture_uq);
g_free (token_uq);
g_free (key_uq);
g_strfreev (parts);
return res;
cleanup_and_fail:
g_strfreev (parts);
return FALSE;
}
static char*
unquote (const char *str)
{
gint slen;
const char *end;
if (str == NULL)
return NULL;
slen = strlen (str);
if (slen < 2)
return NULL;
if (*str != '\'' && *str != '\"')
return NULL;
end = str + slen - 1;
if (*str != *end)
return NULL;
return g_strndup (str + 1, slen - 2);
}
/**
* mono_assembly_name_parse:
* \param name name to parse
* \param aname the destination assembly name
*
 * Parses an assembly display name and assigns the name,
 * version, culture and token to the provided assembly name object.
*
* \returns TRUE if the name could be parsed.
*/
gboolean
mono_assembly_name_parse (const char *name, MonoAssemblyName *aname)
{
return mono_assembly_name_parse_full (name, aname, FALSE, NULL, NULL);
}
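
/*
 * Illustrative sketch (compiled out, not part of the runtime): how an
 * embedding host might use mono_assembly_name_parse to take a display name
 * apart.  The display name below is only an example input.
 */
#if 0
static void
example_parse_display_name (void)
{
	MonoAssemblyName aname;

	if (mono_assembly_name_parse ("System.Xml, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089", &aname)) {
		/* the parsed fields are heap-allocated copies owned by aname */
		g_print ("simple name: %s\n", mono_assembly_name_get_name (&aname));
		mono_assembly_name_free_internal (&aname);
	}
}
#endif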
/**
* mono_assembly_name_new:
* \param name name to parse
*
* Allocate a new \c MonoAssemblyName and fill its values from the
* passed \p name.
*
* \returns a newly allocated structure or NULL if there was any failure.
*/
MonoAssemblyName*
mono_assembly_name_new (const char *name)
{
MonoAssemblyName *result = NULL;
MONO_ENTER_GC_UNSAFE;
MonoAssemblyName *aname = g_new0 (MonoAssemblyName, 1);
if (mono_assembly_name_parse (name, aname))
result = aname;
else
g_free (aname);
MONO_EXIT_GC_UNSAFE;
return result;
}
/**
* mono_assembly_name_get_name:
*/
const char*
mono_assembly_name_get_name (MonoAssemblyName *aname)
{
const char *result = NULL;
MONO_ENTER_GC_UNSAFE;
result = aname->name;
MONO_EXIT_GC_UNSAFE;
return result;
}
/**
* mono_assembly_name_get_culture:
*/
const char*
mono_assembly_name_get_culture (MonoAssemblyName *aname)
{
const char *result = NULL;
MONO_ENTER_GC_UNSAFE;
result = aname->culture;
MONO_EXIT_GC_UNSAFE;
return result;
}
/**
* mono_assembly_name_get_pubkeytoken:
*/
mono_byte*
mono_assembly_name_get_pubkeytoken (MonoAssemblyName *aname)
{
if (aname->public_key_token [0])
return aname->public_key_token;
return NULL;
}
/**
* mono_assembly_name_get_version:
*/
uint16_t
mono_assembly_name_get_version (MonoAssemblyName *aname, uint16_t *minor, uint16_t *build, uint16_t *revision)
{
if (minor)
*minor = aname->minor;
if (build)
*build = aname->build;
if (revision)
*revision = aname->revision;
return aname->major;
}
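
/*
 * Illustrative sketch (compiled out, not part of the runtime): reading all
 * four version components through the accessor above.  Any out parameter
 * may be NULL if the caller does not need that component.
 */
#if 0
static void
example_print_version (MonoAssemblyName *aname)
{
	uint16_t minor, build, revision;
	uint16_t major = mono_assembly_name_get_version (aname, &minor, &build, &revision);

	g_print ("%u.%u.%u.%u\n", (guint) major, (guint) minor, (guint) build, (guint) revision);
}
#endif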
gboolean
mono_assembly_name_culture_is_neutral (const MonoAssemblyName *aname)
{
return (!aname->culture || aname->culture [0] == 0);
}
/**
* mono_assembly_load_with_partial_name:
* \param name an assembly name that is then parsed by `api:mono_assembly_name_parse`.
* \param status return status code
*
* Loads a \c MonoAssembly from a name. The name is parsed using `api:mono_assembly_name_parse`,
* so it might contain a qualified type name, version, culture and token.
*
* This will load the assembly from the file whose name is derived from the assembly name
* by appending the \c .dll extension.
*
* The assembly is loaded from either one of the extra Global Assembly Caches specified
* by the extra GAC paths (specified by the \c MONO_GAC_PREFIX environment variable) or
* if that fails from the GAC.
*
* \returns NULL on failure, or a pointer to a \c MonoAssembly on success.
*/
MonoAssembly*
mono_assembly_load_with_partial_name (const char *name, MonoImageOpenStatus *status)
{
MonoAssembly *result;
MONO_ENTER_GC_UNSAFE;
MonoImageOpenStatus def_status;
if (!status)
status = &def_status;
result = mono_assembly_load_with_partial_name_internal (name, mono_alc_get_default (), status);
MONO_EXIT_GC_UNSAFE;
return result;
}
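
/*
 * Illustrative sketch (compiled out, not part of the runtime): loading by a
 * partial name from an embedding host.  Assumes the runtime has already been
 * initialized; the assembly name is a placeholder.
 */
#if 0
static MonoAssembly *
example_load_partial (void)
{
	MonoImageOpenStatus status;
	MonoAssembly *assm = mono_assembly_load_with_partial_name ("System.Xml", &status);

	if (!assm)
		g_print ("load failed, status %d\n", (int) status);
	return assm;
}
#endif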
MonoAssembly*
mono_assembly_load_with_partial_name_internal (const char *name, MonoAssemblyLoadContext *alc, MonoImageOpenStatus *status)
{
ERROR_DECL (error);
MonoAssembly *res;
MonoAssemblyName *aname, base_name;
MonoAssemblyName mapped_aname;
MONO_REQ_GC_UNSAFE_MODE;
g_assert (status != NULL);
memset (&base_name, 0, sizeof (MonoAssemblyName));
aname = &base_name;
if (!mono_assembly_name_parse (name, aname))
return NULL;
/*
* If no specific version has been requested, make sure we load the
* correct version for system assemblies.
*/
if ((aname->major | aname->minor | aname->build | aname->revision) == 0)
aname = mono_assembly_remap_version (aname, &mapped_aname);
res = mono_assembly_loaded_internal (alc, aname);
if (res) {
mono_assembly_name_free_internal (aname);
return res;
}
res = invoke_assembly_preload_hook (alc, aname, assemblies_path);
if (res) {
mono_assembly_name_free_internal (aname);
return res;
}
mono_assembly_name_free_internal (aname);
if (!res) {
res = mono_try_assembly_resolve (alc, name, NULL, error);
if (!is_ok (error)) {
mono_error_cleanup (error);
if (*status == MONO_IMAGE_OK)
*status = MONO_IMAGE_IMAGE_INVALID;
}
}
return res;
}
MonoAssembly*
mono_assembly_load_corlib (MonoImageOpenStatus *status)
{
MonoAssemblyName *aname;
MonoAssemblyOpenRequest req;
mono_assembly_request_prepare_open (&req, mono_alc_get_default ());
if (corlib) {
/* g_print ("corlib already loaded\n"); */
return corlib;
}
aname = mono_assembly_name_new (MONO_ASSEMBLY_CORLIB_NAME);
corlib = invoke_assembly_preload_hook (req.request.alc, aname, NULL);
/* MonoCore preload hook should know how to find it */
/* FIXME: AOT compiler comes here without an installed hook. */
if (!corlib) {
if (assemblies_path) { // Custom assemblies path set via MONO_PATH or mono_set_assemblies_path
char *corlib_name = g_strdup_printf ("%s.dll", MONO_ASSEMBLY_CORLIB_NAME);
corlib = load_in_path (corlib_name, (const char**)assemblies_path, &req, status);
}
}
if (!corlib) {
/* Maybe its in a bundle */
char *corlib_name = g_strdup_printf ("%s.dll", MONO_ASSEMBLY_CORLIB_NAME);
corlib = mono_assembly_request_open (corlib_name, &req, status);
}
g_assert (corlib);
return corlib;
}
gboolean
mono_assembly_candidate_predicate_sn_same_name (MonoAssembly *candidate, gpointer ud)
{
MonoAssemblyName *wanted_name = (MonoAssemblyName*)ud;
MonoAssemblyName *candidate_name = &candidate->aname;
g_assert (wanted_name != NULL);
g_assert (candidate_name != NULL);
if (mono_trace_is_traced (G_LOG_LEVEL_INFO, MONO_TRACE_ASSEMBLY)) {
char * s = mono_stringify_assembly_name (wanted_name);
mono_trace (G_LOG_LEVEL_DEBUG, MONO_TRACE_ASSEMBLY, "Predicate: wanted = %s", s);
g_free (s);
s = mono_stringify_assembly_name (candidate_name);
mono_trace (G_LOG_LEVEL_DEBUG, MONO_TRACE_ASSEMBLY, "Predicate: candidate = %s", s);
g_free (s);
}
return mono_assembly_check_name_match (wanted_name, candidate_name);
}
gboolean
mono_assembly_check_name_match (MonoAssemblyName *wanted_name, MonoAssemblyName *candidate_name)
{
gboolean result = mono_assembly_names_equal_flags (wanted_name, candidate_name, MONO_ANAME_EQ_IGNORE_VERSION | MONO_ANAME_EQ_IGNORE_PUBKEY);
if (result && assembly_names_compare_versions (wanted_name, candidate_name, -1) > 0)
result = FALSE;
mono_trace (G_LOG_LEVEL_DEBUG, MONO_TRACE_ASSEMBLY, "Predicate: candidate and wanted names %s",
result ? "match, returning TRUE" : "don't match, returning FALSE");
return result;
}
MonoAssembly*
mono_assembly_request_byname (MonoAssemblyName *aname, const MonoAssemblyByNameRequest *req, MonoImageOpenStatus *status)
{
mono_trace (G_LOG_LEVEL_DEBUG, MONO_TRACE_ASSEMBLY, "Request to load %s in alc %p", aname->name, (gpointer)req->request.alc);
MonoAssembly *result;
if (status)
*status = MONO_IMAGE_OK;
result = netcore_load_reference (aname, req->request.alc, req->requesting_assembly, !req->no_postload_search);
return result;
}
MonoAssembly *
mono_assembly_load_full_alc (MonoGCHandle alc_gchandle, MonoAssemblyName *aname, const char *basedir, MonoImageOpenStatus *status)
{
MonoAssembly *res;
MONO_ENTER_GC_UNSAFE;
MonoAssemblyByNameRequest req;
MonoAssemblyLoadContext *alc = mono_alc_from_gchandle (alc_gchandle);
mono_assembly_request_prepare_byname (&req, alc);
req.requesting_assembly = NULL;
req.basedir = basedir;
res = mono_assembly_request_byname (aname, &req, status);
MONO_EXIT_GC_UNSAFE;
return res;
}
/**
* mono_assembly_load_full:
* \param aname A MonoAssemblyName with the assembly name to load.
* \param basedir A directory to look up the assembly at.
* \param status a pointer to a MonoImageOpenStatus to return the status of the load operation
* \param refonly Whether this assembly is being opened in "reflection-only" mode.
*
* Loads the assembly referenced by \p aname, if the value of \p basedir is not NULL, it
* attempts to load the assembly from that directory before probing the standard locations.
*
* If the assembly is being opened in reflection-only mode (\p refonly set to TRUE) then no
* assembly binding takes place.
*
* \returns the assembly referenced by \p aname loaded or NULL on error. On error the
* value pointed by \p status is updated with an error code.
*/
MonoAssembly*
mono_assembly_load_full (MonoAssemblyName *aname, const char *basedir, MonoImageOpenStatus *status, gboolean refonly)
{
if (refonly) {
if (status)
*status = MONO_IMAGE_IMAGE_INVALID;
return NULL;
}
MonoAssembly *res;
MONO_ENTER_GC_UNSAFE;
MonoAssemblyByNameRequest req;
mono_assembly_request_prepare_byname (&req, mono_alc_get_default ());
req.requesting_assembly = NULL;
req.basedir = basedir;
res = mono_assembly_request_byname (aname, &req, status);
MONO_EXIT_GC_UNSAFE;
return res;
}
/**
* mono_assembly_load:
* \param aname A MonoAssemblyName with the assembly name to load.
* \param basedir A directory to look up the assembly at.
* \param status a pointer to a MonoImageOpenStatus to return the status of the load operation
*
* Loads the assembly referenced by \p aname, if the value of \p basedir is not NULL, it
* attempts to load the assembly from that directory before probing the standard locations.
*
* \returns the assembly referenced by \p aname loaded or NULL on error. On error the
* value pointed by \p status is updated with an error code.
*/
MonoAssembly*
mono_assembly_load (MonoAssemblyName *aname, const char *basedir, MonoImageOpenStatus *status)
{
MonoAssemblyByNameRequest req;
mono_assembly_request_prepare_byname (&req, mono_alc_get_default ());
req.requesting_assembly = NULL;
req.basedir = basedir;
return mono_assembly_request_byname (aname, &req, status);
}
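
/*
 * Illustrative sketch (compiled out, not part of the runtime): combining
 * mono_assembly_name_new with mono_assembly_load.  Passing a NULL basedir
 * means only the standard probing locations are searched.
 */
#if 0
static MonoAssembly *
example_load_by_display_name (const char *display_name)
{
	MonoImageOpenStatus status;
	MonoAssemblyName *aname = mono_assembly_name_new (display_name);
	MonoAssembly *assm;

	if (!aname)
		return NULL;
	assm = mono_assembly_load (aname, NULL, &status);
	mono_assembly_name_free_internal (aname); /* frees the fields */
	g_free (aname);                           /* frees the structure itself */
	return assm;
}
#endif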
/**
* mono_assembly_loaded_full:
* \param aname an assembly to look for.
* \param refonly Whether this assembly is being opened in "reflection-only" mode.
*
* This is used to determine if the specified assembly has been loaded
* \returns NULL If the given \p aname assembly has not been loaded, or a pointer to
* a \c MonoAssembly that matches the \c MonoAssemblyName specified.
*/
MonoAssembly*
mono_assembly_loaded_full (MonoAssemblyName *aname, gboolean refonly)
{
if (refonly)
return NULL;
MonoAssemblyLoadContext *alc = mono_alc_get_default ();
return mono_assembly_loaded_internal (alc, aname);
}
MonoAssembly *
mono_assembly_loaded_internal (MonoAssemblyLoadContext *alc, MonoAssemblyName *aname)
{
MonoAssembly *res;
MonoAssemblyName mapped_aname;
aname = mono_assembly_remap_version (aname, &mapped_aname);
res = mono_assembly_invoke_search_hook_internal (alc, NULL, aname, FALSE);
return res;
}
/**
* mono_assembly_loaded:
* \param aname an assembly to look for.
*
* This is used to determine if the specified assembly has been loaded
* \returns NULL If the given \p aname assembly has not been loaded, or a pointer to
* a \c MonoAssembly that matches the \c MonoAssemblyName specified.
*/
MonoAssembly*
mono_assembly_loaded (MonoAssemblyName *aname)
{
MonoAssembly *res;
MONO_ENTER_GC_UNSAFE;
res = mono_assembly_loaded_internal (mono_alc_get_default (), aname);
MONO_EXIT_GC_UNSAFE;
return res;
}
void
mono_assembly_release_gc_roots (MonoAssembly *assembly)
{
if (assembly == NULL || assembly == REFERENCE_MISSING)
return;
if (assembly_is_dynamic (assembly)) {
int i;
MonoDynamicImage *dynimg = (MonoDynamicImage *)assembly->image;
for (i = 0; i < dynimg->image.module_count; ++i)
mono_dynamic_image_release_gc_roots ((MonoDynamicImage *)dynimg->image.modules [i]);
mono_dynamic_image_release_gc_roots (dynimg);
}
}
/*
* Returns whether mono_assembly_close_finish() must be called as
* well. See comment for mono_image_close_except_pools() for why we
* unload in two steps.
*/
gboolean
mono_assembly_close_except_image_pools (MonoAssembly *assembly)
{
g_return_val_if_fail (assembly != NULL, FALSE);
if (assembly == REFERENCE_MISSING)
return FALSE;
/* Might be 0 already */
if (mono_assembly_decref (assembly) > 0)
return FALSE;
MONO_PROFILER_RAISE (assembly_unloading, (assembly));
mono_trace (G_LOG_LEVEL_DEBUG, MONO_TRACE_ASSEMBLY, "Unloading assembly %s [%p].", assembly->aname.name, assembly);
mono_debug_close_image (assembly->image);
mono_assemblies_lock ();
loaded_assemblies = g_list_remove (loaded_assemblies, assembly);
loaded_assembly_count--;
mono_assemblies_unlock ();
assembly->image->assembly = NULL;
if (!mono_image_close_except_pools (assembly->image))
assembly->image = NULL;
g_slist_foreach (assembly->friend_assembly_names, free_assembly_name_item, NULL);
g_slist_foreach (assembly->ignores_checks_assembly_names, free_assembly_name_item, NULL);
g_slist_free (assembly->friend_assembly_names);
g_slist_free (assembly->ignores_checks_assembly_names);
g_free (assembly->basedir);
MONO_PROFILER_RAISE (assembly_unloaded, (assembly));
return TRUE;
}
void
mono_assembly_close_finish (MonoAssembly *assembly)
{
g_assert (assembly && assembly != REFERENCE_MISSING);
if (assembly->image)
mono_image_close_finish (assembly->image);
if (assembly_is_dynamic (assembly)) {
g_free ((char*)assembly->aname.culture);
} else {
g_free (assembly);
}
}
/**
* mono_assembly_close:
* \param assembly the assembly to release.
*
* This method releases a reference to the \p assembly. The assembly is
* only released when all the outstanding references to it are released.
*/
void
mono_assembly_close (MonoAssembly *assembly)
{
if (mono_assembly_close_except_image_pools (assembly))
mono_assembly_close_finish (assembly);
}
/**
* mono_assembly_load_module:
*/
MonoImage*
mono_assembly_load_module (MonoAssembly *assembly, guint32 idx)
{
ERROR_DECL (error);
MonoImage *result = mono_assembly_load_module_checked (assembly, idx, error);
mono_error_assert_ok (error);
return result;
}
MONO_API MonoImage*
mono_assembly_load_module_checked (MonoAssembly *assembly, uint32_t idx, MonoError *error)
{
return mono_image_load_file_for_image_checked (assembly->image, idx, error);
}
/**
* mono_assembly_foreach:
* \param func function to invoke for each assembly loaded
* \param user_data data passed to the callback
*
* Invokes the provided \p func callback for each assembly loaded into
* the runtime. The first parameter passed to the callback is the
* \c MonoAssembly*, and the second parameter is the \p user_data.
*
* This is done for all assemblies loaded in the runtime, not just
* those loaded in the current application domain.
*/
void
mono_assembly_foreach (GFunc func, gpointer user_data)
{
GList *copy;
/*
* We make a copy of the list to avoid calling the callback inside the
* lock, which could lead to deadlocks.
*/
mono_assemblies_lock ();
copy = g_list_copy (loaded_assemblies);
mono_assemblies_unlock ();
	g_list_foreach (copy, func, user_data);
g_list_free (copy);
}
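
/*
 * Illustrative sketch (compiled out, not part of the runtime): a GFunc
 * callback suitable for mono_assembly_foreach that prints every loaded
 * assembly.  A host would call mono_assembly_foreach (example_print_assembly, NULL).
 */
#if 0
static void
example_print_assembly (gpointer data, gpointer user_data)
{
	MonoAssembly *assm = (MonoAssembly *) data;

	g_print ("loaded: %s\n", assm->aname.name);
}
#endif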
/**
* mono_assemblies_cleanup:
*
* Free all resources used by this module.
*/
void
mono_assemblies_cleanup (void)
{
}
/*
* Holds the assembly of the application, for
* System.Diagnostics.Process::MainModule
*/
static MonoAssembly *main_assembly=NULL;
/**
* mono_assembly_set_main:
*/
void
mono_assembly_set_main (MonoAssembly *assembly)
{
main_assembly = assembly;
}
/**
* mono_assembly_get_main:
*
* Returns: the assembly for the application, the first assembly that is loaded by the VM
*/
MonoAssembly *
mono_assembly_get_main (void)
{
return (main_assembly);
}
/**
* mono_assembly_get_image:
* \param assembly The assembly to retrieve the image from
*
* \returns the \c MonoImage associated with this assembly.
*/
MonoImage*
mono_assembly_get_image (MonoAssembly *assembly)
{
MonoImage *res;
MONO_ENTER_GC_UNSAFE;
res = mono_assembly_get_image_internal (assembly);
MONO_EXIT_GC_UNSAFE;
return res;
}
MonoImage*
mono_assembly_get_image_internal (MonoAssembly *assembly)
{
MONO_REQ_GC_UNSAFE_MODE;
return assembly->image;
}
/**
* mono_assembly_get_name:
* \param assembly The assembly to retrieve the name from
*
* The returned name's lifetime is the same as \p assembly's.
*
* \returns the \c MonoAssemblyName associated with this assembly.
*/
MonoAssemblyName *
mono_assembly_get_name (MonoAssembly *assembly)
{
MonoAssemblyName *res;
MONO_ENTER_GC_UNSAFE;
res = mono_assembly_get_name_internal (assembly);
MONO_EXIT_GC_UNSAFE;
return res;
}
MonoAssemblyName *
mono_assembly_get_name_internal (MonoAssembly *assembly)
{
MONO_REQ_GC_UNSAFE_MODE;
return &assembly->aname;
}
/**
* mono_register_bundled_assemblies:
*/
void
mono_register_bundled_assemblies (const MonoBundledAssembly **assemblies)
{
bundles = assemblies;
}
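
/*
 * Illustrative sketch (compiled out, not part of the runtime): registering a
 * NULL-terminated bundle table, mirroring what mkbundle-style tooling
 * generates.  The name and the byte array are placeholders; a real host
 * embeds the actual image bytes.
 */
#if 0
static const unsigned char example_app_dll [] = { 0 /* ... image bytes ... */ };

static const MonoBundledAssembly example_bundle = { "App.dll", example_app_dll, sizeof (example_app_dll) };
static const MonoBundledAssembly *example_bundle_table [] = { &example_bundle, NULL };

static void
example_register_bundle (void)
{
	/* the table must stay alive for the life of the runtime and must be
	 * registered before assembly loading starts */
	mono_register_bundled_assemblies (example_bundle_table);
}
#endif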
/**
* mono_create_new_bundled_satellite_assembly:
*/
MonoBundledSatelliteAssembly *
mono_create_new_bundled_satellite_assembly (const char *name, const char *culture, const unsigned char *data, unsigned int size)
{
MonoBundledSatelliteAssembly *satellite_assembly = g_new0 (MonoBundledSatelliteAssembly, 1);
satellite_assembly->name = strdup (name);
satellite_assembly->culture = strdup (culture);
satellite_assembly->data = data;
satellite_assembly->size = size;
return satellite_assembly;
}
/**
* mono_register_bundled_satellite_assemblies:
*/
void
mono_register_bundled_satellite_assemblies (const MonoBundledSatelliteAssembly **assemblies)
{
satellite_bundles = assemblies;
}
#define MONO_DECLSEC_FORMAT_10 0x3C
#define MONO_DECLSEC_FORMAT_20 0x2E
#define MONO_DECLSEC_FIELD 0x53
#define MONO_DECLSEC_PROPERTY 0x54
#define SKIP_VISIBILITY_XML_ATTRIBUTE ("\"SkipVerification\"")
#define SKIP_VISIBILITY_ATTRIBUTE_NAME ("System.Security.Permissions.SecurityPermissionAttribute")
#define SKIP_VISIBILITY_ATTRIBUTE_SIZE (sizeof (SKIP_VISIBILITY_ATTRIBUTE_NAME) - 1)
#define SKIP_VISIBILITY_PROPERTY_NAME ("SkipVerification")
#define SKIP_VISIBILITY_PROPERTY_SIZE (sizeof (SKIP_VISIBILITY_PROPERTY_NAME) - 1)
static gboolean
mono_assembly_try_decode_skip_verification_param (const char *p, const char **resp, gboolean *abort_decoding)
{
int len;
switch (*p++) {
case MONO_DECLSEC_PROPERTY:
break;
case MONO_DECLSEC_FIELD:
default:
*abort_decoding = TRUE;
		return FALSE;
}
if (*p++ != MONO_TYPE_BOOLEAN) {
*abort_decoding = TRUE;
return FALSE;
}
/* property name length */
len = mono_metadata_decode_value (p, &p);
if (len >= SKIP_VISIBILITY_PROPERTY_SIZE && !memcmp (p, SKIP_VISIBILITY_PROPERTY_NAME, SKIP_VISIBILITY_PROPERTY_SIZE)) {
p += len;
return *p;
}
p += len + 1;
*resp = p;
return FALSE;
}
static gboolean
mono_assembly_try_decode_skip_verification (const char *p, const char *endn)
{
int i, j, num, len, params_len;
if (*p == MONO_DECLSEC_FORMAT_10) {
gsize read, written;
char *res = g_convert (p, endn - p, "UTF-8", "UTF-16LE", &read, &written, NULL);
if (res) {
gboolean found = strstr (res, SKIP_VISIBILITY_XML_ATTRIBUTE) != NULL;
g_free (res);
return found;
}
return FALSE;
}
if (*p++ != MONO_DECLSEC_FORMAT_20)
return FALSE;
/* number of encoded permission attributes */
num = mono_metadata_decode_value (p, &p);
for (i = 0; i < num; ++i) {
gboolean is_valid = FALSE;
gboolean abort_decoding = FALSE;
/* attribute name length */
len = mono_metadata_decode_value (p, &p);
/* We don't really need to fully decode the type. Comparing the name is enough */
is_valid = len >= SKIP_VISIBILITY_ATTRIBUTE_SIZE && !memcmp (p, SKIP_VISIBILITY_ATTRIBUTE_NAME, SKIP_VISIBILITY_ATTRIBUTE_SIZE);
p += len;
/*size of the params table*/
params_len = mono_metadata_decode_value (p, &p);
if (is_valid) {
const char *params_end = p + params_len;
/* number of parameters */
len = mono_metadata_decode_value (p, &p);
for (j = 0; j < len; ++j) {
if (mono_assembly_try_decode_skip_verification_param (p, &p, &abort_decoding))
return TRUE;
if (abort_decoding)
break;
}
p = params_end;
} else {
p += params_len;
}
}
return FALSE;
}
gboolean
mono_assembly_has_skip_verification (MonoAssembly *assembly)
{
MonoTableInfo *t;
guint32 cols [MONO_DECL_SECURITY_SIZE];
const char *blob;
int i, len;
if (MONO_SECMAN_FLAG_INIT (assembly->skipverification))
return MONO_SECMAN_FLAG_GET_VALUE (assembly->skipverification);
t = &assembly->image->tables [MONO_TABLE_DECLSECURITY];
int rows = table_info_get_rows (t);
for (i = 0; i < rows; ++i) {
mono_metadata_decode_row (t, i, cols, MONO_DECL_SECURITY_SIZE);
if ((cols [MONO_DECL_SECURITY_PARENT] & MONO_HAS_DECL_SECURITY_MASK) != MONO_HAS_DECL_SECURITY_ASSEMBLY)
continue;
if (cols [MONO_DECL_SECURITY_ACTION] != SECURITY_ACTION_REQMIN)
continue;
blob = mono_metadata_blob_heap (assembly->image, cols [MONO_DECL_SECURITY_PERMISSIONSET]);
len = mono_metadata_decode_blob_size (blob, &blob);
if (!len)
continue;
if (mono_assembly_try_decode_skip_verification (blob, blob + len)) {
MONO_SECMAN_FLAG_SET_VALUE (assembly->skipverification, TRUE);
return TRUE;
}
}
MONO_SECMAN_FLAG_SET_VALUE (assembly->skipverification, FALSE);
return FALSE;
}
/**
* mono_assembly_is_jit_optimizer_disabled:
*
 * \param ass the assembly
 *
 * \returns TRUE if the assembly's \c System.Diagnostics.DebuggableAttribute has the
 * \c DebuggingModes.DisableOptimizations bit set.
*
*/
gboolean
mono_assembly_is_jit_optimizer_disabled (MonoAssembly *ass)
{
ERROR_DECL (error);
g_assert (ass);
if (ass->jit_optimizer_disabled_inited)
return ass->jit_optimizer_disabled;
MonoClass *klass = mono_class_try_get_debuggable_attribute_class ();
if (!klass) {
/* Linked away */
ass->jit_optimizer_disabled = FALSE;
mono_memory_barrier ();
ass->jit_optimizer_disabled_inited = TRUE;
return FALSE;
}
gboolean disable_opts = FALSE;
MonoCustomAttrInfo* attrs = mono_custom_attrs_from_assembly_checked (ass, FALSE, error);
mono_error_cleanup (error); /* FIXME don't swallow the error */
if (attrs) {
for (int i = 0; i < attrs->num_attrs; ++i) {
MonoCustomAttrEntry *attr = &attrs->attrs [i];
const gchar *p;
MonoMethodSignature *sig;
if (!attr->ctor || attr->ctor->klass != klass)
continue;
/* Decode the attribute. See reflection.c */
p = (const char*)attr->data;
g_assert (read16 (p) == 0x0001);
p += 2;
// FIXME: Support named parameters
sig = mono_method_signature_internal (attr->ctor);
MonoClass *param_class;
if (sig->param_count == 2 && sig->params [0]->type == MONO_TYPE_BOOLEAN && sig->params [1]->type == MONO_TYPE_BOOLEAN) {
/* Two boolean arguments */
p ++;
disable_opts = *p;
} else if (sig->param_count == 1 &&
sig->params[0]->type == MONO_TYPE_VALUETYPE &&
(param_class = mono_class_from_mono_type_internal (sig->params[0])) != NULL &&
m_class_is_enumtype (param_class) &&
!strcmp (m_class_get_name (param_class), "DebuggingModes")) {
/* System.Diagnostics.DebuggableAttribute+DebuggingModes */
int32_t flags = read32 (p);
p += 4;
disable_opts = (flags & 0x0100) != 0;
}
}
mono_custom_attrs_free (attrs);
}
ass->jit_optimizer_disabled = disable_opts;
mono_memory_barrier ();
ass->jit_optimizer_disabled_inited = TRUE;
return disable_opts;
}
guint32
mono_assembly_get_count (void)
{
return loaded_assembly_count;
}
/**
* \file
* Routines for loading assemblies.
*
* Author:
* Miguel de Icaza ([email protected])
*
* Copyright 2001-2003 Ximian, Inc (http://www.ximian.com)
* Copyright 2004-2009 Novell, Inc (http://www.novell.com)
* Copyright 2011 Xamarin, Inc (http://www.xamarin.com)
* Licensed under the MIT license. See LICENSE file in the project root for full license information.
*/
#include <config.h>
#include <stdio.h>
#include <glib.h>
#include <errno.h>
#include <string.h>
#include <stdlib.h>
#include <mono/metadata/assembly.h>
#include "assembly-internals.h"
#include <mono/metadata/image.h>
#include "image-internals.h"
#include "object-internals.h"
#include <mono/metadata/loader.h>
#include <mono/metadata/tabledefs.h>
#include <mono/metadata/custom-attrs-internals.h>
#include <mono/metadata/metadata-internals.h>
#include <mono/metadata/profiler-private.h>
#include <mono/metadata/class-internals.h>
#include <mono/metadata/domain-internals.h>
#include <mono/metadata/exception-internals.h>
#include <mono/metadata/reflection-internals.h>
#include <mono/metadata/mono-endian.h>
#include <mono/metadata/mono-debug.h>
#include <mono/utils/mono-uri.h>
#include <mono/metadata/mono-config.h>
#include <mono/metadata/mono-config-internals.h>
#include <mono/metadata/mono-config-dirs.h>
#include <mono/utils/mono-digest.h>
#include <mono/utils/mono-logger-internals.h>
#include <mono/utils/mono-path.h>
#include <mono/utils/mono-proclib.h>
#include <mono/metadata/reflection.h>
#include <mono/metadata/coree.h>
#include <mono/metadata/cil-coff.h>
#include <mono/utils/atomic.h>
#include <mono/utils/mono-os-mutex.h>
#include <mono/metadata/mono-private-unstable.h>
#include <minipal/getexepath.h>
#ifndef HOST_WIN32
#include <sys/types.h>
#include <unistd.h>
#include <sys/stat.h>
#endif
#ifdef HOST_DARWIN
#include <mach-o/dyld.h>
#endif
/* the default search path is empty, the first slot is replaced with the computed value */
static char*
default_path [] = {
NULL,
NULL,
NULL
};
/* Contains the list of directories to be searched for assemblies (MONO_PATH) */
static char **assemblies_path = NULL;
/* keeps track of loaded assemblies, excluding dynamic ones */
static GList *loaded_assemblies = NULL;
static guint32 loaded_assembly_count = 0;
static MonoAssembly *corlib;
static char* unquote (const char *str);
// This protects loaded_assemblies
static mono_mutex_t assemblies_mutex;
static inline void
mono_assemblies_lock (void)
{
mono_os_mutex_lock (&assemblies_mutex);
}
static inline void
mono_assemblies_unlock (void)
{
mono_os_mutex_unlock (&assemblies_mutex);
}
/* If non-NULL, points to the bundled assembly information */
static const MonoBundledAssembly **bundles;
static const MonoBundledSatelliteAssembly **satellite_bundles;
/* Class lazy loading functions */
static GENERATE_TRY_GET_CLASS_WITH_CACHE (debuggable_attribute, "System.Diagnostics", "DebuggableAttribute")
static GENERATE_TRY_GET_CLASS_WITH_CACHE (internals_visible, "System.Runtime.CompilerServices", "InternalsVisibleToAttribute")
static MonoAssembly*
mono_assembly_invoke_search_hook_internal (MonoAssemblyLoadContext *alc, MonoAssembly *requesting, MonoAssemblyName *aname, gboolean postload);
static MonoAssembly *
invoke_assembly_preload_hook (MonoAssemblyLoadContext *alc, MonoAssemblyName *aname, gchar **apath);
static gchar*
encode_public_tok (const guchar *token, gint32 len)
{
const static gchar allowed [] = { '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', 'a', 'b', 'c', 'd', 'e', 'f' };
gchar *res;
int i;
res = (gchar *)g_malloc (len * 2 + 1);
for (i = 0; i < len; i++) {
res [i * 2] = allowed [token [i] >> 4];
res [i * 2 + 1] = allowed [token [i] & 0xF];
}
res [len * 2] = 0;
return res;
}
/**
* mono_public_tokens_are_equal:
* \param pubt1 first public key token
* \param pubt2 second public key token
*
 * Compare two public key tokens and return TRUE if they are equal and FALSE
 * otherwise.
*/
gboolean
mono_public_tokens_are_equal (const unsigned char *pubt1, const unsigned char *pubt2)
{
return g_ascii_strncasecmp ((const char*) pubt1, (const char*) pubt2, 16) == 0;
}
/**
* mono_set_assemblies_path:
* \param path list of paths that contain directories where Mono will look for assemblies
*
* Use this method to override the standard assembly lookup system and
* override any assemblies coming from the GAC. This is the method
* that supports the \c MONO_PATH variable.
*
* Notice that \c MONO_PATH and this method are really a very bad idea as
 * they prevent the GAC and the standard resolution mechanisms from
 * working. Nonetheless, for some debugging
* situations and bootstrapping setups, this is useful to have.
*/
void
mono_set_assemblies_path (const char* path)
{
char **splitted, **dest;
splitted = g_strsplit (path, G_SEARCHPATH_SEPARATOR_S, 1000);
if (assemblies_path)
g_strfreev (assemblies_path);
assemblies_path = dest = splitted;
while (*splitted) {
char *tmp = *splitted;
if (*tmp)
*dest++ = mono_path_canonicalize (tmp);
g_free (tmp);
splitted++;
}
*dest = *splitted;
if (g_hasenv ("MONO_DEBUG"))
return;
splitted = assemblies_path;
while (*splitted) {
if (**splitted && !g_file_test (*splitted, G_FILE_TEST_IS_DIR))
g_warning ("'%s' in MONO_PATH doesn't exist or has wrong permissions.", *splitted);
splitted++;
}
}
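
/*
 * Illustrative sketch (compiled out, not part of the runtime): overriding the
 * search path from an embedding host before any assembly is loaded.  The
 * directories are placeholders; the format matches the MONO_PATH variable.
 */
#if 0
static void
example_override_search_path (void)
{
	mono_set_assemblies_path ("/opt/app/lib" G_SEARCHPATH_SEPARATOR_S "/opt/app/plugins");
}
#endif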
void
mono_set_assemblies_path_direct (char **path)
{
g_strfreev (assemblies_path);
assemblies_path = path;
}
static void
check_path_env (void)
{
if (assemblies_path != NULL)
return;
char* path = g_getenv ("MONO_PATH");
if (!path)
return;
mono_set_assemblies_path(path);
g_free (path);
}
static void
mono_assembly_binding_info_free (MonoAssemblyBindingInfo *info)
{
if (!info)
return;
g_free (info->name);
g_free (info->culture);
}
/**
* mono_assembly_names_equal:
* \param l first assembly
* \param r second assembly.
*
* Compares two \c MonoAssemblyName instances and returns whether they are equal.
*
 * This compares the names, the cultures, the versions and the
 * public key tokens.
*
* \returns TRUE if both assembly names are equal.
*/
gboolean
mono_assembly_names_equal (MonoAssemblyName *l, MonoAssemblyName *r)
{
return mono_assembly_names_equal_flags (l, r, MONO_ANAME_EQ_NONE);
}
/**
* mono_assembly_names_equal_flags:
* \param l first assembly name
* \param r second assembly name
* \param flags flags that affect what is compared.
*
* Compares two \c MonoAssemblyName instances and returns whether they are equal.
*
* This compares the simple names and cultures and optionally the versions and
* public key tokens, depending on the \c flags.
*
* \returns TRUE if both assembly names are equal.
*/
gboolean
mono_assembly_names_equal_flags (MonoAssemblyName *l, MonoAssemblyName *r, MonoAssemblyNameEqFlags flags)
{
g_assert (l != NULL);
g_assert (r != NULL);
if (!l->name || !r->name)
return FALSE;
if ((flags & MONO_ANAME_EQ_IGNORE_CASE) != 0 && g_strcasecmp (l->name, r->name))
return FALSE;
if ((flags & MONO_ANAME_EQ_IGNORE_CASE) == 0 && strcmp (l->name, r->name))
return FALSE;
if (l->culture && r->culture && strcmp (l->culture, r->culture))
return FALSE;
if ((l->major != r->major || l->minor != r->minor ||
l->build != r->build || l->revision != r->revision) &&
(flags & MONO_ANAME_EQ_IGNORE_VERSION) == 0)
if (! ((l->major == 0 && l->minor == 0 && l->build == 0 && l->revision == 0) || (r->major == 0 && r->minor == 0 && r->build == 0 && r->revision == 0)))
return FALSE;
if (!l->public_key_token [0] || !r->public_key_token [0] || (flags & MONO_ANAME_EQ_IGNORE_PUBKEY) != 0)
return TRUE;
if (!mono_public_tokens_are_equal (l->public_key_token, r->public_key_token))
return FALSE;
return TRUE;
}
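
/*
 * Illustrative sketch (compiled out, not part of the runtime): comparing two
 * parsed names while ignoring the version and public key token, which is how
 * "same assembly, any version" checks are typically expressed with these flags.
 */
#if 0
static gboolean
example_same_assembly_any_version (MonoAssemblyName *l, MonoAssemblyName *r)
{
	return mono_assembly_names_equal_flags (l, r, MONO_ANAME_EQ_IGNORE_VERSION | MONO_ANAME_EQ_IGNORE_PUBKEY);
}
#endif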
/**
* assembly_names_compare_versions:
* \param l left assembly name
* \param r right assembly name
* \param maxcomps how many version components to compare, or -1 to compare all.
*
 * \returns a negative value if \p l is a lower version than \p r; a positive
 * value if \p r is a lower version than \p l; or zero if \p l and \p r are
 * equal versions (comparing up to \p maxcomps components).
 *
 * Components are \c major, \c minor, \c revision, and \c build. A \p maxcomps
 * of 1 compares just the majors; 2 compares majors then minors; and so on.
*/
static int
assembly_names_compare_versions (MonoAssemblyName *l, MonoAssemblyName *r, int maxcomps)
{
int i = 0;
if (maxcomps < 0) maxcomps = 4;
#define CMP(field) do { \
if (l-> field < r-> field && i < maxcomps) return -1; \
if (l-> field > r-> field && i < maxcomps) return 1; \
} while (0)
CMP (major);
++i;
CMP (minor);
++i;
CMP (revision);
++i;
CMP (build);
#undef CMP
return 0;
}
/**
* mono_assembly_request_prepare_load:
* \param req the load request to be initialized
* \param alc the AssemblyLoadContext in netcore
*
 * Initialize an assembly loader request. Its state is reset and its load context is prefilled with \p alc.
*/
void
mono_assembly_request_prepare_load (MonoAssemblyLoadRequest *req, MonoAssemblyLoadContext *alc)
{
memset (req, 0, sizeof (MonoAssemblyLoadRequest));
req->alc = alc;
}
/**
* mono_assembly_request_prepare_open:
* \param req the open request to be initialized
* \param alc the AssemblyLoadContext in netcore
*
 * Initialize an assembly loader request intended for open operations. Its state is reset and its load context is prefilled with \p alc.
*/
void
mono_assembly_request_prepare_open (MonoAssemblyOpenRequest *req, MonoAssemblyLoadContext *alc)
{
memset (req, 0, sizeof (MonoAssemblyOpenRequest));
req->request.alc = alc;
}
/**
* mono_assembly_request_prepare_byname:
* \param req the byname request to be initialized
* \param alc the AssemblyLoadContext in netcore
*
 * Initialize an assembly load-by-name request. Its state is reset and its load context is prefilled with \p alc.
*/
void
mono_assembly_request_prepare_byname (MonoAssemblyByNameRequest *req, MonoAssemblyLoadContext *alc)
{
memset (req, 0, sizeof (MonoAssemblyByNameRequest));
req->request.alc = alc;
}
static MonoAssembly *
load_in_path (const char *basename, const char** search_path, const MonoAssemblyOpenRequest *req, MonoImageOpenStatus *status)
{
int i;
char *fullpath;
MonoAssembly *result;
for (i = 0; search_path [i]; ++i) {
fullpath = g_build_filename (search_path [i], basename, (const char*)NULL);
result = mono_assembly_request_open (fullpath, req, status);
g_free (fullpath);
if (result)
return result;
}
return NULL;
}
/**
* mono_assembly_setrootdir:
* \param root_dir The pathname of the root directory where we will locate assemblies
*
* This routine sets the internal default root directory for looking up
* assemblies.
*
 * This is used by Windows installations to dynamically compute the
 * place where the Mono assemblies are located.
*
*/
void
mono_assembly_setrootdir (const char *root_dir)
{
/*
* Override the MONO_ASSEMBLIES directory configured at compile time.
*/
if (default_path [0])
g_free (default_path [0]);
default_path [0] = g_strdup (root_dir);
}
/**
* mono_assembly_getrootdir:
*
* Obtains the root directory used for looking up assemblies.
*
 * Returns: a string with the directory; this string should not be freed.
*/
G_CONST_RETURN gchar *
mono_assembly_getrootdir (void)
{
return default_path [0];
}
/**
* mono_native_getrootdir:
*
* Obtains the root directory used for looking up native libs (.so, .dylib).
*
 * Returns: a string with the directory; this string should be freed by
 * the caller.
*/
gchar *
mono_native_getrootdir (void)
{
gchar* fullpath = g_build_path (G_DIR_SEPARATOR_S, mono_assembly_getrootdir (), mono_config_get_reloc_lib_dir(), (const char*)NULL);
return fullpath;
}
/**
* mono_set_dirs:
* \param assembly_dir the base directory for assemblies
* \param config_dir the base directory for configuration files
*
* This routine is used internally and by developers embedding
* the runtime into their own applications.
*
* There are a number of cases to consider: Mono as a system-installed
* package that is available on the location preconfigured or Mono in
* a relocated location.
*
* If you are using a system-installed Mono, you can pass NULL
* to both parameters. If you are not, you should compute both
* directory values and call this routine.
*
* The values for a given PREFIX are:
*
* assembly_dir: PREFIX/lib
* config_dir: PREFIX/etc
*
* Notice that embedders that use Mono in a relocated way must
* compute the location at runtime, as they will be in control
* of where Mono is installed.
*/
void
mono_set_dirs (const char *assembly_dir, const char *config_dir)
{
if (assembly_dir == NULL)
assembly_dir = mono_config_get_assemblies_dir ();
if (config_dir == NULL)
config_dir = mono_config_get_cfg_dir ();
mono_assembly_setrootdir (assembly_dir);
mono_set_config_dir (config_dir);
}
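
/*
 * Illustrative sketch (compiled out, not part of the runtime): an embedder
 * with a relocated install computing both directories from a prefix at
 * runtime, as the comment above suggests.  The prefix is supplied by the host.
 */
#if 0
static void
example_set_dirs_for_prefix (const char *prefix)
{
	char *lib = g_build_filename (prefix, "lib", (const char *) NULL);
	char *etc = g_build_filename (prefix, "etc", (const char *) NULL);

	/* mono_assembly_setrootdir copies its argument; we conservatively keep
	 * both strings alive for the process lifetime in this sketch */
	mono_set_dirs (lib, etc);
}
#endif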
#ifndef HOST_WIN32
static char *
compute_base (char *path)
{
char *p = strrchr (path, '/');
if (p == NULL)
return NULL;
	/* Not a well-known Mono executable; we are embedded and can't guess the base */
if (strcmp (p, "/mono") && strcmp (p, "/mono-boehm") && strcmp (p, "/mono-sgen") && strcmp (p, "/pedump") && strcmp (p, "/monodis"))
return NULL;
*p = 0;
p = strrchr (path, '/');
if (p == NULL)
return NULL;
if (strcmp (p, "/bin") != 0)
return NULL;
*p = 0;
return path;
}
static void
fallback (void)
{
mono_set_dirs (mono_config_get_assemblies_dir (), mono_config_get_cfg_dir ());
}
static G_GNUC_UNUSED void
set_dirs (char *exe)
{
char *base;
char *config, *lib, *mono;
struct stat buf;
const char *bindir;
/*
* Only /usr prefix is treated specially
*/
bindir = mono_config_get_bin_dir ();
g_assert (bindir);
if (strncmp (exe, bindir, strlen (bindir)) == 0 || (base = compute_base (exe)) == NULL){
fallback ();
return;
}
config = g_build_filename (base, "etc", (const char*)NULL);
lib = g_build_filename (base, "lib", (const char*)NULL);
mono = g_build_filename (lib, "mono/4.5", (const char*)NULL); // FIXME: stop hardcoding 4.5 here
if (stat (mono, &buf) == -1)
fallback ();
else {
mono_set_dirs (lib, config);
}
g_free (config);
g_free (lib);
g_free (mono);
}
#endif /* HOST_WIN32 */
/**
* mono_set_rootdir:
*
* Registers the root directory for the Mono runtime, for Linux and Solaris 10,
* this auto-detects the prefix where Mono was installed.
*/
void
mono_set_rootdir (void)
{
char *path = minipal_getexepath();
if (path == NULL) {
#ifndef HOST_WIN32
fallback ();
#endif
return;
}
#if defined(HOST_WIN32) || (defined(HOST_DARWIN) && !defined(TARGET_ARM))
gchar *bindir, *installdir, *root, *config;
bindir = g_path_get_dirname (path);
installdir = g_path_get_dirname (bindir);
root = g_build_path (G_DIR_SEPARATOR_S, installdir, "lib", (const char*)NULL);
config = g_build_filename (root, "..", "etc", (const char*)NULL);
#ifdef HOST_WIN32
mono_set_dirs (root, config);
#else
if (g_file_test (root, G_FILE_TEST_EXISTS) && g_file_test (config, G_FILE_TEST_EXISTS))
mono_set_dirs (root, config);
else
fallback ();
#endif
g_free (config);
g_free (root);
g_free (installdir);
g_free (bindir);
g_free (path);
#elif defined(DISABLE_MONO_AUTODETECTION)
fallback ();
#else
set_dirs (path);
return;
#endif
}
/**
* mono_assemblies_init:
*
* Initialize global variables used by this module.
*/
void
mono_assemblies_init (void)
{
/*
* Initialize our internal paths if we have not been initialized yet.
* This happens when embedders use Mono.
*/
if (mono_assembly_getrootdir () == NULL)
mono_set_rootdir ();
check_path_env ();
mono_os_mutex_init_recursive (&assemblies_mutex);
}
gboolean
mono_assembly_fill_assembly_name_full (MonoImage *image, MonoAssemblyName *aname, gboolean copyBlobs)
{
MonoTableInfo *t = &image->tables [MONO_TABLE_ASSEMBLY];
guint32 cols [MONO_ASSEMBLY_SIZE];
gint32 machine, flags;
if (!table_info_get_rows (t))
return FALSE;
mono_metadata_decode_row (t, 0, cols, MONO_ASSEMBLY_SIZE);
aname->hash_len = 0;
aname->hash_value = NULL;
aname->name = mono_metadata_string_heap (image, cols [MONO_ASSEMBLY_NAME]);
if (copyBlobs)
aname->name = g_strdup (aname->name);
aname->culture = mono_metadata_string_heap (image, cols [MONO_ASSEMBLY_CULTURE]);
if (copyBlobs)
aname->culture = g_strdup (aname->culture);
aname->flags = cols [MONO_ASSEMBLY_FLAGS];
aname->major = cols [MONO_ASSEMBLY_MAJOR_VERSION];
aname->minor = cols [MONO_ASSEMBLY_MINOR_VERSION];
aname->build = cols [MONO_ASSEMBLY_BUILD_NUMBER];
aname->revision = cols [MONO_ASSEMBLY_REV_NUMBER];
aname->hash_alg = cols [MONO_ASSEMBLY_HASH_ALG];
if (cols [MONO_ASSEMBLY_PUBLIC_KEY]) {
guchar* token = (guchar *)g_malloc (8);
gchar* encoded;
const gchar* pkey;
int len;
pkey = mono_metadata_blob_heap (image, cols [MONO_ASSEMBLY_PUBLIC_KEY]);
len = mono_metadata_decode_blob_size (pkey, &pkey);
aname->public_key = (guchar*)pkey;
mono_digest_get_public_token (token, aname->public_key, len);
encoded = encode_public_tok (token, 8);
g_strlcpy ((char*)aname->public_key_token, encoded, MONO_PUBLIC_KEY_TOKEN_LENGTH);
g_free (encoded);
g_free (token);
}
else {
aname->public_key = NULL;
memset (aname->public_key_token, 0, MONO_PUBLIC_KEY_TOKEN_LENGTH);
}
if (cols [MONO_ASSEMBLY_PUBLIC_KEY]) {
aname->public_key = (guchar*)mono_metadata_blob_heap (image, cols [MONO_ASSEMBLY_PUBLIC_KEY]);
if (copyBlobs) {
const gchar *pkey_end;
int len = mono_metadata_decode_blob_size ((const gchar*) aname->public_key, &pkey_end);
pkey_end += len; /* move to end */
size_t size = pkey_end - (const gchar*)aname->public_key;
guchar *tmp = g_new (guchar, size);
memcpy (tmp, aname->public_key, size);
aname->public_key = tmp;
}
}
else
aname->public_key = 0;
machine = image->image_info->cli_header.coff.coff_machine;
flags = image->image_info->cli_cli_header.ch_flags;
switch (machine) {
case COFF_MACHINE_I386:
/* https://bugzilla.xamarin.com/show_bug.cgi?id=17632 */
if (flags & (CLI_FLAGS_32BITREQUIRED|CLI_FLAGS_PREFERRED32BIT))
aname->arch = MONO_PROCESSOR_ARCHITECTURE_X86;
else if ((flags & 0x70) == 0x70)
aname->arch = MONO_PROCESSOR_ARCHITECTURE_NONE;
else
aname->arch = MONO_PROCESSOR_ARCHITECTURE_MSIL;
break;
case COFF_MACHINE_IA64:
aname->arch = MONO_PROCESSOR_ARCHITECTURE_IA64;
break;
case COFF_MACHINE_AMD64:
aname->arch = MONO_PROCESSOR_ARCHITECTURE_AMD64;
break;
case COFF_MACHINE_ARM:
aname->arch = MONO_PROCESSOR_ARCHITECTURE_ARM;
break;
default:
break;
}
return TRUE;
}
/**
* mono_assembly_fill_assembly_name:
* \param image Image
* \param aname Name
* \returns TRUE if successful
*/
gboolean
mono_assembly_fill_assembly_name (MonoImage *image, MonoAssemblyName *aname)
{
return mono_assembly_fill_assembly_name_full (image, aname, FALSE);
}
/**
* mono_stringify_assembly_name:
* \param aname the assembly name.
*
* Convert \p aname into its string format. The returned string is dynamically
* allocated and should be freed by the caller.
*
* \returns a newly allocated string with a string representation of
* the assembly name.
*/
char*
mono_stringify_assembly_name (MonoAssemblyName *aname)
{
const char *quote = (aname->name && g_ascii_isspace (aname->name [0])) ? "\"" : "";
GString *str;
str = g_string_new (NULL);
g_string_append_printf (str, "%s%s%s", quote, aname->name, quote);
if (!aname->without_version)
g_string_append_printf (str, ", Version=%d.%d.%d.%d", aname->major, aname->minor, aname->build, aname->revision);
if (!aname->without_culture) {
if (aname->culture && *aname->culture)
g_string_append_printf (str, ", Culture=%s", aname->culture);
else
g_string_append_printf (str, ", Culture=%s", "neutral");
}
if (!aname->without_public_key_token) {
if (aname->public_key_token [0])
g_string_append_printf (str,", PublicKeyToken=%s%s", (char *)aname->public_key_token, (aname->flags & ASSEMBLYREF_RETARGETABLE_FLAG) ? ", Retargetable=Yes" : "");
else g_string_append_printf (str,", PublicKeyToken=%s%s", "null", (aname->flags & ASSEMBLYREF_RETARGETABLE_FLAG) ? ", Retargetable=Yes" : "");
}
	return g_string_free (str, FALSE);
}
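
/*
 * Illustrative sketch (compiled out, not part of the runtime): round-tripping
 * a display name through parse and stringify.  The output has the familiar
 * "Name, Version=..., Culture=..., PublicKeyToken=..." shape.
 */
#if 0
static void
example_round_trip (const char *display_name)
{
	MonoAssemblyName aname;

	if (mono_assembly_name_parse (display_name, &aname)) {
		char *s = mono_stringify_assembly_name (&aname);
		g_print ("%s\n", s);
		g_free (s);
		mono_assembly_name_free_internal (&aname);
	}
}
#endif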
static gchar*
assemblyref_public_tok (MonoImage *image, guint32 key_index, guint32 flags)
{
const gchar *public_tok;
int len;
public_tok = mono_metadata_blob_heap (image, key_index);
len = mono_metadata_decode_blob_size (public_tok, &public_tok);
if (flags & ASSEMBLYREF_FULL_PUBLIC_KEY_FLAG) {
guchar token [8];
mono_digest_get_public_token (token, (guchar*)public_tok, len);
return encode_public_tok (token, 8);
}
return encode_public_tok ((guchar*)public_tok, len);
}
static gchar*
assemblyref_public_tok_checked (MonoImage *image, guint32 key_index, guint32 flags, MonoError *error)
{
const gchar *public_tok;
int len;
public_tok = mono_metadata_blob_heap_checked (image, key_index, error);
return_val_if_nok (error, NULL);
if (!public_tok) {
mono_error_set_bad_image (error, image, "expected public key token (index = %d) in assembly reference, but the Blob heap is NULL", key_index);
return NULL;
}
len = mono_metadata_decode_blob_size (public_tok, &public_tok);
if (flags & ASSEMBLYREF_FULL_PUBLIC_KEY_FLAG) {
guchar token [8];
mono_digest_get_public_token (token, (guchar*)public_tok, len);
return encode_public_tok (token, 8);
}
return encode_public_tok ((guchar*)public_tok, len);
}
/**
* mono_assembly_addref:
* \param assembly the assembly to reference
*
* This routine increments the reference count on a MonoAssembly.
* The reference count is reduced every time the method mono_assembly_close() is
* invoked.
*/
gint32
mono_assembly_addref (MonoAssembly *assembly)
{
return mono_atomic_inc_i32 (&assembly->ref_count);
}
gint32
mono_assembly_decref (MonoAssembly *assembly)
{
return mono_atomic_dec_i32 (&assembly->ref_count);
}
/*
* CAUTION: This table must be kept in sync with
* ivkm/reflect/Fusion.cs
*/
#define SILVERLIGHT_KEY "7cec85d7bea7798e"
#define WINFX_KEY "31bf3856ad364e35"
#define ECMA_KEY "b77a5c561934e089"
#define MSFINAL_KEY "b03f5f7f11d50a3a"
#define COMPACTFRAMEWORK_KEY "969db8053d3322ac"
typedef struct {
const char *name;
const char *from;
const char *to;
} KeyRemapEntry;
static KeyRemapEntry key_remap_table[] = {
{ "CustomMarshalers", COMPACTFRAMEWORK_KEY, MSFINAL_KEY },
{ "Microsoft.CSharp", WINFX_KEY, MSFINAL_KEY },
{ "Microsoft.VisualBasic", COMPACTFRAMEWORK_KEY, MSFINAL_KEY },
{ "System", SILVERLIGHT_KEY, ECMA_KEY },
{ "System", COMPACTFRAMEWORK_KEY, ECMA_KEY },
{ "System.ComponentModel.Composition", WINFX_KEY, ECMA_KEY },
{ "System.ComponentModel.DataAnnotations", "ddd0da4d3e678217", WINFX_KEY },
{ "System.Core", SILVERLIGHT_KEY, ECMA_KEY },
{ "System.Core", COMPACTFRAMEWORK_KEY, ECMA_KEY },
{ "System.Data", COMPACTFRAMEWORK_KEY, ECMA_KEY },
{ "System.Data.DataSetExtensions", COMPACTFRAMEWORK_KEY, ECMA_KEY },
{ "System.Drawing", COMPACTFRAMEWORK_KEY, MSFINAL_KEY },
{ "System.Messaging", COMPACTFRAMEWORK_KEY, MSFINAL_KEY },
// FIXME: MS uses MSFINAL_KEY for .NET 4.5
{ "System.Net", SILVERLIGHT_KEY, MSFINAL_KEY },
{ "System.Numerics", WINFX_KEY, ECMA_KEY },
{ "System.Runtime.Serialization", SILVERLIGHT_KEY, ECMA_KEY },
{ "System.Runtime.Serialization", COMPACTFRAMEWORK_KEY, ECMA_KEY },
{ "System.ServiceModel", WINFX_KEY, ECMA_KEY },
{ "System.ServiceModel", COMPACTFRAMEWORK_KEY, ECMA_KEY },
{ "System.ServiceModel.Web", SILVERLIGHT_KEY, WINFX_KEY },
{ "System.Web.Services", COMPACTFRAMEWORK_KEY, MSFINAL_KEY },
{ "System.Windows", SILVERLIGHT_KEY, MSFINAL_KEY },
{ "System.Windows.Forms", COMPACTFRAMEWORK_KEY, ECMA_KEY },
{ "System.Xml", SILVERLIGHT_KEY, ECMA_KEY },
{ "System.Xml", COMPACTFRAMEWORK_KEY, ECMA_KEY },
{ "System.Xml.Linq", WINFX_KEY, ECMA_KEY },
{ "System.Xml.Linq", COMPACTFRAMEWORK_KEY, ECMA_KEY },
{ "System.Xml.Serialization", WINFX_KEY, ECMA_KEY }
};
static void
remap_keys (MonoAssemblyName *aname)
{
int i;
for (i = 0; i < G_N_ELEMENTS (key_remap_table); i++) {
const KeyRemapEntry *entry = &key_remap_table [i];
if (strcmp (aname->name, entry->name) ||
!mono_public_tokens_are_equal (aname->public_key_token, (const unsigned char*) entry->from))
continue;
memcpy (aname->public_key_token, entry->to, MONO_PUBLIC_KEY_TOKEN_LENGTH);
mono_trace (G_LOG_LEVEL_DEBUG, MONO_TRACE_ASSEMBLY,
"Remapped public key token of retargetable assembly %s from %s to %s",
aname->name, entry->from, entry->to);
return;
}
}
static MonoAssemblyName *
mono_assembly_remap_version (MonoAssemblyName *aname, MonoAssemblyName *dest_aname)
{
const MonoRuntimeInfo *current_runtime;
if (aname->name == NULL) return aname;
current_runtime = mono_get_runtime_info ();
if (aname->flags & ASSEMBLYREF_RETARGETABLE_FLAG) {
const AssemblyVersionSet* vset;
/* Remap to current runtime */
		vset = &current_runtime->version_sets [0];
memcpy (dest_aname, aname, sizeof(MonoAssemblyName));
dest_aname->major = vset->major;
dest_aname->minor = vset->minor;
dest_aname->build = vset->build;
dest_aname->revision = vset->revision;
dest_aname->flags &= ~ASSEMBLYREF_RETARGETABLE_FLAG;
/* Remap assembly name */
if (!strcmp (aname->name, "System.Net"))
dest_aname->name = g_strdup ("System");
remap_keys (dest_aname);
mono_trace (G_LOG_LEVEL_DEBUG, MONO_TRACE_ASSEMBLY,
"The request to load the retargetable assembly %s v%d.%d.%d.%d was remapped to %s v%d.%d.%d.%d",
aname->name,
aname->major, aname->minor, aname->build, aname->revision,
dest_aname->name,
vset->major, vset->minor, vset->build, vset->revision
);
return dest_aname;
}
return aname;
}
/**
* mono_assembly_get_assemblyref:
* \param image pointer to the \c MonoImage to extract the information from.
* \param index index to the assembly reference in the image.
* \param aname pointer to a \c MonoAssemblyName that will hold the returned value.
*
* Fills out the \p aname with the assembly name of the \p index assembly reference in \p image.
*/
void
mono_assembly_get_assemblyref (MonoImage *image, int index, MonoAssemblyName *aname)
{
MonoTableInfo *t;
guint32 cols [MONO_ASSEMBLYREF_SIZE];
const char *hash;
t = &image->tables [MONO_TABLE_ASSEMBLYREF];
mono_metadata_decode_row (t, index, cols, MONO_ASSEMBLYREF_SIZE);
// ECMA-335: II.22.5 - AssemblyRef
// HashValue can be null or non-null. If non-null it's an index into the blob heap
// Sometimes ILasm can create an image without a Blob heap.
hash = mono_metadata_blob_heap_null_ok (image, cols [MONO_ASSEMBLYREF_HASH_VALUE]);
if (hash) {
aname->hash_len = mono_metadata_decode_blob_size (hash, &hash);
aname->hash_value = hash;
} else {
aname->hash_len = 0;
aname->hash_value = NULL;
}
aname->name = mono_metadata_string_heap (image, cols [MONO_ASSEMBLYREF_NAME]);
aname->culture = mono_metadata_string_heap (image, cols [MONO_ASSEMBLYREF_CULTURE]);
aname->flags = cols [MONO_ASSEMBLYREF_FLAGS];
aname->major = cols [MONO_ASSEMBLYREF_MAJOR_VERSION];
aname->minor = cols [MONO_ASSEMBLYREF_MINOR_VERSION];
aname->build = cols [MONO_ASSEMBLYREF_BUILD_NUMBER];
aname->revision = cols [MONO_ASSEMBLYREF_REV_NUMBER];
if (cols [MONO_ASSEMBLYREF_PUBLIC_KEY]) {
gchar *token = assemblyref_public_tok (image, cols [MONO_ASSEMBLYREF_PUBLIC_KEY], aname->flags);
g_strlcpy ((char*)aname->public_key_token, token, MONO_PUBLIC_KEY_TOKEN_LENGTH);
g_free (token);
} else {
memset (aname->public_key_token, 0, MONO_PUBLIC_KEY_TOKEN_LENGTH);
}
}
static MonoAssembly *
search_bundle_for_assembly (MonoAssemblyLoadContext *alc, MonoAssemblyName *aname)
{
if (bundles == NULL && satellite_bundles == NULL)
return NULL;
MonoImageOpenStatus status;
MonoImage *image;
MonoAssemblyLoadRequest req;
image = mono_assembly_open_from_bundle (alc, aname->name, &status, aname->culture);
if (!image && !g_str_has_suffix (aname->name, ".dll")) {
char *name = g_strdup_printf ("%s.dll", aname->name);
image = mono_assembly_open_from_bundle (alc, name, &status, aname->culture);
}
if (image) {
mono_assembly_request_prepare_load (&req, alc);
return mono_assembly_request_load_from (image, aname->name, &req, &status);
}
return NULL;
}
static MonoAssembly*
netcore_load_reference (MonoAssemblyName *aname, MonoAssemblyLoadContext *alc, MonoAssembly *requesting, gboolean postload)
{
g_assert (alc != NULL);
MonoAssemblyName mapped_aname;
aname = mono_assembly_remap_version (aname, &mapped_aname);
MonoAssembly *reference = NULL;
gboolean is_satellite = !mono_assembly_name_culture_is_neutral (aname);
gboolean is_default = mono_alc_is_default (alc);
/*
* Try these until one of them succeeds (by returning a non-NULL reference):
* 1. Check if it's already loaded by the ALC.
*
* 2. If it's a non-default ALC, call the Load() method.
*
* 3. If the ALC is not the default and this is not a satellite request,
* check if it's already loaded by the default ALC.
*
* 4. If we have a bundle registered and this is not a satellite request,
* search the images for a matching name.
*
* 5. If we have a satellite bundle registered and this is a satellite request,
* find the parent ALC and search the images for a matching name and culture.
*
* 6. If the ALC is the default or this is not a satellite request,
* check the TPA list, APP_PATHS, and ApplicationBase.
*
* 7. If this is a satellite request, call the ALC ResolveSatelliteAssembly method.
*
* 8. Call the ALC Resolving event. If the ALC is not the default and this is not
* a satellite request, call the Resolving event in the default ALC first.
*
* 9. Call the ALC AssemblyResolve event (except for corlib satellite assemblies).
*
* 10. Return NULL.
*/
reference = mono_assembly_loaded_internal (alc, aname);
if (reference) {
mono_trace (G_LOG_LEVEL_DEBUG, MONO_TRACE_ASSEMBLY, "Assembly already loaded in the active ALC: '%s'.", aname->name);
goto leave;
}
if (!is_default) {
reference = mono_alc_invoke_resolve_using_load_nofail (alc, aname);
if (reference) {
mono_trace (G_LOG_LEVEL_DEBUG, MONO_TRACE_ASSEMBLY, "Assembly found using Load method: '%s'.", aname->name);
goto leave;
}
}
if (!is_default && !is_satellite) {
reference = mono_assembly_loaded_internal (mono_alc_get_default (), aname);
if (reference) {
mono_trace (G_LOG_LEVEL_DEBUG, MONO_TRACE_ASSEMBLY, "Assembly already loaded in the default ALC: '%s'.", aname->name);
goto leave;
}
}
if (bundles != NULL && !is_satellite) {
reference = search_bundle_for_assembly (mono_alc_get_default (), aname);
if (reference) {
mono_trace (G_LOG_LEVEL_DEBUG, MONO_TRACE_ASSEMBLY, "Assembly found in the bundle: '%s'.", aname->name);
goto leave;
}
}
if (satellite_bundles != NULL && is_satellite) {
// Satellite assembly byname requests should be loaded in the same ALC as their parent assembly
size_t name_len = strlen (aname->name);
char *parent_name = NULL;
MonoAssemblyLoadContext *parent_alc = NULL;
		if (g_str_has_suffix (aname->name, MONO_ASSEMBLY_RESOURCE_SUFFIX)) {
			/* strip the resource suffix and append ".dll"; free the intermediate string instead of leaking it */
			char *parent_basename = g_strndup (aname->name, name_len - strlen (MONO_ASSEMBLY_RESOURCE_SUFFIX));
			parent_name = g_strdup_printf ("%s.dll", parent_basename);
			g_free (parent_basename);
		}
if (parent_name) {
MonoAssemblyOpenRequest req;
mono_assembly_request_prepare_open (&req, alc);
MonoAssembly *parent_assembly = mono_assembly_request_open (parent_name, &req, NULL);
parent_alc = mono_assembly_get_alc (parent_assembly);
}
if (parent_alc)
reference = search_bundle_for_assembly (parent_alc, aname);
if (reference) {
mono_trace (G_LOG_LEVEL_DEBUG, MONO_TRACE_ASSEMBLY, "Assembly found in the satellite bundle: '%s'.", aname->name);
goto leave;
}
}
if (is_default || !is_satellite) {
reference = invoke_assembly_preload_hook (mono_alc_get_default (), aname, assemblies_path);
if (reference) {
mono_trace (G_LOG_LEVEL_DEBUG, MONO_TRACE_ASSEMBLY, "Assembly found with the filesystem probing logic: '%s'.", aname->name);
goto leave;
}
}
if (is_satellite) {
reference = mono_alc_invoke_resolve_using_resolve_satellite_nofail (alc, aname);
if (reference) {
mono_trace (G_LOG_LEVEL_DEBUG, MONO_TRACE_ASSEMBLY, "Assembly found with ResolveSatelliteAssembly method: '%s'.", aname->name);
goto leave;
}
}
// For compatibility with CoreCLR, invoke the Resolving event in the default ALC first whenever loading
// a non-satellite assembly into a non-default ALC. See: https://github.com/dotnet/runtime/issues/54814
if (!is_default && !is_satellite) {
reference = mono_alc_invoke_resolve_using_resolving_event_nofail (mono_alc_get_default (), aname);
if (reference) {
mono_trace (G_LOG_LEVEL_DEBUG, MONO_TRACE_ASSEMBLY, "Assembly found with the Resolving event (default ALC): '%s'.", aname->name);
goto leave;
}
}
reference = mono_alc_invoke_resolve_using_resolving_event_nofail (alc, aname);
if (reference) {
mono_trace (G_LOG_LEVEL_DEBUG, MONO_TRACE_ASSEMBLY, "Assembly found with the Resolving event: '%s'.", aname->name);
goto leave;
}
// Looking up corlib resources here can cause an infinite loop
// See: https://github.com/dotnet/coreclr/blob/0a762eb2f3a299489c459da1ddeb69e042008f07/src/vm/appdomain.cpp#L5178-L5239
if (!(strcmp (aname->name, MONO_ASSEMBLY_CORLIB_RESOURCE_NAME) == 0 && is_satellite) && postload) {
reference = mono_assembly_invoke_search_hook_internal (alc, requesting, aname, TRUE);
if (reference) {
mono_trace (G_LOG_LEVEL_DEBUG, MONO_TRACE_ASSEMBLY, "Assembly found with AssemblyResolve event: '%s'.", aname->name);
goto leave;
}
}
leave:
return reference;
}
/**
* mono_assembly_get_assemblyref_checked:
* \param image pointer to the \c MonoImage to extract the information from.
* \param index index to the assembly reference in the image.
* \param aname pointer to a \c MonoAssemblyName that will hold the returned value.
* \param error set on error
*
* Fills out the \p aname with the assembly name of the \p index assembly reference in \p image.
*
* \returns TRUE on success, otherwise sets \p error and returns FALSE
*/
gboolean
mono_assembly_get_assemblyref_checked (MonoImage *image, int index, MonoAssemblyName *aname, MonoError *error)
{
guint32 cols [MONO_ASSEMBLYREF_SIZE];
const char *hash;
if (image_is_dynamic (image)) {
MonoDynamicTable *t = &(((MonoDynamicImage*) image)->tables [MONO_TABLE_ASSEMBLYREF]);
if (!mono_metadata_decode_row_dynamic_checked ((MonoDynamicImage*)image, t, index, cols, MONO_ASSEMBLYREF_SIZE, error))
return FALSE;
}
else {
MonoTableInfo *t = &image->tables [MONO_TABLE_ASSEMBLYREF];
if (!mono_metadata_decode_row_checked (image, t, index, cols, MONO_ASSEMBLYREF_SIZE, error))
return FALSE;
}
// ECMA-335: II.22.5 - AssemblyRef
// HashValue can be null or non-null. If non-null it's an index into the blob heap
// Sometimes ILasm can create an image without a Blob heap.
hash = mono_metadata_blob_heap_checked (image, cols [MONO_ASSEMBLYREF_HASH_VALUE], error);
return_val_if_nok (error, FALSE);
if (hash) {
aname->hash_len = mono_metadata_decode_blob_size (hash, &hash);
aname->hash_value = hash;
} else {
aname->hash_len = 0;
aname->hash_value = NULL;
}
aname->name = mono_metadata_string_heap_checked (image, cols [MONO_ASSEMBLYREF_NAME], error);
return_val_if_nok (error, FALSE);
aname->culture = mono_metadata_string_heap_checked (image, cols [MONO_ASSEMBLYREF_CULTURE], error);
return_val_if_nok (error, FALSE);
aname->flags = cols [MONO_ASSEMBLYREF_FLAGS];
aname->major = cols [MONO_ASSEMBLYREF_MAJOR_VERSION];
aname->minor = cols [MONO_ASSEMBLYREF_MINOR_VERSION];
aname->build = cols [MONO_ASSEMBLYREF_BUILD_NUMBER];
aname->revision = cols [MONO_ASSEMBLYREF_REV_NUMBER];
if (cols [MONO_ASSEMBLYREF_PUBLIC_KEY]) {
gchar *token = assemblyref_public_tok_checked (image, cols [MONO_ASSEMBLYREF_PUBLIC_KEY], aname->flags, error);
return_val_if_nok (error, FALSE);
g_strlcpy ((char*)aname->public_key_token, token, MONO_PUBLIC_KEY_TOKEN_LENGTH);
g_free (token);
} else {
memset (aname->public_key_token, 0, MONO_PUBLIC_KEY_TOKEN_LENGTH);
}
return TRUE;
}
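/*
 * Illustrative sketch (not part of the runtime): an internal caller could
 * decode every assembly reference of an image like this. The loop itself is
 * hypothetical; only mono_assembly_get_assemblyref_checked and the
 * MONO_TABLE_ASSEMBLYREF table come from the code above.
 *
 *   int rows = table_info_get_rows (&image->tables [MONO_TABLE_ASSEMBLYREF]);
 *   for (int i = 0; i < rows; ++i) {
 *           ERROR_DECL (error);
 *           MonoAssemblyName aname;
 *           if (mono_assembly_get_assemblyref_checked (image, i, &aname, error))
 *                   g_print ("ref %d: %s %d.%d.%d.%d\n", i, aname.name,
 *                            aname.major, aname.minor, aname.build, aname.revision);
 *           mono_error_cleanup (error);
 *   }
 */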
/**
* mono_assembly_load_reference:
*/
void
mono_assembly_load_reference (MonoImage *image, int index)
{
MonoAssembly *reference;
MonoAssemblyName aname;
MonoImageOpenStatus status = MONO_IMAGE_OK;
memset (&aname, 0, sizeof (MonoAssemblyName));
/*
* image->references is shared between threads, so we need to access
* it inside a critical section.
*/
mono_image_lock (image);
if (!image->references) {
MonoTableInfo *t = &image->tables [MONO_TABLE_ASSEMBLYREF];
int n = table_info_get_rows (t);
image->references = g_new0 (MonoAssembly *, n + 1);
image->nreferences = n;
}
reference = image->references [index];
mono_image_unlock (image);
if (reference)
return;
mono_trace (G_LOG_LEVEL_DEBUG, MONO_TRACE_ASSEMBLY, "Requesting loading reference %d (of %d) of %s", index, image->nreferences, image->name);
ERROR_DECL (local_error);
mono_assembly_get_assemblyref_checked (image, index, &aname, local_error);
if (!is_ok (local_error)) {
mono_trace (G_LOG_LEVEL_WARNING, MONO_TRACE_ASSEMBLY, "Decoding assembly reference %d (of %d) of %s failed due to: %s", index, image->nreferences, image->name, mono_error_get_message (local_error));
mono_error_cleanup (local_error);
goto commit_reference;
}
if (image->assembly) {
if (mono_trace_is_traced (G_LOG_LEVEL_INFO, MONO_TRACE_ASSEMBLY)) {
char *aname_str = mono_stringify_assembly_name (&aname);
mono_trace (G_LOG_LEVEL_DEBUG, MONO_TRACE_ASSEMBLY, "Loading reference %d of %s (%s), looking for %s",
index, image->name, mono_alc_is_default (mono_image_get_alc (image)) ? "default ALC" : "custom ALC" ,
aname_str);
g_free (aname_str);
}
MonoAssemblyByNameRequest req;
mono_assembly_request_prepare_byname (&req, mono_image_get_alc (image));
req.requesting_assembly = image->assembly;
//req.no_postload_search = TRUE; // FIXME: should this be set?
reference = mono_assembly_request_byname (&aname, &req, NULL);
} else {
g_assertf (image->assembly, "While loading reference %d MonoImage %s doesn't have a MonoAssembly", index, image->name);
}
if (reference == NULL){
char *extra_msg;
if (status == MONO_IMAGE_ERROR_ERRNO && errno == ENOENT) {
extra_msg = g_strdup_printf ("The assembly was not found in the Global Assembly Cache, a path listed in the MONO_PATH environment variable, or in the location of the executing assembly (%s).\n", image->assembly != NULL ? image->assembly->basedir : "" );
} else if (status == MONO_IMAGE_ERROR_ERRNO) {
extra_msg = g_strdup_printf ("System error: %s\n", strerror (errno));
} else if (status == MONO_IMAGE_MISSING_ASSEMBLYREF) {
extra_msg = g_strdup ("Cannot find an assembly referenced from this one.\n");
} else if (status == MONO_IMAGE_IMAGE_INVALID) {
extra_msg = g_strdup ("The file exists but is not a valid assembly.\n");
} else {
extra_msg = g_strdup ("");
}
mono_trace (G_LOG_LEVEL_WARNING, MONO_TRACE_ASSEMBLY, "The following assembly referenced from %s could not be loaded:\n"
" Assembly: %s (assemblyref_index=%d)\n"
" Version: %d.%d.%d.%d\n"
" Public Key: %s\n%s",
image->name, aname.name, index,
aname.major, aname.minor, aname.build, aname.revision,
strlen ((char*)aname.public_key_token) == 0 ? "(none)" : (char*)aname.public_key_token, extra_msg);
g_free (extra_msg);
}
commit_reference:
mono_image_lock (image);
if (reference == NULL) {
/* Flag as not found */
reference = (MonoAssembly *)REFERENCE_MISSING;
}
if (!image->references [index]) {
if (reference != REFERENCE_MISSING){
mono_assembly_addref (reference);
if (image->assembly)
mono_trace (G_LOG_LEVEL_DEBUG, MONO_TRACE_ASSEMBLY, "Assembly Ref addref %s[%p] -> %s[%p]: %d",
image->assembly->aname.name, image->assembly, reference->aname.name, reference, reference->ref_count);
} else {
if (image->assembly)
mono_trace (G_LOG_LEVEL_INFO, MONO_TRACE_ASSEMBLY, "Failed to load assembly %s[%p].",
image->assembly->aname.name, image->assembly);
}
image->references [index] = reference;
}
mono_image_unlock (image);
if (image->references [index] != reference) {
/* Somebody loaded it before us */
mono_assembly_close (reference);
}
}
/**
* mono_assembly_load_references:
* \param image
* \param status
 * \deprecated There is no reason to use this method anymore; it does nothing.
 *
 * This method is now a no-op; it does nothing other than setting \p status to \c MONO_IMAGE_OK.
*/
void
mono_assembly_load_references (MonoImage *image, MonoImageOpenStatus *status)
{
/* This is a no-op now but it is part of the embedding API so we can't remove it */
if (status)
*status = MONO_IMAGE_OK;
}
typedef struct AssemblyLoadHook AssemblyLoadHook;
struct AssemblyLoadHook {
AssemblyLoadHook *next;
union {
MonoAssemblyLoadFunc v1;
MonoAssemblyLoadFuncV2 v2;
} func;
int version;
gpointer user_data;
};
static AssemblyLoadHook *assembly_load_hook = NULL;
void
mono_assembly_invoke_load_hook_internal (MonoAssemblyLoadContext *alc, MonoAssembly *ass)
{
AssemblyLoadHook *hook;
for (hook = assembly_load_hook; hook; hook = hook->next) {
if (hook->version == 1) {
hook->func.v1 (ass, hook->user_data);
} else {
ERROR_DECL (hook_error);
g_assert (hook->version == 2);
hook->func.v2 (alc, ass, hook->user_data, hook_error);
mono_error_assert_ok (hook_error); /* FIXME: proper error handling */
}
}
}
/**
* mono_assembly_invoke_load_hook:
*/
void
mono_assembly_invoke_load_hook (MonoAssembly *ass)
{
mono_assembly_invoke_load_hook_internal (mono_alc_get_default (), ass);
}
static void
mono_install_assembly_load_hook_v1 (MonoAssemblyLoadFunc func, gpointer user_data)
{
AssemblyLoadHook *hook;
g_return_if_fail (func != NULL);
hook = g_new0 (AssemblyLoadHook, 1);
hook->version = 1;
hook->func.v1 = func;
hook->user_data = user_data;
hook->next = assembly_load_hook;
assembly_load_hook = hook;
}
void
mono_install_assembly_load_hook_v2 (MonoAssemblyLoadFuncV2 func, gpointer user_data, gboolean append)
{
g_return_if_fail (func != NULL);
AssemblyLoadHook *hook = g_new0 (AssemblyLoadHook, 1);
hook->version = 2;
hook->func.v2 = func;
hook->user_data = user_data;
if (append && assembly_load_hook != NULL) { // If we don't have any installed hooks, append vs prepend is irrelevant
AssemblyLoadHook *old = assembly_load_hook;
while (old->next != NULL)
old = old->next;
old->next = hook;
} else {
hook->next = assembly_load_hook;
assembly_load_hook = hook;
}
}
/**
* mono_install_assembly_load_hook:
*/
void
mono_install_assembly_load_hook (MonoAssemblyLoadFunc func, gpointer user_data)
{
mono_install_assembly_load_hook_v1 (func, user_data);
}
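/*
 * Illustrative sketch (assumption, not runtime code): an embedder can be
 * notified of every assembly load with a v1 hook. The callback name
 * on_assembly_load is hypothetical.
 *
 *   static void
 *   on_assembly_load (MonoAssembly *ass, gpointer user_data)
 *   {
 *           g_print ("loaded: %s\n", ass->aname.name);
 *   }
 *
 *   mono_install_assembly_load_hook (on_assembly_load, NULL);
 */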
typedef struct AssemblySearchHook AssemblySearchHook;
struct AssemblySearchHook {
AssemblySearchHook *next;
union {
MonoAssemblySearchFunc v1;
MonoAssemblySearchFuncV2 v2;
} func;
gboolean postload;
int version;
gpointer user_data;
};
static AssemblySearchHook *assembly_search_hook = NULL;
static MonoAssembly*
mono_assembly_invoke_search_hook_internal (MonoAssemblyLoadContext *alc, MonoAssembly *requesting, MonoAssemblyName *aname, gboolean postload)
{
AssemblySearchHook *hook;
for (hook = assembly_search_hook; hook; hook = hook->next) {
if (hook->postload == postload) {
MonoAssembly *ass;
if (hook->version == 1) {
ass = hook->func.v1 (aname, hook->user_data);
} else {
ERROR_DECL (hook_error);
g_assert (hook->version == 2);
ass = hook->func.v2 (alc, requesting, aname, postload, hook->user_data, hook_error);
mono_error_assert_ok (hook_error); /* FIXME: proper error handling */
}
if (ass)
return ass;
}
}
return NULL;
}
/**
* mono_assembly_invoke_search_hook:
*/
MonoAssembly*
mono_assembly_invoke_search_hook (MonoAssemblyName *aname)
{
return mono_assembly_invoke_search_hook_internal (NULL, NULL, aname, FALSE);
}
static void
mono_install_assembly_search_hook_internal_v1 (MonoAssemblySearchFunc func, gpointer user_data, gboolean postload)
{
AssemblySearchHook *hook;
g_return_if_fail (func != NULL);
hook = g_new0 (AssemblySearchHook, 1);
hook->version = 1;
hook->func.v1 = func;
hook->user_data = user_data;
hook->postload = postload;
hook->next = assembly_search_hook;
assembly_search_hook = hook;
}
void
mono_install_assembly_search_hook_v2 (MonoAssemblySearchFuncV2 func, gpointer user_data, gboolean postload, gboolean append)
{
if (func == NULL)
return;
AssemblySearchHook *hook = g_new0 (AssemblySearchHook, 1);
hook->version = 2;
hook->func.v2 = func;
hook->user_data = user_data;
hook->postload = postload;
if (append && assembly_search_hook != NULL) { // If we don't have any installed hooks, append vs prepend is irrelevant
AssemblySearchHook *old = assembly_search_hook;
while (old->next != NULL)
old = old->next;
old->next = hook;
} else {
hook->next = assembly_search_hook;
assembly_search_hook = hook;
}
}
/**
* mono_install_assembly_search_hook:
*/
void
mono_install_assembly_search_hook (MonoAssemblySearchFunc func, gpointer user_data)
{
mono_install_assembly_search_hook_internal_v1 (func, user_data, FALSE);
}
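/*
 * Illustrative sketch: a v1 search hook that only answers for one assembly
 * and defers to the rest of the loader otherwise. The names find_my_assembly
 * and my_assembly are hypothetical.
 *
 *   static MonoAssembly*
 *   find_my_assembly (MonoAssemblyName *aname, gpointer user_data)
 *   {
 *           if (!strcmp (aname->name, "MyEmbeddedAssembly"))
 *                   return my_assembly; // previously loaded elsewhere
 *           return NULL; // let the remaining hooks / loader run
 *   }
 *
 *   mono_install_assembly_search_hook (find_my_assembly, NULL);
 */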
/**
* mono_install_assembly_refonly_search_hook:
*/
void
mono_install_assembly_refonly_search_hook (MonoAssemblySearchFunc func, gpointer user_data)
{
	/* Ignore refonly hooks, they will never fire */
}
/**
* mono_install_assembly_postload_search_hook:
*/
void
mono_install_assembly_postload_search_hook (MonoAssemblySearchFunc func, gpointer user_data)
{
mono_install_assembly_search_hook_internal_v1 (func, user_data, TRUE);
}
void
mono_install_assembly_postload_refonly_search_hook (MonoAssemblySearchFunc func, gpointer user_data)
{
	/* Ignore refonly hooks, they will never fire */
}
typedef struct AssemblyPreLoadHook AssemblyPreLoadHook;
struct AssemblyPreLoadHook {
AssemblyPreLoadHook *next;
union {
MonoAssemblyPreLoadFunc v1; // legacy internal use
MonoAssemblyPreLoadFuncV2 v2; // current internal use
MonoAssemblyPreLoadFuncV3 v3; // netcore external use
} func;
gpointer user_data;
gint32 version;
};
static AssemblyPreLoadHook *assembly_preload_hook = NULL;
static MonoAssembly *
invoke_assembly_preload_hook (MonoAssemblyLoadContext *alc, MonoAssemblyName *aname, gchar **apath)
{
AssemblyPreLoadHook *hook;
MonoAssembly *assembly;
for (hook = assembly_preload_hook; hook; hook = hook->next) {
if (hook->version == 1)
assembly = hook->func.v1 (aname, apath, hook->user_data);
else {
ERROR_DECL (error);
g_assert (hook->version == 2 || hook->version == 3);
if (hook->version == 2)
assembly = hook->func.v2 (alc, aname, apath, hook->user_data, error);
else { // v3
/*
* For the default ALC, pass the globally known gchandle (since it's never collectible, it's always a strong handle).
* For other ALCs, make a new strong handle that is passed to the caller.
 * Early at startup the default ALC exists but its managed object doesn't yet, so the default ALC gchandle points to null.
*/
gboolean needs_free = TRUE;
MonoGCHandle strong_gchandle;
if (mono_alc_is_default (alc)) {
needs_free = FALSE;
strong_gchandle = alc->gchandle;
} else
strong_gchandle = mono_gchandle_from_handle (mono_gchandle_get_target_handle (alc->gchandle), TRUE);
assembly = hook->func.v3 (strong_gchandle, aname, apath, hook->user_data, error);
if (needs_free)
mono_gchandle_free_internal (strong_gchandle);
}
/* TODO: propagate error out to callers */
mono_error_assert_ok (error);
}
if (assembly != NULL)
return assembly;
}
return NULL;
}
/**
* mono_install_assembly_preload_hook:
*/
void
mono_install_assembly_preload_hook (MonoAssemblyPreLoadFunc func, gpointer user_data)
{
AssemblyPreLoadHook *hook;
g_return_if_fail (func != NULL);
hook = g_new0 (AssemblyPreLoadHook, 1);
hook->version = 1;
hook->func.v1 = func;
hook->user_data = user_data;
hook->next = assembly_preload_hook;
assembly_preload_hook = hook;
}
/**
* mono_install_assembly_refonly_preload_hook:
*/
void
mono_install_assembly_refonly_preload_hook (MonoAssemblyPreLoadFunc func, gpointer user_data)
{
/* Ignore refonly hooks, they never fire */
}
void
mono_install_assembly_preload_hook_v2 (MonoAssemblyPreLoadFuncV2 func, gpointer user_data, gboolean append)
{
AssemblyPreLoadHook *hook;
g_return_if_fail (func != NULL);
AssemblyPreLoadHook **hooks = &assembly_preload_hook;
hook = g_new0 (AssemblyPreLoadHook, 1);
hook->version = 2;
hook->func.v2 = func;
hook->user_data = user_data;
if (append && *hooks != NULL) { // If we don't have any installed hooks, append vs prepend is irrelevant
AssemblyPreLoadHook *old = *hooks;
while (old->next != NULL)
old = old->next;
old->next = hook;
} else {
hook->next = *hooks;
*hooks = hook;
}
}
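/*
 * Illustrative sketch: a v2 preload hook receives the resolving ALC and a
 * MonoError to report failures. The callback name preload_from_memory is
 * hypothetical; returning NULL lets the normal probing logic continue.
 *
 *   static MonoAssembly*
 *   preload_from_memory (MonoAssemblyLoadContext *alc, MonoAssemblyName *aname,
 *                        gchar **apath, gpointer user_data, MonoError *error)
 *   {
 *           return NULL; // nothing special to do for this name
 *   }
 *
 *   mono_install_assembly_preload_hook_v2 (preload_from_memory, NULL, FALSE);
 */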
void
mono_install_assembly_preload_hook_v3 (MonoAssemblyPreLoadFuncV3 func, gpointer user_data, gboolean append)
{
AssemblyPreLoadHook *hook;
g_return_if_fail (func != NULL);
hook = g_new0 (AssemblyPreLoadHook, 1);
hook->version = 3;
hook->func.v3 = func;
hook->user_data = user_data;
if (append && assembly_preload_hook != NULL) {
AssemblyPreLoadHook *old = assembly_preload_hook;
while (old->next != NULL)
old = old->next;
old->next = hook;
} else {
hook->next = assembly_preload_hook;
assembly_preload_hook = hook;
}
}
static gchar *
absolute_dir (const gchar *filename)
{
gchar *cwd;
gchar *mixed;
gchar **parts;
gchar *part;
GList *list, *tmp;
GString *result;
gchar *res;
gint i;
if (g_path_is_absolute (filename)) {
part = g_path_get_dirname (filename);
res = g_strconcat (part, G_DIR_SEPARATOR_S, (const char*)NULL);
g_free (part);
return res;
}
cwd = g_get_current_dir ();
mixed = g_build_filename (cwd, filename, (const char*)NULL);
parts = g_strsplit (mixed, G_DIR_SEPARATOR_S, 0);
g_free (mixed);
g_free (cwd);
list = NULL;
for (i = 0; (part = parts [i]) != NULL; i++) {
if (!strcmp (part, "."))
continue;
if (!strcmp (part, "..")) {
if (list && list->next) /* Don't remove root */
list = g_list_delete_link (list, list);
} else {
list = g_list_prepend (list, part);
}
}
result = g_string_new ("");
list = g_list_reverse (list);
/* Ignores last data pointer, which should be the filename */
for (tmp = list; tmp && tmp->next != NULL; tmp = tmp->next){
if (tmp->data)
g_string_append_printf (result, "%s%c", (char *) tmp->data,
G_DIR_SEPARATOR);
}
res = result->str;
g_string_free (result, FALSE);
g_list_free (list);
g_strfreev (parts);
if (*res == '\0') {
g_free (res);
return g_strdup (".");
}
return res;
}
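/*
 * For example (assuming a current directory of /home/user), absolute_dir
 * ("../lib/foo.dll") collapses the "." and ".." components and yields
 * "/home/lib/", while absolute_dir ("/opt/app/bar.dll") yields "/opt/app/".
 */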
static MonoImage *
open_from_bundle_internal (MonoAssemblyLoadContext *alc, const char *filename, MonoImageOpenStatus *status, gboolean is_satellite)
{
if (!bundles)
return NULL;
MonoImage *image = NULL;
char *name = is_satellite ? g_strdup (filename) : g_path_get_basename (filename);
for (int i = 0; !image && bundles [i]; ++i) {
if (strcmp (bundles [i]->name, name) == 0) {
// Since bundled images don't exist on disk, don't give them a legit filename
image = mono_image_open_from_data_internal (alc, (char*)bundles [i]->data, bundles [i]->size, FALSE, status, FALSE, name, NULL);
break;
}
}
g_free (name);
return image;
}
static MonoImage *
open_from_satellite_bundle (MonoAssemblyLoadContext *alc, const char *filename, MonoImageOpenStatus *status, const char *culture)
{
if (!satellite_bundles)
return NULL;
MonoImage *image = NULL;
char *name = g_strdup (filename);
for (int i = 0; !image && satellite_bundles [i]; ++i) {
if (strcmp (satellite_bundles [i]->name, name) == 0 && strcmp (satellite_bundles [i]->culture, culture) == 0) {
char *bundle_name = g_strconcat (culture, "/", name, (const char *)NULL);
image = mono_image_open_from_data_internal (alc, (char *)satellite_bundles [i]->data, satellite_bundles [i]->size, FALSE, status, FALSE, bundle_name, NULL);
g_free (bundle_name);
break;
}
}
g_free (name);
return image;
}
/**
* mono_assembly_open_from_bundle:
* \param filename Filename requested
* \param status return status code
*
 * This routine tries to open the assembly specified by \p filename from the
 * registered bundles. If it is found, the \c MonoImage for it is returned;
 * if not found, NULL is returned.
*/
MonoImage *
mono_assembly_open_from_bundle (MonoAssemblyLoadContext *alc, const char *filename, MonoImageOpenStatus *status, const char *culture)
{
/*
* we do a very simple search for bundled assemblies: it's not a general
* purpose assembly loading mechanism.
*/
MonoImage *image = NULL;
gboolean is_satellite = culture && culture [0] != 0;
if (is_satellite)
image = open_from_satellite_bundle (alc, filename, status, culture);
else
image = open_from_bundle_internal (alc, filename, status, FALSE);
if (image) {
mono_image_addref (image);
mono_trace (G_LOG_LEVEL_DEBUG, MONO_TRACE_ASSEMBLY, "Assembly Loader loaded assembly from bundle: '%s'.", filename);
}
return image;
}
/**
* mono_assembly_open_full:
* \param filename the file to load
* \param status return status code
* \param refonly Whether this assembly is being opened in "reflection-only" mode.
*
* This loads an assembly from the specified \p filename. The \p filename allows
* a local URL (starting with a \c file:// prefix). If a file prefix is used, the
* filename is interpreted as a URL, and the filename is URL-decoded. Otherwise the file
* is treated as a local path.
*
* First, an attempt is made to load the assembly from the bundled executable (for those
* deployments that have been done with the \c mkbundle tool or for scenarios where the
* assembly has been registered as an embedded assembly). If this is not the case, then
* the assembly is loaded from disk using `api:mono_image_open_full`.
*
* If \p refonly is set to true, then the assembly is loaded purely for inspection with
* the \c System.Reflection API.
*
* \returns NULL on error, with the \p status set to an error code, or a pointer
* to the assembly.
*/
MonoAssembly *
mono_assembly_open_full (const char *filename, MonoImageOpenStatus *status, gboolean refonly)
{
if (refonly) {
if (status)
*status = MONO_IMAGE_IMAGE_INVALID;
return NULL;
}
MonoAssembly *res;
MONO_ENTER_GC_UNSAFE;
MonoAssemblyOpenRequest req;
mono_assembly_request_prepare_open (&req, mono_alc_get_default ());
res = mono_assembly_request_open (filename, &req, status);
MONO_EXIT_GC_UNSAFE;
return res;
}
MonoAssembly *
mono_assembly_request_open (const char *filename, const MonoAssemblyOpenRequest *open_req,
MonoImageOpenStatus *status)
{
MonoImage *image;
MonoAssembly *ass;
MonoImageOpenStatus def_status;
gchar *fname;
gboolean loaded_from_bundle;
MonoAssemblyLoadRequest load_req;
/* we will be overwriting the load request's asmctx.*/
memcpy (&load_req, &open_req->request, sizeof (load_req));
g_return_val_if_fail (filename != NULL, NULL);
if (!status)
status = &def_status;
*status = MONO_IMAGE_OK;
fname = g_strdup (filename);
mono_trace (G_LOG_LEVEL_DEBUG, MONO_TRACE_ASSEMBLY,
"Assembly Loader probing location: '%s'.", fname);
image = NULL;
// If VM built with mkbundle
loaded_from_bundle = FALSE;
if (bundles != NULL || satellite_bundles != NULL) {
/* We don't know the culture of the filename we're loading here, so this call is not culture aware. */
image = mono_assembly_open_from_bundle (load_req.alc, fname, status, NULL);
loaded_from_bundle = image != NULL;
}
if (!image)
image = mono_image_open_a_lot (load_req.alc, fname, status);
if (!image){
if (*status == MONO_IMAGE_OK)
*status = MONO_IMAGE_ERROR_ERRNO;
g_free (fname);
return NULL;
}
if (image->assembly) {
/* We want to return the MonoAssembly that's already loaded,
* but if we're using the strict assembly loader, we also need
* to check that the previously loaded assembly matches the
* predicate. It could be that we previously loaded a
* different version that happens to have the filename that
* we're currently probing. */
if (mono_loader_get_strict_assembly_name_check () &&
load_req.predicate && !load_req.predicate (image->assembly, load_req.predicate_ud)) {
mono_image_close (image);
g_free (fname);
return NULL;
} else {
/* Already loaded by another appdomain */
mono_assembly_invoke_load_hook_internal (load_req.alc, image->assembly);
mono_image_close (image);
g_free (fname);
return image->assembly;
}
}
ass = mono_assembly_request_load_from (image, fname, &load_req, status);
if (ass) {
if (!loaded_from_bundle)
mono_trace (G_LOG_LEVEL_DEBUG, MONO_TRACE_ASSEMBLY,
"Assembly Loader loaded assembly from location: '%s'.", filename);
}
/* Clear the reference added by mono_image_open */
mono_image_close (image);
g_free (fname);
return ass;
}
static void
free_assembly_name_item (gpointer val, gpointer user_data)
{
mono_assembly_name_free_internal ((MonoAssemblyName *)val);
g_free (val);
}
/**
* mono_assembly_load_friends:
* \param ass an assembly
*
* Load the list of friend assemblies that are allowed to access
* the assembly's internal types and members. They are stored as assembly
* names in custom attributes.
*
 * This is an internal method; we need it because when we load mscorlib
 * the InternalsVisibleTo custom attributes are not loaded yet,
 * so we need to load these after we initialize the runtime.
*
* LOCKING: Acquires the assemblies lock plus the loader lock.
*/
void
mono_assembly_load_friends (MonoAssembly* ass)
{
ERROR_DECL (error);
int i;
MonoCustomAttrInfo* attrs;
if (ass->friend_assembly_names_inited)
return;
attrs = mono_custom_attrs_from_assembly_checked (ass, FALSE, error);
mono_error_assert_ok (error);
if (!attrs) {
mono_assemblies_lock ();
ass->friend_assembly_names_inited = TRUE;
mono_assemblies_unlock ();
return;
}
mono_assemblies_lock ();
if (ass->friend_assembly_names_inited) {
mono_assemblies_unlock ();
return;
}
mono_assemblies_unlock ();
GSList *visible_list = NULL;
GSList *ignores_list = NULL;
/*
 * We build the list outside the assemblies lock; the worst that can happen
* is that we'll need to free the allocated list.
*/
for (i = 0; i < attrs->num_attrs; ++i) {
MonoCustomAttrEntry *attr = &attrs->attrs [i];
MonoAssemblyName *aname;
const gchar *data;
uint32_t data_length;
gchar *data_with_terminator;
/* Do some sanity checking */
if (!attr->ctor)
continue;
gboolean has_visible = FALSE;
gboolean has_ignores = FALSE;
has_visible = attr->ctor->klass == mono_class_try_get_internals_visible_class ();
/* IgnoresAccessChecksToAttribute is dynamically generated, so it's not necessarily in CoreLib */
/* FIXME: should we only check for it in dynamic modules? */
has_ignores = (!strcmp ("IgnoresAccessChecksToAttribute", m_class_get_name (attr->ctor->klass)) &&
!strcmp ("System.Runtime.CompilerServices", m_class_get_name_space (attr->ctor->klass)));
if (!has_visible && !has_ignores)
continue;
if (attr->data_size < 4)
continue;
data = (const char*)attr->data;
/* 0xFF means null string, see custom attr format */
if (data [0] != 1 || data [1] != 0 || (data [2] & 0xFF) == 0xFF)
continue;
data_length = mono_metadata_decode_value (data + 2, &data);
data_with_terminator = (char *)g_memdup (data, data_length + 1);
data_with_terminator[data_length] = 0;
aname = g_new0 (MonoAssemblyName, 1);
/*g_print ("friend ass: %s\n", data);*/
if (mono_assembly_name_parse_full (data_with_terminator, aname, TRUE, NULL, NULL)) {
if (has_visible)
visible_list = g_slist_prepend (visible_list, aname);
if (has_ignores)
ignores_list = g_slist_prepend (ignores_list, aname);
} else {
g_free (aname);
}
g_free (data_with_terminator);
}
mono_custom_attrs_free (attrs);
mono_assemblies_lock ();
if (ass->friend_assembly_names_inited) {
mono_assemblies_unlock ();
g_slist_foreach (visible_list, free_assembly_name_item, NULL);
g_slist_free (visible_list);
g_slist_foreach (ignores_list, free_assembly_name_item, NULL);
g_slist_free (ignores_list);
return;
}
ass->friend_assembly_names = visible_list;
ass->ignores_checks_assembly_names = ignores_list;
	/* Because of the double-checked locking pattern above */
mono_memory_barrier ();
ass->friend_assembly_names_inited = TRUE;
mono_assemblies_unlock ();
}
struct HasReferenceAssemblyAttributeIterData {
gboolean has_attr;
};
static gboolean
has_reference_assembly_attribute_iterator (MonoImage *image, guint32 typeref_scope_token, const char *nspace, const char *name, guint32 method_token, gpointer user_data)
{
gboolean stop_scanning = FALSE;
struct HasReferenceAssemblyAttributeIterData *iter_data = (struct HasReferenceAssemblyAttributeIterData*)user_data;
if (!strcmp (name, "ReferenceAssemblyAttribute") && !strcmp (nspace, "System.Runtime.CompilerServices")) {
		/* Note we don't check the assembly name, same as CoreCLR. */
iter_data->has_attr = TRUE;
stop_scanning = TRUE;
}
return stop_scanning;
}
/**
* mono_assembly_has_reference_assembly_attribute:
* \param assembly a MonoAssembly
* \param error set on error.
*
* \returns TRUE if \p assembly has the \c System.Runtime.CompilerServices.ReferenceAssemblyAttribute set.
* On error returns FALSE and sets \p error.
*/
gboolean
mono_assembly_has_reference_assembly_attribute (MonoAssembly *assembly, MonoError *error)
{
g_assert (assembly && assembly->image);
/* .NET Framework appears to ignore the attribute on dynamic
* assemblies, so don't call this function for dynamic assemblies. */
g_assert (!image_is_dynamic (assembly->image));
error_init (error);
/*
* This might be called during assembly loading, so do everything using the low-level
* metadata APIs.
*/
struct HasReferenceAssemblyAttributeIterData iter_data = { FALSE };
mono_assembly_metadata_foreach_custom_attr (assembly, &has_reference_assembly_attribute_iterator, &iter_data);
return iter_data.has_attr;
}
/**
* mono_assembly_open:
* \param filename Opens the assembly pointed out by this name
* \param status return status code
*
* This loads an assembly from the specified \p filename. The \p filename allows
* a local URL (starting with a \c file:// prefix). If a file prefix is used, the
* filename is interpreted as a URL, and the filename is URL-decoded. Otherwise the file
* is treated as a local path.
*
* First, an attempt is made to load the assembly from the bundled executable (for those
* deployments that have been done with the \c mkbundle tool or for scenarios where the
* assembly has been registered as an embedded assembly). If this is not the case, then
* the assembly is loaded from disk using `api:mono_image_open_full`.
*
* \returns a pointer to the \c MonoAssembly if \p filename contains a valid
* assembly or NULL on error. Details about the error are stored in the
* \p status variable.
*/
MonoAssembly *
mono_assembly_open (const char *filename, MonoImageOpenStatus *status)
{
MonoAssembly *res;
MONO_ENTER_GC_UNSAFE;
MonoAssemblyOpenRequest req;
mono_assembly_request_prepare_open (&req, mono_alc_get_default ());
res = mono_assembly_request_open (filename, &req, status);
MONO_EXIT_GC_UNSAFE;
return res;
}
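/*
 * Illustrative sketch: the typical embedding call; the path is a placeholder
 * and error handling is left to the caller.
 *
 *   MonoImageOpenStatus status;
 *   MonoAssembly *assembly = mono_assembly_open ("/path/to/App.dll", &status);
 *   if (!assembly)
 *           g_printerr ("load failed, status=%d\n", status);
 */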
/**
* mono_assembly_load_from_full:
* \param image Image to load the assembly from
* \param fname assembly name to associate with the assembly
* \param status returns the status condition
* \param refonly Whether this assembly is being opened in "reflection-only" mode.
*
* If the provided \p image has an assembly reference, it will process the given
* image as an assembly with the given name.
*
* Most likely you want to use the `api:mono_assembly_load_full` method instead.
*
 * Returns: A valid pointer to a \c MonoAssembly on success, with \p status
 * set to \c MONO_IMAGE_OK; or NULL on error.
 *
 * If there is an error loading the assembly, \p status indicates the
 * reason; it is set to \c MONO_IMAGE_IMAGE_INVALID if the
 * image did not contain an assembly manifest.
*/
MonoAssembly *
mono_assembly_load_from_full (MonoImage *image, const char *fname,
MonoImageOpenStatus *status, gboolean refonly)
{
if (refonly) {
if (status)
*status = MONO_IMAGE_IMAGE_INVALID;
return NULL;
}
MonoAssembly *res;
MONO_ENTER_GC_UNSAFE;
MonoAssemblyLoadRequest req;
MonoImageOpenStatus def_status;
if (!status)
status = &def_status;
mono_assembly_request_prepare_load (&req, mono_alc_get_default ());
res = mono_assembly_request_load_from (image, fname, &req, status);
MONO_EXIT_GC_UNSAFE;
return res;
}
MonoAssembly *
mono_assembly_request_load_from (MonoImage *image, const char *fname,
const MonoAssemblyLoadRequest *req,
MonoImageOpenStatus *status)
{
MonoAssemblyCandidatePredicate predicate;
gpointer user_data;
MonoAssembly *ass, *ass2;
char *base_dir;
g_assert (status != NULL);
predicate = req->predicate;
user_data = req->predicate_ud;
if (!table_info_get_rows (&image->tables [MONO_TABLE_ASSEMBLY])) {
/* 'image' doesn't have a manifest -- maybe someone is trying to Assembly.Load a .netmodule */
*status = MONO_IMAGE_IMAGE_INVALID;
return NULL;
}
#if defined (HOST_WIN32)
{
gchar *tmp_fn;
int i;
tmp_fn = g_strdup (fname);
for (i = strlen (tmp_fn) - 1; i >= 0; i--) {
if (tmp_fn [i] == '/')
tmp_fn [i] = '\\';
}
base_dir = absolute_dir (tmp_fn);
g_free (tmp_fn);
}
#else
base_dir = absolute_dir (fname);
#endif
/*
* Create assembly struct, and enter it into the assembly cache
*/
ass = g_new0 (MonoAssembly, 1);
ass->basedir = base_dir;
ass->context.no_managed_load_event = req->no_managed_load_event;
ass->image = image;
MONO_PROFILER_RAISE (assembly_loading, (ass));
mono_assembly_fill_assembly_name (image, &ass->aname);
if (mono_defaults.corlib && strcmp (ass->aname.name, MONO_ASSEMBLY_CORLIB_NAME) == 0) {
// MS.NET doesn't support loading other mscorlibs
g_free (ass);
g_free (base_dir);
mono_image_addref (mono_defaults.corlib);
*status = MONO_IMAGE_OK;
return mono_defaults.corlib->assembly;
}
/* Add a non-temporary reference because of ass->image */
mono_image_addref (image);
mono_trace (G_LOG_LEVEL_DEBUG, MONO_TRACE_ASSEMBLY, "Image addref %s[%p] (%s) -> %s[%p]: %d", ass->aname.name, ass, mono_alc_is_default (mono_image_get_alc (image)) ? "default ALC" : "custom ALC", image->name, image, image->ref_count);
/*
* The load hooks might take locks so we can't call them while holding the
* assemblies lock.
*/
if (ass->aname.name && !req->no_invoke_search_hook) {
/* FIXME: I think individual context should probably also look for an existing MonoAssembly here, we just need to pass the asmctx to the search hook so that it does a filename match (I guess?) */
ass2 = mono_assembly_invoke_search_hook_internal (req->alc, NULL, &ass->aname, FALSE);
if (ass2) {
mono_trace (G_LOG_LEVEL_DEBUG, MONO_TRACE_ASSEMBLY, "Image %s[%p] reusing existing assembly %s[%p]", ass->aname.name, ass, ass2->aname.name, ass2);
g_free (ass);
g_free (base_dir);
mono_image_close (image);
*status = MONO_IMAGE_OK;
return ass2;
}
}
/* We need to check for ReferenceAssemblyAttribute before we
* mark the assembly as loaded and before we fire the load
* hook. Otherwise mono_domain_fire_assembly_load () in
* appdomain.c will cache a mapping from the assembly name to
* this image and we won't be able to look for a different
* candidate. */
{
ERROR_DECL (refasm_error);
if (mono_assembly_has_reference_assembly_attribute (ass, refasm_error)) {
mono_trace (G_LOG_LEVEL_DEBUG, MONO_TRACE_ASSEMBLY, "Image for assembly '%s' (%s) has ReferenceAssemblyAttribute, skipping", ass->aname.name, image->name);
g_free (ass);
g_free (base_dir);
mono_image_close (image);
*status = MONO_IMAGE_IMAGE_INVALID;
return NULL;
}
mono_error_cleanup (refasm_error);
}
if (predicate && !predicate (ass, user_data)) {
mono_trace (G_LOG_LEVEL_DEBUG, MONO_TRACE_ASSEMBLY, "Predicate returned FALSE, skipping '%s' (%s)\n", ass->aname.name, image->name);
g_free (ass);
g_free (base_dir);
mono_image_close (image);
*status = MONO_IMAGE_IMAGE_INVALID;
return NULL;
}
mono_assemblies_lock ();
/* If an assembly is loaded into an individual context, always return a
* new MonoAssembly, even if another assembly with the same name has
* already been loaded.
*/
if (image->assembly && !req->no_invoke_search_hook) {
/*
* This means another thread has already loaded the assembly, but not yet
* called the load hooks so the search hook can't find the assembly.
*/
mono_assemblies_unlock ();
ass2 = image->assembly;
g_free (ass);
g_free (base_dir);
mono_image_close (image);
*status = MONO_IMAGE_OK;
return ass2;
}
mono_trace (G_LOG_LEVEL_DEBUG, MONO_TRACE_ASSEMBLY, "Prepared to set up assembly '%s' (%s)", ass->aname.name, image->name);
/* If asmctx is INDIVIDUAL, image->assembly might not be NULL, so don't
* overwrite it. */
if (image->assembly == NULL)
image->assembly = ass;
loaded_assemblies = g_list_prepend (loaded_assemblies, ass);
loaded_assembly_count++;
mono_assemblies_unlock ();
#ifdef HOST_WIN32
if (m_image_is_module_handle (image))
mono_image_fixup_vtable (image);
#endif
mono_assembly_invoke_load_hook_internal (req->alc, ass);
MONO_PROFILER_RAISE (assembly_loaded, (ass));
return ass;
}
/**
* mono_assembly_load_from:
* \param image Image to load the assembly from
* \param fname assembly name to associate with the assembly
* \param status return status code
*
* If the provided \p image has an assembly reference, it will process the given
* image as an assembly with the given name.
*
* Most likely you want to use the `api:mono_assembly_load_full` method instead.
*
* This is equivalent to calling `api:mono_assembly_load_from_full` with the
* \p refonly parameter set to FALSE.
 * \returns A valid pointer to a \c MonoAssembly on success, with \p status
 * set to \c MONO_IMAGE_OK; or NULL on error.
 *
 * If there is an error loading the assembly, \p status indicates the
 * reason; it is set to \c MONO_IMAGE_IMAGE_INVALID if the
 * image did not contain an assembly manifest.
*/
MonoAssembly *
mono_assembly_load_from (MonoImage *image, const char *fname,
MonoImageOpenStatus *status)
{
MonoAssembly *res;
MONO_ENTER_GC_UNSAFE;
MonoAssemblyLoadRequest req;
MonoImageOpenStatus def_status;
if (!status)
status = &def_status;
mono_assembly_request_prepare_load (&req, mono_alc_get_default ());
res = mono_assembly_request_load_from (image, fname, &req, status);
MONO_EXIT_GC_UNSAFE;
return res;
}
/**
* mono_assembly_name_free_internal:
* \param aname assembly name to free
*
* Frees the provided assembly name object.
 * (It does not free the object itself, only the name members.)
*/
void
mono_assembly_name_free_internal (MonoAssemblyName *aname)
{
MONO_REQ_GC_UNSAFE_MODE;
if (aname == NULL)
return;
g_free ((void *) aname->name);
g_free ((void *) aname->culture);
g_free ((void *) aname->hash_value);
g_free ((guint8*) aname->public_key);
}
static gboolean
parse_public_key (const gchar *key, gchar** pubkey, gboolean *is_ecma)
{
const gchar *pkey;
gchar header [16], val, *arr, *endp;
gint i, j, offset, bitlen, keylen, pkeylen;
//both pubkey and is_ecma are required arguments
g_assert (pubkey && is_ecma);
keylen = strlen (key) >> 1;
if (keylen < 1)
return FALSE;
/* allow the ECMA standard key */
if (strcmp (key, "00000000000000000400000000000000") == 0) {
*pubkey = NULL;
*is_ecma = TRUE;
return TRUE;
}
*is_ecma = FALSE;
val = g_ascii_xdigit_value (key [0]) << 4;
val |= g_ascii_xdigit_value (key [1]);
switch (val) {
case 0x00:
if (keylen < 13)
return FALSE;
val = g_ascii_xdigit_value (key [24]);
val |= g_ascii_xdigit_value (key [25]);
if (val != 0x06)
return FALSE;
pkey = key + 24;
break;
case 0x06:
pkey = key;
break;
default:
return FALSE;
}
/* We need the first 16 bytes
* to check whether this key is valid or not */
pkeylen = strlen (pkey) >> 1;
if (pkeylen < 16)
return FALSE;
for (i = 0, j = 0; i < 16; i++) {
header [i] = g_ascii_xdigit_value (pkey [j++]) << 4;
header [i] |= g_ascii_xdigit_value (pkey [j++]);
}
if (header [0] != 0x06 || /* PUBLICKEYBLOB (0x06) */
header [1] != 0x02 || /* Version (0x02) */
header [2] != 0x00 || /* Reserved (word) */
header [3] != 0x00 ||
(guint)(read32 (header + 8)) != 0x31415352) /* DWORD magic = RSA1 */
return FALSE;
	/* Based on the bit length in the header, we _should_ be able to verify that the key length is right */
bitlen = read32 (header + 12) >> 3;
if ((bitlen + 16 + 4) != pkeylen)
return FALSE;
arr = (gchar *)g_malloc (keylen + 4);
/* Encode the size of the blob */
mono_metadata_encode_value (keylen, &arr[0], &endp);
offset = (gint)(endp-arr);
for (i = offset, j = 0; i < keylen + offset; i++) {
arr [i] = g_ascii_xdigit_value (key [j++]) << 4;
arr [i] |= g_ascii_xdigit_value (key [j++]);
}
*pubkey = arr;
return TRUE;
}
static gboolean
build_assembly_name (const char *name, const char *version, const char *culture, const char *token, const char *key, guint32 flags, guint32 arch, MonoAssemblyName *aname, gboolean save_public_key)
{
gint len;
gint version_parts;
gchar *pkeyptr, *encoded, tok [8];
memset (aname, 0, sizeof (MonoAssemblyName));
if (version) {
int parts [4];
int i;
int part_len;
parts [2] = -1;
parts [3] = -1;
const char *s = version;
version_parts = 0;
for (i = 0; i < 4; ++i) {
int n = sscanf (s, "%u%n", &parts [i], &part_len);
if (n != 1)
return FALSE;
if (parts [i] < 0 || parts [i] > 65535)
return FALSE;
if (i < 2 && parts [i] == 65535)
return FALSE;
version_parts ++;
s += part_len;
if (s [0] == '\0')
break;
if (i < 3) {
if (s [0] != '.')
return FALSE;
s ++;
}
}
if (s [0] != '\0')
return FALSE;
if (version_parts < 2 || version_parts > 4)
return FALSE;
aname->major = parts [0];
aname->minor = parts [1];
if (version_parts >= 3)
aname->build = parts [2];
else
aname->build = -1;
if (version_parts == 4)
aname->revision = parts [3];
else
aname->revision = -1;
}
aname->flags = flags;
aname->arch = arch;
aname->name = g_strdup (name);
if (culture) {
if (g_ascii_strcasecmp (culture, "neutral") == 0)
aname->culture = g_strdup ("");
else
aname->culture = g_strdup (culture);
}
if (token && strncmp (token, "null", 4) != 0) {
char *lower;
/* the constant includes the ending NULL, hence the -1 */
if (strlen (token) != (MONO_PUBLIC_KEY_TOKEN_LENGTH - 1)) {
mono_assembly_name_free_internal (aname);
return FALSE;
}
lower = g_ascii_strdown (token, MONO_PUBLIC_KEY_TOKEN_LENGTH);
g_strlcpy ((char*)aname->public_key_token, lower, MONO_PUBLIC_KEY_TOKEN_LENGTH);
g_free (lower);
}
if (key) {
gboolean is_ecma = FALSE;
gchar *pkey = NULL;
if (strcmp (key, "null") == 0 || !parse_public_key (key, &pkey, &is_ecma)) {
mono_assembly_name_free_internal (aname);
return FALSE;
}
if (is_ecma) {
g_assert (pkey == NULL);
aname->public_key = NULL;
g_strlcpy ((gchar*)aname->public_key_token, "b77a5c561934e089", MONO_PUBLIC_KEY_TOKEN_LENGTH);
return TRUE;
}
len = mono_metadata_decode_blob_size ((const gchar *) pkey, (const gchar **) &pkeyptr);
// We also need to generate the key token
mono_digest_get_public_token ((guchar*) tok, (guint8*) pkeyptr, len);
encoded = encode_public_tok ((guchar*) tok, 8);
g_strlcpy ((gchar*)aname->public_key_token, encoded, MONO_PUBLIC_KEY_TOKEN_LENGTH);
g_free (encoded);
if (save_public_key)
aname->public_key = (guint8*) pkey;
else
g_free (pkey);
}
return TRUE;
}
static gboolean
split_key_value (const gchar *pair, gchar **key, guint32 *keylen, gchar **value)
{
char *eqsign = (char*)strchr (pair, '=');
if (!eqsign) {
*key = NULL;
*keylen = 0;
*value = NULL;
return FALSE;
}
*key = (gchar*)pair;
*keylen = eqsign - *key;
while (*keylen > 0 && g_ascii_isspace ((*key) [*keylen - 1]))
(*keylen)--;
*value = g_strstrip (eqsign + 1);
return TRUE;
}
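/*
 * For example, split_key_value ("Version = 1.2.3.4", ...) sets *key to the
 * start of the pair, *keylen to 7 (trailing spaces before '=' are trimmed)
 * and *value to the stripped "1.2.3.4".
 */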
gboolean
mono_assembly_name_parse_full (const char *name, MonoAssemblyName *aname, gboolean save_public_key, gboolean *is_version_defined, gboolean *is_token_defined)
{
gchar *dllname;
gchar *dllname_uq;
gchar *version = NULL;
gchar *version_uq;
gchar *culture = NULL;
gchar *culture_uq;
gchar *token = NULL;
gchar *token_uq;
gchar *key = NULL;
gchar *key_uq;
gchar *retargetable = NULL;
gchar *retargetable_uq;
gchar *procarch = NULL;
gchar *procarch_uq;
gboolean res;
gchar *value, *part_name;
guint32 part_name_len;
gchar **parts;
gchar **tmp;
gboolean version_defined;
gboolean token_defined;
guint32 flags = 0;
guint32 arch = MONO_PROCESSOR_ARCHITECTURE_NONE;
if (!is_version_defined)
is_version_defined = &version_defined;
*is_version_defined = FALSE;
if (!is_token_defined)
is_token_defined = &token_defined;
*is_token_defined = FALSE;
parts = tmp = g_strsplit (name, ",", 6);
if (!tmp || !*tmp) {
goto cleanup_and_fail;
}
dllname = g_strstrip (*tmp);
// Simple name cannot be empty
if (!*dllname) {
goto cleanup_and_fail;
}
// Characters /, :, and \ not allowed in simple names
while (*dllname) {
gchar tmp_char = *dllname;
if (tmp_char == '/' || tmp_char == ':' || tmp_char == '\\')
goto cleanup_and_fail;
dllname++;
}
dllname = *tmp;
tmp++;
while (*tmp) {
if (!split_key_value (g_strstrip (*tmp), &part_name, &part_name_len, &value))
goto cleanup_and_fail;
if (part_name_len == 7 && !g_ascii_strncasecmp (part_name, "Version", part_name_len)) {
*is_version_defined = TRUE;
if (version != NULL || strlen (value) == 0) {
goto cleanup_and_fail;
}
version = value;
tmp++;
continue;
}
if (part_name_len == 7 && !g_ascii_strncasecmp (part_name, "Culture", part_name_len)) {
if (culture != NULL || strlen (value) == 0) {
goto cleanup_and_fail;
}
culture = value;
tmp++;
continue;
}
if (part_name_len == 14 && !g_ascii_strncasecmp (part_name, "PublicKeyToken", part_name_len)) {
*is_token_defined = TRUE;
if (token != NULL || key != NULL || strlen (value) == 0) {
goto cleanup_and_fail;
}
token = value;
tmp++;
continue;
}
if (part_name_len == 9 && !g_ascii_strncasecmp (part_name, "PublicKey", part_name_len)) {
if (token != NULL || key != NULL || strlen (value) == 0) {
goto cleanup_and_fail;
}
key = value;
tmp++;
continue;
}
if (part_name_len == 12 && !g_ascii_strncasecmp (part_name, "Retargetable", part_name_len)) {
if (retargetable != NULL) {
goto cleanup_and_fail;
}
retargetable = value;
retargetable_uq = unquote (retargetable);
if (retargetable_uq != NULL)
retargetable = retargetable_uq;
if (!g_ascii_strcasecmp (retargetable, "yes")) {
flags |= ASSEMBLYREF_RETARGETABLE_FLAG;
} else if (g_ascii_strcasecmp (retargetable, "no")) {
g_free (retargetable_uq);
goto cleanup_and_fail;
}
g_free (retargetable_uq);
tmp++;
continue;
}
if (part_name_len == 21 && !g_ascii_strncasecmp (part_name, "ProcessorArchitecture", part_name_len)) {
if (procarch != NULL) {
goto cleanup_and_fail;
}
procarch = value;
procarch_uq = unquote (procarch);
if (procarch_uq != NULL)
procarch = procarch_uq;
if (!g_ascii_strcasecmp (procarch, "MSIL"))
arch = MONO_PROCESSOR_ARCHITECTURE_MSIL;
else if (!g_ascii_strcasecmp (procarch, "X86"))
arch = MONO_PROCESSOR_ARCHITECTURE_X86;
else if (!g_ascii_strcasecmp (procarch, "IA64"))
arch = MONO_PROCESSOR_ARCHITECTURE_IA64;
else if (!g_ascii_strcasecmp (procarch, "AMD64"))
arch = MONO_PROCESSOR_ARCHITECTURE_AMD64;
else if (!g_ascii_strcasecmp (procarch, "ARM"))
arch = MONO_PROCESSOR_ARCHITECTURE_ARM;
else {
g_free (procarch_uq);
goto cleanup_and_fail;
}
flags |= arch << 4;
g_free (procarch_uq);
tmp++;
continue;
}
// compat: If we got here, the attribute name is unknown to us. Ignore it.
tmp++;
}
/* if retargetable flag is set, then we must have a fully qualified name */
if (retargetable != NULL && (version == NULL || culture == NULL || (key == NULL && token == NULL))) {
goto cleanup_and_fail;
}
dllname_uq = unquote (dllname);
version_uq = unquote (version);
culture_uq = unquote (culture);
token_uq = unquote (token);
key_uq = unquote (key);
res = build_assembly_name (
dllname_uq == NULL ? dllname : dllname_uq,
version_uq == NULL ? version : version_uq,
culture_uq == NULL ? culture : culture_uq,
token_uq == NULL ? token : token_uq,
key_uq == NULL ? key : key_uq,
flags, arch, aname, save_public_key);
g_free (dllname_uq);
g_free (version_uq);
g_free (culture_uq);
g_free (token_uq);
g_free (key_uq);
g_strfreev (parts);
return res;
cleanup_and_fail:
g_strfreev (parts);
return FALSE;
}
static char*
unquote (const char *str)
{
gint slen;
const char *end;
if (str == NULL)
return NULL;
slen = strlen (str);
if (slen < 2)
return NULL;
if (*str != '\'' && *str != '\"')
return NULL;
end = str + slen - 1;
if (*str != *end)
return NULL;
return g_strndup (str + 1, slen - 2);
}
/**
* mono_assembly_name_parse:
* \param name name to parse
* \param aname the destination assembly name
*
 * Parses an assembly display name and assigns the name,
* version, culture and token to the provided assembly name object.
*
* \returns TRUE if the name could be parsed.
*/
gboolean
mono_assembly_name_parse (const char *name, MonoAssemblyName *aname)
{
return mono_assembly_name_parse_full (name, aname, FALSE, NULL, NULL);
}
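/*
 * Illustrative sketch: parsing a display name and releasing the members
 * afterwards. The struct itself lives on the stack here, and
 * mono_assembly_name_free_internal must run in GC unsafe mode.
 *
 *   MonoAssemblyName aname;
 *   if (mono_assembly_name_parse ("Foo, Version=1.0.0.0, Culture=neutral", &aname)) {
 *           g_print ("%s %d.%d\n", aname.name, aname.major, aname.minor);
 *           mono_assembly_name_free_internal (&aname);
 *   }
 */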
/**
* mono_assembly_name_new:
* \param name name to parse
*
* Allocate a new \c MonoAssemblyName and fill its values from the
* passed \p name.
*
* \returns a newly allocated structure or NULL if there was any failure.
*/
MonoAssemblyName*
mono_assembly_name_new (const char *name)
{
MonoAssemblyName *result = NULL;
MONO_ENTER_GC_UNSAFE;
MonoAssemblyName *aname = g_new0 (MonoAssemblyName, 1);
if (mono_assembly_name_parse (name, aname))
result = aname;
else
g_free (aname);
MONO_EXIT_GC_UNSAFE;
return result;
}
/**
* mono_assembly_name_get_name:
*/
const char*
mono_assembly_name_get_name (MonoAssemblyName *aname)
{
const char *result = NULL;
MONO_ENTER_GC_UNSAFE;
result = aname->name;
MONO_EXIT_GC_UNSAFE;
return result;
}
/**
* mono_assembly_name_get_culture:
*/
const char*
mono_assembly_name_get_culture (MonoAssemblyName *aname)
{
const char *result = NULL;
MONO_ENTER_GC_UNSAFE;
result = aname->culture;
MONO_EXIT_GC_UNSAFE;
return result;
}
/**
* mono_assembly_name_get_pubkeytoken:
*/
mono_byte*
mono_assembly_name_get_pubkeytoken (MonoAssemblyName *aname)
{
if (aname->public_key_token [0])
return aname->public_key_token;
return NULL;
}
/**
* mono_assembly_name_get_version:
*/
uint16_t
mono_assembly_name_get_version (MonoAssemblyName *aname, uint16_t *minor, uint16_t *build, uint16_t *revision)
{
if (minor)
*minor = aname->minor;
if (build)
*build = aname->build;
if (revision)
*revision = aname->revision;
return aname->major;
}
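/*
 * Illustrative sketch: reading all four version components in one call.
 *
 *   uint16_t minor, build, revision;
 *   uint16_t major = mono_assembly_name_get_version (aname, &minor, &build, &revision);
 */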
gboolean
mono_assembly_name_culture_is_neutral (const MonoAssemblyName *aname)
{
return (!aname->culture || aname->culture [0] == 0);
}
/**
* mono_assembly_load_with_partial_name:
* \param name an assembly name that is then parsed by `api:mono_assembly_name_parse`.
* \param status return status code
*
* Loads a \c MonoAssembly from a name. The name is parsed using `api:mono_assembly_name_parse`,
 * so it might contain a version, culture and token in addition to the simple name.
*
* This will load the assembly from the file whose name is derived from the assembly name
* by appending the \c .dll extension.
*
* The assembly is loaded from either one of the extra Global Assembly Caches specified
* by the extra GAC paths (specified by the \c MONO_GAC_PREFIX environment variable) or
* if that fails from the GAC.
*
* \returns NULL on failure, or a pointer to a \c MonoAssembly on success.
*/
MonoAssembly*
mono_assembly_load_with_partial_name (const char *name, MonoImageOpenStatus *status)
{
MonoAssembly *result;
MONO_ENTER_GC_UNSAFE;
MonoImageOpenStatus def_status;
if (!status)
status = &def_status;
result = mono_assembly_load_with_partial_name_internal (name, mono_alc_get_default (), status);
MONO_EXIT_GC_UNSAFE;
return result;
}
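/*
 * Illustrative sketch: loading by simple name only; version, culture and
 * token are filled in by the remapping/probing logic. The name is a
 * placeholder.
 *
 *   MonoImageOpenStatus status;
 *   MonoAssembly *assembly = mono_assembly_load_with_partial_name ("System.Xml", &status);
 */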
MonoAssembly*
mono_assembly_load_with_partial_name_internal (const char *name, MonoAssemblyLoadContext *alc, MonoImageOpenStatus *status)
{
ERROR_DECL (error);
MonoAssembly *res;
MonoAssemblyName *aname, base_name;
MonoAssemblyName mapped_aname;
MONO_REQ_GC_UNSAFE_MODE;
g_assert (status != NULL);
memset (&base_name, 0, sizeof (MonoAssemblyName));
aname = &base_name;
if (!mono_assembly_name_parse (name, aname))
return NULL;
/*
* If no specific version has been requested, make sure we load the
* correct version for system assemblies.
*/
if ((aname->major | aname->minor | aname->build | aname->revision) == 0)
aname = mono_assembly_remap_version (aname, &mapped_aname);
res = mono_assembly_loaded_internal (alc, aname);
if (res) {
mono_assembly_name_free_internal (aname);
return res;
}
res = invoke_assembly_preload_hook (alc, aname, assemblies_path);
if (res) {
mono_assembly_name_free_internal (aname);
return res;
}
mono_assembly_name_free_internal (aname);
if (!res) {
res = mono_try_assembly_resolve (alc, name, NULL, error);
if (!is_ok (error)) {
mono_error_cleanup (error);
if (*status == MONO_IMAGE_OK)
*status = MONO_IMAGE_IMAGE_INVALID;
}
}
return res;
}
MonoAssembly*
mono_assembly_load_corlib (MonoImageOpenStatus *status)
{
MonoAssemblyName *aname;
MonoAssemblyOpenRequest req;
mono_assembly_request_prepare_open (&req, mono_alc_get_default ());
if (corlib) {
/* g_print ("corlib already loaded\n"); */
return corlib;
}
aname = mono_assembly_name_new (MONO_ASSEMBLY_CORLIB_NAME);
corlib = invoke_assembly_preload_hook (req.request.alc, aname, NULL);
/* MonoCore preload hook should know how to find it */
/* FIXME: AOT compiler comes here without an installed hook. */
if (!corlib) {
if (assemblies_path) { // Custom assemblies path set via MONO_PATH or mono_set_assemblies_path
char *corlib_name = g_strdup_printf ("%s.dll", MONO_ASSEMBLY_CORLIB_NAME);
corlib = load_in_path (corlib_name, (const char**)assemblies_path, &req, status);
}
}
if (!corlib) {
		/* Maybe it's in a bundle */
char *corlib_name = g_strdup_printf ("%s.dll", MONO_ASSEMBLY_CORLIB_NAME);
corlib = mono_assembly_request_open (corlib_name, &req, status);
}
g_assert (corlib);
return corlib;
}
gboolean
mono_assembly_candidate_predicate_sn_same_name (MonoAssembly *candidate, gpointer ud)
{
MonoAssemblyName *wanted_name = (MonoAssemblyName*)ud;
MonoAssemblyName *candidate_name = &candidate->aname;
g_assert (wanted_name != NULL);
g_assert (candidate_name != NULL);
if (mono_trace_is_traced (G_LOG_LEVEL_INFO, MONO_TRACE_ASSEMBLY)) {
char *s = mono_stringify_assembly_name (wanted_name);
mono_trace (G_LOG_LEVEL_DEBUG, MONO_TRACE_ASSEMBLY, "Predicate: wanted = %s", s);
g_free (s);
s = mono_stringify_assembly_name (candidate_name);
mono_trace (G_LOG_LEVEL_DEBUG, MONO_TRACE_ASSEMBLY, "Predicate: candidate = %s", s);
g_free (s);
}
return mono_assembly_check_name_match (wanted_name, candidate_name);
}
gboolean
mono_assembly_check_name_match (MonoAssemblyName *wanted_name, MonoAssemblyName *candidate_name)
{
gboolean result = mono_assembly_names_equal_flags (wanted_name, candidate_name, MONO_ANAME_EQ_IGNORE_VERSION | MONO_ANAME_EQ_IGNORE_PUBKEY);
if (result && assembly_names_compare_versions (wanted_name, candidate_name, -1) > 0)
result = FALSE;
mono_trace (G_LOG_LEVEL_DEBUG, MONO_TRACE_ASSEMBLY, "Predicate: candidate and wanted names %s",
result ? "match, returning TRUE" : "don't match, returning FALSE");
return result;
}
MonoAssembly*
mono_assembly_request_byname (MonoAssemblyName *aname, const MonoAssemblyByNameRequest *req, MonoImageOpenStatus *status)
{
mono_trace (G_LOG_LEVEL_DEBUG, MONO_TRACE_ASSEMBLY, "Request to load %s in alc %p", aname->name, (gpointer)req->request.alc);
MonoAssembly *result;
if (status)
*status = MONO_IMAGE_OK;
result = netcore_load_reference (aname, req->request.alc, req->requesting_assembly, !req->no_postload_search);
return result;
}
MonoAssembly *
mono_assembly_load_full_alc (MonoGCHandle alc_gchandle, MonoAssemblyName *aname, const char *basedir, MonoImageOpenStatus *status)
{
MonoAssembly *res;
MONO_ENTER_GC_UNSAFE;
MonoAssemblyByNameRequest req;
MonoAssemblyLoadContext *alc = mono_alc_from_gchandle (alc_gchandle);
mono_assembly_request_prepare_byname (&req, alc);
req.requesting_assembly = NULL;
req.basedir = basedir;
res = mono_assembly_request_byname (aname, &req, status);
MONO_EXIT_GC_UNSAFE;
return res;
}
/**
* mono_assembly_load_full:
* \param aname A MonoAssemblyName with the assembly name to load.
* \param basedir A directory to look up the assembly at.
* \param status a pointer to a MonoImageOpenStatus to return the status of the load operation
* \param refonly Whether this assembly is being opened in "reflection-only" mode.
*
* Loads the assembly referenced by \p aname, if the value of \p basedir is not NULL, it
* attempts to load the assembly from that directory before probing the standard locations.
*
* If the assembly is being opened in reflection-only mode (\p refonly set to TRUE) then no
* assembly binding takes place.
*
* \returns the assembly referenced by \p aname loaded or NULL on error. On error the
* value pointed by \p status is updated with an error code.
*/
MonoAssembly*
mono_assembly_load_full (MonoAssemblyName *aname, const char *basedir, MonoImageOpenStatus *status, gboolean refonly)
{
if (refonly) {
if (status)
*status = MONO_IMAGE_IMAGE_INVALID;
return NULL;
}
MonoAssembly *res;
MONO_ENTER_GC_UNSAFE;
MonoAssemblyByNameRequest req;
mono_assembly_request_prepare_byname (&req, mono_alc_get_default ());
req.requesting_assembly = NULL;
req.basedir = basedir;
res = mono_assembly_request_byname (aname, &req, status);
MONO_EXIT_GC_UNSAFE;
return res;
}
/**
* mono_assembly_load:
* \param aname A MonoAssemblyName with the assembly name to load.
* \param basedir A directory to look up the assembly at.
* \param status a pointer to a MonoImageOpenStatus to return the status of the load operation
*
* Loads the assembly referenced by \p aname, if the value of \p basedir is not NULL, it
* attempts to load the assembly from that directory before probing the standard locations.
*
 * \returns the assembly referenced by \p aname loaded, or NULL on error. On error the
 * value pointed to by \p status is updated with an error code.
*/
MonoAssembly*
mono_assembly_load (MonoAssemblyName *aname, const char *basedir, MonoImageOpenStatus *status)
{
MonoAssemblyByNameRequest req;
mono_assembly_request_prepare_byname (&req, mono_alc_get_default ());
req.requesting_assembly = NULL;
req.basedir = basedir;
return mono_assembly_request_byname (aname, &req, status);
}
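/*
 * Illustrative sketch (example_load_by_name is hypothetical, and
 * "System.Runtime" is just an example display name): loading an assembly by
 * name with the helper above, using the public assembly-name APIs.
 */
static MonoAssembly *
example_load_by_name (void)
{
	MonoImageOpenStatus status;
	MonoAssemblyName *aname = mono_assembly_name_new ("System.Runtime");
	if (!aname)
		return NULL;
	MonoAssembly *assembly = mono_assembly_load (aname, NULL, &status);
	mono_assembly_name_free (aname);
	return assembly; /* NULL on error; status then holds the error code */
}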
/**
* mono_assembly_loaded_full:
* \param aname an assembly to look for.
* \param refonly Whether this assembly is being opened in "reflection-only" mode.
*
 * This is used to determine if the specified assembly has been loaded.
 * \returns NULL if the given \p aname assembly has not been loaded, or a pointer to
 * a \c MonoAssembly that matches the \c MonoAssemblyName specified.
*/
MonoAssembly*
mono_assembly_loaded_full (MonoAssemblyName *aname, gboolean refonly)
{
if (refonly)
return NULL;
MonoAssemblyLoadContext *alc = mono_alc_get_default ();
return mono_assembly_loaded_internal (alc, aname);
}
MonoAssembly *
mono_assembly_loaded_internal (MonoAssemblyLoadContext *alc, MonoAssemblyName *aname)
{
MonoAssembly *res;
MonoAssemblyName mapped_aname;
aname = mono_assembly_remap_version (aname, &mapped_aname);
res = mono_assembly_invoke_search_hook_internal (alc, NULL, aname, FALSE);
return res;
}
/**
* mono_assembly_loaded:
* \param aname an assembly to look for.
*
 * This is used to determine if the specified assembly has been loaded.
 * \returns NULL if the given \p aname assembly has not been loaded, or a pointer to
 * a \c MonoAssembly that matches the \c MonoAssemblyName specified.
*/
MonoAssembly*
mono_assembly_loaded (MonoAssemblyName *aname)
{
MonoAssembly *res;
MONO_ENTER_GC_UNSAFE;
res = mono_assembly_loaded_internal (mono_alc_get_default (), aname);
MONO_EXIT_GC_UNSAFE;
return res;
}
void
mono_assembly_release_gc_roots (MonoAssembly *assembly)
{
if (assembly == NULL || assembly == REFERENCE_MISSING)
return;
if (assembly_is_dynamic (assembly)) {
int i;
MonoDynamicImage *dynimg = (MonoDynamicImage *)assembly->image;
for (i = 0; i < dynimg->image.module_count; ++i)
mono_dynamic_image_release_gc_roots ((MonoDynamicImage *)dynimg->image.modules [i]);
mono_dynamic_image_release_gc_roots (dynimg);
}
}
/*
* Returns whether mono_assembly_close_finish() must be called as
* well. See comment for mono_image_close_except_pools() for why we
* unload in two steps.
*/
gboolean
mono_assembly_close_except_image_pools (MonoAssembly *assembly)
{
g_return_val_if_fail (assembly != NULL, FALSE);
if (assembly == REFERENCE_MISSING)
return FALSE;
/* Might be 0 already */
if (mono_assembly_decref (assembly) > 0)
return FALSE;
MONO_PROFILER_RAISE (assembly_unloading, (assembly));
mono_trace (G_LOG_LEVEL_DEBUG, MONO_TRACE_ASSEMBLY, "Unloading assembly %s [%p].", assembly->aname.name, assembly);
mono_debug_close_image (assembly->image);
mono_assemblies_lock ();
loaded_assemblies = g_list_remove (loaded_assemblies, assembly);
loaded_assembly_count--;
mono_assemblies_unlock ();
assembly->image->assembly = NULL;
if (!mono_image_close_except_pools (assembly->image))
assembly->image = NULL;
g_slist_foreach (assembly->friend_assembly_names, free_assembly_name_item, NULL);
g_slist_foreach (assembly->ignores_checks_assembly_names, free_assembly_name_item, NULL);
g_slist_free (assembly->friend_assembly_names);
g_slist_free (assembly->ignores_checks_assembly_names);
g_free (assembly->basedir);
MONO_PROFILER_RAISE (assembly_unloaded, (assembly));
return TRUE;
}
void
mono_assembly_close_finish (MonoAssembly *assembly)
{
g_assert (assembly && assembly != REFERENCE_MISSING);
if (assembly->image)
mono_image_close_finish (assembly->image);
if (assembly_is_dynamic (assembly)) {
g_free ((char*)assembly->aname.culture);
} else {
g_free (assembly);
}
}
/**
* mono_assembly_close:
* \param assembly the assembly to release.
*
* This method releases a reference to the \p assembly. The assembly is
* only released when all the outstanding references to it are released.
*/
void
mono_assembly_close (MonoAssembly *assembly)
{
if (mono_assembly_close_except_image_pools (assembly))
mono_assembly_close_finish (assembly);
}
/**
* mono_assembly_load_module:
*/
MonoImage*
mono_assembly_load_module (MonoAssembly *assembly, guint32 idx)
{
ERROR_DECL (error);
MonoImage *result = mono_assembly_load_module_checked (assembly, idx, error);
mono_error_assert_ok (error);
return result;
}
MONO_API MonoImage*
mono_assembly_load_module_checked (MonoAssembly *assembly, uint32_t idx, MonoError *error)
{
return mono_image_load_file_for_image_checked (assembly->image, idx, error);
}
/**
* mono_assembly_foreach:
* \param func function to invoke for each assembly loaded
* \param user_data data passed to the callback
*
* Invokes the provided \p func callback for each assembly loaded into
* the runtime. The first parameter passed to the callback is the
* \c MonoAssembly*, and the second parameter is the \p user_data.
*
* This is done for all assemblies loaded in the runtime, not just
* those loaded in the current application domain.
*/
void
mono_assembly_foreach (GFunc func, gpointer user_data)
{
GList *copy;
/*
* We make a copy of the list to avoid calling the callback inside the
* lock, which could lead to deadlocks.
*/
mono_assemblies_lock ();
copy = g_list_copy (loaded_assemblies);
mono_assemblies_unlock ();
	g_list_foreach (copy, func, user_data);
g_list_free (copy);
}
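/*
 * Illustrative sketch (print_loaded_assembly is hypothetical): a GFunc
 * callback for mono_assembly_foreach that prints each loaded assembly's name.
 */
static void
print_loaded_assembly (gpointer data, gpointer user_data)
{
	MonoAssembly *assembly = (MonoAssembly *)data;
	g_print ("loaded assembly: %s\n", assembly->aname.name);
}
/* usage: mono_assembly_foreach (print_loaded_assembly, NULL); */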
/**
* mono_assemblies_cleanup:
*
* Free all resources used by this module.
*/
void
mono_assemblies_cleanup (void)
{
}
/*
* Holds the assembly of the application, for
* System.Diagnostics.Process::MainModule
*/
static MonoAssembly *main_assembly=NULL;
/**
* mono_assembly_set_main:
*/
void
mono_assembly_set_main (MonoAssembly *assembly)
{
main_assembly = assembly;
}
/**
* mono_assembly_get_main:
*
* Returns: the assembly for the application, the first assembly that is loaded by the VM
*/
MonoAssembly *
mono_assembly_get_main (void)
{
return (main_assembly);
}
/**
* mono_assembly_get_image:
* \param assembly The assembly to retrieve the image from
*
* \returns the \c MonoImage associated with this assembly.
*/
MonoImage*
mono_assembly_get_image (MonoAssembly *assembly)
{
MonoImage *res;
MONO_ENTER_GC_UNSAFE;
res = mono_assembly_get_image_internal (assembly);
MONO_EXIT_GC_UNSAFE;
return res;
}
MonoImage*
mono_assembly_get_image_internal (MonoAssembly *assembly)
{
MONO_REQ_GC_UNSAFE_MODE;
return assembly->image;
}
/**
* mono_assembly_get_name:
* \param assembly The assembly to retrieve the name from
*
* The returned name's lifetime is the same as \p assembly's.
*
* \returns the \c MonoAssemblyName associated with this assembly.
*/
MonoAssemblyName *
mono_assembly_get_name (MonoAssembly *assembly)
{
MonoAssemblyName *res;
MONO_ENTER_GC_UNSAFE;
res = mono_assembly_get_name_internal (assembly);
MONO_EXIT_GC_UNSAFE;
return res;
}
MonoAssemblyName *
mono_assembly_get_name_internal (MonoAssembly *assembly)
{
MONO_REQ_GC_UNSAFE_MODE;
return &assembly->aname;
}
/**
* mono_register_bundled_assemblies:
*/
void
mono_register_bundled_assemblies (const MonoBundledAssembly **assemblies)
{
bundles = assemblies;
}
/**
* mono_create_new_bundled_satellite_assembly:
*/
MonoBundledSatelliteAssembly *
mono_create_new_bundled_satellite_assembly (const char *name, const char *culture, const unsigned char *data, unsigned int size)
{
MonoBundledSatelliteAssembly *satellite_assembly = g_new0 (MonoBundledSatelliteAssembly, 1);
satellite_assembly->name = strdup (name);
satellite_assembly->culture = strdup (culture);
satellite_assembly->data = data;
satellite_assembly->size = size;
return satellite_assembly;
}
/**
* mono_register_bundled_satellite_assemblies:
*/
void
mono_register_bundled_satellite_assemblies (const MonoBundledSatelliteAssembly **assemblies)
{
satellite_bundles = assemblies;
}
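/*
 * Illustrative sketch (example_register_satellite is hypothetical; the name,
 * culture and data are placeholders, and the array is assumed to be
 * NULL-terminated like the one passed to mono_register_bundled_assemblies):
 * registering a single bundled satellite assembly.
 */
static void
example_register_satellite (const unsigned char *data, unsigned int size)
{
	static MonoBundledSatelliteAssembly *satellites [2];
	satellites [0] = mono_create_new_bundled_satellite_assembly ("App.resources.dll", "de-DE", data, size);
	satellites [1] = NULL;
	mono_register_bundled_satellite_assemblies ((const MonoBundledSatelliteAssembly **)satellites);
}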
/**
* mono_assembly_is_jit_optimizer_disabled:
*
 * \param ass the assembly
*
* Returns TRUE if the System.Diagnostics.DebuggableAttribute has the
* DebuggingModes.DisableOptimizations bit set.
*
*/
gboolean
mono_assembly_is_jit_optimizer_disabled (MonoAssembly *ass)
{
ERROR_DECL (error);
g_assert (ass);
if (ass->jit_optimizer_disabled_inited)
return ass->jit_optimizer_disabled;
MonoClass *klass = mono_class_try_get_debuggable_attribute_class ();
if (!klass) {
/* Linked away */
ass->jit_optimizer_disabled = FALSE;
mono_memory_barrier ();
ass->jit_optimizer_disabled_inited = TRUE;
return FALSE;
}
gboolean disable_opts = FALSE;
MonoCustomAttrInfo* attrs = mono_custom_attrs_from_assembly_checked (ass, FALSE, error);
mono_error_cleanup (error); /* FIXME don't swallow the error */
if (attrs) {
for (int i = 0; i < attrs->num_attrs; ++i) {
MonoCustomAttrEntry *attr = &attrs->attrs [i];
const gchar *p;
MonoMethodSignature *sig;
if (!attr->ctor || attr->ctor->klass != klass)
continue;
/* Decode the attribute. See reflection.c */
p = (const char*)attr->data;
g_assert (read16 (p) == 0x0001);
p += 2;
// FIXME: Support named parameters
sig = mono_method_signature_internal (attr->ctor);
MonoClass *param_class;
if (sig->param_count == 2 && sig->params [0]->type == MONO_TYPE_BOOLEAN && sig->params [1]->type == MONO_TYPE_BOOLEAN) {
/* Two boolean arguments */
p ++;
disable_opts = *p;
} else if (sig->param_count == 1 &&
sig->params[0]->type == MONO_TYPE_VALUETYPE &&
(param_class = mono_class_from_mono_type_internal (sig->params[0])) != NULL &&
m_class_is_enumtype (param_class) &&
!strcmp (m_class_get_name (param_class), "DebuggingModes")) {
/* System.Diagnostics.DebuggableAttribute+DebuggingModes */
int32_t flags = read32 (p);
p += 4;
disable_opts = (flags & 0x0100) != 0;
}
}
mono_custom_attrs_free (attrs);
}
ass->jit_optimizer_disabled = disable_opts;
mono_memory_barrier ();
ass->jit_optimizer_disabled_inited = TRUE;
return disable_opts;
}
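/*
 * Illustrative note (assumed example value, not taken from this source): for
 * an assembly compiled with
 *   [assembly: Debuggable (DebuggingModes.Default |
 *                          DebuggingModes.IgnoreSymbolStoreSequencePoints |
 *                          DebuggingModes.EnableEditAndContinue |
 *                          DebuggingModes.DisableOptimizations)]
 * the attribute blob decoded above is laid out as:
 *
 *   01 00         prolog (0x0001, little endian)
 *   07 01 00 00   int32 DebuggingModes value (0x0107)
 *   00 00         number of named arguments (zero)
 *
 * so the (flags & 0x0100) test picks out DisableOptimizations.
 */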
guint32
mono_assembly_get_count (void)
{
return loaded_assembly_count;
}
| 1 |
dotnet/runtime | 66,211 | [mono] Remove SkipVerification support from the runtime | CAS support was removed in .NET Core. This allows us to remove a bunch of code that is unused, e.g. the dependency on libiconv. | akoeplinger | 2022-03-04T19:47:04Z | 2022-03-06T13:44:33Z | b463b1630dbf1be5b013208a9fa73e1ecd6c774c | be629f49a350d526de2c65981294734cee420b90 | [mono] Remove SkipVerification support from the runtime. CAS support was removed in .NET Core. This allows us to remove a bunch of code that is unused, e.g. the dependency on libiconv. | ./src/mono/mono/metadata/metadata-internals.h | /**
* \file
*/
#ifndef __MONO_METADATA_INTERNALS_H__
#define __MONO_METADATA_INTERNALS_H__
#include "mono/utils/mono-forward-internal.h"
#include "mono/metadata/image.h"
#include "mono/metadata/blob.h"
#include "mono/metadata/cil-coff.h"
#include "mono/metadata/mempool.h"
#include "mono/metadata/domain-internals.h"
#include "mono/metadata/mono-hash.h"
#include "mono/utils/mono-compiler.h"
#include "mono/utils/mono-dl.h"
#include "mono/utils/monobitset.h"
#include "mono/utils/mono-property-hash.h"
#include "mono/utils/mono-value-hash.h"
#include <mono/utils/mono-error.h>
#include "mono/utils/mono-conc-hashtable.h"
#include "mono/utils/refcount.h"
struct _MonoType {
union {
MonoClass *klass; /* for VALUETYPE and CLASS */
MonoType *type; /* for PTR */
MonoArrayType *array; /* for ARRAY */
MonoMethodSignature *method;
MonoGenericParam *generic_param; /* for VAR and MVAR */
MonoGenericClass *generic_class; /* for GENERICINST */
} data;
unsigned int attrs : 16; /* param attributes or field flags */
MonoTypeEnum type : 8;
unsigned int has_cmods : 1;
unsigned int byref__ : 1; /* don't access directly, use m_type_is_byref */
unsigned int pinned : 1; /* valid when included in a local var signature */
};
typedef struct {
unsigned int required : 1;
MonoType *type;
} MonoSingleCustomMod;
/* Aggregate custom modifiers can happen if a generic VAR or MVAR is inflated,
 * and both the VAR and the type that will be used to inflate it have custom
* modifiers, but they come from different images. (e.g. inflating 'class G<T>
* {void Test (T modopt(IsConst) t);}' with 'int32 modopt(IsLong)' where G is
* in image1 and the int32 is in image2.)
*
* Moreover, we can't just store an image and a type token per modifier, because
* Roslyn and C++/CLI sometimes create modifiers that mention generic parameters that must be inflated, like:
* void .CL1`1.Test(!0 modopt(System.Nullable`1<!0>))
* So we have to store a resolved MonoType*.
*
* Because the types come from different images, we allocate the aggregate
* custom modifiers container object in the mempool of a MonoImageSet to ensure
* that it doesn't have dangling image pointers.
*/
typedef struct {
uint8_t count;
MonoSingleCustomMod modifiers[1]; /* Actual length is count */
} MonoAggregateModContainer;
/* ECMA allows up to 64 custom modifiers. It's possible we could see more at
 * runtime due to modifiers being appended together when we inflate a type. In
* that case we should revisit the places where this define is used to make
* sure that we don't blow up the stack (or switch to heap allocation for
* temporaries).
*/
#define MONO_MAX_EXPECTED_CMODS 64
typedef struct {
MonoType unmodified;
gboolean is_aggregate;
union {
MonoCustomModContainer cmods;
/* the actual aggregate modifiers are in a MonoImageSet mempool
* that includes all the images of all the modifier types and
* also the type that this aggregate container is a part of.*/
MonoAggregateModContainer *amods;
} mods;
} MonoTypeWithModifiers;
gboolean
mono_type_is_aggregate_mods (const MonoType *t);
static inline void
mono_type_with_mods_init (MonoType *dest, uint8_t num_mods, gboolean is_aggregate)
{
if (num_mods == 0) {
dest->has_cmods = 0;
return;
}
dest->has_cmods = 1;
MonoTypeWithModifiers *dest_full = (MonoTypeWithModifiers *)dest;
dest_full->is_aggregate = !!is_aggregate;
if (is_aggregate)
dest_full->mods.amods = NULL;
else
dest_full->mods.cmods.count = num_mods;
}
MonoCustomModContainer *
mono_type_get_cmods (const MonoType *t);
MonoAggregateModContainer *
mono_type_get_amods (const MonoType *t);
void
mono_type_set_amods (MonoType *t, MonoAggregateModContainer *amods);
static inline uint8_t
mono_type_custom_modifier_count (const MonoType *t)
{
if (!t->has_cmods)
return 0;
MonoTypeWithModifiers *full = (MonoTypeWithModifiers *)t;
if (full->is_aggregate)
return full->mods.amods->count;
else
return full->mods.cmods.count;
}
MonoType *
mono_type_get_custom_modifier (const MonoType *ty, uint8_t idx, gboolean *required, MonoError *error);
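/*
 * Illustrative sketch (dump_custom_modifiers is hypothetical): walking a
 * type's custom modifiers with the two accessors above.
 */
static void
dump_custom_modifiers (MonoType *ty, MonoError *error)
{
	uint8_t count = mono_type_custom_modifier_count (ty);
	for (uint8_t i = 0; i < count; ++i) {
		gboolean required = FALSE;
		MonoType *cmod = mono_type_get_custom_modifier (ty, i, &required, error);
		if (!cmod)
			return; /* the accessor set the error */
		g_print ("cmod %d: %s\n", (int)i, required ? "modreq" : "modopt");
	}
}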
// Note: sizeof (MonoType) is dangerous. It can copy the num_mods
// field without copying the variably sized array. This leads to
// memory unsafety on the stack and/or heap when we try to traverse
// this array.
//
// Use mono_sizeof_type to get the size of the memory to copy.
#define MONO_SIZEOF_TYPE sizeof (MonoType)
size_t
mono_sizeof_type_with_mods (uint8_t num_mods, gboolean aggregate);
size_t
mono_sizeof_type (const MonoType *ty);
size_t
mono_sizeof_aggregate_modifiers (uint8_t num_mods);
MonoAggregateModContainer *
mono_metadata_get_canonical_aggregate_modifiers (MonoAggregateModContainer *candidate);
#define MONO_SECMAN_FLAG_INIT(x) (x & 0x2)
#define MONO_SECMAN_FLAG_GET_VALUE(x) (x & 0x1)
#define MONO_SECMAN_FLAG_SET_VALUE(x,y) do { x = ((y) ? 0x3 : 0x2); } while (0)
#define MONO_PUBLIC_KEY_TOKEN_LENGTH 17
#define MONO_PROCESSOR_ARCHITECTURE_NONE 0
#define MONO_PROCESSOR_ARCHITECTURE_MSIL 1
#define MONO_PROCESSOR_ARCHITECTURE_X86 2
#define MONO_PROCESSOR_ARCHITECTURE_IA64 3
#define MONO_PROCESSOR_ARCHITECTURE_AMD64 4
#define MONO_PROCESSOR_ARCHITECTURE_ARM 5
struct _MonoAssemblyName {
const char *name;
const char *culture;
const char *hash_value;
const mono_byte* public_key;
	// string of 16 hex chars + 1 NUL terminator
mono_byte public_key_token [MONO_PUBLIC_KEY_TOKEN_LENGTH];
uint32_t hash_alg;
uint32_t hash_len;
uint32_t flags;
int32_t major, minor, build, revision, arch;
	// Members needed so mono_stringify_assembly_name works correctly
MonoBoolean without_version;
MonoBoolean without_culture;
MonoBoolean without_public_key_token;
};
struct MonoTypeNameParse {
char *name_space;
char *name;
MonoAssemblyName assembly;
GList *modifiers; /* 0 -> byref, -1 -> pointer, > 0 -> array rank */
GPtrArray *type_arguments;
GList *nested;
};
typedef struct _MonoAssemblyContext {
/* Don't fire managed load event for this assembly */
guint8 no_managed_load_event : 1;
} MonoAssemblyContext;
struct _MonoAssembly {
/*
* The number of appdomains which have this assembly loaded plus the number of
* assemblies referencing this assembly through an entry in their image->references
* arrays. The latter is needed because entries in the image->references array
* might point to assemblies which are only loaded in some appdomains, and without
* the additional reference, they can be freed at any time.
* The ref_count is initially 0.
*/
gint32 ref_count; /* use atomic operations only */
char *basedir;
MonoAssemblyName aname;
MonoImage *image;
GSList *friend_assembly_names; /* Computed by mono_assembly_load_friends () */
GSList *ignores_checks_assembly_names; /* Computed by mono_assembly_load_friends () */
guint8 friend_assembly_names_inited;
guint8 dynamic;
MonoAssemblyContext context;
guint8 wrap_non_exception_throws;
guint8 wrap_non_exception_throws_inited;
guint8 jit_optimizer_disabled;
guint8 jit_optimizer_disabled_inited;
guint8 runtime_marshalling_enabled;
guint8 runtime_marshalling_enabled_inited;
/* security manager flags (one bit is for lazy initialization) */
guint32 skipverification:2; /* Has SecurityPermissionFlag.SkipVerification permission */
};
typedef struct {
const char* data;
guint32 size;
} MonoStreamHeader;
struct _MonoTableInfo {
const char *base;
guint rows_ : 24; /* don't access directly, use table_info_get_rows */
guint row_size : 8;
	/*
	 * Tables contain up to 9 columns, and the possible sizes of the
	 * fields in the documentation are 1, 2 and 4 bytes, so each
	 * column's size can be encoded in 2 bits.
	 *
	 * A 32 bit value can encode the resulting sizes.
	 *
	 * The top eight bits encode the number of columns in the table.
	 * We only need 4 bits for that, but 8 keeps the field byte-aligned
	 * so no shift is required.
	 */
guint32 size_bitfield;
};
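/*
 * Illustrative sketch (assumed layout, for exposition only; these helpers are
 * not part of the runtime): one self-consistent way to realize the encoding
 * described above is to store (size - 1) per column, so that 1-, 2- and
 * 4-byte fields fit in 2 bits, with the column count in the top byte.
 */
static inline guint32
example_pack_column_size (guint32 bitfield, int column, int size_in_bytes)
{
	return bitfield | (((guint32)(size_in_bytes - 1)) << (column * 2));
}
static inline int
example_column_size (guint32 bitfield, int column)
{
	return (int)((bitfield >> (column * 2)) & 0x3) + 1;
}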
#define REFERENCE_MISSING ((gpointer) -1)
typedef struct {
gboolean (*match) (MonoImage*);
gboolean (*load_pe_data) (MonoImage*);
gboolean (*load_cli_data) (MonoImage*);
gboolean (*load_tables) (MonoImage*);
} MonoImageLoader;
/* Represents the physical bytes for an image (usually in the file system, but
* could be in memory).
*
* The MonoImageStorage owns the raw data for an image and is responsible for
* cleanup.
*
* May be shared by multiple MonoImage objects if they opened the same
* underlying file or byte blob in memory.
*
* There is an abstract string key (usually a file path, but could be formed in
* other ways) that is used to share MonoImageStorage objects among images.
*
*/
typedef struct {
MonoRefCount ref;
/* key used for lookups. owned by this image storage. */
char *key;
/* If the raw data was allocated from a source such as mmap, the allocator may store resource tracking information here. */
void *raw_data_handle;
char *raw_data;
guint32 raw_data_len;
/* data was allocated with mono_file_map and must be unmapped */
guint8 raw_buffer_used : 1;
/* data was allocated with malloc and must be freed */
guint8 raw_data_allocated : 1;
/* data was allocated with mono_file_map_fileio */
guint8 fileio_used : 1;
#ifdef HOST_WIN32
/* Module was loaded using LoadLibrary. */
guint8 is_module_handle : 1;
/* Module entry point is _CorDllMain. */
guint8 has_entry_point : 1;
#endif
} MonoImageStorage;
struct _MonoImage {
/*
* This count is incremented during these situations:
* - An assembly references this MonoImage through its 'image' field
* - This MonoImage is present in the 'files' field of an image
* - This MonoImage is present in the 'modules' field of an image
* - A thread is holding a temporary reference to this MonoImage between
* calls to mono_image_open and mono_image_close ()
*/
int ref_count;
MonoImageStorage *storage;
/* Aliases storage->raw_data when storage is non-NULL. Otherwise NULL. */
char *raw_data;
guint32 raw_data_len;
	/* Whether this is a dynamically emitted module */
guint8 dynamic : 1;
	/* Whether this image contains uncompressed metadata */
guint8 uncompressed_metadata : 1;
	/* Whether this image contains only metadata, without PE data */
guint8 metadata_only : 1;
guint8 checked_module_cctor : 1;
guint8 has_module_cctor : 1;
guint8 idx_string_wide : 1;
guint8 idx_guid_wide : 1;
guint8 idx_blob_wide : 1;
	/* NOT SUPPORTED: Whether this image is considered platform code for the CoreCLR security model */
guint8 core_clr_platform_code : 1;
/* Whether a #JTD stream was present. Indicates that this image was a minimal delta and its heaps only include the new heap entries */
guint8 minimal_delta : 1;
/* The path to the file for this image or an arbitrary name for images loaded from data. */
char *name;
/* The path to the file for this image or NULL */
char *filename;
/* The assembly name reported in the file for this image (expected to be NULL for a netmodule) */
const char *assembly_name;
/* The module name reported in the file for this image (could be NULL for a malformed file) */
const char *module_name;
guint32 time_date_stamp;
char *version;
gint16 md_version_major, md_version_minor;
char *guid;
MonoCLIImageInfo *image_info;
MonoMemPool *mempool; /*protected by the image lock*/
char *raw_metadata;
MonoStreamHeader heap_strings;
MonoStreamHeader heap_us;
MonoStreamHeader heap_blob;
MonoStreamHeader heap_guid;
MonoStreamHeader heap_tables;
MonoStreamHeader heap_pdb;
const char *tables_base;
/* For PPDB files */
guint64 referenced_tables;
int *referenced_table_rows;
/**/
MonoTableInfo tables [MONO_TABLE_NUM];
/*
* references is initialized only by using the mono_assembly_open
* function, and not by using the lowlevel mono_image_open.
*
* Protected by the image lock.
*
* It is NULL terminated.
*/
MonoAssembly **references;
int nreferences;
/* Code files in the assembly. The main assembly has a "file" table and also a "module"
* table, where the module table is a subset of the file table. We track both lists,
* and because we can lazy-load them at different times we reference-increment both.
*/
/* No netmodules in netcore, but for System.Reflection.Emit support we still use modules */
MonoImage **modules;
guint32 module_count;
gboolean *modules_loaded;
MonoImage **files;
guint32 file_count;
MonoAotModule *aot_module;
guint8 aotid[16];
/*
* The Assembly this image was loaded from.
*/
MonoAssembly *assembly;
/*
* The AssemblyLoadContext that this image was loaded into.
*/
MonoAssemblyLoadContext *alc;
/*
* Indexed by method tokens and typedef tokens.
*/
GHashTable *method_cache; /*protected by the image lock*/
MonoInternalHashTable class_cache;
/* Indexed by memberref + methodspec tokens */
GHashTable *methodref_cache; /*protected by the image lock*/
/*
* Indexed by fielddef and memberref tokens
*/
MonoConcurrentHashTable *field_cache; /*protected by the image lock*/
/* indexed by typespec tokens. */
MonoConcurrentHashTable *typespec_cache; /* protected by the image lock */
/* indexed by token */
GHashTable *memberref_signatures;
/* Indexed by blob heap indexes */
GHashTable *method_signatures;
/*
* Indexes namespaces to hash tables that map class name to typedef token.
*/
GHashTable *name_cache; /*protected by the image lock*/
/*
* Indexed by MonoClass
*/
GHashTable *array_cache;
GHashTable *ptr_cache;
GHashTable *szarray_cache;
/* This has a separate lock to improve scalability */
mono_mutex_t szarray_cache_lock;
/*
* indexed by SignaturePointerPair
*/
GHashTable *native_func_wrapper_cache;
/*
* indexed by MonoMethod pointers
*/
GHashTable *wrapper_param_names;
GHashTable *array_accessor_cache;
GHashTable *icall_wrapper_cache;
GHashTable *rgctx_template_hash; /* LOCKING: templates lock */
/* Contains rarely used fields of runtime structures belonging to this image */
MonoPropertyHash *property_hash;
void *reflection_info;
/*
* user_info is a public field and is not touched by the
* metadata engine
*/
void *user_info;
#ifndef DISABLE_DLLMAP
/* dll map entries */
MonoDllMap *dll_map;
#endif
/* interfaces IDs from this image */
/* protected by the classes lock */
MonoBitSet *interface_bitset;
/* when the image is being closed, this is abused as a list of
malloc'ed regions to be freed. */
GSList *reflection_info_unregister_classes;
/* List of dependent image sets containing this image */
/* Protected by image_sets_lock */
GSList *image_sets;
	/* Caches for wrappers that DO NOT reference generic arguments */
MonoWrapperCaches wrapper_caches;
/* Pre-allocated anon generic params for the first N generic
* parameters, for a small N */
MonoGenericParam *var_gparam_cache_fast;
MonoGenericParam *mvar_gparam_cache_fast;
/* Anon generic parameters past N, if needed */
MonoConcurrentHashTable *var_gparam_cache;
MonoConcurrentHashTable *mvar_gparam_cache;
/* The loader used to load this image */
MonoImageLoader *loader;
// Containers for MonoGenericParams associated with this image but not with any specific class or method. Created on demand.
// This could happen, for example, for MonoTypes associated with TypeSpec table entries.
MonoGenericContainer *anonymous_generic_class_container;
MonoGenericContainer *anonymous_generic_method_container;
gboolean weak_fields_inited;
/* Contains 1 based indexes */
GHashTable *weak_field_indexes;
/* baseline images only: whether any metadata updates have been applied to this image */
gboolean has_updates;
/*
* No other runtime locks must be taken while holding this lock.
* It's meant to be used only to mutate and query structures part of this image.
*/
mono_mutex_t lock;
};
enum {
MONO_SECTION_TEXT,
MONO_SECTION_RSRC,
MONO_SECTION_RELOC,
MONO_SECTION_MAX
};
typedef struct {
GHashTable *hash;
char *data;
guint32 alloc_size; /* malloced bytes */
guint32 index;
guint32 offset; /* from start of metadata */
} MonoDynamicStream;
typedef struct {
guint32 alloc_rows;
guint32 rows;
guint8 row_size; /* calculated later with column_sizes */
guint8 columns;
guint32 next_idx;
guint32 *values; /* rows * columns */
} MonoDynamicTable;
/* "Dynamic" assemblies and images arise from System.Reflection.Emit */
struct _MonoDynamicAssembly {
MonoAssembly assembly;
char *strong_name;
guint32 strong_name_size;
};
struct _MonoDynamicImage {
MonoImage image;
guint32 meta_size;
guint32 text_rva;
guint32 metadata_rva;
guint32 image_base;
guint32 cli_header_offset;
guint32 iat_offset;
guint32 idt_offset;
guint32 ilt_offset;
guint32 imp_names_offset;
struct {
guint32 rva;
guint32 size;
guint32 offset;
guint32 attrs;
} sections [MONO_SECTION_MAX];
GHashTable *typespec;
GHashTable *typeref;
GHashTable *handleref;
MonoGHashTable *tokens;
GHashTable *blob_cache;
GHashTable *standalonesig_cache;
GList *array_methods;
GHashTable *method_aux_hash;
GHashTable *vararg_aux_hash;
MonoGHashTable *generic_def_objects;
gboolean initial_image;
guint32 pe_kind, machine;
char *strong_name;
guint32 strong_name_size;
char *win32_res;
guint32 win32_res_size;
guint8 *public_key;
int public_key_len;
MonoDynamicStream sheap;
MonoDynamicStream code; /* used to store method headers and bytecode */
MonoDynamicStream resources; /* managed embedded resources */
MonoDynamicStream us;
MonoDynamicStream blob;
MonoDynamicStream tstream;
MonoDynamicStream guid;
MonoDynamicTable tables [MONO_TABLE_NUM];
MonoClass *wrappers_type; /*wrappers are bound to this type instead of <Module>*/
};
/* Contains information about assembly binding */
typedef struct _MonoAssemblyBindingInfo {
char *name;
char *culture;
guchar public_key_token [MONO_PUBLIC_KEY_TOKEN_LENGTH];
int major;
int minor;
AssemblyVersionSet old_version_bottom;
AssemblyVersionSet old_version_top;
AssemblyVersionSet new_version;
guint has_old_version_bottom : 1;
guint has_old_version_top : 1;
guint has_new_version : 1;
guint is_valid : 1;
gint32 domain_id; /*Needed to unload per-domain binding*/
} MonoAssemblyBindingInfo;
struct _MonoMethodHeader {
const unsigned char *code;
#ifdef MONO_SMALL_CONFIG
guint16 code_size;
#else
guint32 code_size;
#endif
guint16 max_stack : 15;
unsigned int is_transient: 1; /* mono_metadata_free_mh () will actually free this header */
unsigned int num_clauses : 15;
/* if num_locals != 0, then the following apply: */
unsigned int init_locals : 1;
guint16 num_locals;
MonoExceptionClause *clauses;
MonoBitSet *volatile_args;
MonoBitSet *volatile_locals;
MonoType *locals [MONO_ZERO_LEN_ARRAY];
};
typedef struct {
const unsigned char *code;
guint32 code_size;
guint16 max_stack;
gboolean has_clauses;
gboolean has_locals;
} MonoMethodHeaderSummary;
// FIXME? offsetof (MonoMethodHeader, locals)?
#define MONO_SIZEOF_METHOD_HEADER (sizeof (struct _MonoMethodHeader) - MONO_ZERO_LEN_ARRAY * SIZEOF_VOID_P)
struct _MonoMethodSignature {
MonoType *ret;
#ifdef MONO_SMALL_CONFIG
guint8 param_count;
gint8 sentinelpos;
unsigned int generic_param_count : 5;
#else
guint16 param_count;
gint16 sentinelpos;
unsigned int generic_param_count : 16;
#endif
unsigned int call_convention : 6;
unsigned int hasthis : 1;
unsigned int explicit_this : 1;
unsigned int pinvoke : 1;
unsigned int is_inflated : 1;
unsigned int has_type_parameters : 1;
unsigned int suppress_gc_transition : 1;
unsigned int marshalling_disabled : 1;
MonoType *params [MONO_ZERO_LEN_ARRAY];
};
/*
* AOT cache configuration loaded from config files.
* Doesn't really belong here.
*/
typedef struct {
/*
* Enable aot caching for applications whose main assemblies are in
* this list.
*/
GSList *apps;
GSList *assemblies;
char *aot_options;
} MonoAotCacheConfig;
#define MONO_SIZEOF_METHOD_SIGNATURE (sizeof (struct _MonoMethodSignature) - MONO_ZERO_LEN_ARRAY * SIZEOF_VOID_P)
static inline gboolean
image_is_dynamic (MonoImage *image)
{
#ifdef DISABLE_REFLECTION_EMIT
return FALSE;
#else
return image->dynamic;
#endif
}
static inline gboolean
assembly_is_dynamic (MonoAssembly *assembly)
{
#ifdef DISABLE_REFLECTION_EMIT
return FALSE;
#else
return assembly->dynamic;
#endif
}
static inline int
table_info_get_rows (const MonoTableInfo *table)
{
return table->rows_;
}
/* for use with allocated memory blocks (assumes alignment is to 8 bytes) */
MONO_COMPONENT_API guint mono_aligned_addr_hash (gconstpointer ptr);
void
mono_image_check_for_module_cctor (MonoImage *image);
gpointer
mono_image_alloc (MonoImage *image, guint size);
gpointer
mono_image_alloc0 (MonoImage *image, guint size);
#define mono_image_new0(image,type,size) ((type *) mono_image_alloc0 (image, sizeof (type)* (size)))
char*
mono_image_strdup (MonoImage *image, const char *s);
char*
mono_image_strdup_vprintf (MonoImage *image, const char *format, va_list args);
char*
mono_image_strdup_printf (MonoImage *image, const char *format, ...) MONO_ATTR_FORMAT_PRINTF(2,3);
GList*
mono_g_list_prepend_image (MonoImage *image, GList *list, gpointer data);
GSList*
mono_g_slist_append_image (MonoImage *image, GSList *list, gpointer data);
MONO_COMPONENT_API
void
mono_image_lock (MonoImage *image);
MONO_COMPONENT_API
void
mono_image_unlock (MonoImage *image);
gpointer
mono_image_property_lookup (MonoImage *image, gpointer subject, guint32 property);
void
mono_image_property_insert (MonoImage *image, gpointer subject, guint32 property, gpointer value);
void
mono_image_property_remove (MonoImage *image, gpointer subject);
MONO_COMPONENT_API
gboolean
mono_image_close_except_pools (MonoImage *image);
MONO_COMPONENT_API
void
mono_image_close_finish (MonoImage *image);
typedef void (*MonoImageUnloadFunc) (MonoImage *image, gpointer user_data);
void
mono_install_image_unload_hook (MonoImageUnloadFunc func, gpointer user_data);
void
mono_remove_image_unload_hook (MonoImageUnloadFunc func, gpointer user_data);
void
mono_install_image_loader (const MonoImageLoader *loader);
void
mono_image_append_class_to_reflection_info_set (MonoClass *klass);
typedef struct _MonoMetadataUpdateData MonoMetadataUpdateData;
struct _MonoMetadataUpdateData {
int has_updates;
};
extern MonoMetadataUpdateData mono_metadata_update_data_private;
/* returns TRUE if there's at least one update */
static inline gboolean
mono_metadata_has_updates (void)
{
return mono_metadata_update_data_private.has_updates != 0;
}
/* components can't call the inline function directly since the private data isn't exported */
MONO_COMPONENT_API
gboolean
mono_metadata_has_updates_api (void);
void
mono_image_effective_table_slow (const MonoTableInfo **t, int idx);
gboolean
mono_metadata_update_has_modified_rows (const MonoTableInfo *t);
static inline void
mono_image_effective_table (const MonoTableInfo **t, int idx)
{
if (G_UNLIKELY (mono_metadata_has_updates ())) {
if (G_UNLIKELY (idx >= table_info_get_rows ((*t)) || mono_metadata_update_has_modified_rows (*t))) {
mono_image_effective_table_slow (t, idx);
}
}
}
enum MonoEnCDeltaOrigin {
MONO_ENC_DELTA_API = 0,
MONO_ENC_DELTA_DBG = 1,
};
MONO_COMPONENT_API void
mono_image_load_enc_delta (int delta_origin, MonoImage *base_image, gconstpointer dmeta, uint32_t dmeta_len, gconstpointer dil, uint32_t dil_len, gconstpointer dpdb, uint32_t dpdb_len, MonoError *error);
gboolean
mono_image_load_cli_header (MonoImage *image, MonoCLIImageInfo *iinfo);
gboolean
mono_image_load_metadata (MonoImage *image, MonoCLIImageInfo *iinfo);
const char*
mono_metadata_string_heap_checked (MonoImage *meta, uint32_t table_index, MonoError *error);
const char *
mono_metadata_blob_heap_null_ok (MonoImage *meta, guint32 index);
const char*
mono_metadata_blob_heap_checked (MonoImage *meta, uint32_t table_index, MonoError *error);
gboolean
mono_metadata_decode_row_checked (const MonoImage *image, const MonoTableInfo *t, int idx, uint32_t *res, int res_size, MonoError *error);
MONO_COMPONENT_API
void
mono_metadata_decode_row_raw (const MonoTableInfo *t, int idx, uint32_t *res, int res_size);
gboolean
mono_metadata_decode_row_dynamic_checked (const MonoDynamicImage *image, const MonoDynamicTable *t, int idx, guint32 *res, int res_size, MonoError *error);
MonoType*
mono_metadata_get_shared_type (MonoType *type);
void
mono_metadata_clean_generic_classes_for_image (MonoImage *image);
gboolean
mono_metadata_table_bounds_check_slow (MonoImage *image, int table_index, int token_index);
int
mono_metadata_table_num_rows_slow (MonoImage *image, int table_index);
static inline int
mono_metadata_table_num_rows (MonoImage *image, int table_index)
{
if (G_LIKELY (!image->has_updates))
return table_info_get_rows (&image->tables [table_index]);
else
return mono_metadata_table_num_rows_slow (image, table_index);
}
/* token_index is 1-based */
static inline gboolean
mono_metadata_table_bounds_check (MonoImage *image, int table_index, int token_index)
{
	/* Returns TRUE if the given 1-based index is out of bounds for the given table */
if (G_LIKELY (token_index <= table_info_get_rows (&image->tables [table_index])))
return FALSE;
if (G_LIKELY (!image->has_updates))
return TRUE;
return mono_metadata_table_bounds_check_slow (image, table_index, token_index);
}
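/*
 * Illustrative sketch (example_method_token_in_range is hypothetical):
 * validating the 1-based row index of a METHOD token before decoding it.
 */
static inline gboolean
example_method_token_in_range (MonoImage *image, guint32 token)
{
	guint32 idx = mono_metadata_token_index (token); /* 1-based row index */
	return !mono_metadata_table_bounds_check (image, MONO_TABLE_METHOD, idx);
}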
MONO_COMPONENT_API
const char * mono_meta_table_name (int table);
void mono_metadata_compute_table_bases (MonoImage *meta);
gboolean
mono_metadata_interfaces_from_typedef_full (MonoImage *image,
guint32 table_index,
MonoClass ***interfaces,
guint *count,
gboolean heap_alloc_result,
MonoGenericContext *context,
MonoError *error);
MONO_API MonoMethodSignature *
mono_metadata_parse_method_signature_full (MonoImage *image,
MonoGenericContainer *generic_container,
int def,
const char *ptr,
const char **rptr,
MonoError *error);
MONO_API MonoMethodHeader *
mono_metadata_parse_mh_full (MonoImage *image,
MonoGenericContainer *container,
const char *ptr,
MonoError *error);
MonoMethodSignature *mono_metadata_parse_signature_checked (MonoImage *image,
uint32_t token,
MonoError *error);
gboolean
mono_method_get_header_summary (MonoMethod *method, MonoMethodHeaderSummary *summary);
int* mono_metadata_get_param_attrs (MonoImage *m, int def, int param_count);
gboolean mono_metadata_method_has_param_attrs (MonoImage *m, int def);
guint
mono_metadata_generic_context_hash (const MonoGenericContext *context);
gboolean
mono_metadata_generic_context_equal (const MonoGenericContext *g1,
const MonoGenericContext *g2);
MonoGenericInst *
mono_metadata_parse_generic_inst (MonoImage *image,
MonoGenericContainer *container,
int count,
const char *ptr,
const char **rptr,
MonoError *error);
MONO_COMPONENT_API MonoGenericInst *
mono_metadata_get_generic_inst (int type_argc,
MonoType **type_argv);
MonoGenericInst *
mono_metadata_get_canonical_generic_inst (MonoGenericInst *candidate);
MonoGenericClass *
mono_metadata_lookup_generic_class (MonoClass *gclass,
MonoGenericInst *inst,
gboolean is_dynamic);
MonoGenericInst * mono_metadata_inflate_generic_inst (MonoGenericInst *ginst, MonoGenericContext *context, MonoError *error);
guint
mono_metadata_generic_param_hash (MonoGenericParam *p);
gboolean
mono_metadata_generic_param_equal (MonoGenericParam *p1, MonoGenericParam *p2);
void mono_dynamic_stream_reset (MonoDynamicStream* stream);
void mono_assembly_load_friends (MonoAssembly* ass);
gboolean mono_assembly_has_skip_verification (MonoAssembly* ass);
MONO_API gint32
mono_assembly_addref (MonoAssembly *assembly);
gint32
mono_assembly_decref (MonoAssembly *assembly);
void mono_assembly_release_gc_roots (MonoAssembly *assembly);
gboolean mono_assembly_close_except_image_pools (MonoAssembly *assembly);
void mono_assembly_close_finish (MonoAssembly *assembly);
gboolean mono_public_tokens_are_equal (const unsigned char *pubt1, const unsigned char *pubt2);
void mono_config_parse_publisher_policy (const char *filename, MonoAssemblyBindingInfo *binding_info);
gboolean
mono_assembly_name_parse_full (const char *name,
MonoAssemblyName *aname,
gboolean save_public_key,
gboolean *is_version_defined,
gboolean *is_token_defined);
gboolean
mono_assembly_fill_assembly_name_full (MonoImage *image, MonoAssemblyName *aname, gboolean copyBlobs);
MONO_API guint32 mono_metadata_get_generic_param_row (MonoImage *image, guint32 token, guint32 *owner);
MonoGenericParam*
mono_metadata_create_anon_gparam (MonoImage *image, gint32 param_num, gboolean is_mvar);
void mono_unload_interface_ids (MonoBitSet *bitset);
MonoType *mono_metadata_type_dup (MonoImage *image, const MonoType *original);
MonoType *mono_metadata_type_dup_with_cmods (MonoImage *image, const MonoType *original, const MonoType *cmods_source);
MonoMethodSignature *mono_metadata_signature_dup_full (MonoImage *image,MonoMethodSignature *sig);
MonoMethodSignature *mono_metadata_signature_dup_mempool (MonoMemPool *mp, MonoMethodSignature *sig);
MonoMethodSignature *mono_metadata_signature_dup_mem_manager (MonoMemoryManager *mem_manager, MonoMethodSignature *sig);
MonoMethodSignature *mono_metadata_signature_dup_add_this (MonoImage *image, MonoMethodSignature *sig, MonoClass *klass);
MonoGenericInst *
mono_get_shared_generic_inst (MonoGenericContainer *container);
int
mono_type_stack_size_internal (MonoType *t, int *align, gboolean allow_open);
MONO_API void mono_type_get_desc (GString *res, MonoType *type, mono_bool include_namespace);
gboolean
mono_metadata_type_equal_full (MonoType *t1, MonoType *t2, gboolean signature_only);
MonoMarshalSpec *
mono_metadata_parse_marshal_spec_full (MonoImage *image, MonoImage *parent_image, const char *ptr);
guint mono_metadata_generic_inst_hash (gconstpointer data);
gboolean mono_metadata_generic_inst_equal (gconstpointer ka, gconstpointer kb);
gboolean
mono_metadata_signature_equal_no_ret (MonoMethodSignature *sig1, MonoMethodSignature *sig2);
MONO_API void
mono_metadata_field_info_with_mempool (
MonoImage *meta,
guint32 table_index,
guint32 *offset,
guint32 *rva,
MonoMarshalSpec **marshal_spec);
MonoClassField*
mono_metadata_get_corresponding_field_from_generic_type_definition (MonoClassField *field);
MonoEvent*
mono_metadata_get_corresponding_event_from_generic_type_definition (MonoEvent *event);
MonoProperty*
mono_metadata_get_corresponding_property_from_generic_type_definition (MonoProperty *property);
guint32
mono_metadata_signature_size (MonoMethodSignature *sig);
guint mono_metadata_str_hash (gconstpointer v1);
gboolean mono_image_load_pe_data (MonoImage *image);
gboolean mono_image_load_cli_data (MonoImage *image);
void mono_image_load_names (MonoImage *image);
MonoImage *mono_image_open_raw (MonoAssemblyLoadContext *alc, const char *fname, MonoImageOpenStatus *status);
MonoImage *mono_image_open_metadata_only (MonoAssemblyLoadContext *alc, const char *fname, MonoImageOpenStatus *status);
MONO_COMPONENT_API
MonoImage *mono_image_open_from_data_internal (MonoAssemblyLoadContext *alc, char *data, guint32 data_len, gboolean need_copy, MonoImageOpenStatus *status, gboolean metadata_only, const char *name, const char *filename);
MonoException *mono_get_exception_field_access_msg (const char *msg);
MonoException *mono_get_exception_method_access_msg (const char *msg);
MonoMethod* mono_method_from_method_def_or_ref (MonoImage *m, guint32 tok, MonoGenericContext *context, MonoError *error);
MonoMethod *mono_get_method_constrained_with_method (MonoImage *image, MonoMethod *method, MonoClass *constrained_class, MonoGenericContext *context, MonoError *error);
MonoMethod *mono_get_method_constrained_checked (MonoImage *image, guint32 token, MonoClass *constrained_class, MonoGenericContext *context, MonoMethod **cil_method, MonoError *error);
void mono_type_set_alignment (MonoTypeEnum type, int align);
MonoType *
mono_type_create_from_typespec_checked (MonoImage *image, guint32 type_spec, MonoError *error);
MonoMethodSignature*
mono_method_get_signature_checked (MonoMethod *method, MonoImage *image, guint32 token, MonoGenericContext *context, MonoError *error);
MONO_COMPONENT_API MonoMethod *
mono_get_method_checked (MonoImage *image, guint32 token, MonoClass *klass, MonoGenericContext *context, MonoError *error);
guint32
mono_metadata_localscope_from_methoddef (MonoImage *meta, guint32 index);
void
mono_wrapper_caches_free (MonoWrapperCaches *cache);
MonoWrapperCaches*
mono_method_get_wrapper_cache (MonoMethod *method);
MonoType*
mono_metadata_parse_type_checked (MonoImage *m, MonoGenericContainer *container, short opt_attrs, gboolean transient, const char *ptr, const char **rptr, MonoError *error);
MonoGenericContainer *
mono_get_anonymous_container_for_image (MonoImage *image, gboolean is_mvar);
void
mono_loader_register_module (const char *name, MonoDl *module);
void
mono_ginst_get_desc (GString *str, MonoGenericInst *ginst);
void
mono_loader_set_strict_assembly_name_check (gboolean enabled);
gboolean
mono_loader_get_strict_assembly_name_check (void);
MONO_COMPONENT_API gboolean
mono_type_in_image (MonoType *type, MonoImage *image);
gboolean
mono_type_is_valid_generic_argument (MonoType *type);
void
mono_metadata_get_class_guid (MonoClass* klass, uint8_t* guid, MonoError *error);
#define MONO_CLASS_IS_INTERFACE_INTERNAL(c) ((mono_class_get_flags (c) & TYPE_ATTRIBUTE_INTERFACE) || mono_type_is_generic_parameter (m_class_get_byval_arg (c)))
static inline gboolean
m_image_is_raw_data_allocated (MonoImage *image)
{
return image->storage ? image->storage->raw_data_allocated : FALSE;
}
static inline gboolean
m_image_is_fileio_used (MonoImage *image)
{
return image->storage ? image->storage->fileio_used : FALSE;
}
#ifdef HOST_WIN32
static inline gboolean
m_image_is_module_handle (MonoImage *image)
{
return image->storage ? image->storage->is_module_handle : FALSE;
}
static inline gboolean
m_image_has_entry_point (MonoImage *image)
{
return image->storage ? image->storage->has_entry_point : FALSE;
}
#endif
static inline const char *
m_image_get_name (MonoImage *image)
{
return image->name;
}
static inline const char *
m_image_get_filename (MonoImage *image)
{
return image->filename;
}
static inline const char *
m_image_get_assembly_name (MonoImage *image)
{
return image->assembly_name;
}
static inline
MonoAssemblyLoadContext *
mono_image_get_alc (MonoImage *image)
{
return image->alc;
}
static inline
MonoAssemblyLoadContext *
mono_assembly_get_alc (MonoAssembly *assm)
{
return mono_image_get_alc (assm->image);
}
static inline MonoType*
mono_signature_get_return_type_internal (MonoMethodSignature *sig)
{
return sig->ret;
}
/**
* mono_type_get_type_internal:
* \param type the \c MonoType operated on
* \returns the IL type value for \p type. This is one of the \c MonoTypeEnum
* enum members like \c MONO_TYPE_I4 or \c MONO_TYPE_STRING.
*/
static inline int
mono_type_get_type_internal (MonoType *type)
{
return type->type;
}
/**
 * mono_type_get_signature_internal:
* \param type the \c MonoType operated on
* It is only valid to call this function if \p type is a \c MONO_TYPE_FNPTR .
* \returns the \c MonoMethodSignature pointer that describes the signature
* of the function pointer \p type represents.
*/
static inline MonoMethodSignature*
mono_type_get_signature_internal (MonoType *type)
{
g_assert (type->type == MONO_TYPE_FNPTR);
return type->data.method;
}
/**
* m_type_is_byref:
* \param type the \c MonoType operated on
* \returns TRUE if \p type represents a type passed by reference,
* FALSE otherwise.
*/
static inline gboolean
m_type_is_byref (const MonoType *type)
{
return type->byref__;
}
/**
* mono_type_get_class_internal:
* \param type the \c MonoType operated on
* It is only valid to call this function if \p type is a \c MONO_TYPE_CLASS or a
* \c MONO_TYPE_VALUETYPE . For more general functionality, use \c mono_class_from_mono_type_internal,
* instead.
* \returns the \c MonoClass pointer that describes the class that \p type represents.
*/
static inline MonoClass*
mono_type_get_class_internal (MonoType *type)
{
/* FIXME: review the runtime users before adding the assert here */
return type->data.klass;
}
/**
* mono_type_get_array_type_internal:
* \param type the \c MonoType operated on
* It is only valid to call this function if \p type is a \c MONO_TYPE_ARRAY .
* \returns a \c MonoArrayType struct describing the array type that \p type
* represents. The info includes details such as rank, array element type
* and the sizes and bounds of multidimensional arrays.
*/
static inline MonoArrayType*
mono_type_get_array_type_internal (MonoType *type)
{
return type->data.array;
}
static inline int
mono_metadata_table_to_ptr_table (int table_num)
{
switch (table_num) {
case MONO_TABLE_FIELD: return MONO_TABLE_FIELD_POINTER;
case MONO_TABLE_METHOD: return MONO_TABLE_METHOD_POINTER;
case MONO_TABLE_PARAM: return MONO_TABLE_PARAM_POINTER;
case MONO_TABLE_PROPERTY: return MONO_TABLE_PROPERTY_POINTER;
case MONO_TABLE_EVENT: return MONO_TABLE_EVENT_POINTER;
default:
g_assert_not_reached ();
}
}
#endif /* __MONO_METADATA_INTERNALS_H__ */
| /**
* \file
*/
#ifndef __MONO_METADATA_INTERNALS_H__
#define __MONO_METADATA_INTERNALS_H__
#include "mono/utils/mono-forward-internal.h"
#include "mono/metadata/image.h"
#include "mono/metadata/blob.h"
#include "mono/metadata/cil-coff.h"
#include "mono/metadata/mempool.h"
#include "mono/metadata/domain-internals.h"
#include "mono/metadata/mono-hash.h"
#include "mono/utils/mono-compiler.h"
#include "mono/utils/mono-dl.h"
#include "mono/utils/monobitset.h"
#include "mono/utils/mono-property-hash.h"
#include "mono/utils/mono-value-hash.h"
#include <mono/utils/mono-error.h>
#include "mono/utils/mono-conc-hashtable.h"
#include "mono/utils/refcount.h"
struct _MonoType {
union {
MonoClass *klass; /* for VALUETYPE and CLASS */
MonoType *type; /* for PTR */
MonoArrayType *array; /* for ARRAY */
MonoMethodSignature *method;
MonoGenericParam *generic_param; /* for VAR and MVAR */
MonoGenericClass *generic_class; /* for GENERICINST */
} data;
unsigned int attrs : 16; /* param attributes or field flags */
MonoTypeEnum type : 8;
unsigned int has_cmods : 1;
unsigned int byref__ : 1; /* don't access directly, use m_type_is_byref */
unsigned int pinned : 1; /* valid when included in a local var signature */
};
typedef struct {
unsigned int required : 1;
MonoType *type;
} MonoSingleCustomMod;
/* Aggregate custom modifiers can happen if a generic VAR or MVAR is inflated,
 * and both the VAR and the type that will be used to inflate it have custom
* modifiers, but they come from different images. (e.g. inflating 'class G<T>
* {void Test (T modopt(IsConst) t);}' with 'int32 modopt(IsLong)' where G is
* in image1 and the int32 is in image2.)
*
* Moreover, we can't just store an image and a type token per modifier, because
* Roslyn and C++/CLI sometimes create modifiers that mention generic parameters that must be inflated, like:
* void .CL1`1.Test(!0 modopt(System.Nullable`1<!0>))
* So we have to store a resolved MonoType*.
*
* Because the types come from different images, we allocate the aggregate
* custom modifiers container object in the mempool of a MonoImageSet to ensure
* that it doesn't have dangling image pointers.
*/
typedef struct {
uint8_t count;
MonoSingleCustomMod modifiers[1]; /* Actual length is count */
} MonoAggregateModContainer;
/* ECMA allows up to 64 custom modifiers. It's possible we could see more at
 * runtime due to modifiers being appended together when we inflate a type. In
* that case we should revisit the places where this define is used to make
* sure that we don't blow up the stack (or switch to heap allocation for
* temporaries).
*/
#define MONO_MAX_EXPECTED_CMODS 64
typedef struct {
MonoType unmodified;
gboolean is_aggregate;
union {
MonoCustomModContainer cmods;
/* the actual aggregate modifiers are in a MonoImageSet mempool
* that includes all the images of all the modifier types and
* also the type that this aggregate container is a part of.*/
MonoAggregateModContainer *amods;
} mods;
} MonoTypeWithModifiers;
gboolean
mono_type_is_aggregate_mods (const MonoType *t);
static inline void
mono_type_with_mods_init (MonoType *dest, uint8_t num_mods, gboolean is_aggregate)
{
if (num_mods == 0) {
dest->has_cmods = 0;
return;
}
dest->has_cmods = 1;
MonoTypeWithModifiers *dest_full = (MonoTypeWithModifiers *)dest;
dest_full->is_aggregate = !!is_aggregate;
if (is_aggregate)
dest_full->mods.amods = NULL;
else
dest_full->mods.cmods.count = num_mods;
}
MonoCustomModContainer *
mono_type_get_cmods (const MonoType *t);
MonoAggregateModContainer *
mono_type_get_amods (const MonoType *t);
void
mono_type_set_amods (MonoType *t, MonoAggregateModContainer *amods);
static inline uint8_t
mono_type_custom_modifier_count (const MonoType *t)
{
if (!t->has_cmods)
return 0;
MonoTypeWithModifiers *full = (MonoTypeWithModifiers *)t;
if (full->is_aggregate)
return full->mods.amods->count;
else
return full->mods.cmods.count;
}
MonoType *
mono_type_get_custom_modifier (const MonoType *ty, uint8_t idx, gboolean *required, MonoError *error);
// Note: sizeof (MonoType) is dangerous. It can copy the num_mods
// field without copying the variably sized array. This leads to
// memory unsafety on the stack and/or heap when we try to traverse
// this array.
//
// Use mono_sizeof_type to get the size of the memory to copy.
#define MONO_SIZEOF_TYPE sizeof (MonoType)
size_t
mono_sizeof_type_with_mods (uint8_t num_mods, gboolean aggregate);
size_t
mono_sizeof_type (const MonoType *ty);
size_t
mono_sizeof_aggregate_modifiers (uint8_t num_mods);
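/*
 * Illustrative sketch (example_new_type_with_cmods is hypothetical, and real
 * code would allocate from an image or image-set mempool rather than with
 * g_malloc0): sizing and initializing a MonoType that carries num_mods
 * non-aggregate custom modifiers, using the helpers above.
 */
static inline MonoType *
example_new_type_with_cmods (uint8_t num_mods)
{
	MonoType *ty = (MonoType *)g_malloc0 (mono_sizeof_type_with_mods (num_mods, FALSE));
	mono_type_with_mods_init (ty, num_mods, FALSE);
	return ty;
}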
MonoAggregateModContainer *
mono_metadata_get_canonical_aggregate_modifiers (MonoAggregateModContainer *candidate);
#define MONO_PUBLIC_KEY_TOKEN_LENGTH 17
#define MONO_PROCESSOR_ARCHITECTURE_NONE 0
#define MONO_PROCESSOR_ARCHITECTURE_MSIL 1
#define MONO_PROCESSOR_ARCHITECTURE_X86 2
#define MONO_PROCESSOR_ARCHITECTURE_IA64 3
#define MONO_PROCESSOR_ARCHITECTURE_AMD64 4
#define MONO_PROCESSOR_ARCHITECTURE_ARM 5
struct _MonoAssemblyName {
const char *name;
const char *culture;
const char *hash_value;
const mono_byte* public_key;
	// string of 16 hex chars + 1 NUL terminator
mono_byte public_key_token [MONO_PUBLIC_KEY_TOKEN_LENGTH];
uint32_t hash_alg;
uint32_t hash_len;
uint32_t flags;
int32_t major, minor, build, revision, arch;
	// Members needed so mono_stringify_assembly_name works correctly
MonoBoolean without_version;
MonoBoolean without_culture;
MonoBoolean without_public_key_token;
};
struct MonoTypeNameParse {
char *name_space;
char *name;
MonoAssemblyName assembly;
GList *modifiers; /* 0 -> byref, -1 -> pointer, > 0 -> array rank */
GPtrArray *type_arguments;
GList *nested;
};
typedef struct _MonoAssemblyContext {
/* Don't fire managed load event for this assembly */
guint8 no_managed_load_event : 1;
} MonoAssemblyContext;
struct _MonoAssembly {
/*
* The number of appdomains which have this assembly loaded plus the number of
* assemblies referencing this assembly through an entry in their image->references
* arrays. The latter is needed because entries in the image->references array
* might point to assemblies which are only loaded in some appdomains, and without
* the additional reference, they can be freed at any time.
* The ref_count is initially 0.
*/
gint32 ref_count; /* use atomic operations only */
char *basedir;
MonoAssemblyName aname;
MonoImage *image;
GSList *friend_assembly_names; /* Computed by mono_assembly_load_friends () */
GSList *ignores_checks_assembly_names; /* Computed by mono_assembly_load_friends () */
guint8 friend_assembly_names_inited;
guint8 dynamic;
MonoAssemblyContext context;
guint8 wrap_non_exception_throws;
guint8 wrap_non_exception_throws_inited;
guint8 jit_optimizer_disabled;
guint8 jit_optimizer_disabled_inited;
guint8 runtime_marshalling_enabled;
guint8 runtime_marshalling_enabled_inited;
};
typedef struct {
const char* data;
guint32 size;
} MonoStreamHeader;
struct _MonoTableInfo {
const char *base;
guint rows_ : 24; /* don't access directly, use table_info_get_rows */
guint row_size : 8;
	/*
	 * Tables contain up to 9 columns, and the possible sizes of the
	 * fields in the documentation are 1, 2 and 4 bytes, so each
	 * column's size can be encoded in 2 bits.
	 *
	 * A 32 bit value can encode the resulting sizes.
	 *
	 * The top eight bits encode the number of columns in the table.
	 * We only need 4 bits for that, but 8 keeps the field byte-aligned
	 * so no shift is required.
	 */
guint32 size_bitfield;
};
#define REFERENCE_MISSING ((gpointer) -1)
typedef struct {
gboolean (*match) (MonoImage*);
gboolean (*load_pe_data) (MonoImage*);
gboolean (*load_cli_data) (MonoImage*);
gboolean (*load_tables) (MonoImage*);
} MonoImageLoader;
/* Represents the physical bytes for an image (usually in the file system, but
* could be in memory).
*
* The MonoImageStorage owns the raw data for an image and is responsible for
* cleanup.
*
* May be shared by multiple MonoImage objects if they opened the same
* underlying file or byte blob in memory.
*
* There is an abstract string key (usually a file path, but could be formed in
* other ways) that is used to share MonoImageStorage objects among images.
*
*/
typedef struct {
MonoRefCount ref;
/* key used for lookups. owned by this image storage. */
char *key;
/* If the raw data was allocated from a source such as mmap, the allocator may store resource tracking information here. */
void *raw_data_handle;
char *raw_data;
guint32 raw_data_len;
/* data was allocated with mono_file_map and must be unmapped */
guint8 raw_buffer_used : 1;
/* data was allocated with malloc and must be freed */
guint8 raw_data_allocated : 1;
/* data was allocated with mono_file_map_fileio */
guint8 fileio_used : 1;
#ifdef HOST_WIN32
/* Module was loaded using LoadLibrary. */
guint8 is_module_handle : 1;
/* Module entry point is _CorDllMain. */
guint8 has_entry_point : 1;
#endif
} MonoImageStorage;
struct _MonoImage {
/*
* This count is incremented during these situations:
* - An assembly references this MonoImage through its 'image' field
* - This MonoImage is present in the 'files' field of an image
* - This MonoImage is present in the 'modules' field of an image
* - A thread is holding a temporary reference to this MonoImage between
* calls to mono_image_open and mono_image_close ()
*/
int ref_count;
MonoImageStorage *storage;
/* Aliases storage->raw_data when storage is non-NULL. Otherwise NULL. */
char *raw_data;
guint32 raw_data_len;
	/* Whether this is a dynamically emitted module */
guint8 dynamic : 1;
	/* Whether this image contains uncompressed metadata */
guint8 uncompressed_metadata : 1;
	/* Whether this image contains only metadata, without PE data */
guint8 metadata_only : 1;
guint8 checked_module_cctor : 1;
guint8 has_module_cctor : 1;
guint8 idx_string_wide : 1;
guint8 idx_guid_wide : 1;
guint8 idx_blob_wide : 1;
	/* NOT SUPPORTED: Whether this image is considered platform code for the CoreCLR security model */
guint8 core_clr_platform_code : 1;
/* Whether a #JTD stream was present. Indicates that this image was a minimal delta and its heaps only include the new heap entries */
guint8 minimal_delta : 1;
/* The path to the file for this image or an arbitrary name for images loaded from data. */
char *name;
/* The path to the file for this image or NULL */
char *filename;
/* The assembly name reported in the file for this image (expected to be NULL for a netmodule) */
const char *assembly_name;
/* The module name reported in the file for this image (could be NULL for a malformed file) */
const char *module_name;
guint32 time_date_stamp;
char *version;
gint16 md_version_major, md_version_minor;
char *guid;
MonoCLIImageInfo *image_info;
MonoMemPool *mempool; /*protected by the image lock*/
char *raw_metadata;
MonoStreamHeader heap_strings;
MonoStreamHeader heap_us;
MonoStreamHeader heap_blob;
MonoStreamHeader heap_guid;
MonoStreamHeader heap_tables;
MonoStreamHeader heap_pdb;
const char *tables_base;
/* For PPDB files */
guint64 referenced_tables;
int *referenced_table_rows;
/**/
MonoTableInfo tables [MONO_TABLE_NUM];
/*
* references is initialized only by using the mono_assembly_open
	 * function, and not by using the low-level mono_image_open.
*
* Protected by the image lock.
*
* It is NULL terminated.
*/
MonoAssembly **references;
int nreferences;
/* Code files in the assembly. The main assembly has a "file" table and also a "module"
* table, where the module table is a subset of the file table. We track both lists,
	 * and because we can lazy-load them at different times, we reference-increment both.
*/
/* No netmodules in netcore, but for System.Reflection.Emit support we still use modules */
MonoImage **modules;
guint32 module_count;
gboolean *modules_loaded;
MonoImage **files;
guint32 file_count;
MonoAotModule *aot_module;
guint8 aotid[16];
/*
* The Assembly this image was loaded from.
*/
MonoAssembly *assembly;
/*
* The AssemblyLoadContext that this image was loaded into.
*/
MonoAssemblyLoadContext *alc;
/*
* Indexed by method tokens and typedef tokens.
*/
GHashTable *method_cache; /*protected by the image lock*/
MonoInternalHashTable class_cache;
/* Indexed by memberref + methodspec tokens */
GHashTable *methodref_cache; /*protected by the image lock*/
/*
* Indexed by fielddef and memberref tokens
*/
MonoConcurrentHashTable *field_cache; /*protected by the image lock*/
/* indexed by typespec tokens. */
MonoConcurrentHashTable *typespec_cache; /* protected by the image lock */
/* indexed by token */
GHashTable *memberref_signatures;
/* Indexed by blob heap indexes */
GHashTable *method_signatures;
/*
	 * Maps namespaces to hash tables that map class names to typedef tokens.
*/
GHashTable *name_cache; /*protected by the image lock*/
/*
* Indexed by MonoClass
*/
GHashTable *array_cache;
GHashTable *ptr_cache;
GHashTable *szarray_cache;
/* This has a separate lock to improve scalability */
mono_mutex_t szarray_cache_lock;
/*
* indexed by SignaturePointerPair
*/
GHashTable *native_func_wrapper_cache;
/*
* indexed by MonoMethod pointers
*/
GHashTable *wrapper_param_names;
GHashTable *array_accessor_cache;
GHashTable *icall_wrapper_cache;
GHashTable *rgctx_template_hash; /* LOCKING: templates lock */
/* Contains rarely used fields of runtime structures belonging to this image */
MonoPropertyHash *property_hash;
void *reflection_info;
/*
* user_info is a public field and is not touched by the
* metadata engine
*/
void *user_info;
#ifndef DISABLE_DLLMAP
/* dll map entries */
MonoDllMap *dll_map;
#endif
/* interfaces IDs from this image */
/* protected by the classes lock */
MonoBitSet *interface_bitset;
/* when the image is being closed, this is abused as a list of
malloc'ed regions to be freed. */
GSList *reflection_info_unregister_classes;
/* List of dependent image sets containing this image */
/* Protected by image_sets_lock */
GSList *image_sets;
/* Caches for wrappers that DO NOT reference generic */
/* arguments */
MonoWrapperCaches wrapper_caches;
/* Pre-allocated anon generic params for the first N generic
* parameters, for a small N */
MonoGenericParam *var_gparam_cache_fast;
MonoGenericParam *mvar_gparam_cache_fast;
/* Anon generic parameters past N, if needed */
MonoConcurrentHashTable *var_gparam_cache;
MonoConcurrentHashTable *mvar_gparam_cache;
/* The loader used to load this image */
MonoImageLoader *loader;
// Containers for MonoGenericParams associated with this image but not with any specific class or method. Created on demand.
// This could happen, for example, for MonoTypes associated with TypeSpec table entries.
MonoGenericContainer *anonymous_generic_class_container;
MonoGenericContainer *anonymous_generic_method_container;
gboolean weak_fields_inited;
	/* Contains 1-based indexes */
GHashTable *weak_field_indexes;
/* baseline images only: whether any metadata updates have been applied to this image */
gboolean has_updates;
/*
* No other runtime locks must be taken while holding this lock.
* It's meant to be used only to mutate and query structures part of this image.
*/
mono_mutex_t lock;
};
enum {
MONO_SECTION_TEXT,
MONO_SECTION_RSRC,
MONO_SECTION_RELOC,
MONO_SECTION_MAX
};
typedef struct {
GHashTable *hash;
char *data;
guint32 alloc_size; /* malloced bytes */
guint32 index;
guint32 offset; /* from start of metadata */
} MonoDynamicStream;
typedef struct {
guint32 alloc_rows;
guint32 rows;
guint8 row_size; /* calculated later with column_sizes */
guint8 columns;
guint32 next_idx;
guint32 *values; /* rows * columns */
} MonoDynamicTable;
/* "Dynamic" assemblies and images arise from System.Reflection.Emit */
struct _MonoDynamicAssembly {
MonoAssembly assembly;
char *strong_name;
guint32 strong_name_size;
};
struct _MonoDynamicImage {
MonoImage image;
guint32 meta_size;
guint32 text_rva;
guint32 metadata_rva;
guint32 image_base;
guint32 cli_header_offset;
guint32 iat_offset;
guint32 idt_offset;
guint32 ilt_offset;
guint32 imp_names_offset;
struct {
guint32 rva;
guint32 size;
guint32 offset;
guint32 attrs;
} sections [MONO_SECTION_MAX];
GHashTable *typespec;
GHashTable *typeref;
GHashTable *handleref;
MonoGHashTable *tokens;
GHashTable *blob_cache;
GHashTable *standalonesig_cache;
GList *array_methods;
GHashTable *method_aux_hash;
GHashTable *vararg_aux_hash;
MonoGHashTable *generic_def_objects;
gboolean initial_image;
guint32 pe_kind, machine;
char *strong_name;
guint32 strong_name_size;
char *win32_res;
guint32 win32_res_size;
guint8 *public_key;
int public_key_len;
MonoDynamicStream sheap;
MonoDynamicStream code; /* used to store method headers and bytecode */
MonoDynamicStream resources; /* managed embedded resources */
MonoDynamicStream us;
MonoDynamicStream blob;
MonoDynamicStream tstream;
MonoDynamicStream guid;
MonoDynamicTable tables [MONO_TABLE_NUM];
MonoClass *wrappers_type; /*wrappers are bound to this type instead of <Module>*/
};
/* Contains information about assembly binding */
typedef struct _MonoAssemblyBindingInfo {
char *name;
char *culture;
guchar public_key_token [MONO_PUBLIC_KEY_TOKEN_LENGTH];
int major;
int minor;
AssemblyVersionSet old_version_bottom;
AssemblyVersionSet old_version_top;
AssemblyVersionSet new_version;
guint has_old_version_bottom : 1;
guint has_old_version_top : 1;
guint has_new_version : 1;
guint is_valid : 1;
gint32 domain_id; /*Needed to unload per-domain binding*/
} MonoAssemblyBindingInfo;
struct _MonoMethodHeader {
const unsigned char *code;
#ifdef MONO_SMALL_CONFIG
guint16 code_size;
#else
guint32 code_size;
#endif
guint16 max_stack : 15;
unsigned int is_transient: 1; /* mono_metadata_free_mh () will actually free this header */
unsigned int num_clauses : 15;
/* if num_locals != 0, then the following apply: */
unsigned int init_locals : 1;
guint16 num_locals;
MonoExceptionClause *clauses;
MonoBitSet *volatile_args;
MonoBitSet *volatile_locals;
MonoType *locals [MONO_ZERO_LEN_ARRAY];
};
typedef struct {
const unsigned char *code;
guint32 code_size;
guint16 max_stack;
gboolean has_clauses;
gboolean has_locals;
} MonoMethodHeaderSummary;
// FIXME? offsetof (MonoMethodHeader, locals)?
#define MONO_SIZEOF_METHOD_HEADER (sizeof (struct _MonoMethodHeader) - MONO_ZERO_LEN_ARRAY * SIZEOF_VOID_P)
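/*
 * Illustrative allocation (not taken from the runtime) of a header with
 * n locals; the trailing MONO_ZERO_LEN_ARRAY member is sized at runtime,
 * which is why the macro above subtracts it:
 *
 *   MonoMethodHeader *mh = (MonoMethodHeader *)
 *       g_malloc0 (MONO_SIZEOF_METHOD_HEADER + n * sizeof (MonoType *));
 */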
struct _MonoMethodSignature {
MonoType *ret;
#ifdef MONO_SMALL_CONFIG
guint8 param_count;
gint8 sentinelpos;
unsigned int generic_param_count : 5;
#else
guint16 param_count;
gint16 sentinelpos;
unsigned int generic_param_count : 16;
#endif
unsigned int call_convention : 6;
unsigned int hasthis : 1;
unsigned int explicit_this : 1;
unsigned int pinvoke : 1;
unsigned int is_inflated : 1;
unsigned int has_type_parameters : 1;
unsigned int suppress_gc_transition : 1;
unsigned int marshalling_disabled : 1;
MonoType *params [MONO_ZERO_LEN_ARRAY];
};
/*
* AOT cache configuration loaded from config files.
* Doesn't really belong here.
*/
typedef struct {
/*
* Enable aot caching for applications whose main assemblies are in
* this list.
*/
GSList *apps;
GSList *assemblies;
char *aot_options;
} MonoAotCacheConfig;
#define MONO_SIZEOF_METHOD_SIGNATURE (sizeof (struct _MonoMethodSignature) - MONO_ZERO_LEN_ARRAY * SIZEOF_VOID_P)
static inline gboolean
image_is_dynamic (MonoImage *image)
{
#ifdef DISABLE_REFLECTION_EMIT
return FALSE;
#else
return image->dynamic;
#endif
}
static inline gboolean
assembly_is_dynamic (MonoAssembly *assembly)
{
#ifdef DISABLE_REFLECTION_EMIT
return FALSE;
#else
return assembly->dynamic;
#endif
}
static inline int
table_info_get_rows (const MonoTableInfo *table)
{
return table->rows_;
}
/* for use with allocated memory blocks (assumes the address is aligned to 8 bytes) */
MONO_COMPONENT_API guint mono_aligned_addr_hash (gconstpointer ptr);
void
mono_image_check_for_module_cctor (MonoImage *image);
gpointer
mono_image_alloc (MonoImage *image, guint size);
gpointer
mono_image_alloc0 (MonoImage *image, guint size);
#define mono_image_new0(image,type,size) ((type *) mono_image_alloc0 (image, sizeof (type)* (size)))
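/*
 * Example (illustrative): allocate a zero-initialized, image-lifetime
 * array of 16 MonoType pointers; the memory lives until the image is
 * closed and is never freed individually:
 *
 *   MonoType **types = mono_image_new0 (image, MonoType *, 16);
 */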
char*
mono_image_strdup (MonoImage *image, const char *s);
char*
mono_image_strdup_vprintf (MonoImage *image, const char *format, va_list args);
char*
mono_image_strdup_printf (MonoImage *image, const char *format, ...) MONO_ATTR_FORMAT_PRINTF(2,3);
GList*
mono_g_list_prepend_image (MonoImage *image, GList *list, gpointer data);
GSList*
mono_g_slist_append_image (MonoImage *image, GSList *list, gpointer data);
MONO_COMPONENT_API
void
mono_image_lock (MonoImage *image);
MONO_COMPONENT_API
void
mono_image_unlock (MonoImage *image);
gpointer
mono_image_property_lookup (MonoImage *image, gpointer subject, guint32 property);
void
mono_image_property_insert (MonoImage *image, gpointer subject, guint32 property, gpointer value);
void
mono_image_property_remove (MonoImage *image, gpointer subject);
MONO_COMPONENT_API
gboolean
mono_image_close_except_pools (MonoImage *image);
MONO_COMPONENT_API
void
mono_image_close_finish (MonoImage *image);
typedef void (*MonoImageUnloadFunc) (MonoImage *image, gpointer user_data);
void
mono_install_image_unload_hook (MonoImageUnloadFunc func, gpointer user_data);
void
mono_remove_image_unload_hook (MonoImageUnloadFunc func, gpointer user_data);
void
mono_install_image_loader (const MonoImageLoader *loader);
void
mono_image_append_class_to_reflection_info_set (MonoClass *klass);
typedef struct _MonoMetadataUpdateData MonoMetadataUpdateData;
struct _MonoMetadataUpdateData {
int has_updates;
};
extern MonoMetadataUpdateData mono_metadata_update_data_private;
/* returns TRUE if there's at least one update */
static inline gboolean
mono_metadata_has_updates (void)
{
return mono_metadata_update_data_private.has_updates != 0;
}
/* components can't call the inline function directly since the private data isn't exported */
MONO_COMPONENT_API
gboolean
mono_metadata_has_updates_api (void);
void
mono_image_effective_table_slow (const MonoTableInfo **t, int idx);
gboolean
mono_metadata_update_has_modified_rows (const MonoTableInfo *t);
static inline void
mono_image_effective_table (const MonoTableInfo **t, int idx)
{
if (G_UNLIKELY (mono_metadata_has_updates ())) {
if (G_UNLIKELY (idx >= table_info_get_rows ((*t)) || mono_metadata_update_has_modified_rows (*t))) {
mono_image_effective_table_slow (t, idx);
}
}
}
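/*
 * Illustrative use of the helper above (idx and cols are assumed to be
 * in scope): pick the effective table before decoding a row, so that
 * EnC/hot-reload deltas are honored:
 *
 *   const MonoTableInfo *t = &image->tables [MONO_TABLE_TYPEDEF];
 *   mono_image_effective_table (&t, idx);
 *   mono_metadata_decode_row_raw (t, idx, cols, MONO_TYPEDEF_SIZE);
 */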
enum MonoEnCDeltaOrigin {
MONO_ENC_DELTA_API = 0,
MONO_ENC_DELTA_DBG = 1,
};
MONO_COMPONENT_API void
mono_image_load_enc_delta (int delta_origin, MonoImage *base_image, gconstpointer dmeta, uint32_t dmeta_len, gconstpointer dil, uint32_t dil_len, gconstpointer dpdb, uint32_t dpdb_len, MonoError *error);
gboolean
mono_image_load_cli_header (MonoImage *image, MonoCLIImageInfo *iinfo);
gboolean
mono_image_load_metadata (MonoImage *image, MonoCLIImageInfo *iinfo);
const char*
mono_metadata_string_heap_checked (MonoImage *meta, uint32_t table_index, MonoError *error);
const char *
mono_metadata_blob_heap_null_ok (MonoImage *meta, guint32 index);
const char*
mono_metadata_blob_heap_checked (MonoImage *meta, uint32_t table_index, MonoError *error);
gboolean
mono_metadata_decode_row_checked (const MonoImage *image, const MonoTableInfo *t, int idx, uint32_t *res, int res_size, MonoError *error);
MONO_COMPONENT_API
void
mono_metadata_decode_row_raw (const MonoTableInfo *t, int idx, uint32_t *res, int res_size);
gboolean
mono_metadata_decode_row_dynamic_checked (const MonoDynamicImage *image, const MonoDynamicTable *t, int idx, guint32 *res, int res_size, MonoError *error);
MonoType*
mono_metadata_get_shared_type (MonoType *type);
void
mono_metadata_clean_generic_classes_for_image (MonoImage *image);
gboolean
mono_metadata_table_bounds_check_slow (MonoImage *image, int table_index, int token_index);
int
mono_metadata_table_num_rows_slow (MonoImage *image, int table_index);
static inline int
mono_metadata_table_num_rows (MonoImage *image, int table_index)
{
if (G_LIKELY (!image->has_updates))
return table_info_get_rows (&image->tables [table_index]);
else
return mono_metadata_table_num_rows_slow (image, table_index);
}
/* token_index is 1-based */
static inline gboolean
mono_metadata_table_bounds_check (MonoImage *image, int table_index, int token_index)
{
	/* returns TRUE if the given 1-based token index is out of bounds for the given table */
if (G_LIKELY (token_index <= table_info_get_rows (&image->tables [table_index])))
return FALSE;
if (G_LIKELY (!image->has_updates))
return TRUE;
return mono_metadata_table_bounds_check_slow (image, table_index, token_index);
}
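/*
 * Typical (illustrative) row access built from the helpers above:
 * validate the 1-based token index first, then decode the row with a
 * 0-based index (cols/error handling is assumed to be in scope):
 *
 *   if (mono_metadata_table_bounds_check (image, MONO_TABLE_METHOD, token_index))
 *       return NULL; // out of bounds
 *   mono_metadata_decode_row_checked (image, &image->tables [MONO_TABLE_METHOD],
 *                                     token_index - 1, cols, MONO_METHOD_SIZE, error);
 */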
MONO_COMPONENT_API
const char * mono_meta_table_name (int table);
void mono_metadata_compute_table_bases (MonoImage *meta);
gboolean
mono_metadata_interfaces_from_typedef_full (MonoImage *image,
guint32 table_index,
MonoClass ***interfaces,
guint *count,
gboolean heap_alloc_result,
MonoGenericContext *context,
MonoError *error);
MONO_API MonoMethodSignature *
mono_metadata_parse_method_signature_full (MonoImage *image,
MonoGenericContainer *generic_container,
int def,
const char *ptr,
const char **rptr,
MonoError *error);
MONO_API MonoMethodHeader *
mono_metadata_parse_mh_full (MonoImage *image,
MonoGenericContainer *container,
const char *ptr,
MonoError *error);
MonoMethodSignature *mono_metadata_parse_signature_checked (MonoImage *image,
uint32_t token,
MonoError *error);
gboolean
mono_method_get_header_summary (MonoMethod *method, MonoMethodHeaderSummary *summary);
int* mono_metadata_get_param_attrs (MonoImage *m, int def, int param_count);
gboolean mono_metadata_method_has_param_attrs (MonoImage *m, int def);
guint
mono_metadata_generic_context_hash (const MonoGenericContext *context);
gboolean
mono_metadata_generic_context_equal (const MonoGenericContext *g1,
const MonoGenericContext *g2);
MonoGenericInst *
mono_metadata_parse_generic_inst (MonoImage *image,
MonoGenericContainer *container,
int count,
const char *ptr,
const char **rptr,
MonoError *error);
MONO_COMPONENT_API MonoGenericInst *
mono_metadata_get_generic_inst (int type_argc,
MonoType **type_argv);
MonoGenericInst *
mono_metadata_get_canonical_generic_inst (MonoGenericInst *candidate);
MonoGenericClass *
mono_metadata_lookup_generic_class (MonoClass *gclass,
MonoGenericInst *inst,
gboolean is_dynamic);
MonoGenericInst * mono_metadata_inflate_generic_inst (MonoGenericInst *ginst, MonoGenericContext *context, MonoError *error);
guint
mono_metadata_generic_param_hash (MonoGenericParam *p);
gboolean
mono_metadata_generic_param_equal (MonoGenericParam *p1, MonoGenericParam *p2);
void mono_dynamic_stream_reset (MonoDynamicStream* stream);
void mono_assembly_load_friends (MonoAssembly* ass);
MONO_API gint32
mono_assembly_addref (MonoAssembly *assembly);
gint32
mono_assembly_decref (MonoAssembly *assembly);
void mono_assembly_release_gc_roots (MonoAssembly *assembly);
gboolean mono_assembly_close_except_image_pools (MonoAssembly *assembly);
void mono_assembly_close_finish (MonoAssembly *assembly);
gboolean mono_public_tokens_are_equal (const unsigned char *pubt1, const unsigned char *pubt2);
void mono_config_parse_publisher_policy (const char *filename, MonoAssemblyBindingInfo *binding_info);
gboolean
mono_assembly_name_parse_full (const char *name,
MonoAssemblyName *aname,
gboolean save_public_key,
gboolean *is_version_defined,
gboolean *is_token_defined);
gboolean
mono_assembly_fill_assembly_name_full (MonoImage *image, MonoAssemblyName *aname, gboolean copyBlobs);
MONO_API guint32 mono_metadata_get_generic_param_row (MonoImage *image, guint32 token, guint32 *owner);
MonoGenericParam*
mono_metadata_create_anon_gparam (MonoImage *image, gint32 param_num, gboolean is_mvar);
void mono_unload_interface_ids (MonoBitSet *bitset);
MonoType *mono_metadata_type_dup (MonoImage *image, const MonoType *original);
MonoType *mono_metadata_type_dup_with_cmods (MonoImage *image, const MonoType *original, const MonoType *cmods_source);
MonoMethodSignature *mono_metadata_signature_dup_full (MonoImage *image,MonoMethodSignature *sig);
MonoMethodSignature *mono_metadata_signature_dup_mempool (MonoMemPool *mp, MonoMethodSignature *sig);
MonoMethodSignature *mono_metadata_signature_dup_mem_manager (MonoMemoryManager *mem_manager, MonoMethodSignature *sig);
MonoMethodSignature *mono_metadata_signature_dup_add_this (MonoImage *image, MonoMethodSignature *sig, MonoClass *klass);
MonoGenericInst *
mono_get_shared_generic_inst (MonoGenericContainer *container);
int
mono_type_stack_size_internal (MonoType *t, int *align, gboolean allow_open);
MONO_API void mono_type_get_desc (GString *res, MonoType *type, mono_bool include_namespace);
gboolean
mono_metadata_type_equal_full (MonoType *t1, MonoType *t2, gboolean signature_only);
MonoMarshalSpec *
mono_metadata_parse_marshal_spec_full (MonoImage *image, MonoImage *parent_image, const char *ptr);
guint mono_metadata_generic_inst_hash (gconstpointer data);
gboolean mono_metadata_generic_inst_equal (gconstpointer ka, gconstpointer kb);
gboolean
mono_metadata_signature_equal_no_ret (MonoMethodSignature *sig1, MonoMethodSignature *sig2);
MONO_API void
mono_metadata_field_info_with_mempool (
MonoImage *meta,
guint32 table_index,
guint32 *offset,
guint32 *rva,
MonoMarshalSpec **marshal_spec);
MonoClassField*
mono_metadata_get_corresponding_field_from_generic_type_definition (MonoClassField *field);
MonoEvent*
mono_metadata_get_corresponding_event_from_generic_type_definition (MonoEvent *event);
MonoProperty*
mono_metadata_get_corresponding_property_from_generic_type_definition (MonoProperty *property);
guint32
mono_metadata_signature_size (MonoMethodSignature *sig);
guint mono_metadata_str_hash (gconstpointer v1);
gboolean mono_image_load_pe_data (MonoImage *image);
gboolean mono_image_load_cli_data (MonoImage *image);
void mono_image_load_names (MonoImage *image);
MonoImage *mono_image_open_raw (MonoAssemblyLoadContext *alc, const char *fname, MonoImageOpenStatus *status);
MonoImage *mono_image_open_metadata_only (MonoAssemblyLoadContext *alc, const char *fname, MonoImageOpenStatus *status);
MONO_COMPONENT_API
MonoImage *mono_image_open_from_data_internal (MonoAssemblyLoadContext *alc, char *data, guint32 data_len, gboolean need_copy, MonoImageOpenStatus *status, gboolean metadata_only, const char *name, const char *filename);
MonoException *mono_get_exception_field_access_msg (const char *msg);
MonoException *mono_get_exception_method_access_msg (const char *msg);
MonoMethod* mono_method_from_method_def_or_ref (MonoImage *m, guint32 tok, MonoGenericContext *context, MonoError *error);
MonoMethod *mono_get_method_constrained_with_method (MonoImage *image, MonoMethod *method, MonoClass *constrained_class, MonoGenericContext *context, MonoError *error);
MonoMethod *mono_get_method_constrained_checked (MonoImage *image, guint32 token, MonoClass *constrained_class, MonoGenericContext *context, MonoMethod **cil_method, MonoError *error);
void mono_type_set_alignment (MonoTypeEnum type, int align);
MonoType *
mono_type_create_from_typespec_checked (MonoImage *image, guint32 type_spec, MonoError *error);
MonoMethodSignature*
mono_method_get_signature_checked (MonoMethod *method, MonoImage *image, guint32 token, MonoGenericContext *context, MonoError *error);
MONO_COMPONENT_API MonoMethod *
mono_get_method_checked (MonoImage *image, guint32 token, MonoClass *klass, MonoGenericContext *context, MonoError *error);
guint32
mono_metadata_localscope_from_methoddef (MonoImage *meta, guint32 index);
void
mono_wrapper_caches_free (MonoWrapperCaches *cache);
MonoWrapperCaches*
mono_method_get_wrapper_cache (MonoMethod *method);
MonoType*
mono_metadata_parse_type_checked (MonoImage *m, MonoGenericContainer *container, short opt_attrs, gboolean transient, const char *ptr, const char **rptr, MonoError *error);
MonoGenericContainer *
mono_get_anonymous_container_for_image (MonoImage *image, gboolean is_mvar);
void
mono_loader_register_module (const char *name, MonoDl *module);
void
mono_ginst_get_desc (GString *str, MonoGenericInst *ginst);
void
mono_loader_set_strict_assembly_name_check (gboolean enabled);
gboolean
mono_loader_get_strict_assembly_name_check (void);
MONO_COMPONENT_API gboolean
mono_type_in_image (MonoType *type, MonoImage *image);
gboolean
mono_type_is_valid_generic_argument (MonoType *type);
void
mono_metadata_get_class_guid (MonoClass* klass, uint8_t* guid, MonoError *error);
#define MONO_CLASS_IS_INTERFACE_INTERNAL(c) ((mono_class_get_flags (c) & TYPE_ATTRIBUTE_INTERFACE) || mono_type_is_generic_parameter (m_class_get_byval_arg (c)))
static inline gboolean
m_image_is_raw_data_allocated (MonoImage *image)
{
return image->storage ? image->storage->raw_data_allocated : FALSE;
}
static inline gboolean
m_image_is_fileio_used (MonoImage *image)
{
return image->storage ? image->storage->fileio_used : FALSE;
}
#ifdef HOST_WIN32
static inline gboolean
m_image_is_module_handle (MonoImage *image)
{
return image->storage ? image->storage->is_module_handle : FALSE;
}
static inline gboolean
m_image_has_entry_point (MonoImage *image)
{
return image->storage ? image->storage->has_entry_point : FALSE;
}
#endif
static inline const char *
m_image_get_name (MonoImage *image)
{
return image->name;
}
static inline const char *
m_image_get_filename (MonoImage *image)
{
return image->filename;
}
static inline const char *
m_image_get_assembly_name (MonoImage *image)
{
return image->assembly_name;
}
static inline
MonoAssemblyLoadContext *
mono_image_get_alc (MonoImage *image)
{
return image->alc;
}
static inline
MonoAssemblyLoadContext *
mono_assembly_get_alc (MonoAssembly *assm)
{
return mono_image_get_alc (assm->image);
}
static inline MonoType*
mono_signature_get_return_type_internal (MonoMethodSignature *sig)
{
return sig->ret;
}
/**
* mono_type_get_type_internal:
* \param type the \c MonoType operated on
* \returns the IL type value for \p type. This is one of the \c MonoTypeEnum
* enum members like \c MONO_TYPE_I4 or \c MONO_TYPE_STRING.
*/
static inline int
mono_type_get_type_internal (MonoType *type)
{
return type->type;
}
/**
* mono_type_get_signature:
* \param type the \c MonoType operated on
* It is only valid to call this function if \p type is a \c MONO_TYPE_FNPTR .
* \returns the \c MonoMethodSignature pointer that describes the signature
* of the function pointer \p type represents.
*/
static inline MonoMethodSignature*
mono_type_get_signature_internal (MonoType *type)
{
g_assert (type->type == MONO_TYPE_FNPTR);
return type->data.method;
}
/**
* m_type_is_byref:
* \param type the \c MonoType operated on
* \returns TRUE if \p type represents a type passed by reference,
* FALSE otherwise.
*/
static inline gboolean
m_type_is_byref (const MonoType *type)
{
return type->byref__;
}
/**
* mono_type_get_class_internal:
* \param type the \c MonoType operated on
* It is only valid to call this function if \p type is a \c MONO_TYPE_CLASS or a
* \c MONO_TYPE_VALUETYPE . For more general functionality, use \c mono_class_from_mono_type_internal,
* instead.
* \returns the \c MonoClass pointer that describes the class that \p type represents.
*/
static inline MonoClass*
mono_type_get_class_internal (MonoType *type)
{
/* FIXME: review the runtime users before adding the assert here */
return type->data.klass;
}
/**
* mono_type_get_array_type_internal:
* \param type the \c MonoType operated on
* It is only valid to call this function if \p type is a \c MONO_TYPE_ARRAY .
* \returns a \c MonoArrayType struct describing the array type that \p type
* represents. The info includes details such as rank, array element type
* and the sizes and bounds of multidimensional arrays.
*/
static inline MonoArrayType*
mono_type_get_array_type_internal (MonoType *type)
{
return type->data.array;
}
static inline int
mono_metadata_table_to_ptr_table (int table_num)
{
switch (table_num) {
case MONO_TABLE_FIELD: return MONO_TABLE_FIELD_POINTER;
case MONO_TABLE_METHOD: return MONO_TABLE_METHOD_POINTER;
case MONO_TABLE_PARAM: return MONO_TABLE_PARAM_POINTER;
case MONO_TABLE_PROPERTY: return MONO_TABLE_PROPERTY_POINTER;
case MONO_TABLE_EVENT: return MONO_TABLE_EVENT_POINTER;
default:
g_assert_not_reached ();
}
}
#endif /* __MONO_METADATA_INTERNALS_H__ */
| 1 |
dotnet/runtime | 66,211 | [mono] Remove SkipVerification support from the runtime | CAS support was removed in .NET Core. This allows us removing a bunch of code that is unused, e.g. the dependency on libiconv. | akoeplinger | 2022-03-04T19:47:04Z | 2022-03-06T13:44:33Z | b463b1630dbf1be5b013208a9fa73e1ecd6c774c | be629f49a350d526de2c65981294734cee420b90 | [mono] Remove SkipVerification support from the runtime. CAS support was removed in .NET Core. This allows us removing a bunch of code that is unused, e.g. the dependency on libiconv. | ./src/mono/mono/mini/CMakeLists.txt | project(mini)
include(FindPython3)
include_directories(
${PROJECT_BINARY_DIR}/
${PROJECT_BINARY_DIR}/../..
${PROJECT_BINARY_DIR}/../../mono/eglib
${CMAKE_CURRENT_SOURCE_DIR}/../..
${PROJECT_SOURCE_DIR}/../
${PROJECT_SOURCE_DIR}/../eglib
${PROJECT_SOURCE_DIR}/../sgen)
if(HOST_DARWIN)
set(OS_LIBS "-framework CoreFoundation" "-framework Foundation" "-lcompression")
if(CMAKE_SYSTEM_VARIANT STREQUAL "MacCatalyst")
set(OS_LIBS ${OS_LIBS} "-lobjc" "-lc++")
endif()
elseif(HOST_IOS)
set(OS_LIBS "-framework CoreFoundation" "-lcompression" "-lobjc" "-lc++")
elseif(HOST_ANDROID)
set(OS_LIBS m dl log)
elseif(HOST_LINUX)
set(OS_LIBS pthread m dl)
elseif(HOST_WIN32)
set(OS_LIBS bcrypt.lib Mswsock.lib ws2_32.lib psapi.lib version.lib advapi32.lib winmm.lib kernel32.lib)
elseif(HOST_SOLARIS)
set(OS_LIBS socket pthread m ${CMAKE_DL_LIBS})
elseif(HOST_FREEBSD)
set(OS_LIBS pthread m)
endif()
#
# SUBDIRS
#
include(../eglib/CMakeLists.txt)
include(../utils/CMakeLists.txt)
include(../metadata/CMakeLists.txt)
include(../sgen/CMakeLists.txt)
include(../component/CMakeLists.txt)
if(HOST_WIN32)
# /OPT:ICF merges identical functions, breaking mono_lookup_icall_symbol ()
add_link_options(/OPT:NOICF)
endif()
# ICU
if(HAVE_SYS_ICU AND NOT HOST_WASI)
if(STATIC_ICU)
set(pal_icushim_sources_base
pal_icushim_static.c)
add_definitions(-DSTATIC_ICU=1)
else()
set(pal_icushim_sources_base
pal_icushim.c)
endif()
set(icu_shim_sources_base
pal_calendarData.c
pal_casing.c
pal_collation.c
pal_idna.c
pal_locale.c
pal_localeNumberData.c
pal_localeStringData.c
pal_normalization.c
pal_timeZoneInfo.c
entrypoints.c
${pal_icushim_sources_base})
addprefix(icu_shim_sources "${ICU_SHIM_PATH}" "${icu_shim_sources_base}")
set_source_files_properties(${icu_shim_sources} PROPERTIES COMPILE_DEFINITIONS OSX_ICU_LIBRARY_PATH="${OSX_ICU_LIBRARY_PATH}")
set_source_files_properties(${icu_shim_sources} PROPERTIES COMPILE_FLAGS "-I\"${ICU_INCLUDEDIR}\" -I\"${CLR_SRC_NATIVE_DIR}/libs/System.Globalization.Native/\" -I\"${CLR_SRC_NATIVE_DIR}/libs/Common/\" ${ICU_FLAGS}")
if(TARGET_WIN32)
set_source_files_properties(${icu_shim_sources} PROPERTIES LANGUAGE CXX)
endif()
if(ICU_LIBDIR)
set(ICU_LDFLAGS "-L${ICU_LIBDIR}")
endif()
endif()
#
# MINI
#
set(mini_common_sources
mini.c
mini-runtime.c
seq-points.c
seq-points.h
ir-emit.h
method-to-ir.c
cfgdump.h
cfgdump.c
calls.c
decompose.c
mini.h
optflags-def.h
jit-icalls.h
jit-icalls.c
trace.c
trace.h
patch-info.h
mini-ops.h
mini-arch.h
dominators.c
cfold.c
regalloc.h
helpers.c
liveness.c
ssa.c
abcremoval.c
abcremoval.h
local-propagation.c
driver.c
debug-mini.c
linear-scan.c
aot-compiler.h
aot-compiler.c
aot-runtime.c
graph.c
mini-codegen.c
mini-exceptions.c
mini-trampolines.c
branch-opts.c
mini-generic-sharing.c
simd-methods.h
simd-intrinsics.c
mini-unwind.h
unwind.c
image-writer.h
image-writer.c
dwarfwriter.h
dwarfwriter.c
mini-gc.h
mini-gc.c
mini-llvm.h
mini-llvm-cpp.h
llvm-jit.h
alias-analysis.c
mini-cross-helpers.c
arch-stubs.c
llvm-runtime.h
llvm-intrinsics.h
llvm-intrinsics-types.h
type-checking.c
lldb.h
lldb.c
memory-access.c
intrinsics.c
mini-profiler.c
interp-stubs.c
aot-runtime.h
ee.h
mini-runtime.h
llvmonly-runtime.h
llvmonly-runtime.c
monovm.h
monovm.c)
set(debugger_sources
debugger-agent-external.h
debugger-agent-external.c
)
set(amd64_sources
mini-amd64.c
mini-amd64.h
exceptions-amd64.c
tramp-amd64.c
mini-amd64-gsharedvt.c
mini-amd64-gsharedvt.h
tramp-amd64-gsharedvt.c
cpu-amd64.h)
set(x86_sources
mini-x86.c
mini-x86.h
exceptions-x86.c
tramp-x86.c
mini-x86-gsharedvt.c
tramp-x86-gsharedvt.c
cpu-x86.h)
set(arm64_sources
mini-arm64.c
mini-arm64.h
exceptions-arm64.c
tramp-arm64.c
mini-arm64-gsharedvt.c
mini-arm64-gsharedvt.h
tramp-arm64-gsharedvt.c
cpu-arm64.h)
set(arm_sources
mini-arm.c
mini-arm.h
exceptions-arm.c
tramp-arm.c
mini-arm-gsharedvt.c
tramp-arm-gsharedvt.c
cpu-arm.h)
set(s390x_sources
mini-s390x.c
mini-s390x.h
exceptions-s390x.c
tramp-s390x.c
cpu-s390x.h)
set(wasm_sources
mini-wasm.c
tramp-wasm.c
exceptions-wasm.c
aot-runtime-wasm.c
wasm_m2n_invoke.g.h
cpu-wasm.h)
if(TARGET_AMD64)
set(arch_sources ${amd64_sources})
elseif(TARGET_X86)
set(arch_sources ${x86_sources})
elseif(TARGET_ARM64)
set(arch_sources ${arm64_sources})
elseif(TARGET_ARM)
set(arch_sources ${arm_sources})
elseif(TARGET_S390X)
set(arch_sources ${s390x_sources})
elseif(TARGET_WASM)
set(arch_sources ${wasm_sources})
endif()
set(darwin_sources
mini-darwin.c)
set(windows_sources
mini-windows.c
mini-windows-tls-callback.c
mini-windows.h
)
set(posix_sources
mini-posix.c)
if(HOST_DARWIN)
set(os_sources "${darwin_sources};${posix_sources}")
elseif(HOST_LINUX OR HOST_SOLARIS OR HOST_FREEBSD)
set(os_sources "${posix_sources}")
elseif(HOST_WIN32)
set(os_sources "${windows_sources}")
endif()
set(interp_sources
interp/interp.h
interp/interp-internals.h
interp/interp.c
interp/interp-intrins.h
interp/interp-intrins.c
interp/mintops.h
interp/mintops.c
interp/transform.c)
set(interp_stub_sources
interp-stubs.c)
if(NOT DISABLE_INTERPRETER)
set(mini_interp_sources ${interp_sources})
else()
set(mini_interp_sources ${interp_stub_sources})
endif()
if(ENABLE_INTERP_LIB)
add_library(mono-ee-interp STATIC "${interp_sources}")
target_link_libraries(mono-ee-interp monoapi)
install(TARGETS mono-ee-interp LIBRARY)
endif()
if(ENABLE_LLVM)
set(llvm_sources
mini-llvm.c
mini-llvm-cpp.cpp
llvm-jit.cpp)
else()
set(llvm_sources)
endif()
if(ENABLE_LLVM)
set(llvm_runtime_sources
llvm-runtime.cpp)
elseif(ENABLE_LLVM_RUNTIME)
set(llvm_runtime_sources
llvm-runtime.cpp)
else()
set(llvm_runtime_sources)
endif()
set(mini_sources "${CMAKE_CURRENT_BINARY_DIR}/buildver-sgen.h;main-core.c;${mini_common_sources};${arch_sources};${os_sources};${mini_interp_sources};${llvm_sources};${debugger_sources};${llvm_runtime_sources}")
if(LLVM_INCLUDEDIR)
include_directories(BEFORE SYSTEM "${LLVM_INCLUDEDIR}")
endif()
if(HOST_WIN32)
set(mini_sources "${mini_sources};${VERSION_FILE_RC_PATH}") # this is generated by GenerateNativeVersionFile in Arcade
elseif(NOT HOST_BROWSER)
set(mini_sources "${mini_sources};${VERSION_FILE_PATH}") # this is generated by GenerateNativeVersionFile in Arcade
endif()
set(monosgen-sources "${metadata_sources};${utils_sources};${sgen_sources};${icu_shim_sources};${mini_sources};${ZLIB_SOURCES}")
add_library(monosgen-objects OBJECT "${monosgen-sources}")
target_link_libraries (monosgen-objects PRIVATE monoapi)
add_library(monosgen-static STATIC $<TARGET_OBJECTS:monosgen-objects>;$<TARGET_OBJECTS:eglib_objects>)
target_link_libraries (monosgen-static PRIVATE monoapi)
set_target_properties(monosgen-static PROPERTIES OUTPUT_NAME ${MONO_LIB_NAME})
if(DISABLE_COMPONENTS)
# add component fallback stubs into static mono library when components have been disabled.
target_sources(monosgen-static PRIVATE "${mono-components-stub-objects}")
endif()
if(NOT DISABLE_LIBS)
install(TARGETS monosgen-static LIBRARY)
endif()
if(NOT DISABLE_SHARED_LIBS)
if(HOST_WIN32)
add_library(monosgen-shared SHARED "mini-windows-dllmain.c;${monosgen-sources}")
target_compile_definitions(monosgen-shared PRIVATE -DMONO_DLL_EXPORT)
else()
add_library(monosgen-shared SHARED $<TARGET_OBJECTS:monosgen-objects>)
target_compile_definitions(monosgen-objects PRIVATE -DMONO_DLL_EXPORT)
endif()
target_sources(monosgen-shared PRIVATE $<TARGET_OBJECTS:eglib_objects>)
set_target_properties(monosgen-shared PROPERTIES OUTPUT_NAME ${MONO_SHARED_LIB_NAME})
target_link_libraries (monosgen-shared PRIVATE monoapi)
target_include_directories (monosgen-shared PRIVATE monoapi)
if(TARGET_WIN32)
# on Windows the import library for the shared mono library will have the same name as the static library,
# to avoid a conflict we rename the import library with the .import.lib suffix
set_target_properties(monosgen-shared PROPERTIES IMPORT_SUFFIX ".import.lib")
endif()
target_link_libraries(monosgen-shared PRIVATE ${OS_LIBS} ${ICONV_LIB} ${LLVM_LIBS} ${ICU_LIBS} ${Z_LIBS})
if(ICU_LDFLAGS)
set_property(TARGET monosgen-shared APPEND_STRING PROPERTY LINK_FLAGS " ${ICU_LDFLAGS}")
endif()
if(NOT TARGET_WASM AND STATIC_ICU)
set_property(TARGET monosgen-shared APPEND_STRING PROPERTY LINKER_LANGUAGE CXX)
endif ()
if(TARGET_DARWIN)
set_property(TARGET monosgen-shared APPEND_STRING PROPERTY LINK_FLAGS " -Wl,-compatibility_version -Wl,2.0 -Wl,-current_version -Wl,2.0")
endif()
if(NOT DISABLE_COMPONENTS AND STATIC_COMPONENTS AND NOT DISABLE_LINK_STATIC_COMPONENTS)
# if components are built statically, but we're building a shared lib mono,
# link them into the library
target_sources(monosgen-shared PRIVATE "${mono-components-objects}")
elseif(NOT DISABLE_COMPONENTS AND STATIC_COMPONENTS AND DISABLE_LINK_STATIC_COMPONENTS)
# if components are built statically, we're building a shared lib mono, but we shouldn't link components
# link the fallback stubs into the runtime
target_sources(monosgen-shared PRIVATE "${mono-components-stub-objects}")
elseif(NOT DISABLE_COMPONENTS AND NOT STATIC_COMPONENTS)
# if components are built dynamically, link the fallback stubs into the runtime
target_sources(monosgen-shared PRIVATE "${mono-components-stub-objects}")
elseif(DISABLE_COMPONENTS)
# if components are disabled, link the fallback stubs into the runtime
target_sources(monosgen-shared PRIVATE "${mono-components-stub-objects}")
endif()
install(TARGETS monosgen-shared LIBRARY)
if(HOST_WIN32 AND TARGET_AMD64)
add_library(monosgen-shared-dac SHARED "mini-windows-dlldac.c")
target_link_libraries(monosgen-shared-dac monoapi)
set_target_properties(monosgen-shared-dac PROPERTIES OUTPUT_NAME ${MONO_SHARED_LIB_NAME}-dac)
endif()
if(BUILD_DARWIN_FRAMEWORKS)
if(TARGET_DARWIN)
# In cmake, you cannot have list entries which contain a space or semicolon - those are considered
# record separators (i.e. list(APPEND foo "a" "b;c" "d e") is a five entry list of values
# a, b, c, d and e.
# So, in order to treat the components lists as single list entries, swap out the ; character
# for a temporary replacement character, allowing the full lists to be treated as single entries
string(REPLACE ";" "*" mono-components-objects-nowhitespace "${mono-components-objects}")
string(REPLACE ";" "*" mono-components-stub-objects-nowhitespace "${mono-components-stub-objects}")
list(APPEND FrameworkConfig Mono.debug Mono.release)
list(APPEND ComponentsObjects "${mono-components-objects-nowhitespace}" "${mono-components-stub-objects-nowhitespace}")
foreach(frameworkconfig componentsobjects IN ZIP_LISTS FrameworkConfig ComponentsObjects)
if("${componentsobjects}" STREQUAL "")
#components list is empty, use stubs instead
set(componentsobjects "${mono-components-stub-objects-nowhitespace}")
endif()
add_library(${frameworkconfig} SHARED $<TARGET_OBJECTS:monosgen-objects>)
target_compile_definitions(${frameworkconfig} PRIVATE -DMONO_DLL_EXPORT)
target_sources(${frameworkconfig} PRIVATE $<TARGET_OBJECTS:eglib_objects>)
target_link_libraries(${frameworkconfig} PRIVATE ${OS_LIBS} ${ICONV_LIB} ${LLVM_LIBS} ${ICU_LIBS} ${Z_LIBS})
if(ICU_LDFLAGS)
set_property(TARGET ${frameworkconfig} APPEND_STRING PROPERTY LINK_FLAGS " ${ICU_LDFLAGS}")
endif()
if(STATIC_ICU)
set_property(TARGET ${frameworkconfig} APPEND_STRING PROPERTY LINKER_LANGUAGE CXX)
endif ()
set_property(TARGET ${frameworkconfig} APPEND_STRING PROPERTY LINK_FLAGS " -Wl,-compatibility_version -Wl,2.0 -Wl,-current_version -Wl,2.0")
string(REPLACE "*" ";" componentsobjects-whitespace "${componentsobjects}")
target_sources(${frameworkconfig} PRIVATE "${componentsobjects-whitespace}")
set_target_properties(${frameworkconfig} PROPERTIES
FRAMEWORK TRUE
FRAMEWORK_VERSION C
MACOSX_FRAMEWORK_IDENTIFIER net.dot.mono-framework
)
install(TARGETS ${frameworkconfig}
FRAMEWORK DESTINATION ${CMAKE_INSTALL_LIBDIR}
)
endforeach()
endif()
endif()
endif()
find_package(Python3 COMPONENTS Interpreter)
# don't set build_date, it creates non-deterministic builds
file(GENERATE OUTPUT ${CMAKE_CURRENT_BINARY_DIR}/buildver-sgen.h CONTENT [=[const char *build_date = "";]=])
add_custom_command(
OUTPUT ${CMAKE_CURRENT_BINARY_DIR}/cpu-amd64.h
COMMAND ${Python3_EXECUTABLE} ${CMAKE_CURRENT_SOURCE_DIR}/genmdesc.py TARGET_AMD64 ${CMAKE_CURRENT_SOURCE_DIR} cpu-amd64.h amd64_desc ${CMAKE_CURRENT_SOURCE_DIR}/cpu-amd64.md
DEPENDS ${CMAKE_CURRENT_SOURCE_DIR}/genmdesc.py mini-ops.h
VERBATIM
)
add_custom_command(
OUTPUT ${CMAKE_CURRENT_BINARY_DIR}/cpu-x86.h
COMMAND ${Python3_EXECUTABLE} ${CMAKE_CURRENT_SOURCE_DIR}/genmdesc.py TARGET_X86 ${CMAKE_CURRENT_SOURCE_DIR} cpu-x86.h x86_desc ${CMAKE_CURRENT_SOURCE_DIR}/cpu-x86.md
VERBATIM
)
add_custom_command(
OUTPUT ${CMAKE_CURRENT_BINARY_DIR}/cpu-arm64.h
COMMAND ${Python3_EXECUTABLE} ${CMAKE_CURRENT_SOURCE_DIR}/genmdesc.py TARGET_ARM64 ${CMAKE_CURRENT_SOURCE_DIR} cpu-arm64.h arm64_cpu_desc ${CMAKE_CURRENT_SOURCE_DIR}/cpu-arm64.md
VERBATIM
)
add_custom_command(
OUTPUT ${CMAKE_CURRENT_BINARY_DIR}/cpu-arm.h
COMMAND ${Python3_EXECUTABLE} ${CMAKE_CURRENT_SOURCE_DIR}/genmdesc.py TARGET_ARM ${CMAKE_CURRENT_SOURCE_DIR} cpu-arm.h arm_cpu_desc ${CMAKE_CURRENT_SOURCE_DIR}/cpu-arm.md
VERBATIM
)
add_custom_command(
OUTPUT ${CMAKE_CURRENT_BINARY_DIR}/cpu-s390x.h
COMMAND ${Python3_EXECUTABLE} ${CMAKE_CURRENT_SOURCE_DIR}/genmdesc.py TARGET_S390X ${CMAKE_CURRENT_SOURCE_DIR} cpu-s390x.h s390x_cpu_desc ${CMAKE_CURRENT_SOURCE_DIR}/cpu-s390x.md
VERBATIM
)
add_custom_command(
OUTPUT ${CMAKE_CURRENT_BINARY_DIR}/cpu-wasm.h
COMMAND ${Python3_EXECUTABLE} ${CMAKE_CURRENT_SOURCE_DIR}/genmdesc.py TARGET_WASM ${CMAKE_CURRENT_SOURCE_DIR} cpu-wasm.h wasm_desc ${CMAKE_CURRENT_SOURCE_DIR}/cpu-wasm.md
VERBATIM
)
if(NOT DISABLE_EXECUTABLES)
set(sgen_sources "main-sgen.c")
if(HOST_WIN32)
set(sgen_sources "${sgen_sources};${VERSION_FILE_RC_PATH}")
endif()
add_executable(mono-sgen "${sgen_sources}")
if(MONO_CROSS_COMPILE_EXECUTABLE_NAME)
set_target_properties(mono-sgen PROPERTIES OUTPUT_NAME mono-aot-cross)
endif()
target_link_libraries(mono-sgen PRIVATE monoapi monosgen-static ${OS_LIBS} ${ICONV_LIB} ${LLVM_LIBS} ${ICU_LIBS} ${Z_LIBS})
if(NOT DISABLE_COMPONENTS AND STATIC_COMPONENTS AND NOT DISABLE_LINK_STATIC_COMPONENTS)
# if components are built statically, link them into runtime.
target_sources(mono-sgen PRIVATE "${mono-components-objects}")
elseif(NOT DISABLE_COMPONENTS AND STATIC_COMPONENTS AND DISABLE_LINK_STATIC_COMPONENTS)
# if components are built statically, but we shouldn't link components
# link the fallback stubs into the runtime
target_sources(mono-sgen PRIVATE "${mono-components-stub-objects}")
elseif(NOT DISABLE_COMPONENTS AND NOT STATIC_COMPONENTS)
# if components are built dynamically, link the fallback stubs into the runtime
target_sources(mono-sgen PRIVATE "${mono-components-stub-objects}")
elseif(DISABLE_COMPONENTS)
# if components are disabled, link the fallback stubs into the runtime
# fallback stubs already provided in monosgen-static when components are disabled
endif()
if(ICU_LDFLAGS)
set_property(TARGET mono-sgen APPEND_STRING PROPERTY LINK_FLAGS " ${ICU_LDFLAGS}")
endif()
install(TARGETS mono-sgen RUNTIME)
if(HOST_WIN32)
install(FILES $<TARGET_PDB_FILE:mono-sgen> DESTINATION bin OPTIONAL)
endif()
endif()
| project(mini)
include(FindPython3)
include_directories(
${PROJECT_BINARY_DIR}/
${PROJECT_BINARY_DIR}/../..
${PROJECT_BINARY_DIR}/../../mono/eglib
${CMAKE_CURRENT_SOURCE_DIR}/../..
${PROJECT_SOURCE_DIR}/../
${PROJECT_SOURCE_DIR}/../eglib
${PROJECT_SOURCE_DIR}/../sgen)
if(HOST_DARWIN)
set(OS_LIBS "-framework CoreFoundation" "-framework Foundation" "-lcompression")
if(CMAKE_SYSTEM_VARIANT STREQUAL "MacCatalyst")
set(OS_LIBS ${OS_LIBS} "-lobjc" "-lc++")
endif()
elseif(HOST_IOS)
set(OS_LIBS "-framework CoreFoundation" "-lcompression" "-lobjc" "-lc++")
elseif(HOST_ANDROID)
set(OS_LIBS m dl log)
elseif(HOST_LINUX)
set(OS_LIBS pthread m dl)
elseif(HOST_WIN32)
set(OS_LIBS bcrypt.lib Mswsock.lib ws2_32.lib psapi.lib version.lib advapi32.lib winmm.lib kernel32.lib)
elseif(HOST_SOLARIS)
set(OS_LIBS socket pthread m ${CMAKE_DL_LIBS})
elseif(HOST_FREEBSD)
set(OS_LIBS pthread m)
endif()
#
# SUBDIRS
#
include(../eglib/CMakeLists.txt)
include(../utils/CMakeLists.txt)
include(../metadata/CMakeLists.txt)
include(../sgen/CMakeLists.txt)
include(../component/CMakeLists.txt)
if(HOST_WIN32)
# /OPT:ICF merges identical functions, breaking mono_lookup_icall_symbol ()
add_link_options(/OPT:NOICF)
endif()
# ICU
if(HAVE_SYS_ICU AND NOT HOST_WASI)
if(STATIC_ICU)
set(pal_icushim_sources_base
pal_icushim_static.c)
add_definitions(-DSTATIC_ICU=1)
else()
set(pal_icushim_sources_base
pal_icushim.c)
endif()
set(icu_shim_sources_base
pal_calendarData.c
pal_casing.c
pal_collation.c
pal_idna.c
pal_locale.c
pal_localeNumberData.c
pal_localeStringData.c
pal_normalization.c
pal_timeZoneInfo.c
entrypoints.c
${pal_icushim_sources_base})
addprefix(icu_shim_sources "${ICU_SHIM_PATH}" "${icu_shim_sources_base}")
set_source_files_properties(${icu_shim_sources} PROPERTIES COMPILE_DEFINITIONS OSX_ICU_LIBRARY_PATH="${OSX_ICU_LIBRARY_PATH}")
set_source_files_properties(${icu_shim_sources} PROPERTIES COMPILE_FLAGS "-I\"${ICU_INCLUDEDIR}\" -I\"${CLR_SRC_NATIVE_DIR}/libs/System.Globalization.Native/\" -I\"${CLR_SRC_NATIVE_DIR}/libs/Common/\" ${ICU_FLAGS}")
if(TARGET_WIN32)
set_source_files_properties(${icu_shim_sources} PROPERTIES LANGUAGE CXX)
endif()
if(ICU_LIBDIR)
set(ICU_LDFLAGS "-L${ICU_LIBDIR}")
endif()
endif()
#
# MINI
#
set(mini_common_sources
mini.c
mini-runtime.c
seq-points.c
seq-points.h
ir-emit.h
method-to-ir.c
cfgdump.h
cfgdump.c
calls.c
decompose.c
mini.h
optflags-def.h
jit-icalls.h
jit-icalls.c
trace.c
trace.h
patch-info.h
mini-ops.h
mini-arch.h
dominators.c
cfold.c
regalloc.h
helpers.c
liveness.c
ssa.c
abcremoval.c
abcremoval.h
local-propagation.c
driver.c
debug-mini.c
linear-scan.c
aot-compiler.h
aot-compiler.c
aot-runtime.c
graph.c
mini-codegen.c
mini-exceptions.c
mini-trampolines.c
branch-opts.c
mini-generic-sharing.c
simd-methods.h
simd-intrinsics.c
mini-unwind.h
unwind.c
image-writer.h
image-writer.c
dwarfwriter.h
dwarfwriter.c
mini-gc.h
mini-gc.c
mini-llvm.h
mini-llvm-cpp.h
llvm-jit.h
alias-analysis.c
mini-cross-helpers.c
arch-stubs.c
llvm-runtime.h
llvm-intrinsics.h
llvm-intrinsics-types.h
type-checking.c
lldb.h
lldb.c
memory-access.c
intrinsics.c
mini-profiler.c
interp-stubs.c
aot-runtime.h
ee.h
mini-runtime.h
llvmonly-runtime.h
llvmonly-runtime.c
monovm.h
monovm.c)
set(debugger_sources
debugger-agent-external.h
debugger-agent-external.c
)
set(amd64_sources
mini-amd64.c
mini-amd64.h
exceptions-amd64.c
tramp-amd64.c
mini-amd64-gsharedvt.c
mini-amd64-gsharedvt.h
tramp-amd64-gsharedvt.c
cpu-amd64.h)
set(x86_sources
mini-x86.c
mini-x86.h
exceptions-x86.c
tramp-x86.c
mini-x86-gsharedvt.c
tramp-x86-gsharedvt.c
cpu-x86.h)
set(arm64_sources
mini-arm64.c
mini-arm64.h
exceptions-arm64.c
tramp-arm64.c
mini-arm64-gsharedvt.c
mini-arm64-gsharedvt.h
tramp-arm64-gsharedvt.c
cpu-arm64.h)
set(arm_sources
mini-arm.c
mini-arm.h
exceptions-arm.c
tramp-arm.c
mini-arm-gsharedvt.c
tramp-arm-gsharedvt.c
cpu-arm.h)
set(s390x_sources
mini-s390x.c
mini-s390x.h
exceptions-s390x.c
tramp-s390x.c
cpu-s390x.h)
set(wasm_sources
mini-wasm.c
tramp-wasm.c
exceptions-wasm.c
aot-runtime-wasm.c
wasm_m2n_invoke.g.h
cpu-wasm.h)
if(TARGET_AMD64)
set(arch_sources ${amd64_sources})
elseif(TARGET_X86)
set(arch_sources ${x86_sources})
elseif(TARGET_ARM64)
set(arch_sources ${arm64_sources})
elseif(TARGET_ARM)
set(arch_sources ${arm_sources})
elseif(TARGET_S390X)
set(arch_sources ${s390x_sources})
elseif(TARGET_WASM)
set(arch_sources ${wasm_sources})
endif()
set(darwin_sources
mini-darwin.c)
set(windows_sources
mini-windows.c
mini-windows-tls-callback.c
mini-windows.h
)
set(posix_sources
mini-posix.c)
if(HOST_DARWIN)
set(os_sources "${darwin_sources};${posix_sources}")
elseif(HOST_LINUX OR HOST_SOLARIS OR HOST_FREEBSD)
set(os_sources "${posix_sources}")
elseif(HOST_WIN32)
set(os_sources "${windows_sources}")
endif()
set(interp_sources
interp/interp.h
interp/interp-internals.h
interp/interp.c
interp/interp-intrins.h
interp/interp-intrins.c
interp/mintops.h
interp/mintops.c
interp/transform.c)
set(interp_stub_sources
interp-stubs.c)
if(NOT DISABLE_INTERPRETER)
set(mini_interp_sources ${interp_sources})
else()
set(mini_interp_sources ${interp_stub_sources})
endif()
if(ENABLE_INTERP_LIB)
add_library(mono-ee-interp STATIC "${interp_sources}")
target_link_libraries(mono-ee-interp monoapi)
install(TARGETS mono-ee-interp LIBRARY)
endif()
if(ENABLE_LLVM)
set(llvm_sources
mini-llvm.c
mini-llvm-cpp.cpp
llvm-jit.cpp)
else()
set(llvm_sources)
endif()
if(ENABLE_LLVM)
set(llvm_runtime_sources
llvm-runtime.cpp)
elseif(ENABLE_LLVM_RUNTIME)
set(llvm_runtime_sources
llvm-runtime.cpp)
else()
set(llvm_runtime_sources)
endif()
set(mini_sources "${CMAKE_CURRENT_BINARY_DIR}/buildver-sgen.h;main-core.c;${mini_common_sources};${arch_sources};${os_sources};${mini_interp_sources};${llvm_sources};${debugger_sources};${llvm_runtime_sources}")
if(LLVM_INCLUDEDIR)
include_directories(BEFORE SYSTEM "${LLVM_INCLUDEDIR}")
endif()
if(HOST_WIN32)
set(mini_sources "${mini_sources};${VERSION_FILE_RC_PATH}") # this is generated by GenerateNativeVersionFile in Arcade
elseif(NOT HOST_BROWSER)
set(mini_sources "${mini_sources};${VERSION_FILE_PATH}") # this is generated by GenerateNativeVersionFile in Arcade
endif()
set(monosgen-sources "${metadata_sources};${utils_sources};${sgen_sources};${icu_shim_sources};${mini_sources};${ZLIB_SOURCES}")
add_library(monosgen-objects OBJECT "${monosgen-sources}")
target_link_libraries (monosgen-objects PRIVATE monoapi)
add_library(monosgen-static STATIC $<TARGET_OBJECTS:monosgen-objects>;$<TARGET_OBJECTS:eglib_objects>)
target_link_libraries (monosgen-static PRIVATE monoapi)
set_target_properties(monosgen-static PROPERTIES OUTPUT_NAME ${MONO_LIB_NAME})
if(DISABLE_COMPONENTS)
# add component fallback stubs into static mono library when components have been disabled.
target_sources(monosgen-static PRIVATE "${mono-components-stub-objects}")
endif()
if(NOT DISABLE_LIBS)
install(TARGETS monosgen-static LIBRARY)
endif()
if(NOT DISABLE_SHARED_LIBS)
if(HOST_WIN32)
add_library(monosgen-shared SHARED "mini-windows-dllmain.c;${monosgen-sources}")
target_compile_definitions(monosgen-shared PRIVATE -DMONO_DLL_EXPORT)
else()
add_library(monosgen-shared SHARED $<TARGET_OBJECTS:monosgen-objects>)
target_compile_definitions(monosgen-objects PRIVATE -DMONO_DLL_EXPORT)
endif()
target_sources(monosgen-shared PRIVATE $<TARGET_OBJECTS:eglib_objects>)
set_target_properties(monosgen-shared PROPERTIES OUTPUT_NAME ${MONO_SHARED_LIB_NAME})
target_link_libraries (monosgen-shared PRIVATE monoapi)
target_include_directories (monosgen-shared PRIVATE monoapi)
if(TARGET_WIN32)
# on Windows the import library for the shared mono library will have the same name as the static library,
# to avoid a conflict we rename the import library with the .import.lib suffix
set_target_properties(monosgen-shared PROPERTIES IMPORT_SUFFIX ".import.lib")
endif()
target_link_libraries(monosgen-shared PRIVATE ${OS_LIBS} ${LLVM_LIBS} ${ICU_LIBS} ${Z_LIBS})
if(ICU_LDFLAGS)
set_property(TARGET monosgen-shared APPEND_STRING PROPERTY LINK_FLAGS " ${ICU_LDFLAGS}")
endif()
if(NOT TARGET_WASM AND STATIC_ICU)
set_property(TARGET monosgen-shared APPEND_STRING PROPERTY LINKER_LANGUAGE CXX)
endif ()
if(TARGET_DARWIN)
set_property(TARGET monosgen-shared APPEND_STRING PROPERTY LINK_FLAGS " -Wl,-compatibility_version -Wl,2.0 -Wl,-current_version -Wl,2.0")
endif()
if(NOT DISABLE_COMPONENTS AND STATIC_COMPONENTS AND NOT DISABLE_LINK_STATIC_COMPONENTS)
# if components are built statically, but we're building a shared lib mono,
# link them into the library
target_sources(monosgen-shared PRIVATE "${mono-components-objects}")
elseif(NOT DISABLE_COMPONENTS AND STATIC_COMPONENTS AND DISABLE_LINK_STATIC_COMPONENTS)
# if components are built statically, we're building a shared lib mono, but we shouldn't link components
# link the fallback stubs into the runtime
target_sources(monosgen-shared PRIVATE "${mono-components-stub-objects}")
elseif(NOT DISABLE_COMPONENTS AND NOT STATIC_COMPONENTS)
# if components are built dynamically, link the fallback stubs into the runtime
target_sources(monosgen-shared PRIVATE "${mono-components-stub-objects}")
elseif(DISABLE_COMPONENTS)
# if components are disabled, link the fallback stubs into the runtime
target_sources(monosgen-shared PRIVATE "${mono-components-stub-objects}")
endif()
install(TARGETS monosgen-shared LIBRARY)
if(HOST_WIN32 AND TARGET_AMD64)
add_library(monosgen-shared-dac SHARED "mini-windows-dlldac.c")
target_link_libraries(monosgen-shared-dac monoapi)
set_target_properties(monosgen-shared-dac PROPERTIES OUTPUT_NAME ${MONO_SHARED_LIB_NAME}-dac)
endif()
if(BUILD_DARWIN_FRAMEWORKS)
if(TARGET_DARWIN)
# In cmake, you cannot have list entries which contain a space or semicolon - those are considered
# record separators (i.e. list(APPEND foo "a" "b;c" "d e") is a five entry list of values
# a, b, c, d and e.
# So, in order to treat the components lists as single list entries, swap out the ; character
# for a temporary replacement character, allowing the full lists to be treated as single entries
string(REPLACE ";" "*" mono-components-objects-nowhitespace "${mono-components-objects}")
string(REPLACE ";" "*" mono-components-stub-objects-nowhitespace "${mono-components-stub-objects}")
list(APPEND FrameworkConfig Mono.debug Mono.release)
list(APPEND ComponentsObjects "${mono-components-objects-nowhitespace}" "${mono-components-stub-objects-nowhitespace}")
foreach(frameworkconfig componentsobjects IN ZIP_LISTS FrameworkConfig ComponentsObjects)
if("${componentsobjects}" STREQUAL "")
#components list is empty, use stubs instead
set(componentsobjects "${mono-components-stub-objects-nowhitespace}")
endif()
add_library(${frameworkconfig} SHARED $<TARGET_OBJECTS:monosgen-objects>)
target_compile_definitions(${frameworkconfig} PRIVATE -DMONO_DLL_EXPORT)
target_sources(${frameworkconfig} PRIVATE $<TARGET_OBJECTS:eglib_objects>)
target_link_libraries(${frameworkconfig} PRIVATE ${OS_LIBS} ${LLVM_LIBS} ${ICU_LIBS} ${Z_LIBS})
if(ICU_LDFLAGS)
set_property(TARGET ${frameworkconfig} APPEND_STRING PROPERTY LINK_FLAGS " ${ICU_LDFLAGS}")
endif()
if(STATIC_ICU)
set_property(TARGET ${frameworkconfig} APPEND_STRING PROPERTY LINKER_LANGUAGE CXX)
endif ()
set_property(TARGET ${frameworkconfig} APPEND_STRING PROPERTY LINK_FLAGS " -Wl,-compatibility_version -Wl,2.0 -Wl,-current_version -Wl,2.0")
string(REPLACE "*" ";" componentsobjects-whitespace "${componentsobjects}")
target_sources(${frameworkconfig} PRIVATE "${componentsobjects-whitespace}")
set_target_properties(${frameworkconfig} PROPERTIES
FRAMEWORK TRUE
FRAMEWORK_VERSION C
MACOSX_FRAMEWORK_IDENTIFIER net.dot.mono-framework
)
install(TARGETS ${frameworkconfig}
FRAMEWORK DESTINATION ${CMAKE_INSTALL_LIBDIR}
)
endforeach()
endif()
endif()
endif()
find_package(Python3 COMPONENTS Interpreter)
# don't set build_date, it creates non-deterministic builds
file(GENERATE OUTPUT ${CMAKE_CURRENT_BINARY_DIR}/buildver-sgen.h CONTENT [=[const char *build_date = "";]=])
add_custom_command(
OUTPUT ${CMAKE_CURRENT_BINARY_DIR}/cpu-amd64.h
COMMAND ${Python3_EXECUTABLE} ${CMAKE_CURRENT_SOURCE_DIR}/genmdesc.py TARGET_AMD64 ${CMAKE_CURRENT_SOURCE_DIR} cpu-amd64.h amd64_desc ${CMAKE_CURRENT_SOURCE_DIR}/cpu-amd64.md
DEPENDS ${CMAKE_CURRENT_SOURCE_DIR}/genmdesc.py mini-ops.h
VERBATIM
)
add_custom_command(
OUTPUT ${CMAKE_CURRENT_BINARY_DIR}/cpu-x86.h
COMMAND ${Python3_EXECUTABLE} ${CMAKE_CURRENT_SOURCE_DIR}/genmdesc.py TARGET_X86 ${CMAKE_CURRENT_SOURCE_DIR} cpu-x86.h x86_desc ${CMAKE_CURRENT_SOURCE_DIR}/cpu-x86.md
VERBATIM
)
add_custom_command(
OUTPUT ${CMAKE_CURRENT_BINARY_DIR}/cpu-arm64.h
COMMAND ${Python3_EXECUTABLE} ${CMAKE_CURRENT_SOURCE_DIR}/genmdesc.py TARGET_ARM64 ${CMAKE_CURRENT_SOURCE_DIR} cpu-arm64.h arm64_cpu_desc ${CMAKE_CURRENT_SOURCE_DIR}/cpu-arm64.md
VERBATIM
)
add_custom_command(
OUTPUT ${CMAKE_CURRENT_BINARY_DIR}/cpu-arm.h
COMMAND ${Python3_EXECUTABLE} ${CMAKE_CURRENT_SOURCE_DIR}/genmdesc.py TARGET_ARM ${CMAKE_CURRENT_SOURCE_DIR} cpu-arm.h arm_cpu_desc ${CMAKE_CURRENT_SOURCE_DIR}/cpu-arm.md
VERBATIM
)
add_custom_command(
OUTPUT ${CMAKE_CURRENT_BINARY_DIR}/cpu-s390x.h
COMMAND ${Python3_EXECUTABLE} ${CMAKE_CURRENT_SOURCE_DIR}/genmdesc.py TARGET_S390X ${CMAKE_CURRENT_SOURCE_DIR} cpu-s390x.h s390x_cpu_desc ${CMAKE_CURRENT_SOURCE_DIR}/cpu-s390x.md
VERBATIM
)
add_custom_command(
OUTPUT ${CMAKE_CURRENT_BINARY_DIR}/cpu-wasm.h
COMMAND ${Python3_EXECUTABLE} ${CMAKE_CURRENT_SOURCE_DIR}/genmdesc.py TARGET_WASM ${CMAKE_CURRENT_SOURCE_DIR} cpu-wasm.h wasm_desc ${CMAKE_CURRENT_SOURCE_DIR}/cpu-wasm.md
VERBATIM
)
if(NOT DISABLE_EXECUTABLES)
set(sgen_sources "main-sgen.c")
if(HOST_WIN32)
set(sgen_sources "${sgen_sources};${VERSION_FILE_RC_PATH}")
endif()
add_executable(mono-sgen "${sgen_sources}")
if(MONO_CROSS_COMPILE_EXECUTABLE_NAME)
set_target_properties(mono-sgen PROPERTIES OUTPUT_NAME mono-aot-cross)
endif()
target_link_libraries(mono-sgen PRIVATE monoapi monosgen-static ${OS_LIBS} ${LLVM_LIBS} ${ICU_LIBS} ${Z_LIBS})
if(NOT DISABLE_COMPONENTS AND STATIC_COMPONENTS AND NOT DISABLE_LINK_STATIC_COMPONENTS)
# if components are built statically, link them into runtime.
target_sources(mono-sgen PRIVATE "${mono-components-objects}")
elseif(NOT DISABLE_COMPONENTS AND STATIC_COMPONENTS AND DISABLE_LINK_STATIC_COMPONENTS)
# if components are built statically, but we shouldn't link components
# link the fallback stubs into the runtime
target_sources(mono-sgen PRIVATE "${mono-components-stub-objects}")
elseif(NOT DISABLE_COMPONENTS AND NOT STATIC_COMPONENTS)
# if components are built dynamically, link the fallback stubs into the runtime
target_sources(mono-sgen PRIVATE "${mono-components-stub-objects}")
elseif(DISABLE_COMPONENTS)
# if components are disabled, link the fallback stubs into the runtime
# fallback stubs already provided in monosgen-static when components are disabled
endif()
if(ICU_LDFLAGS)
set_property(TARGET mono-sgen APPEND_STRING PROPERTY LINK_FLAGS " ${ICU_LDFLAGS}")
endif()
install(TARGETS mono-sgen RUNTIME)
if(HOST_WIN32)
install(FILES $<TARGET_PDB_FILE:mono-sgen> DESTINATION bin OPTIONAL)
endif()
endif()
| 1 |
dotnet/runtime | 66,211 | [mono] Remove SkipVerification support from the runtime | CAS support was removed in .NET Core. This allows us removing a bunch of code that is unused, e.g. the dependency on libiconv. | akoeplinger | 2022-03-04T19:47:04Z | 2022-03-06T13:44:33Z | b463b1630dbf1be5b013208a9fa73e1ecd6c774c | be629f49a350d526de2c65981294734cee420b90 | [mono] Remove SkipVerification support from the runtime. CAS support was removed in .NET Core. This allows us removing a bunch of code that is unused, e.g. the dependency on libiconv. | ./src/mono/mono/mini/method-to-ir.c | /**
* \file
* Convert CIL to the JIT internal representation
*
* Author:
* Paolo Molaro ([email protected])
* Dietmar Maurer ([email protected])
*
* (C) 2002 Ximian, Inc.
* Copyright 2003-2010 Novell, Inc (http://www.novell.com)
* Copyright 2011 Xamarin, Inc (http://www.xamarin.com)
* Licensed under the MIT license. See LICENSE file in the project root for full license information.
*/
#include <config.h>
#include <glib.h>
#include <mono/utils/mono-compiler.h>
#include "mini.h"
#ifndef DISABLE_JIT
#include <signal.h>
#ifdef HAVE_UNISTD_H
#include <unistd.h>
#endif
#include <math.h>
#include <string.h>
#include <ctype.h>
#ifdef HAVE_SYS_TIME_H
#include <sys/time.h>
#endif
#ifdef HAVE_ALLOCA_H
#include <alloca.h>
#endif
#include <mono/utils/memcheck.h>
#include <mono/metadata/abi-details.h>
#include <mono/metadata/assembly.h>
#include <mono/metadata/assembly-internals.h>
#include <mono/metadata/attrdefs.h>
#include <mono/metadata/loader.h>
#include <mono/metadata/tabledefs.h>
#include <mono/metadata/class.h>
#include <mono/metadata/class-abi-details.h>
#include <mono/metadata/object.h>
#include <mono/metadata/exception.h>
#include <mono/metadata/exception-internals.h>
#include <mono/metadata/opcodes.h>
#include <mono/metadata/mono-endian.h>
#include <mono/metadata/tokentype.h>
#include <mono/metadata/tabledefs.h>
#include <mono/metadata/marshal.h>
#include <mono/metadata/debug-helpers.h>
#include <mono/metadata/debug-internals.h>
#include <mono/metadata/gc-internals.h>
#include <mono/metadata/threads-types.h>
#include <mono/metadata/profiler-private.h>
#include <mono/metadata/profiler.h>
#include <mono/metadata/monitor.h>
#include <mono/utils/mono-memory-model.h>
#include <mono/utils/mono-error-internals.h>
#include <mono/metadata/mono-basic-block.h>
#include <mono/metadata/reflection-internals.h>
#include <mono/utils/mono-threads-coop.h>
#include <mono/utils/mono-utils-debug.h>
#include <mono/utils/mono-logger-internals.h>
#include <mono/metadata/verify-internals.h>
#include <mono/metadata/icall-decl.h>
#include "mono/metadata/icall-signatures.h"
#include "trace.h"
#include "ir-emit.h"
#include "jit-icalls.h"
#include <mono/jit/jit.h>
#include "seq-points.h"
#include "aot-compiler.h"
#include "mini-llvm.h"
#include "mini-runtime.h"
#include "llvmonly-runtime.h"
#include "mono/utils/mono-tls-inline.h"
#define BRANCH_COST 10
#define CALL_COST 10
/* Used for the JIT */
#define INLINE_LENGTH_LIMIT 20
/*
* The aot and jit inline limits should be different,
 * since aot sees the whole program, we can let opt inline methods for us,
* while the jit only sees one method, so we have to inline things ourselves.
*/
/* Used by LLVM AOT */
#define LLVM_AOT_INLINE_LENGTH_LIMIT 30
/* Used by the LLVM JIT */
#define LLVM_JIT_INLINE_LENGTH_LIMIT 100
static const gboolean debug_tailcall = FALSE; // logging
static const gboolean debug_tailcall_try_all = FALSE; // consider any call followed by ret
gboolean
mono_tailcall_print_enabled (void)
{
return debug_tailcall || MONO_TRACE_IS_TRACED (G_LOG_LEVEL_DEBUG, MONO_TRACE_TAILCALL);
}
void
mono_tailcall_print (const char *format, ...)
{
if (!mono_tailcall_print_enabled ())
return;
va_list args;
va_start (args, format);
g_printv (format, args);
va_end (args);
}
/* These have 'cfg' as an implicit argument */
#define INLINE_FAILURE(msg) do { \
if ((cfg->method != cfg->current_method) && (cfg->current_method->wrapper_type == MONO_WRAPPER_NONE)) { \
inline_failure (cfg, msg); \
goto exception_exit; \
} \
} while (0)
#define CHECK_CFG_EXCEPTION do {\
if (cfg->exception_type != MONO_EXCEPTION_NONE) \
goto exception_exit; \
} while (0)
#define FIELD_ACCESS_FAILURE(method, field) do { \
field_access_failure ((cfg), (method), (field)); \
goto exception_exit; \
} while (0)
#define GENERIC_SHARING_FAILURE(opcode) do { \
if (cfg->gshared) { \
gshared_failure (cfg, opcode, __FILE__, __LINE__); \
goto exception_exit; \
} \
} while (0)
#define GSHAREDVT_FAILURE(opcode) do { \
if (cfg->gsharedvt) { \
gsharedvt_failure (cfg, opcode, __FILE__, __LINE__); \
goto exception_exit; \
} \
} while (0)
#define OUT_OF_MEMORY_FAILURE do { \
mono_cfg_set_exception (cfg, MONO_EXCEPTION_MONO_ERROR); \
mono_error_set_out_of_memory (cfg->error, ""); \
goto exception_exit; \
} while (0)
#define DISABLE_AOT(cfg) do { \
if ((cfg)->verbose_level >= 2) \
printf ("AOT disabled: %s:%d\n", __FILE__, __LINE__); \
(cfg)->disable_aot = TRUE; \
} while (0)
#define LOAD_ERROR do { \
break_on_unverified (); \
mono_cfg_set_exception (cfg, MONO_EXCEPTION_TYPE_LOAD); \
goto exception_exit; \
} while (0)
#define TYPE_LOAD_ERROR(klass) do { \
cfg->exception_ptr = klass; \
LOAD_ERROR; \
} while (0)
#define CHECK_CFG_ERROR do {\
if (!is_ok (cfg->error)) { \
mono_cfg_set_exception (cfg, MONO_EXCEPTION_MONO_ERROR); \
goto mono_error_exit; \
} \
} while (0)
int mono_op_to_op_imm (int opcode);
int mono_op_to_op_imm_noemul (int opcode);
static int inline_method (MonoCompile *cfg, MonoMethod *cmethod, MonoMethodSignature *fsig, MonoInst **sp,
guchar *ip, guint real_offset, gboolean inline_always, gboolean *is_empty);
static MonoInst*
convert_value (MonoCompile *cfg, MonoType *type, MonoInst *ins);
/* helper methods signatures */
/* type loading helpers */
static GENERATE_GET_CLASS_WITH_CACHE (iequatable, "System", "IEquatable`1")
static GENERATE_GET_CLASS_WITH_CACHE (geqcomparer, "System.Collections.Generic", "GenericEqualityComparer`1");
/*
* Instruction metadata
*/
#ifdef MINI_OP
#undef MINI_OP
#endif
#ifdef MINI_OP3
#undef MINI_OP3
#endif
#define MINI_OP(a,b,dest,src1,src2) dest, src1, src2, ' ',
#define MINI_OP3(a,b,dest,src1,src2,src3) dest, src1, src2, src3,
#define NONE ' '
#define IREG 'i'
#define FREG 'f'
#define VREG 'v'
#define XREG 'x'
#if SIZEOF_REGISTER == 8 && SIZEOF_REGISTER == TARGET_SIZEOF_VOID_P
#define LREG IREG
#else
#define LREG 'l'
#endif
/* keep in sync with the enum in mini.h */
const char
mini_ins_info[] = {
#include "mini-ops.h"
};
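/*
 * A sketch of the encoding above: each opcode contributes four consecutive
 * chars (the dest, src1, src2 and src3 register classes), using the
 * characters defined above ('i' int, 'f' float, 'v' vtype, 'x' xmm,
 * 'l' long on 32-bit targets, ' ' none), so an opcode's operand classes
 * can be read back by indexing mini_ins_info at 4 * its position in
 * mini-ops.h.
 */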
#undef MINI_OP
#undef MINI_OP3
#define MINI_OP(a,b,dest,src1,src2) ((src2) != NONE ? 2 : ((src1) != NONE ? 1 : 0)),
#define MINI_OP3(a,b,dest,src1,src2,src3) ((src3) != NONE ? 3 : ((src2) != NONE ? 2 : ((src1) != NONE ? 1 : 0))),
/*
* This should contain the index of the last sreg + 1. This is not the same
* as the number of sregs for opcodes like IA64_CMP_EQ_IMM.
*/
const gint8 mini_ins_sreg_counts[] = {
#include "mini-ops.h"
};
#undef MINI_OP
#undef MINI_OP3
guint32
mono_alloc_ireg (MonoCompile *cfg)
{
return alloc_ireg (cfg);
}
guint32
mono_alloc_lreg (MonoCompile *cfg)
{
return alloc_lreg (cfg);
}
guint32
mono_alloc_freg (MonoCompile *cfg)
{
return alloc_freg (cfg);
}
guint32
mono_alloc_preg (MonoCompile *cfg)
{
return alloc_preg (cfg);
}
guint32
mono_alloc_dreg (MonoCompile *cfg, MonoStackType stack_type)
{
return alloc_dreg (cfg, stack_type);
}
/*
* mono_alloc_ireg_ref:
*
* Allocate an IREG, and mark it as holding a GC ref.
*/
guint32
mono_alloc_ireg_ref (MonoCompile *cfg)
{
return alloc_ireg_ref (cfg);
}
/*
* mono_alloc_ireg_mp:
*
* Allocate an IREG, and mark it as holding a managed pointer.
*/
guint32
mono_alloc_ireg_mp (MonoCompile *cfg)
{
return alloc_ireg_mp (cfg);
}
/*
* mono_alloc_ireg_copy:
*
* Allocate an IREG with the same GC type as VREG.
*/
guint32
mono_alloc_ireg_copy (MonoCompile *cfg, guint32 vreg)
{
if (vreg_is_ref (cfg, vreg))
return alloc_ireg_ref (cfg);
else if (vreg_is_mp (cfg, vreg))
return alloc_ireg_mp (cfg);
else
return alloc_ireg (cfg);
}
guint
mono_type_to_regmove (MonoCompile *cfg, MonoType *type)
{
if (m_type_is_byref (type))
return OP_MOVE;
type = mini_get_underlying_type (type);
handle_enum:
switch (type->type) {
case MONO_TYPE_I1:
case MONO_TYPE_U1:
return OP_MOVE;
case MONO_TYPE_I2:
case MONO_TYPE_U2:
return OP_MOVE;
case MONO_TYPE_I4:
case MONO_TYPE_U4:
return OP_MOVE;
case MONO_TYPE_I:
case MONO_TYPE_U:
case MONO_TYPE_PTR:
case MONO_TYPE_FNPTR:
return OP_MOVE;
case MONO_TYPE_CLASS:
case MONO_TYPE_STRING:
case MONO_TYPE_OBJECT:
case MONO_TYPE_SZARRAY:
case MONO_TYPE_ARRAY:
return OP_MOVE;
case MONO_TYPE_I8:
case MONO_TYPE_U8:
#if SIZEOF_REGISTER == 8
return OP_MOVE;
#else
return OP_LMOVE;
#endif
case MONO_TYPE_R4:
return cfg->r4fp ? OP_RMOVE : OP_FMOVE;
case MONO_TYPE_R8:
return OP_FMOVE;
case MONO_TYPE_VALUETYPE:
if (m_class_is_enumtype (type->data.klass)) {
type = mono_class_enum_basetype_internal (type->data.klass);
goto handle_enum;
}
if (MONO_CLASS_IS_SIMD (cfg, mono_class_from_mono_type_internal (type)))
return OP_XMOVE;
return OP_VMOVE;
case MONO_TYPE_TYPEDBYREF:
return OP_VMOVE;
case MONO_TYPE_GENERICINST:
if (MONO_CLASS_IS_SIMD (cfg, mono_class_from_mono_type_internal (type)))
return OP_XMOVE;
type = m_class_get_byval_arg (type->data.generic_class->container_class);
goto handle_enum;
case MONO_TYPE_VAR:
case MONO_TYPE_MVAR:
g_assert (cfg->gshared);
if (mini_type_var_is_vt (type))
return OP_VMOVE;
else
return mono_type_to_regmove (cfg, mini_get_underlying_type (type));
default:
g_error ("unknown type 0x%02x in type_to_regstore", type->type);
}
return -1;
}
void
mono_print_bb (MonoBasicBlock *bb, const char *msg)
{
int i;
MonoInst *tree;
GString *str = g_string_new ("");
g_string_append_printf (str, "%s %d: [IN: ", msg, bb->block_num);
for (i = 0; i < bb->in_count; ++i)
g_string_append_printf (str, " BB%d(%d)", bb->in_bb [i]->block_num, bb->in_bb [i]->dfn);
g_string_append_printf (str, ", OUT: ");
for (i = 0; i < bb->out_count; ++i)
g_string_append_printf (str, " BB%d(%d)", bb->out_bb [i]->block_num, bb->out_bb [i]->dfn);
g_string_append_printf (str, " ]\n");
g_print ("%s", str->str);
g_string_free (str, TRUE);
for (tree = bb->code; tree; tree = tree->next)
mono_print_ins_index (-1, tree);
}
static MONO_NEVER_INLINE gboolean
break_on_unverified (void)
{
if (mini_debug_options.break_on_unverified) {
G_BREAKPOINT ();
return TRUE;
}
return FALSE;
}
static void
clear_cfg_error (MonoCompile *cfg)
{
mono_error_cleanup (cfg->error);
error_init (cfg->error);
}
static MONO_NEVER_INLINE void
field_access_failure (MonoCompile *cfg, MonoMethod *method, MonoClassField *field)
{
char *method_fname = mono_method_full_name (method, TRUE);
char *field_fname = mono_field_full_name (field);
mono_cfg_set_exception (cfg, MONO_EXCEPTION_MONO_ERROR);
mono_error_set_generic_error (cfg->error, "System", "FieldAccessException", "Field `%s' is inaccessible from method `%s'\n", field_fname, method_fname);
g_free (method_fname);
g_free (field_fname);
}
static MONO_NEVER_INLINE void
inline_failure (MonoCompile *cfg, const char *msg)
{
if (cfg->verbose_level >= 2)
printf ("inline failed: %s\n", msg);
mono_cfg_set_exception (cfg, MONO_EXCEPTION_INLINE_FAILED);
}
static MONO_NEVER_INLINE void
gshared_failure (MonoCompile *cfg, int opcode, const char *file, int line)
{
if (cfg->verbose_level > 2)
printf ("sharing failed for method %s.%s.%s/%d opcode %s line %d\n", m_class_get_name_space (cfg->current_method->klass), m_class_get_name (cfg->current_method->klass), cfg->current_method->name, cfg->current_method->signature->param_count, mono_opcode_name (opcode), line);
mono_cfg_set_exception (cfg, MONO_EXCEPTION_GENERIC_SHARING_FAILED);
}
static MONO_NEVER_INLINE void
gsharedvt_failure (MonoCompile *cfg, int opcode, const char *file, int line)
{
cfg->exception_message = g_strdup_printf ("gsharedvt failed for method %s.%s.%s/%d opcode %s %s:%d", m_class_get_name_space (cfg->current_method->klass), m_class_get_name (cfg->current_method->klass), cfg->current_method->name, cfg->current_method->signature->param_count, mono_opcode_name ((opcode)), file, line);
if (cfg->verbose_level >= 2)
printf ("%s\n", cfg->exception_message);
mono_cfg_set_exception (cfg, MONO_EXCEPTION_GENERIC_SHARING_FAILED);
}
void
mini_set_inline_failure (MonoCompile *cfg, const char *msg)
{
if (cfg->verbose_level >= 2)
printf ("inline failed: %s\n", msg);
mono_cfg_set_exception (cfg, MONO_EXCEPTION_INLINE_FAILED);
}
/*
 * When using gsharedvt, some instantiations might be verifiable and some might not, e.g.
* foo<T> (int i) { ldarg.0; box T; }
*/
#define UNVERIFIED do { \
if (cfg->gsharedvt) { \
if (cfg->verbose_level > 2) \
printf ("gsharedvt method failed to verify, falling back to instantiation.\n"); \
mono_cfg_set_exception (cfg, MONO_EXCEPTION_GENERIC_SHARING_FAILED); \
goto exception_exit; \
} \
break_on_unverified (); \
goto unverified; \
} while (0)
#define GET_BBLOCK(cfg,tblock,ip) do { \
(tblock) = cfg->cil_offset_to_bb [(ip) - cfg->cil_start]; \
if (!(tblock)) { \
if ((ip) >= end || (ip) < header->code) UNVERIFIED; \
NEW_BBLOCK (cfg, (tblock)); \
(tblock)->cil_code = (ip); \
ADD_BBLOCK (cfg, (tblock)); \
} \
} while (0)
/* Emit conversions so both operands of a binary opcode are of the same type */
static void
add_widen_op (MonoCompile *cfg, MonoInst *ins, MonoInst **arg1_ref, MonoInst **arg2_ref)
{
MonoInst *arg1 = *arg1_ref;
MonoInst *arg2 = *arg2_ref;
if (cfg->r4fp &&
((arg1->type == STACK_R4 && arg2->type == STACK_R8) ||
(arg1->type == STACK_R8 && arg2->type == STACK_R4))) {
MonoInst *conv;
/* Mixing r4/r8 is allowed by the spec */
if (arg1->type == STACK_R4) {
int dreg = alloc_freg (cfg);
EMIT_NEW_UNALU (cfg, conv, OP_RCONV_TO_R8, dreg, arg1->dreg);
conv->type = STACK_R8;
ins->sreg1 = dreg;
*arg1_ref = conv;
}
if (arg2->type == STACK_R4) {
int dreg = alloc_freg (cfg);
EMIT_NEW_UNALU (cfg, conv, OP_RCONV_TO_R8, dreg, arg2->dreg);
conv->type = STACK_R8;
ins->sreg2 = dreg;
*arg2_ref = conv;
}
}
#if SIZEOF_REGISTER == 8
/* FIXME: Need to add many more cases */
if ((arg1)->type == STACK_PTR && (arg2)->type == STACK_I4) {
MonoInst *widen;
int dr = alloc_preg (cfg);
EMIT_NEW_UNALU (cfg, widen, OP_SEXT_I4, dr, (arg2)->dreg);
(ins)->sreg2 = widen->dreg;
}
#endif
}
#define ADD_UNOP(op) do { \
MONO_INST_NEW (cfg, ins, (op)); \
sp--; \
ins->sreg1 = sp [0]->dreg; \
type_from_op (cfg, ins, sp [0], NULL); \
CHECK_TYPE (ins); \
(ins)->dreg = alloc_dreg ((cfg), (MonoStackType)(ins)->type); \
MONO_ADD_INS ((cfg)->cbb, (ins)); \
*sp++ = mono_decompose_opcode (cfg, ins); \
} while (0)
#define ADD_BINCOND(next_block) do { \
MonoInst *cmp; \
sp -= 2; \
MONO_INST_NEW(cfg, cmp, OP_COMPARE); \
cmp->sreg1 = sp [0]->dreg; \
cmp->sreg2 = sp [1]->dreg; \
add_widen_op (cfg, cmp, &sp [0], &sp [1]); \
type_from_op (cfg, cmp, sp [0], sp [1]); \
CHECK_TYPE (cmp); \
type_from_op (cfg, ins, sp [0], sp [1]); \
ins->inst_many_bb = (MonoBasicBlock **)mono_mempool_alloc (cfg->mempool, sizeof(gpointer)*2); \
GET_BBLOCK (cfg, tblock, target); \
link_bblock (cfg, cfg->cbb, tblock); \
ins->inst_true_bb = tblock; \
if ((next_block)) { \
link_bblock (cfg, cfg->cbb, (next_block)); \
ins->inst_false_bb = (next_block); \
start_new_bblock = 1; \
} else { \
GET_BBLOCK (cfg, tblock, next_ip); \
link_bblock (cfg, cfg->cbb, tblock); \
ins->inst_false_bb = tblock; \
start_new_bblock = 2; \
} \
if (sp != stack_start) { \
handle_stack_args (cfg, stack_start, sp - stack_start); \
CHECK_UNVERIFIABLE (cfg); \
} \
MONO_ADD_INS (cfg->cbb, cmp); \
MONO_ADD_INS (cfg->cbb, ins); \
} while (0)
/**
 * link_bblock: Links two basic blocks
 *
 * Links two basic blocks in the control flow graph: the 'from'
 * argument is the starting block and the 'to' argument is the block
 * that control flow reaches after 'from'.
 */
static void
link_bblock (MonoCompile *cfg, MonoBasicBlock *from, MonoBasicBlock* to)
{
MonoBasicBlock **newa;
int i, found;
#if 0
if (from->cil_code) {
if (to->cil_code)
printf ("edge from IL%04x to IL_%04x\n", from->cil_code - cfg->cil_code, to->cil_code - cfg->cil_code);
else
printf ("edge from IL%04x to exit\n", from->cil_code - cfg->cil_code);
} else {
if (to->cil_code)
printf ("edge from entry to IL_%04x\n", to->cil_code - cfg->cil_code);
else
printf ("edge from entry to exit\n");
}
#endif
found = FALSE;
for (i = 0; i < from->out_count; ++i) {
if (to == from->out_bb [i]) {
found = TRUE;
break;
}
}
if (!found) {
newa = (MonoBasicBlock **)mono_mempool_alloc (cfg->mempool, sizeof (gpointer) * (from->out_count + 1));
for (i = 0; i < from->out_count; ++i) {
newa [i] = from->out_bb [i];
}
newa [i] = to;
from->out_count++;
from->out_bb = newa;
}
found = FALSE;
for (i = 0; i < to->in_count; ++i) {
if (from == to->in_bb [i]) {
found = TRUE;
break;
}
}
if (!found) {
newa = (MonoBasicBlock **)mono_mempool_alloc (cfg->mempool, sizeof (gpointer) * (to->in_count + 1));
for (i = 0; i < to->in_count; ++i) {
newa [i] = to->in_bb [i];
}
newa [i] = from;
to->in_count++;
to->in_bb = newa;
}
}
void
mono_link_bblock (MonoCompile *cfg, MonoBasicBlock *from, MonoBasicBlock* to)
{
link_bblock (cfg, from, to);
}
static void
mono_create_spvar_for_region (MonoCompile *cfg, int region);
static void
mark_bb_in_region (MonoCompile *cfg, guint region, uint32_t start, uint32_t end)
{
MonoBasicBlock *bb = cfg->cil_offset_to_bb [start];
	//start must exist in cil_offset_to_bb, since those are IL offsets used by EH, for which GET_BBLOCK was called early.
g_assert (bb);
if (cfg->verbose_level > 1)
g_print ("FIRST BB for %d is BB_%d\n", start, bb->block_num);
for (; bb && bb->real_offset < end; bb = bb->next_bb) {
//no one claimed this bb, take it.
if (bb->region == -1) {
bb->region = region;
continue;
}
//current region is an early handler, bail
if ((bb->region & (0xf << 4)) != MONO_REGION_TRY) {
continue;
}
//current region is a try, only overwrite if new region is a handler
if ((region & (0xf << 4)) != MONO_REGION_TRY) {
bb->region = region;
}
}
if (cfg->spvars)
mono_create_spvar_for_region (cfg, region);
}
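/*
 * A sketch of the region encoding used above (see compute_bb_regions ()
 * below): a region word is ((clause_index + 1) << 8) | MONO_REGION_kind |
 * clause_flags, so (region & (0xf << 4)) extracts the MONO_REGION_* kind
 * and distinguishes try regions from handler regions.
 */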
static void
compute_bb_regions (MonoCompile *cfg)
{
MonoBasicBlock *bb;
MonoMethodHeader *header = cfg->header;
int i;
for (bb = cfg->bb_entry; bb; bb = bb->next_bb)
bb->region = -1;
for (i = 0; i < header->num_clauses; ++i) {
MonoExceptionClause *clause = &header->clauses [i];
if (clause->flags == MONO_EXCEPTION_CLAUSE_FILTER)
mark_bb_in_region (cfg, ((i + 1) << 8) | MONO_REGION_FILTER | clause->flags, clause->data.filter_offset, clause->handler_offset);
guint handler_region;
if (clause->flags == MONO_EXCEPTION_CLAUSE_FINALLY)
handler_region = ((i + 1) << 8) | MONO_REGION_FINALLY | clause->flags;
else if (clause->flags == MONO_EXCEPTION_CLAUSE_FAULT)
handler_region = ((i + 1) << 8) | MONO_REGION_FAULT | clause->flags;
else
handler_region = ((i + 1) << 8) | MONO_REGION_CATCH | clause->flags;
mark_bb_in_region (cfg, handler_region, clause->handler_offset, clause->handler_offset + clause->handler_len);
mark_bb_in_region (cfg, ((i + 1) << 8) | clause->flags, clause->try_offset, clause->try_offset + clause->try_len);
}
if (cfg->verbose_level > 2) {
MonoBasicBlock *bb;
for (bb = cfg->bb_entry; bb; bb = bb->next_bb)
g_print ("REGION BB%d IL_%04x ID_%08X\n", bb->block_num, bb->real_offset, bb->region);
}
}
static gboolean
ip_in_finally_clause (MonoCompile *cfg, int offset)
{
MonoMethodHeader *header = cfg->header;
MonoExceptionClause *clause;
int i;
for (i = 0; i < header->num_clauses; ++i) {
clause = &header->clauses [i];
if (clause->flags != MONO_EXCEPTION_CLAUSE_FINALLY && clause->flags != MONO_EXCEPTION_CLAUSE_FAULT)
continue;
if (MONO_OFFSET_IN_HANDLER (clause, offset))
return TRUE;
}
return FALSE;
}
/* Find clauses between ip and target, from inner to outer */
static GList*
mono_find_leave_clauses (MonoCompile *cfg, guchar *ip, guchar *target)
{
MonoMethodHeader *header = cfg->header;
MonoExceptionClause *clause;
int i;
GList *res = NULL;
for (i = 0; i < header->num_clauses; ++i) {
clause = &header->clauses [i];
if (MONO_OFFSET_IN_CLAUSE (clause, (ip - header->code)) &&
(!MONO_OFFSET_IN_CLAUSE (clause, (target - header->code)))) {
MonoLeaveClause *leave = mono_mempool_alloc0 (cfg->mempool, sizeof (MonoLeaveClause));
leave->index = i;
leave->clause = clause;
res = g_list_append_mempool (cfg->mempool, res, leave);
}
}
return res;
}
static void
mono_create_spvar_for_region (MonoCompile *cfg, int region)
{
MonoInst *var;
var = (MonoInst *)g_hash_table_lookup (cfg->spvars, GINT_TO_POINTER (region));
if (var)
return;
var = mono_compile_create_var (cfg, mono_get_int_type (), OP_LOCAL);
/* prevent it from being register allocated */
var->flags |= MONO_INST_VOLATILE;
g_hash_table_insert (cfg->spvars, GINT_TO_POINTER (region), var);
}
MonoInst *
mono_find_exvar_for_offset (MonoCompile *cfg, int offset)
{
return (MonoInst *)g_hash_table_lookup (cfg->exvars, GINT_TO_POINTER (offset));
}
static MonoInst*
mono_create_exvar_for_offset (MonoCompile *cfg, int offset)
{
MonoInst *var;
var = (MonoInst *)g_hash_table_lookup (cfg->exvars, GINT_TO_POINTER (offset));
if (var)
return var;
var = mono_compile_create_var (cfg, mono_get_object_type (), OP_LOCAL);
/* prevent it from being register allocated */
var->flags |= MONO_INST_VOLATILE;
g_hash_table_insert (cfg->exvars, GINT_TO_POINTER (offset), var);
return var;
}
/*
* Returns the type used in the eval stack when @type is loaded.
* FIXME: return a MonoType/MonoClass for the byref and VALUETYPE cases.
*/
void
mini_type_to_eval_stack_type (MonoCompile *cfg, MonoType *type, MonoInst *inst)
{
MonoClass *klass;
type = mini_get_underlying_type (type);
inst->klass = klass = mono_class_from_mono_type_internal (type);
if (m_type_is_byref (type)) {
inst->type = STACK_MP;
return;
}
handle_enum:
switch (type->type) {
case MONO_TYPE_VOID:
inst->type = STACK_INV;
return;
case MONO_TYPE_I1:
case MONO_TYPE_U1:
case MONO_TYPE_I2:
case MONO_TYPE_U2:
case MONO_TYPE_I4:
case MONO_TYPE_U4:
inst->type = STACK_I4;
return;
case MONO_TYPE_I:
case MONO_TYPE_U:
case MONO_TYPE_PTR:
case MONO_TYPE_FNPTR:
inst->type = STACK_PTR;
return;
case MONO_TYPE_CLASS:
case MONO_TYPE_STRING:
case MONO_TYPE_OBJECT:
case MONO_TYPE_SZARRAY:
case MONO_TYPE_ARRAY:
inst->type = STACK_OBJ;
return;
case MONO_TYPE_I8:
case MONO_TYPE_U8:
inst->type = STACK_I8;
return;
case MONO_TYPE_R4:
inst->type = cfg->r4_stack_type;
break;
case MONO_TYPE_R8:
inst->type = STACK_R8;
return;
case MONO_TYPE_VALUETYPE:
if (m_class_is_enumtype (type->data.klass)) {
type = mono_class_enum_basetype_internal (type->data.klass);
goto handle_enum;
} else {
inst->klass = klass;
inst->type = STACK_VTYPE;
return;
}
case MONO_TYPE_TYPEDBYREF:
inst->klass = mono_defaults.typed_reference_class;
inst->type = STACK_VTYPE;
return;
case MONO_TYPE_GENERICINST:
type = m_class_get_byval_arg (type->data.generic_class->container_class);
goto handle_enum;
case MONO_TYPE_VAR:
case MONO_TYPE_MVAR:
g_assert (cfg->gshared);
if (mini_is_gsharedvt_type (type)) {
g_assert (cfg->gsharedvt);
inst->type = STACK_VTYPE;
} else {
mini_type_to_eval_stack_type (cfg, mini_get_underlying_type (type), inst);
}
return;
default:
g_error ("unknown type 0x%02x in eval stack type", type->type);
}
}
/*
* The following tables are used to quickly validate the IL code in type_from_op ().
*/
#define IF_P8(v) (SIZEOF_VOID_P == 8 ? v : STACK_INV)
#define IF_P8_I8 IF_P8(STACK_I8)
#define IF_P8_PTR IF_P8(STACK_PTR)
static const char
bin_num_table [STACK_MAX] [STACK_MAX] = {
{STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV},
{STACK_INV, STACK_I4, IF_P8_I8, STACK_PTR, STACK_INV, STACK_MP, STACK_INV, STACK_INV},
{STACK_INV, IF_P8_I8, STACK_I8, IF_P8_PTR, STACK_INV, STACK_INV, STACK_INV, STACK_INV},
{STACK_INV, STACK_PTR, IF_P8_PTR, STACK_PTR, STACK_INV, STACK_MP, STACK_INV, STACK_INV},
{STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_R8, STACK_INV, STACK_INV, STACK_INV, STACK_R8},
{STACK_INV, STACK_MP, STACK_INV, STACK_MP, STACK_INV, STACK_PTR, STACK_INV, STACK_INV},
{STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV},
{STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV},
{STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_R8, STACK_INV, STACK_INV, STACK_INV, STACK_R4}
};
static const char
neg_table [] = {
STACK_INV, STACK_I4, STACK_I8, STACK_PTR, STACK_R8, STACK_INV, STACK_INV, STACK_INV, STACK_R4
};
/* reduce the size of this table */
static const char
bin_int_table [STACK_MAX] [STACK_MAX] = {
{STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV},
{STACK_INV, STACK_I4, IF_P8_I8, STACK_PTR, STACK_INV, STACK_INV, STACK_INV, STACK_INV},
{STACK_INV, IF_P8_I8, STACK_I8, IF_P8_PTR, STACK_INV, STACK_INV, STACK_INV, STACK_INV},
{STACK_INV, STACK_PTR, IF_P8_PTR, STACK_PTR, STACK_INV, STACK_INV, STACK_INV, STACK_INV},
{STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV},
{STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV},
{STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV},
{STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV}
};
#define P1 (SIZEOF_VOID_P == 8)
static const char
bin_comp_table [STACK_MAX] [STACK_MAX] = {
/* Inv i L p F & O vt r4 */
{0},
{0, 1, 0, 1, 0, 0, 0, 0}, /* i, int32 */
{0, 0, 1,P1, 0, 0, 0, 0}, /* L, int64 */
{0, 1,P1, 1, 0, 2, 4, 0}, /* p, ptr */
{0, 0, 0, 0, 1, 0, 0, 0, 1}, /* F, R8 */
{0, 0, 0, 2, 0, 1, 0, 0}, /* &, managed pointer */
{0, 0, 0, 4, 0, 0, 3, 0}, /* O, reference */
{0, 0, 0, 0, 0, 0, 0, 0}, /* vt value type */
{0, 0, 0, 0, 1, 0, 0, 0, 1}, /* r, r4 */
};
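/*
 * How type_from_op () below consumes this table: any nonzero entry marks the
 * pair as comparable at all (enough for ceq), while the ordered cgt/clt
 * forms additionally test (entry & 1), so the even entries (2 and 4) only
 * permit (in)equality comparisons.
 */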
#undef P1
/* reduce the size of this table */
static const char
shift_table [STACK_MAX] [STACK_MAX] = {
{STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV},
{STACK_INV, STACK_I4, STACK_INV, STACK_I4, STACK_INV, STACK_INV, STACK_INV, STACK_INV},
{STACK_INV, STACK_I8, STACK_INV, STACK_I8, STACK_INV, STACK_INV, STACK_INV, STACK_INV},
{STACK_INV, STACK_PTR, STACK_INV, STACK_PTR, STACK_INV, STACK_INV, STACK_INV, STACK_INV},
{STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV},
{STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV},
{STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV},
{STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV}
};
/*
* Tables to map from the non-specific opcode to the matching
* type-specific opcode.
*/
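/*
 * Example of how these maps are used (a sketch): type_from_op () computes
 * ins->type and then does ins->opcode += binops_op_map [ins->type]; for
 * CEE_ADD on two STACK_I4 operands this adds (OP_IADD - CEE_ADD), turning
 * the generic add into OP_IADD.
 */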
/* handles from CEE_ADD to CEE_SHR_UN (CEE_REM_UN for floats) */
static const guint16
binops_op_map [STACK_MAX] = {
0, OP_IADD-CEE_ADD, OP_LADD-CEE_ADD, OP_PADD-CEE_ADD, OP_FADD-CEE_ADD, OP_PADD-CEE_ADD, 0, 0, OP_RADD-CEE_ADD
};
/* handles from CEE_NEG to CEE_CONV_U8 */
static const guint16
unops_op_map [STACK_MAX] = {
0, OP_INEG-CEE_NEG, OP_LNEG-CEE_NEG, OP_PNEG-CEE_NEG, OP_FNEG-CEE_NEG, OP_PNEG-CEE_NEG, 0, 0, OP_RNEG-CEE_NEG
};
/* handles from CEE_CONV_U2 to CEE_SUB_OVF_UN */
static const guint16
ovfops_op_map [STACK_MAX] = {
0, OP_ICONV_TO_U2-CEE_CONV_U2, OP_LCONV_TO_U2-CEE_CONV_U2, OP_PCONV_TO_U2-CEE_CONV_U2, OP_FCONV_TO_U2-CEE_CONV_U2, OP_PCONV_TO_U2-CEE_CONV_U2, OP_PCONV_TO_U2-CEE_CONV_U2, 0, OP_RCONV_TO_U2-CEE_CONV_U2
};
/* handles from CEE_CONV_OVF_I1_UN to CEE_CONV_OVF_U_UN */
static const guint16
ovf2ops_op_map [STACK_MAX] = {
0, OP_ICONV_TO_OVF_I1_UN-CEE_CONV_OVF_I1_UN, OP_LCONV_TO_OVF_I1_UN-CEE_CONV_OVF_I1_UN, OP_PCONV_TO_OVF_I1_UN-CEE_CONV_OVF_I1_UN, OP_FCONV_TO_OVF_I1_UN-CEE_CONV_OVF_I1_UN, OP_PCONV_TO_OVF_I1_UN-CEE_CONV_OVF_I1_UN, 0, 0, OP_RCONV_TO_OVF_I1_UN-CEE_CONV_OVF_I1_UN
};
/* handles from CEE_CONV_OVF_I1 to CEE_CONV_OVF_U8 */
static const guint16
ovf3ops_op_map [STACK_MAX] = {
0, OP_ICONV_TO_OVF_I1-CEE_CONV_OVF_I1, OP_LCONV_TO_OVF_I1-CEE_CONV_OVF_I1, OP_PCONV_TO_OVF_I1-CEE_CONV_OVF_I1, OP_FCONV_TO_OVF_I1-CEE_CONV_OVF_I1, OP_PCONV_TO_OVF_I1-CEE_CONV_OVF_I1, 0, 0, OP_RCONV_TO_OVF_I1-CEE_CONV_OVF_I1
};
/* handles from CEE_BEQ to CEE_BLT_UN */
static const guint16
beqops_op_map [STACK_MAX] = {
0, OP_IBEQ-CEE_BEQ, OP_LBEQ-CEE_BEQ, OP_PBEQ-CEE_BEQ, OP_FBEQ-CEE_BEQ, OP_PBEQ-CEE_BEQ, OP_PBEQ-CEE_BEQ, 0, OP_FBEQ-CEE_BEQ
};
/* handles from CEE_CEQ to CEE_CLT_UN */
static const guint16
ceqops_op_map [STACK_MAX] = {
0, OP_ICEQ-OP_CEQ, OP_LCEQ-OP_CEQ, OP_PCEQ-OP_CEQ, OP_FCEQ-OP_CEQ, OP_PCEQ-OP_CEQ, OP_PCEQ-OP_CEQ, 0, OP_RCEQ-OP_CEQ
};
/*
* Sets ins->type (the type on the eval stack) according to the
* type of the opcode and the arguments to it.
* Invalid IL code is marked by setting ins->type to the invalid value STACK_INV.
*
* FIXME: this function sets ins->type unconditionally in some cases, but
* it should set it to invalid for some types (a conv.x on an object)
*/
static void
type_from_op (MonoCompile *cfg, MonoInst *ins, MonoInst *src1, MonoInst *src2)
{
switch (ins->opcode) {
/* binops */
case MONO_CEE_ADD:
case MONO_CEE_SUB:
case MONO_CEE_MUL:
case MONO_CEE_DIV:
case MONO_CEE_REM:
/* FIXME: check unverifiable args for STACK_MP */
ins->type = bin_num_table [src1->type] [src2->type];
ins->opcode += binops_op_map [ins->type];
break;
case MONO_CEE_DIV_UN:
case MONO_CEE_REM_UN:
case MONO_CEE_AND:
case MONO_CEE_OR:
case MONO_CEE_XOR:
ins->type = bin_int_table [src1->type] [src2->type];
ins->opcode += binops_op_map [ins->type];
break;
case MONO_CEE_SHL:
case MONO_CEE_SHR:
case MONO_CEE_SHR_UN:
ins->type = shift_table [src1->type] [src2->type];
ins->opcode += binops_op_map [ins->type];
break;
case OP_COMPARE:
case OP_LCOMPARE:
case OP_ICOMPARE:
ins->type = bin_comp_table [src1->type] [src2->type] ? STACK_I4: STACK_INV;
if ((src1->type == STACK_I8) || ((TARGET_SIZEOF_VOID_P == 8) && ((src1->type == STACK_PTR) || (src1->type == STACK_OBJ) || (src1->type == STACK_MP))))
ins->opcode = OP_LCOMPARE;
else if (src1->type == STACK_R4)
ins->opcode = OP_RCOMPARE;
else if (src1->type == STACK_R8)
ins->opcode = OP_FCOMPARE;
else
ins->opcode = OP_ICOMPARE;
break;
case OP_ICOMPARE_IMM:
ins->type = bin_comp_table [src1->type] [src1->type] ? STACK_I4 : STACK_INV;
if ((src1->type == STACK_I8) || ((TARGET_SIZEOF_VOID_P == 8) && ((src1->type == STACK_PTR) || (src1->type == STACK_OBJ) || (src1->type == STACK_MP))))
ins->opcode = OP_LCOMPARE_IMM;
break;
case MONO_CEE_BEQ:
case MONO_CEE_BGE:
case MONO_CEE_BGT:
case MONO_CEE_BLE:
case MONO_CEE_BLT:
case MONO_CEE_BNE_UN:
case MONO_CEE_BGE_UN:
case MONO_CEE_BGT_UN:
case MONO_CEE_BLE_UN:
case MONO_CEE_BLT_UN:
ins->opcode += beqops_op_map [src1->type];
break;
case OP_CEQ:
ins->type = bin_comp_table [src1->type] [src2->type] ? STACK_I4: STACK_INV;
ins->opcode += ceqops_op_map [src1->type];
break;
case OP_CGT:
case OP_CGT_UN:
case OP_CLT:
case OP_CLT_UN:
ins->type = (bin_comp_table [src1->type] [src2->type] & 1) ? STACK_I4: STACK_INV;
ins->opcode += ceqops_op_map [src1->type];
break;
/* unops */
case MONO_CEE_NEG:
ins->type = neg_table [src1->type];
ins->opcode += unops_op_map [ins->type];
break;
case MONO_CEE_NOT:
if (src1->type >= STACK_I4 && src1->type <= STACK_PTR)
ins->type = src1->type;
else
ins->type = STACK_INV;
ins->opcode += unops_op_map [ins->type];
break;
case MONO_CEE_CONV_I1:
case MONO_CEE_CONV_I2:
case MONO_CEE_CONV_I4:
case MONO_CEE_CONV_U4:
ins->type = STACK_I4;
ins->opcode += unops_op_map [src1->type];
break;
case MONO_CEE_CONV_R_UN:
ins->type = STACK_R8;
switch (src1->type) {
case STACK_I4:
case STACK_PTR:
ins->opcode = OP_ICONV_TO_R_UN;
break;
case STACK_I8:
ins->opcode = OP_LCONV_TO_R_UN;
break;
case STACK_R4:
ins->opcode = OP_RCONV_TO_R8;
break;
case STACK_R8:
ins->opcode = OP_FMOVE;
break;
}
break;
case MONO_CEE_CONV_OVF_I1:
case MONO_CEE_CONV_OVF_U1:
case MONO_CEE_CONV_OVF_I2:
case MONO_CEE_CONV_OVF_U2:
case MONO_CEE_CONV_OVF_I4:
case MONO_CEE_CONV_OVF_U4:
ins->type = STACK_I4;
ins->opcode += ovf3ops_op_map [src1->type];
break;
case MONO_CEE_CONV_OVF_I_UN:
case MONO_CEE_CONV_OVF_U_UN:
ins->type = STACK_PTR;
ins->opcode += ovf2ops_op_map [src1->type];
break;
case MONO_CEE_CONV_OVF_I1_UN:
case MONO_CEE_CONV_OVF_I2_UN:
case MONO_CEE_CONV_OVF_I4_UN:
case MONO_CEE_CONV_OVF_U1_UN:
case MONO_CEE_CONV_OVF_U2_UN:
case MONO_CEE_CONV_OVF_U4_UN:
ins->type = STACK_I4;
ins->opcode += ovf2ops_op_map [src1->type];
break;
case MONO_CEE_CONV_U:
ins->type = STACK_PTR;
switch (src1->type) {
case STACK_I4:
ins->opcode = OP_ICONV_TO_U;
break;
case STACK_PTR:
case STACK_MP:
case STACK_OBJ:
#if TARGET_SIZEOF_VOID_P == 8
ins->opcode = OP_LCONV_TO_U;
#else
ins->opcode = OP_MOVE;
#endif
break;
case STACK_I8:
ins->opcode = OP_LCONV_TO_U;
break;
case STACK_R8:
if (TARGET_SIZEOF_VOID_P == 8)
ins->opcode = OP_FCONV_TO_U8;
else
ins->opcode = OP_FCONV_TO_U4;
break;
case STACK_R4:
if (TARGET_SIZEOF_VOID_P == 8)
ins->opcode = OP_RCONV_TO_U8;
else
ins->opcode = OP_RCONV_TO_U4;
break;
}
break;
case MONO_CEE_CONV_I8:
case MONO_CEE_CONV_U8:
ins->type = STACK_I8;
ins->opcode += unops_op_map [src1->type];
break;
case MONO_CEE_CONV_OVF_I8:
case MONO_CEE_CONV_OVF_U8:
ins->type = STACK_I8;
ins->opcode += ovf3ops_op_map [src1->type];
break;
case MONO_CEE_CONV_OVF_U8_UN:
case MONO_CEE_CONV_OVF_I8_UN:
ins->type = STACK_I8;
ins->opcode += ovf2ops_op_map [src1->type];
break;
case MONO_CEE_CONV_R4:
ins->type = cfg->r4_stack_type;
ins->opcode += unops_op_map [src1->type];
break;
case MONO_CEE_CONV_R8:
ins->type = STACK_R8;
ins->opcode += unops_op_map [src1->type];
break;
case OP_CKFINITE:
ins->type = STACK_R8;
break;
case MONO_CEE_CONV_U2:
case MONO_CEE_CONV_U1:
ins->type = STACK_I4;
ins->opcode += ovfops_op_map [src1->type];
break;
case MONO_CEE_CONV_I:
case MONO_CEE_CONV_OVF_I:
case MONO_CEE_CONV_OVF_U:
ins->type = STACK_PTR;
ins->opcode += ovfops_op_map [src1->type];
break;
case MONO_CEE_ADD_OVF:
case MONO_CEE_ADD_OVF_UN:
case MONO_CEE_MUL_OVF:
case MONO_CEE_MUL_OVF_UN:
case MONO_CEE_SUB_OVF:
case MONO_CEE_SUB_OVF_UN:
ins->type = bin_num_table [src1->type] [src2->type];
ins->opcode += ovfops_op_map [src1->type];
if (ins->type == STACK_R8)
ins->type = STACK_INV;
break;
case OP_LOAD_MEMBASE:
ins->type = STACK_PTR;
break;
case OP_LOADI1_MEMBASE:
case OP_LOADU1_MEMBASE:
case OP_LOADI2_MEMBASE:
case OP_LOADU2_MEMBASE:
case OP_LOADI4_MEMBASE:
case OP_LOADU4_MEMBASE:
ins->type = STACK_PTR;
break;
case OP_LOADI8_MEMBASE:
ins->type = STACK_I8;
break;
case OP_LOADR4_MEMBASE:
ins->type = cfg->r4_stack_type;
break;
case OP_LOADR8_MEMBASE:
ins->type = STACK_R8;
break;
default:
g_error ("opcode 0x%04x not handled in type from op", ins->opcode);
break;
}
if (ins->type == STACK_MP) {
if (src1->type == STACK_MP)
ins->klass = src1->klass;
else
ins->klass = mono_defaults.object_class;
}
}
void
mini_type_from_op (MonoCompile *cfg, MonoInst *ins, MonoInst *src1, MonoInst *src2)
{
type_from_op (cfg, ins, src1, src2);
}
static MonoClass*
ldind_to_type (int op)
{
switch (op) {
case MONO_CEE_LDIND_I1: return mono_defaults.sbyte_class;
case MONO_CEE_LDIND_U1: return mono_defaults.byte_class;
case MONO_CEE_LDIND_I2: return mono_defaults.int16_class;
case MONO_CEE_LDIND_U2: return mono_defaults.uint16_class;
case MONO_CEE_LDIND_I4: return mono_defaults.int32_class;
case MONO_CEE_LDIND_U4: return mono_defaults.uint32_class;
case MONO_CEE_LDIND_I8: return mono_defaults.int64_class;
case MONO_CEE_LDIND_I: return mono_defaults.int_class;
case MONO_CEE_LDIND_R4: return mono_defaults.single_class;
case MONO_CEE_LDIND_R8: return mono_defaults.double_class;
case MONO_CEE_LDIND_REF:return mono_defaults.object_class; //FIXME we should try to return a more specific type
default: g_error ("Unknown ldind type %d", op);
}
}
static MonoClass*
stind_to_type (int op)
{
switch (op) {
case MONO_CEE_STIND_I1: return mono_defaults.sbyte_class;
case MONO_CEE_STIND_I2: return mono_defaults.int16_class;
case MONO_CEE_STIND_I4: return mono_defaults.int32_class;
case MONO_CEE_STIND_I8: return mono_defaults.int64_class;
case MONO_CEE_STIND_I: return mono_defaults.int_class;
case MONO_CEE_STIND_R4: return mono_defaults.single_class;
case MONO_CEE_STIND_R8: return mono_defaults.double_class;
case MONO_CEE_STIND_REF: return mono_defaults.object_class;
default: g_error ("Unknown stind type %d", op);
}
}
#if 0
static const char
param_table [STACK_MAX] [STACK_MAX] = {
{0},
};
static int
check_values_to_signature (MonoInst *args, MonoType *this_ins, MonoMethodSignature *sig)
{
int i;
if (sig->hasthis) {
switch (args->type) {
case STACK_I4:
case STACK_I8:
case STACK_R8:
case STACK_VTYPE:
case STACK_INV:
return 0;
}
args++;
}
for (i = 0; i < sig->param_count; ++i) {
switch (args [i].type) {
case STACK_INV:
return 0;
case STACK_MP:
			if (!m_type_is_byref (sig->params [i]))
return 0;
continue;
case STACK_OBJ:
if (m_type_is_byref (sig->params [i]))
return 0;
			switch (sig->params [i]->type) {
case MONO_TYPE_CLASS:
case MONO_TYPE_STRING:
case MONO_TYPE_OBJECT:
case MONO_TYPE_SZARRAY:
case MONO_TYPE_ARRAY:
break;
default:
return 0;
}
continue;
case STACK_R8:
if (m_type_is_byref (sig->params [i]))
return 0;
if (sig->params [i]->type != MONO_TYPE_R4 && sig->params [i]->type != MONO_TYPE_R8)
return 0;
continue;
case STACK_PTR:
case STACK_I4:
case STACK_I8:
case STACK_VTYPE:
break;
}
/*if (!param_table [args [i].type] [sig->params [i]->type])
return 0;*/
}
return 1;
}
#endif
/*
* The got_var contains the address of the Global Offset Table when AOT
* compiling.
*/
MonoInst *
mono_get_got_var (MonoCompile *cfg)
{
if (!cfg->compile_aot || !cfg->backend->need_got_var || cfg->llvm_only)
return NULL;
if (!cfg->got_var) {
cfg->got_var = mono_compile_create_var (cfg, mono_get_int_type (), OP_LOCAL);
}
return cfg->got_var;
}
static void
mono_create_rgctx_var (MonoCompile *cfg)
{
if (!cfg->rgctx_var) {
cfg->rgctx_var = mono_compile_create_var (cfg, mono_get_int_type (), OP_LOCAL);
/* force the var to be stack allocated */
if (!cfg->llvm_only)
cfg->rgctx_var->flags |= MONO_INST_VOLATILE;
}
}
static MonoInst *
mono_get_mrgctx_var (MonoCompile *cfg)
{
g_assert (cfg->gshared);
mono_create_rgctx_var (cfg);
return cfg->rgctx_var;
}
static MonoInst *
mono_get_vtable_var (MonoCompile *cfg)
{
g_assert (cfg->gshared);
/* The mrgctx and the vtable are stored in the same var */
mono_create_rgctx_var (cfg);
return cfg->rgctx_var;
}
static MonoType*
type_from_stack_type (MonoInst *ins) {
switch (ins->type) {
case STACK_I4: return mono_get_int32_type ();
case STACK_I8: return m_class_get_byval_arg (mono_defaults.int64_class);
case STACK_PTR: return mono_get_int_type ();
case STACK_R4: return m_class_get_byval_arg (mono_defaults.single_class);
case STACK_R8: return m_class_get_byval_arg (mono_defaults.double_class);
case STACK_MP:
return m_class_get_this_arg (ins->klass);
case STACK_OBJ: return mono_get_object_type ();
case STACK_VTYPE: return m_class_get_byval_arg (ins->klass);
default:
g_error ("stack type %d to monotype not handled\n", ins->type);
}
return NULL;
}
MonoStackType
mini_type_to_stack_type (MonoCompile *cfg, MonoType *t)
{
t = mini_type_get_underlying_type (t);
switch (t->type) {
case MONO_TYPE_I1:
case MONO_TYPE_U1:
case MONO_TYPE_I2:
case MONO_TYPE_U2:
case MONO_TYPE_I4:
case MONO_TYPE_U4:
return STACK_I4;
case MONO_TYPE_I:
case MONO_TYPE_U:
case MONO_TYPE_PTR:
case MONO_TYPE_FNPTR:
return STACK_PTR;
case MONO_TYPE_CLASS:
case MONO_TYPE_STRING:
case MONO_TYPE_OBJECT:
case MONO_TYPE_SZARRAY:
case MONO_TYPE_ARRAY:
return STACK_OBJ;
case MONO_TYPE_I8:
case MONO_TYPE_U8:
return STACK_I8;
case MONO_TYPE_R4:
return (MonoStackType)cfg->r4_stack_type;
case MONO_TYPE_R8:
return STACK_R8;
case MONO_TYPE_VALUETYPE:
case MONO_TYPE_TYPEDBYREF:
return STACK_VTYPE;
case MONO_TYPE_GENERICINST:
if (mono_type_generic_inst_is_valuetype (t))
return STACK_VTYPE;
else
return STACK_OBJ;
break;
default:
g_assert_not_reached ();
}
return (MonoStackType)-1;
}
static MonoClass*
array_access_to_klass (int opcode)
{
switch (opcode) {
case MONO_CEE_LDELEM_U1:
return mono_defaults.byte_class;
case MONO_CEE_LDELEM_U2:
return mono_defaults.uint16_class;
case MONO_CEE_LDELEM_I:
case MONO_CEE_STELEM_I:
return mono_defaults.int_class;
case MONO_CEE_LDELEM_I1:
case MONO_CEE_STELEM_I1:
return mono_defaults.sbyte_class;
case MONO_CEE_LDELEM_I2:
case MONO_CEE_STELEM_I2:
return mono_defaults.int16_class;
case MONO_CEE_LDELEM_I4:
case MONO_CEE_STELEM_I4:
return mono_defaults.int32_class;
case MONO_CEE_LDELEM_U4:
return mono_defaults.uint32_class;
case MONO_CEE_LDELEM_I8:
case MONO_CEE_STELEM_I8:
return mono_defaults.int64_class;
case MONO_CEE_LDELEM_R4:
case MONO_CEE_STELEM_R4:
return mono_defaults.single_class;
case MONO_CEE_LDELEM_R8:
case MONO_CEE_STELEM_R8:
return mono_defaults.double_class;
case MONO_CEE_LDELEM_REF:
case MONO_CEE_STELEM_REF:
return mono_defaults.object_class;
default:
g_assert_not_reached ();
}
return NULL;
}
/*
* We try to share variables when possible
*/
static MonoInst *
mono_compile_get_interface_var (MonoCompile *cfg, int slot, MonoInst *ins)
{
MonoInst *res;
int pos, vnum;
MonoType *type;
type = type_from_stack_type (ins);
/* inlining can result in deeper stacks */
if (cfg->inline_depth || slot >= cfg->header->max_stack)
return mono_compile_create_var (cfg, type, OP_LOCAL);
pos = ins->type - 1 + slot * STACK_MAX;
switch (ins->type) {
case STACK_I4:
case STACK_I8:
case STACK_R8:
case STACK_PTR:
case STACK_MP:
case STACK_OBJ:
if ((vnum = cfg->intvars [pos]))
return cfg->varinfo [vnum];
res = mono_compile_create_var (cfg, type, OP_LOCAL);
cfg->intvars [pos] = res->inst_c0;
break;
default:
res = mono_compile_create_var (cfg, type, OP_LOCAL);
}
return res;
}
static void
mono_save_token_info (MonoCompile *cfg, MonoImage *image, guint32 token, gpointer key)
{
/*
* Don't use this if a generic_context is set, since that means AOT can't
* look up the method using just the image+token.
* table == 0 means this is a reference made from a wrapper.
*/
if (cfg->compile_aot && !cfg->generic_context && (mono_metadata_token_table (token) > 0)) {
MonoJumpInfoToken *jump_info_token = (MonoJumpInfoToken *)mono_mempool_alloc0 (cfg->mempool, sizeof (MonoJumpInfoToken));
jump_info_token->image = image;
jump_info_token->token = token;
g_hash_table_insert (cfg->token_info_hash, key, jump_info_token);
}
}
/*
* This function is called to handle items that are left on the evaluation stack
* at basic block boundaries. What happens is that we save the values to local variables
* and we reload them later when first entering the target basic block (with the
* handle_loaded_temps () function).
 * A single join point will use the same variables (stored in the array bb->out_stack or
 * bb->in_stack, if the basic block is before or after the join point).
*
* This function needs to be called _before_ emitting the last instruction of
* the bb (i.e. before emitting a branch).
* If the stack merge fails at a join point, cfg->unverifiable is set.
*/
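/*
 * Worked example (a sketch): if two predecessor bblocks each leave one value
 * on the stack before branching to a common join bblock, the first one
 * processed allocates a shared temp (bb->out_stack [0]), both predecessors
 * emit a temp store of their value into it, and the join bblock's in_stack
 * reloads it.
 */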
static void
handle_stack_args (MonoCompile *cfg, MonoInst **sp, int count)
{
int i, bindex;
MonoBasicBlock *bb = cfg->cbb;
MonoBasicBlock *outb;
MonoInst *inst, **locals;
gboolean found;
if (!count)
return;
if (cfg->verbose_level > 3)
printf ("%d item(s) on exit from B%d\n", count, bb->block_num);
if (!bb->out_scount) {
bb->out_scount = count;
//printf ("bblock %d has out:", bb->block_num);
found = FALSE;
for (i = 0; i < bb->out_count; ++i) {
outb = bb->out_bb [i];
/* exception handlers are linked, but they should not be considered for stack args */
if (outb->flags & BB_EXCEPTION_HANDLER)
continue;
//printf (" %d", outb->block_num);
if (outb->in_stack) {
found = TRUE;
bb->out_stack = outb->in_stack;
break;
}
}
//printf ("\n");
if (!found) {
bb->out_stack = (MonoInst **)mono_mempool_alloc (cfg->mempool, sizeof (MonoInst*) * count);
for (i = 0; i < count; ++i) {
/*
				 * try to reuse temps already allocated for this purpose, if they occupy the same
* stack slot and if they are of the same type.
* This won't cause conflicts since if 'local' is used to
* store one of the values in the in_stack of a bblock, then
* the same variable will be used for the same outgoing stack
* slot as well.
* This doesn't work when inlining methods, since the bblocks
* in the inlined methods do not inherit their in_stack from
* the bblock they are inlined to. See bug #58863 for an
* example.
*/
bb->out_stack [i] = mono_compile_get_interface_var (cfg, i, sp [i]);
}
}
}
for (i = 0; i < bb->out_count; ++i) {
outb = bb->out_bb [i];
/* exception handlers are linked, but they should not be considered for stack args */
if (outb->flags & BB_EXCEPTION_HANDLER)
continue;
if (outb->in_scount) {
if (outb->in_scount != bb->out_scount) {
cfg->unverifiable = TRUE;
return;
}
continue; /* check they are the same locals */
}
outb->in_scount = count;
outb->in_stack = bb->out_stack;
}
locals = bb->out_stack;
cfg->cbb = bb;
for (i = 0; i < count; ++i) {
sp [i] = convert_value (cfg, locals [i]->inst_vtype, sp [i]);
EMIT_NEW_TEMPSTORE (cfg, inst, locals [i]->inst_c0, sp [i]);
inst->cil_code = sp [i]->cil_code;
sp [i] = locals [i];
if (cfg->verbose_level > 3)
printf ("storing %d to temp %d\n", i, (int)locals [i]->inst_c0);
}
/*
* It is possible that the out bblocks already have in_stack assigned, and
* the in_stacks differ. In this case, we will store to all the different
* in_stacks.
*/
found = TRUE;
bindex = 0;
while (found) {
/* Find a bblock which has a different in_stack */
found = FALSE;
while (bindex < bb->out_count) {
outb = bb->out_bb [bindex];
/* exception handlers are linked, but they should not be considered for stack args */
if (outb->flags & BB_EXCEPTION_HANDLER) {
bindex++;
continue;
}
if (outb->in_stack != locals) {
for (i = 0; i < count; ++i) {
sp [i] = convert_value (cfg, outb->in_stack [i]->inst_vtype, sp [i]);
EMIT_NEW_TEMPSTORE (cfg, inst, outb->in_stack [i]->inst_c0, sp [i]);
inst->cil_code = sp [i]->cil_code;
sp [i] = locals [i];
if (cfg->verbose_level > 3)
printf ("storing %d to temp %d\n", i, (int)outb->in_stack [i]->inst_c0);
}
locals = outb->in_stack;
found = TRUE;
break;
}
bindex ++;
}
}
}
MonoInst*
mini_emit_runtime_constant (MonoCompile *cfg, MonoJumpInfoType patch_type, gpointer data)
{
MonoInst *ins;
if (cfg->compile_aot) {
MONO_DISABLE_WARNING (4306) // 'type cast': conversion from 'MonoJumpInfoType' to 'MonoInst *' of greater size
EMIT_NEW_AOTCONST (cfg, ins, patch_type, data);
MONO_RESTORE_WARNING
} else {
MonoJumpInfo ji;
gpointer target;
ERROR_DECL (error);
ji.type = patch_type;
ji.data.target = data;
target = mono_resolve_patch_target_ext (cfg->mem_manager, NULL, NULL, &ji, FALSE, error);
mono_error_assert_ok (error);
EMIT_NEW_PCONST (cfg, ins, target);
}
return ins;
}
static MonoInst*
mono_create_fast_tls_getter (MonoCompile *cfg, MonoTlsKey key)
{
int tls_offset = mono_tls_get_tls_offset (key);
if (cfg->compile_aot)
return NULL;
if (tls_offset != -1 && mono_arch_have_fast_tls ()) {
MonoInst *ins;
MONO_INST_NEW (cfg, ins, OP_TLS_GET);
ins->dreg = mono_alloc_preg (cfg);
ins->inst_offset = tls_offset;
return ins;
}
return NULL;
}
static MonoInst*
mono_create_tls_get (MonoCompile *cfg, MonoTlsKey key)
{
MonoInst *fast_tls = NULL;
if (!mini_debug_options.use_fallback_tls)
fast_tls = mono_create_fast_tls_getter (cfg, key);
if (fast_tls) {
MONO_ADD_INS (cfg->cbb, fast_tls);
return fast_tls;
}
const MonoJitICallId jit_icall_id = mono_get_tls_key_to_jit_icall_id (key);
if (cfg->compile_aot && !cfg->llvm_only) {
MonoInst *addr;
/*
* tls getters are critical pieces of code and we don't want to resolve them
* through the standard plt/tramp mechanism since we might expose ourselves
* to crashes and infinite recursions.
		 * Hence the NOCALL variant MONO_PATCH_INFO_JIT_ICALL_ADDR_NOCALL, i.e. is_plt_patch is FALSE.
*/
EMIT_NEW_AOTCONST (cfg, addr, MONO_PATCH_INFO_JIT_ICALL_ADDR_NOCALL, GUINT_TO_POINTER (jit_icall_id));
return mini_emit_calli (cfg, mono_icall_sig_ptr, NULL, addr, NULL, NULL);
} else {
return mono_emit_jit_icall_id (cfg, jit_icall_id, NULL);
}
}
/*
* emit_push_lmf:
*
* Emit IR to push the current LMF onto the LMF stack.
*/
static void
emit_push_lmf (MonoCompile *cfg)
{
/*
* Emit IR to push the LMF:
* lmf_addr = <lmf_addr from tls>
* lmf->lmf_addr = lmf_addr
* lmf->prev_lmf = *lmf_addr
* *lmf_addr = lmf
*/
MonoInst *ins, *lmf_ins;
if (!cfg->lmf_ir)
return;
int lmf_reg, prev_lmf_reg;
/*
* Store lmf_addr in a variable, so it can be allocated to a global register.
*/
if (!cfg->lmf_addr_var)
cfg->lmf_addr_var = mono_compile_create_var (cfg, mono_get_int_type (), OP_LOCAL);
if (!cfg->lmf_var) {
MonoInst *lmf_var = mono_compile_create_var (cfg, mono_get_int_type (), OP_LOCAL);
lmf_var->flags |= MONO_INST_VOLATILE;
lmf_var->flags |= MONO_INST_LMF;
cfg->lmf_var = lmf_var;
}
lmf_ins = mono_create_tls_get (cfg, TLS_KEY_LMF_ADDR);
g_assert (lmf_ins);
lmf_ins->dreg = cfg->lmf_addr_var->dreg;
EMIT_NEW_VARLOADA (cfg, ins, cfg->lmf_var, NULL);
lmf_reg = ins->dreg;
prev_lmf_reg = alloc_preg (cfg);
/* Save previous_lmf */
EMIT_NEW_LOAD_MEMBASE (cfg, ins, OP_LOAD_MEMBASE, prev_lmf_reg, cfg->lmf_addr_var->dreg, 0);
if (cfg->deopt)
/* Mark this as an LMFExt */
EMIT_NEW_BIALU_IMM (cfg, ins, OP_POR_IMM, prev_lmf_reg, prev_lmf_reg, 2);
EMIT_NEW_STORE_MEMBASE (cfg, ins, OP_STORE_MEMBASE_REG, lmf_reg, MONO_STRUCT_OFFSET (MonoLMF, previous_lmf), prev_lmf_reg);
/* Set new lmf */
EMIT_NEW_STORE_MEMBASE (cfg, ins, OP_STORE_MEMBASE_REG, cfg->lmf_addr_var->dreg, 0, lmf_reg);
}
/*
* emit_pop_lmf:
*
* Emit IR to pop the current LMF from the LMF stack.
*/
static void
emit_pop_lmf (MonoCompile *cfg)
{
int lmf_reg, lmf_addr_reg;
MonoInst *ins;
if (!cfg->lmf_ir)
return;
EMIT_NEW_VARLOADA (cfg, ins, cfg->lmf_var, NULL);
lmf_reg = ins->dreg;
int prev_lmf_reg;
/*
* Emit IR to pop the LMF:
* *(lmf->lmf_addr) = lmf->prev_lmf
*/
/* This could be called before emit_push_lmf () */
if (!cfg->lmf_addr_var)
cfg->lmf_addr_var = mono_compile_create_var (cfg, mono_get_int_type (), OP_LOCAL);
lmf_addr_reg = cfg->lmf_addr_var->dreg;
prev_lmf_reg = alloc_preg (cfg);
EMIT_NEW_LOAD_MEMBASE (cfg, ins, OP_LOAD_MEMBASE, prev_lmf_reg, lmf_reg, MONO_STRUCT_OFFSET (MonoLMF, previous_lmf));
if (cfg->deopt)
/* Clear out the bit set by push_lmf () to mark this as LMFExt */
EMIT_NEW_BIALU_IMM (cfg, ins, OP_PXOR_IMM, prev_lmf_reg, prev_lmf_reg, 2);
EMIT_NEW_STORE_MEMBASE (cfg, ins, OP_STORE_MEMBASE_REG, lmf_addr_reg, 0, prev_lmf_reg);
}
/*
* target_type_is_incompatible:
* @cfg: MonoCompile context
*
* Check that the item @arg on the evaluation stack can be stored
* in the target type (can be a local, or field, etc).
* The cfg arg can be used to check if we need verification or just
* validity checks.
*
* Returns: non-0 value if arg can't be stored on a target.
*/
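/*
 * Example (a sketch): storing into an int32 target requires the stack item
 * to be STACK_I4 or STACK_PTR (see the MONO_TYPE_I4 case below); a STACK_I8
 * or STACK_R8 item makes this return 1, i.e. incompatible.
 */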
static int
target_type_is_incompatible (MonoCompile *cfg, MonoType *target, MonoInst *arg)
{
MonoType *simple_type;
MonoClass *klass;
if (m_type_is_byref (target)) {
/* FIXME: check that the pointed to types match */
if (arg->type == STACK_MP) {
/* This is needed to handle gshared types + ldaddr. We lower the types so we can handle enums and other typedef-like types. */
MonoClass *target_class_lowered = mono_class_from_mono_type_internal (mini_get_underlying_type (m_class_get_byval_arg (mono_class_from_mono_type_internal (target))));
MonoClass *source_class_lowered = mono_class_from_mono_type_internal (mini_get_underlying_type (m_class_get_byval_arg (arg->klass)));
/* if the target is native int& or X* or same type */
if (target->type == MONO_TYPE_I || target->type == MONO_TYPE_PTR || target_class_lowered == source_class_lowered)
return 0;
/* Both are primitive type byrefs and the source points to a larger type that the destination */
if (MONO_TYPE_IS_PRIMITIVE_SCALAR (m_class_get_byval_arg (target_class_lowered)) && MONO_TYPE_IS_PRIMITIVE_SCALAR (m_class_get_byval_arg (source_class_lowered)) &&
mono_class_instance_size (target_class_lowered) <= mono_class_instance_size (source_class_lowered))
return 0;
return 1;
}
if (arg->type == STACK_PTR)
return 0;
return 1;
}
simple_type = mini_get_underlying_type (target);
switch (simple_type->type) {
case MONO_TYPE_VOID:
return 1;
case MONO_TYPE_I1:
case MONO_TYPE_U1:
case MONO_TYPE_I2:
case MONO_TYPE_U2:
case MONO_TYPE_I4:
case MONO_TYPE_U4:
if (arg->type != STACK_I4 && arg->type != STACK_PTR)
return 1;
return 0;
case MONO_TYPE_PTR:
/* STACK_MP is needed when setting pinned locals */
if (arg->type != STACK_I4 && arg->type != STACK_PTR && arg->type != STACK_MP)
#if SIZEOF_VOID_P == 8
if (arg->type != STACK_I8)
#endif
return 1;
return 0;
case MONO_TYPE_I:
case MONO_TYPE_U:
case MONO_TYPE_FNPTR:
/*
	 * Some opcodes like ldloca return 'transient pointers' which can be stored
	 * in native int. (#688008).
*/
if (arg->type != STACK_I4 && arg->type != STACK_PTR && arg->type != STACK_MP)
return 1;
return 0;
case MONO_TYPE_CLASS:
case MONO_TYPE_STRING:
case MONO_TYPE_OBJECT:
case MONO_TYPE_SZARRAY:
case MONO_TYPE_ARRAY:
if (arg->type != STACK_OBJ)
return 1;
/* FIXME: check type compatibility */
return 0;
case MONO_TYPE_I8:
case MONO_TYPE_U8:
if (arg->type != STACK_I8)
#if SIZEOF_VOID_P == 8
if (arg->type != STACK_PTR)
#endif
return 1;
return 0;
case MONO_TYPE_R4:
if (arg->type != cfg->r4_stack_type)
return 1;
return 0;
case MONO_TYPE_R8:
if (arg->type != STACK_R8)
return 1;
return 0;
case MONO_TYPE_VALUETYPE:
if (arg->type != STACK_VTYPE)
return 1;
klass = mono_class_from_mono_type_internal (simple_type);
if (klass != arg->klass)
return 1;
return 0;
case MONO_TYPE_TYPEDBYREF:
if (arg->type != STACK_VTYPE)
return 1;
klass = mono_class_from_mono_type_internal (simple_type);
if (klass != arg->klass)
return 1;
return 0;
case MONO_TYPE_GENERICINST:
if (mono_type_generic_inst_is_valuetype (simple_type)) {
MonoClass *target_class;
if (arg->type != STACK_VTYPE)
return 1;
klass = mono_class_from_mono_type_internal (simple_type);
target_class = mono_class_from_mono_type_internal (target);
			/* The second case is needed when doing partial sharing */
if (klass != arg->klass && target_class != arg->klass && target_class != mono_class_from_mono_type_internal (mini_get_underlying_type (m_class_get_byval_arg (arg->klass))))
return 1;
return 0;
} else {
if (arg->type != STACK_OBJ)
return 1;
/* FIXME: check type compatibility */
return 0;
}
case MONO_TYPE_VAR:
case MONO_TYPE_MVAR:
g_assert (cfg->gshared);
if (mini_type_var_is_vt (simple_type)) {
if (arg->type != STACK_VTYPE)
return 1;
} else {
if (arg->type != STACK_OBJ)
return 1;
}
return 0;
default:
g_error ("unknown type 0x%02x in target_type_is_incompatible", simple_type->type);
}
return 1;
}
/*
* convert_value:
*
* Emit some implicit conversions which are not part of the .net spec, but are allowed by MS.NET.
*/
static MonoInst*
convert_value (MonoCompile *cfg, MonoType *type, MonoInst *ins)
{
if (!cfg->r4fp)
return ins;
type = mini_get_underlying_type (type);
switch (type->type) {
case MONO_TYPE_R4:
if (ins->type == STACK_R8) {
int dreg = alloc_freg (cfg);
MonoInst *conv;
EMIT_NEW_UNALU (cfg, conv, OP_FCONV_TO_R4, dreg, ins->dreg);
conv->type = STACK_R4;
return conv;
}
break;
case MONO_TYPE_R8:
if (ins->type == STACK_R4) {
int dreg = alloc_freg (cfg);
MonoInst *conv;
EMIT_NEW_UNALU (cfg, conv, OP_RCONV_TO_R8, dreg, ins->dreg);
conv->type = STACK_R8;
return conv;
}
break;
default:
break;
}
return ins;
}
/*
* Prepare arguments for passing to a function call.
* Return a non-zero value if the arguments can't be passed to the given
* signature.
* The type checks are not yet complete and some conversions may need
* casts on 32 or 64 bit architectures.
*
* FIXME: implement this using target_type_is_incompatible ()
*/
static gboolean
check_call_signature (MonoCompile *cfg, MonoMethodSignature *sig, MonoInst **args)
{
MonoType *simple_type;
int i;
if (sig->hasthis) {
if (args [0]->type != STACK_OBJ && args [0]->type != STACK_MP && args [0]->type != STACK_PTR)
return TRUE;
args++;
}
for (i = 0; i < sig->param_count; ++i) {
if (m_type_is_byref (sig->params [i])) {
if (args [i]->type != STACK_MP && args [i]->type != STACK_PTR)
return TRUE;
continue;
}
simple_type = mini_get_underlying_type (sig->params [i]);
handle_enum:
switch (simple_type->type) {
case MONO_TYPE_VOID:
return TRUE;
case MONO_TYPE_I1:
case MONO_TYPE_U1:
case MONO_TYPE_I2:
case MONO_TYPE_U2:
case MONO_TYPE_I4:
case MONO_TYPE_U4:
if (args [i]->type != STACK_I4 && args [i]->type != STACK_PTR)
return TRUE;
continue;
case MONO_TYPE_I:
case MONO_TYPE_U:
if (args [i]->type != STACK_I4 && args [i]->type != STACK_PTR && args [i]->type != STACK_MP && args [i]->type != STACK_OBJ)
return TRUE;
continue;
case MONO_TYPE_PTR:
case MONO_TYPE_FNPTR:
if (args [i]->type != STACK_I4 && !(SIZEOF_VOID_P == 8 && args [i]->type == STACK_I8) &&
args [i]->type != STACK_PTR && args [i]->type != STACK_MP && args [i]->type != STACK_OBJ)
return TRUE;
continue;
case MONO_TYPE_CLASS:
case MONO_TYPE_STRING:
case MONO_TYPE_OBJECT:
case MONO_TYPE_SZARRAY:
case MONO_TYPE_ARRAY:
if (args [i]->type != STACK_OBJ)
return TRUE;
continue;
case MONO_TYPE_I8:
case MONO_TYPE_U8:
if (args [i]->type != STACK_I8 &&
!(SIZEOF_VOID_P == 8 && (args [i]->type == STACK_I4 || args [i]->type == STACK_PTR)))
return TRUE;
continue;
case MONO_TYPE_R4:
if (args [i]->type != cfg->r4_stack_type)
return TRUE;
continue;
case MONO_TYPE_R8:
if (args [i]->type != STACK_R8)
return TRUE;
continue;
case MONO_TYPE_VALUETYPE:
if (m_class_is_enumtype (simple_type->data.klass)) {
simple_type = mono_class_enum_basetype_internal (simple_type->data.klass);
goto handle_enum;
}
if (args [i]->type != STACK_VTYPE)
return TRUE;
continue;
case MONO_TYPE_TYPEDBYREF:
if (args [i]->type != STACK_VTYPE)
return TRUE;
continue;
case MONO_TYPE_GENERICINST:
simple_type = m_class_get_byval_arg (simple_type->data.generic_class->container_class);
goto handle_enum;
case MONO_TYPE_VAR:
case MONO_TYPE_MVAR:
/* gsharedvt */
if (args [i]->type != STACK_VTYPE)
return TRUE;
continue;
default:
g_error ("unknown type 0x%02x in check_call_signature",
simple_type->type);
}
}
return FALSE;
}
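/*
 * mono_patch_info_new:
 *
 * Allocate a new patch info from MP and initialize it with IP, TYPE and TARGET.
 */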
MonoJumpInfo *
mono_patch_info_new (MonoMemPool *mp, int ip, MonoJumpInfoType type, gconstpointer target)
{
MonoJumpInfo *ji = (MonoJumpInfo *)mono_mempool_alloc (mp, sizeof (MonoJumpInfo));
ji->ip.i = ip;
ji->type = type;
ji->data.target = target;
return ji;
}
int
mini_class_check_context_used (MonoCompile *cfg, MonoClass *klass)
{
if (cfg->gshared)
return mono_class_check_context_used (klass);
else
return 0;
}
int
mini_method_check_context_used (MonoCompile *cfg, MonoMethod *method)
{
if (cfg->gshared)
return mono_method_check_context_used (method);
else
return 0;
}
/*
* check_method_sharing:
*
 * Check whether the vtable or an mrgctx needs to be passed when calling CMETHOD.
*/
static void
check_method_sharing (MonoCompile *cfg, MonoMethod *cmethod, gboolean *out_pass_vtable, gboolean *out_pass_mrgctx)
{
gboolean pass_vtable = FALSE;
gboolean pass_mrgctx = FALSE;
if (((cmethod->flags & METHOD_ATTRIBUTE_STATIC) || m_class_is_valuetype (cmethod->klass)) &&
(mono_class_is_ginst (cmethod->klass) || mono_class_is_gtd (cmethod->klass))) {
gboolean sharable = FALSE;
if (mono_method_is_generic_sharable_full (cmethod, TRUE, TRUE, TRUE))
sharable = TRUE;
/*
* Pass vtable iff target method might
* be shared, which means that sharing
* is enabled for its class and its
* context is sharable (and it's not a
* generic method).
*/
if (sharable && !(mini_method_get_context (cmethod) && mini_method_get_context (cmethod)->method_inst))
pass_vtable = TRUE;
}
if (mini_method_needs_mrgctx (cmethod)) {
if (mini_method_is_default_method (cmethod))
pass_vtable = FALSE;
else
g_assert (!pass_vtable);
if (mono_method_is_generic_sharable_full (cmethod, TRUE, TRUE, TRUE)) {
pass_mrgctx = TRUE;
} else {
if (cfg->gsharedvt && mini_is_gsharedvt_signature (mono_method_signature_internal (cmethod)))
pass_mrgctx = TRUE;
}
}
if (out_pass_vtable)
*out_pass_vtable = pass_vtable;
if (out_pass_mrgctx)
*out_pass_mrgctx = pass_mrgctx;
}
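/*
 * direct_icalls_enabled:
 *
 * Return whether calls to the C function implementing an icall can be made directly,
 * without going through the managed wrapper. This is disabled when sequence points
 * are generated, since the wrapper is needed for stack walks while debugging.
 */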
static gboolean
direct_icalls_enabled (MonoCompile *cfg, MonoMethod *method)
{
if (cfg->gen_sdb_seq_points || cfg->disable_direct_icalls)
return FALSE;
if (method && cfg->compile_aot && mono_aot_direct_icalls_enabled_for_method (cfg, method))
return TRUE;
/* LLVM on amd64 can't handle calls to non-32 bit addresses */
#ifdef TARGET_AMD64
if (cfg->compile_llvm && !cfg->llvm_only)
return FALSE;
#endif
return FALSE;
}
MonoInst*
mono_emit_jit_icall_by_info (MonoCompile *cfg, int il_offset, MonoJitICallInfo *info, MonoInst **args)
{
/*
* Call the jit icall without a wrapper if possible.
* The wrapper is needed to be able to do stack walks for asynchronously suspended
* threads when debugging.
*/
if (direct_icalls_enabled (cfg, NULL)) {
int costs;
if (!info->wrapper_method) {
info->wrapper_method = mono_marshal_get_icall_wrapper (info, TRUE);
mono_memory_barrier ();
}
/*
* Inline the wrapper method, which is basically a call to the C icall, and
* an exception check.
*/
costs = inline_method (cfg, info->wrapper_method, NULL,
args, NULL, il_offset, TRUE, NULL);
g_assert (costs > 0);
g_assert (!MONO_TYPE_IS_VOID (info->sig->ret));
return args [0];
}
return mono_emit_jit_icall_id (cfg, mono_jit_icall_info_id (info), args);
}
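/*
 * mono_emit_widen_call_res:
 *
 * Sign/zero extend the result INS of a call with signature FSIG if its return type
 * is a small integer, since native code might leave the upper bits of the return
 * register uninitialized.
 */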
static MonoInst*
mono_emit_widen_call_res (MonoCompile *cfg, MonoInst *ins, MonoMethodSignature *fsig)
{
if (!MONO_TYPE_IS_VOID (fsig->ret)) {
if ((fsig->pinvoke || LLVM_ENABLED) && !m_type_is_byref (fsig->ret)) {
int widen_op = -1;
/*
* Native code might return non register sized integers
* without initializing the upper bits.
*/
switch (mono_type_to_load_membase (cfg, fsig->ret)) {
case OP_LOADI1_MEMBASE:
widen_op = OP_ICONV_TO_I1;
break;
case OP_LOADU1_MEMBASE:
widen_op = OP_ICONV_TO_U1;
break;
case OP_LOADI2_MEMBASE:
widen_op = OP_ICONV_TO_I2;
break;
case OP_LOADU2_MEMBASE:
widen_op = OP_ICONV_TO_U2;
break;
default:
break;
}
if (widen_op != -1) {
int dreg = alloc_preg (cfg);
MonoInst *widen;
EMIT_NEW_UNALU (cfg, widen, widen_op, dreg, ins->dreg);
widen->type = ins->type;
ins = widen;
}
}
}
return ins;
}
static MonoInst*
emit_get_rgctx_method (MonoCompile *cfg, int context_used,
MonoMethod *cmethod, MonoRgctxInfoType rgctx_type);
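/*
 * emit_method_access_failure:
 *
 * Emit IR to throw a MethodAccessException for a call from CALLER to CALLEE.
 */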
static void
emit_method_access_failure (MonoCompile *cfg, MonoMethod *caller, MonoMethod *callee)
{
MonoInst *args [2];
args [0] = emit_get_rgctx_method (cfg, mono_method_check_context_used (caller), caller, MONO_RGCTX_INFO_METHOD);
args [1] = emit_get_rgctx_method (cfg, mono_method_check_context_used (callee), callee, MONO_RGCTX_INFO_METHOD);
mono_emit_jit_icall (cfg, mono_throw_method_access, args);
}
static void
emit_bad_image_failure (MonoCompile *cfg, MonoMethod *caller, MonoMethod *callee)
{
mono_emit_jit_icall (cfg, mono_throw_bad_image, NULL);
}
static void
emit_not_supported_failure (MonoCompile *cfg)
{
mono_emit_jit_icall (cfg, mono_throw_not_supported, NULL);
}
static void
emit_invalid_program_with_msg (MonoCompile *cfg, MonoError *error_msg, MonoMethod *caller, MonoMethod *callee)
{
g_assert (!is_ok (error_msg));
char *str = mono_mem_manager_strdup (cfg->mem_manager, mono_error_get_message (error_msg));
MonoInst *iargs[1];
if (cfg->compile_aot)
EMIT_NEW_LDSTRLITCONST (cfg, iargs [0], str);
else
EMIT_NEW_PCONST (cfg, iargs [0], str);
mono_emit_jit_icall (cfg, mono_throw_invalid_program, iargs);
}
// FIXME Consolidate the multiple functions named get_method_nofail.
static MonoMethod*
get_method_nofail (MonoClass *klass, const char *method_name, int num_params, int flags)
{
MonoMethod *method;
ERROR_DECL (error);
method = mono_class_get_method_from_name_checked (klass, method_name, num_params, flags, error);
mono_error_assert_ok (error);
g_assertf (method, "Could not lookup method %s in %s", method_name, m_class_get_name (klass));
return method;
}
MonoMethod*
mini_get_memcpy_method (void)
{
static MonoMethod *memcpy_method = NULL;
if (!memcpy_method) {
memcpy_method = get_method_nofail (mono_defaults.string_class, "memcpy", 3, 0);
if (!memcpy_method)
g_error ("Old corlib found. Install a new one");
}
return memcpy_method;
}
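/*
 * mini_emit_storing_write_barrier:
 *
 * Emit a store of VALUE into PTR together with the write barrier needed by the GC,
 * preceded by a release barrier unless a weak memory model is requested.
 */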
MonoInst*
mini_emit_storing_write_barrier (MonoCompile *cfg, MonoInst *ptr, MonoInst *value)
{
MonoInst *store;
/*
* Add a release memory barrier so the object contents are flushed
* to memory before storing the reference into another object.
*/
if (!mini_debug_options.weak_memory_model)
mini_emit_memory_barrier (cfg, MONO_MEMORY_BARRIER_REL);
EMIT_NEW_STORE_MEMBASE (cfg, store, OP_STORE_MEMBASE_REG, ptr->dreg, 0, value->dreg);
mini_emit_write_barrier (cfg, ptr, value);
return store;
}
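/*
 * mini_emit_write_barrier:
 *
 * Emit the GC write barrier for a store of VALUE into PTR, either as a card table
 * update or as a call to the generic write barrier method.
 */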
void
mini_emit_write_barrier (MonoCompile *cfg, MonoInst *ptr, MonoInst *value)
{
int card_table_shift_bits;
target_mgreg_t card_table_mask;
guint8 *card_table;
MonoInst *dummy_use;
int nursery_shift_bits;
size_t nursery_size;
if (!cfg->gen_write_barriers)
return;
//method->wrapper_type != MONO_WRAPPER_WRITE_BARRIER && !MONO_INS_IS_PCONST_NULL (sp [1])
card_table = mono_gc_get_target_card_table (&card_table_shift_bits, &card_table_mask);
mono_gc_get_nursery (&nursery_shift_bits, &nursery_size);
if (cfg->backend->have_card_table_wb && !cfg->compile_aot && card_table && nursery_shift_bits > 0 && !COMPILE_LLVM (cfg)) {
MonoInst *wbarrier;
MONO_INST_NEW (cfg, wbarrier, OP_CARD_TABLE_WBARRIER);
wbarrier->sreg1 = ptr->dreg;
wbarrier->sreg2 = value->dreg;
MONO_ADD_INS (cfg->cbb, wbarrier);
} else if (card_table) {
int offset_reg = alloc_preg (cfg);
int card_reg;
MonoInst *ins;
/*
 * We emit a fast, lightweight write barrier. It always marks cards, as in the concurrent
 * collector case, so, for the serial collector, it might slightly slow down nursery
* collections. We also expect that the host system and the target system have the same card
* table configuration, which is the case if they have the same pointer size.
*/
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_SHR_UN_IMM, offset_reg, ptr->dreg, card_table_shift_bits);
if (card_table_mask)
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_PAND_IMM, offset_reg, offset_reg, card_table_mask);
/* We can't use PADD_IMM since the card table might end up at a high address and amd64
 * doesn't support immediates larger than 32 bits.
 */
ins = mini_emit_runtime_constant (cfg, MONO_PATCH_INFO_GC_CARD_TABLE_ADDR, NULL);
card_reg = ins->dreg;
MONO_EMIT_NEW_BIALU (cfg, OP_PADD, offset_reg, offset_reg, card_reg);
MONO_EMIT_NEW_STORE_MEMBASE_IMM (cfg, OP_STOREI1_MEMBASE_IMM, offset_reg, 0, 1);
} else {
MonoMethod *write_barrier = mono_gc_get_write_barrier ();
mono_emit_method_call (cfg, write_barrier, &ptr, NULL);
}
EMIT_NEW_DUMMY_USE (cfg, dummy_use, value);
}
MonoMethod*
mini_get_memset_method (void)
{
static MonoMethod *memset_method = NULL;
if (!memset_method) {
memset_method = get_method_nofail (mono_defaults.string_class, "memset", 3, 0);
if (!memset_method)
g_error ("Old corlib found. Install a new one");
}
return memset_method;
}
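/*
 * mini_emit_initobj:
 *
 * Emit IR to zero-initialize the valuetype of class KLASS stored at the address DEST.
 */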
void
mini_emit_initobj (MonoCompile *cfg, MonoInst *dest, const guchar *ip, MonoClass *klass)
{
MonoInst *iargs [3];
int n;
guint32 align;
MonoMethod *memset_method;
MonoInst *size_ins = NULL;
MonoInst *bzero_ins = NULL;
static MonoMethod *bzero_method;
/* FIXME: Optimize this for the case when dest is an LDADDR */
mono_class_init_internal (klass);
if (mini_is_gsharedvt_klass (klass)) {
size_ins = mini_emit_get_gsharedvt_info_klass (cfg, klass, MONO_RGCTX_INFO_VALUE_SIZE);
bzero_ins = mini_emit_get_gsharedvt_info_klass (cfg, klass, MONO_RGCTX_INFO_BZERO);
if (!bzero_method)
bzero_method = get_method_nofail (mono_defaults.string_class, "bzero_aligned_1", 2, 0);
g_assert (bzero_method);
iargs [0] = dest;
iargs [1] = size_ins;
mini_emit_calli (cfg, mono_method_signature_internal (bzero_method), iargs, bzero_ins, NULL, NULL);
return;
}
klass = mono_class_from_mono_type_internal (mini_get_underlying_type (m_class_get_byval_arg (klass)));
n = mono_class_value_size (klass, &align);
if (n <= TARGET_SIZEOF_VOID_P * 8) {
mini_emit_memset (cfg, dest->dreg, 0, n, 0, align);
}
else {
memset_method = mini_get_memset_method ();
iargs [0] = dest;
EMIT_NEW_ICONST (cfg, iargs [1], 0);
EMIT_NEW_ICONST (cfg, iargs [2], n);
mono_emit_method_call (cfg, memset_method, iargs, NULL);
}
}
static gboolean
context_used_is_mrgctx (MonoCompile *cfg, int context_used)
{
/* gshared dim methods use an mrgctx */
if (mini_method_is_default_method (cfg->method))
return context_used != 0;
return context_used & MONO_GENERIC_CONTEXT_USED_METHOD;
}
/*
* emit_get_rgctx:
*
* Emit IR to return either the vtable or the mrgctx.
*/
static MonoInst*
emit_get_rgctx (MonoCompile *cfg, int context_used)
{
MonoMethod *method = cfg->method;
g_assert (cfg->gshared);
/* Data whose context contains method type vars is stored in the mrgctx */
if (context_used_is_mrgctx (cfg, context_used)) {
MonoInst *mrgctx_loc, *mrgctx_var;
g_assert (cfg->rgctx_access == MONO_RGCTX_ACCESS_MRGCTX);
if (!mini_method_is_default_method (method))
g_assert (method->is_inflated && mono_method_get_context (method)->method_inst);
if (cfg->llvm_only) {
mrgctx_var = mono_get_mrgctx_var (cfg);
} else {
/* Volatile */
mrgctx_loc = mono_get_mrgctx_var (cfg);
g_assert (mrgctx_loc->flags & MONO_INST_VOLATILE);
EMIT_NEW_TEMPLOAD (cfg, mrgctx_var, mrgctx_loc->inst_c0);
}
return mrgctx_var;
}
/*
 * The rest of the entries are stored in vtable->runtime_generic_context, so we
 * have to return a vtable.
*/
if (cfg->rgctx_access == MONO_RGCTX_ACCESS_MRGCTX) {
MonoInst *mrgctx_loc, *mrgctx_var, *vtable_var;
int vtable_reg;
/* We are passed an mrgctx, return mrgctx->class_vtable */
if (cfg->llvm_only) {
mrgctx_var = mono_get_mrgctx_var (cfg);
} else {
mrgctx_loc = mono_get_mrgctx_var (cfg);
g_assert (mrgctx_loc->flags & MONO_INST_VOLATILE);
EMIT_NEW_TEMPLOAD (cfg, mrgctx_var, mrgctx_loc->inst_c0);
}
vtable_reg = alloc_preg (cfg);
EMIT_NEW_LOAD_MEMBASE (cfg, vtable_var, OP_LOAD_MEMBASE, vtable_reg, mrgctx_var->dreg, MONO_STRUCT_OFFSET (MonoMethodRuntimeGenericContext, class_vtable));
vtable_var->type = STACK_PTR;
return vtable_var;
} else if (cfg->rgctx_access == MONO_RGCTX_ACCESS_VTABLE) {
MonoInst *vtable_loc, *vtable_var;
/* We are passed a vtable, return it */
if (cfg->llvm_only) {
vtable_var = mono_get_vtable_var (cfg);
} else {
vtable_loc = mono_get_vtable_var (cfg);
g_assert (vtable_loc->flags & MONO_INST_VOLATILE);
EMIT_NEW_TEMPLOAD (cfg, vtable_var, vtable_loc->inst_c0);
}
vtable_var->type = STACK_PTR;
return vtable_var;
} else {
MonoInst *ins, *this_ins;
int vtable_reg;
/* We are passed a this pointer, return this->vtable */
EMIT_NEW_VARLOAD (cfg, this_ins, cfg->this_arg, mono_get_object_type ());
vtable_reg = alloc_preg (cfg);
EMIT_NEW_LOAD_MEMBASE (cfg, ins, OP_LOAD_MEMBASE, vtable_reg, this_ins->dreg, MONO_STRUCT_OFFSET (MonoObject, vtable));
return ins;
}
}
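/*
 * mono_patch_info_rgctx_entry_new:
 *
 * Allocate a new rgctx entry from MP describing the data (PATCH_TYPE, PATCH_DATA)
 * with type INFO_TYPE, stored either in the mrgctx of METHOD or in the rgctx of
 * its class.
 */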
static MonoJumpInfoRgctxEntry *
mono_patch_info_rgctx_entry_new (MonoMemPool *mp, MonoMethod *method, gboolean in_mrgctx, MonoJumpInfoType patch_type, gconstpointer patch_data, MonoRgctxInfoType info_type)
{
MonoJumpInfoRgctxEntry *res = (MonoJumpInfoRgctxEntry *)mono_mempool_alloc0 (mp, sizeof (MonoJumpInfoRgctxEntry));
if (in_mrgctx)
res->d.method = method;
else
res->d.klass = method->klass;
res->in_mrgctx = in_mrgctx;
res->data = (MonoJumpInfo *)mono_mempool_alloc0 (mp, sizeof (MonoJumpInfo));
res->data->type = patch_type;
res->data->data.target = patch_data;
res->info_type = info_type;
return res;
}
static MonoInst*
emit_get_gsharedvt_info (MonoCompile *cfg, gpointer data, MonoRgctxInfoType rgctx_type);
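/*
 * emit_rgctx_fetch_inline:
 *
 * Emit an inline version of the rgctx fetch: load the slot for ENTRY directly from
 * RGCTX on the fast path, falling back to the mono_fill_class/method_rgctx icalls
 * when the slot is not yet initialized.
 */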
static MonoInst*
emit_rgctx_fetch_inline (MonoCompile *cfg, MonoInst *rgctx, MonoJumpInfoRgctxEntry *entry)
{
MonoInst *call;
MonoInst *slot_ins;
EMIT_NEW_AOTCONST (cfg, slot_ins, MONO_PATCH_INFO_RGCTX_SLOT_INDEX, entry);
// Can't add basic blocks during interp entry mode
if (cfg->disable_inline_rgctx_fetch || cfg->interp_entry_only) {
MonoInst *args [2] = { rgctx, slot_ins };
if (entry->in_mrgctx)
call = mono_emit_jit_icall (cfg, mono_fill_method_rgctx, args);
else
call = mono_emit_jit_icall (cfg, mono_fill_class_rgctx, args);
return call;
}
MonoBasicBlock *slowpath_bb, *end_bb;
MonoInst *ins, *res;
int rgctx_reg, res_reg;
/*
* rgctx = vtable->runtime_generic_context;
* if (rgctx) {
* val = rgctx [slot + 1];
* if (val)
* return val;
* }
* <slowpath>
*/
NEW_BBLOCK (cfg, end_bb);
NEW_BBLOCK (cfg, slowpath_bb);
if (entry->in_mrgctx) {
rgctx_reg = rgctx->dreg;
} else {
rgctx_reg = alloc_preg (cfg);
MONO_EMIT_NEW_LOAD_MEMBASE (cfg, rgctx_reg, rgctx->dreg, MONO_STRUCT_OFFSET (MonoVTable, runtime_generic_context));
// FIXME: Avoid this check by allocating the table when the vtable is created etc.
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, rgctx_reg, 0);
MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_PBEQ, slowpath_bb);
}
int table_size = mono_class_rgctx_get_array_size (0, entry->in_mrgctx);
if (entry->in_mrgctx)
table_size -= MONO_SIZEOF_METHOD_RUNTIME_GENERIC_CONTEXT / TARGET_SIZEOF_VOID_P;
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, slot_ins->dreg, table_size - 1);
MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_PBGE, slowpath_bb);
int shifted_slot_reg = alloc_ireg (cfg);
EMIT_NEW_BIALU_IMM (cfg, ins, OP_ISHL_IMM, shifted_slot_reg, slot_ins->dreg, TARGET_SIZEOF_VOID_P == 8 ? 3 : 2);
int addr_reg = alloc_preg (cfg);
EMIT_NEW_UNALU (cfg, ins, OP_MOVE, addr_reg, rgctx_reg);
EMIT_NEW_BIALU (cfg, ins, OP_PADD, addr_reg, addr_reg, shifted_slot_reg);
int val_reg = alloc_preg (cfg);
MONO_EMIT_NEW_LOAD_MEMBASE (cfg, val_reg, addr_reg, TARGET_SIZEOF_VOID_P + (entry->in_mrgctx ? MONO_SIZEOF_METHOD_RUNTIME_GENERIC_CONTEXT : 0));
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, val_reg, 0);
MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_PBEQ, slowpath_bb);
res_reg = alloc_preg (cfg);
EMIT_NEW_UNALU (cfg, ins, OP_MOVE, res_reg, val_reg);
res = ins;
MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_BR, end_bb);
MONO_START_BB (cfg, slowpath_bb);
slowpath_bb->out_of_line = TRUE;
MonoInst *args[2] = { rgctx, slot_ins };
if (entry->in_mrgctx)
call = mono_emit_jit_icall (cfg, mono_fill_method_rgctx, args);
else
call = mono_emit_jit_icall (cfg, mono_fill_class_rgctx, args);
EMIT_NEW_UNALU (cfg, ins, OP_MOVE, res_reg, call->dreg);
MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_BR, end_bb);
MONO_START_BB (cfg, end_bb);
return res;
}
/*
* emit_rgctx_fetch:
*
* Emit IR to load the value of the rgctx entry ENTRY from the rgctx.
*/
static MonoInst*
emit_rgctx_fetch (MonoCompile *cfg, int context_used, MonoJumpInfoRgctxEntry *entry)
{
MonoInst *rgctx = emit_get_rgctx (cfg, context_used);
if (cfg->llvm_only)
return emit_rgctx_fetch_inline (cfg, rgctx, entry);
else
return mini_emit_abs_call (cfg, MONO_PATCH_INFO_RGCTX_FETCH, entry, mono_icall_sig_ptr_ptr, &rgctx);
}
/*
* mini_emit_get_rgctx_klass:
*
* Emit IR to load the property RGCTX_TYPE of KLASS. If context_used is 0, emit
* normal constants, else emit a load from the rgctx.
*/
MonoInst*
mini_emit_get_rgctx_klass (MonoCompile *cfg, int context_used,
MonoClass *klass, MonoRgctxInfoType rgctx_type)
{
if (!context_used) {
MonoInst *ins;
switch (rgctx_type) {
case MONO_RGCTX_INFO_KLASS:
EMIT_NEW_CLASSCONST (cfg, ins, klass);
return ins;
case MONO_RGCTX_INFO_VTABLE: {
MonoVTable *vtable = mono_class_vtable_checked (klass, cfg->error);
CHECK_CFG_ERROR;
EMIT_NEW_VTABLECONST (cfg, ins, vtable);
return ins;
}
default:
g_assert_not_reached ();
}
}
// It's cheaper to load these from the gsharedvt info struct
if (cfg->llvm_only && cfg->gsharedvt)
return mini_emit_get_gsharedvt_info_klass (cfg, klass, rgctx_type);
MonoJumpInfoRgctxEntry *entry = mono_patch_info_rgctx_entry_new (cfg->mempool, cfg->method, context_used_is_mrgctx (cfg, context_used), MONO_PATCH_INFO_CLASS, klass, rgctx_type);
return emit_rgctx_fetch (cfg, context_used, entry);
mono_error_exit:
return NULL;
}
static MonoInst*
emit_get_rgctx_sig (MonoCompile *cfg, int context_used,
MonoMethodSignature *sig, MonoRgctxInfoType rgctx_type)
{
MonoJumpInfoRgctxEntry *entry = mono_patch_info_rgctx_entry_new (cfg->mempool, cfg->method, context_used_is_mrgctx (cfg, context_used), MONO_PATCH_INFO_SIGNATURE, sig, rgctx_type);
return emit_rgctx_fetch (cfg, context_used, entry);
}
static MonoInst*
emit_get_rgctx_gsharedvt_call (MonoCompile *cfg, int context_used,
MonoMethodSignature *sig, MonoMethod *cmethod, MonoRgctxInfoType rgctx_type)
{
MonoJumpInfoGSharedVtCall *call_info;
MonoJumpInfoRgctxEntry *entry;
call_info = (MonoJumpInfoGSharedVtCall *)mono_mempool_alloc0 (cfg->mempool, sizeof (MonoJumpInfoGSharedVtCall));
call_info->sig = sig;
call_info->method = cmethod;
entry = mono_patch_info_rgctx_entry_new (cfg->mempool, cfg->method, context_used_is_mrgctx (cfg, context_used), MONO_PATCH_INFO_GSHAREDVT_CALL, call_info, rgctx_type);
return emit_rgctx_fetch (cfg, context_used, entry);
}
/*
* emit_get_rgctx_virt_method:
*
* Return data for method VIRT_METHOD for a receiver of type KLASS.
*/
static MonoInst*
emit_get_rgctx_virt_method (MonoCompile *cfg, int context_used,
MonoClass *klass, MonoMethod *virt_method, MonoRgctxInfoType rgctx_type)
{
MonoJumpInfoVirtMethod *info;
MonoJumpInfoRgctxEntry *entry;
if (context_used == -1)
context_used = mono_class_check_context_used (klass) | mono_method_check_context_used (virt_method);
info = (MonoJumpInfoVirtMethod *)mono_mempool_alloc0 (cfg->mempool, sizeof (MonoJumpInfoVirtMethod));
info->klass = klass;
info->method = virt_method;
entry = mono_patch_info_rgctx_entry_new (cfg->mempool, cfg->method, context_used_is_mrgctx (cfg, context_used), MONO_PATCH_INFO_VIRT_METHOD, info, rgctx_type);
return emit_rgctx_fetch (cfg, context_used, entry);
}
static MonoInst*
emit_get_rgctx_gsharedvt_method (MonoCompile *cfg, int context_used,
MonoMethod *cmethod, MonoGSharedVtMethodInfo *info)
{
MonoJumpInfoRgctxEntry *entry;
entry = mono_patch_info_rgctx_entry_new (cfg->mempool, cfg->method, context_used_is_mrgctx (cfg, context_used), MONO_PATCH_INFO_GSHAREDVT_METHOD, info, MONO_RGCTX_INFO_METHOD_GSHAREDVT_INFO);
return emit_rgctx_fetch (cfg, context_used, entry);
}
/*
* emit_get_rgctx_method:
*
* Emit IR to load the property RGCTX_TYPE of CMETHOD. If context_used is 0, emit
* normal constants, else emit a load from the rgctx.
*/
static MonoInst*
emit_get_rgctx_method (MonoCompile *cfg, int context_used,
MonoMethod *cmethod, MonoRgctxInfoType rgctx_type)
{
if (context_used == -1)
context_used = mono_method_check_context_used (cmethod);
if (!context_used) {
MonoInst *ins;
switch (rgctx_type) {
case MONO_RGCTX_INFO_METHOD:
EMIT_NEW_METHODCONST (cfg, ins, cmethod);
return ins;
case MONO_RGCTX_INFO_METHOD_RGCTX:
EMIT_NEW_METHOD_RGCTX_CONST (cfg, ins, cmethod);
return ins;
case MONO_RGCTX_INFO_METHOD_FTNDESC:
EMIT_NEW_AOTCONST (cfg, ins, MONO_PATCH_INFO_METHOD_FTNDESC, cmethod);
return ins;
case MONO_RGCTX_INFO_LLVMONLY_INTERP_ENTRY:
EMIT_NEW_AOTCONST (cfg, ins, MONO_PATCH_INFO_LLVMONLY_INTERP_ENTRY, cmethod);
return ins;
default:
g_assert_not_reached ();
}
} else {
// It's cheaper to load these from the gsharedvt info struct
if (cfg->llvm_only && cfg->gsharedvt)
return emit_get_gsharedvt_info (cfg, cmethod, rgctx_type);
MonoJumpInfoRgctxEntry *entry = mono_patch_info_rgctx_entry_new (cfg->mempool, cfg->method, context_used_is_mrgctx (cfg, context_used), MONO_PATCH_INFO_METHODCONST, cmethod, rgctx_type);
return emit_rgctx_fetch (cfg, context_used, entry);
}
}
static MonoInst*
emit_get_rgctx_field (MonoCompile *cfg, int context_used,
MonoClassField *field, MonoRgctxInfoType rgctx_type)
{
// It's cheaper to load these from the gsharedvt info struct
if (cfg->llvm_only && cfg->gsharedvt)
return emit_get_gsharedvt_info (cfg, field, rgctx_type);
MonoJumpInfoRgctxEntry *entry = mono_patch_info_rgctx_entry_new (cfg->mempool, cfg->method, context_used_is_mrgctx (cfg, context_used), MONO_PATCH_INFO_FIELD, field, rgctx_type);
return emit_rgctx_fetch (cfg, context_used, entry);
}
MonoInst*
mini_emit_get_rgctx_method (MonoCompile *cfg, int context_used,
MonoMethod *cmethod, MonoRgctxInfoType rgctx_type)
{
return emit_get_rgctx_method (cfg, context_used, cmethod, rgctx_type);
}
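/*
 * get_gsharedvt_info_slot:
 *
 * Return the index of the gsharedvt info entry for (DATA, RGCTX_TYPE), adding a new
 * entry and growing the entry table if one doesn't exist yet.
 */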
static int
get_gsharedvt_info_slot (MonoCompile *cfg, gpointer data, MonoRgctxInfoType rgctx_type)
{
MonoGSharedVtMethodInfo *info = cfg->gsharedvt_info;
MonoRuntimeGenericContextInfoTemplate *template_;
int i, idx;
g_assert (info);
for (i = 0; i < info->num_entries; ++i) {
MonoRuntimeGenericContextInfoTemplate *otemplate = &info->entries [i];
if (otemplate->info_type == rgctx_type && otemplate->data == data && rgctx_type != MONO_RGCTX_INFO_LOCAL_OFFSET)
return i;
}
if (info->num_entries == info->count_entries) {
MonoRuntimeGenericContextInfoTemplate *new_entries;
int new_count_entries = info->count_entries ? info->count_entries * 2 : 16;
new_entries = (MonoRuntimeGenericContextInfoTemplate *)mono_mempool_alloc0 (cfg->mempool, sizeof (MonoRuntimeGenericContextInfoTemplate) * new_count_entries);
memcpy (new_entries, info->entries, sizeof (MonoRuntimeGenericContextInfoTemplate) * info->count_entries);
info->entries = new_entries;
info->count_entries = new_count_entries;
}
idx = info->num_entries;
template_ = &info->entries [idx];
template_->info_type = rgctx_type;
template_->data = data;
info->num_entries ++;
return idx;
}
/*
* emit_get_gsharedvt_info:
*
 * This is similar to the emit_get_rgctx_* () functions, but loads the data from the gsharedvt info var instead of calling an rgctx fetch trampoline.
*/
static MonoInst*
emit_get_gsharedvt_info (MonoCompile *cfg, gpointer data, MonoRgctxInfoType rgctx_type)
{
MonoInst *ins;
int idx, dreg;
idx = get_gsharedvt_info_slot (cfg, data, rgctx_type);
/* Load info->entries [idx] */
dreg = alloc_preg (cfg);
EMIT_NEW_LOAD_MEMBASE (cfg, ins, OP_LOAD_MEMBASE, dreg, cfg->gsharedvt_info_var->dreg, MONO_STRUCT_OFFSET (MonoGSharedVtMethodRuntimeInfo, entries) + (idx * TARGET_SIZEOF_VOID_P));
return ins;
}
MonoInst*
mini_emit_get_gsharedvt_info_klass (MonoCompile *cfg, MonoClass *klass, MonoRgctxInfoType rgctx_type)
{
return emit_get_gsharedvt_info (cfg, m_class_get_byval_arg (klass), rgctx_type);
}
/*
* On return the caller must check @klass for load errors.
*/
static void
emit_class_init (MonoCompile *cfg, MonoClass *klass)
{
MonoInst *vtable_arg;
int context_used;
context_used = mini_class_check_context_used (cfg, klass);
if (context_used) {
vtable_arg = mini_emit_get_rgctx_klass (cfg, context_used,
klass, MONO_RGCTX_INFO_VTABLE);
} else {
MonoVTable *vtable = mono_class_vtable_checked (klass, cfg->error);
if (!is_ok (cfg->error)) {
mono_cfg_set_exception (cfg, MONO_EXCEPTION_MONO_ERROR);
return;
}
EMIT_NEW_VTABLECONST (cfg, vtable_arg, vtable);
}
if (!COMPILE_LLVM (cfg) && cfg->backend->have_op_generic_class_init) {
MonoInst *ins;
/*
* Using an opcode instead of emitting IR here allows the hiding of the call inside the opcode,
* so this doesn't have to clobber any regs and it doesn't break basic blocks.
*/
MONO_INST_NEW (cfg, ins, OP_GENERIC_CLASS_INIT);
ins->sreg1 = vtable_arg->dreg;
MONO_ADD_INS (cfg->cbb, ins);
} else {
int inited_reg;
MonoBasicBlock *inited_bb;
inited_reg = alloc_ireg (cfg);
MONO_EMIT_NEW_LOAD_MEMBASE_OP (cfg, OP_LOADU1_MEMBASE, inited_reg, vtable_arg->dreg, MONO_STRUCT_OFFSET (MonoVTable, initialized));
NEW_BBLOCK (cfg, inited_bb);
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, inited_reg, 0);
MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_IBNE_UN, inited_bb);
cfg->cbb->out_of_line = TRUE;
mono_emit_jit_icall (cfg, mono_generic_class_init, &vtable_arg);
MONO_START_BB (cfg, inited_bb);
}
}
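/*
 * emit_seq_point:
 *
 * Emit a sequence point at IP if sequence point generation is enabled and METHOD is
 * the method being compiled.
 */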
static void
emit_seq_point (MonoCompile *cfg, MonoMethod *method, guint8* ip, gboolean intr_loc, gboolean nonempty_stack)
{
MonoInst *ins;
if (cfg->gen_seq_points && cfg->method == method) {
NEW_SEQ_POINT (cfg, ins, ip - cfg->header->code, intr_loc);
if (nonempty_stack)
ins->flags |= MONO_INST_NONEMPTY_STACK;
MONO_ADD_INS (cfg->cbb, ins);
cfg->last_seq_point = ins;
}
}
void
mini_save_cast_details (MonoCompile *cfg, MonoClass *klass, int obj_reg, gboolean null_check)
{
if (mini_debug_options.better_cast_details) {
int vtable_reg = alloc_preg (cfg);
int klass_reg = alloc_preg (cfg);
MonoBasicBlock *is_null_bb = NULL;
MonoInst *tls_get;
if (null_check) {
NEW_BBLOCK (cfg, is_null_bb);
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, obj_reg, 0);
MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_PBEQ, is_null_bb);
}
tls_get = mono_create_tls_get (cfg, TLS_KEY_JIT_TLS);
if (!tls_get) {
fprintf (stderr, "error: --debug=casts not supported on this platform.\n");
exit (1);
}
MONO_EMIT_NEW_LOAD_MEMBASE (cfg, vtable_reg, obj_reg, MONO_STRUCT_OFFSET (MonoObject, vtable));
MONO_EMIT_NEW_LOAD_MEMBASE (cfg, klass_reg, vtable_reg, MONO_STRUCT_OFFSET (MonoVTable, klass));
MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STORE_MEMBASE_REG, tls_get->dreg, MONO_STRUCT_OFFSET (MonoJitTlsData, class_cast_from), klass_reg);
MonoInst *class_ins = mini_emit_get_rgctx_klass (cfg, mini_class_check_context_used (cfg, klass), klass, MONO_RGCTX_INFO_KLASS);
MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STORE_MEMBASE_REG, tls_get->dreg, MONO_STRUCT_OFFSET (MonoJitTlsData, class_cast_to), class_ins->dreg);
if (null_check)
MONO_START_BB (cfg, is_null_bb);
}
}
void
mini_reset_cast_details (MonoCompile *cfg)
{
/* Reset the variables holding the cast details */
if (mini_debug_options.better_cast_details) {
MonoInst *tls_get = mono_create_tls_get (cfg, TLS_KEY_JIT_TLS);
/* It is enough to reset the from field */
MONO_EMIT_NEW_STORE_MEMBASE_IMM (cfg, OP_STORE_MEMBASE_IMM, tls_get->dreg, MONO_STRUCT_OFFSET (MonoJitTlsData, class_cast_from), 0);
}
}
/*
* On return the caller must check @array_class for load errors
*/
static void
mini_emit_check_array_type (MonoCompile *cfg, MonoInst *obj, MonoClass *array_class)
{
int vtable_reg = alloc_preg (cfg);
int context_used;
context_used = mini_class_check_context_used (cfg, array_class);
mini_save_cast_details (cfg, array_class, obj->dreg, FALSE);
MONO_EMIT_NEW_LOAD_MEMBASE_FAULT (cfg, vtable_reg, obj->dreg, MONO_STRUCT_OFFSET (MonoObject, vtable));
if (context_used) {
MonoInst *vtable_ins;
vtable_ins = mini_emit_get_rgctx_klass (cfg, context_used, array_class, MONO_RGCTX_INFO_VTABLE);
MONO_EMIT_NEW_BIALU (cfg, OP_COMPARE, -1, vtable_reg, vtable_ins->dreg);
} else {
if (cfg->compile_aot) {
int vt_reg;
MonoVTable *vtable;
if (!(vtable = mono_class_vtable_checked (array_class, cfg->error))) {
mono_cfg_set_exception (cfg, MONO_EXCEPTION_MONO_ERROR);
return;
}
vt_reg = alloc_preg (cfg);
MONO_EMIT_NEW_VTABLECONST (cfg, vt_reg, vtable);
MONO_EMIT_NEW_BIALU (cfg, OP_COMPARE, -1, vtable_reg, vt_reg);
} else {
MonoVTable *vtable;
if (!(vtable = mono_class_vtable_checked (array_class, cfg->error))) {
mono_cfg_set_exception (cfg, MONO_EXCEPTION_MONO_ERROR);
return;
}
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, vtable_reg, (gssize)vtable);
}
}
MONO_EMIT_NEW_COND_EXC (cfg, NE_UN, "ArrayTypeMismatchException");
mini_reset_cast_details (cfg);
}
/**
 * Handles unbox of a Nullable<T>. If context_used is non-zero, then shared
* generic code is generated.
*/
static MonoInst*
handle_unbox_nullable (MonoCompile* cfg, MonoInst* val, MonoClass* klass, int context_used)
{
MonoMethod* method;
if (m_class_is_enumtype (mono_class_get_nullable_param_internal (klass)))
method = get_method_nofail (klass, "UnboxExact", 1, 0);
else
method = get_method_nofail (klass, "Unbox", 1, 0);
g_assert (method);
if (context_used) {
MonoInst *rgctx, *addr;
/* FIXME: What if the class is shared? We might not
have to get the address of the method from the
RGCTX. */
if (cfg->llvm_only) {
addr = emit_get_rgctx_method (cfg, context_used, method,
MONO_RGCTX_INFO_METHOD_FTNDESC);
cfg->signatures = g_slist_prepend_mempool (cfg->mempool, cfg->signatures, mono_method_signature_internal (method));
return mini_emit_llvmonly_calli (cfg, mono_method_signature_internal (method), &val, addr);
} else {
addr = emit_get_rgctx_method (cfg, context_used, method,
MONO_RGCTX_INFO_GENERIC_METHOD_CODE);
rgctx = emit_get_rgctx (cfg, context_used);
return mini_emit_calli (cfg, mono_method_signature_internal (method), &val, addr, NULL, rgctx);
}
} else {
gboolean pass_vtable, pass_mrgctx;
MonoInst *rgctx_arg = NULL;
check_method_sharing (cfg, method, &pass_vtable, &pass_mrgctx);
g_assert (!pass_mrgctx);
if (pass_vtable) {
MonoVTable *vtable = mono_class_vtable_checked (method->klass, cfg->error);
mono_error_assert_ok (cfg->error);
EMIT_NEW_VTABLECONST (cfg, rgctx_arg, vtable);
}
return mini_emit_method_call_full (cfg, method, NULL, FALSE, &val, NULL, NULL, rgctx_arg);
}
}
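/*
 * mini_handle_unbox:
 *
 * Emit IR to unbox VAL to a valuetype of class KLASS, checking the dynamic type of
 * the object and returning the address of the unboxed data.
 */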
MonoInst*
mini_handle_unbox (MonoCompile *cfg, MonoClass *klass, MonoInst *val, int context_used)
{
MonoInst *add;
int obj_reg;
int vtable_reg = alloc_dreg (cfg ,STACK_PTR);
int klass_reg = alloc_dreg (cfg ,STACK_PTR);
int eclass_reg = alloc_dreg (cfg ,STACK_PTR);
int rank_reg = alloc_dreg (cfg ,STACK_I4);
obj_reg = val->dreg;
MONO_EMIT_NEW_LOAD_MEMBASE_FAULT (cfg, vtable_reg, obj_reg, MONO_STRUCT_OFFSET (MonoObject, vtable));
MONO_EMIT_NEW_LOAD_MEMBASE_OP (cfg, OP_LOADU1_MEMBASE, rank_reg, vtable_reg, MONO_STRUCT_OFFSET (MonoVTable, rank));
/* FIXME: generics */
g_assert (m_class_get_rank (klass) == 0);
// Check rank == 0
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, rank_reg, 0);
MONO_EMIT_NEW_COND_EXC (cfg, NE_UN, "InvalidCastException");
MONO_EMIT_NEW_LOAD_MEMBASE (cfg, klass_reg, vtable_reg, MONO_STRUCT_OFFSET (MonoVTable, klass));
MONO_EMIT_NEW_LOAD_MEMBASE (cfg, eclass_reg, klass_reg, m_class_offsetof_element_class ());
if (context_used) {
MonoInst *element_class;
/* This assertion is from the unboxcast insn */
g_assert (m_class_get_rank (klass) == 0);
element_class = mini_emit_get_rgctx_klass (cfg, context_used,
klass, MONO_RGCTX_INFO_ELEMENT_KLASS);
MONO_EMIT_NEW_BIALU (cfg, OP_COMPARE, -1, eclass_reg, element_class->dreg);
MONO_EMIT_NEW_COND_EXC (cfg, NE_UN, "InvalidCastException");
} else {
mini_save_cast_details (cfg, m_class_get_element_class (klass), obj_reg, FALSE);
mini_emit_class_check (cfg, eclass_reg, m_class_get_element_class (klass));
mini_reset_cast_details (cfg);
}
NEW_BIALU_IMM (cfg, add, OP_ADD_IMM, alloc_dreg (cfg, STACK_MP), obj_reg, MONO_ABI_SIZEOF (MonoObject));
MONO_ADD_INS (cfg->cbb, add);
add->type = STACK_MP;
add->klass = klass;
return add;
}
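/*
 * handle_unbox_gsharedvt:
 *
 * Unbox OBJ when KLASS is a gsharedvt type, branching at runtime on whether it is
 * instantiated as a vtype, a reference type or a nullable type.
 */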
static MonoInst*
handle_unbox_gsharedvt (MonoCompile *cfg, MonoClass *klass, MonoInst *obj)
{
MonoInst *addr, *klass_inst, *is_ref, *args[16];
MonoBasicBlock *is_ref_bb, *is_nullable_bb, *end_bb;
MonoInst *ins;
int dreg, addr_reg;
klass_inst = mini_emit_get_gsharedvt_info_klass (cfg, klass, MONO_RGCTX_INFO_KLASS);
/* obj */
args [0] = obj;
/* klass */
args [1] = klass_inst;
/* CASTCLASS */
obj = mono_emit_jit_icall (cfg, mono_object_castclass_unbox, args);
NEW_BBLOCK (cfg, is_ref_bb);
NEW_BBLOCK (cfg, is_nullable_bb);
NEW_BBLOCK (cfg, end_bb);
is_ref = mini_emit_get_gsharedvt_info_klass (cfg, klass, MONO_RGCTX_INFO_CLASS_BOX_TYPE);
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, is_ref->dreg, MONO_GSHAREDVT_BOX_TYPE_REF);
MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_IBEQ, is_ref_bb);
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, is_ref->dreg, MONO_GSHAREDVT_BOX_TYPE_NULLABLE);
MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_IBEQ, is_nullable_bb);
/* This will contain either the address of the unboxed vtype, or an address of the temporary where the ref is stored */
addr_reg = alloc_dreg (cfg, STACK_MP);
/* Non-ref case */
/* UNBOX */
NEW_BIALU_IMM (cfg, addr, OP_ADD_IMM, addr_reg, obj->dreg, MONO_ABI_SIZEOF (MonoObject));
MONO_ADD_INS (cfg->cbb, addr);
MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_BR, end_bb);
/* Ref case */
MONO_START_BB (cfg, is_ref_bb);
/* Save the ref to a temporary */
dreg = alloc_ireg (cfg);
EMIT_NEW_VARLOADA_VREG (cfg, addr, dreg, m_class_get_byval_arg (klass));
addr->dreg = addr_reg;
MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STORE_MEMBASE_REG, addr->dreg, 0, obj->dreg);
MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_BR, end_bb);
/* Nullable case */
MONO_START_BB (cfg, is_nullable_bb);
{
MonoInst *addr = mini_emit_get_gsharedvt_info_klass (cfg, klass, MONO_RGCTX_INFO_NULLABLE_CLASS_UNBOX);
MonoInst *unbox_call;
MonoMethodSignature *unbox_sig;
unbox_sig = (MonoMethodSignature *)mono_mempool_alloc0 (cfg->mempool, MONO_SIZEOF_METHOD_SIGNATURE + (1 * sizeof (MonoType *)));
unbox_sig->ret = m_class_get_byval_arg (klass);
unbox_sig->param_count = 1;
unbox_sig->params [0] = mono_get_object_type ();
if (cfg->llvm_only)
unbox_call = mini_emit_llvmonly_calli (cfg, unbox_sig, &obj, addr);
else
unbox_call = mini_emit_calli (cfg, unbox_sig, &obj, addr, NULL, NULL);
EMIT_NEW_VARLOADA_VREG (cfg, addr, unbox_call->dreg, m_class_get_byval_arg (klass));
addr->dreg = addr_reg;
}
MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_BR, end_bb);
/* End */
MONO_START_BB (cfg, end_bb);
/* LDOBJ */
EMIT_NEW_LOAD_MEMBASE_TYPE (cfg, ins, m_class_get_byval_arg (klass), addr_reg, 0);
return ins;
}
/*
 * Returns NULL and sets the cfg exception on error.
*/
static MonoInst*
handle_alloc (MonoCompile *cfg, MonoClass *klass, gboolean for_box, int context_used)
{
MonoInst *iargs [2];
MonoJitICallId alloc_ftn;
if (mono_class_get_flags (klass) & TYPE_ATTRIBUTE_ABSTRACT) {
char* full_name = mono_type_get_full_name (klass);
mono_cfg_set_exception (cfg, MONO_EXCEPTION_MONO_ERROR);
mono_error_set_member_access (cfg->error, "Cannot create an abstract class: %s", full_name);
g_free (full_name);
return NULL;
}
if (context_used) {
gboolean known_instance_size = !mini_is_gsharedvt_klass (klass);
MonoMethod *managed_alloc = mono_gc_get_managed_allocator (klass, for_box, known_instance_size);
iargs [0] = mini_emit_get_rgctx_klass (cfg, context_used, klass, MONO_RGCTX_INFO_VTABLE);
alloc_ftn = MONO_JIT_ICALL_ves_icall_object_new_specific;
if (managed_alloc) {
if (known_instance_size) {
int size = mono_class_instance_size (klass);
if (size < MONO_ABI_SIZEOF (MonoObject))
g_error ("Invalid size %d for class %s", size, mono_type_get_full_name (klass));
EMIT_NEW_ICONST (cfg, iargs [1], size);
}
return mono_emit_method_call (cfg, managed_alloc, iargs, NULL);
}
return mono_emit_jit_icall_id (cfg, alloc_ftn, iargs);
}
if (cfg->compile_aot && cfg->cbb->out_of_line && m_class_get_type_token (klass) && m_class_get_image (klass) == mono_defaults.corlib && !mono_class_is_ginst (klass)) {
/* This happens often in argument checking code, eg. throw new FooException... */
/* Avoid relocations and save some space by calling a helper function specialized to mscorlib */
EMIT_NEW_ICONST (cfg, iargs [0], mono_metadata_token_index (m_class_get_type_token (klass)));
alloc_ftn = MONO_JIT_ICALL_mono_helper_newobj_mscorlib;
} else {
MonoVTable *vtable = mono_class_vtable_checked (klass, cfg->error);
if (!is_ok (cfg->error)) {
mono_cfg_set_exception (cfg, MONO_EXCEPTION_MONO_ERROR);
return NULL;
}
MonoMethod *managed_alloc = mono_gc_get_managed_allocator (klass, for_box, TRUE);
if (managed_alloc) {
int size = mono_class_instance_size (klass);
if (size < MONO_ABI_SIZEOF (MonoObject))
g_error ("Invalid size %d for class %s", size, mono_type_get_full_name (klass));
EMIT_NEW_VTABLECONST (cfg, iargs [0], vtable);
EMIT_NEW_ICONST (cfg, iargs [1], size);
return mono_emit_method_call (cfg, managed_alloc, iargs, NULL);
}
alloc_ftn = MONO_JIT_ICALL_ves_icall_object_new_specific;
EMIT_NEW_VTABLECONST (cfg, iargs [0], vtable);
}
return mono_emit_jit_icall_id (cfg, alloc_ftn, iargs);
}
/*
 * Returns NULL and sets the cfg exception on error.
*/
MonoInst*
mini_emit_box (MonoCompile *cfg, MonoInst *val, MonoClass *klass, int context_used)
{
MonoInst *alloc, *ins;
if (G_UNLIKELY (m_class_is_byreflike (klass))) {
mono_error_set_bad_image (cfg->error, m_class_get_image (cfg->method->klass), "Cannot box IsByRefLike type '%s.%s'", m_class_get_name_space (klass), m_class_get_name (klass));
mono_cfg_set_exception (cfg, MONO_EXCEPTION_MONO_ERROR);
return NULL;
}
if (mono_class_is_nullable (klass)) {
MonoMethod* method = get_method_nofail (klass, "Box", 1, 0);
if (context_used) {
if (cfg->llvm_only) {
MonoMethodSignature *sig = mono_method_signature_internal (method);
MonoInst *addr = emit_get_rgctx_method (cfg, context_used, method,
MONO_RGCTX_INFO_METHOD_FTNDESC);
cfg->interp_in_signatures = g_slist_prepend_mempool (cfg->mempool, cfg->interp_in_signatures, sig);
return mini_emit_llvmonly_calli (cfg, sig, &val, addr);
} else {
/* FIXME: What if the class is shared? We might not
have to get the method address from the RGCTX. */
MonoInst *addr = emit_get_rgctx_method (cfg, context_used, method,
MONO_RGCTX_INFO_GENERIC_METHOD_CODE);
MonoInst *rgctx = emit_get_rgctx (cfg, context_used);
return mini_emit_calli (cfg, mono_method_signature_internal (method), &val, addr, NULL, rgctx);
}
} else {
gboolean pass_vtable, pass_mrgctx;
MonoInst *rgctx_arg = NULL;
check_method_sharing (cfg, method, &pass_vtable, &pass_mrgctx);
g_assert (!pass_mrgctx);
if (pass_vtable) {
MonoVTable *vtable = mono_class_vtable_checked (method->klass, cfg->error);
mono_error_assert_ok (cfg->error);
EMIT_NEW_VTABLECONST (cfg, rgctx_arg, vtable);
}
return mini_emit_method_call_full (cfg, method, NULL, FALSE, &val, NULL, NULL, rgctx_arg);
}
}
if (mini_is_gsharedvt_klass (klass)) {
MonoBasicBlock *is_ref_bb, *is_nullable_bb, *end_bb;
MonoInst *res, *is_ref, *src_var, *addr;
int dreg;
dreg = alloc_ireg (cfg);
NEW_BBLOCK (cfg, is_ref_bb);
NEW_BBLOCK (cfg, is_nullable_bb);
NEW_BBLOCK (cfg, end_bb);
is_ref = mini_emit_get_gsharedvt_info_klass (cfg, klass, MONO_RGCTX_INFO_CLASS_BOX_TYPE);
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, is_ref->dreg, MONO_GSHAREDVT_BOX_TYPE_REF);
MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_IBEQ, is_ref_bb);
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, is_ref->dreg, MONO_GSHAREDVT_BOX_TYPE_NULLABLE);
MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_IBEQ, is_nullable_bb);
/* Non-ref case */
alloc = handle_alloc (cfg, klass, TRUE, context_used);
if (!alloc)
return NULL;
EMIT_NEW_STORE_MEMBASE_TYPE (cfg, ins, m_class_get_byval_arg (klass), alloc->dreg, MONO_ABI_SIZEOF (MonoObject), val->dreg);
ins->opcode = OP_STOREV_MEMBASE;
EMIT_NEW_UNALU (cfg, res, OP_MOVE, dreg, alloc->dreg);
res->type = STACK_OBJ;
res->klass = klass;
MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_BR, end_bb);
/* Ref case */
MONO_START_BB (cfg, is_ref_bb);
/* val is a vtype, so we have to load the value manually */
src_var = get_vreg_to_inst (cfg, val->dreg);
if (!src_var)
src_var = mono_compile_create_var_for_vreg (cfg, m_class_get_byval_arg (klass), OP_LOCAL, val->dreg);
EMIT_NEW_VARLOADA (cfg, addr, src_var, src_var->inst_vtype);
MONO_EMIT_NEW_LOAD_MEMBASE (cfg, dreg, addr->dreg, 0);
MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_BR, end_bb);
/* Nullable case */
MONO_START_BB (cfg, is_nullable_bb);
{
MonoInst *addr = mini_emit_get_gsharedvt_info_klass (cfg, klass,
MONO_RGCTX_INFO_NULLABLE_CLASS_BOX);
MonoInst *box_call;
MonoMethodSignature *box_sig;
/*
 * klass is Nullable<T>; we need to call Nullable<T>.Box () using a gsharedvt signature, but we cannot
 * construct that method at JIT time, so we have to do things by hand.
*/
box_sig = (MonoMethodSignature *)mono_mempool_alloc0 (cfg->mempool, MONO_SIZEOF_METHOD_SIGNATURE + (1 * sizeof (MonoType *)));
box_sig->ret = mono_get_object_type ();
box_sig->param_count = 1;
box_sig->params [0] = m_class_get_byval_arg (klass);
if (cfg->llvm_only)
box_call = mini_emit_llvmonly_calli (cfg, box_sig, &val, addr);
else
box_call = mini_emit_calli (cfg, box_sig, &val, addr, NULL, NULL);
EMIT_NEW_UNALU (cfg, res, OP_MOVE, dreg, box_call->dreg);
res->type = STACK_OBJ;
res->klass = klass;
}
MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_BR, end_bb);
MONO_START_BB (cfg, end_bb);
return res;
}
alloc = handle_alloc (cfg, klass, TRUE, context_used);
if (!alloc)
return NULL;
EMIT_NEW_STORE_MEMBASE_TYPE (cfg, ins, m_class_get_byval_arg (klass), alloc->dreg, MONO_ABI_SIZEOF (MonoObject), val->dreg);
return alloc;
}
static gboolean
method_needs_stack_walk (MonoCompile *cfg, MonoMethod *cmethod)
{
if (cmethod->klass == mono_defaults.systemtype_class) {
if (!strcmp (cmethod->name, "GetType"))
return TRUE;
}
/*
 * In corelib code, methods which need to do a stack walk declare a StackCrawlMark local and pass it as an
 * argument until it reaches an icall. It's hard to detect which methods do that, especially with
* StackCrawlMark.LookForMyCallersCaller, so for now, just hardcode the classes which contain the public
* methods whose caller is needed.
*/
if (mono_is_corlib_image (m_class_get_image (cmethod->klass))) {
const char *cname = m_class_get_name (cmethod->klass);
if (!strcmp (cname, "Assembly") ||
!strcmp (cname, "AssemblyLoadContext") ||
(!strcmp (cname, "Activator"))) {
if (!strcmp (cmethod->name, "op_Equality"))
return FALSE;
return TRUE;
}
}
return FALSE;
}
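/*
 * mini_handle_enum_has_flag:
 *
 * Emit inline IR for Enum.HasFlag (), i.e. (value & flag) == flag, computed on the
 * underlying type of the enum class KLASS.
 */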
G_GNUC_UNUSED MonoInst*
mini_handle_enum_has_flag (MonoCompile *cfg, MonoClass *klass, MonoInst *enum_this, int enum_val_reg, MonoInst *enum_flag)
{
MonoType *enum_type = mono_type_get_underlying_type (m_class_get_byval_arg (klass));
guint32 load_opc = mono_type_to_load_membase (cfg, enum_type);
gboolean is_i4;
switch (enum_type->type) {
case MONO_TYPE_I8:
case MONO_TYPE_U8:
#if SIZEOF_REGISTER == 8
case MONO_TYPE_I:
case MONO_TYPE_U:
#endif
is_i4 = FALSE;
break;
default:
is_i4 = TRUE;
break;
}
{
MonoInst *load = NULL, *and_, *cmp, *ceq;
int enum_reg = is_i4 ? alloc_ireg (cfg) : alloc_lreg (cfg);
int and_reg = is_i4 ? alloc_ireg (cfg) : alloc_lreg (cfg);
int dest_reg = alloc_ireg (cfg);
if (enum_this) {
EMIT_NEW_LOAD_MEMBASE (cfg, load, load_opc, enum_reg, enum_this->dreg, 0);
} else {
g_assert (enum_val_reg != -1);
enum_reg = enum_val_reg;
}
EMIT_NEW_BIALU (cfg, and_, is_i4 ? OP_IAND : OP_LAND, and_reg, enum_reg, enum_flag->dreg);
EMIT_NEW_BIALU (cfg, cmp, is_i4 ? OP_ICOMPARE : OP_LCOMPARE, -1, and_reg, enum_flag->dreg);
EMIT_NEW_UNALU (cfg, ceq, is_i4 ? OP_ICEQ : OP_LCEQ, dest_reg, -1);
ceq->type = STACK_I4;
if (!is_i4) {
load = load ? mono_decompose_opcode (cfg, load) : NULL;
and_ = mono_decompose_opcode (cfg, and_);
cmp = mono_decompose_opcode (cfg, cmp);
ceq = mono_decompose_opcode (cfg, ceq);
}
return ceq;
}
}
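/*
 * emit_set_deopt_il_offset:
 *
 * Store OFFSET into the il_offset field of the method's IL state variable so
 * deoptimization can resume execution at the right IL offset.
 */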
static void
emit_set_deopt_il_offset (MonoCompile *cfg, int offset)
{
MonoInst *ins;
if (!(cfg->deopt && cfg->method == cfg->current_method))
return;
EMIT_NEW_VARLOADA (cfg, ins, cfg->il_state_var, NULL);
MONO_EMIT_NEW_STORE_MEMBASE_IMM (cfg, OP_STOREI4_MEMBASE_IMM, ins->dreg, MONO_STRUCT_OFFSET (MonoMethodILState, il_offset), offset);
}
static MonoInst*
emit_get_rgctx_dele_tramp (MonoCompile *cfg, int context_used,
MonoClass *klass, MonoMethod *virt_method, gboolean _virtual, MonoRgctxInfoType rgctx_type)
{
MonoDelegateClassMethodPair *info;
MonoJumpInfoRgctxEntry *entry;
info = (MonoDelegateClassMethodPair *)mono_mempool_alloc0 (cfg->mempool, sizeof (MonoDelegateClassMethodPair));
info->klass = klass;
info->method = virt_method;
info->is_virtual = _virtual;
entry = mono_patch_info_rgctx_entry_new (cfg->mempool, cfg->method, context_used_is_mrgctx (cfg, context_used), MONO_PATCH_INFO_DELEGATE_TRAMPOLINE, info, rgctx_type);
return emit_rgctx_fetch (cfg, context_used, entry);
}
/*
 * Returns NULL and sets the cfg exception on error.
*/
static G_GNUC_UNUSED MonoInst*
handle_delegate_ctor (MonoCompile *cfg, MonoClass *klass, MonoInst *target, MonoMethod *method, int target_method_context_used, int invoke_context_used, gboolean virtual_)
{
MonoInst *ptr;
int dreg;
gpointer trampoline;
MonoInst *obj, *tramp_ins;
guint8 **code_slot;
if (virtual_ && !cfg->llvm_only) {
MonoMethod *invoke = mono_get_delegate_invoke_internal (klass);
g_assert (invoke);
// FIXME: verify & fix any issue with removing the invoke_context_used restriction
if (invoke_context_used || !mono_get_delegate_virtual_invoke_impl (mono_method_signature_internal (invoke), target_method_context_used ? NULL : method))
return NULL;
}
obj = handle_alloc (cfg, klass, FALSE, invoke_context_used);
if (!obj)
return NULL;
/* Inline the contents of mono_delegate_ctor */
/* Set target field */
/* Optimize away setting of NULL target */
if (!MONO_INS_IS_PCONST_NULL (target)) {
if (!(method->flags & METHOD_ATTRIBUTE_STATIC)) {
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, target->dreg, 0);
MONO_EMIT_NEW_COND_EXC (cfg, EQ, "NullReferenceException");
}
if (!mini_debug_options.weak_memory_model)
mini_emit_memory_barrier (cfg, MONO_MEMORY_BARRIER_REL);
MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STORE_MEMBASE_REG, obj->dreg, MONO_STRUCT_OFFSET (MonoDelegate, target), target->dreg);
if (cfg->gen_write_barriers) {
dreg = alloc_preg (cfg);
EMIT_NEW_BIALU_IMM (cfg, ptr, OP_PADD_IMM, dreg, obj->dreg, MONO_STRUCT_OFFSET (MonoDelegate, target));
mini_emit_write_barrier (cfg, ptr, target);
}
}
/* Set method field */
if (!(target_method_context_used || invoke_context_used) && !cfg->llvm_only) {
// If compiling with gsharing enabled, it's faster to load the method from the delegate trampoline info than to use an rgctx slot
MonoInst *method_ins = emit_get_rgctx_method (cfg, target_method_context_used, method, MONO_RGCTX_INFO_METHOD);
MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STORE_MEMBASE_REG, obj->dreg, MONO_STRUCT_OFFSET (MonoDelegate, method), method_ins->dreg);
}
if (cfg->llvm_only) {
if (virtual_) {
MonoInst *args [ ] = {
obj,
target,
emit_get_rgctx_method (cfg, target_method_context_used, method, MONO_RGCTX_INFO_METHOD)
};
mono_emit_jit_icall (cfg, mini_llvmonly_init_delegate_virtual, args);
return obj;
}
}
/*
* To avoid looking up the compiled code belonging to the target method
* in mono_delegate_trampoline (), we allocate a per-domain memory slot to
* store it, and we fill it after the method has been compiled.
*/
if (!method->dynamic && !cfg->llvm_only) {
MonoInst *code_slot_ins;
if (target_method_context_used) {
code_slot_ins = emit_get_rgctx_method (cfg, target_method_context_used, method, MONO_RGCTX_INFO_METHOD_DELEGATE_CODE);
} else {
MonoJitMemoryManager *jit_mm = (MonoJitMemoryManager*)cfg->jit_mm;
jit_mm_lock (jit_mm);
if (!jit_mm->method_code_hash)
jit_mm->method_code_hash = g_hash_table_new (NULL, NULL);
code_slot = (guint8 **)g_hash_table_lookup (jit_mm->method_code_hash, method);
if (!code_slot) {
code_slot = (guint8 **)mono_mem_manager_alloc0 (jit_mm->mem_manager, sizeof (gpointer));
g_hash_table_insert (jit_mm->method_code_hash, method, code_slot);
}
jit_mm_unlock (jit_mm);
code_slot_ins = mini_emit_runtime_constant (cfg, MONO_PATCH_INFO_METHOD_CODE_SLOT, method);
}
MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STORE_MEMBASE_REG, obj->dreg, MONO_STRUCT_OFFSET (MonoDelegate, method_code), code_slot_ins->dreg);
}
if (target_method_context_used || invoke_context_used) {
tramp_ins = emit_get_rgctx_dele_tramp (cfg, target_method_context_used | invoke_context_used, klass, method, virtual_, MONO_RGCTX_INFO_DELEGATE_TRAMP_INFO);
// This is emitted as a constant store for the non-shared case.
// We copy from the delegate trampoline info as it's faster than an rgctx fetch
dreg = alloc_preg (cfg);
if (!cfg->llvm_only) {
MONO_EMIT_NEW_LOAD_MEMBASE (cfg, dreg, tramp_ins->dreg, MONO_STRUCT_OFFSET (MonoDelegateTrampInfo, method));
MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STORE_MEMBASE_REG, obj->dreg, MONO_STRUCT_OFFSET (MonoDelegate, method), dreg);
}
} else if (cfg->compile_aot) {
MonoDelegateClassMethodPair *del_tramp;
del_tramp = (MonoDelegateClassMethodPair *)mono_mempool_alloc0 (cfg->mempool, sizeof (MonoDelegateClassMethodPair));
del_tramp->klass = klass;
del_tramp->method = method;
del_tramp->is_virtual = virtual_;
EMIT_NEW_AOTCONST (cfg, tramp_ins, MONO_PATCH_INFO_DELEGATE_TRAMPOLINE, del_tramp);
} else {
if (virtual_)
trampoline = mono_create_delegate_virtual_trampoline (klass, method);
else
trampoline = mono_create_delegate_trampoline_info (klass, method);
EMIT_NEW_PCONST (cfg, tramp_ins, trampoline);
}
if (cfg->llvm_only) {
MonoInst *args [ ] = {
obj,
tramp_ins
};
mono_emit_jit_icall (cfg, mini_llvmonly_init_delegate, args);
return obj;
}
/* Set invoke_impl field */
if (virtual_) {
MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STORE_MEMBASE_REG, obj->dreg, MONO_STRUCT_OFFSET (MonoDelegate, invoke_impl), tramp_ins->dreg);
} else {
dreg = alloc_preg (cfg);
MONO_EMIT_NEW_LOAD_MEMBASE (cfg, dreg, tramp_ins->dreg, MONO_STRUCT_OFFSET (MonoDelegateTrampInfo, invoke_impl));
MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STORE_MEMBASE_REG, obj->dreg, MONO_STRUCT_OFFSET (MonoDelegate, invoke_impl), dreg);
dreg = alloc_preg (cfg);
MONO_EMIT_NEW_LOAD_MEMBASE (cfg, dreg, tramp_ins->dreg, MONO_STRUCT_OFFSET (MonoDelegateTrampInfo, method_ptr));
MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STORE_MEMBASE_REG, obj->dreg, MONO_STRUCT_OFFSET (MonoDelegate, method_ptr), dreg);
}
dreg = alloc_preg (cfg);
MONO_EMIT_NEW_ICONST (cfg, dreg, virtual_ ? 1 : 0);
MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STOREI1_MEMBASE_REG, obj->dreg, MONO_STRUCT_OFFSET (MonoDelegate, method_is_virtual), dreg);
/* All the checks which are in mono_delegate_ctor () are done by the delegate trampoline */
return obj;
}
/*
* handle_constrained_gsharedvt_call:
*
* Handle constrained calls where the receiver is a gsharedvt type.
* Return the instruction representing the call. Set the cfg exception on failure.
*/
static MonoInst*
handle_constrained_gsharedvt_call (MonoCompile *cfg, MonoMethod *cmethod, MonoMethodSignature *fsig, MonoInst **sp, MonoClass *constrained_class,
gboolean *ref_emit_widen)
{
MonoInst *ins = NULL;
gboolean emit_widen = *ref_emit_widen;
gboolean supported;
/*
 * Constrained calls need to behave differently at runtime depending on whether the receiver is instantiated as a ref type or as a vtype.
 * This is hard to do with the current call code, since we would have to emit a branch and two different calls. So instead, we
 * pack the arguments into an array, and do the rest of the work in an icall.
*/
supported = ((cmethod->klass == mono_defaults.object_class) || mono_class_is_interface (cmethod->klass) || (!m_class_is_valuetype (cmethod->klass) && m_class_get_image (cmethod->klass) != mono_defaults.corlib));
if (supported)
supported = (MONO_TYPE_IS_VOID (fsig->ret) || MONO_TYPE_IS_PRIMITIVE (fsig->ret) || MONO_TYPE_IS_REFERENCE (fsig->ret) || MONO_TYPE_ISSTRUCT (fsig->ret) || m_class_is_enumtype (mono_class_from_mono_type_internal (fsig->ret)) || mini_is_gsharedvt_type (fsig->ret));
if (supported) {
if (fsig->param_count == 0 || (!fsig->hasthis && fsig->param_count == 1)) {
supported = TRUE;
} else {
supported = TRUE;
for (int i = 0; i < fsig->param_count; ++i) {
if (!(m_type_is_byref (fsig->params [i]) || MONO_TYPE_IS_PRIMITIVE (fsig->params [i]) || MONO_TYPE_IS_REFERENCE (fsig->params [i]) || MONO_TYPE_ISSTRUCT (fsig->params [i]) || mini_is_gsharedvt_type (fsig->params [i])))
supported = FALSE;
}
}
}
if (supported) {
MonoInst *args [5];
/*
* This case handles calls to
* - object:ToString()/Equals()/GetHashCode(),
* - System.IComparable<T>:CompareTo()
* - System.IEquatable<T>:Equals ()
 * plus some simple interface calls, enough to support AsyncTaskMethodBuilder.
*/
if (fsig->hasthis)
args [0] = sp [0];
else
EMIT_NEW_PCONST (cfg, args [0], NULL);
args [1] = emit_get_rgctx_method (cfg, mono_method_check_context_used (cmethod), cmethod, MONO_RGCTX_INFO_METHOD);
args [2] = mini_emit_get_rgctx_klass (cfg, mono_class_check_context_used (constrained_class), constrained_class, MONO_RGCTX_INFO_KLASS);
/* !fsig->hasthis is for the wrapper for the Object.GetType () icall or static virtual methods */
if ((fsig->hasthis || m_method_is_static (cmethod)) && fsig->param_count) {
/* Call mono_gsharedvt_constrained_call (gpointer mp, MonoMethod *cmethod, MonoClass *klass, gboolean *deref_args, gpointer *args) */
gboolean has_gsharedvt = FALSE;
for (int i = 0; i < fsig->param_count; ++i) {
if (mini_is_gsharedvt_type (fsig->params [i]))
has_gsharedvt = TRUE;
}
/* Pass an array of bools which signal whether the corresponding argument is a gsharedvt ref type */
if (has_gsharedvt) {
MONO_INST_NEW (cfg, ins, OP_LOCALLOC_IMM);
ins->dreg = alloc_preg (cfg);
ins->inst_imm = fsig->param_count;
MONO_ADD_INS (cfg->cbb, ins);
args [3] = ins;
} else {
EMIT_NEW_PCONST (cfg, args [3], 0);
}
/* Pass the arguments using a localloc-ed array using the format expected by runtime_invoke () */
MONO_INST_NEW (cfg, ins, OP_LOCALLOC_IMM);
ins->dreg = alloc_preg (cfg);
ins->inst_imm = fsig->param_count * sizeof (target_mgreg_t);
MONO_ADD_INS (cfg->cbb, ins);
args [4] = ins;
for (int i = 0; i < fsig->param_count; ++i) {
int addr_reg;
if (mini_is_gsharedvt_type (fsig->params [i])) {
MonoInst *is_deref;
int deref_arg_reg;
ins = mini_emit_get_gsharedvt_info_klass (cfg, mono_class_from_mono_type_internal (fsig->params [i]), MONO_RGCTX_INFO_CLASS_BOX_TYPE);
deref_arg_reg = alloc_preg (cfg);
/* deref_arg = BOX_TYPE != MONO_GSHAREDVT_BOX_TYPE_VTYPE */
EMIT_NEW_BIALU_IMM (cfg, is_deref, OP_ISUB_IMM, deref_arg_reg, ins->dreg, 1);
MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STOREI1_MEMBASE_REG, args [3]->dreg, i, is_deref->dreg);
} else if (has_gsharedvt) {
MONO_EMIT_NEW_STORE_MEMBASE_IMM (cfg, OP_STOREI1_MEMBASE_IMM, args [3]->dreg, i, 0);
}
MonoInst *arg = sp [i + fsig->hasthis];
if (mini_is_gsharedvt_type (fsig->params [i]) || MONO_TYPE_IS_PRIMITIVE (fsig->params [i]) || MONO_TYPE_ISSTRUCT (fsig->params [i])) {
EMIT_NEW_VARLOADA_VREG (cfg, ins, arg->dreg, fsig->params [i]);
addr_reg = ins->dreg;
EMIT_NEW_STORE_MEMBASE (cfg, ins, OP_STORE_MEMBASE_REG, args [4]->dreg, i * sizeof (target_mgreg_t), addr_reg);
} else {
EMIT_NEW_STORE_MEMBASE (cfg, ins, OP_STORE_MEMBASE_REG, args [4]->dreg, i * sizeof (target_mgreg_t), arg->dreg);
}
}
} else {
EMIT_NEW_ICONST (cfg, args [3], 0);
EMIT_NEW_ICONST (cfg, args [4], 0);
}
ins = mono_emit_jit_icall (cfg, mono_gsharedvt_constrained_call, args);
emit_widen = FALSE;
if (mini_is_gsharedvt_type (fsig->ret)) {
ins = handle_unbox_gsharedvt (cfg, mono_class_from_mono_type_internal (fsig->ret), ins);
} else if (MONO_TYPE_IS_PRIMITIVE (fsig->ret) || MONO_TYPE_ISSTRUCT (fsig->ret) || m_class_is_enumtype (mono_class_from_mono_type_internal (fsig->ret))) {
MonoInst *add;
/* Unbox */
NEW_BIALU_IMM (cfg, add, OP_ADD_IMM, alloc_dreg (cfg, STACK_MP), ins->dreg, MONO_ABI_SIZEOF (MonoObject));
MONO_ADD_INS (cfg->cbb, add);
/* Load value */
NEW_LOAD_MEMBASE_TYPE (cfg, ins, fsig->ret, add->dreg, 0);
MONO_ADD_INS (cfg->cbb, ins);
/* ins represents the call result */
}
} else {
GSHAREDVT_FAILURE (CEE_CALLVIRT);
}
*ref_emit_widen = emit_widen;
return ins;
exception_exit:
return NULL;
}
static void
mono_emit_load_got_addr (MonoCompile *cfg)
{
MonoInst *getaddr, *dummy_use;
if (!cfg->got_var || cfg->got_var_allocated)
return;
MONO_INST_NEW (cfg, getaddr, OP_LOAD_GOTADDR);
getaddr->cil_code = cfg->header->code;
getaddr->dreg = cfg->got_var->dreg;
/* Add it to the start of the first bblock */
if (cfg->bb_entry->code) {
getaddr->next = cfg->bb_entry->code;
cfg->bb_entry->code = getaddr;
}
else
MONO_ADD_INS (cfg->bb_entry, getaddr);
cfg->got_var_allocated = TRUE;
/*
* Add a dummy use to keep the got_var alive, since real uses might
* only be generated by the back ends.
* Add it to end_bblock, so the variable's lifetime covers the whole
* method.
* It would be better to make the usage of the got var explicit in all
* cases when the backend needs it (i.e. calls, throw etc.), so this
* wouldn't be needed.
*/
NEW_DUMMY_USE (cfg, dummy_use, cfg->got_var);
MONO_ADD_INS (cfg->bb_exit, dummy_use);
}
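/*
 * get_constrained_method:
 *
 * Resolve the target of a call to CIL_METHOD made with a constrained. prefix on
 * CONSTRAINED_CLASS. Returns NULL and sets the cfg error on failure.
 */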
static MonoMethod*
get_constrained_method (MonoCompile *cfg, MonoImage *image, guint32 token,
MonoMethod *cil_method, MonoClass *constrained_class,
MonoGenericContext *generic_context)
{
MonoMethod *cmethod = cil_method;
gboolean constrained_is_generic_param =
m_class_get_byval_arg (constrained_class)->type == MONO_TYPE_VAR ||
m_class_get_byval_arg (constrained_class)->type == MONO_TYPE_MVAR;
if (cfg->current_method->wrapper_type != MONO_WRAPPER_NONE) {
if (cfg->verbose_level > 2)
printf ("DM Constrained call to %s\n", mono_type_get_full_name (constrained_class));
if (!(constrained_is_generic_param &&
cfg->gshared)) {
cmethod = mono_get_method_constrained_with_method (image, cil_method, constrained_class, generic_context, cfg->error);
CHECK_CFG_ERROR;
}
} else {
if (cfg->verbose_level > 2)
printf ("Constrained call to %s\n", mono_type_get_full_name (constrained_class));
if (constrained_is_generic_param && cfg->gshared) {
/*
* This is needed since get_method_constrained can't find
* the method in klass representing a type var.
* The type var is guaranteed to be a reference type in this
* case.
*/
if (!mini_is_gsharedvt_klass (constrained_class))
g_assert (!m_class_is_valuetype (cmethod->klass));
} else {
cmethod = mono_get_method_constrained_checked (image, token, constrained_class, generic_context, &cil_method, cfg->error);
CHECK_CFG_ERROR;
}
}
return cmethod;
mono_error_exit:
return NULL;
}
static gboolean
method_does_not_return (MonoMethod *method)
{
// FIXME: Under netcore, these are decorated with the [DoesNotReturn] attribute
return m_class_get_image (method->klass) == mono_defaults.corlib &&
!strcmp (m_class_get_name (method->klass), "ThrowHelper") &&
strstr (method->name, "Throw") == method->name &&
!method->is_inflated;
}
static int inline_limit, llvm_jit_inline_limit, llvm_aot_inline_limit;
static gboolean inline_limit_inited;
static gboolean
mono_method_check_inlining (MonoCompile *cfg, MonoMethod *method)
{
MonoMethodHeaderSummary header;
MonoVTable *vtable;
int limit;
#ifdef MONO_ARCH_SOFT_FLOAT_FALLBACK
MonoMethodSignature *sig = mono_method_signature_internal (method);
int i;
#endif
if (cfg->disable_inline)
return FALSE;
if (cfg->gsharedvt)
return FALSE;
if (cfg->inline_depth > 10)
return FALSE;
if (!mono_method_get_header_summary (method, &header))
return FALSE;
/* runtime, icall and pinvoke are checked by the summary call */
if ((method->iflags & METHOD_IMPL_ATTRIBUTE_NOINLINING) ||
(method->iflags & METHOD_IMPL_ATTRIBUTE_SYNCHRONIZED) ||
header.has_clauses)
return FALSE;
if (method->flags & METHOD_ATTRIBUTE_REQSECOBJ)
/* Used to mark methods containing StackCrawlMark locals */
return FALSE;
/* also consider num_locals? */
/* Do the size check early to avoid creating vtables */
if (!inline_limit_inited) {
char *inlinelimit;
if ((inlinelimit = g_getenv ("MONO_INLINELIMIT"))) {
inline_limit = atoi (inlinelimit);
llvm_jit_inline_limit = inline_limit;
llvm_aot_inline_limit = inline_limit;
g_free (inlinelimit);
} else {
inline_limit = INLINE_LENGTH_LIMIT;
llvm_jit_inline_limit = LLVM_JIT_INLINE_LENGTH_LIMIT;
llvm_aot_inline_limit = LLVM_AOT_INLINE_LENGTH_LIMIT;
}
inline_limit_inited = TRUE;
}
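/*
 * Example (hypothetical value): running with MONO_INLINELIMIT=40 makes all
 * three limits 40, so methods whose IL body is 40 bytes or larger are not
 * inlined (unless marked AggressiveInlining). Useful when bisecting
 * inlining-related problems.
 */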
if (COMPILE_LLVM (cfg)) {
if (cfg->compile_aot)
limit = llvm_aot_inline_limit;
else
limit = llvm_jit_inline_limit;
} else {
limit = inline_limit;
}
if (header.code_size >= limit && !(method->iflags & METHOD_IMPL_ATTRIBUTE_AGGRESSIVE_INLINING))
return FALSE;
/*
* if we can initialize the class of the method right away, we do,
* otherwise we don't allow inlining if the class needs initialization,
* since it would mean inserting a call to mono_runtime_class_init()
* inside the inlined code
*/
if (cfg->gshared && m_class_has_cctor (method->klass) && mini_class_check_context_used (cfg, method->klass))
return FALSE;
{
/* The AggressiveInlining hint is a good excuse to force that cctor to run. */
if ((cfg->opt & MONO_OPT_AGGRESSIVE_INLINING) || method->iflags & METHOD_IMPL_ATTRIBUTE_AGGRESSIVE_INLINING) {
if (m_class_has_cctor (method->klass)) {
ERROR_DECL (error);
vtable = mono_class_vtable_checked (method->klass, error);
if (!is_ok (error)) {
mono_error_cleanup (error);
return FALSE;
}
if (!cfg->compile_aot) {
if (!mono_runtime_class_init_full (vtable, error)) {
mono_error_cleanup (error);
return FALSE;
}
}
}
} else if (mono_class_is_before_field_init (method->klass)) {
if (cfg->run_cctors && m_class_has_cctor (method->klass)) {
ERROR_DECL (error);
/* FIXME: it would be easier (and lazier) to just use mono_class_try_get_vtable */
if (!m_class_get_runtime_vtable (method->klass))
/* No vtable created yet */
return FALSE;
vtable = mono_class_vtable_checked (method->klass, error);
if (!is_ok (error)) {
mono_error_cleanup (error);
return FALSE;
}
/* This makes it so that inlining cannot trigger .cctors: too many apps depend on them running in a specific order... */
if (! vtable->initialized)
return FALSE;
if (!mono_runtime_class_init_full (vtable, error)) {
mono_error_cleanup (error);
return FALSE;
}
}
} else if (mono_class_needs_cctor_run (method->klass, NULL)) {
ERROR_DECL (error);
if (!m_class_get_runtime_vtable (method->klass))
/* No vtable created yet */
return FALSE;
vtable = mono_class_vtable_checked (method->klass, error);
if (!is_ok (error)) {
mono_error_cleanup (error);
return FALSE;
}
if (!vtable->initialized)
return FALSE;
}
}
#ifdef MONO_ARCH_SOFT_FLOAT_FALLBACK
if (mono_arch_is_soft_float ()) {
/* FIXME: */
if (sig->ret && sig->ret->type == MONO_TYPE_R4)
return FALSE;
for (i = 0; i < sig->param_count; ++i)
if (!m_type_is_byref (sig->params [i]) && sig->params [i]->type == MONO_TYPE_R4)
return FALSE;
}
#endif
if (g_list_find (cfg->dont_inline, method))
return FALSE;
if (mono_profiler_get_call_instrumentation_flags (method))
return FALSE;
if (mono_profiler_coverage_instrumentation_enabled (method))
return FALSE;
if (method_does_not_return (method))
return FALSE;
return TRUE;
}
static gboolean
mini_field_access_needs_cctor_run (MonoCompile *cfg, MonoMethod *method, MonoClass *klass, MonoVTable *vtable)
{
if (!cfg->compile_aot) {
g_assert (vtable);
if (vtable->initialized)
return FALSE;
}
if (mono_class_is_before_field_init (klass)) {
if (cfg->method == method)
return FALSE;
}
if (!mono_class_needs_cctor_run (klass, method))
return FALSE;
if (! (method->flags & METHOD_ATTRIBUTE_STATIC) && (klass == method->klass))
/* The initialization is already done before the method is called */
return FALSE;
return TRUE;
}
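/*
 * mini_emit_sext_index_reg:
 *
 * Return a vreg holding INDEX adjusted to the native pointer width for use in
 * address computations: sign-extend I4 indexes on 64-bit targets (except when
 * compiling with LLVM, where the extension is added later during
 * decomposition), and truncate I8 indexes on 32-bit targets.
 */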
int
mini_emit_sext_index_reg (MonoCompile *cfg, MonoInst *index)
{
int index_reg = index->dreg;
int index2_reg;
#if SIZEOF_REGISTER == 8
/* The array reg is 64 bits but the index reg is only 32 */
if (COMPILE_LLVM (cfg)) {
/*
* abcrem can't handle the OP_SEXT_I4, so add this after abcrem,
* during OP_BOUNDS_CHECK decomposition, and in the implementation
* of OP_X86_LEA for llvm.
*/
index2_reg = index_reg;
} else {
index2_reg = alloc_preg (cfg);
MONO_EMIT_NEW_UNALU (cfg, OP_SEXT_I4, index2_reg, index_reg);
}
#else
if (index->type == STACK_I8) {
index2_reg = alloc_preg (cfg);
MONO_EMIT_NEW_UNALU (cfg, OP_LCONV_TO_I4, index2_reg, index_reg);
} else {
index2_reg = index_reg;
}
#endif
return index2_reg;
}
MonoInst*
mini_emit_ldelema_1_ins (MonoCompile *cfg, MonoClass *klass, MonoInst *arr, MonoInst *index, gboolean bcheck, gboolean bounded)
{
MonoInst *ins;
guint32 size;
int mult_reg, add_reg, array_reg, index2_reg, bounds_reg, lower_bound_reg, realidx2_reg;
int context_used;
if (mini_is_gsharedvt_variable_klass (klass)) {
size = -1;
} else {
mono_class_init_internal (klass);
size = mono_class_array_element_size (klass);
}
mult_reg = alloc_preg (cfg);
array_reg = arr->dreg;
realidx2_reg = index2_reg = mini_emit_sext_index_reg (cfg, index);
if (bounded) {
bounds_reg = alloc_preg (cfg);
lower_bound_reg = alloc_preg (cfg);
realidx2_reg = alloc_preg (cfg);
MonoBasicBlock *is_null_bb = NULL;
NEW_BBLOCK (cfg, is_null_bb);
// gint32 lower_bound = 0;
// if (arr->bounds)
// lower_bound = arr->bounds [0].lower_bound;
// realidx2 = index2 - lower_bound;
MONO_EMIT_NEW_PCONST (cfg, lower_bound_reg, NULL);
MONO_EMIT_NEW_LOAD_MEMBASE (cfg, bounds_reg, arr->dreg, MONO_STRUCT_OFFSET (MonoArray, bounds));
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, bounds_reg, 0);
MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_PBEQ, is_null_bb);
MONO_EMIT_NEW_LOAD_MEMBASE_OP (cfg, OP_LOADI4_MEMBASE, lower_bound_reg, bounds_reg, MONO_STRUCT_OFFSET (MonoArrayBounds, lower_bound));
MONO_START_BB (cfg, is_null_bb);
MONO_EMIT_NEW_BIALU (cfg, OP_PSUB, realidx2_reg, index2_reg, lower_bound_reg);
}
if (bcheck)
MONO_EMIT_BOUNDS_CHECK (cfg, array_reg, MonoArray, max_length, realidx2_reg);
#if defined(TARGET_X86) || defined(TARGET_AMD64)
if (size == 1 || size == 2 || size == 4 || size == 8) {
static const int fast_log2 [] = { 1, 0, 1, -1, 2, -1, -1, -1, 3 };
EMIT_NEW_X86_LEA (cfg, ins, array_reg, realidx2_reg, fast_log2 [size], MONO_STRUCT_OFFSET (MonoArray, vector));
ins->klass = klass;
ins->type = STACK_MP;
return ins;
}
#endif
add_reg = alloc_ireg_mp (cfg);
if (size == -1) {
MonoInst *rgctx_ins;
/* gsharedvt */
g_assert (cfg->gshared);
context_used = mini_class_check_context_used (cfg, klass);
g_assert (context_used);
rgctx_ins = mini_emit_get_gsharedvt_info_klass (cfg, klass, MONO_RGCTX_INFO_ARRAY_ELEMENT_SIZE);
MONO_EMIT_NEW_BIALU (cfg, OP_IMUL, mult_reg, realidx2_reg, rgctx_ins->dreg);
} else {
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_MUL_IMM, mult_reg, realidx2_reg, size);
}
MONO_EMIT_NEW_BIALU (cfg, OP_PADD, add_reg, array_reg, mult_reg);
NEW_BIALU_IMM (cfg, ins, OP_PADD_IMM, add_reg, add_reg, MONO_STRUCT_OFFSET (MonoArray, vector));
ins->klass = klass;
ins->type = STACK_MP;
MONO_ADD_INS (cfg->cbb, ins);
return ins;
}
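/*
 * mini_emit_ldelema_2_ins:
 *
 * Emit the element address for a rank-2 array:
 *   addr = &arr->vector + (((idx1 - lo1) * len2) + (idx2 - lo2)) * elem_size
 * where loN/lenN come from arr->bounds [N], after range checking both adjusted
 * indexes against their dimension lengths.
 */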
static MonoInst*
mini_emit_ldelema_2_ins (MonoCompile *cfg, MonoClass *klass, MonoInst *arr, MonoInst *index_ins1, MonoInst *index_ins2)
{
int bounds_reg = alloc_preg (cfg);
int add_reg = alloc_ireg_mp (cfg);
int mult_reg = alloc_preg (cfg);
int mult2_reg = alloc_preg (cfg);
int low1_reg = alloc_preg (cfg);
int low2_reg = alloc_preg (cfg);
int high1_reg = alloc_preg (cfg);
int high2_reg = alloc_preg (cfg);
int realidx1_reg = alloc_preg (cfg);
int realidx2_reg = alloc_preg (cfg);
int sum_reg = alloc_preg (cfg);
int index1, index2;
MonoInst *ins;
guint32 size;
mono_class_init_internal (klass);
size = mono_class_array_element_size (klass);
index1 = index_ins1->dreg;
index2 = index_ins2->dreg;
#if SIZEOF_REGISTER == 8
/* The array reg is 64 bits but the index reg is only 32 */
if (COMPILE_LLVM (cfg)) {
/* Not needed */
} else {
int tmpreg = alloc_preg (cfg);
MONO_EMIT_NEW_UNALU (cfg, OP_SEXT_I4, tmpreg, index1);
index1 = tmpreg;
tmpreg = alloc_preg (cfg);
MONO_EMIT_NEW_UNALU (cfg, OP_SEXT_I4, tmpreg, index2);
index2 = tmpreg;
}
#else
// FIXME: Do we need to do something here for i8 indexes, like in ldelema_1_ins ?
#endif
/* range checking */
MONO_EMIT_NEW_LOAD_MEMBASE (cfg, bounds_reg,
arr->dreg, MONO_STRUCT_OFFSET (MonoArray, bounds));
MONO_EMIT_NEW_LOAD_MEMBASE_OP (cfg, OP_LOADI4_MEMBASE, low1_reg,
bounds_reg, MONO_STRUCT_OFFSET (MonoArrayBounds, lower_bound));
MONO_EMIT_NEW_BIALU (cfg, OP_PSUB, realidx1_reg, index1, low1_reg);
MONO_EMIT_NEW_LOAD_MEMBASE_OP (cfg, OP_LOADI4_MEMBASE, high1_reg,
bounds_reg, MONO_STRUCT_OFFSET (MonoArrayBounds, length));
MONO_EMIT_NEW_BIALU (cfg, OP_COMPARE, -1, high1_reg, realidx1_reg);
MONO_EMIT_NEW_COND_EXC (cfg, LE_UN, "IndexOutOfRangeException");
MONO_EMIT_NEW_LOAD_MEMBASE_OP (cfg, OP_LOADI4_MEMBASE, low2_reg,
bounds_reg, sizeof (MonoArrayBounds) + MONO_STRUCT_OFFSET (MonoArrayBounds, lower_bound));
MONO_EMIT_NEW_BIALU (cfg, OP_PSUB, realidx2_reg, index2, low2_reg);
MONO_EMIT_NEW_LOAD_MEMBASE_OP (cfg, OP_LOADI4_MEMBASE, high2_reg,
bounds_reg, sizeof (MonoArrayBounds) + MONO_STRUCT_OFFSET (MonoArrayBounds, length));
MONO_EMIT_NEW_BIALU (cfg, OP_COMPARE, -1, high2_reg, realidx2_reg);
MONO_EMIT_NEW_COND_EXC (cfg, LE_UN, "IndexOutOfRangeException");
MONO_EMIT_NEW_BIALU (cfg, OP_PMUL, mult_reg, high2_reg, realidx1_reg);
MONO_EMIT_NEW_BIALU (cfg, OP_PADD, sum_reg, mult_reg, realidx2_reg);
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_PMUL_IMM, mult2_reg, sum_reg, size);
MONO_EMIT_NEW_BIALU (cfg, OP_PADD, add_reg, mult2_reg, arr->dreg);
NEW_BIALU_IMM (cfg, ins, OP_PADD_IMM, add_reg, add_reg, MONO_STRUCT_OFFSET (MonoArray, vector));
ins->type = STACK_MP;
ins->klass = klass;
MONO_ADD_INS (cfg->cbb, ins);
return ins;
}
static MonoInst*
mini_emit_ldelema_ins (MonoCompile *cfg, MonoMethod *cmethod, MonoInst **sp, guchar *ip, gboolean is_set)
{
int rank;
MonoInst *addr;
MonoMethod *addr_method;
int element_size;
MonoClass *eclass = m_class_get_element_class (cmethod->klass);
gboolean bounded = m_class_get_byval_arg (cmethod->klass) ? m_class_get_byval_arg (cmethod->klass)->type == MONO_TYPE_ARRAY : FALSE;
rank = mono_method_signature_internal (cmethod)->param_count - (is_set? 1: 0);
if (rank == 1)
return mini_emit_ldelema_1_ins (cfg, eclass, sp [0], sp [1], TRUE, bounded);
/* emit_ldelema_2 depends on OP_LMUL */
if (!cfg->backend->emulate_mul_div && rank == 2 && (cfg->opt & MONO_OPT_INTRINS) && !mini_is_gsharedvt_variable_klass (eclass)) {
return mini_emit_ldelema_2_ins (cfg, eclass, sp [0], sp [1], sp [2]);
}
if (mini_is_gsharedvt_variable_klass (eclass))
element_size = 0;
else
element_size = mono_class_array_element_size (eclass);
addr_method = mono_marshal_get_array_address (rank, element_size);
addr = mono_emit_method_call (cfg, addr_method, sp, NULL);
return addr;
}
static gboolean
mini_class_is_reference (MonoClass *klass)
{
return mini_type_is_reference (m_class_get_byval_arg (klass));
}
MonoInst*
mini_emit_array_store (MonoCompile *cfg, MonoClass *klass, MonoInst **sp, gboolean safety_checks)
{
if (safety_checks && mini_class_is_reference (klass) &&
!(MONO_INS_IS_PCONST_NULL (sp [2]))) {
MonoClass *obj_array = mono_array_class_get_cached (mono_defaults.object_class);
MonoMethod *helper;
MonoInst *iargs [3];
if (sp [0]->type != STACK_OBJ)
return NULL;
if (sp [2]->type != STACK_OBJ)
return NULL;
iargs [2] = sp [2];
iargs [1] = sp [1];
iargs [0] = sp [0];
MonoClass *array_class = sp [0]->klass;
if (array_class && m_class_get_rank (array_class) == 1) {
MonoClass *eclass = m_class_get_element_class (array_class);
if (m_class_is_sealed (eclass)) {
helper = mono_marshal_get_virtual_stelemref (array_class);
/* Make a non-virtual call if possible */
return mono_emit_method_call (cfg, helper, iargs, NULL);
}
}
helper = mono_marshal_get_virtual_stelemref (obj_array);
if (!helper->slot)
mono_class_setup_vtable (obj_array);
g_assert (helper->slot);
return mono_emit_method_call (cfg, helper, iargs, sp [0]);
} else {
MonoInst *ins;
if (mini_is_gsharedvt_variable_klass (klass)) {
MonoInst *addr;
// FIXME-VT: OP_ICONST optimization
addr = mini_emit_ldelema_1_ins (cfg, klass, sp [0], sp [1], TRUE, FALSE);
EMIT_NEW_STORE_MEMBASE_TYPE (cfg, ins, m_class_get_byval_arg (klass), addr->dreg, 0, sp [2]->dreg);
ins->opcode = OP_STOREV_MEMBASE;
} else if (sp [1]->opcode == OP_ICONST) {
int array_reg = sp [0]->dreg;
int index_reg = sp [1]->dreg;
int offset = (mono_class_array_element_size (klass) * sp [1]->inst_c0) + MONO_STRUCT_OFFSET (MonoArray, vector);
if (SIZEOF_REGISTER == 8 && COMPILE_LLVM (cfg) && sp [1]->inst_c0 < 0)
MONO_EMIT_NEW_UNALU (cfg, OP_ZEXT_I4, index_reg, index_reg);
if (safety_checks)
MONO_EMIT_BOUNDS_CHECK (cfg, array_reg, MonoArray, max_length, index_reg);
EMIT_NEW_STORE_MEMBASE_TYPE (cfg, ins, m_class_get_byval_arg (klass), array_reg, offset, sp [2]->dreg);
} else {
MonoInst *addr = mini_emit_ldelema_1_ins (cfg, klass, sp [0], sp [1], safety_checks, FALSE);
if (!mini_debug_options.weak_memory_model && mini_class_is_reference (klass))
mini_emit_memory_barrier (cfg, MONO_MEMORY_BARRIER_REL);
EMIT_NEW_STORE_MEMBASE_TYPE (cfg, ins, m_class_get_byval_arg (klass), addr->dreg, 0, sp [2]->dreg);
if (mini_class_is_reference (klass))
mini_emit_write_barrier (cfg, addr, sp [2]);
}
return ins;
}
}
MonoInst*
mini_emit_memory_barrier (MonoCompile *cfg, int kind)
{
MonoInst *ins = NULL;
MONO_INST_NEW (cfg, ins, OP_MEMORY_BARRIER);
MONO_ADD_INS (cfg->cbb, ins);
ins->backend.memory_barrier_kind = kind;
return ins;
}
/*
* This entry point could be used later for arbitrary method
* redirection.
*/
inline static MonoInst*
mini_redirect_call (MonoCompile *cfg, MonoMethod *method,
MonoMethodSignature *signature, MonoInst **args, MonoInst *this_ins)
{
if (method->klass == mono_defaults.string_class) {
/* managed string allocation support */
if (strcmp (method->name, "FastAllocateString") == 0) {
MonoInst *iargs [2];
MonoVTable *vtable = mono_class_vtable_checked (method->klass, cfg->error);
MonoMethod *managed_alloc = NULL;
mono_error_assert_ok (cfg->error); /* Should not fail since it is System.String */
#ifndef MONO_CROSS_COMPILE
managed_alloc = mono_gc_get_managed_allocator (method->klass, FALSE, FALSE);
#endif
if (!managed_alloc)
return NULL;
EMIT_NEW_VTABLECONST (cfg, iargs [0], vtable);
iargs [1] = args [0];
return mono_emit_method_call (cfg, managed_alloc, iargs, this_ins);
}
}
return NULL;
}
static void
mono_save_args (MonoCompile *cfg, MonoMethodSignature *sig, MonoInst **sp)
{
MonoInst *store, *temp;
int i;
for (i = 0; i < sig->param_count + sig->hasthis; ++i) {
MonoType *argtype = (sig->hasthis && (i == 0)) ? type_from_stack_type (*sp) : sig->params [i - sig->hasthis];
/*
* FIXME: We should use *args++ = sp [0], but that would mean the arg
* would be different than the MonoInst's used to represent arguments, and
* the ldelema implementation can't deal with that.
* Solution: When ldelema is used on an inline argument, create a var for
* it, emit ldelema on that var, and emit the saving code below in
* inline_method () if needed.
*/
temp = mono_compile_create_var (cfg, argtype, OP_LOCAL);
cfg->args [i] = temp;
/* This uses cfg->args [i] which is set by the preceding line */
EMIT_NEW_ARGSTORE (cfg, store, i, *sp);
store->cil_code = sp [0]->cil_code;
sp++;
}
}
#define MONO_INLINE_CALLED_LIMITED_METHODS 1
#define MONO_INLINE_CALLER_LIMITED_METHODS 1
#if (MONO_INLINE_CALLED_LIMITED_METHODS)
static gboolean
check_inline_called_method_name_limit (MonoMethod *called_method)
{
int strncmp_result;
static const char *limit = NULL;
if (limit == NULL) {
const char *limit_string = g_getenv ("MONO_INLINE_CALLED_METHOD_NAME_LIMIT");
if (limit_string != NULL)
limit = limit_string;
else
limit = "";
}
if (limit [0] != '\0') {
char *called_method_name = mono_method_full_name (called_method, TRUE);
strncmp_result = strncmp (called_method_name, limit, strlen (limit));
g_free (called_method_name);
//return (strncmp_result <= 0);
return (strncmp_result == 0);
} else {
return TRUE;
}
}
#endif
#if (MONO_INLINE_CALLER_LIMITED_METHODS)
static gboolean
check_inline_caller_method_name_limit (MonoMethod *caller_method)
{
int strncmp_result;
static const char *limit = NULL;
if (limit == NULL) {
const char *limit_string = g_getenv ("MONO_INLINE_CALLER_METHOD_NAME_LIMIT");
if (limit_string != NULL) {
limit = limit_string;
} else {
limit = "";
}
}
if (limit [0] != '\0') {
char *caller_method_name = mono_method_full_name (caller_method, TRUE);
strncmp_result = strncmp (caller_method_name, limit, strlen (limit));
g_free (caller_method_name);
//return (strncmp_result <= 0);
return (strncmp_result == 0);
} else {
return TRUE;
}
}
#endif
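/*
 * Both checks above are debugging aids: e.g. setting
 * MONO_INLINE_CALLED_METHOD_NAME_LIMIT=System.String (hypothetical value)
 * restricts inlining to callees whose full name starts with that prefix, which
 * helps bisect inlining decisions by method name.
 */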
void
mini_emit_init_rvar (MonoCompile *cfg, int dreg, MonoType *rtype)
{
static double r8_0 = 0.0;
static float r4_0 = 0.0;
MonoInst *ins;
int t;
rtype = mini_get_underlying_type (rtype);
t = rtype->type;
if (m_type_is_byref (rtype)) {
MONO_EMIT_NEW_PCONST (cfg, dreg, NULL);
} else if (t >= MONO_TYPE_BOOLEAN && t <= MONO_TYPE_U4) {
MONO_EMIT_NEW_ICONST (cfg, dreg, 0);
} else if (t == MONO_TYPE_I8 || t == MONO_TYPE_U8) {
MONO_EMIT_NEW_I8CONST (cfg, dreg, 0);
} else if (cfg->r4fp && t == MONO_TYPE_R4) {
MONO_INST_NEW (cfg, ins, OP_R4CONST);
ins->type = STACK_R4;
ins->inst_p0 = (void*)&r4_0;
ins->dreg = dreg;
MONO_ADD_INS (cfg->cbb, ins);
} else if (t == MONO_TYPE_R4 || t == MONO_TYPE_R8) {
MONO_INST_NEW (cfg, ins, OP_R8CONST);
ins->type = STACK_R8;
ins->inst_p0 = (void*)&r8_0;
ins->dreg = dreg;
MONO_ADD_INS (cfg->cbb, ins);
} else if ((t == MONO_TYPE_VALUETYPE) || (t == MONO_TYPE_TYPEDBYREF) ||
((t == MONO_TYPE_GENERICINST) && mono_type_generic_inst_is_valuetype (rtype))) {
MONO_EMIT_NEW_VZERO (cfg, dreg, mono_class_from_mono_type_internal (rtype));
} else if (((t == MONO_TYPE_VAR) || (t == MONO_TYPE_MVAR)) && mini_type_var_is_vt (rtype)) {
MONO_EMIT_NEW_VZERO (cfg, dreg, mono_class_from_mono_type_internal (rtype));
} else {
MONO_EMIT_NEW_PCONST (cfg, dreg, NULL);
}
}
static void
emit_dummy_init_rvar (MonoCompile *cfg, int dreg, MonoType *rtype)
{
int t;
rtype = mini_get_underlying_type (rtype);
t = rtype->type;
if (m_type_is_byref (rtype)) {
MONO_EMIT_NEW_DUMMY_INIT (cfg, dreg, OP_DUMMY_PCONST);
} else if (t >= MONO_TYPE_BOOLEAN && t <= MONO_TYPE_U4) {
MONO_EMIT_NEW_DUMMY_INIT (cfg, dreg, OP_DUMMY_ICONST);
} else if (t == MONO_TYPE_I8 || t == MONO_TYPE_U8) {
MONO_EMIT_NEW_DUMMY_INIT (cfg, dreg, OP_DUMMY_I8CONST);
} else if (cfg->r4fp && t == MONO_TYPE_R4) {
MONO_EMIT_NEW_DUMMY_INIT (cfg, dreg, OP_DUMMY_R4CONST);
} else if (t == MONO_TYPE_R4 || t == MONO_TYPE_R8) {
MONO_EMIT_NEW_DUMMY_INIT (cfg, dreg, OP_DUMMY_R8CONST);
} else if ((t == MONO_TYPE_VALUETYPE) || (t == MONO_TYPE_TYPEDBYREF) ||
((t == MONO_TYPE_GENERICINST) && mono_type_generic_inst_is_valuetype (rtype))) {
MONO_EMIT_NEW_DUMMY_INIT (cfg, dreg, OP_DUMMY_VZERO);
} else if (((t == MONO_TYPE_VAR) || (t == MONO_TYPE_MVAR)) && mini_type_var_is_vt (rtype)) {
MONO_EMIT_NEW_DUMMY_INIT (cfg, dreg, OP_DUMMY_VZERO);
} else {
mini_emit_init_rvar (cfg, dreg, rtype);
}
}
/* If INIT is FALSE, emit dummy initialization statements to keep the IR valid */
static void
emit_init_local (MonoCompile *cfg, int local, MonoType *type, gboolean init)
{
MonoInst *var = cfg->locals [local];
if (COMPILE_SOFT_FLOAT (cfg)) {
MonoInst *store;
int reg = alloc_dreg (cfg, (MonoStackType)var->type);
mini_emit_init_rvar (cfg, reg, type);
EMIT_NEW_LOCSTORE (cfg, store, local, cfg->cbb->last_ins);
} else {
if (init)
mini_emit_init_rvar (cfg, var->dreg, type);
else
emit_dummy_init_rvar (cfg, var->dreg, type);
}
}
int
mini_inline_method (MonoCompile *cfg, MonoMethod *cmethod, MonoMethodSignature *fsig, MonoInst **sp, guchar *ip, guint real_offset, gboolean inline_always)
{
return inline_method (cfg, cmethod, fsig, sp, ip, real_offset, inline_always, NULL);
}
/*
* inline_method:
*
* Return the cost of inlining CMETHOD, or zero if it should not be inlined.
*/
static int
inline_method (MonoCompile *cfg, MonoMethod *cmethod, MonoMethodSignature *fsig, MonoInst **sp,
guchar *ip, guint real_offset, gboolean inline_always, gboolean *is_empty)
{
ERROR_DECL (error);
MonoInst *ins, *rvar = NULL;
MonoMethodHeader *cheader;
MonoBasicBlock *ebblock, *sbblock;
int i, costs;
MonoInst **prev_locals, **prev_args;
MonoType **prev_arg_types;
guint prev_real_offset;
GHashTable *prev_cbb_hash;
MonoBasicBlock **prev_cil_offset_to_bb;
MonoBasicBlock *prev_cbb;
const guchar *prev_ip;
guchar *prev_cil_start;
guint32 prev_cil_offset_to_bb_len;
MonoMethod *prev_current_method;
MonoGenericContext *prev_generic_context;
gboolean ret_var_set, prev_ret_var_set, prev_disable_inline, virtual_ = FALSE;
g_assert (cfg->exception_type == MONO_EXCEPTION_NONE);
#if (MONO_INLINE_CALLED_LIMITED_METHODS)
if ((! inline_always) && ! check_inline_called_method_name_limit (cmethod))
return 0;
#endif
#if (MONO_INLINE_CALLER_LIMITED_METHODS)
if ((! inline_always) && ! check_inline_caller_method_name_limit (cfg->method))
return 0;
#endif
if (!fsig)
fsig = mono_method_signature_internal (cmethod);
if (cfg->verbose_level > 2)
printf ("INLINE START %p %s -> %s\n", cmethod, mono_method_full_name (cfg->method, TRUE), mono_method_full_name (cmethod, TRUE));
if (!cmethod->inline_info) {
cfg->stat_inlineable_methods++;
cmethod->inline_info = 1;
}
if (is_empty)
*is_empty = FALSE;
/* allocate local variables */
cheader = mono_method_get_header_checked (cmethod, error);
if (!cheader) {
if (inline_always) {
mono_cfg_set_exception (cfg, MONO_EXCEPTION_MONO_ERROR);
mono_error_move (cfg->error, error);
} else {
mono_error_cleanup (error);
}
return 0;
}
if (is_empty && cheader->code_size == 1 && cheader->code [0] == CEE_RET)
*is_empty = TRUE;
/* allocate space to store the return value */
if (!MONO_TYPE_IS_VOID (fsig->ret)) {
rvar = mono_compile_create_var (cfg, fsig->ret, OP_LOCAL);
}
prev_locals = cfg->locals;
cfg->locals = (MonoInst **)mono_mempool_alloc0 (cfg->mempool, cheader->num_locals * sizeof (MonoInst*));
for (i = 0; i < cheader->num_locals; ++i)
cfg->locals [i] = mono_compile_create_var (cfg, cheader->locals [i], OP_LOCAL);
/* allocate start and end blocks */
/* This is needed so if the inline is aborted, we can clean up */
NEW_BBLOCK (cfg, sbblock);
sbblock->real_offset = real_offset;
NEW_BBLOCK (cfg, ebblock);
ebblock->block_num = cfg->num_bblocks++;
ebblock->real_offset = real_offset;
prev_args = cfg->args;
prev_arg_types = cfg->arg_types;
prev_ret_var_set = cfg->ret_var_set;
prev_real_offset = cfg->real_offset;
prev_cbb_hash = cfg->cbb_hash;
prev_cil_offset_to_bb = cfg->cil_offset_to_bb;
prev_cil_offset_to_bb_len = cfg->cil_offset_to_bb_len;
prev_cil_start = cfg->cil_start;
prev_ip = cfg->ip;
prev_cbb = cfg->cbb;
prev_current_method = cfg->current_method;
prev_generic_context = cfg->generic_context;
prev_disable_inline = cfg->disable_inline;
cfg->ret_var_set = FALSE;
cfg->inline_depth ++;
if (ip && *ip == CEE_CALLVIRT && !(cmethod->flags & METHOD_ATTRIBUTE_STATIC))
virtual_ = TRUE;
costs = mono_method_to_ir (cfg, cmethod, sbblock, ebblock, rvar, sp, real_offset, virtual_);
ret_var_set = cfg->ret_var_set;
cfg->real_offset = prev_real_offset;
cfg->cbb_hash = prev_cbb_hash;
cfg->cil_offset_to_bb = prev_cil_offset_to_bb;
cfg->cil_offset_to_bb_len = prev_cil_offset_to_bb_len;
cfg->cil_start = prev_cil_start;
cfg->ip = prev_ip;
cfg->locals = prev_locals;
cfg->args = prev_args;
cfg->arg_types = prev_arg_types;
cfg->current_method = prev_current_method;
cfg->generic_context = prev_generic_context;
cfg->ret_var_set = prev_ret_var_set;
cfg->disable_inline = prev_disable_inline;
cfg->inline_depth --;
if ((costs >= 0 && costs < 60) || inline_always || (costs >= 0 && (cmethod->iflags & METHOD_IMPL_ATTRIBUTE_AGGRESSIVE_INLINING))) {
if (cfg->verbose_level > 2)
printf ("INLINE END %s -> %s\n", mono_method_full_name (cfg->method, TRUE), mono_method_full_name (cmethod, TRUE));
mono_error_assert_ok (cfg->error);
cfg->stat_inlined_methods++;
/* always add some code to avoid block split failures */
MONO_INST_NEW (cfg, ins, OP_NOP);
MONO_ADD_INS (prev_cbb, ins);
prev_cbb->next_bb = sbblock;
link_bblock (cfg, prev_cbb, sbblock);
/*
* Get rid of the begin and end bblocks if possible to aid local
* optimizations.
*/
if (prev_cbb->out_count == 1)
mono_merge_basic_blocks (cfg, prev_cbb, sbblock);
if ((prev_cbb->out_count == 1) && (prev_cbb->out_bb [0]->in_count == 1) && (prev_cbb->out_bb [0] != ebblock))
mono_merge_basic_blocks (cfg, prev_cbb, prev_cbb->out_bb [0]);
if ((ebblock->in_count == 1) && ebblock->in_bb [0]->out_count == 1) {
MonoBasicBlock *prev = ebblock->in_bb [0];
if (prev->next_bb == ebblock) {
mono_merge_basic_blocks (cfg, prev, ebblock);
cfg->cbb = prev;
if ((prev_cbb->out_count == 1) && (prev_cbb->out_bb [0]->in_count == 1) && (prev_cbb->out_bb [0] == prev)) {
mono_merge_basic_blocks (cfg, prev_cbb, prev);
cfg->cbb = prev_cbb;
}
} else {
/* There could be a bblock after 'prev', and making 'prev' the current bb could cause problems */
cfg->cbb = ebblock;
}
} else {
/*
* It's possible that the rvar is set in some predecessor bblocks but not in others (#1835).
*/
if (rvar) {
MonoBasicBlock *bb;
for (i = 0; i < ebblock->in_count; ++i) {
bb = ebblock->in_bb [i];
if (bb->last_ins && bb->last_ins->opcode == OP_NOT_REACHED) {
cfg->cbb = bb;
mini_emit_init_rvar (cfg, rvar->dreg, fsig->ret);
}
}
}
cfg->cbb = ebblock;
}
if (rvar) {
/*
* If the inlined method contains only a throw, then the ret var is not
* set, so set it to a dummy value.
*/
if (!ret_var_set)
mini_emit_init_rvar (cfg, rvar->dreg, fsig->ret);
EMIT_NEW_TEMPLOAD (cfg, ins, rvar->inst_c0);
*sp++ = ins;
}
cfg->headers_to_free = g_slist_prepend_mempool (cfg->mempool, cfg->headers_to_free, cheader);
return costs + 1;
} else {
if (cfg->verbose_level > 2) {
const char *msg = mono_error_get_message (cfg->error);
printf ("INLINE ABORTED %s (cost %d) %s\n", mono_method_full_name (cmethod, TRUE), costs, msg ? msg : "");
}
cfg->exception_type = MONO_EXCEPTION_NONE;
clear_cfg_error (cfg);
/* This gets rid of the newly added bblocks */
cfg->cbb = prev_cbb;
}
cfg->headers_to_free = g_slist_prepend_mempool (cfg->mempool, cfg->headers_to_free, cheader);
return 0;
}
/*
* Some of these comments may well be out-of-date.
* Design decisions: we do a single pass over the IL code (and we do bblock
* splitting/merging in the few cases when it's required: a back jump to an IL
* address that was not already seen as bblock starting point).
* Code is validated as we go (full verification is still better left to metadata/verify.c).
* Complex operations are decomposed into simpler ones right away. We need to let the
* arch-specific code peek and poke inside this process somehow (except when the
* optimizations can take advantage of the full semantic info of coarse opcodes).
* All the opcodes of the form opcode.s are 'normalized' to opcode.
* MonoInst->opcode initially is the IL opcode or some simplification of that
* (OP_LOAD, OP_STORE). The arch-specific code may rearrange it to an arch-specific
* opcode with value bigger than OP_LAST.
* At this point the IR can be handed over to an interpreter, a dumb code generator
* or to the optimizing code generator that will translate it to SSA form.
*
* Profile-directed optimizations.
* We may compile with few or no optimizations by default and instrument the code,
* or the user may indicate which methods to optimize the most, either in a config file
* or through repeated runs where the compiler applies the optimizations offline to
* each method and then decides whether they were worth it.
*/
#define CHECK_TYPE(ins) if (!(ins)->type) UNVERIFIED
#define CHECK_STACK(num) if ((sp - stack_start) < (num)) UNVERIFIED
#define CHECK_STACK_OVF() if (((sp - stack_start) + 1) > header->max_stack) UNVERIFIED
#define CHECK_ARG(num) if ((unsigned)(num) >= (unsigned)num_args) UNVERIFIED
#define CHECK_LOCAL(num) if ((unsigned)(num) >= (unsigned)header->num_locals) UNVERIFIED
#define CHECK_OPSIZE(size) if ((size) < 1 || ip + (size) > end) UNVERIFIED
#define CHECK_UNVERIFIABLE(cfg) if (cfg->unverifiable) UNVERIFIED
#define CHECK_TYPELOAD(klass) if (!(klass) || mono_class_has_failure (klass)) TYPE_LOAD_ERROR ((klass))
/* opcode-value offset from the short form (br.s) to the corresponding long form (br) of branch opcodes */
#define BIG_BRANCH_OFFSET 13
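/*
 * ip_in_bb:
 *
 * Return TRUE if IP is still part of BB, i.e. no other bblock starts at IP.
 * Used by the IL pattern matching helpers below to avoid matching across basic
 * block boundaries.
 */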
static gboolean
ip_in_bb (MonoCompile *cfg, MonoBasicBlock *bb, const guint8* ip)
{
MonoBasicBlock *b = cfg->cil_offset_to_bb [ip - cfg->cil_start];
return b == NULL || b == bb;
}
static int
get_basic_blocks (MonoCompile *cfg, MonoMethodHeader* header, guint real_offset, guchar *start, guchar *end, guchar **pos)
{
guchar *ip = start;
guchar *target;
int i;
guint cli_addr;
MonoBasicBlock *bblock;
const MonoOpcode *opcode;
while (ip < end) {
cli_addr = ip - start;
i = mono_opcode_value ((const guint8 **)&ip, end);
if (i < 0)
UNVERIFIED;
opcode = &mono_opcodes [i];
switch (opcode->argument) {
case MonoInlineNone:
ip++;
break;
case MonoInlineString:
case MonoInlineType:
case MonoInlineField:
case MonoInlineMethod:
case MonoInlineTok:
case MonoInlineSig:
case MonoShortInlineR:
case MonoInlineI:
ip += 5;
break;
case MonoInlineVar:
ip += 3;
break;
case MonoShortInlineVar:
case MonoShortInlineI:
ip += 2;
break;
case MonoShortInlineBrTarget:
target = start + cli_addr + 2 + (signed char)ip [1];
GET_BBLOCK (cfg, bblock, target);
ip += 2;
if (ip < end)
GET_BBLOCK (cfg, bblock, ip);
break;
case MonoInlineBrTarget:
target = start + cli_addr + 5 + (gint32)read32 (ip + 1);
GET_BBLOCK (cfg, bblock, target);
ip += 5;
if (ip < end)
GET_BBLOCK (cfg, bblock, ip);
break;
case MonoInlineSwitch: {
guint32 n = read32 (ip + 1);
guint32 j;
ip += 5;
cli_addr += 5 + 4 * n;
target = start + cli_addr;
GET_BBLOCK (cfg, bblock, target);
for (j = 0; j < n; ++j) {
target = start + cli_addr + (gint32)read32 (ip);
GET_BBLOCK (cfg, bblock, target);
ip += 4;
}
break;
}
case MonoInlineR:
case MonoInlineI8:
ip += 9;
break;
default:
g_assert_not_reached ();
}
if (i == CEE_THROW) {
guchar *bb_start = ip - 1;
/* Find the start of the bblock containing the throw */
bblock = NULL;
while ((bb_start >= start) && !bblock) {
bblock = cfg->cil_offset_to_bb [(bb_start) - start];
bb_start --;
}
if (bblock)
bblock->out_of_line = 1;
}
}
return 0;
unverified:
exception_exit:
*pos = ip;
return 1;
}
static MonoMethod *
mini_get_method_allow_open (MonoMethod *m, guint32 token, MonoClass *klass, MonoGenericContext *context, MonoError *error)
{
MonoMethod *method;
error_init (error);
if (m->wrapper_type != MONO_WRAPPER_NONE) {
method = (MonoMethod *)mono_method_get_wrapper_data (m, token);
if (context) {
method = mono_class_inflate_generic_method_checked (method, context, error);
}
} else {
method = mono_get_method_checked (m_class_get_image (m->klass), token, klass, context, error);
}
return method;
}
static MonoMethod *
mini_get_method (MonoCompile *cfg, MonoMethod *m, guint32 token, MonoClass *klass, MonoGenericContext *context)
{
ERROR_DECL (error);
MonoMethod *method = mini_get_method_allow_open (m, token, klass, context, cfg ? cfg->error : error);
if (method && cfg && !cfg->gshared && mono_class_is_open_constructed_type (m_class_get_byval_arg (method->klass))) {
mono_error_set_bad_image (cfg->error, m_class_get_image (cfg->method->klass), "Method with open type while not compiling gshared");
method = NULL;
}
if (!method && !cfg)
mono_error_cleanup (error); /* FIXME don't swallow the error */
return method;
}
static MonoMethodSignature*
mini_get_signature (MonoMethod *method, guint32 token, MonoGenericContext *context, MonoError *error)
{
MonoMethodSignature *fsig;
error_init (error);
if (method->wrapper_type != MONO_WRAPPER_NONE) {
fsig = (MonoMethodSignature *)mono_method_get_wrapper_data (method, token);
} else {
fsig = mono_metadata_parse_signature_checked (m_class_get_image (method->klass), token, error);
return_val_if_nok (error, NULL);
}
if (context) {
fsig = mono_inflate_generic_signature(fsig, context, error);
}
return fsig;
}
/*
* Return the original method if a wrapper is specified. We can only access
* the custom attributes from the original method.
*/
static MonoMethod*
get_original_method (MonoMethod *method)
{
if (method->wrapper_type == MONO_WRAPPER_NONE)
return method;
/* native code (which is like Critical) can call any managed method XXX FIXME XXX to validate all usages */
if (method->wrapper_type == MONO_WRAPPER_NATIVE_TO_MANAGED)
return NULL;
/* in other cases we need to find the original method */
return mono_marshal_method_from_wrapper (method);
}
static guchar*
il_read_op (guchar *ip, guchar *end, guchar first_byte, MonoOpcodeEnum desired_il_op)
// If ip is desired_il_op, return the next ip, else NULL.
{
if (G_LIKELY (ip < end) && G_UNLIKELY (*ip == first_byte)) {
MonoOpcodeEnum il_op = MonoOpcodeEnum_Invalid;
// mono_opcode_value_and_size updates ip, but not in the expected way.
const guchar *temp_ip = ip;
const int size = mono_opcode_value_and_size (&temp_ip, end, &il_op);
return (G_LIKELY (size > 0) && G_UNLIKELY (il_op == desired_il_op)) ? (ip + size) : NULL;
}
return NULL;
}
static guchar*
il_read_op_and_token (guchar *ip, guchar *end, guchar first_byte, MonoOpcodeEnum desired_il_op, guint32 *token)
{
ip = il_read_op (ip, end, first_byte, desired_il_op);
if (ip)
*token = read32 (ip - 4); // could be +1 or +2 from start
return ip;
}
static guchar*
il_read_branch_and_target (guchar *ip, guchar *end, guchar first_byte, MonoOpcodeEnum desired_il_op, int size, guchar **target)
{
ip = il_read_op (ip, end, first_byte, desired_il_op);
if (ip) {
gint32 delta = 0;
switch (size) {
case 1:
delta = (signed char)ip [-1];
break;
case 4:
delta = (gint32)read32 (ip - 4);
break;
}
// FIXME verify it is within the function and start of an instruction.
*target = ip + delta;
return ip;
}
return NULL;
}
#define il_read_brtrue(ip, end, target) (il_read_branch_and_target (ip, end, CEE_BRTRUE, MONO_CEE_BRTRUE, 4, target))
#define il_read_brtrue_s(ip, end, target) (il_read_branch_and_target (ip, end, CEE_BRTRUE_S, MONO_CEE_BRTRUE_S, 1, target))
#define il_read_brfalse(ip, end, target) (il_read_branch_and_target (ip, end, CEE_BRFALSE, MONO_CEE_BRFALSE, 4, target))
#define il_read_brfalse_s(ip, end, target) (il_read_branch_and_target (ip, end, CEE_BRFALSE_S, MONO_CEE_BRFALSE_S, 1, target))
#define il_read_dup(ip, end) (il_read_op (ip, end, CEE_DUP, MONO_CEE_DUP))
#define il_read_newobj(ip, end, token) (il_read_op_and_token (ip, end, CEE_NEW_OBJ, MONO_CEE_NEWOBJ, token))
#define il_read_ldtoken(ip, end, token) (il_read_op_and_token (ip, end, CEE_LDTOKEN, MONO_CEE_LDTOKEN, token))
#define il_read_call(ip, end, token) (il_read_op_and_token (ip, end, CEE_CALL, MONO_CEE_CALL, token))
#define il_read_callvirt(ip, end, token) (il_read_op_and_token (ip, end, CEE_CALLVIRT, MONO_CEE_CALLVIRT, token))
#define il_read_initobj(ip, end, token) (il_read_op_and_token (ip, end, CEE_PREFIX1, MONO_CEE_INITOBJ, token))
#define il_read_constrained(ip, end, token) (il_read_op_and_token (ip, end, CEE_PREFIX1, MONO_CEE_CONSTRAINED_, token))
#define il_read_unbox_any(ip, end, token) (il_read_op_and_token (ip, end, CEE_UNBOX_ANY, MONO_CEE_UNBOX_ANY, token))
/*
* Check that the IL instructions at ip are the array initialization
* sequence and return the pointer to the data and the size.
*/
static const char*
initialize_array_data (MonoCompile *cfg, MonoMethod *method, gboolean aot, guchar *ip,
guchar *end, MonoClass *klass, guint32 len, int *out_size,
guint32 *out_field_token, MonoOpcodeEnum *il_op, guchar **next_ip)
{
/*
* newarr[System.Int32]
* dup
* ldtoken field valuetype ...
* call void class [mscorlib]System.Runtime.CompilerServices.RuntimeHelpers::InitializeArray(class [mscorlib]System.Array, valuetype [mscorlib]System.RuntimeFieldHandle)
*/
guint32 token;
guint32 field_token;
if ((ip = il_read_dup (ip, end))
&& ip_in_bb (cfg, cfg->cbb, ip)
&& (ip = il_read_ldtoken (ip, end, &field_token))
&& IS_FIELD_DEF (field_token)
&& ip_in_bb (cfg, cfg->cbb, ip)
&& (ip = il_read_call (ip, end, &token))) {
ERROR_DECL (error);
guint32 rva;
const char *data_ptr;
int size = 0;
MonoMethod *cmethod;
MonoClass *dummy_class;
MonoClassField *field = mono_field_from_token_checked (m_class_get_image (method->klass), field_token, &dummy_class, NULL, error);
int dummy_align;
if (!field) {
mono_error_cleanup (error); /* FIXME don't swallow the error */
return NULL;
}
*out_field_token = field_token;
cmethod = mini_get_method (NULL, method, token, NULL, NULL);
if (!cmethod)
return NULL;
if (strcmp (cmethod->name, "InitializeArray") || strcmp (m_class_get_name (cmethod->klass), "RuntimeHelpers") || m_class_get_image (cmethod->klass) != mono_defaults.corlib)
return NULL;
switch (mini_get_underlying_type (m_class_get_byval_arg (klass))->type) {
case MONO_TYPE_I1:
case MONO_TYPE_U1:
size = 1; break;
/* we need to swap on big endian, so punt. Should we handle R4 and R8 as well? */
#if TARGET_BYTE_ORDER == G_LITTLE_ENDIAN
case MONO_TYPE_I2:
case MONO_TYPE_U2:
size = 2; break;
case MONO_TYPE_I4:
case MONO_TYPE_U4:
case MONO_TYPE_R4:
size = 4; break;
case MONO_TYPE_R8:
case MONO_TYPE_I8:
case MONO_TYPE_U8:
size = 8; break;
#endif
default:
return NULL;
}
size *= len;
if (size > mono_type_size (field->type, &dummy_align))
return NULL;
*out_size = size;
/*g_print ("optimized in %s: size: %d, numelems: %d\n", method->name, size, newarr->inst_newa_len->inst_c0);*/
MonoImage *method_klass_image = m_class_get_image (method->klass);
if (!image_is_dynamic (method_klass_image)) {
guint32 field_index = mono_metadata_token_index (field_token);
mono_metadata_field_info (method_klass_image, field_index - 1, NULL, &rva, NULL);
data_ptr = mono_image_rva_map (method_klass_image, rva);
/*g_print ("field: 0x%08x, rva: %d, rva_ptr: %p\n", read32 (ip + 2), rva, data_ptr);*/
/* for aot code we do the lookup on load */
if (aot && data_ptr)
data_ptr = (const char *)GUINT_TO_POINTER (rva);
} else {
/*FIXME is it possible to AOT a SRE assembly not meant to be saved? */
g_assert (!aot);
data_ptr = mono_field_get_data (field);
}
if (!data_ptr)
return NULL;
*il_op = MONO_CEE_CALL;
*next_ip = ip;
return data_ptr;
}
return NULL;
}
static void
set_exception_type_from_invalid_il (MonoCompile *cfg, MonoMethod *method, guchar *ip)
{
ERROR_DECL (error);
char *method_fname = mono_method_full_name (method, TRUE);
char *method_code;
MonoMethodHeader *header = mono_method_get_header_checked (method, error);
if (!header) {
method_code = g_strdup_printf ("could not parse method body due to %s", mono_error_get_message (error));
mono_error_cleanup (error);
} else if (header->code_size == 0)
method_code = g_strdup ("method body is empty.");
else
method_code = mono_disasm_code_one (NULL, method, ip, NULL);
mono_cfg_set_exception_invalid_program (cfg, g_strdup_printf ("Invalid IL code in %s: %s\n", method_fname, method_code));
g_free (method_fname);
g_free (method_code);
cfg->headers_to_free = g_slist_prepend_mempool (cfg->mempool, cfg->headers_to_free, header);
}
guint32
mono_type_to_stloc_coerce (MonoType *type)
{
if (m_type_is_byref (type))
return 0;
type = mini_get_underlying_type (type);
handle_enum:
switch (type->type) {
case MONO_TYPE_I1:
return OP_ICONV_TO_I1;
case MONO_TYPE_U1:
return OP_ICONV_TO_U1;
case MONO_TYPE_I2:
return OP_ICONV_TO_I2;
case MONO_TYPE_U2:
return OP_ICONV_TO_U2;
case MONO_TYPE_I4:
case MONO_TYPE_U4:
case MONO_TYPE_I:
case MONO_TYPE_U:
case MONO_TYPE_PTR:
case MONO_TYPE_FNPTR:
case MONO_TYPE_CLASS:
case MONO_TYPE_STRING:
case MONO_TYPE_OBJECT:
case MONO_TYPE_SZARRAY:
case MONO_TYPE_ARRAY:
case MONO_TYPE_I8:
case MONO_TYPE_U8:
case MONO_TYPE_R4:
case MONO_TYPE_R8:
case MONO_TYPE_TYPEDBYREF:
case MONO_TYPE_GENERICINST:
return 0;
case MONO_TYPE_VALUETYPE:
if (m_class_is_enumtype (type->data.klass)) {
type = mono_class_enum_basetype_internal (type->data.klass);
goto handle_enum;
}
return 0;
case MONO_TYPE_VAR:
case MONO_TYPE_MVAR: //TODO: I believe we don't need to handle gsharedvt as there won't be a match and, for example, u1 is not covariant to u32
return 0;
default:
g_error ("unknown type 0x%02x in mono_type_to_stloc_coerce", type->type);
}
return -1;
}
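/*
 * Example: a stloc into an int8 local from an I4 stack slot needs
 * OP_ICONV_TO_I1 so the upper bits are truncated; for types that already match
 * the stack representation the function returns 0 and no conversion is
 * emitted.
 */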
static void
emit_stloc_ir (MonoCompile *cfg, MonoInst **sp, MonoMethodHeader *header, int n)
{
MonoInst *ins;
guint32 coerce_op = mono_type_to_stloc_coerce (header->locals [n]);
if (coerce_op) {
if (cfg->cbb->last_ins == sp [0] && sp [0]->opcode == coerce_op) {
if (cfg->verbose_level > 2)
printf ("Found existing coercing is enough for stloc\n");
} else {
MONO_INST_NEW (cfg, ins, coerce_op);
ins->dreg = alloc_ireg (cfg);
ins->sreg1 = sp [0]->dreg;
ins->type = STACK_I4;
ins->klass = mono_class_from_mono_type_internal (header->locals [n]);
MONO_ADD_INS (cfg->cbb, ins);
*sp = mono_decompose_opcode (cfg, ins);
}
}
guint32 opcode = mono_type_to_regmove (cfg, header->locals [n]);
if (!cfg->deopt && (opcode == OP_MOVE) && cfg->cbb->last_ins == sp [0] &&
((sp [0]->opcode == OP_ICONST) || (sp [0]->opcode == OP_I8CONST))) {
/* Optimize reg-reg moves away */
/*
* Can't optimize other opcodes, since sp[0] might point to
* the last ins of a decomposed opcode.
*/
sp [0]->dreg = (cfg)->locals [n]->dreg;
} else {
EMIT_NEW_LOCSTORE (cfg, ins, n, *sp);
}
}
static void
emit_starg_ir (MonoCompile *cfg, MonoInst **sp, int n)
{
MonoInst *ins;
guint32 coerce_op = mono_type_to_stloc_coerce (cfg->arg_types [n]);
if (coerce_op) {
if (cfg->cbb->last_ins == sp [0] && sp [0]->opcode == coerce_op) {
if (cfg->verbose_level > 2)
printf ("Found existing coercing is enough for starg\n");
} else {
MONO_INST_NEW (cfg, ins, coerce_op);
ins->dreg = alloc_ireg (cfg);
ins->sreg1 = sp [0]->dreg;
ins->type = STACK_I4;
ins->klass = mono_class_from_mono_type_internal (cfg->arg_types [n]);
MONO_ADD_INS (cfg->cbb, ins);
*sp = mono_decompose_opcode (cfg, ins);
}
}
EMIT_NEW_ARGSTORE (cfg, ins, n, *sp);
}
/*
* ldloca inhibits many optimizations so try to get rid of it in common
* cases.
*/
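/*
 * Example: 'ldloca.s V; initobj T' (the typical encoding of 'v = default (T)'
 * in C#) is rewritten into a direct zero-initialization of the local, so the
 * local's address is never exposed.
 */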
static guchar *
emit_optimized_ldloca_ir (MonoCompile *cfg, guchar *ip, guchar *end, int local)
{
guint32 token;
MonoClass *klass;
MonoType *type;
guchar *start = ip;
if ((ip = il_read_initobj (ip, end, &token)) && ip_in_bb (cfg, cfg->cbb, start + 1)) {
/* From the INITOBJ case */
klass = mini_get_class (cfg->current_method, token, cfg->generic_context);
CHECK_TYPELOAD (klass);
type = mini_get_underlying_type (m_class_get_byval_arg (klass));
emit_init_local (cfg, local, type, TRUE);
return ip;
}
exception_exit:
return NULL;
}
static MonoInst*
handle_call_res_devirt (MonoCompile *cfg, MonoMethod *cmethod, MonoInst *call_res)
{
/*
* Devirt EqualityComparer.Default.Equals () calls for some types.
* The corefx code expects these calls to be devirtualized.
* This depends on the implementation of EqualityComparer.Default, which is
* in mcs/class/referencesource/mscorlib/system/collections/generic/equalitycomparer.cs
*/
if (m_class_get_image (cmethod->klass) == mono_defaults.corlib &&
!strcmp (m_class_get_name (cmethod->klass), "EqualityComparer`1") &&
!strcmp (cmethod->name, "get_Default")) {
MonoType *param_type = mono_class_get_generic_class (cmethod->klass)->context.class_inst->type_argv [0];
MonoClass *inst;
MonoGenericContext ctx;
ERROR_DECL (error);
memset (&ctx, 0, sizeof (ctx));
MonoType *args [ ] = { param_type };
ctx.class_inst = mono_metadata_get_generic_inst (1, args);
inst = mono_class_inflate_generic_class_checked (mono_class_get_iequatable_class (), &ctx, error);
mono_error_assert_ok (error);
/* EqualityComparer<T>.Default returns specific types depending on T */
// FIXME: Add more
/* 1. Implements IEquatable<T> */
/*
* Can't use this for string/byte as it might use a different comparer:
*
* // Specialize type byte for performance reasons
* if (t == typeof(byte)) {
* return (EqualityComparer<T>)(object)(new ByteEqualityComparer());
* }
* #if MOBILE
* // Breaks .net serialization compatibility
* if (t == typeof (string))
* return (EqualityComparer<T>)(object)new InternalStringComparer ();
* #endif
*/
if (mono_class_is_assignable_from_internal (inst, mono_class_from_mono_type_internal (param_type)) && param_type->type != MONO_TYPE_U1 && param_type->type != MONO_TYPE_STRING) {
MonoInst *typed_objref;
MonoClass *gcomparer_inst;
memset (&ctx, 0, sizeof (ctx));
args [0] = param_type;
ctx.class_inst = mono_metadata_get_generic_inst (1, args);
MonoClass *gcomparer = mono_class_get_geqcomparer_class ();
g_assert (gcomparer);
gcomparer_inst = mono_class_inflate_generic_class_checked (gcomparer, &ctx, error);
if (is_ok (error)) {
MONO_INST_NEW (cfg, typed_objref, OP_TYPED_OBJREF);
typed_objref->type = STACK_OBJ;
typed_objref->dreg = alloc_ireg_ref (cfg);
typed_objref->sreg1 = call_res->dreg;
typed_objref->klass = gcomparer_inst;
MONO_ADD_INS (cfg->cbb, typed_objref);
call_res = typed_objref;
/* Force decompose */
cfg->flags |= MONO_CFG_NEEDS_DECOMPOSE;
cfg->cbb->needs_decompose = TRUE;
}
}
}
return call_res;
}
static gboolean
is_exception_class (MonoClass *klass)
{
if (G_LIKELY (m_class_get_supertypes (klass)))
return mono_class_has_parent_fast (klass, mono_defaults.exception_class);
while (klass) {
if (klass == mono_defaults.exception_class)
return TRUE;
klass = m_class_get_parent (klass);
}
return FALSE;
}
/*
* is_jit_optimizer_disabled:
*
* Determine whether M's assembly has a DebuggableAttribute with the
* IsJITOptimizerDisabled flag set.
*/
static gboolean
is_jit_optimizer_disabled (MonoMethod *m)
{
MonoAssembly *ass = m_class_get_image (m->klass)->assembly;
g_assert (ass);
if (ass->jit_optimizer_disabled_inited)
return ass->jit_optimizer_disabled;
return mono_assembly_is_jit_optimizer_disabled (ass);
}
gboolean
mono_is_supported_tailcall_helper (gboolean value, const char *svalue)
{
if (!value)
mono_tailcall_print ("%s %s\n", __func__, svalue);
return value;
}
static gboolean
mono_is_not_supported_tailcall_helper (gboolean value, const char *svalue, MonoMethod *method, MonoMethod *cmethod)
{
// Return value, printing if it inhibits tailcall.
if (value && mono_tailcall_print_enabled ()) {
const char *lparen = strchr (svalue, ' ') ? "(" : "";
const char *rparen = *lparen ? ")" : "";
mono_tailcall_print ("%s %s -> %s %s%s%s:%d\n", __func__, method->name, cmethod->name, lparen, svalue, rparen, value);
}
return value;
}
#define IS_NOT_SUPPORTED_TAILCALL(x) (mono_is_not_supported_tailcall_helper((x), #x, method, cmethod))
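/*
 * Example: when cfg->gsharedvt blocks a tailcall and tailcall printing is
 * enabled, the helper above logs something like
 * "mono_is_not_supported_tailcall_helper <caller> -> <callee> cfg->gsharedvt:1"
 * (parentheses are added only around conditions containing spaces).
 */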
static gboolean
is_supported_tailcall (MonoCompile *cfg, const guint8 *ip, MonoMethod *method, MonoMethod *cmethod, MonoMethodSignature *fsig,
gboolean virtual_, gboolean extra_arg, gboolean *ptailcall_calli)
{
// Some checks apply to "regular", some to "calli", some to both.
// To ease burden on caller, always compute regular and calli.
gboolean tailcall = TRUE;
gboolean tailcall_calli = TRUE;
if (IS_NOT_SUPPORTED_TAILCALL (virtual_ && !cfg->backend->have_op_tailcall_membase))
tailcall = FALSE;
if (IS_NOT_SUPPORTED_TAILCALL (!cfg->backend->have_op_tailcall_reg))
tailcall_calli = FALSE;
if (!tailcall && !tailcall_calli)
goto exit;
// FIXME in calli, there is no type for the this parameter,
// so we assume it might be valuetype; in future we should issue a range
// check, so rule out pointing to frame (for other reference parameters also)
if ( IS_NOT_SUPPORTED_TAILCALL (cmethod && fsig->hasthis && m_class_is_valuetype (cmethod->klass)) // This might point to the current method's stack. Emit range check?
|| IS_NOT_SUPPORTED_TAILCALL (cmethod && (cmethod->flags & METHOD_ATTRIBUTE_PINVOKE_IMPL))
|| IS_NOT_SUPPORTED_TAILCALL (fsig->pinvoke) // i.e. if !cmethod (calli)
|| IS_NOT_SUPPORTED_TAILCALL (cfg->method->save_lmf)
|| IS_NOT_SUPPORTED_TAILCALL (!cmethod && fsig->hasthis) // FIXME could be valuetype to current frame; range check
|| IS_NOT_SUPPORTED_TAILCALL (cmethod && cmethod->wrapper_type && cmethod->wrapper_type != MONO_WRAPPER_DYNAMIC_METHOD)
// http://www.mono-project.com/docs/advanced/runtime/docs/generic-sharing/
//
// 1. Non-generic non-static methods of reference types have access to the
// RGCTX via the "this" argument (this->vtable->rgctx).
// 2. (a) Non-generic static methods of reference types and (b) non-generic methods
// of value types need to be passed a pointer to the caller's class's VTable in the MONO_ARCH_RGCTX_REG register.
// 3. Generic methods need to be passed a pointer to the MRGCTX in the MONO_ARCH_RGCTX_REG register
//
// That is what vtable_arg is here (always?).
//
// Passing vtable_arg uses (requires?) a volatile non-parameter register,
// such as AMD64 rax, r10, r11, or the return register on many architectures.
// ARM32 does not always clearly have such a register. ARM32's return register
// is a parameter register.
// iPhone could use r9 except on old systems. iPhone/ARM32 is not particularly
// important. Linux/arm32 is less clear.
// ARM32's scratch r12 might work but only with much collateral change.
//
// Imagine F1 calls F2, and F2 tailcalls F3.
// F2 and F3 are managed. F1 is native.
// Without a tailcall, F2 can save and restore everything needed for F1.
// However if the extra parameter were in a non-volatile, such as ARM32 V5/R8,
// F3 cannot easily restore it for F1 in the current scheme, where the extra
// parameter is not merely an extra parameter but is passed "outside of the
// ABI".
//
// If all native to managed transitions are intercepted and wrapped (w/o tailcall),
// then they can preserve this register and the rest of the managed callgraph
// treat it as volatile.
//
// Interface method dispatch has the same problem (imt_arg).
|| IS_NOT_SUPPORTED_TAILCALL (extra_arg && !cfg->backend->have_volatile_non_param_register)
|| IS_NOT_SUPPORTED_TAILCALL (cfg->gsharedvt)
) {
tailcall_calli = FALSE;
tailcall = FALSE;
goto exit;
}
for (int i = 0; i < fsig->param_count; ++i) {
if (IS_NOT_SUPPORTED_TAILCALL (m_type_is_byref (fsig->params [i]) || fsig->params [i]->type == MONO_TYPE_PTR || fsig->params [i]->type == MONO_TYPE_FNPTR)) {
tailcall_calli = FALSE;
tailcall = FALSE; // These can point to the current method's stack. Emit range check?
goto exit;
}
}
MonoMethodSignature *caller_signature;
MonoMethodSignature *callee_signature;
caller_signature = mono_method_signature_internal (method);
callee_signature = cmethod ? mono_method_signature_internal (cmethod) : fsig;
g_assert (caller_signature);
g_assert (callee_signature);
// Require an exact match on return type due to various conversions in emit_move_return_value that would be skipped.
// The main troublesome conversions are double <=> float.
// CoreCLR allows some conversions here, such as integer truncation.
// As well I <=> I[48] and U <=> U[48] would be ok, for matching size.
if (IS_NOT_SUPPORTED_TAILCALL (mini_get_underlying_type (caller_signature->ret)->type != mini_get_underlying_type (callee_signature->ret)->type)
|| IS_NOT_SUPPORTED_TAILCALL (!mono_arch_tailcall_supported (cfg, caller_signature, callee_signature, virtual_))) {
tailcall_calli = FALSE;
tailcall = FALSE;
goto exit;
}
/* Debugging support */
#if 0
if (!mono_debug_count ()) {
tailcall_calli = FALSE;
tailcall = FALSE;
goto exit;
}
#endif
// See check_sp in mini_emit_calli_full.
if (tailcall_calli && IS_NOT_SUPPORTED_TAILCALL (mini_should_check_stack_pointer (cfg)))
tailcall_calli = FALSE;
exit:
mono_tailcall_print ("tail.%s %s -> %s tailcall:%d tailcall_calli:%d gshared:%d extra_arg:%d virtual_:%d\n",
mono_opcode_name (*ip), method->name, cmethod ? cmethod->name : "calli", tailcall, tailcall_calli,
cfg->gshared, extra_arg, virtual_);
*ptailcall_calli = tailcall_calli;
return tailcall;
}
/*
* is_addressable_valuetype_load
*
* Returns TRUE if the preceding load can be performed without an extra copy, given the next instruction IP and the type LDTYPE of the value being loaded.
*/
static gboolean
is_addressable_valuetype_load (MonoCompile* cfg, guint8* ip, MonoType* ldtype)
{
/* Avoid loading a struct just to load one of its fields */
gboolean is_load_instruction = (*ip == CEE_LDFLD);
gboolean is_in_previous_bb = ip_in_bb(cfg, cfg->cbb, ip);
gboolean is_struct = MONO_TYPE_ISSTRUCT(ldtype);
return is_load_instruction && is_in_previous_bb && is_struct;
}
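/*
 * Example: in a sequence like 'ldloc V; ldfld F' where V is a struct, the
 * field can be read directly through the value's address instead of copying
 * the whole struct onto the stack first.
 */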
/*
* handle_ctor_call:
*
* Handle calls made to ctors from NEWOBJ opcodes.
*/
static void
handle_ctor_call (MonoCompile *cfg, MonoMethod *cmethod, MonoMethodSignature *fsig, int context_used,
MonoInst **sp, guint8 *ip, int *inline_costs)
{
MonoInst *vtable_arg = NULL, *callvirt_this_arg = NULL, *ins;
if (cmethod && (ins = mini_emit_inst_for_ctor (cfg, cmethod, fsig, sp))) {
g_assert (MONO_TYPE_IS_VOID (fsig->ret));
CHECK_CFG_EXCEPTION;
return;
}
if (mono_class_generic_sharing_enabled (cmethod->klass) && mono_method_is_generic_sharable (cmethod, TRUE)) {
MonoRgctxAccess access = mini_get_rgctx_access_for_method (cmethod);
if (access == MONO_RGCTX_ACCESS_MRGCTX) {
mono_class_vtable_checked (cmethod->klass, cfg->error);
CHECK_CFG_ERROR;
CHECK_TYPELOAD (cmethod->klass);
vtable_arg = emit_get_rgctx_method (cfg, context_used,
cmethod, MONO_RGCTX_INFO_METHOD_RGCTX);
} else if (access == MONO_RGCTX_ACCESS_VTABLE) {
vtable_arg = mini_emit_get_rgctx_klass (cfg, context_used,
cmethod->klass, MONO_RGCTX_INFO_VTABLE);
CHECK_CFG_ERROR;
CHECK_TYPELOAD (cmethod->klass);
} else {
g_assert (access == MONO_RGCTX_ACCESS_THIS);
}
}
/* Avoid virtual calls to ctors if possible */
if ((cfg->opt & MONO_OPT_INLINE) && cmethod && !context_used && !vtable_arg &&
mono_method_check_inlining (cfg, cmethod) &&
!mono_class_is_subclass_of_internal (cmethod->klass, mono_defaults.exception_class, FALSE)) {
int costs;
if ((costs = inline_method (cfg, cmethod, fsig, sp, ip, cfg->real_offset, FALSE, NULL))) {
cfg->real_offset += 5;
*inline_costs += costs - 5;
} else {
INLINE_FAILURE ("inline failure");
// FIXME-VT: Clean this up
if (cfg->gsharedvt && mini_is_gsharedvt_signature (fsig))
GSHAREDVT_FAILURE(*ip);
mini_emit_method_call_full (cfg, cmethod, fsig, FALSE, sp, callvirt_this_arg, NULL, NULL);
}
} else if (cfg->gsharedvt && mini_is_gsharedvt_signature (fsig)) {
MonoInst *addr;
addr = emit_get_rgctx_gsharedvt_call (cfg, context_used, fsig, cmethod, MONO_RGCTX_INFO_METHOD_GSHAREDVT_OUT_TRAMPOLINE);
if (cfg->llvm_only) {
// FIXME: Avoid initializing vtable_arg
mini_emit_llvmonly_calli (cfg, fsig, sp, addr);
} else {
mini_emit_calli (cfg, fsig, sp, addr, NULL, vtable_arg);
}
} else if (context_used &&
((!mono_method_is_generic_sharable_full (cmethod, TRUE, FALSE, FALSE) ||
!mono_class_generic_sharing_enabled (cmethod->klass)) || cfg->gsharedvt)) {
MonoInst *cmethod_addr;
/* Generic calls made out of gsharedvt methods cannot be patched, so use an indirect call */
if (cfg->llvm_only) {
MonoInst *addr = emit_get_rgctx_method (cfg, context_used, cmethod,
MONO_RGCTX_INFO_METHOD_FTNDESC);
mini_emit_llvmonly_calli (cfg, fsig, sp, addr);
} else {
cmethod_addr = emit_get_rgctx_method (cfg, context_used,
cmethod, MONO_RGCTX_INFO_GENERIC_METHOD_CODE);
mini_emit_calli (cfg, fsig, sp, cmethod_addr, NULL, vtable_arg);
}
} else {
INLINE_FAILURE ("ctor call");
ins = mini_emit_method_call_full (cfg, cmethod, fsig, FALSE, sp,
callvirt_this_arg, NULL, vtable_arg);
}
exception_exit:
mono_error_exit:
return;
}
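
/* Arguments passed from mono_method_to_ir to the handle_* call helpers below. */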
typedef struct {
MonoMethod *method;
gboolean inst_tailcall;
} HandleCallData;
/*
* handle_constrained_call:
*
* Handle constrained calls. Return a MonoInst* representing the call or NULL.
* May overwrite sp [0] and modify the ref_... parameters.
*/
static MonoInst*
handle_constrained_call (MonoCompile *cfg, MonoMethod *cmethod, MonoMethodSignature *fsig, MonoClass *constrained_class, MonoInst **sp,
HandleCallData *cdata, MonoMethod **ref_cmethod, gboolean *ref_virtual, gboolean *ref_emit_widen)
{
MonoInst *ins, *addr;
MonoMethod *method = cdata->method;
gboolean constrained_partial_call = FALSE;
gboolean constrained_is_generic_param =
m_class_get_byval_arg (constrained_class)->type == MONO_TYPE_VAR ||
m_class_get_byval_arg (constrained_class)->type == MONO_TYPE_MVAR;
MonoType *gshared_constraint = NULL;
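/*
* A constrained call on a shared generic parameter: the exact receiver type is unknown at
* compile time, so if the constraint is not a reference type this becomes a 'partial'
* constrained call, handled by the constrained_partial_call paths below.
*/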
if (constrained_is_generic_param && cfg->gshared) {
if (!mini_is_gsharedvt_klass (constrained_class)) {
g_assert (!m_class_is_valuetype (cmethod->klass));
if (!mini_type_is_reference (m_class_get_byval_arg (constrained_class)))
constrained_partial_call = TRUE;
MonoType *t = m_class_get_byval_arg (constrained_class);
MonoGenericParam *gparam = t->data.generic_param;
gshared_constraint = gparam->gshared_constraint;
}
}
if (mini_is_gsharedvt_klass (constrained_class)) {
if ((cmethod->klass != mono_defaults.object_class) && m_class_is_valuetype (constrained_class) && m_class_is_valuetype (cmethod->klass)) {
/* The 'Own method' case below */
} else if (m_class_get_image (cmethod->klass) != mono_defaults.corlib && !mono_class_is_interface (cmethod->klass) && !m_class_is_valuetype (cmethod->klass)) {
/* 'The type parameter is instantiated as a reference type' case below. */
} else {
ins = handle_constrained_gsharedvt_call (cfg, cmethod, fsig, sp, constrained_class, ref_emit_widen);
CHECK_CFG_EXCEPTION;
g_assert (ins);
if (cdata->inst_tailcall) // FIXME
mono_tailcall_print ("missed tailcall constrained_class %s -> %s\n", method->name, cmethod->name);
return ins;
}
}
if (m_method_is_static (cmethod)) {
/* Call to an abstract static method, handled normally */
return NULL;
} else if (constrained_partial_call) {
gboolean need_box = TRUE;
/*
* The receiver is a valuetype, but the exact type is not known at compile time. This means the
* called method is not known at compile time either. The called method could end up being
* one of the methods on the parent classes (object/valuetype/enum), in which case we need
* to box the receiver.
* A simple solution would be to always box and make a normal virtual call, but that would
* be bad performance-wise.
*/
if (mono_class_is_interface (cmethod->klass) && mono_class_is_ginst (cmethod->klass) &&
(cmethod->flags & METHOD_ATTRIBUTE_ABSTRACT)) {
/*
* The parent classes implement no generic interfaces, so the called method will be a vtype method and no boxing is necessary.
* (If the method were not abstract, it would be a default interface method, and we would need to box.)
*/
need_box = FALSE;
}
if (gshared_constraint && MONO_TYPE_IS_PRIMITIVE (gshared_constraint) && cmethod->klass == mono_defaults.object_class &&
!strcmp (cmethod->name, "GetHashCode")) {
/*
* The receiver is constrained to a primitive type or an enum with the same basetype.
* Enum.GetHashCode () returns the hash code of the underlying type (see comments in Enum.cs),
* so the constrained call can be replaced with a normal call to the basetype GetHashCode ()
* method.
*/
MonoClass *gshared_constraint_class = mono_class_from_mono_type_internal (gshared_constraint);
cmethod = get_method_nofail (gshared_constraint_class, cmethod->name, 0, 0);
g_assert (cmethod);
*ref_cmethod = cmethod;
*ref_virtual = FALSE;
if (cfg->verbose_level)
printf (" -> %s\n", mono_method_get_full_name (cmethod));
return NULL;
}
if (!(cmethod->flags & METHOD_ATTRIBUTE_VIRTUAL) && (cmethod->klass == mono_defaults.object_class || cmethod->klass == m_class_get_parent (mono_defaults.enum_class) || cmethod->klass == mono_defaults.enum_class)) {
/* The called method is not virtual, i.e. Object:GetType (); the receiver is a vtype, so it has to be boxed */
EMIT_NEW_LOAD_MEMBASE_TYPE (cfg, ins, m_class_get_byval_arg (constrained_class), sp [0]->dreg, 0);
ins->klass = constrained_class;
sp [0] = mini_emit_box (cfg, ins, constrained_class, mono_class_check_context_used (constrained_class));
CHECK_CFG_EXCEPTION;
} else if (need_box) {
MonoInst *box_type;
MonoBasicBlock *is_ref_bb, *end_bb;
MonoInst *nonbox_call, *addr;
/*
* Determine at runtime whether the called method is defined on object/valuetype/enum, and emit a boxing call
* if needed.
* FIXME: It is possible to inline the called method in a lot of cases, i.e. for T_INT,
* the no-box case goes to a method in Int32, while the box case goes to a method in Enum.
*/
addr = emit_get_rgctx_virt_method (cfg, mono_class_check_context_used (constrained_class), constrained_class, cmethod, MONO_RGCTX_INFO_VIRT_METHOD_CODE);
NEW_BBLOCK (cfg, is_ref_bb);
NEW_BBLOCK (cfg, end_bb);
box_type = emit_get_rgctx_virt_method (cfg, mono_class_check_context_used (constrained_class), constrained_class, cmethod, MONO_RGCTX_INFO_VIRT_METHOD_BOX_TYPE);
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, box_type->dreg, MONO_GSHAREDVT_BOX_TYPE_REF);
MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_IBEQ, is_ref_bb);
/* Non-ref case */
if (cfg->llvm_only)
/* addr is an ftndesc in this case */
nonbox_call = mini_emit_llvmonly_calli (cfg, fsig, sp, addr);
else
nonbox_call = (MonoInst*)mini_emit_calli (cfg, fsig, sp, addr, NULL, NULL);
MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_BR, end_bb);
/* Ref case */
MONO_START_BB (cfg, is_ref_bb);
EMIT_NEW_LOAD_MEMBASE_TYPE (cfg, ins, m_class_get_byval_arg (constrained_class), sp [0]->dreg, 0);
ins->klass = constrained_class;
sp [0] = mini_emit_box (cfg, ins, constrained_class, mono_class_check_context_used (constrained_class));
CHECK_CFG_EXCEPTION;
if (cfg->llvm_only)
ins = mini_emit_llvmonly_calli (cfg, fsig, sp, addr);
else
ins = (MonoInst*)mini_emit_calli (cfg, fsig, sp, addr, NULL, NULL);
MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_BR, end_bb);
MONO_START_BB (cfg, end_bb);
cfg->cbb = end_bb;
nonbox_call->dreg = ins->dreg;
if (cdata->inst_tailcall) // FIXME
mono_tailcall_print ("missed tailcall constrained_partial_need_box %s -> %s\n", method->name, cmethod->name);
return ins;
} else {
g_assert (mono_class_is_interface (cmethod->klass));
addr = emit_get_rgctx_virt_method (cfg, mono_class_check_context_used (constrained_class), constrained_class, cmethod, MONO_RGCTX_INFO_VIRT_METHOD_CODE);
if (cfg->llvm_only)
ins = mini_emit_llvmonly_calli (cfg, fsig, sp, addr);
else
ins = (MonoInst*)mini_emit_calli (cfg, fsig, sp, addr, NULL, NULL);
if (cdata->inst_tailcall) // FIXME
mono_tailcall_print ("missed tailcall constrained_partial %s -> %s\n", method->name, cmethod->name);
return ins;
}
} else if (!m_class_is_valuetype (constrained_class)) {
int dreg = alloc_ireg_ref (cfg);
/*
* The type parameter is instantiated as a reference
* type. We have a managed pointer on the stack, so
* we need to dereference it here.
*/
EMIT_NEW_LOAD_MEMBASE (cfg, ins, OP_LOAD_MEMBASE, dreg, sp [0]->dreg, 0);
ins->type = STACK_OBJ;
sp [0] = ins;
} else if (cmethod->klass == mono_defaults.object_class || cmethod->klass == m_class_get_parent (mono_defaults.enum_class) || cmethod->klass == mono_defaults.enum_class) {
/*
* The type parameter is instantiated as a valuetype,
* but that type doesn't override the method we're
* calling, so we need to box `this'.
*/
EMIT_NEW_LOAD_MEMBASE_TYPE (cfg, ins, m_class_get_byval_arg (constrained_class), sp [0]->dreg, 0);
ins->klass = constrained_class;
sp [0] = mini_emit_box (cfg, ins, constrained_class, mono_class_check_context_used (constrained_class));
CHECK_CFG_EXCEPTION;
} else {
if (cmethod->klass != constrained_class) {
/* Enums/default interface methods */
EMIT_NEW_LOAD_MEMBASE_TYPE (cfg, ins, m_class_get_byval_arg (constrained_class), sp [0]->dreg, 0);
ins->klass = constrained_class;
sp [0] = mini_emit_box (cfg, ins, constrained_class, mono_class_check_context_used (constrained_class));
CHECK_CFG_EXCEPTION;
}
*ref_virtual = FALSE;
}
exception_exit:
return NULL;
}
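
/*
* emit_setret:
*
* Store VAL as the return value of the current method, storing value types through
* cfg->vret_addr when the caller passed a return buffer, with a soft-float fallback
* for R4 returns.
*/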
static void
emit_setret (MonoCompile *cfg, MonoInst *val)
{
MonoType *ret_type = mini_get_underlying_type (mono_method_signature_internal (cfg->method)->ret);
MonoInst *ins;
if (mini_type_to_stind (cfg, ret_type) == CEE_STOBJ) {
MonoInst *ret_addr;
if (!cfg->vret_addr) {
EMIT_NEW_VARSTORE (cfg, ins, cfg->ret, ret_type, val);
} else {
EMIT_NEW_RETLOADA (cfg, ret_addr);
MonoClass *ret_class = mono_class_from_mono_type_internal (ret_type);
if (MONO_CLASS_IS_SIMD (cfg, ret_class))
EMIT_NEW_STORE_MEMBASE (cfg, ins, OP_STOREX_MEMBASE, ret_addr->dreg, 0, val->dreg);
else
EMIT_NEW_STORE_MEMBASE (cfg, ins, OP_STOREV_MEMBASE, ret_addr->dreg, 0, val->dreg);
ins->klass = ret_class;
}
} else {
#ifdef MONO_ARCH_SOFT_FLOAT_FALLBACK
if (COMPILE_SOFT_FLOAT (cfg) && !m_type_is_byref (ret_type) && ret_type->type == MONO_TYPE_R4) {
MonoInst *conv;
MonoInst *iargs [ ] = { val };
conv = mono_emit_jit_icall (cfg, mono_fload_r4_arg, iargs);
mono_arch_emit_setret (cfg, cfg->method, conv);
} else {
mono_arch_emit_setret (cfg, cfg->method, val);
}
#else
mono_arch_emit_setret (cfg, cfg->method, val);
#endif
}
}
/*
* Emit a call to enter the interpreter for methods with filter clauses.
*/
static void
emit_llvmonly_interp_entry (MonoCompile *cfg, MonoMethodHeader *header)
{
MonoInst *ins;
MonoInst **iargs;
MonoMethodSignature *sig = mono_method_signature_internal (cfg->method);
MonoInst *ftndesc;
cfg->interp_in_signatures = g_slist_prepend_mempool (cfg->mempool, cfg->interp_in_signatures, sig);
/*
* Emit a call to the interp entry function. We emit it here instead of the llvm backend since
* calling conventions etc. are easier to handle here. The LLVM backend will only emit the
* entry/exit bblocks.
*/
g_assert (cfg->cbb == cfg->bb_init);
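/*
* Two cases: for gsharedvt signatures with variable-sized arguments, pass the argument
* addresses in a locally allocated buffer to an icall which enters the interpreter
* directly; otherwise call the method's interp entry function through its ftndesc.
*/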
if (cfg->gsharedvt && mini_is_gsharedvt_variable_signature (sig)) {
/*
* Would have to generate a gsharedvt out wrapper which calls the interp entry wrapper, but
* the gsharedvt out wrapper might not exist if the caller is also a gsharedvt method since
* the concrete signature of the call might not exist in the program.
* So transition directly to the interpreter without the wrappers.
*/
MonoInst *args_ins;
MONO_INST_NEW (cfg, ins, OP_LOCALLOC_IMM);
ins->dreg = alloc_preg (cfg);
ins->inst_imm = sig->param_count * sizeof (target_mgreg_t);
MONO_ADD_INS (cfg->cbb, ins);
args_ins = ins;
for (int i = 0; i < sig->hasthis + sig->param_count; ++i) {
MonoInst *arg_addr_ins;
EMIT_NEW_VARLOADA ((cfg), arg_addr_ins, cfg->args [i], cfg->arg_types [i]);
EMIT_NEW_STORE_MEMBASE (cfg, ins, OP_STORE_MEMBASE_REG, args_ins->dreg, i * sizeof (target_mgreg_t), arg_addr_ins->dreg);
}
MonoInst *ret_var = NULL;
MonoInst *ret_arg_ins;
if (!MONO_TYPE_IS_VOID (sig->ret)) {
ret_var = mono_compile_create_var (cfg, sig->ret, OP_LOCAL);
EMIT_NEW_VARLOADA (cfg, ret_arg_ins, ret_var, sig->ret);
} else {
EMIT_NEW_PCONST (cfg, ret_arg_ins, NULL);
}
iargs = g_newa (MonoInst*, 3);
iargs [0] = emit_get_rgctx_method (cfg, -1, cfg->method, MONO_RGCTX_INFO_INTERP_METHOD);
iargs [1] = ret_arg_ins;
iargs [2] = args_ins;
mono_emit_jit_icall_id (cfg, MONO_JIT_ICALL_mini_llvmonly_interp_entry_gsharedvt, iargs);
if (!MONO_TYPE_IS_VOID (sig->ret))
EMIT_NEW_VARLOAD (cfg, ins, ret_var, sig->ret);
else
ins = NULL;
} else {
/* Obtain the interp entry function */
ftndesc = emit_get_rgctx_method (cfg, -1, cfg->method, MONO_RGCTX_INFO_LLVMONLY_INTERP_ENTRY);
/* Call it */
iargs = g_newa (MonoInst*, sig->param_count + 1);
for (int i = 0; i < sig->param_count + sig->hasthis; ++i)
EMIT_NEW_ARGLOAD (cfg, iargs [i], i);
ins = mini_emit_llvmonly_calli (cfg, sig, iargs, ftndesc);
}
/* Do a normal return */
if (cfg->ret) {
emit_setret (cfg, ins);
/*
* Since only bb_entry/bb_exit is emitted if interp_entry_only is set,
* it's possible that the return value becomes an OP_PHI node whose inputs
* are not emitted. Make it volatile to prevent that.
*/
cfg->ret->flags |= MONO_INST_VOLATILE;
}
MONO_INST_NEW (cfg, ins, OP_BR);
ins->inst_target_bb = cfg->bb_exit;
MONO_ADD_INS (cfg->cbb, ins);
link_bblock (cfg, cfg->cbb, cfg->bb_exit);
}
typedef union _MonoOpcodeParameter {
gint32 i32;
gint64 i64;
float f;
double d;
guchar *branch_target;
} MonoOpcodeParameter;
typedef struct _MonoOpcodeInfo {
guint constant : 4; // private
gint pops : 3; // public -1 means variable
gint pushes : 3; // public -1 means variable
} MonoOpcodeInfo;
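
/*
* mono_opcode_decode:
*
* Decode the inline operand of the IL opcode at ip into PARAMETER and return the static
* pop/push info for it. op_size must include the operand, which is read backwards from
* ip + op_size.
*/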
static const MonoOpcodeInfo*
mono_opcode_decode (guchar *ip, guint op_size, MonoOpcodeEnum il_op, MonoOpcodeParameter *parameter)
{
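/*
* Map the symbolic stack behavior names used by opcode.def to net push/pop counts
* (-1 means a variable number) so the OPDEF expansion below can build the info table.
*/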
#define Push0 (0)
#define Pop0 (0)
#define Push1 (1)
#define Pop1 (1)
#define PushI (1)
#define PopI (1)
#define PushI8 (1)
#define PopI8 (1)
#define PushRef (1)
#define PopRef (1)
#define PushR4 (1)
#define PopR4 (1)
#define PushR8 (1)
#define PopR8 (1)
#define VarPush (-1)
#define VarPop (-1)
static const MonoOpcodeInfo mono_opcode_info [ ] = {
#define OPDEF(name, str, pops, pushes, param, param_constant, a, b, c, flow) {param_constant + 1, pops, pushes },
#include "mono/cil/opcode.def"
#undef OPDEF
};
#undef Push0
#undef Pop0
#undef Push1
#undef Pop1
#undef PushI
#undef PopI
#undef PushI8
#undef PopI8
#undef PushRef
#undef PopRef
#undef PushR4
#undef PopR4
#undef PushR8
#undef PopR8
#undef VarPush
#undef VarPop
gint32 delta;
guchar *next_ip = ip + op_size;
const MonoOpcodeInfo *info = &mono_opcode_info [il_op];
switch (mono_opcodes [il_op].argument) {
case MonoInlineNone:
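/* info->constant is stored biased by 1 (see the OPDEF expansion above), so subtracting 1 recovers the implicit operand of opcodes like ldc.i4.<n>. */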
parameter->i32 = (int)info->constant - 1;
break;
case MonoInlineString:
case MonoInlineType:
case MonoInlineField:
case MonoInlineMethod:
case MonoInlineTok:
case MonoInlineSig:
case MonoShortInlineR:
case MonoInlineI:
parameter->i32 = read32 (next_ip - 4);
// FIXME check token type?
break;
case MonoShortInlineI:
parameter->i32 = (signed char)next_ip [-1];
break;
case MonoInlineVar:
parameter->i32 = read16 (next_ip - 2);
break;
case MonoShortInlineVar:
parameter->i32 = next_ip [-1];
break;
case MonoInlineR:
case MonoInlineI8:
parameter->i64 = read64 (next_ip - 8);
break;
case MonoShortInlineBrTarget:
delta = (signed char)next_ip [-1];
goto branch_target;
case MonoInlineBrTarget:
delta = (gint32)read32 (next_ip - 4);
branch_target:
parameter->branch_target = delta + next_ip;
break;
case MonoInlineSwitch: // complicated
break;
default:
g_error ("%s %d %d\n", __func__, il_op, mono_opcodes [il_op].argument);
}
return info;
}
/*
* mono_method_to_ir:
*
* Translate the .NET IL into linear IR.
*
* @start_bblock: if not NULL, the starting basic block, used during inlining.
* @end_bblock: if not NULL, the ending basic block, used during inlining.
* @return_var: if not NULL, the place where the return value is stored, used during inlining.
* @inline_args: if not NULL, contains the arguments to the inline call
* @inline_offset: the real IL offset of the inline call site, or zero when not inlining.
* @is_virtual_call: whether this method is being called as a result of a call to callvirt
*
* This method is used to turn ECMA IL into Mono's internal Linear IR
* representation. It is used both for entire methods and for
* inlining existing methods. In the former case, the @start_bblock,
* @end_bblock, @return_var and @inline_args parameters are all set to NULL,
* and inline_offset is set to zero.
*
* Returns: the inline cost, or -1 if there was an error processing this method.
*/
int
mono_method_to_ir (MonoCompile *cfg, MonoMethod *method, MonoBasicBlock *start_bblock, MonoBasicBlock *end_bblock,
MonoInst *return_var, MonoInst **inline_args,
guint inline_offset, gboolean is_virtual_call)
{
ERROR_DECL (error);
// Buffer to hold parameters to mono_new_array, instead of varargs.
MonoInst *array_new_localalloc_ins = NULL;
MonoInst *ins, **sp, **stack_start;
MonoBasicBlock *tblock = NULL;
MonoBasicBlock *init_localsbb = NULL, *init_localsbb2 = NULL;
MonoSimpleBasicBlock *bb = NULL, *original_bb = NULL;
MonoMethod *method_definition;
MonoInst **arg_array;
MonoMethodHeader *header;
MonoImage *image;
guint32 token, ins_flag;
MonoClass *klass;
MonoClass *constrained_class = NULL;
gboolean save_last_error = FALSE;
guchar *ip, *end, *target, *err_pos;
MonoMethodSignature *sig;
MonoGenericContext *generic_context = NULL;
MonoGenericContainer *generic_container = NULL;
MonoType **param_types;
int i, n, start_new_bblock, dreg;
int num_calls = 0, inline_costs = 0;
guint num_args;
GSList *class_inits = NULL;
gboolean dont_verify, dont_verify_stloc, readonly = FALSE;
int context_used;
gboolean init_locals, seq_points, skip_dead_blocks;
gboolean sym_seq_points = FALSE;
MonoDebugMethodInfo *minfo;
MonoBitSet *seq_point_locs = NULL;
MonoBitSet *seq_point_set_locs = NULL;
const char *ovf_exc = NULL;
gboolean emitted_funccall_seq_point = FALSE;
gboolean detached_before_ret = FALSE;
gboolean ins_has_side_effect;
if (!cfg->disable_inline)
cfg->disable_inline = (method->iflags & METHOD_IMPL_ATTRIBUTE_NOOPTIMIZATION) || is_jit_optimizer_disabled (method);
cfg->current_method = method;
image = m_class_get_image (method->klass);
/* serialization and xdomain stuff may need access to private fields and methods */
dont_verify = FALSE;
dont_verify |= method->wrapper_type == MONO_WRAPPER_MANAGED_TO_NATIVE; /* bug #77896 */
dont_verify |= method->wrapper_type == MONO_WRAPPER_COMINTEROP;
dont_verify |= method->wrapper_type == MONO_WRAPPER_COMINTEROP_INVOKE;
/* still some type unsafety issues in marshal wrappers... (unknown is PtrToStructure) */
dont_verify_stloc = method->wrapper_type == MONO_WRAPPER_MANAGED_TO_NATIVE;
dont_verify_stloc |= method->wrapper_type == MONO_WRAPPER_OTHER;
dont_verify_stloc |= method->wrapper_type == MONO_WRAPPER_NATIVE_TO_MANAGED;
dont_verify_stloc |= method->wrapper_type == MONO_WRAPPER_STELEMREF;
header = mono_method_get_header_checked (method, cfg->error);
if (!header) {
mono_cfg_set_exception (cfg, MONO_EXCEPTION_MONO_ERROR);
goto exception_exit;
} else {
cfg->headers_to_free = g_slist_prepend_mempool (cfg->mempool, cfg->headers_to_free, header);
}
generic_container = mono_method_get_generic_container (method);
sig = mono_method_signature_internal (method);
num_args = sig->hasthis + sig->param_count;
ip = (guchar*)header->code;
cfg->cil_start = ip;
end = ip + header->code_size;
cfg->stat_cil_code_size += header->code_size;
seq_points = cfg->gen_seq_points && cfg->method == method;
if (method->wrapper_type == MONO_WRAPPER_NATIVE_TO_MANAGED) {
/* We could hit a seq point before attaching to the JIT (#8338) */
seq_points = FALSE;
}
if (method->wrapper_type == MONO_WRAPPER_OTHER) {
WrapperInfo *info = mono_marshal_get_wrapper_info (method);
if (info->subtype == WRAPPER_SUBTYPE_INTERP_IN) {
/* We could hit a seq point before attaching to the JIT (#8338) */
seq_points = FALSE;
}
}
if (cfg->prof_coverage) {
if (cfg->compile_aot)
g_error ("Coverage profiling is not supported with AOT.");
INLINE_FAILURE ("coverage profiling");
cfg->coverage_info = mono_profiler_coverage_alloc (cfg->method, header->code_size);
}
if ((cfg->gen_sdb_seq_points && cfg->method == method) || cfg->prof_coverage) {
minfo = mono_debug_lookup_method (method);
if (minfo) {
MonoSymSeqPoint *sps;
int i, n_il_offsets;
mono_debug_get_seq_points (minfo, NULL, NULL, NULL, &sps, &n_il_offsets);
seq_point_locs = mono_bitset_mem_new (mono_mempool_alloc0 (cfg->mempool, mono_bitset_alloc_size (header->code_size, 0)), header->code_size, 0);
seq_point_set_locs = mono_bitset_mem_new (mono_mempool_alloc0 (cfg->mempool, mono_bitset_alloc_size (header->code_size, 0)), header->code_size, 0);
sym_seq_points = TRUE;
for (i = 0; i < n_il_offsets; ++i) {
if (sps [i].il_offset < header->code_size)
mono_bitset_set_fast (seq_point_locs, sps [i].il_offset);
}
g_free (sps);
MonoDebugMethodAsyncInfo* asyncMethod = mono_debug_lookup_method_async_debug_info (method);
if (asyncMethod) {
for (i = 0; asyncMethod != NULL && i < asyncMethod->num_awaits; i++)
{
mono_bitset_set_fast (seq_point_locs, asyncMethod->resume_offsets[i]);
mono_bitset_set_fast (seq_point_locs, asyncMethod->yield_offsets[i]);
}
mono_debug_free_method_async_debug_info (asyncMethod);
}
} else if (!method->wrapper_type && !method->dynamic && mono_debug_image_has_debug_info (m_class_get_image (method->klass))) {
/* Methods without line number info like auto-generated property accessors */
seq_point_locs = mono_bitset_mem_new (mono_mempool_alloc0 (cfg->mempool, mono_bitset_alloc_size (header->code_size, 0)), header->code_size, 0);
seq_point_set_locs = mono_bitset_mem_new (mono_mempool_alloc0 (cfg->mempool, mono_bitset_alloc_size (header->code_size, 0)), header->code_size, 0);
sym_seq_points = TRUE;
}
}
/*
* Methods without init_locals set could cause asserts in various passes
* (#497220). To work around this, we emit dummy initialization opcodes
* (OP_DUMMY_ICONST etc.) which generate no code. These are only supported
* on some platforms.
*/
if (cfg->opt & MONO_OPT_UNSAFE)
init_locals = header->init_locals;
else
init_locals = TRUE;
method_definition = method;
while (method_definition->is_inflated) {
MonoMethodInflated *imethod = (MonoMethodInflated *) method_definition;
method_definition = imethod->declaring;
}
/* SkipVerification is not allowed if core-clr is enabled */
if (!dont_verify && mini_assembly_can_skip_verification (method)) {
dont_verify = TRUE;
dont_verify_stloc = TRUE;
}
if (sig->is_inflated)
generic_context = mono_method_get_context (method);
else if (generic_container)
generic_context = &generic_container->context;
cfg->generic_context = generic_context;
if (!cfg->gshared)
g_assert (!sig->has_type_parameters);
if (sig->generic_param_count && method->wrapper_type == MONO_WRAPPER_NONE) {
g_assert (method->is_inflated);
g_assert (mono_method_get_context (method)->method_inst);
}
if (method->is_inflated && mono_method_get_context (method)->method_inst)
g_assert (sig->generic_param_count);
if (cfg->method == method) {
cfg->real_offset = 0;
} else {
cfg->real_offset = inline_offset;
}
cfg->cil_offset_to_bb = (MonoBasicBlock **)mono_mempool_alloc0 (cfg->mempool, sizeof (MonoBasicBlock*) * header->code_size);
cfg->cil_offset_to_bb_len = header->code_size;
if (cfg->verbose_level > 2)
printf ("method to IR %s\n", mono_method_full_name (method, TRUE));
param_types = (MonoType **)mono_mempool_alloc (cfg->mempool, sizeof (MonoType*) * num_args);
if (sig->hasthis)
param_types [0] = m_class_is_valuetype (method->klass) ? m_class_get_this_arg (method->klass) : m_class_get_byval_arg (method->klass);
for (n = 0; n < sig->param_count; ++n)
param_types [n + sig->hasthis] = sig->params [n];
cfg->arg_types = param_types;
cfg->dont_inline = g_list_prepend (cfg->dont_inline, method);
if (cfg->method == method) {
/* ENTRY BLOCK */
NEW_BBLOCK (cfg, start_bblock);
cfg->bb_entry = start_bblock;
start_bblock->cil_code = NULL;
start_bblock->cil_length = 0;
/* EXIT BLOCK */
NEW_BBLOCK (cfg, end_bblock);
cfg->bb_exit = end_bblock;
end_bblock->cil_code = NULL;
end_bblock->cil_length = 0;
end_bblock->flags |= BB_INDIRECT_JUMP_TARGET;
g_assert (cfg->num_bblocks == 2);
arg_array = cfg->args;
if (header->num_clauses) {
cfg->spvars = g_hash_table_new (NULL, NULL);
cfg->exvars = g_hash_table_new (NULL, NULL);
}
cfg->clause_is_dead = mono_mempool_alloc0 (cfg->mempool, sizeof (gboolean) * header->num_clauses);
/* handle exception clauses */
for (i = 0; i < header->num_clauses; ++i) {
MonoBasicBlock *try_bb;
MonoExceptionClause *clause = &header->clauses [i];
GET_BBLOCK (cfg, try_bb, ip + clause->try_offset);
try_bb->real_offset = clause->try_offset;
try_bb->try_start = TRUE;
GET_BBLOCK (cfg, tblock, ip + clause->handler_offset);
tblock->real_offset = clause->handler_offset;
tblock->flags |= BB_EXCEPTION_HANDLER;
if (clause->flags == MONO_EXCEPTION_CLAUSE_FINALLY)
mono_create_exvar_for_offset (cfg, clause->handler_offset);
/*
* Linking the try block with the EH block hinders inlining, as we won't be able to
* merge the bblocks from inlining, and it produces an artificial hole for no good reason.
*/
if (COMPILE_LLVM (cfg))
link_bblock (cfg, try_bb, tblock);
if (*(ip + clause->handler_offset) == CEE_POP)
tblock->flags |= BB_EXCEPTION_DEAD_OBJ;
if (clause->flags == MONO_EXCEPTION_CLAUSE_FINALLY ||
clause->flags == MONO_EXCEPTION_CLAUSE_FILTER ||
clause->flags == MONO_EXCEPTION_CLAUSE_FAULT) {
MONO_INST_NEW (cfg, ins, OP_START_HANDLER);
MONO_ADD_INS (tblock, ins);
if (seq_points && clause->flags != MONO_EXCEPTION_CLAUSE_FINALLY && clause->flags != MONO_EXCEPTION_CLAUSE_FILTER) {
/* finally clauses already have a seq point */
/* seq points for filter clauses are emitted below */
NEW_SEQ_POINT (cfg, ins, clause->handler_offset, TRUE);
MONO_ADD_INS (tblock, ins);
}
/* todo: is a fault block unsafe to optimize? */
if (clause->flags == MONO_EXCEPTION_CLAUSE_FAULT)
tblock->flags |= BB_EXCEPTION_UNSAFE;
}
/*printf ("clause try IL_%04x to IL_%04x handler %d at IL_%04x to IL_%04x\n", clause->try_offset, clause->try_offset + clause->try_len, clause->flags, clause->handler_offset, clause->handler_offset + clause->handler_len);
while (p < end) {
printf ("%s", mono_disasm_code_one (NULL, method, p, &p));
}*/
/* catch and filter blocks get the exception object on the stack */
if (clause->flags == MONO_EXCEPTION_CLAUSE_NONE ||
clause->flags == MONO_EXCEPTION_CLAUSE_FILTER) {
/* mostly like handle_stack_args (), but just sets the input args */
/* printf ("handling clause at IL_%04x\n", clause->handler_offset); */
tblock->in_scount = 1;
tblock->in_stack = (MonoInst **)mono_mempool_alloc (cfg->mempool, sizeof (MonoInst*));
tblock->in_stack [0] = mono_create_exvar_for_offset (cfg, clause->handler_offset);
cfg->cbb = tblock;
#ifdef MONO_CONTEXT_SET_LLVM_EXC_REG
/* The EH code passes in the exception in a register to both JITted and LLVM compiled code */
if (!cfg->compile_llvm) {
MONO_INST_NEW (cfg, ins, OP_GET_EX_OBJ);
ins->dreg = tblock->in_stack [0]->dreg;
MONO_ADD_INS (tblock, ins);
}
#else
MonoInst *dummy_use;
/*
* Add a dummy use for the exvar so its liveness info will be
* correct.
*/
EMIT_NEW_DUMMY_USE (cfg, dummy_use, tblock->in_stack [0]);
#endif
if (seq_points && clause->flags == MONO_EXCEPTION_CLAUSE_FILTER) {
NEW_SEQ_POINT (cfg, ins, clause->handler_offset, TRUE);
MONO_ADD_INS (tblock, ins);
}
if (clause->flags == MONO_EXCEPTION_CLAUSE_FILTER) {
GET_BBLOCK (cfg, tblock, ip + clause->data.filter_offset);
tblock->flags |= BB_EXCEPTION_HANDLER;
tblock->real_offset = clause->data.filter_offset;
tblock->in_scount = 1;
tblock->in_stack = (MonoInst **)mono_mempool_alloc (cfg->mempool, sizeof (MonoInst*));
/* The filter block shares the exvar with the handler block */
tblock->in_stack [0] = mono_create_exvar_for_offset (cfg, clause->handler_offset);
MONO_INST_NEW (cfg, ins, OP_START_HANDLER);
MONO_ADD_INS (tblock, ins);
}
}
if (clause->flags != MONO_EXCEPTION_CLAUSE_FILTER &&
clause->data.catch_class &&
cfg->gshared &&
mono_class_check_context_used (clause->data.catch_class)) {
/*
* In shared generic code with catch
* clauses containing type variables
* the exception handling code has to
* be able to get to the rgctx.
* Therefore we have to make sure that
* the vtable/mrgctx argument (for
* static or generic methods) or the
* "this" argument (for non-static
* methods) are live.
*/
if ((method->flags & METHOD_ATTRIBUTE_STATIC) ||
mini_method_get_context (method)->method_inst ||
m_class_is_valuetype (method->klass)) {
mono_get_vtable_var (cfg);
} else {
MonoInst *dummy_use;
EMIT_NEW_DUMMY_USE (cfg, dummy_use, arg_array [0]);
}
}
}
} else {
arg_array = g_newa (MonoInst*, num_args);
cfg->cbb = start_bblock;
cfg->args = arg_array;
mono_save_args (cfg, sig, inline_args);
}
if (cfg->method == method && cfg->self_init && cfg->compile_aot && !COMPILE_LLVM (cfg)) {
MonoMethod *wrapper;
MonoInst *args [2];
int idx;
/*
* Emit code to initialize this method by calling the init wrapper emitted by LLVM.
* This is not efficient right now, but it's only used for the methods which fail
* LLVM compilation.
* FIXME: Optimize this
*/
g_assert (!cfg->gshared);
wrapper = mono_marshal_get_aot_init_wrapper (AOT_INIT_METHOD);
/* Emit this into the entry bb so it comes before the GC safe point which depends on an inited GOT */
cfg->cbb = cfg->bb_entry;
idx = mono_aot_get_method_index (cfg->method);
EMIT_NEW_ICONST (cfg, args [0], idx);
/* Dummy */
EMIT_NEW_ICONST (cfg, args [1], 0);
mono_emit_method_call (cfg, wrapper, args, NULL);
}
if (cfg->llvm_only && cfg->interp && cfg->method == method && !cfg->deopt) {
if (header->num_clauses) {
for (int i = 0; i < header->num_clauses; ++i) {
MonoExceptionClause *clause = &header->clauses [i];
/* Finally clauses are checked after the remove_finally pass */
if (clause->flags != MONO_EXCEPTION_CLAUSE_FINALLY)
cfg->interp_entry_only = TRUE;
}
}
}
/* we use a separate basic block for the initialization code */
NEW_BBLOCK (cfg, init_localsbb);
if (cfg->method == method)
cfg->bb_init = init_localsbb;
init_localsbb->real_offset = cfg->real_offset;
start_bblock->next_bb = init_localsbb;
link_bblock (cfg, start_bblock, init_localsbb);
init_localsbb2 = init_localsbb;
cfg->cbb = init_localsbb;
if (cfg->gsharedvt && cfg->method == method) {
MonoGSharedVtMethodInfo *info;
MonoInst *var, *locals_var;
int dreg;
info = (MonoGSharedVtMethodInfo *)mono_mempool_alloc0 (cfg->mempool, sizeof (MonoGSharedVtMethodInfo));
info->method = cfg->method;
info->count_entries = 16;
info->entries = (MonoRuntimeGenericContextInfoTemplate *)mono_mempool_alloc0 (cfg->mempool, sizeof (MonoRuntimeGenericContextInfoTemplate) * info->count_entries);
cfg->gsharedvt_info = info;
var = mono_compile_create_var (cfg, mono_get_int_type (), OP_LOCAL);
/* prevent it from being register allocated */
//var->flags |= MONO_INST_VOLATILE;
cfg->gsharedvt_info_var = var;
ins = emit_get_rgctx_gsharedvt_method (cfg, mini_method_check_context_used (cfg, method), method, info);
MONO_EMIT_NEW_UNALU (cfg, OP_MOVE, var->dreg, ins->dreg);
/* Allocate locals */
locals_var = mono_compile_create_var (cfg, mono_get_int_type (), OP_LOCAL);
/* prevent it from being register allocated */
//locals_var->flags |= MONO_INST_VOLATILE;
cfg->gsharedvt_locals_var = locals_var;
dreg = alloc_ireg (cfg);
MONO_EMIT_NEW_LOAD_MEMBASE_OP (cfg, OP_LOADI4_MEMBASE, dreg, var->dreg, MONO_STRUCT_OFFSET (MonoGSharedVtMethodRuntimeInfo, locals_size));
MONO_INST_NEW (cfg, ins, OP_LOCALLOC);
ins->dreg = locals_var->dreg;
ins->sreg1 = dreg;
MONO_ADD_INS (cfg->cbb, ins);
cfg->gsharedvt_locals_var_ins = ins;
cfg->flags |= MONO_CFG_HAS_ALLOCA;
/*
if (init_locals)
ins->flags |= MONO_INST_INIT;
*/
if (cfg->llvm_only) {
init_localsbb = cfg->cbb;
init_localsbb2 = cfg->cbb;
}
}
if (cfg->deopt) {
/*
* Push an LMFExt frame which points to a MonoMethodILState structure.
*/
emit_push_lmf (cfg);
/* The type doesn't matter, the llvm backend will use the correct type */
MonoInst *il_state_var = mono_compile_create_var (cfg, mono_get_int_type (), OP_LOCAL);
il_state_var->flags |= MONO_INST_VOLATILE;
cfg->il_state_var = il_state_var;
EMIT_NEW_VARLOADA (cfg, ins, cfg->il_state_var, NULL);
int il_state_addr_reg = ins->dreg;
/* il_state->method = method */
MonoInst *method_ins = emit_get_rgctx_method (cfg, -1, cfg->method, MONO_RGCTX_INFO_METHOD);
MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STORE_MEMBASE_REG, il_state_addr_reg, MONO_STRUCT_OFFSET (MonoMethodILState, method), method_ins->dreg);
EMIT_NEW_VARLOADA (cfg, ins, cfg->lmf_var, NULL);
int lmf_reg = ins->dreg;
/* lmf->kind = MONO_LMFEXT_IL_STATE */
MONO_EMIT_NEW_STORE_MEMBASE_IMM (cfg, OP_STOREI4_MEMBASE_IMM, lmf_reg, MONO_STRUCT_OFFSET (MonoLMFExt, kind), MONO_LMFEXT_IL_STATE);
/* lmf->il_state = il_state */
MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STORE_MEMBASE_REG, lmf_reg, MONO_STRUCT_OFFSET (MonoLMFExt, il_state), il_state_addr_reg);
/* emit_get_rgctx_method () might create new bblocks */
if (cfg->llvm_only) {
init_localsbb = cfg->cbb;
init_localsbb2 = cfg->cbb;
}
}
if (cfg->llvm_only && cfg->interp && cfg->method == method) {
if (cfg->interp_entry_only)
emit_llvmonly_interp_entry (cfg, header);
}
/* FIRST CODE BLOCK */
NEW_BBLOCK (cfg, tblock);
tblock->cil_code = ip;
cfg->cbb = tblock;
cfg->ip = ip;
init_localsbb->next_bb = cfg->cbb;
link_bblock (cfg, init_localsbb, cfg->cbb);
ADD_BBLOCK (cfg, tblock);
CHECK_CFG_EXCEPTION;
if (header->code_size == 0)
UNVERIFIED;
if (get_basic_blocks (cfg, header, cfg->real_offset, ip, end, &err_pos)) {
ip = err_pos;
UNVERIFIED;
}
if (cfg->method == method) {
int breakpoint_id = mono_debugger_method_has_breakpoint (method);
if (breakpoint_id) {
MONO_INST_NEW (cfg, ins, OP_BREAK);
MONO_ADD_INS (cfg->cbb, ins);
}
mono_debug_init_method (cfg, cfg->cbb, breakpoint_id);
}
for (n = 0; n < header->num_locals; ++n) {
if (header->locals [n]->type == MONO_TYPE_VOID && !m_type_is_byref (header->locals [n]))
UNVERIFIED;
}
class_inits = NULL;
/* We force the vtable variable here for all shared methods
for the possibility that they might show up in a stack
trace where their exact instantiation is needed. */
if (cfg->gshared && method == cfg->method) {
if ((method->flags & METHOD_ATTRIBUTE_STATIC) ||
mini_method_get_context (method)->method_inst ||
m_class_is_valuetype (method->klass)) {
mono_get_vtable_var (cfg);
} else {
/* FIXME: Is there a better way to do this?
We need the variable live for the duration
of the whole method. */
cfg->args [0]->flags |= MONO_INST_VOLATILE;
}
}
/* add a check for this != NULL to inlined methods */
if (is_virtual_call) {
MonoInst *arg_ins;
//
// This is just a hack to avoid 'this' checks in empty methods which could get inlined
// into finally clauses, which would prevent the removal of empty finally clauses: all
// variables in finally clauses are marked volatile, so the check couldn't be removed.
//
if (!(cfg->llvm_only && m_class_is_valuetype (method->klass) && header->code_size == 1 && header->code [0] == CEE_RET)) {
NEW_ARGLOAD (cfg, arg_ins, 0);
MONO_ADD_INS (cfg->cbb, arg_ins);
MONO_EMIT_NEW_CHECK_THIS (cfg, arg_ins->dreg);
}
}
skip_dead_blocks = !dont_verify;
if (skip_dead_blocks) {
original_bb = bb = mono_basic_block_split (method, cfg->error, header);
CHECK_CFG_ERROR;
g_assert (bb);
}
/* we use a spare stack slot in SWITCH and NEWOBJ and others */
stack_start = sp = (MonoInst **)mono_mempool_alloc0 (cfg->mempool, sizeof (MonoInst*) * (header->max_stack + 1));
ins_flag = 0;
start_new_bblock = 0;
MonoOpcodeEnum il_op; il_op = MonoOpcodeEnum_Invalid;
emit_set_deopt_il_offset (cfg, ip - cfg->cil_start);
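/*
* Main decode loop: one iteration per IL instruction. next_ip is advanced past the
* opcode and its inline operand up front, so the handlers below are free to look ahead.
*/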
for (guchar *next_ip = ip; ip < end; ip = next_ip) {
MonoOpcodeEnum previous_il_op = il_op;
const guchar *tmp_ip = ip;
const int op_size = mono_opcode_value_and_size (&tmp_ip, end, &il_op);
CHECK_OPSIZE (op_size);
next_ip += op_size;
if (cfg->method == method)
cfg->real_offset = ip - header->code;
else
cfg->real_offset = inline_offset;
cfg->ip = ip;
context_used = 0;
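/* start_new_bblock == 2 means tblock was already created by the previous instruction; 1 means the bblock for this offset has to be looked up. */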
if (start_new_bblock) {
cfg->cbb->cil_length = ip - cfg->cbb->cil_code;
if (start_new_bblock == 2) {
g_assert (ip == tblock->cil_code);
} else {
GET_BBLOCK (cfg, tblock, ip);
}
cfg->cbb->next_bb = tblock;
cfg->cbb = tblock;
start_new_bblock = 0;
for (i = 0; i < cfg->cbb->in_scount; ++i) {
if (cfg->verbose_level > 3)
printf ("loading %d from temp %d\n", i, (int)cfg->cbb->in_stack [i]->inst_c0);
EMIT_NEW_TEMPLOAD (cfg, ins, cfg->cbb->in_stack [i]->inst_c0);
*sp++ = ins;
}
if (class_inits)
g_slist_free (class_inits);
class_inits = NULL;
emit_set_deopt_il_offset (cfg, ip - cfg->cil_start);
} else {
if ((tblock = cfg->cil_offset_to_bb [ip - cfg->cil_start]) && (tblock != cfg->cbb)) {
link_bblock (cfg, cfg->cbb, tblock);
if (sp != stack_start) {
handle_stack_args (cfg, stack_start, sp - stack_start);
sp = stack_start;
CHECK_UNVERIFIABLE (cfg);
}
cfg->cbb->next_bb = tblock;
cfg->cbb = tblock;
for (i = 0; i < cfg->cbb->in_scount; ++i) {
if (cfg->verbose_level > 3)
printf ("loading %d from temp %d\n", i, (int)cfg->cbb->in_stack [i]->inst_c0);
EMIT_NEW_TEMPLOAD (cfg, ins, cfg->cbb->in_stack [i]->inst_c0);
*sp++ = ins;
}
g_slist_free (class_inits);
class_inits = NULL;
emit_set_deopt_il_offset (cfg, ip - cfg->cil_start);
}
}
/*
* Methods with AggressiveInline flag could be inlined even if the class has a cctor.
* This might create a branch so emit it in the first code bblock instead of into initlocals_bb.
*/
if (ip - header->code == 0 && cfg->method != method && cfg->compile_aot && (method->iflags & METHOD_IMPL_ATTRIBUTE_AGGRESSIVE_INLINING) && mono_class_needs_cctor_run (method->klass, method)) {
emit_class_init (cfg, method->klass);
}
if (skip_dead_blocks) {
int ip_offset = ip - header->code;
if (ip_offset == bb->end)
bb = bb->next;
if (bb->dead) {
g_assert (op_size > 0); /*The BB formation pass must catch all bad ops*/
if (cfg->verbose_level > 3) printf ("SKIPPING DEAD OP at %x\n", ip_offset);
if (ip_offset + op_size == bb->end) {
MONO_INST_NEW (cfg, ins, OP_NOP);
MONO_ADD_INS (cfg->cbb, ins);
start_new_bblock = 1;
}
continue;
}
}
/*
* Sequence points are points where the debugger can place a breakpoint.
* Currently, we generate these automatically at points where the IL
* stack is empty.
*/
if (seq_points && ((!sym_seq_points && (sp == stack_start)) || (sym_seq_points && mono_bitset_test_fast (seq_point_locs, ip - header->code)))) {
/*
* Make methods interruptable at the beginning, and at the targets of
* backward branches.
* Also, do this at the start of every bblock in methods with clauses too,
* to be able to handle instructions with imprecise control flow like
* throw/endfinally.
* Backward branches are handled at the end of method-to-ir ().
*/
gboolean intr_loc = ip == header->code || (!cfg->cbb->last_ins && cfg->header->num_clauses);
gboolean sym_seq_point = sym_seq_points && mono_bitset_test_fast (seq_point_locs, ip - header->code);
/* Avoid sequence points on empty IL like .volatile */
// FIXME: Enable this
//if (!(cfg->cbb->last_ins && cfg->cbb->last_ins->opcode == OP_SEQ_POINT)) {
NEW_SEQ_POINT (cfg, ins, ip - header->code, intr_loc);
if ((sp != stack_start) && !sym_seq_point)
ins->flags |= MONO_INST_NONEMPTY_STACK;
MONO_ADD_INS (cfg->cbb, ins);
if (sym_seq_points)
mono_bitset_set_fast (seq_point_set_locs, ip - header->code);
if (cfg->prof_coverage) {
guint32 cil_offset = ip - header->code;
gpointer counter = &cfg->coverage_info->data [cil_offset].count;
cfg->coverage_info->data [cil_offset].cil_code = ip;
if (mono_arch_opcode_supported (OP_ATOMIC_ADD_I4)) {
MonoInst *one_ins, *load_ins;
EMIT_NEW_PCONST (cfg, load_ins, counter);
EMIT_NEW_ICONST (cfg, one_ins, 1);
MONO_INST_NEW (cfg, ins, OP_ATOMIC_ADD_I4);
ins->dreg = mono_alloc_ireg (cfg);
ins->inst_basereg = load_ins->dreg;
ins->inst_offset = 0;
ins->sreg2 = one_ins->dreg;
ins->type = STACK_I4;
MONO_ADD_INS (cfg->cbb, ins);
} else {
EMIT_NEW_PCONST (cfg, ins, counter);
MONO_EMIT_NEW_STORE_MEMBASE_IMM (cfg, OP_STORE_MEMBASE_IMM, ins->dreg, 0, 1);
}
}
}
cfg->cbb->real_offset = cfg->real_offset;
if (cfg->verbose_level > 3)
printf ("converting (in B%d: stack: %d) %s", cfg->cbb->block_num, (int)(sp - stack_start), mono_disasm_code_one (NULL, method, ip, NULL));
/*
* This is used to compute BB_HAS_SIDE_EFFECTS, which is used for the elimination of
* the finally clauses generated by foreach, so only IL opcodes which occur in such clauses
* need to set this.
*/
ins_has_side_effect = TRUE;
// Variables shared by CEE_CALLI CEE_CALL CEE_CALLVIRT CEE_JMP.
// Initialize to either what they all need or zero.
gboolean emit_widen = TRUE;
gboolean tailcall = FALSE;
gboolean common_call = FALSE;
MonoInst *keep_this_alive = NULL;
MonoMethod *cmethod = NULL;
MonoMethodSignature *fsig = NULL;
// These are used only in CALL/CALLVIRT but must be initialized also for CALLI,
// since it jumps into CALL/CALLVIRT.
gboolean need_seq_point = FALSE;
gboolean push_res = TRUE;
gboolean skip_ret = FALSE;
gboolean tailcall_remove_ret = FALSE;
// FIXME split 500 lines load/store field into separate file/function.
MonoOpcodeParameter parameter;
const MonoOpcodeInfo* info = mono_opcode_decode (ip, op_size, il_op, &parameter);
g_assert (info);
n = parameter.i32;
token = parameter.i32;
target = parameter.branch_target;
// Check stack size for push/pop except variable cases -- -1 like call/ret/newobj.
const int pushes = info->pushes;
const int pops = info->pops;
if (pushes >= 0 && pops >= 0) {
g_assert (pushes - pops <= 1);
if (pushes - pops == 1)
CHECK_STACK_OVF ();
}
if (pops >= 0)
CHECK_STACK (pops);
switch (il_op) {
case MONO_CEE_NOP:
if (seq_points && !sym_seq_points && sp != stack_start) {
/*
* The C# compiler uses these nops to notify the JIT that it should
* insert seq points.
*/
NEW_SEQ_POINT (cfg, ins, ip - header->code, FALSE);
MONO_ADD_INS (cfg->cbb, ins);
}
if (cfg->keep_cil_nops)
MONO_INST_NEW (cfg, ins, OP_HARD_NOP);
else
MONO_INST_NEW (cfg, ins, OP_NOP);
MONO_ADD_INS (cfg->cbb, ins);
emitted_funccall_seq_point = FALSE;
ins_has_side_effect = FALSE;
break;
case MONO_CEE_BREAK:
if (mini_should_insert_breakpoint (cfg->method)) {
ins = mono_emit_jit_icall (cfg, mono_debugger_agent_user_break, NULL);
} else {
MONO_INST_NEW (cfg, ins, OP_NOP);
MONO_ADD_INS (cfg->cbb, ins);
}
break;
case MONO_CEE_LDARG_0:
case MONO_CEE_LDARG_1:
case MONO_CEE_LDARG_2:
case MONO_CEE_LDARG_3:
case MONO_CEE_LDARG_S:
case MONO_CEE_LDARG:
CHECK_ARG (n);
if (next_ip < end && is_addressable_valuetype_load (cfg, next_ip, cfg->arg_types[n])) {
EMIT_NEW_ARGLOADA (cfg, ins, n);
} else {
EMIT_NEW_ARGLOAD (cfg, ins, n);
}
*sp++ = ins;
break;
case MONO_CEE_LDLOC_0:
case MONO_CEE_LDLOC_1:
case MONO_CEE_LDLOC_2:
case MONO_CEE_LDLOC_3:
case MONO_CEE_LDLOC_S:
case MONO_CEE_LDLOC:
CHECK_LOCAL (n);
if (next_ip < end && is_addressable_valuetype_load (cfg, next_ip, header->locals[n])) {
EMIT_NEW_LOCLOADA (cfg, ins, n);
} else {
EMIT_NEW_LOCLOAD (cfg, ins, n);
}
*sp++ = ins;
break;
case MONO_CEE_STLOC_0:
case MONO_CEE_STLOC_1:
case MONO_CEE_STLOC_2:
case MONO_CEE_STLOC_3:
case MONO_CEE_STLOC_S:
case MONO_CEE_STLOC:
CHECK_LOCAL (n);
--sp;
*sp = convert_value (cfg, header->locals [n], *sp);
if (!dont_verify_stloc && target_type_is_incompatible (cfg, header->locals [n], *sp))
UNVERIFIED;
emit_stloc_ir (cfg, sp, header, n);
inline_costs += 1;
break;
case MONO_CEE_LDARGA_S:
case MONO_CEE_LDARGA:
CHECK_ARG (n);
NEW_ARGLOADA (cfg, ins, n);
MONO_ADD_INS (cfg->cbb, ins);
*sp++ = ins;
break;
case MONO_CEE_STARG_S:
case MONO_CEE_STARG:
--sp;
CHECK_ARG (n);
*sp = convert_value (cfg, param_types [n], *sp);
if (!dont_verify_stloc && target_type_is_incompatible (cfg, param_types [n], *sp))
UNVERIFIED;
emit_starg_ir (cfg, sp, n);
break;
case MONO_CEE_LDLOCA:
case MONO_CEE_LDLOCA_S: {
guchar *tmp_ip;
CHECK_LOCAL (n);
if ((tmp_ip = emit_optimized_ldloca_ir (cfg, next_ip, end, n))) {
next_ip = tmp_ip;
il_op = MONO_CEE_INITOBJ;
inline_costs += 1;
break;
}
ins_has_side_effect = FALSE;
EMIT_NEW_LOCLOADA (cfg, ins, n);
*sp++ = ins;
break;
}
case MONO_CEE_LDNULL:
EMIT_NEW_PCONST (cfg, ins, NULL);
ins->type = STACK_OBJ;
*sp++ = ins;
break;
case MONO_CEE_LDC_I4_M1:
case MONO_CEE_LDC_I4_0:
case MONO_CEE_LDC_I4_1:
case MONO_CEE_LDC_I4_2:
case MONO_CEE_LDC_I4_3:
case MONO_CEE_LDC_I4_4:
case MONO_CEE_LDC_I4_5:
case MONO_CEE_LDC_I4_6:
case MONO_CEE_LDC_I4_7:
case MONO_CEE_LDC_I4_8:
case MONO_CEE_LDC_I4_S:
case MONO_CEE_LDC_I4:
EMIT_NEW_ICONST (cfg, ins, n);
*sp++ = ins;
break;
case MONO_CEE_LDC_I8:
MONO_INST_NEW (cfg, ins, OP_I8CONST);
ins->type = STACK_I8;
ins->dreg = alloc_dreg (cfg, STACK_I8);
ins->inst_l = parameter.i64;
MONO_ADD_INS (cfg->cbb, ins);
*sp++ = ins;
break;
case MONO_CEE_LDC_R4: {
float *f;
gboolean use_aotconst = FALSE;
#ifdef TARGET_POWERPC
/* FIXME: Clean this up */
if (cfg->compile_aot)
use_aotconst = TRUE;
#endif
/* FIXME: we should really allocate this only late in the compilation process */
f = (float *)mono_mem_manager_alloc (cfg->mem_manager, sizeof (float));
if (use_aotconst) {
MonoInst *cons;
int dreg;
EMIT_NEW_AOTCONST (cfg, cons, MONO_PATCH_INFO_R4, f);
dreg = alloc_freg (cfg);
EMIT_NEW_LOAD_MEMBASE (cfg, ins, OP_LOADR4_MEMBASE, dreg, cons->dreg, 0);
ins->type = cfg->r4_stack_type;
} else {
MONO_INST_NEW (cfg, ins, OP_R4CONST);
ins->type = cfg->r4_stack_type;
ins->dreg = alloc_dreg (cfg, STACK_R8);
ins->inst_p0 = f;
MONO_ADD_INS (cfg->cbb, ins);
}
*f = parameter.f;
*sp++ = ins;
break;
}
case MONO_CEE_LDC_R8: {
double *d;
gboolean use_aotconst = FALSE;
#ifdef TARGET_POWERPC
/* FIXME: Clean this up */
if (cfg->compile_aot)
use_aotconst = TRUE;
#endif
/* FIXME: we should really allocate this only late in the compilation process */
d = (double *)mono_mem_manager_alloc (cfg->mem_manager, sizeof (double));
if (use_aotconst) {
MonoInst *cons;
int dreg;
EMIT_NEW_AOTCONST (cfg, cons, MONO_PATCH_INFO_R8, d);
dreg = alloc_freg (cfg);
EMIT_NEW_LOAD_MEMBASE (cfg, ins, OP_LOADR8_MEMBASE, dreg, cons->dreg, 0);
ins->type = STACK_R8;
} else {
MONO_INST_NEW (cfg, ins, OP_R8CONST);
ins->type = STACK_R8;
ins->dreg = alloc_dreg (cfg, STACK_R8);
ins->inst_p0 = d;
MONO_ADD_INS (cfg->cbb, ins);
}
*d = parameter.d;
*sp++ = ins;
break;
}
case MONO_CEE_DUP: {
MonoInst *temp, *store;
MonoClass *klass;
sp--;
ins = *sp;
klass = ins->klass;
temp = mono_compile_create_var (cfg, type_from_stack_type (ins), OP_LOCAL);
EMIT_NEW_TEMPSTORE (cfg, store, temp->inst_c0, ins);
EMIT_NEW_TEMPLOAD (cfg, ins, temp->inst_c0);
ins->klass = klass;
*sp++ = ins;
EMIT_NEW_TEMPLOAD (cfg, ins, temp->inst_c0);
ins->klass = klass;
*sp++ = ins;
inline_costs += 2;
break;
}
case MONO_CEE_POP:
--sp;
#ifdef TARGET_X86
if (sp [0]->type == STACK_R8)
/* we need to pop the value from the x86 FP stack */
MONO_EMIT_NEW_UNALU (cfg, OP_X86_FPOP, -1, sp [0]->dreg);
#endif
break;
case MONO_CEE_JMP: {
MonoCallInst *call;
int i, n;
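/* jmp transfers control to cmethod with the caller's own arguments, so it is lowered much like a tailcall. */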
INLINE_FAILURE ("jmp");
GSHAREDVT_FAILURE (il_op);
if (stack_start != sp)
UNVERIFIED;
/* FIXME: check the signature matches */
cmethod = mini_get_method (cfg, method, token, NULL, generic_context);
CHECK_CFG_ERROR;
if (cfg->gshared && mono_method_check_context_used (cmethod))
GENERIC_SHARING_FAILURE (CEE_JMP);
mini_profiler_emit_tail_call (cfg, cmethod);
fsig = mono_method_signature_internal (cmethod);
n = fsig->param_count + fsig->hasthis;
if (cfg->llvm_only) {
MonoInst **args;
args = (MonoInst **)mono_mempool_alloc (cfg->mempool, sizeof (MonoInst*) * n);
for (i = 0; i < n; ++i)
EMIT_NEW_ARGLOAD (cfg, args [i], i);
ins = mini_emit_method_call_full (cfg, cmethod, fsig, TRUE, args, NULL, NULL, NULL);
/*
* The code in mono-basic-block.c treats the rest of the code as dead, but we
* have to emit a normal return since llvm expects it.
*/
if (cfg->ret)
emit_setret (cfg, ins);
MONO_INST_NEW (cfg, ins, OP_BR);
ins->inst_target_bb = end_bblock;
MONO_ADD_INS (cfg->cbb, ins);
link_bblock (cfg, cfg->cbb, end_bblock);
break;
} else {
/* Handle tailcalls similarly to calls */
DISABLE_AOT (cfg);
mini_emit_tailcall_parameters (cfg, fsig);
MONO_INST_NEW_CALL (cfg, call, OP_TAILCALL);
call->method = cmethod;
// FIXME Other initialization of the tailcall field occurs after
// it is used. So this is the only "real" use and needs more attention.
call->tailcall = TRUE;
call->signature = fsig;
call->args = (MonoInst **)mono_mempool_alloc (cfg->mempool, sizeof (MonoInst*) * n);
call->inst.inst_p0 = cmethod;
for (i = 0; i < n; ++i)
EMIT_NEW_ARGLOAD (cfg, call->args [i], i);
if (mini_type_is_vtype (mini_get_underlying_type (call->signature->ret)))
call->vret_var = cfg->vret_addr;
mono_arch_emit_call (cfg, call);
cfg->param_area = MAX(cfg->param_area, call->stack_usage);
MONO_ADD_INS (cfg->cbb, (MonoInst*)call);
}
start_new_bblock = 1;
break;
}
case MONO_CEE_CALLI: {
// FIXME tail.calli is problematic because the this pointer's type
// is not in the signature, and we cannot check for a byref valuetype.
MonoInst *addr;
MonoInst *callee = NULL;
// Variables shared by CEE_CALLI and CEE_CALL/CEE_CALLVIRT.
common_call = TRUE; // i.e. skip_ret/push_res/seq_point logic
cmethod = NULL;
gboolean const inst_tailcall = G_UNLIKELY (debug_tailcall_try_all
? (next_ip < end && next_ip [0] == CEE_RET)
: ((ins_flag & MONO_INST_TAILCALL) != 0));
ins = NULL;
//GSHAREDVT_FAILURE (il_op);
CHECK_STACK (1);
--sp;
addr = *sp;
g_assert (addr);
fsig = mini_get_signature (method, token, generic_context, cfg->error);
CHECK_CFG_ERROR;
if (method->dynamic && fsig->pinvoke) {
MonoInst *args [3];
/*
* This is a call through a function pointer using a pinvoke
* signature. Have to create a wrapper and call that instead.
* FIXME: This is very slow, need to create a wrapper at JIT time
* instead based on the signature.
*/
EMIT_NEW_IMAGECONST (cfg, args [0], ((MonoDynamicMethod*)method)->assembly->image);
EMIT_NEW_PCONST (cfg, args [1], fsig);
args [2] = addr;
// FIXME tailcall?
addr = mono_emit_jit_icall (cfg, mono_get_native_calli_wrapper, args);
}
if (!method->dynamic && fsig->pinvoke &&
!method->wrapper_type) {
/* MONO_WRAPPER_DYNAMIC_METHOD dynamic method handled above in the
method->dynamic case; for other wrapper types assume the code knows
what it's doing and added its own GC transitions */
gboolean skip_gc_trans = fsig->suppress_gc_transition;
if (!skip_gc_trans) {
#if 0
fprintf (stderr, "generating wrapper for calli in method %s with wrapper type %s\n", method->name, mono_wrapper_type_to_str (method->wrapper_type));
#endif
/* Call the wrapper that will do the GC transition instead */
MonoMethod *wrapper = mono_marshal_get_native_func_wrapper_indirect (method->klass, fsig, cfg->compile_aot);
fsig = mono_method_signature_internal (wrapper);
n = fsig->param_count - 1; /* wrapper has extra fnptr param */
CHECK_STACK (n);
/* move the args to allow room for 'this' in the first position */
while (n--) {
--sp;
sp [1] = sp [0];
}
sp[0] = addr; /* n+1 args, first arg is the address of the indirect method to call */
g_assert (!fsig->hasthis && !fsig->pinvoke);
ins = mono_emit_method_call (cfg, wrapper, /*args*/sp, NULL);
goto calli_end;
}
}
n = fsig->param_count + fsig->hasthis;
CHECK_STACK (n);
//g_assert (!virtual_ || fsig->hasthis);
sp -= n;
if (!(cfg->method->wrapper_type && cfg->method->wrapper_type != MONO_WRAPPER_DYNAMIC_METHOD) && check_call_signature (cfg, fsig, sp)) {
if (break_on_unverified ())
check_call_signature (cfg, fsig, sp); // Again, step through it.
UNVERIFIED;
}
inline_costs += CALL_COST * MIN(10, num_calls++);
/*
* Making generic calls out of gsharedvt methods.
* This needs to be used for all generic calls, not just ones with a gsharedvt signature, to avoid
* patching gshared method addresses into a gsharedvt method.
*/
if (cfg->gsharedvt && mini_is_gsharedvt_signature (fsig)) {
/*
* We pass the address to the gsharedvt trampoline in the rgctx reg
*/
callee = addr;
g_assert (addr); // Doubles as boolean after tailcall check.
}
inst_tailcall && is_supported_tailcall (cfg, ip, method, NULL, fsig,
FALSE/*virtual irrelevant*/, addr != NULL, &tailcall);
if (save_last_error)
mono_emit_jit_icall (cfg, mono_marshal_clear_last_error, NULL);
if (callee) {
if (method->wrapper_type != MONO_WRAPPER_DELEGATE_INVOKE)
/* Not tested */
GSHAREDVT_FAILURE (il_op);
if (cfg->llvm_only)
// FIXME:
GSHAREDVT_FAILURE (il_op);
addr = emit_get_rgctx_sig (cfg, context_used, fsig, MONO_RGCTX_INFO_SIG_GSHAREDVT_OUT_TRAMPOLINE_CALLI);
ins = (MonoInst*)mini_emit_calli_full (cfg, fsig, sp, addr, NULL, callee, tailcall);
goto calli_end;
}
/* Prevent inlining of methods with indirect calls */
INLINE_FAILURE ("indirect call");
if (addr->opcode == OP_PCONST || addr->opcode == OP_AOTCONST || addr->opcode == OP_GOT_ENTRY) {
MonoJumpInfoType info_type;
gpointer info_data;
/*
* Instead of emitting an indirect call, emit a direct call
* with the contents of the aotconst as the patch info.
*/
if (addr->opcode == OP_PCONST || addr->opcode == OP_AOTCONST) {
info_type = (MonoJumpInfoType)addr->inst_c1;
info_data = addr->inst_p0;
} else {
info_type = (MonoJumpInfoType)addr->inst_right->inst_c1;
info_data = addr->inst_right->inst_left;
}
if (info_type == MONO_PATCH_INFO_ICALL_ADDR) {
// non-JIT icall, mostly builtin, but also user-extensible
tailcall = FALSE;
ins = (MonoInst*)mini_emit_abs_call (cfg, MONO_PATCH_INFO_ICALL_ADDR_CALL, info_data, fsig, sp);
NULLIFY_INS (addr);
goto calli_end;
} else if (info_type == MONO_PATCH_INFO_JIT_ICALL_ADDR
|| info_type == MONO_PATCH_INFO_SPECIFIC_TRAMPOLINE_LAZY_FETCH_ADDR) {
tailcall = FALSE;
ins = (MonoInst*)mini_emit_abs_call (cfg, info_type, info_data, fsig, sp);
NULLIFY_INS (addr);
goto calli_end;
}
}
if (cfg->llvm_only && !(cfg->method->wrapper_type && cfg->method->wrapper_type != MONO_WRAPPER_DYNAMIC_METHOD))
ins = mini_emit_llvmonly_calli (cfg, fsig, sp, addr);
else
ins = (MonoInst*)mini_emit_calli_full (cfg, fsig, sp, addr, NULL, NULL, tailcall);
goto calli_end;
}
case MONO_CEE_CALL:
case MONO_CEE_CALLVIRT: {
MonoInst *addr; addr = NULL;
int array_rank; array_rank = 0;
gboolean virtual_; virtual_ = il_op == MONO_CEE_CALLVIRT;
gboolean pass_imt_from_rgctx; pass_imt_from_rgctx = FALSE;
MonoInst *imt_arg; imt_arg = NULL;
gboolean pass_vtable; pass_vtable = FALSE;
gboolean pass_mrgctx; pass_mrgctx = FALSE;
MonoInst *vtable_arg; vtable_arg = NULL;
gboolean check_this; check_this = FALSE;
gboolean delegate_invoke; delegate_invoke = FALSE;
gboolean direct_icall; direct_icall = FALSE;
gboolean tailcall_calli; tailcall_calli = FALSE;
gboolean noreturn; noreturn = FALSE;
gboolean gshared_static_virtual; gshared_static_virtual = FALSE;
#ifdef TARGET_WASM
gboolean needs_stack_walk; needs_stack_walk = FALSE;
#endif
// Variables shared by CEE_CALLI and CEE_CALL/CEE_CALLVIRT.
common_call = FALSE;
// variables to help in assertions
gboolean called_is_supported_tailcall; called_is_supported_tailcall = FALSE;
MonoMethod *tailcall_method; tailcall_method = NULL;
MonoMethod *tailcall_cmethod; tailcall_cmethod = NULL;
MonoMethodSignature *tailcall_fsig; tailcall_fsig = NULL;
gboolean tailcall_virtual; tailcall_virtual = FALSE;
gboolean tailcall_extra_arg; tailcall_extra_arg = FALSE;
gboolean inst_tailcall; inst_tailcall = G_UNLIKELY (debug_tailcall_try_all
? (next_ip < end && next_ip [0] == CEE_RET)
: ((ins_flag & MONO_INST_TAILCALL) != 0));
ins = NULL;
/* Used to pass arguments to called functions */
HandleCallData cdata;
memset (&cdata, 0, sizeof (HandleCallData));
cmethod = mini_get_method (cfg, method, token, NULL, generic_context);
CHECK_CFG_ERROR;
if (cfg->verbose_level > 3)
printf ("cmethod = %s\n", mono_method_get_full_name (cmethod));
MonoMethod *cil_method; cil_method = cmethod;
if (constrained_class) {
if (m_method_is_static (cil_method) && mini_class_check_context_used (cfg, constrained_class)) {
/* get_constrained_method () doesn't work on the gparams used by generic sharing */
// FIXME: Other configurations
//if (!cfg->gsharedvt)
// GENERIC_SHARING_FAILURE (CEE_CALL);
gshared_static_virtual = TRUE;
} else {
cmethod = get_constrained_method (cfg, image, token, cil_method, constrained_class, generic_context);
CHECK_CFG_ERROR;
if (m_class_is_enumtype (constrained_class) && !strcmp (cmethod->name, "GetHashCode")) {
/* Use the corresponding method from the base type to avoid boxing */
MonoType *base_type = mono_class_enum_basetype_internal (constrained_class);
g_assert (base_type);
constrained_class = mono_class_from_mono_type_internal (base_type);
cmethod = get_method_nofail (constrained_class, cmethod->name, 0, 0);
g_assert (cmethod);
}
}
}
if (!dont_verify && !cfg->skip_visibility) {
MonoMethod *target_method = cil_method;
if (method->is_inflated) {
MonoGenericContainer *container = mono_method_get_generic_container(method_definition);
MonoGenericContext *context = (container != NULL ? &container->context : NULL);
target_method = mini_get_method_allow_open (method, token, NULL, context, cfg->error);
CHECK_CFG_ERROR;
}
if (!mono_method_can_access_method (method_definition, target_method) &&
!mono_method_can_access_method (method, cil_method))
emit_method_access_failure (cfg, method, cil_method);
}
if (cfg->llvm_only && cmethod && method_needs_stack_walk (cfg, cmethod)) {
if (cfg->interp && !cfg->interp_entry_only) {
/* Use the interpreter instead */
cfg->exception_message = g_strdup ("stack walk");
cfg->disable_llvm = TRUE;
}
#ifdef TARGET_WASM
else {
needs_stack_walk = TRUE;
}
#endif
}
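/* A non-virtual call to an abstract method is invalid IL; it only appears in interface code (e.g. default interface methods), where it is treated as callvirt. */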
if (!virtual_ && (cmethod->flags & METHOD_ATTRIBUTE_ABSTRACT) && !gshared_static_virtual) {
if (!mono_class_is_interface (method->klass))
emit_bad_image_failure (cfg, method, cil_method);
else
virtual_ = TRUE;
}
if (!m_class_is_inited (cmethod->klass))
if (!mono_class_init_internal (cmethod->klass))
TYPE_LOAD_ERROR (cmethod->klass);
fsig = mono_method_signature_internal (cmethod);
if (!fsig)
LOAD_ERROR;
if (cmethod->iflags & METHOD_IMPL_ATTRIBUTE_INTERNAL_CALL &&
mini_class_is_system_array (cmethod->klass)) {
array_rank = m_class_get_rank (cmethod->klass);
} else if ((cmethod->iflags & METHOD_IMPL_ATTRIBUTE_INTERNAL_CALL) && direct_icalls_enabled (cfg, cmethod)) {
direct_icall = TRUE;
} else if (fsig->pinvoke) {
if (cmethod->flags & METHOD_ATTRIBUTE_PINVOKE_IMPL) {
/*
* Avoid calling mono_marshal_get_native_wrapper () too early, since it
* might call managed callbacks on netcore.
*/
fsig = mono_metadata_signature_dup_mempool (cfg->mempool, fsig);
fsig->pinvoke = FALSE;
} else {
MonoMethod *wrapper = mono_marshal_get_native_wrapper (cmethod, TRUE, cfg->compile_aot);
fsig = mono_method_signature_internal (wrapper);
}
} else if (constrained_class) {
} else {
fsig = mono_method_get_signature_checked (cmethod, image, token, generic_context, cfg->error);
CHECK_CFG_ERROR;
}
if (cfg->llvm_only && !cfg->method->wrapper_type && (!cmethod || cmethod->is_inflated))
cfg->signatures = g_slist_prepend_mempool (cfg->mempool, cfg->signatures, fsig);
/* See code below */
if (cmethod->klass == mono_defaults.monitor_class && !strcmp (cmethod->name, "Enter") && mono_method_signature_internal (cmethod)->param_count == 1) {
MonoBasicBlock *tbb;
GET_BBLOCK (cfg, tbb, next_ip);
if (tbb->try_start && MONO_REGION_FLAGS(tbb->region) == MONO_EXCEPTION_CLAUSE_FINALLY) {
/*
* We want to extend the try block to cover the call, but we can't do it if the
* call is made directly, since it's followed by an exception check.
*/
direct_icall = FALSE;
}
}
mono_save_token_info (cfg, image, token, cil_method);
if (!(seq_point_locs && mono_bitset_test_fast (seq_point_locs, next_ip - header->code)))
need_seq_point = TRUE;
/* Don't support calls made using type arguments for now */
/*
if (cfg->gsharedvt) {
if (mini_is_gsharedvt_signature (fsig))
GSHAREDVT_FAILURE (il_op);
}
*/
if (cmethod->string_ctor && method->wrapper_type != MONO_WRAPPER_RUNTIME_INVOKE)
g_assert_not_reached ();
n = fsig->param_count + fsig->hasthis;
if (!cfg->gshared && mono_class_is_gtd (cmethod->klass))
UNVERIFIED;
if (!cfg->gshared)
g_assert (!mono_method_check_context_used (cmethod));
CHECK_STACK (n);
//g_assert (!virtual_ || fsig->hasthis);
sp -= n;
if (virtual_ && cmethod && sp [0] && sp [0]->opcode == OP_TYPED_OBJREF) {
ERROR_DECL (error);
MonoMethod *new_cmethod = mono_class_get_virtual_method (sp [0]->klass, cmethod, error);
if (is_ok (error)) {
cmethod = new_cmethod;
virtual_ = FALSE;
} else {
mono_error_cleanup (error);
}
}
if (cmethod && method_does_not_return (cmethod)) {
cfg->cbb->out_of_line = TRUE;
noreturn = TRUE;
}
cdata.method = method;
cdata.inst_tailcall = inst_tailcall;
/*
* We have the `constrained.' prefix opcode.
*/
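/*
 * An illustrative IL shape for this prefix (types hypothetical):
 *   ldarga.s 0
 *   constrained. !T
 *   callvirt instance string [mscorlib]System.Object::ToString ()
 */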
if (constrained_class) {
ins = handle_constrained_call (cfg, cmethod, fsig, constrained_class, sp, &cdata, &cmethod, &virtual_, &emit_widen);
CHECK_CFG_EXCEPTION;
if (!gshared_static_virtual)
constrained_class = NULL;
if (ins)
goto call_end;
}
for (int i = 0; i < fsig->param_count; ++i)
sp [i + fsig->hasthis] = convert_value (cfg, fsig->params [i], sp [i + fsig->hasthis]);
if (check_call_signature (cfg, fsig, sp)) {
if (break_on_unverified ())
check_call_signature (cfg, fsig, sp); // Again, step through it.
UNVERIFIED;
}
if ((m_class_get_parent (cmethod->klass) == mono_defaults.multicastdelegate_class) && !strcmp (cmethod->name, "Invoke"))
delegate_invoke = TRUE;
/*
* Implement a workaround for the inherent races involved in locking:
* Monitor.Enter ()
* try {
* } finally {
* Monitor.Exit ()
* }
* If a thread abort happens between the call to Monitor.Enter () and the start of the
* try block, the Exit () won't be executed, see:
* http://www.bluebytesoftware.com/blog/2007/01/30/MonitorEnterThreadAbortsAndOrphanedLocks.aspx
* To work around this, we extend such try blocks to include the last x bytes
* of the Monitor.Enter () call.
*/
if (cmethod->klass == mono_defaults.monitor_class && !strcmp (cmethod->name, "Enter") && mono_method_signature_internal (cmethod)->param_count == 1) {
MonoBasicBlock *tbb;
GET_BBLOCK (cfg, tbb, next_ip);
/*
* Only extend try blocks with a finally, to avoid catching exceptions thrown
* from Monitor.Enter like ArgumentNullException.
*/
if (tbb->try_start && MONO_REGION_FLAGS(tbb->region) == MONO_EXCEPTION_CLAUSE_FINALLY) {
/* Mark this bblock as needing to be extended */
tbb->extend_try_block = TRUE;
}
}
/* Conversion to a JIT intrinsic */
gboolean ins_type_initialized;
if ((ins = mini_emit_inst_for_method (cfg, cmethod, fsig, sp, &ins_type_initialized))) {
if (!MONO_TYPE_IS_VOID (fsig->ret)) {
if (!ins_type_initialized)
mini_type_to_eval_stack_type ((cfg), fsig->ret, ins);
emit_widen = FALSE;
}
// FIXME This is only missed if in fact the intrinsic involves a call.
if (inst_tailcall) // FIXME
mono_tailcall_print ("missed tailcall intrins %s -> %s\n", method->name, cmethod->name);
goto call_end;
}
CHECK_CFG_ERROR;
/*
* If the callee is a shared method, then its static cctor
* might not get called after the call was patched.
*/
if (cfg->gshared && cmethod->klass != method->klass && mono_class_is_ginst (cmethod->klass) && mono_method_is_generic_sharable (cmethod, TRUE) && mono_class_needs_cctor_run (cmethod->klass, method)) {
emit_class_init (cfg, cmethod->klass);
CHECK_TYPELOAD (cmethod->klass);
}
/* Inlining */
if ((cfg->opt & MONO_OPT_INLINE) && !inst_tailcall &&
(!virtual_ || !(cmethod->flags & METHOD_ATTRIBUTE_VIRTUAL) || MONO_METHOD_IS_FINAL (cmethod)) &&
mono_method_check_inlining (cfg, cmethod)) {
int costs;
gboolean always = FALSE;
gboolean is_empty = FALSE;
if (cmethod->iflags & METHOD_IMPL_ATTRIBUTE_INTERNAL_CALL) {
/* Prevent inlining of methods that call wrappers */
INLINE_FAILURE ("wrapper call");
// FIXME? Does this write to cmethod impact tailcall_supported? Probably not.
// Neither pinvoke nor icall is likely to be tailcalled.
cmethod = mono_marshal_get_native_wrapper (cmethod, TRUE, FALSE);
always = TRUE;
}
costs = inline_method (cfg, cmethod, fsig, sp, ip, cfg->real_offset, always, &is_empty);
if (costs) {
cfg->real_offset += 5;
if (!MONO_TYPE_IS_VOID (fsig->ret))
/* *sp is already set by inline_method */
ins = *sp;
inline_costs += costs;
// FIXME This is missed if the inlinee contains tail calls that
// would work, but not once inlined into caller.
// This match/mismatch could be a factor in inlining.
// i.e. Do not inline if it hurts tailcalls; do inline if it helps
// and/or is neutral, and helps performance using usual heuristics.
// Note that inlining will expose multiple tailcall opportunities
// so the tradeoff is not obvious. If we can tailcall anything
// like desktop, then this factor mostly falls away, except
// that inlining can affect tailcall performance due to
// signature match/mismatch.
if (inst_tailcall) // FIXME
mono_tailcall_print ("missed tailcall inline %s -> %s\n", method->name, cmethod->name);
if (is_empty)
ins_has_side_effect = FALSE;
goto call_end;
}
}
check_method_sharing (cfg, cmethod, &pass_vtable, &pass_mrgctx);
if (cfg->gshared) {
MonoGenericContext *cmethod_context = mono_method_get_context (cmethod);
context_used = mini_method_check_context_used (cfg, cmethod);
if (!context_used && gshared_static_virtual)
context_used = mini_class_check_context_used (cfg, constrained_class);
if (context_used && mono_class_is_interface (cmethod->klass) && !m_method_is_static (cmethod)) {
/* Generic method interface calls are resolved via a helper function and don't need an imt. */
if (!cmethod_context || !cmethod_context->method_inst)
pass_imt_from_rgctx = TRUE;
}
/*
* If a shared method calls another shared method, then the caller must
* have a generic sharing context, because the magic trampoline requires it.
* FIXME: We shouldn't have to force the vtable/mrgctx variable here.
* Instead there should be a flag in the cfg to request a generic sharing
* context.
*/
if (context_used &&
((cfg->method->flags & METHOD_ATTRIBUTE_STATIC) || m_class_is_valuetype (cfg->method->klass)))
mono_get_vtable_var (cfg);
}
if (pass_vtable) {
if (context_used) {
vtable_arg = mini_emit_get_rgctx_klass (cfg, context_used, cmethod->klass, MONO_RGCTX_INFO_VTABLE);
} else {
MonoVTable *vtable = mono_class_vtable_checked (cmethod->klass, cfg->error);
CHECK_CFG_ERROR;
CHECK_TYPELOAD (cmethod->klass);
EMIT_NEW_VTABLECONST (cfg, vtable_arg, vtable);
}
}
if (pass_mrgctx) {
g_assert (!vtable_arg);
if (!cfg->compile_aot) {
/*
* emit_get_rgctx_method () calls mono_class_vtable (), so check
* for type load errors beforehand.
*/
mono_class_setup_vtable (cmethod->klass);
CHECK_TYPELOAD (cmethod->klass);
}
vtable_arg = emit_get_rgctx_method (cfg, context_used, cmethod, MONO_RGCTX_INFO_METHOD_RGCTX);
if ((!(cmethod->flags & METHOD_ATTRIBUTE_VIRTUAL) || MONO_METHOD_IS_FINAL (cmethod))) {
if (virtual_)
check_this = TRUE;
virtual_ = FALSE;
}
}
if (pass_imt_from_rgctx) {
g_assert (!pass_vtable);
imt_arg = emit_get_rgctx_method (cfg, context_used,
cmethod, MONO_RGCTX_INFO_METHOD);
g_assert (imt_arg);
}
if (check_this)
MONO_EMIT_NEW_CHECK_THIS (cfg, sp [0]->dreg);
/* Calling virtual generic methods */
// These temporaries help detangle "pure" computation of
// inputs to is_supported_tailcall from side effects, so that
// is_supported_tailcall can be computed just once.
gboolean virtual_generic; virtual_generic = FALSE;
gboolean virtual_generic_imt; virtual_generic_imt = FALSE;
if (virtual_ && (cmethod->flags & METHOD_ATTRIBUTE_VIRTUAL) &&
!MONO_METHOD_IS_FINAL (cmethod) &&
fsig->generic_param_count &&
!(cfg->gsharedvt && mini_is_gsharedvt_signature (fsig)) &&
!cfg->llvm_only) {
g_assert (fsig->is_inflated);
virtual_generic = TRUE;
/* Prevent inlining of methods that contain indirect calls */
INLINE_FAILURE ("virtual generic call");
if (cfg->gsharedvt && mini_is_gsharedvt_signature (fsig))
GSHAREDVT_FAILURE (il_op);
if (cfg->backend->have_generalized_imt_trampoline && cfg->backend->gshared_supported && cmethod->wrapper_type == MONO_WRAPPER_NONE) {
virtual_generic_imt = TRUE;
g_assert (!imt_arg);
if (!context_used)
g_assert (cmethod->is_inflated);
imt_arg = emit_get_rgctx_method (cfg, context_used, cmethod, MONO_RGCTX_INFO_METHOD);
g_assert (imt_arg);
virtual_ = TRUE;
vtable_arg = NULL;
}
}
// Capture some intent before computing tailcall.
gboolean make_generic_call_out_of_gsharedvt_method;
gboolean will_have_imt_arg;
make_generic_call_out_of_gsharedvt_method = FALSE;
will_have_imt_arg = FALSE;
/*
* Making generic calls out of gsharedvt methods.
* This needs to be used for all generic calls, not just ones with a gsharedvt signature, to avoid
* patching gshared method addresses into a gsharedvt method.
*/
if (cfg->gsharedvt && (mini_is_gsharedvt_signature (fsig) || cmethod->is_inflated || mono_class_is_ginst (cmethod->klass)) &&
!(m_class_get_rank (cmethod->klass) && m_class_get_byval_arg (cmethod->klass)->type != MONO_TYPE_SZARRAY) &&
(!(cfg->llvm_only && virtual_ && (cmethod->flags & METHOD_ATTRIBUTE_VIRTUAL)))) {
make_generic_call_out_of_gsharedvt_method = TRUE;
if (virtual_) {
if (fsig->generic_param_count) {
will_have_imt_arg = TRUE;
} else if (mono_class_is_interface (cmethod->klass) && !imt_arg) {
will_have_imt_arg = TRUE;
}
}
}
/* Tail prefix / tailcall optimization */
/* FIXME: Enabling TAILC breaks some inlining/stack trace/etc tests.
Inlining and stack traces are not guaranteed however. */
/* FIXME: runtime generic context pointer for jumps? */
/* FIXME: handle this for generic sharing eventually */
// tailcall means "the backend can and will handle it".
// inst_tailcall means the tail. prefix is present.
tailcall_extra_arg = vtable_arg || imt_arg || will_have_imt_arg || mono_class_is_interface (cmethod->klass);
tailcall = inst_tailcall && is_supported_tailcall (cfg, ip, method, cmethod, fsig,
virtual_, tailcall_extra_arg, &tailcall_calli);
// Writes to imt_arg, vtable_arg, virtual_ and cmethod (inputs to is_supported_tailcall) must not occur from here on.
// Capture values to later assert they don't change.
called_is_supported_tailcall = TRUE;
tailcall_method = method;
tailcall_cmethod = cmethod;
tailcall_fsig = fsig;
tailcall_virtual = virtual_;
if (virtual_generic) {
if (virtual_generic_imt) {
if (tailcall) {
/* Prevent inlining of methods with tailcalls (the call stack would be altered) */
INLINE_FAILURE ("tailcall");
}
common_call = TRUE;
goto call_end;
}
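/*
 * Slow path: resolve the concrete target at runtime via the
 * mono_helper_compile_generic_method icall below and call it indirectly.
 */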
MonoInst *this_temp, *this_arg_temp, *store;
MonoInst *iargs [4];
this_temp = mono_compile_create_var (cfg, type_from_stack_type (sp [0]), OP_LOCAL);
NEW_TEMPSTORE (cfg, store, this_temp->inst_c0, sp [0]);
MONO_ADD_INS (cfg->cbb, store);
/* FIXME: This should be a managed pointer */
this_arg_temp = mono_compile_create_var (cfg, mono_get_int_type (), OP_LOCAL);
EMIT_NEW_TEMPLOAD (cfg, iargs [0], this_temp->inst_c0);
iargs [1] = emit_get_rgctx_method (cfg, context_used, cmethod, MONO_RGCTX_INFO_METHOD);
EMIT_NEW_TEMPLOADA (cfg, iargs [2], this_arg_temp->inst_c0);
addr = mono_emit_jit_icall (cfg, mono_helper_compile_generic_method, iargs);
EMIT_NEW_TEMPLOAD (cfg, sp [0], this_arg_temp->inst_c0);
ins = (MonoInst*)mini_emit_calli (cfg, fsig, sp, addr, NULL, NULL);
if (inst_tailcall) // FIXME
mono_tailcall_print ("missed tailcall virtual generic %s -> %s\n", method->name, cmethod->name);
goto call_end;
}
CHECK_CFG_ERROR;
/* Tail recursion elimination */
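/*
 * Illustrative shape (hypothetical method) that this handles:
 *   static int Fact (int n, int acc) { ... return Fact (n - 1, acc * n); }
 * A call to the method itself immediately followed by ret is turned into
 * stores to the argument vregs plus a branch back to the method start.
 */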
if (((cfg->opt & MONO_OPT_TAILCALL) || inst_tailcall) && il_op == MONO_CEE_CALL && cmethod == method && next_ip < end && next_ip [0] == CEE_RET && !vtable_arg) {
gboolean has_vtargs = FALSE;
int i;
/* Prevent inlining of methods with tailcalls (the call stack would be altered) */
INLINE_FAILURE ("tailcall");
/* keep it simple */
for (i = fsig->param_count - 1; !has_vtargs && i >= 0; i--)
has_vtargs = MONO_TYPE_ISSTRUCT (mono_method_signature_internal (cmethod)->params [i]);
if (!has_vtargs) {
if (need_seq_point) {
emit_seq_point (cfg, method, ip, FALSE, TRUE);
need_seq_point = FALSE;
}
for (i = 0; i < n; ++i)
EMIT_NEW_ARGSTORE (cfg, ins, i, sp [i]);
mini_profiler_emit_tail_call (cfg, cmethod);
MONO_INST_NEW (cfg, ins, OP_BR);
MONO_ADD_INS (cfg->cbb, ins);
tblock = start_bblock->out_bb [0];
link_bblock (cfg, cfg->cbb, tblock);
ins->inst_target_bb = tblock;
start_new_bblock = 1;
/* skip the CEE_RET, too */
if (ip_in_bb (cfg, cfg->cbb, next_ip))
skip_ret = TRUE;
push_res = FALSE;
need_seq_point = FALSE;
goto call_end;
}
}
inline_costs += CALL_COST * MIN(10, num_calls++);
/*
* Synchronized wrappers.
* It's hard to determine where to replace a method with its synchronized
* wrapper without causing an infinite recursion. The current solution is
* to add the synchronized wrapper in the trampolines, change the called
* method to a dummy wrapper, and resolve that wrapper to the real method
* in mono_jit_compile_method ().
*/
if (cfg->method->wrapper_type == MONO_WRAPPER_SYNCHRONIZED) {
MonoMethod *orig = mono_marshal_method_from_wrapper (cfg->method);
if (cmethod == orig || (cmethod->is_inflated && mono_method_get_declaring_generic_method (cmethod) == orig)) {
// FIXME? Does this write to cmethod impact tailcall_supported? Probably not.
cmethod = mono_marshal_get_synchronized_inner_wrapper (cmethod);
}
}
/*
* Making generic calls out of gsharedvt methods.
* This needs to be used for all generic calls, not just ones with a gsharedvt signature, to avoid
* patching gshared method addresses into a gsharedvt method.
*/
if (make_generic_call_out_of_gsharedvt_method) {
if (virtual_) {
//if (mono_class_is_interface (cmethod->klass))
//GSHAREDVT_FAILURE (il_op);
// disable for possible remoting calls
if (fsig->hasthis && method->klass == mono_defaults.object_class)
GSHAREDVT_FAILURE (il_op);
if (fsig->generic_param_count) {
/* virtual generic call */
g_assert (!imt_arg);
g_assert (will_have_imt_arg);
/* Same as the virtual generic case above */
imt_arg = emit_get_rgctx_method (cfg, context_used,
cmethod, MONO_RGCTX_INFO_METHOD);
g_assert (imt_arg);
} else if (mono_class_is_interface (cmethod->klass) && !imt_arg) {
/* This can happen when we call a fully instantiated iface method */
g_assert (will_have_imt_arg);
imt_arg = emit_get_rgctx_method (cfg, context_used,
cmethod, MONO_RGCTX_INFO_METHOD);
g_assert (imt_arg);
}
/* This is not needed, as the trampoline code will pass one, and it might be passed in the same reg as the imt arg */
vtable_arg = NULL;
}
if ((m_class_get_parent (cmethod->klass) == mono_defaults.multicastdelegate_class) && (!strcmp (cmethod->name, "Invoke")))
keep_this_alive = sp [0];
MonoRgctxInfoType info_type;
if (virtual_ && (cmethod->flags & METHOD_ATTRIBUTE_VIRTUAL))
info_type = MONO_RGCTX_INFO_METHOD_GSHAREDVT_OUT_TRAMPOLINE_VIRT;
else
info_type = MONO_RGCTX_INFO_METHOD_GSHAREDVT_OUT_TRAMPOLINE;
addr = emit_get_rgctx_gsharedvt_call (cfg, context_used, fsig, cmethod, info_type);
if (cfg->llvm_only) {
// FIXME: Avoid initializing vtable_arg
ins = mini_emit_llvmonly_calli (cfg, fsig, sp, addr);
if (inst_tailcall) // FIXME
mono_tailcall_print ("missed tailcall llvmonly gsharedvt %s -> %s\n", method->name, cmethod->name);
} else {
tailcall = tailcall_calli;
ins = (MonoInst*)mini_emit_calli_full (cfg, fsig, sp, addr, imt_arg, vtable_arg, tailcall);
tailcall_remove_ret |= tailcall;
}
goto call_end;
}
/* Generic sharing */
/*
* Calls to generic methods from shared code cannot go through the trampoline infrastructure
* in some cases, because the called method might end up being different on every call.
* Load the called method address from the rgctx and do an indirect call in these cases.
* Use this if the callee is gsharedvt sharable too, since
* at runtime we might find an instantiation so the call cannot
* be patched (the 'no_patch' code path in mini-trampolines.c).
*/
gboolean gshared_indirect;
gshared_indirect = context_used && !imt_arg && !array_rank && !delegate_invoke;
if (gshared_indirect)
gshared_indirect = (!mono_method_is_generic_sharable_full (cmethod, TRUE, FALSE, FALSE) ||
!mono_class_generic_sharing_enabled (cmethod->klass) ||
gshared_static_virtual);
if (gshared_indirect)
gshared_indirect = (!virtual_ || MONO_METHOD_IS_FINAL (cmethod) ||
!(cmethod->flags & METHOD_ATTRIBUTE_VIRTUAL));
if (gshared_indirect) {
INLINE_FAILURE ("gshared");
g_assert (cfg->gshared && cmethod);
g_assert (!addr);
if (fsig->hasthis)
MONO_EMIT_NEW_CHECK_THIS (cfg, sp [0]->dreg);
if (cfg->llvm_only) {
if (cfg->gsharedvt && mini_is_gsharedvt_variable_signature (fsig)) {
/* Handled in handle_constrained_gsharedvt_call () */
g_assert (!gshared_static_virtual);
addr = emit_get_rgctx_method (cfg, context_used, cmethod, MONO_RGCTX_INFO_GSHAREDVT_OUT_WRAPPER);
} else {
if (gshared_static_virtual)
addr = emit_get_rgctx_virt_method (cfg, -1, constrained_class, cmethod, MONO_RGCTX_INFO_VIRT_METHOD_CODE);
else
addr = emit_get_rgctx_method (cfg, context_used, cmethod, MONO_RGCTX_INFO_METHOD_FTNDESC);
}
// FIXME: Avoid initializing imt_arg/vtable_arg
ins = mini_emit_llvmonly_calli (cfg, fsig, sp, addr);
if (inst_tailcall) // FIXME
mono_tailcall_print ("missed tailcall context_used_llvmonly %s -> %s\n", method->name, cmethod->name);
} else {
if (gshared_static_virtual) {
/*
* cmethod is a static interface method, the actual called method at runtime
* needs to be computed using constrained_class and cmethod.
*/
addr = emit_get_rgctx_virt_method (cfg, -1, constrained_class, cmethod, MONO_RGCTX_INFO_VIRT_METHOD_CODE);
} else {
addr = emit_get_rgctx_method (cfg, context_used, cmethod, MONO_RGCTX_INFO_GENERIC_METHOD_CODE);
}
if (inst_tailcall)
mono_tailcall_print ("%s tailcall_calli#2 %s -> %s\n", tailcall_calli ? "making" : "missed", method->name, cmethod->name);
tailcall = tailcall_calli;
ins = (MonoInst*)mini_emit_calli_full (cfg, fsig, sp, addr, imt_arg, vtable_arg, tailcall);
tailcall_remove_ret |= tailcall;
}
goto call_end;
}
/* Direct calls to icalls */
if (direct_icall) {
MonoMethod *wrapper;
int costs;
/* Inline the wrapper */
wrapper = mono_marshal_get_native_wrapper (cmethod, TRUE, cfg->compile_aot);
costs = inline_method (cfg, wrapper, fsig, sp, ip, cfg->real_offset, TRUE, NULL);
g_assert (costs > 0);
cfg->real_offset += 5;
if (!MONO_TYPE_IS_VOID (fsig->ret))
/* *sp is already set by inline_method */
ins = *sp;
inline_costs += costs;
if (inst_tailcall) // FIXME
mono_tailcall_print ("missed tailcall direct_icall %s -> %s\n", method->name, cmethod->name);
goto call_end;
}
/* Array methods */
if (array_rank) {
MonoInst *addr;
if (strcmp (cmethod->name, "Set") == 0) { /* array Set */
MonoInst *val = sp [fsig->param_count];
if (val->type == STACK_OBJ) {
MonoInst *iargs [ ] = { sp [0], val };
mono_emit_jit_icall (cfg, mono_helper_stelem_ref_check, iargs);
}
addr = mini_emit_ldelema_ins (cfg, cmethod, sp, ip, TRUE);
if (!mini_debug_options.weak_memory_model && val->type == STACK_OBJ)
mini_emit_memory_barrier (cfg, MONO_MEMORY_BARRIER_REL);
EMIT_NEW_STORE_MEMBASE_TYPE (cfg, ins, fsig->params [fsig->param_count - 1], addr->dreg, 0, val->dreg);
if (cfg->gen_write_barriers && val->type == STACK_OBJ && !MONO_INS_IS_PCONST_NULL (val))
mini_emit_write_barrier (cfg, addr, val);
if (cfg->gen_write_barriers && mini_is_gsharedvt_klass (cmethod->klass))
GSHAREDVT_FAILURE (il_op);
} else if (strcmp (cmethod->name, "Get") == 0) { /* array Get */
addr = mini_emit_ldelema_ins (cfg, cmethod, sp, ip, FALSE);
EMIT_NEW_LOAD_MEMBASE_TYPE (cfg, ins, fsig->ret, addr->dreg, 0);
} else if (strcmp (cmethod->name, "Address") == 0) { /* array Address */
if (!m_class_is_valuetype (m_class_get_element_class (cmethod->klass)) && !readonly)
mini_emit_check_array_type (cfg, sp [0], cmethod->klass);
CHECK_TYPELOAD (cmethod->klass);
readonly = FALSE;
addr = mini_emit_ldelema_ins (cfg, cmethod, sp, ip, FALSE);
ins = addr;
} else {
g_assert_not_reached ();
}
emit_widen = FALSE;
if (inst_tailcall) // FIXME
mono_tailcall_print ("missed tailcall array_rank %s -> %s\n", method->name, cmethod->name);
goto call_end;
}
ins = mini_redirect_call (cfg, cmethod, fsig, sp, virtual_ ? sp [0] : NULL);
if (ins) {
if (inst_tailcall) // FIXME
mono_tailcall_print ("missed tailcall redirect %s -> %s\n", method->name, cmethod->name);
goto call_end;
}
/* Tail prefix / tailcall optimization */
if (tailcall) {
/* Prevent inlining of methods with tailcalls (the call stack would be altered) */
INLINE_FAILURE ("tailcall");
}
/*
* Virtual calls in llvm-only mode.
*/
if (cfg->llvm_only && virtual_ && cmethod && (cmethod->flags & METHOD_ATTRIBUTE_VIRTUAL)) {
ins = mini_emit_llvmonly_virtual_call (cfg, cmethod, fsig, context_used, sp);
goto call_end;
}
/* Common call */
if (!(cfg->opt & MONO_OPT_AGGRESSIVE_INLINING) && !(method->iflags & METHOD_IMPL_ATTRIBUTE_AGGRESSIVE_INLINING) && !(cmethod->iflags & METHOD_IMPL_ATTRIBUTE_AGGRESSIVE_INLINING) && !method_does_not_return (cmethod))
INLINE_FAILURE ("call");
common_call = TRUE;
#ifdef TARGET_WASM
/* Push an LMF so these frames can be enumerated during stack walks by mono_arch_unwind_frame () */
if (needs_stack_walk && !cfg->deopt) {
MonoInst *method_ins;
int lmf_reg;
emit_push_lmf (cfg);
EMIT_NEW_VARLOADA (cfg, ins, cfg->lmf_var, NULL);
lmf_reg = ins->dreg;
/* The lmf->method field will be used to look up the MonoJitInfo for this method */
method_ins = emit_get_rgctx_method (cfg, mono_method_check_context_used (cfg->method), cfg->method, MONO_RGCTX_INFO_METHOD);
EMIT_NEW_STORE_MEMBASE (cfg, ins, OP_STORE_MEMBASE_REG, lmf_reg, MONO_STRUCT_OFFSET (MonoLMF, method), method_ins->dreg);
}
#endif
call_end:
// Check that the decision to tailcall would not have changed.
g_assert (!called_is_supported_tailcall || tailcall_method == method);
// FIXME? cmethod does change, weaken the assert if we weren't tailcalling anyway.
// If this still fails, restructure the code, or call tailcall_supported again and assert no change.
g_assert (!called_is_supported_tailcall || !tailcall || tailcall_cmethod == cmethod);
g_assert (!called_is_supported_tailcall || tailcall_fsig == fsig);
g_assert (!called_is_supported_tailcall || tailcall_virtual == virtual_);
g_assert (!called_is_supported_tailcall || tailcall_extra_arg == (vtable_arg || imt_arg || will_have_imt_arg || mono_class_is_interface (cmethod->klass)));
if (common_call) // FIXME goto call_end && !common_call often skips tailcall processing.
ins = mini_emit_method_call_full (cfg, cmethod, fsig, tailcall, sp, virtual_ ? sp [0] : NULL,
imt_arg, vtable_arg);
/*
* Handle devirt of some A.B.C calls by replacing the result of A.B with an OP_TYPED_OBJREF instruction, so the .C
* call can be devirtualized above.
*/
if (cmethod)
ins = handle_call_res_devirt (cfg, cmethod, ins);
#ifdef TARGET_WASM
if (common_call && needs_stack_walk && !cfg->deopt)
/* If an exception is thrown, the LMF is popped by a call to mini_llvmonly_pop_lmf () */
emit_pop_lmf (cfg);
#endif
if (noreturn) {
MONO_INST_NEW (cfg, ins, OP_NOT_REACHED);
MONO_ADD_INS (cfg->cbb, ins);
}
calli_end:
if ((tailcall_remove_ret || (common_call && tailcall)) && !cfg->llvm_only) {
link_bblock (cfg, cfg->cbb, end_bblock);
start_new_bblock = 1;
// FIXME: Eliminate unreachable epilogs
/*
* OP_TAILCALL has no return value, so skip the CEE_RET if it is
* only reachable from this call.
*/
GET_BBLOCK (cfg, tblock, next_ip);
if (tblock == cfg->cbb || tblock->in_count == 0)
skip_ret = TRUE;
push_res = FALSE;
need_seq_point = FALSE;
}
if (ins_flag & MONO_INST_TAILCALL)
mini_test_tailcall (cfg, tailcall);
/* End of call, INS should contain the result of the call, if any */
if (push_res && !MONO_TYPE_IS_VOID (fsig->ret)) {
g_assert (ins);
if (emit_widen)
*sp++ = mono_emit_widen_call_res (cfg, ins, fsig);
else
*sp++ = ins;
}
if (save_last_error) {
save_last_error = FALSE;
#ifdef TARGET_WIN32
// Making icalls etc could clobber the value so emit inline code
// to read last error on Windows.
MONO_INST_NEW (cfg, ins, OP_GET_LAST_ERROR);
ins->dreg = alloc_dreg (cfg, STACK_I4);
ins->type = STACK_I4;
MONO_ADD_INS (cfg->cbb, ins);
mono_emit_jit_icall (cfg, mono_marshal_set_last_error_windows, &ins);
#else
mono_emit_jit_icall (cfg, mono_marshal_set_last_error, NULL);
#endif
}
if (keep_this_alive) {
MonoInst *dummy_use;
/* See mini_emit_method_call_full () */
EMIT_NEW_DUMMY_USE (cfg, dummy_use, keep_this_alive);
}
if (cfg->llvm_only && cmethod && method_needs_stack_walk (cfg, cmethod)) {
/*
* Clang can convert these calls to tailcalls, which screw up the stack
* walk. This happens even when the -fno-optimize-sibling-calls
* option is passed to clang.
* Work around this by emitting a dummy call.
*/
mono_emit_jit_icall (cfg, mono_dummy_jit_icall, NULL);
}
CHECK_CFG_EXCEPTION;
if (skip_ret) {
// FIXME When not followed by CEE_RET, correct behavior is to raise an exception.
g_assert (next_ip [0] == CEE_RET);
next_ip += 1;
il_op = MonoOpcodeEnum_Invalid; // Call or ret? Unclear.
}
ins_flag = 0;
constrained_class = NULL;
if (need_seq_point) {
// Check if this is a nested call and remove the non_empty_stack of the last call, only for non-native methods.
if (!(method->flags & METHOD_IMPL_ATTRIBUTE_NATIVE)) {
if (emitted_funccall_seq_point) {
if (cfg->last_seq_point)
cfg->last_seq_point->flags |= MONO_INST_NESTED_CALL;
}
else
emitted_funccall_seq_point = TRUE;
}
emit_seq_point (cfg, method, next_ip, FALSE, TRUE);
}
break;
}
case MONO_CEE_RET:
if (!detached_before_ret)
mini_profiler_emit_leave (cfg, sig->ret->type != MONO_TYPE_VOID ? sp [-1] : NULL);
g_assert (!method_does_not_return (method));
if (cfg->method != method) {
/* return from inlined method */
/*
* If in_count == 0, that means the ret is unreachable due to
* being preceded by a throw. In that case, inline_method () will
* handle setting the return value
* (test case: test_0_inline_throw ()).
*/
if (return_var && cfg->cbb->in_count) {
MonoType *ret_type = mono_method_signature_internal (method)->ret;
MonoInst *store;
CHECK_STACK (1);
--sp;
*sp = convert_value (cfg, ret_type, *sp);
if ((method->wrapper_type == MONO_WRAPPER_DYNAMIC_METHOD || method->wrapper_type == MONO_WRAPPER_NONE) && target_type_is_incompatible (cfg, ret_type, *sp))
UNVERIFIED;
//g_assert (returnvar != -1);
EMIT_NEW_TEMPSTORE (cfg, store, return_var->inst_c0, *sp);
cfg->ret_var_set = TRUE;
}
} else {
if (cfg->lmf_var && cfg->cbb->in_count && (!cfg->llvm_only || cfg->deopt))
emit_pop_lmf (cfg);
if (cfg->ret) {
MonoType *ret_type = mini_get_underlying_type (mono_method_signature_internal (method)->ret);
if (seq_points && !sym_seq_points) {
/*
* Place a seq point here too even though the IL stack is not
* empty, so a step over on
* call <FOO>
* ret
* will work correctly.
*/
NEW_SEQ_POINT (cfg, ins, ip - header->code, TRUE);
MONO_ADD_INS (cfg->cbb, ins);
}
g_assert (!return_var);
CHECK_STACK (1);
--sp;
*sp = convert_value (cfg, ret_type, *sp);
if ((method->wrapper_type == MONO_WRAPPER_DYNAMIC_METHOD || method->wrapper_type == MONO_WRAPPER_NONE) && target_type_is_incompatible (cfg, ret_type, *sp))
UNVERIFIED;
emit_setret (cfg, *sp);
}
}
if (sp != stack_start)
UNVERIFIED;
MONO_INST_NEW (cfg, ins, OP_BR);
ins->inst_target_bb = end_bblock;
MONO_ADD_INS (cfg->cbb, ins);
link_bblock (cfg, cfg->cbb, end_bblock);
start_new_bblock = 1;
break;
case MONO_CEE_BR_S:
MONO_INST_NEW (cfg, ins, OP_BR);
GET_BBLOCK (cfg, tblock, target);
link_bblock (cfg, cfg->cbb, tblock);
ins->inst_target_bb = tblock;
if (sp != stack_start) {
handle_stack_args (cfg, stack_start, sp - stack_start);
sp = stack_start;
CHECK_UNVERIFIABLE (cfg);
}
MONO_ADD_INS (cfg->cbb, ins);
start_new_bblock = 1;
inline_costs += BRANCH_COST;
break;
case MONO_CEE_BEQ_S:
case MONO_CEE_BGE_S:
case MONO_CEE_BGT_S:
case MONO_CEE_BLE_S:
case MONO_CEE_BLT_S:
case MONO_CEE_BNE_UN_S:
case MONO_CEE_BGE_UN_S:
case MONO_CEE_BGT_UN_S:
case MONO_CEE_BLE_UN_S:
case MONO_CEE_BLT_UN_S:
MONO_INST_NEW (cfg, ins, il_op + BIG_BRANCH_OFFSET);
ADD_BINCOND (NULL);
sp = stack_start;
inline_costs += BRANCH_COST;
break;
case MONO_CEE_BR:
MONO_INST_NEW (cfg, ins, OP_BR);
GET_BBLOCK (cfg, tblock, target);
link_bblock (cfg, cfg->cbb, tblock);
ins->inst_target_bb = tblock;
if (sp != stack_start) {
handle_stack_args (cfg, stack_start, sp - stack_start);
sp = stack_start;
CHECK_UNVERIFIABLE (cfg);
}
MONO_ADD_INS (cfg->cbb, ins);
start_new_bblock = 1;
inline_costs += BRANCH_COST;
break;
case MONO_CEE_BRFALSE_S:
case MONO_CEE_BRTRUE_S:
case MONO_CEE_BRFALSE:
case MONO_CEE_BRTRUE: {
MonoInst *cmp;
gboolean is_true = il_op == MONO_CEE_BRTRUE_S || il_op == MONO_CEE_BRTRUE;
if (sp [-1]->type == STACK_VTYPE || sp [-1]->type == STACK_R8)
UNVERIFIED;
sp--;
GET_BBLOCK (cfg, tblock, target);
link_bblock (cfg, cfg->cbb, tblock);
GET_BBLOCK (cfg, tblock, next_ip);
link_bblock (cfg, cfg->cbb, tblock);
if (sp != stack_start) {
handle_stack_args (cfg, stack_start, sp - stack_start);
CHECK_UNVERIFIABLE (cfg);
}
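/*
 * Lower brtrue/brfalse to an explicit compare-against-zero followed by a
 * conditional branch (bne.un for brtrue, beq for brfalse).
 */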
MONO_INST_NEW (cfg, cmp, OP_ICOMPARE_IMM);
cmp->sreg1 = sp [0]->dreg;
type_from_op (cfg, cmp, sp [0], NULL);
CHECK_TYPE (cmp);
#if SIZEOF_REGISTER == 4
if (cmp->opcode == OP_LCOMPARE_IMM) {
/* Convert it to OP_LCOMPARE */
MONO_INST_NEW (cfg, ins, OP_I8CONST);
ins->type = STACK_I8;
ins->dreg = alloc_dreg (cfg, STACK_I8);
ins->inst_l = 0;
MONO_ADD_INS (cfg->cbb, ins);
cmp->opcode = OP_LCOMPARE;
cmp->sreg2 = ins->dreg;
}
#endif
MONO_ADD_INS (cfg->cbb, cmp);
MONO_INST_NEW (cfg, ins, is_true ? CEE_BNE_UN : CEE_BEQ);
type_from_op (cfg, ins, sp [0], NULL);
MONO_ADD_INS (cfg->cbb, ins);
ins->inst_many_bb = (MonoBasicBlock **)mono_mempool_alloc (cfg->mempool, sizeof (gpointer) * 2);
GET_BBLOCK (cfg, tblock, target);
ins->inst_true_bb = tblock;
GET_BBLOCK (cfg, tblock, next_ip);
ins->inst_false_bb = tblock;
start_new_bblock = 2;
sp = stack_start;
inline_costs += BRANCH_COST;
break;
}
case MONO_CEE_BEQ:
case MONO_CEE_BGE:
case MONO_CEE_BGT:
case MONO_CEE_BLE:
case MONO_CEE_BLT:
case MONO_CEE_BNE_UN:
case MONO_CEE_BGE_UN:
case MONO_CEE_BGT_UN:
case MONO_CEE_BLE_UN:
case MONO_CEE_BLT_UN:
MONO_INST_NEW (cfg, ins, il_op);
ADD_BINCOND (NULL);
sp = stack_start;
inline_costs += BRANCH_COST;
break;
case MONO_CEE_SWITCH: {
MonoInst *src1;
MonoBasicBlock **targets;
MonoBasicBlock *default_bblock;
MonoJumpInfoBBTable *table;
int offset_reg = alloc_preg (cfg);
int target_reg = alloc_preg (cfg);
int table_reg = alloc_preg (cfg);
int sum_reg = alloc_preg (cfg);
gboolean use_op_switch;
n = read32 (ip + 1);
--sp;
src1 = sp [0];
if ((src1->type != STACK_I4) && (src1->type != STACK_PTR))
UNVERIFIED;
ip += 5;
GET_BBLOCK (cfg, default_bblock, next_ip);
default_bblock->flags |= BB_INDIRECT_JUMP_TARGET;
targets = (MonoBasicBlock **)mono_mempool_alloc (cfg->mempool, sizeof (MonoBasicBlock*) * n);
for (i = 0; i < n; ++i) {
GET_BBLOCK (cfg, tblock, next_ip + (gint32)read32 (ip));
targets [i] = tblock;
targets [i]->flags |= BB_INDIRECT_JUMP_TARGET;
ip += 4;
}
if (sp != stack_start) {
/*
* Link the current bb with the targets as well, so handle_stack_args
* will set their in_stack correctly.
*/
link_bblock (cfg, cfg->cbb, default_bblock);
for (i = 0; i < n; ++i)
link_bblock (cfg, cfg->cbb, targets [i]);
handle_stack_args (cfg, stack_start, sp - stack_start);
sp = stack_start;
CHECK_UNVERIFIABLE (cfg);
/* Undo the links */
mono_unlink_bblock (cfg, cfg->cbb, default_bblock);
for (i = 0; i < n; ++i)
mono_unlink_bblock (cfg, cfg->cbb, targets [i]);
}
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_ICOMPARE_IMM, -1, src1->dreg, n);
MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_IBGE_UN, default_bblock);
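/*
 * A single unsigned compare covers both bounds: a negative index wraps to a
 * huge unsigned value, so (unsigned) index >= n also catches index < 0.
 */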
for (i = 0; i < n; ++i)
link_bblock (cfg, cfg->cbb, targets [i]);
table = (MonoJumpInfoBBTable *)mono_mempool_alloc (cfg->mempool, sizeof (MonoJumpInfoBBTable));
table->table = targets;
table->table_size = n;
use_op_switch = FALSE;
#ifdef TARGET_ARM
/* ARM implements SWITCH statements differently */
/* FIXME: Make it use the generic implementation */
if (!cfg->compile_aot)
use_op_switch = TRUE;
#endif
if (COMPILE_LLVM (cfg))
use_op_switch = TRUE;
cfg->cbb->has_jump_table = 1;
if (use_op_switch) {
MONO_INST_NEW (cfg, ins, OP_SWITCH);
ins->sreg1 = src1->dreg;
ins->inst_p0 = table;
ins->inst_many_bb = targets;
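/* The klass field is (ab)used here to carry the number of targets, presumably read back by the backends. */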
ins->klass = (MonoClass *)GUINT_TO_POINTER (n);
MONO_ADD_INS (cfg->cbb, ins);
} else {
if (TARGET_SIZEOF_VOID_P == 8)
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_SHL_IMM, offset_reg, src1->dreg, 3);
else
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_SHL_IMM, offset_reg, src1->dreg, 2);
#if SIZEOF_REGISTER == 8
/* The upper word might not be zero, and we add it to a 64 bit address later */
MONO_EMIT_NEW_UNALU (cfg, OP_ZEXT_I4, offset_reg, offset_reg);
#endif
if (cfg->compile_aot) {
MONO_EMIT_NEW_AOTCONST (cfg, table_reg, table, MONO_PATCH_INFO_SWITCH);
} else {
MONO_INST_NEW (cfg, ins, OP_JUMP_TABLE);
ins->inst_c1 = MONO_PATCH_INFO_SWITCH;
ins->inst_p0 = table;
ins->dreg = table_reg;
MONO_ADD_INS (cfg->cbb, ins);
}
/* FIXME: Use load_memindex */
MONO_EMIT_NEW_BIALU (cfg, OP_PADD, sum_reg, table_reg, offset_reg);
MONO_EMIT_NEW_LOAD_MEMBASE (cfg, target_reg, sum_reg, 0);
MONO_EMIT_NEW_UNALU (cfg, OP_BR_REG, -1, target_reg);
}
start_new_bblock = 1;
inline_costs += BRANCH_COST * 2;
break;
}
case MONO_CEE_LDIND_I1:
case MONO_CEE_LDIND_U1:
case MONO_CEE_LDIND_I2:
case MONO_CEE_LDIND_U2:
case MONO_CEE_LDIND_I4:
case MONO_CEE_LDIND_U4:
case MONO_CEE_LDIND_I8:
case MONO_CEE_LDIND_I:
case MONO_CEE_LDIND_R4:
case MONO_CEE_LDIND_R8:
case MONO_CEE_LDIND_REF:
--sp;
if (!(ins_flag & MONO_INST_NONULLCHECK))
MONO_EMIT_NULL_CHECK (cfg, sp [0]->dreg, FALSE);
ins = mini_emit_memory_load (cfg, m_class_get_byval_arg (ldind_to_type (il_op)), sp [0], 0, ins_flag);
*sp++ = ins;
ins_flag = 0;
break;
case MONO_CEE_STIND_REF:
case MONO_CEE_STIND_I1:
case MONO_CEE_STIND_I2:
case MONO_CEE_STIND_I4:
case MONO_CEE_STIND_I8:
case MONO_CEE_STIND_R4:
case MONO_CEE_STIND_R8:
case MONO_CEE_STIND_I: {
sp -= 2;
if (il_op == MONO_CEE_STIND_REF && sp [1]->type != STACK_OBJ) {
/* stind.ref must only be used with object references. */
UNVERIFIED;
}
if (il_op == MONO_CEE_STIND_R4 && sp [1]->type == STACK_R8)
sp [1] = convert_value (cfg, m_class_get_byval_arg (mono_defaults.single_class), sp [1]);
mini_emit_memory_store (cfg, m_class_get_byval_arg (stind_to_type (il_op)), sp [0], sp [1], ins_flag);
ins_flag = 0;
inline_costs += 1;
break;
}
case MONO_CEE_MUL:
MONO_INST_NEW (cfg, ins, il_op);
sp -= 2;
ins->sreg1 = sp [0]->dreg;
ins->sreg2 = sp [1]->dreg;
type_from_op (cfg, ins, sp [0], sp [1]);
CHECK_TYPE (ins);
ins->dreg = alloc_dreg ((cfg), (MonoStackType)(ins)->type);
/* Use the immediate opcodes if possible */
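/*
 * E.g. (illustrative) "ldloc x; ldc.i4 4; mul" can become a *MUL_IMM opcode
 * with an immediate of 4, which a backend may lower to a shift or lea.
 */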
int imm_opcode; imm_opcode = mono_op_to_op_imm_noemul (ins->opcode);
if ((sp [1]->opcode == OP_ICONST) && mono_arch_is_inst_imm (ins->opcode, imm_opcode, sp [1]->inst_c0)) {
if (imm_opcode != -1) {
ins->opcode = imm_opcode;
ins->inst_p1 = (gpointer)(gssize)(sp [1]->inst_c0);
ins->sreg2 = -1;
NULLIFY_INS (sp [1]);
}
}
MONO_ADD_INS ((cfg)->cbb, (ins));
*sp++ = mono_decompose_opcode (cfg, ins);
break;
case MONO_CEE_ADD:
case MONO_CEE_SUB:
case MONO_CEE_DIV:
case MONO_CEE_DIV_UN:
case MONO_CEE_REM:
case MONO_CEE_REM_UN:
case MONO_CEE_AND:
case MONO_CEE_OR:
case MONO_CEE_XOR:
case MONO_CEE_SHL:
case MONO_CEE_SHR:
case MONO_CEE_SHR_UN: {
MONO_INST_NEW (cfg, ins, il_op);
sp -= 2;
ins->sreg1 = sp [0]->dreg;
ins->sreg2 = sp [1]->dreg;
type_from_op (cfg, ins, sp [0], sp [1]);
CHECK_TYPE (ins);
add_widen_op (cfg, ins, &sp [0], &sp [1]);
ins->dreg = alloc_dreg ((cfg), (MonoStackType)(ins)->type);
/* Use the immediate opcodes if possible */
int imm_opcode; imm_opcode = mono_op_to_op_imm_noemul (ins->opcode);
if (((sp [1]->opcode == OP_ICONST) || (sp [1]->opcode == OP_I8CONST)) &&
mono_arch_is_inst_imm (ins->opcode, imm_opcode, sp [1]->opcode == OP_ICONST ? sp [1]->inst_c0 : sp [1]->inst_l)) {
if (imm_opcode != -1) {
ins->opcode = imm_opcode;
if (sp [1]->opcode == OP_I8CONST) {
#if SIZEOF_REGISTER == 8
ins->inst_imm = sp [1]->inst_l;
#else
ins->inst_l = sp [1]->inst_l;
#endif
} else {
ins->inst_imm = (gssize)(sp [1]->inst_c0);
}
ins->sreg2 = -1;
/* Might be followed by an instruction added by add_widen_op */
if (sp [1]->next == NULL)
NULLIFY_INS (sp [1]);
}
}
MONO_ADD_INS ((cfg)->cbb, (ins));
*sp++ = mono_decompose_opcode (cfg, ins);
break;
}
case MONO_CEE_NEG:
case MONO_CEE_NOT:
case MONO_CEE_CONV_I1:
case MONO_CEE_CONV_I2:
case MONO_CEE_CONV_I4:
case MONO_CEE_CONV_R4:
case MONO_CEE_CONV_R8:
case MONO_CEE_CONV_U4:
case MONO_CEE_CONV_I8:
case MONO_CEE_CONV_U8:
case MONO_CEE_CONV_OVF_I8:
case MONO_CEE_CONV_OVF_U8:
case MONO_CEE_CONV_R_UN:
/* Special case this earlier so we have long constants in the IR */
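/*
 * E.g. conv.u8 on an iconst of -1 yields 0x00000000FFFFFFFF (zero-extend),
 * while conv.i8 sign-extends it to 0xFFFFFFFFFFFFFFFF.
 */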
if ((il_op == MONO_CEE_CONV_I8 || il_op == MONO_CEE_CONV_U8) && (sp [-1]->opcode == OP_ICONST)) {
int data = sp [-1]->inst_c0;
sp [-1]->opcode = OP_I8CONST;
sp [-1]->type = STACK_I8;
#if SIZEOF_REGISTER == 8
if (il_op == MONO_CEE_CONV_U8)
sp [-1]->inst_c0 = (guint32)data;
else
sp [-1]->inst_c0 = data;
#else
if (il_op == MONO_CEE_CONV_U8)
sp [-1]->inst_l = (guint32)data;
else
sp [-1]->inst_l = data;
#endif
sp [-1]->dreg = alloc_dreg (cfg, STACK_I8);
}
else {
ADD_UNOP (il_op);
}
break;
case MONO_CEE_CONV_OVF_I4:
case MONO_CEE_CONV_OVF_I1:
case MONO_CEE_CONV_OVF_I2:
case MONO_CEE_CONV_OVF_I:
case MONO_CEE_CONV_OVF_I1_UN:
case MONO_CEE_CONV_OVF_I2_UN:
case MONO_CEE_CONV_OVF_I4_UN:
case MONO_CEE_CONV_OVF_I8_UN:
case MONO_CEE_CONV_OVF_I_UN:
if (sp [-1]->type == STACK_R8 || sp [-1]->type == STACK_R4) {
/* floats are always signed, _UN has no effect */
ADD_UNOP (CEE_CONV_OVF_I8);
if (il_op == MONO_CEE_CONV_OVF_I1_UN)
ADD_UNOP (MONO_CEE_CONV_OVF_I1);
else if (il_op == MONO_CEE_CONV_OVF_I2_UN)
ADD_UNOP (MONO_CEE_CONV_OVF_I2);
else if (il_op == MONO_CEE_CONV_OVF_I4_UN)
ADD_UNOP (MONO_CEE_CONV_OVF_I4);
else if (il_op == MONO_CEE_CONV_OVF_I8_UN)
;
else
ADD_UNOP (il_op);
} else {
ADD_UNOP (il_op);
}
break;
case MONO_CEE_CONV_OVF_U1:
case MONO_CEE_CONV_OVF_U2:
case MONO_CEE_CONV_OVF_U4:
case MONO_CEE_CONV_OVF_U:
case MONO_CEE_CONV_OVF_U1_UN:
case MONO_CEE_CONV_OVF_U2_UN:
case MONO_CEE_CONV_OVF_U4_UN:
case MONO_CEE_CONV_OVF_U8_UN:
case MONO_CEE_CONV_OVF_U_UN:
if (sp [-1]->type == STACK_R8 || sp [-1]->type == STACK_R4) {
/* floats are always signed, _UN has no effect */
ADD_UNOP (CEE_CONV_OVF_U8);
ADD_UNOP (il_op);
} else {
ADD_UNOP (il_op);
}
break;
case MONO_CEE_CONV_U2:
case MONO_CEE_CONV_U1:
case MONO_CEE_CONV_I:
case MONO_CEE_CONV_U:
ADD_UNOP (il_op);
CHECK_CFG_EXCEPTION;
break;
case MONO_CEE_ADD_OVF:
case MONO_CEE_ADD_OVF_UN:
case MONO_CEE_MUL_OVF:
case MONO_CEE_MUL_OVF_UN:
case MONO_CEE_SUB_OVF:
case MONO_CEE_SUB_OVF_UN:
MONO_INST_NEW (cfg, ins, il_op);
sp -= 2;
ins->sreg1 = sp [0]->dreg;
ins->sreg2 = sp [1]->dreg;
type_from_op (cfg, ins, sp [0], sp [1]);
CHECK_TYPE (ins);
if (ovf_exc)
ins->inst_exc_name = ovf_exc;
else
ins->inst_exc_name = "OverflowException";
/* Have to insert a widening op */
add_widen_op (cfg, ins, &sp [0], &sp [1]);
ins->dreg = alloc_dreg (cfg, (MonoStackType)(ins)->type);
MONO_ADD_INS ((cfg)->cbb, ins);
/* The opcode might be emulated, so need to special case this */
if (ovf_exc && mono_find_jit_opcode_emulation (ins->opcode)) {
switch (ins->opcode) {
case OP_IMUL_OVF_UN:
/* This opcode is just a placeholder; it will also be emulated */
ins->opcode = OP_IMUL_OVF_UN_OOM;
break;
case OP_LMUL_OVF_UN:
/* This opcode is just a placeholder; it will also be emulated */
ins->opcode = OP_LMUL_OVF_UN_OOM;
break;
default:
g_assert_not_reached ();
}
}
ovf_exc = NULL;
*sp++ = mono_decompose_opcode (cfg, ins);
break;
case MONO_CEE_CPOBJ:
GSHAREDVT_FAILURE (il_op);
GSHAREDVT_FAILURE (*ip);
klass = mini_get_class (method, token, generic_context);
CHECK_TYPELOAD (klass);
sp -= 2;
mini_emit_memory_copy (cfg, sp [0], sp [1], klass, FALSE, ins_flag);
ins_flag = 0;
break;
case MONO_CEE_LDOBJ: {
int loc_index = -1;
int stloc_len = 0;
--sp;
klass = mini_get_class (method, token, generic_context);
CHECK_TYPELOAD (klass);
/* Optimize the common ldobj+stloc combination */
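/*
 * I.e. (illustrative) "ldobj !T; stloc.0" loads directly into the local's
 * vreg instead of pushing a temporary and storing it afterwards.
 */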
if (next_ip < end) {
switch (next_ip [0]) {
case MONO_CEE_STLOC_S:
CHECK_OPSIZE (7);
loc_index = next_ip [1];
stloc_len = 2;
break;
case MONO_CEE_STLOC_0:
case MONO_CEE_STLOC_1:
case MONO_CEE_STLOC_2:
case MONO_CEE_STLOC_3:
loc_index = next_ip [0] - CEE_STLOC_0;
stloc_len = 1;
break;
default:
break;
}
}
if ((loc_index != -1) && ip_in_bb (cfg, cfg->cbb, next_ip)) {
CHECK_LOCAL (loc_index);
EMIT_NEW_LOAD_MEMBASE_TYPE (cfg, ins, m_class_get_byval_arg (klass), sp [0]->dreg, 0);
ins->dreg = cfg->locals [loc_index]->dreg;
ins->flags |= ins_flag;
il_op = (MonoOpcodeEnum)next_ip [0];
next_ip += stloc_len;
if (ins_flag & MONO_INST_VOLATILE) {
/* Volatile loads have acquire semantics, see 12.6.7 in Ecma 335 */
mini_emit_memory_barrier (cfg, MONO_MEMORY_BARRIER_ACQ);
}
ins_flag = 0;
break;
}
/* Optimize the ldobj+stobj combination */
if (next_ip + 4 < end && next_ip [0] == CEE_STOBJ && ip_in_bb (cfg, cfg->cbb, next_ip) && read32 (next_ip + 1) == token) {
CHECK_STACK (1);
sp --;
mini_emit_memory_copy (cfg, sp [0], sp [1], klass, FALSE, ins_flag);
il_op = (MonoOpcodeEnum)next_ip [0];
next_ip += 5;
ins_flag = 0;
break;
}
ins = mini_emit_memory_load (cfg, m_class_get_byval_arg (klass), sp [0], 0, ins_flag);
*sp++ = ins;
ins_flag = 0;
inline_costs += 1;
break;
}
case MONO_CEE_LDSTR:
if (method->wrapper_type == MONO_WRAPPER_DYNAMIC_METHOD) {
EMIT_NEW_PCONST (cfg, ins, mono_method_get_wrapper_data (method, n));
ins->type = STACK_OBJ;
*sp = ins;
}
else if (method->wrapper_type != MONO_WRAPPER_NONE) {
MonoInst *iargs [1];
char *str = (char *)mono_method_get_wrapper_data (method, n);
if (cfg->compile_aot)
EMIT_NEW_LDSTRLITCONST (cfg, iargs [0], str);
else
EMIT_NEW_PCONST (cfg, iargs [0], str);
*sp = mono_emit_jit_icall (cfg, mono_string_new_wrapper_internal, iargs);
} else {
{
if (cfg->cbb->out_of_line) {
MonoInst *iargs [2];
if (image == mono_defaults.corlib) {
/*
* Avoid relocations in AOT and save some space by using a
* version of helper_ldstr specialized to mscorlib.
*/
EMIT_NEW_ICONST (cfg, iargs [0], mono_metadata_token_index (n));
*sp = mono_emit_jit_icall (cfg, mono_helper_ldstr_mscorlib, iargs);
} else {
/* Avoid creating the string object */
EMIT_NEW_IMAGECONST (cfg, iargs [0], image);
EMIT_NEW_ICONST (cfg, iargs [1], mono_metadata_token_index (n));
*sp = mono_emit_jit_icall (cfg, mono_helper_ldstr, iargs);
}
}
else
if (cfg->compile_aot) {
NEW_LDSTRCONST (cfg, ins, image, n);
*sp = ins;
MONO_ADD_INS (cfg->cbb, ins);
}
else {
NEW_PCONST (cfg, ins, NULL);
ins->type = STACK_OBJ;
ins->inst_p0 = mono_ldstr_checked (image, mono_metadata_token_index (n), cfg->error);
CHECK_CFG_ERROR;
if (!ins->inst_p0)
OUT_OF_MEMORY_FAILURE;
*sp = ins;
MONO_ADD_INS (cfg->cbb, ins);
}
}
}
sp++;
break;
case MONO_CEE_NEWOBJ: {
MonoInst *iargs [2];
MonoMethodSignature *fsig;
MonoInst this_ins;
MonoInst *alloc;
MonoInst *vtable_arg = NULL;
cmethod = mini_get_method (cfg, method, token, NULL, generic_context);
CHECK_CFG_ERROR;
fsig = mono_method_get_signature_checked (cmethod, image, token, generic_context, cfg->error);
CHECK_CFG_ERROR;
mono_save_token_info (cfg, image, token, cmethod);
if (!mono_class_init_internal (cmethod->klass))
TYPE_LOAD_ERROR (cmethod->klass);
context_used = mini_method_check_context_used (cfg, cmethod);
if (!dont_verify && !cfg->skip_visibility) {
MonoMethod *cil_method = cmethod;
MonoMethod *target_method = cil_method;
if (method->is_inflated) {
MonoGenericContainer *container = mono_method_get_generic_container (method_definition);
MonoGenericContext *context = (container != NULL ? &container->context : NULL);
target_method = mini_get_method_allow_open (method, token, NULL, context, cfg->error);
CHECK_CFG_ERROR;
}
if (!mono_method_can_access_method (method_definition, target_method) &&
!mono_method_can_access_method (method, cil_method))
emit_method_access_failure (cfg, method, cil_method);
}
if (cfg->gshared && cmethod && cmethod->klass != method->klass && mono_class_is_ginst (cmethod->klass) && mono_method_is_generic_sharable (cmethod, TRUE) && mono_class_needs_cctor_run (cmethod->klass, method)) {
emit_class_init (cfg, cmethod->klass);
CHECK_TYPELOAD (cmethod->klass);
}
/*
if (cfg->gsharedvt) {
if (mini_is_gsharedvt_variable_signature (sig))
GSHAREDVT_FAILURE (il_op);
}
*/
n = fsig->param_count;
CHECK_STACK (n);
/*
* Generate smaller code for the common newobj <exception> instruction in
* argument checking code.
*/
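/*
 * E.g. (illustrative) ldstr "arg" followed by
 * newobj ArgumentNullException::.ctor(string) collapses into a single
 * mono_create_corlib_exception_1 icall taking the type token and the string.
 */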
if (cfg->cbb->out_of_line && m_class_get_image (cmethod->klass) == mono_defaults.corlib &&
is_exception_class (cmethod->klass) && n <= 2 &&
((n < 1) || (!m_type_is_byref (fsig->params [0]) && fsig->params [0]->type == MONO_TYPE_STRING)) &&
((n < 2) || (!m_type_is_byref (fsig->params [1]) && fsig->params [1]->type == MONO_TYPE_STRING))) {
MonoInst *iargs [3];
sp -= n;
EMIT_NEW_ICONST (cfg, iargs [0], m_class_get_type_token (cmethod->klass));
switch (n) {
case 0:
*sp ++ = mono_emit_jit_icall (cfg, mono_create_corlib_exception_0, iargs);
break;
case 1:
iargs [1] = sp [0];
*sp ++ = mono_emit_jit_icall (cfg, mono_create_corlib_exception_1, iargs);
break;
case 2:
iargs [1] = sp [0];
iargs [2] = sp [1];
*sp ++ = mono_emit_jit_icall (cfg, mono_create_corlib_exception_2, iargs);
break;
default:
g_assert_not_reached ();
}
inline_costs += 5;
break;
}
/* move the args to allow room for 'this' in the first position */
while (n--) {
--sp;
sp [1] = sp [0];
}
for (int i = 0; i < fsig->param_count; ++i)
sp [i + fsig->hasthis] = convert_value (cfg, fsig->params [i], sp [i + fsig->hasthis]);
/* check_call_signature () requires sp[0] to be set */
this_ins.type = STACK_OBJ;
sp [0] = &this_ins;
if (check_call_signature (cfg, fsig, sp))
UNVERIFIED;
iargs [0] = NULL;
if (mini_class_is_system_array (cmethod->klass)) {
*sp = emit_get_rgctx_method (cfg, context_used,
cmethod, MONO_RGCTX_INFO_METHOD);
MonoJitICallId function = MONO_JIT_ICALL_ZeroIsReserved;
int rank = m_class_get_rank (cmethod->klass);
int n = fsig->param_count;
/* Optimize the common cases: the ctor that takes one length per rank (no lbounds). */
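/* E.g. (illustrative) "new int [2, 3]" has rank == 2 and a two-length ctor, so it maps to MONO_JIT_ICALL_mono_array_new_2. */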
if (n == rank) {
switch (n) {
case 1: function = MONO_JIT_ICALL_mono_array_new_1;
break;
case 2: function = MONO_JIT_ICALL_mono_array_new_2;
break;
case 3: function = MONO_JIT_ICALL_mono_array_new_3;
break;
case 4: function = MONO_JIT_ICALL_mono_array_new_4;
break;
default:
break;
}
}
/* Regular case: rank > 4, or length and lbound specified per rank. */
if (function == MONO_JIT_ICALL_ZeroIsReserved) {
// FIXME Maximum value of param_count? Realistically 64. Fits in imm?
if (!array_new_localalloc_ins) {
MONO_INST_NEW (cfg, array_new_localalloc_ins, OP_LOCALLOC_IMM);
array_new_localalloc_ins->dreg = alloc_preg (cfg);
cfg->flags |= MONO_CFG_HAS_ALLOCA;
MONO_ADD_INS (init_localsbb, array_new_localalloc_ins);
}
array_new_localalloc_ins->inst_imm = MAX (array_new_localalloc_ins->inst_imm, n * sizeof (target_mgreg_t));
int dreg = array_new_localalloc_ins->dreg;
if (2 * rank == n) {
/* [lbound, length, lbound, length, ...]
* mono_array_new_n_icall expects a non-interleaved list of
* lbounds and lengths, so deinterleave here.
*/
for (int l = 0; l < 2; ++l) {
int src = l;
int dst = l * rank;
for (int r = 0; r < rank; ++r, src += 2, ++dst) {
NEW_STORE_MEMBASE (cfg, ins, OP_STORE_MEMBASE_REG, dreg, dst * sizeof (target_mgreg_t), sp [src + 1]->dreg);
MONO_ADD_INS (cfg->cbb, ins);
}
}
} else {
/* [length, length, length, ...] */
for (int i = 0; i < n; ++i) {
NEW_STORE_MEMBASE (cfg, ins, OP_STORE_MEMBASE_REG, dreg, i * sizeof (target_mgreg_t), sp [i + 1]->dreg);
MONO_ADD_INS (cfg->cbb, ins);
}
}
EMIT_NEW_ICONST (cfg, ins, n);
sp [1] = ins;
EMIT_NEW_UNALU (cfg, ins, OP_MOVE, alloc_preg (cfg), dreg);
ins->type = STACK_PTR;
sp [2] = ins;
// FIXME Adjust sp by n - 3? Attempts failed.
function = MONO_JIT_ICALL_mono_array_new_n_icall;
}
alloc = mono_emit_jit_icall_id (cfg, function, sp);
} else if (cmethod->string_ctor) {
g_assert (!context_used);
g_assert (!vtable_arg);
/* we simply pass a null pointer */
EMIT_NEW_PCONST (cfg, *sp, NULL);
/* now call the string ctor */
alloc = mini_emit_method_call_full (cfg, cmethod, fsig, FALSE, sp, NULL, NULL, NULL);
} else {
if (m_class_is_valuetype (cmethod->klass)) {
iargs [0] = mono_compile_create_var (cfg, m_class_get_byval_arg (cmethod->klass), OP_LOCAL);
mini_emit_init_rvar (cfg, iargs [0]->dreg, m_class_get_byval_arg (cmethod->klass));
EMIT_NEW_TEMPLOADA (cfg, *sp, iargs [0]->inst_c0);
alloc = NULL;
/*
* The code generated by mini_emit_virtual_call () expects
* iargs [0] to be a boxed instance, but luckily the vcall
* will be transformed into a normal call there.
*/
} else if (context_used) {
alloc = handle_alloc (cfg, cmethod->klass, FALSE, context_used);
*sp = alloc;
} else {
MonoVTable *vtable = NULL;
if (!cfg->compile_aot)
vtable = mono_class_vtable_checked (cmethod->klass, cfg->error);
CHECK_CFG_ERROR;
CHECK_TYPELOAD (cmethod->klass);
/*
* TypeInitializationExceptions thrown from the mono_runtime_class_init
* call in mono_jit_runtime_invoke () can abort the finalizer thread.
* As a workaround, we call class cctors before allocating objects.
*/
if (mini_field_access_needs_cctor_run (cfg, method, cmethod->klass, vtable) && !(g_slist_find (class_inits, cmethod->klass))) {
emit_class_init (cfg, cmethod->klass);
if (cfg->verbose_level > 2)
printf ("class %s.%s needs init call for ctor\n", m_class_get_name_space (cmethod->klass), m_class_get_name (cmethod->klass));
class_inits = g_slist_prepend (class_inits, cmethod->klass);
}
alloc = handle_alloc (cfg, cmethod->klass, FALSE, 0);
*sp = alloc;
}
CHECK_CFG_EXCEPTION; /*for handle_alloc*/
if (alloc)
MONO_EMIT_NEW_UNALU (cfg, OP_NOT_NULL, -1, alloc->dreg);
/* Now call the actual ctor */
int ctor_inline_costs = 0;
handle_ctor_call (cfg, cmethod, fsig, context_used, sp, ip, &ctor_inline_costs);
// don't contribute to inline_costs if the ctor has [MethodImpl(MethodImplOptions.AggressiveInlining)]
if (!COMPILE_LLVM (cfg) || !(cmethod->iflags & METHOD_IMPL_ATTRIBUTE_AGGRESSIVE_INLINING))
inline_costs += ctor_inline_costs;
CHECK_CFG_EXCEPTION;
}
if (alloc == NULL) {
/* Valuetype */
EMIT_NEW_TEMPLOAD (cfg, ins, iargs [0]->inst_c0);
mini_type_to_eval_stack_type (cfg, m_class_get_byval_arg (ins->klass), ins);
*sp++= ins;
} else {
*sp++ = alloc;
}
inline_costs += 5;
if (!(seq_point_locs && mono_bitset_test_fast (seq_point_locs, next_ip - header->code)))
emit_seq_point (cfg, method, next_ip, FALSE, TRUE);
break;
}
case MONO_CEE_CASTCLASS:
case MONO_CEE_ISINST: {
--sp;
klass = mini_get_class (method, token, generic_context);
CHECK_TYPELOAD (klass);
if (sp [0]->type != STACK_OBJ)
UNVERIFIED;
MONO_INST_NEW (cfg, ins, (il_op == MONO_CEE_ISINST) ? OP_ISINST : OP_CASTCLASS);
ins->dreg = alloc_preg (cfg);
ins->sreg1 = (*sp)->dreg;
ins->klass = klass;
ins->type = STACK_OBJ;
MONO_ADD_INS (cfg->cbb, ins);
CHECK_CFG_EXCEPTION;
*sp++ = ins;
cfg->flags |= MONO_CFG_HAS_TYPE_CHECK;
break;
}
case MONO_CEE_UNBOX_ANY: {
MonoInst *res, *addr;
--sp;
klass = mini_get_class (method, token, generic_context);
CHECK_TYPELOAD (klass);
mono_save_token_info (cfg, image, token, klass);
context_used = mini_class_check_context_used (cfg, klass);
if (mini_is_gsharedvt_klass (klass)) {
res = handle_unbox_gsharedvt (cfg, klass, *sp);
inline_costs += 2;
} else if (mini_class_is_reference (klass)) {
if (MONO_INS_IS_PCONST_NULL (*sp)) {
EMIT_NEW_PCONST (cfg, res, NULL);
res->type = STACK_OBJ;
} else {
MONO_INST_NEW (cfg, res, OP_CASTCLASS);
res->dreg = alloc_preg (cfg);
res->sreg1 = (*sp)->dreg;
res->klass = klass;
res->type = STACK_OBJ;
MONO_ADD_INS (cfg->cbb, res);
cfg->flags |= MONO_CFG_HAS_TYPE_CHECK;
}
} else if (mono_class_is_nullable (klass)) {
res = handle_unbox_nullable (cfg, *sp, klass, context_used);
} else {
addr = mini_handle_unbox (cfg, klass, *sp, context_used);
/* LDOBJ */
EMIT_NEW_LOAD_MEMBASE_TYPE (cfg, ins, m_class_get_byval_arg (klass), addr->dreg, 0);
res = ins;
inline_costs += 2;
}
*sp ++ = res;
break;
}
case MONO_CEE_BOX: {
MonoInst *val;
MonoClass *enum_class;
MonoMethod *has_flag;
MonoMethodSignature *has_flag_sig;
--sp;
val = *sp;
klass = mini_get_class (method, token, generic_context);
CHECK_TYPELOAD (klass);
mono_save_token_info (cfg, image, token, klass);
context_used = mini_class_check_context_used (cfg, klass);
if (mini_class_is_reference (klass)) {
*sp++ = val;
break;
}
val = convert_value (cfg, m_class_get_byval_arg (klass), val);
if (klass == mono_defaults.void_class)
UNVERIFIED;
if (target_type_is_incompatible (cfg, m_class_get_byval_arg (klass), val))
UNVERIFIED;
/* frequent check in generic code: box (struct), brtrue */
/*
* Look for:
*
* <push int/long ptr>
* <push int/long>
* box MyFlags
* constrained. MyFlags
* callvirt instance bool class [mscorlib] System.Enum::HasFlag (class [mscorlib] System.Enum)
*
* If we find this sequence and the operand types on box and constrained
* are equal, we can emit a specialized instruction sequence instead of
* the very slow HasFlag () call.
* This code sequence is generated by older mcs/csc; the newer one is handled in
* emit_inst_for_method ().
*/
guint32 constrained_token;
guint32 callvirt_token;
if ((cfg->opt & MONO_OPT_INTRINS) &&
// FIXME ip_in_bb as we go?
next_ip < end && ip_in_bb (cfg, cfg->cbb, next_ip) &&
(ip = il_read_constrained (next_ip, end, &constrained_token)) &&
ip_in_bb (cfg, cfg->cbb, ip) &&
(ip = il_read_callvirt (ip, end, &callvirt_token)) &&
ip_in_bb (cfg, cfg->cbb, ip) &&
m_class_is_enumtype (klass) &&
(enum_class = mini_get_class (method, constrained_token, generic_context)) &&
(has_flag = mini_get_method (cfg, method, callvirt_token, NULL, generic_context)) &&
has_flag->klass == mono_defaults.enum_class &&
!strcmp (has_flag->name, "HasFlag") &&
(has_flag_sig = mono_method_signature_internal (has_flag)) &&
has_flag_sig->hasthis &&
has_flag_sig->param_count == 1) {
CHECK_TYPELOAD (enum_class);
if (enum_class == klass) {
MonoInst *enum_this, *enum_flag;
next_ip = ip;
il_op = MONO_CEE_CALLVIRT;
--sp;
enum_this = sp [0];
enum_flag = sp [1];
*sp++ = mini_handle_enum_has_flag (cfg, klass, enum_this, -1, enum_flag);
break;
}
}
guint32 unbox_any_token;
/*
* Common in generic code:
* box T1, unbox.any T2.
*/
if ((cfg->opt & MONO_OPT_INTRINS) &&
next_ip < end && ip_in_bb (cfg, cfg->cbb, next_ip) &&
(ip = il_read_unbox_any (next_ip, end, &unbox_any_token))) {
MonoClass *unbox_klass = mini_get_class (method, unbox_any_token, generic_context);
CHECK_TYPELOAD (unbox_klass);
if (klass == unbox_klass) {
next_ip = ip;
*sp++ = val;
break;
}
}
// Optimize
//
// box
// call object::GetType()
//
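// The mono_class_get_checked () round-trip below presumably filters out
// classes whose type token does not resolve back to the same class, e.g.
// inflated generic instances whose token names the generic definition.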
guint32 gettype_token;
if ((ip = il_read_call (next_ip, end, &gettype_token)) && ip_in_bb (cfg, cfg->cbb, ip)) {
MonoMethod* gettype_method = mini_get_method (cfg, method, gettype_token, NULL, generic_context);
if (!strcmp (gettype_method->name, "GetType") && gettype_method->klass == mono_defaults.object_class) {
mono_class_init_internal (klass);
if (mono_class_get_checked (m_class_get_image (klass), m_class_get_type_token (klass), error) == klass) {
if (cfg->compile_aot) {
EMIT_NEW_TYPE_FROM_HANDLE_CONST (cfg, ins, m_class_get_image (klass), m_class_get_type_token (klass), generic_context);
} else {
MonoType *klass_type = m_class_get_byval_arg (klass);
MonoReflectionType* reflection_type = mono_type_get_object_checked (klass_type, cfg->error);
EMIT_NEW_PCONST (cfg, ins, reflection_type);
}
ins->type = STACK_OBJ;
ins->klass = mono_defaults.systemtype_class;
*sp++ = ins;
next_ip = ip;
break;
}
}
}
// Optimize
//
// box
// ldnull
// ceq (or cgt.un)
//
// to just
//
// ldc.i4.0 (or 1)
guchar* ldnull_ip;
if ((ldnull_ip = il_read_op (next_ip, end, CEE_LDNULL, MONO_CEE_LDNULL)) && ip_in_bb (cfg, cfg->cbb, ldnull_ip)) {
gboolean is_eq = FALSE, is_neq = FALSE;
if ((ip = il_read_op (ldnull_ip, end, CEE_PREFIX1, MONO_CEE_CEQ)))
is_eq = TRUE;
else if ((ip = il_read_op (ldnull_ip, end, CEE_PREFIX1, MONO_CEE_CGT_UN)))
is_neq = TRUE;
if ((is_eq || is_neq) && ip_in_bb (cfg, cfg->cbb, ip) &&
!mono_class_is_nullable (klass) && !mini_is_gsharedvt_klass (klass)) {
next_ip = ip;
il_op = (MonoOpcodeEnum) (is_eq ? CEE_LDC_I4_0 : CEE_LDC_I4_1);
EMIT_NEW_ICONST (cfg, ins, is_eq ? 0 : 1);
ins->type = STACK_I4;
*sp++ = ins;
break;
}
}
guint32 isinst_tk = 0;
if ((ip = il_read_op_and_token (next_ip, end, CEE_ISINST, MONO_CEE_ISINST, &isinst_tk)) &&
ip_in_bb (cfg, cfg->cbb, ip)) {
MonoClass *isinst_class = mini_get_class (method, isinst_tk, generic_context);
if (!mono_class_is_nullable (klass) && !mono_class_is_nullable (isinst_class) &&
!mini_is_gsharedvt_variable_klass (klass) && !mini_is_gsharedvt_variable_klass (isinst_class) &&
!mono_class_is_open_constructed_type (m_class_get_byval_arg (klass)) &&
!mono_class_is_open_constructed_type (m_class_get_byval_arg (isinst_class))) {
// Optimize
//
// box
// isinst [Type]
// brfalse/brtrue
//
// to
//
// ldc.i4.0 (or 1)
// brfalse/brtrue
//
guchar* br_ip = NULL;
if ((br_ip = il_read_brtrue (ip, end, &target)) || (br_ip = il_read_brtrue_s (ip, end, &target)) ||
(br_ip = il_read_brfalse (ip, end, &target)) || (br_ip = il_read_brfalse_s (ip, end, &target))) {
gboolean isinst = mono_class_is_assignable_from_internal (isinst_class, klass);
next_ip = ip;
il_op = (MonoOpcodeEnum) (isinst ? CEE_LDC_I4_1 : CEE_LDC_I4_0);
EMIT_NEW_ICONST (cfg, ins, isinst ? 1 : 0);
ins->type = STACK_I4;
*sp++ = ins;
break;
}
// Optimize
//
// box
// isinst [Type]
// ldnull
// ceq/cgt.un
//
// to
//
// ldc.i4.0 (or 1)
//
guchar* ldnull_ip = NULL;
if ((ldnull_ip = il_read_op (ip, end, CEE_LDNULL, MONO_CEE_LDNULL)) && ip_in_bb (cfg, cfg->cbb, ldnull_ip)) {
gboolean is_eq = FALSE, is_neq = FALSE;
if ((ip = il_read_op (ldnull_ip, end, CEE_PREFIX1, MONO_CEE_CEQ)))
is_eq = TRUE;
else if ((ip = il_read_op (ldnull_ip, end, CEE_PREFIX1, MONO_CEE_CGT_UN)))
is_neq = TRUE;
if ((is_eq || is_neq) && ip_in_bb (cfg, cfg->cbb, ip) &&
!mono_class_is_nullable (klass) && !mini_is_gsharedvt_klass (klass)) {
gboolean isinst = mono_class_is_assignable_from_internal (isinst_class, klass);
next_ip = ip;
if (is_eq)
isinst = !isinst;
il_op = (MonoOpcodeEnum) (isinst ? CEE_LDC_I4_1 : CEE_LDC_I4_0);
EMIT_NEW_ICONST (cfg, ins, isinst ? 1 : 0);
ins->type = STACK_I4;
*sp++ = ins;
break;
}
}
// Optimize
//
// box
// isinst [Type]
// unbox.any
//
// to
//
// nop
//
guchar* unbox_ip = NULL;
guint32 unbox_token = 0;
if ((unbox_ip = il_read_unbox_any (ip, end, &unbox_token)) && ip_in_bb (cfg, cfg->cbb, unbox_ip)) {
MonoClass *unbox_klass = mini_get_class (method, unbox_token, generic_context);
CHECK_TYPELOAD (unbox_klass);
if (!mono_class_is_nullable (unbox_klass) &&
!mini_is_gsharedvt_klass (unbox_klass) &&
klass == isinst_class &&
klass == unbox_klass)
{
*sp++ = val;
next_ip = unbox_ip;
break;
}
}
}
}
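/*
 * Optimize box + brtrue/brfalse: boxing a non-nullable value type can never
 * produce a null reference, so the branch direction is known at compile time.
 */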
gboolean is_true;
// FIXME: LLVM can't handle the inconsistent bb linking
if (!mono_class_is_nullable (klass) &&
!mini_is_gsharedvt_klass (klass) &&
next_ip < end && ip_in_bb (cfg, cfg->cbb, next_ip) &&
( (is_true = !!(ip = il_read_brtrue (next_ip, end, &target))) ||
(is_true = !!(ip = il_read_brtrue_s (next_ip, end, &target))) ||
(ip = il_read_brfalse (next_ip, end, &target)) ||
(ip = il_read_brfalse_s (next_ip, end, &target)))) {
int dreg;
MonoBasicBlock *true_bb, *false_bb;
il_op = (MonoOpcodeEnum)next_ip [0];
next_ip = ip;
if (cfg->verbose_level > 3) {
printf ("converting (in B%d: stack: %d) %s", cfg->cbb->block_num, (int)(sp - stack_start), mono_disasm_code_one (NULL, method, ip, NULL));
printf ("<box+brtrue opt>\n");
}
/*
* We need to link both bblocks, since it is needed for handling stack
* arguments correctly (See test_0_box_brtrue_opt_regress_81102).
* Branching to only one of them would lead to inconsistencies, so
* generate an ICONST+BRTRUE, the branch opts will get rid of them.
*/
GET_BBLOCK (cfg, true_bb, target);
GET_BBLOCK (cfg, false_bb, next_ip);
mono_link_bblock (cfg, cfg->cbb, true_bb);
mono_link_bblock (cfg, cfg->cbb, false_bb);
if (sp != stack_start) {
handle_stack_args (cfg, stack_start, sp - stack_start);
sp = stack_start;
CHECK_UNVERIFIABLE (cfg);
}
if (COMPILE_LLVM (cfg)) {
dreg = alloc_ireg (cfg);
MONO_EMIT_NEW_ICONST (cfg, dreg, 0);
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, dreg, is_true ? 0 : 1);
MONO_EMIT_NEW_BRANCH_BLOCK2 (cfg, OP_IBEQ, true_bb, false_bb);
} else {
/* The JIT can't eliminate the iconst+compare */
MONO_INST_NEW (cfg, ins, OP_BR);
ins->inst_target_bb = is_true ? true_bb : false_bb;
MONO_ADD_INS (cfg->cbb, ins);
}
start_new_bblock = 1;
break;
}
if (m_class_is_enumtype (klass) && !mini_is_gsharedvt_klass (klass) && !(val->type == STACK_I8 && TARGET_SIZEOF_VOID_P == 4)) {
/* Can't do this with 64 bit enums on 32 bit since the vtype decomp pass is run after the long decomp pass */
if (val->opcode == OP_ICONST) {
MONO_INST_NEW (cfg, ins, OP_BOX_ICONST);
ins->type = STACK_OBJ;
ins->klass = klass;
ins->inst_c0 = val->inst_c0;
ins->dreg = alloc_dreg (cfg, (MonoStackType)val->type);
} else {
MONO_INST_NEW (cfg, ins, OP_BOX);
ins->type = STACK_OBJ;
ins->klass = klass;
ins->sreg1 = val->dreg;
ins->dreg = alloc_dreg (cfg, (MonoStackType)val->type);
}
MONO_ADD_INS (cfg->cbb, ins);
*sp++ = ins;
} else {
*sp++ = mini_emit_box (cfg, val, klass, context_used);
}
CHECK_CFG_EXCEPTION;
inline_costs += 1;
break;
}
case MONO_CEE_UNBOX: {
--sp;
klass = mini_get_class (method, token, generic_context);
CHECK_TYPELOAD (klass);
mono_save_token_info (cfg, image, token, klass);
context_used = mini_class_check_context_used (cfg, klass);
if (mono_class_is_nullable (klass)) {
MonoInst *val;
val = handle_unbox_nullable (cfg, *sp, klass, context_used);
EMIT_NEW_VARLOADA (cfg, ins, get_vreg_to_inst (cfg, val->dreg), m_class_get_byval_arg (val->klass));
*sp++= ins;
} else {
ins = mini_handle_unbox (cfg, klass, *sp, context_used);
*sp++ = ins;
}
inline_costs += 2;
break;
}
case MONO_CEE_LDFLD:
case MONO_CEE_LDFLDA:
case MONO_CEE_STFLD:
case MONO_CEE_LDSFLD:
case MONO_CEE_LDSFLDA:
case MONO_CEE_STSFLD: {
MonoClassField *field;
guint foffset;
gboolean is_instance;
gpointer addr = NULL;
gboolean is_special_static;
MonoType *ftype;
MonoInst *store_val = NULL;
MonoInst *thread_ins;
is_instance = (il_op == MONO_CEE_LDFLD || il_op == MONO_CEE_LDFLDA || il_op == MONO_CEE_STFLD);
if (is_instance) {
if (il_op == MONO_CEE_STFLD) {
sp -= 2;
store_val = sp [1];
} else {
--sp;
}
if (sp [0]->type == STACK_I4 || sp [0]->type == STACK_I8 || sp [0]->type == STACK_R8)
UNVERIFIED;
if (il_op != MONO_CEE_LDFLD && sp [0]->type == STACK_VTYPE)
UNVERIFIED;
} else {
if (il_op == MONO_CEE_STSFLD) {
sp--;
store_val = sp [0];
}
}
if (method->wrapper_type != MONO_WRAPPER_NONE) {
field = (MonoClassField *)mono_method_get_wrapper_data (method, token);
klass = m_field_get_parent (field);
}
else {
klass = NULL;
field = mono_field_from_token_checked (image, token, &klass, generic_context, cfg->error);
if (!field)
CHECK_TYPELOAD (klass);
CHECK_CFG_ERROR;
}
if (!dont_verify && !cfg->skip_visibility && !mono_method_can_access_field (method, field))
FIELD_ACCESS_FAILURE (method, field);
mono_class_init_internal (klass);
mono_class_setup_fields (klass);
ftype = mono_field_get_type_internal (field);
/*
* LDFLD etc. is usable on static fields as well, so convert those cases to
* the static case.
*/
if (is_instance && ftype->attrs & FIELD_ATTRIBUTE_STATIC) {
switch (il_op) {
case MONO_CEE_LDFLD:
il_op = MONO_CEE_LDSFLD;
break;
case MONO_CEE_STFLD:
il_op = MONO_CEE_STSFLD;
break;
case MONO_CEE_LDFLDA:
il_op = MONO_CEE_LDSFLDA;
break;
default:
g_assert_not_reached ();
}
is_instance = FALSE;
}
context_used = mini_class_check_context_used (cfg, klass);
if (il_op == MONO_CEE_LDSFLD) {
ins = mini_emit_inst_for_field_load (cfg, field);
if (ins) {
*sp++ = ins;
goto field_access_end;
}
}
/* INSTANCE CASE */
if (is_instance)
g_assert (field->offset);
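/* For valuetypes, field->offset includes the MonoObject header, which is not present in the unboxed representation */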
foffset = m_class_is_valuetype (klass) ? field->offset - MONO_ABI_SIZEOF (MonoObject): field->offset;
if (il_op == MONO_CEE_STFLD) {
sp [1] = convert_value (cfg, field->type, sp [1]);
if (target_type_is_incompatible (cfg, field->type, sp [1]))
UNVERIFIED;
{
MonoInst *store;
MONO_EMIT_NULL_CHECK (cfg, sp [0]->dreg, foffset > mono_target_pagesize ());
if (ins_flag & MONO_INST_VOLATILE) {
/* Volatile stores have release semantics, see 12.6.7 in Ecma 335 */
mini_emit_memory_barrier (cfg, MONO_MEMORY_BARRIER_REL);
}
if (mini_is_gsharedvt_klass (klass)) {
MonoInst *offset_ins;
context_used = mini_class_check_context_used (cfg, klass);
offset_ins = emit_get_gsharedvt_info (cfg, field, MONO_RGCTX_INFO_FIELD_OFFSET);
/* The value is offset by 1 */
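/* (presumably so that an offset of 0 can be distinguished from an uninitialized rgctx slot) */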
EMIT_NEW_BIALU_IMM (cfg, ins, OP_PSUB_IMM, offset_ins->dreg, offset_ins->dreg, 1);
dreg = alloc_ireg_mp (cfg);
EMIT_NEW_BIALU (cfg, ins, OP_PADD, dreg, sp [0]->dreg, offset_ins->dreg);
if (cfg->gen_write_barriers && mini_type_to_stind (cfg, field->type) == CEE_STIND_REF && !MONO_INS_IS_PCONST_NULL (sp [1])) {
store = mini_emit_storing_write_barrier (cfg, ins, sp [1]);
} else {
/* The decomposition will call mini_emit_memory_copy () which will emit a wbarrier if needed */
EMIT_NEW_STORE_MEMBASE_TYPE (cfg, store, field->type, dreg, 0, sp [1]->dreg);
}
} else {
if (cfg->gen_write_barriers && mini_type_to_stind (cfg, field->type) == CEE_STIND_REF && !MONO_INS_IS_PCONST_NULL (sp [1])) {
/* insert call to write barrier */
MonoInst *ptr;
int dreg;
dreg = alloc_ireg_mp (cfg);
EMIT_NEW_BIALU_IMM (cfg, ptr, OP_PADD_IMM, dreg, sp [0]->dreg, foffset);
store = mini_emit_storing_write_barrier (cfg, ptr, sp [1]);
} else {
EMIT_NEW_STORE_MEMBASE_TYPE (cfg, store, field->type, sp [0]->dreg, foffset, sp [1]->dreg);
}
}
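/* Stores through an object reference can fault; the address of a local (OP_LDADDR) cannot */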
if (sp [0]->opcode != OP_LDADDR)
store->flags |= MONO_INST_FAULT;
store->flags |= ins_flag;
}
goto field_access_end;
}
if (is_instance) {
if (sp [0]->type == STACK_VTYPE) {
MonoInst *var;
/* Have to compute the address of the variable */
var = get_vreg_to_inst (cfg, sp [0]->dreg);
if (!var)
var = mono_compile_create_var_for_vreg (cfg, m_class_get_byval_arg (klass), OP_LOCAL, sp [0]->dreg);
else
g_assert (var->klass == klass);
EMIT_NEW_VARLOADA (cfg, ins, var, m_class_get_byval_arg (var->klass));
sp [0] = ins;
}
if (il_op == MONO_CEE_LDFLDA) {
if (sp [0]->type == STACK_OBJ) {
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, sp [0]->dreg, 0);
MONO_EMIT_NEW_COND_EXC (cfg, EQ, "NullReferenceException");
}
dreg = alloc_ireg_mp (cfg);
if (mini_is_gsharedvt_klass (klass)) {
MonoInst *offset_ins;
offset_ins = emit_get_gsharedvt_info (cfg, field, MONO_RGCTX_INFO_FIELD_OFFSET);
/* The value is offset by 1 */
EMIT_NEW_BIALU_IMM (cfg, ins, OP_PSUB_IMM, offset_ins->dreg, offset_ins->dreg, 1);
EMIT_NEW_BIALU (cfg, ins, OP_PADD, dreg, sp [0]->dreg, offset_ins->dreg);
} else {
EMIT_NEW_BIALU_IMM (cfg, ins, OP_PADD_IMM, dreg, sp [0]->dreg, foffset);
}
ins->klass = mono_class_from_mono_type_internal (field->type);
ins->type = STACK_MP;
*sp++ = ins;
} else {
MonoInst *load;
MONO_EMIT_NULL_CHECK (cfg, sp [0]->dreg, foffset > mono_target_pagesize ());
#ifdef MONO_ARCH_SIMD_INTRINSICS
if (sp [0]->opcode == OP_LDADDR && m_class_is_simd_type (klass) && cfg->opt & MONO_OPT_SIMD) {
ins = mono_emit_simd_field_load (cfg, field, sp [0]);
if (ins) {
*sp++ = ins;
goto field_access_end;
}
}
#endif
MonoInst *field_add_inst = sp [0];
if (mini_is_gsharedvt_klass (klass)) {
MonoInst *offset_ins;
offset_ins = emit_get_gsharedvt_info (cfg, field, MONO_RGCTX_INFO_FIELD_OFFSET);
/* The value is offset by 1 */
EMIT_NEW_BIALU_IMM (cfg, ins, OP_PSUB_IMM, offset_ins->dreg, offset_ins->dreg, 1);
EMIT_NEW_BIALU (cfg, field_add_inst, OP_PADD, alloc_ireg_mp (cfg), sp [0]->dreg, offset_ins->dreg);
foffset = 0;
}
load = mini_emit_memory_load (cfg, field->type, field_add_inst, foffset, ins_flag);
if (sp [0]->opcode != OP_LDADDR)
load->flags |= MONO_INST_FAULT;
*sp++ = load;
}
}
if (is_instance)
goto field_access_end;
/* STATIC CASE */
context_used = mini_class_check_context_used (cfg, klass);
if (ftype->attrs & FIELD_ATTRIBUTE_LITERAL) {
mono_error_set_field_missing (cfg->error, m_field_get_parent (field), field->name, NULL, "Using static instructions with literal field");
CHECK_CFG_ERROR;
}
/* The special_static_fields field is initialized in mono_class_vtable (), so it needs
* to be called here.
*/
if (!context_used) {
mono_class_vtable_checked (klass, cfg->error);
CHECK_CFG_ERROR;
CHECK_TYPELOAD (klass);
}
addr = mono_special_static_field_get_offset (field, cfg->error);
CHECK_CFG_ERROR;
CHECK_TYPELOAD (klass);
is_special_static = mono_class_field_is_special_static (field);
if (is_special_static && ((gsize)addr & 0x80000000) == 0)
thread_ins = mono_create_tls_get (cfg, TLS_KEY_THREAD);
else
thread_ins = NULL;
/* Generate IR to compute the field address */
if (is_special_static && ((gsize)addr & 0x80000000) == 0 && thread_ins &&
!(context_used && cfg->gsharedvt && mini_is_gsharedvt_klass (klass))) {
/*
* Fast access to TLS data
* Inline version of get_thread_static_data () in
* threads.c.
*/
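/*
 * Decode the offset: bits 0-5 index into MonoInternalThread::static_data,
 * bits 6-30 are the offset inside that chunk, and bit 31 marks fields
 * which cannot use this fast path.
 */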
guint32 offset;
int idx, static_data_reg, array_reg, dreg;
static_data_reg = alloc_ireg (cfg);
MONO_EMIT_NEW_LOAD_MEMBASE (cfg, static_data_reg, thread_ins->dreg, MONO_STRUCT_OFFSET (MonoInternalThread, static_data));
if (cfg->compile_aot || context_used) {
int offset_reg, offset2_reg, idx_reg;
/* For TLS variables, this will return the TLS offset */
if (context_used) {
MonoInst *addr_ins = emit_get_rgctx_field (cfg, context_used, field, MONO_RGCTX_INFO_FIELD_OFFSET);
/* The value is offset by 1 */
EMIT_NEW_BIALU_IMM (cfg, ins, OP_PSUB_IMM, addr_ins->dreg, addr_ins->dreg, 1);
} else {
EMIT_NEW_SFLDACONST (cfg, ins, field);
}
offset_reg = ins->dreg;
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_IAND_IMM, offset_reg, offset_reg, 0x7fffffff);
idx_reg = alloc_ireg (cfg);
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_IAND_IMM, idx_reg, offset_reg, 0x3f);
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_ISHL_IMM, idx_reg, idx_reg, TARGET_SIZEOF_VOID_P == 8 ? 3 : 2);
MONO_EMIT_NEW_BIALU (cfg, OP_PADD, static_data_reg, static_data_reg, idx_reg);
array_reg = alloc_ireg (cfg);
MONO_EMIT_NEW_LOAD_MEMBASE (cfg, array_reg, static_data_reg, 0);
offset2_reg = alloc_ireg (cfg);
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_ISHR_UN_IMM, offset2_reg, offset_reg, 6);
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_IAND_IMM, offset2_reg, offset2_reg, 0x1ffffff);
dreg = alloc_ireg (cfg);
EMIT_NEW_BIALU (cfg, ins, OP_PADD, dreg, array_reg, offset2_reg);
} else {
offset = (gsize)addr & 0x7fffffff;
idx = offset & 0x3f;
array_reg = alloc_ireg (cfg);
MONO_EMIT_NEW_LOAD_MEMBASE (cfg, array_reg, static_data_reg, idx * TARGET_SIZEOF_VOID_P);
dreg = alloc_ireg (cfg);
EMIT_NEW_BIALU_IMM (cfg, ins, OP_ADD_IMM, dreg, array_reg, ((offset >> 6) & 0x1ffffff));
}
} else if ((cfg->compile_aot && is_special_static) ||
(context_used && is_special_static)) {
MonoInst *iargs [1];
g_assert (m_field_get_parent (field));
if (context_used) {
iargs [0] = emit_get_rgctx_field (cfg, context_used,
field, MONO_RGCTX_INFO_CLASS_FIELD);
} else {
EMIT_NEW_FIELDCONST (cfg, iargs [0], field);
}
ins = mono_emit_jit_icall (cfg, mono_class_static_field_address, iargs);
} else if (context_used) {
MonoInst *static_data;
/*
g_print ("sharing static field access in %s.%s.%s - depth %d offset %d\n",
method->klass->name_space, method->klass->name, method->name,
depth, field->offset);
*/
if (mono_class_needs_cctor_run (klass, method))
emit_class_init (cfg, klass);
/*
* The pointer we're computing here is
*
* super_info.static_data + field->offset
*/
static_data = mini_emit_get_rgctx_klass (cfg, context_used,
klass, MONO_RGCTX_INFO_STATIC_DATA);
if (mini_is_gsharedvt_klass (klass)) {
MonoInst *offset_ins;
offset_ins = emit_get_rgctx_field (cfg, context_used, field, MONO_RGCTX_INFO_FIELD_OFFSET);
/* The value is offset by 1 */
EMIT_NEW_BIALU_IMM (cfg, ins, OP_PSUB_IMM, offset_ins->dreg, offset_ins->dreg, 1);
dreg = alloc_ireg_mp (cfg);
EMIT_NEW_BIALU (cfg, ins, OP_PADD, dreg, static_data->dreg, offset_ins->dreg);
} else if (field->offset == 0) {
ins = static_data;
} else {
int addr_reg = mono_alloc_preg (cfg);
EMIT_NEW_BIALU_IMM (cfg, ins, OP_PADD_IMM, addr_reg, static_data->dreg, field->offset);
}
} else if (cfg->compile_aot && addr) {
MonoInst *iargs [1];
g_assert (m_field_get_parent (field));
EMIT_NEW_FIELDCONST (cfg, iargs [0], field);
ins = mono_emit_jit_icall (cfg, mono_class_static_field_address, iargs);
} else {
MonoVTable *vtable = NULL;
if (!cfg->compile_aot)
vtable = mono_class_vtable_checked (klass, cfg->error);
CHECK_CFG_ERROR;
CHECK_TYPELOAD (klass);
if (!addr) {
if (mini_field_access_needs_cctor_run (cfg, method, klass, vtable)) {
if (!(g_slist_find (class_inits, klass))) {
emit_class_init (cfg, klass);
if (cfg->verbose_level > 2)
printf ("class %s.%s needs init call for %s\n", m_class_get_name_space (klass), m_class_get_name (klass), mono_field_get_name (field));
class_inits = g_slist_prepend (class_inits, klass);
}
} else {
if (cfg->run_cctors) {
/* This ensures that inlining cannot trigger */
/* .cctors: too many apps depend on them */
/* running in a specific order... */
g_assert (vtable);
if (!vtable->initialized && m_class_has_cctor (vtable->klass))
INLINE_FAILURE ("class init");
if (!mono_runtime_class_init_full (vtable, cfg->error)) {
mono_cfg_set_exception (cfg, MONO_EXCEPTION_MONO_ERROR);
goto exception_exit;
}
}
}
if (cfg->compile_aot)
EMIT_NEW_SFLDACONST (cfg, ins, field);
else {
g_assert (vtable);
addr = mono_static_field_get_addr (vtable, field);
g_assert (addr);
EMIT_NEW_PCONST (cfg, ins, addr);
}
} else {
MonoInst *iargs [1];
EMIT_NEW_ICONST (cfg, iargs [0], GPOINTER_TO_UINT (addr));
ins = mono_emit_jit_icall (cfg, mono_get_special_static_data, iargs);
}
}
/* Generate IR to do the actual load/store operation */
if ((il_op == MONO_CEE_STFLD || il_op == MONO_CEE_STSFLD)) {
if (ins_flag & MONO_INST_VOLATILE) {
/* Volatile stores have release semantics, see 12.6.7 in Ecma 335 */
mini_emit_memory_barrier (cfg, MONO_MEMORY_BARRIER_REL);
} else if (!mini_debug_options.weak_memory_model && mini_type_is_reference (ftype)) {
mini_emit_memory_barrier (cfg, MONO_MEMORY_BARRIER_REL);
}
}
if (il_op == MONO_CEE_LDSFLDA) {
ins->klass = mono_class_from_mono_type_internal (ftype);
ins->type = STACK_PTR;
*sp++ = ins;
} else if (il_op == MONO_CEE_STSFLD) {
MonoInst *store;
EMIT_NEW_STORE_MEMBASE_TYPE (cfg, store, ftype, ins->dreg, 0, store_val->dreg);
store->flags |= ins_flag;
} else {
gboolean is_const = FALSE;
MonoVTable *vtable = NULL;
gpointer addr = NULL;
if (!context_used) {
vtable = mono_class_vtable_checked (klass, cfg->error);
CHECK_CFG_ERROR;
CHECK_TYPELOAD (klass);
}
if ((ftype->attrs & FIELD_ATTRIBUTE_INIT_ONLY) && (((addr = mono_aot_readonly_field_override (field)) != NULL) ||
(!context_used && !cfg->compile_aot && vtable->initialized))) {
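/* The field is initonly and its class is already initialized, so the value can be embedded as a compile-time constant */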
int ro_type = ftype->type;
if (!addr)
addr = mono_static_field_get_addr (vtable, field);
if (ro_type == MONO_TYPE_VALUETYPE && m_class_is_enumtype (ftype->data.klass)) {
ro_type = mono_class_enum_basetype_internal (ftype->data.klass)->type;
}
GSHAREDVT_FAILURE (il_op);
/* printf ("RO-FIELD %s.%s:%s\n", klass->name_space, klass->name, mono_field_get_name (field));*/
is_const = TRUE;
switch (ro_type) {
case MONO_TYPE_BOOLEAN:
case MONO_TYPE_U1:
EMIT_NEW_ICONST (cfg, *sp, *((guint8 *)addr));
sp++;
break;
case MONO_TYPE_I1:
EMIT_NEW_ICONST (cfg, *sp, *((gint8 *)addr));
sp++;
break;
case MONO_TYPE_CHAR:
case MONO_TYPE_U2:
EMIT_NEW_ICONST (cfg, *sp, *((guint16 *)addr));
sp++;
break;
case MONO_TYPE_I2:
EMIT_NEW_ICONST (cfg, *sp, *((gint16 *)addr));
sp++;
break;
case MONO_TYPE_I4:
EMIT_NEW_ICONST (cfg, *sp, *((gint32 *)addr));
sp++;
break;
case MONO_TYPE_U4:
EMIT_NEW_ICONST (cfg, *sp, *((guint32 *)addr));
sp++;
break;
case MONO_TYPE_I:
case MONO_TYPE_U:
case MONO_TYPE_PTR:
case MONO_TYPE_FNPTR:
EMIT_NEW_PCONST (cfg, *sp, *((gpointer *)addr));
mini_type_to_eval_stack_type ((cfg), field->type, *sp);
sp++;
break;
case MONO_TYPE_STRING:
case MONO_TYPE_OBJECT:
case MONO_TYPE_CLASS:
case MONO_TYPE_SZARRAY:
case MONO_TYPE_ARRAY:
if (!mono_gc_is_moving ()) {
EMIT_NEW_PCONST (cfg, *sp, *((gpointer *)addr));
mini_type_to_eval_stack_type ((cfg), field->type, *sp);
sp++;
} else {
is_const = FALSE;
}
break;
case MONO_TYPE_I8:
case MONO_TYPE_U8:
EMIT_NEW_I8CONST (cfg, *sp, *((gint64 *)addr));
sp++;
break;
case MONO_TYPE_R4:
case MONO_TYPE_R8:
case MONO_TYPE_VALUETYPE:
default:
is_const = FALSE;
break;
}
}
if (!is_const) {
MonoInst *load;
EMIT_NEW_LOAD_MEMBASE_TYPE (cfg, load, field->type, ins->dreg, 0);
load->flags |= ins_flag;
*sp++ = load;
}
}
field_access_end:
if ((il_op == MONO_CEE_LDFLD || il_op == MONO_CEE_LDSFLD) && (ins_flag & MONO_INST_VOLATILE)) {
/* Volatile loads have acquire semantics, see 12.6.7 in Ecma 335 */
mini_emit_memory_barrier (cfg, MONO_MEMORY_BARRIER_ACQ);
}
ins_flag = 0;
break;
}
case MONO_CEE_STOBJ:
sp -= 2;
klass = mini_get_class (method, token, generic_context);
CHECK_TYPELOAD (klass);
/* FIXME: should check item at sp [1] is compatible with the type of the store. */
mini_emit_memory_store (cfg, m_class_get_byval_arg (klass), sp [0], sp [1], ins_flag);
ins_flag = 0;
inline_costs += 1;
break;
/*
* Array opcodes
*/
case MONO_CEE_NEWARR: {
MonoInst *len_ins;
const char *data_ptr;
int data_size = 0;
guint32 field_token;
--sp;
klass = mini_get_class (method, token, generic_context);
CHECK_TYPELOAD (klass);
if (m_class_get_byval_arg (klass)->type == MONO_TYPE_VOID)
UNVERIFIED;
context_used = mini_class_check_context_used (cfg, klass);
#ifndef TARGET_S390X
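/* On 32-bit targets, a 64-bit array length has to be overflow-checked and narrowed to 32 bits */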
if (sp [0]->type == STACK_I8 && TARGET_SIZEOF_VOID_P == 4) {
MONO_INST_NEW (cfg, ins, OP_LCONV_TO_OVF_U4);
ins->sreg1 = sp [0]->dreg;
ins->type = STACK_I4;
ins->dreg = alloc_ireg (cfg);
MONO_ADD_INS (cfg->cbb, ins);
*sp = mono_decompose_opcode (cfg, ins);
}
#else
/* The array allocator expects a 64-bit input, and we cannot rely
on the high bits of a 32-bit result, so we have to extend. */
if (sp [0]->type == STACK_I4 && TARGET_SIZEOF_VOID_P == 8) {
MONO_INST_NEW (cfg, ins, OP_ICONV_TO_I8);
ins->sreg1 = sp [0]->dreg;
ins->type = STACK_I8;
ins->dreg = alloc_ireg (cfg);
MONO_ADD_INS (cfg->cbb, ins);
*sp = mono_decompose_opcode (cfg, ins);
}
#endif
if (context_used) {
MonoInst *args [3];
MonoClass *array_class = mono_class_create_array (klass, 1);
MonoMethod *managed_alloc = mono_gc_get_managed_array_allocator (array_class);
/* FIXME: Use OP_NEWARR and decompose later to help abcrem */
/* vtable */
args [0] = mini_emit_get_rgctx_klass (cfg, context_used,
array_class, MONO_RGCTX_INFO_VTABLE);
/* array len */
args [1] = sp [0];
if (managed_alloc)
ins = mono_emit_method_call (cfg, managed_alloc, args, NULL);
else
ins = mono_emit_jit_icall (cfg, ves_icall_array_new_specific, args);
} else {
/* Decompose later since it is needed by abcrem */
MonoClass *array_type = mono_class_create_array (klass, 1);
mono_class_vtable_checked (array_type, cfg->error);
CHECK_CFG_ERROR;
CHECK_TYPELOAD (array_type);
MONO_INST_NEW (cfg, ins, OP_NEWARR);
ins->dreg = alloc_ireg_ref (cfg);
ins->sreg1 = sp [0]->dreg;
ins->inst_newa_class = klass;
ins->type = STACK_OBJ;
ins->klass = array_type;
MONO_ADD_INS (cfg->cbb, ins);
cfg->flags |= MONO_CFG_NEEDS_DECOMPOSE;
cfg->cbb->needs_decompose = TRUE;
/* Needed so mono_emit_load_get_addr () gets called */
mono_get_got_var (cfg);
}
len_ins = sp [0];
ip += 5;
*sp++ = ins;
inline_costs += 1;
/*
* We inline/optimize the initialization sequence if possible.
* We should also allocate the array as not cleared, since we spend as much time clearing to 0 as initializing.
* For small sizes, open code the memcpy.
* Ensure the rva field is big enough.
*/
if ((cfg->opt & MONO_OPT_INTRINS) && next_ip < end
&& ip_in_bb (cfg, cfg->cbb, next_ip)
&& (len_ins->opcode == OP_ICONST)
&& (data_ptr = initialize_array_data (cfg, method,
cfg->compile_aot, next_ip, end, klass,
len_ins->inst_c0, &data_size, &field_token,
&il_op, &next_ip))) {
MonoMethod *memcpy_method = mini_get_memcpy_method ();
MonoInst *iargs [3];
int add_reg = alloc_ireg_mp (cfg);
EMIT_NEW_BIALU_IMM (cfg, iargs [0], OP_PADD_IMM, add_reg, ins->dreg, MONO_STRUCT_OFFSET (MonoArray, vector));
if (cfg->compile_aot) {
EMIT_NEW_AOTCONST_TOKEN (cfg, iargs [1], MONO_PATCH_INFO_RVA, m_class_get_image (method->klass), GPOINTER_TO_UINT(field_token), STACK_PTR, NULL);
} else {
EMIT_NEW_PCONST (cfg, iargs [1], (char*)data_ptr);
}
EMIT_NEW_ICONST (cfg, iargs [2], data_size);
mono_emit_method_call (cfg, memcpy_method, iargs, NULL);
}
break;
}
case MONO_CEE_LDLEN:
--sp;
if (sp [0]->type != STACK_OBJ)
UNVERIFIED;
MONO_INST_NEW (cfg, ins, OP_LDLEN);
ins->dreg = alloc_preg (cfg);
ins->sreg1 = sp [0]->dreg;
ins->inst_imm = MONO_STRUCT_OFFSET (MonoArray, max_length);
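/* inst_imm holds the offset of the length field, consumed when OP_LDLEN is decomposed into a load */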
ins->type = STACK_I4;
/* This flag will be inherited by the decomposition */
ins->flags |= MONO_INST_FAULT | MONO_INST_INVARIANT_LOAD;
MONO_ADD_INS (cfg->cbb, ins);
cfg->flags |= MONO_CFG_NEEDS_DECOMPOSE;
cfg->cbb->needs_decompose = TRUE;
MONO_EMIT_NEW_UNALU (cfg, OP_NOT_NULL, -1, sp [0]->dreg);
*sp++ = ins;
break;
case MONO_CEE_LDELEMA:
sp -= 2;
if (sp [0]->type != STACK_OBJ)
UNVERIFIED;
cfg->flags |= MONO_CFG_HAS_LDELEMA;
klass = mini_get_class (method, token, generic_context);
CHECK_TYPELOAD (klass);
/* We need to make sure that this array is exactly the type it needs
* to be for correctness. The wrappers are lax with their usage,
* so we need to ignore them here.
*/
if (!m_class_is_valuetype (klass) && method->wrapper_type == MONO_WRAPPER_NONE && !readonly) {
MonoClass *array_class = mono_class_create_array (klass, 1);
mini_emit_check_array_type (cfg, sp [0], array_class);
CHECK_TYPELOAD (array_class);
}
readonly = FALSE;
ins = mini_emit_ldelema_1_ins (cfg, klass, sp [0], sp [1], TRUE, FALSE);
*sp++ = ins;
break;
case MONO_CEE_LDELEM:
case MONO_CEE_LDELEM_I1:
case MONO_CEE_LDELEM_U1:
case MONO_CEE_LDELEM_I2:
case MONO_CEE_LDELEM_U2:
case MONO_CEE_LDELEM_I4:
case MONO_CEE_LDELEM_U4:
case MONO_CEE_LDELEM_I8:
case MONO_CEE_LDELEM_I:
case MONO_CEE_LDELEM_R4:
case MONO_CEE_LDELEM_R8:
case MONO_CEE_LDELEM_REF: {
MonoInst *addr;
sp -= 2;
if (il_op == MONO_CEE_LDELEM) {
klass = mini_get_class (method, token, generic_context);
CHECK_TYPELOAD (klass);
mono_class_init_internal (klass);
}
else
klass = array_access_to_klass (il_op);
if (sp [0]->type != STACK_OBJ)
UNVERIFIED;
cfg->flags |= MONO_CFG_HAS_LDELEMA;
if (mini_is_gsharedvt_variable_klass (klass)) {
// FIXME-VT: OP_ICONST optimization
addr = mini_emit_ldelema_1_ins (cfg, klass, sp [0], sp [1], TRUE, FALSE);
EMIT_NEW_LOAD_MEMBASE_TYPE (cfg, ins, m_class_get_byval_arg (klass), addr->dreg, 0);
ins->opcode = OP_LOADV_MEMBASE;
} else if (sp [1]->opcode == OP_ICONST) {
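/* Constant index: fold the element address computation and keep only the bounds check */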
int array_reg = sp [0]->dreg;
int index_reg = sp [1]->dreg;
int offset = (mono_class_array_element_size (klass) * sp [1]->inst_c0) + MONO_STRUCT_OFFSET (MonoArray, vector);
if (SIZEOF_REGISTER == 8 && COMPILE_LLVM (cfg))
MONO_EMIT_NEW_UNALU (cfg, OP_ZEXT_I4, index_reg, index_reg);
MONO_EMIT_BOUNDS_CHECK (cfg, array_reg, MonoArray, max_length, index_reg);
EMIT_NEW_LOAD_MEMBASE_TYPE (cfg, ins, m_class_get_byval_arg (klass), array_reg, offset);
} else {
addr = mini_emit_ldelema_1_ins (cfg, klass, sp [0], sp [1], TRUE, FALSE);
EMIT_NEW_LOAD_MEMBASE_TYPE (cfg, ins, m_class_get_byval_arg (klass), addr->dreg, 0);
}
*sp++ = ins;
break;
}
case MONO_CEE_STELEM_I:
case MONO_CEE_STELEM_I1:
case MONO_CEE_STELEM_I2:
case MONO_CEE_STELEM_I4:
case MONO_CEE_STELEM_I8:
case MONO_CEE_STELEM_R4:
case MONO_CEE_STELEM_R8:
case MONO_CEE_STELEM_REF:
case MONO_CEE_STELEM: {
sp -= 3;
cfg->flags |= MONO_CFG_HAS_LDELEMA;
if (il_op == MONO_CEE_STELEM) {
klass = mini_get_class (method, token, generic_context);
CHECK_TYPELOAD (klass);
mono_class_init_internal (klass);
}
else
klass = array_access_to_klass (il_op);
if (sp [0]->type != STACK_OBJ)
UNVERIFIED;
sp [2] = convert_value (cfg, m_class_get_byval_arg (klass), sp [2]);
mini_emit_array_store (cfg, klass, sp, TRUE);
inline_costs += 1;
break;
}
case MONO_CEE_CKFINITE: {
--sp;
if (cfg->llvm_only) {
MonoInst *iargs [1];
iargs [0] = sp [0];
*sp++ = mono_emit_jit_icall (cfg, mono_ckfinite, iargs);
} else {
sp [0] = convert_value (cfg, m_class_get_byval_arg (mono_defaults.double_class), sp [0]);
MONO_INST_NEW (cfg, ins, OP_CKFINITE);
ins->sreg1 = sp [0]->dreg;
ins->dreg = alloc_freg (cfg);
ins->type = STACK_R8;
MONO_ADD_INS (cfg->cbb, ins);
*sp++ = mono_decompose_opcode (cfg, ins);
}
break;
}
case MONO_CEE_REFANYVAL: {
MonoInst *src_var, *src;
int klass_reg = alloc_preg (cfg);
int dreg = alloc_preg (cfg);
GSHAREDVT_FAILURE (il_op);
MONO_INST_NEW (cfg, ins, il_op);
--sp;
klass = mini_get_class (method, token, generic_context);
CHECK_TYPELOAD (klass);
context_used = mini_class_check_context_used (cfg, klass);
// FIXME:
src_var = get_vreg_to_inst (cfg, sp [0]->dreg);
if (!src_var)
src_var = mono_compile_create_var_for_vreg (cfg, m_class_get_byval_arg (mono_defaults.typed_reference_class), OP_LOCAL, sp [0]->dreg);
EMIT_NEW_VARLOADA (cfg, src, src_var, src_var->inst_vtype);
MONO_EMIT_NEW_LOAD_MEMBASE (cfg, klass_reg, src->dreg, MONO_STRUCT_OFFSET (MonoTypedRef, klass));
if (context_used) {
MonoInst *klass_ins;
klass_ins = mini_emit_get_rgctx_klass (cfg, context_used,
klass, MONO_RGCTX_INFO_KLASS);
// FIXME:
MONO_EMIT_NEW_BIALU (cfg, OP_COMPARE, -1, klass_reg, klass_ins->dreg);
MONO_EMIT_NEW_COND_EXC (cfg, NE_UN, "InvalidCastException");
} else {
mini_emit_class_check (cfg, klass_reg, klass);
}
EMIT_NEW_LOAD_MEMBASE (cfg, ins, OP_LOAD_MEMBASE, dreg, src->dreg, MONO_STRUCT_OFFSET (MonoTypedRef, value));
ins->type = STACK_MP;
ins->klass = klass;
*sp++ = ins;
break;
}
case MONO_CEE_MKREFANY: {
MonoInst *loc, *addr;
GSHAREDVT_FAILURE (il_op);
MONO_INST_NEW (cfg, ins, il_op);
--sp;
klass = mini_get_class (method, token, generic_context);
CHECK_TYPELOAD (klass);
context_used = mini_class_check_context_used (cfg, klass);
loc = mono_compile_create_var (cfg, m_class_get_byval_arg (mono_defaults.typed_reference_class), OP_LOCAL);
EMIT_NEW_TEMPLOADA (cfg, addr, loc->inst_c0);
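/* Fill in the MonoTypedRef fields: the klass, its byval MonoType and the address of the value */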
MonoInst *const_ins = mini_emit_get_rgctx_klass (cfg, context_used, klass, MONO_RGCTX_INFO_KLASS);
int type_reg = alloc_preg (cfg);
MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STOREP_MEMBASE_REG, addr->dreg, MONO_STRUCT_OFFSET (MonoTypedRef, klass), const_ins->dreg);
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_ADD_IMM, type_reg, const_ins->dreg, m_class_offsetof_byval_arg ());
MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STOREP_MEMBASE_REG, addr->dreg, MONO_STRUCT_OFFSET (MonoTypedRef, type), type_reg);
MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STOREP_MEMBASE_REG, addr->dreg, MONO_STRUCT_OFFSET (MonoTypedRef, value), sp [0]->dreg);
EMIT_NEW_TEMPLOAD (cfg, ins, loc->inst_c0);
ins->type = STACK_VTYPE;
ins->klass = mono_defaults.typed_reference_class;
*sp++ = ins;
break;
}
case MONO_CEE_LDTOKEN: {
gpointer handle;
MonoClass *handle_class;
if (method->wrapper_type == MONO_WRAPPER_DYNAMIC_METHOD ||
method->wrapper_type == MONO_WRAPPER_SYNCHRONIZED) {
handle = mono_method_get_wrapper_data (method, n);
handle_class = (MonoClass *)mono_method_get_wrapper_data (method, n + 1);
if (handle_class == mono_defaults.typehandle_class)
handle = m_class_get_byval_arg ((MonoClass*)handle);
}
else {
handle = mono_ldtoken_checked (image, n, &handle_class, generic_context, cfg->error);
CHECK_CFG_ERROR;
}
if (!handle)
LOAD_ERROR;
mono_class_init_internal (handle_class);
if (cfg->gshared) {
if (mono_metadata_token_table (n) == MONO_TABLE_TYPEDEF ||
mono_metadata_token_table (n) == MONO_TABLE_TYPEREF) {
/* This case handles ldtoken
of an open type, like for
typeof(Gen<>). */
context_used = 0;
} else if (handle_class == mono_defaults.typehandle_class) {
context_used = mini_class_check_context_used (cfg, mono_class_from_mono_type_internal ((MonoType *)handle));
} else if (handle_class == mono_defaults.fieldhandle_class)
context_used = mini_class_check_context_used (cfg, m_field_get_parent (((MonoClassField*)handle)));
else if (handle_class == mono_defaults.methodhandle_class)
context_used = mini_method_check_context_used (cfg, (MonoMethod *)handle);
else
g_assert_not_reached ();
}
{
if ((next_ip + 4 < end) && ip_in_bb (cfg, cfg->cbb, next_ip) &&
((next_ip [0] == CEE_CALL) || (next_ip [0] == CEE_CALLVIRT)) &&
(cmethod = mini_get_method (cfg, method, read32 (next_ip + 1), NULL, generic_context)) &&
(cmethod->klass == mono_defaults.systemtype_class) &&
(strcmp (cmethod->name, "GetTypeFromHandle") == 0)) {
MonoClass *tclass = mono_class_from_mono_type_internal ((MonoType *)handle);
mono_class_init_internal (tclass);
// Optimize to true/false if next instruction is `call instance bool Type::get_IsValueType()`
guchar *is_vt_ip;
guint32 is_vt_token;
if ((is_vt_ip = il_read_call (next_ip + 5, end, &is_vt_token)) && ip_in_bb (cfg, cfg->cbb, is_vt_ip)) {
MonoMethod *is_vt_method = mini_get_method (cfg, method, is_vt_token, NULL, generic_context);
if (is_vt_method->klass == mono_defaults.systemtype_class &&
!mini_is_gsharedvt_variable_klass (tclass) &&
!mono_class_is_open_constructed_type (m_class_get_byval_arg (tclass)) &&
!strcmp ("get_IsValueType", is_vt_method->name)) {
next_ip = is_vt_ip;
EMIT_NEW_ICONST (cfg, ins, m_class_is_valuetype (tclass) ? 1 : 0);
ins->type = STACK_I4;
*sp++ = ins;
break;
}
}
if (context_used) {
MONO_INST_NEW (cfg, ins, OP_RTTYPE);
ins->dreg = alloc_ireg_ref (cfg);
ins->inst_p0 = tclass;
ins->type = STACK_OBJ;
MONO_ADD_INS (cfg->cbb, ins);
cfg->flags |= MONO_CFG_NEEDS_DECOMPOSE;
cfg->cbb->needs_decompose = TRUE;
} else if (cfg->compile_aot) {
if (method->wrapper_type) {
error_init (error); // must do this since there are multiple conditionals below
if (mono_class_get_checked (m_class_get_image (tclass), m_class_get_type_token (tclass), error) == tclass && !generic_context) {
/* Special case for static synchronized wrappers */
EMIT_NEW_TYPE_FROM_HANDLE_CONST (cfg, ins, m_class_get_image (tclass), m_class_get_type_token (tclass), generic_context);
} else {
mono_error_cleanup (error); /* FIXME don't swallow the error */
/* FIXME: n is not a normal token */
DISABLE_AOT (cfg);
EMIT_NEW_PCONST (cfg, ins, NULL);
}
} else {
EMIT_NEW_TYPE_FROM_HANDLE_CONST (cfg, ins, image, n, generic_context);
}
} else {
MonoReflectionType *rt = mono_type_get_object_checked ((MonoType *)handle, cfg->error);
CHECK_CFG_ERROR;
EMIT_NEW_PCONST (cfg, ins, rt);
}
ins->type = STACK_OBJ;
ins->klass = mono_defaults.runtimetype_class;
il_op = (MonoOpcodeEnum)next_ip [0];
next_ip += 5;
} else {
MonoInst *addr, *vtvar;
vtvar = mono_compile_create_var (cfg, m_class_get_byval_arg (handle_class), OP_LOCAL);
if (context_used) {
if (handle_class == mono_defaults.typehandle_class) {
ins = mini_emit_get_rgctx_klass (cfg, context_used,
mono_class_from_mono_type_internal ((MonoType *)handle),
MONO_RGCTX_INFO_TYPE);
} else if (handle_class == mono_defaults.methodhandle_class) {
ins = emit_get_rgctx_method (cfg, context_used,
(MonoMethod *)handle, MONO_RGCTX_INFO_METHOD);
} else if (handle_class == mono_defaults.fieldhandle_class) {
ins = emit_get_rgctx_field (cfg, context_used,
(MonoClassField *)handle, MONO_RGCTX_INFO_CLASS_FIELD);
} else {
g_assert_not_reached ();
}
} else if (cfg->compile_aot) {
EMIT_NEW_LDTOKENCONST (cfg, ins, image, n, generic_context);
} else {
EMIT_NEW_PCONST (cfg, ins, handle);
}
EMIT_NEW_TEMPLOADA (cfg, addr, vtvar->inst_c0);
MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STORE_MEMBASE_REG, addr->dreg, 0, ins->dreg);
EMIT_NEW_TEMPLOAD (cfg, ins, vtvar->inst_c0);
}
}
*sp++ = ins;
break;
}
case MONO_CEE_THROW:
if (sp [-1]->type != STACK_OBJ)
UNVERIFIED;
MONO_INST_NEW (cfg, ins, OP_THROW);
--sp;
ins->sreg1 = sp [0]->dreg;
cfg->cbb->out_of_line = TRUE;
MONO_ADD_INS (cfg->cbb, ins);
MONO_INST_NEW (cfg, ins, OP_NOT_REACHED);
MONO_ADD_INS (cfg->cbb, ins);
sp = stack_start;
link_bblock (cfg, cfg->cbb, end_bblock);
start_new_bblock = 1;
/* This can complicate code generation for llvm since the return value might not be defined */
if (COMPILE_LLVM (cfg))
INLINE_FAILURE ("throw");
break;
case MONO_CEE_ENDFINALLY:
if (!ip_in_finally_clause (cfg, ip - header->code))
UNVERIFIED;
/* mono_save_seq_point_info () depends on this */
if (sp != stack_start)
emit_seq_point (cfg, method, ip, FALSE, FALSE);
MONO_INST_NEW (cfg, ins, OP_ENDFINALLY);
MONO_ADD_INS (cfg->cbb, ins);
start_new_bblock = 1;
ins_has_side_effect = FALSE;
/*
* Control will leave the method so empty the stack, otherwise
* the next basic block will start with a nonempty stack.
*/
while (sp != stack_start) {
sp--;
}
break;
case MONO_CEE_LEAVE:
case MONO_CEE_LEAVE_S: {
GList *handlers;
/* empty the stack */
g_assert (sp >= stack_start);
sp = stack_start;
/*
* If this leave statement is in a catch block, check for a
* pending exception, and rethrow it if necessary.
* We avoid doing this in runtime invoke wrappers, since those are called
* by native code which expects the wrapper to catch all exceptions.
*/
for (i = 0; i < header->num_clauses; ++i) {
MonoExceptionClause *clause = &header->clauses [i];
/*
* Use <= in the final comparison to handle clauses with multiple
* leave statements, like in bug #78024.
* The ordering of the exception clauses guarantees that we find the
* innermost clause.
*/
if (MONO_OFFSET_IN_HANDLER (clause, ip - header->code) && (clause->flags == MONO_EXCEPTION_CLAUSE_NONE) && (ip - header->code + ((il_op == MONO_CEE_LEAVE) ? 5 : 2)) <= (clause->handler_offset + clause->handler_len) && method->wrapper_type != MONO_WRAPPER_RUNTIME_INVOKE) {
MonoInst *exc_ins;
MonoBasicBlock *dont_throw;
/*
MonoInst *load;
NEW_TEMPLOAD (cfg, load, mono_find_exvar_for_offset (cfg, clause->handler_offset)->inst_c0);
*/
exc_ins = mono_emit_jit_icall (cfg, mono_thread_get_undeniable_exception, NULL);
NEW_BBLOCK (cfg, dont_throw);
/*
* Currently, we always rethrow the abort exception, despite the
* fact that this is not correct. See thread6.cs for an example.
* But propagating the abort exception is more important than
* getting the semantics right.
*/
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, exc_ins->dreg, 0);
MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_PBEQ, dont_throw);
MONO_EMIT_NEW_UNALU (cfg, OP_THROW, -1, exc_ins->dreg);
MONO_START_BB (cfg, dont_throw);
}
}
#ifdef ENABLE_LLVM
cfg->cbb->try_end = (intptr_t)(ip - header->code);
#endif
if ((handlers = mono_find_leave_clauses (cfg, ip, target))) {
GList *tmp;
/*
* For each finally clause that we exit we need to invoke the finally block.
* After each invocation we need to add try holes for all the clauses that
* we already exited.
*/
for (tmp = handlers; tmp; tmp = tmp->next) {
MonoLeaveClause *leave = (MonoLeaveClause *) tmp->data;
MonoExceptionClause *clause = leave->clause;
if (clause->flags != MONO_EXCEPTION_CLAUSE_FINALLY)
continue;
MonoInst *abort_exc = (MonoInst *)mono_find_exvar_for_offset (cfg, clause->handler_offset);
MonoBasicBlock *dont_throw;
/*
* Emit instrumentation code before linking the basic blocks below as this
* will alter cfg->cbb.
*/
mini_profiler_emit_call_finally (cfg, header, ip, leave->index, clause);
tblock = cfg->cil_offset_to_bb [clause->handler_offset];
g_assert (tblock);
link_bblock (cfg, cfg->cbb, tblock);
MONO_EMIT_NEW_PCONST (cfg, abort_exc->dreg, 0);
MONO_INST_NEW (cfg, ins, OP_CALL_HANDLER);
ins->inst_target_bb = tblock;
ins->inst_eh_blocks = tmp;
MONO_ADD_INS (cfg->cbb, ins);
cfg->cbb->has_call_handler = 1;
/* Throw exception if exvar is set */
/* FIXME Do we need this for calls from catch/filter ? */
NEW_BBLOCK (cfg, dont_throw);
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, abort_exc->dreg, 0);
MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_PBEQ, dont_throw);
mono_emit_jit_icall (cfg, ves_icall_thread_finish_async_abort, NULL);
cfg->cbb->clause_holes = tmp;
MONO_START_BB (cfg, dont_throw);
cfg->cbb->clause_holes = tmp;
if (COMPILE_LLVM (cfg)) {
MonoBasicBlock *target_bb;
/*
* Link the finally bblock with the target, since it will
* conceptually branch there.
*/
GET_BBLOCK (cfg, tblock, cfg->cil_start + clause->handler_offset + clause->handler_len - 1);
GET_BBLOCK (cfg, target_bb, target);
link_bblock (cfg, tblock, target_bb);
}
}
}
MONO_INST_NEW (cfg, ins, OP_BR);
MONO_ADD_INS (cfg->cbb, ins);
GET_BBLOCK (cfg, tblock, target);
link_bblock (cfg, cfg->cbb, tblock);
ins->inst_target_bb = tblock;
start_new_bblock = 1;
break;
}
/*
* Mono specific opcodes
*/
case MONO_CEE_MONO_ICALL: {
g_assert (method->wrapper_type != MONO_WRAPPER_NONE);
const MonoJitICallId jit_icall_id = (MonoJitICallId)token;
MonoJitICallInfo * const info = mono_find_jit_icall_info (jit_icall_id);
CHECK_STACK (info->sig->param_count);
sp -= info->sig->param_count;
if (token == MONO_JIT_ICALL_mono_threads_attach_coop) {
MonoInst *addr;
MonoBasicBlock *next_bb;
if (cfg->compile_aot) {
/*
* This is called on unattached threads, so it cannot go through the trampoline
* infrastructure. Use an indirect call through a got slot initialized at load time
* instead.
*/
EMIT_NEW_AOTCONST (cfg, addr, MONO_PATCH_INFO_JIT_ICALL_ADDR_NOCALL, GUINT_TO_POINTER (jit_icall_id));
ins = mini_emit_calli (cfg, info->sig, sp, addr, NULL, NULL);
} else {
ins = mono_emit_jit_icall_id (cfg, jit_icall_id, sp);
}
/*
* Parts of the initlocals code need to come after this, since they might call methods like memset.
* Also profiling needs to be after attach.
*/
init_localsbb2 = cfg->cbb;
NEW_BBLOCK (cfg, next_bb);
MONO_START_BB (cfg, next_bb);
} else {
if (token == MONO_JIT_ICALL_mono_threads_detach_coop) {
/* can't emit profiling code after a detach, so emit it now */
mini_profiler_emit_leave (cfg, NULL);
detached_before_ret = TRUE;
}
ins = mono_emit_jit_icall_id (cfg, jit_icall_id, sp);
}
if (!MONO_TYPE_IS_VOID (info->sig->ret))
*sp++ = ins;
inline_costs += CALL_COST * MIN(10, num_calls++);
break;
}
MonoJumpInfoType ldptr_type;
case MONO_CEE_MONO_LDPTR_CARD_TABLE:
ldptr_type = MONO_PATCH_INFO_GC_CARD_TABLE_ADDR;
goto mono_ldptr;
case MONO_CEE_MONO_LDPTR_NURSERY_START:
ldptr_type = MONO_PATCH_INFO_GC_NURSERY_START;
goto mono_ldptr;
case MONO_CEE_MONO_LDPTR_NURSERY_BITS:
ldptr_type = MONO_PATCH_INFO_GC_NURSERY_BITS;
goto mono_ldptr;
case MONO_CEE_MONO_LDPTR_INT_REQ_FLAG:
ldptr_type = MONO_PATCH_INFO_INTERRUPTION_REQUEST_FLAG;
goto mono_ldptr;
case MONO_CEE_MONO_LDPTR_PROFILER_ALLOCATION_COUNT:
ldptr_type = MONO_PATCH_INFO_PROFILER_ALLOCATION_COUNT;
mono_ldptr:
g_assert (method->wrapper_type != MONO_WRAPPER_NONE);
ins = mini_emit_runtime_constant (cfg, ldptr_type, NULL);
*sp++ = ins;
inline_costs += CALL_COST * MIN(10, num_calls++);
break;
case MONO_CEE_MONO_LDPTR: {
gpointer ptr;
g_assert (method->wrapper_type != MONO_WRAPPER_NONE);
ptr = mono_method_get_wrapper_data (method, token);
EMIT_NEW_PCONST (cfg, ins, ptr);
*sp++ = ins;
inline_costs += CALL_COST * MIN(10, num_calls++);
/* Can't embed random pointers into AOT code */
DISABLE_AOT (cfg);
break;
}
case MONO_CEE_MONO_JIT_ICALL_ADDR:
g_assert (method->wrapper_type != MONO_WRAPPER_NONE);
EMIT_NEW_JIT_ICALL_ADDRCONST (cfg, ins, GUINT_TO_POINTER (token));
*sp++ = ins;
inline_costs += CALL_COST * MIN(10, num_calls++);
break;
case MONO_CEE_MONO_ICALL_ADDR: {
MonoMethod *cmethod;
gpointer ptr;
g_assert (method->wrapper_type != MONO_WRAPPER_NONE);
cmethod = (MonoMethod *)mono_method_get_wrapper_data (method, token);
if (cfg->compile_aot) {
if (cfg->direct_pinvoke && ip + 6 < end && (ip [6] == CEE_POP)) {
/*
* This is generated by emit_native_wrapper () to resolve the pinvoke address
* before the call; it's not needed when using direct pinvoke.
* This is not an optimization, but it's used to avoid looking up pinvokes
* on platforms which don't support dlopen ().
*/
EMIT_NEW_PCONST (cfg, ins, NULL);
} else {
EMIT_NEW_AOTCONST (cfg, ins, MONO_PATCH_INFO_ICALL_ADDR, cmethod);
}
} else {
ptr = mono_lookup_internal_call (cmethod);
g_assert (ptr);
EMIT_NEW_PCONST (cfg, ins, ptr);
}
*sp++ = ins;
break;
}
case MONO_CEE_MONO_VTADDR: {
g_assert (method->wrapper_type != MONO_WRAPPER_NONE);
MonoInst *src_var, *src;
--sp;
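/* Push the address of the variable backing the vtype value */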
// FIXME:
src_var = get_vreg_to_inst (cfg, sp [0]->dreg);
EMIT_NEW_VARLOADA ((cfg), (src), src_var, src_var->inst_vtype);
*sp++ = src;
break;
}
case MONO_CEE_MONO_NEWOBJ: {
g_assert (method->wrapper_type != MONO_WRAPPER_NONE);
MonoInst *iargs [2];
klass = (MonoClass *)mono_method_get_wrapper_data (method, token);
mono_class_init_internal (klass);
NEW_CLASSCONST (cfg, iargs [0], klass);
MONO_ADD_INS (cfg->cbb, iargs [0]);
*sp++ = mono_emit_jit_icall (cfg, ves_icall_object_new, iargs);
inline_costs += CALL_COST * MIN(10, num_calls++);
break;
}
case MONO_CEE_MONO_OBJADDR:
g_assert (method->wrapper_type != MONO_WRAPPER_NONE);
--sp;
MONO_INST_NEW (cfg, ins, OP_MOVE);
ins->dreg = alloc_ireg_mp (cfg);
ins->sreg1 = sp [0]->dreg;
ins->type = STACK_MP;
MONO_ADD_INS (cfg->cbb, ins);
*sp++ = ins;
break;
case MONO_CEE_MONO_LDNATIVEOBJ:
/*
* Similar to LDOBJ, but instead load the unmanaged
* representation of the vtype to the stack.
*/
g_assert (method->wrapper_type != MONO_WRAPPER_NONE);
--sp;
klass = (MonoClass *)mono_method_get_wrapper_data (method, token);
g_assert (m_class_is_valuetype (klass));
mono_class_init_internal (klass);
{
MonoInst *src, *dest, *temp;
src = sp [0];
temp = mono_compile_create_var (cfg, m_class_get_byval_arg (klass), OP_LOCAL);
temp->backend.is_pinvoke = 1;
EMIT_NEW_TEMPLOADA (cfg, dest, temp->inst_c0);
mini_emit_memory_copy (cfg, dest, src, klass, TRUE, 0);
EMIT_NEW_TEMPLOAD (cfg, dest, temp->inst_c0);
dest->type = STACK_VTYPE;
dest->klass = klass;
*sp ++ = dest;
}
break;
case MONO_CEE_MONO_RETOBJ: {
/*
* Same as RET, but return the native representation of a vtype
* to the caller.
*/
g_assert (method->wrapper_type != MONO_WRAPPER_NONE);
g_assert (cfg->ret);
g_assert (mono_method_signature_internal (method)->pinvoke);
--sp;
klass = (MonoClass *)mono_method_get_wrapper_data (method, token);
if (!cfg->vret_addr) {
g_assert (cfg->ret_var_is_local);
EMIT_NEW_VARLOADA (cfg, ins, cfg->ret, cfg->ret->inst_vtype);
} else {
EMIT_NEW_RETLOADA (cfg, ins);
}
mini_emit_memory_copy (cfg, ins, sp [0], klass, TRUE, 0);
if (sp != stack_start)
UNVERIFIED;
if (!detached_before_ret)
mini_profiler_emit_leave (cfg, sp [0]);
MONO_INST_NEW (cfg, ins, OP_BR);
ins->inst_target_bb = end_bblock;
MONO_ADD_INS (cfg->cbb, ins);
link_bblock (cfg, cfg->cbb, end_bblock);
start_new_bblock = 1;
break;
}
case MONO_CEE_MONO_SAVE_LMF:
case MONO_CEE_MONO_RESTORE_LMF:
g_assert (method->wrapper_type != MONO_WRAPPER_NONE);
break;
case MONO_CEE_MONO_CLASSCONST:
g_assert (method->wrapper_type != MONO_WRAPPER_NONE);
EMIT_NEW_CLASSCONST (cfg, ins, mono_method_get_wrapper_data (method, token));
*sp++ = ins;
inline_costs += CALL_COST * MIN(10, num_calls++);
break;
case MONO_CEE_MONO_METHODCONST:
g_assert (method->wrapper_type != MONO_WRAPPER_NONE);
EMIT_NEW_METHODCONST (cfg, ins, mono_method_get_wrapper_data (method, token));
*sp++ = ins;
break;
case MONO_CEE_MONO_PINVOKE_ADDR_CACHE: {
g_assert (method->wrapper_type != MONO_WRAPPER_NONE);
MonoMethod *pinvoke_method = (MonoMethod*)mono_method_get_wrapper_data (method, token);
/* This is a memory slot used by the wrapper */
if (cfg->compile_aot) {
EMIT_NEW_AOTCONST (cfg, ins, MONO_PATCH_INFO_METHOD_PINVOKE_ADDR_CACHE, pinvoke_method);
} else {
gpointer addr = mono_mem_manager_alloc0 (cfg->mem_manager, sizeof (gpointer));
EMIT_NEW_PCONST (cfg, ins, addr);
}
*sp++ = ins;
break;
}
case MONO_CEE_MONO_NOT_TAKEN:
g_assert (method->wrapper_type != MONO_WRAPPER_NONE);
cfg->cbb->out_of_line = TRUE;
break;
case MONO_CEE_MONO_TLS: {
MonoTlsKey key;
g_assert (method->wrapper_type != MONO_WRAPPER_NONE);
key = (MonoTlsKey)n;
g_assert (key < TLS_KEY_NUM);
ins = mono_create_tls_get (cfg, key);
g_assert (ins);
ins->type = STACK_PTR;
*sp++ = ins;
break;
}
case MONO_CEE_MONO_DYN_CALL: {
MonoCallInst *call;
/* It would be easier to call a trampoline, but that would put an
* extra frame on the stack, confusing exception handling. So
* implement it inline using an opcode for now.
*/
g_assert (method->wrapper_type != MONO_WRAPPER_NONE);
if (!cfg->dyn_call_var) {
cfg->dyn_call_var = mono_compile_create_var (cfg, mono_get_int_type (), OP_LOCAL);
/* prevent it from being register allocated */
cfg->dyn_call_var->flags |= MONO_INST_VOLATILE;
}
/* Has to use a call inst since local regalloc expects it */
MONO_INST_NEW_CALL (cfg, call, OP_DYN_CALL);
ins = (MonoInst*)call;
sp -= 2;
ins->sreg1 = sp [0]->dreg;
ins->sreg2 = sp [1]->dreg;
MONO_ADD_INS (cfg->cbb, ins);
cfg->param_area = MAX (cfg->param_area, cfg->backend->dyn_call_param_area);
/* OP_DYN_CALL might need to allocate a dynamically sized param area */
cfg->flags |= MONO_CFG_HAS_ALLOCA;
inline_costs += CALL_COST * MIN(10, num_calls++);
break;
}
case MONO_CEE_MONO_MEMORY_BARRIER: {
g_assert (method->wrapper_type != MONO_WRAPPER_NONE);
mini_emit_memory_barrier (cfg, (int)n);
break;
}
case MONO_CEE_MONO_ATOMIC_STORE_I4: {
g_assert (method->wrapper_type != MONO_WRAPPER_NONE);
g_assert (mono_arch_opcode_supported (OP_ATOMIC_STORE_I4));
sp -= 2;
MONO_INST_NEW (cfg, ins, OP_ATOMIC_STORE_I4);
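/* For atomic stores, dreg holds the destination address and sreg1 the value */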
ins->dreg = sp [0]->dreg;
ins->sreg1 = sp [1]->dreg;
ins->backend.memory_barrier_kind = (int)n;
MONO_ADD_INS (cfg->cbb, ins);
break;
}
case MONO_CEE_MONO_LD_DELEGATE_METHOD_PTR: {
CHECK_STACK (1);
--sp;
dreg = alloc_preg (cfg);
EMIT_NEW_LOAD_MEMBASE (cfg, ins, OP_LOAD_MEMBASE, dreg, sp [0]->dreg, MONO_STRUCT_OFFSET (MonoDelegate, method_ptr));
*sp++ = ins;
break;
}
case MONO_CEE_MONO_CALLI_EXTRA_ARG: {
MonoInst *addr;
MonoMethodSignature *fsig;
MonoInst *arg;
/*
* This is the same as CEE_CALLI, but passes an additional argument
* to the called method in llvmonly mode.
* This is only used by delegate invoke wrappers to call the
* actual delegate method.
*/
g_assert (method->wrapper_type == MONO_WRAPPER_DELEGATE_INVOKE);
ins = NULL;
cmethod = NULL;
CHECK_STACK (1);
--sp;
addr = *sp;
fsig = mini_get_signature (method, token, generic_context, cfg->error);
CHECK_CFG_ERROR;
if (cfg->llvm_only)
cfg->signatures = g_slist_prepend_mempool (cfg->mempool, cfg->signatures, fsig);
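/* The extra argument is passed in the last stack slot, after the normal arguments */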
n = fsig->param_count + fsig->hasthis + 1;
CHECK_STACK (n);
sp -= n;
arg = sp [n - 1];
if (cfg->llvm_only) {
/*
* The lowest bit of 'arg' determines whether the callee uses the gsharedvt
* cconv. This is set by mono_init_delegate ().
*/
if (cfg->gsharedvt && mini_is_gsharedvt_variable_signature (fsig)) {
MonoInst *callee = addr;
MonoInst *call, *localloc_ins;
MonoBasicBlock *is_gsharedvt_bb, *end_bb;
int low_bit_reg = alloc_preg (cfg);
NEW_BBLOCK (cfg, is_gsharedvt_bb);
NEW_BBLOCK (cfg, end_bb);
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_PAND_IMM, low_bit_reg, arg->dreg, 1);
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, low_bit_reg, 0);
MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_PBNE_UN, is_gsharedvt_bb);
/* Normal case: callee uses a normal cconv, have to add an out wrapper */
addr = emit_get_rgctx_sig (cfg, context_used,
fsig, MONO_RGCTX_INFO_SIG_GSHAREDVT_OUT_TRAMPOLINE_CALLI);
/*
* ADDR points to a gsharedvt-out wrapper, have to pass <callee, arg> as an extra arg.
*/
MONO_INST_NEW (cfg, ins, OP_LOCALLOC_IMM);
ins->dreg = alloc_preg (cfg);
ins->inst_imm = 2 * TARGET_SIZEOF_VOID_P;
MONO_ADD_INS (cfg->cbb, ins);
localloc_ins = ins;
cfg->flags |= MONO_CFG_HAS_ALLOCA;
MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STORE_MEMBASE_REG, localloc_ins->dreg, 0, callee->dreg);
MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STORE_MEMBASE_REG, localloc_ins->dreg, TARGET_SIZEOF_VOID_P, arg->dreg);
call = mini_emit_extra_arg_calli (cfg, fsig, sp, localloc_ins->dreg, addr);
MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_BR, end_bb);
/* Gsharedvt case: callee uses a gsharedvt cconv, no conversion is needed */
MONO_START_BB (cfg, is_gsharedvt_bb);
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_PXOR_IMM, arg->dreg, arg->dreg, 1);
ins = mini_emit_extra_arg_calli (cfg, fsig, sp, arg->dreg, callee);
ins->dreg = call->dreg;
MONO_START_BB (cfg, end_bb);
} else {
/* Caller uses a normal calling conv */
MonoInst *callee = addr;
MonoInst *call, *localloc_ins;
MonoBasicBlock *is_gsharedvt_bb, *end_bb;
int low_bit_reg = alloc_preg (cfg);
NEW_BBLOCK (cfg, is_gsharedvt_bb);
NEW_BBLOCK (cfg, end_bb);
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_PAND_IMM, low_bit_reg, arg->dreg, 1);
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, low_bit_reg, 0);
MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_PBNE_UN, is_gsharedvt_bb);
/* Normal case: callee uses a normal cconv, no conversion is needed */
call = mini_emit_extra_arg_calli (cfg, fsig, sp, arg->dreg, callee);
MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_BR, end_bb);
/* Gsharedvt case: callee uses a gsharedvt cconv, have to add an in wrapper */
MONO_START_BB (cfg, is_gsharedvt_bb);
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_PXOR_IMM, arg->dreg, arg->dreg, 1);
NEW_AOTCONST (cfg, addr, MONO_PATCH_INFO_GSHAREDVT_IN_WRAPPER, fsig);
MONO_ADD_INS (cfg->cbb, addr);
/*
* ADDR points to a gsharedvt-in wrapper, have to pass <callee, arg> as an extra arg.
*/
MONO_INST_NEW (cfg, ins, OP_LOCALLOC_IMM);
ins->dreg = alloc_preg (cfg);
ins->inst_imm = 2 * TARGET_SIZEOF_VOID_P;
MONO_ADD_INS (cfg->cbb, ins);
localloc_ins = ins;
cfg->flags |= MONO_CFG_HAS_ALLOCA;
MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STORE_MEMBASE_REG, localloc_ins->dreg, 0, callee->dreg);
MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STORE_MEMBASE_REG, localloc_ins->dreg, TARGET_SIZEOF_VOID_P, arg->dreg);
ins = mini_emit_extra_arg_calli (cfg, fsig, sp, localloc_ins->dreg, addr);
ins->dreg = call->dreg;
MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_BR, end_bb);
MONO_START_BB (cfg, end_bb);
}
} else {
/* Same as CEE_CALLI */
if (cfg->gsharedvt && mini_is_gsharedvt_signature (fsig)) {
/*
* We pass the address to the gsharedvt trampoline in the rgctx reg
*/
MonoInst *callee = addr;
addr = emit_get_rgctx_sig (cfg, context_used,
fsig, MONO_RGCTX_INFO_SIG_GSHAREDVT_OUT_TRAMPOLINE_CALLI);
ins = (MonoInst*)mini_emit_calli (cfg, fsig, sp, addr, NULL, callee);
} else {
ins = (MonoInst*)mini_emit_calli (cfg, fsig, sp, addr, NULL, NULL);
}
}
if (!MONO_TYPE_IS_VOID (fsig->ret))
*sp++ = mono_emit_widen_call_res (cfg, ins, fsig);
CHECK_CFG_EXCEPTION;
ins_flag = 0;
constrained_class = NULL;
break;
}
case MONO_CEE_MONO_LDDOMAIN: {
MonoDomain *domain = mono_get_root_domain ();
g_assert (method->wrapper_type != MONO_WRAPPER_NONE);
EMIT_NEW_PCONST (cfg, ins, cfg->compile_aot ? NULL : domain);
*sp++ = ins;
break;
}
case MONO_CEE_MONO_SAVE_LAST_ERROR:
g_assert (method->wrapper_type != MONO_WRAPPER_NONE);
// Just an IL prefix; it sets this flag, which is picked up by call instructions.
save_last_error = TRUE;
break;
case MONO_CEE_MONO_GET_RGCTX_ARG:
g_assert (method->wrapper_type != MONO_WRAPPER_NONE);
mono_create_rgctx_var (cfg);
MONO_INST_NEW (cfg, ins, OP_MOVE);
ins->dreg = alloc_dreg (cfg, STACK_PTR);
ins->sreg1 = cfg->rgctx_var->dreg;
ins->type = STACK_PTR;
MONO_ADD_INS (cfg->cbb, ins);
*sp++ = ins;
break;
case MONO_CEE_MONO_GET_SP: {
/* Used by COOP only, so this is good enough */
MonoInst *var = mono_compile_create_var (cfg, mono_get_int_type (), OP_LOCAL);
EMIT_NEW_VARLOADA (cfg, ins, var, NULL);
*sp++ = ins;
break;
}
case MONO_CEE_MONO_REMAP_OVF_EXC:
/* Remap the exception thrown by the next _OVF opcode */
g_assert (method->wrapper_type != MONO_WRAPPER_NONE);
ovf_exc = (const char*)mono_method_get_wrapper_data (method, token);
break;
case MONO_CEE_ARGLIST: {
/* somewhat similar to LDTOKEN */
MonoInst *addr, *vtvar;
vtvar = mono_compile_create_var (cfg, m_class_get_byval_arg (mono_defaults.argumenthandle_class), OP_LOCAL);
EMIT_NEW_TEMPLOADA (cfg, addr, vtvar->inst_c0);
EMIT_NEW_UNALU (cfg, ins, OP_ARGLIST, -1, addr->dreg);
EMIT_NEW_TEMPLOAD (cfg, ins, vtvar->inst_c0);
ins->type = STACK_VTYPE;
ins->klass = mono_defaults.argumenthandle_class;
*sp++ = ins;
break;
}
case MONO_CEE_CEQ:
case MONO_CEE_CGT:
case MONO_CEE_CGT_UN:
case MONO_CEE_CLT:
case MONO_CEE_CLT_UN: {
MonoInst *cmp, *arg1, *arg2;
sp -= 2;
arg1 = sp [0];
arg2 = sp [1];
/*
* The following transforms:
* CEE_CEQ into OP_CEQ
* CEE_CGT into OP_CGT
* CEE_CGT_UN into OP_CGT_UN
* CEE_CLT into OP_CLT
* CEE_CLT_UN into OP_CLT_UN
*/
MONO_INST_NEW (cfg, cmp, (OP_CEQ - CEE_CEQ) + ip [1]);
MONO_INST_NEW (cfg, ins, cmp->opcode);
cmp->sreg1 = arg1->dreg;
cmp->sreg2 = arg2->dreg;
type_from_op (cfg, cmp, arg1, arg2);
CHECK_TYPE (cmp);
add_widen_op (cfg, cmp, &arg1, &arg2);
if ((arg1->type == STACK_I8) || ((TARGET_SIZEOF_VOID_P == 8) && ((arg1->type == STACK_PTR) || (arg1->type == STACK_OBJ) || (arg1->type == STACK_MP))))
cmp->opcode = OP_LCOMPARE;
else if (arg1->type == STACK_R4)
cmp->opcode = OP_RCOMPARE;
else if (arg1->type == STACK_R8)
cmp->opcode = OP_FCOMPARE;
else
cmp->opcode = OP_ICOMPARE;
MONO_ADD_INS (cfg->cbb, cmp);
ins->type = STACK_I4;
ins->dreg = alloc_dreg (cfg, (MonoStackType)ins->type);
type_from_op (cfg, ins, arg1, arg2);
if (cmp->opcode == OP_FCOMPARE || cmp->opcode == OP_RCOMPARE) {
/*
* The backends expect the fceq opcodes to do the
* comparison too.
*/
ins->sreg1 = cmp->sreg1;
ins->sreg2 = cmp->sreg2;
NULLIFY_INS (cmp);
}
MONO_ADD_INS (cfg->cbb, ins);
*sp++ = ins;
break;
}
case MONO_CEE_LDFTN: {
MonoInst *argconst;
MonoMethod *cil_method;
cmethod = mini_get_method (cfg, method, n, NULL, generic_context);
CHECK_CFG_ERROR;
if (constrained_class) {
if (m_method_is_static (cmethod) && mini_class_check_context_used (cfg, constrained_class))
// FIXME:
GENERIC_SHARING_FAILURE (CEE_LDFTN);
cmethod = get_constrained_method (cfg, image, n, cmethod, constrained_class, generic_context);
constrained_class = NULL;
CHECK_CFG_ERROR;
}
mono_class_init_internal (cmethod->klass);
mono_save_token_info (cfg, image, n, cmethod);
context_used = mini_method_check_context_used (cfg, cmethod);
cil_method = cmethod;
if (!dont_verify && !cfg->skip_visibility && !mono_method_can_access_method (method, cmethod))
emit_method_access_failure (cfg, method, cil_method);
const gboolean has_unmanaged_callers_only =
cmethod->wrapper_type == MONO_WRAPPER_NONE &&
mono_method_has_unmanaged_callers_only_attribute (cmethod);
/*
* Optimize the common case of ldftn+delegate creation
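* (i.e. the IL sequence compilers emit for delegate creation, sketched with
* an illustrative delegate name:
*   ldftn <method>
*   newobj instance void SomeDelegate::.ctor (object, native int)
* which is why the code below peeks at next_ip for CEE_NEWOBJ)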
*/
if ((sp > stack_start) && (next_ip + 4 < end) && ip_in_bb (cfg, cfg->cbb, next_ip) && (next_ip [0] == CEE_NEWOBJ)) {
MonoMethod *ctor_method = mini_get_method (cfg, method, read32 (next_ip + 1), NULL, generic_context);
if (ctor_method && (m_class_get_parent (ctor_method->klass) == mono_defaults.multicastdelegate_class)) {
MonoInst *target_ins, *handle_ins;
MonoMethod *invoke;
int invoke_context_used;
if (G_UNLIKELY (has_unmanaged_callers_only)) {
mono_error_set_not_supported (cfg->error, "Cannot create delegate from method with UnmanagedCallersOnlyAttribute");
CHECK_CFG_ERROR;
}
invoke = mono_get_delegate_invoke_internal (ctor_method->klass);
if (!invoke || !mono_method_signature_internal (invoke))
LOAD_ERROR;
invoke_context_used = mini_method_check_context_used (cfg, invoke);
target_ins = sp [-1];
if (!(cmethod->flags & METHOD_ATTRIBUTE_STATIC)) {
/* BAD IMPL: we must not add a null check for virtual invoke delegates. */
if (mono_method_signature_internal (invoke)->param_count == mono_method_signature_internal (cmethod)->param_count) {
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, target_ins->dreg, 0);
MONO_EMIT_NEW_COND_EXC (cfg, EQ, "ArgumentException");
}
}
if ((invoke_context_used == 0 || !cfg->gsharedvt) || cfg->llvm_only) {
if (cfg->verbose_level > 3)
g_print ("converting (in B%d: stack: %d) %s", cfg->cbb->block_num, (int)(sp - stack_start), mono_disasm_code_one (NULL, method, ip + 6, NULL));
if ((handle_ins = handle_delegate_ctor (cfg, ctor_method->klass, target_ins, cmethod, context_used, invoke_context_used, FALSE))) {
sp --;
*sp = handle_ins;
CHECK_CFG_EXCEPTION;
sp ++;
next_ip += 5;
il_op = MONO_CEE_NEWOBJ;
break;
} else {
CHECK_CFG_ERROR;
}
}
}
}
/* UnmanagedCallersOnlyAttribute means ldftn should return a method callable from native */
if (G_UNLIKELY (has_unmanaged_callers_only)) {
if (G_UNLIKELY (cmethod->flags & METHOD_ATTRIBUTE_PINVOKE_IMPL)) {
// Follow CoreCLR, disallow [UnmanagedCallersOnly] and [DllImport] to be used
// together
emit_not_supported_failure (cfg);
EMIT_NEW_PCONST (cfg, ins, NULL);
*sp++ = ins;
inline_costs += CALL_COST * MIN(10, num_calls++);
break;
}
MonoClass *delegate_klass = NULL;
MonoGCHandle target_handle = 0;
ERROR_DECL (wrapper_error);
MonoMethod *wrapped_cmethod;
wrapped_cmethod = mono_marshal_get_managed_wrapper (cmethod, delegate_klass, target_handle, wrapper_error);
if (!is_ok (wrapper_error)) {
/* if we couldn't create a wrapper because cmethod isn't supposed to have an
UnmanagedCallersOnly attribute, follow CoreCLR behavior and throw when the
method with the ldftn is executing, not when it is being compiled. */
emit_invalid_program_with_msg (cfg, wrapper_error, method, cmethod);
mono_error_cleanup (wrapper_error);
EMIT_NEW_PCONST (cfg, ins, NULL);
*sp++ = ins;
inline_costs += CALL_COST * MIN(10, num_calls++);
break;
} else {
cmethod = wrapped_cmethod;
}
}
argconst = emit_get_rgctx_method (cfg, context_used, cmethod, MONO_RGCTX_INFO_METHOD);
ins = mono_emit_jit_icall (cfg, mono_ldftn, &argconst);
*sp++ = ins;
inline_costs += CALL_COST * MIN(10, num_calls++);
break;
}
case MONO_CEE_LDVIRTFTN: {
MonoInst *args [2];
cmethod = mini_get_method (cfg, method, n, NULL, generic_context);
CHECK_CFG_ERROR;
mono_class_init_internal (cmethod->klass);
context_used = mini_method_check_context_used (cfg, cmethod);
/*
* Optimize the common case of ldvirtftn+delegate creation
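* (same as the ldftn case above, but preceded by a dup, i.e.
*   dup
*   ldvirtftn <method>
*   newobj instance void SomeDelegate::.ctor (object, native int)
* hence the previous_il_op == MONO_CEE_DUP check below; the delegate name
* is illustrative)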
*/
if (previous_il_op == MONO_CEE_DUP && (sp > stack_start) && (next_ip + 4 < end) && ip_in_bb (cfg, cfg->cbb, next_ip) && (next_ip [0] == CEE_NEWOBJ)) {
MonoMethod *ctor_method = mini_get_method (cfg, method, read32 (next_ip + 1), NULL, generic_context);
if (ctor_method && (m_class_get_parent (ctor_method->klass) == mono_defaults.multicastdelegate_class)) {
MonoInst *target_ins, *handle_ins;
MonoMethod *invoke;
int invoke_context_used;
const gboolean is_virtual = (cmethod->flags & METHOD_ATTRIBUTE_VIRTUAL) != 0;
invoke = mono_get_delegate_invoke_internal (ctor_method->klass);
if (!invoke || !mono_method_signature_internal (invoke))
LOAD_ERROR;
invoke_context_used = mini_method_check_context_used (cfg, invoke);
target_ins = sp [-1];
if (invoke_context_used == 0 || !cfg->gsharedvt || cfg->llvm_only) {
if (cfg->verbose_level > 3)
g_print ("converting (in B%d: stack: %d) %s", cfg->cbb->block_num, (int)(sp - stack_start), mono_disasm_code_one (NULL, method, ip + 6, NULL));
if ((handle_ins = handle_delegate_ctor (cfg, ctor_method->klass, target_ins, cmethod, context_used, invoke_context_used, is_virtual))) {
sp -= 2;
*sp = handle_ins;
CHECK_CFG_EXCEPTION;
next_ip += 5;
previous_il_op = MONO_CEE_NEWOBJ;
sp ++;
break;
} else {
CHECK_CFG_ERROR;
}
}
}
}
--sp;
args [0] = *sp;
args [1] = emit_get_rgctx_method (cfg, context_used,
cmethod, MONO_RGCTX_INFO_METHOD);
if (context_used)
*sp++ = mono_emit_jit_icall (cfg, mono_ldvirtfn_gshared, args);
else
*sp++ = mono_emit_jit_icall (cfg, mono_ldvirtfn, args);
inline_costs += CALL_COST * MIN(10, num_calls++);
break;
}
case MONO_CEE_LOCALLOC: {
MonoBasicBlock *non_zero_bb, *end_bb;
int alloc_ptr = alloc_preg (cfg);
--sp;
if (sp != stack_start)
UNVERIFIED;
if (cfg->method != method)
/*
* Inlining this into a loop in a parent could lead to
* stack overflows, which would be different behavior from the
* non-inlined case, so disable inlining in this case.
*/
INLINE_FAILURE("localloc");
NEW_BBLOCK (cfg, non_zero_bb);
NEW_BBLOCK (cfg, end_bb);
/* if size != zero */
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, sp [0]->dreg, 0);
MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_PBNE_UN, non_zero_bb);
//size is zero, so result is NULL
MONO_EMIT_NEW_PCONST (cfg, alloc_ptr, NULL);
MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_BR, end_bb);
MONO_START_BB (cfg, non_zero_bb);
MONO_INST_NEW (cfg, ins, OP_LOCALLOC);
ins->dreg = alloc_ptr;
ins->sreg1 = sp [0]->dreg;
ins->type = STACK_PTR;
MONO_ADD_INS (cfg->cbb, ins);
cfg->flags |= MONO_CFG_HAS_ALLOCA;
if (header->init_locals)
ins->flags |= MONO_INST_INIT;
MONO_START_BB (cfg, end_bb);
EMIT_NEW_UNALU (cfg, ins, OP_MOVE, alloc_preg (cfg), alloc_ptr);
ins->type = STACK_PTR;
*sp++ = ins;
break;
}
case MONO_CEE_ENDFILTER: {
MonoExceptionClause *clause, *nearest;
int cc;
--sp;
if ((sp != stack_start) || (sp [0]->type != STACK_I4))
UNVERIFIED;
MONO_INST_NEW (cfg, ins, OP_ENDFILTER);
ins->sreg1 = (*sp)->dreg;
MONO_ADD_INS (cfg->cbb, ins);
start_new_bblock = 1;
nearest = NULL;
for (cc = 0; cc < header->num_clauses; ++cc) {
clause = &header->clauses [cc];
if ((clause->flags & MONO_EXCEPTION_CLAUSE_FILTER) &&
((next_ip - header->code) > clause->data.filter_offset && (next_ip - header->code) <= clause->handler_offset) &&
(!nearest || (clause->data.filter_offset < nearest->data.filter_offset)))
nearest = clause;
}
g_assert (nearest);
if ((next_ip - header->code) != nearest->handler_offset)
UNVERIFIED;
break;
}
case MONO_CEE_UNALIGNED_:
ins_flag |= MONO_INST_UNALIGNED;
/* FIXME: record alignment? we can assume 1 for now */
break;
case MONO_CEE_VOLATILE_:
ins_flag |= MONO_INST_VOLATILE;
break;
case MONO_CEE_TAIL_:
ins_flag |= MONO_INST_TAILCALL;
cfg->flags |= MONO_CFG_HAS_TAILCALL;
/* Can't inline tailcalls at this time */
inline_costs += 100000;
break;
case MONO_CEE_INITOBJ:
--sp;
klass = mini_get_class (method, token, generic_context);
CHECK_TYPELOAD (klass);
if (mini_class_is_reference (klass))
MONO_EMIT_NEW_STORE_MEMBASE_IMM (cfg, OP_STORE_MEMBASE_IMM, sp [0]->dreg, 0, 0);
else
mini_emit_initobj (cfg, *sp, NULL, klass);
inline_costs += 1;
break;
case MONO_CEE_CONSTRAINED_:
constrained_class = mini_get_class (method, token, generic_context);
CHECK_TYPELOAD (constrained_class);
ins_has_side_effect = FALSE;
break;
case MONO_CEE_CPBLK:
sp -= 3;
mini_emit_memory_copy_bytes (cfg, sp [0], sp [1], sp [2], ins_flag);
ins_flag = 0;
inline_costs += 1;
break;
case MONO_CEE_INITBLK:
sp -= 3;
mini_emit_memory_init_bytes (cfg, sp [0], sp [1], sp [2], ins_flag);
ins_flag = 0;
inline_costs += 1;
break;
case MONO_CEE_NO_:
if (ip [2] & CEE_NO_TYPECHECK)
ins_flag |= MONO_INST_NOTYPECHECK;
if (ip [2] & CEE_NO_RANGECHECK)
ins_flag |= MONO_INST_NORANGECHECK;
if (ip [2] & CEE_NO_NULLCHECK)
ins_flag |= MONO_INST_NONULLCHECK;
break;
case MONO_CEE_RETHROW: {
MonoInst *load;
int handler_offset = -1;
for (i = 0; i < header->num_clauses; ++i) {
MonoExceptionClause *clause = &header->clauses [i];
if (MONO_OFFSET_IN_HANDLER (clause, ip - header->code) && !(clause->flags & MONO_EXCEPTION_CLAUSE_FINALLY)) {
handler_offset = clause->handler_offset;
break;
}
}
cfg->cbb->flags |= BB_EXCEPTION_UNSAFE;
if (handler_offset == -1)
UNVERIFIED;
EMIT_NEW_TEMPLOAD (cfg, load, mono_find_exvar_for_offset (cfg, handler_offset)->inst_c0);
MONO_INST_NEW (cfg, ins, OP_RETHROW);
ins->sreg1 = load->dreg;
MONO_ADD_INS (cfg->cbb, ins);
MONO_INST_NEW (cfg, ins, OP_NOT_REACHED);
MONO_ADD_INS (cfg->cbb, ins);
sp = stack_start;
link_bblock (cfg, cfg->cbb, end_bblock);
start_new_bblock = 1;
break;
}
case MONO_CEE_MONO_RETHROW: {
if (sp [-1]->type != STACK_OBJ)
UNVERIFIED;
MONO_INST_NEW (cfg, ins, OP_RETHROW);
--sp;
ins->sreg1 = sp [0]->dreg;
cfg->cbb->out_of_line = TRUE;
MONO_ADD_INS (cfg->cbb, ins);
MONO_INST_NEW (cfg, ins, OP_NOT_REACHED);
MONO_ADD_INS (cfg->cbb, ins);
sp = stack_start;
link_bblock (cfg, cfg->cbb, end_bblock);
start_new_bblock = 1;
/* This can complicate code generation for llvm since the return value might not be defined */
if (COMPILE_LLVM (cfg))
INLINE_FAILURE ("mono_rethrow");
break;
}
case MONO_CEE_SIZEOF: {
guint32 val;
int ialign;
if (mono_metadata_token_table (token) == MONO_TABLE_TYPESPEC && !image_is_dynamic (m_class_get_image (method->klass)) && !generic_context) {
MonoType *type = mono_type_create_from_typespec_checked (image, token, cfg->error);
CHECK_CFG_ERROR;
val = mono_type_size (type, &ialign);
EMIT_NEW_ICONST (cfg, ins, val);
} else {
MonoClass *klass = mini_get_class (method, token, generic_context);
CHECK_TYPELOAD (klass);
if (mini_is_gsharedvt_klass (klass)) {
ins = mini_emit_get_gsharedvt_info_klass (cfg, klass, MONO_RGCTX_INFO_CLASS_SIZEOF);
ins->type = STACK_I4;
} else {
val = mono_type_size (m_class_get_byval_arg (klass), &ialign);
EMIT_NEW_ICONST (cfg, ins, val);
}
}
*sp++ = ins;
break;
}
case MONO_CEE_REFANYTYPE: {
MonoInst *src_var, *src;
GSHAREDVT_FAILURE (il_op);
--sp;
// FIXME:
src_var = get_vreg_to_inst (cfg, sp [0]->dreg);
if (!src_var)
src_var = mono_compile_create_var_for_vreg (cfg, m_class_get_byval_arg (mono_defaults.typed_reference_class), OP_LOCAL, sp [0]->dreg);
EMIT_NEW_VARLOADA (cfg, src, src_var, src_var->inst_vtype);
EMIT_NEW_LOAD_MEMBASE_TYPE (cfg, ins, m_class_get_byval_arg (mono_defaults.typehandle_class), src->dreg, MONO_STRUCT_OFFSET (MonoTypedRef, type));
*sp++ = ins;
break;
}
case MONO_CEE_READONLY_:
readonly = TRUE;
break;
case MONO_CEE_UNUSED56:
case MONO_CEE_UNUSED57:
case MONO_CEE_UNUSED70:
case MONO_CEE_UNUSED:
case MONO_CEE_UNUSED99:
case MONO_CEE_UNUSED58:
case MONO_CEE_UNUSED1:
UNVERIFIED;
default:
g_warning ("opcode 0x%02x not handled", il_op);
UNVERIFIED;
}
if (ins_has_side_effect)
cfg->cbb->flags |= BB_HAS_SIDE_EFFECTS;
}
if (start_new_bblock != 1)
UNVERIFIED;
cfg->cbb->cil_length = ip - cfg->cbb->cil_code;
if (cfg->cbb->next_bb) {
/* This could already be set because of inlining, #693905 */
MonoBasicBlock *bb = cfg->cbb;
while (bb->next_bb)
bb = bb->next_bb;
bb->next_bb = end_bblock;
} else {
cfg->cbb->next_bb = end_bblock;
}
#if defined(TARGET_POWERPC) || defined(TARGET_X86)
if (cfg->compile_aot)
/* FIXME: The plt slots require a GOT var even if the method doesn't use it */
mono_get_got_var (cfg);
#endif
#ifdef TARGET_WASM
if (cfg->lmf_var && !cfg->deopt) {
// mini_llvmonly_pop_lmf () might be called before emit_push_lmf (), so initialize the LMF
cfg->cbb = init_localsbb;
EMIT_NEW_VARLOADA (cfg, ins, cfg->lmf_var, NULL);
int lmf_reg = ins->dreg;
EMIT_NEW_STORE_MEMBASE (cfg, ins, OP_STORE_MEMBASE_IMM, lmf_reg, MONO_STRUCT_OFFSET (MonoLMF, previous_lmf), 0);
}
#endif
if (cfg->method == method && cfg->got_var)
mono_emit_load_got_addr (cfg);
if (init_localsbb) {
cfg->cbb = init_localsbb;
cfg->ip = NULL;
for (i = 0; i < header->num_locals; ++i) {
/*
* Vtype initialization might need to be done after CEE_JIT_ATTACH, since it can make calls to memset (),
* which need the trampoline code to work.
*/
if (MONO_TYPE_ISSTRUCT (header->locals [i]))
cfg->cbb = init_localsbb2;
else
cfg->cbb = init_localsbb;
emit_init_local (cfg, i, header->locals [i], init_locals);
}
}
if (cfg->init_ref_vars && cfg->method == method) {
/* Emit initialization for ref vars */
// FIXME: Avoid duplicate initialization for IL locals.
for (i = 0; i < cfg->num_varinfo; ++i) {
MonoInst *ins = cfg->varinfo [i];
if (ins->opcode == OP_LOCAL && ins->type == STACK_OBJ)
MONO_EMIT_NEW_PCONST (cfg, ins->dreg, NULL);
}
}
if (cfg->lmf_var && cfg->method == method && !cfg->llvm_only) {
cfg->cbb = init_localsbb;
emit_push_lmf (cfg);
}
/* emit profiler enter code after a jit attach if there is one */
cfg->cbb = init_localsbb2;
mini_profiler_emit_enter (cfg);
cfg->cbb = init_localsbb;
if (seq_points) {
MonoBasicBlock *bb;
/*
* Make seq points at backward branch targets interruptible.
*/
for (bb = cfg->bb_entry; bb; bb = bb->next_bb)
if (bb->code && bb->in_count > 1 && bb->code->opcode == OP_SEQ_POINT)
bb->code->flags |= MONO_INST_SINGLE_STEP_LOC;
}
/* Add a sequence point for method entry/exit events */
if (seq_points && cfg->gen_sdb_seq_points) {
NEW_SEQ_POINT (cfg, ins, METHOD_ENTRY_IL_OFFSET, FALSE);
MONO_ADD_INS (init_localsbb, ins);
NEW_SEQ_POINT (cfg, ins, METHOD_EXIT_IL_OFFSET, FALSE);
MONO_ADD_INS (cfg->bb_exit, ins);
}
/*
* Add seq points for IL offsets which have line number info but for which no seq point was generated during JITting because
* the code they refer to was dead (#11880).
*/
if (sym_seq_points) {
for (i = 0; i < header->code_size; ++i) {
if (mono_bitset_test_fast (seq_point_locs, i) && !mono_bitset_test_fast (seq_point_set_locs, i)) {
MonoInst *ins;
NEW_SEQ_POINT (cfg, ins, i, FALSE);
mono_add_seq_point (cfg, NULL, ins, SEQ_POINT_NATIVE_OFFSET_DEAD_CODE);
}
}
}
cfg->ip = NULL;
if (cfg->method == method) {
compute_bb_regions (cfg);
} else {
MonoBasicBlock *bb;
/* get_most_deep_clause () in mini-llvm.c depends on this for inlined bblocks */
for (bb = start_bblock; bb != end_bblock; bb = bb->next_bb) {
bb->real_offset = inline_offset;
}
}
if (inline_costs < 0) {
char *mname;
/* Method is too large */
mname = mono_method_full_name (method, TRUE);
mono_cfg_set_exception_invalid_program (cfg, g_strdup_printf ("Method %s is too complex.", mname));
g_free (mname);
}
if ((cfg->verbose_level > 2) && (cfg->method == method))
mono_print_code (cfg, "AFTER METHOD-TO-IR");
goto cleanup;
mono_error_exit:
if (cfg->verbose_level > 3)
g_print ("exiting due to error");
g_assert (!is_ok (cfg->error));
goto cleanup;
exception_exit:
if (cfg->verbose_level > 3)
g_print ("exiting due to exception");
g_assert (cfg->exception_type != MONO_EXCEPTION_NONE);
goto cleanup;
unverified:
if (cfg->verbose_level > 3)
g_print ("exiting due to invalid il");
set_exception_type_from_invalid_il (cfg, method, ip);
goto cleanup;
cleanup:
g_slist_free (class_inits);
mono_basic_block_free (original_bb);
cfg->dont_inline = g_list_remove (cfg->dont_inline, method);
if (cfg->exception_type)
return -1;
else
return inline_costs;
}
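/*
 * store_membase_reg_to_store_membase_imm:
 *
 * Map a _MEMBASE_REG store opcode to its _MEMBASE_IMM variant; asserts for
 * opcodes without one.
 */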
static int
store_membase_reg_to_store_membase_imm (int opcode)
{
switch (opcode) {
case OP_STORE_MEMBASE_REG:
return OP_STORE_MEMBASE_IMM;
case OP_STOREI1_MEMBASE_REG:
return OP_STOREI1_MEMBASE_IMM;
case OP_STOREI2_MEMBASE_REG:
return OP_STOREI2_MEMBASE_IMM;
case OP_STOREI4_MEMBASE_REG:
return OP_STOREI4_MEMBASE_IMM;
case OP_STOREI8_MEMBASE_REG:
return OP_STOREI8_MEMBASE_IMM;
default:
g_assert_not_reached ();
}
return -1;
}
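/*
 * mono_op_to_op_imm:
 *
 * Return the reg-imm variant of OPCODE, e.g. OP_IADD -> OP_IADD_IMM, or -1
 * if there is none, so callers can fold a constant operand into the
 * instruction itself.
 */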
int
mono_op_to_op_imm (int opcode)
{
switch (opcode) {
case OP_IADD:
return OP_IADD_IMM;
case OP_ISUB:
return OP_ISUB_IMM;
case OP_IDIV:
return OP_IDIV_IMM;
case OP_IDIV_UN:
return OP_IDIV_UN_IMM;
case OP_IREM:
return OP_IREM_IMM;
case OP_IREM_UN:
return OP_IREM_UN_IMM;
case OP_IMUL:
return OP_IMUL_IMM;
case OP_IAND:
return OP_IAND_IMM;
case OP_IOR:
return OP_IOR_IMM;
case OP_IXOR:
return OP_IXOR_IMM;
case OP_ISHL:
return OP_ISHL_IMM;
case OP_ISHR:
return OP_ISHR_IMM;
case OP_ISHR_UN:
return OP_ISHR_UN_IMM;
case OP_LADD:
return OP_LADD_IMM;
case OP_LSUB:
return OP_LSUB_IMM;
case OP_LAND:
return OP_LAND_IMM;
case OP_LOR:
return OP_LOR_IMM;
case OP_LXOR:
return OP_LXOR_IMM;
case OP_LSHL:
return OP_LSHL_IMM;
case OP_LSHR:
return OP_LSHR_IMM;
case OP_LSHR_UN:
return OP_LSHR_UN_IMM;
#if SIZEOF_REGISTER == 8
case OP_LMUL:
return OP_LMUL_IMM;
case OP_LREM:
return OP_LREM_IMM;
#endif
case OP_COMPARE:
return OP_COMPARE_IMM;
case OP_ICOMPARE:
return OP_ICOMPARE_IMM;
case OP_LCOMPARE:
return OP_LCOMPARE_IMM;
case OP_STORE_MEMBASE_REG:
return OP_STORE_MEMBASE_IMM;
case OP_STOREI1_MEMBASE_REG:
return OP_STOREI1_MEMBASE_IMM;
case OP_STOREI2_MEMBASE_REG:
return OP_STOREI2_MEMBASE_IMM;
case OP_STOREI4_MEMBASE_REG:
return OP_STOREI4_MEMBASE_IMM;
#if defined(TARGET_X86) || defined (TARGET_AMD64)
case OP_X86_PUSH:
return OP_X86_PUSH_IMM;
case OP_X86_COMPARE_MEMBASE_REG:
return OP_X86_COMPARE_MEMBASE_IMM;
#endif
#if defined(TARGET_AMD64)
case OP_AMD64_ICOMPARE_MEMBASE_REG:
return OP_AMD64_ICOMPARE_MEMBASE_IMM;
#endif
case OP_VOIDCALL_REG:
return OP_VOIDCALL;
case OP_CALL_REG:
return OP_CALL;
case OP_LCALL_REG:
return OP_LCALL;
case OP_FCALL_REG:
return OP_FCALL;
case OP_LOCALLOC:
return OP_LOCALLOC_IMM;
}
return -1;
}
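/*
 * mono_load_membase_to_load_mem:
 *
 * Map a _MEMBASE load to the corresponding load-from-absolute-address
 * opcode, or return -1 if the target has no such opcode.
 */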
int
mono_load_membase_to_load_mem (int opcode)
{
// FIXME: Add a MONO_ARCH_HAVE_LOAD_MEM macro
#if defined(TARGET_X86) || defined(TARGET_AMD64)
switch (opcode) {
case OP_LOAD_MEMBASE:
return OP_LOAD_MEM;
case OP_LOADU1_MEMBASE:
return OP_LOADU1_MEM;
case OP_LOADU2_MEMBASE:
return OP_LOADU2_MEM;
case OP_LOADI4_MEMBASE:
return OP_LOADI4_MEM;
case OP_LOADU4_MEMBASE:
return OP_LOADU4_MEM;
#if SIZEOF_REGISTER == 8
case OP_LOADI8_MEMBASE:
return OP_LOADI8_MEM;
#endif
}
#endif
return -1;
}
static int
op_to_op_dest_membase (int store_opcode, int opcode)
{
#if defined(TARGET_X86)
if (!((store_opcode == OP_STORE_MEMBASE_REG) || (store_opcode == OP_STOREI4_MEMBASE_REG)))
return -1;
switch (opcode) {
case OP_IADD:
return OP_X86_ADD_MEMBASE_REG;
case OP_ISUB:
return OP_X86_SUB_MEMBASE_REG;
case OP_IAND:
return OP_X86_AND_MEMBASE_REG;
case OP_IOR:
return OP_X86_OR_MEMBASE_REG;
case OP_IXOR:
return OP_X86_XOR_MEMBASE_REG;
case OP_ADD_IMM:
case OP_IADD_IMM:
return OP_X86_ADD_MEMBASE_IMM;
case OP_SUB_IMM:
case OP_ISUB_IMM:
return OP_X86_SUB_MEMBASE_IMM;
case OP_AND_IMM:
case OP_IAND_IMM:
return OP_X86_AND_MEMBASE_IMM;
case OP_OR_IMM:
case OP_IOR_IMM:
return OP_X86_OR_MEMBASE_IMM;
case OP_XOR_IMM:
case OP_IXOR_IMM:
return OP_X86_XOR_MEMBASE_IMM;
case OP_MOVE:
return OP_NOP;
}
#endif
#if defined(TARGET_AMD64)
if (!((store_opcode == OP_STORE_MEMBASE_REG) || (store_opcode == OP_STOREI4_MEMBASE_REG) || (store_opcode == OP_STOREI8_MEMBASE_REG)))
return -1;
switch (opcode) {
case OP_IADD:
return OP_X86_ADD_MEMBASE_REG;
case OP_ISUB:
return OP_X86_SUB_MEMBASE_REG;
case OP_IAND:
return OP_X86_AND_MEMBASE_REG;
case OP_IOR:
return OP_X86_OR_MEMBASE_REG;
case OP_IXOR:
return OP_X86_XOR_MEMBASE_REG;
case OP_IADD_IMM:
return OP_X86_ADD_MEMBASE_IMM;
case OP_ISUB_IMM:
return OP_X86_SUB_MEMBASE_IMM;
case OP_IAND_IMM:
return OP_X86_AND_MEMBASE_IMM;
case OP_IOR_IMM:
return OP_X86_OR_MEMBASE_IMM;
case OP_IXOR_IMM:
return OP_X86_XOR_MEMBASE_IMM;
case OP_LADD:
return OP_AMD64_ADD_MEMBASE_REG;
case OP_LSUB:
return OP_AMD64_SUB_MEMBASE_REG;
case OP_LAND:
return OP_AMD64_AND_MEMBASE_REG;
case OP_LOR:
return OP_AMD64_OR_MEMBASE_REG;
case OP_LXOR:
return OP_AMD64_XOR_MEMBASE_REG;
case OP_ADD_IMM:
case OP_LADD_IMM:
return OP_AMD64_ADD_MEMBASE_IMM;
case OP_SUB_IMM:
case OP_LSUB_IMM:
return OP_AMD64_SUB_MEMBASE_IMM;
case OP_AND_IMM:
case OP_LAND_IMM:
return OP_AMD64_AND_MEMBASE_IMM;
case OP_OR_IMM:
case OP_LOR_IMM:
return OP_AMD64_OR_MEMBASE_IMM;
case OP_XOR_IMM:
case OP_LXOR_IMM:
return OP_AMD64_XOR_MEMBASE_IMM;
case OP_MOVE:
return OP_NOP;
}
#endif
return -1;
}
static int
op_to_op_store_membase (int store_opcode, int opcode)
{
#if defined(TARGET_X86) || defined(TARGET_AMD64)
switch (opcode) {
case OP_ICEQ:
if (store_opcode == OP_STOREI1_MEMBASE_REG)
return OP_X86_SETEQ_MEMBASE;
break;
case OP_CNE:
if (store_opcode == OP_STOREI1_MEMBASE_REG)
return OP_X86_SETNE_MEMBASE;
break;
}
#endif
return -1;
}
static int
op_to_op_src1_membase (MonoCompile *cfg, int load_opcode, int opcode)
{
#ifdef TARGET_X86
/* FIXME: This has sign extension issues */
/*
if ((opcode == OP_ICOMPARE_IMM) && (load_opcode == OP_LOADU1_MEMBASE))
return OP_X86_COMPARE_MEMBASE8_IMM;
*/
if (!((load_opcode == OP_LOAD_MEMBASE) || (load_opcode == OP_LOADI4_MEMBASE) || (load_opcode == OP_LOADU4_MEMBASE)))
return -1;
switch (opcode) {
case OP_X86_PUSH:
return OP_X86_PUSH_MEMBASE;
case OP_COMPARE_IMM:
case OP_ICOMPARE_IMM:
return OP_X86_COMPARE_MEMBASE_IMM;
case OP_COMPARE:
case OP_ICOMPARE:
return OP_X86_COMPARE_MEMBASE_REG;
}
#endif
#ifdef TARGET_AMD64
/* FIXME: This has sign extension issues */
/*
if ((opcode == OP_ICOMPARE_IMM) && (load_opcode == OP_LOADU1_MEMBASE))
return OP_X86_COMPARE_MEMBASE8_IMM;
*/
switch (opcode) {
case OP_X86_PUSH:
if ((load_opcode == OP_LOAD_MEMBASE && !cfg->backend->ilp32) || (load_opcode == OP_LOADI8_MEMBASE))
return OP_X86_PUSH_MEMBASE;
break;
/* FIXME: This only works for 32 bit immediates
case OP_COMPARE_IMM:
case OP_LCOMPARE_IMM:
if ((load_opcode == OP_LOAD_MEMBASE) || (load_opcode == OP_LOADI8_MEMBASE))
return OP_AMD64_COMPARE_MEMBASE_IMM;
*/
case OP_ICOMPARE_IMM:
if ((load_opcode == OP_LOADI4_MEMBASE) || (load_opcode == OP_LOADU4_MEMBASE))
return OP_AMD64_ICOMPARE_MEMBASE_IMM;
break;
case OP_COMPARE:
case OP_LCOMPARE:
if (cfg->backend->ilp32 && load_opcode == OP_LOAD_MEMBASE)
return OP_AMD64_ICOMPARE_MEMBASE_REG;
if ((load_opcode == OP_LOAD_MEMBASE && !cfg->backend->ilp32) || (load_opcode == OP_LOADI8_MEMBASE))
return OP_AMD64_COMPARE_MEMBASE_REG;
break;
case OP_ICOMPARE:
if ((load_opcode == OP_LOADI4_MEMBASE) || (load_opcode == OP_LOADU4_MEMBASE))
return OP_AMD64_ICOMPARE_MEMBASE_REG;
break;
}
#endif
return -1;
}
static int
op_to_op_src2_membase (MonoCompile *cfg, int load_opcode, int opcode)
{
#ifdef TARGET_X86
if (!((load_opcode == OP_LOAD_MEMBASE) || (load_opcode == OP_LOADI4_MEMBASE) || (load_opcode == OP_LOADU4_MEMBASE)))
return -1;
switch (opcode) {
case OP_COMPARE:
case OP_ICOMPARE:
return OP_X86_COMPARE_REG_MEMBASE;
case OP_IADD:
return OP_X86_ADD_REG_MEMBASE;
case OP_ISUB:
return OP_X86_SUB_REG_MEMBASE;
case OP_IAND:
return OP_X86_AND_REG_MEMBASE;
case OP_IOR:
return OP_X86_OR_REG_MEMBASE;
case OP_IXOR:
return OP_X86_XOR_REG_MEMBASE;
}
#endif
#ifdef TARGET_AMD64
if ((load_opcode == OP_LOADI4_MEMBASE) || (load_opcode == OP_LOADU4_MEMBASE) || (load_opcode == OP_LOAD_MEMBASE && cfg->backend->ilp32)) {
switch (opcode) {
case OP_ICOMPARE:
return OP_AMD64_ICOMPARE_REG_MEMBASE;
case OP_IADD:
return OP_X86_ADD_REG_MEMBASE;
case OP_ISUB:
return OP_X86_SUB_REG_MEMBASE;
case OP_IAND:
return OP_X86_AND_REG_MEMBASE;
case OP_IOR:
return OP_X86_OR_REG_MEMBASE;
case OP_IXOR:
return OP_X86_XOR_REG_MEMBASE;
}
} else if ((load_opcode == OP_LOADI8_MEMBASE) || (load_opcode == OP_LOAD_MEMBASE && !cfg->backend->ilp32)) {
switch (opcode) {
case OP_COMPARE:
case OP_LCOMPARE:
return OP_AMD64_COMPARE_REG_MEMBASE;
case OP_LADD:
return OP_AMD64_ADD_REG_MEMBASE;
case OP_LSUB:
return OP_AMD64_SUB_REG_MEMBASE;
case OP_LAND:
return OP_AMD64_AND_REG_MEMBASE;
case OP_LOR:
return OP_AMD64_OR_REG_MEMBASE;
case OP_LXOR:
return OP_AMD64_XOR_REG_MEMBASE;
}
}
#endif
return -1;
}
int
mono_op_to_op_imm_noemul (int opcode)
{
MONO_DISABLE_WARNING(4065) // switch with default but no case
switch (opcode) {
#if SIZEOF_REGISTER == 4 && !defined(MONO_ARCH_NO_EMULATE_LONG_SHIFT_OPS)
case OP_LSHR:
case OP_LSHL:
case OP_LSHR_UN:
return -1;
#endif
#if defined(MONO_ARCH_EMULATE_MUL_DIV) || defined(MONO_ARCH_EMULATE_DIV)
case OP_IDIV:
case OP_IDIV_UN:
case OP_IREM:
case OP_IREM_UN:
return -1;
#endif
#if defined(MONO_ARCH_EMULATE_MUL_DIV)
case OP_IMUL:
return -1;
#endif
default:
return mono_op_to_op_imm (opcode);
}
MONO_RESTORE_WARNING
}
gboolean
mono_op_no_side_effects (int opcode)
{
/* FIXME: Add more instructions */
/* INEG sets the condition codes, and the OP_LNEG decomposition depends on this on x86 */
switch (opcode) {
case OP_MOVE:
case OP_FMOVE:
case OP_VMOVE:
case OP_XMOVE:
case OP_RMOVE:
case OP_VZERO:
case OP_XZERO:
case OP_ICONST:
case OP_I8CONST:
case OP_ADD_IMM:
case OP_R8CONST:
case OP_LADD_IMM:
case OP_ISUB_IMM:
case OP_IADD_IMM:
case OP_LNEG:
case OP_ISUB:
case OP_CMOV_IGE:
case OP_ISHL_IMM:
case OP_ISHR_IMM:
case OP_ISHR_UN_IMM:
case OP_IAND_IMM:
case OP_ICONV_TO_U1:
case OP_ICONV_TO_I1:
case OP_SEXT_I4:
case OP_LCONV_TO_U1:
case OP_ICONV_TO_U2:
case OP_ICONV_TO_I2:
case OP_LCONV_TO_I2:
case OP_LDADDR:
case OP_PHI:
case OP_NOP:
case OP_ZEXT_I4:
case OP_NOT_NULL:
case OP_IL_SEQ_POINT:
case OP_RTTYPE:
return TRUE;
default:
return FALSE;
}
}
gboolean
mono_ins_no_side_effects (MonoInst *ins)
{
if (mono_op_no_side_effects (ins->opcode))
return TRUE;
if (ins->opcode == OP_AOTCONST) {
MonoJumpInfoType type = (MonoJumpInfoType)(intptr_t)ins->inst_p1;
// Some AOTCONSTs have side effects
switch (type) {
case MONO_PATCH_INFO_TYPE_FROM_HANDLE:
case MONO_PATCH_INFO_LDSTR:
case MONO_PATCH_INFO_VTABLE:
case MONO_PATCH_INFO_METHOD_RGCTX:
return TRUE;
}
}
return FALSE;
}
/**
* mono_handle_global_vregs:
*
* Make vregs used in more than one bblock 'global', i.e. allocate a variable
* for them.
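* For example, a vreg written in one bblock and read in a successor can't
* stay a block-local vreg; it gets a MonoInst created with
* mono_compile_create_var_for_vreg () so the global register allocator
* handles it.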
*/
void
mono_handle_global_vregs (MonoCompile *cfg)
{
gint32 *vreg_to_bb;
MonoBasicBlock *bb;
int i, pos;
vreg_to_bb = (gint32 *)mono_mempool_alloc0 (cfg->mempool, sizeof (gint32) * (cfg->next_vreg + 1));
#ifdef MONO_ARCH_SIMD_INTRINSICS
if (cfg->uses_simd_intrinsics & MONO_CFG_USES_SIMD_INTRINSICS_SIMPLIFY_INDIRECTION)
mono_simd_simplify_indirection (cfg);
#endif
/* Find local vregs used in more than one bb */
for (bb = cfg->bb_entry; bb; bb = bb->next_bb) {
MonoInst *ins = bb->code;
int block_num = bb->block_num;
if (cfg->verbose_level > 2)
printf ("\nHANDLE-GLOBAL-VREGS BLOCK %d:\n", bb->block_num);
cfg->cbb = bb;
for (; ins; ins = ins->next) {
const char *spec = INS_INFO (ins->opcode);
int regtype = 0, regindex;
gint32 prev_bb;
if (G_UNLIKELY (cfg->verbose_level > 2))
mono_print_ins (ins);
g_assert (ins->opcode >= MONO_CEE_LAST);
for (regindex = 0; regindex < 4; regindex ++) {
int vreg = 0;
if (regindex == 0) {
regtype = spec [MONO_INST_DEST];
if (regtype == ' ')
continue;
vreg = ins->dreg;
} else if (regindex == 1) {
regtype = spec [MONO_INST_SRC1];
if (regtype == ' ')
continue;
vreg = ins->sreg1;
} else if (regindex == 2) {
regtype = spec [MONO_INST_SRC2];
if (regtype == ' ')
continue;
vreg = ins->sreg2;
} else if (regindex == 3) {
regtype = spec [MONO_INST_SRC3];
if (regtype == ' ')
continue;
vreg = ins->sreg3;
}
#if SIZEOF_REGISTER == 4
/* In the LLVM case, the long opcodes are not decomposed */
if (regtype == 'l' && !COMPILE_LLVM (cfg)) {
/*
* Since some instructions reference the original long vreg,
* and some reference the two component vregs, it is quite hard
* to determine when it needs to be global. So be conservative.
*/
if (!get_vreg_to_inst (cfg, vreg)) {
mono_compile_create_var_for_vreg (cfg, m_class_get_byval_arg (mono_defaults.int64_class), OP_LOCAL, vreg);
if (cfg->verbose_level > 2)
printf ("LONG VREG R%d made global.\n", vreg);
}
/*
* Make the component vregs volatile since the optimizations can
* get confused otherwise.
*/
get_vreg_to_inst (cfg, MONO_LVREG_LS (vreg))->flags |= MONO_INST_VOLATILE;
get_vreg_to_inst (cfg, MONO_LVREG_MS (vreg))->flags |= MONO_INST_VOLATILE;
}
#endif
g_assert (vreg != -1);
prev_bb = vreg_to_bb [vreg];
if (prev_bb == 0) {
/* 0 is a valid block num */
vreg_to_bb [vreg] = block_num + 1;
} else if ((prev_bb != block_num + 1) && (prev_bb != -1)) {
if (((regtype == 'i' && (vreg < MONO_MAX_IREGS))) || (regtype == 'f' && (vreg < MONO_MAX_FREGS)))
continue;
if (!get_vreg_to_inst (cfg, vreg)) {
if (G_UNLIKELY (cfg->verbose_level > 2))
printf ("VREG R%d used in BB%d and BB%d made global.\n", vreg, vreg_to_bb [vreg], block_num);
switch (regtype) {
case 'i':
if (vreg_is_ref (cfg, vreg))
mono_compile_create_var_for_vreg (cfg, mono_get_object_type (), OP_LOCAL, vreg);
else
mono_compile_create_var_for_vreg (cfg, mono_get_int_type (), OP_LOCAL, vreg);
break;
case 'l':
mono_compile_create_var_for_vreg (cfg, m_class_get_byval_arg (mono_defaults.int64_class), OP_LOCAL, vreg);
break;
case 'f':
mono_compile_create_var_for_vreg (cfg, m_class_get_byval_arg (mono_defaults.double_class), OP_LOCAL, vreg);
break;
case 'v':
case 'x':
mono_compile_create_var_for_vreg (cfg, m_class_get_byval_arg (ins->klass), OP_LOCAL, vreg);
break;
default:
g_assert_not_reached ();
}
}
/* Flag as having been used in more than one bb */
vreg_to_bb [vreg] = -1;
}
}
}
}
/* If a variable is used in only one bblock, convert it into a local vreg */
for (i = 0; i < cfg->num_varinfo; i++) {
MonoInst *var = cfg->varinfo [i];
MonoMethodVar *vmv = MONO_VARINFO (cfg, i);
switch (var->type) {
case STACK_I4:
case STACK_OBJ:
case STACK_PTR:
case STACK_MP:
case STACK_VTYPE:
#if SIZEOF_REGISTER == 8
case STACK_I8:
#endif
#if !defined(TARGET_X86)
/* Enabling this screws up the fp stack on x86 */
case STACK_R8:
#endif
if (mono_arch_is_soft_float ())
break;
/*
if (var->type == STACK_VTYPE && cfg->gsharedvt && mini_is_gsharedvt_variable_type (var->inst_vtype))
break;
*/
/* Arguments are implicitly global */
/* Putting R4 vars into registers doesn't work currently */
/* The gsharedvt vars are implicitly referenced by ldaddr opcodes, but those opcodes are only generated later */
if ((var->opcode != OP_ARG) && (var != cfg->ret) && !(var->flags & (MONO_INST_VOLATILE|MONO_INST_INDIRECT)) && (vreg_to_bb [var->dreg] != -1) && (m_class_get_byval_arg (var->klass)->type != MONO_TYPE_R4) && !cfg->disable_vreg_to_lvreg && var != cfg->gsharedvt_info_var && var != cfg->gsharedvt_locals_var && var != cfg->lmf_addr_var) {
/*
* Make sure that the variable's liveness interval doesn't contain a call, since
* that would cause the lvreg to be spilled, making the whole optimization
* useless.
*/
/* This is too slow for JIT compilation */
#if 0
if (cfg->compile_aot && vreg_to_bb [var->dreg]) {
MonoInst *ins;
int def_index, call_index, ins_index;
gboolean spilled = FALSE;
def_index = -1;
call_index = -1;
ins_index = 0;
for (ins = vreg_to_bb [var->dreg]->code; ins; ins = ins->next) {
const char *spec = INS_INFO (ins->opcode);
if ((spec [MONO_INST_DEST] != ' ') && (ins->dreg == var->dreg))
def_index = ins_index;
if (((spec [MONO_INST_SRC1] != ' ') && (ins->sreg1 == var->dreg)) ||
((spec [MONO_INST_SRC2] != ' ') && (ins->sreg2 == var->dreg))) {
if (call_index > def_index) {
spilled = TRUE;
break;
}
}
if (MONO_IS_CALL (ins))
call_index = ins_index;
ins_index ++;
}
if (spilled)
break;
}
#endif
if (G_UNLIKELY (cfg->verbose_level > 2))
printf ("CONVERTED R%d(%d) TO VREG.\n", var->dreg, vmv->idx);
var->flags |= MONO_INST_IS_DEAD;
cfg->vreg_to_inst [var->dreg] = NULL;
}
break;
}
}
/*
* Compress the varinfo and vars tables so the liveness computation is faster and
* takes up less space.
*/
pos = 0;
for (i = 0; i < cfg->num_varinfo; ++i) {
MonoInst *var = cfg->varinfo [i];
if (pos < i && cfg->locals_start == i)
cfg->locals_start = pos;
if (!(var->flags & MONO_INST_IS_DEAD)) {
if (pos < i) {
cfg->varinfo [pos] = cfg->varinfo [i];
cfg->varinfo [pos]->inst_c0 = pos;
memcpy (&cfg->vars [pos], &cfg->vars [i], sizeof (MonoMethodVar));
cfg->vars [pos].idx = pos;
#if SIZEOF_REGISTER == 4
if (cfg->varinfo [pos]->type == STACK_I8) {
/* Modify the two component vars too */
MonoInst *var1;
var1 = get_vreg_to_inst (cfg, MONO_LVREG_LS (cfg->varinfo [pos]->dreg));
var1->inst_c0 = pos;
var1 = get_vreg_to_inst (cfg, MONO_LVREG_MS (cfg->varinfo [pos]->dreg));
var1->inst_c0 = pos;
}
#endif
}
pos ++;
}
}
cfg->num_varinfo = pos;
if (cfg->locals_start > cfg->num_varinfo)
cfg->locals_start = cfg->num_varinfo;
}
/*
* mono_allocate_gsharedvt_vars:
*
* Allocate variables with gsharedvt types to entries in the MonoGSharedVtMethodRuntimeInfo.entries array.
* Initialize cfg->gsharedvt_vreg_to_idx with the mapping between vregs and indexes.
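* The mapping convention: 0 means the vreg is not gsharedvt, IDX + 1 means
* a local assigned to entry IDX, and -1 means a gsharedvt argument passed
* by ref.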
*/
void
mono_allocate_gsharedvt_vars (MonoCompile *cfg)
{
int i;
cfg->gsharedvt_vreg_to_idx = (int *)mono_mempool_alloc0 (cfg->mempool, sizeof (int) * cfg->next_vreg);
for (i = 0; i < cfg->num_varinfo; ++i) {
MonoInst *ins = cfg->varinfo [i];
int idx;
if (mini_is_gsharedvt_variable_type (ins->inst_vtype)) {
if (i >= cfg->locals_start) {
/* Local */
idx = get_gsharedvt_info_slot (cfg, ins->inst_vtype, MONO_RGCTX_INFO_LOCAL_OFFSET);
cfg->gsharedvt_vreg_to_idx [ins->dreg] = idx + 1;
ins->opcode = OP_GSHAREDVT_LOCAL;
ins->inst_imm = idx;
} else {
/* Arg */
cfg->gsharedvt_vreg_to_idx [ins->dreg] = -1;
ins->opcode = OP_GSHAREDVT_ARG_REGOFFSET;
}
}
}
}
/**
* mono_spill_global_vars:
*
* Generate spill code for variables which are not allocated to registers,
* and replace vregs with their allocated hregs. *need_local_opts is set to TRUE if
* code is generated which could be optimized by the local optimization passes.
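* For example, an int add whose operands and destination all live on the
* stack becomes two loads into fresh lvregs, the add, and a store back to
* the destination's stack slot (unless one of the _membase fusions below
* applies).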
*/
void
mono_spill_global_vars (MonoCompile *cfg, gboolean *need_local_opts)
{
MonoBasicBlock *bb;
char spec2 [16];
int orig_next_vreg;
guint32 *vreg_to_lvreg;
guint32 *lvregs;
guint32 i, lvregs_len, lvregs_size;
gboolean dest_has_lvreg = FALSE;
MonoStackType stacktypes [128];
MonoInst **live_range_start, **live_range_end;
MonoBasicBlock **live_range_start_bb, **live_range_end_bb;
*need_local_opts = FALSE;
memset (spec2, 0, sizeof (spec2));
/* FIXME: Move this function to mini.c */
stacktypes [(int)'i'] = STACK_PTR;
stacktypes [(int)'l'] = STACK_I8;
stacktypes [(int)'f'] = STACK_R8;
#ifdef MONO_ARCH_SIMD_INTRINSICS
stacktypes [(int)'x'] = STACK_VTYPE;
#endif
#if SIZEOF_REGISTER == 4
/* Create MonoInsts for longs */
for (i = 0; i < cfg->num_varinfo; i++) {
MonoInst *ins = cfg->varinfo [i];
if ((ins->opcode != OP_REGVAR) && !(ins->flags & MONO_INST_IS_DEAD)) {
switch (ins->type) {
case STACK_R8:
case STACK_I8: {
MonoInst *tree;
if (ins->type == STACK_R8 && !COMPILE_SOFT_FLOAT (cfg))
break;
g_assert (ins->opcode == OP_REGOFFSET);
tree = get_vreg_to_inst (cfg, MONO_LVREG_LS (ins->dreg));
g_assert (tree);
tree->opcode = OP_REGOFFSET;
tree->inst_basereg = ins->inst_basereg;
tree->inst_offset = ins->inst_offset + MINI_LS_WORD_OFFSET;
tree = get_vreg_to_inst (cfg, MONO_LVREG_MS (ins->dreg));
g_assert (tree);
tree->opcode = OP_REGOFFSET;
tree->inst_basereg = ins->inst_basereg;
tree->inst_offset = ins->inst_offset + MINI_MS_WORD_OFFSET;
break;
}
default:
break;
}
}
}
#endif
if (cfg->compute_gc_maps) {
/* registers need liveness info even for non-ref vars */
for (i = 0; i < cfg->num_varinfo; i++) {
MonoInst *ins = cfg->varinfo [i];
if (ins->opcode == OP_REGVAR)
ins->flags |= MONO_INST_GC_TRACK;
}
}
/* FIXME: widening and truncation */
/*
* As an optimization, when a variable allocated to the stack is first loaded into
* an lvreg, we will remember the lvreg and use it the next time instead of loading
* the variable again.
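* E.g. two reads of the same stack variable in one bblock emit a single
* load; the second read reuses the cached lvreg. The cache is flushed at
* calls and at bblock boundaries.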
*/
orig_next_vreg = cfg->next_vreg;
vreg_to_lvreg = (guint32 *)mono_mempool_alloc0 (cfg->mempool, sizeof (guint32) * cfg->next_vreg);
lvregs_size = 1024;
lvregs = (guint32 *)mono_mempool_alloc (cfg->mempool, sizeof (guint32) * lvregs_size);
lvregs_len = 0;
/*
* These arrays contain the first and last instructions accessing a given
* variable.
* Since we emit bblocks in the same order we process them here, and we
* don't split live ranges, these will precisely describe the live range of
* the variable, i.e. the instruction range where a valid value can be found
* in the variable's location.
* The live range is computed using the liveness info computed by the liveness pass.
* We can't use vmv->range, since that is an abstract live range, and we need
* one which is instruction precise.
* FIXME: Variables used in out-of-line bblocks have a hole in their live range.
*/
/* FIXME: Only do this if debugging info is requested */
live_range_start = g_new0 (MonoInst*, cfg->next_vreg);
live_range_end = g_new0 (MonoInst*, cfg->next_vreg);
live_range_start_bb = g_new (MonoBasicBlock*, cfg->next_vreg);
live_range_end_bb = g_new (MonoBasicBlock*, cfg->next_vreg);
/* Add spill loads/stores */
for (bb = cfg->bb_entry; bb; bb = bb->next_bb) {
MonoInst *ins;
if (cfg->verbose_level > 2)
printf ("\nSPILL BLOCK %d:\n", bb->block_num);
/* Clear vreg_to_lvreg array */
for (i = 0; i < lvregs_len; i++)
vreg_to_lvreg [lvregs [i]] = 0;
lvregs_len = 0;
cfg->cbb = bb;
MONO_BB_FOR_EACH_INS (bb, ins) {
const char *spec = INS_INFO (ins->opcode);
int regtype, srcindex, sreg, tmp_reg, prev_dreg, num_sregs;
gboolean store, no_lvreg;
int sregs [MONO_MAX_SRC_REGS];
if (G_UNLIKELY (cfg->verbose_level > 2))
mono_print_ins (ins);
if (ins->opcode == OP_NOP)
continue;
/*
* We handle LDADDR here as well, since it can only be decomposed
* when variable addresses are known.
*/
if (ins->opcode == OP_LDADDR) {
MonoInst *var = (MonoInst *)ins->inst_p0;
if (var->opcode == OP_VTARG_ADDR) {
/* Happens on SPARC/S390 where vtypes are passed by reference */
MonoInst *vtaddr = var->inst_left;
if (vtaddr->opcode == OP_REGVAR) {
ins->opcode = OP_MOVE;
ins->sreg1 = vtaddr->dreg;
}
else if (var->inst_left->opcode == OP_REGOFFSET) {
ins->opcode = OP_LOAD_MEMBASE;
ins->inst_basereg = vtaddr->inst_basereg;
ins->inst_offset = vtaddr->inst_offset;
} else
NOT_IMPLEMENTED;
} else if (cfg->gsharedvt && cfg->gsharedvt_vreg_to_idx [var->dreg] < 0) {
/* gsharedvt arg passed by ref */
g_assert (var->opcode == OP_GSHAREDVT_ARG_REGOFFSET);
ins->opcode = OP_LOAD_MEMBASE;
ins->inst_basereg = var->inst_basereg;
ins->inst_offset = var->inst_offset;
} else if (cfg->gsharedvt && cfg->gsharedvt_vreg_to_idx [var->dreg]) {
MonoInst *load, *load2, *load3;
int idx = cfg->gsharedvt_vreg_to_idx [var->dreg] - 1;
int reg1, reg2, reg3;
MonoInst *info_var = cfg->gsharedvt_info_var;
MonoInst *locals_var = cfg->gsharedvt_locals_var;
/*
* gsharedvt local.
* Compute the address of the local as gsharedvt_locals_var + gsharedvt_info_var->locals_offsets [idx].
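* In pseudo-C: addr = locals_base + info->entries [idx]; the loads for the
* info pointer, the entry, and the locals base are built below, followed by
* an OP_PADD.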
*/
g_assert (var->opcode == OP_GSHAREDVT_LOCAL);
g_assert (info_var);
g_assert (locals_var);
/* Mark the instruction used to compute the locals var as used */
cfg->gsharedvt_locals_var_ins = NULL;
/* Load the offset */
if (info_var->opcode == OP_REGOFFSET) {
reg1 = alloc_ireg (cfg);
NEW_LOAD_MEMBASE (cfg, load, OP_LOAD_MEMBASE, reg1, info_var->inst_basereg, info_var->inst_offset);
} else if (info_var->opcode == OP_REGVAR) {
load = NULL;
reg1 = info_var->dreg;
} else {
g_assert_not_reached ();
}
reg2 = alloc_ireg (cfg);
NEW_LOAD_MEMBASE (cfg, load2, OP_LOADI4_MEMBASE, reg2, reg1, MONO_STRUCT_OFFSET (MonoGSharedVtMethodRuntimeInfo, entries) + (idx * TARGET_SIZEOF_VOID_P));
/* Load the locals area address */
reg3 = alloc_ireg (cfg);
if (locals_var->opcode == OP_REGOFFSET) {
NEW_LOAD_MEMBASE (cfg, load3, OP_LOAD_MEMBASE, reg3, locals_var->inst_basereg, locals_var->inst_offset);
} else if (locals_var->opcode == OP_REGVAR) {
NEW_UNALU (cfg, load3, OP_MOVE, reg3, locals_var->dreg);
} else {
g_assert_not_reached ();
}
/* Compute the address */
ins->opcode = OP_PADD;
ins->sreg1 = reg3;
ins->sreg2 = reg2;
mono_bblock_insert_before_ins (bb, ins, load3);
mono_bblock_insert_before_ins (bb, load3, load2);
if (load)
mono_bblock_insert_before_ins (bb, load2, load);
} else {
g_assert (var->opcode == OP_REGOFFSET);
ins->opcode = OP_ADD_IMM;
ins->sreg1 = var->inst_basereg;
ins->inst_imm = var->inst_offset;
}
*need_local_opts = TRUE;
spec = INS_INFO (ins->opcode);
}
if (ins->opcode < MONO_CEE_LAST) {
mono_print_ins (ins);
g_assert_not_reached ();
}
/*
* Store opcodes have destbasereg in the dreg, but in reality it is a
* src register.
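* (the swap is undone once the sregs have been processed; see 'if (store)' further down)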
* FIXME:
*/
if (MONO_IS_STORE_MEMBASE (ins)) {
tmp_reg = ins->dreg;
ins->dreg = ins->sreg2;
ins->sreg2 = tmp_reg;
store = TRUE;
spec2 [MONO_INST_DEST] = ' ';
spec2 [MONO_INST_SRC1] = spec [MONO_INST_SRC1];
spec2 [MONO_INST_SRC2] = spec [MONO_INST_DEST];
spec2 [MONO_INST_SRC3] = ' ';
spec = spec2;
} else if (MONO_IS_STORE_MEMINDEX (ins))
g_assert_not_reached ();
else
store = FALSE;
no_lvreg = FALSE;
if (G_UNLIKELY (cfg->verbose_level > 2)) {
printf ("\t %.3s %d", spec, ins->dreg);
num_sregs = mono_inst_get_src_registers (ins, sregs);
for (srcindex = 0; srcindex < num_sregs; ++srcindex)
printf (" %d", sregs [srcindex]);
printf ("\n");
}
/***************/
/* DREG */
/***************/
regtype = spec [MONO_INST_DEST];
g_assert (((ins->dreg == -1) && (regtype == ' ')) || ((ins->dreg != -1) && (regtype != ' ')));
prev_dreg = -1;
int dreg_using_dest_to_membase_op = -1;
if ((ins->dreg != -1) && get_vreg_to_inst (cfg, ins->dreg)) {
MonoInst *var = get_vreg_to_inst (cfg, ins->dreg);
MonoInst *store_ins;
int store_opcode;
MonoInst *def_ins = ins;
int dreg = ins->dreg; /* The original vreg */
store_opcode = mono_type_to_store_membase (cfg, var->inst_vtype);
if (var->opcode == OP_REGVAR) {
ins->dreg = var->dreg;
} else if ((ins->dreg == ins->sreg1) && (spec [MONO_INST_DEST] == 'i') && (spec [MONO_INST_SRC1] == 'i') && !vreg_to_lvreg [ins->dreg] && (op_to_op_dest_membase (store_opcode, ins->opcode) != -1)) {
/*
* Instead of emitting a load+store, use a _membase opcode.
*/
g_assert (var->opcode == OP_REGOFFSET);
if (ins->opcode == OP_MOVE) {
NULLIFY_INS (ins);
def_ins = NULL;
} else {
dreg_using_dest_to_membase_op = ins->dreg;
ins->opcode = op_to_op_dest_membase (store_opcode, ins->opcode);
ins->inst_basereg = var->inst_basereg;
ins->inst_offset = var->inst_offset;
ins->dreg = -1;
}
spec = INS_INFO (ins->opcode);
} else {
guint32 lvreg;
g_assert (var->opcode == OP_REGOFFSET);
prev_dreg = ins->dreg;
/* Invalidate any previous lvreg for this vreg */
vreg_to_lvreg [ins->dreg] = 0;
lvreg = 0;
if (COMPILE_SOFT_FLOAT (cfg) && store_opcode == OP_STORER8_MEMBASE_REG) {
regtype = 'l';
store_opcode = OP_STOREI8_MEMBASE_REG;
}
ins->dreg = alloc_dreg (cfg, stacktypes [regtype]);
#if SIZEOF_REGISTER != 8
if (regtype == 'l') {
NEW_STORE_MEMBASE (cfg, store_ins, OP_STOREI4_MEMBASE_REG, var->inst_basereg, var->inst_offset + MINI_LS_WORD_OFFSET, MONO_LVREG_LS (ins->dreg));
mono_bblock_insert_after_ins (bb, ins, store_ins);
NEW_STORE_MEMBASE (cfg, store_ins, OP_STOREI4_MEMBASE_REG, var->inst_basereg, var->inst_offset + MINI_MS_WORD_OFFSET, MONO_LVREG_MS (ins->dreg));
mono_bblock_insert_after_ins (bb, ins, store_ins);
def_ins = store_ins;
}
else
#endif
{
g_assert (store_opcode != OP_STOREV_MEMBASE);
/* Try to fuse the store into the instruction itself */
/* FIXME: Add more instructions */
if (!lvreg && ((ins->opcode == OP_ICONST) || ((ins->opcode == OP_I8CONST) && (ins->inst_c0 == 0)))) {
ins->opcode = store_membase_reg_to_store_membase_imm (store_opcode);
ins->inst_imm = ins->inst_c0;
ins->inst_destbasereg = var->inst_basereg;
ins->inst_offset = var->inst_offset;
spec = INS_INFO (ins->opcode);
} else if (!lvreg && ((ins->opcode == OP_MOVE) || (ins->opcode == OP_FMOVE) || (ins->opcode == OP_LMOVE) || (ins->opcode == OP_RMOVE))) {
ins->opcode = store_opcode;
ins->inst_destbasereg = var->inst_basereg;
ins->inst_offset = var->inst_offset;
no_lvreg = TRUE;
tmp_reg = ins->dreg;
ins->dreg = ins->sreg2;
ins->sreg2 = tmp_reg;
store = TRUE;
spec2 [MONO_INST_DEST] = ' ';
spec2 [MONO_INST_SRC1] = spec [MONO_INST_SRC1];
spec2 [MONO_INST_SRC2] = spec [MONO_INST_DEST];
spec2 [MONO_INST_SRC3] = ' ';
spec = spec2;
} else if (!lvreg && (op_to_op_store_membase (store_opcode, ins->opcode) != -1)) {
// FIXME: The backends expect the base reg to be in inst_basereg
ins->opcode = op_to_op_store_membase (store_opcode, ins->opcode);
ins->dreg = -1;
ins->inst_basereg = var->inst_basereg;
ins->inst_offset = var->inst_offset;
spec = INS_INFO (ins->opcode);
} else {
/* printf ("INS: "); mono_print_ins (ins); */
/* Create a store instruction */
NEW_STORE_MEMBASE (cfg, store_ins, store_opcode, var->inst_basereg, var->inst_offset, ins->dreg);
/* Insert it after the instruction */
mono_bblock_insert_after_ins (bb, ins, store_ins);
def_ins = store_ins;
/*
* We can't assign ins->dreg to var->dreg here, since the
* sregs could use it. So set a flag, and do it after
* the sregs.
*/
if ((!cfg->backend->use_fpstack || ((store_opcode != OP_STORER8_MEMBASE_REG) && (store_opcode != OP_STORER4_MEMBASE_REG))) && !((var)->flags & (MONO_INST_VOLATILE|MONO_INST_INDIRECT)))
dest_has_lvreg = TRUE;
}
}
}
if (def_ins && !live_range_start [dreg]) {
live_range_start [dreg] = def_ins;
live_range_start_bb [dreg] = bb;
}
if (cfg->compute_gc_maps && def_ins && (var->flags & MONO_INST_GC_TRACK)) {
MonoInst *tmp;
MONO_INST_NEW (cfg, tmp, OP_GC_LIVENESS_DEF);
tmp->inst_c1 = dreg;
mono_bblock_insert_after_ins (bb, def_ins, tmp);
}
}
/************/
/* SREGS */
/************/
num_sregs = mono_inst_get_src_registers (ins, sregs);
for (srcindex = 0; srcindex < 3; ++srcindex) {
regtype = spec [MONO_INST_SRC1 + srcindex];
sreg = sregs [srcindex];
g_assert (((sreg == -1) && (regtype == ' ')) || ((sreg != -1) && (regtype != ' ')));
if ((sreg != -1) && get_vreg_to_inst (cfg, sreg)) {
MonoInst *var = get_vreg_to_inst (cfg, sreg);
MonoInst *use_ins = ins;
MonoInst *load_ins;
guint32 load_opcode;
if (var->opcode == OP_REGVAR) {
sregs [srcindex] = var->dreg;
//mono_inst_set_src_registers (ins, sregs);
live_range_end [sreg] = use_ins;
live_range_end_bb [sreg] = bb;
if (cfg->compute_gc_maps && var->dreg < orig_next_vreg && (var->flags & MONO_INST_GC_TRACK)) {
MonoInst *tmp;
MONO_INST_NEW (cfg, tmp, OP_GC_LIVENESS_USE);
/* var->dreg is a hreg */
tmp->inst_c1 = sreg;
mono_bblock_insert_after_ins (bb, ins, tmp);
}
continue;
}
g_assert (var->opcode == OP_REGOFFSET);
load_opcode = mono_type_to_load_membase (cfg, var->inst_vtype);
g_assert (load_opcode != OP_LOADV_MEMBASE);
if (vreg_to_lvreg [sreg]) {
g_assert (vreg_to_lvreg [sreg] != -1);
/* The variable is already loaded to an lvreg */
if (G_UNLIKELY (cfg->verbose_level > 2))
printf ("\t\tUse lvreg R%d for R%d.\n", vreg_to_lvreg [sreg], sreg);
sregs [srcindex] = vreg_to_lvreg [sreg];
//mono_inst_set_src_registers (ins, sregs);
continue;
}
/* Try to fuse the load into the instruction */
if ((srcindex == 0) && (op_to_op_src1_membase (cfg, load_opcode, ins->opcode) != -1)) {
ins->opcode = op_to_op_src1_membase (cfg, load_opcode, ins->opcode);
sregs [0] = var->inst_basereg;
//mono_inst_set_src_registers (ins, sregs);
ins->inst_offset = var->inst_offset;
} else if ((srcindex == 1) && (op_to_op_src2_membase (cfg, load_opcode, ins->opcode) != -1)) {
ins->opcode = op_to_op_src2_membase (cfg, load_opcode, ins->opcode);
sregs [1] = var->inst_basereg;
//mono_inst_set_src_registers (ins, sregs);
ins->inst_offset = var->inst_offset;
} else {
if (MONO_IS_REAL_MOVE (ins)) {
ins->opcode = OP_NOP;
sreg = ins->dreg;
} else {
//printf ("%d ", srcindex); mono_print_ins (ins);
sreg = alloc_dreg (cfg, stacktypes [regtype]);
if ((!cfg->backend->use_fpstack || ((load_opcode != OP_LOADR8_MEMBASE) && (load_opcode != OP_LOADR4_MEMBASE))) && !((var)->flags & (MONO_INST_VOLATILE|MONO_INST_INDIRECT)) && !no_lvreg) {
if (var->dreg == prev_dreg) {
/*
* sreg refers to the value loaded by the load
* emitted below, but we need to use ins->dreg
* since it refers to the store emitted earlier.
*/
sreg = ins->dreg;
}
g_assert (sreg != -1);
if (var->dreg == dreg_using_dest_to_membase_op) {
if (cfg->verbose_level > 2)
printf ("\tCan't cache R%d because it's part of a dreg dest_membase optimization\n", var->dreg);
} else {
vreg_to_lvreg [var->dreg] = sreg;
}
if (lvregs_len >= lvregs_size) {
guint32 *new_lvregs = mono_mempool_alloc0 (cfg->mempool, sizeof (guint32) * lvregs_size * 2);
memcpy (new_lvregs, lvregs, sizeof (guint32) * lvregs_size);
lvregs = new_lvregs;
lvregs_size *= 2;
}
lvregs [lvregs_len ++] = var->dreg;
}
}
sregs [srcindex] = sreg;
//mono_inst_set_src_registers (ins, sregs);
#if SIZEOF_REGISTER != 8
if (regtype == 'l') {
NEW_LOAD_MEMBASE (cfg, load_ins, OP_LOADI4_MEMBASE, MONO_LVREG_MS (sreg), var->inst_basereg, var->inst_offset + MINI_MS_WORD_OFFSET);
mono_bblock_insert_before_ins (bb, ins, load_ins);
NEW_LOAD_MEMBASE (cfg, load_ins, OP_LOADI4_MEMBASE, MONO_LVREG_LS (sreg), var->inst_basereg, var->inst_offset + MINI_LS_WORD_OFFSET);
mono_bblock_insert_before_ins (bb, ins, load_ins);
use_ins = load_ins;
}
else
#endif
{
#if SIZEOF_REGISTER == 4
g_assert (load_opcode != OP_LOADI8_MEMBASE);
#endif
NEW_LOAD_MEMBASE (cfg, load_ins, load_opcode, sreg, var->inst_basereg, var->inst_offset);
mono_bblock_insert_before_ins (bb, ins, load_ins);
use_ins = load_ins;
}
if (cfg->verbose_level > 2)
mono_print_ins_index (0, use_ins);
}
if (var->dreg < orig_next_vreg) {
live_range_end [var->dreg] = use_ins;
live_range_end_bb [var->dreg] = bb;
}
if (cfg->compute_gc_maps && var->dreg < orig_next_vreg && (var->flags & MONO_INST_GC_TRACK)) {
MonoInst *tmp;
MONO_INST_NEW (cfg, tmp, OP_GC_LIVENESS_USE);
tmp->inst_c1 = var->dreg;
mono_bblock_insert_after_ins (bb, ins, tmp);
}
}
}
mono_inst_set_src_registers (ins, sregs);
if (dest_has_lvreg) {
g_assert (ins->dreg != -1);
vreg_to_lvreg [prev_dreg] = ins->dreg;
if (lvregs_len >= lvregs_size) {
guint32 *new_lvregs = mono_mempool_alloc0 (cfg->mempool, sizeof (guint32) * lvregs_size * 2);
memcpy (new_lvregs, lvregs, sizeof (guint32) * lvregs_size);
lvregs = new_lvregs;
lvregs_size *= 2;
}
lvregs [lvregs_len ++] = prev_dreg;
dest_has_lvreg = FALSE;
}
if (store) {
tmp_reg = ins->dreg;
ins->dreg = ins->sreg2;
ins->sreg2 = tmp_reg;
}
if (MONO_IS_CALL (ins)) {
/* Clear vreg_to_lvreg array */
for (i = 0; i < lvregs_len; i++)
vreg_to_lvreg [lvregs [i]] = 0;
lvregs_len = 0;
} else if (ins->opcode == OP_NOP) {
ins->dreg = -1;
MONO_INST_NULLIFY_SREGS (ins);
}
if (cfg->verbose_level > 2)
mono_print_ins_index (1, ins);
}
/* Extend the live range based on the liveness info */
if (cfg->compute_precise_live_ranges && bb->live_out_set && bb->code) {
for (i = 0; i < cfg->num_varinfo; i ++) {
MonoMethodVar *vi = MONO_VARINFO (cfg, i);
if (vreg_is_volatile (cfg, vi->vreg))
/* The liveness info is incomplete */
continue;
if (mono_bitset_test_fast (bb->live_in_set, i) && !live_range_start [vi->vreg]) {
/* Live from at least the first ins of this bb */
live_range_start [vi->vreg] = bb->code;
live_range_start_bb [vi->vreg] = bb;
}
if (mono_bitset_test_fast (bb->live_out_set, i)) {
/* Live at least until the last ins of this bb */
live_range_end [vi->vreg] = bb->last_ins;
live_range_end_bb [vi->vreg] = bb;
}
}
}
}
/*
* Emit LIVERANGE_START/LIVERANGE_END opcodes, the backend will implement them
* by storing the current native offset into MonoMethodVar->live_range_start/end.
*/
if (cfg->compute_precise_live_ranges && cfg->comp_done & MONO_COMP_LIVENESS) {
for (i = 0; i < cfg->num_varinfo; ++i) {
int vreg = MONO_VARINFO (cfg, i)->vreg;
MonoInst *ins;
if (live_range_start [vreg]) {
MONO_INST_NEW (cfg, ins, OP_LIVERANGE_START);
ins->inst_c0 = i;
ins->inst_c1 = vreg;
mono_bblock_insert_after_ins (live_range_start_bb [vreg], live_range_start [vreg], ins);
}
if (live_range_end [vreg]) {
MONO_INST_NEW (cfg, ins, OP_LIVERANGE_END);
ins->inst_c0 = i;
ins->inst_c1 = vreg;
if (live_range_end [vreg] == live_range_end_bb [vreg]->last_ins)
mono_add_ins_to_end (live_range_end_bb [vreg], ins);
else
mono_bblock_insert_after_ins (live_range_end_bb [vreg], live_range_end [vreg], ins);
}
}
}
if (cfg->gsharedvt_locals_var_ins) {
/* Nullify if unused */
cfg->gsharedvt_locals_var_ins->opcode = OP_PCONST;
cfg->gsharedvt_locals_var_ins->inst_imm = 0;
}
g_free (live_range_start);
g_free (live_range_end);
g_free (live_range_start_bb);
g_free (live_range_end_bb);
}
/**
* FIXME:
* - use 'iadd' instead of 'int_add'
* - handling ovf opcodes: decompose in method_to_ir.
* - unify iregs/fregs
* -> partly done, the missing parts are:
* - a more complete unification would involve unifying the hregs as well, so
* code wouldn't need if (fp) all over the place. but that would mean the hregs
* would no longer map to the machine hregs, so the code generators would need to
* be modified. Also, on ia64 for example, niregs + nfregs > 256 -> bitmasks
* wouldn't work any more. Duplicating the code in mono_local_regalloc () into
* fp/non-fp branches speeds it up by about 15%.
* - use sext/zext opcodes instead of shifts
* - add OP_ICALL
* - get rid of TEMPLOADs if possible and use vregs instead
* - clean up usage of OP_P/OP_ opcodes
* - cleanup usage of DUMMY_USE
* - cleanup the setting of ins->type for MonoInst's which are pushed on the
* stack
* - set the stack type and allocate a dreg in the EMIT_NEW macros
* - get rid of all the <foo>2 stuff when the new JIT is ready.
* - make sure handle_stack_args () is called before the branch is emitted
* - when the new IR is done, get rid of all unused stuff
* - COMPARE/BEQ as separate instructions or unify them ?
* - keeping them separate allows specialized compare instructions like
* compare_imm, compare_membase
* - most back ends unify fp compare+branch, fp compare+ceq
* - integrate mono_save_args into inline_method
* - get rid of the empty bblocks created by MONO_EMIT_NEW_BRANCH_BLOCK2
* - handle long shift opts on 32 bit platforms somehow: they require
* 3 sregs (2 for arg1 and 1 for arg2)
* - make byref a 'normal' type.
* - use vregs for bb->out_stacks if possible, handle_global_vregs will make them a
* variable if needed.
* - do not start a new IL level bblock when cfg->cbb is changed by a function call
* like inline_method.
* - remove inlining restrictions
* - fix LNEG and enable cfold of INEG
* - generalize x86 optimizations like ldelema as a peephole optimization
* - add store_mem_imm for amd64
* - optimize the loading of the interruption flag in the managed->native wrappers
* - avoid special handling of OP_NOP in passes
* - move code inserting instructions into one function/macro.
* - try a coalescing phase after liveness analysis
* - add float -> vreg conversion + local optimizations on !x86
* - figure out how to handle decomposed branches during optimizations, ie.
* compare+branch, op_jump_table+op_br etc.
* - promote RuntimeXHandles to vregs
* - vtype cleanups:
* - add a NEW_VARLOADA_VREG macro
* - the vtype optimizations are blocked by the LDADDR opcodes generated for
* accessing vtype fields.
* - get rid of I8CONST on 64 bit platforms
* - dealing with the increase in code size due to branches created during opcode
* decomposition:
* - use extended basic blocks
* - all parts of the JIT
* - handle_global_vregs () && local regalloc
* - avoid introducing global vregs during decomposition, like 'vtable' in isinst
* - sources of increase in code size:
* - vtypes
* - long compares
* - isinst and castclass
* - lvregs not allocated to global registers even if used multiple times
* - call cctors outside the JIT, to make -v output more readable and JIT timings more
* meaningful.
* - check for fp stack leakage in other opcodes too. (-> 'exceptions' optimization)
* - add all micro optimizations from the old JIT
* - put tree optimizations into the deadce pass
* - decompose op_start_handler/op_endfilter/op_endfinally earlier using an arch
* specific function.
* - unify the float comparison opcodes with the other comparison opcodes, i.e.
* fcompare + branchCC.
* - create a helper function for allocating a stack slot, taking into account
* MONO_CFG_HAS_SPILLUP.
* - merge r68207.
* - optimize mono_regstate2_alloc_int/float.
* - fix the pessimistic handling of variables accessed in exception handler blocks.
* - need to write a tree optimization pass, but the creation of trees is difficult, i.e.
* parts of the tree could be separated by other instructions, killing the tree
* arguments, or stores killing loads etc. Also, should we fold loads into other
* instructions if the result of the load is used multiple times ?
* - make the REM_IMM optimization in mini-x86.c arch-independent.
* - LAST MERGE: 108395.
* - when returning vtypes in registers, generate IR and append it to the end of the
* last bb instead of doing it in the epilog.
* - change the store opcodes so they use sreg1 instead of dreg to store the base register.
*/
/*
NOTES
-----
- When to decompose opcodes:
- earlier: this makes some optimizations hard to implement, since the low level IR
no longer contains the necessary information. But it is easier to do.
- later: harder to implement, enables more optimizations.
- Branches inside bblocks:
- created when decomposing complex opcodes.
- branches to another bblock: harmless, but not tracked by the branch
optimizations, so need to branch to a label at the start of the bblock.
- branches to inside the same bblock: very problematic, trips up the local
reg allocator. Can be fixed by splitting the current bblock, but that is a
complex operation, since some local vregs can become global vregs etc.
- Local/global vregs:
- local vregs: temporary vregs used inside one bblock. Assigned to hregs by the
local register allocator.
- global vregs: used in more than one bblock. Have an associated MonoMethodVar
structure, created by mono_create_var (). Assigned to hregs or the stack by
the global register allocator.
- When to do optimizations like alu->alu_imm:
- earlier -> saves work later on since the IR will be smaller/simpler
- later -> can work on more instructions
- Handling of valuetypes:
- When a vtype is pushed on the stack, a new temporary is created, an
instruction computing its address (LDADDR) is emitted and pushed on
the stack. Need to optimize cases when the vtype is used immediately as in
argument passing, stloc etc.
- Instead of the to_end stuff in the old JIT, simply call the function handling
the values on the stack before emitting the last instruction of the bb.
*/
#else /* !DISABLE_JIT */
MONO_EMPTY_SOURCE_FILE (method_to_ir);
#endif /* !DISABLE_JIT */
| /**
* \file
* Convert CIL to the JIT internal representation
*
* Author:
* Paolo Molaro ([email protected])
* Dietmar Maurer ([email protected])
*
* (C) 2002 Ximian, Inc.
* Copyright 2003-2010 Novell, Inc (http://www.novell.com)
* Copyright 2011 Xamarin, Inc (http://www.xamarin.com)
* Licensed under the MIT license. See LICENSE file in the project root for full license information.
*/
#include <config.h>
#include <glib.h>
#include <mono/utils/mono-compiler.h>
#include "mini.h"
#ifndef DISABLE_JIT
#include <signal.h>
#ifdef HAVE_UNISTD_H
#include <unistd.h>
#endif
#include <math.h>
#include <string.h>
#include <ctype.h>
#ifdef HAVE_SYS_TIME_H
#include <sys/time.h>
#endif
#ifdef HAVE_ALLOCA_H
#include <alloca.h>
#endif
#include <mono/utils/memcheck.h>
#include <mono/metadata/abi-details.h>
#include <mono/metadata/assembly.h>
#include <mono/metadata/assembly-internals.h>
#include <mono/metadata/attrdefs.h>
#include <mono/metadata/loader.h>
#include <mono/metadata/tabledefs.h>
#include <mono/metadata/class.h>
#include <mono/metadata/class-abi-details.h>
#include <mono/metadata/object.h>
#include <mono/metadata/exception.h>
#include <mono/metadata/exception-internals.h>
#include <mono/metadata/opcodes.h>
#include <mono/metadata/mono-endian.h>
#include <mono/metadata/tokentype.h>
#include <mono/metadata/tabledefs.h>
#include <mono/metadata/marshal.h>
#include <mono/metadata/debug-helpers.h>
#include <mono/metadata/debug-internals.h>
#include <mono/metadata/gc-internals.h>
#include <mono/metadata/threads-types.h>
#include <mono/metadata/profiler-private.h>
#include <mono/metadata/profiler.h>
#include <mono/metadata/monitor.h>
#include <mono/utils/mono-memory-model.h>
#include <mono/utils/mono-error-internals.h>
#include <mono/metadata/mono-basic-block.h>
#include <mono/metadata/reflection-internals.h>
#include <mono/utils/mono-threads-coop.h>
#include <mono/utils/mono-utils-debug.h>
#include <mono/utils/mono-logger-internals.h>
#include <mono/metadata/verify-internals.h>
#include <mono/metadata/icall-decl.h>
#include "mono/metadata/icall-signatures.h"
#include "trace.h"
#include "ir-emit.h"
#include "jit-icalls.h"
#include <mono/jit/jit.h>
#include "seq-points.h"
#include "aot-compiler.h"
#include "mini-llvm.h"
#include "mini-runtime.h"
#include "llvmonly-runtime.h"
#include "mono/utils/mono-tls-inline.h"
#define BRANCH_COST 10
#define CALL_COST 10
/* Used for the JIT */
#define INLINE_LENGTH_LIMIT 20
/*
 * The AOT and JIT inline limits should be different:
 * AOT sees the whole program, so we can let opt inline methods for us,
 * while the JIT only sees one method and has to inline things itself.
 */
/* Used by LLVM AOT */
#define LLVM_AOT_INLINE_LENGTH_LIMIT 30
/* Used by the LLVM JIT */
#define LLVM_JIT_INLINE_LENGTH_LIMIT 100
static const gboolean debug_tailcall = FALSE; // logging
static const gboolean debug_tailcall_try_all = FALSE; // consider any call followed by ret
gboolean
mono_tailcall_print_enabled (void)
{
return debug_tailcall || MONO_TRACE_IS_TRACED (G_LOG_LEVEL_DEBUG, MONO_TRACE_TAILCALL);
}
void
mono_tailcall_print (const char *format, ...)
{
if (!mono_tailcall_print_enabled ())
return;
va_list args;
va_start (args, format);
g_printv (format, args);
va_end (args);
}
/* These have 'cfg' as an implicit argument */
#define INLINE_FAILURE(msg) do { \
if ((cfg->method != cfg->current_method) && (cfg->current_method->wrapper_type == MONO_WRAPPER_NONE)) { \
inline_failure (cfg, msg); \
goto exception_exit; \
} \
} while (0)
#define CHECK_CFG_EXCEPTION do {\
if (cfg->exception_type != MONO_EXCEPTION_NONE) \
goto exception_exit; \
} while (0)
#define FIELD_ACCESS_FAILURE(method, field) do { \
field_access_failure ((cfg), (method), (field)); \
goto exception_exit; \
} while (0)
#define GENERIC_SHARING_FAILURE(opcode) do { \
if (cfg->gshared) { \
gshared_failure (cfg, opcode, __FILE__, __LINE__); \
goto exception_exit; \
} \
} while (0)
#define GSHAREDVT_FAILURE(opcode) do { \
if (cfg->gsharedvt) { \
gsharedvt_failure (cfg, opcode, __FILE__, __LINE__); \
goto exception_exit; \
} \
} while (0)
#define OUT_OF_MEMORY_FAILURE do { \
mono_cfg_set_exception (cfg, MONO_EXCEPTION_MONO_ERROR); \
mono_error_set_out_of_memory (cfg->error, ""); \
goto exception_exit; \
} while (0)
#define DISABLE_AOT(cfg) do { \
if ((cfg)->verbose_level >= 2) \
printf ("AOT disabled: %s:%d\n", __FILE__, __LINE__); \
(cfg)->disable_aot = TRUE; \
} while (0)
#define LOAD_ERROR do { \
break_on_unverified (); \
mono_cfg_set_exception (cfg, MONO_EXCEPTION_TYPE_LOAD); \
goto exception_exit; \
} while (0)
#define TYPE_LOAD_ERROR(klass) do { \
cfg->exception_ptr = klass; \
LOAD_ERROR; \
} while (0)
#define CHECK_CFG_ERROR do {\
if (!is_ok (cfg->error)) { \
mono_cfg_set_exception (cfg, MONO_EXCEPTION_MONO_ERROR); \
goto mono_error_exit; \
} \
} while (0)
int mono_op_to_op_imm (int opcode);
int mono_op_to_op_imm_noemul (int opcode);
static int inline_method (MonoCompile *cfg, MonoMethod *cmethod, MonoMethodSignature *fsig, MonoInst **sp,
guchar *ip, guint real_offset, gboolean inline_always, gboolean *is_empty);
static MonoInst*
convert_value (MonoCompile *cfg, MonoType *type, MonoInst *ins);
/* helper methods signatures */
/* type loading helpers */
static GENERATE_GET_CLASS_WITH_CACHE (iequatable, "System", "IEquatable`1")
static GENERATE_GET_CLASS_WITH_CACHE (geqcomparer, "System.Collections.Generic", "GenericEqualityComparer`1");
/*
* Instruction metadata
*/
#ifdef MINI_OP
#undef MINI_OP
#endif
#ifdef MINI_OP3
#undef MINI_OP3
#endif
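/*
 * mini-ops.h is included multiple times below with different MINI_OP/MINI_OP3
 * expansions (the X-macro technique): once to build the per-opcode operand
 * info string mini_ins_info, and once to build mini_ins_sreg_counts.
 */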
#define MINI_OP(a,b,dest,src1,src2) dest, src1, src2, ' ',
#define MINI_OP3(a,b,dest,src1,src2,src3) dest, src1, src2, src3,
#define NONE ' '
#define IREG 'i'
#define FREG 'f'
#define VREG 'v'
#define XREG 'x'
#if SIZEOF_REGISTER == 8 && SIZEOF_REGISTER == TARGET_SIZEOF_VOID_P
#define LREG IREG
#else
#define LREG 'l'
#endif
/* keep in sync with the enum in mini.h */
const char
mini_ins_info[] = {
#include "mini-ops.h"
};
#undef MINI_OP
#undef MINI_OP3
#define MINI_OP(a,b,dest,src1,src2) ((src2) != NONE ? 2 : ((src1) != NONE ? 1 : 0)),
#define MINI_OP3(a,b,dest,src1,src2,src3) ((src3) != NONE ? 3 : ((src2) != NONE ? 2 : ((src1) != NONE ? 1 : 0))),
/*
* This should contain the index of the last sreg + 1. This is not the same
* as the number of sregs for opcodes like IA64_CMP_EQ_IMM.
*/
const gint8 mini_ins_sreg_counts[] = {
#include "mini-ops.h"
};
#undef MINI_OP
#undef MINI_OP3
guint32
mono_alloc_ireg (MonoCompile *cfg)
{
return alloc_ireg (cfg);
}
guint32
mono_alloc_lreg (MonoCompile *cfg)
{
return alloc_lreg (cfg);
}
guint32
mono_alloc_freg (MonoCompile *cfg)
{
return alloc_freg (cfg);
}
guint32
mono_alloc_preg (MonoCompile *cfg)
{
return alloc_preg (cfg);
}
guint32
mono_alloc_dreg (MonoCompile *cfg, MonoStackType stack_type)
{
return alloc_dreg (cfg, stack_type);
}
/*
* mono_alloc_ireg_ref:
*
* Allocate an IREG, and mark it as holding a GC ref.
*/
guint32
mono_alloc_ireg_ref (MonoCompile *cfg)
{
return alloc_ireg_ref (cfg);
}
/*
* mono_alloc_ireg_mp:
*
* Allocate an IREG, and mark it as holding a managed pointer.
*/
guint32
mono_alloc_ireg_mp (MonoCompile *cfg)
{
return alloc_ireg_mp (cfg);
}
/*
* mono_alloc_ireg_copy:
*
* Allocate an IREG with the same GC type as VREG.
*/
guint32
mono_alloc_ireg_copy (MonoCompile *cfg, guint32 vreg)
{
if (vreg_is_ref (cfg, vreg))
return alloc_ireg_ref (cfg);
else if (vreg_is_mp (cfg, vreg))
return alloc_ireg_mp (cfg);
else
return alloc_ireg (cfg);
}
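/*
 * mono_type_to_regmove:
 *
 * Return the move opcode needed to copy a value of type TYPE between two
 * vregs, e.g. OP_MOVE for integer/pointer types, OP_FMOVE for R8 and
 * OP_VMOVE for vtypes.
 */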
guint
mono_type_to_regmove (MonoCompile *cfg, MonoType *type)
{
if (m_type_is_byref (type))
return OP_MOVE;
type = mini_get_underlying_type (type);
handle_enum:
switch (type->type) {
case MONO_TYPE_I1:
case MONO_TYPE_U1:
return OP_MOVE;
case MONO_TYPE_I2:
case MONO_TYPE_U2:
return OP_MOVE;
case MONO_TYPE_I4:
case MONO_TYPE_U4:
return OP_MOVE;
case MONO_TYPE_I:
case MONO_TYPE_U:
case MONO_TYPE_PTR:
case MONO_TYPE_FNPTR:
return OP_MOVE;
case MONO_TYPE_CLASS:
case MONO_TYPE_STRING:
case MONO_TYPE_OBJECT:
case MONO_TYPE_SZARRAY:
case MONO_TYPE_ARRAY:
return OP_MOVE;
case MONO_TYPE_I8:
case MONO_TYPE_U8:
#if SIZEOF_REGISTER == 8
return OP_MOVE;
#else
return OP_LMOVE;
#endif
case MONO_TYPE_R4:
return cfg->r4fp ? OP_RMOVE : OP_FMOVE;
case MONO_TYPE_R8:
return OP_FMOVE;
case MONO_TYPE_VALUETYPE:
if (m_class_is_enumtype (type->data.klass)) {
type = mono_class_enum_basetype_internal (type->data.klass);
goto handle_enum;
}
if (MONO_CLASS_IS_SIMD (cfg, mono_class_from_mono_type_internal (type)))
return OP_XMOVE;
return OP_VMOVE;
case MONO_TYPE_TYPEDBYREF:
return OP_VMOVE;
case MONO_TYPE_GENERICINST:
if (MONO_CLASS_IS_SIMD (cfg, mono_class_from_mono_type_internal (type)))
return OP_XMOVE;
type = m_class_get_byval_arg (type->data.generic_class->container_class);
goto handle_enum;
case MONO_TYPE_VAR:
case MONO_TYPE_MVAR:
g_assert (cfg->gshared);
if (mini_type_var_is_vt (type))
return OP_VMOVE;
else
return mono_type_to_regmove (cfg, mini_get_underlying_type (type));
default:
g_error ("unknown type 0x%02x in type_to_regstore", type->type);
}
return -1;
}
void
mono_print_bb (MonoBasicBlock *bb, const char *msg)
{
int i;
MonoInst *tree;
GString *str = g_string_new ("");
g_string_append_printf (str, "%s %d: [IN: ", msg, bb->block_num);
for (i = 0; i < bb->in_count; ++i)
g_string_append_printf (str, " BB%d(%d)", bb->in_bb [i]->block_num, bb->in_bb [i]->dfn);
g_string_append_printf (str, ", OUT: ");
for (i = 0; i < bb->out_count; ++i)
g_string_append_printf (str, " BB%d(%d)", bb->out_bb [i]->block_num, bb->out_bb [i]->dfn);
g_string_append_printf (str, " ]\n");
g_print ("%s", str->str);
g_string_free (str, TRUE);
for (tree = bb->code; tree; tree = tree->next)
mono_print_ins_index (-1, tree);
}
static MONO_NEVER_INLINE gboolean
break_on_unverified (void)
{
if (mini_debug_options.break_on_unverified) {
G_BREAKPOINT ();
return TRUE;
}
return FALSE;
}
static void
clear_cfg_error (MonoCompile *cfg)
{
mono_error_cleanup (cfg->error);
error_init (cfg->error);
}
static MONO_NEVER_INLINE void
field_access_failure (MonoCompile *cfg, MonoMethod *method, MonoClassField *field)
{
char *method_fname = mono_method_full_name (method, TRUE);
char *field_fname = mono_field_full_name (field);
mono_cfg_set_exception (cfg, MONO_EXCEPTION_MONO_ERROR);
mono_error_set_generic_error (cfg->error, "System", "FieldAccessException", "Field `%s' is inaccessible from method `%s'\n", field_fname, method_fname);
g_free (method_fname);
g_free (field_fname);
}
static MONO_NEVER_INLINE void
inline_failure (MonoCompile *cfg, const char *msg)
{
if (cfg->verbose_level >= 2)
printf ("inline failed: %s\n", msg);
mono_cfg_set_exception (cfg, MONO_EXCEPTION_INLINE_FAILED);
}
static MONO_NEVER_INLINE void
gshared_failure (MonoCompile *cfg, int opcode, const char *file, int line)
{
if (cfg->verbose_level > 2)
printf ("sharing failed for method %s.%s.%s/%d opcode %s line %d\n", m_class_get_name_space (cfg->current_method->klass), m_class_get_name (cfg->current_method->klass), cfg->current_method->name, cfg->current_method->signature->param_count, mono_opcode_name (opcode), line);
mono_cfg_set_exception (cfg, MONO_EXCEPTION_GENERIC_SHARING_FAILED);
}
static MONO_NEVER_INLINE void
gsharedvt_failure (MonoCompile *cfg, int opcode, const char *file, int line)
{
cfg->exception_message = g_strdup_printf ("gsharedvt failed for method %s.%s.%s/%d opcode %s %s:%d", m_class_get_name_space (cfg->current_method->klass), m_class_get_name (cfg->current_method->klass), cfg->current_method->name, cfg->current_method->signature->param_count, mono_opcode_name ((opcode)), file, line);
if (cfg->verbose_level >= 2)
printf ("%s\n", cfg->exception_message);
mono_cfg_set_exception (cfg, MONO_EXCEPTION_GENERIC_SHARING_FAILED);
}
void
mini_set_inline_failure (MonoCompile *cfg, const char *msg)
{
if (cfg->verbose_level >= 2)
printf ("inline failed: %s\n", msg);
mono_cfg_set_exception (cfg, MONO_EXCEPTION_INLINE_FAILED);
}
/*
 * When using gsharedvt, some instantiations might be verifiable and some might
 * not be, e.g.
 * foo<T> (int i) { ldarg.0; box T; }
 */
#define UNVERIFIED do { \
if (cfg->gsharedvt) { \
if (cfg->verbose_level > 2) \
printf ("gsharedvt method failed to verify, falling back to instantiation.\n"); \
mono_cfg_set_exception (cfg, MONO_EXCEPTION_GENERIC_SHARING_FAILED); \
goto exception_exit; \
} \
break_on_unverified (); \
goto unverified; \
} while (0)
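/*
 * GET_BBLOCK:
 *
 * Set TBLOCK to the bblock starting at the IL offset IP, creating and
 * registering it if it doesn't exist yet.
 */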
#define GET_BBLOCK(cfg,tblock,ip) do { \
(tblock) = cfg->cil_offset_to_bb [(ip) - cfg->cil_start]; \
if (!(tblock)) { \
if ((ip) >= end || (ip) < header->code) UNVERIFIED; \
NEW_BBLOCK (cfg, (tblock)); \
(tblock)->cil_code = (ip); \
ADD_BBLOCK (cfg, (tblock)); \
} \
} while (0)
/* Emit conversions so both operands of a binary opcode are of the same type */
static void
add_widen_op (MonoCompile *cfg, MonoInst *ins, MonoInst **arg1_ref, MonoInst **arg2_ref)
{
MonoInst *arg1 = *arg1_ref;
MonoInst *arg2 = *arg2_ref;
if (cfg->r4fp &&
((arg1->type == STACK_R4 && arg2->type == STACK_R8) ||
(arg1->type == STACK_R8 && arg2->type == STACK_R4))) {
MonoInst *conv;
/* Mixing r4/r8 is allowed by the spec */
if (arg1->type == STACK_R4) {
int dreg = alloc_freg (cfg);
EMIT_NEW_UNALU (cfg, conv, OP_RCONV_TO_R8, dreg, arg1->dreg);
conv->type = STACK_R8;
ins->sreg1 = dreg;
*arg1_ref = conv;
}
if (arg2->type == STACK_R4) {
int dreg = alloc_freg (cfg);
EMIT_NEW_UNALU (cfg, conv, OP_RCONV_TO_R8, dreg, arg2->dreg);
conv->type = STACK_R8;
ins->sreg2 = dreg;
*arg2_ref = conv;
}
}
#if SIZEOF_REGISTER == 8
/* FIXME: Need to add many more cases */
if ((arg1)->type == STACK_PTR && (arg2)->type == STACK_I4) {
MonoInst *widen;
int dr = alloc_preg (cfg);
EMIT_NEW_UNALU (cfg, widen, OP_SEXT_I4, dr, (arg2)->dreg);
(ins)->sreg2 = widen->dreg;
}
#endif
}
#define ADD_UNOP(op) do { \
MONO_INST_NEW (cfg, ins, (op)); \
sp--; \
ins->sreg1 = sp [0]->dreg; \
type_from_op (cfg, ins, sp [0], NULL); \
CHECK_TYPE (ins); \
(ins)->dreg = alloc_dreg ((cfg), (MonoStackType)(ins)->type); \
MONO_ADD_INS ((cfg)->cbb, (ins)); \
*sp++ = mono_decompose_opcode (cfg, ins); \
} while (0)
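/*
 * ADD_BINCOND:
 *
 * Emit a compare + conditional branch pair for the two values on top of the
 * stack, linking the current bblock to both the branch target and the
 * fallthrough bblock.
 */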
#define ADD_BINCOND(next_block) do { \
MonoInst *cmp; \
sp -= 2; \
MONO_INST_NEW(cfg, cmp, OP_COMPARE); \
cmp->sreg1 = sp [0]->dreg; \
cmp->sreg2 = sp [1]->dreg; \
add_widen_op (cfg, cmp, &sp [0], &sp [1]); \
type_from_op (cfg, cmp, sp [0], sp [1]); \
CHECK_TYPE (cmp); \
type_from_op (cfg, ins, sp [0], sp [1]); \
ins->inst_many_bb = (MonoBasicBlock **)mono_mempool_alloc (cfg->mempool, sizeof(gpointer)*2); \
GET_BBLOCK (cfg, tblock, target); \
link_bblock (cfg, cfg->cbb, tblock); \
ins->inst_true_bb = tblock; \
if ((next_block)) { \
link_bblock (cfg, cfg->cbb, (next_block)); \
ins->inst_false_bb = (next_block); \
start_new_bblock = 1; \
} else { \
GET_BBLOCK (cfg, tblock, next_ip); \
link_bblock (cfg, cfg->cbb, tblock); \
ins->inst_false_bb = tblock; \
start_new_bblock = 2; \
} \
if (sp != stack_start) { \
handle_stack_args (cfg, stack_start, sp - stack_start); \
CHECK_UNVERIFIABLE (cfg); \
} \
MONO_ADD_INS (cfg->cbb, cmp); \
MONO_ADD_INS (cfg->cbb, ins); \
} while (0)
/**
 * link_bblock:
 *
 * Link two basic blocks in the control flow graph: the 'from' argument is
 * the starting block and the 'to' argument is the block control flow
 * reaches after 'from'.
 */
static void
link_bblock (MonoCompile *cfg, MonoBasicBlock *from, MonoBasicBlock* to)
{
MonoBasicBlock **newa;
int i, found;
#if 0
if (from->cil_code) {
if (to->cil_code)
printf ("edge from IL%04x to IL_%04x\n", from->cil_code - cfg->cil_code, to->cil_code - cfg->cil_code);
else
printf ("edge from IL%04x to exit\n", from->cil_code - cfg->cil_code);
} else {
if (to->cil_code)
printf ("edge from entry to IL_%04x\n", to->cil_code - cfg->cil_code);
else
printf ("edge from entry to exit\n");
}
#endif
found = FALSE;
for (i = 0; i < from->out_count; ++i) {
if (to == from->out_bb [i]) {
found = TRUE;
break;
}
}
if (!found) {
newa = (MonoBasicBlock **)mono_mempool_alloc (cfg->mempool, sizeof (gpointer) * (from->out_count + 1));
for (i = 0; i < from->out_count; ++i) {
newa [i] = from->out_bb [i];
}
newa [i] = to;
from->out_count++;
from->out_bb = newa;
}
found = FALSE;
for (i = 0; i < to->in_count; ++i) {
if (from == to->in_bb [i]) {
found = TRUE;
break;
}
}
if (!found) {
newa = (MonoBasicBlock **)mono_mempool_alloc (cfg->mempool, sizeof (gpointer) * (to->in_count + 1));
for (i = 0; i < to->in_count; ++i) {
newa [i] = to->in_bb [i];
}
newa [i] = from;
to->in_count++;
to->in_bb = newa;
}
}
void
mono_link_bblock (MonoCompile *cfg, MonoBasicBlock *from, MonoBasicBlock* to)
{
link_bblock (cfg, from, to);
}
static void
mono_create_spvar_for_region (MonoCompile *cfg, int region);
static void
mark_bb_in_region (MonoCompile *cfg, guint region, uint32_t start, uint32_t end)
{
MonoBasicBlock *bb = cfg->cil_offset_to_bb [start];
	//start must exist in cil_offset_to_bb since these are IL offsets used by EH clauses, which get a GET_BBLOCK early.
g_assert (bb);
if (cfg->verbose_level > 1)
g_print ("FIRST BB for %d is BB_%d\n", start, bb->block_num);
for (; bb && bb->real_offset < end; bb = bb->next_bb) {
//no one claimed this bb, take it.
if (bb->region == -1) {
bb->region = region;
continue;
}
//current region is an early handler, bail
if ((bb->region & (0xf << 4)) != MONO_REGION_TRY) {
continue;
}
//current region is a try, only overwrite if new region is a handler
if ((region & (0xf << 4)) != MONO_REGION_TRY) {
bb->region = region;
}
}
if (cfg->spvars)
mono_create_spvar_for_region (cfg, region);
}
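/*
 * compute_bb_regions:
 *
 * Assign an EH region id to every bblock: the clause index is encoded in the
 * upper bits ((i + 1) << 8), with the region kind plus the clause flags in
 * the lower bits, matching the checks done in mark_bb_in_region ().
 */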
static void
compute_bb_regions (MonoCompile *cfg)
{
MonoBasicBlock *bb;
MonoMethodHeader *header = cfg->header;
int i;
for (bb = cfg->bb_entry; bb; bb = bb->next_bb)
bb->region = -1;
for (i = 0; i < header->num_clauses; ++i) {
MonoExceptionClause *clause = &header->clauses [i];
if (clause->flags == MONO_EXCEPTION_CLAUSE_FILTER)
mark_bb_in_region (cfg, ((i + 1) << 8) | MONO_REGION_FILTER | clause->flags, clause->data.filter_offset, clause->handler_offset);
guint handler_region;
if (clause->flags == MONO_EXCEPTION_CLAUSE_FINALLY)
handler_region = ((i + 1) << 8) | MONO_REGION_FINALLY | clause->flags;
else if (clause->flags == MONO_EXCEPTION_CLAUSE_FAULT)
handler_region = ((i + 1) << 8) | MONO_REGION_FAULT | clause->flags;
else
handler_region = ((i + 1) << 8) | MONO_REGION_CATCH | clause->flags;
mark_bb_in_region (cfg, handler_region, clause->handler_offset, clause->handler_offset + clause->handler_len);
mark_bb_in_region (cfg, ((i + 1) << 8) | clause->flags, clause->try_offset, clause->try_offset + clause->try_len);
}
if (cfg->verbose_level > 2) {
MonoBasicBlock *bb;
for (bb = cfg->bb_entry; bb; bb = bb->next_bb)
g_print ("REGION BB%d IL_%04x ID_%08X\n", bb->block_num, bb->real_offset, bb->region);
}
}
static gboolean
ip_in_finally_clause (MonoCompile *cfg, int offset)
{
MonoMethodHeader *header = cfg->header;
MonoExceptionClause *clause;
int i;
for (i = 0; i < header->num_clauses; ++i) {
clause = &header->clauses [i];
if (clause->flags != MONO_EXCEPTION_CLAUSE_FINALLY && clause->flags != MONO_EXCEPTION_CLAUSE_FAULT)
continue;
if (MONO_OFFSET_IN_HANDLER (clause, offset))
return TRUE;
}
return FALSE;
}
/* Find clauses between ip and target, from inner to outer */
static GList*
mono_find_leave_clauses (MonoCompile *cfg, guchar *ip, guchar *target)
{
MonoMethodHeader *header = cfg->header;
MonoExceptionClause *clause;
int i;
GList *res = NULL;
for (i = 0; i < header->num_clauses; ++i) {
clause = &header->clauses [i];
if (MONO_OFFSET_IN_CLAUSE (clause, (ip - header->code)) &&
(!MONO_OFFSET_IN_CLAUSE (clause, (target - header->code)))) {
MonoLeaveClause *leave = mono_mempool_alloc0 (cfg->mempool, sizeof (MonoLeaveClause));
leave->index = i;
leave->clause = clause;
res = g_list_append_mempool (cfg->mempool, res, leave);
}
}
return res;
}
static void
mono_create_spvar_for_region (MonoCompile *cfg, int region)
{
MonoInst *var;
var = (MonoInst *)g_hash_table_lookup (cfg->spvars, GINT_TO_POINTER (region));
if (var)
return;
var = mono_compile_create_var (cfg, mono_get_int_type (), OP_LOCAL);
/* prevent it from being register allocated */
var->flags |= MONO_INST_VOLATILE;
g_hash_table_insert (cfg->spvars, GINT_TO_POINTER (region), var);
}
MonoInst *
mono_find_exvar_for_offset (MonoCompile *cfg, int offset)
{
return (MonoInst *)g_hash_table_lookup (cfg->exvars, GINT_TO_POINTER (offset));
}
static MonoInst*
mono_create_exvar_for_offset (MonoCompile *cfg, int offset)
{
MonoInst *var;
var = (MonoInst *)g_hash_table_lookup (cfg->exvars, GINT_TO_POINTER (offset));
if (var)
return var;
var = mono_compile_create_var (cfg, mono_get_object_type (), OP_LOCAL);
/* prevent it from being register allocated */
var->flags |= MONO_INST_VOLATILE;
g_hash_table_insert (cfg->exvars, GINT_TO_POINTER (offset), var);
return var;
}
/*
* Returns the type used in the eval stack when @type is loaded.
* FIXME: return a MonoType/MonoClass for the byref and VALUETYPE cases.
*/
void
mini_type_to_eval_stack_type (MonoCompile *cfg, MonoType *type, MonoInst *inst)
{
MonoClass *klass;
type = mini_get_underlying_type (type);
inst->klass = klass = mono_class_from_mono_type_internal (type);
if (m_type_is_byref (type)) {
inst->type = STACK_MP;
return;
}
handle_enum:
switch (type->type) {
case MONO_TYPE_VOID:
inst->type = STACK_INV;
return;
case MONO_TYPE_I1:
case MONO_TYPE_U1:
case MONO_TYPE_I2:
case MONO_TYPE_U2:
case MONO_TYPE_I4:
case MONO_TYPE_U4:
inst->type = STACK_I4;
return;
case MONO_TYPE_I:
case MONO_TYPE_U:
case MONO_TYPE_PTR:
case MONO_TYPE_FNPTR:
inst->type = STACK_PTR;
return;
case MONO_TYPE_CLASS:
case MONO_TYPE_STRING:
case MONO_TYPE_OBJECT:
case MONO_TYPE_SZARRAY:
case MONO_TYPE_ARRAY:
inst->type = STACK_OBJ;
return;
case MONO_TYPE_I8:
case MONO_TYPE_U8:
inst->type = STACK_I8;
return;
case MONO_TYPE_R4:
inst->type = cfg->r4_stack_type;
break;
case MONO_TYPE_R8:
inst->type = STACK_R8;
return;
case MONO_TYPE_VALUETYPE:
if (m_class_is_enumtype (type->data.klass)) {
type = mono_class_enum_basetype_internal (type->data.klass);
goto handle_enum;
} else {
inst->klass = klass;
inst->type = STACK_VTYPE;
return;
}
case MONO_TYPE_TYPEDBYREF:
inst->klass = mono_defaults.typed_reference_class;
inst->type = STACK_VTYPE;
return;
case MONO_TYPE_GENERICINST:
type = m_class_get_byval_arg (type->data.generic_class->container_class);
goto handle_enum;
case MONO_TYPE_VAR:
case MONO_TYPE_MVAR:
g_assert (cfg->gshared);
if (mini_is_gsharedvt_type (type)) {
g_assert (cfg->gsharedvt);
inst->type = STACK_VTYPE;
} else {
mini_type_to_eval_stack_type (cfg, mini_get_underlying_type (type), inst);
}
return;
default:
g_error ("unknown type 0x%02x in eval stack type", type->type);
}
}
/*
* The following tables are used to quickly validate the IL code in type_from_op ().
*/
#define IF_P8(v) (SIZEOF_VOID_P == 8 ? v : STACK_INV)
#define IF_P8_I8 IF_P8(STACK_I8)
#define IF_P8_PTR IF_P8(STACK_PTR)
static const char
bin_num_table [STACK_MAX] [STACK_MAX] = {
{STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV},
{STACK_INV, STACK_I4, IF_P8_I8, STACK_PTR, STACK_INV, STACK_MP, STACK_INV, STACK_INV},
{STACK_INV, IF_P8_I8, STACK_I8, IF_P8_PTR, STACK_INV, STACK_INV, STACK_INV, STACK_INV},
{STACK_INV, STACK_PTR, IF_P8_PTR, STACK_PTR, STACK_INV, STACK_MP, STACK_INV, STACK_INV},
{STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_R8, STACK_INV, STACK_INV, STACK_INV, STACK_R8},
{STACK_INV, STACK_MP, STACK_INV, STACK_MP, STACK_INV, STACK_PTR, STACK_INV, STACK_INV},
{STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV},
{STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV},
{STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_R8, STACK_INV, STACK_INV, STACK_INV, STACK_R4}
};
static const char
neg_table [] = {
STACK_INV, STACK_I4, STACK_I8, STACK_PTR, STACK_R8, STACK_INV, STACK_INV, STACK_INV, STACK_R4
};
/* reduce the size of this table */
static const char
bin_int_table [STACK_MAX] [STACK_MAX] = {
{STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV},
{STACK_INV, STACK_I4, IF_P8_I8, STACK_PTR, STACK_INV, STACK_INV, STACK_INV, STACK_INV},
{STACK_INV, IF_P8_I8, STACK_I8, IF_P8_PTR, STACK_INV, STACK_INV, STACK_INV, STACK_INV},
{STACK_INV, STACK_PTR, IF_P8_PTR, STACK_PTR, STACK_INV, STACK_INV, STACK_INV, STACK_INV},
{STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV},
{STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV},
{STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV},
{STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV}
};
#define P1 (SIZEOF_VOID_P == 8)
static const char
bin_comp_table [STACK_MAX] [STACK_MAX] = {
/* Inv i L p F & O vt r4 */
{0},
{0, 1, 0, 1, 0, 0, 0, 0}, /* i, int32 */
{0, 0, 1,P1, 0, 0, 0, 0}, /* L, int64 */
{0, 1,P1, 1, 0, 2, 4, 0}, /* p, ptr */
{0, 0, 0, 0, 1, 0, 0, 0, 1}, /* F, R8 */
{0, 0, 0, 2, 0, 1, 0, 0}, /* &, managed pointer */
{0, 0, 0, 4, 0, 0, 3, 0}, /* O, reference */
{0, 0, 0, 0, 0, 0, 0, 0}, /* vt value type */
{0, 0, 0, 0, 1, 0, 0, 0, 1}, /* r, r4 */
};
#undef P1
/* reduce the size of this table */
static const char
shift_table [STACK_MAX] [STACK_MAX] = {
{STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV},
{STACK_INV, STACK_I4, STACK_INV, STACK_I4, STACK_INV, STACK_INV, STACK_INV, STACK_INV},
{STACK_INV, STACK_I8, STACK_INV, STACK_I8, STACK_INV, STACK_INV, STACK_INV, STACK_INV},
{STACK_INV, STACK_PTR, STACK_INV, STACK_PTR, STACK_INV, STACK_INV, STACK_INV, STACK_INV},
{STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV},
{STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV},
{STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV},
{STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV}
};
/*
* Tables to map from the non-specific opcode to the matching
* type-specific opcode.
*/
/* handles from CEE_ADD to CEE_SHR_UN (CEE_REM_UN for floats) */
static const guint16
binops_op_map [STACK_MAX] = {
0, OP_IADD-CEE_ADD, OP_LADD-CEE_ADD, OP_PADD-CEE_ADD, OP_FADD-CEE_ADD, OP_PADD-CEE_ADD, 0, 0, OP_RADD-CEE_ADD
};
/* handles from CEE_NEG to CEE_CONV_U8 */
static const guint16
unops_op_map [STACK_MAX] = {
0, OP_INEG-CEE_NEG, OP_LNEG-CEE_NEG, OP_PNEG-CEE_NEG, OP_FNEG-CEE_NEG, OP_PNEG-CEE_NEG, 0, 0, OP_RNEG-CEE_NEG
};
/* handles from CEE_CONV_U2 to CEE_SUB_OVF_UN */
static const guint16
ovfops_op_map [STACK_MAX] = {
0, OP_ICONV_TO_U2-CEE_CONV_U2, OP_LCONV_TO_U2-CEE_CONV_U2, OP_PCONV_TO_U2-CEE_CONV_U2, OP_FCONV_TO_U2-CEE_CONV_U2, OP_PCONV_TO_U2-CEE_CONV_U2, OP_PCONV_TO_U2-CEE_CONV_U2, 0, OP_RCONV_TO_U2-CEE_CONV_U2
};
/* handles from CEE_CONV_OVF_I1_UN to CEE_CONV_OVF_U_UN */
static const guint16
ovf2ops_op_map [STACK_MAX] = {
0, OP_ICONV_TO_OVF_I1_UN-CEE_CONV_OVF_I1_UN, OP_LCONV_TO_OVF_I1_UN-CEE_CONV_OVF_I1_UN, OP_PCONV_TO_OVF_I1_UN-CEE_CONV_OVF_I1_UN, OP_FCONV_TO_OVF_I1_UN-CEE_CONV_OVF_I1_UN, OP_PCONV_TO_OVF_I1_UN-CEE_CONV_OVF_I1_UN, 0, 0, OP_RCONV_TO_OVF_I1_UN-CEE_CONV_OVF_I1_UN
};
/* handles from CEE_CONV_OVF_I1 to CEE_CONV_OVF_U8 */
static const guint16
ovf3ops_op_map [STACK_MAX] = {
0, OP_ICONV_TO_OVF_I1-CEE_CONV_OVF_I1, OP_LCONV_TO_OVF_I1-CEE_CONV_OVF_I1, OP_PCONV_TO_OVF_I1-CEE_CONV_OVF_I1, OP_FCONV_TO_OVF_I1-CEE_CONV_OVF_I1, OP_PCONV_TO_OVF_I1-CEE_CONV_OVF_I1, 0, 0, OP_RCONV_TO_OVF_I1-CEE_CONV_OVF_I1
};
/* handles from CEE_BEQ to CEE_BLT_UN */
static const guint16
beqops_op_map [STACK_MAX] = {
0, OP_IBEQ-CEE_BEQ, OP_LBEQ-CEE_BEQ, OP_PBEQ-CEE_BEQ, OP_FBEQ-CEE_BEQ, OP_PBEQ-CEE_BEQ, OP_PBEQ-CEE_BEQ, 0, OP_FBEQ-CEE_BEQ
};
/* handles from CEE_CEQ to CEE_CLT_UN */
static const guint16
ceqops_op_map [STACK_MAX] = {
0, OP_ICEQ-OP_CEQ, OP_LCEQ-OP_CEQ, OP_PCEQ-OP_CEQ, OP_FCEQ-OP_CEQ, OP_PCEQ-OP_CEQ, OP_PCEQ-OP_CEQ, 0, OP_RCEQ-OP_CEQ
};
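/*
 * Each map above is indexed by a stack type; adding the looked up entry to
 * the generic opcode selects the type-specific variant, e.g. for a STACK_I4
 * operand MONO_CEE_ADD + binops_op_map [STACK_I4] yields OP_IADD.
 */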
/*
* Sets ins->type (the type on the eval stack) according to the
* type of the opcode and the arguments to it.
* Invalid IL code is marked by setting ins->type to the invalid value STACK_INV.
*
* FIXME: this function sets ins->type unconditionally in some cases, but
 * it should set it to invalid for some types (e.g. a conv.x on an object).
*/
static void
type_from_op (MonoCompile *cfg, MonoInst *ins, MonoInst *src1, MonoInst *src2)
{
switch (ins->opcode) {
/* binops */
case MONO_CEE_ADD:
case MONO_CEE_SUB:
case MONO_CEE_MUL:
case MONO_CEE_DIV:
case MONO_CEE_REM:
/* FIXME: check unverifiable args for STACK_MP */
ins->type = bin_num_table [src1->type] [src2->type];
ins->opcode += binops_op_map [ins->type];
break;
case MONO_CEE_DIV_UN:
case MONO_CEE_REM_UN:
case MONO_CEE_AND:
case MONO_CEE_OR:
case MONO_CEE_XOR:
ins->type = bin_int_table [src1->type] [src2->type];
ins->opcode += binops_op_map [ins->type];
break;
case MONO_CEE_SHL:
case MONO_CEE_SHR:
case MONO_CEE_SHR_UN:
ins->type = shift_table [src1->type] [src2->type];
ins->opcode += binops_op_map [ins->type];
break;
case OP_COMPARE:
case OP_LCOMPARE:
case OP_ICOMPARE:
ins->type = bin_comp_table [src1->type] [src2->type] ? STACK_I4: STACK_INV;
if ((src1->type == STACK_I8) || ((TARGET_SIZEOF_VOID_P == 8) && ((src1->type == STACK_PTR) || (src1->type == STACK_OBJ) || (src1->type == STACK_MP))))
ins->opcode = OP_LCOMPARE;
else if (src1->type == STACK_R4)
ins->opcode = OP_RCOMPARE;
else if (src1->type == STACK_R8)
ins->opcode = OP_FCOMPARE;
else
ins->opcode = OP_ICOMPARE;
break;
case OP_ICOMPARE_IMM:
ins->type = bin_comp_table [src1->type] [src1->type] ? STACK_I4 : STACK_INV;
if ((src1->type == STACK_I8) || ((TARGET_SIZEOF_VOID_P == 8) && ((src1->type == STACK_PTR) || (src1->type == STACK_OBJ) || (src1->type == STACK_MP))))
ins->opcode = OP_LCOMPARE_IMM;
break;
case MONO_CEE_BEQ:
case MONO_CEE_BGE:
case MONO_CEE_BGT:
case MONO_CEE_BLE:
case MONO_CEE_BLT:
case MONO_CEE_BNE_UN:
case MONO_CEE_BGE_UN:
case MONO_CEE_BGT_UN:
case MONO_CEE_BLE_UN:
case MONO_CEE_BLT_UN:
ins->opcode += beqops_op_map [src1->type];
break;
case OP_CEQ:
ins->type = bin_comp_table [src1->type] [src2->type] ? STACK_I4: STACK_INV;
ins->opcode += ceqops_op_map [src1->type];
break;
case OP_CGT:
case OP_CGT_UN:
case OP_CLT:
case OP_CLT_UN:
ins->type = (bin_comp_table [src1->type] [src2->type] & 1) ? STACK_I4: STACK_INV;
ins->opcode += ceqops_op_map [src1->type];
break;
/* unops */
case MONO_CEE_NEG:
ins->type = neg_table [src1->type];
ins->opcode += unops_op_map [ins->type];
break;
case MONO_CEE_NOT:
if (src1->type >= STACK_I4 && src1->type <= STACK_PTR)
ins->type = src1->type;
else
ins->type = STACK_INV;
ins->opcode += unops_op_map [ins->type];
break;
case MONO_CEE_CONV_I1:
case MONO_CEE_CONV_I2:
case MONO_CEE_CONV_I4:
case MONO_CEE_CONV_U4:
ins->type = STACK_I4;
ins->opcode += unops_op_map [src1->type];
break;
case MONO_CEE_CONV_R_UN:
ins->type = STACK_R8;
switch (src1->type) {
case STACK_I4:
case STACK_PTR:
ins->opcode = OP_ICONV_TO_R_UN;
break;
case STACK_I8:
ins->opcode = OP_LCONV_TO_R_UN;
break;
case STACK_R4:
ins->opcode = OP_RCONV_TO_R8;
break;
case STACK_R8:
ins->opcode = OP_FMOVE;
break;
}
break;
case MONO_CEE_CONV_OVF_I1:
case MONO_CEE_CONV_OVF_U1:
case MONO_CEE_CONV_OVF_I2:
case MONO_CEE_CONV_OVF_U2:
case MONO_CEE_CONV_OVF_I4:
case MONO_CEE_CONV_OVF_U4:
ins->type = STACK_I4;
ins->opcode += ovf3ops_op_map [src1->type];
break;
case MONO_CEE_CONV_OVF_I_UN:
case MONO_CEE_CONV_OVF_U_UN:
ins->type = STACK_PTR;
ins->opcode += ovf2ops_op_map [src1->type];
break;
case MONO_CEE_CONV_OVF_I1_UN:
case MONO_CEE_CONV_OVF_I2_UN:
case MONO_CEE_CONV_OVF_I4_UN:
case MONO_CEE_CONV_OVF_U1_UN:
case MONO_CEE_CONV_OVF_U2_UN:
case MONO_CEE_CONV_OVF_U4_UN:
ins->type = STACK_I4;
ins->opcode += ovf2ops_op_map [src1->type];
break;
case MONO_CEE_CONV_U:
ins->type = STACK_PTR;
switch (src1->type) {
case STACK_I4:
ins->opcode = OP_ICONV_TO_U;
break;
case STACK_PTR:
case STACK_MP:
case STACK_OBJ:
#if TARGET_SIZEOF_VOID_P == 8
ins->opcode = OP_LCONV_TO_U;
#else
ins->opcode = OP_MOVE;
#endif
break;
case STACK_I8:
ins->opcode = OP_LCONV_TO_U;
break;
case STACK_R8:
if (TARGET_SIZEOF_VOID_P == 8)
ins->opcode = OP_FCONV_TO_U8;
else
ins->opcode = OP_FCONV_TO_U4;
break;
case STACK_R4:
if (TARGET_SIZEOF_VOID_P == 8)
ins->opcode = OP_RCONV_TO_U8;
else
ins->opcode = OP_RCONV_TO_U4;
break;
}
break;
case MONO_CEE_CONV_I8:
case MONO_CEE_CONV_U8:
ins->type = STACK_I8;
ins->opcode += unops_op_map [src1->type];
break;
case MONO_CEE_CONV_OVF_I8:
case MONO_CEE_CONV_OVF_U8:
ins->type = STACK_I8;
ins->opcode += ovf3ops_op_map [src1->type];
break;
case MONO_CEE_CONV_OVF_U8_UN:
case MONO_CEE_CONV_OVF_I8_UN:
ins->type = STACK_I8;
ins->opcode += ovf2ops_op_map [src1->type];
break;
case MONO_CEE_CONV_R4:
ins->type = cfg->r4_stack_type;
ins->opcode += unops_op_map [src1->type];
break;
case MONO_CEE_CONV_R8:
ins->type = STACK_R8;
ins->opcode += unops_op_map [src1->type];
break;
case OP_CKFINITE:
ins->type = STACK_R8;
break;
case MONO_CEE_CONV_U2:
case MONO_CEE_CONV_U1:
ins->type = STACK_I4;
ins->opcode += ovfops_op_map [src1->type];
break;
case MONO_CEE_CONV_I:
case MONO_CEE_CONV_OVF_I:
case MONO_CEE_CONV_OVF_U:
ins->type = STACK_PTR;
ins->opcode += ovfops_op_map [src1->type];
break;
case MONO_CEE_ADD_OVF:
case MONO_CEE_ADD_OVF_UN:
case MONO_CEE_MUL_OVF:
case MONO_CEE_MUL_OVF_UN:
case MONO_CEE_SUB_OVF:
case MONO_CEE_SUB_OVF_UN:
ins->type = bin_num_table [src1->type] [src2->type];
ins->opcode += ovfops_op_map [src1->type];
if (ins->type == STACK_R8)
ins->type = STACK_INV;
break;
case OP_LOAD_MEMBASE:
ins->type = STACK_PTR;
break;
case OP_LOADI1_MEMBASE:
case OP_LOADU1_MEMBASE:
case OP_LOADI2_MEMBASE:
case OP_LOADU2_MEMBASE:
case OP_LOADI4_MEMBASE:
case OP_LOADU4_MEMBASE:
ins->type = STACK_PTR;
break;
case OP_LOADI8_MEMBASE:
ins->type = STACK_I8;
break;
case OP_LOADR4_MEMBASE:
ins->type = cfg->r4_stack_type;
break;
case OP_LOADR8_MEMBASE:
ins->type = STACK_R8;
break;
default:
g_error ("opcode 0x%04x not handled in type from op", ins->opcode);
break;
}
if (ins->type == STACK_MP) {
if (src1->type == STACK_MP)
ins->klass = src1->klass;
else
ins->klass = mono_defaults.object_class;
}
}
void
mini_type_from_op (MonoCompile *cfg, MonoInst *ins, MonoInst *src1, MonoInst *src2)
{
type_from_op (cfg, ins, src1, src2);
}
static MonoClass*
ldind_to_type (int op)
{
switch (op) {
case MONO_CEE_LDIND_I1: return mono_defaults.sbyte_class;
case MONO_CEE_LDIND_U1: return mono_defaults.byte_class;
case MONO_CEE_LDIND_I2: return mono_defaults.int16_class;
case MONO_CEE_LDIND_U2: return mono_defaults.uint16_class;
case MONO_CEE_LDIND_I4: return mono_defaults.int32_class;
case MONO_CEE_LDIND_U4: return mono_defaults.uint32_class;
case MONO_CEE_LDIND_I8: return mono_defaults.int64_class;
case MONO_CEE_LDIND_I: return mono_defaults.int_class;
case MONO_CEE_LDIND_R4: return mono_defaults.single_class;
case MONO_CEE_LDIND_R8: return mono_defaults.double_class;
case MONO_CEE_LDIND_REF:return mono_defaults.object_class; //FIXME we should try to return a more specific type
default: g_error ("Unknown ldind type %d", op);
}
}
static MonoClass*
stind_to_type (int op)
{
switch (op) {
case MONO_CEE_STIND_I1: return mono_defaults.sbyte_class;
case MONO_CEE_STIND_I2: return mono_defaults.int16_class;
case MONO_CEE_STIND_I4: return mono_defaults.int32_class;
case MONO_CEE_STIND_I8: return mono_defaults.int64_class;
case MONO_CEE_STIND_I: return mono_defaults.int_class;
case MONO_CEE_STIND_R4: return mono_defaults.single_class;
case MONO_CEE_STIND_R8: return mono_defaults.double_class;
case MONO_CEE_STIND_REF: return mono_defaults.object_class;
default: g_error ("Unknown stind type %d", op);
}
}
#if 0
static const char
param_table [STACK_MAX] [STACK_MAX] = {
{0},
};
static int
check_values_to_signature (MonoInst *args, MonoType *this_ins, MonoMethodSignature *sig)
{
int i;
if (sig->hasthis) {
switch (args->type) {
case STACK_I4:
case STACK_I8:
case STACK_R8:
case STACK_VTYPE:
case STACK_INV:
return 0;
}
args++;
}
for (i = 0; i < sig->param_count; ++i) {
switch (args [i].type) {
case STACK_INV:
return 0;
case STACK_MP:
			if (!m_type_is_byref (sig->params [i]))
return 0;
continue;
case STACK_OBJ:
if (m_type_is_byref (sig->params [i]))
return 0;
			switch (sig->params [i]->type) {
case MONO_TYPE_CLASS:
case MONO_TYPE_STRING:
case MONO_TYPE_OBJECT:
case MONO_TYPE_SZARRAY:
case MONO_TYPE_ARRAY:
break;
default:
return 0;
}
continue;
case STACK_R8:
if (m_type_is_byref (sig->params [i]))
return 0;
if (sig->params [i]->type != MONO_TYPE_R4 && sig->params [i]->type != MONO_TYPE_R8)
return 0;
continue;
case STACK_PTR:
case STACK_I4:
case STACK_I8:
case STACK_VTYPE:
break;
}
/*if (!param_table [args [i].type] [sig->params [i]->type])
return 0;*/
}
return 1;
}
#endif
/*
* The got_var contains the address of the Global Offset Table when AOT
* compiling.
*/
MonoInst *
mono_get_got_var (MonoCompile *cfg)
{
if (!cfg->compile_aot || !cfg->backend->need_got_var || cfg->llvm_only)
return NULL;
if (!cfg->got_var) {
cfg->got_var = mono_compile_create_var (cfg, mono_get_int_type (), OP_LOCAL);
}
return cfg->got_var;
}
static void
mono_create_rgctx_var (MonoCompile *cfg)
{
if (!cfg->rgctx_var) {
cfg->rgctx_var = mono_compile_create_var (cfg, mono_get_int_type (), OP_LOCAL);
/* force the var to be stack allocated */
if (!cfg->llvm_only)
cfg->rgctx_var->flags |= MONO_INST_VOLATILE;
}
}
static MonoInst *
mono_get_mrgctx_var (MonoCompile *cfg)
{
g_assert (cfg->gshared);
mono_create_rgctx_var (cfg);
return cfg->rgctx_var;
}
static MonoInst *
mono_get_vtable_var (MonoCompile *cfg)
{
g_assert (cfg->gshared);
/* The mrgctx and the vtable are stored in the same var */
mono_create_rgctx_var (cfg);
return cfg->rgctx_var;
}
static MonoType*
type_from_stack_type (MonoInst *ins) {
switch (ins->type) {
case STACK_I4: return mono_get_int32_type ();
case STACK_I8: return m_class_get_byval_arg (mono_defaults.int64_class);
case STACK_PTR: return mono_get_int_type ();
case STACK_R4: return m_class_get_byval_arg (mono_defaults.single_class);
case STACK_R8: return m_class_get_byval_arg (mono_defaults.double_class);
case STACK_MP:
return m_class_get_this_arg (ins->klass);
case STACK_OBJ: return mono_get_object_type ();
case STACK_VTYPE: return m_class_get_byval_arg (ins->klass);
default:
g_error ("stack type %d to monotype not handled\n", ins->type);
}
return NULL;
}
MonoStackType
mini_type_to_stack_type (MonoCompile *cfg, MonoType *t)
{
t = mini_type_get_underlying_type (t);
switch (t->type) {
case MONO_TYPE_I1:
case MONO_TYPE_U1:
case MONO_TYPE_I2:
case MONO_TYPE_U2:
case MONO_TYPE_I4:
case MONO_TYPE_U4:
return STACK_I4;
case MONO_TYPE_I:
case MONO_TYPE_U:
case MONO_TYPE_PTR:
case MONO_TYPE_FNPTR:
return STACK_PTR;
case MONO_TYPE_CLASS:
case MONO_TYPE_STRING:
case MONO_TYPE_OBJECT:
case MONO_TYPE_SZARRAY:
case MONO_TYPE_ARRAY:
return STACK_OBJ;
case MONO_TYPE_I8:
case MONO_TYPE_U8:
return STACK_I8;
case MONO_TYPE_R4:
return (MonoStackType)cfg->r4_stack_type;
case MONO_TYPE_R8:
return STACK_R8;
case MONO_TYPE_VALUETYPE:
case MONO_TYPE_TYPEDBYREF:
return STACK_VTYPE;
case MONO_TYPE_GENERICINST:
if (mono_type_generic_inst_is_valuetype (t))
return STACK_VTYPE;
else
return STACK_OBJ;
break;
default:
g_assert_not_reached ();
}
return (MonoStackType)-1;
}
static MonoClass*
array_access_to_klass (int opcode)
{
switch (opcode) {
case MONO_CEE_LDELEM_U1:
return mono_defaults.byte_class;
case MONO_CEE_LDELEM_U2:
return mono_defaults.uint16_class;
case MONO_CEE_LDELEM_I:
case MONO_CEE_STELEM_I:
return mono_defaults.int_class;
case MONO_CEE_LDELEM_I1:
case MONO_CEE_STELEM_I1:
return mono_defaults.sbyte_class;
case MONO_CEE_LDELEM_I2:
case MONO_CEE_STELEM_I2:
return mono_defaults.int16_class;
case MONO_CEE_LDELEM_I4:
case MONO_CEE_STELEM_I4:
return mono_defaults.int32_class;
case MONO_CEE_LDELEM_U4:
return mono_defaults.uint32_class;
case MONO_CEE_LDELEM_I8:
case MONO_CEE_STELEM_I8:
return mono_defaults.int64_class;
case MONO_CEE_LDELEM_R4:
case MONO_CEE_STELEM_R4:
return mono_defaults.single_class;
case MONO_CEE_LDELEM_R8:
case MONO_CEE_STELEM_R8:
return mono_defaults.double_class;
case MONO_CEE_LDELEM_REF:
case MONO_CEE_STELEM_REF:
return mono_defaults.object_class;
default:
g_assert_not_reached ();
}
return NULL;
}
/*
 * mono_compile_get_interface_var:
 *
 * Return a local variable used to pass the value in stack slot SLOT across
 * basic block boundaries; temps of the same stack type and slot are shared
 * when possible.
 */
static MonoInst *
mono_compile_get_interface_var (MonoCompile *cfg, int slot, MonoInst *ins)
{
MonoInst *res;
int pos, vnum;
MonoType *type;
type = type_from_stack_type (ins);
/* inlining can result in deeper stacks */
if (cfg->inline_depth || slot >= cfg->header->max_stack)
return mono_compile_create_var (cfg, type, OP_LOCAL);
pos = ins->type - 1 + slot * STACK_MAX;
switch (ins->type) {
case STACK_I4:
case STACK_I8:
case STACK_R8:
case STACK_PTR:
case STACK_MP:
case STACK_OBJ:
if ((vnum = cfg->intvars [pos]))
return cfg->varinfo [vnum];
res = mono_compile_create_var (cfg, type, OP_LOCAL);
cfg->intvars [pos] = res->inst_c0;
break;
default:
res = mono_compile_create_var (cfg, type, OP_LOCAL);
}
return res;
}
static void
mono_save_token_info (MonoCompile *cfg, MonoImage *image, guint32 token, gpointer key)
{
/*
* Don't use this if a generic_context is set, since that means AOT can't
* look up the method using just the image+token.
* table == 0 means this is a reference made from a wrapper.
*/
if (cfg->compile_aot && !cfg->generic_context && (mono_metadata_token_table (token) > 0)) {
MonoJumpInfoToken *jump_info_token = (MonoJumpInfoToken *)mono_mempool_alloc0 (cfg->mempool, sizeof (MonoJumpInfoToken));
jump_info_token->image = image;
jump_info_token->token = token;
g_hash_table_insert (cfg->token_info_hash, key, jump_info_token);
}
}
/*
* This function is called to handle items that are left on the evaluation stack
* at basic block boundaries. What happens is that we save the values to local variables
* and we reload them later when first entering the target basic block (with the
* handle_loaded_temps () function).
 * A single join point will use the same variables (stored in the array bb->out_stack or
 * bb->in_stack, depending on whether the basic block is before or after the join point).
*
* This function needs to be called _before_ emitting the last instruction of
* the bb (i.e. before emitting a branch).
* If the stack merge fails at a join point, cfg->unverifiable is set.
*/
static void
handle_stack_args (MonoCompile *cfg, MonoInst **sp, int count)
{
int i, bindex;
MonoBasicBlock *bb = cfg->cbb;
MonoBasicBlock *outb;
MonoInst *inst, **locals;
gboolean found;
if (!count)
return;
if (cfg->verbose_level > 3)
printf ("%d item(s) on exit from B%d\n", count, bb->block_num);
if (!bb->out_scount) {
bb->out_scount = count;
//printf ("bblock %d has out:", bb->block_num);
found = FALSE;
for (i = 0; i < bb->out_count; ++i) {
outb = bb->out_bb [i];
/* exception handlers are linked, but they should not be considered for stack args */
if (outb->flags & BB_EXCEPTION_HANDLER)
continue;
//printf (" %d", outb->block_num);
if (outb->in_stack) {
found = TRUE;
bb->out_stack = outb->in_stack;
break;
}
}
//printf ("\n");
if (!found) {
bb->out_stack = (MonoInst **)mono_mempool_alloc (cfg->mempool, sizeof (MonoInst*) * count);
for (i = 0; i < count; ++i) {
/*
			 * try to reuse temps already allocated for this purpose, if they occupy the same
* stack slot and if they are of the same type.
* This won't cause conflicts since if 'local' is used to
* store one of the values in the in_stack of a bblock, then
* the same variable will be used for the same outgoing stack
* slot as well.
* This doesn't work when inlining methods, since the bblocks
* in the inlined methods do not inherit their in_stack from
* the bblock they are inlined to. See bug #58863 for an
* example.
*/
bb->out_stack [i] = mono_compile_get_interface_var (cfg, i, sp [i]);
}
}
}
for (i = 0; i < bb->out_count; ++i) {
outb = bb->out_bb [i];
/* exception handlers are linked, but they should not be considered for stack args */
if (outb->flags & BB_EXCEPTION_HANDLER)
continue;
if (outb->in_scount) {
if (outb->in_scount != bb->out_scount) {
cfg->unverifiable = TRUE;
return;
}
continue; /* check they are the same locals */
}
outb->in_scount = count;
outb->in_stack = bb->out_stack;
}
locals = bb->out_stack;
cfg->cbb = bb;
for (i = 0; i < count; ++i) {
sp [i] = convert_value (cfg, locals [i]->inst_vtype, sp [i]);
EMIT_NEW_TEMPSTORE (cfg, inst, locals [i]->inst_c0, sp [i]);
inst->cil_code = sp [i]->cil_code;
sp [i] = locals [i];
if (cfg->verbose_level > 3)
printf ("storing %d to temp %d\n", i, (int)locals [i]->inst_c0);
}
/*
* It is possible that the out bblocks already have in_stack assigned, and
* the in_stacks differ. In this case, we will store to all the different
* in_stacks.
*/
found = TRUE;
bindex = 0;
while (found) {
/* Find a bblock which has a different in_stack */
found = FALSE;
while (bindex < bb->out_count) {
outb = bb->out_bb [bindex];
/* exception handlers are linked, but they should not be considered for stack args */
if (outb->flags & BB_EXCEPTION_HANDLER) {
bindex++;
continue;
}
if (outb->in_stack != locals) {
for (i = 0; i < count; ++i) {
sp [i] = convert_value (cfg, outb->in_stack [i]->inst_vtype, sp [i]);
EMIT_NEW_TEMPSTORE (cfg, inst, outb->in_stack [i]->inst_c0, sp [i]);
inst->cil_code = sp [i]->cil_code;
sp [i] = locals [i];
if (cfg->verbose_level > 3)
printf ("storing %d to temp %d\n", i, (int)outb->in_stack [i]->inst_c0);
}
locals = outb->in_stack;
found = TRUE;
break;
}
bindex ++;
}
}
}
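/*
 * mini_emit_runtime_constant:
 *
 * Emit IR to load the runtime constant described by PATCH_TYPE/DATA: an AOT
 * constant when AOT compiling, otherwise the patch target is resolved right
 * away and emitted as a PCONST.
 */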
MonoInst*
mini_emit_runtime_constant (MonoCompile *cfg, MonoJumpInfoType patch_type, gpointer data)
{
MonoInst *ins;
if (cfg->compile_aot) {
MONO_DISABLE_WARNING (4306) // 'type cast': conversion from 'MonoJumpInfoType' to 'MonoInst *' of greater size
EMIT_NEW_AOTCONST (cfg, ins, patch_type, data);
MONO_RESTORE_WARNING
} else {
MonoJumpInfo ji;
gpointer target;
ERROR_DECL (error);
ji.type = patch_type;
ji.data.target = data;
target = mono_resolve_patch_target_ext (cfg->mem_manager, NULL, NULL, &ji, FALSE, error);
mono_error_assert_ok (error);
EMIT_NEW_PCONST (cfg, ins, target);
}
return ins;
}
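/*
 * mono_create_fast_tls_getter:
 *
 * Return an OP_TLS_GET instruction loading KEY directly from thread local
 * storage, or NULL if fast TLS can't be used (AOT compilation, no static
 * tls offset for KEY, or no arch support).
 */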
static MonoInst*
mono_create_fast_tls_getter (MonoCompile *cfg, MonoTlsKey key)
{
int tls_offset = mono_tls_get_tls_offset (key);
if (cfg->compile_aot)
return NULL;
if (tls_offset != -1 && mono_arch_have_fast_tls ()) {
MonoInst *ins;
MONO_INST_NEW (cfg, ins, OP_TLS_GET);
ins->dreg = mono_alloc_preg (cfg);
ins->inst_offset = tls_offset;
return ins;
}
return NULL;
}
static MonoInst*
mono_create_tls_get (MonoCompile *cfg, MonoTlsKey key)
{
MonoInst *fast_tls = NULL;
if (!mini_debug_options.use_fallback_tls)
fast_tls = mono_create_fast_tls_getter (cfg, key);
if (fast_tls) {
MONO_ADD_INS (cfg->cbb, fast_tls);
return fast_tls;
}
const MonoJitICallId jit_icall_id = mono_get_tls_key_to_jit_icall_id (key);
if (cfg->compile_aot && !cfg->llvm_only) {
MonoInst *addr;
/*
* tls getters are critical pieces of code and we don't want to resolve them
* through the standard plt/tramp mechanism since we might expose ourselves
* to crashes and infinite recursions.
		 * Hence the NOCALL part of MONO_PATCH_INFO_JIT_ICALL_ADDR_NOCALL, and FALSE for is_plt_patch.
*/
EMIT_NEW_AOTCONST (cfg, addr, MONO_PATCH_INFO_JIT_ICALL_ADDR_NOCALL, GUINT_TO_POINTER (jit_icall_id));
return mini_emit_calli (cfg, mono_icall_sig_ptr, NULL, addr, NULL, NULL);
} else {
return mono_emit_jit_icall_id (cfg, jit_icall_id, NULL);
}
}
/*
* emit_push_lmf:
*
* Emit IR to push the current LMF onto the LMF stack.
*/
static void
emit_push_lmf (MonoCompile *cfg)
{
/*
* Emit IR to push the LMF:
* lmf_addr = <lmf_addr from tls>
* lmf->lmf_addr = lmf_addr
* lmf->prev_lmf = *lmf_addr
* *lmf_addr = lmf
*/
MonoInst *ins, *lmf_ins;
if (!cfg->lmf_ir)
return;
int lmf_reg, prev_lmf_reg;
/*
* Store lmf_addr in a variable, so it can be allocated to a global register.
*/
if (!cfg->lmf_addr_var)
cfg->lmf_addr_var = mono_compile_create_var (cfg, mono_get_int_type (), OP_LOCAL);
if (!cfg->lmf_var) {
MonoInst *lmf_var = mono_compile_create_var (cfg, mono_get_int_type (), OP_LOCAL);
lmf_var->flags |= MONO_INST_VOLATILE;
lmf_var->flags |= MONO_INST_LMF;
cfg->lmf_var = lmf_var;
}
lmf_ins = mono_create_tls_get (cfg, TLS_KEY_LMF_ADDR);
g_assert (lmf_ins);
lmf_ins->dreg = cfg->lmf_addr_var->dreg;
EMIT_NEW_VARLOADA (cfg, ins, cfg->lmf_var, NULL);
lmf_reg = ins->dreg;
prev_lmf_reg = alloc_preg (cfg);
/* Save previous_lmf */
EMIT_NEW_LOAD_MEMBASE (cfg, ins, OP_LOAD_MEMBASE, prev_lmf_reg, cfg->lmf_addr_var->dreg, 0);
if (cfg->deopt)
/* Mark this as an LMFExt */
EMIT_NEW_BIALU_IMM (cfg, ins, OP_POR_IMM, prev_lmf_reg, prev_lmf_reg, 2);
EMIT_NEW_STORE_MEMBASE (cfg, ins, OP_STORE_MEMBASE_REG, lmf_reg, MONO_STRUCT_OFFSET (MonoLMF, previous_lmf), prev_lmf_reg);
/* Set new lmf */
EMIT_NEW_STORE_MEMBASE (cfg, ins, OP_STORE_MEMBASE_REG, cfg->lmf_addr_var->dreg, 0, lmf_reg);
}
/*
* emit_pop_lmf:
*
* Emit IR to pop the current LMF from the LMF stack.
*/
static void
emit_pop_lmf (MonoCompile *cfg)
{
int lmf_reg, lmf_addr_reg;
MonoInst *ins;
if (!cfg->lmf_ir)
return;
EMIT_NEW_VARLOADA (cfg, ins, cfg->lmf_var, NULL);
lmf_reg = ins->dreg;
int prev_lmf_reg;
/*
* Emit IR to pop the LMF:
* *(lmf->lmf_addr) = lmf->prev_lmf
*/
/* This could be called before emit_push_lmf () */
if (!cfg->lmf_addr_var)
cfg->lmf_addr_var = mono_compile_create_var (cfg, mono_get_int_type (), OP_LOCAL);
lmf_addr_reg = cfg->lmf_addr_var->dreg;
prev_lmf_reg = alloc_preg (cfg);
EMIT_NEW_LOAD_MEMBASE (cfg, ins, OP_LOAD_MEMBASE, prev_lmf_reg, lmf_reg, MONO_STRUCT_OFFSET (MonoLMF, previous_lmf));
if (cfg->deopt)
/* Clear out the bit set by push_lmf () to mark this as LMFExt */
EMIT_NEW_BIALU_IMM (cfg, ins, OP_PXOR_IMM, prev_lmf_reg, prev_lmf_reg, 2);
EMIT_NEW_STORE_MEMBASE (cfg, ins, OP_STORE_MEMBASE_REG, lmf_addr_reg, 0, prev_lmf_reg);
}
/*
* target_type_is_incompatible:
* @cfg: MonoCompile context
*
* Check that the item @arg on the evaluation stack can be stored
* in the target type (can be a local, or field, etc).
* The cfg arg can be used to check if we need verification or just
* validity checks.
*
* Returns: non-0 value if arg can't be stored on a target.
*/
static int
target_type_is_incompatible (MonoCompile *cfg, MonoType *target, MonoInst *arg)
{
MonoType *simple_type;
MonoClass *klass;
if (m_type_is_byref (target)) {
/* FIXME: check that the pointed to types match */
if (arg->type == STACK_MP) {
/* This is needed to handle gshared types + ldaddr. We lower the types so we can handle enums and other typedef-like types. */
MonoClass *target_class_lowered = mono_class_from_mono_type_internal (mini_get_underlying_type (m_class_get_byval_arg (mono_class_from_mono_type_internal (target))));
MonoClass *source_class_lowered = mono_class_from_mono_type_internal (mini_get_underlying_type (m_class_get_byval_arg (arg->klass)));
/* if the target is native int& or X* or same type */
if (target->type == MONO_TYPE_I || target->type == MONO_TYPE_PTR || target_class_lowered == source_class_lowered)
return 0;
		/* Both are primitive type byrefs and the source points to a larger type than the destination */
if (MONO_TYPE_IS_PRIMITIVE_SCALAR (m_class_get_byval_arg (target_class_lowered)) && MONO_TYPE_IS_PRIMITIVE_SCALAR (m_class_get_byval_arg (source_class_lowered)) &&
mono_class_instance_size (target_class_lowered) <= mono_class_instance_size (source_class_lowered))
return 0;
return 1;
}
if (arg->type == STACK_PTR)
return 0;
return 1;
}
simple_type = mini_get_underlying_type (target);
switch (simple_type->type) {
case MONO_TYPE_VOID:
return 1;
case MONO_TYPE_I1:
case MONO_TYPE_U1:
case MONO_TYPE_I2:
case MONO_TYPE_U2:
case MONO_TYPE_I4:
case MONO_TYPE_U4:
if (arg->type != STACK_I4 && arg->type != STACK_PTR)
return 1;
return 0;
case MONO_TYPE_PTR:
/* STACK_MP is needed when setting pinned locals */
if (arg->type != STACK_I4 && arg->type != STACK_PTR && arg->type != STACK_MP)
#if SIZEOF_VOID_P == 8
if (arg->type != STACK_I8)
#endif
return 1;
return 0;
case MONO_TYPE_I:
case MONO_TYPE_U:
case MONO_TYPE_FNPTR:
		/*
		 * Some opcodes like ldloca return 'transient pointers', which can be
		 * stored in native int. (#688008).
		 */
if (arg->type != STACK_I4 && arg->type != STACK_PTR && arg->type != STACK_MP)
return 1;
return 0;
case MONO_TYPE_CLASS:
case MONO_TYPE_STRING:
case MONO_TYPE_OBJECT:
case MONO_TYPE_SZARRAY:
case MONO_TYPE_ARRAY:
if (arg->type != STACK_OBJ)
return 1;
/* FIXME: check type compatibility */
return 0;
case MONO_TYPE_I8:
case MONO_TYPE_U8:
if (arg->type != STACK_I8)
#if SIZEOF_VOID_P == 8
if (arg->type != STACK_PTR)
#endif
return 1;
return 0;
case MONO_TYPE_R4:
if (arg->type != cfg->r4_stack_type)
return 1;
return 0;
case MONO_TYPE_R8:
if (arg->type != STACK_R8)
return 1;
return 0;
case MONO_TYPE_VALUETYPE:
if (arg->type != STACK_VTYPE)
return 1;
klass = mono_class_from_mono_type_internal (simple_type);
if (klass != arg->klass)
return 1;
return 0;
case MONO_TYPE_TYPEDBYREF:
if (arg->type != STACK_VTYPE)
return 1;
klass = mono_class_from_mono_type_internal (simple_type);
if (klass != arg->klass)
return 1;
return 0;
case MONO_TYPE_GENERICINST:
if (mono_type_generic_inst_is_valuetype (simple_type)) {
MonoClass *target_class;
if (arg->type != STACK_VTYPE)
return 1;
klass = mono_class_from_mono_type_internal (simple_type);
target_class = mono_class_from_mono_type_internal (target);
			/* The second case is needed when doing partial sharing */
if (klass != arg->klass && target_class != arg->klass && target_class != mono_class_from_mono_type_internal (mini_get_underlying_type (m_class_get_byval_arg (arg->klass))))
return 1;
return 0;
} else {
if (arg->type != STACK_OBJ)
return 1;
/* FIXME: check type compatibility */
return 0;
}
case MONO_TYPE_VAR:
case MONO_TYPE_MVAR:
g_assert (cfg->gshared);
if (mini_type_var_is_vt (simple_type)) {
if (arg->type != STACK_VTYPE)
return 1;
} else {
if (arg->type != STACK_OBJ)
return 1;
}
return 0;
default:
g_error ("unknown type 0x%02x in target_type_is_incompatible", simple_type->type);
}
return 1;
}
/*
* convert_value:
*
 * Emit some implicit conversions which are not part of the .NET spec, but are allowed by MS.NET.
*/
static MonoInst*
convert_value (MonoCompile *cfg, MonoType *type, MonoInst *ins)
{
if (!cfg->r4fp)
return ins;
type = mini_get_underlying_type (type);
switch (type->type) {
case MONO_TYPE_R4:
if (ins->type == STACK_R8) {
int dreg = alloc_freg (cfg);
MonoInst *conv;
EMIT_NEW_UNALU (cfg, conv, OP_FCONV_TO_R4, dreg, ins->dreg);
conv->type = STACK_R4;
return conv;
}
break;
case MONO_TYPE_R8:
if (ins->type == STACK_R4) {
int dreg = alloc_freg (cfg);
MonoInst *conv;
EMIT_NEW_UNALU (cfg, conv, OP_RCONV_TO_R8, dreg, ins->dreg);
conv->type = STACK_R8;
return conv;
}
break;
default:
break;
}
return ins;
}
/*
* Prepare arguments for passing to a function call.
* Return a non-zero value if the arguments can't be passed to the given
* signature.
* The type checks are not yet complete and some conversions may need
* casts on 32 or 64 bit architectures.
*
* FIXME: implement this using target_type_is_incompatible ()
*/
static gboolean
check_call_signature (MonoCompile *cfg, MonoMethodSignature *sig, MonoInst **args)
{
MonoType *simple_type;
int i;
if (sig->hasthis) {
if (args [0]->type != STACK_OBJ && args [0]->type != STACK_MP && args [0]->type != STACK_PTR)
return TRUE;
args++;
}
for (i = 0; i < sig->param_count; ++i) {
if (m_type_is_byref (sig->params [i])) {
if (args [i]->type != STACK_MP && args [i]->type != STACK_PTR)
return TRUE;
continue;
}
simple_type = mini_get_underlying_type (sig->params [i]);
handle_enum:
switch (simple_type->type) {
case MONO_TYPE_VOID:
return TRUE;
case MONO_TYPE_I1:
case MONO_TYPE_U1:
case MONO_TYPE_I2:
case MONO_TYPE_U2:
case MONO_TYPE_I4:
case MONO_TYPE_U4:
if (args [i]->type != STACK_I4 && args [i]->type != STACK_PTR)
return TRUE;
continue;
case MONO_TYPE_I:
case MONO_TYPE_U:
if (args [i]->type != STACK_I4 && args [i]->type != STACK_PTR && args [i]->type != STACK_MP && args [i]->type != STACK_OBJ)
return TRUE;
continue;
case MONO_TYPE_PTR:
case MONO_TYPE_FNPTR:
if (args [i]->type != STACK_I4 && !(SIZEOF_VOID_P == 8 && args [i]->type == STACK_I8) &&
args [i]->type != STACK_PTR && args [i]->type != STACK_MP && args [i]->type != STACK_OBJ)
return TRUE;
continue;
case MONO_TYPE_CLASS:
case MONO_TYPE_STRING:
case MONO_TYPE_OBJECT:
case MONO_TYPE_SZARRAY:
case MONO_TYPE_ARRAY:
if (args [i]->type != STACK_OBJ)
return TRUE;
continue;
case MONO_TYPE_I8:
case MONO_TYPE_U8:
if (args [i]->type != STACK_I8 &&
!(SIZEOF_VOID_P == 8 && (args [i]->type == STACK_I4 || args [i]->type == STACK_PTR)))
return TRUE;
continue;
case MONO_TYPE_R4:
if (args [i]->type != cfg->r4_stack_type)
return TRUE;
continue;
case MONO_TYPE_R8:
if (args [i]->type != STACK_R8)
return TRUE;
continue;
case MONO_TYPE_VALUETYPE:
if (m_class_is_enumtype (simple_type->data.klass)) {
simple_type = mono_class_enum_basetype_internal (simple_type->data.klass);
goto handle_enum;
}
if (args [i]->type != STACK_VTYPE)
return TRUE;
continue;
case MONO_TYPE_TYPEDBYREF:
if (args [i]->type != STACK_VTYPE)
return TRUE;
continue;
case MONO_TYPE_GENERICINST:
simple_type = m_class_get_byval_arg (simple_type->data.generic_class->container_class);
goto handle_enum;
case MONO_TYPE_VAR:
case MONO_TYPE_MVAR:
/* gsharedvt */
if (args [i]->type != STACK_VTYPE)
return TRUE;
continue;
default:
g_error ("unknown type 0x%02x in check_call_signature",
simple_type->type);
}
}
return FALSE;
}
MonoJumpInfo *
mono_patch_info_new (MonoMemPool *mp, int ip, MonoJumpInfoType type, gconstpointer target)
{
MonoJumpInfo *ji = (MonoJumpInfo *)mono_mempool_alloc (mp, sizeof (MonoJumpInfo));
ji->ip.i = ip;
ji->type = type;
ji->data.target = target;
return ji;
}
int
mini_class_check_context_used (MonoCompile *cfg, MonoClass *klass)
{
if (cfg->gshared)
return mono_class_check_context_used (klass);
else
return 0;
}
int
mini_method_check_context_used (MonoCompile *cfg, MonoMethod *method)
{
if (cfg->gshared)
return mono_method_check_context_used (method);
else
return 0;
}
/*
* check_method_sharing:
*
* Check whether the vtable or an mrgctx needs to be passed when calling CMETHOD.
*/
static void
check_method_sharing (MonoCompile *cfg, MonoMethod *cmethod, gboolean *out_pass_vtable, gboolean *out_pass_mrgctx)
{
gboolean pass_vtable = FALSE;
gboolean pass_mrgctx = FALSE;
if (((cmethod->flags & METHOD_ATTRIBUTE_STATIC) || m_class_is_valuetype (cmethod->klass)) &&
(mono_class_is_ginst (cmethod->klass) || mono_class_is_gtd (cmethod->klass))) {
gboolean sharable = FALSE;
if (mono_method_is_generic_sharable_full (cmethod, TRUE, TRUE, TRUE))
sharable = TRUE;
/*
* Pass vtable iff target method might
* be shared, which means that sharing
* is enabled for its class and its
* context is sharable (and it's not a
* generic method).
*/
if (sharable && !(mini_method_get_context (cmethod) && mini_method_get_context (cmethod)->method_inst))
pass_vtable = TRUE;
}
if (mini_method_needs_mrgctx (cmethod)) {
if (mini_method_is_default_method (cmethod))
pass_vtable = FALSE;
else
g_assert (!pass_vtable);
if (mono_method_is_generic_sharable_full (cmethod, TRUE, TRUE, TRUE)) {
pass_mrgctx = TRUE;
} else {
if (cfg->gsharedvt && mini_is_gsharedvt_signature (mono_method_signature_internal (cmethod)))
pass_mrgctx = TRUE;
}
}
if (out_pass_vtable)
*out_pass_vtable = pass_vtable;
if (out_pass_mrgctx)
*out_pass_mrgctx = pass_mrgctx;
}
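/*
 * Illustrative example (hypothetical types): a shared static method such as
 * Gen<object>.StaticM () has no 'this' from which to recover the vtable, so
 * pass_vtable is set; a shared generic method M<T> () carries its method
 * type arguments in an mrgctx instead, so pass_mrgctx is set.
 */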
static gboolean
direct_icalls_enabled (MonoCompile *cfg, MonoMethod *method)
{
if (cfg->gen_sdb_seq_points || cfg->disable_direct_icalls)
return FALSE;
if (method && cfg->compile_aot && mono_aot_direct_icalls_enabled_for_method (cfg, method))
return TRUE;
/* LLVM on amd64 can't handle calls to non-32 bit addresses */
#ifdef TARGET_AMD64
if (cfg->compile_llvm && !cfg->llvm_only)
return FALSE;
#endif
return FALSE;
}
MonoInst*
mono_emit_jit_icall_by_info (MonoCompile *cfg, int il_offset, MonoJitICallInfo *info, MonoInst **args)
{
/*
* Call the jit icall without a wrapper if possible.
* The wrapper is needed to be able to do stack walks for asynchronously suspended
* threads when debugging.
*/
if (direct_icalls_enabled (cfg, NULL)) {
int costs;
if (!info->wrapper_method) {
info->wrapper_method = mono_marshal_get_icall_wrapper (info, TRUE);
mono_memory_barrier ();
}
/*
* Inline the wrapper method, which is basically a call to the C icall, and
* an exception check.
*/
costs = inline_method (cfg, info->wrapper_method, NULL,
args, NULL, il_offset, TRUE, NULL);
g_assert (costs > 0);
g_assert (!MONO_TYPE_IS_VOID (info->sig->ret));
return args [0];
}
return mono_emit_jit_icall_id (cfg, mono_jit_icall_info_id (info), args);
}
static MonoInst*
mono_emit_widen_call_res (MonoCompile *cfg, MonoInst *ins, MonoMethodSignature *fsig)
{
if (!MONO_TYPE_IS_VOID (fsig->ret)) {
if ((fsig->pinvoke || LLVM_ENABLED) && !m_type_is_byref (fsig->ret)) {
int widen_op = -1;
/*
* Native code might return non-register-sized integers
* without initializing the upper bits.
*/
switch (mono_type_to_load_membase (cfg, fsig->ret)) {
case OP_LOADI1_MEMBASE:
widen_op = OP_ICONV_TO_I1;
break;
case OP_LOADU1_MEMBASE:
widen_op = OP_ICONV_TO_U1;
break;
case OP_LOADI2_MEMBASE:
widen_op = OP_ICONV_TO_I2;
break;
case OP_LOADU2_MEMBASE:
widen_op = OP_ICONV_TO_U2;
break;
default:
break;
}
if (widen_op != -1) {
int dreg = alloc_preg (cfg);
MonoInst *widen;
EMIT_NEW_UNALU (cfg, widen, widen_op, dreg, ins->dreg);
widen->type = ins->type;
ins = widen;
}
}
}
return ins;
}
static MonoInst*
emit_get_rgctx_method (MonoCompile *cfg, int context_used,
MonoMethod *cmethod, MonoRgctxInfoType rgctx_type);
static void
emit_method_access_failure (MonoCompile *cfg, MonoMethod *caller, MonoMethod *callee)
{
MonoInst *args [2];
args [0] = emit_get_rgctx_method (cfg, mono_method_check_context_used (caller), caller, MONO_RGCTX_INFO_METHOD);
args [1] = emit_get_rgctx_method (cfg, mono_method_check_context_used (callee), callee, MONO_RGCTX_INFO_METHOD);
mono_emit_jit_icall (cfg, mono_throw_method_access, args);
}
static void
emit_bad_image_failure (MonoCompile *cfg, MonoMethod *caller, MonoMethod *callee)
{
mono_emit_jit_icall (cfg, mono_throw_bad_image, NULL);
}
static void
emit_not_supported_failure (MonoCompile *cfg)
{
mono_emit_jit_icall (cfg, mono_throw_not_supported, NULL);
}
static void
emit_invalid_program_with_msg (MonoCompile *cfg, MonoError *error_msg, MonoMethod *caller, MonoMethod *callee)
{
g_assert (!is_ok (error_msg));
char *str = mono_mem_manager_strdup (cfg->mem_manager, mono_error_get_message (error_msg));
MonoInst *iargs[1];
if (cfg->compile_aot)
EMIT_NEW_LDSTRLITCONST (cfg, iargs [0], str);
else
EMIT_NEW_PCONST (cfg, iargs [0], str);
mono_emit_jit_icall (cfg, mono_throw_invalid_program, iargs);
}
// FIXME Consolidate the multiple functions named get_method_nofail.
static MonoMethod*
get_method_nofail (MonoClass *klass, const char *method_name, int num_params, int flags)
{
MonoMethod *method;
ERROR_DECL (error);
method = mono_class_get_method_from_name_checked (klass, method_name, num_params, flags, error);
mono_error_assert_ok (error);
g_assertf (method, "Could not lookup method %s in %s", method_name, m_class_get_name (klass));
return method;
}
MonoMethod*
mini_get_memcpy_method (void)
{
static MonoMethod *memcpy_method = NULL;
if (!memcpy_method) {
memcpy_method = get_method_nofail (mono_defaults.string_class, "memcpy", 3, 0);
if (!memcpy_method)
g_error ("Old corlib found. Install a new one");
}
return memcpy_method;
}
MonoInst*
mini_emit_storing_write_barrier (MonoCompile *cfg, MonoInst *ptr, MonoInst *value)
{
MonoInst *store;
/*
* Add a release memory barrier so the object contents are flushed
* to memory before storing the reference into another object.
*/
if (!mini_debug_options.weak_memory_model)
mini_emit_memory_barrier (cfg, MONO_MEMORY_BARRIER_REL);
EMIT_NEW_STORE_MEMBASE (cfg, store, OP_STORE_MEMBASE_REG, ptr->dreg, 0, value->dreg);
mini_emit_write_barrier (cfg, ptr, value);
return store;
}
void
mini_emit_write_barrier (MonoCompile *cfg, MonoInst *ptr, MonoInst *value)
{
int card_table_shift_bits;
target_mgreg_t card_table_mask;
guint8 *card_table;
MonoInst *dummy_use;
int nursery_shift_bits;
size_t nursery_size;
if (!cfg->gen_write_barriers)
return;
//method->wrapper_type != MONO_WRAPPER_WRITE_BARRIER && !MONO_INS_IS_PCONST_NULL (sp [1])
card_table = mono_gc_get_target_card_table (&card_table_shift_bits, &card_table_mask);
mono_gc_get_nursery (&nursery_shift_bits, &nursery_size);
if (cfg->backend->have_card_table_wb && !cfg->compile_aot && card_table && nursery_shift_bits > 0 && !COMPILE_LLVM (cfg)) {
MonoInst *wbarrier;
MONO_INST_NEW (cfg, wbarrier, OP_CARD_TABLE_WBARRIER);
wbarrier->sreg1 = ptr->dreg;
wbarrier->sreg2 = value->dreg;
MONO_ADD_INS (cfg->cbb, wbarrier);
} else if (card_table) {
int offset_reg = alloc_preg (cfg);
int card_reg;
MonoInst *ins;
/*
* We emit a fast lightweight write barrier. This always marks cards, as in the
* concurrent collector case, so for the serial collector it might slightly slow
* down nursery collections. We also expect that the host system and the target
* system have the same card table configuration, which is the case if they have
* the same pointer size.
*/
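/*
 * Sketch of the fast path emitted below:
 *   card_table [(ptr >> card_table_shift_bits) & card_table_mask] = 1;
 * With SGen's usual 512-byte cards the shift would be 9, but the actual
 * geometry comes from mono_gc_get_target_card_table () above.
 */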
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_SHR_UN_IMM, offset_reg, ptr->dreg, card_table_shift_bits);
if (card_table_mask)
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_PAND_IMM, offset_reg, offset_reg, card_table_mask);
/* We can't use PADD_IMM since the card table might end up at a high address and
 * amd64 doesn't support IMMs larger than 32 bits.
 */
ins = mini_emit_runtime_constant (cfg, MONO_PATCH_INFO_GC_CARD_TABLE_ADDR, NULL);
card_reg = ins->dreg;
MONO_EMIT_NEW_BIALU (cfg, OP_PADD, offset_reg, offset_reg, card_reg);
MONO_EMIT_NEW_STORE_MEMBASE_IMM (cfg, OP_STOREI1_MEMBASE_IMM, offset_reg, 0, 1);
} else {
MonoMethod *write_barrier = mono_gc_get_write_barrier ();
mono_emit_method_call (cfg, write_barrier, &ptr, NULL);
}
EMIT_NEW_DUMMY_USE (cfg, dummy_use, value);
}
MonoMethod*
mini_get_memset_method (void)
{
static MonoMethod *memset_method = NULL;
if (!memset_method) {
memset_method = get_method_nofail (mono_defaults.string_class, "memset", 3, 0);
if (!memset_method)
g_error ("Old corlib found. Install a new one");
}
return memset_method;
}
void
mini_emit_initobj (MonoCompile *cfg, MonoInst *dest, const guchar *ip, MonoClass *klass)
{
MonoInst *iargs [3];
int n;
guint32 align;
MonoMethod *memset_method;
MonoInst *size_ins = NULL;
MonoInst *bzero_ins = NULL;
static MonoMethod *bzero_method;
/* FIXME: Optimize this for the case when dest is an LDADDR */
mono_class_init_internal (klass);
if (mini_is_gsharedvt_klass (klass)) {
size_ins = mini_emit_get_gsharedvt_info_klass (cfg, klass, MONO_RGCTX_INFO_VALUE_SIZE);
bzero_ins = mini_emit_get_gsharedvt_info_klass (cfg, klass, MONO_RGCTX_INFO_BZERO);
if (!bzero_method)
bzero_method = get_method_nofail (mono_defaults.string_class, "bzero_aligned_1", 2, 0);
g_assert (bzero_method);
iargs [0] = dest;
iargs [1] = size_ins;
mini_emit_calli (cfg, mono_method_signature_internal (bzero_method), iargs, bzero_ins, NULL, NULL);
return;
}
klass = mono_class_from_mono_type_internal (mini_get_underlying_type (m_class_get_byval_arg (klass)));
n = mono_class_value_size (klass, &align);
if (n <= TARGET_SIZEOF_VOID_P * 8) {
mini_emit_memset (cfg, dest->dreg, 0, n, 0, align);
}
else {
memset_method = mini_get_memset_method ();
iargs [0] = dest;
EMIT_NEW_ICONST (cfg, iargs [1], 0);
EMIT_NEW_ICONST (cfg, iargs [2], n);
mono_emit_method_call (cfg, memset_method, iargs, NULL);
}
}
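/*
 * Size heuristic above: value types up to TARGET_SIZEOF_VOID_P * 8 bytes
 * (64 bytes on a 64-bit target) are zeroed with inline stores via
 * mini_emit_memset (); larger ones go through the managed memset helper.
 */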
static gboolean
context_used_is_mrgctx (MonoCompile *cfg, int context_used)
{
/* gshared dim methods use an mrgctx */
if (mini_method_is_default_method (cfg->method))
return context_used != 0;
return context_used & MONO_GENERIC_CONTEXT_USED_METHOD;
}
/*
* emit_get_rgctx:
*
* Emit IR to return either the vtable or the mrgctx.
*/
static MonoInst*
emit_get_rgctx (MonoCompile *cfg, int context_used)
{
MonoMethod *method = cfg->method;
g_assert (cfg->gshared);
/* Data whose context contains method type vars is stored in the mrgctx */
if (context_used_is_mrgctx (cfg, context_used)) {
MonoInst *mrgctx_loc, *mrgctx_var;
g_assert (cfg->rgctx_access == MONO_RGCTX_ACCESS_MRGCTX);
if (!mini_method_is_default_method (method))
g_assert (method->is_inflated && mono_method_get_context (method)->method_inst);
if (cfg->llvm_only) {
mrgctx_var = mono_get_mrgctx_var (cfg);
} else {
/* Volatile */
mrgctx_loc = mono_get_mrgctx_var (cfg);
g_assert (mrgctx_loc->flags & MONO_INST_VOLATILE);
EMIT_NEW_TEMPLOAD (cfg, mrgctx_var, mrgctx_loc->inst_c0);
}
return mrgctx_var;
}
/*
* The rest of the entries are stored in vtable->runtime_generic_context so
* have to return a vtable.
*/
if (cfg->rgctx_access == MONO_RGCTX_ACCESS_MRGCTX) {
MonoInst *mrgctx_loc, *mrgctx_var, *vtable_var;
int vtable_reg;
/* We are passed an mrgctx, return mrgctx->class_vtable */
if (cfg->llvm_only) {
mrgctx_var = mono_get_mrgctx_var (cfg);
} else {
mrgctx_loc = mono_get_mrgctx_var (cfg);
g_assert (mrgctx_loc->flags & MONO_INST_VOLATILE);
EMIT_NEW_TEMPLOAD (cfg, mrgctx_var, mrgctx_loc->inst_c0);
}
vtable_reg = alloc_preg (cfg);
EMIT_NEW_LOAD_MEMBASE (cfg, vtable_var, OP_LOAD_MEMBASE, vtable_reg, mrgctx_var->dreg, MONO_STRUCT_OFFSET (MonoMethodRuntimeGenericContext, class_vtable));
vtable_var->type = STACK_PTR;
return vtable_var;
} else if (cfg->rgctx_access == MONO_RGCTX_ACCESS_VTABLE) {
MonoInst *vtable_loc, *vtable_var;
/* We are passed a vtable, return it */
if (cfg->llvm_only) {
vtable_var = mono_get_vtable_var (cfg);
} else {
vtable_loc = mono_get_vtable_var (cfg);
g_assert (vtable_loc->flags & MONO_INST_VOLATILE);
EMIT_NEW_TEMPLOAD (cfg, vtable_var, vtable_loc->inst_c0);
}
vtable_var->type = STACK_PTR;
return vtable_var;
} else {
MonoInst *ins, *this_ins;
int vtable_reg;
/* We are passed a this pointer, return this->vtable */
EMIT_NEW_VARLOAD (cfg, this_ins, cfg->this_arg, mono_get_object_type ());
vtable_reg = alloc_preg (cfg);
EMIT_NEW_LOAD_MEMBASE (cfg, ins, OP_LOAD_MEMBASE, vtable_reg, this_ins->dreg, MONO_STRUCT_OFFSET (MonoObject, vtable));
return ins;
}
}
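/*
 * Summary of the paths above: data keyed on method type vars returns the
 * mrgctx itself; otherwise a vtable is recovered either from
 * mrgctx->class_vtable, from the explicit vtable argument, or from
 * this->vtable, depending on cfg->rgctx_access.
 */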
static MonoJumpInfoRgctxEntry *
mono_patch_info_rgctx_entry_new (MonoMemPool *mp, MonoMethod *method, gboolean in_mrgctx, MonoJumpInfoType patch_type, gconstpointer patch_data, MonoRgctxInfoType info_type)
{
MonoJumpInfoRgctxEntry *res = (MonoJumpInfoRgctxEntry *)mono_mempool_alloc0 (mp, sizeof (MonoJumpInfoRgctxEntry));
if (in_mrgctx)
res->d.method = method;
else
res->d.klass = method->klass;
res->in_mrgctx = in_mrgctx;
res->data = (MonoJumpInfo *)mono_mempool_alloc0 (mp, sizeof (MonoJumpInfo));
res->data->type = patch_type;
res->data->data.target = patch_data;
res->info_type = info_type;
return res;
}
static MonoInst*
emit_get_gsharedvt_info (MonoCompile *cfg, gpointer data, MonoRgctxInfoType rgctx_type);
static MonoInst*
emit_rgctx_fetch_inline (MonoCompile *cfg, MonoInst *rgctx, MonoJumpInfoRgctxEntry *entry)
{
MonoInst *call;
MonoInst *slot_ins;
EMIT_NEW_AOTCONST (cfg, slot_ins, MONO_PATCH_INFO_RGCTX_SLOT_INDEX, entry);
// Can't add basic blocks during interp entry mode
if (cfg->disable_inline_rgctx_fetch || cfg->interp_entry_only) {
MonoInst *args [2] = { rgctx, slot_ins };
if (entry->in_mrgctx)
call = mono_emit_jit_icall (cfg, mono_fill_method_rgctx, args);
else
call = mono_emit_jit_icall (cfg, mono_fill_class_rgctx, args);
return call;
}
MonoBasicBlock *slowpath_bb, *end_bb;
MonoInst *ins, *res;
int rgctx_reg, res_reg;
/*
* rgctx = vtable->runtime_generic_context;
* if (rgctx) {
* val = rgctx [slot + 1];
* if (val)
* return val;
* }
* <slowpath>
*/
NEW_BBLOCK (cfg, end_bb);
NEW_BBLOCK (cfg, slowpath_bb);
if (entry->in_mrgctx) {
rgctx_reg = rgctx->dreg;
} else {
rgctx_reg = alloc_preg (cfg);
MONO_EMIT_NEW_LOAD_MEMBASE (cfg, rgctx_reg, rgctx->dreg, MONO_STRUCT_OFFSET (MonoVTable, runtime_generic_context));
// FIXME: Avoid this check by allocating the table when the vtable is created etc.
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, rgctx_reg, 0);
MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_PBEQ, slowpath_bb);
}
int table_size = mono_class_rgctx_get_array_size (0, entry->in_mrgctx);
if (entry->in_mrgctx)
table_size -= MONO_SIZEOF_METHOD_RUNTIME_GENERIC_CONTEXT / TARGET_SIZEOF_VOID_P;
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, slot_ins->dreg, table_size - 1);
MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_PBGE, slowpath_bb);
int shifted_slot_reg = alloc_ireg (cfg);
EMIT_NEW_BIALU_IMM (cfg, ins, OP_ISHL_IMM, shifted_slot_reg, slot_ins->dreg, TARGET_SIZEOF_VOID_P == 8 ? 3 : 2);
int addr_reg = alloc_preg (cfg);
EMIT_NEW_UNALU (cfg, ins, OP_MOVE, addr_reg, rgctx_reg);
EMIT_NEW_BIALU (cfg, ins, OP_PADD, addr_reg, addr_reg, shifted_slot_reg);
int val_reg = alloc_preg (cfg);
MONO_EMIT_NEW_LOAD_MEMBASE (cfg, val_reg, addr_reg, TARGET_SIZEOF_VOID_P + (entry->in_mrgctx ? MONO_SIZEOF_METHOD_RUNTIME_GENERIC_CONTEXT : 0));
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, val_reg, 0);
MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_PBEQ, slowpath_bb);
res_reg = alloc_preg (cfg);
EMIT_NEW_UNALU (cfg, ins, OP_MOVE, res_reg, val_reg);
res = ins;
MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_BR, end_bb);
MONO_START_BB (cfg, slowpath_bb);
slowpath_bb->out_of_line = TRUE;
MonoInst *args[2] = { rgctx, slot_ins };
if (entry->in_mrgctx)
call = mono_emit_jit_icall (cfg, mono_fill_method_rgctx, args);
else
call = mono_emit_jit_icall (cfg, mono_fill_class_rgctx, args);
EMIT_NEW_UNALU (cfg, ins, OP_MOVE, res_reg, call->dreg);
MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_BR, end_bb);
MONO_START_BB (cfg, end_bb);
return res;
}
/*
* emit_rgctx_fetch:
*
* Emit IR to load the value of the rgctx entry ENTRY from the rgctx.
*/
static MonoInst*
emit_rgctx_fetch (MonoCompile *cfg, int context_used, MonoJumpInfoRgctxEntry *entry)
{
MonoInst *rgctx = emit_get_rgctx (cfg, context_used);
if (cfg->llvm_only)
return emit_rgctx_fetch_inline (cfg, rgctx, entry);
else
return mini_emit_abs_call (cfg, MONO_PATCH_INFO_RGCTX_FETCH, entry, mono_icall_sig_ptr_ptr, &rgctx);
}
/*
* mini_emit_get_rgctx_klass:
*
* Emit IR to load the property RGCTX_TYPE of KLASS. If context_used is 0, emit
* normal constants, else emit a load from the rgctx.
*/
MonoInst*
mini_emit_get_rgctx_klass (MonoCompile *cfg, int context_used,
MonoClass *klass, MonoRgctxInfoType rgctx_type)
{
if (!context_used) {
MonoInst *ins;
switch (rgctx_type) {
case MONO_RGCTX_INFO_KLASS:
EMIT_NEW_CLASSCONST (cfg, ins, klass);
return ins;
case MONO_RGCTX_INFO_VTABLE: {
MonoVTable *vtable = mono_class_vtable_checked (klass, cfg->error);
CHECK_CFG_ERROR;
EMIT_NEW_VTABLECONST (cfg, ins, vtable);
return ins;
}
default:
g_assert_not_reached ();
}
}
// It's cheaper to load these from the gsharedvt info struct
if (cfg->llvm_only && cfg->gsharedvt)
return mini_emit_get_gsharedvt_info_klass (cfg, klass, rgctx_type);
MonoJumpInfoRgctxEntry *entry = mono_patch_info_rgctx_entry_new (cfg->mempool, cfg->method, context_used_is_mrgctx (cfg, context_used), MONO_PATCH_INFO_CLASS, klass, rgctx_type);
return emit_rgctx_fetch (cfg, context_used, entry);
mono_error_exit:
return NULL;
}
static MonoInst*
emit_get_rgctx_sig (MonoCompile *cfg, int context_used,
MonoMethodSignature *sig, MonoRgctxInfoType rgctx_type)
{
MonoJumpInfoRgctxEntry *entry = mono_patch_info_rgctx_entry_new (cfg->mempool, cfg->method, context_used_is_mrgctx (cfg, context_used), MONO_PATCH_INFO_SIGNATURE, sig, rgctx_type);
return emit_rgctx_fetch (cfg, context_used, entry);
}
static MonoInst*
emit_get_rgctx_gsharedvt_call (MonoCompile *cfg, int context_used,
MonoMethodSignature *sig, MonoMethod *cmethod, MonoRgctxInfoType rgctx_type)
{
MonoJumpInfoGSharedVtCall *call_info;
MonoJumpInfoRgctxEntry *entry;
call_info = (MonoJumpInfoGSharedVtCall *)mono_mempool_alloc0 (cfg->mempool, sizeof (MonoJumpInfoGSharedVtCall));
call_info->sig = sig;
call_info->method = cmethod;
entry = mono_patch_info_rgctx_entry_new (cfg->mempool, cfg->method, context_used_is_mrgctx (cfg, context_used), MONO_PATCH_INFO_GSHAREDVT_CALL, call_info, rgctx_type);
return emit_rgctx_fetch (cfg, context_used, entry);
}
/*
* emit_get_rgctx_virt_method:
*
* Return data for method VIRT_METHOD for a receiver of type KLASS.
*/
static MonoInst*
emit_get_rgctx_virt_method (MonoCompile *cfg, int context_used,
MonoClass *klass, MonoMethod *virt_method, MonoRgctxInfoType rgctx_type)
{
MonoJumpInfoVirtMethod *info;
MonoJumpInfoRgctxEntry *entry;
if (context_used == -1)
context_used = mono_class_check_context_used (klass) | mono_method_check_context_used (virt_method);
info = (MonoJumpInfoVirtMethod *)mono_mempool_alloc0 (cfg->mempool, sizeof (MonoJumpInfoVirtMethod));
info->klass = klass;
info->method = virt_method;
entry = mono_patch_info_rgctx_entry_new (cfg->mempool, cfg->method, context_used_is_mrgctx (cfg, context_used), MONO_PATCH_INFO_VIRT_METHOD, info, rgctx_type);
return emit_rgctx_fetch (cfg, context_used, entry);
}
static MonoInst*
emit_get_rgctx_gsharedvt_method (MonoCompile *cfg, int context_used,
MonoMethod *cmethod, MonoGSharedVtMethodInfo *info)
{
MonoJumpInfoRgctxEntry *entry;
entry = mono_patch_info_rgctx_entry_new (cfg->mempool, cfg->method, context_used_is_mrgctx (cfg, context_used), MONO_PATCH_INFO_GSHAREDVT_METHOD, info, MONO_RGCTX_INFO_METHOD_GSHAREDVT_INFO);
return emit_rgctx_fetch (cfg, context_used, entry);
}
/*
* emit_get_rgctx_method:
*
* Emit IR to load the property RGCTX_TYPE of CMETHOD. If context_used is 0, emit
* normal constants, else emit a load from the rgctx.
*/
static MonoInst*
emit_get_rgctx_method (MonoCompile *cfg, int context_used,
MonoMethod *cmethod, MonoRgctxInfoType rgctx_type)
{
if (context_used == -1)
context_used = mono_method_check_context_used (cmethod);
if (!context_used) {
MonoInst *ins;
switch (rgctx_type) {
case MONO_RGCTX_INFO_METHOD:
EMIT_NEW_METHODCONST (cfg, ins, cmethod);
return ins;
case MONO_RGCTX_INFO_METHOD_RGCTX:
EMIT_NEW_METHOD_RGCTX_CONST (cfg, ins, cmethod);
return ins;
case MONO_RGCTX_INFO_METHOD_FTNDESC:
EMIT_NEW_AOTCONST (cfg, ins, MONO_PATCH_INFO_METHOD_FTNDESC, cmethod);
return ins;
case MONO_RGCTX_INFO_LLVMONLY_INTERP_ENTRY:
EMIT_NEW_AOTCONST (cfg, ins, MONO_PATCH_INFO_LLVMONLY_INTERP_ENTRY, cmethod);
return ins;
default:
g_assert_not_reached ();
}
} else {
// It's cheaper to load these from the gsharedvt info struct
if (cfg->llvm_only && cfg->gsharedvt)
return emit_get_gsharedvt_info (cfg, cmethod, rgctx_type);
MonoJumpInfoRgctxEntry *entry = mono_patch_info_rgctx_entry_new (cfg->mempool, cfg->method, context_used_is_mrgctx (cfg, context_used), MONO_PATCH_INFO_METHODCONST, cmethod, rgctx_type);
return emit_rgctx_fetch (cfg, context_used, entry);
}
}
static MonoInst*
emit_get_rgctx_field (MonoCompile *cfg, int context_used,
MonoClassField *field, MonoRgctxInfoType rgctx_type)
{
// It's cheaper to load these from the gsharedvt info struct
if (cfg->llvm_only && cfg->gsharedvt)
return emit_get_gsharedvt_info (cfg, field, rgctx_type);
MonoJumpInfoRgctxEntry *entry = mono_patch_info_rgctx_entry_new (cfg->mempool, cfg->method, context_used_is_mrgctx (cfg, context_used), MONO_PATCH_INFO_FIELD, field, rgctx_type);
return emit_rgctx_fetch (cfg, context_used, entry);
}
MonoInst*
mini_emit_get_rgctx_method (MonoCompile *cfg, int context_used,
MonoMethod *cmethod, MonoRgctxInfoType rgctx_type)
{
return emit_get_rgctx_method (cfg, context_used, cmethod, rgctx_type);
}
static int
get_gsharedvt_info_slot (MonoCompile *cfg, gpointer data, MonoRgctxInfoType rgctx_type)
{
MonoGSharedVtMethodInfo *info = cfg->gsharedvt_info;
MonoRuntimeGenericContextInfoTemplate *template_;
int i, idx;
g_assert (info);
for (i = 0; i < info->num_entries; ++i) {
MonoRuntimeGenericContextInfoTemplate *otemplate = &info->entries [i];
if (otemplate->info_type == rgctx_type && otemplate->data == data && rgctx_type != MONO_RGCTX_INFO_LOCAL_OFFSET)
return i;
}
if (info->num_entries == info->count_entries) {
MonoRuntimeGenericContextInfoTemplate *new_entries;
int new_count_entries = info->count_entries ? info->count_entries * 2 : 16;
new_entries = (MonoRuntimeGenericContextInfoTemplate *)mono_mempool_alloc0 (cfg->mempool, sizeof (MonoRuntimeGenericContextInfoTemplate) * new_count_entries);
memcpy (new_entries, info->entries, sizeof (MonoRuntimeGenericContextInfoTemplate) * info->count_entries);
info->entries = new_entries;
info->count_entries = new_count_entries;
}
idx = info->num_entries;
template_ = &info->entries [idx];
template_->info_type = rgctx_type;
template_->data = data;
info->num_entries ++;
return idx;
}
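/*
 * The slot table above is searched linearly and grown geometrically
 * (16, 32, 64, ...); per-method entry counts are presumably small enough
 * that this beats a hash table here.
 */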
/*
* emit_get_gsharedvt_info:
*
* This is similar to emit_get_rgctx_.., but loads the data from the gsharedvt info var instead of calling an rgctx fetch trampoline.
*/
static MonoInst*
emit_get_gsharedvt_info (MonoCompile *cfg, gpointer data, MonoRgctxInfoType rgctx_type)
{
MonoInst *ins;
int idx, dreg;
idx = get_gsharedvt_info_slot (cfg, data, rgctx_type);
/* Load info->entries [idx] */
dreg = alloc_preg (cfg);
EMIT_NEW_LOAD_MEMBASE (cfg, ins, OP_LOAD_MEMBASE, dreg, cfg->gsharedvt_info_var->dreg, MONO_STRUCT_OFFSET (MonoGSharedVtMethodRuntimeInfo, entries) + (idx * TARGET_SIZEOF_VOID_P));
return ins;
}
MonoInst*
mini_emit_get_gsharedvt_info_klass (MonoCompile *cfg, MonoClass *klass, MonoRgctxInfoType rgctx_type)
{
return emit_get_gsharedvt_info (cfg, m_class_get_byval_arg (klass), rgctx_type);
}
/*
* On return the caller must check @klass for load errors.
*/
static void
emit_class_init (MonoCompile *cfg, MonoClass *klass)
{
MonoInst *vtable_arg;
int context_used;
context_used = mini_class_check_context_used (cfg, klass);
if (context_used) {
vtable_arg = mini_emit_get_rgctx_klass (cfg, context_used,
klass, MONO_RGCTX_INFO_VTABLE);
} else {
MonoVTable *vtable = mono_class_vtable_checked (klass, cfg->error);
if (!is_ok (cfg->error)) {
mono_cfg_set_exception (cfg, MONO_EXCEPTION_MONO_ERROR);
return;
}
EMIT_NEW_VTABLECONST (cfg, vtable_arg, vtable);
}
if (!COMPILE_LLVM (cfg) && cfg->backend->have_op_generic_class_init) {
MonoInst *ins;
/*
* Using an opcode instead of emitting IR here allows the hiding of the call inside the opcode,
* so this doesn't have to clobber any regs and it doesn't break basic blocks.
*/
MONO_INST_NEW (cfg, ins, OP_GENERIC_CLASS_INIT);
ins->sreg1 = vtable_arg->dreg;
MONO_ADD_INS (cfg->cbb, ins);
} else {
int inited_reg;
MonoBasicBlock *inited_bb;
inited_reg = alloc_ireg (cfg);
MONO_EMIT_NEW_LOAD_MEMBASE_OP (cfg, OP_LOADU1_MEMBASE, inited_reg, vtable_arg->dreg, MONO_STRUCT_OFFSET (MonoVTable, initialized));
NEW_BBLOCK (cfg, inited_bb);
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, inited_reg, 0);
MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_IBNE_UN, inited_bb);
cfg->cbb->out_of_line = TRUE;
mono_emit_jit_icall (cfg, mono_generic_class_init, &vtable_arg);
MONO_START_BB (cfg, inited_bb);
}
}
static void
emit_seq_point (MonoCompile *cfg, MonoMethod *method, guint8* ip, gboolean intr_loc, gboolean nonempty_stack)
{
MonoInst *ins;
if (cfg->gen_seq_points && cfg->method == method) {
NEW_SEQ_POINT (cfg, ins, ip - cfg->header->code, intr_loc);
if (nonempty_stack)
ins->flags |= MONO_INST_NONEMPTY_STACK;
MONO_ADD_INS (cfg->cbb, ins);
cfg->last_seq_point = ins;
}
}
void
mini_save_cast_details (MonoCompile *cfg, MonoClass *klass, int obj_reg, gboolean null_check)
{
if (mini_debug_options.better_cast_details) {
int vtable_reg = alloc_preg (cfg);
int klass_reg = alloc_preg (cfg);
MonoBasicBlock *is_null_bb = NULL;
MonoInst *tls_get;
if (null_check) {
NEW_BBLOCK (cfg, is_null_bb);
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, obj_reg, 0);
MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_PBEQ, is_null_bb);
}
tls_get = mono_create_tls_get (cfg, TLS_KEY_JIT_TLS);
if (!tls_get) {
fprintf (stderr, "error: --debug=casts not supported on this platform.\n");
exit (1);
}
MONO_EMIT_NEW_LOAD_MEMBASE (cfg, vtable_reg, obj_reg, MONO_STRUCT_OFFSET (MonoObject, vtable));
MONO_EMIT_NEW_LOAD_MEMBASE (cfg, klass_reg, vtable_reg, MONO_STRUCT_OFFSET (MonoVTable, klass));
MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STORE_MEMBASE_REG, tls_get->dreg, MONO_STRUCT_OFFSET (MonoJitTlsData, class_cast_from), klass_reg);
MonoInst *class_ins = mini_emit_get_rgctx_klass (cfg, mini_class_check_context_used (cfg, klass), klass, MONO_RGCTX_INFO_KLASS);
MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STORE_MEMBASE_REG, tls_get->dreg, MONO_STRUCT_OFFSET (MonoJitTlsData, class_cast_to), class_ins->dreg);
if (null_check)
MONO_START_BB (cfg, is_null_bb);
}
}
void
mini_reset_cast_details (MonoCompile *cfg)
{
/* Reset the variables holding the cast details */
if (mini_debug_options.better_cast_details) {
MonoInst *tls_get = mono_create_tls_get (cfg, TLS_KEY_JIT_TLS);
/* It is enough to reset the from field */
MONO_EMIT_NEW_STORE_MEMBASE_IMM (cfg, OP_STORE_MEMBASE_IMM, tls_get->dreg, MONO_STRUCT_OFFSET (MonoJitTlsData, class_cast_from), 0);
}
}
/*
* On return the caller must check @array_class for load errors
*/
static void
mini_emit_check_array_type (MonoCompile *cfg, MonoInst *obj, MonoClass *array_class)
{
int vtable_reg = alloc_preg (cfg);
int context_used;
context_used = mini_class_check_context_used (cfg, array_class);
mini_save_cast_details (cfg, array_class, obj->dreg, FALSE);
MONO_EMIT_NEW_LOAD_MEMBASE_FAULT (cfg, vtable_reg, obj->dreg, MONO_STRUCT_OFFSET (MonoObject, vtable));
if (context_used) {
MonoInst *vtable_ins;
vtable_ins = mini_emit_get_rgctx_klass (cfg, context_used, array_class, MONO_RGCTX_INFO_VTABLE);
MONO_EMIT_NEW_BIALU (cfg, OP_COMPARE, -1, vtable_reg, vtable_ins->dreg);
} else {
if (cfg->compile_aot) {
int vt_reg;
MonoVTable *vtable;
if (!(vtable = mono_class_vtable_checked (array_class, cfg->error))) {
mono_cfg_set_exception (cfg, MONO_EXCEPTION_MONO_ERROR);
return;
}
vt_reg = alloc_preg (cfg);
MONO_EMIT_NEW_VTABLECONST (cfg, vt_reg, vtable);
MONO_EMIT_NEW_BIALU (cfg, OP_COMPARE, -1, vtable_reg, vt_reg);
} else {
MonoVTable *vtable;
if (!(vtable = mono_class_vtable_checked (array_class, cfg->error))) {
mono_cfg_set_exception (cfg, MONO_EXCEPTION_MONO_ERROR);
return;
}
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, vtable_reg, (gssize)vtable);
}
}
MONO_EMIT_NEW_COND_EXC (cfg, NE_UN, "ArrayTypeMismatchException");
mini_reset_cast_details (cfg);
}
/**
* Handles unbox of a Nullable<T>. If context_used is non-zero, then shared
* generic code is generated.
*/
static MonoInst*
handle_unbox_nullable (MonoCompile* cfg, MonoInst* val, MonoClass* klass, int context_used)
{
MonoMethod* method;
if (m_class_is_enumtype (mono_class_get_nullable_param_internal (klass)))
method = get_method_nofail (klass, "UnboxExact", 1, 0);
else
method = get_method_nofail (klass, "Unbox", 1, 0);
g_assert (method);
if (context_used) {
MonoInst *rgctx, *addr;
/* FIXME: What if the class is shared? We might not
have to get the address of the method from the
RGCTX. */
if (cfg->llvm_only) {
addr = emit_get_rgctx_method (cfg, context_used, method,
MONO_RGCTX_INFO_METHOD_FTNDESC);
cfg->signatures = g_slist_prepend_mempool (cfg->mempool, cfg->signatures, mono_method_signature_internal (method));
return mini_emit_llvmonly_calli (cfg, mono_method_signature_internal (method), &val, addr);
} else {
addr = emit_get_rgctx_method (cfg, context_used, method,
MONO_RGCTX_INFO_GENERIC_METHOD_CODE);
rgctx = emit_get_rgctx (cfg, context_used);
return mini_emit_calli (cfg, mono_method_signature_internal (method), &val, addr, NULL, rgctx);
}
} else {
gboolean pass_vtable, pass_mrgctx;
MonoInst *rgctx_arg = NULL;
check_method_sharing (cfg, method, &pass_vtable, &pass_mrgctx);
g_assert (!pass_mrgctx);
if (pass_vtable) {
MonoVTable *vtable = mono_class_vtable_checked (method->klass, cfg->error);
mono_error_assert_ok (cfg->error);
EMIT_NEW_VTABLECONST (cfg, rgctx_arg, vtable);
}
return mini_emit_method_call_full (cfg, method, NULL, FALSE, &val, NULL, NULL, rgctx_arg);
}
}
MonoInst*
mini_handle_unbox (MonoCompile *cfg, MonoClass *klass, MonoInst *val, int context_used)
{
MonoInst *add;
int obj_reg;
int vtable_reg = alloc_dreg (cfg, STACK_PTR);
int klass_reg = alloc_dreg (cfg, STACK_PTR);
int eclass_reg = alloc_dreg (cfg, STACK_PTR);
int rank_reg = alloc_dreg (cfg, STACK_I4);
obj_reg = val->dreg;
MONO_EMIT_NEW_LOAD_MEMBASE_FAULT (cfg, vtable_reg, obj_reg, MONO_STRUCT_OFFSET (MonoObject, vtable));
MONO_EMIT_NEW_LOAD_MEMBASE_OP (cfg, OP_LOADU1_MEMBASE, rank_reg, vtable_reg, MONO_STRUCT_OFFSET (MonoVTable, rank));
/* FIXME: generics */
g_assert (m_class_get_rank (klass) == 0);
// Check rank == 0
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, rank_reg, 0);
MONO_EMIT_NEW_COND_EXC (cfg, NE_UN, "InvalidCastException");
MONO_EMIT_NEW_LOAD_MEMBASE (cfg, klass_reg, vtable_reg, MONO_STRUCT_OFFSET (MonoVTable, klass));
MONO_EMIT_NEW_LOAD_MEMBASE (cfg, eclass_reg, klass_reg, m_class_offsetof_element_class ());
if (context_used) {
MonoInst *element_class;
/* This assertion is from the unboxcast insn */
g_assert (m_class_get_rank (klass) == 0);
element_class = mini_emit_get_rgctx_klass (cfg, context_used,
klass, MONO_RGCTX_INFO_ELEMENT_KLASS);
MONO_EMIT_NEW_BIALU (cfg, OP_COMPARE, -1, eclass_reg, element_class->dreg);
MONO_EMIT_NEW_COND_EXC (cfg, NE_UN, "InvalidCastException");
} else {
mini_save_cast_details (cfg, m_class_get_element_class (klass), obj_reg, FALSE);
mini_emit_class_check (cfg, eclass_reg, m_class_get_element_class (klass));
mini_reset_cast_details (cfg);
}
NEW_BIALU_IMM (cfg, add, OP_ADD_IMM, alloc_dreg (cfg, STACK_MP), obj_reg, MONO_ABI_SIZEOF (MonoObject));
MONO_ADD_INS (cfg->cbb, add);
add->type = STACK_MP;
add->klass = klass;
return add;
}
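/*
 * The returned 'add' is the unboxed payload address: obj +
 * MONO_ABI_SIZEOF (MonoObject), i.e. just past the object header, typed as a
 * managed pointer (STACK_MP) so callers can load the value type from it.
 */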
static MonoInst*
handle_unbox_gsharedvt (MonoCompile *cfg, MonoClass *klass, MonoInst *obj)
{
MonoInst *addr, *klass_inst, *is_ref, *args[16];
MonoBasicBlock *is_ref_bb, *is_nullable_bb, *end_bb;
MonoInst *ins;
int dreg, addr_reg;
klass_inst = mini_emit_get_gsharedvt_info_klass (cfg, klass, MONO_RGCTX_INFO_KLASS);
/* obj */
args [0] = obj;
/* klass */
args [1] = klass_inst;
/* CASTCLASS */
obj = mono_emit_jit_icall (cfg, mono_object_castclass_unbox, args);
NEW_BBLOCK (cfg, is_ref_bb);
NEW_BBLOCK (cfg, is_nullable_bb);
NEW_BBLOCK (cfg, end_bb);
is_ref = mini_emit_get_gsharedvt_info_klass (cfg, klass, MONO_RGCTX_INFO_CLASS_BOX_TYPE);
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, is_ref->dreg, MONO_GSHAREDVT_BOX_TYPE_REF);
MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_IBEQ, is_ref_bb);
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, is_ref->dreg, MONO_GSHAREDVT_BOX_TYPE_NULLABLE);
MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_IBEQ, is_nullable_bb);
/* This will contain either the address of the unboxed vtype, or an address of the temporary where the ref is stored */
addr_reg = alloc_dreg (cfg, STACK_MP);
/* Non-ref case */
/* UNBOX */
NEW_BIALU_IMM (cfg, addr, OP_ADD_IMM, addr_reg, obj->dreg, MONO_ABI_SIZEOF (MonoObject));
MONO_ADD_INS (cfg->cbb, addr);
MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_BR, end_bb);
/* Ref case */
MONO_START_BB (cfg, is_ref_bb);
/* Save the ref to a temporary */
dreg = alloc_ireg (cfg);
EMIT_NEW_VARLOADA_VREG (cfg, addr, dreg, m_class_get_byval_arg (klass));
addr->dreg = addr_reg;
MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STORE_MEMBASE_REG, addr->dreg, 0, obj->dreg);
MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_BR, end_bb);
/* Nullable case */
MONO_START_BB (cfg, is_nullable_bb);
{
MonoInst *addr = mini_emit_get_gsharedvt_info_klass (cfg, klass, MONO_RGCTX_INFO_NULLABLE_CLASS_UNBOX);
MonoInst *unbox_call;
MonoMethodSignature *unbox_sig;
unbox_sig = (MonoMethodSignature *)mono_mempool_alloc0 (cfg->mempool, MONO_SIZEOF_METHOD_SIGNATURE + (1 * sizeof (MonoType *)));
unbox_sig->ret = m_class_get_byval_arg (klass);
unbox_sig->param_count = 1;
unbox_sig->params [0] = mono_get_object_type ();
if (cfg->llvm_only)
unbox_call = mini_emit_llvmonly_calli (cfg, unbox_sig, &obj, addr);
else
unbox_call = mini_emit_calli (cfg, unbox_sig, &obj, addr, NULL, NULL);
EMIT_NEW_VARLOADA_VREG (cfg, addr, unbox_call->dreg, m_class_get_byval_arg (klass));
addr->dreg = addr_reg;
}
MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_BR, end_bb);
/* End */
MONO_START_BB (cfg, end_bb);
/* LDOBJ */
EMIT_NEW_LOAD_MEMBASE_TYPE (cfg, ins, m_class_get_byval_arg (klass), addr_reg, 0);
return ins;
}
/*
* Returns NULL and sets the cfg exception on error.
*/
static MonoInst*
handle_alloc (MonoCompile *cfg, MonoClass *klass, gboolean for_box, int context_used)
{
MonoInst *iargs [2];
MonoJitICallId alloc_ftn;
if (mono_class_get_flags (klass) & TYPE_ATTRIBUTE_ABSTRACT) {
char* full_name = mono_type_get_full_name (klass);
mono_cfg_set_exception (cfg, MONO_EXCEPTION_MONO_ERROR);
mono_error_set_member_access (cfg->error, "Cannot create an abstract class: %s", full_name);
g_free (full_name);
return NULL;
}
if (context_used) {
gboolean known_instance_size = !mini_is_gsharedvt_klass (klass);
MonoMethod *managed_alloc = mono_gc_get_managed_allocator (klass, for_box, known_instance_size);
iargs [0] = mini_emit_get_rgctx_klass (cfg, context_used, klass, MONO_RGCTX_INFO_VTABLE);
alloc_ftn = MONO_JIT_ICALL_ves_icall_object_new_specific;
if (managed_alloc) {
if (known_instance_size) {
int size = mono_class_instance_size (klass);
if (size < MONO_ABI_SIZEOF (MonoObject))
g_error ("Invalid size %d for class %s", size, mono_type_get_full_name (klass));
EMIT_NEW_ICONST (cfg, iargs [1], size);
}
return mono_emit_method_call (cfg, managed_alloc, iargs, NULL);
}
return mono_emit_jit_icall_id (cfg, alloc_ftn, iargs);
}
if (cfg->compile_aot && cfg->cbb->out_of_line && m_class_get_type_token (klass) && m_class_get_image (klass) == mono_defaults.corlib && !mono_class_is_ginst (klass)) {
/* This happens often in argument checking code, e.g. throw new FooException... */
/* Avoid relocations and save some space by calling a helper function specialized to mscorlib */
EMIT_NEW_ICONST (cfg, iargs [0], mono_metadata_token_index (m_class_get_type_token (klass)));
alloc_ftn = MONO_JIT_ICALL_mono_helper_newobj_mscorlib;
} else {
MonoVTable *vtable = mono_class_vtable_checked (klass, cfg->error);
if (!is_ok (cfg->error)) {
mono_cfg_set_exception (cfg, MONO_EXCEPTION_MONO_ERROR);
return NULL;
}
MonoMethod *managed_alloc = mono_gc_get_managed_allocator (klass, for_box, TRUE);
if (managed_alloc) {
int size = mono_class_instance_size (klass);
if (size < MONO_ABI_SIZEOF (MonoObject))
g_error ("Invalid size %d for class %s", size, mono_type_get_full_name (klass));
EMIT_NEW_VTABLECONST (cfg, iargs [0], vtable);
EMIT_NEW_ICONST (cfg, iargs [1], size);
return mono_emit_method_call (cfg, managed_alloc, iargs, NULL);
}
alloc_ftn = MONO_JIT_ICALL_ves_icall_object_new_specific;
EMIT_NEW_VTABLECONST (cfg, iargs [0], vtable);
}
return mono_emit_jit_icall_id (cfg, alloc_ftn, iargs);
}
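/*
 * In short: when the GC exposes a managed allocator and the instance size is
 * known, the allocation becomes a direct managed call taking (vtable, size);
 * otherwise it falls back to an allocation icall such as
 * ves_icall_object_new_specific.
 */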
/*
* Returns NULL and sets the cfg exception on error.
*/
MonoInst*
mini_emit_box (MonoCompile *cfg, MonoInst *val, MonoClass *klass, int context_used)
{
MonoInst *alloc, *ins;
if (G_UNLIKELY (m_class_is_byreflike (klass))) {
mono_error_set_bad_image (cfg->error, m_class_get_image (cfg->method->klass), "Cannot box IsByRefLike type '%s.%s'", m_class_get_name_space (klass), m_class_get_name (klass));
mono_cfg_set_exception (cfg, MONO_EXCEPTION_MONO_ERROR);
return NULL;
}
if (mono_class_is_nullable (klass)) {
MonoMethod* method = get_method_nofail (klass, "Box", 1, 0);
if (context_used) {
if (cfg->llvm_only) {
MonoMethodSignature *sig = mono_method_signature_internal (method);
MonoInst *addr = emit_get_rgctx_method (cfg, context_used, method,
MONO_RGCTX_INFO_METHOD_FTNDESC);
cfg->interp_in_signatures = g_slist_prepend_mempool (cfg->mempool, cfg->interp_in_signatures, sig);
return mini_emit_llvmonly_calli (cfg, sig, &val, addr);
} else {
/* FIXME: What if the class is shared? We might not
have to get the method address from the RGCTX. */
MonoInst *addr = emit_get_rgctx_method (cfg, context_used, method,
MONO_RGCTX_INFO_GENERIC_METHOD_CODE);
MonoInst *rgctx = emit_get_rgctx (cfg, context_used);
return mini_emit_calli (cfg, mono_method_signature_internal (method), &val, addr, NULL, rgctx);
}
} else {
gboolean pass_vtable, pass_mrgctx;
MonoInst *rgctx_arg = NULL;
check_method_sharing (cfg, method, &pass_vtable, &pass_mrgctx);
g_assert (!pass_mrgctx);
if (pass_vtable) {
MonoVTable *vtable = mono_class_vtable_checked (method->klass, cfg->error);
mono_error_assert_ok (cfg->error);
EMIT_NEW_VTABLECONST (cfg, rgctx_arg, vtable);
}
return mini_emit_method_call_full (cfg, method, NULL, FALSE, &val, NULL, NULL, rgctx_arg);
}
}
if (mini_is_gsharedvt_klass (klass)) {
MonoBasicBlock *is_ref_bb, *is_nullable_bb, *end_bb;
MonoInst *res, *is_ref, *src_var, *addr;
int dreg;
dreg = alloc_ireg (cfg);
NEW_BBLOCK (cfg, is_ref_bb);
NEW_BBLOCK (cfg, is_nullable_bb);
NEW_BBLOCK (cfg, end_bb);
is_ref = mini_emit_get_gsharedvt_info_klass (cfg, klass, MONO_RGCTX_INFO_CLASS_BOX_TYPE);
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, is_ref->dreg, MONO_GSHAREDVT_BOX_TYPE_REF);
MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_IBEQ, is_ref_bb);
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, is_ref->dreg, MONO_GSHAREDVT_BOX_TYPE_NULLABLE);
MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_IBEQ, is_nullable_bb);
/* Non-ref case */
alloc = handle_alloc (cfg, klass, TRUE, context_used);
if (!alloc)
return NULL;
EMIT_NEW_STORE_MEMBASE_TYPE (cfg, ins, m_class_get_byval_arg (klass), alloc->dreg, MONO_ABI_SIZEOF (MonoObject), val->dreg);
ins->opcode = OP_STOREV_MEMBASE;
EMIT_NEW_UNALU (cfg, res, OP_MOVE, dreg, alloc->dreg);
res->type = STACK_OBJ;
res->klass = klass;
MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_BR, end_bb);
/* Ref case */
MONO_START_BB (cfg, is_ref_bb);
/* val is a vtype, so we have to load the value manually */
src_var = get_vreg_to_inst (cfg, val->dreg);
if (!src_var)
src_var = mono_compile_create_var_for_vreg (cfg, m_class_get_byval_arg (klass), OP_LOCAL, val->dreg);
EMIT_NEW_VARLOADA (cfg, addr, src_var, src_var->inst_vtype);
MONO_EMIT_NEW_LOAD_MEMBASE (cfg, dreg, addr->dreg, 0);
MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_BR, end_bb);
/* Nullable case */
MONO_START_BB (cfg, is_nullable_bb);
{
MonoInst *addr = mini_emit_get_gsharedvt_info_klass (cfg, klass,
MONO_RGCTX_INFO_NULLABLE_CLASS_BOX);
MonoInst *box_call;
MonoMethodSignature *box_sig;
/*
* klass is Nullable<T>, need to call Nullable<T>.Box () using a gsharedvt signature, but we cannot
* construct that method at JIT time, so have to do things by hand.
*/
box_sig = (MonoMethodSignature *)mono_mempool_alloc0 (cfg->mempool, MONO_SIZEOF_METHOD_SIGNATURE + (1 * sizeof (MonoType *)));
box_sig->ret = mono_get_object_type ();
box_sig->param_count = 1;
box_sig->params [0] = m_class_get_byval_arg (klass);
if (cfg->llvm_only)
box_call = mini_emit_llvmonly_calli (cfg, box_sig, &val, addr);
else
box_call = mini_emit_calli (cfg, box_sig, &val, addr, NULL, NULL);
EMIT_NEW_UNALU (cfg, res, OP_MOVE, dreg, box_call->dreg);
res->type = STACK_OBJ;
res->klass = klass;
}
MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_BR, end_bb);
MONO_START_BB (cfg, end_bb);
return res;
}
alloc = handle_alloc (cfg, klass, TRUE, context_used);
if (!alloc)
return NULL;
EMIT_NEW_STORE_MEMBASE_TYPE (cfg, ins, m_class_get_byval_arg (klass), alloc->dreg, MONO_ABI_SIZEOF (MonoObject), val->dreg);
return alloc;
}
static gboolean
method_needs_stack_walk (MonoCompile *cfg, MonoMethod *cmethod)
{
if (cmethod->klass == mono_defaults.systemtype_class) {
if (!strcmp (cmethod->name, "GetType"))
return TRUE;
}
/*
* In corelib code, methods which need to do a stack walk declare a StackCrawlMark local and pass it as an
* argument down the call chain until it reaches an icall. It's hard to detect which methods do that, especially with
* StackCrawlMark.LookForMyCallersCaller, so for now, just hardcode the classes which contain the public
* methods whose caller is needed.
*/
if (mono_is_corlib_image (m_class_get_image (cmethod->klass))) {
const char *cname = m_class_get_name (cmethod->klass);
if (!strcmp (cname, "Assembly") ||
!strcmp (cname, "AssemblyLoadContext") ||
(!strcmp (cname, "Activator"))) {
if (!strcmp (cmethod->name, "op_Equality"))
return FALSE;
return TRUE;
}
}
return FALSE;
}
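/*
 * Illustrative examples this catches: Assembly.GetExecutingAssembly () and
 * Activator.CreateInstance () rely on a StackCrawlMark to find their caller,
 * which is why their declaring classes are hardcoded above.
 */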
G_GNUC_UNUSED MonoInst*
mini_handle_enum_has_flag (MonoCompile *cfg, MonoClass *klass, MonoInst *enum_this, int enum_val_reg, MonoInst *enum_flag)
{
MonoType *enum_type = mono_type_get_underlying_type (m_class_get_byval_arg (klass));
guint32 load_opc = mono_type_to_load_membase (cfg, enum_type);
gboolean is_i4;
switch (enum_type->type) {
case MONO_TYPE_I8:
case MONO_TYPE_U8:
#if SIZEOF_REGISTER == 8
case MONO_TYPE_I:
case MONO_TYPE_U:
#endif
is_i4 = FALSE;
break;
default:
is_i4 = TRUE;
break;
}
{
MonoInst *load = NULL, *and_, *cmp, *ceq;
int enum_reg = is_i4 ? alloc_ireg (cfg) : alloc_lreg (cfg);
int and_reg = is_i4 ? alloc_ireg (cfg) : alloc_lreg (cfg);
int dest_reg = alloc_ireg (cfg);
if (enum_this) {
EMIT_NEW_LOAD_MEMBASE (cfg, load, load_opc, enum_reg, enum_this->dreg, 0);
} else {
g_assert (enum_val_reg != -1);
enum_reg = enum_val_reg;
}
EMIT_NEW_BIALU (cfg, and_, is_i4 ? OP_IAND : OP_LAND, and_reg, enum_reg, enum_flag->dreg);
EMIT_NEW_BIALU (cfg, cmp, is_i4 ? OP_ICOMPARE : OP_LCOMPARE, -1, and_reg, enum_flag->dreg);
EMIT_NEW_UNALU (cfg, ceq, is_i4 ? OP_ICEQ : OP_LCEQ, dest_reg, -1);
ceq->type = STACK_I4;
if (!is_i4) {
load = load ? mono_decompose_opcode (cfg, load) : NULL;
and_ = mono_decompose_opcode (cfg, and_);
cmp = mono_decompose_opcode (cfg, cmp);
ceq = mono_decompose_opcode (cfg, ceq);
}
return ceq;
}
}
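/*
 * The sequence above lowers Enum.HasFlag to roughly ((value & flag) == flag),
 * using 32-bit or 64-bit opcodes depending on the enum's underlying type.
 */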
static void
emit_set_deopt_il_offset (MonoCompile *cfg, int offset)
{
MonoInst *ins;
if (!(cfg->deopt && cfg->method == cfg->current_method))
return;
EMIT_NEW_VARLOADA (cfg, ins, cfg->il_state_var, NULL);
MONO_EMIT_NEW_STORE_MEMBASE_IMM (cfg, OP_STOREI4_MEMBASE_IMM, ins->dreg, MONO_STRUCT_OFFSET (MonoMethodILState, il_offset), offset);
}
static MonoInst*
emit_get_rgctx_dele_tramp (MonoCompile *cfg, int context_used,
MonoClass *klass, MonoMethod *virt_method, gboolean _virtual, MonoRgctxInfoType rgctx_type)
{
MonoDelegateClassMethodPair *info;
MonoJumpInfoRgctxEntry *entry;
info = (MonoDelegateClassMethodPair *)mono_mempool_alloc0 (cfg->mempool, sizeof (MonoDelegateClassMethodPair));
info->klass = klass;
info->method = virt_method;
info->is_virtual = _virtual;
entry = mono_patch_info_rgctx_entry_new (cfg->mempool, cfg->method, context_used_is_mrgctx (cfg, context_used), MONO_PATCH_INFO_DELEGATE_TRAMPOLINE, info, rgctx_type);
return emit_rgctx_fetch (cfg, context_used, entry);
}
/*
* Returns NULL and sets the cfg exception on error.
*/
static G_GNUC_UNUSED MonoInst*
handle_delegate_ctor (MonoCompile *cfg, MonoClass *klass, MonoInst *target, MonoMethod *method, int target_method_context_used, int invoke_context_used, gboolean virtual_)
{
MonoInst *ptr;
int dreg;
gpointer trampoline;
MonoInst *obj, *tramp_ins;
guint8 **code_slot;
if (virtual_ && !cfg->llvm_only) {
MonoMethod *invoke = mono_get_delegate_invoke_internal (klass);
g_assert (invoke);
//FIXME verify & fix any issue with removing invoke_context_used restriction
if (invoke_context_used || !mono_get_delegate_virtual_invoke_impl (mono_method_signature_internal (invoke), target_method_context_used ? NULL : method))
return NULL;
}
obj = handle_alloc (cfg, klass, FALSE, invoke_context_used);
if (!obj)
return NULL;
/* Inline the contents of mono_delegate_ctor */
/* Set target field */
/* Optimize away setting of NULL target */
if (!MONO_INS_IS_PCONST_NULL (target)) {
if (!(method->flags & METHOD_ATTRIBUTE_STATIC)) {
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, target->dreg, 0);
MONO_EMIT_NEW_COND_EXC (cfg, EQ, "NullReferenceException");
}
if (!mini_debug_options.weak_memory_model)
mini_emit_memory_barrier (cfg, MONO_MEMORY_BARRIER_REL);
MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STORE_MEMBASE_REG, obj->dreg, MONO_STRUCT_OFFSET (MonoDelegate, target), target->dreg);
if (cfg->gen_write_barriers) {
dreg = alloc_preg (cfg);
EMIT_NEW_BIALU_IMM (cfg, ptr, OP_PADD_IMM, dreg, obj->dreg, MONO_STRUCT_OFFSET (MonoDelegate, target));
mini_emit_write_barrier (cfg, ptr, target);
}
}
/* Set method field */
if (!(target_method_context_used || invoke_context_used) && !cfg->llvm_only) {
//If compiling with gsharing enabled, it's faster to load the method from the delegate trampoline info than to use an rgctx slot
MonoInst *method_ins = emit_get_rgctx_method (cfg, target_method_context_used, method, MONO_RGCTX_INFO_METHOD);
MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STORE_MEMBASE_REG, obj->dreg, MONO_STRUCT_OFFSET (MonoDelegate, method), method_ins->dreg);
}
if (cfg->llvm_only) {
if (virtual_) {
MonoInst *args [ ] = {
obj,
target,
emit_get_rgctx_method (cfg, target_method_context_used, method, MONO_RGCTX_INFO_METHOD)
};
mono_emit_jit_icall (cfg, mini_llvmonly_init_delegate_virtual, args);
return obj;
}
}
/*
* To avoid looking up the compiled code belonging to the target method
* in mono_delegate_trampoline (), we allocate a per-domain memory slot to
* store it, and we fill it after the method has been compiled.
*/
if (!method->dynamic && !cfg->llvm_only) {
MonoInst *code_slot_ins;
if (target_method_context_used) {
code_slot_ins = emit_get_rgctx_method (cfg, target_method_context_used, method, MONO_RGCTX_INFO_METHOD_DELEGATE_CODE);
} else {
MonoJitMemoryManager *jit_mm = (MonoJitMemoryManager*)cfg->jit_mm;
jit_mm_lock (jit_mm);
if (!jit_mm->method_code_hash)
jit_mm->method_code_hash = g_hash_table_new (NULL, NULL);
code_slot = (guint8 **)g_hash_table_lookup (jit_mm->method_code_hash, method);
if (!code_slot) {
code_slot = (guint8 **)mono_mem_manager_alloc0 (jit_mm->mem_manager, sizeof (gpointer));
g_hash_table_insert (jit_mm->method_code_hash, method, code_slot);
}
jit_mm_unlock (jit_mm);
code_slot_ins = mini_emit_runtime_constant (cfg, MONO_PATCH_INFO_METHOD_CODE_SLOT, method);
}
MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STORE_MEMBASE_REG, obj->dreg, MONO_STRUCT_OFFSET (MonoDelegate, method_code), code_slot_ins->dreg);
}
if (target_method_context_used || invoke_context_used) {
tramp_ins = emit_get_rgctx_dele_tramp (cfg, target_method_context_used | invoke_context_used, klass, method, virtual_, MONO_RGCTX_INFO_DELEGATE_TRAMP_INFO);
//This is emitted as a constant store for the non-shared case.
//We copy from the delegate trampoline info as it's faster than an rgctx fetch
dreg = alloc_preg (cfg);
if (!cfg->llvm_only) {
MONO_EMIT_NEW_LOAD_MEMBASE (cfg, dreg, tramp_ins->dreg, MONO_STRUCT_OFFSET (MonoDelegateTrampInfo, method));
MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STORE_MEMBASE_REG, obj->dreg, MONO_STRUCT_OFFSET (MonoDelegate, method), dreg);
}
} else if (cfg->compile_aot) {
MonoDelegateClassMethodPair *del_tramp;
del_tramp = (MonoDelegateClassMethodPair *)mono_mempool_alloc0 (cfg->mempool, sizeof (MonoDelegateClassMethodPair));
del_tramp->klass = klass;
del_tramp->method = method;
del_tramp->is_virtual = virtual_;
EMIT_NEW_AOTCONST (cfg, tramp_ins, MONO_PATCH_INFO_DELEGATE_TRAMPOLINE, del_tramp);
} else {
if (virtual_)
trampoline = mono_create_delegate_virtual_trampoline (klass, method);
else
trampoline = mono_create_delegate_trampoline_info (klass, method);
EMIT_NEW_PCONST (cfg, tramp_ins, trampoline);
}
if (cfg->llvm_only) {
MonoInst *args [ ] = {
obj,
tramp_ins
};
mono_emit_jit_icall (cfg, mini_llvmonly_init_delegate, args);
return obj;
}
/* Set invoke_impl field */
if (virtual_) {
MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STORE_MEMBASE_REG, obj->dreg, MONO_STRUCT_OFFSET (MonoDelegate, invoke_impl), tramp_ins->dreg);
} else {
dreg = alloc_preg (cfg);
MONO_EMIT_NEW_LOAD_MEMBASE (cfg, dreg, tramp_ins->dreg, MONO_STRUCT_OFFSET (MonoDelegateTrampInfo, invoke_impl));
MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STORE_MEMBASE_REG, obj->dreg, MONO_STRUCT_OFFSET (MonoDelegate, invoke_impl), dreg);
dreg = alloc_preg (cfg);
MONO_EMIT_NEW_LOAD_MEMBASE (cfg, dreg, tramp_ins->dreg, MONO_STRUCT_OFFSET (MonoDelegateTrampInfo, method_ptr));
MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STORE_MEMBASE_REG, obj->dreg, MONO_STRUCT_OFFSET (MonoDelegate, method_ptr), dreg);
}
dreg = alloc_preg (cfg);
MONO_EMIT_NEW_ICONST (cfg, dreg, virtual_ ? 1 : 0);
MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STOREI1_MEMBASE_REG, obj->dreg, MONO_STRUCT_OFFSET (MonoDelegate, method_is_virtual), dreg);
/* All the checks which are in mono_delegate_ctor () are done by the delegate trampoline */
return obj;
}
/*
* handle_constrained_gsharedvt_call:
*
* Handle constrained calls where the receiver is a gsharedvt type.
* Return the instruction representing the call. Set the cfg exception on failure.
*/
static MonoInst*
handle_constrained_gsharedvt_call (MonoCompile *cfg, MonoMethod *cmethod, MonoMethodSignature *fsig, MonoInst **sp, MonoClass *constrained_class,
gboolean *ref_emit_widen)
{
MonoInst *ins = NULL;
gboolean emit_widen = *ref_emit_widen;
gboolean supported;
/*
* Constrained calls need to behave differently at runtime depending on whether the receiver is instantiated as a ref type or as a vtype.
* This is hard to do with the current call code, since we would have to emit a branch and two different calls. So instead, we
* pack the arguments into an array, and do the rest of the work in an icall.
*/
supported = ((cmethod->klass == mono_defaults.object_class) || mono_class_is_interface (cmethod->klass) || (!m_class_is_valuetype (cmethod->klass) && m_class_get_image (cmethod->klass) != mono_defaults.corlib));
if (supported)
supported = (MONO_TYPE_IS_VOID (fsig->ret) || MONO_TYPE_IS_PRIMITIVE (fsig->ret) || MONO_TYPE_IS_REFERENCE (fsig->ret) || MONO_TYPE_ISSTRUCT (fsig->ret) || m_class_is_enumtype (mono_class_from_mono_type_internal (fsig->ret)) || mini_is_gsharedvt_type (fsig->ret));
if (supported) {
if (fsig->param_count == 0 || (!fsig->hasthis && fsig->param_count == 1)) {
supported = TRUE;
} else {
supported = TRUE;
for (int i = 0; i < fsig->param_count; ++i) {
if (!(m_type_is_byref (fsig->params [i]) || MONO_TYPE_IS_PRIMITIVE (fsig->params [i]) || MONO_TYPE_IS_REFERENCE (fsig->params [i]) || MONO_TYPE_ISSTRUCT (fsig->params [i]) || mini_is_gsharedvt_type (fsig->params [i])))
supported = FALSE;
}
}
}
if (supported) {
MonoInst *args [5];
/*
* This case handles calls to
* - object:ToString()/Equals()/GetHashCode(),
* - System.IComparable<T>:CompareTo()
* - System.IEquatable<T>:Equals ()
* plus some simple interface calls enough to support AsyncTaskMethodBuilder.
*/
if (fsig->hasthis)
args [0] = sp [0];
else
EMIT_NEW_PCONST (cfg, args [0], NULL);
args [1] = emit_get_rgctx_method (cfg, mono_method_check_context_used (cmethod), cmethod, MONO_RGCTX_INFO_METHOD);
args [2] = mini_emit_get_rgctx_klass (cfg, mono_class_check_context_used (constrained_class), constrained_class, MONO_RGCTX_INFO_KLASS);
/* !fsig->hasthis is for the wrapper for the Object.GetType () icall or static virtual methods */
if ((fsig->hasthis || m_method_is_static (cmethod)) && fsig->param_count) {
/* Call mono_gsharedvt_constrained_call (gpointer mp, MonoMethod *cmethod, MonoClass *klass, gboolean *deref_args, gpointer *args) */
gboolean has_gsharedvt = FALSE;
for (int i = 0; i < fsig->param_count; ++i) {
if (mini_is_gsharedvt_type (fsig->params [i]))
has_gsharedvt = TRUE;
}
/* Pass an array of bools which signal whether the corresponding argument is a gsharedvt ref type */
if (has_gsharedvt) {
MONO_INST_NEW (cfg, ins, OP_LOCALLOC_IMM);
ins->dreg = alloc_preg (cfg);
ins->inst_imm = fsig->param_count;
MONO_ADD_INS (cfg->cbb, ins);
args [3] = ins;
} else {
EMIT_NEW_PCONST (cfg, args [3], 0);
}
/* Pass the arguments in a localloc-ed array using the format expected by runtime_invoke () */
MONO_INST_NEW (cfg, ins, OP_LOCALLOC_IMM);
ins->dreg = alloc_preg (cfg);
ins->inst_imm = fsig->param_count * sizeof (target_mgreg_t);
MONO_ADD_INS (cfg->cbb, ins);
args [4] = ins;
for (int i = 0; i < fsig->param_count; ++i) {
int addr_reg;
if (mini_is_gsharedvt_type (fsig->params [i])) {
MonoInst *is_deref;
int deref_arg_reg;
ins = mini_emit_get_gsharedvt_info_klass (cfg, mono_class_from_mono_type_internal (fsig->params [i]), MONO_RGCTX_INFO_CLASS_BOX_TYPE);
deref_arg_reg = alloc_preg (cfg);
/* deref_arg = BOX_TYPE != MONO_GSHAREDVT_BOX_TYPE_VTYPE */
EMIT_NEW_BIALU_IMM (cfg, is_deref, OP_ISUB_IMM, deref_arg_reg, ins->dreg, 1);
MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STOREI1_MEMBASE_REG, args [3]->dreg, i, is_deref->dreg);
} else if (has_gsharedvt) {
MONO_EMIT_NEW_STORE_MEMBASE_IMM (cfg, OP_STOREI1_MEMBASE_IMM, args [3]->dreg, i, 0);
}
MonoInst *arg = sp [i + fsig->hasthis];
if (mini_is_gsharedvt_type (fsig->params [i]) || MONO_TYPE_IS_PRIMITIVE (fsig->params [i]) || MONO_TYPE_ISSTRUCT (fsig->params [i])) {
EMIT_NEW_VARLOADA_VREG (cfg, ins, arg->dreg, fsig->params [i]);
addr_reg = ins->dreg;
EMIT_NEW_STORE_MEMBASE (cfg, ins, OP_STORE_MEMBASE_REG, args [4]->dreg, i * sizeof (target_mgreg_t), addr_reg);
} else {
EMIT_NEW_STORE_MEMBASE (cfg, ins, OP_STORE_MEMBASE_REG, args [4]->dreg, i * sizeof (target_mgreg_t), arg->dreg);
}
}
} else {
EMIT_NEW_ICONST (cfg, args [3], 0);
EMIT_NEW_ICONST (cfg, args [4], 0);
}
ins = mono_emit_jit_icall (cfg, mono_gsharedvt_constrained_call, args);
emit_widen = FALSE;
if (mini_is_gsharedvt_type (fsig->ret)) {
ins = handle_unbox_gsharedvt (cfg, mono_class_from_mono_type_internal (fsig->ret), ins);
} else if (MONO_TYPE_IS_PRIMITIVE (fsig->ret) || MONO_TYPE_ISSTRUCT (fsig->ret) || m_class_is_enumtype (mono_class_from_mono_type_internal (fsig->ret))) {
MonoInst *add;
/* Unbox */
NEW_BIALU_IMM (cfg, add, OP_ADD_IMM, alloc_dreg (cfg, STACK_MP), ins->dreg, MONO_ABI_SIZEOF (MonoObject));
MONO_ADD_INS (cfg->cbb, add);
/* Load value */
NEW_LOAD_MEMBASE_TYPE (cfg, ins, fsig->ret, add->dreg, 0);
MONO_ADD_INS (cfg->cbb, ins);
/* ins represents the call result */
}
} else {
GSHAREDVT_FAILURE (CEE_CALLVIRT);
}
*ref_emit_widen = emit_widen;
return ins;
exception_exit:
return NULL;
}
static void
mono_emit_load_got_addr (MonoCompile *cfg)
{
MonoInst *getaddr, *dummy_use;
if (!cfg->got_var || cfg->got_var_allocated)
return;
MONO_INST_NEW (cfg, getaddr, OP_LOAD_GOTADDR);
getaddr->cil_code = cfg->header->code;
getaddr->dreg = cfg->got_var->dreg;
/* Add it to the start of the first bblock */
if (cfg->bb_entry->code) {
getaddr->next = cfg->bb_entry->code;
cfg->bb_entry->code = getaddr;
}
else
MONO_ADD_INS (cfg->bb_entry, getaddr);
cfg->got_var_allocated = TRUE;
/*
* Add a dummy use to keep the got_var alive, since real uses might
* only be generated by the back ends.
* Add it to end_bblock, so the variable's lifetime covers the whole
* method.
* It would be better to make the usage of the got var explicit in all
 * cases when the backend needs it (e.g. calls, throw etc.), so this
* wouldn't be needed.
*/
NEW_DUMMY_USE (cfg, dummy_use, cfg->got_var);
MONO_ADD_INS (cfg->bb_exit, dummy_use);
}
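/*
 * get_constrained_method:
 *
 *   Resolve the actual target of a constrained. call to CIL_METHOD on
 * CONSTRAINED_CLASS. Under gshared, a constraint which is a generic
 * parameter is left unresolved for the caller to handle.
 */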
static MonoMethod*
get_constrained_method (MonoCompile *cfg, MonoImage *image, guint32 token,
MonoMethod *cil_method, MonoClass *constrained_class,
MonoGenericContext *generic_context)
{
MonoMethod *cmethod = cil_method;
gboolean constrained_is_generic_param =
m_class_get_byval_arg (constrained_class)->type == MONO_TYPE_VAR ||
m_class_get_byval_arg (constrained_class)->type == MONO_TYPE_MVAR;
if (cfg->current_method->wrapper_type != MONO_WRAPPER_NONE) {
if (cfg->verbose_level > 2)
printf ("DM Constrained call to %s\n", mono_type_get_full_name (constrained_class));
if (!(constrained_is_generic_param &&
cfg->gshared)) {
cmethod = mono_get_method_constrained_with_method (image, cil_method, constrained_class, generic_context, cfg->error);
CHECK_CFG_ERROR;
}
} else {
if (cfg->verbose_level > 2)
printf ("Constrained call to %s\n", mono_type_get_full_name (constrained_class));
if (constrained_is_generic_param && cfg->gshared) {
/*
* This is needed since get_method_constrained can't find
* the method in klass representing a type var.
* The type var is guaranteed to be a reference type in this
* case.
*/
if (!mini_is_gsharedvt_klass (constrained_class))
g_assert (!m_class_is_valuetype (cmethod->klass));
} else {
cmethod = mono_get_method_constrained_checked (image, token, constrained_class, generic_context, &cil_method, cfg->error);
CHECK_CFG_ERROR;
}
}
return cmethod;
mono_error_exit:
return NULL;
}
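/* Return whether METHOD is one of the corlib ThrowHelper.Throw* methods, which never return. */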
static gboolean
method_does_not_return (MonoMethod *method)
{
// FIXME: Under netcore, these are decorated with the [DoesNotReturn] attribute
return m_class_get_image (method->klass) == mono_defaults.corlib &&
!strcmp (m_class_get_name (method->klass), "ThrowHelper") &&
strstr (method->name, "Throw") == method->name &&
!method->is_inflated;
}
static int inline_limit, llvm_jit_inline_limit, llvm_aot_inline_limit;
static gboolean inline_limit_inited;
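/*
 * mono_method_check_inlining:
 *
 *   Return whether METHOD is eligible for inlining, based on its size and
 * implementation flags, the MONO_INLINELIMIT setting, and the state of its
 * class's static constructor.
 */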
static gboolean
mono_method_check_inlining (MonoCompile *cfg, MonoMethod *method)
{
MonoMethodHeaderSummary header;
MonoVTable *vtable;
int limit;
#ifdef MONO_ARCH_SOFT_FLOAT_FALLBACK
MonoMethodSignature *sig = mono_method_signature_internal (method);
int i;
#endif
if (cfg->disable_inline)
return FALSE;
if (cfg->gsharedvt)
return FALSE;
if (cfg->inline_depth > 10)
return FALSE;
if (!mono_method_get_header_summary (method, &header))
return FALSE;
/* runtime, icall and pinvoke are checked by the summary call */
if ((method->iflags & METHOD_IMPL_ATTRIBUTE_NOINLINING) ||
(method->iflags & METHOD_IMPL_ATTRIBUTE_SYNCHRONIZED) ||
header.has_clauses)
return FALSE;
if (method->flags & METHOD_ATTRIBUTE_REQSECOBJ)
/* Used to mark methods containing StackCrawlMark locals */
return FALSE;
/* also consider num_locals? */
/* Do the size check early to avoid creating vtables */
if (!inline_limit_inited) {
char *inlinelimit;
if ((inlinelimit = g_getenv ("MONO_INLINELIMIT"))) {
inline_limit = atoi (inlinelimit);
llvm_jit_inline_limit = inline_limit;
llvm_aot_inline_limit = inline_limit;
g_free (inlinelimit);
} else {
inline_limit = INLINE_LENGTH_LIMIT;
llvm_jit_inline_limit = LLVM_JIT_INLINE_LENGTH_LIMIT;
llvm_aot_inline_limit = LLVM_AOT_INLINE_LENGTH_LIMIT;
}
inline_limit_inited = TRUE;
}
if (COMPILE_LLVM (cfg)) {
if (cfg->compile_aot)
limit = llvm_aot_inline_limit;
else
limit = llvm_jit_inline_limit;
} else {
limit = inline_limit;
}
if (header.code_size >= limit && !(method->iflags & METHOD_IMPL_ATTRIBUTE_AGGRESSIVE_INLINING))
return FALSE;
/*
* if we can initialize the class of the method right away, we do,
* otherwise we don't allow inlining if the class needs initialization,
* since it would mean inserting a call to mono_runtime_class_init()
* inside the inlined code
*/
if (cfg->gshared && m_class_has_cctor (method->klass) && mini_class_check_context_used (cfg, method->klass))
return FALSE;
{
/* The AggressiveInlining hint is a good excuse to force that cctor to run. */
if ((cfg->opt & MONO_OPT_AGGRESSIVE_INLINING) || method->iflags & METHOD_IMPL_ATTRIBUTE_AGGRESSIVE_INLINING) {
if (m_class_has_cctor (method->klass)) {
ERROR_DECL (error);
vtable = mono_class_vtable_checked (method->klass, error);
if (!is_ok (error)) {
mono_error_cleanup (error);
return FALSE;
}
if (!cfg->compile_aot) {
if (!mono_runtime_class_init_full (vtable, error)) {
mono_error_cleanup (error);
return FALSE;
}
}
}
} else if (mono_class_is_before_field_init (method->klass)) {
if (cfg->run_cctors && m_class_has_cctor (method->klass)) {
ERROR_DECL (error);
/* FIXME: it would be easier and lazier to just use mono_class_try_get_vtable */
if (!m_class_get_runtime_vtable (method->klass))
/* No vtable created yet */
return FALSE;
vtable = mono_class_vtable_checked (method->klass, error);
if (!is_ok (error)) {
mono_error_cleanup (error);
return FALSE;
}
/* This makes it so that inlining cannot trigger */
/* .cctors: too many apps depend on them */
/* running in a specific order... */
if (! vtable->initialized)
return FALSE;
if (!mono_runtime_class_init_full (vtable, error)) {
mono_error_cleanup (error);
return FALSE;
}
}
} else if (mono_class_needs_cctor_run (method->klass, NULL)) {
ERROR_DECL (error);
if (!m_class_get_runtime_vtable (method->klass))
/* No vtable created yet */
return FALSE;
vtable = mono_class_vtable_checked (method->klass, error);
if (!is_ok (error)) {
mono_error_cleanup (error);
return FALSE;
}
if (!vtable->initialized)
return FALSE;
}
}
#ifdef MONO_ARCH_SOFT_FLOAT_FALLBACK
if (mono_arch_is_soft_float ()) {
/* FIXME: */
if (sig->ret && sig->ret->type == MONO_TYPE_R4)
return FALSE;
for (i = 0; i < sig->param_count; ++i)
if (!m_type_is_byref (sig->params [i]) && sig->params [i]->type == MONO_TYPE_R4)
return FALSE;
}
#endif
if (g_list_find (cfg->dont_inline, method))
return FALSE;
if (mono_profiler_get_call_instrumentation_flags (method))
return FALSE;
if (mono_profiler_coverage_instrumentation_enabled (method))
return FALSE;
if (method_does_not_return (method))
return FALSE;
return TRUE;
}
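/* Return whether a field access in METHOD requires KLASS's .cctor to be run first. */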
static gboolean
mini_field_access_needs_cctor_run (MonoCompile *cfg, MonoMethod *method, MonoClass *klass, MonoVTable *vtable)
{
if (!cfg->compile_aot) {
g_assert (vtable);
if (vtable->initialized)
return FALSE;
}
if (mono_class_is_before_field_init (klass)) {
if (cfg->method == method)
return FALSE;
}
if (!mono_class_needs_cctor_run (klass, method))
return FALSE;
if (! (method->flags & METHOD_ATTRIBUTE_STATIC) && (klass == method->klass))
/* The initialization is already done before the method is called */
return FALSE;
return TRUE;
}
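/*
 * mini_emit_sext_index_reg:
 *
 *   Convert the array index in INDEX to the native pointer width, returning
 * the vreg holding the converted value.
 */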
int
mini_emit_sext_index_reg (MonoCompile *cfg, MonoInst *index)
{
int index_reg = index->dreg;
int index2_reg;
#if SIZEOF_REGISTER == 8
/* The array reg is 64 bits but the index reg is only 32 */
if (COMPILE_LLVM (cfg)) {
/*
* abcrem can't handle the OP_SEXT_I4, so add this after abcrem,
* during OP_BOUNDS_CHECK decomposition, and in the implementation
* of OP_X86_LEA for llvm.
*/
index2_reg = index_reg;
} else {
index2_reg = alloc_preg (cfg);
MONO_EMIT_NEW_UNALU (cfg, OP_SEXT_I4, index2_reg, index_reg);
}
#else
if (index->type == STACK_I8) {
index2_reg = alloc_preg (cfg);
MONO_EMIT_NEW_UNALU (cfg, OP_LCONV_TO_I4, index2_reg, index_reg);
} else {
index2_reg = index_reg;
}
#endif
return index2_reg;
}
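/*
 * mini_emit_ldelema_1_ins:
 *
 *   Emit IR computing the address of the element at index INDEX in the
 * one-dimensional array ARR. BCHECK enables the bounds check, BOUNDED
 * handles arrays with a non-zero lower bound.
 */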
MonoInst*
mini_emit_ldelema_1_ins (MonoCompile *cfg, MonoClass *klass, MonoInst *arr, MonoInst *index, gboolean bcheck, gboolean bounded)
{
MonoInst *ins;
guint32 size;
int mult_reg, add_reg, array_reg, index2_reg, bounds_reg, lower_bound_reg, realidx2_reg;
int context_used;
if (mini_is_gsharedvt_variable_klass (klass)) {
size = -1;
} else {
mono_class_init_internal (klass);
size = mono_class_array_element_size (klass);
}
mult_reg = alloc_preg (cfg);
array_reg = arr->dreg;
realidx2_reg = index2_reg = mini_emit_sext_index_reg (cfg, index);
if (bounded) {
bounds_reg = alloc_preg (cfg);
lower_bound_reg = alloc_preg (cfg);
realidx2_reg = alloc_preg (cfg);
MonoBasicBlock *is_null_bb = NULL;
NEW_BBLOCK (cfg, is_null_bb);
// gint32 lower_bound = 0;
// if (arr->bounds)
// lower_bound = arr->bounds [0].lower_bound;
// realidx2 = index2 - lower_bound;
MONO_EMIT_NEW_PCONST (cfg, lower_bound_reg, NULL);
MONO_EMIT_NEW_LOAD_MEMBASE (cfg, bounds_reg, arr->dreg, MONO_STRUCT_OFFSET (MonoArray, bounds));
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, bounds_reg, 0);
MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_PBEQ, is_null_bb);
MONO_EMIT_NEW_LOAD_MEMBASE_OP (cfg, OP_LOADI4_MEMBASE, lower_bound_reg, bounds_reg, MONO_STRUCT_OFFSET (MonoArrayBounds, lower_bound));
MONO_START_BB (cfg, is_null_bb);
MONO_EMIT_NEW_BIALU (cfg, OP_PSUB, realidx2_reg, index2_reg, lower_bound_reg);
}
if (bcheck)
MONO_EMIT_BOUNDS_CHECK (cfg, array_reg, MonoArray, max_length, realidx2_reg);
#if defined(TARGET_X86) || defined(TARGET_AMD64)
if (size == 1 || size == 2 || size == 4 || size == 8) {
static const int fast_log2 [] = { 1, 0, 1, -1, 2, -1, -1, -1, 3 };
EMIT_NEW_X86_LEA (cfg, ins, array_reg, realidx2_reg, fast_log2 [size], MONO_STRUCT_OFFSET (MonoArray, vector));
ins->klass = klass;
ins->type = STACK_MP;
return ins;
}
#endif
add_reg = alloc_ireg_mp (cfg);
if (size == -1) {
MonoInst *rgctx_ins;
/* gsharedvt */
g_assert (cfg->gshared);
context_used = mini_class_check_context_used (cfg, klass);
g_assert (context_used);
rgctx_ins = mini_emit_get_gsharedvt_info_klass (cfg, klass, MONO_RGCTX_INFO_ARRAY_ELEMENT_SIZE);
MONO_EMIT_NEW_BIALU (cfg, OP_IMUL, mult_reg, realidx2_reg, rgctx_ins->dreg);
} else {
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_MUL_IMM, mult_reg, realidx2_reg, size);
}
MONO_EMIT_NEW_BIALU (cfg, OP_PADD, add_reg, array_reg, mult_reg);
NEW_BIALU_IMM (cfg, ins, OP_PADD_IMM, add_reg, add_reg, MONO_STRUCT_OFFSET (MonoArray, vector));
ins->klass = klass;
ins->type = STACK_MP;
MONO_ADD_INS (cfg->cbb, ins);
return ins;
}
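/* Emit IR computing the address of an element of the rank 2 array ARR, range checking both indexes. */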
static MonoInst*
mini_emit_ldelema_2_ins (MonoCompile *cfg, MonoClass *klass, MonoInst *arr, MonoInst *index_ins1, MonoInst *index_ins2)
{
int bounds_reg = alloc_preg (cfg);
int add_reg = alloc_ireg_mp (cfg);
int mult_reg = alloc_preg (cfg);
int mult2_reg = alloc_preg (cfg);
int low1_reg = alloc_preg (cfg);
int low2_reg = alloc_preg (cfg);
int high1_reg = alloc_preg (cfg);
int high2_reg = alloc_preg (cfg);
int realidx1_reg = alloc_preg (cfg);
int realidx2_reg = alloc_preg (cfg);
int sum_reg = alloc_preg (cfg);
int index1, index2;
MonoInst *ins;
guint32 size;
mono_class_init_internal (klass);
size = mono_class_array_element_size (klass);
index1 = index_ins1->dreg;
index2 = index_ins2->dreg;
#if SIZEOF_REGISTER == 8
/* The array reg is 64 bits but the index reg is only 32 */
if (COMPILE_LLVM (cfg)) {
/* Not needed */
} else {
int tmpreg = alloc_preg (cfg);
MONO_EMIT_NEW_UNALU (cfg, OP_SEXT_I4, tmpreg, index1);
index1 = tmpreg;
tmpreg = alloc_preg (cfg);
MONO_EMIT_NEW_UNALU (cfg, OP_SEXT_I4, tmpreg, index2);
index2 = tmpreg;
}
#else
// FIXME: Do we need to do something here for i8 indexes, like in ldelema_1_ins ?
#endif
/* range checking */
MONO_EMIT_NEW_LOAD_MEMBASE (cfg, bounds_reg,
arr->dreg, MONO_STRUCT_OFFSET (MonoArray, bounds));
MONO_EMIT_NEW_LOAD_MEMBASE_OP (cfg, OP_LOADI4_MEMBASE, low1_reg,
bounds_reg, MONO_STRUCT_OFFSET (MonoArrayBounds, lower_bound));
MONO_EMIT_NEW_BIALU (cfg, OP_PSUB, realidx1_reg, index1, low1_reg);
MONO_EMIT_NEW_LOAD_MEMBASE_OP (cfg, OP_LOADI4_MEMBASE, high1_reg,
bounds_reg, MONO_STRUCT_OFFSET (MonoArrayBounds, length));
MONO_EMIT_NEW_BIALU (cfg, OP_COMPARE, -1, high1_reg, realidx1_reg);
MONO_EMIT_NEW_COND_EXC (cfg, LE_UN, "IndexOutOfRangeException");
MONO_EMIT_NEW_LOAD_MEMBASE_OP (cfg, OP_LOADI4_MEMBASE, low2_reg,
bounds_reg, sizeof (MonoArrayBounds) + MONO_STRUCT_OFFSET (MonoArrayBounds, lower_bound));
MONO_EMIT_NEW_BIALU (cfg, OP_PSUB, realidx2_reg, index2, low2_reg);
MONO_EMIT_NEW_LOAD_MEMBASE_OP (cfg, OP_LOADI4_MEMBASE, high2_reg,
bounds_reg, sizeof (MonoArrayBounds) + MONO_STRUCT_OFFSET (MonoArrayBounds, length));
MONO_EMIT_NEW_BIALU (cfg, OP_COMPARE, -1, high2_reg, realidx2_reg);
MONO_EMIT_NEW_COND_EXC (cfg, LE_UN, "IndexOutOfRangeException");
MONO_EMIT_NEW_BIALU (cfg, OP_PMUL, mult_reg, high2_reg, realidx1_reg);
MONO_EMIT_NEW_BIALU (cfg, OP_PADD, sum_reg, mult_reg, realidx2_reg);
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_PMUL_IMM, mult2_reg, sum_reg, size);
MONO_EMIT_NEW_BIALU (cfg, OP_PADD, add_reg, mult2_reg, arr->dreg);
NEW_BIALU_IMM (cfg, ins, OP_PADD_IMM, add_reg, add_reg, MONO_STRUCT_OFFSET (MonoArray, vector));
ins->type = STACK_MP;
ins->klass = klass;
MONO_ADD_INS (cfg->cbb, ins);
return ins;
}
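/*
 * mini_emit_ldelema_ins:
 *
 *   Emit the element address computation for an array Get/Set/Address
 * method, falling back to the managed Address helper when the rank or
 * element type cannot be handled inline.
 */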
static MonoInst*
mini_emit_ldelema_ins (MonoCompile *cfg, MonoMethod *cmethod, MonoInst **sp, guchar *ip, gboolean is_set)
{
int rank;
MonoInst *addr;
MonoMethod *addr_method;
int element_size;
MonoClass *eclass = m_class_get_element_class (cmethod->klass);
gboolean bounded = m_class_get_byval_arg (cmethod->klass) ? m_class_get_byval_arg (cmethod->klass)->type == MONO_TYPE_ARRAY : FALSE;
rank = mono_method_signature_internal (cmethod)->param_count - (is_set? 1: 0);
if (rank == 1)
return mini_emit_ldelema_1_ins (cfg, eclass, sp [0], sp [1], TRUE, bounded);
/* mini_emit_ldelema_2_ins depends on OP_LMUL */
if (!cfg->backend->emulate_mul_div && rank == 2 && (cfg->opt & MONO_OPT_INTRINS) && !mini_is_gsharedvt_variable_klass (eclass)) {
return mini_emit_ldelema_2_ins (cfg, eclass, sp [0], sp [1], sp [2]);
}
if (mini_is_gsharedvt_variable_klass (eclass))
element_size = 0;
else
element_size = mono_class_array_element_size (eclass);
addr_method = mono_marshal_get_array_address (rank, element_size);
addr = mono_emit_method_call (cfg, addr_method, sp, NULL);
return addr;
}
static gboolean
mini_class_is_reference (MonoClass *klass)
{
return mini_type_is_reference (m_class_get_byval_arg (klass));
}
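/*
 * mini_emit_array_store:
 *
 *   Emit IR for a stelem on an array with element type KLASS. For reference
 * element types with safety checks, a stelemref helper performs the array
 * covariance check.
 */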
MonoInst*
mini_emit_array_store (MonoCompile *cfg, MonoClass *klass, MonoInst **sp, gboolean safety_checks)
{
if (safety_checks && mini_class_is_reference (klass) &&
!(MONO_INS_IS_PCONST_NULL (sp [2]))) {
MonoClass *obj_array = mono_array_class_get_cached (mono_defaults.object_class);
MonoMethod *helper;
MonoInst *iargs [3];
if (sp [0]->type != STACK_OBJ)
return NULL;
if (sp [2]->type != STACK_OBJ)
return NULL;
iargs [2] = sp [2];
iargs [1] = sp [1];
iargs [0] = sp [0];
MonoClass *array_class = sp [0]->klass;
if (array_class && m_class_get_rank (array_class) == 1) {
MonoClass *eclass = m_class_get_element_class (array_class);
if (m_class_is_sealed (eclass)) {
helper = mono_marshal_get_virtual_stelemref (array_class);
/* Make a non-virtual call if possible */
return mono_emit_method_call (cfg, helper, iargs, NULL);
}
}
helper = mono_marshal_get_virtual_stelemref (obj_array);
if (!helper->slot)
mono_class_setup_vtable (obj_array);
g_assert (helper->slot);
return mono_emit_method_call (cfg, helper, iargs, sp [0]);
} else {
MonoInst *ins;
if (mini_is_gsharedvt_variable_klass (klass)) {
MonoInst *addr;
// FIXME-VT: OP_ICONST optimization
addr = mini_emit_ldelema_1_ins (cfg, klass, sp [0], sp [1], TRUE, FALSE);
EMIT_NEW_STORE_MEMBASE_TYPE (cfg, ins, m_class_get_byval_arg (klass), addr->dreg, 0, sp [2]->dreg);
ins->opcode = OP_STOREV_MEMBASE;
} else if (sp [1]->opcode == OP_ICONST) {
int array_reg = sp [0]->dreg;
int index_reg = sp [1]->dreg;
int offset = (mono_class_array_element_size (klass) * sp [1]->inst_c0) + MONO_STRUCT_OFFSET (MonoArray, vector);
if (SIZEOF_REGISTER == 8 && COMPILE_LLVM (cfg) && sp [1]->inst_c0 < 0)
MONO_EMIT_NEW_UNALU (cfg, OP_ZEXT_I4, index_reg, index_reg);
if (safety_checks)
MONO_EMIT_BOUNDS_CHECK (cfg, array_reg, MonoArray, max_length, index_reg);
EMIT_NEW_STORE_MEMBASE_TYPE (cfg, ins, m_class_get_byval_arg (klass), array_reg, offset, sp [2]->dreg);
} else {
MonoInst *addr = mini_emit_ldelema_1_ins (cfg, klass, sp [0], sp [1], safety_checks, FALSE);
if (!mini_debug_options.weak_memory_model && mini_class_is_reference (klass))
mini_emit_memory_barrier (cfg, MONO_MEMORY_BARRIER_REL);
EMIT_NEW_STORE_MEMBASE_TYPE (cfg, ins, m_class_get_byval_arg (klass), addr->dreg, 0, sp [2]->dreg);
if (mini_class_is_reference (klass))
mini_emit_write_barrier (cfg, addr, sp [2]);
}
return ins;
}
}
MonoInst*
mini_emit_memory_barrier (MonoCompile *cfg, int kind)
{
MonoInst *ins = NULL;
MONO_INST_NEW (cfg, ins, OP_MEMORY_BARRIER);
MONO_ADD_INS (cfg->cbb, ins);
ins->backend.memory_barrier_kind = kind;
return ins;
}
/*
* This entry point could be used later for arbitrary method
* redirection.
*/
inline static MonoInst*
mini_redirect_call (MonoCompile *cfg, MonoMethod *method,
MonoMethodSignature *signature, MonoInst **args, MonoInst *this_ins)
{
if (method->klass == mono_defaults.string_class) {
/* managed string allocation support */
if (strcmp (method->name, "FastAllocateString") == 0) {
MonoInst *iargs [2];
MonoVTable *vtable = mono_class_vtable_checked (method->klass, cfg->error);
MonoMethod *managed_alloc = NULL;
mono_error_assert_ok (cfg->error); /* Should not fail since it is System.String */
#ifndef MONO_CROSS_COMPILE
managed_alloc = mono_gc_get_managed_allocator (method->klass, FALSE, FALSE);
#endif
if (!managed_alloc)
return NULL;
EMIT_NEW_VTABLECONST (cfg, iargs [0], vtable);
iargs [1] = args [0];
return mono_emit_method_call (cfg, managed_alloc, iargs, this_ins);
}
}
return NULL;
}
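/* Create local variables for the arguments of an inlined call and store the stack values SP into them. */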
static void
mono_save_args (MonoCompile *cfg, MonoMethodSignature *sig, MonoInst **sp)
{
MonoInst *store, *temp;
int i;
for (i = 0; i < sig->param_count + sig->hasthis; ++i) {
MonoType *argtype = (sig->hasthis && (i == 0)) ? type_from_stack_type (*sp) : sig->params [i - sig->hasthis];
/*
* FIXME: We should use *args++ = sp [0], but that would mean the arg
* would be different than the MonoInst's used to represent arguments, and
* the ldelema implementation can't deal with that.
* Solution: When ldelema is used on an inline argument, create a var for
* it, emit ldelema on that var, and emit the saving code below in
* inline_method () if needed.
*/
temp = mono_compile_create_var (cfg, argtype, OP_LOCAL);
cfg->args [i] = temp;
/* This uses cfg->args [i] which is set by the preceding line */
EMIT_NEW_ARGSTORE (cfg, store, i, *sp);
store->cil_code = sp [0]->cil_code;
sp++;
}
}
#define MONO_INLINE_CALLED_LIMITED_METHODS 1
#define MONO_INLINE_CALLER_LIMITED_METHODS 1
#if (MONO_INLINE_CALLED_LIMITED_METHODS)
static gboolean
check_inline_called_method_name_limit (MonoMethod *called_method)
{
int strncmp_result;
static const char *limit = NULL;
if (limit == NULL) {
const char *limit_string = g_getenv ("MONO_INLINE_CALLED_METHOD_NAME_LIMIT");
if (limit_string != NULL)
limit = limit_string;
else
limit = "";
}
if (limit [0] != '\0') {
char *called_method_name = mono_method_full_name (called_method, TRUE);
strncmp_result = strncmp (called_method_name, limit, strlen (limit));
g_free (called_method_name);
//return (strncmp_result <= 0);
return (strncmp_result == 0);
} else {
return TRUE;
}
}
#endif
#if (MONO_INLINE_CALLER_LIMITED_METHODS)
static gboolean
check_inline_caller_method_name_limit (MonoMethod *caller_method)
{
int strncmp_result;
static const char *limit = NULL;
if (limit == NULL) {
const char *limit_string = g_getenv ("MONO_INLINE_CALLER_METHOD_NAME_LIMIT");
if (limit_string != NULL) {
limit = limit_string;
} else {
limit = "";
}
}
if (limit [0] != '\0') {
char *caller_method_name = mono_method_full_name (caller_method, TRUE);
strncmp_result = strncmp (caller_method_name, limit, strlen (limit));
g_free (caller_method_name);
//return (strncmp_result <= 0);
return (strncmp_result == 0);
} else {
return TRUE;
}
}
#endif
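/*
 * mini_emit_init_rvar:
 *
 *   Emit IR initializing the vreg DREG to the zero value of type RTYPE.
 */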
void
mini_emit_init_rvar (MonoCompile *cfg, int dreg, MonoType *rtype)
{
static double r8_0 = 0.0;
static float r4_0 = 0.0;
MonoInst *ins;
int t;
rtype = mini_get_underlying_type (rtype);
t = rtype->type;
if (m_type_is_byref (rtype)) {
MONO_EMIT_NEW_PCONST (cfg, dreg, NULL);
} else if (t >= MONO_TYPE_BOOLEAN && t <= MONO_TYPE_U4) {
MONO_EMIT_NEW_ICONST (cfg, dreg, 0);
} else if (t == MONO_TYPE_I8 || t == MONO_TYPE_U8) {
MONO_EMIT_NEW_I8CONST (cfg, dreg, 0);
} else if (cfg->r4fp && t == MONO_TYPE_R4) {
MONO_INST_NEW (cfg, ins, OP_R4CONST);
ins->type = STACK_R4;
ins->inst_p0 = (void*)&r4_0;
ins->dreg = dreg;
MONO_ADD_INS (cfg->cbb, ins);
} else if (t == MONO_TYPE_R4 || t == MONO_TYPE_R8) {
MONO_INST_NEW (cfg, ins, OP_R8CONST);
ins->type = STACK_R8;
ins->inst_p0 = (void*)&r8_0;
ins->dreg = dreg;
MONO_ADD_INS (cfg->cbb, ins);
} else if ((t == MONO_TYPE_VALUETYPE) || (t == MONO_TYPE_TYPEDBYREF) ||
((t == MONO_TYPE_GENERICINST) && mono_type_generic_inst_is_valuetype (rtype))) {
MONO_EMIT_NEW_VZERO (cfg, dreg, mono_class_from_mono_type_internal (rtype));
} else if (((t == MONO_TYPE_VAR) || (t == MONO_TYPE_MVAR)) && mini_type_var_is_vt (rtype)) {
MONO_EMIT_NEW_VZERO (cfg, dreg, mono_class_from_mono_type_internal (rtype));
} else {
MONO_EMIT_NEW_PCONST (cfg, dreg, NULL);
}
}
static void
emit_dummy_init_rvar (MonoCompile *cfg, int dreg, MonoType *rtype)
{
int t;
rtype = mini_get_underlying_type (rtype);
t = rtype->type;
if (m_type_is_byref (rtype)) {
MONO_EMIT_NEW_DUMMY_INIT (cfg, dreg, OP_DUMMY_PCONST);
} else if (t >= MONO_TYPE_BOOLEAN && t <= MONO_TYPE_U4) {
MONO_EMIT_NEW_DUMMY_INIT (cfg, dreg, OP_DUMMY_ICONST);
} else if (t == MONO_TYPE_I8 || t == MONO_TYPE_U8) {
MONO_EMIT_NEW_DUMMY_INIT (cfg, dreg, OP_DUMMY_I8CONST);
} else if (cfg->r4fp && t == MONO_TYPE_R4) {
MONO_EMIT_NEW_DUMMY_INIT (cfg, dreg, OP_DUMMY_R4CONST);
} else if (t == MONO_TYPE_R4 || t == MONO_TYPE_R8) {
MONO_EMIT_NEW_DUMMY_INIT (cfg, dreg, OP_DUMMY_R8CONST);
} else if ((t == MONO_TYPE_VALUETYPE) || (t == MONO_TYPE_TYPEDBYREF) ||
((t == MONO_TYPE_GENERICINST) && mono_type_generic_inst_is_valuetype (rtype))) {
MONO_EMIT_NEW_DUMMY_INIT (cfg, dreg, OP_DUMMY_VZERO);
} else if (((t == MONO_TYPE_VAR) || (t == MONO_TYPE_MVAR)) && mini_type_var_is_vt (rtype)) {
MONO_EMIT_NEW_DUMMY_INIT (cfg, dreg, OP_DUMMY_VZERO);
} else {
mini_emit_init_rvar (cfg, dreg, rtype);
}
}
/* If INIT is FALSE, emit dummy initialization statements to keep the IR valid */
static void
emit_init_local (MonoCompile *cfg, int local, MonoType *type, gboolean init)
{
MonoInst *var = cfg->locals [local];
if (COMPILE_SOFT_FLOAT (cfg)) {
MonoInst *store;
int reg = alloc_dreg (cfg, (MonoStackType)var->type);
mini_emit_init_rvar (cfg, reg, type);
EMIT_NEW_LOCSTORE (cfg, store, local, cfg->cbb->last_ins);
} else {
if (init)
mini_emit_init_rvar (cfg, var->dreg, type);
else
emit_dummy_init_rvar (cfg, var->dreg, type);
}
}
int
mini_inline_method (MonoCompile *cfg, MonoMethod *cmethod, MonoMethodSignature *fsig, MonoInst **sp, guchar *ip, guint real_offset, gboolean inline_always)
{
return inline_method (cfg, cmethod, fsig, sp, ip, real_offset, inline_always, NULL);
}
/*
* inline_method:
*
* Return the cost of inlining CMETHOD, or zero if it should not be inlined.
*/
static int
inline_method (MonoCompile *cfg, MonoMethod *cmethod, MonoMethodSignature *fsig, MonoInst **sp,
guchar *ip, guint real_offset, gboolean inline_always, gboolean *is_empty)
{
ERROR_DECL (error);
MonoInst *ins, *rvar = NULL;
MonoMethodHeader *cheader;
MonoBasicBlock *ebblock, *sbblock;
int i, costs;
MonoInst **prev_locals, **prev_args;
MonoType **prev_arg_types;
guint prev_real_offset;
GHashTable *prev_cbb_hash;
MonoBasicBlock **prev_cil_offset_to_bb;
MonoBasicBlock *prev_cbb;
const guchar *prev_ip;
guchar *prev_cil_start;
guint32 prev_cil_offset_to_bb_len;
MonoMethod *prev_current_method;
MonoGenericContext *prev_generic_context;
gboolean ret_var_set, prev_ret_var_set, prev_disable_inline, virtual_ = FALSE;
g_assert (cfg->exception_type == MONO_EXCEPTION_NONE);
#if (MONO_INLINE_CALLED_LIMITED_METHODS)
if ((! inline_always) && ! check_inline_called_method_name_limit (cmethod))
return 0;
#endif
#if (MONO_INLINE_CALLER_LIMITED_METHODS)
if ((! inline_always) && ! check_inline_caller_method_name_limit (cfg->method))
return 0;
#endif
if (!fsig)
fsig = mono_method_signature_internal (cmethod);
if (cfg->verbose_level > 2)
printf ("INLINE START %p %s -> %s\n", cmethod, mono_method_full_name (cfg->method, TRUE), mono_method_full_name (cmethod, TRUE));
if (!cmethod->inline_info) {
cfg->stat_inlineable_methods++;
cmethod->inline_info = 1;
}
if (is_empty)
*is_empty = FALSE;
/* allocate local variables */
cheader = mono_method_get_header_checked (cmethod, error);
if (!cheader) {
if (inline_always) {
mono_cfg_set_exception (cfg, MONO_EXCEPTION_MONO_ERROR);
mono_error_move (cfg->error, error);
} else {
mono_error_cleanup (error);
}
return 0;
}
if (is_empty && cheader->code_size == 1 && cheader->code [0] == CEE_RET)
*is_empty = TRUE;
/* allocate space to store the return value */
if (!MONO_TYPE_IS_VOID (fsig->ret)) {
rvar = mono_compile_create_var (cfg, fsig->ret, OP_LOCAL);
}
prev_locals = cfg->locals;
cfg->locals = (MonoInst **)mono_mempool_alloc0 (cfg->mempool, cheader->num_locals * sizeof (MonoInst*));
for (i = 0; i < cheader->num_locals; ++i)
cfg->locals [i] = mono_compile_create_var (cfg, cheader->locals [i], OP_LOCAL);
/* allocate start and end blocks */
/* This is needed so that, if the inline is aborted, we can clean up */
NEW_BBLOCK (cfg, sbblock);
sbblock->real_offset = real_offset;
NEW_BBLOCK (cfg, ebblock);
ebblock->block_num = cfg->num_bblocks++;
ebblock->real_offset = real_offset;
prev_args = cfg->args;
prev_arg_types = cfg->arg_types;
prev_ret_var_set = cfg->ret_var_set;
prev_real_offset = cfg->real_offset;
prev_cbb_hash = cfg->cbb_hash;
prev_cil_offset_to_bb = cfg->cil_offset_to_bb;
prev_cil_offset_to_bb_len = cfg->cil_offset_to_bb_len;
prev_cil_start = cfg->cil_start;
prev_ip = cfg->ip;
prev_cbb = cfg->cbb;
prev_current_method = cfg->current_method;
prev_generic_context = cfg->generic_context;
prev_disable_inline = cfg->disable_inline;
cfg->ret_var_set = FALSE;
cfg->inline_depth ++;
if (ip && *ip == CEE_CALLVIRT && !(cmethod->flags & METHOD_ATTRIBUTE_STATIC))
virtual_ = TRUE;
costs = mono_method_to_ir (cfg, cmethod, sbblock, ebblock, rvar, sp, real_offset, virtual_);
ret_var_set = cfg->ret_var_set;
cfg->real_offset = prev_real_offset;
cfg->cbb_hash = prev_cbb_hash;
cfg->cil_offset_to_bb = prev_cil_offset_to_bb;
cfg->cil_offset_to_bb_len = prev_cil_offset_to_bb_len;
cfg->cil_start = prev_cil_start;
cfg->ip = prev_ip;
cfg->locals = prev_locals;
cfg->args = prev_args;
cfg->arg_types = prev_arg_types;
cfg->current_method = prev_current_method;
cfg->generic_context = prev_generic_context;
cfg->ret_var_set = prev_ret_var_set;
cfg->disable_inline = prev_disable_inline;
cfg->inline_depth --;
if ((costs >= 0 && costs < 60) || inline_always || (costs >= 0 && (cmethod->iflags & METHOD_IMPL_ATTRIBUTE_AGGRESSIVE_INLINING))) {
if (cfg->verbose_level > 2)
printf ("INLINE END %s -> %s\n", mono_method_full_name (cfg->method, TRUE), mono_method_full_name (cmethod, TRUE));
mono_error_assert_ok (cfg->error);
cfg->stat_inlined_methods++;
/* always add some code to avoid block split failures */
MONO_INST_NEW (cfg, ins, OP_NOP);
MONO_ADD_INS (prev_cbb, ins);
prev_cbb->next_bb = sbblock;
link_bblock (cfg, prev_cbb, sbblock);
/*
* Get rid of the begin and end bblocks if possible to aid local
* optimizations.
*/
if (prev_cbb->out_count == 1)
mono_merge_basic_blocks (cfg, prev_cbb, sbblock);
if ((prev_cbb->out_count == 1) && (prev_cbb->out_bb [0]->in_count == 1) && (prev_cbb->out_bb [0] != ebblock))
mono_merge_basic_blocks (cfg, prev_cbb, prev_cbb->out_bb [0]);
if ((ebblock->in_count == 1) && ebblock->in_bb [0]->out_count == 1) {
MonoBasicBlock *prev = ebblock->in_bb [0];
if (prev->next_bb == ebblock) {
mono_merge_basic_blocks (cfg, prev, ebblock);
cfg->cbb = prev;
if ((prev_cbb->out_count == 1) && (prev_cbb->out_bb [0]->in_count == 1) && (prev_cbb->out_bb [0] == prev)) {
mono_merge_basic_blocks (cfg, prev_cbb, prev);
cfg->cbb = prev_cbb;
}
} else {
/* There could be a bblock after 'prev', and making 'prev' the current bb could cause problems */
cfg->cbb = ebblock;
}
} else {
/*
 * It's possible that the rvar is set in some prev bblock, but not in others.
* (#1835).
*/
if (rvar) {
MonoBasicBlock *bb;
for (i = 0; i < ebblock->in_count; ++i) {
bb = ebblock->in_bb [i];
if (bb->last_ins && bb->last_ins->opcode == OP_NOT_REACHED) {
cfg->cbb = bb;
mini_emit_init_rvar (cfg, rvar->dreg, fsig->ret);
}
}
}
cfg->cbb = ebblock;
}
if (rvar) {
/*
* If the inlined method contains only a throw, then the ret var is not
* set, so set it to a dummy value.
*/
if (!ret_var_set)
mini_emit_init_rvar (cfg, rvar->dreg, fsig->ret);
EMIT_NEW_TEMPLOAD (cfg, ins, rvar->inst_c0);
*sp++ = ins;
}
cfg->headers_to_free = g_slist_prepend_mempool (cfg->mempool, cfg->headers_to_free, cheader);
return costs + 1;
} else {
if (cfg->verbose_level > 2) {
const char *msg = mono_error_get_message (cfg->error);
printf ("INLINE ABORTED %s (cost %d) %s\n", mono_method_full_name (cmethod, TRUE), costs, msg ? msg : "");
}
cfg->exception_type = MONO_EXCEPTION_NONE;
clear_cfg_error (cfg);
/* This gets rid of the newly added bblocks */
cfg->cbb = prev_cbb;
}
cfg->headers_to_free = g_slist_prepend_mempool (cfg->mempool, cfg->headers_to_free, cheader);
return 0;
}
/*
* Some of these comments may well be out-of-date.
* Design decisions: we do a single pass over the IL code (and we do bblock
* splitting/merging in the few cases when it's required: a back jump to an IL
* address that was not already seen as bblock starting point).
* Code is validated as we go (full verification is still better left to metadata/verify.c).
* Complex operations are decomposed in simpler ones right away. We need to let the
* arch-specific code peek and poke inside this process somehow (except when the
* optimizations can take advantage of the full semantic info of coarse opcodes).
* All the opcodes of the form opcode.s are 'normalized' to opcode.
* MonoInst->opcode initially is the IL opcode or some simplification of that
* (OP_LOAD, OP_STORE). The arch-specific code may rearrange it to an arch-specific
* opcode with value bigger than OP_LAST.
* At this point the IR can be handed over to an interpreter, a dumb code generator
* or to the optimizing code generator that will translate it to SSA form.
*
* Profiling directed optimizations.
* We may compile by default with few or no optimizations and instrument the code
* or the user may indicate what methods to optimize the most either in a config file
* or through repeated runs where the compiler applies offline the optimizations to
* each method and then decides if it was worth it.
*/
#define CHECK_TYPE(ins) if (!(ins)->type) UNVERIFIED
#define CHECK_STACK(num) if ((sp - stack_start) < (num)) UNVERIFIED
#define CHECK_STACK_OVF() if (((sp - stack_start) + 1) > header->max_stack) UNVERIFIED
#define CHECK_ARG(num) if ((unsigned)(num) >= (unsigned)num_args) UNVERIFIED
#define CHECK_LOCAL(num) if ((unsigned)(num) >= (unsigned)header->num_locals) UNVERIFIED
#define CHECK_OPSIZE(size) if ((size) < 1 || ip + (size) > end) UNVERIFIED
#define CHECK_UNVERIFIABLE(cfg) if (cfg->unverifiable) UNVERIFIED
#define CHECK_TYPELOAD(klass) if (!(klass) || mono_class_has_failure (klass)) TYPE_LOAD_ERROR ((klass))
/* offset from br.s -> br like opcodes */
#define BIG_BRANCH_OFFSET 13
static gboolean
ip_in_bb (MonoCompile *cfg, MonoBasicBlock *bb, const guint8* ip)
{
MonoBasicBlock *b = cfg->cil_offset_to_bb [ip - cfg->cil_start];
return b == NULL || b == bb;
}
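/*
 * get_basic_blocks:
 *
 *   Walk the IL between START and END, creating a basic block at every
 * branch target and marking bblocks containing a throw as out-of-line.
 */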
static int
get_basic_blocks (MonoCompile *cfg, MonoMethodHeader* header, guint real_offset, guchar *start, guchar *end, guchar **pos)
{
guchar *ip = start;
guchar *target;
int i;
guint cli_addr;
MonoBasicBlock *bblock;
const MonoOpcode *opcode;
while (ip < end) {
cli_addr = ip - start;
i = mono_opcode_value ((const guint8 **)&ip, end);
if (i < 0)
UNVERIFIED;
opcode = &mono_opcodes [i];
switch (opcode->argument) {
case MonoInlineNone:
ip++;
break;
case MonoInlineString:
case MonoInlineType:
case MonoInlineField:
case MonoInlineMethod:
case MonoInlineTok:
case MonoInlineSig:
case MonoShortInlineR:
case MonoInlineI:
ip += 5;
break;
case MonoInlineVar:
ip += 3;
break;
case MonoShortInlineVar:
case MonoShortInlineI:
ip += 2;
break;
case MonoShortInlineBrTarget:
target = start + cli_addr + 2 + (signed char)ip [1];
GET_BBLOCK (cfg, bblock, target);
ip += 2;
if (ip < end)
GET_BBLOCK (cfg, bblock, ip);
break;
case MonoInlineBrTarget:
target = start + cli_addr + 5 + (gint32)read32 (ip + 1);
GET_BBLOCK (cfg, bblock, target);
ip += 5;
if (ip < end)
GET_BBLOCK (cfg, bblock, ip);
break;
case MonoInlineSwitch: {
guint32 n = read32 (ip + 1);
guint32 j;
ip += 5;
cli_addr += 5 + 4 * n;
target = start + cli_addr;
GET_BBLOCK (cfg, bblock, target);
for (j = 0; j < n; ++j) {
target = start + cli_addr + (gint32)read32 (ip);
GET_BBLOCK (cfg, bblock, target);
ip += 4;
}
break;
}
case MonoInlineR:
case MonoInlineI8:
ip += 9;
break;
default:
g_assert_not_reached ();
}
if (i == CEE_THROW) {
guchar *bb_start = ip - 1;
/* Find the start of the bblock containing the throw */
bblock = NULL;
while ((bb_start >= start) && !bblock) {
bblock = cfg->cil_offset_to_bb [(bb_start) - start];
bb_start --;
}
if (bblock)
bblock->out_of_line = 1;
}
}
return 0;
unverified:
exception_exit:
*pos = ip;
return 1;
}
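/* Resolve TOKEN to a MonoMethod, allowing open constructed types; wrappers are looked up in their wrapper data. */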
static MonoMethod *
mini_get_method_allow_open (MonoMethod *m, guint32 token, MonoClass *klass, MonoGenericContext *context, MonoError *error)
{
MonoMethod *method;
error_init (error);
if (m->wrapper_type != MONO_WRAPPER_NONE) {
method = (MonoMethod *)mono_method_get_wrapper_data (m, token);
if (context) {
method = mono_class_inflate_generic_method_checked (method, context, error);
}
} else {
method = mono_get_method_checked (m_class_get_image (m->klass), token, klass, context, error);
}
return method;
}
static MonoMethod *
mini_get_method (MonoCompile *cfg, MonoMethod *m, guint32 token, MonoClass *klass, MonoGenericContext *context)
{
ERROR_DECL (error);
MonoMethod *method = mini_get_method_allow_open (m, token, klass, context, cfg ? cfg->error : error);
if (method && cfg && !cfg->gshared && mono_class_is_open_constructed_type (m_class_get_byval_arg (method->klass))) {
mono_error_set_bad_image (cfg->error, m_class_get_image (cfg->method->klass), "Method with open type while not compiling gshared");
method = NULL;
}
if (!method && !cfg)
mono_error_cleanup (error); /* FIXME don't swallow the error */
return method;
}
static MonoMethodSignature*
mini_get_signature (MonoMethod *method, guint32 token, MonoGenericContext *context, MonoError *error)
{
MonoMethodSignature *fsig;
error_init (error);
if (method->wrapper_type != MONO_WRAPPER_NONE) {
fsig = (MonoMethodSignature *)mono_method_get_wrapper_data (method, token);
} else {
fsig = mono_metadata_parse_signature_checked (m_class_get_image (method->klass), token, error);
return_val_if_nok (error, NULL);
}
if (context) {
fsig = mono_inflate_generic_signature(fsig, context, error);
}
return fsig;
}
/*
 * Return the original method if a wrapper is specified. We can only access
* the custom attributes from the original method.
*/
static MonoMethod*
get_original_method (MonoMethod *method)
{
if (method->wrapper_type == MONO_WRAPPER_NONE)
return method;
/* native code (which is like Critical) can call any managed method XXX FIXME XXX to validate all usages */
if (method->wrapper_type == MONO_WRAPPER_NATIVE_TO_MANAGED)
return NULL;
/* in other cases we need to find the original method */
return mono_marshal_method_from_wrapper (method);
}
static guchar*
il_read_op (guchar *ip, guchar *end, guchar first_byte, MonoOpcodeEnum desired_il_op)
// If ip is desired_il_op, return the next ip, else NULL.
{
if (G_LIKELY (ip < end) && G_UNLIKELY (*ip == first_byte)) {
MonoOpcodeEnum il_op = MonoOpcodeEnum_Invalid;
// mono_opcode_value_and_size updates ip, but not in the expected way.
const guchar *temp_ip = ip;
const int size = mono_opcode_value_and_size (&temp_ip, end, &il_op);
return (G_LIKELY (size > 0) && G_UNLIKELY (il_op == desired_il_op)) ? (ip + size) : NULL;
}
return NULL;
}
static guchar*
il_read_op_and_token (guchar *ip, guchar *end, guchar first_byte, MonoOpcodeEnum desired_il_op, guint32 *token)
{
ip = il_read_op (ip, end, first_byte, desired_il_op);
if (ip)
*token = read32 (ip - 4); // could be +1 or +2 from start
return ip;
}
static guchar*
il_read_branch_and_target (guchar *ip, guchar *end, guchar first_byte, MonoOpcodeEnum desired_il_op, int size, guchar **target)
{
ip = il_read_op (ip, end, first_byte, desired_il_op);
if (ip) {
gint32 delta = 0;
switch (size) {
case 1:
delta = (signed char)ip [-1];
break;
case 4:
delta = (gint32)read32 (ip - 4);
break;
}
// FIXME verify it is within the function and start of an instruction.
*target = ip + delta;
return ip;
}
return NULL;
}
#define il_read_brtrue(ip, end, target) (il_read_branch_and_target (ip, end, CEE_BRTRUE, MONO_CEE_BRTRUE, 4, target))
#define il_read_brtrue_s(ip, end, target) (il_read_branch_and_target (ip, end, CEE_BRTRUE_S, MONO_CEE_BRTRUE_S, 1, target))
#define il_read_brfalse(ip, end, target) (il_read_branch_and_target (ip, end, CEE_BRFALSE, MONO_CEE_BRFALSE, 4, target))
#define il_read_brfalse_s(ip, end, target) (il_read_branch_and_target (ip, end, CEE_BRFALSE_S, MONO_CEE_BRFALSE_S, 1, target))
#define il_read_dup(ip, end) (il_read_op (ip, end, CEE_DUP, MONO_CEE_DUP))
#define il_read_newobj(ip, end, token) (il_read_op_and_token (ip, end, CEE_NEW_OBJ, MONO_CEE_NEWOBJ, token))
#define il_read_ldtoken(ip, end, token) (il_read_op_and_token (ip, end, CEE_LDTOKEN, MONO_CEE_LDTOKEN, token))
#define il_read_call(ip, end, token) (il_read_op_and_token (ip, end, CEE_CALL, MONO_CEE_CALL, token))
#define il_read_callvirt(ip, end, token) (il_read_op_and_token (ip, end, CEE_CALLVIRT, MONO_CEE_CALLVIRT, token))
#define il_read_initobj(ip, end, token) (il_read_op_and_token (ip, end, CEE_PREFIX1, MONO_CEE_INITOBJ, token))
#define il_read_constrained(ip, end, token) (il_read_op_and_token (ip, end, CEE_PREFIX1, MONO_CEE_CONSTRAINED_, token))
#define il_read_unbox_any(ip, end, token) (il_read_op_and_token (ip, end, CEE_UNBOX_ANY, MONO_CEE_UNBOX_ANY, token))
/*
* Check that the IL instructions at ip are the array initialization
* sequence and return the pointer to the data and the size.
*/
static const char*
initialize_array_data (MonoCompile *cfg, MonoMethod *method, gboolean aot, guchar *ip,
guchar *end, MonoClass *klass, guint32 len, int *out_size,
guint32 *out_field_token, MonoOpcodeEnum *il_op, guchar **next_ip)
{
/*
* newarr[System.Int32]
* dup
* ldtoken field valuetype ...
* call void class [mscorlib]System.Runtime.CompilerServices.RuntimeHelpers::InitializeArray(class [mscorlib]System.Array, valuetype [mscorlib]System.RuntimeFieldHandle)
*/
guint32 token;
guint32 field_token;
if ((ip = il_read_dup (ip, end))
&& ip_in_bb (cfg, cfg->cbb, ip)
&& (ip = il_read_ldtoken (ip, end, &field_token))
&& IS_FIELD_DEF (field_token)
&& ip_in_bb (cfg, cfg->cbb, ip)
&& (ip = il_read_call (ip, end, &token))) {
ERROR_DECL (error);
guint32 rva;
const char *data_ptr;
int size = 0;
MonoMethod *cmethod;
MonoClass *dummy_class;
MonoClassField *field = mono_field_from_token_checked (m_class_get_image (method->klass), field_token, &dummy_class, NULL, error);
int dummy_align;
if (!field) {
mono_error_cleanup (error); /* FIXME don't swallow the error */
return NULL;
}
*out_field_token = field_token;
cmethod = mini_get_method (NULL, method, token, NULL, NULL);
if (!cmethod)
return NULL;
if (strcmp (cmethod->name, "InitializeArray") || strcmp (m_class_get_name (cmethod->klass), "RuntimeHelpers") || m_class_get_image (cmethod->klass) != mono_defaults.corlib)
return NULL;
switch (mini_get_underlying_type (m_class_get_byval_arg (klass))->type) {
case MONO_TYPE_I1:
case MONO_TYPE_U1:
size = 1; break;
/* we need to swap on big endian, so punt. Should we handle R4 and R8 as well? */
#if TARGET_BYTE_ORDER == G_LITTLE_ENDIAN
case MONO_TYPE_I2:
case MONO_TYPE_U2:
size = 2; break;
case MONO_TYPE_I4:
case MONO_TYPE_U4:
case MONO_TYPE_R4:
size = 4; break;
case MONO_TYPE_R8:
case MONO_TYPE_I8:
case MONO_TYPE_U8:
size = 8; break;
#endif
default:
return NULL;
}
size *= len;
if (size > mono_type_size (field->type, &dummy_align))
return NULL;
*out_size = size;
/*g_print ("optimized in %s: size: %d, numelems: %d\n", method->name, size, newarr->inst_newa_len->inst_c0);*/
MonoImage *method_klass_image = m_class_get_image (method->klass);
if (!image_is_dynamic (method_klass_image)) {
guint32 field_index = mono_metadata_token_index (field_token);
mono_metadata_field_info (method_klass_image, field_index - 1, NULL, &rva, NULL);
data_ptr = mono_image_rva_map (method_klass_image, rva);
/*g_print ("field: 0x%08x, rva: %d, rva_ptr: %p\n", read32 (ip + 2), rva, data_ptr);*/
/* for aot code we do the lookup on load */
if (aot && data_ptr)
data_ptr = (const char *)GUINT_TO_POINTER (rva);
} else {
/*FIXME is it possible to AOT a SRE assembly not meant to be saved? */
g_assert (!aot);
data_ptr = mono_field_get_data (field);
}
if (!data_ptr)
return NULL;
*il_op = MONO_CEE_CALL;
*next_ip = ip;
return data_ptr;
}
return NULL;
}
static void
set_exception_type_from_invalid_il (MonoCompile *cfg, MonoMethod *method, guchar *ip)
{
ERROR_DECL (error);
char *method_fname = mono_method_full_name (method, TRUE);
char *method_code;
MonoMethodHeader *header = mono_method_get_header_checked (method, error);
if (!header) {
method_code = g_strdup_printf ("could not parse method body due to %s", mono_error_get_message (error));
mono_error_cleanup (error);
} else if (header->code_size == 0)
method_code = g_strdup ("method body is empty.");
else
method_code = mono_disasm_code_one (NULL, method, ip, NULL);
mono_cfg_set_exception_invalid_program (cfg, g_strdup_printf ("Invalid IL code in %s: %s\n", method_fname, method_code));
g_free (method_fname);
g_free (method_code);
cfg->headers_to_free = g_slist_prepend_mempool (cfg->mempool, cfg->headers_to_free, header);
}
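/* Return the conversion opcode needed to coerce a stack value before storing it into a local of type TYPE, or 0 if no coercion is needed. */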
guint32
mono_type_to_stloc_coerce (MonoType *type)
{
if (m_type_is_byref (type))
return 0;
type = mini_get_underlying_type (type);
handle_enum:
switch (type->type) {
case MONO_TYPE_I1:
return OP_ICONV_TO_I1;
case MONO_TYPE_U1:
return OP_ICONV_TO_U1;
case MONO_TYPE_I2:
return OP_ICONV_TO_I2;
case MONO_TYPE_U2:
return OP_ICONV_TO_U2;
case MONO_TYPE_I4:
case MONO_TYPE_U4:
case MONO_TYPE_I:
case MONO_TYPE_U:
case MONO_TYPE_PTR:
case MONO_TYPE_FNPTR:
case MONO_TYPE_CLASS:
case MONO_TYPE_STRING:
case MONO_TYPE_OBJECT:
case MONO_TYPE_SZARRAY:
case MONO_TYPE_ARRAY:
case MONO_TYPE_I8:
case MONO_TYPE_U8:
case MONO_TYPE_R4:
case MONO_TYPE_R8:
case MONO_TYPE_TYPEDBYREF:
case MONO_TYPE_GENERICINST:
return 0;
case MONO_TYPE_VALUETYPE:
if (m_class_is_enumtype (type->data.klass)) {
type = mono_class_enum_basetype_internal (type->data.klass);
goto handle_enum;
}
return 0;
case MONO_TYPE_VAR:
case MONO_TYPE_MVAR: //TODO I believe we don't need to handle gsharedvt as there won't be a match and, for example, u1 is not covariant to u32
return 0;
default:
g_error ("unknown type 0x%02x in mono_type_to_stloc_coerce", type->type);
}
return -1;
}
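/* Emit a store of *SP into local N, coercing small integer types first if required. */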
static void
emit_stloc_ir (MonoCompile *cfg, MonoInst **sp, MonoMethodHeader *header, int n)
{
MonoInst *ins;
guint32 coerce_op = mono_type_to_stloc_coerce (header->locals [n]);
if (coerce_op) {
if (cfg->cbb->last_ins == sp [0] && sp [0]->opcode == coerce_op) {
if (cfg->verbose_level > 2)
printf ("Found existing coercing is enough for stloc\n");
} else {
MONO_INST_NEW (cfg, ins, coerce_op);
ins->dreg = alloc_ireg (cfg);
ins->sreg1 = sp [0]->dreg;
ins->type = STACK_I4;
ins->klass = mono_class_from_mono_type_internal (header->locals [n]);
MONO_ADD_INS (cfg->cbb, ins);
*sp = mono_decompose_opcode (cfg, ins);
}
}
guint32 opcode = mono_type_to_regmove (cfg, header->locals [n]);
if (!cfg->deopt && (opcode == OP_MOVE) && cfg->cbb->last_ins == sp [0] &&
((sp [0]->opcode == OP_ICONST) || (sp [0]->opcode == OP_I8CONST))) {
/* Optimize reg-reg moves away */
/*
* Can't optimize other opcodes, since sp[0] might point to
* the last ins of a decomposed opcode.
*/
sp [0]->dreg = (cfg)->locals [n]->dreg;
} else {
EMIT_NEW_LOCSTORE (cfg, ins, n, *sp);
}
}
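/* Like emit_stloc_ir, but for stores into argument N. */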
static void
emit_starg_ir (MonoCompile *cfg, MonoInst **sp, int n)
{
MonoInst *ins;
guint32 coerce_op = mono_type_to_stloc_coerce (cfg->arg_types [n]);
if (coerce_op) {
if (cfg->cbb->last_ins == sp [0] && sp [0]->opcode == coerce_op) {
if (cfg->verbose_level > 2)
printf ("Found existing coercing is enough for starg\n");
} else {
MONO_INST_NEW (cfg, ins, coerce_op);
ins->dreg = alloc_ireg (cfg);
ins->sreg1 = sp [0]->dreg;
ins->type = STACK_I4;
ins->klass = mono_class_from_mono_type_internal (cfg->arg_types [n]);
MONO_ADD_INS (cfg->cbb, ins);
*sp = mono_decompose_opcode (cfg, ins);
}
}
EMIT_NEW_ARGSTORE (cfg, ins, n, *sp);
}
/*
* ldloca inhibits many optimizations so try to get rid of it in common
* cases.
*/
static guchar *
emit_optimized_ldloca_ir (MonoCompile *cfg, guchar *ip, guchar *end, int local)
{
guint32 token;
MonoClass *klass;
MonoType *type;
guchar *start = ip;
if ((ip = il_read_initobj (ip, end, &token)) && ip_in_bb (cfg, cfg->cbb, start + 1)) {
/* From the INITOBJ case */
klass = mini_get_class (cfg->current_method, token, cfg->generic_context);
CHECK_TYPELOAD (klass);
type = mini_get_underlying_type (m_class_get_byval_arg (klass));
emit_init_local (cfg, local, type, TRUE);
return ip;
}
exception_exit:
return NULL;
}
static MonoInst*
handle_call_res_devirt (MonoCompile *cfg, MonoMethod *cmethod, MonoInst *call_res)
{
/*
* Devirt EqualityComparer.Default.Equals () calls for some types.
 * The corefx code expects these calls to be devirtualized.
* This depends on the implementation of EqualityComparer.Default, which is
* in mcs/class/referencesource/mscorlib/system/collections/generic/equalitycomparer.cs
*/
if (m_class_get_image (cmethod->klass) == mono_defaults.corlib &&
!strcmp (m_class_get_name (cmethod->klass), "EqualityComparer`1") &&
!strcmp (cmethod->name, "get_Default")) {
MonoType *param_type = mono_class_get_generic_class (cmethod->klass)->context.class_inst->type_argv [0];
MonoClass *inst;
MonoGenericContext ctx;
ERROR_DECL (error);
memset (&ctx, 0, sizeof (ctx));
MonoType *args [ ] = { param_type };
ctx.class_inst = mono_metadata_get_generic_inst (1, args);
inst = mono_class_inflate_generic_class_checked (mono_class_get_iequatable_class (), &ctx, error);
mono_error_assert_ok (error);
/* EqualityComparer<T>.Default returns specific types depending on T */
// FIXME: Add more
/* 1. Implements IEquatable<T> */
/*
* Can't use this for string/byte as it might use a different comparer:
*
* // Specialize type byte for performance reasons
* if (t == typeof(byte)) {
* return (EqualityComparer<T>)(object)(new ByteEqualityComparer());
* }
* #if MOBILE
* // Breaks .net serialization compatibility
* if (t == typeof (string))
* return (EqualityComparer<T>)(object)new InternalStringComparer ();
* #endif
*/
if (mono_class_is_assignable_from_internal (inst, mono_class_from_mono_type_internal (param_type)) && param_type->type != MONO_TYPE_U1 && param_type->type != MONO_TYPE_STRING) {
MonoInst *typed_objref;
MonoClass *gcomparer_inst;
memset (&ctx, 0, sizeof (ctx));
args [0] = param_type;
ctx.class_inst = mono_metadata_get_generic_inst (1, args);
MonoClass *gcomparer = mono_class_get_geqcomparer_class ();
g_assert (gcomparer);
gcomparer_inst = mono_class_inflate_generic_class_checked (gcomparer, &ctx, error);
if (is_ok (error)) {
MONO_INST_NEW (cfg, typed_objref, OP_TYPED_OBJREF);
typed_objref->type = STACK_OBJ;
typed_objref->dreg = alloc_ireg_ref (cfg);
typed_objref->sreg1 = call_res->dreg;
typed_objref->klass = gcomparer_inst;
MONO_ADD_INS (cfg->cbb, typed_objref);
call_res = typed_objref;
/* Force decompose */
cfg->flags |= MONO_CFG_NEEDS_DECOMPOSE;
cfg->cbb->needs_decompose = TRUE;
}
}
}
return call_res;
}
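/* Return whether KLASS is System.Exception or derives from it. */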
static gboolean
is_exception_class (MonoClass *klass)
{
if (G_LIKELY (m_class_get_supertypes (klass)))
return mono_class_has_parent_fast (klass, mono_defaults.exception_class);
while (klass) {
if (klass == mono_defaults.exception_class)
return TRUE;
klass = m_class_get_parent (klass);
}
return FALSE;
}
/*
* is_jit_optimizer_disabled:
*
 * Determine whether M's assembly has a DebuggableAttribute with the
* IsJITOptimizerDisabled flag set.
*/
static gboolean
is_jit_optimizer_disabled (MonoMethod *m)
{
MonoAssembly *ass = m_class_get_image (m->klass)->assembly;
g_assert (ass);
if (ass->jit_optimizer_disabled_inited)
return ass->jit_optimizer_disabled;
return mono_assembly_is_jit_optimizer_disabled (ass);
}
gboolean
mono_is_supported_tailcall_helper (gboolean value, const char *svalue)
{
if (!value)
mono_tailcall_print ("%s %s\n", __func__, svalue);
return value;
}
static gboolean
mono_is_not_supported_tailcall_helper (gboolean value, const char *svalue, MonoMethod *method, MonoMethod *cmethod)
{
// Return value, printing if it inhibits tailcall.
if (value && mono_tailcall_print_enabled ()) {
const char *lparen = strchr (svalue, ' ') ? "(" : "";
const char *rparen = *lparen ? ")" : "";
mono_tailcall_print ("%s %s -> %s %s%s%s:%d\n", __func__, method->name, cmethod->name, lparen, svalue, rparen, value);
}
return value;
}
#define IS_NOT_SUPPORTED_TAILCALL(x) (mono_is_not_supported_tailcall_helper((x), #x, method, cmethod))
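/*
 * is_supported_tailcall:
 *
 *   Decide whether the call at IP can be compiled as a tailcall, computing
 * the answer for both the regular and the calli forms; the calli result is
 * returned in PTAILCALL_CALLI.
 */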
static gboolean
is_supported_tailcall (MonoCompile *cfg, const guint8 *ip, MonoMethod *method, MonoMethod *cmethod, MonoMethodSignature *fsig,
gboolean virtual_, gboolean extra_arg, gboolean *ptailcall_calli)
{
// Some checks apply to "regular", some to "calli", some to both.
// To ease burden on caller, always compute regular and calli.
gboolean tailcall = TRUE;
gboolean tailcall_calli = TRUE;
if (IS_NOT_SUPPORTED_TAILCALL (virtual_ && !cfg->backend->have_op_tailcall_membase))
tailcall = FALSE;
if (IS_NOT_SUPPORTED_TAILCALL (!cfg->backend->have_op_tailcall_reg))
tailcall_calli = FALSE;
if (!tailcall && !tailcall_calli)
goto exit;
// FIXME in calli, there is no type for the this parameter,
// so we assume it might be a valuetype; in future we should issue a range
// check, to rule out pointing to the frame (for other reference parameters also)
if ( IS_NOT_SUPPORTED_TAILCALL (cmethod && fsig->hasthis && m_class_is_valuetype (cmethod->klass)) // This might point to the current method's stack. Emit range check?
|| IS_NOT_SUPPORTED_TAILCALL (cmethod && (cmethod->flags & METHOD_ATTRIBUTE_PINVOKE_IMPL))
|| IS_NOT_SUPPORTED_TAILCALL (fsig->pinvoke) // i.e. if !cmethod (calli)
|| IS_NOT_SUPPORTED_TAILCALL (cfg->method->save_lmf)
|| IS_NOT_SUPPORTED_TAILCALL (!cmethod && fsig->hasthis) // FIXME could be valuetype to current frame; range check
|| IS_NOT_SUPPORTED_TAILCALL (cmethod && cmethod->wrapper_type && cmethod->wrapper_type != MONO_WRAPPER_DYNAMIC_METHOD)
// http://www.mono-project.com/docs/advanced/runtime/docs/generic-sharing/
//
// 1. Non-generic non-static methods of reference types have access to the
// RGCTX via the "this" argument (this->vtable->rgctx).
// 2. a. Non-generic static methods of reference types and b. non-generic methods
// of value types need to be passed a pointer to the caller's class's VTable in the MONO_ARCH_RGCTX_REG register.
// 3. Generic methods need to be passed a pointer to the MRGCTX in the MONO_ARCH_RGCTX_REG register
//
// That is what vtable_arg is here (always?).
//
// Passing vtable_arg uses (requires?) a volatile non-parameter register,
// such as AMD64 rax, r10, r11, or the return register on many architectures.
// ARM32 does not always clearly have such a register. ARM32's return register
// is a parameter register.
// iPhone could use r9 except on old systems. iPhone/ARM32 is not particularly
// important. Linux/arm32 is less clear.
// ARM32's scratch r12 might work but only with much collateral change.
//
// Imagine F1 calls F2, and F2 tailcalls F3.
// F2 and F3 are managed. F1 is native.
// Without a tailcall, F2 can save and restore everything needed for F1.
// However if the extra parameter were in a non-volatile register, such as ARM32 V5/R8,
// F3 could not easily restore it for F1 in the current scheme, where the extra
// parameter is not merely an extra parameter, but is passed "outside of the ABI".
//
// If all native to managed transitions are intercepted and wrapped (w/o tailcall),
// then they can preserve this register and the rest of the managed callgraph
// treat it as volatile.
//
// Interface method dispatch has the same problem (imt_arg).
|| IS_NOT_SUPPORTED_TAILCALL (extra_arg && !cfg->backend->have_volatile_non_param_register)
|| IS_NOT_SUPPORTED_TAILCALL (cfg->gsharedvt)
) {
tailcall_calli = FALSE;
tailcall = FALSE;
goto exit;
}
for (int i = 0; i < fsig->param_count; ++i) {
if (IS_NOT_SUPPORTED_TAILCALL (m_type_is_byref (fsig->params [i]) || fsig->params [i]->type == MONO_TYPE_PTR || fsig->params [i]->type == MONO_TYPE_FNPTR)) {
tailcall_calli = FALSE;
tailcall = FALSE; // These can point to the current method's stack. Emit range check?
goto exit;
}
}
MonoMethodSignature *caller_signature;
MonoMethodSignature *callee_signature;
caller_signature = mono_method_signature_internal (method);
callee_signature = cmethod ? mono_method_signature_internal (cmethod) : fsig;
g_assert (caller_signature);
g_assert (callee_signature);
// Require an exact match on return type due to various conversions in emit_move_return_value that would be skipped.
// The main troublesome conversions are double <=> float.
// CoreCLR allows some conversions here, such as integer truncation.
// As well I <=> I[48] and U <=> U[48] would be ok, for matching size.
if (IS_NOT_SUPPORTED_TAILCALL (mini_get_underlying_type (caller_signature->ret)->type != mini_get_underlying_type (callee_signature->ret)->type)
|| IS_NOT_SUPPORTED_TAILCALL (!mono_arch_tailcall_supported (cfg, caller_signature, callee_signature, virtual_))) {
tailcall_calli = FALSE;
tailcall = FALSE;
goto exit;
}
/* Debugging support */
#if 0
if (!mono_debug_count ()) {
tailcall_calli = FALSE;
tailcall = FALSE;
goto exit;
}
#endif
// See check_sp in mini_emit_calli_full.
if (tailcall_calli && IS_NOT_SUPPORTED_TAILCALL (mini_should_check_stack_pointer (cfg)))
tailcall_calli = FALSE;
exit:
mono_tailcall_print ("tail.%s %s -> %s tailcall:%d tailcall_calli:%d gshared:%d extra_arg:%d virtual_:%d\n",
mono_opcode_name (*ip), method->name, cmethod ? cmethod->name : "calli", tailcall, tailcall_calli,
cfg->gshared, extra_arg, virtual_);
*ptailcall_calli = tailcall_calli;
return tailcall;
}
/*
* is_addressable_valuetype_load
*
 * Returns true if a previous load can be done without doing an extra copy, given the new instruction ip and the type ldtype of the object being loaded
*/
static gboolean
is_addressable_valuetype_load (MonoCompile* cfg, guint8* ip, MonoType* ldtype)
{
/* Avoid loading a struct just to load one of its fields */
gboolean is_load_instruction = (*ip == CEE_LDFLD);
gboolean is_in_previous_bb = ip_in_bb(cfg, cfg->cbb, ip);
gboolean is_struct = MONO_TYPE_ISSTRUCT(ldtype);
return is_load_instruction && is_in_previous_bb && is_struct;
}
/*
* handle_ctor_call:
*
* Handle calls made to ctors from NEWOBJ opcodes.
*/
static void
handle_ctor_call (MonoCompile *cfg, MonoMethod *cmethod, MonoMethodSignature *fsig, int context_used,
MonoInst **sp, guint8 *ip, int *inline_costs)
{
MonoInst *vtable_arg = NULL, *callvirt_this_arg = NULL, *ins;
if (cmethod && (ins = mini_emit_inst_for_ctor (cfg, cmethod, fsig, sp))) {
g_assert (MONO_TYPE_IS_VOID (fsig->ret));
CHECK_CFG_EXCEPTION;
return;
}
if (mono_class_generic_sharing_enabled (cmethod->klass) && mono_method_is_generic_sharable (cmethod, TRUE)) {
MonoRgctxAccess access = mini_get_rgctx_access_for_method (cmethod);
if (access == MONO_RGCTX_ACCESS_MRGCTX) {
mono_class_vtable_checked (cmethod->klass, cfg->error);
CHECK_CFG_ERROR;
CHECK_TYPELOAD (cmethod->klass);
vtable_arg = emit_get_rgctx_method (cfg, context_used,
cmethod, MONO_RGCTX_INFO_METHOD_RGCTX);
} else if (access == MONO_RGCTX_ACCESS_VTABLE) {
vtable_arg = mini_emit_get_rgctx_klass (cfg, context_used,
cmethod->klass, MONO_RGCTX_INFO_VTABLE);
CHECK_CFG_ERROR;
CHECK_TYPELOAD (cmethod->klass);
} else {
g_assert (access == MONO_RGCTX_ACCESS_THIS);
}
}
/* Avoid virtual calls to ctors if possible */
if ((cfg->opt & MONO_OPT_INLINE) && cmethod && !context_used && !vtable_arg &&
mono_method_check_inlining (cfg, cmethod) &&
!mono_class_is_subclass_of_internal (cmethod->klass, mono_defaults.exception_class, FALSE)) {
int costs;
if ((costs = inline_method (cfg, cmethod, fsig, sp, ip, cfg->real_offset, FALSE, NULL))) {
cfg->real_offset += 5;
*inline_costs += costs - 5;
} else {
INLINE_FAILURE ("inline failure");
// FIXME-VT: Clean this up
if (cfg->gsharedvt && mini_is_gsharedvt_signature (fsig))
GSHAREDVT_FAILURE(*ip);
mini_emit_method_call_full (cfg, cmethod, fsig, FALSE, sp, callvirt_this_arg, NULL, NULL);
}
} else if (cfg->gsharedvt && mini_is_gsharedvt_signature (fsig)) {
MonoInst *addr;
addr = emit_get_rgctx_gsharedvt_call (cfg, context_used, fsig, cmethod, MONO_RGCTX_INFO_METHOD_GSHAREDVT_OUT_TRAMPOLINE);
if (cfg->llvm_only) {
// FIXME: Avoid initializing vtable_arg
mini_emit_llvmonly_calli (cfg, fsig, sp, addr);
} else {
mini_emit_calli (cfg, fsig, sp, addr, NULL, vtable_arg);
}
} else if (context_used &&
((!mono_method_is_generic_sharable_full (cmethod, TRUE, FALSE, FALSE) ||
!mono_class_generic_sharing_enabled (cmethod->klass)) || cfg->gsharedvt)) {
MonoInst *cmethod_addr;
/* Generic calls made out of gsharedvt methods cannot be patched, so use an indirect call */
if (cfg->llvm_only) {
MonoInst *addr = emit_get_rgctx_method (cfg, context_used, cmethod,
MONO_RGCTX_INFO_METHOD_FTNDESC);
mini_emit_llvmonly_calli (cfg, fsig, sp, addr);
} else {
cmethod_addr = emit_get_rgctx_method (cfg, context_used,
cmethod, MONO_RGCTX_INFO_GENERIC_METHOD_CODE);
mini_emit_calli (cfg, fsig, sp, cmethod_addr, NULL, vtable_arg);
}
} else {
INLINE_FAILURE ("ctor call");
ins = mini_emit_method_call_full (cfg, cmethod, fsig, FALSE, sp,
callvirt_this_arg, NULL, vtable_arg);
}
exception_exit:
mono_error_exit:
return;
}
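/* Data passed from mono_method_to_ir () to the call handling helpers below. */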
typedef struct {
MonoMethod *method;
gboolean inst_tailcall;
} HandleCallData;
/*
* handle_constrained_call:
*
* Handle constrained calls. Return a MonoInst* representing the call or NULL.
* May overwrite sp [0] and modify the ref_... parameters.
*/
static MonoInst*
handle_constrained_call (MonoCompile *cfg, MonoMethod *cmethod, MonoMethodSignature *fsig, MonoClass *constrained_class, MonoInst **sp,
HandleCallData *cdata, MonoMethod **ref_cmethod, gboolean *ref_virtual, gboolean *ref_emit_widen)
{
MonoInst *ins, *addr;
MonoMethod *method = cdata->method;
gboolean constrained_partial_call = FALSE;
gboolean constrained_is_generic_param =
m_class_get_byval_arg (constrained_class)->type == MONO_TYPE_VAR ||
m_class_get_byval_arg (constrained_class)->type == MONO_TYPE_MVAR;
MonoType *gshared_constraint = NULL;
if (constrained_is_generic_param && cfg->gshared) {
if (!mini_is_gsharedvt_klass (constrained_class)) {
g_assert (!m_class_is_valuetype (cmethod->klass));
if (!mini_type_is_reference (m_class_get_byval_arg (constrained_class)))
constrained_partial_call = TRUE;
MonoType *t = m_class_get_byval_arg (constrained_class);
MonoGenericParam *gparam = t->data.generic_param;
gshared_constraint = gparam->gshared_constraint;
}
}
if (mini_is_gsharedvt_klass (constrained_class)) {
if ((cmethod->klass != mono_defaults.object_class) && m_class_is_valuetype (constrained_class) && m_class_is_valuetype (cmethod->klass)) {
/* The 'Own method' case below */
} else if (m_class_get_image (cmethod->klass) != mono_defaults.corlib && !mono_class_is_interface (cmethod->klass) && !m_class_is_valuetype (cmethod->klass)) {
/* 'The type parameter is instantiated as a reference type' case below. */
} else {
ins = handle_constrained_gsharedvt_call (cfg, cmethod, fsig, sp, constrained_class, ref_emit_widen);
CHECK_CFG_EXCEPTION;
g_assert (ins);
if (cdata->inst_tailcall) // FIXME
mono_tailcall_print ("missed tailcall constrained_class %s -> %s\n", method->name, cmethod->name);
return ins;
}
}
if (m_method_is_static (cmethod)) {
/* Call to an abstract static method, handled normally */
return NULL;
} else if (constrained_partial_call) {
gboolean need_box = TRUE;
/*
* The receiver is a valuetype, but the exact type is not known at compile time. This means the
* called method is not known at compile time either. The called method could end up being
* one of the methods on the parent classes (object/valuetype/enum), in which case we need
* to box the receiver.
* A simple solution would be to always box and make a normal virtual call, but that would
* be bad performance-wise.
*/
if (mono_class_is_interface (cmethod->klass) && mono_class_is_ginst (cmethod->klass) &&
(cmethod->flags & METHOD_ATTRIBUTE_ABSTRACT)) {
/*
* The parent classes implement no generic interfaces, so the called method will be a vtype method, and no boxing is necessary.
*/
/* If the method were not abstract, it would be a default interface method, and we would need to box */
need_box = FALSE;
}
if (gshared_constraint && MONO_TYPE_IS_PRIMITIVE (gshared_constraint) && cmethod->klass == mono_defaults.object_class &&
!strcmp (cmethod->name, "GetHashCode")) {
/*
* The receiver is constrained to a primitive type or an enum with the same basetype.
* Enum.GetHashCode () returns the hash code of the underlying type (see comments in Enum.cs),
* so the constrained call can be replaced with a normal call to the basetype GetHashCode ()
* method.
*/
MonoClass *gshared_constraint_class = mono_class_from_mono_type_internal (gshared_constraint);
cmethod = get_method_nofail (gshared_constraint_class, cmethod->name, 0, 0);
g_assert (cmethod);
*ref_cmethod = cmethod;
*ref_virtual = FALSE;
if (cfg->verbose_level)
printf (" -> %s\n", mono_method_get_full_name (cmethod));
return NULL;
}
if (!(cmethod->flags & METHOD_ATTRIBUTE_VIRTUAL) && (cmethod->klass == mono_defaults.object_class || cmethod->klass == m_class_get_parent (mono_defaults.enum_class) || cmethod->klass == mono_defaults.enum_class)) {
/* The called method is not virtual, e.g. Object:GetType (); the receiver is a vtype, so it has to be boxed */
EMIT_NEW_LOAD_MEMBASE_TYPE (cfg, ins, m_class_get_byval_arg (constrained_class), sp [0]->dreg, 0);
ins->klass = constrained_class;
sp [0] = mini_emit_box (cfg, ins, constrained_class, mono_class_check_context_used (constrained_class));
CHECK_CFG_EXCEPTION;
} else if (need_box) {
MonoInst *box_type;
MonoBasicBlock *is_ref_bb, *end_bb;
MonoInst *nonbox_call, *addr;
/*
* Determine at runtime whether the called method is defined on object/valuetype/enum, and emit a boxing call
* if needed.
* FIXME: It is possible to inline the called method in a lot of cases, i.e. for T_INT,
* the no-box case goes to a method in Int32, while the box case goes to a method in Enum.
*/
addr = emit_get_rgctx_virt_method (cfg, mono_class_check_context_used (constrained_class), constrained_class, cmethod, MONO_RGCTX_INFO_VIRT_METHOD_CODE);
NEW_BBLOCK (cfg, is_ref_bb);
NEW_BBLOCK (cfg, end_bb);
box_type = emit_get_rgctx_virt_method (cfg, mono_class_check_context_used (constrained_class), constrained_class, cmethod, MONO_RGCTX_INFO_VIRT_METHOD_BOX_TYPE);
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, box_type->dreg, MONO_GSHAREDVT_BOX_TYPE_REF);
MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_IBEQ, is_ref_bb);
/* Non-ref case */
if (cfg->llvm_only)
/* addr is an ftndesc in this case */
nonbox_call = mini_emit_llvmonly_calli (cfg, fsig, sp, addr);
else
nonbox_call = (MonoInst*)mini_emit_calli (cfg, fsig, sp, addr, NULL, NULL);
MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_BR, end_bb);
/* Ref case */
MONO_START_BB (cfg, is_ref_bb);
EMIT_NEW_LOAD_MEMBASE_TYPE (cfg, ins, m_class_get_byval_arg (constrained_class), sp [0]->dreg, 0);
ins->klass = constrained_class;
sp [0] = mini_emit_box (cfg, ins, constrained_class, mono_class_check_context_used (constrained_class));
CHECK_CFG_EXCEPTION;
if (cfg->llvm_only)
ins = mini_emit_llvmonly_calli (cfg, fsig, sp, addr);
else
ins = (MonoInst*)mini_emit_calli (cfg, fsig, sp, addr, NULL, NULL);
MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_BR, end_bb);
MONO_START_BB (cfg, end_bb);
cfg->cbb = end_bb;
nonbox_call->dreg = ins->dreg;
if (cdata->inst_tailcall) // FIXME
mono_tailcall_print ("missed tailcall constrained_partial_need_box %s -> %s\n", method->name, cmethod->name);
return ins;
} else {
g_assert (mono_class_is_interface (cmethod->klass));
addr = emit_get_rgctx_virt_method (cfg, mono_class_check_context_used (constrained_class), constrained_class, cmethod, MONO_RGCTX_INFO_VIRT_METHOD_CODE);
if (cfg->llvm_only)
ins = mini_emit_llvmonly_calli (cfg, fsig, sp, addr);
else
ins = (MonoInst*)mini_emit_calli (cfg, fsig, sp, addr, NULL, NULL);
if (cdata->inst_tailcall) // FIXME
mono_tailcall_print ("missed tailcall constrained_partial %s -> %s\n", method->name, cmethod->name);
return ins;
}
} else if (!m_class_is_valuetype (constrained_class)) {
int dreg = alloc_ireg_ref (cfg);
/*
* The type parameter is instantiated as a reference
* type. We have a managed pointer on the stack, so
* we need to dereference it here.
*/
EMIT_NEW_LOAD_MEMBASE (cfg, ins, OP_LOAD_MEMBASE, dreg, sp [0]->dreg, 0);
ins->type = STACK_OBJ;
sp [0] = ins;
} else if (cmethod->klass == mono_defaults.object_class || cmethod->klass == m_class_get_parent (mono_defaults.enum_class) || cmethod->klass == mono_defaults.enum_class) {
/*
* The type parameter is instantiated as a valuetype,
* but that type doesn't override the method we're
* calling, so we need to box `this'.
*/
EMIT_NEW_LOAD_MEMBASE_TYPE (cfg, ins, m_class_get_byval_arg (constrained_class), sp [0]->dreg, 0);
ins->klass = constrained_class;
sp [0] = mini_emit_box (cfg, ins, constrained_class, mono_class_check_context_used (constrained_class));
CHECK_CFG_EXCEPTION;
} else {
if (cmethod->klass != constrained_class) {
/* Enums/default interface methods */
EMIT_NEW_LOAD_MEMBASE_TYPE (cfg, ins, m_class_get_byval_arg (constrained_class), sp [0]->dreg, 0);
ins->klass = constrained_class;
sp [0] = mini_emit_box (cfg, ins, constrained_class, mono_class_check_context_used (constrained_class));
CHECK_CFG_EXCEPTION;
}
*ref_virtual = FALSE;
}
exception_exit:
return NULL;
}
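/*
* emit_setret:
*
*   Emit IR to set VAL as the return value of the current method. Valuetype returns
* are stored through the return variable or the hidden vret address argument, other
* types are handled by mono_arch_emit_setret ().
*/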
static void
emit_setret (MonoCompile *cfg, MonoInst *val)
{
MonoType *ret_type = mini_get_underlying_type (mono_method_signature_internal (cfg->method)->ret);
MonoInst *ins;
if (mini_type_to_stind (cfg, ret_type) == CEE_STOBJ) {
MonoInst *ret_addr;
if (!cfg->vret_addr) {
EMIT_NEW_VARSTORE (cfg, ins, cfg->ret, ret_type, val);
} else {
EMIT_NEW_RETLOADA (cfg, ret_addr);
MonoClass *ret_class = mono_class_from_mono_type_internal (ret_type);
if (MONO_CLASS_IS_SIMD (cfg, ret_class))
EMIT_NEW_STORE_MEMBASE (cfg, ins, OP_STOREX_MEMBASE, ret_addr->dreg, 0, val->dreg);
else
EMIT_NEW_STORE_MEMBASE (cfg, ins, OP_STOREV_MEMBASE, ret_addr->dreg, 0, val->dreg);
ins->klass = ret_class;
}
} else {
#ifdef MONO_ARCH_SOFT_FLOAT_FALLBACK
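/* Under soft float, an R4 return value must first be converted to its argument representation via an icall. */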
if (COMPILE_SOFT_FLOAT (cfg) && !m_type_is_byref (ret_type) && ret_type->type == MONO_TYPE_R4) {
MonoInst *conv;
MonoInst *iargs [ ] = { val };
conv = mono_emit_jit_icall (cfg, mono_fload_r4_arg, iargs);
mono_arch_emit_setret (cfg, cfg->method, conv);
} else {
mono_arch_emit_setret (cfg, cfg->method, val);
}
#else
mono_arch_emit_setret (cfg, cfg->method, val);
#endif
}
}
/*
* Emit a call to enter the interpreter for methods with filter clauses.
*/
static void
emit_llvmonly_interp_entry (MonoCompile *cfg, MonoMethodHeader *header)
{
MonoInst *ins;
MonoInst **iargs;
MonoMethodSignature *sig = mono_method_signature_internal (cfg->method);
MonoInst *ftndesc;
cfg->interp_in_signatures = g_slist_prepend_mempool (cfg->mempool, cfg->interp_in_signatures, sig);
/*
* Emit a call to the interp entry function. We emit it here instead of the llvm backend since
* calling conventions etc. are easier to handle here. The LLVM backend will only emit the
* entry/exit bblocks.
*/
g_assert (cfg->cbb == cfg->bb_init);
if (cfg->gsharedvt && mini_is_gsharedvt_variable_signature (sig)) {
/*
* Would have to generate a gsharedvt out wrapper which calls the interp entry wrapper, but
* the gsharedvt out wrapper might not exist if the caller is also a gsharedvt method since
* the concrete signature of the call might not exist in the program.
* So transition directly to the interpreter without the wrappers.
*/
MonoInst *args_ins;
MONO_INST_NEW (cfg, ins, OP_LOCALLOC_IMM);
ins->dreg = alloc_preg (cfg);
ins->inst_imm = sig->param_count * sizeof (target_mgreg_t);
MONO_ADD_INS (cfg->cbb, ins);
args_ins = ins;
for (int i = 0; i < sig->hasthis + sig->param_count; ++i) {
MonoInst *arg_addr_ins;
EMIT_NEW_VARLOADA (cfg, arg_addr_ins, cfg->args [i], cfg->arg_types [i]);
EMIT_NEW_STORE_MEMBASE (cfg, ins, OP_STORE_MEMBASE_REG, args_ins->dreg, i * sizeof (target_mgreg_t), arg_addr_ins->dreg);
}
MonoInst *ret_var = NULL;
MonoInst *ret_arg_ins;
if (!MONO_TYPE_IS_VOID (sig->ret)) {
ret_var = mono_compile_create_var (cfg, sig->ret, OP_LOCAL);
EMIT_NEW_VARLOADA (cfg, ret_arg_ins, ret_var, sig->ret);
} else {
EMIT_NEW_PCONST (cfg, ret_arg_ins, NULL);
}
iargs = g_newa (MonoInst*, 3);
iargs [0] = emit_get_rgctx_method (cfg, -1, cfg->method, MONO_RGCTX_INFO_INTERP_METHOD);
iargs [1] = ret_arg_ins;
iargs [2] = args_ins;
mono_emit_jit_icall_id (cfg, MONO_JIT_ICALL_mini_llvmonly_interp_entry_gsharedvt, iargs);
if (!MONO_TYPE_IS_VOID (sig->ret))
EMIT_NEW_VARLOAD (cfg, ins, ret_var, sig->ret);
else
ins = NULL;
} else {
/* Obtain the interp entry function */
ftndesc = emit_get_rgctx_method (cfg, -1, cfg->method, MONO_RGCTX_INFO_LLVMONLY_INTERP_ENTRY);
/* Call it */
iargs = g_newa (MonoInst*, sig->param_count + 1);
for (int i = 0; i < sig->param_count + sig->hasthis; ++i)
EMIT_NEW_ARGLOAD (cfg, iargs [i], i);
ins = mini_emit_llvmonly_calli (cfg, sig, iargs, ftndesc);
}
/* Do a normal return */
if (cfg->ret) {
emit_setret (cfg, ins);
/*
* Since only bb_entry/bb_exit is emitted if interp_entry_only is set,
* it's possible that the return value becomes an OP_PHI node whose inputs
* are not emitted. Make it volatile to prevent that.
*/
cfg->ret->flags |= MONO_INST_VOLATILE;
}
MONO_INST_NEW (cfg, ins, OP_BR);
ins->inst_target_bb = cfg->bb_exit;
MONO_ADD_INS (cfg->cbb, ins);
link_bblock (cfg, cfg->cbb, cfg->bb_exit);
}
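/* Decoded inline operand of an IL opcode; which member is valid depends on the opcode's operand type. */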
typedef union _MonoOpcodeParameter {
gint32 i32;
gint64 i64;
float f;
double d;
guchar *branch_target;
} MonoOpcodeParameter;
typedef struct _MonoOpcodeInfo {
guint constant : 4; // private
gint pops : 3; // public -1 means variable
gint pushes : 3; // public -1 means variable
} MonoOpcodeInfo;
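/*
* mono_opcode_decode:
*
*   Decode the inline operand of the IL opcode IL_OP starting at IP into PARAMETER
* and return its stack push/pop information.
*/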
static const MonoOpcodeInfo*
mono_opcode_decode (guchar *ip, guint op_size, MonoOpcodeEnum il_op, MonoOpcodeParameter *parameter)
{
#define Push0 (0)
#define Pop0 (0)
#define Push1 (1)
#define Pop1 (1)
#define PushI (1)
#define PopI (1)
#define PushI8 (1)
#define PopI8 (1)
#define PushRef (1)
#define PopRef (1)
#define PushR4 (1)
#define PopR4 (1)
#define PushR8 (1)
#define PopR8 (1)
#define VarPush (-1)
#define VarPop (-1)
static const MonoOpcodeInfo mono_opcode_info [ ] = {
#define OPDEF(name, str, pops, pushes, param, param_constant, a, b, c, flow) {param_constant + 1, pops, pushes },
#include "mono/cil/opcode.def"
#undef OPDEF
};
#undef Push0
#undef Pop0
#undef Push1
#undef Pop1
#undef PushI
#undef PopI
#undef PushI8
#undef PopI8
#undef PushRef
#undef PopRef
#undef PushR4
#undef PopR4
#undef PushR8
#undef PopR8
#undef VarPush
#undef VarPop
gint32 delta;
guchar *next_ip = ip + op_size;
const MonoOpcodeInfo *info = &mono_opcode_info [il_op];
switch (mono_opcodes [il_op].argument) {
case MonoInlineNone:
parameter->i32 = (int)info->constant - 1;
break;
case MonoInlineString:
case MonoInlineType:
case MonoInlineField:
case MonoInlineMethod:
case MonoInlineTok:
case MonoInlineSig:
case MonoShortInlineR:
case MonoInlineI:
parameter->i32 = read32 (next_ip - 4);
// FIXME check token type?
break;
case MonoShortInlineI:
parameter->i32 = (signed char)next_ip [-1];
break;
case MonoInlineVar:
parameter->i32 = read16 (next_ip - 2);
break;
case MonoShortInlineVar:
parameter->i32 = next_ip [-1];
break;
case MonoInlineR:
case MonoInlineI8:
parameter->i64 = read64 (next_ip - 8);
break;
case MonoShortInlineBrTarget:
delta = (signed char)next_ip [-1];
goto branch_target;
case MonoInlineBrTarget:
delta = (gint32)read32 (next_ip - 4);
branch_target:
parameter->branch_target = delta + next_ip;
break;
case MonoInlineSwitch: // complicated
break;
default:
g_error ("%s %d %d\n", __func__, il_op, mono_opcodes [il_op].argument);
}
return info;
}
/*
* mono_method_to_ir:
*
* Translate the .NET IL into linear IR.
*
* @start_bblock: if not NULL, the starting basic block, used during inlining.
* @end_bblock: if not NULL, the ending basic block, used during inlining.
* @return_var: if not NULL, the place where the return value is stored, used during inlining.
* @inline_args: if not NULL, contains the arguments to the inline call
* @inline_offset: the real offset of the inline call site, or zero when not inlining.
* @is_virtual_call: whether this method is being called as a result of a call to callvirt
*
* This method is used to turn ECMA IL into Mono's internal Linear IR
* representation. It is used both for entire methods and for
* inlining existing methods. In the former case, the @start_bblock,
* @end_bblock, @return_var, @inline_args are all set to NULL, and the
* inline_offset is set to zero.
*
* Returns: the inline cost, or -1 if there was an error processing this method.
*/
int
mono_method_to_ir (MonoCompile *cfg, MonoMethod *method, MonoBasicBlock *start_bblock, MonoBasicBlock *end_bblock,
MonoInst *return_var, MonoInst **inline_args,
guint inline_offset, gboolean is_virtual_call)
{
ERROR_DECL (error);
// Buffer to hold parameters to mono_new_array, instead of varargs.
MonoInst *array_new_localalloc_ins = NULL;
MonoInst *ins, **sp, **stack_start;
MonoBasicBlock *tblock = NULL;
MonoBasicBlock *init_localsbb = NULL, *init_localsbb2 = NULL;
MonoSimpleBasicBlock *bb = NULL, *original_bb = NULL;
MonoMethod *method_definition;
MonoInst **arg_array;
MonoMethodHeader *header;
MonoImage *image;
guint32 token, ins_flag;
MonoClass *klass;
MonoClass *constrained_class = NULL;
gboolean save_last_error = FALSE;
guchar *ip, *end, *target, *err_pos;
MonoMethodSignature *sig;
MonoGenericContext *generic_context = NULL;
MonoGenericContainer *generic_container = NULL;
MonoType **param_types;
int i, n, start_new_bblock, dreg;
int num_calls = 0, inline_costs = 0;
guint num_args;
GSList *class_inits = NULL;
gboolean dont_verify, dont_verify_stloc, readonly = FALSE;
int context_used;
gboolean init_locals, seq_points, skip_dead_blocks;
gboolean sym_seq_points = FALSE;
MonoDebugMethodInfo *minfo;
MonoBitSet *seq_point_locs = NULL;
MonoBitSet *seq_point_set_locs = NULL;
const char *ovf_exc = NULL;
gboolean emitted_funccall_seq_point = FALSE;
gboolean detached_before_ret = FALSE;
gboolean ins_has_side_effect;
if (!cfg->disable_inline)
cfg->disable_inline = (method->iflags & METHOD_IMPL_ATTRIBUTE_NOOPTIMIZATION) || is_jit_optimizer_disabled (method);
cfg->current_method = method;
image = m_class_get_image (method->klass);
/* serialization and xdomain stuff may need access to private fields and methods */
dont_verify = FALSE;
dont_verify |= method->wrapper_type == MONO_WRAPPER_MANAGED_TO_NATIVE; /* bug #77896 */
dont_verify |= method->wrapper_type == MONO_WRAPPER_COMINTEROP;
dont_verify |= method->wrapper_type == MONO_WRAPPER_COMINTEROP_INVOKE;
/* still some type unsafety issues in marshal wrappers... (unknown is PtrToStructure) */
dont_verify_stloc = method->wrapper_type == MONO_WRAPPER_MANAGED_TO_NATIVE;
dont_verify_stloc |= method->wrapper_type == MONO_WRAPPER_OTHER;
dont_verify_stloc |= method->wrapper_type == MONO_WRAPPER_NATIVE_TO_MANAGED;
dont_verify_stloc |= method->wrapper_type == MONO_WRAPPER_STELEMREF;
header = mono_method_get_header_checked (method, cfg->error);
if (!header) {
mono_cfg_set_exception (cfg, MONO_EXCEPTION_MONO_ERROR);
goto exception_exit;
} else {
cfg->headers_to_free = g_slist_prepend_mempool (cfg->mempool, cfg->headers_to_free, header);
}
generic_container = mono_method_get_generic_container (method);
sig = mono_method_signature_internal (method);
num_args = sig->hasthis + sig->param_count;
ip = (guchar*)header->code;
cfg->cil_start = ip;
end = ip + header->code_size;
cfg->stat_cil_code_size += header->code_size;
seq_points = cfg->gen_seq_points && cfg->method == method;
if (method->wrapper_type == MONO_WRAPPER_NATIVE_TO_MANAGED) {
/* We could hit a seq point before attaching to the JIT (#8338) */
seq_points = FALSE;
}
if (method->wrapper_type == MONO_WRAPPER_OTHER) {
WrapperInfo *info = mono_marshal_get_wrapper_info (method);
if (info->subtype == WRAPPER_SUBTYPE_INTERP_IN) {
/* We could hit a seq point before attaching to the JIT (#8338) */
seq_points = FALSE;
}
}
if (cfg->prof_coverage) {
if (cfg->compile_aot)
g_error ("Coverage profiling is not supported with AOT.");
INLINE_FAILURE ("coverage profiling");
cfg->coverage_info = mono_profiler_coverage_alloc (cfg->method, header->code_size);
}
if ((cfg->gen_sdb_seq_points && cfg->method == method) || cfg->prof_coverage) {
minfo = mono_debug_lookup_method (method);
if (minfo) {
MonoSymSeqPoint *sps;
int i, n_il_offsets;
mono_debug_get_seq_points (minfo, NULL, NULL, NULL, &sps, &n_il_offsets);
seq_point_locs = mono_bitset_mem_new (mono_mempool_alloc0 (cfg->mempool, mono_bitset_alloc_size (header->code_size, 0)), header->code_size, 0);
seq_point_set_locs = mono_bitset_mem_new (mono_mempool_alloc0 (cfg->mempool, mono_bitset_alloc_size (header->code_size, 0)), header->code_size, 0);
sym_seq_points = TRUE;
for (i = 0; i < n_il_offsets; ++i) {
if (sps [i].il_offset < header->code_size)
mono_bitset_set_fast (seq_point_locs, sps [i].il_offset);
}
g_free (sps);
MonoDebugMethodAsyncInfo* asyncMethod = mono_debug_lookup_method_async_debug_info (method);
if (asyncMethod) {
for (i = 0; asyncMethod != NULL && i < asyncMethod->num_awaits; i++)
{
mono_bitset_set_fast (seq_point_locs, asyncMethod->resume_offsets[i]);
mono_bitset_set_fast (seq_point_locs, asyncMethod->yield_offsets[i]);
}
mono_debug_free_method_async_debug_info (asyncMethod);
}
} else if (!method->wrapper_type && !method->dynamic && mono_debug_image_has_debug_info (m_class_get_image (method->klass))) {
/* Methods without line number info like auto-generated property accessors */
seq_point_locs = mono_bitset_mem_new (mono_mempool_alloc0 (cfg->mempool, mono_bitset_alloc_size (header->code_size, 0)), header->code_size, 0);
seq_point_set_locs = mono_bitset_mem_new (mono_mempool_alloc0 (cfg->mempool, mono_bitset_alloc_size (header->code_size, 0)), header->code_size, 0);
sym_seq_points = TRUE;
}
}
/*
* Methods without init_locals set could cause asserts in various passes
* (#497220). To work around this, we emit dummy initialization opcodes
* (OP_DUMMY_ICONST etc.) which generate no code. These are only supported
* on some platforms.
*/
if (cfg->opt & MONO_OPT_UNSAFE)
init_locals = header->init_locals;
else
init_locals = TRUE;
method_definition = method;
while (method_definition->is_inflated) {
MonoMethodInflated *imethod = (MonoMethodInflated *) method_definition;
method_definition = imethod->declaring;
}
if (sig->is_inflated)
generic_context = mono_method_get_context (method);
else if (generic_container)
generic_context = &generic_container->context;
cfg->generic_context = generic_context;
if (!cfg->gshared)
g_assert (!sig->has_type_parameters);
if (sig->generic_param_count && method->wrapper_type == MONO_WRAPPER_NONE) {
g_assert (method->is_inflated);
g_assert (mono_method_get_context (method)->method_inst);
}
if (method->is_inflated && mono_method_get_context (method)->method_inst)
g_assert (sig->generic_param_count);
if (cfg->method == method) {
cfg->real_offset = 0;
} else {
cfg->real_offset = inline_offset;
}
cfg->cil_offset_to_bb = (MonoBasicBlock **)mono_mempool_alloc0 (cfg->mempool, sizeof (MonoBasicBlock*) * header->code_size);
cfg->cil_offset_to_bb_len = header->code_size;
if (cfg->verbose_level > 2)
printf ("method to IR %s\n", mono_method_full_name (method, TRUE));
param_types = (MonoType **)mono_mempool_alloc (cfg->mempool, sizeof (MonoType*) * num_args);
if (sig->hasthis)
param_types [0] = m_class_is_valuetype (method->klass) ? m_class_get_this_arg (method->klass) : m_class_get_byval_arg (method->klass);
for (n = 0; n < sig->param_count; ++n)
param_types [n + sig->hasthis] = sig->params [n];
cfg->arg_types = param_types;
cfg->dont_inline = g_list_prepend (cfg->dont_inline, method);
if (cfg->method == method) {
/* ENTRY BLOCK */
NEW_BBLOCK (cfg, start_bblock);
cfg->bb_entry = start_bblock;
start_bblock->cil_code = NULL;
start_bblock->cil_length = 0;
/* EXIT BLOCK */
NEW_BBLOCK (cfg, end_bblock);
cfg->bb_exit = end_bblock;
end_bblock->cil_code = NULL;
end_bblock->cil_length = 0;
end_bblock->flags |= BB_INDIRECT_JUMP_TARGET;
g_assert (cfg->num_bblocks == 2);
arg_array = cfg->args;
if (header->num_clauses) {
cfg->spvars = g_hash_table_new (NULL, NULL);
cfg->exvars = g_hash_table_new (NULL, NULL);
}
cfg->clause_is_dead = mono_mempool_alloc0 (cfg->mempool, sizeof (gboolean) * header->num_clauses);
/* handle exception clauses */
for (i = 0; i < header->num_clauses; ++i) {
MonoBasicBlock *try_bb;
MonoExceptionClause *clause = &header->clauses [i];
GET_BBLOCK (cfg, try_bb, ip + clause->try_offset);
try_bb->real_offset = clause->try_offset;
try_bb->try_start = TRUE;
GET_BBLOCK (cfg, tblock, ip + clause->handler_offset);
tblock->real_offset = clause->handler_offset;
tblock->flags |= BB_EXCEPTION_HANDLER;
if (clause->flags == MONO_EXCEPTION_CLAUSE_FINALLY)
mono_create_exvar_for_offset (cfg, clause->handler_offset);
/*
* Linking the try block with the EH block hinders inlining, as we won't be able to
* merge the bblocks created during inlining, producing an artificial hole for no good reason.
*/
if (COMPILE_LLVM (cfg))
link_bblock (cfg, try_bb, tblock);
if (*(ip + clause->handler_offset) == CEE_POP)
tblock->flags |= BB_EXCEPTION_DEAD_OBJ;
if (clause->flags == MONO_EXCEPTION_CLAUSE_FINALLY ||
clause->flags == MONO_EXCEPTION_CLAUSE_FILTER ||
clause->flags == MONO_EXCEPTION_CLAUSE_FAULT) {
MONO_INST_NEW (cfg, ins, OP_START_HANDLER);
MONO_ADD_INS (tblock, ins);
if (seq_points && clause->flags != MONO_EXCEPTION_CLAUSE_FINALLY && clause->flags != MONO_EXCEPTION_CLAUSE_FILTER) {
/* finally clauses already have a seq point */
/* seq points for filter clauses are emitted below */
NEW_SEQ_POINT (cfg, ins, clause->handler_offset, TRUE);
MONO_ADD_INS (tblock, ins);
}
/* todo: is a fault block unsafe to optimize? */
if (clause->flags == MONO_EXCEPTION_CLAUSE_FAULT)
tblock->flags |= BB_EXCEPTION_UNSAFE;
}
/*printf ("clause try IL_%04x to IL_%04x handler %d at IL_%04x to IL_%04x\n", clause->try_offset, clause->try_offset + clause->try_len, clause->flags, clause->handler_offset, clause->handler_offset + clause->handler_len);
while (p < end) {
printf ("%s", mono_disasm_code_one (NULL, method, p, &p));
}*/
/* catch and filter blocks get the exception object on the stack */
if (clause->flags == MONO_EXCEPTION_CLAUSE_NONE ||
clause->flags == MONO_EXCEPTION_CLAUSE_FILTER) {
/* mostly like handle_stack_args (), but just sets the input args */
/* printf ("handling clause at IL_%04x\n", clause->handler_offset); */
tblock->in_scount = 1;
tblock->in_stack = (MonoInst **)mono_mempool_alloc (cfg->mempool, sizeof (MonoInst*));
tblock->in_stack [0] = mono_create_exvar_for_offset (cfg, clause->handler_offset);
cfg->cbb = tblock;
#ifdef MONO_CONTEXT_SET_LLVM_EXC_REG
/* The EH code passes in the exception in a register to both JITted and LLVM compiled code */
if (!cfg->compile_llvm) {
MONO_INST_NEW (cfg, ins, OP_GET_EX_OBJ);
ins->dreg = tblock->in_stack [0]->dreg;
MONO_ADD_INS (tblock, ins);
}
#else
MonoInst *dummy_use;
/*
* Add a dummy use for the exvar so its liveness info will be
* correct.
*/
EMIT_NEW_DUMMY_USE (cfg, dummy_use, tblock->in_stack [0]);
#endif
if (seq_points && clause->flags == MONO_EXCEPTION_CLAUSE_FILTER) {
NEW_SEQ_POINT (cfg, ins, clause->handler_offset, TRUE);
MONO_ADD_INS (tblock, ins);
}
if (clause->flags == MONO_EXCEPTION_CLAUSE_FILTER) {
GET_BBLOCK (cfg, tblock, ip + clause->data.filter_offset);
tblock->flags |= BB_EXCEPTION_HANDLER;
tblock->real_offset = clause->data.filter_offset;
tblock->in_scount = 1;
tblock->in_stack = (MonoInst **)mono_mempool_alloc (cfg->mempool, sizeof (MonoInst*));
/* The filter block shares the exvar with the handler block */
tblock->in_stack [0] = mono_create_exvar_for_offset (cfg, clause->handler_offset);
MONO_INST_NEW (cfg, ins, OP_START_HANDLER);
MONO_ADD_INS (tblock, ins);
}
}
if (clause->flags != MONO_EXCEPTION_CLAUSE_FILTER &&
clause->data.catch_class &&
cfg->gshared &&
mono_class_check_context_used (clause->data.catch_class)) {
/*
* In shared generic code with catch
* clauses containing type variables
* the exception handling code has to
* be able to get to the rgctx.
* Therefore we have to make sure that
* the vtable/mrgctx argument (for
* static or generic methods) or the
* "this" argument (for non-static
* methods) are live.
*/
if ((method->flags & METHOD_ATTRIBUTE_STATIC) ||
mini_method_get_context (method)->method_inst ||
m_class_is_valuetype (method->klass)) {
mono_get_vtable_var (cfg);
} else {
MonoInst *dummy_use;
EMIT_NEW_DUMMY_USE (cfg, dummy_use, arg_array [0]);
}
}
}
} else {
arg_array = g_newa (MonoInst*, num_args);
cfg->cbb = start_bblock;
cfg->args = arg_array;
mono_save_args (cfg, sig, inline_args);
}
if (cfg->method == method && cfg->self_init && cfg->compile_aot && !COMPILE_LLVM (cfg)) {
MonoMethod *wrapper;
MonoInst *args [2];
int idx;
/*
* Emit code to initialize this method by calling the init wrapper emitted by LLVM.
* This is not efficient right now, but its only used for the methods which fail
* LLVM compilation.
* FIXME: Optimize this
*/
g_assert (!cfg->gshared);
wrapper = mono_marshal_get_aot_init_wrapper (AOT_INIT_METHOD);
/* Emit this into the entry bb so it comes before the GC safe point which depends on an inited GOT */
cfg->cbb = cfg->bb_entry;
idx = mono_aot_get_method_index (cfg->method);
EMIT_NEW_ICONST (cfg, args [0], idx);
/* Dummy */
EMIT_NEW_ICONST (cfg, args [1], 0);
mono_emit_method_call (cfg, wrapper, args, NULL);
}
if (cfg->llvm_only && cfg->interp && cfg->method == method && !cfg->deopt) {
if (header->num_clauses) {
for (int i = 0; i < header->num_clauses; ++i) {
MonoExceptionClause *clause = &header->clauses [i];
/* Finally clauses are checked after the remove_finally pass */
if (clause->flags != MONO_EXCEPTION_CLAUSE_FINALLY)
cfg->interp_entry_only = TRUE;
}
}
}
/* we use a separate basic block for the initialization code */
NEW_BBLOCK (cfg, init_localsbb);
if (cfg->method == method)
cfg->bb_init = init_localsbb;
init_localsbb->real_offset = cfg->real_offset;
start_bblock->next_bb = init_localsbb;
link_bblock (cfg, start_bblock, init_localsbb);
init_localsbb2 = init_localsbb;
cfg->cbb = init_localsbb;
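/* For gsharedvt methods, load the info var describing the method and allocate the area holding the gsharedvt locals. */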
if (cfg->gsharedvt && cfg->method == method) {
MonoGSharedVtMethodInfo *info;
MonoInst *var, *locals_var;
int dreg;
info = (MonoGSharedVtMethodInfo *)mono_mempool_alloc0 (cfg->mempool, sizeof (MonoGSharedVtMethodInfo));
info->method = cfg->method;
info->count_entries = 16;
info->entries = (MonoRuntimeGenericContextInfoTemplate *)mono_mempool_alloc0 (cfg->mempool, sizeof (MonoRuntimeGenericContextInfoTemplate) * info->count_entries);
cfg->gsharedvt_info = info;
var = mono_compile_create_var (cfg, mono_get_int_type (), OP_LOCAL);
/* prevent it from being register allocated */
//var->flags |= MONO_INST_VOLATILE;
cfg->gsharedvt_info_var = var;
ins = emit_get_rgctx_gsharedvt_method (cfg, mini_method_check_context_used (cfg, method), method, info);
MONO_EMIT_NEW_UNALU (cfg, OP_MOVE, var->dreg, ins->dreg);
/* Allocate locals */
locals_var = mono_compile_create_var (cfg, mono_get_int_type (), OP_LOCAL);
/* prevent it from being register allocated */
//locals_var->flags |= MONO_INST_VOLATILE;
cfg->gsharedvt_locals_var = locals_var;
dreg = alloc_ireg (cfg);
MONO_EMIT_NEW_LOAD_MEMBASE_OP (cfg, OP_LOADI4_MEMBASE, dreg, var->dreg, MONO_STRUCT_OFFSET (MonoGSharedVtMethodRuntimeInfo, locals_size));
MONO_INST_NEW (cfg, ins, OP_LOCALLOC);
ins->dreg = locals_var->dreg;
ins->sreg1 = dreg;
MONO_ADD_INS (cfg->cbb, ins);
cfg->gsharedvt_locals_var_ins = ins;
cfg->flags |= MONO_CFG_HAS_ALLOCA;
/*
if (init_locals)
ins->flags |= MONO_INST_INIT;
*/
if (cfg->llvm_only) {
init_localsbb = cfg->cbb;
init_localsbb2 = cfg->cbb;
}
}
if (cfg->deopt) {
/*
* Push an LMFExt frame which points to a MonoMethodILState structure.
*/
emit_push_lmf (cfg);
/* The type doesn't matter, the llvm backend will use the correct type */
MonoInst *il_state_var = mono_compile_create_var (cfg, mono_get_int_type (), OP_LOCAL);
il_state_var->flags |= MONO_INST_VOLATILE;
cfg->il_state_var = il_state_var;
EMIT_NEW_VARLOADA (cfg, ins, cfg->il_state_var, NULL);
int il_state_addr_reg = ins->dreg;
/* il_state->method = method */
MonoInst *method_ins = emit_get_rgctx_method (cfg, -1, cfg->method, MONO_RGCTX_INFO_METHOD);
MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STORE_MEMBASE_REG, il_state_addr_reg, MONO_STRUCT_OFFSET (MonoMethodILState, method), method_ins->dreg);
EMIT_NEW_VARLOADA (cfg, ins, cfg->lmf_var, NULL);
int lmf_reg = ins->dreg;
/* lmf->kind = MONO_LMFEXT_IL_STATE */
MONO_EMIT_NEW_STORE_MEMBASE_IMM (cfg, OP_STOREI4_MEMBASE_IMM, lmf_reg, MONO_STRUCT_OFFSET (MonoLMFExt, kind), MONO_LMFEXT_IL_STATE);
/* lmf->il_state = il_state */
MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STORE_MEMBASE_REG, lmf_reg, MONO_STRUCT_OFFSET (MonoLMFExt, il_state), il_state_addr_reg);
/* emit_get_rgctx_method () might create new bblocks */
if (cfg->llvm_only) {
init_localsbb = cfg->cbb;
init_localsbb2 = cfg->cbb;
}
}
if (cfg->llvm_only && cfg->interp && cfg->method == method) {
if (cfg->interp_entry_only)
emit_llvmonly_interp_entry (cfg, header);
}
/* FIRST CODE BLOCK */
NEW_BBLOCK (cfg, tblock);
tblock->cil_code = ip;
cfg->cbb = tblock;
cfg->ip = ip;
init_localsbb->next_bb = cfg->cbb;
link_bblock (cfg, init_localsbb, cfg->cbb);
ADD_BBLOCK (cfg, tblock);
CHECK_CFG_EXCEPTION;
if (header->code_size == 0)
UNVERIFIED;
if (get_basic_blocks (cfg, header, cfg->real_offset, ip, end, &err_pos)) {
ip = err_pos;
UNVERIFIED;
}
if (cfg->method == method) {
int breakpoint_id = mono_debugger_method_has_breakpoint (method);
if (breakpoint_id) {
MONO_INST_NEW (cfg, ins, OP_BREAK);
MONO_ADD_INS (cfg->cbb, ins);
}
mono_debug_init_method (cfg, cfg->cbb, breakpoint_id);
}
for (n = 0; n < header->num_locals; ++n) {
if (header->locals [n]->type == MONO_TYPE_VOID && !m_type_is_byref (header->locals [n]))
UNVERIFIED;
}
class_inits = NULL;
/* We force the vtable variable here for all shared methods
for the possibility that they might show up in a stack
trace where their exact instantiation is needed. */
if (cfg->gshared && method == cfg->method) {
if ((method->flags & METHOD_ATTRIBUTE_STATIC) ||
mini_method_get_context (method)->method_inst ||
m_class_is_valuetype (method->klass)) {
mono_get_vtable_var (cfg);
} else {
/* FIXME: Is there a better way to do this?
We need the variable live for the duration
of the whole method. */
cfg->args [0]->flags |= MONO_INST_VOLATILE;
}
}
/* add a check for this != NULL to inlined methods */
if (is_virtual_call) {
MonoInst *arg_ins;
//
// This is just a hack to avoid 'this' checks in empty methods which could get inlined
// into finally clauses, preventing the removal of empty finally clauses, since all
// variables in finally clauses are marked volatile so the check can't be removed
//
if (!(cfg->llvm_only && m_class_is_valuetype (method->klass) && header->code_size == 1 && header->code [0] == CEE_RET)) {
NEW_ARGLOAD (cfg, arg_ins, 0);
MONO_ADD_INS (cfg->cbb, arg_ins);
MONO_EMIT_NEW_CHECK_THIS (cfg, arg_ins->dreg);
}
}
skip_dead_blocks = !dont_verify;
if (skip_dead_blocks) {
original_bb = bb = mono_basic_block_split (method, cfg->error, header);
CHECK_CFG_ERROR;
g_assert (bb);
}
/* we use a spare stack slot in SWITCH and NEWOBJ and others */
stack_start = sp = (MonoInst **)mono_mempool_alloc0 (cfg->mempool, sizeof (MonoInst*) * (header->max_stack + 1));
ins_flag = 0;
start_new_bblock = 0;
MonoOpcodeEnum il_op; il_op = MonoOpcodeEnum_Invalid;
emit_set_deopt_il_offset (cfg, ip - cfg->cil_start);
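/* Main decode loop: translate one IL instruction per iteration into linear IR. */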
for (guchar *next_ip = ip; ip < end; ip = next_ip) {
MonoOpcodeEnum previous_il_op = il_op;
const guchar *tmp_ip = ip;
const int op_size = mono_opcode_value_and_size (&tmp_ip, end, &il_op);
CHECK_OPSIZE (op_size);
next_ip += op_size;
if (cfg->method == method)
cfg->real_offset = ip - header->code;
else
cfg->real_offset = inline_offset;
cfg->ip = ip;
context_used = 0;
if (start_new_bblock) {
cfg->cbb->cil_length = ip - cfg->cbb->cil_code;
if (start_new_bblock == 2) {
g_assert (ip == tblock->cil_code);
} else {
GET_BBLOCK (cfg, tblock, ip);
}
cfg->cbb->next_bb = tblock;
cfg->cbb = tblock;
start_new_bblock = 0;
for (i = 0; i < cfg->cbb->in_scount; ++i) {
if (cfg->verbose_level > 3)
printf ("loading %d from temp %d\n", i, (int)cfg->cbb->in_stack [i]->inst_c0);
EMIT_NEW_TEMPLOAD (cfg, ins, cfg->cbb->in_stack [i]->inst_c0);
*sp++ = ins;
}
if (class_inits)
g_slist_free (class_inits);
class_inits = NULL;
emit_set_deopt_il_offset (cfg, ip - cfg->cil_start);
} else {
if ((tblock = cfg->cil_offset_to_bb [ip - cfg->cil_start]) && (tblock != cfg->cbb)) {
link_bblock (cfg, cfg->cbb, tblock);
if (sp != stack_start) {
handle_stack_args (cfg, stack_start, sp - stack_start);
sp = stack_start;
CHECK_UNVERIFIABLE (cfg);
}
cfg->cbb->next_bb = tblock;
cfg->cbb = tblock;
for (i = 0; i < cfg->cbb->in_scount; ++i) {
if (cfg->verbose_level > 3)
printf ("loading %d from temp %d\n", i, (int)cfg->cbb->in_stack [i]->inst_c0);
EMIT_NEW_TEMPLOAD (cfg, ins, cfg->cbb->in_stack [i]->inst_c0);
*sp++ = ins;
}
g_slist_free (class_inits);
class_inits = NULL;
emit_set_deopt_il_offset (cfg, ip - cfg->cil_start);
}
}
/*
* Methods with the AggressiveInlining flag could be inlined even if the class has a cctor.
* This might create a branch so emit it in the first code bblock instead of into initlocals_bb.
*/
if (ip - header->code == 0 && cfg->method != method && cfg->compile_aot && (method->iflags & METHOD_IMPL_ATTRIBUTE_AGGRESSIVE_INLINING) && mono_class_needs_cctor_run (method->klass, method)) {
emit_class_init (cfg, method->klass);
}
if (skip_dead_blocks) {
int ip_offset = ip - header->code;
if (ip_offset == bb->end)
bb = bb->next;
if (bb->dead) {
g_assert (op_size > 0); /* The BB formation pass must catch all bad ops */
if (cfg->verbose_level > 3) printf ("SKIPPING DEAD OP at %x\n", ip_offset);
if (ip_offset + op_size == bb->end) {
MONO_INST_NEW (cfg, ins, OP_NOP);
MONO_ADD_INS (cfg->cbb, ins);
start_new_bblock = 1;
}
continue;
}
}
/*
* Sequence points are points where the debugger can place a breakpoint.
* Currently, we generate these automatically at points where the IL
* stack is empty.
*/
if (seq_points && ((!sym_seq_points && (sp == stack_start)) || (sym_seq_points && mono_bitset_test_fast (seq_point_locs, ip - header->code)))) {
/*
* Make methods interruptible at the beginning, and at the targets of
* backward branches.
* Also, do this at the start of every bblock in methods with clauses too,
* to be able to handle instructions with imprecise control flow like
* throw/endfinally.
* Backward branches are handled at the end of method-to-ir ().
*/
gboolean intr_loc = ip == header->code || (!cfg->cbb->last_ins && cfg->header->num_clauses);
gboolean sym_seq_point = sym_seq_points && mono_bitset_test_fast (seq_point_locs, ip - header->code);
/* Avoid sequence points on empty IL like .volatile */
// FIXME: Enable this
//if (!(cfg->cbb->last_ins && cfg->cbb->last_ins->opcode == OP_SEQ_POINT)) {
NEW_SEQ_POINT (cfg, ins, ip - header->code, intr_loc);
if ((sp != stack_start) && !sym_seq_point)
ins->flags |= MONO_INST_NONEMPTY_STACK;
MONO_ADD_INS (cfg->cbb, ins);
if (sym_seq_points)
mono_bitset_set_fast (seq_point_set_locs, ip - header->code);
if (cfg->prof_coverage) {
guint32 cil_offset = ip - header->code;
gpointer counter = &cfg->coverage_info->data [cil_offset].count;
cfg->coverage_info->data [cil_offset].cil_code = ip;
if (mono_arch_opcode_supported (OP_ATOMIC_ADD_I4)) {
MonoInst *one_ins, *load_ins;
EMIT_NEW_PCONST (cfg, load_ins, counter);
EMIT_NEW_ICONST (cfg, one_ins, 1);
MONO_INST_NEW (cfg, ins, OP_ATOMIC_ADD_I4);
ins->dreg = mono_alloc_ireg (cfg);
ins->inst_basereg = load_ins->dreg;
ins->inst_offset = 0;
ins->sreg2 = one_ins->dreg;
ins->type = STACK_I4;
MONO_ADD_INS (cfg->cbb, ins);
} else {
EMIT_NEW_PCONST (cfg, ins, counter);
MONO_EMIT_NEW_STORE_MEMBASE_IMM (cfg, OP_STORE_MEMBASE_IMM, ins->dreg, 0, 1);
}
}
}
cfg->cbb->real_offset = cfg->real_offset;
if (cfg->verbose_level > 3)
printf ("converting (in B%d: stack: %d) %s", cfg->cbb->block_num, (int)(sp - stack_start), mono_disasm_code_one (NULL, method, ip, NULL));
/*
* This is used to compute BB_HAS_SIDE_EFFECTS, which is used for the elimination of
* finally clauses generated by foreach statements, so only IL opcodes which occur in such clauses
* need to set this.
*/
ins_has_side_effect = TRUE;
// Variables shared by CEE_CALLI CEE_CALL CEE_CALLVIRT CEE_JMP.
// Initialize to either what they all need or zero.
gboolean emit_widen = TRUE;
gboolean tailcall = FALSE;
gboolean common_call = FALSE;
MonoInst *keep_this_alive = NULL;
MonoMethod *cmethod = NULL;
MonoMethodSignature *fsig = NULL;
// These are used only in CALL/CALLVIRT but must be initialized also for CALLI,
// since it jumps into CALL/CALLVIRT.
gboolean need_seq_point = FALSE;
gboolean push_res = TRUE;
gboolean skip_ret = FALSE;
gboolean tailcall_remove_ret = FALSE;
// FIXME split the ~500 lines of load/store field handling into a separate file/function.
MonoOpcodeParameter parameter;
const MonoOpcodeInfo* info = mono_opcode_decode (ip, op_size, il_op, ¶meter);
g_assert (info);
n = parameter.i32;
token = parameter.i32;
target = parameter.branch_target;
// Check stack size for push/pop except variable cases -- -1 like call/ret/newobj.
const int pushes = info->pushes;
const int pops = info->pops;
if (pushes >= 0 && pops >= 0) {
g_assert (pushes - pops <= 1);
if (pushes - pops == 1)
CHECK_STACK_OVF ();
}
if (pops >= 0)
CHECK_STACK (pops);
switch (il_op) {
case MONO_CEE_NOP:
if (seq_points && !sym_seq_points && sp != stack_start) {
/*
* The C# compiler uses these nops to notify the JIT that it should
* insert seq points.
*/
NEW_SEQ_POINT (cfg, ins, ip - header->code, FALSE);
MONO_ADD_INS (cfg->cbb, ins);
}
if (cfg->keep_cil_nops)
MONO_INST_NEW (cfg, ins, OP_HARD_NOP);
else
MONO_INST_NEW (cfg, ins, OP_NOP);
MONO_ADD_INS (cfg->cbb, ins);
emitted_funccall_seq_point = FALSE;
ins_has_side_effect = FALSE;
break;
case MONO_CEE_BREAK:
if (mini_should_insert_breakpoint (cfg->method)) {
ins = mono_emit_jit_icall (cfg, mono_debugger_agent_user_break, NULL);
} else {
MONO_INST_NEW (cfg, ins, OP_NOP);
MONO_ADD_INS (cfg->cbb, ins);
}
break;
case MONO_CEE_LDARG_0:
case MONO_CEE_LDARG_1:
case MONO_CEE_LDARG_2:
case MONO_CEE_LDARG_3:
case MONO_CEE_LDARG_S:
case MONO_CEE_LDARG:
CHECK_ARG (n);
if (next_ip < end && is_addressable_valuetype_load (cfg, next_ip, cfg->arg_types[n])) {
EMIT_NEW_ARGLOADA (cfg, ins, n);
} else {
EMIT_NEW_ARGLOAD (cfg, ins, n);
}
*sp++ = ins;
break;
case MONO_CEE_LDLOC_0:
case MONO_CEE_LDLOC_1:
case MONO_CEE_LDLOC_2:
case MONO_CEE_LDLOC_3:
case MONO_CEE_LDLOC_S:
case MONO_CEE_LDLOC:
CHECK_LOCAL (n);
if (next_ip < end && is_addressable_valuetype_load (cfg, next_ip, header->locals[n])) {
EMIT_NEW_LOCLOADA (cfg, ins, n);
} else {
EMIT_NEW_LOCLOAD (cfg, ins, n);
}
*sp++ = ins;
break;
case MONO_CEE_STLOC_0:
case MONO_CEE_STLOC_1:
case MONO_CEE_STLOC_2:
case MONO_CEE_STLOC_3:
case MONO_CEE_STLOC_S:
case MONO_CEE_STLOC:
CHECK_LOCAL (n);
--sp;
*sp = convert_value (cfg, header->locals [n], *sp);
if (!dont_verify_stloc && target_type_is_incompatible (cfg, header->locals [n], *sp))
UNVERIFIED;
emit_stloc_ir (cfg, sp, header, n);
inline_costs += 1;
break;
case MONO_CEE_LDARGA_S:
case MONO_CEE_LDARGA:
CHECK_ARG (n);
NEW_ARGLOADA (cfg, ins, n);
MONO_ADD_INS (cfg->cbb, ins);
*sp++ = ins;
break;
case MONO_CEE_STARG_S:
case MONO_CEE_STARG:
--sp;
CHECK_ARG (n);
*sp = convert_value (cfg, param_types [n], *sp);
if (!dont_verify_stloc && target_type_is_incompatible (cfg, param_types [n], *sp))
UNVERIFIED;
emit_starg_ir (cfg, sp, n);
break;
case MONO_CEE_LDLOCA:
case MONO_CEE_LDLOCA_S: {
guchar *tmp_ip;
CHECK_LOCAL (n);
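/* A ldloca followed by initobj can be combined into a single initialization; try that optimization first. */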
if ((tmp_ip = emit_optimized_ldloca_ir (cfg, next_ip, end, n))) {
next_ip = tmp_ip;
il_op = MONO_CEE_INITOBJ;
inline_costs += 1;
break;
}
ins_has_side_effect = FALSE;
EMIT_NEW_LOCLOADA (cfg, ins, n);
*sp++ = ins;
break;
}
case MONO_CEE_LDNULL:
EMIT_NEW_PCONST (cfg, ins, NULL);
ins->type = STACK_OBJ;
*sp++ = ins;
break;
case MONO_CEE_LDC_I4_M1:
case MONO_CEE_LDC_I4_0:
case MONO_CEE_LDC_I4_1:
case MONO_CEE_LDC_I4_2:
case MONO_CEE_LDC_I4_3:
case MONO_CEE_LDC_I4_4:
case MONO_CEE_LDC_I4_5:
case MONO_CEE_LDC_I4_6:
case MONO_CEE_LDC_I4_7:
case MONO_CEE_LDC_I4_8:
case MONO_CEE_LDC_I4_S:
case MONO_CEE_LDC_I4:
EMIT_NEW_ICONST (cfg, ins, n);
*sp++ = ins;
break;
case MONO_CEE_LDC_I8:
MONO_INST_NEW (cfg, ins, OP_I8CONST);
ins->type = STACK_I8;
ins->dreg = alloc_dreg (cfg, STACK_I8);
ins->inst_l = parameter.i64;
MONO_ADD_INS (cfg->cbb, ins);
*sp++ = ins;
break;
case MONO_CEE_LDC_R4: {
float *f;
gboolean use_aotconst = FALSE;
#ifdef TARGET_POWERPC
/* FIXME: Clean this up */
if (cfg->compile_aot)
use_aotconst = TRUE;
#endif
/* FIXME: we should really allocate this only late in the compilation process */
f = (float *)mono_mem_manager_alloc (cfg->mem_manager, sizeof (float));
if (use_aotconst) {
MonoInst *cons;
int dreg;
EMIT_NEW_AOTCONST (cfg, cons, MONO_PATCH_INFO_R4, f);
dreg = alloc_freg (cfg);
EMIT_NEW_LOAD_MEMBASE (cfg, ins, OP_LOADR4_MEMBASE, dreg, cons->dreg, 0);
ins->type = cfg->r4_stack_type;
} else {
MONO_INST_NEW (cfg, ins, OP_R4CONST);
ins->type = cfg->r4_stack_type;
ins->dreg = alloc_dreg (cfg, STACK_R8);
ins->inst_p0 = f;
MONO_ADD_INS (cfg->cbb, ins);
}
*f = parameter.f;
*sp++ = ins;
break;
}
case MONO_CEE_LDC_R8: {
double *d;
gboolean use_aotconst = FALSE;
#ifdef TARGET_POWERPC
/* FIXME: Clean this up */
if (cfg->compile_aot)
use_aotconst = TRUE;
#endif
/* FIXME: we should really allocate this only late in the compilation process */
d = (double *)mono_mem_manager_alloc (cfg->mem_manager, sizeof (double));
if (use_aotconst) {
MonoInst *cons;
int dreg;
EMIT_NEW_AOTCONST (cfg, cons, MONO_PATCH_INFO_R8, d);
dreg = alloc_freg (cfg);
EMIT_NEW_LOAD_MEMBASE (cfg, ins, OP_LOADR8_MEMBASE, dreg, cons->dreg, 0);
ins->type = STACK_R8;
} else {
MONO_INST_NEW (cfg, ins, OP_R8CONST);
ins->type = STACK_R8;
ins->dreg = alloc_dreg (cfg, STACK_R8);
ins->inst_p0 = d;
MONO_ADD_INS (cfg->cbb, ins);
}
*d = parameter.d;
*sp++ = ins;
break;
}
case MONO_CEE_DUP: {
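/* dup is implemented by storing the value into a new temporary and loading it back twice. */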
MonoInst *temp, *store;
MonoClass *klass;
sp--;
ins = *sp;
klass = ins->klass;
temp = mono_compile_create_var (cfg, type_from_stack_type (ins), OP_LOCAL);
EMIT_NEW_TEMPSTORE (cfg, store, temp->inst_c0, ins);
EMIT_NEW_TEMPLOAD (cfg, ins, temp->inst_c0);
ins->klass = klass;
*sp++ = ins;
EMIT_NEW_TEMPLOAD (cfg, ins, temp->inst_c0);
ins->klass = klass;
*sp++ = ins;
inline_costs += 2;
break;
}
case MONO_CEE_POP:
--sp;
#ifdef TARGET_X86
if (sp [0]->type == STACK_R8)
/* we need to pop the value from the x86 FP stack */
MONO_EMIT_NEW_UNALU (cfg, OP_X86_FPOP, -1, sp [0]->dreg);
#endif
break;
case MONO_CEE_JMP: {
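/* jmp transfers control to the target method, passing along the current method's arguments. */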
MonoCallInst *call;
int i, n;
INLINE_FAILURE ("jmp");
GSHAREDVT_FAILURE (il_op);
if (stack_start != sp)
UNVERIFIED;
/* FIXME: check the signature matches */
cmethod = mini_get_method (cfg, method, token, NULL, generic_context);
CHECK_CFG_ERROR;
if (cfg->gshared && mono_method_check_context_used (cmethod))
GENERIC_SHARING_FAILURE (CEE_JMP);
mini_profiler_emit_tail_call (cfg, cmethod);
fsig = mono_method_signature_internal (cmethod);
n = fsig->param_count + fsig->hasthis;
if (cfg->llvm_only) {
MonoInst **args;
args = (MonoInst **)mono_mempool_alloc (cfg->mempool, sizeof (MonoInst*) * n);
for (i = 0; i < n; ++i)
EMIT_NEW_ARGLOAD (cfg, args [i], i);
ins = mini_emit_method_call_full (cfg, cmethod, fsig, TRUE, args, NULL, NULL, NULL);
/*
* The code in mono-basic-block.c treats the rest of the code as dead, but we
* have to emit a normal return since llvm expects it.
*/
if (cfg->ret)
emit_setret (cfg, ins);
MONO_INST_NEW (cfg, ins, OP_BR);
ins->inst_target_bb = end_bblock;
MONO_ADD_INS (cfg->cbb, ins);
link_bblock (cfg, cfg->cbb, end_bblock);
break;
} else {
/* Handle tailcalls similarly to calls */
DISABLE_AOT (cfg);
mini_emit_tailcall_parameters (cfg, fsig);
MONO_INST_NEW_CALL (cfg, call, OP_TAILCALL);
call->method = cmethod;
// FIXME Other initialization of the tailcall field occurs after
// it is used. So this is the only "real" use and needs more attention.
call->tailcall = TRUE;
call->signature = fsig;
call->args = (MonoInst **)mono_mempool_alloc (cfg->mempool, sizeof (MonoInst*) * n);
call->inst.inst_p0 = cmethod;
for (i = 0; i < n; ++i)
EMIT_NEW_ARGLOAD (cfg, call->args [i], i);
if (mini_type_is_vtype (mini_get_underlying_type (call->signature->ret)))
call->vret_var = cfg->vret_addr;
mono_arch_emit_call (cfg, call);
cfg->param_area = MAX(cfg->param_area, call->stack_usage);
MONO_ADD_INS (cfg->cbb, (MonoInst*)call);
}
start_new_bblock = 1;
break;
}
case MONO_CEE_CALLI: {
// FIXME tail.calli is problematic because the this pointer's type
// is not in the signature, and we cannot check for a byref valuetype.
MonoInst *addr;
MonoInst *callee = NULL;
// Variables shared by CEE_CALLI and CEE_CALL/CEE_CALLVIRT.
common_call = TRUE; // i.e. skip_ret/push_res/seq_point logic
cmethod = NULL;
gboolean const inst_tailcall = G_UNLIKELY (debug_tailcall_try_all
? (next_ip < end && next_ip [0] == CEE_RET)
: ((ins_flag & MONO_INST_TAILCALL) != 0));
ins = NULL;
//GSHAREDVT_FAILURE (il_op);
CHECK_STACK (1);
--sp;
addr = *sp;
g_assert (addr);
fsig = mini_get_signature (method, token, generic_context, cfg->error);
CHECK_CFG_ERROR;
if (method->dynamic && fsig->pinvoke) {
MonoInst *args [3];
/*
* This is a call through a function pointer using a pinvoke
* signature. Have to create a wrapper and call that instead.
* FIXME: This is very slow, need to create a wrapper at JIT time
* instead based on the signature.
*/
EMIT_NEW_IMAGECONST (cfg, args [0], ((MonoDynamicMethod*)method)->assembly->image);
EMIT_NEW_PCONST (cfg, args [1], fsig);
args [2] = addr;
// FIXME tailcall?
addr = mono_emit_jit_icall (cfg, mono_get_native_calli_wrapper, args);
}
if (!method->dynamic && fsig->pinvoke &&
!method->wrapper_type) {
/* MONO_WRAPPER_DYNAMIC_METHOD dynamic method handled above in the
method->dynamic case; for other wrapper types assume the code knows
what it's doing and added its own GC transitions */
gboolean skip_gc_trans = fsig->suppress_gc_transition;
if (!skip_gc_trans) {
#if 0
fprintf (stderr, "generating wrapper for calli in method %s with wrapper type %s\n", method->name, mono_wrapper_type_to_str (method->wrapper_type));
#endif
/* Call the wrapper that will do the GC transition instead */
MonoMethod *wrapper = mono_marshal_get_native_func_wrapper_indirect (method->klass, fsig, cfg->compile_aot);
fsig = mono_method_signature_internal (wrapper);
n = fsig->param_count - 1; /* wrapper has extra fnptr param */
CHECK_STACK (n);
/* move the args to allow room for 'this' in the first position */
while (n--) {
--sp;
sp [1] = sp [0];
}
sp[0] = addr; /* n+1 args, first arg is the address of the indirect method to call */
g_assert (!fsig->hasthis && !fsig->pinvoke);
ins = mono_emit_method_call (cfg, wrapper, /*args*/sp, NULL);
goto calli_end;
}
}
n = fsig->param_count + fsig->hasthis;
CHECK_STACK (n);
//g_assert (!virtual_ || fsig->hasthis);
sp -= n;
if (!(cfg->method->wrapper_type && cfg->method->wrapper_type != MONO_WRAPPER_DYNAMIC_METHOD) && check_call_signature (cfg, fsig, sp)) {
if (break_on_unverified ())
check_call_signature (cfg, fsig, sp); // Again, step through it.
UNVERIFIED;
}
inline_costs += CALL_COST * MIN(10, num_calls++);
/*
* Making generic calls out of gsharedvt methods.
* This needs to be used for all generic calls, not just ones with a gsharedvt signature, to avoid
* patching gshared method addresses into a gsharedvt method.
*/
if (cfg->gsharedvt && mini_is_gsharedvt_signature (fsig)) {
/*
* We pass the address to the gsharedvt trampoline in the rgctx reg
*/
callee = addr;
g_assert (addr); // Doubles as boolean after tailcall check.
}
inst_tailcall && is_supported_tailcall (cfg, ip, method, NULL, fsig,
FALSE/*virtual irrelevant*/, addr != NULL, &tailcall);
if (save_last_error)
mono_emit_jit_icall (cfg, mono_marshal_clear_last_error, NULL);
if (callee) {
if (method->wrapper_type != MONO_WRAPPER_DELEGATE_INVOKE)
/* Not tested */
GSHAREDVT_FAILURE (il_op);
if (cfg->llvm_only)
// FIXME:
GSHAREDVT_FAILURE (il_op);
addr = emit_get_rgctx_sig (cfg, context_used, fsig, MONO_RGCTX_INFO_SIG_GSHAREDVT_OUT_TRAMPOLINE_CALLI);
ins = (MonoInst*)mini_emit_calli_full (cfg, fsig, sp, addr, NULL, callee, tailcall);
goto calli_end;
}
/* Prevent inlining of methods with indirect calls */
INLINE_FAILURE ("indirect call");
if (addr->opcode == OP_PCONST || addr->opcode == OP_AOTCONST || addr->opcode == OP_GOT_ENTRY) {
MonoJumpInfoType info_type;
gpointer info_data;
/*
* Instead of emitting an indirect call, emit a direct call
* with the contents of the aotconst as the patch info.
*/
if (addr->opcode == OP_PCONST || addr->opcode == OP_AOTCONST) {
info_type = (MonoJumpInfoType)addr->inst_c1;
info_data = addr->inst_p0;
} else {
info_type = (MonoJumpInfoType)addr->inst_right->inst_c1;
info_data = addr->inst_right->inst_left;
}
if (info_type == MONO_PATCH_INFO_ICALL_ADDR) {
// non-JIT icall, mostly builtin, but also user-extensible
tailcall = FALSE;
ins = (MonoInst*)mini_emit_abs_call (cfg, MONO_PATCH_INFO_ICALL_ADDR_CALL, info_data, fsig, sp);
NULLIFY_INS (addr);
goto calli_end;
} else if (info_type == MONO_PATCH_INFO_JIT_ICALL_ADDR
|| info_type == MONO_PATCH_INFO_SPECIFIC_TRAMPOLINE_LAZY_FETCH_ADDR) {
tailcall = FALSE;
ins = (MonoInst*)mini_emit_abs_call (cfg, info_type, info_data, fsig, sp);
NULLIFY_INS (addr);
goto calli_end;
}
}
if (cfg->llvm_only && !(cfg->method->wrapper_type && cfg->method->wrapper_type != MONO_WRAPPER_DYNAMIC_METHOD))
ins = mini_emit_llvmonly_calli (cfg, fsig, sp, addr);
else
ins = (MonoInst*)mini_emit_calli_full (cfg, fsig, sp, addr, NULL, NULL, tailcall);
goto calli_end;
}
case MONO_CEE_CALL:
case MONO_CEE_CALLVIRT: {
MonoInst *addr; addr = NULL;
int array_rank; array_rank = 0;
gboolean virtual_; virtual_ = il_op == MONO_CEE_CALLVIRT;
gboolean pass_imt_from_rgctx; pass_imt_from_rgctx = FALSE;
MonoInst *imt_arg; imt_arg = NULL;
gboolean pass_vtable; pass_vtable = FALSE;
gboolean pass_mrgctx; pass_mrgctx = FALSE;
MonoInst *vtable_arg; vtable_arg = NULL;
gboolean check_this; check_this = FALSE;
gboolean delegate_invoke; delegate_invoke = FALSE;
gboolean direct_icall; direct_icall = FALSE;
gboolean tailcall_calli; tailcall_calli = FALSE;
gboolean noreturn; noreturn = FALSE;
gboolean gshared_static_virtual; gshared_static_virtual = FALSE;
#ifdef TARGET_WASM
gboolean needs_stack_walk; needs_stack_walk = FALSE;
#endif
// Variables shared by CEE_CALLI and CEE_CALL/CEE_CALLVIRT.
common_call = FALSE;
// variables to help in assertions
gboolean called_is_supported_tailcall; called_is_supported_tailcall = FALSE;
MonoMethod *tailcall_method; tailcall_method = NULL;
MonoMethod *tailcall_cmethod; tailcall_cmethod = NULL;
MonoMethodSignature *tailcall_fsig; tailcall_fsig = NULL;
gboolean tailcall_virtual; tailcall_virtual = FALSE;
gboolean tailcall_extra_arg; tailcall_extra_arg = FALSE;
gboolean inst_tailcall; inst_tailcall = G_UNLIKELY (debug_tailcall_try_all
? (next_ip < end && next_ip [0] == CEE_RET)
: ((ins_flag & MONO_INST_TAILCALL) != 0));
ins = NULL;
/* Used to pass arguments to called functions */
HandleCallData cdata;
memset (&cdata, 0, sizeof (HandleCallData));
cmethod = mini_get_method (cfg, method, token, NULL, generic_context);
CHECK_CFG_ERROR;
if (cfg->verbose_level > 3)
printf ("cmethod = %s\n", mono_method_get_full_name (cmethod));
MonoMethod *cil_method; cil_method = cmethod;
if (constrained_class) {
if (m_method_is_static (cil_method) && mini_class_check_context_used (cfg, constrained_class)) {
/* get_constrained_method () doesn't work on the gparams used by generic sharing */
// FIXME: Other configurations
//if (!cfg->gsharedvt)
// GENERIC_SHARING_FAILURE (CEE_CALL);
gshared_static_virtual = TRUE;
} else {
cmethod = get_constrained_method (cfg, image, token, cil_method, constrained_class, generic_context);
CHECK_CFG_ERROR;
if (m_class_is_enumtype (constrained_class) && !strcmp (cmethod->name, "GetHashCode")) {
/* Use the corresponding method from the base type to avoid boxing */
MonoType *base_type = mono_class_enum_basetype_internal (constrained_class);
g_assert (base_type);
constrained_class = mono_class_from_mono_type_internal (base_type);
cmethod = get_method_nofail (constrained_class, cmethod->name, 0, 0);
g_assert (cmethod);
}
}
}
if (!dont_verify && !cfg->skip_visibility) {
MonoMethod *target_method = cil_method;
if (method->is_inflated) {
MonoGenericContainer *container = mono_method_get_generic_container(method_definition);
MonoGenericContext *context = (container != NULL ? &container->context : NULL);
target_method = mini_get_method_allow_open (method, token, NULL, context, cfg->error);
CHECK_CFG_ERROR;
}
if (!mono_method_can_access_method (method_definition, target_method) &&
!mono_method_can_access_method (method, cil_method))
emit_method_access_failure (cfg, method, cil_method);
}
if (cfg->llvm_only && cmethod && method_needs_stack_walk (cfg, cmethod)) {
if (cfg->interp && !cfg->interp_entry_only) {
/* Use the interpreter instead */
cfg->exception_message = g_strdup ("stack walk");
cfg->disable_llvm = TRUE;
}
#ifdef TARGET_WASM
else {
needs_stack_walk = TRUE;
}
#endif
}
if (!virtual_ && (cmethod->flags & METHOD_ATTRIBUTE_ABSTRACT) && !gshared_static_virtual) {
if (!mono_class_is_interface (method->klass))
emit_bad_image_failure (cfg, method, cil_method);
else
virtual_ = TRUE;
}
if (!m_class_is_inited (cmethod->klass))
if (!mono_class_init_internal (cmethod->klass))
TYPE_LOAD_ERROR (cmethod->klass);
fsig = mono_method_signature_internal (cmethod);
if (!fsig)
LOAD_ERROR;
if (cmethod->iflags & METHOD_IMPL_ATTRIBUTE_INTERNAL_CALL &&
mini_class_is_system_array (cmethod->klass)) {
array_rank = m_class_get_rank (cmethod->klass);
} else if ((cmethod->iflags & METHOD_IMPL_ATTRIBUTE_INTERNAL_CALL) && direct_icalls_enabled (cfg, cmethod)) {
direct_icall = TRUE;
} else if (fsig->pinvoke) {
if (cmethod->flags & METHOD_ATTRIBUTE_PINVOKE_IMPL) {
/*
* Avoid calling mono_marshal_get_native_wrapper () too early, it might call managed
* callbacks on netcore.
*/
fsig = mono_metadata_signature_dup_mempool (cfg->mempool, fsig);
fsig->pinvoke = FALSE;
} else {
MonoMethod *wrapper = mono_marshal_get_native_wrapper (cmethod, TRUE, cfg->compile_aot);
fsig = mono_method_signature_internal (wrapper);
}
} else if (constrained_class) {
} else {
fsig = mono_method_get_signature_checked (cmethod, image, token, generic_context, cfg->error);
CHECK_CFG_ERROR;
}
if (cfg->llvm_only && !cfg->method->wrapper_type && (!cmethod || cmethod->is_inflated))
cfg->signatures = g_slist_prepend_mempool (cfg->mempool, cfg->signatures, fsig);
/* See code below */
if (cmethod->klass == mono_defaults.monitor_class && !strcmp (cmethod->name, "Enter") && mono_method_signature_internal (cmethod)->param_count == 1) {
MonoBasicBlock *tbb;
GET_BBLOCK (cfg, tbb, next_ip);
if (tbb->try_start && MONO_REGION_FLAGS(tbb->region) == MONO_EXCEPTION_CLAUSE_FINALLY) {
/*
* We want to extend the try block to cover the call, but we can't do it if the
* call is made directly, since it's followed by an exception check.
*/
direct_icall = FALSE;
}
}
mono_save_token_info (cfg, image, token, cil_method);
if (!(seq_point_locs && mono_bitset_test_fast (seq_point_locs, next_ip - header->code)))
need_seq_point = TRUE;
/* Don't support calls made using type arguments for now */
/*
if (cfg->gsharedvt) {
if (mini_is_gsharedvt_signature (fsig))
GSHAREDVT_FAILURE (il_op);
}
*/
if (cmethod->string_ctor && method->wrapper_type != MONO_WRAPPER_RUNTIME_INVOKE)
g_assert_not_reached ();
n = fsig->param_count + fsig->hasthis;
if (!cfg->gshared && mono_class_is_gtd (cmethod->klass))
UNVERIFIED;
if (!cfg->gshared)
g_assert (!mono_method_check_context_used (cmethod));
CHECK_STACK (n);
//g_assert (!virtual_ || fsig->hasthis);
sp -= n;
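/* If the receiver's exact type is known (OP_TYPED_OBJREF), devirtualize the call. */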
if (virtual_ && cmethod && sp [0] && sp [0]->opcode == OP_TYPED_OBJREF) {
ERROR_DECL (error);
MonoMethod *new_cmethod = mono_class_get_virtual_method (sp [0]->klass, cmethod, error);
if (is_ok (error)) {
cmethod = new_cmethod;
virtual_ = FALSE;
} else {
mono_error_cleanup (error);
}
}
if (cmethod && method_does_not_return (cmethod)) {
cfg->cbb->out_of_line = TRUE;
noreturn = TRUE;
}
cdata.method = method;
cdata.inst_tailcall = inst_tailcall;
/*
* We have the `constrained.' prefix opcode.
*/
if (constrained_class) {
ins = handle_constrained_call (cfg, cmethod, fsig, constrained_class, sp, &cdata, &cmethod, &virtual_, &emit_widen);
CHECK_CFG_EXCEPTION;
if (!gshared_static_virtual)
constrained_class = NULL;
if (ins)
goto call_end;
}
for (int i = 0; i < fsig->param_count; ++i)
sp [i + fsig->hasthis] = convert_value (cfg, fsig->params [i], sp [i + fsig->hasthis]);
if (check_call_signature (cfg, fsig, sp)) {
if (break_on_unverified ())
check_call_signature (cfg, fsig, sp); // Again, step through it.
UNVERIFIED;
}
if ((m_class_get_parent (cmethod->klass) == mono_defaults.multicastdelegate_class) && !strcmp (cmethod->name, "Invoke"))
delegate_invoke = TRUE;
/*
* Implement a workaround for the inherent races involved in locking:
* Monitor.Enter ()
* try {
* } finally {
* Monitor.Exit ()
* }
* If a thread abort happens between the call to Monitor.Enter () and the start of the
* try block, the Exit () won't be executed, see:
* http://www.bluebytesoftware.com/blog/2007/01/30/MonitorEnterThreadAbortsAndOrphanedLocks.aspx
* To work around this, we extend such try blocks to include the last x bytes
* of the Monitor.Enter () call.
*/
if (cmethod->klass == mono_defaults.monitor_class && !strcmp (cmethod->name, "Enter") && mono_method_signature_internal (cmethod)->param_count == 1) {
MonoBasicBlock *tbb;
GET_BBLOCK (cfg, tbb, next_ip);
/*
* Only extend try blocks with a finally, to avoid catching exceptions thrown
* from Monitor.Enter like ArgumentNullException.
*/
if (tbb->try_start && MONO_REGION_FLAGS(tbb->region) == MONO_EXCEPTION_CLAUSE_FINALLY) {
/* Mark this bblock as needing to be extended */
tbb->extend_try_block = TRUE;
}
}
/* Conversion to a JIT intrinsic */
gboolean ins_type_initialized;
if ((ins = mini_emit_inst_for_method (cfg, cmethod, fsig, sp, &ins_type_initialized))) {
if (!MONO_TYPE_IS_VOID (fsig->ret)) {
if (!ins_type_initialized)
mini_type_to_eval_stack_type ((cfg), fsig->ret, ins);
emit_widen = FALSE;
}
// FIXME This is only missed if in fact the intrinsic involves a call.
if (inst_tailcall) // FIXME
mono_tailcall_print ("missed tailcall intrins %s -> %s\n", method->name, cmethod->name);
goto call_end;
}
CHECK_CFG_ERROR;
/*
* If the callee is a shared method, then its static cctor
* might not get called after the call was patched.
*/
if (cfg->gshared && cmethod->klass != method->klass && mono_class_is_ginst (cmethod->klass) && mono_method_is_generic_sharable (cmethod, TRUE) && mono_class_needs_cctor_run (cmethod->klass, method)) {
emit_class_init (cfg, cmethod->klass);
CHECK_TYPELOAD (cmethod->klass);
}
/* Inlining */
if ((cfg->opt & MONO_OPT_INLINE) && !inst_tailcall &&
(!virtual_ || !(cmethod->flags & METHOD_ATTRIBUTE_VIRTUAL) || MONO_METHOD_IS_FINAL (cmethod)) &&
mono_method_check_inlining (cfg, cmethod)) {
int costs;
gboolean always = FALSE;
gboolean is_empty = FALSE;
if (cmethod->iflags & METHOD_IMPL_ATTRIBUTE_INTERNAL_CALL) {
/* Prevent inlining of methods that call wrappers */
INLINE_FAILURE ("wrapper call");
// FIXME? Does this write to cmethod impact tailcall_supported? Probably not.
// Neither pinvoke nor icall is likely to be tailcalled.
cmethod = mono_marshal_get_native_wrapper (cmethod, TRUE, FALSE);
always = TRUE;
}
costs = inline_method (cfg, cmethod, fsig, sp, ip, cfg->real_offset, always, &is_empty);
if (costs) {
cfg->real_offset += 5;
if (!MONO_TYPE_IS_VOID (fsig->ret))
/* *sp is already set by inline_method */
ins = *sp;
inline_costs += costs;
// FIXME This is missed if the inlinee contains tail calls that
// would work, but not once inlined into caller.
// This match/mismatch could be a factor in inlining:
// i.e. do not inline if it hurts tailcalls; do inline
// if it helps and/or is neutral, and helps performance
// by the usual heuristics.
// Note that inlining will expose multiple tailcall opportunities
// so the tradeoff is not obvious. If we can tailcall anything
// like desktop, then this factor mostly falls away, except
// that inlining can affect tailcall performance due to
// signature match/mismatch.
if (inst_tailcall) // FIXME
mono_tailcall_print ("missed tailcall inline %s -> %s\n", method->name, cmethod->name);
if (is_empty)
ins_has_side_effect = FALSE;
goto call_end;
}
}
check_method_sharing (cfg, cmethod, &pass_vtable, &pass_mrgctx);
if (cfg->gshared) {
MonoGenericContext *cmethod_context = mono_method_get_context (cmethod);
context_used = mini_method_check_context_used (cfg, cmethod);
if (!context_used && gshared_static_virtual)
context_used = mini_class_check_context_used (cfg, constrained_class);
if (context_used && mono_class_is_interface (cmethod->klass) && !m_method_is_static (cmethod)) {
/* Generic method interface
calls are resolved via a
helper function and don't
need an imt. */
if (!cmethod_context || !cmethod_context->method_inst)
pass_imt_from_rgctx = TRUE;
}
/*
* If a shared method calls another
* shared method then the caller must
* have a generic sharing context
* because the magic trampoline
* requires it. FIXME: We shouldn't
* have to force the vtable/mrgctx
* variable here. Instead there
* should be a flag in the cfg to
* request a generic sharing context.
*/
if (context_used &&
((cfg->method->flags & METHOD_ATTRIBUTE_STATIC) || m_class_is_valuetype (cfg->method->klass)))
mono_get_vtable_var (cfg);
}
if (pass_vtable) {
if (context_used) {
vtable_arg = mini_emit_get_rgctx_klass (cfg, context_used, cmethod->klass, MONO_RGCTX_INFO_VTABLE);
} else {
MonoVTable *vtable = mono_class_vtable_checked (cmethod->klass, cfg->error);
CHECK_CFG_ERROR;
CHECK_TYPELOAD (cmethod->klass);
EMIT_NEW_VTABLECONST (cfg, vtable_arg, vtable);
}
}
if (pass_mrgctx) {
g_assert (!vtable_arg);
if (!cfg->compile_aot) {
/*
* emit_get_rgctx_method () calls mono_class_vtable () so check
* for type load errors before.
*/
mono_class_setup_vtable (cmethod->klass);
CHECK_TYPELOAD (cmethod->klass);
}
vtable_arg = emit_get_rgctx_method (cfg, context_used, cmethod, MONO_RGCTX_INFO_METHOD_RGCTX);
if ((!(cmethod->flags & METHOD_ATTRIBUTE_VIRTUAL) || MONO_METHOD_IS_FINAL (cmethod))) {
if (virtual_)
check_this = TRUE;
virtual_ = FALSE;
}
}
if (pass_imt_from_rgctx) {
g_assert (!pass_vtable);
imt_arg = emit_get_rgctx_method (cfg, context_used,
cmethod, MONO_RGCTX_INFO_METHOD);
g_assert (imt_arg);
}
if (check_this)
MONO_EMIT_NEW_CHECK_THIS (cfg, sp [0]->dreg);
/* Calling virtual generic methods */
// These temporaries help detangle "pure" computation of
// inputs to is_supported_tailcall from side effects, so that
// is_supported_tailcall can be computed just once.
gboolean virtual_generic; virtual_generic = FALSE;
gboolean virtual_generic_imt; virtual_generic_imt = FALSE;
if (virtual_ && (cmethod->flags & METHOD_ATTRIBUTE_VIRTUAL) &&
!MONO_METHOD_IS_FINAL (cmethod) &&
fsig->generic_param_count &&
!(cfg->gsharedvt && mini_is_gsharedvt_signature (fsig)) &&
!cfg->llvm_only) {
g_assert (fsig->is_inflated);
virtual_generic = TRUE;
/* Prevent inlining of methods that contain indirect calls */
INLINE_FAILURE ("virtual generic call");
if (cfg->gsharedvt && mini_is_gsharedvt_signature (fsig))
GSHAREDVT_FAILURE (il_op);
if (cfg->backend->have_generalized_imt_trampoline && cfg->backend->gshared_supported && cmethod->wrapper_type == MONO_WRAPPER_NONE) {
virtual_generic_imt = TRUE;
g_assert (!imt_arg);
if (!context_used)
g_assert (cmethod->is_inflated);
imt_arg = emit_get_rgctx_method (cfg, context_used, cmethod, MONO_RGCTX_INFO_METHOD);
g_assert (imt_arg);
virtual_ = TRUE;
vtable_arg = NULL;
}
}
// Capture some intent before computing tailcall.
gboolean make_generic_call_out_of_gsharedvt_method;
gboolean will_have_imt_arg;
make_generic_call_out_of_gsharedvt_method = FALSE;
will_have_imt_arg = FALSE;
/*
* Making generic calls out of gsharedvt methods.
* This needs to be used for all generic calls, not just ones with a gsharedvt signature, to avoid
* patching gshared method addresses into a gsharedvt method.
*/
if (cfg->gsharedvt && (mini_is_gsharedvt_signature (fsig) || cmethod->is_inflated || mono_class_is_ginst (cmethod->klass)) &&
!(m_class_get_rank (cmethod->klass) && m_class_get_byval_arg (cmethod->klass)->type != MONO_TYPE_SZARRAY) &&
(!(cfg->llvm_only && virtual_ && (cmethod->flags & METHOD_ATTRIBUTE_VIRTUAL)))) {
make_generic_call_out_of_gsharedvt_method = TRUE;
if (virtual_) {
if (fsig->generic_param_count) {
will_have_imt_arg = TRUE;
} else if (mono_class_is_interface (cmethod->klass) && !imt_arg) {
will_have_imt_arg = TRUE;
}
}
}
/* Tail prefix / tailcall optimization */
/* FIXME: Enabling TAILC breaks some inlining/stack trace/etc tests.
Inlining and stack traces are not guaranteed however. */
/* FIXME: runtime generic context pointer for jumps? */
/* FIXME: handle this for generic sharing eventually */
// tailcall means "the backend can and will handle it".
// inst_tailcall means the tail. prefix is present.
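// An extra hidden (vtable/imt) argument restricts which calls the backend can tailcall.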
tailcall_extra_arg = vtable_arg || imt_arg || will_have_imt_arg || mono_class_is_interface (cmethod->klass);
tailcall = inst_tailcall && is_supported_tailcall (cfg, ip, method, cmethod, fsig,
virtual_, tailcall_extra_arg, &tailcall_calli);
// Writes to imt_arg, vtable_arg, virtual_, cmethod, must not occur from here (inputs to is_supported_tailcall).
// Capture values to later assert they don't change.
called_is_supported_tailcall = TRUE;
tailcall_method = method;
tailcall_cmethod = cmethod;
tailcall_fsig = fsig;
tailcall_virtual = virtual_;
if (virtual_generic) {
if (virtual_generic_imt) {
if (tailcall) {
/* Prevent inlining of methods with tailcalls (the call stack would be altered) */
INLINE_FAILURE ("tailcall");
}
common_call = TRUE;
goto call_end;
}
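/*
 * Non-IMT path: look up the inflated method at run time with
 * mono_helper_compile_generic_method (), which returns the code address and
 * the (possibly adjusted) receiver, then call indirectly through it.
 */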
MonoInst *this_temp, *this_arg_temp, *store;
MonoInst *iargs [4];
this_temp = mono_compile_create_var (cfg, type_from_stack_type (sp [0]), OP_LOCAL);
NEW_TEMPSTORE (cfg, store, this_temp->inst_c0, sp [0]);
MONO_ADD_INS (cfg->cbb, store);
/* FIXME: This should be a managed pointer */
this_arg_temp = mono_compile_create_var (cfg, mono_get_int_type (), OP_LOCAL);
EMIT_NEW_TEMPLOAD (cfg, iargs [0], this_temp->inst_c0);
iargs [1] = emit_get_rgctx_method (cfg, context_used, cmethod, MONO_RGCTX_INFO_METHOD);
EMIT_NEW_TEMPLOADA (cfg, iargs [2], this_arg_temp->inst_c0);
addr = mono_emit_jit_icall (cfg, mono_helper_compile_generic_method, iargs);
EMIT_NEW_TEMPLOAD (cfg, sp [0], this_arg_temp->inst_c0);
ins = (MonoInst*)mini_emit_calli (cfg, fsig, sp, addr, NULL, NULL);
if (inst_tailcall) // FIXME
mono_tailcall_print ("missed tailcall virtual generic %s -> %s\n", method->name, cmethod->name);
goto call_end;
}
CHECK_CFG_ERROR;
/* Tail recursion elimination */
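/*
 * e.g. a self-call immediately followed by ret:
 *     call <this method>
 *     ret
 * is rewritten into stores to the argument slots plus a branch back to the
 * start of the method.
 */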
if (((cfg->opt & MONO_OPT_TAILCALL) || inst_tailcall) && il_op == MONO_CEE_CALL && cmethod == method && next_ip < end && next_ip [0] == CEE_RET && !vtable_arg) {
gboolean has_vtargs = FALSE;
int i;
/* Prevent inlining of methods with tailcalls (the call stack would be altered) */
INLINE_FAILURE ("tailcall");
/* keep it simple */
for (i = fsig->param_count - 1; !has_vtargs && i >= 0; i--)
has_vtargs = MONO_TYPE_ISSTRUCT (mono_method_signature_internal (cmethod)->params [i]);
if (!has_vtargs) {
if (need_seq_point) {
emit_seq_point (cfg, method, ip, FALSE, TRUE);
need_seq_point = FALSE;
}
for (i = 0; i < n; ++i)
EMIT_NEW_ARGSTORE (cfg, ins, i, sp [i]);
mini_profiler_emit_tail_call (cfg, cmethod);
MONO_INST_NEW (cfg, ins, OP_BR);
MONO_ADD_INS (cfg->cbb, ins);
tblock = start_bblock->out_bb [0];
link_bblock (cfg, cfg->cbb, tblock);
ins->inst_target_bb = tblock;
start_new_bblock = 1;
/* skip the CEE_RET, too */
if (ip_in_bb (cfg, cfg->cbb, next_ip))
skip_ret = TRUE;
push_res = FALSE;
need_seq_point = FALSE;
goto call_end;
}
}
inline_costs += CALL_COST * MIN(10, num_calls++);
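/* Calls make the method progressively more expensive to inline; the per-call multiplier is capped at 10. */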
/*
* Synchronized wrappers.
* It's hard to determine where to replace a method with its synchronized
* wrapper without causing an infinite recursion. The current solution is
* to add the synchronized wrapper in the trampolines, and to
* change the called method to a dummy wrapper, and resolve that wrapper
* to the real method in mono_jit_compile_method ().
*/
if (cfg->method->wrapper_type == MONO_WRAPPER_SYNCHRONIZED) {
MonoMethod *orig = mono_marshal_method_from_wrapper (cfg->method);
if (cmethod == orig || (cmethod->is_inflated && mono_method_get_declaring_generic_method (cmethod) == orig)) {
// FIXME? Does this write to cmethod impact tailcall_supported? Probably not.
cmethod = mono_marshal_get_synchronized_inner_wrapper (cmethod);
}
}
/*
* Making generic calls out of gsharedvt methods.
* This needs to be used for all generic calls, not just ones with a gsharedvt signature, to avoid
* patching gshared method addresses into a gsharedvt method.
*/
if (make_generic_call_out_of_gsharedvt_method) {
if (virtual_) {
//if (mono_class_is_interface (cmethod->klass))
//GSHAREDVT_FAILURE (il_op);
// disable for possible remoting calls
if (fsig->hasthis && method->klass == mono_defaults.object_class)
GSHAREDVT_FAILURE (il_op);
if (fsig->generic_param_count) {
/* virtual generic call */
g_assert (!imt_arg);
g_assert (will_have_imt_arg);
/* Same as the virtual generic case above */
imt_arg = emit_get_rgctx_method (cfg, context_used,
cmethod, MONO_RGCTX_INFO_METHOD);
g_assert (imt_arg);
} else if (mono_class_is_interface (cmethod->klass) && !imt_arg) {
/* This can happen when we call a fully instantiated iface method */
g_assert (will_have_imt_arg);
imt_arg = emit_get_rgctx_method (cfg, context_used,
cmethod, MONO_RGCTX_INFO_METHOD);
g_assert (imt_arg);
}
/* This is not needed, as the trampoline code will pass one, and it might be passed in the same reg as the imt arg */
vtable_arg = NULL;
}
if ((m_class_get_parent (cmethod->klass) == mono_defaults.multicastdelegate_class) && (!strcmp (cmethod->name, "Invoke")))
keep_this_alive = sp [0];
MonoRgctxInfoType info_type;
if (virtual_ && (cmethod->flags & METHOD_ATTRIBUTE_VIRTUAL))
info_type = MONO_RGCTX_INFO_METHOD_GSHAREDVT_OUT_TRAMPOLINE_VIRT;
else
info_type = MONO_RGCTX_INFO_METHOD_GSHAREDVT_OUT_TRAMPOLINE;
addr = emit_get_rgctx_gsharedvt_call (cfg, context_used, fsig, cmethod, info_type);
if (cfg->llvm_only) {
// FIXME: Avoid initializing vtable_arg
ins = mini_emit_llvmonly_calli (cfg, fsig, sp, addr);
if (inst_tailcall) // FIXME
mono_tailcall_print ("missed tailcall llvmonly gsharedvt %s -> %s\n", method->name, cmethod->name);
} else {
tailcall = tailcall_calli;
ins = (MonoInst*)mini_emit_calli_full (cfg, fsig, sp, addr, imt_arg, vtable_arg, tailcall);
tailcall_remove_ret |= tailcall;
}
goto call_end;
}
/* Generic sharing */
/*
* Calls to generic methods from shared code cannot go through the trampoline infrastructure
* in some cases, because the called method might end up being different on every call.
* Load the called method address from the rgctx and do an indirect call in these cases.
* Use this if the callee is gsharedvt sharable too, since
* at runtime we might find an instantiation so the call cannot
* be patched (the 'no_patch' code path in mini-trampolines.c).
*/
gboolean gshared_indirect;
gshared_indirect = context_used && !imt_arg && !array_rank && !delegate_invoke;
if (gshared_indirect)
gshared_indirect = (!mono_method_is_generic_sharable_full (cmethod, TRUE, FALSE, FALSE) ||
!mono_class_generic_sharing_enabled (cmethod->klass) ||
gshared_static_virtual);
if (gshared_indirect)
gshared_indirect = (!virtual_ || MONO_METHOD_IS_FINAL (cmethod) ||
!(cmethod->flags & METHOD_ATTRIBUTE_VIRTUAL));
if (gshared_indirect) {
INLINE_FAILURE ("gshared");
g_assert (cfg->gshared && cmethod);
g_assert (!addr);
if (fsig->hasthis)
MONO_EMIT_NEW_CHECK_THIS (cfg, sp [0]->dreg);
if (cfg->llvm_only) {
if (cfg->gsharedvt && mini_is_gsharedvt_variable_signature (fsig)) {
/* Handled in handle_constrained_gsharedvt_call () */
g_assert (!gshared_static_virtual);
addr = emit_get_rgctx_method (cfg, context_used, cmethod, MONO_RGCTX_INFO_GSHAREDVT_OUT_WRAPPER);
} else {
if (gshared_static_virtual)
addr = emit_get_rgctx_virt_method (cfg, -1, constrained_class, cmethod, MONO_RGCTX_INFO_VIRT_METHOD_CODE);
else
addr = emit_get_rgctx_method (cfg, context_used, cmethod, MONO_RGCTX_INFO_METHOD_FTNDESC);
}
// FIXME: Avoid initializing imt_arg/vtable_arg
ins = mini_emit_llvmonly_calli (cfg, fsig, sp, addr);
if (inst_tailcall) // FIXME
mono_tailcall_print ("missed tailcall context_used_llvmonly %s -> %s\n", method->name, cmethod->name);
} else {
if (gshared_static_virtual) {
/*
* cmethod is a static interface method, the actual called method at runtime
* needs to be computed using constrained_class and cmethod.
*/
addr = emit_get_rgctx_virt_method (cfg, -1, constrained_class, cmethod, MONO_RGCTX_INFO_VIRT_METHOD_CODE);
} else {
addr = emit_get_rgctx_method (cfg, context_used, cmethod, MONO_RGCTX_INFO_GENERIC_METHOD_CODE);
}
if (inst_tailcall)
mono_tailcall_print ("%s tailcall_calli#2 %s -> %s\n", tailcall_calli ? "making" : "missed", method->name, cmethod->name);
tailcall = tailcall_calli;
ins = (MonoInst*)mini_emit_calli_full (cfg, fsig, sp, addr, imt_arg, vtable_arg, tailcall);
tailcall_remove_ret |= tailcall;
}
goto call_end;
}
/* Direct calls to icalls */
if (direct_icall) {
MonoMethod *wrapper;
int costs;
/* Inline the wrapper */
wrapper = mono_marshal_get_native_wrapper (cmethod, TRUE, cfg->compile_aot);
costs = inline_method (cfg, wrapper, fsig, sp, ip, cfg->real_offset, TRUE, NULL);
g_assert (costs > 0);
cfg->real_offset += 5;
if (!MONO_TYPE_IS_VOID (fsig->ret))
/* *sp is already set by inline_method */
ins = *sp;
inline_costs += costs;
if (inst_tailcall) // FIXME
mono_tailcall_print ("missed tailcall direct_icall %s -> %s\n", method->name, cmethod->name);
goto call_end;
}
/* Array methods */
if (array_rank) {
MonoInst *addr;
if (strcmp (cmethod->name, "Set") == 0) { /* array Set */
MonoInst *val = sp [fsig->param_count];
if (val->type == STACK_OBJ) {
MonoInst *iargs [ ] = { sp [0], val };
mono_emit_jit_icall (cfg, mono_helper_stelem_ref_check, iargs);
}
addr = mini_emit_ldelema_ins (cfg, cmethod, sp, ip, TRUE);
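/* Unless the weak memory model is explicitly enabled, reference stores are published with release semantics. */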
if (!mini_debug_options.weak_memory_model && val->type == STACK_OBJ)
mini_emit_memory_barrier (cfg, MONO_MEMORY_BARRIER_REL);
EMIT_NEW_STORE_MEMBASE_TYPE (cfg, ins, fsig->params [fsig->param_count - 1], addr->dreg, 0, val->dreg);
if (cfg->gen_write_barriers && val->type == STACK_OBJ && !MONO_INS_IS_PCONST_NULL (val))
mini_emit_write_barrier (cfg, addr, val);
if (cfg->gen_write_barriers && mini_is_gsharedvt_klass (cmethod->klass))
GSHAREDVT_FAILURE (il_op);
} else if (strcmp (cmethod->name, "Get") == 0) { /* array Get */
addr = mini_emit_ldelema_ins (cfg, cmethod, sp, ip, FALSE);
EMIT_NEW_LOAD_MEMBASE_TYPE (cfg, ins, fsig->ret, addr->dreg, 0);
} else if (strcmp (cmethod->name, "Address") == 0) { /* array Address */
if (!m_class_is_valuetype (m_class_get_element_class (cmethod->klass)) && !readonly)
mini_emit_check_array_type (cfg, sp [0], cmethod->klass);
CHECK_TYPELOAD (cmethod->klass);
readonly = FALSE;
addr = mini_emit_ldelema_ins (cfg, cmethod, sp, ip, FALSE);
ins = addr;
} else {
g_assert_not_reached ();
}
emit_widen = FALSE;
if (inst_tailcall) // FIXME
mono_tailcall_print ("missed tailcall array_rank %s -> %s\n", method->name, cmethod->name);
goto call_end;
}
ins = mini_redirect_call (cfg, cmethod, fsig, sp, virtual_ ? sp [0] : NULL);
if (ins) {
if (inst_tailcall) // FIXME
mono_tailcall_print ("missed tailcall redirect %s -> %s\n", method->name, cmethod->name);
goto call_end;
}
/* Tail prefix / tailcall optimization */
if (tailcall) {
/* Prevent inlining of methods with tailcalls (the call stack would be altered) */
INLINE_FAILURE ("tailcall");
}
/*
* Virtual calls in llvm-only mode.
*/
if (cfg->llvm_only && virtual_ && cmethod && (cmethod->flags & METHOD_ATTRIBUTE_VIRTUAL)) {
ins = mini_emit_llvmonly_virtual_call (cfg, cmethod, fsig, context_used, sp);
goto call_end;
}
/* Common call */
if (!(cfg->opt & MONO_OPT_AGGRESSIVE_INLINING) && !(method->iflags & METHOD_IMPL_ATTRIBUTE_AGGRESSIVE_INLINING) && !(cmethod->iflags & METHOD_IMPL_ATTRIBUTE_AGGRESSIVE_INLINING) && !method_does_not_return (cmethod))
INLINE_FAILURE ("call");
common_call = TRUE;
#ifdef TARGET_WASM
/* Push an LMF so these frames can be enumerated during stack walks by mono_arch_unwind_frame () */
if (needs_stack_walk && !cfg->deopt) {
MonoInst *method_ins;
int lmf_reg;
emit_push_lmf (cfg);
EMIT_NEW_VARLOADA (cfg, ins, cfg->lmf_var, NULL);
lmf_reg = ins->dreg;
/* The lmf->method field will be used to look up the MonoJitInfo for this method */
method_ins = emit_get_rgctx_method (cfg, mono_method_check_context_used (cfg->method), cfg->method, MONO_RGCTX_INFO_METHOD);
EMIT_NEW_STORE_MEMBASE (cfg, ins, OP_STORE_MEMBASE_REG, lmf_reg, MONO_STRUCT_OFFSET (MonoLMF, method), method_ins->dreg);
}
#endif
call_end:
// Check that the decision to tailcall would not have changed.
g_assert (!called_is_supported_tailcall || tailcall_method == method);
// FIXME? cmethod does change, weaken the assert if we weren't tailcalling anyway.
// If this still fails, restructure the code, or call tailcall_supported again and assert no change.
g_assert (!called_is_supported_tailcall || !tailcall || tailcall_cmethod == cmethod);
g_assert (!called_is_supported_tailcall || tailcall_fsig == fsig);
g_assert (!called_is_supported_tailcall || tailcall_virtual == virtual_);
g_assert (!called_is_supported_tailcall || tailcall_extra_arg == (vtable_arg || imt_arg || will_have_imt_arg || mono_class_is_interface (cmethod->klass)));
if (common_call) // FIXME goto call_end && !common_call often skips tailcall processing.
ins = mini_emit_method_call_full (cfg, cmethod, fsig, tailcall, sp, virtual_ ? sp [0] : NULL,
imt_arg, vtable_arg);
/*
* Handle devirt of some A.B.C calls by replacing the result of A.B with a OP_TYPED_OBJREF instruction, so the .C
* call can be devirtualized above.
*/
if (cmethod)
ins = handle_call_res_devirt (cfg, cmethod, ins);
#ifdef TARGET_WASM
if (common_call && needs_stack_walk && !cfg->deopt)
/* If an exception is thrown, the LMF is popped by a call to mini_llvmonly_pop_lmf () */
emit_pop_lmf (cfg);
#endif
if (noreturn) {
MONO_INST_NEW (cfg, ins, OP_NOT_REACHED);
MONO_ADD_INS (cfg->cbb, ins);
}
calli_end:
if ((tailcall_remove_ret || (common_call && tailcall)) && !cfg->llvm_only) {
link_bblock (cfg, cfg->cbb, end_bblock);
start_new_bblock = 1;
// FIXME: Eliminate unreachable epilogs
/*
* OP_TAILCALL has no return value, so skip the CEE_RET if it is
* only reachable from this call.
*/
GET_BBLOCK (cfg, tblock, next_ip);
if (tblock == cfg->cbb || tblock->in_count == 0)
skip_ret = TRUE;
push_res = FALSE;
need_seq_point = FALSE;
}
if (ins_flag & MONO_INST_TAILCALL)
mini_test_tailcall (cfg, tailcall);
/* End of call, INS should contain the result of the call, if any */
if (push_res && !MONO_TYPE_IS_VOID (fsig->ret)) {
g_assert (ins);
if (emit_widen)
*sp++ = mono_emit_widen_call_res (cfg, ins, fsig);
else
*sp++ = ins;
}
if (save_last_error) {
save_last_error = FALSE;
#ifdef TARGET_WIN32
// Making icalls etc could clobber the value so emit inline code
// to read last error on Windows.
MONO_INST_NEW (cfg, ins, OP_GET_LAST_ERROR);
ins->dreg = alloc_dreg (cfg, STACK_I4);
ins->type = STACK_I4;
MONO_ADD_INS (cfg->cbb, ins);
mono_emit_jit_icall (cfg, mono_marshal_set_last_error_windows, &ins);
#else
mono_emit_jit_icall (cfg, mono_marshal_set_last_error, NULL);
#endif
}
if (keep_this_alive) {
MonoInst *dummy_use;
/* See mini_emit_method_call_full () */
EMIT_NEW_DUMMY_USE (cfg, dummy_use, keep_this_alive);
}
if (cfg->llvm_only && cmethod && method_needs_stack_walk (cfg, cmethod)) {
/*
* Clang can convert these calls to tailcalls which screw up the stack
* walk. This happens even when the -fno-optimize-sibling-calls
* option is passed to clang.
* Work around this by emitting a dummy call.
*/
mono_emit_jit_icall (cfg, mono_dummy_jit_icall, NULL);
}
CHECK_CFG_EXCEPTION;
if (skip_ret) {
// FIXME When not followed by CEE_RET, correct behavior is to raise an exception.
g_assert (next_ip [0] == CEE_RET);
next_ip += 1;
il_op = MonoOpcodeEnum_Invalid; // Call or ret? Unclear.
}
ins_flag = 0;
constrained_class = NULL;
if (need_seq_point) {
// Check if this is a nested call and flag the previous call's sequence point as a nested call, only for non-native methods
if (!(method->flags & METHOD_IMPL_ATTRIBUTE_NATIVE)) {
if (emitted_funccall_seq_point) {
if (cfg->last_seq_point)
cfg->last_seq_point->flags |= MONO_INST_NESTED_CALL;
}
else
emitted_funccall_seq_point = TRUE;
}
emit_seq_point (cfg, method, next_ip, FALSE, TRUE);
}
break;
}
case MONO_CEE_RET:
if (!detached_before_ret)
mini_profiler_emit_leave (cfg, sig->ret->type != MONO_TYPE_VOID ? sp [-1] : NULL);
g_assert (!method_does_not_return (method));
if (cfg->method != method) {
/* return from inlined method */
/*
* If in_count == 0, that means the ret is unreachable due to
* being preceded by a throw. In that case, inline_method () will
* handle setting the return value
* (test case: test_0_inline_throw ()).
*/
if (return_var && cfg->cbb->in_count) {
MonoType *ret_type = mono_method_signature_internal (method)->ret;
MonoInst *store;
CHECK_STACK (1);
--sp;
*sp = convert_value (cfg, ret_type, *sp);
if ((method->wrapper_type == MONO_WRAPPER_DYNAMIC_METHOD || method->wrapper_type == MONO_WRAPPER_NONE) && target_type_is_incompatible (cfg, ret_type, *sp))
UNVERIFIED;
//g_assert (returnvar != -1);
EMIT_NEW_TEMPSTORE (cfg, store, return_var->inst_c0, *sp);
cfg->ret_var_set = TRUE;
}
} else {
if (cfg->lmf_var && cfg->cbb->in_count && (!cfg->llvm_only || cfg->deopt))
emit_pop_lmf (cfg);
if (cfg->ret) {
MonoType *ret_type = mini_get_underlying_type (mono_method_signature_internal (method)->ret);
if (seq_points && !sym_seq_points) {
/*
* Place a seq point here too even though the IL stack is not
* empty, so a step over on
* call <FOO>
* ret
* will work correctly.
*/
NEW_SEQ_POINT (cfg, ins, ip - header->code, TRUE);
MONO_ADD_INS (cfg->cbb, ins);
}
g_assert (!return_var);
CHECK_STACK (1);
--sp;
*sp = convert_value (cfg, ret_type, *sp);
if ((method->wrapper_type == MONO_WRAPPER_DYNAMIC_METHOD || method->wrapper_type == MONO_WRAPPER_NONE) && target_type_is_incompatible (cfg, ret_type, *sp))
UNVERIFIED;
emit_setret (cfg, *sp);
}
}
if (sp != stack_start)
UNVERIFIED;
MONO_INST_NEW (cfg, ins, OP_BR);
ins->inst_target_bb = end_bblock;
MONO_ADD_INS (cfg->cbb, ins);
link_bblock (cfg, cfg->cbb, end_bblock);
start_new_bblock = 1;
break;
case MONO_CEE_BR_S:
MONO_INST_NEW (cfg, ins, OP_BR);
GET_BBLOCK (cfg, tblock, target);
link_bblock (cfg, cfg->cbb, tblock);
ins->inst_target_bb = tblock;
if (sp != stack_start) {
handle_stack_args (cfg, stack_start, sp - stack_start);
sp = stack_start;
CHECK_UNVERIFIABLE (cfg);
}
MONO_ADD_INS (cfg->cbb, ins);
start_new_bblock = 1;
inline_costs += BRANCH_COST;
break;
case MONO_CEE_BEQ_S:
case MONO_CEE_BGE_S:
case MONO_CEE_BGT_S:
case MONO_CEE_BLE_S:
case MONO_CEE_BLT_S:
case MONO_CEE_BNE_UN_S:
case MONO_CEE_BGE_UN_S:
case MONO_CEE_BGT_UN_S:
case MONO_CEE_BLE_UN_S:
case MONO_CEE_BLT_UN_S:
MONO_INST_NEW (cfg, ins, il_op + BIG_BRANCH_OFFSET);
ADD_BINCOND (NULL);
sp = stack_start;
inline_costs += BRANCH_COST;
break;
case MONO_CEE_BR:
MONO_INST_NEW (cfg, ins, OP_BR);
GET_BBLOCK (cfg, tblock, target);
link_bblock (cfg, cfg->cbb, tblock);
ins->inst_target_bb = tblock;
if (sp != stack_start) {
handle_stack_args (cfg, stack_start, sp - stack_start);
sp = stack_start;
CHECK_UNVERIFIABLE (cfg);
}
MONO_ADD_INS (cfg->cbb, ins);
start_new_bblock = 1;
inline_costs += BRANCH_COST;
break;
case MONO_CEE_BRFALSE_S:
case MONO_CEE_BRTRUE_S:
case MONO_CEE_BRFALSE:
case MONO_CEE_BRTRUE: {
MonoInst *cmp;
gboolean is_true = il_op == MONO_CEE_BRTRUE_S || il_op == MONO_CEE_BRTRUE;
if (sp [-1]->type == STACK_VTYPE || sp [-1]->type == STACK_R8)
UNVERIFIED;
sp--;
GET_BBLOCK (cfg, tblock, target);
link_bblock (cfg, cfg->cbb, tblock);
GET_BBLOCK (cfg, tblock, next_ip);
link_bblock (cfg, cfg->cbb, tblock);
if (sp != stack_start) {
handle_stack_args (cfg, stack_start, sp - stack_start);
CHECK_UNVERIFIABLE (cfg);
}
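/* brtrue/brfalse are lowered to a compare against zero plus a conditional branch. */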
MONO_INST_NEW(cfg, cmp, OP_ICOMPARE_IMM);
cmp->sreg1 = sp [0]->dreg;
type_from_op (cfg, cmp, sp [0], NULL);
CHECK_TYPE (cmp);
#if SIZEOF_REGISTER == 4
if (cmp->opcode == OP_LCOMPARE_IMM) {
/* Convert it to OP_LCOMPARE */
MONO_INST_NEW (cfg, ins, OP_I8CONST);
ins->type = STACK_I8;
ins->dreg = alloc_dreg (cfg, STACK_I8);
ins->inst_l = 0;
MONO_ADD_INS (cfg->cbb, ins);
cmp->opcode = OP_LCOMPARE;
cmp->sreg2 = ins->dreg;
}
#endif
MONO_ADD_INS (cfg->cbb, cmp);
MONO_INST_NEW (cfg, ins, is_true ? CEE_BNE_UN : CEE_BEQ);
type_from_op (cfg, ins, sp [0], NULL);
MONO_ADD_INS (cfg->cbb, ins);
ins->inst_many_bb = (MonoBasicBlock **)mono_mempool_alloc (cfg->mempool, sizeof (gpointer) * 2);
GET_BBLOCK (cfg, tblock, target);
ins->inst_true_bb = tblock;
GET_BBLOCK (cfg, tblock, next_ip);
ins->inst_false_bb = tblock;
start_new_bblock = 2;
sp = stack_start;
inline_costs += BRANCH_COST;
break;
}
case MONO_CEE_BEQ:
case MONO_CEE_BGE:
case MONO_CEE_BGT:
case MONO_CEE_BLE:
case MONO_CEE_BLT:
case MONO_CEE_BNE_UN:
case MONO_CEE_BGE_UN:
case MONO_CEE_BGT_UN:
case MONO_CEE_BLE_UN:
case MONO_CEE_BLT_UN:
MONO_INST_NEW (cfg, ins, il_op);
ADD_BINCOND (NULL);
sp = stack_start;
inline_costs += BRANCH_COST;
break;
case MONO_CEE_SWITCH: {
MonoInst *src1;
MonoBasicBlock **targets;
MonoBasicBlock *default_bblock;
MonoJumpInfoBBTable *table;
int offset_reg = alloc_preg (cfg);
int target_reg = alloc_preg (cfg);
int table_reg = alloc_preg (cfg);
int sum_reg = alloc_preg (cfg);
gboolean use_op_switch;
n = read32 (ip + 1);
--sp;
src1 = sp [0];
if ((src1->type != STACK_I4) && (src1->type != STACK_PTR))
UNVERIFIED;
ip += 5;
GET_BBLOCK (cfg, default_bblock, next_ip);
default_bblock->flags |= BB_INDIRECT_JUMP_TARGET;
targets = (MonoBasicBlock **)mono_mempool_alloc (cfg->mempool, sizeof (MonoBasicBlock*) * n);
for (i = 0; i < n; ++i) {
GET_BBLOCK (cfg, tblock, next_ip + (gint32)read32 (ip));
targets [i] = tblock;
targets [i]->flags |= BB_INDIRECT_JUMP_TARGET;
ip += 4;
}
if (sp != stack_start) {
/*
* Link the current bb with the targets as well, so handle_stack_args
* will set their in_stack correctly.
*/
link_bblock (cfg, cfg->cbb, default_bblock);
for (i = 0; i < n; ++i)
link_bblock (cfg, cfg->cbb, targets [i]);
handle_stack_args (cfg, stack_start, sp - stack_start);
sp = stack_start;
CHECK_UNVERIFIABLE (cfg);
/* Undo the links */
mono_unlink_bblock (cfg, cfg->cbb, default_bblock);
for (i = 0; i < n; ++i)
mono_unlink_bblock (cfg, cfg->cbb, targets [i]);
}
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_ICOMPARE_IMM, -1, src1->dreg, n);
MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_IBGE_UN, default_bblock);
for (i = 0; i < n; ++i)
link_bblock (cfg, cfg->cbb, targets [i]);
table = (MonoJumpInfoBBTable *)mono_mempool_alloc (cfg->mempool, sizeof (MonoJumpInfoBBTable));
table->table = targets;
table->table_size = n;
use_op_switch = FALSE;
#ifdef TARGET_ARM
/* ARM implements SWITCH statements differently */
/* FIXME: Make it use the generic implementation */
if (!cfg->compile_aot)
use_op_switch = TRUE;
#endif
if (COMPILE_LLVM (cfg))
use_op_switch = TRUE;
cfg->cbb->has_jump_table = 1;
if (use_op_switch) {
MONO_INST_NEW (cfg, ins, OP_SWITCH);
ins->sreg1 = src1->dreg;
ins->inst_p0 = table;
ins->inst_many_bb = targets;
ins->klass = (MonoClass *)GUINT_TO_POINTER (n);
MONO_ADD_INS (cfg->cbb, ins);
} else {
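/*
 * Lower the switch to an indirect branch: scale the selector to a
 * pointer-sized table offset, add the table address, load the target
 * bblock address and branch to it.
 */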
if (TARGET_SIZEOF_VOID_P == 8)
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_SHL_IMM, offset_reg, src1->dreg, 3);
else
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_SHL_IMM, offset_reg, src1->dreg, 2);
#if SIZEOF_REGISTER == 8
/* The upper word might not be zero, and we add it to a 64 bit address later */
MONO_EMIT_NEW_UNALU (cfg, OP_ZEXT_I4, offset_reg, offset_reg);
#endif
if (cfg->compile_aot) {
MONO_EMIT_NEW_AOTCONST (cfg, table_reg, table, MONO_PATCH_INFO_SWITCH);
} else {
MONO_INST_NEW (cfg, ins, OP_JUMP_TABLE);
ins->inst_c1 = MONO_PATCH_INFO_SWITCH;
ins->inst_p0 = table;
ins->dreg = table_reg;
MONO_ADD_INS (cfg->cbb, ins);
}
/* FIXME: Use load_memindex */
MONO_EMIT_NEW_BIALU (cfg, OP_PADD, sum_reg, table_reg, offset_reg);
MONO_EMIT_NEW_LOAD_MEMBASE (cfg, target_reg, sum_reg, 0);
MONO_EMIT_NEW_UNALU (cfg, OP_BR_REG, -1, target_reg);
}
start_new_bblock = 1;
inline_costs += BRANCH_COST * 2;
break;
}
case MONO_CEE_LDIND_I1:
case MONO_CEE_LDIND_U1:
case MONO_CEE_LDIND_I2:
case MONO_CEE_LDIND_U2:
case MONO_CEE_LDIND_I4:
case MONO_CEE_LDIND_U4:
case MONO_CEE_LDIND_I8:
case MONO_CEE_LDIND_I:
case MONO_CEE_LDIND_R4:
case MONO_CEE_LDIND_R8:
case MONO_CEE_LDIND_REF:
--sp;
if (!(ins_flag & MONO_INST_NONULLCHECK))
MONO_EMIT_NULL_CHECK (cfg, sp [0]->dreg, FALSE);
ins = mini_emit_memory_load (cfg, m_class_get_byval_arg (ldind_to_type (il_op)), sp [0], 0, ins_flag);
*sp++ = ins;
ins_flag = 0;
break;
case MONO_CEE_STIND_REF:
case MONO_CEE_STIND_I1:
case MONO_CEE_STIND_I2:
case MONO_CEE_STIND_I4:
case MONO_CEE_STIND_I8:
case MONO_CEE_STIND_R4:
case MONO_CEE_STIND_R8:
case MONO_CEE_STIND_I: {
sp -= 2;
if (il_op == MONO_CEE_STIND_REF && sp [1]->type != STACK_OBJ) {
/* stind.ref must only be used with object references. */
UNVERIFIED;
}
if (il_op == MONO_CEE_STIND_R4 && sp [1]->type == STACK_R8)
sp [1] = convert_value (cfg, m_class_get_byval_arg (mono_defaults.single_class), sp [1]);
mini_emit_memory_store (cfg, m_class_get_byval_arg (stind_to_type (il_op)), sp [0], sp [1], ins_flag);
ins_flag = 0;
inline_costs += 1;
break;
}
case MONO_CEE_MUL:
MONO_INST_NEW (cfg, ins, il_op);
sp -= 2;
ins->sreg1 = sp [0]->dreg;
ins->sreg2 = sp [1]->dreg;
type_from_op (cfg, ins, sp [0], sp [1]);
CHECK_TYPE (ins);
ins->dreg = alloc_dreg ((cfg), (MonoStackType)(ins)->type);
/* Use the immediate opcodes if possible */
int imm_opcode; imm_opcode = mono_op_to_op_imm_noemul (ins->opcode);
if ((sp [1]->opcode == OP_ICONST) && mono_arch_is_inst_imm (ins->opcode, imm_opcode, sp [1]->inst_c0)) {
if (imm_opcode != -1) {
ins->opcode = imm_opcode;
ins->inst_p1 = (gpointer)(gssize)(sp [1]->inst_c0);
ins->sreg2 = -1;
NULLIFY_INS (sp [1]);
}
}
MONO_ADD_INS ((cfg)->cbb, (ins));
*sp++ = mono_decompose_opcode (cfg, ins);
break;
case MONO_CEE_ADD:
case MONO_CEE_SUB:
case MONO_CEE_DIV:
case MONO_CEE_DIV_UN:
case MONO_CEE_REM:
case MONO_CEE_REM_UN:
case MONO_CEE_AND:
case MONO_CEE_OR:
case MONO_CEE_XOR:
case MONO_CEE_SHL:
case MONO_CEE_SHR:
case MONO_CEE_SHR_UN: {
MONO_INST_NEW (cfg, ins, il_op);
sp -= 2;
ins->sreg1 = sp [0]->dreg;
ins->sreg2 = sp [1]->dreg;
type_from_op (cfg, ins, sp [0], sp [1]);
CHECK_TYPE (ins);
add_widen_op (cfg, ins, &sp [0], &sp [1]);
ins->dreg = alloc_dreg ((cfg), (MonoStackType)(ins)->type);
/* Use the immediate opcodes if possible */
int imm_opcode; imm_opcode = mono_op_to_op_imm_noemul (ins->opcode);
if (((sp [1]->opcode == OP_ICONST) || (sp [1]->opcode == OP_I8CONST)) &&
mono_arch_is_inst_imm (ins->opcode, imm_opcode, sp [1]->opcode == OP_ICONST ? sp [1]->inst_c0 : sp [1]->inst_l)) {
if (imm_opcode != -1) {
ins->opcode = imm_opcode;
if (sp [1]->opcode == OP_I8CONST) {
#if SIZEOF_REGISTER == 8
ins->inst_imm = sp [1]->inst_l;
#else
ins->inst_l = sp [1]->inst_l;
#endif
} else {
ins->inst_imm = (gssize)(sp [1]->inst_c0);
}
ins->sreg2 = -1;
/* Might be followed by an instruction added by add_widen_op */
if (sp [1]->next == NULL)
NULLIFY_INS (sp [1]);
}
}
MONO_ADD_INS ((cfg)->cbb, (ins));
*sp++ = mono_decompose_opcode (cfg, ins);
break;
}
case MONO_CEE_NEG:
case MONO_CEE_NOT:
case MONO_CEE_CONV_I1:
case MONO_CEE_CONV_I2:
case MONO_CEE_CONV_I4:
case MONO_CEE_CONV_R4:
case MONO_CEE_CONV_R8:
case MONO_CEE_CONV_U4:
case MONO_CEE_CONV_I8:
case MONO_CEE_CONV_U8:
case MONO_CEE_CONV_OVF_I8:
case MONO_CEE_CONV_OVF_U8:
case MONO_CEE_CONV_R_UN:
/* Special case this earlier so we have long constants in the IR */
if ((il_op == MONO_CEE_CONV_I8 || il_op == MONO_CEE_CONV_U8) && (sp [-1]->opcode == OP_ICONST)) {
int data = sp [-1]->inst_c0;
sp [-1]->opcode = OP_I8CONST;
sp [-1]->type = STACK_I8;
#if SIZEOF_REGISTER == 8
if (il_op == MONO_CEE_CONV_U8)
sp [-1]->inst_c0 = (guint32)data;
else
sp [-1]->inst_c0 = data;
#else
if (il_op == MONO_CEE_CONV_U8)
sp [-1]->inst_l = (guint32)data;
else
sp [-1]->inst_l = data;
#endif
sp [-1]->dreg = alloc_dreg (cfg, STACK_I8);
}
else {
ADD_UNOP (il_op);
}
break;
case MONO_CEE_CONV_OVF_I4:
case MONO_CEE_CONV_OVF_I1:
case MONO_CEE_CONV_OVF_I2:
case MONO_CEE_CONV_OVF_I:
case MONO_CEE_CONV_OVF_I1_UN:
case MONO_CEE_CONV_OVF_I2_UN:
case MONO_CEE_CONV_OVF_I4_UN:
case MONO_CEE_CONV_OVF_I8_UN:
case MONO_CEE_CONV_OVF_I_UN:
if (sp [-1]->type == STACK_R8 || sp [-1]->type == STACK_R4) {
/* floats are always signed, _UN has no effect */
ADD_UNOP (CEE_CONV_OVF_I8);
if (il_op == MONO_CEE_CONV_OVF_I1_UN)
ADD_UNOP (MONO_CEE_CONV_OVF_I1);
else if (il_op == MONO_CEE_CONV_OVF_I2_UN)
ADD_UNOP (MONO_CEE_CONV_OVF_I2);
else if (il_op == MONO_CEE_CONV_OVF_I4_UN)
ADD_UNOP (MONO_CEE_CONV_OVF_I4);
else if (il_op == MONO_CEE_CONV_OVF_I8_UN)
;
else
ADD_UNOP (il_op);
} else {
ADD_UNOP (il_op);
}
break;
case MONO_CEE_CONV_OVF_U1:
case MONO_CEE_CONV_OVF_U2:
case MONO_CEE_CONV_OVF_U4:
case MONO_CEE_CONV_OVF_U:
case MONO_CEE_CONV_OVF_U1_UN:
case MONO_CEE_CONV_OVF_U2_UN:
case MONO_CEE_CONV_OVF_U4_UN:
case MONO_CEE_CONV_OVF_U8_UN:
case MONO_CEE_CONV_OVF_U_UN:
if (sp [-1]->type == STACK_R8 || sp [-1]->type == STACK_R4) {
/* floats are always signed, _UN has no effect */
ADD_UNOP (CEE_CONV_OVF_U8);
ADD_UNOP (il_op);
} else {
ADD_UNOP (il_op);
}
break;
case MONO_CEE_CONV_U2:
case MONO_CEE_CONV_U1:
case MONO_CEE_CONV_I:
case MONO_CEE_CONV_U:
ADD_UNOP (il_op);
CHECK_CFG_EXCEPTION;
break;
case MONO_CEE_ADD_OVF:
case MONO_CEE_ADD_OVF_UN:
case MONO_CEE_MUL_OVF:
case MONO_CEE_MUL_OVF_UN:
case MONO_CEE_SUB_OVF:
case MONO_CEE_SUB_OVF_UN:
MONO_INST_NEW (cfg, ins, il_op);
sp -= 2;
ins->sreg1 = sp [0]->dreg;
ins->sreg2 = sp [1]->dreg;
type_from_op (cfg, ins, sp [0], sp [1]);
CHECK_TYPE (ins);
if (ovf_exc)
ins->inst_exc_name = ovf_exc;
else
ins->inst_exc_name = "OverflowException";
/* Have to insert a widening op */
add_widen_op (cfg, ins, &sp [0], &sp [1]);
ins->dreg = alloc_dreg (cfg, (MonoStackType)(ins)->type);
MONO_ADD_INS ((cfg)->cbb, ins);
/* The opcode might be emulated, so need to special case this */
if (ovf_exc && mono_find_jit_opcode_emulation (ins->opcode)) {
switch (ins->opcode) {
case OP_IMUL_OVF_UN:
/* This opcode is just a placeholder; it will be emulated as well */
ins->opcode = OP_IMUL_OVF_UN_OOM;
break;
case OP_LMUL_OVF_UN:
/* This opcode is just a placeholder; it will be emulated as well */
ins->opcode = OP_LMUL_OVF_UN_OOM;
break;
default:
g_assert_not_reached ();
}
}
ovf_exc = NULL;
*sp++ = mono_decompose_opcode (cfg, ins);
break;
case MONO_CEE_CPOBJ:
GSHAREDVT_FAILURE (il_op);
GSHAREDVT_FAILURE (*ip);
klass = mini_get_class (method, token, generic_context);
CHECK_TYPELOAD (klass);
sp -= 2;
mini_emit_memory_copy (cfg, sp [0], sp [1], klass, FALSE, ins_flag);
ins_flag = 0;
break;
case MONO_CEE_LDOBJ: {
int loc_index = -1;
int stloc_len = 0;
--sp;
klass = mini_get_class (method, token, generic_context);
CHECK_TYPELOAD (klass);
/* Optimize the common ldobj+stloc combination */
if (next_ip < end) {
switch (next_ip [0]) {
case MONO_CEE_STLOC_S:
CHECK_OPSIZE (7);
loc_index = next_ip [1];
stloc_len = 2;
break;
case MONO_CEE_STLOC_0:
case MONO_CEE_STLOC_1:
case MONO_CEE_STLOC_2:
case MONO_CEE_STLOC_3:
loc_index = next_ip [0] - CEE_STLOC_0;
stloc_len = 1;
break;
default:
break;
}
}
if ((loc_index != -1) && ip_in_bb (cfg, cfg->cbb, next_ip)) {
CHECK_LOCAL (loc_index);
EMIT_NEW_LOAD_MEMBASE_TYPE (cfg, ins, m_class_get_byval_arg (klass), sp [0]->dreg, 0);
ins->dreg = cfg->locals [loc_index]->dreg;
ins->flags |= ins_flag;
il_op = (MonoOpcodeEnum)next_ip [0];
next_ip += stloc_len;
if (ins_flag & MONO_INST_VOLATILE) {
/* Volatile loads have acquire semantics, see 12.6.7 in Ecma 335 */
mini_emit_memory_barrier (cfg, MONO_MEMORY_BARRIER_ACQ);
}
ins_flag = 0;
break;
}
/* Optimize the ldobj+stobj combination */
if (next_ip + 4 < end && next_ip [0] == CEE_STOBJ && ip_in_bb (cfg, cfg->cbb, next_ip) && read32 (next_ip + 1) == token) {
CHECK_STACK (1);
sp --;
mini_emit_memory_copy (cfg, sp [0], sp [1], klass, FALSE, ins_flag);
il_op = (MonoOpcodeEnum)next_ip [0];
next_ip += 5;
ins_flag = 0;
break;
}
ins = mini_emit_memory_load (cfg, m_class_get_byval_arg (klass), sp [0], 0, ins_flag);
*sp++ = ins;
ins_flag = 0;
inline_costs += 1;
break;
}
case MONO_CEE_LDSTR:
if (method->wrapper_type == MONO_WRAPPER_DYNAMIC_METHOD) {
EMIT_NEW_PCONST (cfg, ins, mono_method_get_wrapper_data (method, n));
ins->type = STACK_OBJ;
*sp = ins;
}
else if (method->wrapper_type != MONO_WRAPPER_NONE) {
MonoInst *iargs [1];
char *str = (char *)mono_method_get_wrapper_data (method, n);
if (cfg->compile_aot)
EMIT_NEW_LDSTRLITCONST (cfg, iargs [0], str);
else
EMIT_NEW_PCONST (cfg, iargs [0], str);
*sp = mono_emit_jit_icall (cfg, mono_string_new_wrapper_internal, iargs);
} else {
{
if (cfg->cbb->out_of_line) {
MonoInst *iargs [2];
if (image == mono_defaults.corlib) {
/*
* Avoid relocations in AOT and save some space by using a
* version of helper_ldstr specialized to mscorlib.
*/
EMIT_NEW_ICONST (cfg, iargs [0], mono_metadata_token_index (n));
*sp = mono_emit_jit_icall (cfg, mono_helper_ldstr_mscorlib, iargs);
} else {
/* Avoid creating the string object */
EMIT_NEW_IMAGECONST (cfg, iargs [0], image);
EMIT_NEW_ICONST (cfg, iargs [1], mono_metadata_token_index (n));
*sp = mono_emit_jit_icall (cfg, mono_helper_ldstr, iargs);
}
}
else
if (cfg->compile_aot) {
NEW_LDSTRCONST (cfg, ins, image, n);
*sp = ins;
MONO_ADD_INS (cfg->cbb, ins);
}
else {
NEW_PCONST (cfg, ins, NULL);
ins->type = STACK_OBJ;
ins->inst_p0 = mono_ldstr_checked (image, mono_metadata_token_index (n), cfg->error);
CHECK_CFG_ERROR;
if (!ins->inst_p0)
OUT_OF_MEMORY_FAILURE;
*sp = ins;
MONO_ADD_INS (cfg->cbb, ins);
}
}
}
sp++;
break;
case MONO_CEE_NEWOBJ: {
MonoInst *iargs [2];
MonoMethodSignature *fsig;
MonoInst this_ins;
MonoInst *alloc;
MonoInst *vtable_arg = NULL;
cmethod = mini_get_method (cfg, method, token, NULL, generic_context);
CHECK_CFG_ERROR;
fsig = mono_method_get_signature_checked (cmethod, image, token, generic_context, cfg->error);
CHECK_CFG_ERROR;
mono_save_token_info (cfg, image, token, cmethod);
if (!mono_class_init_internal (cmethod->klass))
TYPE_LOAD_ERROR (cmethod->klass);
context_used = mini_method_check_context_used (cfg, cmethod);
if (!dont_verify && !cfg->skip_visibility) {
MonoMethod *cil_method = cmethod;
MonoMethod *target_method = cil_method;
if (method->is_inflated) {
MonoGenericContainer *container = mono_method_get_generic_container(method_definition);
MonoGenericContext *context = (container != NULL ? &container->context : NULL);
target_method = mini_get_method_allow_open (method, token, NULL, context, cfg->error);
CHECK_CFG_ERROR;
}
if (!mono_method_can_access_method (method_definition, target_method) &&
!mono_method_can_access_method (method, cil_method))
emit_method_access_failure (cfg, method, cil_method);
}
if (cfg->gshared && cmethod && cmethod->klass != method->klass && mono_class_is_ginst (cmethod->klass) && mono_method_is_generic_sharable (cmethod, TRUE) && mono_class_needs_cctor_run (cmethod->klass, method)) {
emit_class_init (cfg, cmethod->klass);
CHECK_TYPELOAD (cmethod->klass);
}
/*
if (cfg->gsharedvt) {
if (mini_is_gsharedvt_variable_signature (sig))
GSHAREDVT_FAILURE (il_op);
}
*/
n = fsig->param_count;
CHECK_STACK (n);
/*
* Generate smaller code for the common newobj <exception> instruction in
* argument checking code.
*/
if (cfg->cbb->out_of_line && m_class_get_image (cmethod->klass) == mono_defaults.corlib &&
is_exception_class (cmethod->klass) && n <= 2 &&
((n < 1) || (!m_type_is_byref (fsig->params [0]) && fsig->params [0]->type == MONO_TYPE_STRING)) &&
((n < 2) || (!m_type_is_byref (fsig->params [1]) && fsig->params [1]->type == MONO_TYPE_STRING))) {
MonoInst *iargs [3];
sp -= n;
EMIT_NEW_ICONST (cfg, iargs [0], m_class_get_type_token (cmethod->klass));
switch (n) {
case 0:
*sp ++ = mono_emit_jit_icall (cfg, mono_create_corlib_exception_0, iargs);
break;
case 1:
iargs [1] = sp [0];
*sp ++ = mono_emit_jit_icall (cfg, mono_create_corlib_exception_1, iargs);
break;
case 2:
iargs [1] = sp [0];
iargs [2] = sp [1];
*sp ++ = mono_emit_jit_icall (cfg, mono_create_corlib_exception_2, iargs);
break;
default:
g_assert_not_reached ();
}
inline_costs += 5;
break;
}
/* move the args to allow room for 'this' in the first position */
while (n--) {
--sp;
sp [1] = sp [0];
}
for (int i = 0; i < fsig->param_count; ++i)
sp [i + fsig->hasthis] = convert_value (cfg, fsig->params [i], sp [i + fsig->hasthis]);
/* check_call_signature () requires sp[0] to be set */
this_ins.type = STACK_OBJ;
sp [0] = &this_ins;
if (check_call_signature (cfg, fsig, sp))
UNVERIFIED;
iargs [0] = NULL;
if (mini_class_is_system_array (cmethod->klass)) {
*sp = emit_get_rgctx_method (cfg, context_used,
cmethod, MONO_RGCTX_INFO_METHOD);
MonoJitICallId function = MONO_JIT_ICALL_ZeroIsReserved;
int rank = m_class_get_rank (cmethod->klass);
int n = fsig->param_count;
/* Optimize the common cases, use ctor using length for each rank (no lbound). */
if (n == rank) {
switch (n) {
case 1: function = MONO_JIT_ICALL_mono_array_new_1;
break;
case 2: function = MONO_JIT_ICALL_mono_array_new_2;
break;
case 3: function = MONO_JIT_ICALL_mono_array_new_3;
break;
case 4: function = MONO_JIT_ICALL_mono_array_new_4;
break;
default:
break;
}
}
/* Regular case: rank > 4, or length and lbound specified per rank. */
if (function == MONO_JIT_ICALL_ZeroIsReserved) {
// FIXME Maximum value of param_count? Realistically 64. Fits in imm?
if (!array_new_localalloc_ins) {
MONO_INST_NEW (cfg, array_new_localalloc_ins, OP_LOCALLOC_IMM);
array_new_localalloc_ins->dreg = alloc_preg (cfg);
cfg->flags |= MONO_CFG_HAS_ALLOCA;
MONO_ADD_INS (init_localsbb, array_new_localalloc_ins);
}
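/* One prologue localalloc is shared by all array-new sites in the method, sized for the largest. */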
array_new_localalloc_ins->inst_imm = MAX (array_new_localalloc_ins->inst_imm, n * sizeof (target_mgreg_t));
int dreg = array_new_localalloc_ins->dreg;
if (2 * rank == n) {
/* [lbound, length, lbound, length, ...]
* mono_array_new_n_icall expects a non-interleaved list of
* lbounds and lengths, so deinterleave here.
*/
for (int l = 0; l < 2; ++l) {
int src = l;
int dst = l * rank;
for (int r = 0; r < rank; ++r, src += 2, ++dst) {
NEW_STORE_MEMBASE (cfg, ins, OP_STORE_MEMBASE_REG, dreg, dst * sizeof (target_mgreg_t), sp [src + 1]->dreg);
MONO_ADD_INS (cfg->cbb, ins);
}
}
} else {
/* [length, length, length, ...] */
for (int i = 0; i < n; ++i) {
NEW_STORE_MEMBASE (cfg, ins, OP_STORE_MEMBASE_REG, dreg, i * sizeof (target_mgreg_t), sp [i + 1]->dreg);
MONO_ADD_INS (cfg->cbb, ins);
}
}
EMIT_NEW_ICONST (cfg, ins, n);
sp [1] = ins;
EMIT_NEW_UNALU (cfg, ins, OP_MOVE, alloc_preg (cfg), dreg);
ins->type = STACK_PTR;
sp [2] = ins;
// FIXME Adjust sp by n - 3? Attempts failed.
function = MONO_JIT_ICALL_mono_array_new_n_icall;
}
alloc = mono_emit_jit_icall_id (cfg, function, sp);
} else if (cmethod->string_ctor) {
g_assert (!context_used);
g_assert (!vtable_arg);
/* we simply pass a null pointer */
EMIT_NEW_PCONST (cfg, *sp, NULL);
/* now call the string ctor */
alloc = mini_emit_method_call_full (cfg, cmethod, fsig, FALSE, sp, NULL, NULL, NULL);
} else {
if (m_class_is_valuetype (cmethod->klass)) {
iargs [0] = mono_compile_create_var (cfg, m_class_get_byval_arg (cmethod->klass), OP_LOCAL);
mini_emit_init_rvar (cfg, iargs [0]->dreg, m_class_get_byval_arg (cmethod->klass));
EMIT_NEW_TEMPLOADA (cfg, *sp, iargs [0]->inst_c0);
alloc = NULL;
/*
* The code generated by mini_emit_virtual_call () expects
* iargs [0] to be a boxed instance, but luckily the vcall
* will be transformed into a normal call there.
*/
} else if (context_used) {
alloc = handle_alloc (cfg, cmethod->klass, FALSE, context_used);
*sp = alloc;
} else {
MonoVTable *vtable = NULL;
if (!cfg->compile_aot)
vtable = mono_class_vtable_checked (cmethod->klass, cfg->error);
CHECK_CFG_ERROR;
CHECK_TYPELOAD (cmethod->klass);
/*
* TypeInitializationExceptions thrown from the mono_runtime_class_init
* call in mono_jit_runtime_invoke () can abort the finalizer thread.
* As a workaround, we call class cctors before allocating objects.
*/
if (mini_field_access_needs_cctor_run (cfg, method, cmethod->klass, vtable) && !(g_slist_find (class_inits, cmethod->klass))) {
emit_class_init (cfg, cmethod->klass);
if (cfg->verbose_level > 2)
printf ("class %s.%s needs init call for ctor\n", m_class_get_name_space (cmethod->klass), m_class_get_name (cmethod->klass));
class_inits = g_slist_prepend (class_inits, cmethod->klass);
}
alloc = handle_alloc (cfg, cmethod->klass, FALSE, 0);
*sp = alloc;
}
CHECK_CFG_EXCEPTION; /*for handle_alloc*/
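/* Assert non-nullness of the fresh allocation so later null checks can be eliminated. */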
if (alloc)
MONO_EMIT_NEW_UNALU (cfg, OP_NOT_NULL, -1, alloc->dreg);
/* Now call the actual ctor */
int ctor_inline_costs = 0;
handle_ctor_call (cfg, cmethod, fsig, context_used, sp, ip, &ctor_inline_costs);
// don't contribute to inline_costs if the ctor has [MethodImpl(MethodImplOptions.AggressiveInlining)]
if (!COMPILE_LLVM(cfg) || !(cmethod->iflags & METHOD_IMPL_ATTRIBUTE_AGGRESSIVE_INLINING))
inline_costs += ctor_inline_costs;
CHECK_CFG_EXCEPTION;
}
if (alloc == NULL) {
/* Valuetype */
EMIT_NEW_TEMPLOAD (cfg, ins, iargs [0]->inst_c0);
mini_type_to_eval_stack_type (cfg, m_class_get_byval_arg (ins->klass), ins);
*sp++= ins;
} else {
*sp++ = alloc;
}
inline_costs += 5;
if (!(seq_point_locs && mono_bitset_test_fast (seq_point_locs, next_ip - header->code)))
emit_seq_point (cfg, method, next_ip, FALSE, TRUE);
break;
}
case MONO_CEE_CASTCLASS:
case MONO_CEE_ISINST: {
--sp;
klass = mini_get_class (method, token, generic_context);
CHECK_TYPELOAD (klass);
if (sp [0]->type != STACK_OBJ)
UNVERIFIED;
MONO_INST_NEW (cfg, ins, (il_op == MONO_CEE_ISINST) ? OP_ISINST : OP_CASTCLASS);
ins->dreg = alloc_preg (cfg);
ins->sreg1 = (*sp)->dreg;
ins->klass = klass;
ins->type = STACK_OBJ;
MONO_ADD_INS (cfg->cbb, ins);
CHECK_CFG_EXCEPTION;
*sp++ = ins;
cfg->flags |= MONO_CFG_HAS_TYPE_CHECK;
break;
}
case MONO_CEE_UNBOX_ANY: {
MonoInst *res, *addr;
--sp;
klass = mini_get_class (method, token, generic_context);
CHECK_TYPELOAD (klass);
mono_save_token_info (cfg, image, token, klass);
context_used = mini_class_check_context_used (cfg, klass);
if (mini_is_gsharedvt_klass (klass)) {
res = handle_unbox_gsharedvt (cfg, klass, *sp);
inline_costs += 2;
} else if (mini_class_is_reference (klass)) {
if (MONO_INS_IS_PCONST_NULL (*sp)) {
EMIT_NEW_PCONST (cfg, res, NULL);
res->type = STACK_OBJ;
} else {
MONO_INST_NEW (cfg, res, OP_CASTCLASS);
res->dreg = alloc_preg (cfg);
res->sreg1 = (*sp)->dreg;
res->klass = klass;
res->type = STACK_OBJ;
MONO_ADD_INS (cfg->cbb, res);
cfg->flags |= MONO_CFG_HAS_TYPE_CHECK;
}
} else if (mono_class_is_nullable (klass)) {
res = handle_unbox_nullable (cfg, *sp, klass, context_used);
} else {
addr = mini_handle_unbox (cfg, klass, *sp, context_used);
/* LDOBJ */
EMIT_NEW_LOAD_MEMBASE_TYPE (cfg, ins, m_class_get_byval_arg (klass), addr->dreg, 0);
res = ins;
inline_costs += 2;
}
*sp ++ = res;
break;
}
case MONO_CEE_BOX: {
MonoInst *val;
MonoClass *enum_class;
MonoMethod *has_flag;
MonoMethodSignature *has_flag_sig;
--sp;
val = *sp;
klass = mini_get_class (method, token, generic_context);
CHECK_TYPELOAD (klass);
mono_save_token_info (cfg, image, token, klass);
context_used = mini_class_check_context_used (cfg, klass);
if (mini_class_is_reference (klass)) {
*sp++ = val;
break;
}
val = convert_value (cfg, m_class_get_byval_arg (klass), val);
if (klass == mono_defaults.void_class)
UNVERIFIED;
if (target_type_is_incompatible (cfg, m_class_get_byval_arg (klass), val))
UNVERIFIED;
/* frequent check in generic code: box (struct), brtrue */
/*
* Look for:
*
* <push int/long ptr>
* <push int/long>
* box MyFlags
* constrained. MyFlags
* callvirt instance bool class [mscorlib] System.Enum::HasFlag (class [mscorlib] System.Enum)
*
* If we find this sequence and the operand types on box and constrained
* are equal, we can emit a specialized instruction sequence instead of
* the very slow HasFlag () call.
* This code sequence is generated by older mcs/csc; the newer one is handled in
* emit_inst_for_method ().
*/
guint32 constrained_token;
guint32 callvirt_token;
if ((cfg->opt & MONO_OPT_INTRINS) &&
// FIXME ip_in_bb as we go?
next_ip < end && ip_in_bb (cfg, cfg->cbb, next_ip) &&
(ip = il_read_constrained (next_ip, end, &constrained_token)) &&
ip_in_bb (cfg, cfg->cbb, ip) &&
(ip = il_read_callvirt (ip, end, &callvirt_token)) &&
ip_in_bb (cfg, cfg->cbb, ip) &&
m_class_is_enumtype (klass) &&
(enum_class = mini_get_class (method, constrained_token, generic_context)) &&
(has_flag = mini_get_method (cfg, method, callvirt_token, NULL, generic_context)) &&
has_flag->klass == mono_defaults.enum_class &&
!strcmp (has_flag->name, "HasFlag") &&
(has_flag_sig = mono_method_signature_internal (has_flag)) &&
has_flag_sig->hasthis &&
has_flag_sig->param_count == 1) {
CHECK_TYPELOAD (enum_class);
if (enum_class == klass) {
MonoInst *enum_this, *enum_flag;
next_ip = ip;
il_op = MONO_CEE_CALLVIRT;
--sp;
enum_this = sp [0];
enum_flag = sp [1];
*sp++ = mini_handle_enum_has_flag (cfg, klass, enum_this, -1, enum_flag);
break;
}
}
guint32 unbox_any_token;
/*
* Common in generic code:
* box T1, unbox.any T2.
*/
if ((cfg->opt & MONO_OPT_INTRINS) &&
next_ip < end && ip_in_bb (cfg, cfg->cbb, next_ip) &&
(ip = il_read_unbox_any (next_ip, end, &unbox_any_token))) {
MonoClass *unbox_klass = mini_get_class (method, unbox_any_token, generic_context);
CHECK_TYPELOAD (unbox_klass);
if (klass == unbox_klass) {
next_ip = ip;
*sp++ = val;
break;
}
}
// Optimize
//
// box
// call object::GetType()
//
guint32 gettype_token;
if ((ip = il_read_call(next_ip, end, &gettype_token)) && ip_in_bb (cfg, cfg->cbb, ip)) {
MonoMethod* gettype_method = mini_get_method (cfg, method, gettype_token, NULL, generic_context);
if (!strcmp (gettype_method->name, "GetType") && gettype_method->klass == mono_defaults.object_class) {
mono_class_init_internal(klass);
if (mono_class_get_checked (m_class_get_image (klass), m_class_get_type_token (klass), error) == klass) {
if (cfg->compile_aot) {
EMIT_NEW_TYPE_FROM_HANDLE_CONST (cfg, ins, m_class_get_image (klass), m_class_get_type_token (klass), generic_context);
} else {
MonoType *klass_type = m_class_get_byval_arg (klass);
MonoReflectionType* reflection_type = mono_type_get_object_checked (klass_type, cfg->error);
EMIT_NEW_PCONST (cfg, ins, reflection_type);
}
ins->type = STACK_OBJ;
ins->klass = mono_defaults.systemtype_class;
*sp++ = ins;
next_ip = ip;
break;
}
}
}
// Optimize
//
// box
// ldnull
// ceq (or cgt.un)
//
// to just
//
// ldc.i4.0 (or 1)
guchar* ldnull_ip;
if ((ldnull_ip = il_read_op (next_ip, end, CEE_LDNULL, MONO_CEE_LDNULL)) && ip_in_bb (cfg, cfg->cbb, ldnull_ip)) {
gboolean is_eq = FALSE, is_neq = FALSE;
if ((ip = il_read_op (ldnull_ip, end, CEE_PREFIX1, MONO_CEE_CEQ)))
is_eq = TRUE;
else if ((ip = il_read_op (ldnull_ip, end, CEE_PREFIX1, MONO_CEE_CGT_UN)))
is_neq = TRUE;
if ((is_eq || is_neq) && ip_in_bb (cfg, cfg->cbb, ip) &&
!mono_class_is_nullable (klass) && !mini_is_gsharedvt_klass (klass)) {
next_ip = ip;
il_op = (MonoOpcodeEnum) (is_eq ? CEE_LDC_I4_0 : CEE_LDC_I4_1);
EMIT_NEW_ICONST (cfg, ins, is_eq ? 0 : 1);
ins->type = STACK_I4;
*sp++ = ins;
break;
}
}
guint32 isinst_tk = 0;
if ((ip = il_read_op_and_token (next_ip, end, CEE_ISINST, MONO_CEE_ISINST, &isinst_tk)) &&
ip_in_bb (cfg, cfg->cbb, ip)) {
MonoClass *isinst_class = mini_get_class (method, isinst_tk, generic_context);
if (!mono_class_is_nullable (klass) && !mono_class_is_nullable (isinst_class) &&
!mini_is_gsharedvt_variable_klass (klass) && !mini_is_gsharedvt_variable_klass (isinst_class) &&
!mono_class_is_open_constructed_type (m_class_get_byval_arg (klass)) &&
!mono_class_is_open_constructed_type (m_class_get_byval_arg (isinst_class))) {
// Optimize
//
// box
// isinst [Type]
// brfalse/brtrue
//
// to
//
// ldc.i4.0 (or 1)
// brfalse/brtrue
//
guchar* br_ip = NULL;
if ((br_ip = il_read_brtrue (ip, end, &target)) || (br_ip = il_read_brtrue_s (ip, end, &target)) ||
(br_ip = il_read_brfalse (ip, end, &target)) || (br_ip = il_read_brfalse_s (ip, end, &target))) {
gboolean isinst = mono_class_is_assignable_from_internal (isinst_class, klass);
next_ip = ip;
il_op = (MonoOpcodeEnum) (isinst ? CEE_LDC_I4_1 : CEE_LDC_I4_0);
EMIT_NEW_ICONST (cfg, ins, isinst ? 1 : 0);
ins->type = STACK_I4;
*sp++ = ins;
break;
}
// Optimize
//
// box
// isinst [Type]
// ldnull
// ceq/cgt.un
//
// to
//
// ldc.i4.0 (or 1)
//
guchar* ldnull_ip = NULL;
if ((ldnull_ip = il_read_op (ip, end, CEE_LDNULL, MONO_CEE_LDNULL)) && ip_in_bb (cfg, cfg->cbb, ldnull_ip)) {
gboolean is_eq = FALSE, is_neq = FALSE;
if ((ip = il_read_op (ldnull_ip, end, CEE_PREFIX1, MONO_CEE_CEQ)))
is_eq = TRUE;
else if ((ip = il_read_op (ldnull_ip, end, CEE_PREFIX1, MONO_CEE_CGT_UN)))
is_neq = TRUE;
if ((is_eq || is_neq) && ip_in_bb (cfg, cfg->cbb, ip) &&
!mono_class_is_nullable (klass) && !mini_is_gsharedvt_klass (klass)) {
gboolean isinst = mono_class_is_assignable_from_internal (isinst_class, klass);
next_ip = ip;
if (is_eq)
isinst = !isinst;
il_op = (MonoOpcodeEnum) (isinst ? CEE_LDC_I4_1 : CEE_LDC_I4_0);
EMIT_NEW_ICONST (cfg, ins, isinst ? 1 : 0);
ins->type = STACK_I4;
*sp++ = ins;
break;
}
}
// Optimize
//
// box
// isinst [Type]
// unbox.any
//
// to
//
// nop
//
guchar* unbox_ip = NULL;
guint32 unbox_token = 0;
if ((unbox_ip = il_read_unbox_any (ip, end, &unbox_token)) && ip_in_bb (cfg, cfg->cbb, unbox_ip)) {
MonoClass *unbox_klass = mini_get_class (method, unbox_token, generic_context);
CHECK_TYPELOAD (unbox_klass);
if (!mono_class_is_nullable (unbox_klass) &&
!mini_is_gsharedvt_klass (unbox_klass) &&
klass == isinst_class &&
klass == unbox_klass)
{
*sp++ = val;
next_ip = unbox_ip;
break;
}
}
}
}
gboolean is_true;
// FIXME: LLVM can't handle the inconsistent bb linking
if (!mono_class_is_nullable (klass) &&
!mini_is_gsharedvt_klass (klass) &&
next_ip < end && ip_in_bb (cfg, cfg->cbb, next_ip) &&
( (is_true = !!(ip = il_read_brtrue (next_ip, end, &target))) ||
(is_true = !!(ip = il_read_brtrue_s (next_ip, end, &target))) ||
(ip = il_read_brfalse (next_ip, end, &target)) ||
(ip = il_read_brfalse_s (next_ip, end, &target)))) {
int dreg;
MonoBasicBlock *true_bb, *false_bb;
il_op = (MonoOpcodeEnum)next_ip [0];
next_ip = ip;
if (cfg->verbose_level > 3) {
printf ("converting (in B%d: stack: %d) %s", cfg->cbb->block_num, (int)(sp - stack_start), mono_disasm_code_one (NULL, method, ip, NULL));
printf ("<box+brtrue opt>\n");
}
/*
* We need to link both bblocks, since it is needed for handling stack
* arguments correctly (See test_0_box_brtrue_opt_regress_81102).
* Branching to only one of them would lead to inconsistencies, so
* generate an ICONST+BRTRUE, the branch opts will get rid of them.
*/
GET_BBLOCK (cfg, true_bb, target);
GET_BBLOCK (cfg, false_bb, next_ip);
mono_link_bblock (cfg, cfg->cbb, true_bb);
mono_link_bblock (cfg, cfg->cbb, false_bb);
if (sp != stack_start) {
handle_stack_args (cfg, stack_start, sp - stack_start);
sp = stack_start;
CHECK_UNVERIFIABLE (cfg);
}
if (COMPILE_LLVM (cfg)) {
dreg = alloc_ireg (cfg);
MONO_EMIT_NEW_ICONST (cfg, dreg, 0);
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, dreg, is_true ? 0 : 1);
MONO_EMIT_NEW_BRANCH_BLOCK2 (cfg, OP_IBEQ, true_bb, false_bb);
} else {
/* The JIT can't eliminate the iconst+compare */
MONO_INST_NEW (cfg, ins, OP_BR);
ins->inst_target_bb = is_true ? true_bb : false_bb;
MONO_ADD_INS (cfg->cbb, ins);
}
start_new_bblock = 1;
break;
}
if (m_class_is_enumtype (klass) && !mini_is_gsharedvt_klass (klass) && !(val->type == STACK_I8 && TARGET_SIZEOF_VOID_P == 4)) {
/* Can't do this with 64 bit enums on 32 bit since the vtype decomp pass is run after the long decomp pass */
if (val->opcode == OP_ICONST) {
MONO_INST_NEW (cfg, ins, OP_BOX_ICONST);
ins->type = STACK_OBJ;
ins->klass = klass;
ins->inst_c0 = val->inst_c0;
ins->dreg = alloc_dreg (cfg, (MonoStackType)val->type);
} else {
MONO_INST_NEW (cfg, ins, OP_BOX);
ins->type = STACK_OBJ;
ins->klass = klass;
ins->sreg1 = val->dreg;
ins->dreg = alloc_dreg (cfg, (MonoStackType)val->type);
}
MONO_ADD_INS (cfg->cbb, ins);
*sp++ = ins;
} else {
*sp++ = mini_emit_box (cfg, val, klass, context_used);
}
CHECK_CFG_EXCEPTION;
inline_costs += 1;
break;
}
case MONO_CEE_UNBOX: {
--sp;
klass = mini_get_class (method, token, generic_context);
CHECK_TYPELOAD (klass);
mono_save_token_info (cfg, image, token, klass);
context_used = mini_class_check_context_used (cfg, klass);
if (mono_class_is_nullable (klass)) {
MonoInst *val;
val = handle_unbox_nullable (cfg, *sp, klass, context_used);
EMIT_NEW_VARLOADA (cfg, ins, get_vreg_to_inst (cfg, val->dreg), m_class_get_byval_arg (val->klass));
*sp++= ins;
} else {
ins = mini_handle_unbox (cfg, klass, *sp, context_used);
*sp++ = ins;
}
inline_costs += 2;
break;
}
case MONO_CEE_LDFLD:
case MONO_CEE_LDFLDA:
case MONO_CEE_STFLD:
case MONO_CEE_LDSFLD:
case MONO_CEE_LDSFLDA:
case MONO_CEE_STSFLD: {
MonoClassField *field;
guint foffset;
gboolean is_instance;
gpointer addr = NULL;
gboolean is_special_static;
MonoType *ftype;
MonoInst *store_val = NULL;
MonoInst *thread_ins;
is_instance = (il_op == MONO_CEE_LDFLD || il_op == MONO_CEE_LDFLDA || il_op == MONO_CEE_STFLD);
if (is_instance) {
if (il_op == MONO_CEE_STFLD) {
sp -= 2;
store_val = sp [1];
} else {
--sp;
}
if (sp [0]->type == STACK_I4 || sp [0]->type == STACK_I8 || sp [0]->type == STACK_R8)
UNVERIFIED;
if (il_op != MONO_CEE_LDFLD && sp [0]->type == STACK_VTYPE)
UNVERIFIED;
} else {
if (il_op == MONO_CEE_STSFLD) {
sp--;
store_val = sp [0];
}
}
if (method->wrapper_type != MONO_WRAPPER_NONE) {
field = (MonoClassField *)mono_method_get_wrapper_data (method, token);
klass = m_field_get_parent (field);
}
else {
klass = NULL;
field = mono_field_from_token_checked (image, token, &klass, generic_context, cfg->error);
if (!field)
CHECK_TYPELOAD (klass);
CHECK_CFG_ERROR;
}
if (!dont_verify && !cfg->skip_visibility && !mono_method_can_access_field (method, field))
FIELD_ACCESS_FAILURE (method, field);
mono_class_init_internal (klass);
mono_class_setup_fields (klass);
ftype = mono_field_get_type_internal (field);
/*
* LDFLD etc. is usable on static fields as well, so convert those cases to
* the static case.
*/
if (is_instance && ftype->attrs & FIELD_ATTRIBUTE_STATIC) {
switch (il_op) {
case MONO_CEE_LDFLD:
il_op = MONO_CEE_LDSFLD;
break;
case MONO_CEE_STFLD:
il_op = MONO_CEE_STSFLD;
break;
case MONO_CEE_LDFLDA:
il_op = MONO_CEE_LDSFLDA;
break;
default:
g_assert_not_reached ();
}
is_instance = FALSE;
}
context_used = mini_class_check_context_used (cfg, klass);
if (il_op == MONO_CEE_LDSFLD) {
ins = mini_emit_inst_for_field_load (cfg, field);
if (ins) {
*sp++ = ins;
goto field_access_end;
}
}
/* INSTANCE CASE */
if (is_instance)
g_assert (field->offset);
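/* Field offsets are computed relative to the boxed form; for a valuetype receiver, subtract the MonoObject header to get the unboxed offset. */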
foffset = m_class_is_valuetype (klass) ? field->offset - MONO_ABI_SIZEOF (MonoObject): field->offset;
if (il_op == MONO_CEE_STFLD) {
sp [1] = convert_value (cfg, field->type, sp [1]);
if (target_type_is_incompatible (cfg, field->type, sp [1]))
UNVERIFIED;
{
MonoInst *store;
MONO_EMIT_NULL_CHECK (cfg, sp [0]->dreg, foffset > mono_target_pagesize ());
if (ins_flag & MONO_INST_VOLATILE) {
/* Volatile stores have release semantics, see 12.6.7 in Ecma 335 */
mini_emit_memory_barrier (cfg, MONO_MEMORY_BARRIER_REL);
}
if (mini_is_gsharedvt_klass (klass)) {
MonoInst *offset_ins;
context_used = mini_class_check_context_used (cfg, klass);
offset_ins = emit_get_gsharedvt_info (cfg, field, MONO_RGCTX_INFO_FIELD_OFFSET);
/* The value is offset by 1, so a slot value of 0 can be distinguished from a valid zero offset */
EMIT_NEW_BIALU_IMM (cfg, ins, OP_PSUB_IMM, offset_ins->dreg, offset_ins->dreg, 1);
dreg = alloc_ireg_mp (cfg);
EMIT_NEW_BIALU (cfg, ins, OP_PADD, dreg, sp [0]->dreg, offset_ins->dreg);
if (cfg->gen_write_barriers && mini_type_to_stind (cfg, field->type) == CEE_STIND_REF && !MONO_INS_IS_PCONST_NULL (sp [1])) {
store = mini_emit_storing_write_barrier (cfg, ins, sp [1]);
} else {
/* The decomposition will call mini_emit_memory_copy () which will emit a wbarrier if needed */
EMIT_NEW_STORE_MEMBASE_TYPE (cfg, store, field->type, dreg, 0, sp [1]->dreg);
}
} else {
if (cfg->gen_write_barriers && mini_type_to_stind (cfg, field->type) == CEE_STIND_REF && !MONO_INS_IS_PCONST_NULL (sp [1])) {
/* insert call to write barrier */
MonoInst *ptr;
int dreg;
dreg = alloc_ireg_mp (cfg);
EMIT_NEW_BIALU_IMM (cfg, ptr, OP_PADD_IMM, dreg, sp [0]->dreg, foffset);
store = mini_emit_storing_write_barrier (cfg, ptr, sp [1]);
} else {
EMIT_NEW_STORE_MEMBASE_TYPE (cfg, store, field->type, sp [0]->dreg, foffset, sp [1]->dreg);
}
}
if (sp [0]->opcode != OP_LDADDR)
store->flags |= MONO_INST_FAULT;
store->flags |= ins_flag;
}
goto field_access_end;
}
if (is_instance) {
if (sp [0]->type == STACK_VTYPE) {
MonoInst *var;
/* Have to compute the address of the variable */
var = get_vreg_to_inst (cfg, sp [0]->dreg);
if (!var)
var = mono_compile_create_var_for_vreg (cfg, m_class_get_byval_arg (klass), OP_LOCAL, sp [0]->dreg);
else
g_assert (var->klass == klass);
EMIT_NEW_VARLOADA (cfg, ins, var, m_class_get_byval_arg (var->klass));
sp [0] = ins;
}
if (il_op == MONO_CEE_LDFLDA) {
if (sp [0]->type == STACK_OBJ) {
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, sp [0]->dreg, 0);
MONO_EMIT_NEW_COND_EXC (cfg, EQ, "NullReferenceException");
}
dreg = alloc_ireg_mp (cfg);
if (mini_is_gsharedvt_klass (klass)) {
MonoInst *offset_ins;
offset_ins = emit_get_gsharedvt_info (cfg, field, MONO_RGCTX_INFO_FIELD_OFFSET);
/* The value is offset by 1 */
EMIT_NEW_BIALU_IMM (cfg, ins, OP_PSUB_IMM, offset_ins->dreg, offset_ins->dreg, 1);
EMIT_NEW_BIALU (cfg, ins, OP_PADD, dreg, sp [0]->dreg, offset_ins->dreg);
} else {
EMIT_NEW_BIALU_IMM (cfg, ins, OP_PADD_IMM, dreg, sp [0]->dreg, foffset);
}
ins->klass = mono_class_from_mono_type_internal (field->type);
ins->type = STACK_MP;
*sp++ = ins;
} else {
MonoInst *load;
MONO_EMIT_NULL_CHECK (cfg, sp [0]->dreg, foffset > mono_target_pagesize ());
#ifdef MONO_ARCH_SIMD_INTRINSICS
if (sp [0]->opcode == OP_LDADDR && m_class_is_simd_type (klass) && cfg->opt & MONO_OPT_SIMD) {
ins = mono_emit_simd_field_load (cfg, field, sp [0]);
if (ins) {
*sp++ = ins;
goto field_access_end;
}
}
#endif
MonoInst *field_add_inst = sp [0];
if (mini_is_gsharedvt_klass (klass)) {
MonoInst *offset_ins;
offset_ins = emit_get_gsharedvt_info (cfg, field, MONO_RGCTX_INFO_FIELD_OFFSET);
/* The value is offset by 1 */
EMIT_NEW_BIALU_IMM (cfg, ins, OP_PSUB_IMM, offset_ins->dreg, offset_ins->dreg, 1);
EMIT_NEW_BIALU (cfg, field_add_inst, OP_PADD, alloc_ireg_mp (cfg), sp [0]->dreg, offset_ins->dreg);
foffset = 0;
}
load = mini_emit_memory_load (cfg, field->type, field_add_inst, foffset, ins_flag);
if (sp [0]->opcode != OP_LDADDR)
load->flags |= MONO_INST_FAULT;
*sp++ = load;
}
}
if (is_instance)
goto field_access_end;
/* STATIC CASE */
context_used = mini_class_check_context_used (cfg, klass);
if (ftype->attrs & FIELD_ATTRIBUTE_LITERAL) {
mono_error_set_field_missing (cfg->error, m_field_get_parent (field), field->name, NULL, "Using static instructions with literal field");
CHECK_CFG_ERROR;
}
/* The special_static_fields data is initialized in mono_class_vtable (), so
 * mono_class_vtable_checked () needs to be called here.
 */
if (!context_used) {
mono_class_vtable_checked (klass, cfg->error);
CHECK_CFG_ERROR;
CHECK_TYPELOAD (klass);
}
addr = mono_special_static_field_get_offset (field, cfg->error);
CHECK_CFG_ERROR;
CHECK_TYPELOAD (klass);
is_special_static = mono_class_field_is_special_static (field);
if (is_special_static && ((gsize)addr & 0x80000000) == 0)
thread_ins = mono_create_tls_get (cfg, TLS_KEY_THREAD);
else
thread_ins = NULL;
/* Generate IR to compute the field address */
if (is_special_static && ((gsize)addr & 0x80000000) == 0 && thread_ins &&
!(context_used && cfg->gsharedvt && mini_is_gsharedvt_klass (klass))) {
/*
* Fast access to TLS data
* Inline version of get_thread_static_data () in
* threads.c.
*/
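/* The encoded offset packs an index into the per-thread static_data pointer array in the low 6 bits and the byte offset within that chunk in the remaining bits; if bit 31 is set, this fast TLS path cannot be used. */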
guint32 offset;
int idx, static_data_reg, array_reg, dreg;
static_data_reg = alloc_ireg (cfg);
MONO_EMIT_NEW_LOAD_MEMBASE (cfg, static_data_reg, thread_ins->dreg, MONO_STRUCT_OFFSET (MonoInternalThread, static_data));
if (cfg->compile_aot || context_used) {
int offset_reg, offset2_reg, idx_reg;
/* For TLS variables, this will return the TLS offset */
if (context_used) {
MonoInst *addr_ins = emit_get_rgctx_field (cfg, context_used, field, MONO_RGCTX_INFO_FIELD_OFFSET);
/* The value is offset by 1 */
EMIT_NEW_BIALU_IMM (cfg, ins, OP_PSUB_IMM, addr_ins->dreg, addr_ins->dreg, 1);
} else {
EMIT_NEW_SFLDACONST (cfg, ins, field);
}
offset_reg = ins->dreg;
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_IAND_IMM, offset_reg, offset_reg, 0x7fffffff);
idx_reg = alloc_ireg (cfg);
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_IAND_IMM, idx_reg, offset_reg, 0x3f);
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_ISHL_IMM, idx_reg, idx_reg, TARGET_SIZEOF_VOID_P == 8 ? 3 : 2);
MONO_EMIT_NEW_BIALU (cfg, OP_PADD, static_data_reg, static_data_reg, idx_reg);
array_reg = alloc_ireg (cfg);
MONO_EMIT_NEW_LOAD_MEMBASE (cfg, array_reg, static_data_reg, 0);
offset2_reg = alloc_ireg (cfg);
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_ISHR_UN_IMM, offset2_reg, offset_reg, 6);
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_IAND_IMM, offset2_reg, offset2_reg, 0x1ffffff);
dreg = alloc_ireg (cfg);
EMIT_NEW_BIALU (cfg, ins, OP_PADD, dreg, array_reg, offset2_reg);
} else {
offset = (gsize)addr & 0x7fffffff;
idx = offset & 0x3f;
array_reg = alloc_ireg (cfg);
MONO_EMIT_NEW_LOAD_MEMBASE (cfg, array_reg, static_data_reg, idx * TARGET_SIZEOF_VOID_P);
dreg = alloc_ireg (cfg);
EMIT_NEW_BIALU_IMM (cfg, ins, OP_ADD_IMM, dreg, array_reg, ((offset >> 6) & 0x1ffffff));
}
} else if ((cfg->compile_aot && is_special_static) ||
(context_used && is_special_static)) {
MonoInst *iargs [1];
g_assert (m_field_get_parent (field));
if (context_used) {
iargs [0] = emit_get_rgctx_field (cfg, context_used,
field, MONO_RGCTX_INFO_CLASS_FIELD);
} else {
EMIT_NEW_FIELDCONST (cfg, iargs [0], field);
}
ins = mono_emit_jit_icall (cfg, mono_class_static_field_address, iargs);
} else if (context_used) {
MonoInst *static_data;
/*
g_print ("sharing static field access in %s.%s.%s - depth %d offset %d\n",
method->klass->name_space, method->klass->name, method->name,
depth, field->offset);
*/
if (mono_class_needs_cctor_run (klass, method))
emit_class_init (cfg, klass);
/*
* The pointer we're computing here is
*
* super_info.static_data + field->offset
*/
static_data = mini_emit_get_rgctx_klass (cfg, context_used,
klass, MONO_RGCTX_INFO_STATIC_DATA);
if (mini_is_gsharedvt_klass (klass)) {
MonoInst *offset_ins;
offset_ins = emit_get_rgctx_field (cfg, context_used, field, MONO_RGCTX_INFO_FIELD_OFFSET);
/* The value is offset by 1 */
EMIT_NEW_BIALU_IMM (cfg, ins, OP_PSUB_IMM, offset_ins->dreg, offset_ins->dreg, 1);
dreg = alloc_ireg_mp (cfg);
EMIT_NEW_BIALU (cfg, ins, OP_PADD, dreg, static_data->dreg, offset_ins->dreg);
} else if (field->offset == 0) {
ins = static_data;
} else {
int addr_reg = mono_alloc_preg (cfg);
EMIT_NEW_BIALU_IMM (cfg, ins, OP_PADD_IMM, addr_reg, static_data->dreg, field->offset);
}
} else if (cfg->compile_aot && addr) {
MonoInst *iargs [1];
g_assert (m_field_get_parent (field));
EMIT_NEW_FIELDCONST (cfg, iargs [0], field);
ins = mono_emit_jit_icall (cfg, mono_class_static_field_address, iargs);
} else {
MonoVTable *vtable = NULL;
if (!cfg->compile_aot)
vtable = mono_class_vtable_checked (klass, cfg->error);
CHECK_CFG_ERROR;
CHECK_TYPELOAD (klass);
if (!addr) {
if (mini_field_access_needs_cctor_run (cfg, method, klass, vtable)) {
if (!(g_slist_find (class_inits, klass))) {
emit_class_init (cfg, klass);
if (cfg->verbose_level > 2)
printf ("class %s.%s needs init call for %s\n", m_class_get_name_space (klass), m_class_get_name (klass), mono_field_get_name (field));
class_inits = g_slist_prepend (class_inits, klass);
}
} else {
if (cfg->run_cctors) {
/* This makes it so that inlining cannot trigger */
/* .cctors: too many apps depend on them */
/* running in a specific order... */
g_assert (vtable);
if (!vtable->initialized && m_class_has_cctor (vtable->klass))
INLINE_FAILURE ("class init");
if (!mono_runtime_class_init_full (vtable, cfg->error)) {
mono_cfg_set_exception (cfg, MONO_EXCEPTION_MONO_ERROR);
goto exception_exit;
}
}
}
if (cfg->compile_aot)
EMIT_NEW_SFLDACONST (cfg, ins, field);
else {
g_assert (vtable);
addr = mono_static_field_get_addr (vtable, field);
g_assert (addr);
EMIT_NEW_PCONST (cfg, ins, addr);
}
} else {
MonoInst *iargs [1];
EMIT_NEW_ICONST (cfg, iargs [0], GPOINTER_TO_UINT (addr));
ins = mono_emit_jit_icall (cfg, mono_get_special_static_data, iargs);
}
}
/* Generate IR to do the actual load/store operation */
if ((il_op == MONO_CEE_STFLD || il_op == MONO_CEE_STSFLD)) {
if (ins_flag & MONO_INST_VOLATILE) {
/* Volatile stores have release semantics, see 12.6.7 in Ecma 335 */
mini_emit_memory_barrier (cfg, MONO_MEMORY_BARRIER_REL);
} else if (!mini_debug_options.weak_memory_model && mini_type_is_reference (ftype)) {
mini_emit_memory_barrier (cfg, MONO_MEMORY_BARRIER_REL);
}
}
if (il_op == MONO_CEE_LDSFLDA) {
ins->klass = mono_class_from_mono_type_internal (ftype);
ins->type = STACK_PTR;
*sp++ = ins;
} else if (il_op == MONO_CEE_STSFLD) {
MonoInst *store;
EMIT_NEW_STORE_MEMBASE_TYPE (cfg, store, ftype, ins->dreg, 0, store_val->dreg);
store->flags |= ins_flag;
} else {
gboolean is_const = FALSE;
MonoVTable *vtable = NULL;
gpointer addr = NULL;
if (!context_used) {
vtable = mono_class_vtable_checked (klass, cfg->error);
CHECK_CFG_ERROR;
CHECK_TYPELOAD (klass);
}
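/* An initonly static field of an already-initialized class (or one with an AOT readonly override) can be constant-folded when its type allows it. */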
if ((ftype->attrs & FIELD_ATTRIBUTE_INIT_ONLY) && (((addr = mono_aot_readonly_field_override (field)) != NULL) ||
(!context_used && !cfg->compile_aot && vtable->initialized))) {
int ro_type = ftype->type;
if (!addr)
addr = mono_static_field_get_addr (vtable, field);
if (ro_type == MONO_TYPE_VALUETYPE && m_class_is_enumtype (ftype->data.klass)) {
ro_type = mono_class_enum_basetype_internal (ftype->data.klass)->type;
}
GSHAREDVT_FAILURE (il_op);
/* printf ("RO-FIELD %s.%s:%s\n", klass->name_space, klass->name, mono_field_get_name (field));*/
is_const = TRUE;
switch (ro_type) {
case MONO_TYPE_BOOLEAN:
case MONO_TYPE_U1:
EMIT_NEW_ICONST (cfg, *sp, *((guint8 *)addr));
sp++;
break;
case MONO_TYPE_I1:
EMIT_NEW_ICONST (cfg, *sp, *((gint8 *)addr));
sp++;
break;
case MONO_TYPE_CHAR:
case MONO_TYPE_U2:
EMIT_NEW_ICONST (cfg, *sp, *((guint16 *)addr));
sp++;
break;
case MONO_TYPE_I2:
EMIT_NEW_ICONST (cfg, *sp, *((gint16 *)addr));
sp++;
break;
case MONO_TYPE_I4:
EMIT_NEW_ICONST (cfg, *sp, *((gint32 *)addr));
sp++;
break;
case MONO_TYPE_U4:
EMIT_NEW_ICONST (cfg, *sp, *((guint32 *)addr));
sp++;
break;
case MONO_TYPE_I:
case MONO_TYPE_U:
case MONO_TYPE_PTR:
case MONO_TYPE_FNPTR:
EMIT_NEW_PCONST (cfg, *sp, *((gpointer *)addr));
mini_type_to_eval_stack_type ((cfg), field->type, *sp);
sp++;
break;
case MONO_TYPE_STRING:
case MONO_TYPE_OBJECT:
case MONO_TYPE_CLASS:
case MONO_TYPE_SZARRAY:
case MONO_TYPE_ARRAY:
if (!mono_gc_is_moving ()) {
EMIT_NEW_PCONST (cfg, *sp, *((gpointer *)addr));
mini_type_to_eval_stack_type ((cfg), field->type, *sp);
sp++;
} else {
is_const = FALSE;
}
break;
case MONO_TYPE_I8:
case MONO_TYPE_U8:
EMIT_NEW_I8CONST (cfg, *sp, *((gint64 *)addr));
sp++;
break;
case MONO_TYPE_R4:
case MONO_TYPE_R8:
case MONO_TYPE_VALUETYPE:
default:
is_const = FALSE;
break;
}
}
if (!is_const) {
MonoInst *load;
EMIT_NEW_LOAD_MEMBASE_TYPE (cfg, load, field->type, ins->dreg, 0);
load->flags |= ins_flag;
*sp++ = load;
}
}
field_access_end:
if ((il_op == MONO_CEE_LDFLD || il_op == MONO_CEE_LDSFLD) && (ins_flag & MONO_INST_VOLATILE)) {
/* Volatile loads have acquire semantics, see 12.6.7 in Ecma 335 */
mini_emit_memory_barrier (cfg, MONO_MEMORY_BARRIER_ACQ);
}
ins_flag = 0;
break;
}
case MONO_CEE_STOBJ:
sp -= 2;
klass = mini_get_class (method, token, generic_context);
CHECK_TYPELOAD (klass);
/* FIXME: should check item at sp [1] is compatible with the type of the store. */
mini_emit_memory_store (cfg, m_class_get_byval_arg (klass), sp [0], sp [1], ins_flag);
ins_flag = 0;
inline_costs += 1;
break;
/*
* Array opcodes
*/
case MONO_CEE_NEWARR: {
MonoInst *len_ins;
const char *data_ptr;
int data_size = 0;
guint32 field_token;
--sp;
klass = mini_get_class (method, token, generic_context);
CHECK_TYPELOAD (klass);
if (m_class_get_byval_arg (klass)->type == MONO_TYPE_VOID)
UNVERIFIED;
context_used = mini_class_check_context_used (cfg, klass);
#ifndef TARGET_S390X
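/* On 32-bit targets the array length may arrive as a 64-bit value; narrow it with an overflow check. */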
if (sp [0]->type == STACK_I8 && TARGET_SIZEOF_VOID_P == 4) {
MONO_INST_NEW (cfg, ins, OP_LCONV_TO_OVF_U4);
ins->sreg1 = sp [0]->dreg;
ins->type = STACK_I4;
ins->dreg = alloc_ireg (cfg);
MONO_ADD_INS (cfg->cbb, ins);
*sp = mono_decompose_opcode (cfg, ins);
}
#else
/* The array allocator expects a 64-bit input, and we cannot rely
on the high bits of a 32-bit result, so we have to extend. */
if (sp [0]->type == STACK_I4 && TARGET_SIZEOF_VOID_P == 8) {
MONO_INST_NEW (cfg, ins, OP_ICONV_TO_I8);
ins->sreg1 = sp [0]->dreg;
ins->type = STACK_I8;
ins->dreg = alloc_ireg (cfg);
MONO_ADD_INS (cfg->cbb, ins);
*sp = mono_decompose_opcode (cfg, ins);
}
#endif
if (context_used) {
MonoInst *args [3];
MonoClass *array_class = mono_class_create_array (klass, 1);
MonoMethod *managed_alloc = mono_gc_get_managed_array_allocator (array_class);
/* FIXME: Use OP_NEWARR and decompose later to help abcrem */
/* vtable */
args [0] = mini_emit_get_rgctx_klass (cfg, context_used,
array_class, MONO_RGCTX_INFO_VTABLE);
/* array len */
args [1] = sp [0];
if (managed_alloc)
ins = mono_emit_method_call (cfg, managed_alloc, args, NULL);
else
ins = mono_emit_jit_icall (cfg, ves_icall_array_new_specific, args);
} else {
/* Decompose later since it is needed by abcrem */
MonoClass *array_type = mono_class_create_array (klass, 1);
mono_class_vtable_checked (array_type, cfg->error);
CHECK_CFG_ERROR;
CHECK_TYPELOAD (array_type);
MONO_INST_NEW (cfg, ins, OP_NEWARR);
ins->dreg = alloc_ireg_ref (cfg);
ins->sreg1 = sp [0]->dreg;
ins->inst_newa_class = klass;
ins->type = STACK_OBJ;
ins->klass = array_type;
MONO_ADD_INS (cfg->cbb, ins);
cfg->flags |= MONO_CFG_NEEDS_DECOMPOSE;
cfg->cbb->needs_decompose = TRUE;
/* Needed so mono_emit_load_get_addr () gets called */
mono_get_got_var (cfg);
}
len_ins = sp [0];
ip += 5;
*sp++ = ins;
inline_costs += 1;
/*
 * We inline/optimize the initialization sequence if possible.
 * We should also allocate the array as not cleared, since we spend as much time
 * clearing to 0 as initializing; for small sizes, open code the memcpy; and
 * ensure the rva field is big enough.
 */
if ((cfg->opt & MONO_OPT_INTRINS) && next_ip < end
&& ip_in_bb (cfg, cfg->cbb, next_ip)
&& (len_ins->opcode == OP_ICONST)
&& (data_ptr = initialize_array_data (cfg, method,
cfg->compile_aot, next_ip, end, klass,
len_ins->inst_c0, &data_size, &field_token,
&il_op, &next_ip))) {
MonoMethod *memcpy_method = mini_get_memcpy_method ();
MonoInst *iargs [3];
int add_reg = alloc_ireg_mp (cfg);
EMIT_NEW_BIALU_IMM (cfg, iargs [0], OP_PADD_IMM, add_reg, ins->dreg, MONO_STRUCT_OFFSET (MonoArray, vector));
if (cfg->compile_aot) {
EMIT_NEW_AOTCONST_TOKEN (cfg, iargs [1], MONO_PATCH_INFO_RVA, m_class_get_image (method->klass), GPOINTER_TO_UINT(field_token), STACK_PTR, NULL);
} else {
EMIT_NEW_PCONST (cfg, iargs [1], (char*)data_ptr);
}
EMIT_NEW_ICONST (cfg, iargs [2], data_size);
mono_emit_method_call (cfg, memcpy_method, iargs, NULL);
}
break;
}
case MONO_CEE_LDLEN:
--sp;
if (sp [0]->type != STACK_OBJ)
UNVERIFIED;
MONO_INST_NEW (cfg, ins, OP_LDLEN);
ins->dreg = alloc_preg (cfg);
ins->sreg1 = sp [0]->dreg;
ins->inst_imm = MONO_STRUCT_OFFSET (MonoArray, max_length);
ins->type = STACK_I4;
/* This flag will be inherited by the decomposition */
ins->flags |= MONO_INST_FAULT | MONO_INST_INVARIANT_LOAD;
MONO_ADD_INS (cfg->cbb, ins);
cfg->flags |= MONO_CFG_NEEDS_DECOMPOSE;
cfg->cbb->needs_decompose = TRUE;
MONO_EMIT_NEW_UNALU (cfg, OP_NOT_NULL, -1, sp [0]->dreg);
*sp++ = ins;
break;
case MONO_CEE_LDELEMA:
sp -= 2;
if (sp [0]->type != STACK_OBJ)
UNVERIFIED;
cfg->flags |= MONO_CFG_HAS_LDELEMA;
klass = mini_get_class (method, token, generic_context);
CHECK_TYPELOAD (klass);
/* We need to make sure that this array is exactly the type it needs
 * to be for correctness. The wrappers are lax with their usage,
 * so we need to ignore them here.
 */
if (!m_class_is_valuetype (klass) && method->wrapper_type == MONO_WRAPPER_NONE && !readonly) {
MonoClass *array_class = mono_class_create_array (klass, 1);
mini_emit_check_array_type (cfg, sp [0], array_class);
CHECK_TYPELOAD (array_class);
}
readonly = FALSE;
ins = mini_emit_ldelema_1_ins (cfg, klass, sp [0], sp [1], TRUE, FALSE);
*sp++ = ins;
break;
case MONO_CEE_LDELEM:
case MONO_CEE_LDELEM_I1:
case MONO_CEE_LDELEM_U1:
case MONO_CEE_LDELEM_I2:
case MONO_CEE_LDELEM_U2:
case MONO_CEE_LDELEM_I4:
case MONO_CEE_LDELEM_U4:
case MONO_CEE_LDELEM_I8:
case MONO_CEE_LDELEM_I:
case MONO_CEE_LDELEM_R4:
case MONO_CEE_LDELEM_R8:
case MONO_CEE_LDELEM_REF: {
MonoInst *addr;
sp -= 2;
if (il_op == MONO_CEE_LDELEM) {
klass = mini_get_class (method, token, generic_context);
CHECK_TYPELOAD (klass);
mono_class_init_internal (klass);
}
else
klass = array_access_to_klass (il_op);
if (sp [0]->type != STACK_OBJ)
UNVERIFIED;
cfg->flags |= MONO_CFG_HAS_LDELEMA;
if (mini_is_gsharedvt_variable_klass (klass)) {
// FIXME-VT: OP_ICONST optimization
addr = mini_emit_ldelema_1_ins (cfg, klass, sp [0], sp [1], TRUE, FALSE);
EMIT_NEW_LOAD_MEMBASE_TYPE (cfg, ins, m_class_get_byval_arg (klass), addr->dreg, 0);
ins->opcode = OP_LOADV_MEMBASE;
} else if (sp [1]->opcode == OP_ICONST) {
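/* Constant index: fold the element address computation into the load and emit an explicit bounds check against max_length. */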
int array_reg = sp [0]->dreg;
int index_reg = sp [1]->dreg;
int offset = (mono_class_array_element_size (klass) * sp [1]->inst_c0) + MONO_STRUCT_OFFSET (MonoArray, vector);
if (SIZEOF_REGISTER == 8 && COMPILE_LLVM (cfg))
MONO_EMIT_NEW_UNALU (cfg, OP_ZEXT_I4, index_reg, index_reg);
MONO_EMIT_BOUNDS_CHECK (cfg, array_reg, MonoArray, max_length, index_reg);
EMIT_NEW_LOAD_MEMBASE_TYPE (cfg, ins, m_class_get_byval_arg (klass), array_reg, offset);
} else {
addr = mini_emit_ldelema_1_ins (cfg, klass, sp [0], sp [1], TRUE, FALSE);
EMIT_NEW_LOAD_MEMBASE_TYPE (cfg, ins, m_class_get_byval_arg (klass), addr->dreg, 0);
}
*sp++ = ins;
break;
}
case MONO_CEE_STELEM_I:
case MONO_CEE_STELEM_I1:
case MONO_CEE_STELEM_I2:
case MONO_CEE_STELEM_I4:
case MONO_CEE_STELEM_I8:
case MONO_CEE_STELEM_R4:
case MONO_CEE_STELEM_R8:
case MONO_CEE_STELEM_REF:
case MONO_CEE_STELEM: {
sp -= 3;
cfg->flags |= MONO_CFG_HAS_LDELEMA;
if (il_op == MONO_CEE_STELEM) {
klass = mini_get_class (method, token, generic_context);
CHECK_TYPELOAD (klass);
mono_class_init_internal (klass);
}
else
klass = array_access_to_klass (il_op);
if (sp [0]->type != STACK_OBJ)
UNVERIFIED;
sp [2] = convert_value (cfg, m_class_get_byval_arg (klass), sp [2]);
mini_emit_array_store (cfg, klass, sp, TRUE);
inline_costs += 1;
break;
}
case MONO_CEE_CKFINITE: {
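/* ckfinite throws an ArithmeticException if the value on the stack is NaN or infinite. */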
--sp;
if (cfg->llvm_only) {
MonoInst *iargs [1];
iargs [0] = sp [0];
*sp++ = mono_emit_jit_icall (cfg, mono_ckfinite, iargs);
} else {
sp [0] = convert_value (cfg, m_class_get_byval_arg (mono_defaults.double_class), sp [0]);
MONO_INST_NEW (cfg, ins, OP_CKFINITE);
ins->sreg1 = sp [0]->dreg;
ins->dreg = alloc_freg (cfg);
ins->type = STACK_R8;
MONO_ADD_INS (cfg->cbb, ins);
*sp++ = mono_decompose_opcode (cfg, ins);
}
break;
}
case MONO_CEE_REFANYVAL: {
MonoInst *src_var, *src;
int klass_reg = alloc_preg (cfg);
int dreg = alloc_preg (cfg);
GSHAREDVT_FAILURE (il_op);
MONO_INST_NEW (cfg, ins, il_op);
--sp;
klass = mini_get_class (method, token, generic_context);
CHECK_TYPELOAD (klass);
context_used = mini_class_check_context_used (cfg, klass);
// FIXME:
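/* Load the klass stored in the MonoTypedRef and check it against the expected class before extracting the value address. */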
src_var = get_vreg_to_inst (cfg, sp [0]->dreg);
if (!src_var)
src_var = mono_compile_create_var_for_vreg (cfg, m_class_get_byval_arg (mono_defaults.typed_reference_class), OP_LOCAL, sp [0]->dreg);
EMIT_NEW_VARLOADA (cfg, src, src_var, src_var->inst_vtype);
MONO_EMIT_NEW_LOAD_MEMBASE (cfg, klass_reg, src->dreg, MONO_STRUCT_OFFSET (MonoTypedRef, klass));
if (context_used) {
MonoInst *klass_ins;
klass_ins = mini_emit_get_rgctx_klass (cfg, context_used,
klass, MONO_RGCTX_INFO_KLASS);
// FIXME:
MONO_EMIT_NEW_BIALU (cfg, OP_COMPARE, -1, klass_reg, klass_ins->dreg);
MONO_EMIT_NEW_COND_EXC (cfg, NE_UN, "InvalidCastException");
} else {
mini_emit_class_check (cfg, klass_reg, klass);
}
EMIT_NEW_LOAD_MEMBASE (cfg, ins, OP_LOAD_MEMBASE, dreg, src->dreg, MONO_STRUCT_OFFSET (MonoTypedRef, value));
ins->type = STACK_MP;
ins->klass = klass;
*sp++ = ins;
break;
}
case MONO_CEE_MKREFANY: {
MonoInst *loc, *addr;
GSHAREDVT_FAILURE (il_op);
MONO_INST_NEW (cfg, ins, il_op);
--sp;
klass = mini_get_class (method, token, generic_context);
CHECK_TYPELOAD (klass);
context_used = mini_class_check_context_used (cfg, klass);
loc = mono_compile_create_var (cfg, m_class_get_byval_arg (mono_defaults.typed_reference_class), OP_LOCAL);
EMIT_NEW_TEMPLOADA (cfg, addr, loc->inst_c0);
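/* Fill in the three MonoTypedRef fields: klass, type and the address of the value. */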
MonoInst *const_ins = mini_emit_get_rgctx_klass (cfg, context_used, klass, MONO_RGCTX_INFO_KLASS);
int type_reg = alloc_preg (cfg);
MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STOREP_MEMBASE_REG, addr->dreg, MONO_STRUCT_OFFSET (MonoTypedRef, klass), const_ins->dreg);
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_ADD_IMM, type_reg, const_ins->dreg, m_class_offsetof_byval_arg ());
MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STOREP_MEMBASE_REG, addr->dreg, MONO_STRUCT_OFFSET (MonoTypedRef, type), type_reg);
MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STOREP_MEMBASE_REG, addr->dreg, MONO_STRUCT_OFFSET (MonoTypedRef, value), sp [0]->dreg);
EMIT_NEW_TEMPLOAD (cfg, ins, loc->inst_c0);
ins->type = STACK_VTYPE;
ins->klass = mono_defaults.typed_reference_class;
*sp++ = ins;
break;
}
case MONO_CEE_LDTOKEN: {
gpointer handle;
MonoClass *handle_class;
if (method->wrapper_type == MONO_WRAPPER_DYNAMIC_METHOD ||
method->wrapper_type == MONO_WRAPPER_SYNCHRONIZED) {
handle = mono_method_get_wrapper_data (method, n);
handle_class = (MonoClass *)mono_method_get_wrapper_data (method, n + 1);
if (handle_class == mono_defaults.typehandle_class)
handle = m_class_get_byval_arg ((MonoClass*)handle);
}
else {
handle = mono_ldtoken_checked (image, n, &handle_class, generic_context, cfg->error);
CHECK_CFG_ERROR;
}
if (!handle)
LOAD_ERROR;
mono_class_init_internal (handle_class);
if (cfg->gshared) {
if (mono_metadata_token_table (n) == MONO_TABLE_TYPEDEF ||
mono_metadata_token_table (n) == MONO_TABLE_TYPEREF) {
/* This case handles ldtoken of an open type, like for typeof(Gen<>). */
context_used = 0;
} else if (handle_class == mono_defaults.typehandle_class) {
context_used = mini_class_check_context_used (cfg, mono_class_from_mono_type_internal ((MonoType *)handle));
} else if (handle_class == mono_defaults.fieldhandle_class)
context_used = mini_class_check_context_used (cfg, m_field_get_parent (((MonoClassField*)handle)));
else if (handle_class == mono_defaults.methodhandle_class)
context_used = mini_method_check_context_used (cfg, (MonoMethod *)handle);
else
g_assert_not_reached ();
}
{
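/* Optimize the common ldtoken + call Type::GetTypeFromHandle () pair into a direct load of the System.Type object. */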
if ((next_ip + 4 < end) && ip_in_bb (cfg, cfg->cbb, next_ip) &&
((next_ip [0] == CEE_CALL) || (next_ip [0] == CEE_CALLVIRT)) &&
(cmethod = mini_get_method (cfg, method, read32 (next_ip + 1), NULL, generic_context)) &&
(cmethod->klass == mono_defaults.systemtype_class) &&
(strcmp (cmethod->name, "GetTypeFromHandle") == 0)) {
MonoClass *tclass = mono_class_from_mono_type_internal ((MonoType *)handle);
mono_class_init_internal (tclass);
// Optimize to true/false if next instruction is `call instance bool Type::get_IsValueType()`
guchar *is_vt_ip;
guint32 is_vt_token;
if ((is_vt_ip = il_read_call (next_ip + 5, end, &is_vt_token)) && ip_in_bb (cfg, cfg->cbb, is_vt_ip)) {
MonoMethod *is_vt_method = mini_get_method (cfg, method, is_vt_token, NULL, generic_context);
if (is_vt_method->klass == mono_defaults.systemtype_class &&
!mini_is_gsharedvt_variable_klass (tclass) &&
!mono_class_is_open_constructed_type (m_class_get_byval_arg (tclass)) &&
!strcmp ("get_IsValueType", is_vt_method->name)) {
next_ip = is_vt_ip;
EMIT_NEW_ICONST (cfg, ins, m_class_is_valuetype (tclass) ? 1 : 0);
ins->type = STACK_I4;
*sp++ = ins;
break;
}
}
if (context_used) {
MONO_INST_NEW (cfg, ins, OP_RTTYPE);
ins->dreg = alloc_ireg_ref (cfg);
ins->inst_p0 = tclass;
ins->type = STACK_OBJ;
MONO_ADD_INS (cfg->cbb, ins);
cfg->flags |= MONO_CFG_NEEDS_DECOMPOSE;
cfg->cbb->needs_decompose = TRUE;
} else if (cfg->compile_aot) {
if (method->wrapper_type) {
error_init (error); //got to do it since there are multiple conditionals below
if (mono_class_get_checked (m_class_get_image (tclass), m_class_get_type_token (tclass), error) == tclass && !generic_context) {
/* Special case for static synchronized wrappers */
EMIT_NEW_TYPE_FROM_HANDLE_CONST (cfg, ins, m_class_get_image (tclass), m_class_get_type_token (tclass), generic_context);
} else {
mono_error_cleanup (error); /* FIXME don't swallow the error */
/* FIXME: n is not a normal token */
DISABLE_AOT (cfg);
EMIT_NEW_PCONST (cfg, ins, NULL);
}
} else {
EMIT_NEW_TYPE_FROM_HANDLE_CONST (cfg, ins, image, n, generic_context);
}
} else {
MonoReflectionType *rt = mono_type_get_object_checked ((MonoType *)handle, cfg->error);
CHECK_CFG_ERROR;
EMIT_NEW_PCONST (cfg, ins, rt);
}
ins->type = STACK_OBJ;
ins->klass = mono_defaults.runtimetype_class;
il_op = (MonoOpcodeEnum)next_ip [0];
next_ip += 5;
} else {
MonoInst *addr, *vtvar;
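/* Generic case: store the handle into a valuetype local typed as the handle class and push it back as a vtype. */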
vtvar = mono_compile_create_var (cfg, m_class_get_byval_arg (handle_class), OP_LOCAL);
if (context_used) {
if (handle_class == mono_defaults.typehandle_class) {
ins = mini_emit_get_rgctx_klass (cfg, context_used,
mono_class_from_mono_type_internal ((MonoType *)handle),
MONO_RGCTX_INFO_TYPE);
} else if (handle_class == mono_defaults.methodhandle_class) {
ins = emit_get_rgctx_method (cfg, context_used,
(MonoMethod *)handle, MONO_RGCTX_INFO_METHOD);
} else if (handle_class == mono_defaults.fieldhandle_class) {
ins = emit_get_rgctx_field (cfg, context_used,
(MonoClassField *)handle, MONO_RGCTX_INFO_CLASS_FIELD);
} else {
g_assert_not_reached ();
}
} else if (cfg->compile_aot) {
EMIT_NEW_LDTOKENCONST (cfg, ins, image, n, generic_context);
} else {
EMIT_NEW_PCONST (cfg, ins, handle);
}
EMIT_NEW_TEMPLOADA (cfg, addr, vtvar->inst_c0);
MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STORE_MEMBASE_REG, addr->dreg, 0, ins->dreg);
EMIT_NEW_TEMPLOAD (cfg, ins, vtvar->inst_c0);
}
}
*sp++ = ins;
break;
}
case MONO_CEE_THROW:
if (sp [-1]->type != STACK_OBJ)
UNVERIFIED;
MONO_INST_NEW (cfg, ins, OP_THROW);
--sp;
ins->sreg1 = sp [0]->dreg;
cfg->cbb->out_of_line = TRUE;
MONO_ADD_INS (cfg->cbb, ins);
MONO_INST_NEW (cfg, ins, OP_NOT_REACHED);
MONO_ADD_INS (cfg->cbb, ins);
sp = stack_start;
link_bblock (cfg, cfg->cbb, end_bblock);
start_new_bblock = 1;
/* This can complicate code generation for llvm since the return value might not be defined */
if (COMPILE_LLVM (cfg))
INLINE_FAILURE ("throw");
break;
case MONO_CEE_ENDFINALLY:
if (!ip_in_finally_clause (cfg, ip - header->code))
UNVERIFIED;
/* mono_save_seq_point_info () depends on this */
if (sp != stack_start)
emit_seq_point (cfg, method, ip, FALSE, FALSE);
MONO_INST_NEW (cfg, ins, OP_ENDFINALLY);
MONO_ADD_INS (cfg->cbb, ins);
start_new_bblock = 1;
ins_has_side_effect = FALSE;
/*
* Control will leave the method so empty the stack, otherwise
* the next basic block will start with a nonempty stack.
*/
while (sp != stack_start) {
sp--;
}
break;
case MONO_CEE_LEAVE:
case MONO_CEE_LEAVE_S: {
GList *handlers;
/* empty the stack */
g_assert (sp >= stack_start);
sp = stack_start;
/*
* If this leave statement is in a catch block, check for a
* pending exception, and rethrow it if necessary.
* We avoid doing this in runtime invoke wrappers, since those are called
* by native code which expects the wrapper to catch all exceptions.
*/
for (i = 0; i < header->num_clauses; ++i) {
MonoExceptionClause *clause = &header->clauses [i];
/*
* Use <= in the final comparison to handle clauses with multiple
* leave statements, like in bug #78024.
* The ordering of the exception clauses guarantees that we find the
* innermost clause.
*/
if (MONO_OFFSET_IN_HANDLER (clause, ip - header->code) && (clause->flags == MONO_EXCEPTION_CLAUSE_NONE) && (ip - header->code + ((il_op == MONO_CEE_LEAVE) ? 5 : 2)) <= (clause->handler_offset + clause->handler_len) && method->wrapper_type != MONO_WRAPPER_RUNTIME_INVOKE) {
MonoInst *exc_ins;
MonoBasicBlock *dont_throw;
/*
MonoInst *load;
NEW_TEMPLOAD (cfg, load, mono_find_exvar_for_offset (cfg, clause->handler_offset)->inst_c0);
*/
exc_ins = mono_emit_jit_icall (cfg, mono_thread_get_undeniable_exception, NULL);
NEW_BBLOCK (cfg, dont_throw);
/*
* Currently, we always rethrow the abort exception, despite the
* fact that this is not correct. See thread6.cs for an example.
* But propagating the abort exception is more important than
* getting the semantics right.
*/
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, exc_ins->dreg, 0);
MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_PBEQ, dont_throw);
MONO_EMIT_NEW_UNALU (cfg, OP_THROW, -1, exc_ins->dreg);
MONO_START_BB (cfg, dont_throw);
}
}
#ifdef ENABLE_LLVM
cfg->cbb->try_end = (intptr_t)(ip - header->code);
#endif
if ((handlers = mono_find_leave_clauses (cfg, ip, target))) {
GList *tmp;
/*
* For each finally clause that we exit we need to invoke the finally block.
* After each invocation we need to add try holes for all the clauses that
* we already exited.
*/
for (tmp = handlers; tmp; tmp = tmp->next) {
MonoLeaveClause *leave = (MonoLeaveClause *) tmp->data;
MonoExceptionClause *clause = leave->clause;
if (clause->flags != MONO_EXCEPTION_CLAUSE_FINALLY)
continue;
MonoInst *abort_exc = (MonoInst *)mono_find_exvar_for_offset (cfg, clause->handler_offset);
MonoBasicBlock *dont_throw;
/*
* Emit instrumentation code before linking the basic blocks below as this
* will alter cfg->cbb.
*/
mini_profiler_emit_call_finally (cfg, header, ip, leave->index, clause);
tblock = cfg->cil_offset_to_bb [clause->handler_offset];
g_assert (tblock);
link_bblock (cfg, cfg->cbb, tblock);
MONO_EMIT_NEW_PCONST (cfg, abort_exc->dreg, 0);
MONO_INST_NEW (cfg, ins, OP_CALL_HANDLER);
ins->inst_target_bb = tblock;
ins->inst_eh_blocks = tmp;
MONO_ADD_INS (cfg->cbb, ins);
cfg->cbb->has_call_handler = 1;
/* Throw exception if exvar is set */
/* FIXME Do we need this for calls from catch/filter ? */
NEW_BBLOCK (cfg, dont_throw);
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, abort_exc->dreg, 0);
MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_PBEQ, dont_throw);
mono_emit_jit_icall (cfg, ves_icall_thread_finish_async_abort, NULL);
cfg->cbb->clause_holes = tmp;
MONO_START_BB (cfg, dont_throw);
cfg->cbb->clause_holes = tmp;
if (COMPILE_LLVM (cfg)) {
MonoBasicBlock *target_bb;
/*
* Link the finally bblock with the target, since it will
* conceptually branch there.
*/
GET_BBLOCK (cfg, tblock, cfg->cil_start + clause->handler_offset + clause->handler_len - 1);
GET_BBLOCK (cfg, target_bb, target);
link_bblock (cfg, tblock, target_bb);
}
}
}
MONO_INST_NEW (cfg, ins, OP_BR);
MONO_ADD_INS (cfg->cbb, ins);
GET_BBLOCK (cfg, tblock, target);
link_bblock (cfg, cfg->cbb, tblock);
ins->inst_target_bb = tblock;
start_new_bblock = 1;
break;
}
/*
* Mono specific opcodes
*/
case MONO_CEE_MONO_ICALL: {
g_assert (method->wrapper_type != MONO_WRAPPER_NONE);
const MonoJitICallId jit_icall_id = (MonoJitICallId)token;
MonoJitICallInfo * const info = mono_find_jit_icall_info (jit_icall_id);
CHECK_STACK (info->sig->param_count);
sp -= info->sig->param_count;
if (token == MONO_JIT_ICALL_mono_threads_attach_coop) {
MonoInst *addr;
MonoBasicBlock *next_bb;
if (cfg->compile_aot) {
/*
* This is called on unattached threads, so it cannot go through the trampoline
* infrastructure. Use an indirect call through a got slot initialized at load time
* instead.
*/
EMIT_NEW_AOTCONST (cfg, addr, MONO_PATCH_INFO_JIT_ICALL_ADDR_NOCALL, GUINT_TO_POINTER (jit_icall_id));
ins = mini_emit_calli (cfg, info->sig, sp, addr, NULL, NULL);
} else {
ins = mono_emit_jit_icall_id (cfg, jit_icall_id, sp);
}
/*
* Parts of the initlocals code need to come after this, since they might call methods like memset.
* Also profiling needs to be after attach.
*/
init_localsbb2 = cfg->cbb;
NEW_BBLOCK (cfg, next_bb);
MONO_START_BB (cfg, next_bb);
} else {
if (token == MONO_JIT_ICALL_mono_threads_detach_coop) {
/* can't emit profiling code after a detach, so emit it now */
mini_profiler_emit_leave (cfg, NULL);
detached_before_ret = TRUE;
}
ins = mono_emit_jit_icall_id (cfg, jit_icall_id, sp);
}
if (!MONO_TYPE_IS_VOID (info->sig->ret))
*sp++ = ins;
inline_costs += CALL_COST * MIN(10, num_calls++);
break;
}
MonoJumpInfoType ldptr_type;
case MONO_CEE_MONO_LDPTR_CARD_TABLE:
ldptr_type = MONO_PATCH_INFO_GC_CARD_TABLE_ADDR;
goto mono_ldptr;
case MONO_CEE_MONO_LDPTR_NURSERY_START:
ldptr_type = MONO_PATCH_INFO_GC_NURSERY_START;
goto mono_ldptr;
case MONO_CEE_MONO_LDPTR_NURSERY_BITS:
ldptr_type = MONO_PATCH_INFO_GC_NURSERY_BITS;
goto mono_ldptr;
case MONO_CEE_MONO_LDPTR_INT_REQ_FLAG:
ldptr_type = MONO_PATCH_INFO_INTERRUPTION_REQUEST_FLAG;
goto mono_ldptr;
case MONO_CEE_MONO_LDPTR_PROFILER_ALLOCATION_COUNT:
ldptr_type = MONO_PATCH_INFO_PROFILER_ALLOCATION_COUNT;
mono_ldptr:
g_assert (method->wrapper_type != MONO_WRAPPER_NONE);
ins = mini_emit_runtime_constant (cfg, ldptr_type, NULL);
*sp++ = ins;
inline_costs += CALL_COST * MIN(10, num_calls++);
break;
case MONO_CEE_MONO_LDPTR: {
gpointer ptr;
g_assert (method->wrapper_type != MONO_WRAPPER_NONE);
ptr = mono_method_get_wrapper_data (method, token);
EMIT_NEW_PCONST (cfg, ins, ptr);
*sp++ = ins;
inline_costs += CALL_COST * MIN(10, num_calls++);
/* Can't embed random pointers into AOT code */
DISABLE_AOT (cfg);
break;
}
case MONO_CEE_MONO_JIT_ICALL_ADDR:
g_assert (method->wrapper_type != MONO_WRAPPER_NONE);
EMIT_NEW_JIT_ICALL_ADDRCONST (cfg, ins, GUINT_TO_POINTER (token));
*sp++ = ins;
inline_costs += CALL_COST * MIN(10, num_calls++);
break;
case MONO_CEE_MONO_ICALL_ADDR: {
MonoMethod *cmethod;
gpointer ptr;
g_assert (method->wrapper_type != MONO_WRAPPER_NONE);
cmethod = (MonoMethod *)mono_method_get_wrapper_data (method, token);
if (cfg->compile_aot) {
if (cfg->direct_pinvoke && ip + 6 < end && (ip [6] == CEE_POP)) {
/*
* This is generated by emit_native_wrapper () to resolve the pinvoke address
* before the call; it's not needed when using direct pinvoke.
* This is not an optimization, but it's used to avoid looking up pinvokes
* on platforms which don't support dlopen ().
*/
EMIT_NEW_PCONST (cfg, ins, NULL);
} else {
EMIT_NEW_AOTCONST (cfg, ins, MONO_PATCH_INFO_ICALL_ADDR, cmethod);
}
} else {
ptr = mono_lookup_internal_call (cmethod);
g_assert (ptr);
EMIT_NEW_PCONST (cfg, ins, ptr);
}
*sp++ = ins;
break;
}
case MONO_CEE_MONO_VTADDR: {
g_assert (method->wrapper_type != MONO_WRAPPER_NONE);
MonoInst *src_var, *src;
--sp;
// FIXME:
src_var = get_vreg_to_inst (cfg, sp [0]->dreg);
EMIT_NEW_VARLOADA ((cfg), (src), src_var, src_var->inst_vtype);
*sp++ = src;
break;
}
case MONO_CEE_MONO_NEWOBJ: {
g_assert (method->wrapper_type != MONO_WRAPPER_NONE);
MonoInst *iargs [2];
klass = (MonoClass *)mono_method_get_wrapper_data (method, token);
mono_class_init_internal (klass);
NEW_CLASSCONST (cfg, iargs [0], klass);
MONO_ADD_INS (cfg->cbb, iargs [0]);
*sp++ = mono_emit_jit_icall (cfg, ves_icall_object_new, iargs);
inline_costs += CALL_COST * MIN(10, num_calls++);
break;
}
case MONO_CEE_MONO_OBJADDR:
g_assert (method->wrapper_type != MONO_WRAPPER_NONE);
--sp;
MONO_INST_NEW (cfg, ins, OP_MOVE);
ins->dreg = alloc_ireg_mp (cfg);
ins->sreg1 = sp [0]->dreg;
ins->type = STACK_MP;
MONO_ADD_INS (cfg->cbb, ins);
*sp++ = ins;
break;
case MONO_CEE_MONO_LDNATIVEOBJ:
/*
* Similar to LDOBJ, but instead load the unmanaged
* representation of the vtype to the stack.
*/
g_assert (method->wrapper_type != MONO_WRAPPER_NONE);
--sp;
klass = (MonoClass *)mono_method_get_wrapper_data (method, token);
g_assert (m_class_is_valuetype (klass));
mono_class_init_internal (klass);
{
MonoInst *src, *dest, *temp;
src = sp [0];
temp = mono_compile_create_var (cfg, m_class_get_byval_arg (klass), OP_LOCAL);
temp->backend.is_pinvoke = 1;
EMIT_NEW_TEMPLOADA (cfg, dest, temp->inst_c0);
mini_emit_memory_copy (cfg, dest, src, klass, TRUE, 0);
EMIT_NEW_TEMPLOAD (cfg, dest, temp->inst_c0);
dest->type = STACK_VTYPE;
dest->klass = klass;
*sp ++ = dest;
}
break;
case MONO_CEE_MONO_RETOBJ: {
/*
* Same as RET, but return the native representation of a vtype
* to the caller.
*/
g_assert (method->wrapper_type != MONO_WRAPPER_NONE);
g_assert (cfg->ret);
g_assert (mono_method_signature_internal (method)->pinvoke);
--sp;
klass = (MonoClass *)mono_method_get_wrapper_data (method, token);
if (!cfg->vret_addr) {
g_assert (cfg->ret_var_is_local);
EMIT_NEW_VARLOADA (cfg, ins, cfg->ret, cfg->ret->inst_vtype);
} else {
EMIT_NEW_RETLOADA (cfg, ins);
}
mini_emit_memory_copy (cfg, ins, sp [0], klass, TRUE, 0);
if (sp != stack_start)
UNVERIFIED;
if (!detached_before_ret)
mini_profiler_emit_leave (cfg, sp [0]);
MONO_INST_NEW (cfg, ins, OP_BR);
ins->inst_target_bb = end_bblock;
MONO_ADD_INS (cfg->cbb, ins);
link_bblock (cfg, cfg->cbb, end_bblock);
start_new_bblock = 1;
break;
}
case MONO_CEE_MONO_SAVE_LMF:
case MONO_CEE_MONO_RESTORE_LMF:
g_assert (method->wrapper_type != MONO_WRAPPER_NONE);
break;
case MONO_CEE_MONO_CLASSCONST:
g_assert (method->wrapper_type != MONO_WRAPPER_NONE);
EMIT_NEW_CLASSCONST (cfg, ins, mono_method_get_wrapper_data (method, token));
*sp++ = ins;
inline_costs += CALL_COST * MIN(10, num_calls++);
break;
case MONO_CEE_MONO_METHODCONST:
g_assert (method->wrapper_type != MONO_WRAPPER_NONE);
EMIT_NEW_METHODCONST (cfg, ins, mono_method_get_wrapper_data (method, token));
*sp++ = ins;
break;
case MONO_CEE_MONO_PINVOKE_ADDR_CACHE: {
g_assert (method->wrapper_type != MONO_WRAPPER_NONE);
MonoMethod *pinvoke_method = (MonoMethod*)mono_method_get_wrapper_data (method, token);
/* This is a memory slot used by the wrapper */
if (cfg->compile_aot) {
EMIT_NEW_AOTCONST (cfg, ins, MONO_PATCH_INFO_METHOD_PINVOKE_ADDR_CACHE, pinvoke_method);
} else {
gpointer addr = mono_mem_manager_alloc0 (cfg->mem_manager, sizeof (gpointer));
EMIT_NEW_PCONST (cfg, ins, addr);
}
*sp++ = ins;
break;
}
case MONO_CEE_MONO_NOT_TAKEN:
g_assert (method->wrapper_type != MONO_WRAPPER_NONE);
cfg->cbb->out_of_line = TRUE;
break;
case MONO_CEE_MONO_TLS: {
MonoTlsKey key;
g_assert (method->wrapper_type != MONO_WRAPPER_NONE);
key = (MonoTlsKey)n;
g_assert (key < TLS_KEY_NUM);
ins = mono_create_tls_get (cfg, key);
g_assert (ins);
ins->type = STACK_PTR;
*sp++ = ins;
break;
}
case MONO_CEE_MONO_DYN_CALL: {
MonoCallInst *call;
/* It would be easier to call a trampoline, but that would put an
* extra frame on the stack, confusing exception handling. So
* implement it inline using an opcode for now.
*/
g_assert (method->wrapper_type != MONO_WRAPPER_NONE);
if (!cfg->dyn_call_var) {
cfg->dyn_call_var = mono_compile_create_var (cfg, mono_get_int_type (), OP_LOCAL);
/* prevent it from being register allocated */
cfg->dyn_call_var->flags |= MONO_INST_VOLATILE;
}
/* Has to use a call inst since local regalloc expects it */
MONO_INST_NEW_CALL (cfg, call, OP_DYN_CALL);
ins = (MonoInst*)call;
sp -= 2;
ins->sreg1 = sp [0]->dreg;
ins->sreg2 = sp [1]->dreg;
MONO_ADD_INS (cfg->cbb, ins);
cfg->param_area = MAX (cfg->param_area, cfg->backend->dyn_call_param_area);
/* OP_DYN_CALL might need to allocate a dynamically sized param area */
cfg->flags |= MONO_CFG_HAS_ALLOCA;
inline_costs += CALL_COST * MIN(10, num_calls++);
break;
}
case MONO_CEE_MONO_MEMORY_BARRIER: {
g_assert (method->wrapper_type != MONO_WRAPPER_NONE);
mini_emit_memory_barrier (cfg, (int)n);
break;
}
case MONO_CEE_MONO_ATOMIC_STORE_I4: {
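/* Emit an atomic 32-bit store; the operand encodes the required memory barrier kind. */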
g_assert (method->wrapper_type != MONO_WRAPPER_NONE);
g_assert (mono_arch_opcode_supported (OP_ATOMIC_STORE_I4));
sp -= 2;
MONO_INST_NEW (cfg, ins, OP_ATOMIC_STORE_I4);
ins->dreg = sp [0]->dreg;
ins->sreg1 = sp [1]->dreg;
ins->backend.memory_barrier_kind = (int)n;
MONO_ADD_INS (cfg->cbb, ins);
break;
}
case MONO_CEE_MONO_LD_DELEGATE_METHOD_PTR: {
CHECK_STACK (1);
--sp;
dreg = alloc_preg (cfg);
EMIT_NEW_LOAD_MEMBASE (cfg, ins, OP_LOAD_MEMBASE, dreg, sp [0]->dreg, MONO_STRUCT_OFFSET (MonoDelegate, method_ptr));
*sp++ = ins;
break;
}
case MONO_CEE_MONO_CALLI_EXTRA_ARG: {
MonoInst *addr;
MonoMethodSignature *fsig;
MonoInst *arg;
/*
* This is the same as CEE_CALLI, but passes an additional argument
* to the called method in llvmonly mode.
* This is only used by delegate invoke wrappers to call the
* actual delegate method.
*/
g_assert (method->wrapper_type == MONO_WRAPPER_DELEGATE_INVOKE);
ins = NULL;
cmethod = NULL;
CHECK_STACK (1);
--sp;
addr = *sp;
fsig = mini_get_signature (method, token, generic_context, cfg->error);
CHECK_CFG_ERROR;
if (cfg->llvm_only)
cfg->signatures = g_slist_prepend_mempool (cfg->mempool, cfg->signatures, fsig);
n = fsig->param_count + fsig->hasthis + 1;
CHECK_STACK (n);
sp -= n;
arg = sp [n - 1];
if (cfg->llvm_only) {
/*
* The lowest bit of 'arg' determines whether the callee uses the gsharedvt
* cconv. This is set by mono_init_delegate ().
*/
if (cfg->gsharedvt && mini_is_gsharedvt_variable_signature (fsig)) {
MonoInst *callee = addr;
MonoInst *call, *localloc_ins;
MonoBasicBlock *is_gsharedvt_bb, *end_bb;
int low_bit_reg = alloc_preg (cfg);
NEW_BBLOCK (cfg, is_gsharedvt_bb);
NEW_BBLOCK (cfg, end_bb);
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_PAND_IMM, low_bit_reg, arg->dreg, 1);
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, low_bit_reg, 0);
MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_PBNE_UN, is_gsharedvt_bb);
/* Normal case: callee uses a normal cconv, have to add an out wrapper */
addr = emit_get_rgctx_sig (cfg, context_used,
fsig, MONO_RGCTX_INFO_SIG_GSHAREDVT_OUT_TRAMPOLINE_CALLI);
/*
* ADDR points to a gsharedvt-out wrapper, have to pass <callee, arg> as an extra arg.
*/
MONO_INST_NEW (cfg, ins, OP_LOCALLOC_IMM);
ins->dreg = alloc_preg (cfg);
ins->inst_imm = 2 * TARGET_SIZEOF_VOID_P;
MONO_ADD_INS (cfg->cbb, ins);
localloc_ins = ins;
cfg->flags |= MONO_CFG_HAS_ALLOCA;
MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STORE_MEMBASE_REG, localloc_ins->dreg, 0, callee->dreg);
MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STORE_MEMBASE_REG, localloc_ins->dreg, TARGET_SIZEOF_VOID_P, arg->dreg);
call = mini_emit_extra_arg_calli (cfg, fsig, sp, localloc_ins->dreg, addr);
MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_BR, end_bb);
/* Gsharedvt case: callee uses a gsharedvt cconv, no conversion is needed */
MONO_START_BB (cfg, is_gsharedvt_bb);
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_PXOR_IMM, arg->dreg, arg->dreg, 1);
ins = mini_emit_extra_arg_calli (cfg, fsig, sp, arg->dreg, callee);
ins->dreg = call->dreg;
MONO_START_BB (cfg, end_bb);
} else {
/* Caller uses a normal calling conv */
MonoInst *callee = addr;
MonoInst *call, *localloc_ins;
MonoBasicBlock *is_gsharedvt_bb, *end_bb;
int low_bit_reg = alloc_preg (cfg);
NEW_BBLOCK (cfg, is_gsharedvt_bb);
NEW_BBLOCK (cfg, end_bb);
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_PAND_IMM, low_bit_reg, arg->dreg, 1);
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, low_bit_reg, 0);
MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_PBNE_UN, is_gsharedvt_bb);
/* Normal case: callee uses a normal cconv, no conversion is needed */
call = mini_emit_extra_arg_calli (cfg, fsig, sp, arg->dreg, callee);
MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_BR, end_bb);
/* Gsharedvt case: callee uses a gsharedvt cconv, have to add an in wrapper */
MONO_START_BB (cfg, is_gsharedvt_bb);
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_PXOR_IMM, arg->dreg, arg->dreg, 1);
NEW_AOTCONST (cfg, addr, MONO_PATCH_INFO_GSHAREDVT_IN_WRAPPER, fsig);
MONO_ADD_INS (cfg->cbb, addr);
/*
* ADDR points to a gsharedvt-in wrapper, have to pass <callee, arg> as an extra arg.
*/
MONO_INST_NEW (cfg, ins, OP_LOCALLOC_IMM);
ins->dreg = alloc_preg (cfg);
ins->inst_imm = 2 * TARGET_SIZEOF_VOID_P;
MONO_ADD_INS (cfg->cbb, ins);
localloc_ins = ins;
cfg->flags |= MONO_CFG_HAS_ALLOCA;
MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STORE_MEMBASE_REG, localloc_ins->dreg, 0, callee->dreg);
MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STORE_MEMBASE_REG, localloc_ins->dreg, TARGET_SIZEOF_VOID_P, arg->dreg);
ins = mini_emit_extra_arg_calli (cfg, fsig, sp, localloc_ins->dreg, addr);
ins->dreg = call->dreg;
MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_BR, end_bb);
MONO_START_BB (cfg, end_bb);
}
} else {
/* Same as CEE_CALLI */
if (cfg->gsharedvt && mini_is_gsharedvt_signature (fsig)) {
/*
* We pass the address to the gsharedvt trampoline in the rgctx reg
*/
MonoInst *callee = addr;
addr = emit_get_rgctx_sig (cfg, context_used,
fsig, MONO_RGCTX_INFO_SIG_GSHAREDVT_OUT_TRAMPOLINE_CALLI);
ins = (MonoInst*)mini_emit_calli (cfg, fsig, sp, addr, NULL, callee);
} else {
ins = (MonoInst*)mini_emit_calli (cfg, fsig, sp, addr, NULL, NULL);
}
}
if (!MONO_TYPE_IS_VOID (fsig->ret))
*sp++ = mono_emit_widen_call_res (cfg, ins, fsig);
CHECK_CFG_EXCEPTION;
ins_flag = 0;
constrained_class = NULL;
break;
}
case MONO_CEE_MONO_LDDOMAIN: {
MonoDomain *domain = mono_get_root_domain ();
g_assert (method->wrapper_type != MONO_WRAPPER_NONE);
EMIT_NEW_PCONST (cfg, ins, cfg->compile_aot ? NULL : domain);
*sp++ = ins;
break;
}
case MONO_CEE_MONO_SAVE_LAST_ERROR:
g_assert (method->wrapper_type != MONO_WRAPPER_NONE);
// Just an IL prefix; it sets this flag, which is picked up by call instructions.
save_last_error = TRUE;
break;
case MONO_CEE_MONO_GET_RGCTX_ARG:
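/* Push the method's runtime generic context argument as a pointer value on the stack. */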
g_assert (method->wrapper_type != MONO_WRAPPER_NONE);
mono_create_rgctx_var (cfg);
MONO_INST_NEW (cfg, ins, OP_MOVE);
ins->dreg = alloc_dreg (cfg, STACK_PTR);
ins->sreg1 = cfg->rgctx_var->dreg;
ins->type = STACK_PTR;
MONO_ADD_INS (cfg->cbb, ins);
*sp++ = ins;
break;
case MONO_CEE_MONO_GET_SP: {
/* Used by COOP only, so this is good enough */
MonoInst *var = mono_compile_create_var (cfg, mono_get_int_type (), OP_LOCAL);
EMIT_NEW_VARLOADA (cfg, ins, var, NULL);
*sp++ = ins;
break;
}
case MONO_CEE_MONO_REMAP_OVF_EXC:
/* Remap the exception thrown by the next _OVF opcode */
g_assert (method->wrapper_type != MONO_WRAPPER_NONE);
ovf_exc = (const char*)mono_method_get_wrapper_data (method, token);
break;
case MONO_CEE_ARGLIST: {
/* somewhat similar to LDTOKEN */
MonoInst *addr, *vtvar;
vtvar = mono_compile_create_var (cfg, m_class_get_byval_arg (mono_defaults.argumenthandle_class), OP_LOCAL);
EMIT_NEW_TEMPLOADA (cfg, addr, vtvar->inst_c0);
EMIT_NEW_UNALU (cfg, ins, OP_ARGLIST, -1, addr->dreg);
EMIT_NEW_TEMPLOAD (cfg, ins, vtvar->inst_c0);
ins->type = STACK_VTYPE;
ins->klass = mono_defaults.argumenthandle_class;
*sp++ = ins;
break;
}
case MONO_CEE_CEQ:
case MONO_CEE_CGT:
case MONO_CEE_CGT_UN:
case MONO_CEE_CLT:
case MONO_CEE_CLT_UN: {
MonoInst *cmp, *arg1, *arg2;
sp -= 2;
arg1 = sp [0];
arg2 = sp [1];
/*
* The following transforms:
* CEE_CEQ into OP_CEQ
* CEE_CGT into OP_CGT
* CEE_CGT_UN into OP_CGT_UN
* CEE_CLT into OP_CLT
* CEE_CLT_UN into OP_CLT_UN
*/
MONO_INST_NEW (cfg, cmp, (OP_CEQ - CEE_CEQ) + ip [1]);
MONO_INST_NEW (cfg, ins, cmp->opcode);
cmp->sreg1 = arg1->dreg;
cmp->sreg2 = arg2->dreg;
type_from_op (cfg, cmp, arg1, arg2);
CHECK_TYPE (cmp);
add_widen_op (cfg, cmp, &arg1, &arg2);
if ((arg1->type == STACK_I8) || ((TARGET_SIZEOF_VOID_P == 8) && ((arg1->type == STACK_PTR) || (arg1->type == STACK_OBJ) || (arg1->type == STACK_MP))))
cmp->opcode = OP_LCOMPARE;
else if (arg1->type == STACK_R4)
cmp->opcode = OP_RCOMPARE;
else if (arg1->type == STACK_R8)
cmp->opcode = OP_FCOMPARE;
else
cmp->opcode = OP_ICOMPARE;
MONO_ADD_INS (cfg->cbb, cmp);
ins->type = STACK_I4;
ins->dreg = alloc_dreg (cfg, (MonoStackType)ins->type);
type_from_op (cfg, ins, arg1, arg2);
if (cmp->opcode == OP_FCOMPARE || cmp->opcode == OP_RCOMPARE) {
/*
* The backends expect the fceq opcodes to do the
* comparison too.
*/
ins->sreg1 = cmp->sreg1;
ins->sreg2 = cmp->sreg2;
NULLIFY_INS (cmp);
}
MONO_ADD_INS (cfg->cbb, ins);
*sp++ = ins;
break;
}
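/*
 * Illustrative lowering (assumed IR spelling, not emitted verbatim): the
 * IL sequence "ldloc.0; ldloc.1; ceq" becomes a compare/set pair, e.g.
 *
 *   int_compare R10 R11   ; OP_ICOMPARE, sets the condition codes
 *   int_ceq     R12       ; OP_ICEQ, materializes the flag as 0/1
 *
 * For R4/R8 compares the set opcode carries the source registers itself,
 * which is why the OP_FCOMPARE/OP_RCOMPARE emitted above gets nullified.
 */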
case MONO_CEE_LDFTN: {
MonoInst *argconst;
MonoMethod *cil_method;
cmethod = mini_get_method (cfg, method, n, NULL, generic_context);
CHECK_CFG_ERROR;
if (constrained_class) {
if (m_method_is_static (cmethod) && mini_class_check_context_used (cfg, constrained_class))
// FIXME:
GENERIC_SHARING_FAILURE (CEE_LDFTN);
cmethod = get_constrained_method (cfg, image, n, cmethod, constrained_class, generic_context);
constrained_class = NULL;
CHECK_CFG_ERROR;
}
mono_class_init_internal (cmethod->klass);
mono_save_token_info (cfg, image, n, cmethod);
context_used = mini_method_check_context_used (cfg, cmethod);
cil_method = cmethod;
if (!dont_verify && !cfg->skip_visibility && !mono_method_can_access_method (method, cmethod))
emit_method_access_failure (cfg, method, cil_method);
const gboolean has_unmanaged_callers_only =
cmethod->wrapper_type == MONO_WRAPPER_NONE &&
mono_method_has_unmanaged_callers_only_attribute (cmethod);
/*
* Optimize the common case of ldftn+delegate creation
*/
if ((sp > stack_start) && (next_ip + 4 < end) && ip_in_bb (cfg, cfg->cbb, next_ip) && (next_ip [0] == CEE_NEWOBJ)) {
MonoMethod *ctor_method = mini_get_method (cfg, method, read32 (next_ip + 1), NULL, generic_context);
if (ctor_method && (m_class_get_parent (ctor_method->klass) == mono_defaults.multicastdelegate_class)) {
MonoInst *target_ins, *handle_ins;
MonoMethod *invoke;
int invoke_context_used;
if (G_UNLIKELY (has_unmanaged_callers_only)) {
mono_error_set_not_supported (cfg->error, "Cannot create delegate from method with UnmanagedCallersOnlyAttribute");
CHECK_CFG_ERROR;
}
invoke = mono_get_delegate_invoke_internal (ctor_method->klass);
if (!invoke || !mono_method_signature_internal (invoke))
LOAD_ERROR;
invoke_context_used = mini_method_check_context_used (cfg, invoke);
target_ins = sp [-1];
if (!(cmethod->flags & METHOD_ATTRIBUTE_STATIC)) {
/*BAD IMPL: We must not add a null check for virtual invoke delegates.*/
if (mono_method_signature_internal (invoke)->param_count == mono_method_signature_internal (cmethod)->param_count) {
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, target_ins->dreg, 0);
MONO_EMIT_NEW_COND_EXC (cfg, EQ, "ArgumentException");
}
}
if ((invoke_context_used == 0 || !cfg->gsharedvt) || cfg->llvm_only) {
if (cfg->verbose_level > 3)
g_print ("converting (in B%d: stack: %d) %s", cfg->cbb->block_num, (int)(sp - stack_start), mono_disasm_code_one (NULL, method, ip + 6, NULL));
if ((handle_ins = handle_delegate_ctor (cfg, ctor_method->klass, target_ins, cmethod, context_used, invoke_context_used, FALSE))) {
sp --;
*sp = handle_ins;
CHECK_CFG_EXCEPTION;
sp ++;
next_ip += 5;
il_op = MONO_CEE_NEWOBJ;
break;
} else {
CHECK_CFG_ERROR;
}
}
}
}
/* UnmanagedCallersOnlyAttribute means ldftn should return a method callable from native */
if (G_UNLIKELY (has_unmanaged_callers_only)) {
if (G_UNLIKELY (cmethod->flags & METHOD_ATTRIBUTE_PINVOKE_IMPL)) {
// Follow CoreCLR, disallow [UnmanagedCallersOnly] and [DllImport] to be used
// together
emit_not_supported_failure (cfg);
EMIT_NEW_PCONST (cfg, ins, NULL);
*sp++ = ins;
inline_costs += CALL_COST * MIN(10, num_calls++);
break;
}
MonoClass *delegate_klass = NULL;
MonoGCHandle target_handle = 0;
ERROR_DECL (wrapper_error);
MonoMethod *wrapped_cmethod;
wrapped_cmethod = mono_marshal_get_managed_wrapper (cmethod, delegate_klass, target_handle, wrapper_error);
if (!is_ok (wrapper_error)) {
/* if we couldn't create a wrapper because cmethod isn't supposed to have an
UnmanagedCallersOnly attribute, follow CoreCLR behavior and throw when the
method with the ldftn is executing, not when it is being compiled. */
emit_invalid_program_with_msg (cfg, wrapper_error, method, cmethod);
mono_error_cleanup (wrapper_error);
EMIT_NEW_PCONST (cfg, ins, NULL);
*sp++ = ins;
inline_costs += CALL_COST * MIN(10, num_calls++);
break;
} else {
cmethod = wrapped_cmethod;
}
}
argconst = emit_get_rgctx_method (cfg, context_used, cmethod, MONO_RGCTX_INFO_METHOD);
ins = mono_emit_jit_icall (cfg, mono_ldftn, &argconst);
*sp++ = ins;
inline_costs += CALL_COST * MIN(10, num_calls++);
break;
}
case MONO_CEE_LDVIRTFTN: {
MonoInst *args [2];
cmethod = mini_get_method (cfg, method, n, NULL, generic_context);
CHECK_CFG_ERROR;
mono_class_init_internal (cmethod->klass);
context_used = mini_method_check_context_used (cfg, cmethod);
/*
* Optimize the common case of ldvirtftn+delegate creation
*/
if (previous_il_op == MONO_CEE_DUP && (sp > stack_start) && (next_ip + 4 < end) && ip_in_bb (cfg, cfg->cbb, next_ip) && (next_ip [0] == CEE_NEWOBJ)) {
MonoMethod *ctor_method = mini_get_method (cfg, method, read32 (next_ip + 1), NULL, generic_context);
if (ctor_method && (m_class_get_parent (ctor_method->klass) == mono_defaults.multicastdelegate_class)) {
MonoInst *target_ins, *handle_ins;
MonoMethod *invoke;
int invoke_context_used;
const gboolean is_virtual = (cmethod->flags & METHOD_ATTRIBUTE_VIRTUAL) != 0;
invoke = mono_get_delegate_invoke_internal (ctor_method->klass);
if (!invoke || !mono_method_signature_internal (invoke))
LOAD_ERROR;
invoke_context_used = mini_method_check_context_used (cfg, invoke);
target_ins = sp [-1];
if (invoke_context_used == 0 || !cfg->gsharedvt || cfg->llvm_only) {
if (cfg->verbose_level > 3)
g_print ("converting (in B%d: stack: %d) %s", cfg->cbb->block_num, (int)(sp - stack_start), mono_disasm_code_one (NULL, method, ip + 6, NULL));
if ((handle_ins = handle_delegate_ctor (cfg, ctor_method->klass, target_ins, cmethod, context_used, invoke_context_used, is_virtual))) {
sp -= 2;
*sp = handle_ins;
CHECK_CFG_EXCEPTION;
next_ip += 5;
previous_il_op = MONO_CEE_NEWOBJ;
sp ++;
break;
} else {
CHECK_CFG_ERROR;
}
}
}
}
--sp;
args [0] = *sp;
args [1] = emit_get_rgctx_method (cfg, context_used,
cmethod, MONO_RGCTX_INFO_METHOD);
if (context_used)
*sp++ = mono_emit_jit_icall (cfg, mono_ldvirtfn_gshared, args);
else
*sp++ = mono_emit_jit_icall (cfg, mono_ldvirtfn, args);
inline_costs += CALL_COST * MIN(10, num_calls++);
break;
}
case MONO_CEE_LOCALLOC: {
MonoBasicBlock *non_zero_bb, *end_bb;
int alloc_ptr = alloc_preg (cfg);
--sp;
if (sp != stack_start)
UNVERIFIED;
if (cfg->method != method)
/*
* Inlining this into a loop in a parent could lead to
* stack overflows, which is different behavior from the
* non-inlined case, so disable inlining in this case.
*/
INLINE_FAILURE("localloc");
NEW_BBLOCK (cfg, non_zero_bb);
NEW_BBLOCK (cfg, end_bb);
/* if size != zero */
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, sp [0]->dreg, 0);
MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_PBNE_UN, non_zero_bb);
//size is zero, so result is NULL
MONO_EMIT_NEW_PCONST (cfg, alloc_ptr, NULL);
MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_BR, end_bb);
MONO_START_BB (cfg, non_zero_bb);
MONO_INST_NEW (cfg, ins, OP_LOCALLOC);
ins->dreg = alloc_ptr;
ins->sreg1 = sp [0]->dreg;
ins->type = STACK_PTR;
MONO_ADD_INS (cfg->cbb, ins);
cfg->flags |= MONO_CFG_HAS_ALLOCA;
if (header->init_locals)
ins->flags |= MONO_INST_INIT;
MONO_START_BB (cfg, end_bb);
EMIT_NEW_UNALU (cfg, ins, OP_MOVE, alloc_preg (cfg), alloc_ptr);
ins->type = STACK_PTR;
*sp++ = ins;
break;
}
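/*
 * Schematically, the control flow built above is
 * "result = (size == 0) ? NULL : alloca (size)": the explicit zero check
 * avoids invoking the allocator for empty requests while still producing
 * a well-defined (NULL) result.
 */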
case MONO_CEE_ENDFILTER: {
MonoExceptionClause *clause, *nearest;
int cc;
--sp;
if ((sp != stack_start) || (sp [0]->type != STACK_I4))
UNVERIFIED;
MONO_INST_NEW (cfg, ins, OP_ENDFILTER);
ins->sreg1 = (*sp)->dreg;
MONO_ADD_INS (cfg->cbb, ins);
start_new_bblock = 1;
nearest = NULL;
for (cc = 0; cc < header->num_clauses; ++cc) {
clause = &header->clauses [cc];
if ((clause->flags & MONO_EXCEPTION_CLAUSE_FILTER) &&
((next_ip - header->code) > clause->data.filter_offset && (next_ip - header->code) <= clause->handler_offset) &&
(!nearest || (clause->data.filter_offset < nearest->data.filter_offset)))
nearest = clause;
}
g_assert (nearest);
if ((next_ip - header->code) != nearest->handler_offset)
UNVERIFIED;
break;
}
case MONO_CEE_UNALIGNED_:
ins_flag |= MONO_INST_UNALIGNED;
/* FIXME: record alignment? we can assume 1 for now */
break;
case MONO_CEE_VOLATILE_:
ins_flag |= MONO_INST_VOLATILE;
break;
case MONO_CEE_TAIL_:
ins_flag |= MONO_INST_TAILCALL;
cfg->flags |= MONO_CFG_HAS_TAILCALL;
/* Can't inline tailcalls at this time */
inline_costs += 100000;
break;
case MONO_CEE_INITOBJ:
--sp;
klass = mini_get_class (method, token, generic_context);
CHECK_TYPELOAD (klass);
if (mini_class_is_reference (klass))
MONO_EMIT_NEW_STORE_MEMBASE_IMM (cfg, OP_STORE_MEMBASE_IMM, sp [0]->dreg, 0, 0);
else
mini_emit_initobj (cfg, *sp, NULL, klass);
inline_costs += 1;
break;
case MONO_CEE_CONSTRAINED_:
constrained_class = mini_get_class (method, token, generic_context);
CHECK_TYPELOAD (constrained_class);
ins_has_side_effect = FALSE;
break;
case MONO_CEE_CPBLK:
sp -= 3;
mini_emit_memory_copy_bytes (cfg, sp [0], sp [1], sp [2], ins_flag);
ins_flag = 0;
inline_costs += 1;
break;
case MONO_CEE_INITBLK:
sp -= 3;
mini_emit_memory_init_bytes (cfg, sp [0], sp [1], sp [2], ins_flag);
ins_flag = 0;
inline_costs += 1;
break;
case MONO_CEE_NO_:
if (ip [2] & CEE_NO_TYPECHECK)
ins_flag |= MONO_INST_NOTYPECHECK;
if (ip [2] & CEE_NO_RANGECHECK)
ins_flag |= MONO_INST_NORANGECHECK;
if (ip [2] & CEE_NO_NULLCHECK)
ins_flag |= MONO_INST_NONULLCHECK;
break;
case MONO_CEE_RETHROW: {
MonoInst *load;
int handler_offset = -1;
for (i = 0; i < header->num_clauses; ++i) {
MonoExceptionClause *clause = &header->clauses [i];
if (MONO_OFFSET_IN_HANDLER (clause, ip - header->code) && !(clause->flags & MONO_EXCEPTION_CLAUSE_FINALLY)) {
handler_offset = clause->handler_offset;
break;
}
}
cfg->cbb->flags |= BB_EXCEPTION_UNSAFE;
if (handler_offset == -1)
UNVERIFIED;
EMIT_NEW_TEMPLOAD (cfg, load, mono_find_exvar_for_offset (cfg, handler_offset)->inst_c0);
MONO_INST_NEW (cfg, ins, OP_RETHROW);
ins->sreg1 = load->dreg;
MONO_ADD_INS (cfg->cbb, ins);
MONO_INST_NEW (cfg, ins, OP_NOT_REACHED);
MONO_ADD_INS (cfg->cbb, ins);
sp = stack_start;
link_bblock (cfg, cfg->cbb, end_bblock);
start_new_bblock = 1;
break;
}
case MONO_CEE_MONO_RETHROW: {
if (sp [-1]->type != STACK_OBJ)
UNVERIFIED;
MONO_INST_NEW (cfg, ins, OP_RETHROW);
--sp;
ins->sreg1 = sp [0]->dreg;
cfg->cbb->out_of_line = TRUE;
MONO_ADD_INS (cfg->cbb, ins);
MONO_INST_NEW (cfg, ins, OP_NOT_REACHED);
MONO_ADD_INS (cfg->cbb, ins);
sp = stack_start;
link_bblock (cfg, cfg->cbb, end_bblock);
start_new_bblock = 1;
/* This can complicate code generation for llvm since the return value might not be defined */
if (COMPILE_LLVM (cfg))
INLINE_FAILURE ("mono_rethrow");
break;
}
case MONO_CEE_SIZEOF: {
guint32 val;
int ialign;
if (mono_metadata_token_table (token) == MONO_TABLE_TYPESPEC && !image_is_dynamic (m_class_get_image (method->klass)) && !generic_context) {
MonoType *type = mono_type_create_from_typespec_checked (image, token, cfg->error);
CHECK_CFG_ERROR;
val = mono_type_size (type, &ialign);
EMIT_NEW_ICONST (cfg, ins, val);
} else {
MonoClass *klass = mini_get_class (method, token, generic_context);
CHECK_TYPELOAD (klass);
if (mini_is_gsharedvt_klass (klass)) {
ins = mini_emit_get_gsharedvt_info_klass (cfg, klass, MONO_RGCTX_INFO_CLASS_SIZEOF);
ins->type = STACK_I4;
} else {
val = mono_type_size (m_class_get_byval_arg (klass), &ialign);
EMIT_NEW_ICONST (cfg, ins, val);
}
}
*sp++ = ins;
break;
}
case MONO_CEE_REFANYTYPE: {
MonoInst *src_var, *src;
GSHAREDVT_FAILURE (il_op);
--sp;
// FIXME:
src_var = get_vreg_to_inst (cfg, sp [0]->dreg);
if (!src_var)
src_var = mono_compile_create_var_for_vreg (cfg, m_class_get_byval_arg (mono_defaults.typed_reference_class), OP_LOCAL, sp [0]->dreg);
EMIT_NEW_VARLOADA (cfg, src, src_var, src_var->inst_vtype);
EMIT_NEW_LOAD_MEMBASE_TYPE (cfg, ins, m_class_get_byval_arg (mono_defaults.typehandle_class), src->dreg, MONO_STRUCT_OFFSET (MonoTypedRef, type));
*sp++ = ins;
break;
}
case MONO_CEE_READONLY_:
readonly = TRUE;
break;
case MONO_CEE_UNUSED56:
case MONO_CEE_UNUSED57:
case MONO_CEE_UNUSED70:
case MONO_CEE_UNUSED:
case MONO_CEE_UNUSED99:
case MONO_CEE_UNUSED58:
case MONO_CEE_UNUSED1:
UNVERIFIED;
default:
g_warning ("opcode 0x%02x not handled", il_op);
UNVERIFIED;
}
if (ins_has_side_effect)
cfg->cbb->flags |= BB_HAS_SIDE_EFFECTS;
}
if (start_new_bblock != 1)
UNVERIFIED;
cfg->cbb->cil_length = ip - cfg->cbb->cil_code;
if (cfg->cbb->next_bb) {
/* This could already be set because of inlining, #693905 */
MonoBasicBlock *bb = cfg->cbb;
while (bb->next_bb)
bb = bb->next_bb;
bb->next_bb = end_bblock;
} else {
cfg->cbb->next_bb = end_bblock;
}
#if defined(TARGET_POWERPC) || defined(TARGET_X86)
if (cfg->compile_aot)
/* FIXME: The plt slots require a GOT var even if the method doesn't use it */
mono_get_got_var (cfg);
#endif
#ifdef TARGET_WASM
if (cfg->lmf_var && !cfg->deopt) {
// mini_llvmonly_pop_lmf () might be called before emit_push_lmf () so initialize the LMF
cfg->cbb = init_localsbb;
EMIT_NEW_VARLOADA (cfg, ins, cfg->lmf_var, NULL);
int lmf_reg = ins->dreg;
EMIT_NEW_STORE_MEMBASE (cfg, ins, OP_STORE_MEMBASE_IMM, lmf_reg, MONO_STRUCT_OFFSET (MonoLMF, previous_lmf), 0);
}
#endif
if (cfg->method == method && cfg->got_var)
mono_emit_load_got_addr (cfg);
if (init_localsbb) {
cfg->cbb = init_localsbb;
cfg->ip = NULL;
for (i = 0; i < header->num_locals; ++i) {
/*
* Vtype initialization might need to be done after CEE_JIT_ATTACH, since it can make calls to memset (),
* which need the trampoline code to work.
*/
if (MONO_TYPE_ISSTRUCT (header->locals [i]))
cfg->cbb = init_localsbb2;
else
cfg->cbb = init_localsbb;
emit_init_local (cfg, i, header->locals [i], init_locals);
}
}
if (cfg->init_ref_vars && cfg->method == method) {
/* Emit initialization for ref vars */
// FIXME: Avoid duplicate initialization for IL locals.
for (i = 0; i < cfg->num_varinfo; ++i) {
MonoInst *ins = cfg->varinfo [i];
if (ins->opcode == OP_LOCAL && ins->type == STACK_OBJ)
MONO_EMIT_NEW_PCONST (cfg, ins->dreg, NULL);
}
}
if (cfg->lmf_var && cfg->method == method && !cfg->llvm_only) {
cfg->cbb = init_localsbb;
emit_push_lmf (cfg);
}
/* emit profiler enter code after a jit attach if there is one */
cfg->cbb = init_localsbb2;
mini_profiler_emit_enter (cfg);
cfg->cbb = init_localsbb;
if (seq_points) {
MonoBasicBlock *bb;
/*
* Make seq points at backward branch targets interruptable.
*/
for (bb = cfg->bb_entry; bb; bb = bb->next_bb)
if (bb->code && bb->in_count > 1 && bb->code->opcode == OP_SEQ_POINT)
bb->code->flags |= MONO_INST_SINGLE_STEP_LOC;
}
/* Add a sequence point for method entry/exit events */
if (seq_points && cfg->gen_sdb_seq_points) {
NEW_SEQ_POINT (cfg, ins, METHOD_ENTRY_IL_OFFSET, FALSE);
MONO_ADD_INS (init_localsbb, ins);
NEW_SEQ_POINT (cfg, ins, METHOD_EXIT_IL_OFFSET, FALSE);
MONO_ADD_INS (cfg->bb_exit, ins);
}
/*
* Add seq points for IL offsets which have line number info but for which no seq point was generated during JITting because
* the code they refer to was dead (#11880).
*/
if (sym_seq_points) {
for (i = 0; i < header->code_size; ++i) {
if (mono_bitset_test_fast (seq_point_locs, i) && !mono_bitset_test_fast (seq_point_set_locs, i)) {
MonoInst *ins;
NEW_SEQ_POINT (cfg, ins, i, FALSE);
mono_add_seq_point (cfg, NULL, ins, SEQ_POINT_NATIVE_OFFSET_DEAD_CODE);
}
}
}
cfg->ip = NULL;
if (cfg->method == method) {
compute_bb_regions (cfg);
} else {
MonoBasicBlock *bb;
/* get_most_deep_clause () in mini-llvm.c depends on this for inlined bblocks */
for (bb = start_bblock; bb != end_bblock; bb = bb->next_bb) {
bb->real_offset = inline_offset;
}
}
if (inline_costs < 0) {
char *mname;
/* Method is too large */
mname = mono_method_full_name (method, TRUE);
mono_cfg_set_exception_invalid_program (cfg, g_strdup_printf ("Method %s is too complex.", mname));
g_free (mname);
}
if ((cfg->verbose_level > 2) && (cfg->method == method))
mono_print_code (cfg, "AFTER METHOD-TO-IR");
goto cleanup;
mono_error_exit:
if (cfg->verbose_level > 3)
g_print ("exiting due to error");
g_assert (!is_ok (cfg->error));
goto cleanup;
exception_exit:
if (cfg->verbose_level > 3)
g_print ("exiting due to exception");
g_assert (cfg->exception_type != MONO_EXCEPTION_NONE);
goto cleanup;
unverified:
if (cfg->verbose_level > 3)
g_print ("exiting due to invalid il");
set_exception_type_from_invalid_il (cfg, method, ip);
goto cleanup;
cleanup:
g_slist_free (class_inits);
mono_basic_block_free (original_bb);
cfg->dont_inline = g_list_remove (cfg->dont_inline, method);
if (cfg->exception_type)
return -1;
else
return inline_costs;
}
static int
store_membase_reg_to_store_membase_imm (int opcode)
{
switch (opcode) {
case OP_STORE_MEMBASE_REG:
return OP_STORE_MEMBASE_IMM;
case OP_STOREI1_MEMBASE_REG:
return OP_STOREI1_MEMBASE_IMM;
case OP_STOREI2_MEMBASE_REG:
return OP_STOREI2_MEMBASE_IMM;
case OP_STOREI4_MEMBASE_REG:
return OP_STOREI4_MEMBASE_IMM;
case OP_STOREI8_MEMBASE_REG:
return OP_STOREI8_MEMBASE_IMM;
default:
g_assert_not_reached ();
}
return -1;
}
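/*
 * Example (illustrative): mono_spill_global_vars () uses this mapping to
 * fuse a constant into its spill store, turning
 *
 *   iconst R10 <- [5]
 *   storei4_membase_reg [fp + 0x10] <- R10
 *
 * into a single "storei4_membase_imm [fp + 0x10] <- [5]".
 */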
int
mono_op_to_op_imm (int opcode)
{
switch (opcode) {
case OP_IADD:
return OP_IADD_IMM;
case OP_ISUB:
return OP_ISUB_IMM;
case OP_IDIV:
return OP_IDIV_IMM;
case OP_IDIV_UN:
return OP_IDIV_UN_IMM;
case OP_IREM:
return OP_IREM_IMM;
case OP_IREM_UN:
return OP_IREM_UN_IMM;
case OP_IMUL:
return OP_IMUL_IMM;
case OP_IAND:
return OP_IAND_IMM;
case OP_IOR:
return OP_IOR_IMM;
case OP_IXOR:
return OP_IXOR_IMM;
case OP_ISHL:
return OP_ISHL_IMM;
case OP_ISHR:
return OP_ISHR_IMM;
case OP_ISHR_UN:
return OP_ISHR_UN_IMM;
case OP_LADD:
return OP_LADD_IMM;
case OP_LSUB:
return OP_LSUB_IMM;
case OP_LAND:
return OP_LAND_IMM;
case OP_LOR:
return OP_LOR_IMM;
case OP_LXOR:
return OP_LXOR_IMM;
case OP_LSHL:
return OP_LSHL_IMM;
case OP_LSHR:
return OP_LSHR_IMM;
case OP_LSHR_UN:
return OP_LSHR_UN_IMM;
#if SIZEOF_REGISTER == 8
case OP_LMUL:
return OP_LMUL_IMM;
case OP_LREM:
return OP_LREM_IMM;
#endif
case OP_COMPARE:
return OP_COMPARE_IMM;
case OP_ICOMPARE:
return OP_ICOMPARE_IMM;
case OP_LCOMPARE:
return OP_LCOMPARE_IMM;
case OP_STORE_MEMBASE_REG:
return OP_STORE_MEMBASE_IMM;
case OP_STOREI1_MEMBASE_REG:
return OP_STOREI1_MEMBASE_IMM;
case OP_STOREI2_MEMBASE_REG:
return OP_STOREI2_MEMBASE_IMM;
case OP_STOREI4_MEMBASE_REG:
return OP_STOREI4_MEMBASE_IMM;
#if defined(TARGET_X86) || defined (TARGET_AMD64)
case OP_X86_PUSH:
return OP_X86_PUSH_IMM;
case OP_X86_COMPARE_MEMBASE_REG:
return OP_X86_COMPARE_MEMBASE_IMM;
#endif
#if defined(TARGET_AMD64)
case OP_AMD64_ICOMPARE_MEMBASE_REG:
return OP_AMD64_ICOMPARE_MEMBASE_IMM;
#endif
case OP_VOIDCALL_REG:
return OP_VOIDCALL;
case OP_CALL_REG:
return OP_CALL;
case OP_LCALL_REG:
return OP_LCALL;
case OP_FCALL_REG:
return OP_FCALL;
case OP_LOCALLOC:
return OP_LOCALLOC_IMM;
}
return -1;
}
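/*
 * Illustrative use (register/offset values are made up): peephole passes
 * can rewrite
 *
 *   iconst  R11 <- [3]
 *   int_add R12 <- R10 R11
 *
 * into "int_add_imm R12 <- R10 [3]" by switching to
 * mono_op_to_op_imm (OP_IADD) and moving the constant into inst_imm; a
 * return value of -1 means no immediate variant exists for the opcode.
 */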
int
mono_load_membase_to_load_mem (int opcode)
{
// FIXME: Add a MONO_ARCH_HAVE_LOAD_MEM macro
#if defined(TARGET_X86) || defined(TARGET_AMD64)
switch (opcode) {
case OP_LOAD_MEMBASE:
return OP_LOAD_MEM;
case OP_LOADU1_MEMBASE:
return OP_LOADU1_MEM;
case OP_LOADU2_MEMBASE:
return OP_LOADU2_MEM;
case OP_LOADI4_MEMBASE:
return OP_LOADI4_MEM;
case OP_LOADU4_MEMBASE:
return OP_LOADU4_MEM;
#if SIZEOF_REGISTER == 8
case OP_LOADI8_MEMBASE:
return OP_LOADI8_MEM;
#endif
}
#endif
return -1;
}
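/*
 * Example: on x86/amd64, a load whose base register is known to hold a
 * constant address can be narrowed from "loadi4_membase R10 <- [R11 + 0]"
 * to "loadi4_mem R10 <- [0xaddr]", dropping the address computation.
 */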
static int
op_to_op_dest_membase (int store_opcode, int opcode)
{
#if defined(TARGET_X86)
if (!((store_opcode == OP_STORE_MEMBASE_REG) || (store_opcode == OP_STOREI4_MEMBASE_REG)))
return -1;
switch (opcode) {
case OP_IADD:
return OP_X86_ADD_MEMBASE_REG;
case OP_ISUB:
return OP_X86_SUB_MEMBASE_REG;
case OP_IAND:
return OP_X86_AND_MEMBASE_REG;
case OP_IOR:
return OP_X86_OR_MEMBASE_REG;
case OP_IXOR:
return OP_X86_XOR_MEMBASE_REG;
case OP_ADD_IMM:
case OP_IADD_IMM:
return OP_X86_ADD_MEMBASE_IMM;
case OP_SUB_IMM:
case OP_ISUB_IMM:
return OP_X86_SUB_MEMBASE_IMM;
case OP_AND_IMM:
case OP_IAND_IMM:
return OP_X86_AND_MEMBASE_IMM;
case OP_OR_IMM:
case OP_IOR_IMM:
return OP_X86_OR_MEMBASE_IMM;
case OP_XOR_IMM:
case OP_IXOR_IMM:
return OP_X86_XOR_MEMBASE_IMM;
case OP_MOVE:
return OP_NOP;
}
#endif
#if defined(TARGET_AMD64)
if (!((store_opcode == OP_STORE_MEMBASE_REG) || (store_opcode == OP_STOREI4_MEMBASE_REG) || (store_opcode == OP_STOREI8_MEMBASE_REG)))
return -1;
switch (opcode) {
case OP_IADD:
return OP_X86_ADD_MEMBASE_REG;
case OP_ISUB:
return OP_X86_SUB_MEMBASE_REG;
case OP_IAND:
return OP_X86_AND_MEMBASE_REG;
case OP_IOR:
return OP_X86_OR_MEMBASE_REG;
case OP_IXOR:
return OP_X86_XOR_MEMBASE_REG;
case OP_IADD_IMM:
return OP_X86_ADD_MEMBASE_IMM;
case OP_ISUB_IMM:
return OP_X86_SUB_MEMBASE_IMM;
case OP_IAND_IMM:
return OP_X86_AND_MEMBASE_IMM;
case OP_IOR_IMM:
return OP_X86_OR_MEMBASE_IMM;
case OP_IXOR_IMM:
return OP_X86_XOR_MEMBASE_IMM;
case OP_LADD:
return OP_AMD64_ADD_MEMBASE_REG;
case OP_LSUB:
return OP_AMD64_SUB_MEMBASE_REG;
case OP_LAND:
return OP_AMD64_AND_MEMBASE_REG;
case OP_LOR:
return OP_AMD64_OR_MEMBASE_REG;
case OP_LXOR:
return OP_AMD64_XOR_MEMBASE_REG;
case OP_ADD_IMM:
case OP_LADD_IMM:
return OP_AMD64_ADD_MEMBASE_IMM;
case OP_SUB_IMM:
case OP_LSUB_IMM:
return OP_AMD64_SUB_MEMBASE_IMM;
case OP_AND_IMM:
case OP_LAND_IMM:
return OP_AMD64_AND_MEMBASE_IMM;
case OP_OR_IMM:
case OP_LOR_IMM:
return OP_AMD64_OR_MEMBASE_IMM;
case OP_XOR_IMM:
case OP_LXOR_IMM:
return OP_AMD64_XOR_MEMBASE_IMM;
case OP_MOVE:
return OP_NOP;
}
#endif
return -1;
}
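/*
 * Illustrative read-modify-write fusion enabled by this mapping: for a
 * variable living at [fp + 0x8], the sequence
 *
 *   loadi4_membase      R10 <- [fp + 0x8]
 *   int_add             R10 <- R10 R11
 *   storei4_membase_reg [fp + 0x8] <- R10
 *
 * can be collapsed by the spill pass into a single x86-style
 * "add [fp + 0x8], R11" (OP_X86_ADD_MEMBASE_REG).
 */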
static int
op_to_op_store_membase (int store_opcode, int opcode)
{
#if defined(TARGET_X86) || defined(TARGET_AMD64)
switch (opcode) {
case OP_ICEQ:
if (store_opcode == OP_STOREI1_MEMBASE_REG)
return OP_X86_SETEQ_MEMBASE;
case OP_CNE:
if (store_opcode == OP_STOREI1_MEMBASE_REG)
return OP_X86_SETNE_MEMBASE;
}
#endif
return -1;
}
static int
op_to_op_src1_membase (MonoCompile *cfg, int load_opcode, int opcode)
{
#ifdef TARGET_X86
/* FIXME: This has sign extension issues */
/*
if ((opcode == OP_ICOMPARE_IMM) && (load_opcode == OP_LOADU1_MEMBASE))
return OP_X86_COMPARE_MEMBASE8_IMM;
*/
if (!((load_opcode == OP_LOAD_MEMBASE) || (load_opcode == OP_LOADI4_MEMBASE) || (load_opcode == OP_LOADU4_MEMBASE)))
return -1;
switch (opcode) {
case OP_X86_PUSH:
return OP_X86_PUSH_MEMBASE;
case OP_COMPARE_IMM:
case OP_ICOMPARE_IMM:
return OP_X86_COMPARE_MEMBASE_IMM;
case OP_COMPARE:
case OP_ICOMPARE:
return OP_X86_COMPARE_MEMBASE_REG;
}
#endif
#ifdef TARGET_AMD64
/* FIXME: This has sign extension issues */
/*
if ((opcode == OP_ICOMPARE_IMM) && (load_opcode == OP_LOADU1_MEMBASE))
return OP_X86_COMPARE_MEMBASE8_IMM;
*/
switch (opcode) {
case OP_X86_PUSH:
if ((load_opcode == OP_LOAD_MEMBASE && !cfg->backend->ilp32) || (load_opcode == OP_LOADI8_MEMBASE))
return OP_X86_PUSH_MEMBASE;
break;
/* FIXME: This only works for 32 bit immediates
case OP_COMPARE_IMM:
case OP_LCOMPARE_IMM:
if ((load_opcode == OP_LOAD_MEMBASE) || (load_opcode == OP_LOADI8_MEMBASE))
return OP_AMD64_COMPARE_MEMBASE_IMM;
*/
case OP_ICOMPARE_IMM:
if ((load_opcode == OP_LOADI4_MEMBASE) || (load_opcode == OP_LOADU4_MEMBASE))
return OP_AMD64_ICOMPARE_MEMBASE_IMM;
break;
case OP_COMPARE:
case OP_LCOMPARE:
if (cfg->backend->ilp32 && load_opcode == OP_LOAD_MEMBASE)
return OP_AMD64_ICOMPARE_MEMBASE_REG;
if ((load_opcode == OP_LOAD_MEMBASE && !cfg->backend->ilp32) || (load_opcode == OP_LOADI8_MEMBASE))
return OP_AMD64_COMPARE_MEMBASE_REG;
break;
case OP_ICOMPARE:
if ((load_opcode == OP_LOADI4_MEMBASE) || (load_opcode == OP_LOADU4_MEMBASE))
return OP_AMD64_ICOMPARE_MEMBASE_REG;
break;
}
#endif
return -1;
}
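/*
 * Counterpart of the above for the first source operand: e.g. an
 * "icompare R10 R11" whose R10 lives at [fp + 0x8] can become
 * "x86_compare_membase_reg [fp + 0x8] R11" (OP_X86_COMPARE_MEMBASE_REG),
 * folding the load of R10 away entirely.
 */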
static int
op_to_op_src2_membase (MonoCompile *cfg, int load_opcode, int opcode)
{
#ifdef TARGET_X86
if (!((load_opcode == OP_LOAD_MEMBASE) || (load_opcode == OP_LOADI4_MEMBASE) || (load_opcode == OP_LOADU4_MEMBASE)))
return -1;
switch (opcode) {
case OP_COMPARE:
case OP_ICOMPARE:
return OP_X86_COMPARE_REG_MEMBASE;
case OP_IADD:
return OP_X86_ADD_REG_MEMBASE;
case OP_ISUB:
return OP_X86_SUB_REG_MEMBASE;
case OP_IAND:
return OP_X86_AND_REG_MEMBASE;
case OP_IOR:
return OP_X86_OR_REG_MEMBASE;
case OP_IXOR:
return OP_X86_XOR_REG_MEMBASE;
}
#endif
#ifdef TARGET_AMD64
if ((load_opcode == OP_LOADI4_MEMBASE) || (load_opcode == OP_LOADU4_MEMBASE) || (load_opcode == OP_LOAD_MEMBASE && cfg->backend->ilp32)) {
switch (opcode) {
case OP_ICOMPARE:
return OP_AMD64_ICOMPARE_REG_MEMBASE;
case OP_IADD:
return OP_X86_ADD_REG_MEMBASE;
case OP_ISUB:
return OP_X86_SUB_REG_MEMBASE;
case OP_IAND:
return OP_X86_AND_REG_MEMBASE;
case OP_IOR:
return OP_X86_OR_REG_MEMBASE;
case OP_IXOR:
return OP_X86_XOR_REG_MEMBASE;
}
} else if ((load_opcode == OP_LOADI8_MEMBASE) || (load_opcode == OP_LOAD_MEMBASE && !cfg->backend->ilp32)) {
switch (opcode) {
case OP_COMPARE:
case OP_LCOMPARE:
return OP_AMD64_COMPARE_REG_MEMBASE;
case OP_LADD:
return OP_AMD64_ADD_REG_MEMBASE;
case OP_LSUB:
return OP_AMD64_SUB_REG_MEMBASE;
case OP_LAND:
return OP_AMD64_AND_REG_MEMBASE;
case OP_LOR:
return OP_AMD64_OR_REG_MEMBASE;
case OP_LXOR:
return OP_AMD64_XOR_REG_MEMBASE;
}
}
#endif
return -1;
}
int
mono_op_to_op_imm_noemul (int opcode)
{
MONO_DISABLE_WARNING(4065) // switch with default but no case
switch (opcode) {
#if SIZEOF_REGISTER == 4 && !defined(MONO_ARCH_NO_EMULATE_LONG_SHIFT_OPS)
case OP_LSHR:
case OP_LSHL:
case OP_LSHR_UN:
return -1;
#endif
#if defined(MONO_ARCH_EMULATE_MUL_DIV) || defined(MONO_ARCH_EMULATE_DIV)
case OP_IDIV:
case OP_IDIV_UN:
case OP_IREM:
case OP_IREM_UN:
return -1;
#endif
#if defined(MONO_ARCH_EMULATE_MUL_DIV)
case OP_IMUL:
return -1;
#endif
default:
return mono_op_to_op_imm (opcode);
}
MONO_RESTORE_WARNING
}
gboolean
mono_op_no_side_effects (int opcode)
{
/* FIXME: Add more instructions */
/* INEG sets the condition codes, and the OP_LNEG decomposition depends on this on x86 */
switch (opcode) {
case OP_MOVE:
case OP_FMOVE:
case OP_VMOVE:
case OP_XMOVE:
case OP_RMOVE:
case OP_VZERO:
case OP_XZERO:
case OP_ICONST:
case OP_I8CONST:
case OP_ADD_IMM:
case OP_R8CONST:
case OP_LADD_IMM:
case OP_ISUB_IMM:
case OP_IADD_IMM:
case OP_LNEG:
case OP_ISUB:
case OP_CMOV_IGE:
case OP_ISHL_IMM:
case OP_ISHR_IMM:
case OP_ISHR_UN_IMM:
case OP_IAND_IMM:
case OP_ICONV_TO_U1:
case OP_ICONV_TO_I1:
case OP_SEXT_I4:
case OP_LCONV_TO_U1:
case OP_ICONV_TO_U2:
case OP_ICONV_TO_I2:
case OP_LCONV_TO_I2:
case OP_LDADDR:
case OP_PHI:
case OP_NOP:
case OP_ZEXT_I4:
case OP_NOT_NULL:
case OP_IL_SEQ_POINT:
case OP_RTTYPE:
return TRUE;
default:
return FALSE;
}
}
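/*
 * Used by dead code elimination and by the BB_HAS_SIDE_EFFECTS tracking in
 * method_to_ir (): e.g. an "int_add_imm R12 <- R10 [3]" whose dreg is never
 * read afterwards can simply be deleted, while calls and stores never can.
 */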
gboolean
mono_ins_no_side_effects (MonoInst *ins)
{
if (mono_op_no_side_effects (ins->opcode))
return TRUE;
if (ins->opcode == OP_AOTCONST) {
MonoJumpInfoType type = (MonoJumpInfoType)(intptr_t)ins->inst_p1;
// Some AOTCONSTs have side effects
switch (type) {
case MONO_PATCH_INFO_TYPE_FROM_HANDLE:
case MONO_PATCH_INFO_LDSTR:
case MONO_PATCH_INFO_VTABLE:
case MONO_PATCH_INFO_METHOD_RGCTX:
return TRUE;
}
}
return FALSE;
}
/**
* mono_handle_global_vregs:
*
* Make vregs used in more than one bblock 'global', i.e. allocate a variable
* for them.
*/
void
mono_handle_global_vregs (MonoCompile *cfg)
{
gint32 *vreg_to_bb;
MonoBasicBlock *bb;
int i, pos;
vreg_to_bb = (gint32 *)mono_mempool_alloc0 (cfg->mempool, sizeof (gint32) * (cfg->next_vreg + 1));
#ifdef MONO_ARCH_SIMD_INTRINSICS
if (cfg->uses_simd_intrinsics & MONO_CFG_USES_SIMD_INTRINSICS_SIMPLIFY_INDIRECTION)
mono_simd_simplify_indirection (cfg);
#endif
/* Find local vregs used in more than one bb */
for (bb = cfg->bb_entry; bb; bb = bb->next_bb) {
MonoInst *ins = bb->code;
int block_num = bb->block_num;
if (cfg->verbose_level > 2)
printf ("\nHANDLE-GLOBAL-VREGS BLOCK %d:\n", bb->block_num);
cfg->cbb = bb;
for (; ins; ins = ins->next) {
const char *spec = INS_INFO (ins->opcode);
int regtype = 0, regindex;
gint32 prev_bb;
if (G_UNLIKELY (cfg->verbose_level > 2))
mono_print_ins (ins);
g_assert (ins->opcode >= MONO_CEE_LAST);
for (regindex = 0; regindex < 4; regindex ++) {
int vreg = 0;
if (regindex == 0) {
regtype = spec [MONO_INST_DEST];
if (regtype == ' ')
continue;
vreg = ins->dreg;
} else if (regindex == 1) {
regtype = spec [MONO_INST_SRC1];
if (regtype == ' ')
continue;
vreg = ins->sreg1;
} else if (regindex == 2) {
regtype = spec [MONO_INST_SRC2];
if (regtype == ' ')
continue;
vreg = ins->sreg2;
} else if (regindex == 3) {
regtype = spec [MONO_INST_SRC3];
if (regtype == ' ')
continue;
vreg = ins->sreg3;
}
#if SIZEOF_REGISTER == 4
/* In the LLVM case, the long opcodes are not decomposed */
if (regtype == 'l' && !COMPILE_LLVM (cfg)) {
/*
* Since some instructions reference the original long vreg,
* and some reference the two component vregs, it is quite hard
* to determine when it needs to be global. So be conservative.
*/
if (!get_vreg_to_inst (cfg, vreg)) {
mono_compile_create_var_for_vreg (cfg, m_class_get_byval_arg (mono_defaults.int64_class), OP_LOCAL, vreg);
if (cfg->verbose_level > 2)
printf ("LONG VREG R%d made global.\n", vreg);
}
/*
* Make the component vregs volatile since the optimizations can
* get confused otherwise.
*/
get_vreg_to_inst (cfg, MONO_LVREG_LS (vreg))->flags |= MONO_INST_VOLATILE;
get_vreg_to_inst (cfg, MONO_LVREG_MS (vreg))->flags |= MONO_INST_VOLATILE;
}
#endif
g_assert (vreg != -1);
prev_bb = vreg_to_bb [vreg];
if (prev_bb == 0) {
/* 0 is a valid block num */
vreg_to_bb [vreg] = block_num + 1;
} else if ((prev_bb != block_num + 1) && (prev_bb != -1)) {
if (((regtype == 'i' && (vreg < MONO_MAX_IREGS))) || (regtype == 'f' && (vreg < MONO_MAX_FREGS)))
continue;
if (!get_vreg_to_inst (cfg, vreg)) {
if (G_UNLIKELY (cfg->verbose_level > 2))
printf ("VREG R%d used in BB%d and BB%d made global.\n", vreg, vreg_to_bb [vreg], block_num);
switch (regtype) {
case 'i':
if (vreg_is_ref (cfg, vreg))
mono_compile_create_var_for_vreg (cfg, mono_get_object_type (), OP_LOCAL, vreg);
else
mono_compile_create_var_for_vreg (cfg, mono_get_int_type (), OP_LOCAL, vreg);
break;
case 'l':
mono_compile_create_var_for_vreg (cfg, m_class_get_byval_arg (mono_defaults.int64_class), OP_LOCAL, vreg);
break;
case 'f':
mono_compile_create_var_for_vreg (cfg, m_class_get_byval_arg (mono_defaults.double_class), OP_LOCAL, vreg);
break;
case 'v':
case 'x':
mono_compile_create_var_for_vreg (cfg, m_class_get_byval_arg (ins->klass), OP_LOCAL, vreg);
break;
default:
g_assert_not_reached ();
}
}
/* Flag as having been used in more than one bb */
vreg_to_bb [vreg] = -1;
}
}
}
}
/* If a variable is used in only one bblock, convert it into a local vreg */
for (i = 0; i < cfg->num_varinfo; i++) {
MonoInst *var = cfg->varinfo [i];
MonoMethodVar *vmv = MONO_VARINFO (cfg, i);
switch (var->type) {
case STACK_I4:
case STACK_OBJ:
case STACK_PTR:
case STACK_MP:
case STACK_VTYPE:
#if SIZEOF_REGISTER == 8
case STACK_I8:
#endif
#if !defined(TARGET_X86)
/* Enabling this screws up the fp stack on x86 */
case STACK_R8:
#endif
if (mono_arch_is_soft_float ())
break;
/*
if (var->type == STACK_VTYPE && cfg->gsharedvt && mini_is_gsharedvt_variable_type (var->inst_vtype))
break;
*/
/* Arguments are implicitly global */
/* Putting R4 vars into registers doesn't work currently */
/* The gsharedvt vars are implicitly referenced by ldaddr opcodes, but those opcodes are only generated later */
if ((var->opcode != OP_ARG) && (var != cfg->ret) && !(var->flags & (MONO_INST_VOLATILE|MONO_INST_INDIRECT)) && (vreg_to_bb [var->dreg] != -1) && (m_class_get_byval_arg (var->klass)->type != MONO_TYPE_R4) && !cfg->disable_vreg_to_lvreg && var != cfg->gsharedvt_info_var && var != cfg->gsharedvt_locals_var && var != cfg->lmf_addr_var) {
/*
* Make sure the variable's liveness interval doesn't contain a call, since
* that would cause the lvreg to be spilled, making the whole optimization
* useless.
*/
/* This is too slow for JIT compilation */
#if 0
if (cfg->compile_aot && vreg_to_bb [var->dreg]) {
MonoInst *ins;
int def_index, call_index, ins_index;
gboolean spilled = FALSE;
def_index = -1;
call_index = -1;
ins_index = 0;
for (ins = vreg_to_bb [var->dreg]->code; ins; ins = ins->next) {
const char *spec = INS_INFO (ins->opcode);
if ((spec [MONO_INST_DEST] != ' ') && (ins->dreg == var->dreg))
def_index = ins_index;
if (((spec [MONO_INST_SRC1] != ' ') && (ins->sreg1 == var->dreg)) ||
((spec [MONO_INST_SRC2] != ' ') && (ins->sreg2 == var->dreg))) {
if (call_index > def_index) {
spilled = TRUE;
break;
}
}
if (MONO_IS_CALL (ins))
call_index = ins_index;
ins_index ++;
}
if (spilled)
break;
}
#endif
if (G_UNLIKELY (cfg->verbose_level > 2))
printf ("CONVERTED R%d(%d) TO VREG.\n", var->dreg, vmv->idx);
var->flags |= MONO_INST_IS_DEAD;
cfg->vreg_to_inst [var->dreg] = NULL;
}
break;
}
}
/*
* Compress the varinfo and vars tables so the liveness computation is faster and
* takes up less space.
*/
pos = 0;
for (i = 0; i < cfg->num_varinfo; ++i) {
MonoInst *var = cfg->varinfo [i];
if (pos < i && cfg->locals_start == i)
cfg->locals_start = pos;
if (!(var->flags & MONO_INST_IS_DEAD)) {
if (pos < i) {
cfg->varinfo [pos] = cfg->varinfo [i];
cfg->varinfo [pos]->inst_c0 = pos;
memcpy (&cfg->vars [pos], &cfg->vars [i], sizeof (MonoMethodVar));
cfg->vars [pos].idx = pos;
#if SIZEOF_REGISTER == 4
if (cfg->varinfo [pos]->type == STACK_I8) {
/* Modify the two component vars too */
MonoInst *var1;
var1 = get_vreg_to_inst (cfg, MONO_LVREG_LS (cfg->varinfo [pos]->dreg));
var1->inst_c0 = pos;
var1 = get_vreg_to_inst (cfg, MONO_LVREG_MS (cfg->varinfo [pos]->dreg));
var1->inst_c0 = pos;
}
#endif
}
pos ++;
}
}
cfg->num_varinfo = pos;
if (cfg->locals_start > cfg->num_varinfo)
cfg->locals_start = cfg->num_varinfo;
}
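/*
 * Illustrative example of the pass above: given
 *
 *   BB1: iconst      R20 <- [0]
 *   BB2: int_add_imm R20 <- R20 [1]
 *
 * R20 is seen in two bblocks, so a variable is created for it and it is
 * handled by the global register allocator; a vreg referenced in only one
 * bblock stays local and is assigned an hreg by the local allocator.
 */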
/*
* mono_allocate_gsharedvt_vars:
*
* Allocate variables with gsharedvt types to entries in the MonoGSharedVtMethodRuntimeInfo.entries array.
* Initialize cfg->gsharedvt_vreg_to_idx with the mapping between vregs and indexes.
*/
void
mono_allocate_gsharedvt_vars (MonoCompile *cfg)
{
int i;
cfg->gsharedvt_vreg_to_idx = (int *)mono_mempool_alloc0 (cfg->mempool, sizeof (int) * cfg->next_vreg);
for (i = 0; i < cfg->num_varinfo; ++i) {
MonoInst *ins = cfg->varinfo [i];
int idx;
if (mini_is_gsharedvt_variable_type (ins->inst_vtype)) {
if (i >= cfg->locals_start) {
/* Local */
idx = get_gsharedvt_info_slot (cfg, ins->inst_vtype, MONO_RGCTX_INFO_LOCAL_OFFSET);
cfg->gsharedvt_vreg_to_idx [ins->dreg] = idx + 1;
ins->opcode = OP_GSHAREDVT_LOCAL;
ins->inst_imm = idx;
} else {
/* Arg */
cfg->gsharedvt_vreg_to_idx [ins->dreg] = -1;
ins->opcode = OP_GSHAREDVT_ARG_REGOFFSET;
}
}
}
}
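/*
 * Sketch of the resulting scheme (assuming two gsharedvt locals): their
 * sizes are only known at runtime, so each gets an index into
 * MonoGSharedVtMethodRuntimeInfo.entries, and its address is recomputed as
 * gsharedvt_locals_var + entries [idx] when an LDADDR referencing it is
 * decomposed in mono_spill_global_vars () below.
 */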
/**
* mono_spill_global_vars:
*
* Generate spill code for variables which are not allocated to registers,
* and replace vregs with their allocated hregs. *need_local_opts is set to TRUE if
* code is generated which could be optimized by the local optimization passes.
*/
void
mono_spill_global_vars (MonoCompile *cfg, gboolean *need_local_opts)
{
MonoBasicBlock *bb;
char spec2 [16];
int orig_next_vreg;
guint32 *vreg_to_lvreg;
guint32 *lvregs;
guint32 i, lvregs_len, lvregs_size;
gboolean dest_has_lvreg = FALSE;
MonoStackType stacktypes [128];
MonoInst **live_range_start, **live_range_end;
MonoBasicBlock **live_range_start_bb, **live_range_end_bb;
*need_local_opts = FALSE;
memset (spec2, 0, sizeof (spec2));
/* FIXME: Move this function to mini.c */
stacktypes [(int)'i'] = STACK_PTR;
stacktypes [(int)'l'] = STACK_I8;
stacktypes [(int)'f'] = STACK_R8;
#ifdef MONO_ARCH_SIMD_INTRINSICS
stacktypes [(int)'x'] = STACK_VTYPE;
#endif
#if SIZEOF_REGISTER == 4
/* Create MonoInsts for longs */
for (i = 0; i < cfg->num_varinfo; i++) {
MonoInst *ins = cfg->varinfo [i];
if ((ins->opcode != OP_REGVAR) && !(ins->flags & MONO_INST_IS_DEAD)) {
switch (ins->type) {
case STACK_R8:
case STACK_I8: {
MonoInst *tree;
if (ins->type == STACK_R8 && !COMPILE_SOFT_FLOAT (cfg))
break;
g_assert (ins->opcode == OP_REGOFFSET);
tree = get_vreg_to_inst (cfg, MONO_LVREG_LS (ins->dreg));
g_assert (tree);
tree->opcode = OP_REGOFFSET;
tree->inst_basereg = ins->inst_basereg;
tree->inst_offset = ins->inst_offset + MINI_LS_WORD_OFFSET;
tree = get_vreg_to_inst (cfg, MONO_LVREG_MS (ins->dreg));
g_assert (tree);
tree->opcode = OP_REGOFFSET;
tree->inst_basereg = ins->inst_basereg;
tree->inst_offset = ins->inst_offset + MINI_MS_WORD_OFFSET;
break;
}
default:
break;
}
}
}
#endif
if (cfg->compute_gc_maps) {
/* registers need liveness info even for non-refs */
for (i = 0; i < cfg->num_varinfo; i++) {
MonoInst *ins = cfg->varinfo [i];
if (ins->opcode == OP_REGVAR)
ins->flags |= MONO_INST_GC_TRACK;
}
}
/* FIXME: widening and truncation */
/*
* As an optimization, when a variable allocated to the stack is first loaded into
* an lvreg, we will remember the lvreg and use it the next time instead of loading
* the variable again.
*/
orig_next_vreg = cfg->next_vreg;
vreg_to_lvreg = (guint32 *)mono_mempool_alloc0 (cfg->mempool, sizeof (guint32) * cfg->next_vreg);
lvregs_size = 1024;
lvregs = (guint32 *)mono_mempool_alloc (cfg->mempool, sizeof (guint32) * lvregs_size);
lvregs_len = 0;
/*
* These arrays contain the first and last instructions accessing a given
* variable.
* Since we emit bblocks in the same order we process them here, and we
* don't split live ranges, these will precisely describe the live range of
* the variable, i.e. the instruction range where a valid value can be found
* in the variable's location.
* The live range is computed using the liveness info computed by the liveness pass.
* We can't use vmv->range, since that is an abstract live range, and we need
* one which is instruction precise.
* FIXME: Variables used in out-of-line bblocks have a hole in their live range.
*/
/* FIXME: Only do this if debugging info is requested */
live_range_start = g_new0 (MonoInst*, cfg->next_vreg);
live_range_end = g_new0 (MonoInst*, cfg->next_vreg);
live_range_start_bb = g_new (MonoBasicBlock*, cfg->next_vreg);
live_range_end_bb = g_new (MonoBasicBlock*, cfg->next_vreg);
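/*
 * For example, if R20's value is first defined by the spill store emitted
 * for instruction I5 in BB2 and last read by I9 in BB3, then
 * live_range_start [20] == I5 and live_range_end [20] == I9, and the
 * OP_LIVERANGE_START/END markers emitted at the end of this function
 * bracket exactly that instruction range.
 */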
/* Add spill loads/stores */
for (bb = cfg->bb_entry; bb; bb = bb->next_bb) {
MonoInst *ins;
if (cfg->verbose_level > 2)
printf ("\nSPILL BLOCK %d:\n", bb->block_num);
/* Clear vreg_to_lvreg array */
for (i = 0; i < lvregs_len; i++)
vreg_to_lvreg [lvregs [i]] = 0;
lvregs_len = 0;
cfg->cbb = bb;
MONO_BB_FOR_EACH_INS (bb, ins) {
const char *spec = INS_INFO (ins->opcode);
int regtype, srcindex, sreg, tmp_reg, prev_dreg, num_sregs;
gboolean store, no_lvreg;
int sregs [MONO_MAX_SRC_REGS];
if (G_UNLIKELY (cfg->verbose_level > 2))
mono_print_ins (ins);
if (ins->opcode == OP_NOP)
continue;
/*
* We handle LDADDR here as well, since it can only be decomposed
* when variable addresses are known.
*/
if (ins->opcode == OP_LDADDR) {
MonoInst *var = (MonoInst *)ins->inst_p0;
if (var->opcode == OP_VTARG_ADDR) {
/* Happens on SPARC/S390 where vtypes are passed by reference */
MonoInst *vtaddr = var->inst_left;
if (vtaddr->opcode == OP_REGVAR) {
ins->opcode = OP_MOVE;
ins->sreg1 = vtaddr->dreg;
}
else if (var->inst_left->opcode == OP_REGOFFSET) {
ins->opcode = OP_LOAD_MEMBASE;
ins->inst_basereg = vtaddr->inst_basereg;
ins->inst_offset = vtaddr->inst_offset;
} else
NOT_IMPLEMENTED;
} else if (cfg->gsharedvt && cfg->gsharedvt_vreg_to_idx [var->dreg] < 0) {
/* gsharedvt arg passed by ref */
g_assert (var->opcode == OP_GSHAREDVT_ARG_REGOFFSET);
ins->opcode = OP_LOAD_MEMBASE;
ins->inst_basereg = var->inst_basereg;
ins->inst_offset = var->inst_offset;
} else if (cfg->gsharedvt && cfg->gsharedvt_vreg_to_idx [var->dreg]) {
MonoInst *load, *load2, *load3;
int idx = cfg->gsharedvt_vreg_to_idx [var->dreg] - 1;
int reg1, reg2, reg3;
MonoInst *info_var = cfg->gsharedvt_info_var;
MonoInst *locals_var = cfg->gsharedvt_locals_var;
/*
* gsharedvt local.
* Compute the address of the local as gsharedvt_locals_var + gsharedvt_info_var->locals_offsets [idx].
*/
g_assert (var->opcode == OP_GSHAREDVT_LOCAL);
g_assert (info_var);
g_assert (locals_var);
/* Mark the instruction used to compute the locals var as used */
cfg->gsharedvt_locals_var_ins = NULL;
/* Load the offset */
if (info_var->opcode == OP_REGOFFSET) {
reg1 = alloc_ireg (cfg);
NEW_LOAD_MEMBASE (cfg, load, OP_LOAD_MEMBASE, reg1, info_var->inst_basereg, info_var->inst_offset);
} else if (info_var->opcode == OP_REGVAR) {
load = NULL;
reg1 = info_var->dreg;
} else {
g_assert_not_reached ();
}
reg2 = alloc_ireg (cfg);
NEW_LOAD_MEMBASE (cfg, load2, OP_LOADI4_MEMBASE, reg2, reg1, MONO_STRUCT_OFFSET (MonoGSharedVtMethodRuntimeInfo, entries) + (idx * TARGET_SIZEOF_VOID_P));
/* Load the locals area address */
reg3 = alloc_ireg (cfg);
if (locals_var->opcode == OP_REGOFFSET) {
NEW_LOAD_MEMBASE (cfg, load3, OP_LOAD_MEMBASE, reg3, locals_var->inst_basereg, locals_var->inst_offset);
} else if (locals_var->opcode == OP_REGVAR) {
NEW_UNALU (cfg, load3, OP_MOVE, reg3, locals_var->dreg);
} else {
g_assert_not_reached ();
}
/* Compute the address */
ins->opcode = OP_PADD;
ins->sreg1 = reg3;
ins->sreg2 = reg2;
mono_bblock_insert_before_ins (bb, ins, load3);
mono_bblock_insert_before_ins (bb, load3, load2);
if (load)
mono_bblock_insert_before_ins (bb, load2, load);
} else {
g_assert (var->opcode == OP_REGOFFSET);
ins->opcode = OP_ADD_IMM;
ins->sreg1 = var->inst_basereg;
ins->inst_imm = var->inst_offset;
}
*need_local_opts = TRUE;
spec = INS_INFO (ins->opcode);
}
if (ins->opcode < MONO_CEE_LAST) {
mono_print_ins (ins);
g_assert_not_reached ();
}
/*
* Store opcodes have destbasereg in the dreg, but in reality it is a
* source register.
* FIXME:
*/
if (MONO_IS_STORE_MEMBASE (ins)) {
tmp_reg = ins->dreg;
ins->dreg = ins->sreg2;
ins->sreg2 = tmp_reg;
store = TRUE;
spec2 [MONO_INST_DEST] = ' ';
spec2 [MONO_INST_SRC1] = spec [MONO_INST_SRC1];
spec2 [MONO_INST_SRC2] = spec [MONO_INST_DEST];
spec2 [MONO_INST_SRC3] = ' ';
spec = spec2;
} else if (MONO_IS_STORE_MEMINDEX (ins))
g_assert_not_reached ();
else
store = FALSE;
no_lvreg = FALSE;
if (G_UNLIKELY (cfg->verbose_level > 2)) {
printf ("\t %.3s %d", spec, ins->dreg);
num_sregs = mono_inst_get_src_registers (ins, sregs);
for (srcindex = 0; srcindex < num_sregs; ++srcindex)
printf (" %d", sregs [srcindex]);
printf ("\n");
}
/***************/
/* DREG */
/***************/
regtype = spec [MONO_INST_DEST];
g_assert (((ins->dreg == -1) && (regtype == ' ')) || ((ins->dreg != -1) && (regtype != ' ')));
prev_dreg = -1;
int dreg_using_dest_to_membase_op = -1;
if ((ins->dreg != -1) && get_vreg_to_inst (cfg, ins->dreg)) {
MonoInst *var = get_vreg_to_inst (cfg, ins->dreg);
MonoInst *store_ins;
int store_opcode;
MonoInst *def_ins = ins;
int dreg = ins->dreg; /* The original vreg */
store_opcode = mono_type_to_store_membase (cfg, var->inst_vtype);
if (var->opcode == OP_REGVAR) {
ins->dreg = var->dreg;
} else if ((ins->dreg == ins->sreg1) && (spec [MONO_INST_DEST] == 'i') && (spec [MONO_INST_SRC1] == 'i') && !vreg_to_lvreg [ins->dreg] && (op_to_op_dest_membase (store_opcode, ins->opcode) != -1)) {
/*
* Instead of emitting a load+store, use a _membase opcode.
*/
g_assert (var->opcode == OP_REGOFFSET);
if (ins->opcode == OP_MOVE) {
NULLIFY_INS (ins);
def_ins = NULL;
} else {
dreg_using_dest_to_membase_op = ins->dreg;
ins->opcode = op_to_op_dest_membase (store_opcode, ins->opcode);
ins->inst_basereg = var->inst_basereg;
ins->inst_offset = var->inst_offset;
ins->dreg = -1;
}
spec = INS_INFO (ins->opcode);
} else {
guint32 lvreg;
g_assert (var->opcode == OP_REGOFFSET);
prev_dreg = ins->dreg;
/* Invalidate any previous lvreg for this vreg */
vreg_to_lvreg [ins->dreg] = 0;
lvreg = 0;
if (COMPILE_SOFT_FLOAT (cfg) && store_opcode == OP_STORER8_MEMBASE_REG) {
regtype = 'l';
store_opcode = OP_STOREI8_MEMBASE_REG;
}
ins->dreg = alloc_dreg (cfg, stacktypes [regtype]);
#if SIZEOF_REGISTER != 8
if (regtype == 'l') {
NEW_STORE_MEMBASE (cfg, store_ins, OP_STOREI4_MEMBASE_REG, var->inst_basereg, var->inst_offset + MINI_LS_WORD_OFFSET, MONO_LVREG_LS (ins->dreg));
mono_bblock_insert_after_ins (bb, ins, store_ins);
NEW_STORE_MEMBASE (cfg, store_ins, OP_STOREI4_MEMBASE_REG, var->inst_basereg, var->inst_offset + MINI_MS_WORD_OFFSET, MONO_LVREG_MS (ins->dreg));
mono_bblock_insert_after_ins (bb, ins, store_ins);
def_ins = store_ins;
}
else
#endif
{
g_assert (store_opcode != OP_STOREV_MEMBASE);
/* Try to fuse the store into the instruction itself */
/* FIXME: Add more instructions */
if (!lvreg && ((ins->opcode == OP_ICONST) || ((ins->opcode == OP_I8CONST) && (ins->inst_c0 == 0)))) {
ins->opcode = store_membase_reg_to_store_membase_imm (store_opcode);
ins->inst_imm = ins->inst_c0;
ins->inst_destbasereg = var->inst_basereg;
ins->inst_offset = var->inst_offset;
spec = INS_INFO (ins->opcode);
} else if (!lvreg && ((ins->opcode == OP_MOVE) || (ins->opcode == OP_FMOVE) || (ins->opcode == OP_LMOVE) || (ins->opcode == OP_RMOVE))) {
ins->opcode = store_opcode;
ins->inst_destbasereg = var->inst_basereg;
ins->inst_offset = var->inst_offset;
no_lvreg = TRUE;
tmp_reg = ins->dreg;
ins->dreg = ins->sreg2;
ins->sreg2 = tmp_reg;
store = TRUE;
spec2 [MONO_INST_DEST] = ' ';
spec2 [MONO_INST_SRC1] = spec [MONO_INST_SRC1];
spec2 [MONO_INST_SRC2] = spec [MONO_INST_DEST];
spec2 [MONO_INST_SRC3] = ' ';
spec = spec2;
} else if (!lvreg && (op_to_op_store_membase (store_opcode, ins->opcode) != -1)) {
// FIXME: The backends expect the base reg to be in inst_basereg
ins->opcode = op_to_op_store_membase (store_opcode, ins->opcode);
ins->dreg = -1;
ins->inst_basereg = var->inst_basereg;
ins->inst_offset = var->inst_offset;
spec = INS_INFO (ins->opcode);
} else {
/* printf ("INS: "); mono_print_ins (ins); */
/* Create a store instruction */
NEW_STORE_MEMBASE (cfg, store_ins, store_opcode, var->inst_basereg, var->inst_offset, ins->dreg);
/* Insert it after the instruction */
mono_bblock_insert_after_ins (bb, ins, store_ins);
def_ins = store_ins;
/*
* We can't assign ins->dreg to var->dreg here, since the
* sregs could use it. So set a flag, and do it after
* the sregs.
*/
if ((!cfg->backend->use_fpstack || ((store_opcode != OP_STORER8_MEMBASE_REG) && (store_opcode != OP_STORER4_MEMBASE_REG))) && !((var)->flags & (MONO_INST_VOLATILE|MONO_INST_INDIRECT)))
dest_has_lvreg = TRUE;
}
}
}
if (def_ins && !live_range_start [dreg]) {
live_range_start [dreg] = def_ins;
live_range_start_bb [dreg] = bb;
}
if (cfg->compute_gc_maps && def_ins && (var->flags & MONO_INST_GC_TRACK)) {
MonoInst *tmp;
MONO_INST_NEW (cfg, tmp, OP_GC_LIVENESS_DEF);
tmp->inst_c1 = dreg;
mono_bblock_insert_after_ins (bb, def_ins, tmp);
}
}
/************/
/* SREGS */
/************/
num_sregs = mono_inst_get_src_registers (ins, sregs);
for (srcindex = 0; srcindex < 3; ++srcindex) {
regtype = spec [MONO_INST_SRC1 + srcindex];
sreg = sregs [srcindex];
g_assert (((sreg == -1) && (regtype == ' ')) || ((sreg != -1) && (regtype != ' ')));
if ((sreg != -1) && get_vreg_to_inst (cfg, sreg)) {
MonoInst *var = get_vreg_to_inst (cfg, sreg);
MonoInst *use_ins = ins;
MonoInst *load_ins;
guint32 load_opcode;
if (var->opcode == OP_REGVAR) {
sregs [srcindex] = var->dreg;
//mono_inst_set_src_registers (ins, sregs);
live_range_end [sreg] = use_ins;
live_range_end_bb [sreg] = bb;
if (cfg->compute_gc_maps && var->dreg < orig_next_vreg && (var->flags & MONO_INST_GC_TRACK)) {
MonoInst *tmp;
MONO_INST_NEW (cfg, tmp, OP_GC_LIVENESS_USE);
/* var->dreg is a hreg */
tmp->inst_c1 = sreg;
mono_bblock_insert_after_ins (bb, ins, tmp);
}
continue;
}
g_assert (var->opcode == OP_REGOFFSET);
load_opcode = mono_type_to_load_membase (cfg, var->inst_vtype);
g_assert (load_opcode != OP_LOADV_MEMBASE);
if (vreg_to_lvreg [sreg]) {
g_assert (vreg_to_lvreg [sreg] != -1);
/* The variable is already loaded to an lvreg */
if (G_UNLIKELY (cfg->verbose_level > 2))
printf ("\t\tUse lvreg R%d for R%d.\n", vreg_to_lvreg [sreg], sreg);
sregs [srcindex] = vreg_to_lvreg [sreg];
//mono_inst_set_src_registers (ins, sregs);
continue;
}
/* Try to fuse the load into the instruction */
if ((srcindex == 0) && (op_to_op_src1_membase (cfg, load_opcode, ins->opcode) != -1)) {
ins->opcode = op_to_op_src1_membase (cfg, load_opcode, ins->opcode);
sregs [0] = var->inst_basereg;
//mono_inst_set_src_registers (ins, sregs);
ins->inst_offset = var->inst_offset;
} else if ((srcindex == 1) && (op_to_op_src2_membase (cfg, load_opcode, ins->opcode) != -1)) {
ins->opcode = op_to_op_src2_membase (cfg, load_opcode, ins->opcode);
sregs [1] = var->inst_basereg;
//mono_inst_set_src_registers (ins, sregs);
ins->inst_offset = var->inst_offset;
} else {
if (MONO_IS_REAL_MOVE (ins)) {
ins->opcode = OP_NOP;
sreg = ins->dreg;
} else {
//printf ("%d ", srcindex); mono_print_ins (ins);
sreg = alloc_dreg (cfg, stacktypes [regtype]);
if ((!cfg->backend->use_fpstack || ((load_opcode != OP_LOADR8_MEMBASE) && (load_opcode != OP_LOADR4_MEMBASE))) && !((var)->flags & (MONO_INST_VOLATILE|MONO_INST_INDIRECT)) && !no_lvreg) {
if (var->dreg == prev_dreg) {
/*
* sreg refers to the value loaded by the load
* emitted below, but we need to use ins->dreg
* since it refers to the store emitted earlier.
*/
sreg = ins->dreg;
}
g_assert (sreg != -1);
if (var->dreg == dreg_using_dest_to_membase_op) {
if (cfg->verbose_level > 2)
printf ("\tCan't cache R%d because it's part of a dreg dest_membase optimization\n", var->dreg);
} else {
vreg_to_lvreg [var->dreg] = sreg;
}
if (lvregs_len >= lvregs_size) {
guint32 *new_lvregs = mono_mempool_alloc0 (cfg->mempool, sizeof (guint32) * lvregs_size * 2);
memcpy (new_lvregs, lvregs, sizeof (guint32) * lvregs_size);
lvregs = new_lvregs;
lvregs_size *= 2;
}
lvregs [lvregs_len ++] = var->dreg;
}
}
sregs [srcindex] = sreg;
//mono_inst_set_src_registers (ins, sregs);
#if SIZEOF_REGISTER != 8
if (regtype == 'l') {
NEW_LOAD_MEMBASE (cfg, load_ins, OP_LOADI4_MEMBASE, MONO_LVREG_MS (sreg), var->inst_basereg, var->inst_offset + MINI_MS_WORD_OFFSET);
mono_bblock_insert_before_ins (bb, ins, load_ins);
NEW_LOAD_MEMBASE (cfg, load_ins, OP_LOADI4_MEMBASE, MONO_LVREG_LS (sreg), var->inst_basereg, var->inst_offset + MINI_LS_WORD_OFFSET);
mono_bblock_insert_before_ins (bb, ins, load_ins);
use_ins = load_ins;
}
else
#endif
{
#if SIZEOF_REGISTER == 4
g_assert (load_opcode != OP_LOADI8_MEMBASE);
#endif
NEW_LOAD_MEMBASE (cfg, load_ins, load_opcode, sreg, var->inst_basereg, var->inst_offset);
mono_bblock_insert_before_ins (bb, ins, load_ins);
use_ins = load_ins;
}
if (cfg->verbose_level > 2)
mono_print_ins_index (0, use_ins);
}
if (var->dreg < orig_next_vreg) {
live_range_end [var->dreg] = use_ins;
live_range_end_bb [var->dreg] = bb;
}
if (cfg->compute_gc_maps && var->dreg < orig_next_vreg && (var->flags & MONO_INST_GC_TRACK)) {
MonoInst *tmp;
MONO_INST_NEW (cfg, tmp, OP_GC_LIVENESS_USE);
tmp->inst_c1 = var->dreg;
mono_bblock_insert_after_ins (bb, ins, tmp);
}
}
}
mono_inst_set_src_registers (ins, sregs);
if (dest_has_lvreg) {
g_assert (ins->dreg != -1);
vreg_to_lvreg [prev_dreg] = ins->dreg;
if (lvregs_len >= lvregs_size) {
guint32 *new_lvregs = mono_mempool_alloc0 (cfg->mempool, sizeof (guint32) * lvregs_size * 2);
memcpy (new_lvregs, lvregs, sizeof (guint32) * lvregs_size);
lvregs = new_lvregs;
lvregs_size *= 2;
}
lvregs [lvregs_len ++] = prev_dreg;
dest_has_lvreg = FALSE;
}
if (store) {
tmp_reg = ins->dreg;
ins->dreg = ins->sreg2;
ins->sreg2 = tmp_reg;
}
if (MONO_IS_CALL (ins)) {
/* Clear vreg_to_lvreg array */
for (i = 0; i < lvregs_len; i++)
vreg_to_lvreg [lvregs [i]] = 0;
lvregs_len = 0;
} else if (ins->opcode == OP_NOP) {
ins->dreg = -1;
MONO_INST_NULLIFY_SREGS (ins);
}
if (cfg->verbose_level > 2)
mono_print_ins_index (1, ins);
}
/* Extend the live range based on the liveness info */
if (cfg->compute_precise_live_ranges && bb->live_out_set && bb->code) {
for (i = 0; i < cfg->num_varinfo; i ++) {
MonoMethodVar *vi = MONO_VARINFO (cfg, i);
if (vreg_is_volatile (cfg, vi->vreg))
/* The liveness info is incomplete */
continue;
if (mono_bitset_test_fast (bb->live_in_set, i) && !live_range_start [vi->vreg]) {
/* Live from at least the first ins of this bb */
live_range_start [vi->vreg] = bb->code;
live_range_start_bb [vi->vreg] = bb;
}
if (mono_bitset_test_fast (bb->live_out_set, i)) {
/* Live at least until the last ins of this bb */
live_range_end [vi->vreg] = bb->last_ins;
live_range_end_bb [vi->vreg] = bb;
}
}
}
}
/*
* Emit LIVERANGE_START/LIVERANGE_END opcodes; the backend will implement them
* by storing the current native offset into MonoMethodVar->live_range_start/end.
*/
if (cfg->compute_precise_live_ranges && cfg->comp_done & MONO_COMP_LIVENESS) {
for (i = 0; i < cfg->num_varinfo; ++i) {
int vreg = MONO_VARINFO (cfg, i)->vreg;
MonoInst *ins;
if (live_range_start [vreg]) {
MONO_INST_NEW (cfg, ins, OP_LIVERANGE_START);
ins->inst_c0 = i;
ins->inst_c1 = vreg;
mono_bblock_insert_after_ins (live_range_start_bb [vreg], live_range_start [vreg], ins);
}
if (live_range_end [vreg]) {
MONO_INST_NEW (cfg, ins, OP_LIVERANGE_END);
ins->inst_c0 = i;
ins->inst_c1 = vreg;
if (live_range_end [vreg] == live_range_end_bb [vreg]->last_ins)
mono_add_ins_to_end (live_range_end_bb [vreg], ins);
else
mono_bblock_insert_after_ins (live_range_end_bb [vreg], live_range_end [vreg], ins);
}
}
}
if (cfg->gsharedvt_locals_var_ins) {
/* Nullify if unused */
cfg->gsharedvt_locals_var_ins->opcode = OP_PCONST;
cfg->gsharedvt_locals_var_ins->inst_imm = 0;
}
g_free (live_range_start);
g_free (live_range_end);
g_free (live_range_start_bb);
g_free (live_range_end_bb);
}
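/*
 * End-to-end sketch of the rewriting above (illustrative vregs/offsets):
 * with a global variable V spilled to [fp + 0x10],
 *
 *   int_add R20 <- V R11
 *
 * becomes
 *
 *   loadi4_membase R30 <- [fp + 0x10]   ; load inserted before the ins
 *   int_add        R20 <- R30 R11
 *
 * and R30 is recorded in vreg_to_lvreg so that further uses of V in the
 * same bblock (up to the next call) reuse it instead of reloading.
 */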
/**
* FIXME:
* - use 'iadd' instead of 'int_add'
* - handling ovf opcodes: decompose in method_to_ir.
* - unify iregs/fregs
* -> partly done, the missing parts are:
* - a more complete unification would involve unifying the hregs as well, so
* code wouldn't need if (fp) all over the place. but that would mean the hregs
* would no longer map to the machine hregs, so the code generators would need to
* be modified. Also, on ia64 for example, niregs + nfregs > 256 -> bitmasks
* wouldn't work any more. Duplicating the code in mono_local_regalloc () into
* fp/non-fp branches speeds it up by about 15%.
* - use sext/zext opcodes instead of shifts
* - add OP_ICALL
* - get rid of TEMPLOADs if possible and use vregs instead
* - clean up usage of OP_P/OP_ opcodes
* - cleanup usage of DUMMY_USE
* - cleanup the setting of ins->type for MonoInst's which are pushed on the
* stack
* - set the stack type and allocate a dreg in the EMIT_NEW macros
* - get rid of all the <foo>2 stuff when the new JIT is ready.
* - make sure handle_stack_args () is called before the branch is emitted
* - when the new IR is done, get rid of all unused stuff
* - COMPARE/BEQ as separate instructions or unify them ?
* - keeping them separate allows specialized compare instructions like
* compare_imm, compare_membase
* - most back ends unify fp compare+branch, fp compare+ceq
* - integrate mono_save_args into inline_method
* - get rid of the empty bblocks created by MONO_EMIT_NEW_BRANCH_BLOCK2
* - handle long shift opts on 32 bit platforms somehow: they require
* 3 sregs (2 for arg1 and 1 for arg2)
* - make byref a 'normal' type.
* - use vregs for bb->out_stacks if possible, handle_global_vreg will make them a
* variable if needed.
* - do not start a new IL level bblock when cfg->cbb is changed by a function call
* like inline_method.
* - remove inlining restrictions
* - fix LNEG and enable cfold of INEG
* - generalize x86 optimizations like ldelema as a peephole optimization
* - add store_mem_imm for amd64
* - optimize the loading of the interruption flag in the managed->native wrappers
* - avoid special handling of OP_NOP in passes
* - move code inserting instructions into one function/macro.
* - try a coalescing phase after liveness analysis
* - add float -> vreg conversion + local optimizations on !x86
* - figure out how to handle decomposed branches during optimizations, ie.
* compare+branch, op_jump_table+op_br etc.
* - promote RuntimeXHandles to vregs
* - vtype cleanups:
* - add a NEW_VARLOADA_VREG macro
* - the vtype optimizations are blocked by the LDADDR opcodes generated for
* accessing vtype fields.
* - get rid of I8CONST on 64 bit platforms
* - dealing with the increase in code size due to branches created during opcode
* decomposition:
* - use extended basic blocks
* - all parts of the JIT
* - handle_global_vregs () && local regalloc
* - avoid introducing global vregs during decomposition, like 'vtable' in isinst
* - sources of increase in code size:
* - vtypes
* - long compares
* - isinst and castclass
* - lvregs not allocated to global registers even if used multiple times
* - call cctors outside the JIT, to make -v output more readable and JIT timings more
* meaningful.
* - check for fp stack leakage in other opcodes too. (-> 'exceptions' optimization)
* - add all micro optimizations from the old JIT
* - put tree optimizations into the deadce pass
* - decompose op_start_handler/op_endfilter/op_endfinally earlier using an arch
* specific function.
* - unify the float comparison opcodes with the other comparison opcodes, i.e.
* fcompare + branchCC.
* - create a helper function for allocating a stack slot, taking into account
* MONO_CFG_HAS_SPILLUP.
* - merge r68207.
* - optimize mono_regstate2_alloc_int/float.
* - fix the pessimistic handling of variables accessed in exception handler blocks.
* - need to write a tree optimization pass, but the creation of trees is difficult, i.e.
* parts of the tree could be separated by other instructions, killing the tree
* arguments, or stores killing loads etc. Also, should we fold loads into other
* instructions if the result of the load is used multiple times ?
* - make the REM_IMM optimization in mini-x86.c arch-independent.
* - LAST MERGE: 108395.
* - when returning vtypes in registers, generate IR and append it to the end of the
* last bb instead of doing it in the epilog.
* - change the store opcodes so they use sreg1 instead of dreg to store the base register.
*/
/*
NOTES
-----
- When to decompose opcodes:
- earlier: this makes some optimizations hard to implement, since the low level IR
no longer contains the necessary information. But it is easier to do.
- later: harder to implement, enables more optimizations.
- Branches inside bblocks:
- created when decomposing complex opcodes.
- branches to another bblock: harmless, but not tracked by the branch
optimizations, so need to branch to a label at the start of the bblock.
- branches to inside the same bblock: very problematic, trips up the local
     reg allocator. Can be fixed by splitting the current bblock, but that is a
complex operation, since some local vregs can become global vregs etc.
- Local/global vregs:
- local vregs: temporary vregs used inside one bblock. Assigned to hregs by the
local register allocator.
- global vregs: used in more than one bblock. Have an associated MonoMethodVar
structure, created by mono_create_var (). Assigned to hregs or the stack by
the global register allocator.
- When to do optimizations like alu->alu_imm:
- earlier -> saves work later on since the IR will be smaller/simpler
- later -> can work on more instructions
- Handling of valuetypes:
- When a vtype is pushed on the stack, a new temporary is created, an
instruction computing its address (LDADDR) is emitted and pushed on
the stack. Need to optimize cases when the vtype is used immediately as in
argument passing, stloc etc.
- Instead of the to_end stuff in the old JIT, simply call the function handling
the values on the stack before emitting the last instruction of the bb.
*/
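/*
 Illustration of the vtype handling described above (a rough sketch with
 made-up vreg numbers and pseudo opcode names, not verbatim JIT output):
 for 'ldloc V; stloc W' where V and W are vtype locals, the IR built here
 is roughly

     R10 <- LDADDR V_TMP       ; temporary vtype + its address on the stack
     [W] <- vtype copy [R10]   ; the store copies through that address

 It is these LDADDR opcodes which block the vtype optimizations mentioned
 in the TODO list above.
 */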
#else /* !DISABLE_JIT */
MONO_EMPTY_SOURCE_FILE (method_to_ir);
#endif /* !DISABLE_JIT */
| 1 |
dotnet/runtime | 66,211 | [mono] Remove SkipVerification support from the runtime | CAS support was removed in .NET Core. This allows us removing a bunch of code that is unused, e.g. the dependency on libiconv. | akoeplinger | 2022-03-04T19:47:04Z | 2022-03-06T13:44:33Z | b463b1630dbf1be5b013208a9fa73e1ecd6c774c | be629f49a350d526de2c65981294734cee420b90 | [mono] Remove SkipVerification support from the runtime. CAS support was removed in .NET Core. This allows us removing a bunch of code that is unused, e.g. the dependency on libiconv. | ./src/mono/mono/mini/mini.c | /**
* \file
* The new Mono code generator.
*
* Authors:
* Paolo Molaro ([email protected])
* Dietmar Maurer ([email protected])
*
* Copyright 2002-2003 Ximian, Inc.
* Copyright 2003-2010 Novell, Inc.
* Copyright 2011 Xamarin, Inc (http://www.xamarin.com)
* Licensed under the MIT license. See LICENSE file in the project root for full license information.
*/
#include <config.h>
#ifdef HAVE_ALLOCA_H
#include <alloca.h>
#endif
#ifdef HAVE_UNISTD_H
#include <unistd.h>
#endif
#include <math.h>
#ifdef HAVE_SYS_TIME_H
#include <sys/time.h>
#endif
#include <mono/utils/memcheck.h>
#include <mono/metadata/assembly.h>
#include <mono/metadata/loader.h>
#include <mono/metadata/tabledefs.h>
#include <mono/metadata/class.h>
#include <mono/metadata/object.h>
#include <mono/metadata/tokentype.h>
#include <mono/metadata/tabledefs.h>
#include <mono/metadata/threads.h>
#include <mono/metadata/appdomain.h>
#include <mono/metadata/debug-helpers.h>
#include <mono/metadata/profiler-private.h>
#include <mono/metadata/mono-config.h>
#include <mono/metadata/environment.h>
#include <mono/metadata/mono-debug.h>
#include <mono/metadata/gc-internals.h>
#include <mono/metadata/threads-types.h>
#include <mono/metadata/verify.h>
#include <mono/metadata/mempool-internals.h>
#include <mono/metadata/runtime.h>
#include <mono/metadata/attrdefs.h>
#include <mono/utils/mono-math.h>
#include <mono/utils/mono-compiler.h>
#include <mono/utils/mono-counters.h>
#include <mono/utils/mono-error-internals.h>
#include <mono/utils/mono-logger-internals.h>
#include <mono/utils/mono-mmap.h>
#include <mono/utils/mono-path.h>
#include <mono/utils/mono-tls.h>
#include <mono/utils/mono-hwcap.h>
#include <mono/utils/dtrace.h>
#include <mono/utils/mono-threads.h>
#include <mono/utils/mono-threads-coop.h>
#include <mono/utils/unlocked.h>
#include <mono/utils/mono-time.h>
#include "mini.h"
#include "seq-points.h"
#include <string.h>
#include <ctype.h>
#include "trace.h"
#include "ir-emit.h"
#include "jit-icalls.h"
#include "mini-gc.h"
#include "llvm-runtime.h"
#include "mini-llvm.h"
#include "lldb.h"
#include "aot-runtime.h"
#include "mini-runtime.h"
MonoCallSpec *mono_jit_trace_calls;
MonoMethodDesc *mono_inject_async_exc_method;
int mono_inject_async_exc_pos;
MonoMethodDesc *mono_break_at_bb_method;
int mono_break_at_bb_bb_num;
gboolean mono_do_x86_stack_align = TRUE;
/* Counters */
static guint32 discarded_code;
static gint64 discarded_jit_time;
#define mono_jit_lock() mono_os_mutex_lock (&jit_mutex)
#define mono_jit_unlock() mono_os_mutex_unlock (&jit_mutex)
static mono_mutex_t jit_mutex;
#ifndef DISABLE_JIT
static guint32 jinfo_try_holes_size;
static MonoBackend *current_backend;
gpointer
mono_realloc_native_code (MonoCompile *cfg)
{
return g_realloc (cfg->native_code, cfg->code_size);
}
typedef struct {
MonoExceptionClause *clause;
MonoBasicBlock *basic_block;
int start_offset;
} TryBlockHole;
/**
* mono_emit_unwind_op:
*
* Add an unwind op with the given parameters for the list of unwind ops stored in
* cfg->unwind_ops.
*/
void
mono_emit_unwind_op (MonoCompile *cfg, int when, int tag, int reg, int val)
{
MonoUnwindOp *op = (MonoUnwindOp *)mono_mempool_alloc0 (cfg->mempool, sizeof (MonoUnwindOp));
op->op = tag;
op->reg = reg;
op->val = val;
op->when = when;
cfg->unwind_ops = g_slist_append_mempool (cfg->mempool, cfg->unwind_ops, op);
if (cfg->verbose_level > 1) {
switch (tag) {
case DW_CFA_def_cfa:
printf ("CFA: [%x] def_cfa: %s+0x%x\n", when, mono_arch_regname (reg), val);
break;
case DW_CFA_def_cfa_register:
printf ("CFA: [%x] def_cfa_reg: %s\n", when, mono_arch_regname (reg));
break;
case DW_CFA_def_cfa_offset:
printf ("CFA: [%x] def_cfa_offset: 0x%x\n", when, val);
break;
case DW_CFA_offset:
printf ("CFA: [%x] offset: %s at cfa-0x%x\n", when, mono_arch_regname (reg), -val);
break;
}
}
}
/**
* mono_unlink_bblock:
*
* Unlink two basic blocks.
*/
void
mono_unlink_bblock (MonoCompile *cfg, MonoBasicBlock *from, MonoBasicBlock* to)
{
int i, pos;
gboolean found;
found = FALSE;
for (i = 0; i < from->out_count; ++i) {
if (to == from->out_bb [i]) {
found = TRUE;
break;
}
}
if (found) {
pos = 0;
for (i = 0; i < from->out_count; ++i) {
if (from->out_bb [i] != to)
from->out_bb [pos ++] = from->out_bb [i];
}
g_assert (pos == from->out_count - 1);
from->out_count--;
}
found = FALSE;
for (i = 0; i < to->in_count; ++i) {
if (from == to->in_bb [i]) {
found = TRUE;
break;
}
}
if (found) {
pos = 0;
for (i = 0; i < to->in_count; ++i) {
if (to->in_bb [i] != from)
to->in_bb [pos ++] = to->in_bb [i];
}
g_assert (pos == to->in_count - 1);
to->in_count--;
}
}
/*
* mono_bblocks_linked:
*
 * Return whether BB1 and BB2 are linked in the CFG.
*/
gboolean
mono_bblocks_linked (MonoBasicBlock *bb1, MonoBasicBlock *bb2)
{
int i;
for (i = 0; i < bb1->out_count; ++i) {
if (bb1->out_bb [i] == bb2)
return TRUE;
}
return FALSE;
}
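/*
 * mono_find_block_region_notry:
 *
 *   Return the region for OFFSET when it falls inside the filter or the
 * handler of an exception clause, encoded as
 * ((clause_index + 1) << 8) | region_type | clause_flags, or -1 otherwise.
 * Try blocks are ignored, hence the _notry suffix.
 */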
static int
mono_find_block_region_notry (MonoCompile *cfg, int offset)
{
MonoMethodHeader *header = cfg->header;
MonoExceptionClause *clause;
int i;
for (i = 0; i < header->num_clauses; ++i) {
clause = &header->clauses [i];
if ((clause->flags == MONO_EXCEPTION_CLAUSE_FILTER) && (offset >= clause->data.filter_offset) &&
(offset < (clause->handler_offset)))
return ((i + 1) << 8) | MONO_REGION_FILTER | clause->flags;
if (MONO_OFFSET_IN_HANDLER (clause, offset)) {
if (clause->flags == MONO_EXCEPTION_CLAUSE_FINALLY)
return ((i + 1) << 8) | MONO_REGION_FINALLY | clause->flags;
else if (clause->flags == MONO_EXCEPTION_CLAUSE_FAULT)
return ((i + 1) << 8) | MONO_REGION_FAULT | clause->flags;
else
return ((i + 1) << 8) | MONO_REGION_CATCH | clause->flags;
}
}
return -1;
}
/*
* mono_get_block_region_notry:
*
* Return the region corresponding to REGION, ignoring try clauses nested inside
* finally clauses.
*/
int
mono_get_block_region_notry (MonoCompile *cfg, int region)
{
if ((region & (0xf << 4)) == MONO_REGION_TRY) {
MonoMethodHeader *header = cfg->header;
/*
* This can happen if a try clause is nested inside a finally clause.
*/
int clause_index = (region >> 8) - 1;
g_assert (clause_index >= 0 && clause_index < header->num_clauses);
region = mono_find_block_region_notry (cfg, header->clauses [clause_index].try_offset);
}
return region;
}
MonoInst *
mono_find_spvar_for_region (MonoCompile *cfg, int region)
{
region = mono_get_block_region_notry (cfg, region);
return (MonoInst *)g_hash_table_lookup (cfg->spvars, GINT_TO_POINTER (region));
}
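/*
 * df_visit:
 *
 *   Visit the successors of START depth-first, assigning depth-first
 * numbers to the visited bblocks and collecting them into ARRAY.
 */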
static void
df_visit (MonoBasicBlock *start, int *dfn, MonoBasicBlock **array)
{
int i;
array [*dfn] = start;
/* g_print ("visit %d at %p (BB%ld)\n", *dfn, start->cil_code, start->block_num); */
for (i = 0; i < start->out_count; ++i) {
if (start->out_bb [i]->dfn)
continue;
(*dfn)++;
start->out_bb [i]->dfn = *dfn;
start->out_bb [i]->df_parent = start;
array [*dfn] = start->out_bb [i];
df_visit (start->out_bb [i], dfn, array);
}
}
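/*
 * mono_reverse_branch_op:
 *
 *   Return the conditional branch opcode with the inverted condition of
 * OPCODE, for the CEE_B/OP_FB/OP_LB/OP_IB branch opcode families.
 */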
guint32
mono_reverse_branch_op (guint32 opcode)
{
static const int reverse_map [] = {
CEE_BNE_UN, CEE_BLT, CEE_BLE, CEE_BGT, CEE_BGE,
CEE_BEQ, CEE_BLT_UN, CEE_BLE_UN, CEE_BGT_UN, CEE_BGE_UN
};
static const int reverse_fmap [] = {
OP_FBNE_UN, OP_FBLT, OP_FBLE, OP_FBGT, OP_FBGE,
OP_FBEQ, OP_FBLT_UN, OP_FBLE_UN, OP_FBGT_UN, OP_FBGE_UN
};
static const int reverse_lmap [] = {
OP_LBNE_UN, OP_LBLT, OP_LBLE, OP_LBGT, OP_LBGE,
OP_LBEQ, OP_LBLT_UN, OP_LBLE_UN, OP_LBGT_UN, OP_LBGE_UN
};
static const int reverse_imap [] = {
OP_IBNE_UN, OP_IBLT, OP_IBLE, OP_IBGT, OP_IBGE,
OP_IBEQ, OP_IBLT_UN, OP_IBLE_UN, OP_IBGT_UN, OP_IBGE_UN
};
if (opcode >= CEE_BEQ && opcode <= CEE_BLT_UN) {
opcode = reverse_map [opcode - CEE_BEQ];
} else if (opcode >= OP_FBEQ && opcode <= OP_FBLT_UN) {
opcode = reverse_fmap [opcode - OP_FBEQ];
} else if (opcode >= OP_LBEQ && opcode <= OP_LBLT_UN) {
opcode = reverse_lmap [opcode - OP_LBEQ];
} else if (opcode >= OP_IBEQ && opcode <= OP_IBLT_UN) {
opcode = reverse_imap [opcode - OP_IBEQ];
} else
g_assert_not_reached ();
return opcode;
}
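/*
 * mono_type_to_store_membase:
 *
 *   Return the OP_STORE*_MEMBASE_REG opcode suited for storing a value of
 * type TYPE through a base register + displacement address.
 */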
guint
mono_type_to_store_membase (MonoCompile *cfg, MonoType *type)
{
type = mini_get_underlying_type (type);
handle_enum:
switch (type->type) {
case MONO_TYPE_I1:
case MONO_TYPE_U1:
return OP_STOREI1_MEMBASE_REG;
case MONO_TYPE_I2:
case MONO_TYPE_U2:
return OP_STOREI2_MEMBASE_REG;
case MONO_TYPE_I4:
case MONO_TYPE_U4:
return OP_STOREI4_MEMBASE_REG;
case MONO_TYPE_I:
case MONO_TYPE_U:
case MONO_TYPE_PTR:
case MONO_TYPE_FNPTR:
return OP_STORE_MEMBASE_REG;
case MONO_TYPE_CLASS:
case MONO_TYPE_STRING:
case MONO_TYPE_OBJECT:
case MONO_TYPE_SZARRAY:
case MONO_TYPE_ARRAY:
return OP_STORE_MEMBASE_REG;
case MONO_TYPE_I8:
case MONO_TYPE_U8:
return OP_STOREI8_MEMBASE_REG;
case MONO_TYPE_R4:
return OP_STORER4_MEMBASE_REG;
case MONO_TYPE_R8:
return OP_STORER8_MEMBASE_REG;
case MONO_TYPE_VALUETYPE:
if (m_class_is_enumtype (type->data.klass)) {
type = mono_class_enum_basetype_internal (type->data.klass);
goto handle_enum;
}
if (MONO_CLASS_IS_SIMD (cfg, mono_class_from_mono_type_internal (type)))
return OP_STOREX_MEMBASE;
return OP_STOREV_MEMBASE;
case MONO_TYPE_TYPEDBYREF:
return OP_STOREV_MEMBASE;
case MONO_TYPE_GENERICINST:
if (MONO_CLASS_IS_SIMD (cfg, mono_class_from_mono_type_internal (type)))
return OP_STOREX_MEMBASE;
type = m_class_get_byval_arg (type->data.generic_class->container_class);
goto handle_enum;
case MONO_TYPE_VAR:
case MONO_TYPE_MVAR:
g_assert (mini_type_var_is_vt (type));
return OP_STOREV_MEMBASE;
default:
g_error ("unknown type 0x%02x in type_to_store_membase", type->type);
}
return -1;
}
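/*
 * mono_type_to_load_membase:
 *
 *   Return the OP_LOAD*_MEMBASE opcode suited for loading a value of type
 * TYPE from a base register + displacement address.
 */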
guint
mono_type_to_load_membase (MonoCompile *cfg, MonoType *type)
{
type = mini_get_underlying_type (type);
switch (type->type) {
case MONO_TYPE_I1:
return OP_LOADI1_MEMBASE;
case MONO_TYPE_U1:
return OP_LOADU1_MEMBASE;
case MONO_TYPE_I2:
return OP_LOADI2_MEMBASE;
case MONO_TYPE_U2:
return OP_LOADU2_MEMBASE;
case MONO_TYPE_I4:
return OP_LOADI4_MEMBASE;
case MONO_TYPE_U4:
return OP_LOADU4_MEMBASE;
case MONO_TYPE_I:
case MONO_TYPE_U:
case MONO_TYPE_PTR:
case MONO_TYPE_FNPTR:
return OP_LOAD_MEMBASE;
case MONO_TYPE_CLASS:
case MONO_TYPE_STRING:
case MONO_TYPE_OBJECT:
case MONO_TYPE_SZARRAY:
case MONO_TYPE_ARRAY:
return OP_LOAD_MEMBASE;
case MONO_TYPE_I8:
case MONO_TYPE_U8:
return OP_LOADI8_MEMBASE;
case MONO_TYPE_R4:
return OP_LOADR4_MEMBASE;
case MONO_TYPE_R8:
return OP_LOADR8_MEMBASE;
case MONO_TYPE_VALUETYPE:
if (MONO_CLASS_IS_SIMD (cfg, mono_class_from_mono_type_internal (type)))
return OP_LOADX_MEMBASE;
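		/* Fall through */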
case MONO_TYPE_TYPEDBYREF:
return OP_LOADV_MEMBASE;
case MONO_TYPE_GENERICINST:
if (MONO_CLASS_IS_SIMD (cfg, mono_class_from_mono_type_internal (type)))
return OP_LOADX_MEMBASE;
if (mono_type_generic_inst_is_valuetype (type))
return OP_LOADV_MEMBASE;
else
return OP_LOAD_MEMBASE;
break;
case MONO_TYPE_VAR:
case MONO_TYPE_MVAR:
g_assert (cfg->gshared);
g_assert (mini_type_var_is_vt (type));
return OP_LOADV_MEMBASE;
default:
g_error ("unknown type 0x%02x in type_to_load_membase", type->type);
}
return -1;
}
guint
mini_type_to_stind (MonoCompile* cfg, MonoType *type)
{
type = mini_get_underlying_type (type);
if (cfg->gshared && !m_type_is_byref (type) && (type->type == MONO_TYPE_VAR || type->type == MONO_TYPE_MVAR)) {
g_assert (mini_type_var_is_vt (type));
return CEE_STOBJ;
}
return mono_type_to_stind (type);
}
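/*
 * mono_op_imm_to_op:
 *
 *   Return the non-immediate variant of the OP_..._IMM opcode OPCODE, or
 * -1 if there is none.
 */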
int
mono_op_imm_to_op (int opcode)
{
switch (opcode) {
case OP_ADD_IMM:
#if SIZEOF_REGISTER == 4
return OP_IADD;
#else
return OP_LADD;
#endif
case OP_IADD_IMM:
return OP_IADD;
case OP_LADD_IMM:
return OP_LADD;
case OP_ISUB_IMM:
return OP_ISUB;
case OP_LSUB_IMM:
return OP_LSUB;
case OP_IMUL_IMM:
return OP_IMUL;
case OP_LMUL_IMM:
return OP_LMUL;
case OP_AND_IMM:
#if SIZEOF_REGISTER == 4
return OP_IAND;
#else
return OP_LAND;
#endif
case OP_OR_IMM:
#if SIZEOF_REGISTER == 4
return OP_IOR;
#else
return OP_LOR;
#endif
case OP_XOR_IMM:
#if SIZEOF_REGISTER == 4
return OP_IXOR;
#else
return OP_LXOR;
#endif
case OP_IAND_IMM:
return OP_IAND;
case OP_LAND_IMM:
return OP_LAND;
case OP_IOR_IMM:
return OP_IOR;
case OP_LOR_IMM:
return OP_LOR;
case OP_IXOR_IMM:
return OP_IXOR;
case OP_LXOR_IMM:
return OP_LXOR;
case OP_ISHL_IMM:
return OP_ISHL;
case OP_LSHL_IMM:
return OP_LSHL;
case OP_ISHR_IMM:
return OP_ISHR;
case OP_LSHR_IMM:
return OP_LSHR;
case OP_ISHR_UN_IMM:
return OP_ISHR_UN;
case OP_LSHR_UN_IMM:
return OP_LSHR_UN;
case OP_IDIV_IMM:
return OP_IDIV;
case OP_LDIV_IMM:
return OP_LDIV;
case OP_IDIV_UN_IMM:
return OP_IDIV_UN;
case OP_LDIV_UN_IMM:
return OP_LDIV_UN;
case OP_IREM_UN_IMM:
return OP_IREM_UN;
case OP_LREM_UN_IMM:
return OP_LREM_UN;
case OP_IREM_IMM:
return OP_IREM;
case OP_LREM_IMM:
return OP_LREM;
case OP_DIV_IMM:
#if SIZEOF_REGISTER == 4
return OP_IDIV;
#else
return OP_LDIV;
#endif
case OP_REM_IMM:
#if SIZEOF_REGISTER == 4
return OP_IREM;
#else
return OP_LREM;
#endif
case OP_ADDCC_IMM:
return OP_ADDCC;
case OP_ADC_IMM:
return OP_ADC;
case OP_SUBCC_IMM:
return OP_SUBCC;
case OP_SBB_IMM:
return OP_SBB;
case OP_IADC_IMM:
return OP_IADC;
case OP_ISBB_IMM:
return OP_ISBB;
case OP_COMPARE_IMM:
return OP_COMPARE;
case OP_ICOMPARE_IMM:
return OP_ICOMPARE;
case OP_LOCALLOC_IMM:
return OP_LOCALLOC;
}
return -1;
}
/*
* mono_decompose_op_imm:
*
* Replace the OP_.._IMM INS with its non IMM variant.
*/
void
mono_decompose_op_imm (MonoCompile *cfg, MonoBasicBlock *bb, MonoInst *ins)
{
int opcode2 = mono_op_imm_to_op (ins->opcode);
MonoInst *temp;
guint32 dreg;
const char *spec = INS_INFO (ins->opcode);
if (spec [MONO_INST_SRC2] == 'l') {
dreg = mono_alloc_lreg (cfg);
/* Load the 64bit constant using decomposed ops */
MONO_INST_NEW (cfg, temp, OP_ICONST);
temp->inst_c0 = ins_get_l_low (ins);
temp->dreg = MONO_LVREG_LS (dreg);
mono_bblock_insert_before_ins (bb, ins, temp);
MONO_INST_NEW (cfg, temp, OP_ICONST);
temp->inst_c0 = ins_get_l_high (ins);
temp->dreg = MONO_LVREG_MS (dreg);
} else {
dreg = mono_alloc_ireg (cfg);
MONO_INST_NEW (cfg, temp, OP_ICONST);
temp->inst_c0 = ins->inst_imm;
temp->dreg = dreg;
}
mono_bblock_insert_before_ins (bb, ins, temp);
if (opcode2 == -1)
g_error ("mono_op_imm_to_op failed for %s\n", mono_inst_name (ins->opcode));
ins->opcode = opcode2;
if (ins->opcode == OP_LOCALLOC)
ins->sreg1 = dreg;
else
ins->sreg2 = dreg;
bb->max_vreg = MAX (bb->max_vreg, cfg->next_vreg);
}
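/*
 * set_vreg_to_inst:
 *
 *   Record INST as the variable backing VREG in cfg->vreg_to_inst, growing
 * the table as needed.
 */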
static void
set_vreg_to_inst (MonoCompile *cfg, int vreg, MonoInst *inst)
{
if (vreg >= cfg->vreg_to_inst_len) {
MonoInst **tmp = cfg->vreg_to_inst;
int size = cfg->vreg_to_inst_len;
while (vreg >= cfg->vreg_to_inst_len)
cfg->vreg_to_inst_len = cfg->vreg_to_inst_len ? cfg->vreg_to_inst_len * 2 : 32;
cfg->vreg_to_inst = (MonoInst **)mono_mempool_alloc0 (cfg->mempool, sizeof (MonoInst*) * cfg->vreg_to_inst_len);
if (size)
memcpy (cfg->vreg_to_inst, tmp, size * sizeof (MonoInst*));
}
cfg->vreg_to_inst [vreg] = inst;
}
#define mono_type_is_long(type) (!m_type_is_byref (type) && ((mono_type_get_underlying_type (type)->type == MONO_TYPE_I8) || (mono_type_get_underlying_type (type)->type == MONO_TYPE_U8)))
#define mono_type_is_float(type) (!m_type_is_byref (type) && (((type)->type == MONO_TYPE_R8) || ((type)->type == MONO_TYPE_R4)))
MonoInst*
mono_compile_create_var_for_vreg (MonoCompile *cfg, MonoType *type, int opcode, int vreg)
{
MonoInst *inst;
int num = cfg->num_varinfo;
gboolean regpair;
type = mini_get_underlying_type (type);
if ((num + 1) >= cfg->varinfo_count) {
int orig_count = cfg->varinfo_count;
cfg->varinfo_count = cfg->varinfo_count ? (cfg->varinfo_count * 2) : 32;
cfg->varinfo = (MonoInst **)g_realloc (cfg->varinfo, sizeof (MonoInst*) * cfg->varinfo_count);
cfg->vars = (MonoMethodVar *)g_realloc (cfg->vars, sizeof (MonoMethodVar) * cfg->varinfo_count);
memset (&cfg->vars [orig_count], 0, (cfg->varinfo_count - orig_count) * sizeof (MonoMethodVar));
}
cfg->stat_allocate_var++;
MONO_INST_NEW (cfg, inst, opcode);
inst->inst_c0 = num;
inst->inst_vtype = type;
inst->klass = mono_class_from_mono_type_internal (type);
mini_type_to_eval_stack_type (cfg, type, inst);
/* if set to 1 the variable is native */
inst->backend.is_pinvoke = 0;
inst->dreg = vreg;
if (mono_class_has_failure (inst->klass))
mono_cfg_set_exception (cfg, MONO_EXCEPTION_TYPE_LOAD);
if (cfg->compute_gc_maps) {
if (m_type_is_byref (type)) {
mono_mark_vreg_as_mp (cfg, vreg);
} else {
if ((MONO_TYPE_ISSTRUCT (type) && m_class_has_references (inst->klass)) || mini_type_is_reference (type)) {
inst->flags |= MONO_INST_GC_TRACK;
mono_mark_vreg_as_ref (cfg, vreg);
}
}
}
#ifdef TARGET_WASM
if (mini_type_is_reference (type))
mono_mark_vreg_as_ref (cfg, vreg);
#endif
cfg->varinfo [num] = inst;
cfg->vars [num].idx = num;
cfg->vars [num].vreg = vreg;
cfg->vars [num].range.first_use.pos.bid = 0xffff;
cfg->vars [num].reg = -1;
if (vreg != -1)
set_vreg_to_inst (cfg, vreg, inst);
#if SIZEOF_REGISTER == 4
if (mono_arch_is_soft_float ()) {
regpair = mono_type_is_long (type) || mono_type_is_float (type);
} else {
regpair = mono_type_is_long (type);
}
#else
regpair = FALSE;
#endif
if (regpair) {
MonoInst *tree;
/*
* These two cannot be allocated using create_var_for_vreg since that would
		 * put them into the cfg->varinfo array, confusing many parts of the JIT.
*/
/*
* Set flags to VOLATILE so SSA skips it.
*/
if (cfg->verbose_level >= 4) {
printf (" Create LVAR R%d (R%d, R%d)\n", inst->dreg, MONO_LVREG_LS (inst->dreg), MONO_LVREG_MS (inst->dreg));
}
if (mono_arch_is_soft_float () && cfg->opt & MONO_OPT_SSA) {
if (mono_type_is_float (type))
inst->flags = MONO_INST_VOLATILE;
}
/* Allocate a dummy MonoInst for the first vreg */
MONO_INST_NEW (cfg, tree, OP_LOCAL);
tree->dreg = MONO_LVREG_LS (inst->dreg);
if (cfg->opt & MONO_OPT_SSA)
tree->flags = MONO_INST_VOLATILE;
tree->inst_c0 = num;
tree->type = STACK_I4;
tree->inst_vtype = mono_get_int32_type ();
tree->klass = mono_class_from_mono_type_internal (tree->inst_vtype);
set_vreg_to_inst (cfg, MONO_LVREG_LS (inst->dreg), tree);
/* Allocate a dummy MonoInst for the second vreg */
MONO_INST_NEW (cfg, tree, OP_LOCAL);
tree->dreg = MONO_LVREG_MS (inst->dreg);
if (cfg->opt & MONO_OPT_SSA)
tree->flags = MONO_INST_VOLATILE;
tree->inst_c0 = num;
tree->type = STACK_I4;
tree->inst_vtype = mono_get_int32_type ();
tree->klass = mono_class_from_mono_type_internal (tree->inst_vtype);
set_vreg_to_inst (cfg, MONO_LVREG_MS (inst->dreg), tree);
}
cfg->num_varinfo++;
if (cfg->verbose_level > 2)
g_print ("created temp %d (R%d) of type %s\n", num, vreg, mono_type_get_name (type));
return inst;
}
MonoInst*
mono_compile_create_var (MonoCompile *cfg, MonoType *type, int opcode)
{
int dreg;
if (type->type == MONO_TYPE_VALUETYPE && !m_type_is_byref (type)) {
MonoClass *klass = mono_class_from_mono_type_internal (type);
if (m_class_is_enumtype (klass) && m_class_get_image (klass) == mono_get_corlib () && !strcmp (m_class_get_name (klass), "StackCrawlMark")) {
if (!(cfg->method->flags & METHOD_ATTRIBUTE_REQSECOBJ))
g_error ("Method '%s' which contains a StackCrawlMark local variable must be decorated with [System.Security.DynamicSecurityMethod].", mono_method_get_full_name (cfg->method));
}
}
type = mini_get_underlying_type (type);
if (mono_type_is_long (type))
dreg = mono_alloc_dreg (cfg, STACK_I8);
else if (mono_arch_is_soft_float () && mono_type_is_float (type))
dreg = mono_alloc_dreg (cfg, STACK_R8);
else
/* All the others are unified */
dreg = mono_alloc_preg (cfg);
return mono_compile_create_var_for_vreg (cfg, type, opcode, dreg);
}
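/*
 * mini_get_int_to_float_spill_area:
 *
 *   Return the variable used as a spill area by the int->float conversion
 * code, creating it on first use. Only needed on x86, NULL elsewhere.
 */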
MonoInst*
mini_get_int_to_float_spill_area (MonoCompile *cfg)
{
#ifdef TARGET_X86
if (!cfg->iconv_raw_var) {
cfg->iconv_raw_var = mono_compile_create_var (cfg, mono_get_int32_type (), OP_LOCAL);
cfg->iconv_raw_var->flags |= MONO_INST_VOLATILE; /*FIXME, use the don't regalloc flag*/
}
return cfg->iconv_raw_var;
#else
return NULL;
#endif
}
void
mono_mark_vreg_as_ref (MonoCompile *cfg, int vreg)
{
if (vreg >= cfg->vreg_is_ref_len) {
gboolean *tmp = cfg->vreg_is_ref;
int size = cfg->vreg_is_ref_len;
while (vreg >= cfg->vreg_is_ref_len)
cfg->vreg_is_ref_len = cfg->vreg_is_ref_len ? cfg->vreg_is_ref_len * 2 : 32;
cfg->vreg_is_ref = (gboolean *)mono_mempool_alloc0 (cfg->mempool, sizeof (gboolean) * cfg->vreg_is_ref_len);
if (size)
memcpy (cfg->vreg_is_ref, tmp, size * sizeof (gboolean));
}
cfg->vreg_is_ref [vreg] = TRUE;
}
void
mono_mark_vreg_as_mp (MonoCompile *cfg, int vreg)
{
if (vreg >= cfg->vreg_is_mp_len) {
gboolean *tmp = cfg->vreg_is_mp;
int size = cfg->vreg_is_mp_len;
while (vreg >= cfg->vreg_is_mp_len)
cfg->vreg_is_mp_len = cfg->vreg_is_mp_len ? cfg->vreg_is_mp_len * 2 : 32;
cfg->vreg_is_mp = (gboolean *)mono_mempool_alloc0 (cfg->mempool, sizeof (gboolean) * cfg->vreg_is_mp_len);
if (size)
memcpy (cfg->vreg_is_mp, tmp, size * sizeof (gboolean));
}
cfg->vreg_is_mp [vreg] = TRUE;
}
static MonoType*
type_from_stack_type (MonoInst *ins)
{
switch (ins->type) {
case STACK_I4: return mono_get_int32_type ();
case STACK_I8: return m_class_get_byval_arg (mono_defaults.int64_class);
case STACK_PTR: return mono_get_int_type ();
case STACK_R8: return m_class_get_byval_arg (mono_defaults.double_class);
case STACK_MP:
/*
	 * this 'if' used to be commented out without any specific reason, but
	 * commenting it out breaks #80235
*/
if (ins->klass)
return m_class_get_this_arg (ins->klass);
else
return mono_class_get_byref_type (mono_defaults.object_class);
case STACK_OBJ:
/* ins->klass may not be set for ldnull.
	 * Also, if we have a boxed valuetype, we want an object class,
* not the valuetype class
*/
if (ins->klass && !m_class_is_valuetype (ins->klass))
return m_class_get_byval_arg (ins->klass);
return mono_get_object_type ();
case STACK_VTYPE: return m_class_get_byval_arg (ins->klass);
default:
g_error ("stack type %d to montype not handled\n", ins->type);
}
return NULL;
}
MonoType*
mono_type_from_stack_type (MonoInst *ins)
{
return type_from_stack_type (ins);
}
/*
* mono_add_ins_to_end:
*
* Same as MONO_ADD_INS, but add INST before any branches at the end of BB.
*/
void
mono_add_ins_to_end (MonoBasicBlock *bb, MonoInst *inst)
{
int opcode;
if (!bb->code) {
MONO_ADD_INS (bb, inst);
return;
}
switch (bb->last_ins->opcode) {
case OP_BR:
case OP_BR_REG:
case CEE_BEQ:
case CEE_BGE:
case CEE_BGT:
case CEE_BLE:
case CEE_BLT:
case CEE_BNE_UN:
case CEE_BGE_UN:
case CEE_BGT_UN:
case CEE_BLE_UN:
case CEE_BLT_UN:
case OP_SWITCH:
mono_bblock_insert_before_ins (bb, bb->last_ins, inst);
break;
default:
if (MONO_IS_COND_BRANCH_OP (bb->last_ins)) {
/* Need to insert the ins before the compare */
if (bb->code == bb->last_ins) {
mono_bblock_insert_before_ins (bb, bb->last_ins, inst);
return;
}
if (bb->code->next == bb->last_ins) {
/* Only two instructions */
opcode = bb->code->opcode;
if ((opcode == OP_COMPARE) || (opcode == OP_COMPARE_IMM) || (opcode == OP_ICOMPARE) || (opcode == OP_ICOMPARE_IMM) || (opcode == OP_FCOMPARE) || (opcode == OP_LCOMPARE) || (opcode == OP_LCOMPARE_IMM) || (opcode == OP_RCOMPARE)) {
/* NEW IR */
mono_bblock_insert_before_ins (bb, bb->code, inst);
} else {
mono_bblock_insert_before_ins (bb, bb->last_ins, inst);
}
} else {
opcode = bb->last_ins->prev->opcode;
if ((opcode == OP_COMPARE) || (opcode == OP_COMPARE_IMM) || (opcode == OP_ICOMPARE) || (opcode == OP_ICOMPARE_IMM) || (opcode == OP_FCOMPARE) || (opcode == OP_LCOMPARE) || (opcode == OP_LCOMPARE_IMM) || (opcode == OP_RCOMPARE)) {
/* NEW IR */
mono_bblock_insert_before_ins (bb, bb->last_ins->prev, inst);
} else {
mono_bblock_insert_before_ins (bb, bb->last_ins, inst);
}
}
}
else
MONO_ADD_INS (bb, inst);
break;
}
}
void
mono_create_jump_table (MonoCompile *cfg, MonoInst *label, MonoBasicBlock **bbs, int num_blocks)
{
MonoJumpInfo *ji = (MonoJumpInfo *)mono_mempool_alloc (cfg->mempool, sizeof (MonoJumpInfo));
MonoJumpInfoBBTable *table;
table = (MonoJumpInfoBBTable *)mono_mempool_alloc (cfg->mempool, sizeof (MonoJumpInfoBBTable));
table->table = bbs;
table->table_size = num_blocks;
ji->ip.label = label;
ji->type = MONO_PATCH_INFO_SWITCH;
ji->data.table = table;
ji->next = cfg->patch_info;
cfg->patch_info = ji;
}
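/*
 * mini_assembly_can_skip_verification:
 *
 *   Return TRUE if the assembly of METHOD has the SkipVerification
 * permission. Always FALSE for corlib and for wrappers other than
 * dynamic method wrappers.
 */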
gboolean
mini_assembly_can_skip_verification (MonoMethod *method)
{
MonoAssembly *assembly = m_class_get_image (method->klass)->assembly;
if (method->wrapper_type != MONO_WRAPPER_NONE && method->wrapper_type != MONO_WRAPPER_DYNAMIC_METHOD)
return FALSE;
if (assembly->image == mono_defaults.corlib)
return FALSE;
return mono_assembly_has_skip_verification (assembly);
}
typedef struct {
MonoClass *vtype;
GList *active, *inactive;
GSList *slots;
} StackSlotInfo;
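/*
 * compare_by_interval_start_pos_func:
 *
 *   Comparison function ordering MonoMethodVar's by the start position of
 * their liveness interval; variables without a live range sort last.
 */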
static gint
compare_by_interval_start_pos_func (gconstpointer a, gconstpointer b)
{
MonoMethodVar *v1 = (MonoMethodVar*)a;
MonoMethodVar *v2 = (MonoMethodVar*)b;
if (v1 == v2)
return 0;
else if (v1->interval->range && v2->interval->range)
return v1->interval->range->from - v2->interval->range->from;
else if (v1->interval->range)
return -1;
else
return 1;
}
#if 0
#define LSCAN_DEBUG(a) do { a; } while (0)
#else
#define LSCAN_DEBUG(a) do { } while (0) /* non-empty to avoid warning */
#endif
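/*
 * mono_allocate_stack_slots2:
 *
 *   Variant of mono_allocate_stack_slots () driven by the precise liveness
 * intervals computed by the linear scan register allocator: the stack slot
 * of an interval is recycled once the interval expires.
 */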
static gint32*
mono_allocate_stack_slots2 (MonoCompile *cfg, gboolean backward, guint32 *stack_size, guint32 *stack_align)
{
int i, slot, offset, size;
guint32 align;
MonoMethodVar *vmv;
MonoInst *inst;
gint32 *offsets;
GList *vars = NULL, *l, *unhandled;
StackSlotInfo *scalar_stack_slots, *vtype_stack_slots, *slot_info;
MonoType *t;
int nvtypes;
int vtype_stack_slots_size = 256;
gboolean reuse_slot;
LSCAN_DEBUG (printf ("Allocate Stack Slots 2 for %s:\n", mono_method_full_name (cfg->method, TRUE)));
scalar_stack_slots = (StackSlotInfo *)mono_mempool_alloc0 (cfg->mempool, sizeof (StackSlotInfo) * MONO_TYPE_PINNED);
vtype_stack_slots = NULL;
nvtypes = 0;
offsets = (gint32 *)mono_mempool_alloc (cfg->mempool, sizeof (gint32) * cfg->num_varinfo);
for (i = 0; i < cfg->num_varinfo; ++i)
offsets [i] = -1;
for (i = cfg->locals_start; i < cfg->num_varinfo; i++) {
inst = cfg->varinfo [i];
vmv = MONO_VARINFO (cfg, i);
if ((inst->flags & MONO_INST_IS_DEAD) || inst->opcode == OP_REGVAR || inst->opcode == OP_REGOFFSET)
continue;
vars = g_list_prepend (vars, vmv);
}
vars = g_list_sort (vars, compare_by_interval_start_pos_func);
/* Sanity check */
/*
i = 0;
for (unhandled = vars; unhandled; unhandled = unhandled->next) {
MonoMethodVar *current = unhandled->data;
if (current->interval->range) {
g_assert (current->interval->range->from >= i);
i = current->interval->range->from;
}
}
*/
offset = 0;
*stack_align = 0;
for (unhandled = vars; unhandled; unhandled = unhandled->next) {
MonoMethodVar *current = (MonoMethodVar *)unhandled->data;
vmv = current;
inst = cfg->varinfo [vmv->idx];
t = mono_type_get_underlying_type (inst->inst_vtype);
if (cfg->gsharedvt && mini_is_gsharedvt_variable_type (t))
continue;
		/* inst->backend.is_pinvoke indicates native-sized value types; this is used by the
* pinvoke wrappers when they call functions returning structures */
if (inst->backend.is_pinvoke && MONO_TYPE_ISSTRUCT (t) && t->type != MONO_TYPE_TYPEDBYREF) {
size = mono_class_native_size (mono_class_from_mono_type_internal (t), &align);
}
else {
int ialign;
size = mini_type_stack_size (t, &ialign);
align = ialign;
if (MONO_CLASS_IS_SIMD (cfg, mono_class_from_mono_type_internal (t)))
align = 16;
}
reuse_slot = TRUE;
if (cfg->disable_reuse_stack_slots)
reuse_slot = FALSE;
t = mini_get_underlying_type (t);
switch (t->type) {
case MONO_TYPE_GENERICINST:
if (!mono_type_generic_inst_is_valuetype (t)) {
slot_info = &scalar_stack_slots [t->type];
break;
}
/* Fall through */
case MONO_TYPE_VALUETYPE:
if (!vtype_stack_slots)
vtype_stack_slots = (StackSlotInfo *)mono_mempool_alloc0 (cfg->mempool, sizeof (StackSlotInfo) * vtype_stack_slots_size);
for (i = 0; i < nvtypes; ++i)
if (t->data.klass == vtype_stack_slots [i].vtype)
break;
if (i < nvtypes)
slot_info = &vtype_stack_slots [i];
else {
if (nvtypes == vtype_stack_slots_size) {
int new_slots_size = vtype_stack_slots_size * 2;
StackSlotInfo* new_slots = (StackSlotInfo *)mono_mempool_alloc0 (cfg->mempool, sizeof (StackSlotInfo) * new_slots_size);
memcpy (new_slots, vtype_stack_slots, sizeof (StackSlotInfo) * vtype_stack_slots_size);
vtype_stack_slots = new_slots;
vtype_stack_slots_size = new_slots_size;
}
vtype_stack_slots [nvtypes].vtype = t->data.klass;
slot_info = &vtype_stack_slots [nvtypes];
nvtypes ++;
}
if (cfg->disable_reuse_ref_stack_slots)
reuse_slot = FALSE;
break;
case MONO_TYPE_PTR:
case MONO_TYPE_I:
case MONO_TYPE_U:
#if TARGET_SIZEOF_VOID_P == 4
case MONO_TYPE_I4:
#else
case MONO_TYPE_I8:
#endif
if (cfg->disable_ref_noref_stack_slot_share) {
slot_info = &scalar_stack_slots [MONO_TYPE_I];
break;
}
/* Fall through */
case MONO_TYPE_CLASS:
case MONO_TYPE_OBJECT:
case MONO_TYPE_ARRAY:
case MONO_TYPE_SZARRAY:
case MONO_TYPE_STRING:
/* Share non-float stack slots of the same size */
slot_info = &scalar_stack_slots [MONO_TYPE_CLASS];
if (cfg->disable_reuse_ref_stack_slots)
reuse_slot = FALSE;
break;
default:
slot_info = &scalar_stack_slots [t->type];
}
slot = 0xffffff;
if (cfg->comp_done & MONO_COMP_LIVENESS) {
int pos;
gboolean changed;
//printf ("START %2d %08x %08x\n", vmv->idx, vmv->range.first_use.abs_pos, vmv->range.last_use.abs_pos);
if (!current->interval->range) {
if (inst->flags & (MONO_INST_VOLATILE|MONO_INST_INDIRECT))
pos = ~0;
else {
/* Dead */
inst->flags |= MONO_INST_IS_DEAD;
continue;
}
}
else
pos = current->interval->range->from;
LSCAN_DEBUG (printf ("process R%d ", inst->dreg));
if (current->interval->range)
LSCAN_DEBUG (mono_linterval_print (current->interval));
LSCAN_DEBUG (printf ("\n"));
/* Check for intervals in active which expired or inactive */
changed = TRUE;
/* FIXME: Optimize this */
while (changed) {
changed = FALSE;
for (l = slot_info->active; l != NULL; l = l->next) {
MonoMethodVar *v = (MonoMethodVar*)l->data;
if (v->interval->last_range->to < pos) {
slot_info->active = g_list_delete_link (slot_info->active, l);
slot_info->slots = g_slist_prepend_mempool (cfg->mempool, slot_info->slots, GINT_TO_POINTER (offsets [v->idx]));
LSCAN_DEBUG (printf ("Interval R%d has expired, adding 0x%x to slots\n", cfg->varinfo [v->idx]->dreg, offsets [v->idx]));
changed = TRUE;
break;
}
else if (!mono_linterval_covers (v->interval, pos)) {
slot_info->inactive = g_list_append (slot_info->inactive, v);
slot_info->active = g_list_delete_link (slot_info->active, l);
LSCAN_DEBUG (printf ("Interval R%d became inactive\n", cfg->varinfo [v->idx]->dreg));
changed = TRUE;
break;
}
}
}
/* Check for intervals in inactive which expired or active */
changed = TRUE;
/* FIXME: Optimize this */
while (changed) {
changed = FALSE;
for (l = slot_info->inactive; l != NULL; l = l->next) {
MonoMethodVar *v = (MonoMethodVar*)l->data;
if (v->interval->last_range->to < pos) {
slot_info->inactive = g_list_delete_link (slot_info->inactive, l);
// FIXME: Enabling this seems to cause impossible to debug crashes
//slot_info->slots = g_slist_prepend_mempool (cfg->mempool, slot_info->slots, GINT_TO_POINTER (offsets [v->idx]));
LSCAN_DEBUG (printf ("Interval R%d has expired, adding 0x%x to slots\n", cfg->varinfo [v->idx]->dreg, offsets [v->idx]));
changed = TRUE;
break;
}
else if (mono_linterval_covers (v->interval, pos)) {
slot_info->active = g_list_append (slot_info->active, v);
slot_info->inactive = g_list_delete_link (slot_info->inactive, l);
LSCAN_DEBUG (printf ("\tInterval R%d became active\n", cfg->varinfo [v->idx]->dreg));
changed = TRUE;
break;
}
}
}
/*
* This also handles the case when the variable is used in an
* exception region, as liveness info is not computed there.
*/
/*
* FIXME: All valuetypes are marked as INDIRECT because of LDADDR
* opcodes.
*/
if (! (inst->flags & (MONO_INST_VOLATILE|MONO_INST_INDIRECT))) {
if (slot_info->slots) {
slot = GPOINTER_TO_INT (slot_info->slots->data);
slot_info->slots = slot_info->slots->next;
}
/* FIXME: We might want to consider the inactive intervals as well if slot_info->slots is empty */
slot_info->active = mono_varlist_insert_sorted (cfg, slot_info->active, vmv, TRUE);
}
}
#if 0
{
static int count = 0;
count ++;
if (count == atoi (g_getenv ("COUNT3")))
printf ("LAST: %s\n", mono_method_full_name (cfg->method, TRUE));
if (count > atoi (g_getenv ("COUNT3")))
slot = 0xffffff;
else
mono_print_ins (inst);
}
#endif
LSCAN_DEBUG (printf ("R%d %s -> 0x%x\n", inst->dreg, mono_type_full_name (t), slot));
if (inst->flags & MONO_INST_LMF) {
size = MONO_ABI_SIZEOF (MonoLMF);
align = sizeof (target_mgreg_t);
reuse_slot = FALSE;
}
if (!reuse_slot)
slot = 0xffffff;
if (slot == 0xffffff) {
/*
			 * Always allocate valuetypes aligned to sizeof (target_mgreg_t) to allow more
			 * efficient copying (and to work around the fact that OP_MEMCPY
			 * and OP_MEMSET ignore alignment).
*/
if (MONO_TYPE_ISSTRUCT (t)) {
align = MAX (align, sizeof (target_mgreg_t));
align = MAX (align, mono_class_min_align (mono_class_from_mono_type_internal (t)));
}
if (backward) {
offset += size;
offset += align - 1;
offset &= ~(align - 1);
slot = offset;
}
else {
offset += align - 1;
offset &= ~(align - 1);
slot = offset;
offset += size;
}
if (*stack_align == 0)
*stack_align = align;
}
offsets [vmv->idx] = slot;
}
g_list_free (vars);
for (i = 0; i < MONO_TYPE_PINNED; ++i) {
if (scalar_stack_slots [i].active)
g_list_free (scalar_stack_slots [i].active);
}
for (i = 0; i < nvtypes; ++i) {
if (vtype_stack_slots [i].active)
g_list_free (vtype_stack_slots [i].active);
}
cfg->stat_locals_stack_size += offset;
*stack_size = offset;
return offsets;
}
/*
* mono_allocate_stack_slots:
*
* Allocate stack slots for all non register allocated variables using a
* linear scan algorithm.
* Returns: an array of stack offsets.
* STACK_SIZE is set to the amount of stack space needed.
* STACK_ALIGN is set to the alignment needed by the locals area.
*/
gint32*
mono_allocate_stack_slots (MonoCompile *cfg, gboolean backward, guint32 *stack_size, guint32 *stack_align)
{
int i, slot, offset, size;
guint32 align;
MonoMethodVar *vmv;
MonoInst *inst;
gint32 *offsets;
GList *vars = NULL, *l;
StackSlotInfo *scalar_stack_slots, *vtype_stack_slots, *slot_info;
MonoType *t;
int nvtypes;
int vtype_stack_slots_size = 256;
gboolean reuse_slot;
if ((cfg->num_varinfo > 0) && MONO_VARINFO (cfg, 0)->interval)
return mono_allocate_stack_slots2 (cfg, backward, stack_size, stack_align);
scalar_stack_slots = (StackSlotInfo *)mono_mempool_alloc0 (cfg->mempool, sizeof (StackSlotInfo) * MONO_TYPE_PINNED);
vtype_stack_slots = NULL;
nvtypes = 0;
offsets = (gint32 *)mono_mempool_alloc (cfg->mempool, sizeof (gint32) * cfg->num_varinfo);
for (i = 0; i < cfg->num_varinfo; ++i)
offsets [i] = -1;
for (i = cfg->locals_start; i < cfg->num_varinfo; i++) {
inst = cfg->varinfo [i];
vmv = MONO_VARINFO (cfg, i);
if ((inst->flags & MONO_INST_IS_DEAD) || inst->opcode == OP_REGVAR || inst->opcode == OP_REGOFFSET)
continue;
vars = g_list_prepend (vars, vmv);
}
vars = mono_varlist_sort (cfg, vars, 0);
offset = 0;
*stack_align = sizeof (target_mgreg_t);
for (l = vars; l; l = l->next) {
vmv = (MonoMethodVar *)l->data;
inst = cfg->varinfo [vmv->idx];
t = mono_type_get_underlying_type (inst->inst_vtype);
if (cfg->gsharedvt && mini_is_gsharedvt_variable_type (t))
continue;
		/* inst->backend.is_pinvoke indicates native-sized value types; this is used by the
* pinvoke wrappers when they call functions returning structures */
if (inst->backend.is_pinvoke && MONO_TYPE_ISSTRUCT (t) && t->type != MONO_TYPE_TYPEDBYREF) {
size = mono_class_native_size (mono_class_from_mono_type_internal (t), &align);
} else {
int ialign;
size = mini_type_stack_size (t, &ialign);
align = ialign;
if (mono_class_has_failure (mono_class_from_mono_type_internal (t)))
mono_cfg_set_exception (cfg, MONO_EXCEPTION_TYPE_LOAD);
if (MONO_CLASS_IS_SIMD (cfg, mono_class_from_mono_type_internal (t)))
align = 16;
}
reuse_slot = TRUE;
if (cfg->disable_reuse_stack_slots)
reuse_slot = FALSE;
t = mini_get_underlying_type (t);
switch (t->type) {
case MONO_TYPE_GENERICINST:
if (!mono_type_generic_inst_is_valuetype (t)) {
slot_info = &scalar_stack_slots [t->type];
break;
}
/* Fall through */
case MONO_TYPE_VALUETYPE:
if (!vtype_stack_slots)
vtype_stack_slots = (StackSlotInfo *)mono_mempool_alloc0 (cfg->mempool, sizeof (StackSlotInfo) * vtype_stack_slots_size);
for (i = 0; i < nvtypes; ++i)
if (t->data.klass == vtype_stack_slots [i].vtype)
break;
if (i < nvtypes)
slot_info = &vtype_stack_slots [i];
else {
if (nvtypes == vtype_stack_slots_size) {
int new_slots_size = vtype_stack_slots_size * 2;
StackSlotInfo* new_slots = (StackSlotInfo *)mono_mempool_alloc0 (cfg->mempool, sizeof (StackSlotInfo) * new_slots_size);
memcpy (new_slots, vtype_stack_slots, sizeof (StackSlotInfo) * vtype_stack_slots_size);
vtype_stack_slots = new_slots;
vtype_stack_slots_size = new_slots_size;
}
vtype_stack_slots [nvtypes].vtype = t->data.klass;
slot_info = &vtype_stack_slots [nvtypes];
nvtypes ++;
}
if (cfg->disable_reuse_ref_stack_slots)
reuse_slot = FALSE;
break;
case MONO_TYPE_PTR:
case MONO_TYPE_I:
case MONO_TYPE_U:
#if TARGET_SIZEOF_VOID_P == 4
case MONO_TYPE_I4:
#else
case MONO_TYPE_I8:
#endif
if (cfg->disable_ref_noref_stack_slot_share) {
slot_info = &scalar_stack_slots [MONO_TYPE_I];
break;
}
/* Fall through */
case MONO_TYPE_CLASS:
case MONO_TYPE_OBJECT:
case MONO_TYPE_ARRAY:
case MONO_TYPE_SZARRAY:
case MONO_TYPE_STRING:
/* Share non-float stack slots of the same size */
slot_info = &scalar_stack_slots [MONO_TYPE_CLASS];
if (cfg->disable_reuse_ref_stack_slots)
reuse_slot = FALSE;
break;
case MONO_TYPE_VAR:
case MONO_TYPE_MVAR:
slot_info = &scalar_stack_slots [t->type];
break;
default:
slot_info = &scalar_stack_slots [t->type];
break;
}
slot = 0xffffff;
if (cfg->comp_done & MONO_COMP_LIVENESS) {
//printf ("START %2d %08x %08x\n", vmv->idx, vmv->range.first_use.abs_pos, vmv->range.last_use.abs_pos);
/* expire old intervals in active */
while (slot_info->active) {
MonoMethodVar *amv = (MonoMethodVar *)slot_info->active->data;
if (amv->range.last_use.abs_pos > vmv->range.first_use.abs_pos)
break;
//printf ("EXPIR %2d %08x %08x C%d R%d\n", amv->idx, amv->range.first_use.abs_pos, amv->range.last_use.abs_pos, amv->spill_costs, amv->reg);
slot_info->active = g_list_delete_link (slot_info->active, slot_info->active);
slot_info->slots = g_slist_prepend_mempool (cfg->mempool, slot_info->slots, GINT_TO_POINTER (offsets [amv->idx]));
}
/*
* This also handles the case when the variable is used in an
* exception region, as liveness info is not computed there.
*/
/*
* FIXME: All valuetypes are marked as INDIRECT because of LDADDR
* opcodes.
*/
if (! (inst->flags & (MONO_INST_VOLATILE|MONO_INST_INDIRECT))) {
if (slot_info->slots) {
slot = GPOINTER_TO_INT (slot_info->slots->data);
slot_info->slots = slot_info->slots->next;
}
slot_info->active = mono_varlist_insert_sorted (cfg, slot_info->active, vmv, TRUE);
}
}
#if 0
{
static int count = 0;
count ++;
if (count == atoi (g_getenv ("COUNT")))
printf ("LAST: %s\n", mono_method_full_name (cfg->method, TRUE));
if (count > atoi (g_getenv ("COUNT")))
slot = 0xffffff;
else
mono_print_ins (inst);
}
#endif
if (inst->flags & MONO_INST_LMF) {
/*
* This variable represents a MonoLMF structure, which has no corresponding
* CLR type, so hard-code its size/alignment.
*/
size = MONO_ABI_SIZEOF (MonoLMF);
align = sizeof (target_mgreg_t);
reuse_slot = FALSE;
}
if (!reuse_slot)
slot = 0xffffff;
if (slot == 0xffffff) {
/*
			 * Always allocate valuetypes aligned to sizeof (target_mgreg_t) to allow more
			 * efficient copying (and to work around the fact that OP_MEMCPY
			 * and OP_MEMSET ignore alignment).
*/
if (MONO_TYPE_ISSTRUCT (t)) {
align = MAX (align, sizeof (target_mgreg_t));
align = MAX (align, mono_class_min_align (mono_class_from_mono_type_internal (t)));
/*
* Align the size too so the code generated for passing vtypes in
* registers doesn't overwrite random locals.
*/
size = (size + (align - 1)) & ~(align -1);
}
if (backward) {
offset += size;
offset += align - 1;
offset &= ~(align - 1);
slot = offset;
}
else {
offset += align - 1;
offset &= ~(align - 1);
slot = offset;
offset += size;
}
*stack_align = MAX (*stack_align, align);
}
offsets [vmv->idx] = slot;
}
g_list_free (vars);
for (i = 0; i < MONO_TYPE_PINNED; ++i) {
if (scalar_stack_slots [i].active)
g_list_free (scalar_stack_slots [i].active);
}
for (i = 0; i < nvtypes; ++i) {
if (vtype_stack_slots [i].active)
g_list_free (vtype_stack_slots [i].active);
}
cfg->stat_locals_stack_size += offset;
*stack_size = offset;
return offsets;
}
#define EMUL_HIT_SHIFT 3
#define EMUL_HIT_MASK ((1 << EMUL_HIT_SHIFT) - 1)
/* small hit bitmap cache */
static mono_byte emul_opcode_hit_cache [(OP_LAST>>EMUL_HIT_SHIFT) + 1] = {0};
static short emul_opcode_num = 0;
static short emul_opcode_alloced = 0;
static short *emul_opcode_opcodes;
static MonoJitICallInfo **emul_opcode_map;
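/*
 * mono_find_jit_opcode_emulation:
 *
 *   Return the JIT icall registered as the emulation of OPCODE, or NULL.
 * The hit bitmap is checked first so the linear search is only done for
 * opcodes which actually have an emulation.
 */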
MonoJitICallInfo *
mono_find_jit_opcode_emulation (int opcode)
{
g_assert (opcode >= 0 && opcode <= OP_LAST);
if (emul_opcode_hit_cache [opcode >> (EMUL_HIT_SHIFT + 3)] & (1 << (opcode & EMUL_HIT_MASK))) {
int i;
for (i = 0; i < emul_opcode_num; ++i) {
if (emul_opcode_opcodes [i] == opcode)
return emul_opcode_map [i];
}
}
return NULL;
}
void
mini_register_opcode_emulation (int opcode, MonoJitICallInfo *info, const char *name, MonoMethodSignature *sig, gpointer func, const char *symbol, gboolean no_wrapper)
{
g_assert (info);
g_assert (!sig->hasthis);
g_assert (sig->param_count < 3);
mono_register_jit_icall_info (info, func, name, sig, no_wrapper, symbol);
if (emul_opcode_num >= emul_opcode_alloced) {
int incr = emul_opcode_alloced? emul_opcode_alloced/2: 16;
emul_opcode_alloced += incr;
emul_opcode_map = (MonoJitICallInfo **)g_realloc (emul_opcode_map, sizeof (emul_opcode_map [0]) * emul_opcode_alloced);
emul_opcode_opcodes = (short *)g_realloc (emul_opcode_opcodes, sizeof (emul_opcode_opcodes [0]) * emul_opcode_alloced);
}
emul_opcode_map [emul_opcode_num] = info;
emul_opcode_opcodes [emul_opcode_num] = opcode;
emul_opcode_num++;
emul_opcode_hit_cache [opcode >> (EMUL_HIT_SHIFT + 3)] |= (1 << (opcode & EMUL_HIT_MASK));
}
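/*
 * print_dfn:
 *
 *   Debugging helper: print the IR of CFG together with the predecessors,
 * successors, immediate dominators and dominance frontiers of its bblocks.
 */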
static void
print_dfn (MonoCompile *cfg)
{
int i, j;
char *code;
MonoBasicBlock *bb;
MonoInst *c;
{
char *method_name = mono_method_full_name (cfg->method, TRUE);
g_print ("IR code for method %s\n", method_name);
g_free (method_name);
}
for (i = 0; i < cfg->num_bblocks; ++i) {
bb = cfg->bblocks [i];
/*if (bb->cil_code) {
char* code1, *code2;
code1 = mono_disasm_code_one (NULL, cfg->method, bb->cil_code, NULL);
if (bb->last_ins->cil_code)
code2 = mono_disasm_code_one (NULL, cfg->method, bb->last_ins->cil_code, NULL);
else
code2 = g_strdup ("");
code1 [strlen (code1) - 1] = 0;
code = g_strdup_printf ("%s -> %s", code1, code2);
g_free (code1);
g_free (code2);
} else*/
code = g_strdup ("\n");
g_print ("\nBB%d (%d) (len: %d): %s", bb->block_num, i, bb->cil_length, code);
MONO_BB_FOR_EACH_INS (bb, c) {
mono_print_ins_index (-1, c);
}
g_print ("\tprev:");
for (j = 0; j < bb->in_count; ++j) {
g_print (" BB%d", bb->in_bb [j]->block_num);
}
g_print ("\t\tsucc:");
for (j = 0; j < bb->out_count; ++j) {
g_print (" BB%d", bb->out_bb [j]->block_num);
}
g_print ("\n\tidom: BB%d\n", bb->idom? bb->idom->block_num: -1);
if (bb->idom)
g_assert (mono_bitset_test_fast (bb->dominators, bb->idom->dfn));
if (bb->dominators)
mono_blockset_print (cfg, bb->dominators, "\tdominators", bb->idom? bb->idom->dfn: -1);
if (bb->dfrontier)
mono_blockset_print (cfg, bb->dfrontier, "\tdfrontier", -1);
g_free (code);
}
g_print ("\n");
}
void
mono_bblock_add_inst (MonoBasicBlock *bb, MonoInst *inst)
{
MONO_ADD_INS (bb, inst);
}
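/*
 * mono_bblock_insert_after_ins:
 *
 *   Insert INS_TO_INSERT into BB after INS, or at the start of BB if INS
 * is NULL, keeping the prev/next links and bb->last_ins consistent.
 */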
void
mono_bblock_insert_after_ins (MonoBasicBlock *bb, MonoInst *ins, MonoInst *ins_to_insert)
{
if (ins == NULL) {
ins = bb->code;
bb->code = ins_to_insert;
/* Link with next */
ins_to_insert->next = ins;
if (ins)
ins->prev = ins_to_insert;
if (bb->last_ins == NULL)
bb->last_ins = ins_to_insert;
} else {
/* Link with next */
ins_to_insert->next = ins->next;
if (ins->next)
ins->next->prev = ins_to_insert;
/* Link with previous */
ins->next = ins_to_insert;
ins_to_insert->prev = ins;
if (bb->last_ins == ins)
bb->last_ins = ins_to_insert;
}
}
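/*
 * mono_bblock_insert_before_ins:
 *
 *   Insert INS_TO_INSERT into BB before INS, or at the start of BB if INS
 * is NULL, keeping the prev/next links and bb->code consistent.
 */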
void
mono_bblock_insert_before_ins (MonoBasicBlock *bb, MonoInst *ins, MonoInst *ins_to_insert)
{
if (ins == NULL) {
ins = bb->code;
if (ins)
ins->prev = ins_to_insert;
bb->code = ins_to_insert;
ins_to_insert->next = ins;
if (bb->last_ins == NULL)
bb->last_ins = ins_to_insert;
} else {
/* Link with previous */
if (ins->prev)
ins->prev->next = ins_to_insert;
ins_to_insert->prev = ins->prev;
/* Link with next */
ins->prev = ins_to_insert;
ins_to_insert->next = ins;
if (bb->code == ins)
bb->code = ins_to_insert;
}
}
/*
* mono_verify_bblock:
*
* Verify that the next and prev pointers are consistent inside the instructions in BB.
*/
void
mono_verify_bblock (MonoBasicBlock *bb)
{
MonoInst *ins, *prev;
prev = NULL;
for (ins = bb->code; ins; ins = ins->next) {
g_assert (ins->prev == prev);
prev = ins;
}
if (bb->last_ins)
g_assert (!bb->last_ins->next);
}
/*
* mono_verify_cfg:
*
* Perform consistency checks on the JIT data structures and the IR
*/
void
mono_verify_cfg (MonoCompile *cfg)
{
MonoBasicBlock *bb;
for (bb = cfg->bb_entry; bb; bb = bb->next_bb)
mono_verify_bblock (bb);
}
// This will free many fields in cfg to save
// memory. Note that this must be safe to call
// multiple times. It must be idempotent.
void
mono_empty_compile (MonoCompile *cfg)
{
mono_free_loop_info (cfg);
// These live in the mempool, and so must be freed
// first
for (GSList *l = cfg->headers_to_free; l; l = l->next) {
mono_metadata_free_mh ((MonoMethodHeader *)l->data);
}
cfg->headers_to_free = NULL;
if (cfg->mempool) {
//mono_mempool_stats (cfg->mempool);
mono_mempool_destroy (cfg->mempool);
cfg->mempool = NULL;
}
g_free (cfg->varinfo);
cfg->varinfo = NULL;
g_free (cfg->vars);
cfg->vars = NULL;
if (cfg->rs) {
mono_regstate_free (cfg->rs);
cfg->rs = NULL;
}
}
void
mono_destroy_compile (MonoCompile *cfg)
{
mono_empty_compile (cfg);
mono_metadata_free_mh (cfg->header);
g_hash_table_destroy (cfg->spvars);
g_hash_table_destroy (cfg->exvars);
g_list_free (cfg->ldstr_list);
g_hash_table_destroy (cfg->token_info_hash);
g_hash_table_destroy (cfg->abs_patches);
mono_debug_free_method (cfg);
g_free (cfg->varinfo);
g_free (cfg->vars);
g_free (cfg->exception_message);
g_free (cfg);
}
void
mono_add_patch_info (MonoCompile *cfg, int ip, MonoJumpInfoType type, gconstpointer target)
{
if (type == MONO_PATCH_INFO_NONE)
return;
MonoJumpInfo *ji = (MonoJumpInfo *)mono_mempool_alloc0 (cfg->mempool, sizeof (MonoJumpInfo));
ji->ip.i = ip;
ji->type = type;
ji->data.target = target;
ji->next = cfg->patch_info;
cfg->patch_info = ji;
}
void
mono_add_patch_info_rel (MonoCompile *cfg, int ip, MonoJumpInfoType type, gconstpointer target, int relocation)
{
if (type == MONO_PATCH_INFO_NONE)
return;
MonoJumpInfo *ji = (MonoJumpInfo *)mono_mempool_alloc0 (cfg->mempool, sizeof (MonoJumpInfo));
ji->ip.i = ip;
ji->type = type;
ji->relocation = relocation;
ji->data.target = target;
ji->next = cfg->patch_info;
cfg->patch_info = ji;
}
void
mono_remove_patch_info (MonoCompile *cfg, int ip)
{
MonoJumpInfo **ji = &cfg->patch_info;
while (*ji) {
if ((*ji)->ip.i == ip)
*ji = (*ji)->next;
else
ji = &((*ji)->next);
}
}
void
mono_add_seq_point (MonoCompile *cfg, MonoBasicBlock *bb, MonoInst *ins, int native_offset)
{
ins->inst_offset = native_offset;
g_ptr_array_add (cfg->seq_points, ins);
if (bb) {
bb->seq_points = g_slist_prepend_mempool (cfg->mempool, bb->seq_points, ins);
bb->last_seq_point = ins;
}
}
void
mono_add_var_location (MonoCompile *cfg, MonoInst *var, gboolean is_reg, int reg, int offset, int from, int to)
{
MonoDwarfLocListEntry *entry = (MonoDwarfLocListEntry *)mono_mempool_alloc0 (cfg->mempool, sizeof (MonoDwarfLocListEntry));
if (is_reg)
g_assert (offset == 0);
entry->is_reg = is_reg;
entry->reg = reg;
entry->offset = offset;
entry->from = from;
entry->to = to;
if (var == cfg->args [0])
cfg->this_loclist = g_slist_append_mempool (cfg->mempool, cfg->this_loclist, entry);
else if (var == cfg->rgctx_var)
cfg->rgctx_loclist = g_slist_append_mempool (cfg->mempool, cfg->rgctx_loclist, entry);
}
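/*
 * mono_apply_volatile:
 *
 *   Mark INST as MONO_INST_VOLATILE if bit INDEX is set in SET.
 */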
static void
mono_apply_volatile (MonoInst *inst, MonoBitSet *set, gsize index)
{
inst->flags |= mono_bitset_test_safe (set, index) ? MONO_INST_VOLATILE : 0;
}
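/*
 * mono_compile_create_vars:
 *
 *   Create the MonoInst variables for the return value, the arguments and
 * the locals of the method being compiled, plus the arch specific and LMF
 * variables if needed.
 */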
static void
mono_compile_create_vars (MonoCompile *cfg)
{
MonoMethodSignature *sig;
MonoMethodHeader *header;
int i;
header = cfg->header;
sig = mono_method_signature_internal (cfg->method);
if (!MONO_TYPE_IS_VOID (sig->ret)) {
cfg->ret = mono_compile_create_var (cfg, sig->ret, OP_ARG);
/* Inhibit optimizations */
cfg->ret->flags |= MONO_INST_VOLATILE;
}
if (cfg->verbose_level > 2)
g_print ("creating vars\n");
cfg->args = (MonoInst **)mono_mempool_alloc0 (cfg->mempool, (sig->param_count + sig->hasthis) * sizeof (MonoInst*));
if (sig->hasthis) {
MonoInst* arg = mono_compile_create_var (cfg, m_class_get_this_arg (cfg->method->klass), OP_ARG);
mono_apply_volatile (arg, header->volatile_args, 0);
cfg->args [0] = arg;
cfg->this_arg = arg;
}
for (i = 0; i < sig->param_count; ++i) {
MonoInst* arg = mono_compile_create_var (cfg, sig->params [i], OP_ARG);
mono_apply_volatile (arg, header->volatile_args, i + sig->hasthis);
cfg->args [i + sig->hasthis] = arg;
}
if (cfg->verbose_level > 2) {
if (cfg->ret) {
printf ("\treturn : ");
mono_print_ins (cfg->ret);
}
if (sig->hasthis) {
printf ("\tthis: ");
mono_print_ins (cfg->args [0]);
}
for (i = 0; i < sig->param_count; ++i) {
printf ("\targ [%d]: ", i);
mono_print_ins (cfg->args [i + sig->hasthis]);
}
}
cfg->locals_start = cfg->num_varinfo;
cfg->locals = (MonoInst **)mono_mempool_alloc0 (cfg->mempool, header->num_locals * sizeof (MonoInst*));
if (cfg->verbose_level > 2)
g_print ("creating locals\n");
for (i = 0; i < header->num_locals; ++i) {
if (cfg->verbose_level > 2)
g_print ("\tlocal [%d]: ", i);
cfg->locals [i] = mono_compile_create_var (cfg, header->locals [i], OP_LOCAL);
mono_apply_volatile (cfg->locals [i], header->volatile_locals, i);
}
if (cfg->verbose_level > 2)
g_print ("locals done\n");
#ifdef ENABLE_LLVM
if (COMPILE_LLVM (cfg))
mono_llvm_create_vars (cfg);
else
mono_arch_create_vars (cfg);
#else
mono_arch_create_vars (cfg);
#endif
if (cfg->method->save_lmf && cfg->create_lmf_var) {
MonoInst *lmf_var = mono_compile_create_var (cfg, mono_get_int_type (), OP_LOCAL);
lmf_var->flags |= MONO_INST_VOLATILE;
lmf_var->flags |= MONO_INST_LMF;
cfg->lmf_var = lmf_var;
}
}
void
mono_print_code (MonoCompile *cfg, const char* msg)
{
MonoBasicBlock *bb;
for (bb = cfg->bb_entry; bb; bb = bb->next_bb)
mono_print_bb (bb, msg);
}
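/*
 * mono_postprocess_patches:
 *
 *   Resolve MONO_PATCH_INFO_ABS patches to their concrete patch type using
 * cfg->abs_patches, and allocate and fill the native offset tables of
 * OP_SWITCH patches.
 */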
static void
mono_postprocess_patches (MonoCompile *cfg)
{
MonoJumpInfo *patch_info;
int i;
for (patch_info = cfg->patch_info; patch_info; patch_info = patch_info->next) {
switch (patch_info->type) {
case MONO_PATCH_INFO_ABS: {
/*
* Change patches of type MONO_PATCH_INFO_ABS into patches describing the
* absolute address.
*/
if (cfg->abs_patches) {
MonoJumpInfo *abs_ji = (MonoJumpInfo *)g_hash_table_lookup (cfg->abs_patches, patch_info->data.target);
if (abs_ji) {
patch_info->type = abs_ji->type;
patch_info->data.target = abs_ji->data.target;
}
}
break;
}
case MONO_PATCH_INFO_SWITCH: {
gpointer *table;
if (cfg->method->dynamic) {
table = (void **)mono_code_manager_reserve (cfg->dynamic_info->code_mp, sizeof (gpointer) * patch_info->data.table->table_size);
} else {
table = (void **)mono_mem_manager_code_reserve (cfg->mem_manager, sizeof (gpointer) * patch_info->data.table->table_size);
}
for (i = 0; i < patch_info->data.table->table_size; i++) {
/* Might be NULL if the switch is eliminated */
if (patch_info->data.table->table [i]) {
g_assert (patch_info->data.table->table [i]->native_offset);
table [i] = GINT_TO_POINTER (patch_info->data.table->table [i]->native_offset);
} else {
table [i] = NULL;
}
}
patch_info->data.table->table = (MonoBasicBlock**)table;
break;
}
default:
/* do nothing */
break;
}
}
}
/* These patches require the JitInfo of the compiled method to already be in place when they are used */
static void
mono_postprocess_patches_after_ji_publish (MonoCompile *cfg)
{
MonoJumpInfo *patch_info;
for (patch_info = cfg->patch_info; patch_info; patch_info = patch_info->next) {
switch (patch_info->type) {
case MONO_PATCH_INFO_METHOD_JUMP: {
unsigned char *ip = cfg->native_code + patch_info->ip.i;
mini_register_jump_site (patch_info->data.method, ip);
break;
}
default:
/* do nothing */
break;
}
}
}
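/*
 * mono_codegen:
 *
 *   Emit native code for CFG: run the lowering, peephole and local register
 * allocation passes on each bblock, emit the prolog, the bblocks, the
 * epilog and the exception code, then copy the result into its final
 * location and apply the recorded patches.
 */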
void
mono_codegen (MonoCompile *cfg)
{
MonoBasicBlock *bb;
int max_epilog_size;
guint8 *code;
MonoMemoryManager *code_mem_manager = cfg->mem_manager;
guint unwindlen = 0;
for (bb = cfg->bb_entry; bb; bb = bb->next_bb) {
cfg->spill_count = 0;
/* we reuse dfn here */
/* bb->dfn = bb_count++; */
mono_arch_lowering_pass (cfg, bb);
if (cfg->opt & MONO_OPT_PEEPHOLE)
mono_arch_peephole_pass_1 (cfg, bb);
mono_local_regalloc (cfg, bb);
if (cfg->opt & MONO_OPT_PEEPHOLE)
mono_arch_peephole_pass_2 (cfg, bb);
if (cfg->gen_seq_points && !cfg->gen_sdb_seq_points)
mono_bb_deduplicate_op_il_seq_points (cfg, bb);
}
code = mono_arch_emit_prolog (cfg);
set_code_cursor (cfg, code);
cfg->prolog_end = cfg->code_len;
cfg->cfa_reg = cfg->cur_cfa_reg;
cfg->cfa_offset = cfg->cur_cfa_offset;
mono_debug_open_method (cfg);
/* emit code all basic blocks */
for (bb = cfg->bb_entry; bb; bb = bb->next_bb) {
bb->native_offset = cfg->code_len;
bb->real_native_offset = cfg->code_len;
//if ((bb == cfg->bb_entry) || !(bb->region == -1 && !bb->dfn))
mono_arch_output_basic_block (cfg, bb);
bb->native_length = cfg->code_len - bb->native_offset;
if (bb == cfg->bb_exit) {
cfg->epilog_begin = cfg->code_len;
mono_arch_emit_epilog (cfg);
cfg->epilog_end = cfg->code_len;
}
if (bb->clause_holes) {
GList *tmp;
for (tmp = bb->clause_holes; tmp; tmp = tmp->prev)
mono_cfg_add_try_hole (cfg, ((MonoLeaveClause *) tmp->data)->clause, cfg->native_code + bb->native_offset, bb);
}
}
mono_arch_emit_exceptions (cfg);
max_epilog_size = 0;
cfg->code_size = cfg->code_len + max_epilog_size;
/* fixme: align to MONO_ARCH_CODE_ALIGNMENT */
#ifdef MONO_ARCH_HAVE_UNWIND_TABLE
if (!cfg->compile_aot)
unwindlen = mono_arch_unwindinfo_init_method_unwind_info (cfg);
#endif
if (cfg->method->dynamic) {
/* Allocate the code into a separate memory pool so it can be freed */
cfg->dynamic_info = g_new0 (MonoJitDynamicMethodInfo, 1);
cfg->dynamic_info->code_mp = mono_code_manager_new_dynamic ();
MonoJitMemoryManager *jit_mm = (MonoJitMemoryManager*)cfg->jit_mm;
jit_mm_lock (jit_mm);
if (!jit_mm->dynamic_code_hash)
jit_mm->dynamic_code_hash = g_hash_table_new (NULL, NULL);
g_hash_table_insert (jit_mm->dynamic_code_hash, cfg->method, cfg->dynamic_info);
jit_mm_unlock (jit_mm);
code = (guint8 *)mono_code_manager_reserve (cfg->dynamic_info->code_mp, cfg->code_size + cfg->thunk_area + unwindlen);
} else {
code = (guint8 *)mono_mem_manager_code_reserve (code_mem_manager, cfg->code_size + cfg->thunk_area + unwindlen);
}
mono_codeman_enable_write ();
if (cfg->thunk_area) {
cfg->thunks_offset = cfg->code_size + unwindlen;
cfg->thunks = code + cfg->thunks_offset;
memset (cfg->thunks, 0, cfg->thunk_area);
}
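/* The code was emitted into a temporary malloc-ed buffer; copy it to its final location */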
g_assert (code);
memcpy (code, cfg->native_code, cfg->code_len);
g_free (cfg->native_code);
cfg->native_code = code;
code = cfg->native_code + cfg->code_len;
/* g_assert (((int)cfg->native_code & (MONO_ARCH_CODE_ALIGNMENT - 1)) == 0); */
mono_postprocess_patches (cfg);
#ifdef VALGRIND_JIT_REGISTER_MAP
if (valgrind_register){
char* nm = mono_method_full_name (cfg->method, TRUE);
VALGRIND_JIT_REGISTER_MAP (nm, cfg->native_code, cfg->native_code + cfg->code_len);
g_free (nm);
}
#endif
if (cfg->verbose_level > 0) {
char* nm = mono_method_get_full_name (cfg->method);
g_print ("Method %s emitted at %p to %p (code length %d)\n",
nm,
cfg->native_code, cfg->native_code + cfg->code_len, cfg->code_len);
g_free (nm);
}
{
gboolean is_generic = FALSE;
if (cfg->method->is_inflated || mono_method_get_generic_container (cfg->method) ||
mono_class_is_gtd (cfg->method->klass) || mono_class_is_ginst (cfg->method->klass)) {
is_generic = TRUE;
}
if (cfg->gshared)
g_assert (is_generic);
}
#ifdef MONO_ARCH_HAVE_SAVE_UNWIND_INFO
mono_arch_save_unwind_info (cfg);
#endif
{
MonoJumpInfo *ji;
gpointer target;
for (ji = cfg->patch_info; ji; ji = ji->next) {
if (cfg->compile_aot) {
switch (ji->type) {
case MONO_PATCH_INFO_BB:
case MONO_PATCH_INFO_LABEL:
break;
default:
/* No need to patch these */
continue;
}
}
if (ji->type == MONO_PATCH_INFO_NONE)
continue;
target = mono_resolve_patch_target (cfg->method, cfg->native_code, ji, cfg->run_cctors, cfg->error);
if (!is_ok (cfg->error)) {
mono_cfg_set_exception (cfg, MONO_EXCEPTION_MONO_ERROR);
return;
}
mono_arch_patch_code_new (cfg, cfg->native_code, ji, target);
}
}
if (cfg->method->dynamic) {
mono_code_manager_commit (cfg->dynamic_info->code_mp, cfg->native_code, cfg->code_size, cfg->code_len);
} else {
mono_mem_manager_code_commit (code_mem_manager, cfg->native_code, cfg->code_size, cfg->code_len);
}
mono_codeman_disable_write ();
MONO_PROFILER_RAISE (jit_code_buffer, (cfg->native_code, cfg->code_len, MONO_PROFILER_CODE_BUFFER_METHOD, cfg->method));
mono_arch_flush_icache (cfg->native_code, cfg->code_len);
mono_debug_close_method (cfg);
#ifdef MONO_ARCH_HAVE_UNWIND_TABLE
if (!cfg->compile_aot)
mono_arch_unwindinfo_install_method_unwind_info (&cfg->arch.unwindinfo, cfg->native_code, cfg->code_len);
#endif
}
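/* Recursively mark BB and every bblock reachable from it with BB_VISITED */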
static void
compute_reachable (MonoBasicBlock *bb)
{
int i;
if (!(bb->flags & BB_VISITED)) {
bb->flags |= BB_VISITED;
for (i = 0; i < bb->out_count; ++i)
compute_reachable (bb->out_bb [i]);
}
}
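/*
 * mono_bb_ordering:
 *
 * Compute a depth-first ordering of the basic blocks into cfg->bblocks and
 * unlink bblocks which the visit proves unreachable, since the code in them
 * may be inconsistent.
 */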
static void mono_bb_ordering (MonoCompile *cfg)
{
int dfn = 0;
/* Depth-first ordering on basic blocks */
cfg->bblocks = (MonoBasicBlock **)mono_mempool_alloc (cfg->mempool, sizeof (MonoBasicBlock*) * (cfg->num_bblocks + 1));
cfg->max_block_num = cfg->num_bblocks;
df_visit (cfg->bb_entry, &dfn, cfg->bblocks);
#if defined(__GNUC__) && __GNUC__ == 7 && defined(__x86_64__)
/* workaround for an AMD specific issue that only happens on GCC 7 so far,
* for more information see https://github.com/mono/mono/issues/9298 */
mono_memory_barrier ();
#endif
g_assertf (cfg->num_bblocks >= dfn, "cfg->num_bblocks=%d, dfn=%d\n", cfg->num_bblocks, dfn);
if (cfg->num_bblocks != dfn + 1) {
MonoBasicBlock *bb;
cfg->num_bblocks = dfn + 1;
/* remove unreachable blocks, because the code in them may be
* inconsistent (e.g. accesses to dead variables) */
for (bb = cfg->bb_entry; bb; bb = bb->next_bb)
bb->flags &= ~BB_VISITED;
compute_reachable (cfg->bb_entry);
for (bb = cfg->bb_entry; bb; bb = bb->next_bb)
if (bb->flags & BB_EXCEPTION_HANDLER)
compute_reachable (bb);
for (bb = cfg->bb_entry; bb; bb = bb->next_bb) {
if (!(bb->flags & BB_VISITED)) {
if (cfg->verbose_level > 1)
g_print ("found unreachable code in BB%d\n", bb->block_num);
bb->code = bb->last_ins = NULL;
while (bb->out_count)
mono_unlink_bblock (cfg, bb, bb->out_bb [0]);
}
}
for (bb = cfg->bb_entry; bb; bb = bb->next_bb)
bb->flags &= ~BB_VISITED;
}
}
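/*
 * mono_handle_out_of_line_bblock:
 *
 * Add an explicit OP_BR at the end of bblocks which would otherwise fall
 * through into an out-of-line bblock, since fallthrough is no longer valid
 * once the successor is moved out of line.
 */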
static void
mono_handle_out_of_line_bblock (MonoCompile *cfg)
{
MonoBasicBlock *bb;
for (bb = cfg->bb_entry; bb; bb = bb->next_bb) {
if (bb->next_bb && bb->next_bb->out_of_line && bb->last_ins && !MONO_IS_BRANCH_OP (bb->last_ins)) {
MonoInst *ins;
MONO_INST_NEW (cfg, ins, OP_BR);
MONO_ADD_INS (bb, ins);
ins->inst_target_bb = bb->next_bb;
}
}
}
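/*
 * create_jit_info:
 *
 * Create and fill out the MonoJitInfo descriptor of the compiled method:
 * the EH clauses with their native code ranges, try block holes, generic
 * sharing info, arch EH info and unwind info.
 */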
static MonoJitInfo*
create_jit_info (MonoCompile *cfg, MonoMethod *method_to_compile)
{
GSList *tmp;
MonoMethodHeader *header;
MonoJitInfo *jinfo;
MonoJitInfoFlags flags = JIT_INFO_NONE;
int num_clauses, num_holes = 0;
guint32 stack_size = 0;
g_assert (method_to_compile == cfg->method);
header = cfg->header;
if (cfg->gshared)
flags |= JIT_INFO_HAS_GENERIC_JIT_INFO;
if (cfg->arch_eh_jit_info) {
MonoJitArgumentInfo *arg_info;
MonoMethodSignature *sig = mono_method_signature_internal (cfg->method_to_register);
/*
* This cannot be computed during stack walking, as
* mono_arch_get_argument_info () is not signal safe.
*/
arg_info = g_newa (MonoJitArgumentInfo, sig->param_count + 1);
stack_size = mono_arch_get_argument_info (sig, sig->param_count, arg_info);
if (stack_size)
flags |= JIT_INFO_HAS_ARCH_EH_INFO;
}
if (cfg->has_unwind_info_for_epilog && !(flags & JIT_INFO_HAS_ARCH_EH_INFO))
flags |= JIT_INFO_HAS_ARCH_EH_INFO;
if (cfg->thunk_area)
flags |= JIT_INFO_HAS_THUNK_INFO;
if (cfg->try_block_holes) {
for (tmp = cfg->try_block_holes; tmp; tmp = tmp->next) {
TryBlockHole *hole = (TryBlockHole *)tmp->data;
MonoExceptionClause *ec = hole->clause;
int hole_end = hole->basic_block->native_offset + hole->basic_block->native_length;
MonoBasicBlock *clause_last_bb = cfg->cil_offset_to_bb [ec->try_offset + ec->try_len];
g_assert (clause_last_bb);
/* Holes at the end of a try region can be represented by simply reducing the size of the block itself. */
if (clause_last_bb->native_offset != hole_end)
++num_holes;
}
if (num_holes)
flags |= JIT_INFO_HAS_TRY_BLOCK_HOLES;
if (G_UNLIKELY (cfg->verbose_level >= 4))
printf ("Number of try block holes %d\n", num_holes);
}
if (COMPILE_LLVM (cfg)) {
num_clauses = cfg->llvm_ex_info_len;
} else {
num_clauses = header->num_clauses;
int dead_clauses = 0;
for (int i = 0; i < header->num_clauses; ++i)
if (cfg->clause_is_dead [i])
dead_clauses ++;
num_clauses -= dead_clauses;
}
if (cfg->method->dynamic)
jinfo = (MonoJitInfo *)g_malloc0 (mono_jit_info_size (flags, num_clauses, num_holes));
else
jinfo = (MonoJitInfo *)mono_mem_manager_alloc0 (cfg->mem_manager, mono_jit_info_size (flags, num_clauses, num_holes));
jinfo_try_holes_size += num_holes * sizeof (MonoTryBlockHoleJitInfo);
mono_jit_info_init (jinfo, cfg->method_to_register, cfg->native_code, cfg->code_len, flags, num_clauses, num_holes);
if (COMPILE_LLVM (cfg))
jinfo->from_llvm = TRUE;
if (cfg->gshared) {
MonoInst *inst;
MonoGenericJitInfo *gi;
GSList *loclist = NULL;
gi = mono_jit_info_get_generic_jit_info (jinfo);
g_assert (gi);
if (cfg->method->dynamic)
gi->generic_sharing_context = g_new0 (MonoGenericSharingContext, 1);
else
gi->generic_sharing_context = (MonoGenericSharingContext *)mono_mem_manager_alloc0 (cfg->mem_manager, sizeof (MonoGenericSharingContext));
mini_init_gsctx (NULL, cfg->gsctx_context, gi->generic_sharing_context);
if ((method_to_compile->flags & METHOD_ATTRIBUTE_STATIC) ||
mini_method_get_context (method_to_compile)->method_inst ||
m_class_is_valuetype (method_to_compile->klass)) {
g_assert (cfg->rgctx_var);
}
gi->has_this = 1;
if ((method_to_compile->flags & METHOD_ATTRIBUTE_STATIC) ||
mini_method_get_context (method_to_compile)->method_inst ||
m_class_is_valuetype (method_to_compile->klass)) {
inst = cfg->rgctx_var;
if (!COMPILE_LLVM (cfg))
g_assert (inst->opcode == OP_REGOFFSET);
loclist = cfg->rgctx_loclist;
} else {
inst = cfg->args [0];
loclist = cfg->this_loclist;
}
if (loclist) {
/* Needed to handle async exceptions */
GSList *l;
int i;
gi->nlocs = g_slist_length (loclist);
if (cfg->method->dynamic)
gi->locations = (MonoDwarfLocListEntry *)g_malloc0 (gi->nlocs * sizeof (MonoDwarfLocListEntry));
else
gi->locations = (MonoDwarfLocListEntry *)mono_mem_manager_alloc0 (cfg->mem_manager, gi->nlocs * sizeof (MonoDwarfLocListEntry));
i = 0;
for (l = loclist; l; l = l->next) {
memcpy (&(gi->locations [i]), l->data, sizeof (MonoDwarfLocListEntry));
i ++;
}
}
if (COMPILE_LLVM (cfg)) {
g_assert (cfg->llvm_this_reg != -1);
gi->this_in_reg = 0;
gi->this_reg = cfg->llvm_this_reg;
gi->this_offset = cfg->llvm_this_offset;
} else if (inst->opcode == OP_REGVAR) {
gi->this_in_reg = 1;
gi->this_reg = inst->dreg;
} else {
g_assert (inst->opcode == OP_REGOFFSET);
#ifdef TARGET_X86
g_assert (inst->inst_basereg == X86_EBP);
#elif defined(TARGET_AMD64)
g_assert (inst->inst_basereg == X86_EBP || inst->inst_basereg == X86_ESP);
#endif
g_assert (inst->inst_offset >= G_MININT32 && inst->inst_offset <= G_MAXINT32);
gi->this_in_reg = 0;
gi->this_reg = inst->inst_basereg;
gi->this_offset = inst->inst_offset;
}
}
if (num_holes) {
MonoTryBlockHoleTableJitInfo *table;
int i;
table = mono_jit_info_get_try_block_hole_table_info (jinfo);
table->num_holes = (guint16)num_holes;
i = 0;
for (tmp = cfg->try_block_holes; tmp; tmp = tmp->next) {
guint32 start_bb_offset;
MonoTryBlockHoleJitInfo *hole;
TryBlockHole *hole_data = (TryBlockHole *)tmp->data;
MonoExceptionClause *ec = hole_data->clause;
int hole_end = hole_data->basic_block->native_offset + hole_data->basic_block->native_length;
MonoBasicBlock *clause_last_bb = cfg->cil_offset_to_bb [ec->try_offset + ec->try_len];
g_assert (clause_last_bb);
/* Holes at the end of a try region can be represented by simply reducing the size of the block itself. */
if (clause_last_bb->native_offset == hole_end)
continue;
start_bb_offset = hole_data->start_offset - hole_data->basic_block->native_offset;
hole = &table->holes [i++];
hole->clause = hole_data->clause - &header->clauses [0];
hole->offset = (guint32)hole_data->start_offset;
hole->length = (guint16)(hole_data->basic_block->native_length - start_bb_offset);
if (G_UNLIKELY (cfg->verbose_level >= 4))
printf ("\tTry block hole at eh clause %d offset %x length %x\n", hole->clause, hole->offset, hole->length);
}
g_assert (i == num_holes);
}
if (jinfo->has_arch_eh_info) {
MonoArchEHJitInfo *info;
info = mono_jit_info_get_arch_eh_info (jinfo);
info->stack_size = stack_size;
}
if (cfg->thunk_area) {
MonoThunkJitInfo *info;
info = mono_jit_info_get_thunk_info (jinfo);
info->thunks_offset = cfg->thunks_offset;
info->thunks_size = cfg->thunk_area;
}
if (COMPILE_LLVM (cfg)) {
if (num_clauses)
memcpy (&jinfo->clauses [0], &cfg->llvm_ex_info [0], num_clauses * sizeof (MonoJitExceptionInfo));
} else {
int eindex = 0;
for (int i = 0; i < header->num_clauses; i++) {
MonoExceptionClause *ec = &header->clauses [i];
MonoJitExceptionInfo *ei = &jinfo->clauses [eindex];
MonoBasicBlock *tblock;
MonoInst *exvar;
if (cfg->clause_is_dead [i])
continue;
eindex ++;
ei->flags = ec->flags;
if (G_UNLIKELY (cfg->verbose_level >= 4))
printf ("IL clause: try 0x%x-0x%x handler 0x%x-0x%x filter 0x%x\n", ec->try_offset, ec->try_offset + ec->try_len, ec->handler_offset, ec->handler_offset + ec->handler_len, ec->flags == MONO_EXCEPTION_CLAUSE_FILTER ? ec->data.filter_offset : 0);
exvar = mono_find_exvar_for_offset (cfg, ec->handler_offset);
ei->exvar_offset = exvar ? exvar->inst_offset : 0;
if (ei->flags == MONO_EXCEPTION_CLAUSE_FILTER) {
tblock = cfg->cil_offset_to_bb [ec->data.filter_offset];
g_assert (tblock);
ei->data.filter = cfg->native_code + tblock->native_offset;
} else {
ei->data.catch_class = ec->data.catch_class;
}
tblock = cfg->cil_offset_to_bb [ec->try_offset];
g_assert (tblock);
g_assert (tblock->native_offset);
ei->try_start = cfg->native_code + tblock->native_offset;
if (tblock->extend_try_block) {
/*
* Extend the try block backwards to include parts of the previous call
* instruction.
*/
ei->try_start = (guint8*)ei->try_start - cfg->backend->monitor_enter_adjustment;
}
if (ec->try_offset + ec->try_len < header->code_size)
tblock = cfg->cil_offset_to_bb [ec->try_offset + ec->try_len];
else
tblock = cfg->bb_exit;
if (G_UNLIKELY (cfg->verbose_level >= 4))
printf ("looking for end of try [%d, %d] -> %p (code size %d)\n", ec->try_offset, ec->try_len, tblock, header->code_size);
g_assert (tblock);
if (!tblock->native_offset) {
int j, end;
for (j = ec->try_offset + ec->try_len, end = ec->try_offset; j >= end; --j) {
MonoBasicBlock *bb = cfg->cil_offset_to_bb [j];
if (bb && bb->native_offset) {
tblock = bb;
break;
}
}
}
ei->try_end = cfg->native_code + tblock->native_offset;
g_assert (tblock->native_offset);
tblock = cfg->cil_offset_to_bb [ec->handler_offset];
g_assert (tblock);
ei->handler_start = cfg->native_code + tblock->native_offset;
for (tmp = cfg->try_block_holes; tmp; tmp = tmp->next) {
TryBlockHole *hole = (TryBlockHole *)tmp->data;
gpointer hole_end = cfg->native_code + (hole->basic_block->native_offset + hole->basic_block->native_length);
if (hole->clause == ec && hole_end == ei->try_end) {
if (G_UNLIKELY (cfg->verbose_level >= 4))
printf ("\tShortening try block %d from %x to %x\n", i, (int)((guint8*)ei->try_end - cfg->native_code), hole->start_offset);
ei->try_end = cfg->native_code + hole->start_offset;
break;
}
}
if (ec->flags == MONO_EXCEPTION_CLAUSE_FINALLY) {
int end_offset;
if (ec->handler_offset + ec->handler_len < header->code_size) {
tblock = cfg->cil_offset_to_bb [ec->handler_offset + ec->handler_len];
if (tblock->native_offset) {
end_offset = tblock->native_offset;
} else {
int j, end;
for (j = ec->handler_offset + ec->handler_len, end = ec->handler_offset; j >= end; --j) {
MonoBasicBlock *bb = cfg->cil_offset_to_bb [j];
if (bb && bb->native_offset) {
tblock = bb;
break;
}
}
end_offset = tblock->native_offset + tblock->native_length;
}
} else {
end_offset = cfg->epilog_begin;
}
ei->data.handler_end = cfg->native_code + end_offset;
}
/* Keep try_start/end non-authenticated, they are never branched to */
//ei->try_start = MINI_ADDR_TO_FTNPTR (ei->try_start);
//ei->try_end = MINI_ADDR_TO_FTNPTR (ei->try_end);
ei->handler_start = MINI_ADDR_TO_FTNPTR (ei->handler_start);
if (ei->flags == MONO_EXCEPTION_CLAUSE_FILTER)
ei->data.filter = MINI_ADDR_TO_FTNPTR (ei->data.filter);
else if (ei->flags == MONO_EXCEPTION_CLAUSE_FINALLY)
ei->data.handler_end = MINI_ADDR_TO_FTNPTR (ei->data.handler_end);
}
}
if (G_UNLIKELY (cfg->verbose_level >= 4)) {
int i;
for (i = 0; i < jinfo->num_clauses; i++) {
MonoJitExceptionInfo *ei = &jinfo->clauses [i];
int start = (guint8*)ei->try_start - cfg->native_code;
int end = (guint8*)ei->try_end - cfg->native_code;
int handler = (guint8*)ei->handler_start - cfg->native_code;
int handler_end = (guint8*)ei->data.handler_end - cfg->native_code;
printf ("JitInfo EH clause %d flags %x try %x-%x handler %x-%x\n", i, ei->flags, start, end, handler, handler_end);
}
}
if (cfg->encoded_unwind_ops) {
/* Generated by LLVM */
jinfo->unwind_info = mono_cache_unwind_info (cfg->encoded_unwind_ops, cfg->encoded_unwind_ops_len);
g_free (cfg->encoded_unwind_ops);
} else if (cfg->unwind_ops) {
guint32 info_len;
guint8 *unwind_info = mono_unwind_ops_encode (cfg->unwind_ops, &info_len);
guint32 unwind_desc;
unwind_desc = mono_cache_unwind_info (unwind_info, info_len);
if (cfg->has_unwind_info_for_epilog) {
MonoArchEHJitInfo *info;
info = mono_jit_info_get_arch_eh_info (jinfo);
g_assert (info);
info->epilog_size = cfg->code_len - cfg->epilog_begin;
}
jinfo->unwind_info = unwind_desc;
g_free (unwind_info);
} else {
jinfo->unwind_info = cfg->used_int_regs;
}
return jinfo;
}
/* Return whether METHOD is a gsharedvt method */
static gboolean
is_gsharedvt_method (MonoMethod *method)
{
MonoGenericContext *context;
MonoGenericInst *inst;
int i;
if (!method->is_inflated)
return FALSE;
context = mono_method_get_context (method);
inst = context->class_inst;
if (inst) {
for (i = 0; i < inst->type_argc; ++i)
if (mini_is_gsharedvt_gparam (inst->type_argv [i]))
return TRUE;
}
inst = context->method_inst;
if (inst) {
for (i = 0; i < inst->type_argc; ++i)
if (mini_is_gsharedvt_gparam (inst->type_argv [i]))
return TRUE;
}
return FALSE;
}
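/* Return whether METHOD is inflated with an open (not fully instantiated) generic context */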
static gboolean
is_open_method (MonoMethod *method)
{
MonoGenericContext *context;
if (!method->is_inflated)
return FALSE;
context = mono_method_get_context (method);
if (context->class_inst && context->class_inst->is_open)
return TRUE;
if (context->method_inst && context->method_inst->is_open)
return TRUE;
return FALSE;
}
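/* Add an OP_NOP to empty bblocks; only used as a workaround for the graph dumper, which can't handle empty bblocks */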
static void
mono_insert_nop_in_empty_bb (MonoCompile *cfg)
{
MonoBasicBlock *bb;
for (bb = cfg->bb_entry; bb; bb = bb->next_bb) {
if (bb->code)
continue;
MonoInst *nop;
MONO_INST_NEW (cfg, nop, OP_NOP);
MONO_ADD_INS (bb, nop);
}
}
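/*
 * insert_safepoint:
 *
 * Insert an OP_GC_SAFE_POINT polling the mono_polling_required flag into
 * BBLOCK, placing it after the EH pseudo ops in handler bblocks and at the
 * end of the entry bblock.
 */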
static void
insert_safepoint (MonoCompile *cfg, MonoBasicBlock *bblock)
{
MonoInst *poll_addr, *ins;
if (cfg->disable_gc_safe_points)
return;
if (cfg->verbose_level > 1)
printf ("ADDING SAFE POINT TO BB %d\n", bblock->block_num);
g_assert (mini_safepoints_enabled ());
NEW_AOTCONST (cfg, poll_addr, MONO_PATCH_INFO_GC_SAFE_POINT_FLAG, (gpointer)&mono_polling_required);
MONO_INST_NEW (cfg, ins, OP_GC_SAFE_POINT);
ins->sreg1 = poll_addr->dreg;
if (bblock->flags & BB_EXCEPTION_HANDLER) {
MonoInst *eh_op = bblock->code;
if (eh_op && eh_op->opcode != OP_START_HANDLER && eh_op->opcode != OP_GET_EX_OBJ) {
eh_op = NULL;
} else {
MonoInst *next_eh_op = eh_op ? eh_op->next : NULL;
// skip all EH related ops
while (next_eh_op && (next_eh_op->opcode == OP_START_HANDLER || next_eh_op->opcode == OP_GET_EX_OBJ)) {
eh_op = next_eh_op;
next_eh_op = eh_op->next;
}
}
mono_bblock_insert_after_ins (bblock, eh_op, poll_addr);
mono_bblock_insert_after_ins (bblock, poll_addr, ins);
} else if (bblock == cfg->bb_entry) {
mono_bblock_insert_after_ins (bblock, bblock->last_ins, poll_addr);
mono_bblock_insert_after_ins (bblock, poll_addr, ins);
} else {
mono_bblock_insert_before_ins (bblock, NULL, poll_addr);
mono_bblock_insert_after_ins (bblock, poll_addr, ins);
}
}
/*
This code inserts safepoints into managed code at important code paths:
- the first basic block
- landing BBs of exception handlers
- loop body starts.
*/
static void
insert_safepoints (MonoCompile *cfg)
{
MonoBasicBlock *bb;
g_assert (mini_safepoints_enabled ());
if (COMPILE_LLVM (cfg)) {
if (!cfg->llvm_only) {
/* We rely on LLVM's safepoints insertion capabilities. */
if (cfg->verbose_level > 1)
printf ("SKIPPING SAFEPOINTS for code compiled with LLVM\n");
return;
}
}
if (cfg->method->wrapper_type == MONO_WRAPPER_MANAGED_TO_NATIVE) {
WrapperInfo *info = mono_marshal_get_wrapper_info (cfg->method);
/* These wrappers are called from the wrapper for the polling function, leading to potential stack overflow */
if (info && info->subtype == WRAPPER_SUBTYPE_ICALL_WRAPPER &&
(info->d.icall.jit_icall_id == MONO_JIT_ICALL_mono_threads_state_poll ||
info->d.icall.jit_icall_id == MONO_JIT_ICALL_mono_thread_interruption_checkpoint ||
info->d.icall.jit_icall_id == MONO_JIT_ICALL_mono_threads_exit_gc_safe_region_unbalanced)) {
if (cfg->verbose_level > 1)
printf ("SKIPPING SAFEPOINTS for the polling function icall\n");
return;
}
}
if (cfg->method->wrapper_type == MONO_WRAPPER_NATIVE_TO_MANAGED) {
if (cfg->verbose_level > 1)
printf ("SKIPPING SAFEPOINTS for native-to-managed wrappers.\n");
return;
}
if (cfg->method->wrapper_type == MONO_WRAPPER_OTHER) {
WrapperInfo *info = mono_marshal_get_wrapper_info (cfg->method);
if (info && (info->subtype == WRAPPER_SUBTYPE_INTERP_IN || info->subtype == WRAPPER_SUBTYPE_INTERP_LMF)) {
/* These wrappers shouldn't do any icalls */
if (cfg->verbose_level > 1)
printf ("SKIPPING SAFEPOINTS for interp-in wrappers.\n");
return;
}
}
if (cfg->method->wrapper_type == MONO_WRAPPER_WRITE_BARRIER) {
if (cfg->verbose_level > 1)
printf ("SKIPPING SAFEPOINTS for write barrier wrappers.\n");
return;
}
if (cfg->verbose_level > 1)
printf ("INSERTING SAFEPOINTS\n");
if (cfg->verbose_level > 2)
mono_print_code (cfg, "BEFORE SAFEPOINTS");
/* If the method contains (1) no calls (i.e. it is a leaf method) and
* (2) no loops or exception handlers, we can skip the GC safepoint
* on method entry. */
gboolean requires_safepoint = cfg->has_calls;
for (bb = cfg->bb_entry->next_bb; bb; bb = bb->next_bb) {
if (bb->loop_body_start || (bb->flags & BB_EXCEPTION_HANDLER)) {
requires_safepoint = TRUE;
insert_safepoint (cfg, bb);
}
}
if (requires_safepoint)
insert_safepoint (cfg, cfg->bb_entry);
if (cfg->verbose_level > 2)
mono_print_code (cfg, "AFTER SAFEPOINTS");
}
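/*
 * mono_insert_branches_between_bblocks:
 *
 * Make control flow between non-consecutive bblocks explicit, either by
 * inverting a conditional branch whose 'true' target is the next bblock,
 * or by appending an unconditional OP_BR to the 'false' target.
 */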
static void
mono_insert_branches_between_bblocks (MonoCompile *cfg)
{
MonoBasicBlock *bb;
/* Add branches between non-consecutive bblocks */
for (bb = cfg->bb_entry; bb; bb = bb->next_bb) {
if (bb->last_ins && MONO_IS_COND_BRANCH_OP (bb->last_ins) &&
bb->last_ins->inst_false_bb && bb->next_bb != bb->last_ins->inst_false_bb) {
/* we are careful when inverting, since bugs like #59580
* could show up when dealing with NaNs.
*/
if (MONO_IS_COND_BRANCH_NOFP(bb->last_ins) && bb->next_bb == bb->last_ins->inst_true_bb) {
MonoBasicBlock *tmp = bb->last_ins->inst_true_bb;
bb->last_ins->inst_true_bb = bb->last_ins->inst_false_bb;
bb->last_ins->inst_false_bb = tmp;
bb->last_ins->opcode = mono_reverse_branch_op (bb->last_ins->opcode);
} else {
MonoInst *inst = (MonoInst *)mono_mempool_alloc0 (cfg->mempool, sizeof (MonoInst));
inst->opcode = OP_BR;
inst->inst_target_bb = bb->last_ins->inst_false_bb;
mono_bblock_add_inst (bb, inst);
}
}
}
if (cfg->verbose_level >= 4) {
for (bb = cfg->bb_entry; bb; bb = bb->next_bb) {
MonoInst *tree = bb->code;
g_print ("DUMP BLOCK %d:\n", bb->block_num);
if (!tree)
continue;
for (; tree; tree = tree->next) {
mono_print_ins_index (-1, tree);
}
}
}
/* FIXME: */
for (bb = cfg->bb_entry; bb; bb = bb->next_bb) {
bb->max_vreg = cfg->next_vreg;
}
}
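/*
 * remove_empty_finally_pass:
 *
 * For llvm-only, remove finally clauses whose handlers do nothing, nullify
 * the OP_CALL_HANDLER opcodes which invoke them, and mark the clauses as
 * dead so no EH info is generated for them.
 */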
static G_GNUC_UNUSED void
remove_empty_finally_pass (MonoCompile *cfg)
{
MonoBasicBlock *bb;
MonoInst *ins;
gboolean remove_call_handler = FALSE;
// FIXME: other configurations
if (!cfg->llvm_only)
return;
for (int i = 0; i < cfg->header->num_clauses; ++i) {
MonoExceptionClause *clause = &cfg->header->clauses [i];
if (clause->flags == MONO_EXCEPTION_CLAUSE_FINALLY) {
MonoInst *first, *last;
bb = cfg->cil_offset_to_bb [clause->handler_offset];
g_assert (bb);
/* Support only 1 bb for now */
first = mono_bb_first_inst (bb, 0);
if (first->opcode != OP_START_HANDLER)
break;
gboolean empty = TRUE;
while (TRUE) {
if (bb->out_count > 1) {
empty = FALSE;
break;
}
if (bb->flags & BB_HAS_SIDE_EFFECTS) {
empty = FALSE;
break;
}
if (bb->out_count == 0)
break;
if (mono_bb_last_inst (bb, 0)->opcode == OP_ENDFINALLY)
break;
bb = bb->out_bb [0];
}
if (empty) {
/*
* Avoid doing this in nested clauses, because it might mess up the EH code generated by
* the llvm backend.
*/
for (int j = 0; j < cfg->header->num_clauses; ++j) {
MonoExceptionClause *clause2 = &cfg->header->clauses [j];
if (i != j && MONO_OFFSET_IN_CLAUSE (clause2, clause->handler_offset))
empty = FALSE;
}
}
if (empty) {
/* Nullify OP_START_HANDLER */
NULLIFY_INS (first);
last = mono_bb_last_inst (bb, 0);
if (last->opcode == OP_ENDFINALLY)
NULLIFY_INS (last);
if (cfg->verbose_level > 1)
g_print ("removed empty finally clause %d.\n", i);
/* Mark the handler bb as not used anymore */
bb = cfg->cil_offset_to_bb [clause->handler_offset];
bb->flags &= ~BB_EXCEPTION_HANDLER;
cfg->clause_is_dead [i] = TRUE;
remove_call_handler = TRUE;
}
}
}
if (remove_call_handler) {
/* Remove OP_CALL_HANDLER opcodes pointing to the removed finally blocks */
for (bb = cfg->bb_entry; bb; bb = bb->next_bb) {
MONO_BB_FOR_EACH_INS (bb, ins) {
if (ins->opcode == OP_CALL_HANDLER && ins->inst_target_bb && !(ins->inst_target_bb->flags & BB_EXCEPTION_HANDLER)) {
NULLIFY_INS (ins);
for (MonoInst *ins2 = ins->next; ins2; ins2 = ins2->next)
NULLIFY_INS (ins2);
break;
}
}
}
}
}
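/* Fill out BACKEND with the capabilities of the current architecture, based on the MONO_ARCH_* defines */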
static void
init_backend (MonoBackend *backend)
{
#ifdef MONO_ARCH_NEED_GOT_VAR
backend->need_got_var = 1;
#endif
#ifdef MONO_ARCH_HAVE_CARD_TABLE_WBARRIER
backend->have_card_table_wb = 1;
#endif
#ifdef MONO_ARCH_HAVE_OP_GENERIC_CLASS_INIT
backend->have_op_generic_class_init = 1;
#endif
#ifdef MONO_ARCH_EMULATE_MUL_DIV
backend->emulate_mul_div = 1;
#endif
#ifdef MONO_ARCH_EMULATE_DIV
backend->emulate_div = 1;
#endif
#if !defined(MONO_ARCH_NO_EMULATE_LONG_SHIFT_OPS)
backend->emulate_long_shift_opts = 1;
#endif
#ifdef MONO_ARCH_HAVE_OBJC_GET_SELECTOR
backend->have_objc_get_selector = 1;
#endif
#ifdef MONO_ARCH_HAVE_GENERALIZED_IMT_TRAMPOLINE
backend->have_generalized_imt_trampoline = 1;
#endif
#ifdef MONO_ARCH_GSHARED_SUPPORTED
backend->gshared_supported = 1;
#endif
if (MONO_ARCH_USE_FPSTACK)
backend->use_fpstack = 1;
// Does the ABI have a volatile non-parameter register, so tailcall
// can pass context to generics or interfaces?
backend->have_volatile_non_param_register = MONO_ARCH_HAVE_VOLATILE_NON_PARAM_REGISTER;
#ifdef MONO_ARCH_HAVE_OP_TAILCALL_MEMBASE
backend->have_op_tailcall_membase = 1;
#endif
#ifdef MONO_ARCH_HAVE_OP_TAILCALL_REG
backend->have_op_tailcall_reg = 1;
#endif
#ifndef MONO_ARCH_MONITOR_ENTER_ADJUSTMENT
backend->monitor_enter_adjustment = 1;
#else
backend->monitor_enter_adjustment = MONO_ARCH_MONITOR_ENTER_ADJUSTMENT;
#endif
#if defined(MONO_ARCH_ILP32)
backend->ilp32 = 1;
#endif
#ifdef MONO_ARCH_NEED_DIV_CHECK
backend->need_div_check = 1;
#endif
#ifdef NO_UNALIGNED_ACCESS
backend->no_unaligned_access = 1;
#endif
#ifdef MONO_ARCH_DYN_CALL_PARAM_AREA
backend->dyn_call_param_area = MONO_ARCH_DYN_CALL_PARAM_AREA;
#endif
#ifdef MONO_ARCH_NO_DIV_WITH_MUL
backend->disable_div_with_mul = 1;
#endif
#ifdef MONO_ARCH_EXPLICIT_NULL_CHECKS
backend->explicit_null_checks = 1;
#endif
#ifdef MONO_ARCH_HAVE_OPTIMIZED_DIV
backend->optimized_div = 1;
#endif
#ifdef MONO_ARCH_FORCE_FLOAT32
backend->force_float32 = 1;
#endif
}
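/* Return whether SIMD intrinsics can be enabled for CFG */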
static gboolean
is_simd_supported (MonoCompile *cfg)
{
#ifdef DISABLE_SIMD
return FALSE;
#endif
// FIXME: Clean this up
#ifdef TARGET_WASM
if ((mini_get_cpu_features (cfg) & MONO_CPU_WASM_SIMD) == 0)
return FALSE;
#else
if (cfg->llvm_only)
return FALSE;
#endif
return TRUE;
}
/* Determine how an rgctx is passed to a method */
MonoRgctxAccess
mini_get_rgctx_access_for_method (MonoMethod *method)
{
/* gshared DIMs (default interface methods) use an mrgctx */
if (mini_method_is_default_method (method))
return MONO_RGCTX_ACCESS_MRGCTX;
if (mono_method_get_context (method)->method_inst)
return MONO_RGCTX_ACCESS_MRGCTX;
if (method->flags & METHOD_ATTRIBUTE_STATIC || m_class_is_valuetype (method->klass))
return MONO_RGCTX_ACCESS_VTABLE;
return MONO_RGCTX_ACCESS_THIS;
}
/*
* mini_method_compile:
* @method: the method to compile
* @opts: the optimization flags to use
* @flags: compilation flags
* @parts: debug flag
*
* Returns: a MonoCompile* pointer. Caller must check the exception_type
* field in the returned struct to see if compilation succeeded.
*/
MonoCompile*
mini_method_compile (MonoMethod *method, guint32 opts, JitFlags flags, int parts, int aot_method_index)
{
MonoMethodHeader *header;
MonoMethodSignature *sig;
MonoCompile *cfg;
int i;
gboolean try_generic_shared, try_llvm = FALSE;
MonoMethod *method_to_compile, *method_to_register;
gboolean method_is_gshared = FALSE;
gboolean run_cctors = (flags & JIT_FLAG_RUN_CCTORS) ? 1 : 0;
gboolean compile_aot = (flags & JIT_FLAG_AOT) ? 1 : 0;
gboolean full_aot = (flags & JIT_FLAG_FULL_AOT) ? 1 : 0;
gboolean disable_direct_icalls = (flags & JIT_FLAG_NO_DIRECT_ICALLS) ? 1 : 0;
gboolean gsharedvt_method = FALSE;
gboolean interp_entry_only = FALSE;
#ifdef ENABLE_LLVM
gboolean llvm = (flags & JIT_FLAG_LLVM) ? 1 : 0;
#endif
static gboolean verbose_method_inited;
static char **verbose_method_names;
mono_atomic_inc_i32 (&mono_jit_stats.methods_compiled);
MONO_PROFILER_RAISE (jit_begin, (method));
if (MONO_METHOD_COMPILE_BEGIN_ENABLED ())
MONO_PROBE_METHOD_COMPILE_BEGIN (method);
gsharedvt_method = is_gsharedvt_method (method);
/*
* In AOT mode, method can be the following:
* - a gsharedvt method.
* - a method inflated with type parameters. This is for ref/partial sharing.
* - a method inflated with concrete types.
*/
if (compile_aot) {
if (is_open_method (method)) {
try_generic_shared = TRUE;
method_is_gshared = TRUE;
} else {
try_generic_shared = FALSE;
}
g_assert (opts & MONO_OPT_GSHARED);
} else {
try_generic_shared = mono_class_generic_sharing_enabled (method->klass) &&
(opts & MONO_OPT_GSHARED) && mono_method_is_generic_sharable_full (method, FALSE, FALSE, FALSE);
if (mini_is_gsharedvt_sharable_method (method)) {
/*
if (!mono_debug_count ())
try_generic_shared = FALSE;
*/
}
}
/*
if (try_generic_shared && !mono_debug_count ())
try_generic_shared = FALSE;
*/
if (opts & MONO_OPT_GSHARED) {
if (try_generic_shared)
mono_atomic_inc_i32 (&mono_stats.generics_sharable_methods);
else if (mono_method_is_generic_impl (method))
mono_atomic_inc_i32 (&mono_stats.generics_unsharable_methods);
}
#ifdef ENABLE_LLVM
try_llvm = mono_use_llvm || llvm;
#endif
#ifndef MONO_ARCH_FLOAT32_SUPPORTED
opts &= ~MONO_OPT_FLOAT32;
#endif
if (current_backend->force_float32)
/* Force float32 mode on newer platforms */
opts |= MONO_OPT_FLOAT32;
restart_compile:
if (method_is_gshared) {
method_to_compile = method;
} else {
if (try_generic_shared) {
ERROR_DECL (error);
method_to_compile = mini_get_shared_method_full (method, SHARE_MODE_NONE, error);
mono_error_assert_ok (error);
} else {
method_to_compile = method;
}
}
cfg = g_new0 (MonoCompile, 1);
cfg->method = method_to_compile;
cfg->mempool = mono_mempool_new ();
cfg->opt = opts;
cfg->run_cctors = run_cctors;
cfg->verbose_level = mini_verbose;
cfg->compile_aot = compile_aot;
cfg->full_aot = full_aot;
cfg->disable_omit_fp = mini_debug_options.disable_omit_fp;
cfg->skip_visibility = method->skip_visibility;
cfg->orig_method = method;
cfg->gen_seq_points = !mini_debug_options.no_seq_points_compact_data || mini_debug_options.gen_sdb_seq_points;
cfg->gen_sdb_seq_points = mini_debug_options.gen_sdb_seq_points;
cfg->llvm_only = (flags & JIT_FLAG_LLVM_ONLY) != 0;
cfg->interp = (flags & JIT_FLAG_INTERP) != 0;
cfg->use_current_cpu = (flags & JIT_FLAG_USE_CURRENT_CPU) != 0;
cfg->self_init = (flags & JIT_FLAG_SELF_INIT) != 0;
cfg->code_exec_only = (flags & JIT_FLAG_CODE_EXEC_ONLY) != 0;
cfg->backend = current_backend;
cfg->jit_mm = jit_mm_for_method (cfg->method);
cfg->mem_manager = m_method_get_mem_manager (cfg->method);
if (cfg->method->wrapper_type == MONO_WRAPPER_ALLOC) {
/* We can't have seq points inside gc critical regions */
cfg->gen_seq_points = FALSE;
cfg->gen_sdb_seq_points = FALSE;
}
/* coop requires loop detection to happen */
if (mini_safepoints_enabled ())
cfg->opt |= MONO_OPT_LOOP;
cfg->disable_llvm_implicit_null_checks = mini_debug_options.llvm_disable_implicit_null_checks;
if (cfg->backend->explicit_null_checks || mini_debug_options.explicit_null_checks) {
/* some platforms have readable null pages, so we can't rely on SIGSEGV */
cfg->explicit_null_checks = TRUE;
cfg->disable_llvm_implicit_null_checks = TRUE;
} else {
cfg->explicit_null_checks = flags & JIT_FLAG_EXPLICIT_NULL_CHECKS;
}
cfg->soft_breakpoints = mini_debug_options.soft_breakpoints;
cfg->check_pinvoke_callconv = mini_debug_options.check_pinvoke_callconv;
cfg->disable_direct_icalls = disable_direct_icalls;
cfg->direct_pinvoke = (flags & JIT_FLAG_DIRECT_PINVOKE) != 0;
cfg->interp_entry_only = interp_entry_only;
if (try_generic_shared)
cfg->gshared = TRUE;
if (cfg->gshared)
cfg->rgctx_access = mini_get_rgctx_access_for_method (cfg->method);
cfg->compile_llvm = try_llvm;
cfg->token_info_hash = g_hash_table_new (NULL, NULL);
if (cfg->compile_aot)
cfg->method_index = aot_method_index;
if (cfg->compile_llvm)
cfg->explicit_null_checks = TRUE;
if (cfg->explicit_null_checks && method->wrapper_type == MONO_WRAPPER_OTHER &&
(mono_marshal_get_wrapper_info (method)->subtype == WRAPPER_SUBTYPE_GSHAREDVT_IN_SIG ||
mono_marshal_get_wrapper_info (method)->subtype == WRAPPER_SUBTYPE_GSHAREDVT_OUT_SIG)) {
/* These wrappers contain loads/stores which can't fail */
cfg->explicit_null_checks = FALSE;
}
/*
if (!mono_debug_count ())
cfg->opt &= ~MONO_OPT_FLOAT32;
*/
if (!is_simd_supported (cfg))
cfg->opt &= ~MONO_OPT_SIMD;
cfg->r4fp = (cfg->opt & MONO_OPT_FLOAT32) ? 1 : 0;
cfg->r4_stack_type = cfg->r4fp ? STACK_R4 : STACK_R8;
if (cfg->gen_seq_points)
cfg->seq_points = g_ptr_array_new ();
cfg->error = (MonoError*)&cfg->error_value;
error_init (cfg->error);
if (cfg->compile_aot && !try_generic_shared && (method->is_generic || mono_class_is_gtd (method->klass) || method_is_gshared)) {
cfg->exception_type = MONO_EXCEPTION_GENERIC_SHARING_FAILED;
return cfg;
}
if (cfg->gshared && (gsharedvt_method || mini_is_gsharedvt_sharable_method (method))) {
MonoMethodInflated *inflated;
MonoGenericContext *context;
if (gsharedvt_method) {
g_assert (method->is_inflated);
inflated = (MonoMethodInflated*)method;
context = &inflated->context;
/* We are compiling a gsharedvt method directly */
g_assert (compile_aot);
} else {
g_assert (method_to_compile->is_inflated);
inflated = (MonoMethodInflated*)method_to_compile;
context = &inflated->context;
}
mini_init_gsctx (cfg->mempool, context, &cfg->gsctx);
cfg->gsctx_context = context;
cfg->gsharedvt = TRUE;
if (!cfg->llvm_only) {
cfg->disable_llvm = TRUE;
cfg->exception_message = g_strdup ("gsharedvt");
}
}
if (cfg->gshared) {
method_to_register = method_to_compile;
} else {
g_assert (method == method_to_compile);
method_to_register = method;
}
cfg->method_to_register = method_to_register;
ERROR_DECL (err);
sig = mono_method_signature_checked (cfg->method, err);
if (!sig) {
cfg->exception_type = MONO_EXCEPTION_TYPE_LOAD;
cfg->exception_message = g_strdup (mono_error_get_message (err));
mono_error_cleanup (err);
if (MONO_METHOD_COMPILE_END_ENABLED ())
MONO_PROBE_METHOD_COMPILE_END (method, FALSE);
return cfg;
}
header = cfg->header = mono_method_get_header_checked (cfg->method, cfg->error);
if (!header) {
mono_cfg_set_exception (cfg, MONO_EXCEPTION_MONO_ERROR);
if (MONO_METHOD_COMPILE_END_ENABLED ())
MONO_PROBE_METHOD_COMPILE_END (method, FALSE);
return cfg;
}
if (cfg->llvm_only && cfg->interp && !cfg->interp_entry_only && header->num_clauses) {
cfg->deopt = TRUE;
/* Can't reconstruct inlined state */
cfg->disable_inline = TRUE;
}
#ifdef ENABLE_LLVM
{
static gboolean inited;
if (!inited)
inited = TRUE;
/*
* Check for methods which cannot be compiled by LLVM early, to avoid
* the extra compilation pass.
*/
if (COMPILE_LLVM (cfg)) {
mono_llvm_check_method_supported (cfg);
if (cfg->disable_llvm) {
if (cfg->verbose_level > 0) {
//nm = mono_method_full_name (cfg->method, TRUE);
printf ("LLVM failed for '%s.%s': %s\n", m_class_get_name (method->klass), method->name, cfg->exception_message);
//g_free (nm);
}
if (cfg->llvm_only) {
g_free (cfg->exception_message);
cfg->disable_aot = TRUE;
return cfg;
}
mono_destroy_compile (cfg);
try_llvm = FALSE;
goto restart_compile;
}
}
}
#endif
cfg->prof_flags = mono_profiler_get_call_instrumentation_flags (cfg->method);
cfg->prof_coverage = mono_profiler_coverage_instrumentation_enabled (cfg->method);
gboolean trace = mono_jit_trace_calls != NULL && mono_trace_eval (cfg->method);
if (trace)
cfg->prof_flags = (MonoProfilerCallInstrumentationFlags)(
MONO_PROFILER_CALL_INSTRUMENTATION_ENTER | MONO_PROFILER_CALL_INSTRUMENTATION_ENTER_CONTEXT |
MONO_PROFILER_CALL_INSTRUMENTATION_LEAVE | MONO_PROFILER_CALL_INSTRUMENTATION_LEAVE_CONTEXT);
/* The debugger has no liveness information, so avoid sharing registers/stack slots */
if (mini_debug_options.mdb_optimizations || MONO_CFG_PROFILE_CALL_CONTEXT (cfg)) {
cfg->disable_reuse_registers = TRUE;
cfg->disable_reuse_stack_slots = TRUE;
/*
* This decreases the chance the debugger will read registers/stack slots which are
* not yet initialized.
*/
cfg->disable_initlocals_opt = TRUE;
cfg->extend_live_ranges = TRUE;
/* The debugger needs all locals to be on the stack or in a global register */
cfg->disable_vreg_to_lvreg = TRUE;
/* Don't remove unused variables when running inside the debugger since the user
* may still want to view them. */
cfg->disable_deadce_vars = TRUE;
cfg->opt &= ~MONO_OPT_DEADCE;
cfg->opt &= ~MONO_OPT_INLINE;
cfg->opt &= ~MONO_OPT_COPYPROP;
cfg->opt &= ~MONO_OPT_CONSPROP;
/* This is needed for the soft debugger, which doesn't like code after the epilog */
cfg->disable_out_of_line_bblocks = TRUE;
}
mini_gc_init_cfg (cfg);
if (method->wrapper_type == MONO_WRAPPER_OTHER) {
WrapperInfo *info = mono_marshal_get_wrapper_info (method);
if ((info && (info->subtype == WRAPPER_SUBTYPE_GSHAREDVT_IN_SIG || info->subtype == WRAPPER_SUBTYPE_GSHAREDVT_OUT_SIG))) {
cfg->disable_gc_safe_points = TRUE;
/* This is safe, these wrappers only store to the stack */
cfg->gen_write_barriers = FALSE;
}
}
if (COMPILE_LLVM (cfg)) {
cfg->opt |= MONO_OPT_ABCREM;
}
if (!verbose_method_inited) {
char *env = g_getenv ("MONO_VERBOSE_METHOD");
if (env != NULL)
verbose_method_names = g_strsplit (env, ";", -1);
verbose_method_inited = TRUE;
}
if (verbose_method_names) {
int i;
for (i = 0; verbose_method_names [i] != NULL; i++){
const char *name = verbose_method_names [i];
if ((strchr (name, '.') > name) || strchr (name, ':') || strchr (name, '*')) {
MonoMethodDesc *desc;
desc = mono_method_desc_new (name, TRUE);
if (desc) {
if (mono_method_desc_full_match (desc, cfg->method)) {
cfg->verbose_level = 4;
}
mono_method_desc_free (desc);
}
} else {
if (strcmp (cfg->method->name, name) == 0)
cfg->verbose_level = 4;
}
}
}
cfg->intvars = (guint16 *)mono_mempool_alloc0 (cfg->mempool, sizeof (guint16) * STACK_MAX * header->max_stack);
if (cfg->verbose_level > 0) {
char *method_name;
method_name = mono_method_get_full_name (method);
g_print ("converting %s%s%s%smethod %s\n", COMPILE_LLVM (cfg) ? "llvm " : "", cfg->gsharedvt ? "gsharedvt " : "", (cfg->gshared && !cfg->gsharedvt) ? "gshared " : "", cfg->interp_entry_only ? "interp only " : "", method_name);
/*
if (COMPILE_LLVM (cfg))
g_print ("converting llvm method %s\n", method_name = mono_method_full_name (method, TRUE));
else if (cfg->gsharedvt)
g_print ("converting gsharedvt method %s\n", method_name = mono_method_full_name (method_to_compile, TRUE));
else if (cfg->gshared)
g_print ("converting shared method %s\n", method_name = mono_method_full_name (method_to_compile, TRUE));
else
g_print ("converting method %s\n", method_name = mono_method_full_name (method, TRUE));
*/
g_free (method_name);
}
if (cfg->opt & MONO_OPT_ABCREM)
cfg->opt |= MONO_OPT_SSA;
cfg->rs = mono_regstate_new ();
cfg->next_vreg = cfg->rs->next_vreg;
/* FIXME: Fix SSA to handle branches inside bblocks */
if (cfg->opt & MONO_OPT_SSA)
cfg->enable_extended_bblocks = FALSE;
/*
* FIXME: This confuses liveness analysis because variables which are assigned after
* a branch inside a bblock become part of the kill set, even though the assignment
* might not get executed. This causes the optimize_initlocals pass to delete some
* assignments which are needed.
* Also, the mono_if_conversion pass needs to be modified to recognize the code
* created by this.
*/
//cfg->enable_extended_bblocks = TRUE;
/*
* create MonoInst* which represents arguments and local variables
*/
mono_compile_create_vars (cfg);
mono_cfg_dump_create_context (cfg);
mono_cfg_dump_begin_group (cfg);
MONO_TIME_TRACK (mono_jit_stats.jit_method_to_ir, i = mono_method_to_ir (cfg, method_to_compile, NULL, NULL, NULL, NULL, 0, FALSE));
mono_cfg_dump_ir (cfg, "method-to-ir");
if (cfg->gdump_ctx != NULL) {
/* workaround for graph visualization, as it doesn't handle empty basic blocks properly */
mono_insert_nop_in_empty_bb (cfg);
mono_cfg_dump_ir (cfg, "mono_insert_nop_in_empty_bb");
}
if (i < 0) {
if (try_generic_shared && cfg->exception_type == MONO_EXCEPTION_GENERIC_SHARING_FAILED) {
if (compile_aot) {
if (MONO_METHOD_COMPILE_END_ENABLED ())
MONO_PROBE_METHOD_COMPILE_END (method, FALSE);
return cfg;
}
mono_destroy_compile (cfg);
try_generic_shared = FALSE;
goto restart_compile;
}
g_assert (cfg->exception_type != MONO_EXCEPTION_GENERIC_SHARING_FAILED);
if (MONO_METHOD_COMPILE_END_ENABLED ())
MONO_PROBE_METHOD_COMPILE_END (method, FALSE);
/* cfg contains the details of the failure, so let the caller cleanup */
return cfg;
}
cfg->stat_basic_blocks += cfg->num_bblocks;
if (COMPILE_LLVM (cfg)) {
MonoInst *ins;
/* The IR has to be in SSA form for LLVM */
cfg->opt |= MONO_OPT_SSA;
// FIXME:
if (cfg->ret) {
// Allow SSA on the result value
if (!cfg->interp_entry_only)
cfg->ret->flags &= ~MONO_INST_VOLATILE;
// Add an explicit return instruction referencing the return value
MONO_INST_NEW (cfg, ins, OP_SETRET);
ins->sreg1 = cfg->ret->dreg;
MONO_ADD_INS (cfg->bb_exit, ins);
}
cfg->opt &= ~MONO_OPT_LINEARS;
/* FIXME: */
cfg->opt &= ~MONO_OPT_BRANCH;
}
cfg->after_method_to_ir = TRUE;
/* todo: remove code when we have verified that the liveness for try/catch blocks
* works perfectly
*/
/*
* Currently, this can't be commented out since exception blocks are not
* processed during liveness analysis.
* It is also needed, because otherwise the local optimization passes would
* delete assignments in cases like this:
* r1 <- 1
* <something which throws>
* r1 <- 2
* This also allows SSA to be run on methods containing exception clauses, since
* SSA will ignore variables marked VOLATILE.
*/
MONO_TIME_TRACK (mono_jit_stats.jit_liveness_handle_exception_clauses, mono_liveness_handle_exception_clauses (cfg));
mono_cfg_dump_ir (cfg, "liveness_handle_exception_clauses");
MONO_TIME_TRACK (mono_jit_stats.jit_handle_out_of_line_bblock, mono_handle_out_of_line_bblock (cfg));
mono_cfg_dump_ir (cfg, "handle_out_of_line_bblock");
/*g_print ("numblocks = %d\n", cfg->num_bblocks);*/
if (!COMPILE_LLVM (cfg)) {
MONO_TIME_TRACK (mono_jit_stats.jit_decompose_long_opts, mono_decompose_long_opts (cfg));
mono_cfg_dump_ir (cfg, "decompose_long_opts");
}
/* Should be done before branch opts */
if (cfg->opt & (MONO_OPT_CONSPROP | MONO_OPT_COPYPROP)) {
MONO_TIME_TRACK (mono_jit_stats.jit_local_cprop, mono_local_cprop (cfg));
mono_cfg_dump_ir (cfg, "local_cprop");
}
if (cfg->flags & MONO_CFG_HAS_TYPE_CHECK) {
MONO_TIME_TRACK (mono_jit_stats.jit_decompose_typechecks, mono_decompose_typechecks (cfg));
if (cfg->gdump_ctx != NULL) {
/* workaround for graph visualization, as it doesn't handle empty basic blocks properly */
mono_insert_nop_in_empty_bb (cfg);
}
mono_cfg_dump_ir (cfg, "decompose_typechecks");
}
/*
* Should be done after cprop which can do strength reduction on
* some of these ops, after propagating immediates.
*/
if (cfg->has_emulated_ops) {
MONO_TIME_TRACK (mono_jit_stats.jit_local_emulate_ops, mono_local_emulate_ops (cfg));
mono_cfg_dump_ir (cfg, "local_emulate_ops");
}
if (cfg->opt & MONO_OPT_BRANCH) {
MONO_TIME_TRACK (mono_jit_stats.jit_optimize_branches, mono_optimize_branches (cfg));
mono_cfg_dump_ir (cfg, "optimize_branches");
}
/* This must be done _before_ global reg alloc and _after_ decompose */
MONO_TIME_TRACK (mono_jit_stats.jit_handle_global_vregs, mono_handle_global_vregs (cfg));
mono_cfg_dump_ir (cfg, "handle_global_vregs");
if (cfg->opt & MONO_OPT_DEADCE) {
MONO_TIME_TRACK (mono_jit_stats.jit_local_deadce, mono_local_deadce (cfg));
mono_cfg_dump_ir (cfg, "local_deadce");
}
if (cfg->opt & MONO_OPT_ALIAS_ANALYSIS) {
MONO_TIME_TRACK (mono_jit_stats.jit_local_alias_analysis, mono_local_alias_analysis (cfg));
mono_cfg_dump_ir (cfg, "local_alias_analysis");
}
/* Disable this for LLVM to make the IR easier to handle */
if (!COMPILE_LLVM (cfg)) {
MONO_TIME_TRACK (mono_jit_stats.jit_if_conversion, mono_if_conversion (cfg));
mono_cfg_dump_ir (cfg, "if_conversion");
}
remove_empty_finally_pass (cfg);
if (cfg->llvm_only && cfg->interp && !cfg->method->wrapper_type && !interp_entry_only && !cfg->deopt) {
/* Disable llvm if there are still finally clauses left */
for (int i = 0; i < cfg->header->num_clauses; ++i) {
MonoExceptionClause *clause = &header->clauses [i];
if (clause->flags == MONO_EXCEPTION_CLAUSE_FINALLY && !cfg->clause_is_dead [i]) {
cfg->exception_message = g_strdup ("finally clause.");
cfg->disable_llvm = TRUE;
break;
}
}
}
mono_threads_safepoint ();
MONO_TIME_TRACK (mono_jit_stats.jit_bb_ordering, mono_bb_ordering (cfg));
mono_cfg_dump_ir (cfg, "bb_ordering");
if (((cfg->num_varinfo > 2000) || (cfg->num_bblocks > 1000)) && !cfg->compile_aot) {
/*
* we disable some optimizations if there are too many variables
* because JIT time may become too expensive. The actual number needs
* to be tweaked and eventually the non-linear algorithms should be fixed.
*/
cfg->opt &= ~ (MONO_OPT_LINEARS | MONO_OPT_COPYPROP | MONO_OPT_CONSPROP);
cfg->disable_ssa = TRUE;
}
if (cfg->num_varinfo > 10000 && !cfg->llvm_only)
/* Disable SSA (and thus LLVM) for overly complex methods */
cfg->disable_ssa = TRUE;
if (cfg->opt & MONO_OPT_LOOP) {
MONO_TIME_TRACK (mono_jit_stats.jit_compile_dominator_info, mono_compile_dominator_info (cfg, MONO_COMP_DOM | MONO_COMP_IDOM));
MONO_TIME_TRACK (mono_jit_stats.jit_compute_natural_loops, mono_compute_natural_loops (cfg));
}
if (mono_threads_are_safepoints_enabled ()) {
MONO_TIME_TRACK (mono_jit_stats.jit_insert_safepoints, insert_safepoints (cfg));
mono_cfg_dump_ir (cfg, "insert_safepoints");
}
/* after method_to_ir */
if (parts == 1) {
if (MONO_METHOD_COMPILE_END_ENABLED ())
MONO_PROBE_METHOD_COMPILE_END (method, TRUE);
return cfg;
}
/*
if (header->num_clauses)
cfg->disable_ssa = TRUE;
*/
//#define DEBUGSSA "logic_run"
//#define DEBUGSSA_CLASS "Tests"
#ifdef DEBUGSSA
if (!cfg->disable_ssa) {
mono_local_cprop (cfg);
#ifndef DISABLE_SSA
mono_ssa_compute (cfg);
#endif
}
#else
if (cfg->opt & MONO_OPT_SSA) {
if (!(cfg->comp_done & MONO_COMP_SSA) && !cfg->disable_ssa) {
#ifndef DISABLE_SSA
MONO_TIME_TRACK (mono_jit_stats.jit_ssa_compute, mono_ssa_compute (cfg));
mono_cfg_dump_ir (cfg, "ssa_compute");
#endif
if (cfg->verbose_level >= 2) {
print_dfn (cfg);
}
}
}
#endif
/* after SSA translation */
if (parts == 2) {
if (MONO_METHOD_COMPILE_END_ENABLED ())
MONO_PROBE_METHOD_COMPILE_END (method, TRUE);
return cfg;
}
if ((cfg->opt & MONO_OPT_CONSPROP) || (cfg->opt & MONO_OPT_COPYPROP)) {
if (cfg->comp_done & MONO_COMP_SSA && !COMPILE_LLVM (cfg)) {
#ifndef DISABLE_SSA
MONO_TIME_TRACK (mono_jit_stats.jit_ssa_cprop, mono_ssa_cprop (cfg));
mono_cfg_dump_ir (cfg, "ssa_cprop");
#endif
}
}
#ifndef DISABLE_SSA
if (cfg->comp_done & MONO_COMP_SSA && !COMPILE_LLVM (cfg)) {
//mono_ssa_strength_reduction (cfg);
if (cfg->opt & MONO_OPT_DEADCE) {
MONO_TIME_TRACK (mono_jit_stats.jit_ssa_deadce, mono_ssa_deadce (cfg));
mono_cfg_dump_ir (cfg, "ssa_deadce");
}
if ((cfg->flags & (MONO_CFG_HAS_LDELEMA|MONO_CFG_HAS_CHECK_THIS)) && (cfg->opt & MONO_OPT_ABCREM)) {
MONO_TIME_TRACK (mono_jit_stats.jit_perform_abc_removal, mono_perform_abc_removal (cfg));
mono_cfg_dump_ir (cfg, "perform_abc_removal");
}
MONO_TIME_TRACK (mono_jit_stats.jit_ssa_remove, mono_ssa_remove (cfg));
mono_cfg_dump_ir (cfg, "ssa_remove");
MONO_TIME_TRACK (mono_jit_stats.jit_local_cprop2, mono_local_cprop (cfg));
mono_cfg_dump_ir (cfg, "local_cprop2");
MONO_TIME_TRACK (mono_jit_stats.jit_handle_global_vregs2, mono_handle_global_vregs (cfg));
mono_cfg_dump_ir (cfg, "handle_global_vregs2");
if (cfg->opt & MONO_OPT_DEADCE) {
MONO_TIME_TRACK (mono_jit_stats.jit_local_deadce2, mono_local_deadce (cfg));
mono_cfg_dump_ir (cfg, "local_deadce2");
}
if (cfg->opt & MONO_OPT_BRANCH) {
MONO_TIME_TRACK (mono_jit_stats.jit_optimize_branches2, mono_optimize_branches (cfg));
mono_cfg_dump_ir (cfg, "optimize_branches2");
}
}
#endif
if (cfg->comp_done & MONO_COMP_SSA && COMPILE_LLVM (cfg)) {
mono_ssa_loop_invariant_code_motion (cfg);
mono_cfg_dump_ir (cfg, "loop_invariant_code_motion");
/* This removes MONO_INST_FAULT flags too so perform it unconditionally */
if (cfg->opt & MONO_OPT_ABCREM) {
mono_perform_abc_removal (cfg);
mono_cfg_dump_ir (cfg, "abc_removal");
}
}
/* after SSA removal */
if (parts == 3) {
if (MONO_METHOD_COMPILE_END_ENABLED ())
MONO_PROBE_METHOD_COMPILE_END (method, TRUE);
return cfg;
}
if (cfg->llvm_only && cfg->gsharedvt)
mono_ssa_remove_gsharedvt (cfg);
#ifdef MONO_ARCH_SOFT_FLOAT_FALLBACK
if (COMPILE_SOFT_FLOAT (cfg))
mono_decompose_soft_float (cfg);
#endif
MONO_TIME_TRACK (mono_jit_stats.jit_decompose_vtype_opts, mono_decompose_vtype_opts (cfg));
if (cfg->flags & MONO_CFG_NEEDS_DECOMPOSE) {
MONO_TIME_TRACK (mono_jit_stats.jit_decompose_array_access_opts, mono_decompose_array_access_opts (cfg));
mono_cfg_dump_ir (cfg, "decompose_array_access_opts");
}
if (cfg->got_var) {
#ifndef MONO_ARCH_GOT_REG
GList *regs;
#endif
int got_reg;
g_assert (cfg->got_var_allocated);
/*
* Always allocate the GOT var to a register, because keeping it
* in memory will increase the number of live temporaries in some
* code created by inssel.brg, leading to the well known
* spills+branches problem. Testcase: mcs crash in
* System.MonoCustomAttrs:GetCustomAttributes.
*/
#ifdef MONO_ARCH_GOT_REG
got_reg = MONO_ARCH_GOT_REG;
#else
regs = mono_arch_get_global_int_regs (cfg);
g_assert (regs);
got_reg = GPOINTER_TO_INT (regs->data);
g_list_free (regs);
#endif
cfg->got_var->opcode = OP_REGVAR;
cfg->got_var->dreg = got_reg;
cfg->used_int_regs |= 1LL << cfg->got_var->dreg;
}
/*
* Have to call this again to process variables added since the first call.
*/
MONO_TIME_TRACK(mono_jit_stats.jit_liveness_handle_exception_clauses2, mono_liveness_handle_exception_clauses (cfg));
if (cfg->opt & MONO_OPT_LINEARS) {
GList *vars, *regs, *l;
/* FIXME: maybe we can avoid computing liveness here if it was already computed? */
cfg->comp_done &= ~MONO_COMP_LIVENESS;
if (!(cfg->comp_done & MONO_COMP_LIVENESS))
MONO_TIME_TRACK (mono_jit_stats.jit_analyze_liveness, mono_analyze_liveness (cfg));
if ((vars = mono_arch_get_allocatable_int_vars (cfg))) {
regs = mono_arch_get_global_int_regs (cfg);
/* Remove the reg reserved for holding the GOT address */
if (cfg->got_var) {
for (l = regs; l; l = l->next) {
if (GPOINTER_TO_UINT (l->data) == cfg->got_var->dreg) {
regs = g_list_delete_link (regs, l);
break;
}
}
}
MONO_TIME_TRACK (mono_jit_stats.jit_linear_scan, mono_linear_scan (cfg, vars, regs, &cfg->used_int_regs));
mono_cfg_dump_ir (cfg, "linear_scan");
}
}
//mono_print_code (cfg, "");
//print_dfn (cfg);
/* variables are allocated after decompose, since decompose could create temps */
if (!COMPILE_LLVM (cfg)) {
MONO_TIME_TRACK (mono_jit_stats.jit_arch_allocate_vars, mono_arch_allocate_vars (cfg));
mono_cfg_dump_ir (cfg, "arch_allocate_vars");
if (cfg->exception_type)
return cfg;
}
if (cfg->gsharedvt)
mono_allocate_gsharedvt_vars (cfg);
if (!COMPILE_LLVM (cfg)) {
gboolean need_local_opts;
MONO_TIME_TRACK (mono_jit_stats.jit_spill_global_vars, mono_spill_global_vars (cfg, &need_local_opts));
mono_cfg_dump_ir (cfg, "spill_global_vars");
if (need_local_opts || cfg->compile_aot) {
/* To optimize code created by spill_global_vars */
MONO_TIME_TRACK (mono_jit_stats.jit_local_cprop3, mono_local_cprop (cfg));
if (cfg->opt & MONO_OPT_DEADCE)
MONO_TIME_TRACK (mono_jit_stats.jit_local_deadce3, mono_local_deadce (cfg));
mono_cfg_dump_ir (cfg, "needs_local_opts");
}
}
mono_insert_branches_between_bblocks (cfg);
if (COMPILE_LLVM (cfg)) {
#ifdef ENABLE_LLVM
char *nm;
/* The IR has to be in SSA form for LLVM */
if (!(cfg->comp_done & MONO_COMP_SSA)) {
cfg->exception_message = g_strdup ("SSA disabled.");
cfg->disable_llvm = TRUE;
}
if (cfg->flags & MONO_CFG_NEEDS_DECOMPOSE)
mono_decompose_array_access_opts (cfg);
if (!cfg->disable_llvm)
mono_llvm_emit_method (cfg);
if (cfg->disable_llvm) {
if (cfg->verbose_level > 0) {
//nm = mono_method_full_name (cfg->method, TRUE);
printf ("LLVM failed for '%s.%s': %s\n", m_class_get_name (method->klass), method->name, cfg->exception_message);
//g_free (nm);
}
if (cfg->llvm_only && cfg->interp && !interp_entry_only) {
// If interp support is enabled, restart compilation, generating interp entry code only
interp_entry_only = TRUE;
mono_destroy_compile (cfg);
goto restart_compile;
}
if (cfg->llvm_only) {
cfg->disable_aot = TRUE;
return cfg;
}
mono_destroy_compile (cfg);
try_llvm = FALSE;
goto restart_compile;
}
if (cfg->verbose_level > 0 && !cfg->compile_aot) {
nm = mono_method_get_full_name (cfg->method);
g_print ("LLVM Method %s emitted at %p to %p (code length %d)\n",
nm,
cfg->native_code, cfg->native_code + cfg->code_len, cfg->code_len);
g_free (nm);
}
#endif
} else {
MONO_TIME_TRACK (mono_jit_stats.jit_codegen, mono_codegen (cfg));
mono_cfg_dump_ir (cfg, "codegen");
if (cfg->exception_type)
return cfg;
}
if (COMPILE_LLVM (cfg))
mono_atomic_inc_i32 (&mono_jit_stats.methods_with_llvm);
else
mono_atomic_inc_i32 (&mono_jit_stats.methods_without_llvm);
MONO_TIME_TRACK (mono_jit_stats.jit_create_jit_info, cfg->jit_info = create_jit_info (cfg, method_to_compile));
if (cfg->extend_live_ranges) {
/* Extend live ranges to cover the whole method */
for (i = 0; i < cfg->num_varinfo; ++i)
MONO_VARINFO (cfg, i)->live_range_end = cfg->code_len;
}
MONO_TIME_TRACK (mono_jit_stats.jit_gc_create_gc_map, mini_gc_create_gc_map (cfg));
MONO_TIME_TRACK (mono_jit_stats.jit_save_seq_point_info, mono_save_seq_point_info (cfg, cfg->jit_info));
if (!cfg->compile_aot)
mono_lldb_save_method_info (cfg);
if (cfg->verbose_level >= 2) {
char *id = mono_method_full_name (cfg->method, TRUE);
g_print ("\n*** ASM for %s ***\n", id);
mono_disassemble_code (cfg, cfg->native_code, cfg->code_len, id + 3);
g_print ("***\n\n");
g_free (id);
}
if (!cfg->compile_aot && !(flags & JIT_FLAG_DISCARD_RESULTS)) {
mono_jit_info_table_add (cfg->jit_info);
if (cfg->method->dynamic) {
MonoJitMemoryManager *jit_mm = (MonoJitMemoryManager*)cfg->jit_mm;
MonoJitDynamicMethodInfo *res;
jit_mm_lock (jit_mm);
g_assert (jit_mm->dynamic_code_hash);
res = (MonoJitDynamicMethodInfo *)g_hash_table_lookup (jit_mm->dynamic_code_hash, method);
jit_mm_unlock (jit_mm);
g_assert (res);
res->ji = cfg->jit_info;
}
mono_postprocess_patches_after_ji_publish (cfg);
}
#if 0
if (cfg->gsharedvt)
printf ("GSHAREDVT: %s\n", mono_method_full_name (cfg->method, TRUE));
#endif
/* collect statistics */
#ifndef DISABLE_PERFCOUNTERS
mono_atomic_inc_i32 (&mono_perfcounters->jit_methods);
mono_atomic_fetch_add_i32 (&mono_perfcounters->jit_bytes, header->code_size);
#endif
gint32 code_size_ratio = cfg->code_len;
mono_atomic_fetch_add_i32 (&mono_jit_stats.allocated_code_size, code_size_ratio);
mono_atomic_fetch_add_i32 (&mono_jit_stats.native_code_size, code_size_ratio);
/* FIXME: use an explicit function to read booleans */
if ((gboolean)mono_atomic_load_i32 ((gint32*)&mono_jit_stats.enabled)) {
if (code_size_ratio > mono_atomic_load_i32 (&mono_jit_stats.biggest_method_size)) {
mono_atomic_store_i32 (&mono_jit_stats.biggest_method_size, code_size_ratio);
char *biggest_method = g_strdup_printf ("%s::%s", m_class_get_name (method->klass), method->name);
biggest_method = (char*)mono_atomic_xchg_ptr ((gpointer*)&mono_jit_stats.biggest_method, biggest_method);
g_free (biggest_method);
}
code_size_ratio = (code_size_ratio * 100) / header->code_size;
if (code_size_ratio > mono_atomic_load_i32 (&mono_jit_stats.max_code_size_ratio)) {
mono_atomic_store_i32 (&mono_jit_stats.max_code_size_ratio, code_size_ratio);
char *max_ratio_method = g_strdup_printf ("%s::%s", m_class_get_name (method->klass), method->name);
max_ratio_method = (char*)mono_atomic_xchg_ptr ((gpointer*)&mono_jit_stats.max_ratio_method, max_ratio_method);
g_free (max_ratio_method);
}
}
if (MONO_METHOD_COMPILE_END_ENABLED ())
MONO_PROBE_METHOD_COMPILE_END (method, TRUE);
mono_cfg_dump_close_group (cfg);
return cfg;
}
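/*
 * mini_class_has_reference_variant_generic_argument:
 *
 * Return whether KLASS has a reference type argument in a variant
 * (covariant/contravariant) type parameter position.
 */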
gboolean
mini_class_has_reference_variant_generic_argument (MonoCompile *cfg, MonoClass *klass, int context_used)
{
int i;
MonoGenericContainer *container;
MonoGenericInst *ginst;
if (mono_class_is_ginst (klass)) {
container = mono_class_get_generic_container (mono_class_get_generic_class (klass)->container_class);
ginst = mono_class_get_generic_class (klass)->context.class_inst;
} else if (mono_class_is_gtd (klass) && context_used) {
container = mono_class_get_generic_container (klass);
ginst = container->context.class_inst;
} else {
return FALSE;
}
for (i = 0; i < container->type_argc; ++i) {
MonoType *type;
if (!(mono_generic_container_get_param_info (container, i)->flags & (MONO_GEN_PARAM_VARIANT|MONO_GEN_PARAM_COVARIANT)))
continue;
type = ginst->type_argv [i];
if (mini_type_is_reference (type))
return TRUE;
}
return FALSE;
}
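/*
 * mono_cfg_add_try_hole:
 *
 * Record that the native code of CLAUSE's try region has a hole starting at
 * START inside BB, i.e. a range which should not be covered by the clause.
 */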
void
mono_cfg_add_try_hole (MonoCompile *cfg, MonoExceptionClause *clause, guint8 *start, MonoBasicBlock *bb)
{
TryBlockHole *hole = (TryBlockHole *)mono_mempool_alloc (cfg->mempool, sizeof (TryBlockHole));
hole->clause = clause;
hole->start_offset = start - cfg->native_code;
hole->basic_block = bb;
cfg->try_block_holes = g_slist_append_mempool (cfg->mempool, cfg->try_block_holes, hole);
}
void
mono_cfg_set_exception (MonoCompile *cfg, MonoExceptionType type)
{
cfg->exception_type = type;
}
/* Assumes ownership of the MSG argument */
void
mono_cfg_set_exception_invalid_program (MonoCompile *cfg, char *msg)
{
mono_cfg_set_exception (cfg, MONO_EXCEPTION_MONO_ERROR);
mono_error_set_generic_error (cfg->error, "System", "InvalidProgramException", "%s", msg);
}
#endif /* DISABLE_JIT */
gint64 mono_time_track_start ()
{
return mono_100ns_ticks ();
}
/*
* mono_time_track_end:
*
* Uses UnlockedAdd64 () to update \param time.
*/
void mono_time_track_end (gint64 *time, gint64 start)
{
UnlockedAdd64 (time, mono_100ns_ticks () - start);
}
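/*
 * A minimal usage sketch of the time-tracking pair (the counter used here is
 * just an example):
 *
 *   gint64 start = mono_time_track_start ();
 *   ... the work being timed ...
 *   mono_time_track_end (&mono_jit_stats.jit_codegen, start);
 *
 * The elapsed 100ns ticks are accumulated into the counter with an unlocked
 * add, so rare races only perturb the statistics.
 */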
/*
* mono_update_jit_stats:
*
* Only call this function in locked environments to avoid data races.
*/
MONO_NO_SANITIZE_THREAD
void
mono_update_jit_stats (MonoCompile *cfg)
{
mono_jit_stats.allocate_var += cfg->stat_allocate_var;
mono_jit_stats.locals_stack_size += cfg->stat_locals_stack_size;
mono_jit_stats.basic_blocks += cfg->stat_basic_blocks;
mono_jit_stats.max_basic_blocks = MAX (cfg->stat_basic_blocks, mono_jit_stats.max_basic_blocks);
mono_jit_stats.cil_code_size += cfg->stat_cil_code_size;
mono_jit_stats.regvars += cfg->stat_n_regvars;
mono_jit_stats.inlineable_methods += cfg->stat_inlineable_methods;
mono_jit_stats.inlined_methods += cfg->stat_inlined_methods;
mono_jit_stats.code_reallocs += cfg->stat_code_reallocs;
}
/*
* mono_jit_compile_method_inner:
*
* Main entry point for the JIT.
*/
gpointer
mono_jit_compile_method_inner (MonoMethod *method, int opt, MonoError *error)
{
MonoCompile *cfg;
gpointer code = NULL;
MonoJitInfo *jinfo, *info;
MonoVTable *vtable;
MonoException *ex = NULL;
gint64 start;
MonoMethod *prof_method, *shared;
error_init (error);
start = mono_time_track_start ();
cfg = mini_method_compile (method, opt, JIT_FLAG_RUN_CCTORS, 0, -1);
gint64 jit_time = 0;
mono_time_track_end (&jit_time, start);
UnlockedAdd64 (&mono_jit_stats.jit_time, jit_time);
prof_method = cfg->method;
switch (cfg->exception_type) {
case MONO_EXCEPTION_NONE:
break;
case MONO_EXCEPTION_TYPE_LOAD:
case MONO_EXCEPTION_MISSING_FIELD:
case MONO_EXCEPTION_MISSING_METHOD:
case MONO_EXCEPTION_FILE_NOT_FOUND:
case MONO_EXCEPTION_BAD_IMAGE:
case MONO_EXCEPTION_INVALID_PROGRAM: {
/* Throw a type load exception if needed */
if (cfg->exception_ptr) {
ex = mono_class_get_exception_for_failure ((MonoClass *)cfg->exception_ptr);
} else {
if (cfg->exception_type == MONO_EXCEPTION_MISSING_FIELD)
ex = mono_exception_from_name_msg (mono_defaults.corlib, "System", "MissingFieldException", cfg->exception_message);
else if (cfg->exception_type == MONO_EXCEPTION_MISSING_METHOD)
ex = mono_exception_from_name_msg (mono_defaults.corlib, "System", "MissingMethodException", cfg->exception_message);
else if (cfg->exception_type == MONO_EXCEPTION_TYPE_LOAD)
ex = mono_exception_from_name_msg (mono_defaults.corlib, "System", "TypeLoadException", cfg->exception_message);
else if (cfg->exception_type == MONO_EXCEPTION_FILE_NOT_FOUND)
ex = mono_exception_from_name_msg (mono_defaults.corlib, "System.IO", "FileNotFoundException", cfg->exception_message);
else if (cfg->exception_type == MONO_EXCEPTION_BAD_IMAGE)
ex = mono_get_exception_bad_image_format (cfg->exception_message);
else if (cfg->exception_type == MONO_EXCEPTION_INVALID_PROGRAM)
ex = mono_exception_from_name_msg (mono_defaults.corlib, "System", "InvalidProgramException", cfg->exception_message);
else
g_assert_not_reached ();
}
break;
}
case MONO_EXCEPTION_MONO_ERROR:
// FIXME: MonoError has no copy ctor
g_assert (!is_ok (cfg->error));
ex = mono_error_convert_to_exception (cfg->error);
break;
default:
g_assert_not_reached ();
}
if (ex) {
MONO_PROFILER_RAISE (jit_failed, (method));
mono_destroy_compile (cfg);
mono_error_set_exception_instance (error, ex);
return NULL;
}
if (mono_method_is_generic_sharable (method, FALSE)) {
shared = mini_get_shared_method_full (method, SHARE_MODE_NONE, error);
if (!is_ok (error)) {
MONO_PROFILER_RAISE (jit_failed, (method));
mono_destroy_compile (cfg);
return NULL;
}
} else {
shared = NULL;
}
mono_loader_lock ();
if (mono_stats_method_desc && mono_method_desc_full_match (mono_stats_method_desc, method)) {
g_printf ("Printing runtime stats at method: %s\n", mono_method_get_full_name (method));
mono_runtime_print_stats ();
}
/* Check if some other thread already did the job. In this case, we can
discard the code this thread generated. */
info = mini_lookup_method (method, shared);
if (info) {
code = info->code_start;
discarded_code ++;
discarded_jit_time += jit_time;
}
if (code == NULL) {
MonoJitMemoryManager *jit_mm = (MonoJitMemoryManager*)cfg->jit_mm;
/* The lookup + insert is atomic since this is done while holding the jit_mm lock */
jit_code_hash_lock (jit_mm);
mono_internal_hash_table_insert (&jit_mm->jit_code_hash, cfg->jit_info->d.method, cfg->jit_info);
jit_code_hash_unlock (jit_mm);
code = cfg->native_code;
if (cfg->gshared && mono_method_is_generic_sharable (method, FALSE))
mono_atomic_inc_i32 (&mono_stats.generics_shared_methods);
if (cfg->gsharedvt)
mono_atomic_inc_i32 (&mono_stats.gsharedvt_methods);
}
jinfo = cfg->jit_info;
/*
* Update global stats while holding a lock, instead of doing many
* mono_atomic_inc_i32 operations during JITting.
*/
mono_update_jit_stats (cfg);
mono_destroy_compile (cfg);
mini_patch_llvm_jit_callees (method, code);
#ifndef DISABLE_JIT
mono_emit_jit_map (jinfo);
mono_emit_jit_dump (jinfo, code);
#endif
mono_loader_unlock ();
if (!is_ok (error))
return NULL;
vtable = mono_class_vtable_checked (method->klass, error);
return_val_if_nok (error, NULL);
if (method->wrapper_type == MONO_WRAPPER_MANAGED_TO_NATIVE) {
if (mono_marshal_method_from_wrapper (method)) {
/* Native func wrappers have no method */
/* The profiler doesn't know about wrappers, so pass the original icall method */
MONO_PROFILER_RAISE (jit_done, (mono_marshal_method_from_wrapper (method), jinfo));
}
}
MONO_PROFILER_RAISE (jit_done, (method, jinfo));
if (prof_method != method)
MONO_PROFILER_RAISE (jit_done, (prof_method, jinfo));
if (!mono_runtime_class_init_full (vtable, error))
return NULL;
return MINI_ADDR_TO_FTNPTR (code);
}
/*
* mini_get_underlying_type:
*
* Return the type the JIT will use during compilation.
* Handles: byref, enums, native types, bool/char, ref types, generic sharing.
* For gsharedvt types, it will return the original VAR/MVAR.
*/
MonoType*
mini_get_underlying_type (MonoType *type)
{
return mini_type_get_underlying_type (type);
}
void
mini_jit_init (void)
{
mono_os_mutex_init_recursive (&jit_mutex);
#ifndef DISABLE_JIT
mono_counters_register ("Discarded method code", MONO_COUNTER_JIT | MONO_COUNTER_INT, &discarded_code);
mono_counters_register ("Time spent JITting discarded code", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &discarded_jit_time);
mono_counters_register ("Try holes memory size", MONO_COUNTER_JIT | MONO_COUNTER_INT, &jinfo_try_holes_size);
mono_counters_register ("JIT/method_to_ir", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &mono_jit_stats.jit_method_to_ir);
mono_counters_register ("JIT/liveness_handle_exception_clauses", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &mono_jit_stats.jit_liveness_handle_exception_clauses);
mono_counters_register ("JIT/handle_out_of_line_bblock", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &mono_jit_stats.jit_handle_out_of_line_bblock);
mono_counters_register ("JIT/decompose_long_opts", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &mono_jit_stats.jit_decompose_long_opts);
mono_counters_register ("JIT/decompose_typechecks", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &mono_jit_stats.jit_decompose_typechecks);
mono_counters_register ("JIT/local_cprop", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &mono_jit_stats.jit_local_cprop);
mono_counters_register ("JIT/local_emulate_ops", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &mono_jit_stats.jit_local_emulate_ops);
mono_counters_register ("JIT/optimize_branches", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &mono_jit_stats.jit_optimize_branches);
mono_counters_register ("JIT/handle_global_vregs", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &mono_jit_stats.jit_handle_global_vregs);
mono_counters_register ("JIT/local_deadce", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &mono_jit_stats.jit_local_deadce);
mono_counters_register ("JIT/local_alias_analysis", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &mono_jit_stats.jit_local_alias_analysis);
mono_counters_register ("JIT/if_conversion", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &mono_jit_stats.jit_if_conversion);
mono_counters_register ("JIT/bb_ordering", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &mono_jit_stats.jit_bb_ordering);
mono_counters_register ("JIT/compile_dominator_info", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &mono_jit_stats.jit_compile_dominator_info);
mono_counters_register ("JIT/compute_natural_loops", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &mono_jit_stats.jit_compute_natural_loops);
mono_counters_register ("JIT/insert_safepoints", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &mono_jit_stats.jit_insert_safepoints);
mono_counters_register ("JIT/ssa_compute", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &mono_jit_stats.jit_ssa_compute);
mono_counters_register ("JIT/ssa_cprop", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &mono_jit_stats.jit_ssa_cprop);
mono_counters_register ("JIT/ssa_deadce", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &mono_jit_stats.jit_ssa_deadce);
mono_counters_register ("JIT/perform_abc_removal", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &mono_jit_stats.jit_perform_abc_removal);
mono_counters_register ("JIT/ssa_remove", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &mono_jit_stats.jit_ssa_remove);
mono_counters_register ("JIT/local_cprop2", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &mono_jit_stats.jit_local_cprop2);
mono_counters_register ("JIT/handle_global_vregs2", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &mono_jit_stats.jit_handle_global_vregs2);
mono_counters_register ("JIT/local_deadce2", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &mono_jit_stats.jit_local_deadce2);
mono_counters_register ("JIT/optimize_branches2", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &mono_jit_stats.jit_optimize_branches2);
mono_counters_register ("JIT/decompose_vtype_opts", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &mono_jit_stats.jit_decompose_vtype_opts);
mono_counters_register ("JIT/decompose_array_access_opts", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &mono_jit_stats.jit_decompose_array_access_opts);
mono_counters_register ("JIT/liveness_handle_exception_clauses2", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &mono_jit_stats.jit_liveness_handle_exception_clauses2);
mono_counters_register ("JIT/analyze_liveness", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &mono_jit_stats.jit_analyze_liveness);
mono_counters_register ("JIT/linear_scan", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &mono_jit_stats.jit_linear_scan);
mono_counters_register ("JIT/arch_allocate_vars", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &mono_jit_stats.jit_arch_allocate_vars);
mono_counters_register ("JIT/spill_global_var", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &mono_jit_stats.jit_spill_global_vars);
mono_counters_register ("JIT/local_cprop3", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &mono_jit_stats.jit_local_cprop3);
mono_counters_register ("JIT/local_deadce3", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &mono_jit_stats.jit_local_deadce3);
mono_counters_register ("JIT/codegen", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &mono_jit_stats.jit_codegen);
mono_counters_register ("JIT/create_jit_info", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &mono_jit_stats.jit_create_jit_info);
mono_counters_register ("JIT/gc_create_gc_map", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &mono_jit_stats.jit_gc_create_gc_map);
mono_counters_register ("JIT/save_seq_point_info", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &mono_jit_stats.jit_save_seq_point_info);
mono_counters_register ("Total time spent JITting", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &mono_jit_stats.jit_time);
mono_counters_register ("Basic blocks", MONO_COUNTER_JIT | MONO_COUNTER_INT, &mono_jit_stats.basic_blocks);
mono_counters_register ("Max basic blocks", MONO_COUNTER_JIT | MONO_COUNTER_INT, &mono_jit_stats.max_basic_blocks);
mono_counters_register ("Allocated vars", MONO_COUNTER_JIT | MONO_COUNTER_INT, &mono_jit_stats.allocate_var);
mono_counters_register ("Code reallocs", MONO_COUNTER_JIT | MONO_COUNTER_INT, &mono_jit_stats.code_reallocs);
mono_counters_register ("Allocated code size", MONO_COUNTER_JIT | MONO_COUNTER_INT, &mono_jit_stats.allocated_code_size);
mono_counters_register ("Allocated seq points size", MONO_COUNTER_JIT | MONO_COUNTER_INT, &mono_jit_stats.allocated_seq_points_size);
mono_counters_register ("Inlineable methods", MONO_COUNTER_JIT | MONO_COUNTER_INT, &mono_jit_stats.inlineable_methods);
mono_counters_register ("Inlined methods", MONO_COUNTER_JIT | MONO_COUNTER_INT, &mono_jit_stats.inlined_methods);
mono_counters_register ("Regvars", MONO_COUNTER_JIT | MONO_COUNTER_INT, &mono_jit_stats.regvars);
mono_counters_register ("Locals stack size", MONO_COUNTER_JIT | MONO_COUNTER_INT, &mono_jit_stats.locals_stack_size);
mono_counters_register ("Method cache lookups", MONO_COUNTER_JIT | MONO_COUNTER_INT, &mono_jit_stats.methods_lookups);
mono_counters_register ("Compiled CIL code size", MONO_COUNTER_JIT | MONO_COUNTER_INT, &mono_jit_stats.cil_code_size);
mono_counters_register ("Native code size", MONO_COUNTER_JIT | MONO_COUNTER_INT, &mono_jit_stats.native_code_size);
mono_counters_register ("Aliases found", MONO_COUNTER_JIT | MONO_COUNTER_INT, &mono_jit_stats.alias_found);
mono_counters_register ("Aliases eliminated", MONO_COUNTER_JIT | MONO_COUNTER_INT, &mono_jit_stats.alias_removed);
mono_counters_register ("Aliased loads eliminated", MONO_COUNTER_JIT | MONO_COUNTER_INT, &mono_jit_stats.loads_eliminated);
mono_counters_register ("Aliased stores eliminated", MONO_COUNTER_JIT | MONO_COUNTER_INT, &mono_jit_stats.stores_eliminated);
mono_counters_register ("Optimized immediate divisions", MONO_COUNTER_JIT | MONO_COUNTER_INT, &mono_jit_stats.optimized_divisions);
current_backend = g_new0 (MonoBackend, 1);
init_backend (current_backend);
#endif
}
#ifndef ENABLE_LLVM
void
mono_llvm_emit_aot_file_info (MonoAotFileInfo *info, gboolean has_jitted_code)
{
g_assert_not_reached ();
}
gpointer
mono_llvm_emit_aot_data (const char *symbol, guint8 *data, int data_len)
{
g_assert_not_reached ();
}
gpointer
mono_llvm_emit_aot_data_aligned (const char *symbol, guint8 *data, int data_len, int align)
{
g_assert_not_reached ();
}
#endif
#if !defined(ENABLE_LLVM_RUNTIME) && !defined(ENABLE_LLVM)
void
mono_llvm_cpp_throw_exception (void)
{
g_assert_not_reached ();
}
void
mono_llvm_cpp_catch_exception (MonoLLVMInvokeCallback cb, gpointer arg, gboolean *out_thrown)
{
g_assert_not_reached ();
}
#endif
#ifdef DISABLE_JIT
MonoCompile*
mini_method_compile (MonoMethod *method, guint32 opts, JitFlags flags, int parts, int aot_method_index)
{
g_assert_not_reached ();
return NULL;
}
void
mono_destroy_compile (MonoCompile *cfg)
{
g_assert_not_reached ();
}
void
mono_add_patch_info (MonoCompile *cfg, int ip, MonoJumpInfoType type, gconstpointer target)
{
g_assert_not_reached ();
}
#else // DISABLE_JIT
guint8*
mini_realloc_code_slow (MonoCompile *cfg, int size)
{
const int EXTRA_CODE_SPACE = 16;
if (cfg->code_len + size > (cfg->code_size - EXTRA_CODE_SPACE)) {
while (cfg->code_len + size > (cfg->code_size - EXTRA_CODE_SPACE))
cfg->code_size = cfg->code_size * 2 + EXTRA_CODE_SPACE;
cfg->native_code = g_realloc (cfg->native_code, cfg->code_size);
cfg->stat_code_reallocs++;
}
return cfg->native_code + cfg->code_len;
}
#endif /* DISABLE_JIT */
gboolean
mini_class_is_system_array (MonoClass *klass)
{
return m_class_get_parent (klass) == mono_defaults.array_class;
}
/*
* mono_target_pagesize:
*
 * Query the page size used to determine whether an implicit null-reference check can be used.
*/
int
mono_target_pagesize (void)
{
/* We could query the system's pagesize via mono_pagesize (), however there
 * are pitfalls: sysconf (3) is called on some POSIX-like systems, and per
* POSIX.1-2008 this function doesn't have to be async-safe. Since this
* function can be called from a signal handler, we simplify things by
* using 4k on all targets. Implicit null-checks with an offset larger than
* 4k are _very_ uncommon, so we don't mind emitting an explicit null-check
* for those cases.
*/
return 4 * 1024;
}
MonoCPUFeatures
mini_get_cpu_features (MonoCompile* cfg)
{
MonoCPUFeatures features = (MonoCPUFeatures)0;
#if !defined(MONO_CROSS_COMPILE)
if (!cfg->compile_aot || cfg->use_current_cpu) {
// detect current CPU features if we are in JIT mode or AOT with use_current_cpu flag.
#if defined(ENABLE_LLVM)
features = mono_llvm_get_cpu_features (); // llvm has a nice built-in API to detect features
#elif defined(TARGET_AMD64) || defined(TARGET_X86)
features = mono_arch_get_cpu_features ();
#endif
}
#endif
#if defined(TARGET_ARM64)
// All Arm64 devices have this set
features |= MONO_CPU_ARM64_BASE;
// This is a standard part of ARMv8-A; see A1.5 in "ARM
// Architecture Reference Manual ARMv8, for ARMv8-A
// architecture profile"
features |= MONO_CPU_ARM64_NEON;
#endif
// apply parameters passed via -mattr
return (features | mono_cpu_features_enabled) & ~mono_cpu_features_disabled;
}
/**
* \file
* The new Mono code generator.
*
* Authors:
* Paolo Molaro ([email protected])
* Dietmar Maurer ([email protected])
*
* Copyright 2002-2003 Ximian, Inc.
* Copyright 2003-2010 Novell, Inc.
* Copyright 2011 Xamarin, Inc (http://www.xamarin.com)
* Licensed under the MIT license. See LICENSE file in the project root for full license information.
*/
#include <config.h>
#ifdef HAVE_ALLOCA_H
#include <alloca.h>
#endif
#ifdef HAVE_UNISTD_H
#include <unistd.h>
#endif
#include <math.h>
#ifdef HAVE_SYS_TIME_H
#include <sys/time.h>
#endif
#include <mono/utils/memcheck.h>
#include <mono/metadata/assembly.h>
#include <mono/metadata/loader.h>
#include <mono/metadata/tabledefs.h>
#include <mono/metadata/class.h>
#include <mono/metadata/object.h>
#include <mono/metadata/tokentype.h>
#include <mono/metadata/threads.h>
#include <mono/metadata/appdomain.h>
#include <mono/metadata/debug-helpers.h>
#include <mono/metadata/profiler-private.h>
#include <mono/metadata/mono-config.h>
#include <mono/metadata/environment.h>
#include <mono/metadata/mono-debug.h>
#include <mono/metadata/gc-internals.h>
#include <mono/metadata/threads-types.h>
#include <mono/metadata/verify.h>
#include <mono/metadata/mempool-internals.h>
#include <mono/metadata/runtime.h>
#include <mono/metadata/attrdefs.h>
#include <mono/utils/mono-math.h>
#include <mono/utils/mono-compiler.h>
#include <mono/utils/mono-counters.h>
#include <mono/utils/mono-error-internals.h>
#include <mono/utils/mono-logger-internals.h>
#include <mono/utils/mono-mmap.h>
#include <mono/utils/mono-path.h>
#include <mono/utils/mono-tls.h>
#include <mono/utils/mono-hwcap.h>
#include <mono/utils/dtrace.h>
#include <mono/utils/mono-threads.h>
#include <mono/utils/mono-threads-coop.h>
#include <mono/utils/unlocked.h>
#include <mono/utils/mono-time.h>
#include "mini.h"
#include "seq-points.h"
#include <string.h>
#include <ctype.h>
#include "trace.h"
#include "ir-emit.h"
#include "jit-icalls.h"
#include "mini-gc.h"
#include "llvm-runtime.h"
#include "mini-llvm.h"
#include "lldb.h"
#include "aot-runtime.h"
#include "mini-runtime.h"
MonoCallSpec *mono_jit_trace_calls;
MonoMethodDesc *mono_inject_async_exc_method;
int mono_inject_async_exc_pos;
MonoMethodDesc *mono_break_at_bb_method;
int mono_break_at_bb_bb_num;
gboolean mono_do_x86_stack_align = TRUE;
/* Counters */
static guint32 discarded_code;
static gint64 discarded_jit_time;
#define mono_jit_lock() mono_os_mutex_lock (&jit_mutex)
#define mono_jit_unlock() mono_os_mutex_unlock (&jit_mutex)
static mono_mutex_t jit_mutex;
#ifndef DISABLE_JIT
static guint32 jinfo_try_holes_size;
static MonoBackend *current_backend;
gpointer
mono_realloc_native_code (MonoCompile *cfg)
{
return g_realloc (cfg->native_code, cfg->code_size);
}
typedef struct {
MonoExceptionClause *clause;
MonoBasicBlock *basic_block;
int start_offset;
} TryBlockHole;
/**
* mono_emit_unwind_op:
*
 * Add an unwind op with the given parameters to the list of unwind ops stored in
 * cfg->unwind_ops.
*/
void
mono_emit_unwind_op (MonoCompile *cfg, int when, int tag, int reg, int val)
{
MonoUnwindOp *op = (MonoUnwindOp *)mono_mempool_alloc0 (cfg->mempool, sizeof (MonoUnwindOp));
op->op = tag;
op->reg = reg;
op->val = val;
op->when = when;
cfg->unwind_ops = g_slist_append_mempool (cfg->mempool, cfg->unwind_ops, op);
if (cfg->verbose_level > 1) {
switch (tag) {
case DW_CFA_def_cfa:
printf ("CFA: [%x] def_cfa: %s+0x%x\n", when, mono_arch_regname (reg), val);
break;
case DW_CFA_def_cfa_register:
printf ("CFA: [%x] def_cfa_reg: %s\n", when, mono_arch_regname (reg));
break;
case DW_CFA_def_cfa_offset:
printf ("CFA: [%x] def_cfa_offset: 0x%x\n", when, val);
break;
case DW_CFA_offset:
printf ("CFA: [%x] offset: %s at cfa-0x%x\n", when, mono_arch_regname (reg), -val);
break;
}
}
}
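/*
 * Illustrative call (the register and offset are made-up values): record that
 * at native offset `code - cfg->native_code` the CFA is computed as rbp + 16:
 *
 *   mono_emit_unwind_op (cfg, code - cfg->native_code, DW_CFA_def_cfa, AMD64_RBP, 16);
 */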
/**
* mono_unlink_bblock:
*
* Unlink two basic blocks.
*/
void
mono_unlink_bblock (MonoCompile *cfg, MonoBasicBlock *from, MonoBasicBlock* to)
{
int i, pos;
gboolean found;
found = FALSE;
for (i = 0; i < from->out_count; ++i) {
if (to == from->out_bb [i]) {
found = TRUE;
break;
}
}
if (found) {
pos = 0;
for (i = 0; i < from->out_count; ++i) {
if (from->out_bb [i] != to)
from->out_bb [pos ++] = from->out_bb [i];
}
g_assert (pos == from->out_count - 1);
from->out_count--;
}
found = FALSE;
for (i = 0; i < to->in_count; ++i) {
if (from == to->in_bb [i]) {
found = TRUE;
break;
}
}
if (found) {
pos = 0;
for (i = 0; i < to->in_count; ++i) {
if (to->in_bb [i] != from)
to->in_bb [pos ++] = to->in_bb [i];
}
g_assert (pos == to->in_count - 1);
to->in_count--;
}
}
/*
* mono_bblocks_linked:
*
 * Return whether BB1 and BB2 are linked in the CFG.
*/
gboolean
mono_bblocks_linked (MonoBasicBlock *bb1, MonoBasicBlock *bb2)
{
int i;
for (i = 0; i < bb1->out_count; ++i) {
if (bb1->out_bb [i] == bb2)
return TRUE;
}
return FALSE;
}
static int
mono_find_block_region_notry (MonoCompile *cfg, int offset)
{
MonoMethodHeader *header = cfg->header;
MonoExceptionClause *clause;
int i;
for (i = 0; i < header->num_clauses; ++i) {
clause = &header->clauses [i];
if ((clause->flags == MONO_EXCEPTION_CLAUSE_FILTER) && (offset >= clause->data.filter_offset) &&
(offset < (clause->handler_offset)))
return ((i + 1) << 8) | MONO_REGION_FILTER | clause->flags;
if (MONO_OFFSET_IN_HANDLER (clause, offset)) {
if (clause->flags == MONO_EXCEPTION_CLAUSE_FINALLY)
return ((i + 1) << 8) | MONO_REGION_FINALLY | clause->flags;
else if (clause->flags == MONO_EXCEPTION_CLAUSE_FAULT)
return ((i + 1) << 8) | MONO_REGION_FAULT | clause->flags;
else
return ((i + 1) << 8) | MONO_REGION_CATCH | clause->flags;
}
}
return -1;
}
/*
* mono_get_block_region_notry:
*
* Return the region corresponding to REGION, ignoring try clauses nested inside
* finally clauses.
*/
int
mono_get_block_region_notry (MonoCompile *cfg, int region)
{
if ((region & (0xf << 4)) == MONO_REGION_TRY) {
MonoMethodHeader *header = cfg->header;
/*
* This can happen if a try clause is nested inside a finally clause.
*/
int clause_index = (region >> 8) - 1;
g_assert (clause_index >= 0 && clause_index < header->num_clauses);
region = mono_find_block_region_notry (cfg, header->clauses [clause_index].try_offset);
}
return region;
}
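/*
 * Region encoding sketch, derived from mono_find_block_region_notry () above:
 * a region word packs the 1-based clause index into bits 8 and up, and the
 * region kind (MONO_REGION_*, high nibble) or-ed with the clause flags into
 * the low byte. E.g. for the first clause of a finally handler the region is
 * (1 << 8) | MONO_REGION_FINALLY | clause->flags.
 */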
MonoInst *
mono_find_spvar_for_region (MonoCompile *cfg, int region)
{
region = mono_get_block_region_notry (cfg, region);
return (MonoInst *)g_hash_table_lookup (cfg->spvars, GINT_TO_POINTER (region));
}
static void
df_visit (MonoBasicBlock *start, int *dfn, MonoBasicBlock **array)
{
int i;
array [*dfn] = start;
/* g_print ("visit %d at %p (BB%ld)\n", *dfn, start->cil_code, start->block_num); */
for (i = 0; i < start->out_count; ++i) {
if (start->out_bb [i]->dfn)
continue;
(*dfn)++;
start->out_bb [i]->dfn = *dfn;
start->out_bb [i]->df_parent = start;
array [*dfn] = start->out_bb [i];
df_visit (start->out_bb [i], dfn, array);
}
}
guint32
mono_reverse_branch_op (guint32 opcode)
{
static const int reverse_map [] = {
CEE_BNE_UN, CEE_BLT, CEE_BLE, CEE_BGT, CEE_BGE,
CEE_BEQ, CEE_BLT_UN, CEE_BLE_UN, CEE_BGT_UN, CEE_BGE_UN
};
static const int reverse_fmap [] = {
OP_FBNE_UN, OP_FBLT, OP_FBLE, OP_FBGT, OP_FBGE,
OP_FBEQ, OP_FBLT_UN, OP_FBLE_UN, OP_FBGT_UN, OP_FBGE_UN
};
static const int reverse_lmap [] = {
OP_LBNE_UN, OP_LBLT, OP_LBLE, OP_LBGT, OP_LBGE,
OP_LBEQ, OP_LBLT_UN, OP_LBLE_UN, OP_LBGT_UN, OP_LBGE_UN
};
static const int reverse_imap [] = {
OP_IBNE_UN, OP_IBLT, OP_IBLE, OP_IBGT, OP_IBGE,
OP_IBEQ, OP_IBLT_UN, OP_IBLE_UN, OP_IBGT_UN, OP_IBGE_UN
};
if (opcode >= CEE_BEQ && opcode <= CEE_BLT_UN) {
opcode = reverse_map [opcode - CEE_BEQ];
} else if (opcode >= OP_FBEQ && opcode <= OP_FBLT_UN) {
opcode = reverse_fmap [opcode - OP_FBEQ];
} else if (opcode >= OP_LBEQ && opcode <= OP_LBLT_UN) {
opcode = reverse_lmap [opcode - OP_LBEQ];
} else if (opcode >= OP_IBEQ && opcode <= OP_IBLT_UN) {
opcode = reverse_imap [opcode - OP_IBEQ];
} else
g_assert_not_reached ();
return opcode;
}
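/*
 * Example: mono_reverse_branch_op (CEE_BGE) == CEE_BLT, i.e. the branch that
 * is taken exactly when the original condition fails.
 */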
guint
mono_type_to_store_membase (MonoCompile *cfg, MonoType *type)
{
type = mini_get_underlying_type (type);
handle_enum:
switch (type->type) {
case MONO_TYPE_I1:
case MONO_TYPE_U1:
return OP_STOREI1_MEMBASE_REG;
case MONO_TYPE_I2:
case MONO_TYPE_U2:
return OP_STOREI2_MEMBASE_REG;
case MONO_TYPE_I4:
case MONO_TYPE_U4:
return OP_STOREI4_MEMBASE_REG;
case MONO_TYPE_I:
case MONO_TYPE_U:
case MONO_TYPE_PTR:
case MONO_TYPE_FNPTR:
return OP_STORE_MEMBASE_REG;
case MONO_TYPE_CLASS:
case MONO_TYPE_STRING:
case MONO_TYPE_OBJECT:
case MONO_TYPE_SZARRAY:
case MONO_TYPE_ARRAY:
return OP_STORE_MEMBASE_REG;
case MONO_TYPE_I8:
case MONO_TYPE_U8:
return OP_STOREI8_MEMBASE_REG;
case MONO_TYPE_R4:
return OP_STORER4_MEMBASE_REG;
case MONO_TYPE_R8:
return OP_STORER8_MEMBASE_REG;
case MONO_TYPE_VALUETYPE:
if (m_class_is_enumtype (type->data.klass)) {
type = mono_class_enum_basetype_internal (type->data.klass);
goto handle_enum;
}
if (MONO_CLASS_IS_SIMD (cfg, mono_class_from_mono_type_internal (type)))
return OP_STOREX_MEMBASE;
return OP_STOREV_MEMBASE;
case MONO_TYPE_TYPEDBYREF:
return OP_STOREV_MEMBASE;
case MONO_TYPE_GENERICINST:
if (MONO_CLASS_IS_SIMD (cfg, mono_class_from_mono_type_internal (type)))
return OP_STOREX_MEMBASE;
type = m_class_get_byval_arg (type->data.generic_class->container_class);
goto handle_enum;
case MONO_TYPE_VAR:
case MONO_TYPE_MVAR:
g_assert (mini_type_var_is_vt (type));
return OP_STOREV_MEMBASE;
default:
g_error ("unknown type 0x%02x in type_to_store_membase", type->type);
}
return -1;
}
guint
mono_type_to_load_membase (MonoCompile *cfg, MonoType *type)
{
type = mini_get_underlying_type (type);
switch (type->type) {
case MONO_TYPE_I1:
return OP_LOADI1_MEMBASE;
case MONO_TYPE_U1:
return OP_LOADU1_MEMBASE;
case MONO_TYPE_I2:
return OP_LOADI2_MEMBASE;
case MONO_TYPE_U2:
return OP_LOADU2_MEMBASE;
case MONO_TYPE_I4:
return OP_LOADI4_MEMBASE;
case MONO_TYPE_U4:
return OP_LOADU4_MEMBASE;
case MONO_TYPE_I:
case MONO_TYPE_U:
case MONO_TYPE_PTR:
case MONO_TYPE_FNPTR:
return OP_LOAD_MEMBASE;
case MONO_TYPE_CLASS:
case MONO_TYPE_STRING:
case MONO_TYPE_OBJECT:
case MONO_TYPE_SZARRAY:
case MONO_TYPE_ARRAY:
return OP_LOAD_MEMBASE;
case MONO_TYPE_I8:
case MONO_TYPE_U8:
return OP_LOADI8_MEMBASE;
case MONO_TYPE_R4:
return OP_LOADR4_MEMBASE;
case MONO_TYPE_R8:
return OP_LOADR8_MEMBASE;
case MONO_TYPE_VALUETYPE:
if (MONO_CLASS_IS_SIMD (cfg, mono_class_from_mono_type_internal (type)))
return OP_LOADX_MEMBASE;
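/* Fall through */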
case MONO_TYPE_TYPEDBYREF:
return OP_LOADV_MEMBASE;
case MONO_TYPE_GENERICINST:
if (MONO_CLASS_IS_SIMD (cfg, mono_class_from_mono_type_internal (type)))
return OP_LOADX_MEMBASE;
if (mono_type_generic_inst_is_valuetype (type))
return OP_LOADV_MEMBASE;
else
return OP_LOAD_MEMBASE;
break;
case MONO_TYPE_VAR:
case MONO_TYPE_MVAR:
g_assert (cfg->gshared);
g_assert (mini_type_var_is_vt (type));
return OP_LOADV_MEMBASE;
default:
g_error ("unknown type 0x%02x in type_to_load_membase", type->type);
}
return -1;
}
guint
mini_type_to_stind (MonoCompile* cfg, MonoType *type)
{
type = mini_get_underlying_type (type);
if (cfg->gshared && !m_type_is_byref (type) && (type->type == MONO_TYPE_VAR || type->type == MONO_TYPE_MVAR)) {
g_assert (mini_type_var_is_vt (type));
return CEE_STOBJ;
}
return mono_type_to_stind (type);
}
int
mono_op_imm_to_op (int opcode)
{
switch (opcode) {
case OP_ADD_IMM:
#if SIZEOF_REGISTER == 4
return OP_IADD;
#else
return OP_LADD;
#endif
case OP_IADD_IMM:
return OP_IADD;
case OP_LADD_IMM:
return OP_LADD;
case OP_ISUB_IMM:
return OP_ISUB;
case OP_LSUB_IMM:
return OP_LSUB;
case OP_IMUL_IMM:
return OP_IMUL;
case OP_LMUL_IMM:
return OP_LMUL;
case OP_AND_IMM:
#if SIZEOF_REGISTER == 4
return OP_IAND;
#else
return OP_LAND;
#endif
case OP_OR_IMM:
#if SIZEOF_REGISTER == 4
return OP_IOR;
#else
return OP_LOR;
#endif
case OP_XOR_IMM:
#if SIZEOF_REGISTER == 4
return OP_IXOR;
#else
return OP_LXOR;
#endif
case OP_IAND_IMM:
return OP_IAND;
case OP_LAND_IMM:
return OP_LAND;
case OP_IOR_IMM:
return OP_IOR;
case OP_LOR_IMM:
return OP_LOR;
case OP_IXOR_IMM:
return OP_IXOR;
case OP_LXOR_IMM:
return OP_LXOR;
case OP_ISHL_IMM:
return OP_ISHL;
case OP_LSHL_IMM:
return OP_LSHL;
case OP_ISHR_IMM:
return OP_ISHR;
case OP_LSHR_IMM:
return OP_LSHR;
case OP_ISHR_UN_IMM:
return OP_ISHR_UN;
case OP_LSHR_UN_IMM:
return OP_LSHR_UN;
case OP_IDIV_IMM:
return OP_IDIV;
case OP_LDIV_IMM:
return OP_LDIV;
case OP_IDIV_UN_IMM:
return OP_IDIV_UN;
case OP_LDIV_UN_IMM:
return OP_LDIV_UN;
case OP_IREM_UN_IMM:
return OP_IREM_UN;
case OP_LREM_UN_IMM:
return OP_LREM_UN;
case OP_IREM_IMM:
return OP_IREM;
case OP_LREM_IMM:
return OP_LREM;
case OP_DIV_IMM:
#if SIZEOF_REGISTER == 4
return OP_IDIV;
#else
return OP_LDIV;
#endif
case OP_REM_IMM:
#if SIZEOF_REGISTER == 4
return OP_IREM;
#else
return OP_LREM;
#endif
case OP_ADDCC_IMM:
return OP_ADDCC;
case OP_ADC_IMM:
return OP_ADC;
case OP_SUBCC_IMM:
return OP_SUBCC;
case OP_SBB_IMM:
return OP_SBB;
case OP_IADC_IMM:
return OP_IADC;
case OP_ISBB_IMM:
return OP_ISBB;
case OP_COMPARE_IMM:
return OP_COMPARE;
case OP_ICOMPARE_IMM:
return OP_ICOMPARE;
case OP_LOCALLOC_IMM:
return OP_LOCALLOC;
}
return -1;
}
/*
* mono_decompose_op_imm:
*
 * Replace the OP_.._IMM INS with its non-IMM variant.
*/
void
mono_decompose_op_imm (MonoCompile *cfg, MonoBasicBlock *bb, MonoInst *ins)
{
int opcode2 = mono_op_imm_to_op (ins->opcode);
MonoInst *temp;
guint32 dreg;
const char *spec = INS_INFO (ins->opcode);
if (spec [MONO_INST_SRC2] == 'l') {
dreg = mono_alloc_lreg (cfg);
/* Load the 64bit constant using decomposed ops */
MONO_INST_NEW (cfg, temp, OP_ICONST);
temp->inst_c0 = ins_get_l_low (ins);
temp->dreg = MONO_LVREG_LS (dreg);
mono_bblock_insert_before_ins (bb, ins, temp);
MONO_INST_NEW (cfg, temp, OP_ICONST);
temp->inst_c0 = ins_get_l_high (ins);
temp->dreg = MONO_LVREG_MS (dreg);
} else {
dreg = mono_alloc_ireg (cfg);
MONO_INST_NEW (cfg, temp, OP_ICONST);
temp->inst_c0 = ins->inst_imm;
temp->dreg = dreg;
}
mono_bblock_insert_before_ins (bb, ins, temp);
if (opcode2 == -1)
g_error ("mono_op_imm_to_op failed for %s\n", mono_inst_name (ins->opcode));
ins->opcode = opcode2;
if (ins->opcode == OP_LOCALLOC)
ins->sreg1 = dreg;
else
ins->sreg2 = dreg;
bb->max_vreg = MAX (bb->max_vreg, cfg->next_vreg);
}
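/*
 * Decomposition sketch (vreg numbers are illustrative): an instruction such as
 *
 *   OP_IADD_IMM R10 <- R9 [100]
 *
 * becomes
 *
 *   OP_ICONST R11 <- [100]
 *   OP_IADD   R10 <- R9 R11
 *
 * i.e. the immediate is first materialized into a fresh vreg.
 */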
static void
set_vreg_to_inst (MonoCompile *cfg, int vreg, MonoInst *inst)
{
if (vreg >= cfg->vreg_to_inst_len) {
MonoInst **tmp = cfg->vreg_to_inst;
int size = cfg->vreg_to_inst_len;
while (vreg >= cfg->vreg_to_inst_len)
cfg->vreg_to_inst_len = cfg->vreg_to_inst_len ? cfg->vreg_to_inst_len * 2 : 32;
cfg->vreg_to_inst = (MonoInst **)mono_mempool_alloc0 (cfg->mempool, sizeof (MonoInst*) * cfg->vreg_to_inst_len);
if (size)
memcpy (cfg->vreg_to_inst, tmp, size * sizeof (MonoInst*));
}
cfg->vreg_to_inst [vreg] = inst;
}
#define mono_type_is_long(type) (!m_type_is_byref (type) && ((mono_type_get_underlying_type (type)->type == MONO_TYPE_I8) || (mono_type_get_underlying_type (type)->type == MONO_TYPE_U8)))
#define mono_type_is_float(type) (!m_type_is_byref (type) && (((type)->type == MONO_TYPE_R8) || ((type)->type == MONO_TYPE_R4)))
MonoInst*
mono_compile_create_var_for_vreg (MonoCompile *cfg, MonoType *type, int opcode, int vreg)
{
MonoInst *inst;
int num = cfg->num_varinfo;
gboolean regpair;
type = mini_get_underlying_type (type);
if ((num + 1) >= cfg->varinfo_count) {
int orig_count = cfg->varinfo_count;
cfg->varinfo_count = cfg->varinfo_count ? (cfg->varinfo_count * 2) : 32;
cfg->varinfo = (MonoInst **)g_realloc (cfg->varinfo, sizeof (MonoInst*) * cfg->varinfo_count);
cfg->vars = (MonoMethodVar *)g_realloc (cfg->vars, sizeof (MonoMethodVar) * cfg->varinfo_count);
memset (&cfg->vars [orig_count], 0, (cfg->varinfo_count - orig_count) * sizeof (MonoMethodVar));
}
cfg->stat_allocate_var++;
MONO_INST_NEW (cfg, inst, opcode);
inst->inst_c0 = num;
inst->inst_vtype = type;
inst->klass = mono_class_from_mono_type_internal (type);
mini_type_to_eval_stack_type (cfg, type, inst);
/* if set to 1 the variable is native */
inst->backend.is_pinvoke = 0;
inst->dreg = vreg;
if (mono_class_has_failure (inst->klass))
mono_cfg_set_exception (cfg, MONO_EXCEPTION_TYPE_LOAD);
if (cfg->compute_gc_maps) {
if (m_type_is_byref (type)) {
mono_mark_vreg_as_mp (cfg, vreg);
} else {
if ((MONO_TYPE_ISSTRUCT (type) && m_class_has_references (inst->klass)) || mini_type_is_reference (type)) {
inst->flags |= MONO_INST_GC_TRACK;
mono_mark_vreg_as_ref (cfg, vreg);
}
}
}
#ifdef TARGET_WASM
if (mini_type_is_reference (type))
mono_mark_vreg_as_ref (cfg, vreg);
#endif
cfg->varinfo [num] = inst;
cfg->vars [num].idx = num;
cfg->vars [num].vreg = vreg;
cfg->vars [num].range.first_use.pos.bid = 0xffff;
cfg->vars [num].reg = -1;
if (vreg != -1)
set_vreg_to_inst (cfg, vreg, inst);
#if SIZEOF_REGISTER == 4
if (mono_arch_is_soft_float ()) {
regpair = mono_type_is_long (type) || mono_type_is_float (type);
} else {
regpair = mono_type_is_long (type);
}
#else
regpair = FALSE;
#endif
if (regpair) {
MonoInst *tree;
/*
 * These two cannot be allocated using create_var_for_vreg since that would
 * put them into the cfg->varinfo array, confusing many parts of the JIT.
*/
/*
* Set flags to VOLATILE so SSA skips it.
*/
if (cfg->verbose_level >= 4) {
printf (" Create LVAR R%d (R%d, R%d)\n", inst->dreg, MONO_LVREG_LS (inst->dreg), MONO_LVREG_MS (inst->dreg));
}
if (mono_arch_is_soft_float () && cfg->opt & MONO_OPT_SSA) {
if (mono_type_is_float (type))
inst->flags = MONO_INST_VOLATILE;
}
/* Allocate a dummy MonoInst for the first vreg */
MONO_INST_NEW (cfg, tree, OP_LOCAL);
tree->dreg = MONO_LVREG_LS (inst->dreg);
if (cfg->opt & MONO_OPT_SSA)
tree->flags = MONO_INST_VOLATILE;
tree->inst_c0 = num;
tree->type = STACK_I4;
tree->inst_vtype = mono_get_int32_type ();
tree->klass = mono_class_from_mono_type_internal (tree->inst_vtype);
set_vreg_to_inst (cfg, MONO_LVREG_LS (inst->dreg), tree);
/* Allocate a dummy MonoInst for the second vreg */
MONO_INST_NEW (cfg, tree, OP_LOCAL);
tree->dreg = MONO_LVREG_MS (inst->dreg);
if (cfg->opt & MONO_OPT_SSA)
tree->flags = MONO_INST_VOLATILE;
tree->inst_c0 = num;
tree->type = STACK_I4;
tree->inst_vtype = mono_get_int32_type ();
tree->klass = mono_class_from_mono_type_internal (tree->inst_vtype);
set_vreg_to_inst (cfg, MONO_LVREG_MS (inst->dreg), tree);
}
cfg->num_varinfo++;
if (cfg->verbose_level > 2)
g_print ("created temp %d (R%d) of type %s\n", num, vreg, mono_type_get_name (type));
return inst;
}
MonoInst*
mono_compile_create_var (MonoCompile *cfg, MonoType *type, int opcode)
{
int dreg;
if (type->type == MONO_TYPE_VALUETYPE && !m_type_is_byref (type)) {
MonoClass *klass = mono_class_from_mono_type_internal (type);
if (m_class_is_enumtype (klass) && m_class_get_image (klass) == mono_get_corlib () && !strcmp (m_class_get_name (klass), "StackCrawlMark")) {
if (!(cfg->method->flags & METHOD_ATTRIBUTE_REQSECOBJ))
g_error ("Method '%s' which contains a StackCrawlMark local variable must be decorated with [System.Security.DynamicSecurityMethod].", mono_method_get_full_name (cfg->method));
}
}
type = mini_get_underlying_type (type);
if (mono_type_is_long (type))
dreg = mono_alloc_dreg (cfg, STACK_I8);
else if (mono_arch_is_soft_float () && mono_type_is_float (type))
dreg = mono_alloc_dreg (cfg, STACK_R8);
else
/* All the others are unified */
dreg = mono_alloc_preg (cfg);
return mono_compile_create_var_for_vreg (cfg, type, opcode, dreg);
}
MonoInst*
mini_get_int_to_float_spill_area (MonoCompile *cfg)
{
#ifdef TARGET_X86
if (!cfg->iconv_raw_var) {
cfg->iconv_raw_var = mono_compile_create_var (cfg, mono_get_int32_type (), OP_LOCAL);
cfg->iconv_raw_var->flags |= MONO_INST_VOLATILE; /* FIXME: use the don't-regalloc flag */
}
return cfg->iconv_raw_var;
#else
return NULL;
#endif
}
void
mono_mark_vreg_as_ref (MonoCompile *cfg, int vreg)
{
if (vreg >= cfg->vreg_is_ref_len) {
gboolean *tmp = cfg->vreg_is_ref;
int size = cfg->vreg_is_ref_len;
while (vreg >= cfg->vreg_is_ref_len)
cfg->vreg_is_ref_len = cfg->vreg_is_ref_len ? cfg->vreg_is_ref_len * 2 : 32;
cfg->vreg_is_ref = (gboolean *)mono_mempool_alloc0 (cfg->mempool, sizeof (gboolean) * cfg->vreg_is_ref_len);
if (size)
memcpy (cfg->vreg_is_ref, tmp, size * sizeof (gboolean));
}
cfg->vreg_is_ref [vreg] = TRUE;
}
void
mono_mark_vreg_as_mp (MonoCompile *cfg, int vreg)
{
if (vreg >= cfg->vreg_is_mp_len) {
gboolean *tmp = cfg->vreg_is_mp;
int size = cfg->vreg_is_mp_len;
while (vreg >= cfg->vreg_is_mp_len)
cfg->vreg_is_mp_len = cfg->vreg_is_mp_len ? cfg->vreg_is_mp_len * 2 : 32;
cfg->vreg_is_mp = (gboolean *)mono_mempool_alloc0 (cfg->mempool, sizeof (gboolean) * cfg->vreg_is_mp_len);
if (size)
memcpy (cfg->vreg_is_mp, tmp, size * sizeof (gboolean));
}
cfg->vreg_is_mp [vreg] = TRUE;
}
static MonoType*
type_from_stack_type (MonoInst *ins)
{
switch (ins->type) {
case STACK_I4: return mono_get_int32_type ();
case STACK_I8: return m_class_get_byval_arg (mono_defaults.int64_class);
case STACK_PTR: return mono_get_int_type ();
case STACK_R8: return m_class_get_byval_arg (mono_defaults.double_class);
case STACK_MP:
/*
 * this "if" used to be commented out for no specific reason, but
 * commenting it out breaks #80235
*/
if (ins->klass)
return m_class_get_this_arg (ins->klass);
else
return mono_class_get_byref_type (mono_defaults.object_class);
case STACK_OBJ:
/* ins->klass may not be set for ldnull.
 * Also, if we have a boxed valuetype, we want an object class,
* not the valuetype class
*/
if (ins->klass && !m_class_is_valuetype (ins->klass))
return m_class_get_byval_arg (ins->klass);
return mono_get_object_type ();
case STACK_VTYPE: return m_class_get_byval_arg (ins->klass);
default:
g_error ("stack type %d to montype not handled\n", ins->type);
}
return NULL;
}
MonoType*
mono_type_from_stack_type (MonoInst *ins)
{
return type_from_stack_type (ins);
}
/*
* mono_add_ins_to_end:
*
* Same as MONO_ADD_INS, but add INST before any branches at the end of BB.
*/
void
mono_add_ins_to_end (MonoBasicBlock *bb, MonoInst *inst)
{
int opcode;
if (!bb->code) {
MONO_ADD_INS (bb, inst);
return;
}
switch (bb->last_ins->opcode) {
case OP_BR:
case OP_BR_REG:
case CEE_BEQ:
case CEE_BGE:
case CEE_BGT:
case CEE_BLE:
case CEE_BLT:
case CEE_BNE_UN:
case CEE_BGE_UN:
case CEE_BGT_UN:
case CEE_BLE_UN:
case CEE_BLT_UN:
case OP_SWITCH:
mono_bblock_insert_before_ins (bb, bb->last_ins, inst);
break;
default:
if (MONO_IS_COND_BRANCH_OP (bb->last_ins)) {
/* Need to insert the ins before the compare */
if (bb->code == bb->last_ins) {
mono_bblock_insert_before_ins (bb, bb->last_ins, inst);
return;
}
if (bb->code->next == bb->last_ins) {
/* Only two instructions */
opcode = bb->code->opcode;
if ((opcode == OP_COMPARE) || (opcode == OP_COMPARE_IMM) || (opcode == OP_ICOMPARE) || (opcode == OP_ICOMPARE_IMM) || (opcode == OP_FCOMPARE) || (opcode == OP_LCOMPARE) || (opcode == OP_LCOMPARE_IMM) || (opcode == OP_RCOMPARE)) {
/* NEW IR */
mono_bblock_insert_before_ins (bb, bb->code, inst);
} else {
mono_bblock_insert_before_ins (bb, bb->last_ins, inst);
}
} else {
opcode = bb->last_ins->prev->opcode;
if ((opcode == OP_COMPARE) || (opcode == OP_COMPARE_IMM) || (opcode == OP_ICOMPARE) || (opcode == OP_ICOMPARE_IMM) || (opcode == OP_FCOMPARE) || (opcode == OP_LCOMPARE) || (opcode == OP_LCOMPARE_IMM) || (opcode == OP_RCOMPARE)) {
/* NEW IR */
mono_bblock_insert_before_ins (bb, bb->last_ins->prev, inst);
} else {
mono_bblock_insert_before_ins (bb, bb->last_ins, inst);
}
}
}
else
MONO_ADD_INS (bb, inst);
break;
}
}
void
mono_create_jump_table (MonoCompile *cfg, MonoInst *label, MonoBasicBlock **bbs, int num_blocks)
{
MonoJumpInfo *ji = (MonoJumpInfo *)mono_mempool_alloc (cfg->mempool, sizeof (MonoJumpInfo));
MonoJumpInfoBBTable *table;
table = (MonoJumpInfoBBTable *)mono_mempool_alloc (cfg->mempool, sizeof (MonoJumpInfoBBTable));
table->table = bbs;
table->table_size = num_blocks;
ji->ip.label = label;
ji->type = MONO_PATCH_INFO_SWITCH;
ji->data.table = table;
ji->next = cfg->patch_info;
cfg->patch_info = ji;
}
typedef struct {
MonoClass *vtype;
GList *active, *inactive;
GSList *slots;
} StackSlotInfo;
static gint
compare_by_interval_start_pos_func (gconstpointer a, gconstpointer b)
{
MonoMethodVar *v1 = (MonoMethodVar*)a;
MonoMethodVar *v2 = (MonoMethodVar*)b;
if (v1 == v2)
return 0;
else if (v1->interval->range && v2->interval->range)
return v1->interval->range->from - v2->interval->range->from;
else if (v1->interval->range)
return -1;
else
return 1;
}
#if 0
#define LSCAN_DEBUG(a) do { a; } while (0)
#else
#define LSCAN_DEBUG(a) do { } while (0) /* non-empty to avoid warning */
#endif
static gint32*
mono_allocate_stack_slots2 (MonoCompile *cfg, gboolean backward, guint32 *stack_size, guint32 *stack_align)
{
int i, slot, offset, size;
guint32 align;
MonoMethodVar *vmv;
MonoInst *inst;
gint32 *offsets;
GList *vars = NULL, *l, *unhandled;
StackSlotInfo *scalar_stack_slots, *vtype_stack_slots, *slot_info;
MonoType *t;
int nvtypes;
int vtype_stack_slots_size = 256;
gboolean reuse_slot;
LSCAN_DEBUG (printf ("Allocate Stack Slots 2 for %s:\n", mono_method_full_name (cfg->method, TRUE)));
scalar_stack_slots = (StackSlotInfo *)mono_mempool_alloc0 (cfg->mempool, sizeof (StackSlotInfo) * MONO_TYPE_PINNED);
vtype_stack_slots = NULL;
nvtypes = 0;
offsets = (gint32 *)mono_mempool_alloc (cfg->mempool, sizeof (gint32) * cfg->num_varinfo);
for (i = 0; i < cfg->num_varinfo; ++i)
offsets [i] = -1;
for (i = cfg->locals_start; i < cfg->num_varinfo; i++) {
inst = cfg->varinfo [i];
vmv = MONO_VARINFO (cfg, i);
if ((inst->flags & MONO_INST_IS_DEAD) || inst->opcode == OP_REGVAR || inst->opcode == OP_REGOFFSET)
continue;
vars = g_list_prepend (vars, vmv);
}
vars = g_list_sort (vars, compare_by_interval_start_pos_func);
/* Sanity check */
/*
i = 0;
for (unhandled = vars; unhandled; unhandled = unhandled->next) {
MonoMethodVar *current = unhandled->data;
if (current->interval->range) {
g_assert (current->interval->range->from >= i);
i = current->interval->range->from;
}
}
*/
offset = 0;
*stack_align = 0;
for (unhandled = vars; unhandled; unhandled = unhandled->next) {
MonoMethodVar *current = (MonoMethodVar *)unhandled->data;
vmv = current;
inst = cfg->varinfo [vmv->idx];
t = mono_type_get_underlying_type (inst->inst_vtype);
if (cfg->gsharedvt && mini_is_gsharedvt_variable_type (t))
continue;
/* inst->backend.is_pinvoke indicates native-sized value types; this is used by the
 * pinvoke wrappers when they call functions returning structures */
if (inst->backend.is_pinvoke && MONO_TYPE_ISSTRUCT (t) && t->type != MONO_TYPE_TYPEDBYREF) {
size = mono_class_native_size (mono_class_from_mono_type_internal (t), &align);
}
else {
int ialign;
size = mini_type_stack_size (t, &ialign);
align = ialign;
if (MONO_CLASS_IS_SIMD (cfg, mono_class_from_mono_type_internal (t)))
align = 16;
}
reuse_slot = TRUE;
if (cfg->disable_reuse_stack_slots)
reuse_slot = FALSE;
t = mini_get_underlying_type (t);
switch (t->type) {
case MONO_TYPE_GENERICINST:
if (!mono_type_generic_inst_is_valuetype (t)) {
slot_info = &scalar_stack_slots [t->type];
break;
}
/* Fall through */
case MONO_TYPE_VALUETYPE:
if (!vtype_stack_slots)
vtype_stack_slots = (StackSlotInfo *)mono_mempool_alloc0 (cfg->mempool, sizeof (StackSlotInfo) * vtype_stack_slots_size);
for (i = 0; i < nvtypes; ++i)
if (t->data.klass == vtype_stack_slots [i].vtype)
break;
if (i < nvtypes)
slot_info = &vtype_stack_slots [i];
else {
if (nvtypes == vtype_stack_slots_size) {
int new_slots_size = vtype_stack_slots_size * 2;
StackSlotInfo* new_slots = (StackSlotInfo *)mono_mempool_alloc0 (cfg->mempool, sizeof (StackSlotInfo) * new_slots_size);
memcpy (new_slots, vtype_stack_slots, sizeof (StackSlotInfo) * vtype_stack_slots_size);
vtype_stack_slots = new_slots;
vtype_stack_slots_size = new_slots_size;
}
vtype_stack_slots [nvtypes].vtype = t->data.klass;
slot_info = &vtype_stack_slots [nvtypes];
nvtypes ++;
}
if (cfg->disable_reuse_ref_stack_slots)
reuse_slot = FALSE;
break;
case MONO_TYPE_PTR:
case MONO_TYPE_I:
case MONO_TYPE_U:
#if TARGET_SIZEOF_VOID_P == 4
case MONO_TYPE_I4:
#else
case MONO_TYPE_I8:
#endif
if (cfg->disable_ref_noref_stack_slot_share) {
slot_info = &scalar_stack_slots [MONO_TYPE_I];
break;
}
/* Fall through */
case MONO_TYPE_CLASS:
case MONO_TYPE_OBJECT:
case MONO_TYPE_ARRAY:
case MONO_TYPE_SZARRAY:
case MONO_TYPE_STRING:
/* Share non-float stack slots of the same size */
slot_info = &scalar_stack_slots [MONO_TYPE_CLASS];
if (cfg->disable_reuse_ref_stack_slots)
reuse_slot = FALSE;
break;
default:
slot_info = &scalar_stack_slots [t->type];
}
slot = 0xffffff;
if (cfg->comp_done & MONO_COMP_LIVENESS) {
int pos;
gboolean changed;
//printf ("START %2d %08x %08x\n", vmv->idx, vmv->range.first_use.abs_pos, vmv->range.last_use.abs_pos);
if (!current->interval->range) {
if (inst->flags & (MONO_INST_VOLATILE|MONO_INST_INDIRECT))
pos = ~0;
else {
/* Dead */
inst->flags |= MONO_INST_IS_DEAD;
continue;
}
}
else
pos = current->interval->range->from;
LSCAN_DEBUG (printf ("process R%d ", inst->dreg));
if (current->interval->range)
LSCAN_DEBUG (mono_linterval_print (current->interval));
LSCAN_DEBUG (printf ("\n"));
/* Check for intervals in active which expired or inactive */
changed = TRUE;
/* FIXME: Optimize this */
while (changed) {
changed = FALSE;
for (l = slot_info->active; l != NULL; l = l->next) {
MonoMethodVar *v = (MonoMethodVar*)l->data;
if (v->interval->last_range->to < pos) {
slot_info->active = g_list_delete_link (slot_info->active, l);
slot_info->slots = g_slist_prepend_mempool (cfg->mempool, slot_info->slots, GINT_TO_POINTER (offsets [v->idx]));
LSCAN_DEBUG (printf ("Interval R%d has expired, adding 0x%x to slots\n", cfg->varinfo [v->idx]->dreg, offsets [v->idx]));
changed = TRUE;
break;
}
else if (!mono_linterval_covers (v->interval, pos)) {
slot_info->inactive = g_list_append (slot_info->inactive, v);
slot_info->active = g_list_delete_link (slot_info->active, l);
LSCAN_DEBUG (printf ("Interval R%d became inactive\n", cfg->varinfo [v->idx]->dreg));
changed = TRUE;
break;
}
}
}
/* Check for intervals in inactive which expired or active */
changed = TRUE;
/* FIXME: Optimize this */
while (changed) {
changed = FALSE;
for (l = slot_info->inactive; l != NULL; l = l->next) {
MonoMethodVar *v = (MonoMethodVar*)l->data;
if (v->interval->last_range->to < pos) {
slot_info->inactive = g_list_delete_link (slot_info->inactive, l);
// FIXME: Enabling this seems to cause impossible-to-debug crashes
//slot_info->slots = g_slist_prepend_mempool (cfg->mempool, slot_info->slots, GINT_TO_POINTER (offsets [v->idx]));
LSCAN_DEBUG (printf ("Interval R%d has expired, adding 0x%x to slots\n", cfg->varinfo [v->idx]->dreg, offsets [v->idx]));
changed = TRUE;
break;
}
else if (mono_linterval_covers (v->interval, pos)) {
slot_info->active = g_list_append (slot_info->active, v);
slot_info->inactive = g_list_delete_link (slot_info->inactive, l);
LSCAN_DEBUG (printf ("\tInterval R%d became active\n", cfg->varinfo [v->idx]->dreg));
changed = TRUE;
break;
}
}
}
/*
* This also handles the case when the variable is used in an
* exception region, as liveness info is not computed there.
*/
/*
* FIXME: All valuetypes are marked as INDIRECT because of LDADDR
* opcodes.
*/
if (! (inst->flags & (MONO_INST_VOLATILE|MONO_INST_INDIRECT))) {
if (slot_info->slots) {
slot = GPOINTER_TO_INT (slot_info->slots->data);
slot_info->slots = slot_info->slots->next;
}
/* FIXME: We might want to consider the inactive intervals as well if slot_info->slots is empty */
slot_info->active = mono_varlist_insert_sorted (cfg, slot_info->active, vmv, TRUE);
}
}
#if 0
{
static int count = 0;
count ++;
if (count == atoi (g_getenv ("COUNT3")))
printf ("LAST: %s\n", mono_method_full_name (cfg->method, TRUE));
if (count > atoi (g_getenv ("COUNT3")))
slot = 0xffffff;
else
mono_print_ins (inst);
}
#endif
LSCAN_DEBUG (printf ("R%d %s -> 0x%x\n", inst->dreg, mono_type_full_name (t), slot));
if (inst->flags & MONO_INST_LMF) {
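/*
 * This variable represents a MonoLMF structure, which has no corresponding
 * CLR type, so hard-code its size/alignment (as mono_allocate_stack_slots ()
 * below also does).
 */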
size = MONO_ABI_SIZEOF (MonoLMF);
align = sizeof (target_mgreg_t);
reuse_slot = FALSE;
}
if (!reuse_slot)
slot = 0xffffff;
if (slot == 0xffffff) {
/*
 * Always align valuetypes to at least sizeof (target_mgreg_t) to allow more
 * efficient copying (and to work around the fact that OP_MEMCPY
 * and OP_MEMSET ignore alignment).
*/
if (MONO_TYPE_ISSTRUCT (t)) {
align = MAX (align, sizeof (target_mgreg_t));
align = MAX (align, mono_class_min_align (mono_class_from_mono_type_internal (t)));
}
if (backward) {
offset += size;
offset += align - 1;
offset &= ~(align - 1);
slot = offset;
}
else {
offset += align - 1;
offset &= ~(align - 1);
slot = offset;
offset += size;
}
if (*stack_align == 0)
*stack_align = align;
}
offsets [vmv->idx] = slot;
}
g_list_free (vars);
for (i = 0; i < MONO_TYPE_PINNED; ++i) {
if (scalar_stack_slots [i].active)
g_list_free (scalar_stack_slots [i].active);
}
for (i = 0; i < nvtypes; ++i) {
if (vtype_stack_slots [i].active)
g_list_free (vtype_stack_slots [i].active);
}
cfg->stat_locals_stack_size += offset;
*stack_size = offset;
return offsets;
}
/*
* mono_allocate_stack_slots:
*
 * Allocate stack slots for all non-register-allocated variables using a
 * linear scan algorithm.
* Returns: an array of stack offsets.
* STACK_SIZE is set to the amount of stack space needed.
* STACK_ALIGN is set to the alignment needed by the locals area.
*/
gint32*
mono_allocate_stack_slots (MonoCompile *cfg, gboolean backward, guint32 *stack_size, guint32 *stack_align)
{
int i, slot, offset, size;
guint32 align;
MonoMethodVar *vmv;
MonoInst *inst;
gint32 *offsets;
GList *vars = NULL, *l;
StackSlotInfo *scalar_stack_slots, *vtype_stack_slots, *slot_info;
MonoType *t;
int nvtypes;
int vtype_stack_slots_size = 256;
gboolean reuse_slot;
if ((cfg->num_varinfo > 0) && MONO_VARINFO (cfg, 0)->interval)
return mono_allocate_stack_slots2 (cfg, backward, stack_size, stack_align);
scalar_stack_slots = (StackSlotInfo *)mono_mempool_alloc0 (cfg->mempool, sizeof (StackSlotInfo) * MONO_TYPE_PINNED);
vtype_stack_slots = NULL;
nvtypes = 0;
offsets = (gint32 *)mono_mempool_alloc (cfg->mempool, sizeof (gint32) * cfg->num_varinfo);
for (i = 0; i < cfg->num_varinfo; ++i)
offsets [i] = -1;
for (i = cfg->locals_start; i < cfg->num_varinfo; i++) {
inst = cfg->varinfo [i];
vmv = MONO_VARINFO (cfg, i);
if ((inst->flags & MONO_INST_IS_DEAD) || inst->opcode == OP_REGVAR || inst->opcode == OP_REGOFFSET)
continue;
vars = g_list_prepend (vars, vmv);
}
vars = mono_varlist_sort (cfg, vars, 0);
offset = 0;
*stack_align = sizeof (target_mgreg_t);
for (l = vars; l; l = l->next) {
vmv = (MonoMethodVar *)l->data;
inst = cfg->varinfo [vmv->idx];
t = mono_type_get_underlying_type (inst->inst_vtype);
if (cfg->gsharedvt && mini_is_gsharedvt_variable_type (t))
continue;
/* inst->backend.is_pinvoke indicates native-sized value types; this is used by the
 * pinvoke wrappers when they call functions returning structures */
if (inst->backend.is_pinvoke && MONO_TYPE_ISSTRUCT (t) && t->type != MONO_TYPE_TYPEDBYREF) {
size = mono_class_native_size (mono_class_from_mono_type_internal (t), &align);
} else {
int ialign;
size = mini_type_stack_size (t, &ialign);
align = ialign;
if (mono_class_has_failure (mono_class_from_mono_type_internal (t)))
mono_cfg_set_exception (cfg, MONO_EXCEPTION_TYPE_LOAD);
if (MONO_CLASS_IS_SIMD (cfg, mono_class_from_mono_type_internal (t)))
align = 16;
}
reuse_slot = TRUE;
if (cfg->disable_reuse_stack_slots)
reuse_slot = FALSE;
t = mini_get_underlying_type (t);
switch (t->type) {
case MONO_TYPE_GENERICINST:
if (!mono_type_generic_inst_is_valuetype (t)) {
slot_info = &scalar_stack_slots [t->type];
break;
}
/* Fall through */
case MONO_TYPE_VALUETYPE:
if (!vtype_stack_slots)
vtype_stack_slots = (StackSlotInfo *)mono_mempool_alloc0 (cfg->mempool, sizeof (StackSlotInfo) * vtype_stack_slots_size);
for (i = 0; i < nvtypes; ++i)
if (t->data.klass == vtype_stack_slots [i].vtype)
break;
if (i < nvtypes)
slot_info = &vtype_stack_slots [i];
else {
if (nvtypes == vtype_stack_slots_size) {
int new_slots_size = vtype_stack_slots_size * 2;
StackSlotInfo* new_slots = (StackSlotInfo *)mono_mempool_alloc0 (cfg->mempool, sizeof (StackSlotInfo) * new_slots_size);
memcpy (new_slots, vtype_stack_slots, sizeof (StackSlotInfo) * vtype_stack_slots_size);
vtype_stack_slots = new_slots;
vtype_stack_slots_size = new_slots_size;
}
vtype_stack_slots [nvtypes].vtype = t->data.klass;
slot_info = &vtype_stack_slots [nvtypes];
nvtypes ++;
}
if (cfg->disable_reuse_ref_stack_slots)
reuse_slot = FALSE;
break;
case MONO_TYPE_PTR:
case MONO_TYPE_I:
case MONO_TYPE_U:
#if TARGET_SIZEOF_VOID_P == 4
case MONO_TYPE_I4:
#else
case MONO_TYPE_I8:
#endif
if (cfg->disable_ref_noref_stack_slot_share) {
slot_info = &scalar_stack_slots [MONO_TYPE_I];
break;
}
/* Fall through */
case MONO_TYPE_CLASS:
case MONO_TYPE_OBJECT:
case MONO_TYPE_ARRAY:
case MONO_TYPE_SZARRAY:
case MONO_TYPE_STRING:
/* Share non-float stack slots of the same size */
slot_info = &scalar_stack_slots [MONO_TYPE_CLASS];
if (cfg->disable_reuse_ref_stack_slots)
reuse_slot = FALSE;
break;
case MONO_TYPE_VAR:
case MONO_TYPE_MVAR:
slot_info = &scalar_stack_slots [t->type];
break;
default:
slot_info = &scalar_stack_slots [t->type];
break;
}
slot = 0xffffff;
if (cfg->comp_done & MONO_COMP_LIVENESS) {
//printf ("START %2d %08x %08x\n", vmv->idx, vmv->range.first_use.abs_pos, vmv->range.last_use.abs_pos);
/* expire old intervals in active */
while (slot_info->active) {
MonoMethodVar *amv = (MonoMethodVar *)slot_info->active->data;
if (amv->range.last_use.abs_pos > vmv->range.first_use.abs_pos)
break;
//printf ("EXPIR %2d %08x %08x C%d R%d\n", amv->idx, amv->range.first_use.abs_pos, amv->range.last_use.abs_pos, amv->spill_costs, amv->reg);
slot_info->active = g_list_delete_link (slot_info->active, slot_info->active);
slot_info->slots = g_slist_prepend_mempool (cfg->mempool, slot_info->slots, GINT_TO_POINTER (offsets [amv->idx]));
}
/*
* This also handles the case when the variable is used in an
* exception region, as liveness info is not computed there.
*/
/*
* FIXME: All valuetypes are marked as INDIRECT because of LDADDR
* opcodes.
*/
if (! (inst->flags & (MONO_INST_VOLATILE|MONO_INST_INDIRECT))) {
if (slot_info->slots) {
slot = GPOINTER_TO_INT (slot_info->slots->data);
slot_info->slots = slot_info->slots->next;
}
slot_info->active = mono_varlist_insert_sorted (cfg, slot_info->active, vmv, TRUE);
}
}
#if 0
{
static int count = 0;
count ++;
if (count == atoi (g_getenv ("COUNT")))
printf ("LAST: %s\n", mono_method_full_name (cfg->method, TRUE));
if (count > atoi (g_getenv ("COUNT")))
slot = 0xffffff;
else
mono_print_ins (inst);
}
#endif
if (inst->flags & MONO_INST_LMF) {
/*
* This variable represents a MonoLMF structure, which has no corresponding
* CLR type, so hard-code its size/alignment.
*/
size = MONO_ABI_SIZEOF (MonoLMF);
align = sizeof (target_mgreg_t);
reuse_slot = FALSE;
}
if (!reuse_slot)
slot = 0xffffff;
if (slot == 0xffffff) {
/*
 * Always align valuetypes to at least sizeof (target_mgreg_t) to allow more
 * efficient copying (and to work around the fact that OP_MEMCPY
 * and OP_MEMSET ignore alignment).
*/
if (MONO_TYPE_ISSTRUCT (t)) {
align = MAX (align, sizeof (target_mgreg_t));
align = MAX (align, mono_class_min_align (mono_class_from_mono_type_internal (t)));
/*
* Align the size too so the code generated for passing vtypes in
* registers doesn't overwrite random locals.
*/
size = (size + (align - 1)) & ~(align -1);
}
if (backward) {
offset += size;
offset += align - 1;
offset &= ~(align - 1);
slot = offset;
}
else {
offset += align - 1;
offset &= ~(align - 1);
slot = offset;
offset += size;
}
*stack_align = MAX (*stack_align, align);
}
offsets [vmv->idx] = slot;
}
g_list_free (vars);
for (i = 0; i < MONO_TYPE_PINNED; ++i) {
if (scalar_stack_slots [i].active)
g_list_free (scalar_stack_slots [i].active);
}
for (i = 0; i < nvtypes; ++i) {
if (vtype_stack_slots [i].active)
g_list_free (vtype_stack_slots [i].active);
}
cfg->stat_locals_stack_size += offset;
*stack_size = offset;
return offsets;
}
#define EMUL_HIT_SHIFT 3
#define EMUL_HIT_MASK ((1 << EMUL_HIT_SHIFT) - 1)
/* small hit bitmap cache */
static mono_byte emul_opcode_hit_cache [(OP_LAST>>EMUL_HIT_SHIFT) + 1] = {0};
static short emul_opcode_num = 0;
static short emul_opcode_alloced = 0;
static short *emul_opcode_opcodes;
static MonoJitICallInfo **emul_opcode_map;
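/*
 * mono_find_jit_opcode_emulation:
 *
 *   Return the emulation icall registered for OPCODE, or NULL. The hit bitmap is a
 * conservative filter: opcodes can share bits, so a set bit may be a false positive
 * which the linear scan below rejects, while a clear bit reliably means no emulation
 * is registered.
 */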
MonoJitICallInfo *
mono_find_jit_opcode_emulation (int opcode)
{
g_assert (opcode >= 0 && opcode <= OP_LAST);
if (emul_opcode_hit_cache [opcode >> (EMUL_HIT_SHIFT + 3)] & (1 << (opcode & EMUL_HIT_MASK))) {
int i;
for (i = 0; i < emul_opcode_num; ++i) {
if (emul_opcode_opcodes [i] == opcode)
return emul_opcode_map [i];
}
}
return NULL;
}
void
mini_register_opcode_emulation (int opcode, MonoJitICallInfo *info, const char *name, MonoMethodSignature *sig, gpointer func, const char *symbol, gboolean no_wrapper)
{
g_assert (info);
g_assert (!sig->hasthis);
g_assert (sig->param_count < 3);
mono_register_jit_icall_info (info, func, name, sig, no_wrapper, symbol);
if (emul_opcode_num >= emul_opcode_alloced) {
int incr = emul_opcode_alloced? emul_opcode_alloced/2: 16;
emul_opcode_alloced += incr;
emul_opcode_map = (MonoJitICallInfo **)g_realloc (emul_opcode_map, sizeof (emul_opcode_map [0]) * emul_opcode_alloced);
emul_opcode_opcodes = (short *)g_realloc (emul_opcode_opcodes, sizeof (emul_opcode_opcodes [0]) * emul_opcode_alloced);
}
emul_opcode_map [emul_opcode_num] = info;
emul_opcode_opcodes [emul_opcode_num] = opcode;
emul_opcode_num++;
emul_opcode_hit_cache [opcode >> (EMUL_HIT_SHIFT + 3)] |= (1 << (opcode & EMUL_HIT_MASK));
}
static void
print_dfn (MonoCompile *cfg)
{
int i, j;
char *code;
MonoBasicBlock *bb;
MonoInst *c;
{
char *method_name = mono_method_full_name (cfg->method, TRUE);
g_print ("IR code for method %s\n", method_name);
g_free (method_name);
}
for (i = 0; i < cfg->num_bblocks; ++i) {
bb = cfg->bblocks [i];
/*if (bb->cil_code) {
char* code1, *code2;
code1 = mono_disasm_code_one (NULL, cfg->method, bb->cil_code, NULL);
if (bb->last_ins->cil_code)
code2 = mono_disasm_code_one (NULL, cfg->method, bb->last_ins->cil_code, NULL);
else
code2 = g_strdup ("");
code1 [strlen (code1) - 1] = 0;
code = g_strdup_printf ("%s -> %s", code1, code2);
g_free (code1);
g_free (code2);
} else*/
code = g_strdup ("\n");
g_print ("\nBB%d (%d) (len: %d): %s", bb->block_num, i, bb->cil_length, code);
MONO_BB_FOR_EACH_INS (bb, c) {
mono_print_ins_index (-1, c);
}
g_print ("\tprev:");
for (j = 0; j < bb->in_count; ++j) {
g_print (" BB%d", bb->in_bb [j]->block_num);
}
g_print ("\t\tsucc:");
for (j = 0; j < bb->out_count; ++j) {
g_print (" BB%d", bb->out_bb [j]->block_num);
}
g_print ("\n\tidom: BB%d\n", bb->idom? bb->idom->block_num: -1);
if (bb->idom)
g_assert (mono_bitset_test_fast (bb->dominators, bb->idom->dfn));
if (bb->dominators)
mono_blockset_print (cfg, bb->dominators, "\tdominators", bb->idom? bb->idom->dfn: -1);
if (bb->dfrontier)
mono_blockset_print (cfg, bb->dfrontier, "\tdfrontier", -1);
g_free (code);
}
g_print ("\n");
}
void
mono_bblock_add_inst (MonoBasicBlock *bb, MonoInst *inst)
{
MONO_ADD_INS (bb, inst);
}
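/*
 * mono_bblock_insert_after_ins:
 *
 *   Insert INS_TO_INSERT after INS in BB's doubly linked instruction list. If INS is
 * NULL, INS_TO_INSERT becomes the first instruction of BB.
 */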
void
mono_bblock_insert_after_ins (MonoBasicBlock *bb, MonoInst *ins, MonoInst *ins_to_insert)
{
if (ins == NULL) {
ins = bb->code;
bb->code = ins_to_insert;
/* Link with next */
ins_to_insert->next = ins;
if (ins)
ins->prev = ins_to_insert;
if (bb->last_ins == NULL)
bb->last_ins = ins_to_insert;
} else {
/* Link with next */
ins_to_insert->next = ins->next;
if (ins->next)
ins->next->prev = ins_to_insert;
/* Link with previous */
ins->next = ins_to_insert;
ins_to_insert->prev = ins;
if (bb->last_ins == ins)
bb->last_ins = ins_to_insert;
}
}
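/*
 * mono_bblock_insert_before_ins:
 *
 *   Insert INS_TO_INSERT before INS in BB. If INS is NULL, INS_TO_INSERT becomes the
 * first instruction of BB.
 */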
void
mono_bblock_insert_before_ins (MonoBasicBlock *bb, MonoInst *ins, MonoInst *ins_to_insert)
{
if (ins == NULL) {
ins = bb->code;
if (ins)
ins->prev = ins_to_insert;
bb->code = ins_to_insert;
ins_to_insert->next = ins;
if (bb->last_ins == NULL)
bb->last_ins = ins_to_insert;
} else {
/* Link with previous */
if (ins->prev)
ins->prev->next = ins_to_insert;
ins_to_insert->prev = ins->prev;
/* Link with next */
ins->prev = ins_to_insert;
ins_to_insert->next = ins;
if (bb->code == ins)
bb->code = ins_to_insert;
}
}
/*
* mono_verify_bblock:
*
* Verify that the next and prev pointers are consistent inside the instructions in BB.
*/
void
mono_verify_bblock (MonoBasicBlock *bb)
{
MonoInst *ins, *prev;
prev = NULL;
for (ins = bb->code; ins; ins = ins->next) {
g_assert (ins->prev == prev);
prev = ins;
}
if (bb->last_ins)
g_assert (!bb->last_ins->next);
}
/*
* mono_verify_cfg:
*
* Perform consistency checks on the JIT data structures and the IR
*/
void
mono_verify_cfg (MonoCompile *cfg)
{
MonoBasicBlock *bb;
for (bb = cfg->bb_entry; bb; bb = bb->next_bb)
mono_verify_bblock (bb);
}
// This will free many fields in cfg to save
// memory. Note that it must be idempotent:
// calling it multiple times must be safe.
void
mono_empty_compile (MonoCompile *cfg)
{
mono_free_loop_info (cfg);
// These live in the mempool, and so must be freed
// first
for (GSList *l = cfg->headers_to_free; l; l = l->next) {
mono_metadata_free_mh ((MonoMethodHeader *)l->data);
}
cfg->headers_to_free = NULL;
if (cfg->mempool) {
//mono_mempool_stats (cfg->mempool);
mono_mempool_destroy (cfg->mempool);
cfg->mempool = NULL;
}
g_free (cfg->varinfo);
cfg->varinfo = NULL;
g_free (cfg->vars);
cfg->vars = NULL;
if (cfg->rs) {
mono_regstate_free (cfg->rs);
cfg->rs = NULL;
}
}
void
mono_destroy_compile (MonoCompile *cfg)
{
mono_empty_compile (cfg);
mono_metadata_free_mh (cfg->header);
g_hash_table_destroy (cfg->spvars);
g_hash_table_destroy (cfg->exvars);
g_list_free (cfg->ldstr_list);
g_hash_table_destroy (cfg->token_info_hash);
g_hash_table_destroy (cfg->abs_patches);
mono_debug_free_method (cfg);
g_free (cfg->varinfo);
g_free (cfg->vars);
g_free (cfg->exception_message);
g_free (cfg);
}
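/*
 * mono_add_patch_info:
 *
 *   Prepend a patch entry of TYPE at native offset IP to CFG's patch list. The entry
 * is resolved to its final target by mono_resolve_patch_target () during mono_codegen ().
 */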
void
mono_add_patch_info (MonoCompile *cfg, int ip, MonoJumpInfoType type, gconstpointer target)
{
if (type == MONO_PATCH_INFO_NONE)
return;
MonoJumpInfo *ji = (MonoJumpInfo *)mono_mempool_alloc0 (cfg->mempool, sizeof (MonoJumpInfo));
ji->ip.i = ip;
ji->type = type;
ji->data.target = target;
ji->next = cfg->patch_info;
cfg->patch_info = ji;
}
void
mono_add_patch_info_rel (MonoCompile *cfg, int ip, MonoJumpInfoType type, gconstpointer target, int relocation)
{
if (type == MONO_PATCH_INFO_NONE)
return;
MonoJumpInfo *ji = (MonoJumpInfo *)mono_mempool_alloc0 (cfg->mempool, sizeof (MonoJumpInfo));
ji->ip.i = ip;
ji->type = type;
ji->relocation = relocation;
ji->data.target = target;
ji->next = cfg->patch_info;
cfg->patch_info = ji;
}
void
mono_remove_patch_info (MonoCompile *cfg, int ip)
{
MonoJumpInfo **ji = &cfg->patch_info;
while (*ji) {
if ((*ji)->ip.i == ip)
*ji = (*ji)->next;
else
ji = &((*ji)->next);
}
}
void
mono_add_seq_point (MonoCompile *cfg, MonoBasicBlock *bb, MonoInst *ins, int native_offset)
{
ins->inst_offset = native_offset;
g_ptr_array_add (cfg->seq_points, ins);
if (bb) {
bb->seq_points = g_slist_prepend_mempool (cfg->mempool, bb->seq_points, ins);
bb->last_seq_point = ins;
}
}
void
mono_add_var_location (MonoCompile *cfg, MonoInst *var, gboolean is_reg, int reg, int offset, int from, int to)
{
MonoDwarfLocListEntry *entry = (MonoDwarfLocListEntry *)mono_mempool_alloc0 (cfg->mempool, sizeof (MonoDwarfLocListEntry));
if (is_reg)
g_assert (offset == 0);
entry->is_reg = is_reg;
entry->reg = reg;
entry->offset = offset;
entry->from = from;
entry->to = to;
if (var == cfg->args [0])
cfg->this_loclist = g_slist_append_mempool (cfg->mempool, cfg->this_loclist, entry);
else if (var == cfg->rgctx_var)
cfg->rgctx_loclist = g_slist_append_mempool (cfg->mempool, cfg->rgctx_loclist, entry);
}
static void
mono_apply_volatile (MonoInst *inst, MonoBitSet *set, gsize index)
{
inst->flags |= mono_bitset_test_safe (set, index) ? MONO_INST_VOLATILE : 0;
}
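/*
 * mono_compile_create_vars:
 *
 *   Create the MonoInsts representing the return value, the 'this' argument, the
 * method arguments and the IL locals of the method being compiled.
 */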
static void
mono_compile_create_vars (MonoCompile *cfg)
{
MonoMethodSignature *sig;
MonoMethodHeader *header;
int i;
header = cfg->header;
sig = mono_method_signature_internal (cfg->method);
if (!MONO_TYPE_IS_VOID (sig->ret)) {
cfg->ret = mono_compile_create_var (cfg, sig->ret, OP_ARG);
/* Inhibit optimizations */
cfg->ret->flags |= MONO_INST_VOLATILE;
}
if (cfg->verbose_level > 2)
g_print ("creating vars\n");
cfg->args = (MonoInst **)mono_mempool_alloc0 (cfg->mempool, (sig->param_count + sig->hasthis) * sizeof (MonoInst*));
if (sig->hasthis) {
MonoInst* arg = mono_compile_create_var (cfg, m_class_get_this_arg (cfg->method->klass), OP_ARG);
mono_apply_volatile (arg, header->volatile_args, 0);
cfg->args [0] = arg;
cfg->this_arg = arg;
}
for (i = 0; i < sig->param_count; ++i) {
MonoInst* arg = mono_compile_create_var (cfg, sig->params [i], OP_ARG);
mono_apply_volatile (arg, header->volatile_args, i + sig->hasthis);
cfg->args [i + sig->hasthis] = arg;
}
if (cfg->verbose_level > 2) {
if (cfg->ret) {
printf ("\treturn : ");
mono_print_ins (cfg->ret);
}
if (sig->hasthis) {
printf ("\tthis: ");
mono_print_ins (cfg->args [0]);
}
for (i = 0; i < sig->param_count; ++i) {
printf ("\targ [%d]: ", i);
mono_print_ins (cfg->args [i + sig->hasthis]);
}
}
cfg->locals_start = cfg->num_varinfo;
cfg->locals = (MonoInst **)mono_mempool_alloc0 (cfg->mempool, header->num_locals * sizeof (MonoInst*));
if (cfg->verbose_level > 2)
g_print ("creating locals\n");
for (i = 0; i < header->num_locals; ++i) {
if (cfg->verbose_level > 2)
g_print ("\tlocal [%d]: ", i);
cfg->locals [i] = mono_compile_create_var (cfg, header->locals [i], OP_LOCAL);
mono_apply_volatile (cfg->locals [i], header->volatile_locals, i);
}
if (cfg->verbose_level > 2)
g_print ("locals done\n");
#ifdef ENABLE_LLVM
if (COMPILE_LLVM (cfg))
mono_llvm_create_vars (cfg);
else
mono_arch_create_vars (cfg);
#else
mono_arch_create_vars (cfg);
#endif
if (cfg->method->save_lmf && cfg->create_lmf_var) {
MonoInst *lmf_var = mono_compile_create_var (cfg, mono_get_int_type (), OP_LOCAL);
lmf_var->flags |= MONO_INST_VOLATILE;
lmf_var->flags |= MONO_INST_LMF;
cfg->lmf_var = lmf_var;
}
}
void
mono_print_code (MonoCompile *cfg, const char* msg)
{
MonoBasicBlock *bb;
for (bb = cfg->bb_entry; bb; bb = bb->next_bb)
mono_print_bb (bb, msg);
}
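/*
 * mono_postprocess_patches:
 *
 *   Rewrite patches which can only be finalized after code emission: resolve
 * MONO_PATCH_INFO_ABS patches using the abs_patches hash, and allocate the native
 * jump tables for MONO_PATCH_INFO_SWITCH patches.
 */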
static void
mono_postprocess_patches (MonoCompile *cfg)
{
MonoJumpInfo *patch_info;
int i;
for (patch_info = cfg->patch_info; patch_info; patch_info = patch_info->next) {
switch (patch_info->type) {
case MONO_PATCH_INFO_ABS: {
/*
* Change patches of type MONO_PATCH_INFO_ABS into patches describing the
* absolute address.
*/
if (cfg->abs_patches) {
MonoJumpInfo *abs_ji = (MonoJumpInfo *)g_hash_table_lookup (cfg->abs_patches, patch_info->data.target);
if (abs_ji) {
patch_info->type = abs_ji->type;
patch_info->data.target = abs_ji->data.target;
}
}
break;
}
case MONO_PATCH_INFO_SWITCH: {
gpointer *table;
if (cfg->method->dynamic) {
table = (void **)mono_code_manager_reserve (cfg->dynamic_info->code_mp, sizeof (gpointer) * patch_info->data.table->table_size);
} else {
table = (void **)mono_mem_manager_code_reserve (cfg->mem_manager, sizeof (gpointer) * patch_info->data.table->table_size);
}
for (i = 0; i < patch_info->data.table->table_size; i++) {
/* Might be NULL if the switch is eliminated */
if (patch_info->data.table->table [i]) {
g_assert (patch_info->data.table->table [i]->native_offset);
table [i] = GINT_TO_POINTER (patch_info->data.table->table [i]->native_offset);
} else {
table [i] = NULL;
}
}
patch_info->data.table->table = (MonoBasicBlock**)table;
break;
}
default:
/* do nothing */
break;
}
}
}
/* These patches require the JitInfo of the compiled method to already be in place when they are used */
static void
mono_postprocess_patches_after_ji_publish (MonoCompile *cfg)
{
MonoJumpInfo *patch_info;
for (patch_info = cfg->patch_info; patch_info; patch_info = patch_info->next) {
switch (patch_info->type) {
case MONO_PATCH_INFO_METHOD_JUMP: {
unsigned char *ip = cfg->native_code + patch_info->ip.i;
mini_register_jump_site (patch_info->data.method, ip);
break;
}
default:
/* do nothing */
break;
}
}
}
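/*
 * mono_codegen:
 *
 *   Emit native code for CFG: lower and register allocate each bblock, emit the
 * prolog/bblocks/epilog into a temporary buffer, copy the result into memory owned
 * by the method's memory manager (or a private pool for dynamic methods), then
 * resolve patches, commit the code and flush the icache.
 */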
void
mono_codegen (MonoCompile *cfg)
{
MonoBasicBlock *bb;
int max_epilog_size;
guint8 *code;
MonoMemoryManager *code_mem_manager = cfg->mem_manager;
guint unwindlen = 0;
for (bb = cfg->bb_entry; bb; bb = bb->next_bb) {
cfg->spill_count = 0;
/* we reuse dfn here */
/* bb->dfn = bb_count++; */
mono_arch_lowering_pass (cfg, bb);
if (cfg->opt & MONO_OPT_PEEPHOLE)
mono_arch_peephole_pass_1 (cfg, bb);
mono_local_regalloc (cfg, bb);
if (cfg->opt & MONO_OPT_PEEPHOLE)
mono_arch_peephole_pass_2 (cfg, bb);
if (cfg->gen_seq_points && !cfg->gen_sdb_seq_points)
mono_bb_deduplicate_op_il_seq_points (cfg, bb);
}
code = mono_arch_emit_prolog (cfg);
set_code_cursor (cfg, code);
cfg->prolog_end = cfg->code_len;
cfg->cfa_reg = cfg->cur_cfa_reg;
cfg->cfa_offset = cfg->cur_cfa_offset;
mono_debug_open_method (cfg);
/* emit code all basic blocks */
for (bb = cfg->bb_entry; bb; bb = bb->next_bb) {
bb->native_offset = cfg->code_len;
bb->real_native_offset = cfg->code_len;
//if ((bb == cfg->bb_entry) || !(bb->region == -1 && !bb->dfn))
mono_arch_output_basic_block (cfg, bb);
bb->native_length = cfg->code_len - bb->native_offset;
if (bb == cfg->bb_exit) {
cfg->epilog_begin = cfg->code_len;
mono_arch_emit_epilog (cfg);
cfg->epilog_end = cfg->code_len;
}
if (bb->clause_holes) {
GList *tmp;
for (tmp = bb->clause_holes; tmp; tmp = tmp->prev)
mono_cfg_add_try_hole (cfg, ((MonoLeaveClause *) tmp->data)->clause, cfg->native_code + bb->native_offset, bb);
}
}
mono_arch_emit_exceptions (cfg);
max_epilog_size = 0;
cfg->code_size = cfg->code_len + max_epilog_size;
/* fixme: align to MONO_ARCH_CODE_ALIGNMENT */
#ifdef MONO_ARCH_HAVE_UNWIND_TABLE
if (!cfg->compile_aot)
unwindlen = mono_arch_unwindinfo_init_method_unwind_info (cfg);
#endif
if (cfg->method->dynamic) {
/* Allocate the code into a separate memory pool so it can be freed */
cfg->dynamic_info = g_new0 (MonoJitDynamicMethodInfo, 1);
cfg->dynamic_info->code_mp = mono_code_manager_new_dynamic ();
MonoJitMemoryManager *jit_mm = (MonoJitMemoryManager*)cfg->jit_mm;
jit_mm_lock (jit_mm);
if (!jit_mm->dynamic_code_hash)
jit_mm->dynamic_code_hash = g_hash_table_new (NULL, NULL);
g_hash_table_insert (jit_mm->dynamic_code_hash, cfg->method, cfg->dynamic_info);
jit_mm_unlock (jit_mm);
code = (guint8 *)mono_code_manager_reserve (cfg->dynamic_info->code_mp, cfg->code_size + cfg->thunk_area + unwindlen);
} else {
code = (guint8 *)mono_mem_manager_code_reserve (code_mem_manager, cfg->code_size + cfg->thunk_area + unwindlen);
}
mono_codeman_enable_write ();
if (cfg->thunk_area) {
cfg->thunks_offset = cfg->code_size + unwindlen;
cfg->thunks = code + cfg->thunks_offset;
memset (cfg->thunks, 0, cfg->thunk_area);
}
g_assert (code);
memcpy (code, cfg->native_code, cfg->code_len);
g_free (cfg->native_code);
cfg->native_code = code;
code = cfg->native_code + cfg->code_len;
/* g_assert (((int)cfg->native_code & (MONO_ARCH_CODE_ALIGNMENT - 1)) == 0); */
mono_postprocess_patches (cfg);
#ifdef VALGRIND_JIT_REGISTER_MAP
if (valgrind_register){
char* nm = mono_method_full_name (cfg->method, TRUE);
VALGRIND_JIT_REGISTER_MAP (nm, cfg->native_code, cfg->native_code + cfg->code_len);
g_free (nm);
}
#endif
if (cfg->verbose_level > 0) {
char* nm = mono_method_get_full_name (cfg->method);
g_print ("Method %s emitted at %p to %p (code length %d)\n",
nm,
cfg->native_code, cfg->native_code + cfg->code_len, cfg->code_len);
g_free (nm);
}
{
gboolean is_generic = FALSE;
if (cfg->method->is_inflated || mono_method_get_generic_container (cfg->method) ||
mono_class_is_gtd (cfg->method->klass) || mono_class_is_ginst (cfg->method->klass)) {
is_generic = TRUE;
}
if (cfg->gshared)
g_assert (is_generic);
}
#ifdef MONO_ARCH_HAVE_SAVE_UNWIND_INFO
mono_arch_save_unwind_info (cfg);
#endif
{
MonoJumpInfo *ji;
gpointer target;
for (ji = cfg->patch_info; ji; ji = ji->next) {
if (cfg->compile_aot) {
switch (ji->type) {
case MONO_PATCH_INFO_BB:
case MONO_PATCH_INFO_LABEL:
break;
default:
/* No need to patch these */
continue;
}
}
if (ji->type == MONO_PATCH_INFO_NONE)
continue;
target = mono_resolve_patch_target (cfg->method, cfg->native_code, ji, cfg->run_cctors, cfg->error);
if (!is_ok (cfg->error)) {
mono_cfg_set_exception (cfg, MONO_EXCEPTION_MONO_ERROR);
return;
}
mono_arch_patch_code_new (cfg, cfg->native_code, ji, target);
}
}
if (cfg->method->dynamic) {
mono_code_manager_commit (cfg->dynamic_info->code_mp, cfg->native_code, cfg->code_size, cfg->code_len);
} else {
mono_mem_manager_code_commit (code_mem_manager, cfg->native_code, cfg->code_size, cfg->code_len);
}
mono_codeman_disable_write ();
MONO_PROFILER_RAISE (jit_code_buffer, (cfg->native_code, cfg->code_len, MONO_PROFILER_CODE_BUFFER_METHOD, cfg->method));
mono_arch_flush_icache (cfg->native_code, cfg->code_len);
mono_debug_close_method (cfg);
#ifdef MONO_ARCH_HAVE_UNWIND_TABLE
if (!cfg->compile_aot)
mono_arch_unwindinfo_install_method_unwind_info (&cfg->arch.unwindinfo, cfg->native_code, cfg->code_len);
#endif
}
static void
compute_reachable (MonoBasicBlock *bb)
{
int i;
if (!(bb->flags & BB_VISITED)) {
bb->flags |= BB_VISITED;
for (i = 0; i < bb->out_count; ++i)
compute_reachable (bb->out_bb [i]);
}
}
static void mono_bb_ordering (MonoCompile *cfg)
{
int dfn = 0;
/* Depth-first ordering on basic blocks */
cfg->bblocks = (MonoBasicBlock **)mono_mempool_alloc (cfg->mempool, sizeof (MonoBasicBlock*) * (cfg->num_bblocks + 1));
cfg->max_block_num = cfg->num_bblocks;
df_visit (cfg->bb_entry, &dfn, cfg->bblocks);
#if defined(__GNUC__) && __GNUC__ == 7 && defined(__x86_64__)
/* workaround for an AMD specific issue that only happens on GCC 7 so far,
* for more information see https://github.com/mono/mono/issues/9298 */
mono_memory_barrier ();
#endif
g_assertf (cfg->num_bblocks >= dfn, "cfg->num_bblocks=%d, dfn=%d\n", cfg->num_bblocks, dfn);
if (cfg->num_bblocks != dfn + 1) {
MonoBasicBlock *bb;
cfg->num_bblocks = dfn + 1;
		/* Remove unreachable bblocks, because the code in them may be
		 * inconsistent (e.g. accesses to dead variables) */
for (bb = cfg->bb_entry; bb; bb = bb->next_bb)
bb->flags &= ~BB_VISITED;
compute_reachable (cfg->bb_entry);
for (bb = cfg->bb_entry; bb; bb = bb->next_bb)
if (bb->flags & BB_EXCEPTION_HANDLER)
compute_reachable (bb);
for (bb = cfg->bb_entry; bb; bb = bb->next_bb) {
if (!(bb->flags & BB_VISITED)) {
if (cfg->verbose_level > 1)
g_print ("found unreachable code in BB%d\n", bb->block_num);
bb->code = bb->last_ins = NULL;
while (bb->out_count)
mono_unlink_bblock (cfg, bb, bb->out_bb [0]);
}
}
for (bb = cfg->bb_entry; bb; bb = bb->next_bb)
bb->flags &= ~BB_VISITED;
}
}
static void
mono_handle_out_of_line_bblock (MonoCompile *cfg)
{
MonoBasicBlock *bb;
for (bb = cfg->bb_entry; bb; bb = bb->next_bb) {
if (bb->next_bb && bb->next_bb->out_of_line && bb->last_ins && !MONO_IS_BRANCH_OP (bb->last_ins)) {
MonoInst *ins;
MONO_INST_NEW (cfg, ins, OP_BR);
MONO_ADD_INS (bb, ins);
ins->inst_target_bb = bb->next_bb;
}
}
}
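/*
 * create_jit_info:
 *
 *   Build the MonoJitInfo for the compiled method: exception clauses, try block
 * holes, generic sharing info, arch specific EH data and unwind info, sized
 * according to the flags computed below.
 */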
static MonoJitInfo*
create_jit_info (MonoCompile *cfg, MonoMethod *method_to_compile)
{
GSList *tmp;
MonoMethodHeader *header;
MonoJitInfo *jinfo;
MonoJitInfoFlags flags = JIT_INFO_NONE;
int num_clauses, num_holes = 0;
guint32 stack_size = 0;
g_assert (method_to_compile == cfg->method);
header = cfg->header;
if (cfg->gshared)
flags |= JIT_INFO_HAS_GENERIC_JIT_INFO;
if (cfg->arch_eh_jit_info) {
MonoJitArgumentInfo *arg_info;
MonoMethodSignature *sig = mono_method_signature_internal (cfg->method_to_register);
/*
* This cannot be computed during stack walking, as
* mono_arch_get_argument_info () is not signal safe.
*/
arg_info = g_newa (MonoJitArgumentInfo, sig->param_count + 1);
stack_size = mono_arch_get_argument_info (sig, sig->param_count, arg_info);
if (stack_size)
flags |= JIT_INFO_HAS_ARCH_EH_INFO;
}
if (cfg->has_unwind_info_for_epilog && !(flags & JIT_INFO_HAS_ARCH_EH_INFO))
flags |= JIT_INFO_HAS_ARCH_EH_INFO;
if (cfg->thunk_area)
flags |= JIT_INFO_HAS_THUNK_INFO;
if (cfg->try_block_holes) {
for (tmp = cfg->try_block_holes; tmp; tmp = tmp->next) {
TryBlockHole *hole = (TryBlockHole *)tmp->data;
MonoExceptionClause *ec = hole->clause;
int hole_end = hole->basic_block->native_offset + hole->basic_block->native_length;
MonoBasicBlock *clause_last_bb = cfg->cil_offset_to_bb [ec->try_offset + ec->try_len];
g_assert (clause_last_bb);
			/* Holes at the end of a try region can be represented by simply reducing the size of the block itself. */
if (clause_last_bb->native_offset != hole_end)
++num_holes;
}
if (num_holes)
flags |= JIT_INFO_HAS_TRY_BLOCK_HOLES;
if (G_UNLIKELY (cfg->verbose_level >= 4))
printf ("Number of try block holes %d\n", num_holes);
}
if (COMPILE_LLVM (cfg)) {
num_clauses = cfg->llvm_ex_info_len;
} else {
num_clauses = header->num_clauses;
int dead_clauses = 0;
for (int i = 0; i < header->num_clauses; ++i)
if (cfg->clause_is_dead [i])
dead_clauses ++;
num_clauses -= dead_clauses;
}
if (cfg->method->dynamic)
jinfo = (MonoJitInfo *)g_malloc0 (mono_jit_info_size (flags, num_clauses, num_holes));
else
jinfo = (MonoJitInfo *)mono_mem_manager_alloc0 (cfg->mem_manager, mono_jit_info_size (flags, num_clauses, num_holes));
jinfo_try_holes_size += num_holes * sizeof (MonoTryBlockHoleJitInfo);
mono_jit_info_init (jinfo, cfg->method_to_register, cfg->native_code, cfg->code_len, flags, num_clauses, num_holes);
if (COMPILE_LLVM (cfg))
jinfo->from_llvm = TRUE;
if (cfg->gshared) {
MonoInst *inst;
MonoGenericJitInfo *gi;
GSList *loclist = NULL;
gi = mono_jit_info_get_generic_jit_info (jinfo);
g_assert (gi);
if (cfg->method->dynamic)
gi->generic_sharing_context = g_new0 (MonoGenericSharingContext, 1);
else
gi->generic_sharing_context = (MonoGenericSharingContext *)mono_mem_manager_alloc0 (cfg->mem_manager, sizeof (MonoGenericSharingContext));
mini_init_gsctx (NULL, cfg->gsctx_context, gi->generic_sharing_context);
if ((method_to_compile->flags & METHOD_ATTRIBUTE_STATIC) ||
mini_method_get_context (method_to_compile)->method_inst ||
m_class_is_valuetype (method_to_compile->klass)) {
g_assert (cfg->rgctx_var);
}
gi->has_this = 1;
if ((method_to_compile->flags & METHOD_ATTRIBUTE_STATIC) ||
mini_method_get_context (method_to_compile)->method_inst ||
m_class_is_valuetype (method_to_compile->klass)) {
inst = cfg->rgctx_var;
if (!COMPILE_LLVM (cfg))
g_assert (inst->opcode == OP_REGOFFSET);
loclist = cfg->rgctx_loclist;
} else {
inst = cfg->args [0];
loclist = cfg->this_loclist;
}
if (loclist) {
/* Needed to handle async exceptions */
GSList *l;
int i;
gi->nlocs = g_slist_length (loclist);
if (cfg->method->dynamic)
gi->locations = (MonoDwarfLocListEntry *)g_malloc0 (gi->nlocs * sizeof (MonoDwarfLocListEntry));
else
gi->locations = (MonoDwarfLocListEntry *)mono_mem_manager_alloc0 (cfg->mem_manager, gi->nlocs * sizeof (MonoDwarfLocListEntry));
i = 0;
for (l = loclist; l; l = l->next) {
memcpy (&(gi->locations [i]), l->data, sizeof (MonoDwarfLocListEntry));
i ++;
}
}
if (COMPILE_LLVM (cfg)) {
g_assert (cfg->llvm_this_reg != -1);
gi->this_in_reg = 0;
gi->this_reg = cfg->llvm_this_reg;
gi->this_offset = cfg->llvm_this_offset;
} else if (inst->opcode == OP_REGVAR) {
gi->this_in_reg = 1;
gi->this_reg = inst->dreg;
} else {
g_assert (inst->opcode == OP_REGOFFSET);
#ifdef TARGET_X86
g_assert (inst->inst_basereg == X86_EBP);
#elif defined(TARGET_AMD64)
g_assert (inst->inst_basereg == X86_EBP || inst->inst_basereg == X86_ESP);
#endif
g_assert (inst->inst_offset >= G_MININT32 && inst->inst_offset <= G_MAXINT32);
gi->this_in_reg = 0;
gi->this_reg = inst->inst_basereg;
gi->this_offset = inst->inst_offset;
}
}
if (num_holes) {
MonoTryBlockHoleTableJitInfo *table;
int i;
table = mono_jit_info_get_try_block_hole_table_info (jinfo);
table->num_holes = (guint16)num_holes;
i = 0;
for (tmp = cfg->try_block_holes; tmp; tmp = tmp->next) {
guint32 start_bb_offset;
MonoTryBlockHoleJitInfo *hole;
TryBlockHole *hole_data = (TryBlockHole *)tmp->data;
MonoExceptionClause *ec = hole_data->clause;
int hole_end = hole_data->basic_block->native_offset + hole_data->basic_block->native_length;
MonoBasicBlock *clause_last_bb = cfg->cil_offset_to_bb [ec->try_offset + ec->try_len];
g_assert (clause_last_bb);
			/* Holes at the end of a try region can be represented by simply reducing the size of the block itself. */
if (clause_last_bb->native_offset == hole_end)
continue;
start_bb_offset = hole_data->start_offset - hole_data->basic_block->native_offset;
hole = &table->holes [i++];
hole->clause = hole_data->clause - &header->clauses [0];
hole->offset = (guint32)hole_data->start_offset;
hole->length = (guint16)(hole_data->basic_block->native_length - start_bb_offset);
if (G_UNLIKELY (cfg->verbose_level >= 4))
printf ("\tTry block hole at eh clause %d offset %x length %x\n", hole->clause, hole->offset, hole->length);
}
g_assert (i == num_holes);
}
if (jinfo->has_arch_eh_info) {
MonoArchEHJitInfo *info;
info = mono_jit_info_get_arch_eh_info (jinfo);
info->stack_size = stack_size;
}
if (cfg->thunk_area) {
MonoThunkJitInfo *info;
info = mono_jit_info_get_thunk_info (jinfo);
info->thunks_offset = cfg->thunks_offset;
info->thunks_size = cfg->thunk_area;
}
if (COMPILE_LLVM (cfg)) {
if (num_clauses)
memcpy (&jinfo->clauses [0], &cfg->llvm_ex_info [0], num_clauses * sizeof (MonoJitExceptionInfo));
} else {
int eindex = 0;
for (int i = 0; i < header->num_clauses; i++) {
MonoExceptionClause *ec = &header->clauses [i];
MonoJitExceptionInfo *ei = &jinfo->clauses [eindex];
MonoBasicBlock *tblock;
MonoInst *exvar;
if (cfg->clause_is_dead [i])
continue;
eindex ++;
ei->flags = ec->flags;
if (G_UNLIKELY (cfg->verbose_level >= 4))
printf ("IL clause: try 0x%x-0x%x handler 0x%x-0x%x filter 0x%x\n", ec->try_offset, ec->try_offset + ec->try_len, ec->handler_offset, ec->handler_offset + ec->handler_len, ec->flags == MONO_EXCEPTION_CLAUSE_FILTER ? ec->data.filter_offset : 0);
exvar = mono_find_exvar_for_offset (cfg, ec->handler_offset);
ei->exvar_offset = exvar ? exvar->inst_offset : 0;
if (ei->flags == MONO_EXCEPTION_CLAUSE_FILTER) {
tblock = cfg->cil_offset_to_bb [ec->data.filter_offset];
g_assert (tblock);
ei->data.filter = cfg->native_code + tblock->native_offset;
} else {
ei->data.catch_class = ec->data.catch_class;
}
tblock = cfg->cil_offset_to_bb [ec->try_offset];
g_assert (tblock);
g_assert (tblock->native_offset);
ei->try_start = cfg->native_code + tblock->native_offset;
if (tblock->extend_try_block) {
/*
* Extend the try block backwards to include parts of the previous call
* instruction.
*/
ei->try_start = (guint8*)ei->try_start - cfg->backend->monitor_enter_adjustment;
}
if (ec->try_offset + ec->try_len < header->code_size)
tblock = cfg->cil_offset_to_bb [ec->try_offset + ec->try_len];
else
tblock = cfg->bb_exit;
if (G_UNLIKELY (cfg->verbose_level >= 4))
printf ("looking for end of try [%d, %d] -> %p (code size %d)\n", ec->try_offset, ec->try_len, tblock, header->code_size);
g_assert (tblock);
if (!tblock->native_offset) {
int j, end;
for (j = ec->try_offset + ec->try_len, end = ec->try_offset; j >= end; --j) {
MonoBasicBlock *bb = cfg->cil_offset_to_bb [j];
if (bb && bb->native_offset) {
tblock = bb;
break;
}
}
}
ei->try_end = cfg->native_code + tblock->native_offset;
g_assert (tblock->native_offset);
tblock = cfg->cil_offset_to_bb [ec->handler_offset];
g_assert (tblock);
ei->handler_start = cfg->native_code + tblock->native_offset;
for (tmp = cfg->try_block_holes; tmp; tmp = tmp->next) {
TryBlockHole *hole = (TryBlockHole *)tmp->data;
gpointer hole_end = cfg->native_code + (hole->basic_block->native_offset + hole->basic_block->native_length);
if (hole->clause == ec && hole_end == ei->try_end) {
if (G_UNLIKELY (cfg->verbose_level >= 4))
printf ("\tShortening try block %d from %x to %x\n", i, (int)((guint8*)ei->try_end - cfg->native_code), hole->start_offset);
ei->try_end = cfg->native_code + hole->start_offset;
break;
}
}
if (ec->flags == MONO_EXCEPTION_CLAUSE_FINALLY) {
int end_offset;
if (ec->handler_offset + ec->handler_len < header->code_size) {
tblock = cfg->cil_offset_to_bb [ec->handler_offset + ec->handler_len];
if (tblock->native_offset) {
end_offset = tblock->native_offset;
} else {
int j, end;
for (j = ec->handler_offset + ec->handler_len, end = ec->handler_offset; j >= end; --j) {
MonoBasicBlock *bb = cfg->cil_offset_to_bb [j];
if (bb && bb->native_offset) {
tblock = bb;
break;
}
}
end_offset = tblock->native_offset + tblock->native_length;
}
} else {
end_offset = cfg->epilog_begin;
}
ei->data.handler_end = cfg->native_code + end_offset;
}
/* Keep try_start/end non-authenticated, they are never branched to */
//ei->try_start = MINI_ADDR_TO_FTNPTR (ei->try_start);
//ei->try_end = MINI_ADDR_TO_FTNPTR (ei->try_end);
ei->handler_start = MINI_ADDR_TO_FTNPTR (ei->handler_start);
if (ei->flags == MONO_EXCEPTION_CLAUSE_FILTER)
ei->data.filter = MINI_ADDR_TO_FTNPTR (ei->data.filter);
else if (ei->flags == MONO_EXCEPTION_CLAUSE_FINALLY)
ei->data.handler_end = MINI_ADDR_TO_FTNPTR (ei->data.handler_end);
}
}
if (G_UNLIKELY (cfg->verbose_level >= 4)) {
int i;
for (i = 0; i < jinfo->num_clauses; i++) {
MonoJitExceptionInfo *ei = &jinfo->clauses [i];
int start = (guint8*)ei->try_start - cfg->native_code;
int end = (guint8*)ei->try_end - cfg->native_code;
int handler = (guint8*)ei->handler_start - cfg->native_code;
int handler_end = (guint8*)ei->data.handler_end - cfg->native_code;
printf ("JitInfo EH clause %d flags %x try %x-%x handler %x-%x\n", i, ei->flags, start, end, handler, handler_end);
}
}
if (cfg->encoded_unwind_ops) {
/* Generated by LLVM */
jinfo->unwind_info = mono_cache_unwind_info (cfg->encoded_unwind_ops, cfg->encoded_unwind_ops_len);
g_free (cfg->encoded_unwind_ops);
} else if (cfg->unwind_ops) {
guint32 info_len;
guint8 *unwind_info = mono_unwind_ops_encode (cfg->unwind_ops, &info_len);
guint32 unwind_desc;
unwind_desc = mono_cache_unwind_info (unwind_info, info_len);
if (cfg->has_unwind_info_for_epilog) {
MonoArchEHJitInfo *info;
info = mono_jit_info_get_arch_eh_info (jinfo);
g_assert (info);
info->epilog_size = cfg->code_len - cfg->epilog_begin;
}
jinfo->unwind_info = unwind_desc;
g_free (unwind_info);
} else {
jinfo->unwind_info = cfg->used_int_regs;
}
return jinfo;
}
/* Return whether METHOD is a gsharedvt method */
static gboolean
is_gsharedvt_method (MonoMethod *method)
{
MonoGenericContext *context;
MonoGenericInst *inst;
int i;
if (!method->is_inflated)
return FALSE;
context = mono_method_get_context (method);
inst = context->class_inst;
if (inst) {
for (i = 0; i < inst->type_argc; ++i)
if (mini_is_gsharedvt_gparam (inst->type_argv [i]))
return TRUE;
}
inst = context->method_inst;
if (inst) {
for (i = 0; i < inst->type_argc; ++i)
if (mini_is_gsharedvt_gparam (inst->type_argv [i]))
return TRUE;
}
return FALSE;
}
static gboolean
is_open_method (MonoMethod *method)
{
MonoGenericContext *context;
if (!method->is_inflated)
return FALSE;
context = mono_method_get_context (method);
if (context->class_inst && context->class_inst->is_open)
return TRUE;
if (context->method_inst && context->method_inst->is_open)
return TRUE;
return FALSE;
}
static void
mono_insert_nop_in_empty_bb (MonoCompile *cfg)
{
MonoBasicBlock *bb;
for (bb = cfg->bb_entry; bb; bb = bb->next_bb) {
if (bb->code)
continue;
MonoInst *nop;
MONO_INST_NEW (cfg, nop, OP_NOP);
MONO_ADD_INS (bb, nop);
}
}
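/*
 * insert_safepoint:
 *
 *   Insert an OP_GC_SAFE_POINT polling mono_polling_required at the start of BBLOCK.
 * In the entry bblock the safepoint is appended at the end instead, and in exception
 * handler bblocks it is placed after the EH setup opcodes.
 */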
static void
insert_safepoint (MonoCompile *cfg, MonoBasicBlock *bblock)
{
MonoInst *poll_addr, *ins;
if (cfg->disable_gc_safe_points)
return;
if (cfg->verbose_level > 1)
printf ("ADDING SAFE POINT TO BB %d\n", bblock->block_num);
g_assert (mini_safepoints_enabled ());
NEW_AOTCONST (cfg, poll_addr, MONO_PATCH_INFO_GC_SAFE_POINT_FLAG, (gpointer)&mono_polling_required);
MONO_INST_NEW (cfg, ins, OP_GC_SAFE_POINT);
ins->sreg1 = poll_addr->dreg;
if (bblock->flags & BB_EXCEPTION_HANDLER) {
MonoInst *eh_op = bblock->code;
if (eh_op && eh_op->opcode != OP_START_HANDLER && eh_op->opcode != OP_GET_EX_OBJ) {
eh_op = NULL;
} else {
MonoInst *next_eh_op = eh_op ? eh_op->next : NULL;
			// skip all EH related ops
while (next_eh_op && (next_eh_op->opcode == OP_START_HANDLER || next_eh_op->opcode == OP_GET_EX_OBJ)) {
eh_op = next_eh_op;
next_eh_op = eh_op->next;
}
}
mono_bblock_insert_after_ins (bblock, eh_op, poll_addr);
mono_bblock_insert_after_ins (bblock, poll_addr, ins);
} else if (bblock == cfg->bb_entry) {
mono_bblock_insert_after_ins (bblock, bblock->last_ins, poll_addr);
mono_bblock_insert_after_ins (bblock, poll_addr, ins);
} else {
mono_bblock_insert_before_ins (bblock, NULL, poll_addr);
mono_bblock_insert_after_ins (bblock, poll_addr, ins);
}
}
/*
 * This code inserts safepoints into managed code at important code paths:
 * - the first basic block
 * - the landing bblocks of exception handlers
 * - loop body starts.
 */
static void
insert_safepoints (MonoCompile *cfg)
{
MonoBasicBlock *bb;
g_assert (mini_safepoints_enabled ());
if (COMPILE_LLVM (cfg)) {
if (!cfg->llvm_only) {
/* We rely on LLVM's safepoints insertion capabilities. */
if (cfg->verbose_level > 1)
printf ("SKIPPING SAFEPOINTS for code compiled with LLVM\n");
return;
}
}
if (cfg->method->wrapper_type == MONO_WRAPPER_MANAGED_TO_NATIVE) {
WrapperInfo *info = mono_marshal_get_wrapper_info (cfg->method);
/* These wrappers are called from the wrapper for the polling function, leading to potential stack overflow */
if (info && info->subtype == WRAPPER_SUBTYPE_ICALL_WRAPPER &&
(info->d.icall.jit_icall_id == MONO_JIT_ICALL_mono_threads_state_poll ||
info->d.icall.jit_icall_id == MONO_JIT_ICALL_mono_thread_interruption_checkpoint ||
info->d.icall.jit_icall_id == MONO_JIT_ICALL_mono_threads_exit_gc_safe_region_unbalanced)) {
if (cfg->verbose_level > 1)
printf ("SKIPPING SAFEPOINTS for the polling function icall\n");
return;
}
}
if (cfg->method->wrapper_type == MONO_WRAPPER_NATIVE_TO_MANAGED) {
if (cfg->verbose_level > 1)
printf ("SKIPPING SAFEPOINTS for native-to-managed wrappers.\n");
return;
}
if (cfg->method->wrapper_type == MONO_WRAPPER_OTHER) {
WrapperInfo *info = mono_marshal_get_wrapper_info (cfg->method);
if (info && (info->subtype == WRAPPER_SUBTYPE_INTERP_IN || info->subtype == WRAPPER_SUBTYPE_INTERP_LMF)) {
/* These wrappers shouldn't do any icalls */
if (cfg->verbose_level > 1)
printf ("SKIPPING SAFEPOINTS for interp-in wrappers.\n");
return;
}
}
if (cfg->method->wrapper_type == MONO_WRAPPER_WRITE_BARRIER) {
if (cfg->verbose_level > 1)
printf ("SKIPPING SAFEPOINTS for write barrier wrappers.\n");
return;
}
if (cfg->verbose_level > 1)
printf ("INSERTING SAFEPOINTS\n");
if (cfg->verbose_level > 2)
mono_print_code (cfg, "BEFORE SAFEPOINTS");
	/* If the method contains no calls (i.e. it is a leaf method) and no
	 * loops, we can skip the GC safepoint on method entry. */
gboolean requires_safepoint = cfg->has_calls;
for (bb = cfg->bb_entry->next_bb; bb; bb = bb->next_bb) {
if (bb->loop_body_start || (bb->flags & BB_EXCEPTION_HANDLER)) {
requires_safepoint = TRUE;
insert_safepoint (cfg, bb);
}
}
if (requires_safepoint)
insert_safepoint (cfg, cfg->bb_entry);
if (cfg->verbose_level > 2)
mono_print_code (cfg, "AFTER SAFEPOINTS");
}
static void
mono_insert_branches_between_bblocks (MonoCompile *cfg)
{
MonoBasicBlock *bb;
/* Add branches between non-consecutive bblocks */
for (bb = cfg->bb_entry; bb; bb = bb->next_bb) {
if (bb->last_ins && MONO_IS_COND_BRANCH_OP (bb->last_ins) &&
bb->last_ins->inst_false_bb && bb->next_bb != bb->last_ins->inst_false_bb) {
/* we are careful when inverting, since bugs like #59580
* could show up when dealing with NaNs.
*/
if (MONO_IS_COND_BRANCH_NOFP(bb->last_ins) && bb->next_bb == bb->last_ins->inst_true_bb) {
MonoBasicBlock *tmp = bb->last_ins->inst_true_bb;
bb->last_ins->inst_true_bb = bb->last_ins->inst_false_bb;
bb->last_ins->inst_false_bb = tmp;
bb->last_ins->opcode = mono_reverse_branch_op (bb->last_ins->opcode);
} else {
MonoInst *inst = (MonoInst *)mono_mempool_alloc0 (cfg->mempool, sizeof (MonoInst));
inst->opcode = OP_BR;
inst->inst_target_bb = bb->last_ins->inst_false_bb;
mono_bblock_add_inst (bb, inst);
}
}
}
if (cfg->verbose_level >= 4) {
for (bb = cfg->bb_entry; bb; bb = bb->next_bb) {
MonoInst *tree = bb->code;
g_print ("DUMP BLOCK %d:\n", bb->block_num);
if (!tree)
continue;
for (; tree; tree = tree->next) {
mono_print_ins_index (-1, tree);
}
}
}
/* FIXME: */
for (bb = cfg->bb_entry; bb; bb = bb->next_bb) {
bb->max_vreg = cfg->next_vreg;
}
}
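/*
 * remove_empty_finally_pass:
 *
 *   For llvm-only mode, detect finally clauses whose handler consists only of
 * OP_START_HANDLER/OP_ENDFINALLY, mark those clauses dead and nullify the
 * OP_CALL_HANDLER opcodes which branch to them.
 */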
static G_GNUC_UNUSED void
remove_empty_finally_pass (MonoCompile *cfg)
{
MonoBasicBlock *bb;
MonoInst *ins;
gboolean remove_call_handler = FALSE;
// FIXME: other configurations
if (!cfg->llvm_only)
return;
for (int i = 0; i < cfg->header->num_clauses; ++i) {
MonoExceptionClause *clause = &cfg->header->clauses [i];
if (clause->flags == MONO_EXCEPTION_CLAUSE_FINALLY) {
MonoInst *first, *last;
bb = cfg->cil_offset_to_bb [clause->handler_offset];
g_assert (bb);
/* Support only 1 bb for now */
first = mono_bb_first_inst (bb, 0);
if (first->opcode != OP_START_HANDLER)
break;
gboolean empty = TRUE;
while (TRUE) {
if (bb->out_count > 1) {
empty = FALSE;
break;
}
if (bb->flags & BB_HAS_SIDE_EFFECTS) {
empty = FALSE;
break;
}
if (bb->out_count == 0)
break;
if (mono_bb_last_inst (bb, 0)->opcode == OP_ENDFINALLY)
break;
bb = bb->out_bb [0];
}
if (empty) {
/*
* Avoid doing this in nested clauses, because it might mess up the EH code generated by
* the llvm backend.
*/
for (int j = 0; j < cfg->header->num_clauses; ++j) {
MonoExceptionClause *clause2 = &cfg->header->clauses [j];
if (i != j && MONO_OFFSET_IN_CLAUSE (clause2, clause->handler_offset))
empty = FALSE;
}
}
if (empty) {
/* Nullify OP_START_HANDLER */
NULLIFY_INS (first);
last = mono_bb_last_inst (bb, 0);
if (last->opcode == OP_ENDFINALLY)
NULLIFY_INS (last);
if (cfg->verbose_level > 1)
g_print ("removed empty finally clause %d.\n", i);
/* Mark the handler bb as not used anymore */
bb = cfg->cil_offset_to_bb [clause->handler_offset];
bb->flags &= ~BB_EXCEPTION_HANDLER;
cfg->clause_is_dead [i] = TRUE;
remove_call_handler = TRUE;
}
}
}
if (remove_call_handler) {
/* Remove OP_CALL_HANDLER opcodes pointing to the removed finally blocks */
for (bb = cfg->bb_entry; bb; bb = bb->next_bb) {
MONO_BB_FOR_EACH_INS (bb, ins) {
if (ins->opcode == OP_CALL_HANDLER && ins->inst_target_bb && !(ins->inst_target_bb->flags & BB_EXCEPTION_HANDLER)) {
NULLIFY_INS (ins);
for (MonoInst *ins2 = ins->next; ins2; ins2 = ins2->next)
NULLIFY_INS (ins2);
break;
}
}
}
}
}
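/*
 * init_backend:
 *
 *   Fill BACKEND with the capabilities of the current architecture, as advertised
 * by the MONO_ARCH_* defines.
 */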
static void
init_backend (MonoBackend *backend)
{
#ifdef MONO_ARCH_NEED_GOT_VAR
backend->need_got_var = 1;
#endif
#ifdef MONO_ARCH_HAVE_CARD_TABLE_WBARRIER
backend->have_card_table_wb = 1;
#endif
#ifdef MONO_ARCH_HAVE_OP_GENERIC_CLASS_INIT
backend->have_op_generic_class_init = 1;
#endif
#ifdef MONO_ARCH_EMULATE_MUL_DIV
backend->emulate_mul_div = 1;
#endif
#ifdef MONO_ARCH_EMULATE_DIV
backend->emulate_div = 1;
#endif
#if !defined(MONO_ARCH_NO_EMULATE_LONG_SHIFT_OPS)
backend->emulate_long_shift_opts = 1;
#endif
#ifdef MONO_ARCH_HAVE_OBJC_GET_SELECTOR
backend->have_objc_get_selector = 1;
#endif
#ifdef MONO_ARCH_HAVE_GENERALIZED_IMT_TRAMPOLINE
backend->have_generalized_imt_trampoline = 1;
#endif
#ifdef MONO_ARCH_GSHARED_SUPPORTED
backend->gshared_supported = 1;
#endif
if (MONO_ARCH_USE_FPSTACK)
backend->use_fpstack = 1;
// Does the ABI have a volatile non-parameter register, so tailcall
// can pass context to generics or interfaces?
backend->have_volatile_non_param_register = MONO_ARCH_HAVE_VOLATILE_NON_PARAM_REGISTER;
#ifdef MONO_ARCH_HAVE_OP_TAILCALL_MEMBASE
backend->have_op_tailcall_membase = 1;
#endif
#ifdef MONO_ARCH_HAVE_OP_TAILCALL_REG
backend->have_op_tailcall_reg = 1;
#endif
#ifndef MONO_ARCH_MONITOR_ENTER_ADJUSTMENT
backend->monitor_enter_adjustment = 1;
#else
backend->monitor_enter_adjustment = MONO_ARCH_MONITOR_ENTER_ADJUSTMENT;
#endif
#if defined(MONO_ARCH_ILP32)
backend->ilp32 = 1;
#endif
#ifdef MONO_ARCH_NEED_DIV_CHECK
backend->need_div_check = 1;
#endif
#ifdef NO_UNALIGNED_ACCESS
backend->no_unaligned_access = 1;
#endif
#ifdef MONO_ARCH_DYN_CALL_PARAM_AREA
backend->dyn_call_param_area = MONO_ARCH_DYN_CALL_PARAM_AREA;
#endif
#ifdef MONO_ARCH_NO_DIV_WITH_MUL
backend->disable_div_with_mul = 1;
#endif
#ifdef MONO_ARCH_EXPLICIT_NULL_CHECKS
backend->explicit_null_checks = 1;
#endif
#ifdef MONO_ARCH_HAVE_OPTIMIZED_DIV
backend->optimized_div = 1;
#endif
#ifdef MONO_ARCH_FORCE_FLOAT32
backend->force_float32 = 1;
#endif
}
static gboolean
is_simd_supported (MonoCompile *cfg)
{
#ifdef DISABLE_SIMD
return FALSE;
#endif
// FIXME: Clean this up
#ifdef TARGET_WASM
if ((mini_get_cpu_features (cfg) & MONO_CPU_WASM_SIMD) == 0)
return FALSE;
#else
if (cfg->llvm_only)
return FALSE;
#endif
return TRUE;
}
/* Determine how an rgctx is passed to a method */
MonoRgctxAccess
mini_get_rgctx_access_for_method (MonoMethod *method)
{
/* gshared dim methods use an mrgctx */
if (mini_method_is_default_method (method))
return MONO_RGCTX_ACCESS_MRGCTX;
if (mono_method_get_context (method)->method_inst)
return MONO_RGCTX_ACCESS_MRGCTX;
if (method->flags & METHOD_ATTRIBUTE_STATIC || m_class_is_valuetype (method->klass))
return MONO_RGCTX_ACCESS_VTABLE;
return MONO_RGCTX_ACCESS_THIS;
}
/*
* mini_method_compile:
* @method: the method to compile
* @opts: the optimization flags to use
* @flags: compilation flags
* @parts: debug flag
*
* Returns: a MonoCompile* pointer. Caller must check the exception_type
 * field in the returned struct to see if compilation succeeded.
*/
MonoCompile*
mini_method_compile (MonoMethod *method, guint32 opts, JitFlags flags, int parts, int aot_method_index)
{
MonoMethodHeader *header;
MonoMethodSignature *sig;
MonoCompile *cfg;
int i;
gboolean try_generic_shared, try_llvm = FALSE;
MonoMethod *method_to_compile, *method_to_register;
gboolean method_is_gshared = FALSE;
gboolean run_cctors = (flags & JIT_FLAG_RUN_CCTORS) ? 1 : 0;
gboolean compile_aot = (flags & JIT_FLAG_AOT) ? 1 : 0;
gboolean full_aot = (flags & JIT_FLAG_FULL_AOT) ? 1 : 0;
gboolean disable_direct_icalls = (flags & JIT_FLAG_NO_DIRECT_ICALLS) ? 1 : 0;
gboolean gsharedvt_method = FALSE;
gboolean interp_entry_only = FALSE;
#ifdef ENABLE_LLVM
gboolean llvm = (flags & JIT_FLAG_LLVM) ? 1 : 0;
#endif
static gboolean verbose_method_inited;
static char **verbose_method_names;
mono_atomic_inc_i32 (&mono_jit_stats.methods_compiled);
MONO_PROFILER_RAISE (jit_begin, (method));
if (MONO_METHOD_COMPILE_BEGIN_ENABLED ())
MONO_PROBE_METHOD_COMPILE_BEGIN (method);
gsharedvt_method = is_gsharedvt_method (method);
/*
* In AOT mode, method can be the following:
* - a gsharedvt method.
* - a method inflated with type parameters. This is for ref/partial sharing.
* - a method inflated with concrete types.
*/
if (compile_aot) {
if (is_open_method (method)) {
try_generic_shared = TRUE;
method_is_gshared = TRUE;
} else {
try_generic_shared = FALSE;
}
g_assert (opts & MONO_OPT_GSHARED);
} else {
try_generic_shared = mono_class_generic_sharing_enabled (method->klass) &&
(opts & MONO_OPT_GSHARED) && mono_method_is_generic_sharable_full (method, FALSE, FALSE, FALSE);
if (mini_is_gsharedvt_sharable_method (method)) {
/*
if (!mono_debug_count ())
try_generic_shared = FALSE;
*/
}
}
/*
if (try_generic_shared && !mono_debug_count ())
try_generic_shared = FALSE;
*/
if (opts & MONO_OPT_GSHARED) {
if (try_generic_shared)
mono_atomic_inc_i32 (&mono_stats.generics_sharable_methods);
else if (mono_method_is_generic_impl (method))
mono_atomic_inc_i32 (&mono_stats.generics_unsharable_methods);
}
#ifdef ENABLE_LLVM
try_llvm = mono_use_llvm || llvm;
#endif
#ifndef MONO_ARCH_FLOAT32_SUPPORTED
opts &= ~MONO_OPT_FLOAT32;
#endif
if (current_backend->force_float32)
/* Force float32 mode on newer platforms */
opts |= MONO_OPT_FLOAT32;
restart_compile:
if (method_is_gshared) {
method_to_compile = method;
} else {
if (try_generic_shared) {
ERROR_DECL (error);
method_to_compile = mini_get_shared_method_full (method, SHARE_MODE_NONE, error);
mono_error_assert_ok (error);
} else {
method_to_compile = method;
}
}
cfg = g_new0 (MonoCompile, 1);
cfg->method = method_to_compile;
cfg->mempool = mono_mempool_new ();
cfg->opt = opts;
cfg->run_cctors = run_cctors;
cfg->verbose_level = mini_verbose;
cfg->compile_aot = compile_aot;
cfg->full_aot = full_aot;
cfg->disable_omit_fp = mini_debug_options.disable_omit_fp;
cfg->skip_visibility = method->skip_visibility;
cfg->orig_method = method;
cfg->gen_seq_points = !mini_debug_options.no_seq_points_compact_data || mini_debug_options.gen_sdb_seq_points;
cfg->gen_sdb_seq_points = mini_debug_options.gen_sdb_seq_points;
cfg->llvm_only = (flags & JIT_FLAG_LLVM_ONLY) != 0;
cfg->interp = (flags & JIT_FLAG_INTERP) != 0;
cfg->use_current_cpu = (flags & JIT_FLAG_USE_CURRENT_CPU) != 0;
cfg->self_init = (flags & JIT_FLAG_SELF_INIT) != 0;
cfg->code_exec_only = (flags & JIT_FLAG_CODE_EXEC_ONLY) != 0;
cfg->backend = current_backend;
cfg->jit_mm = jit_mm_for_method (cfg->method);
cfg->mem_manager = m_method_get_mem_manager (cfg->method);
if (cfg->method->wrapper_type == MONO_WRAPPER_ALLOC) {
/* We can't have seq points inside gc critical regions */
cfg->gen_seq_points = FALSE;
cfg->gen_sdb_seq_points = FALSE;
}
/* coop requires loop detection to happen */
if (mini_safepoints_enabled ())
cfg->opt |= MONO_OPT_LOOP;
cfg->disable_llvm_implicit_null_checks = mini_debug_options.llvm_disable_implicit_null_checks;
if (cfg->backend->explicit_null_checks || mini_debug_options.explicit_null_checks) {
/* some platforms have null pages, so we can't SIGSEGV */
cfg->explicit_null_checks = TRUE;
cfg->disable_llvm_implicit_null_checks = TRUE;
} else {
cfg->explicit_null_checks = flags & JIT_FLAG_EXPLICIT_NULL_CHECKS;
}
cfg->soft_breakpoints = mini_debug_options.soft_breakpoints;
cfg->check_pinvoke_callconv = mini_debug_options.check_pinvoke_callconv;
cfg->disable_direct_icalls = disable_direct_icalls;
cfg->direct_pinvoke = (flags & JIT_FLAG_DIRECT_PINVOKE) != 0;
cfg->interp_entry_only = interp_entry_only;
if (try_generic_shared)
cfg->gshared = TRUE;
if (cfg->gshared)
cfg->rgctx_access = mini_get_rgctx_access_for_method (cfg->method);
cfg->compile_llvm = try_llvm;
cfg->token_info_hash = g_hash_table_new (NULL, NULL);
if (cfg->compile_aot)
cfg->method_index = aot_method_index;
if (cfg->compile_llvm)
cfg->explicit_null_checks = TRUE;
if (cfg->explicit_null_checks && method->wrapper_type == MONO_WRAPPER_OTHER &&
(mono_marshal_get_wrapper_info (method)->subtype == WRAPPER_SUBTYPE_GSHAREDVT_IN_SIG ||
mono_marshal_get_wrapper_info (method)->subtype == WRAPPER_SUBTYPE_GSHAREDVT_OUT_SIG)) {
/* These wrappers contain loads/stores which can't fail */
cfg->explicit_null_checks = FALSE;
}
/*
if (!mono_debug_count ())
cfg->opt &= ~MONO_OPT_FLOAT32;
*/
if (!is_simd_supported (cfg))
cfg->opt &= ~MONO_OPT_SIMD;
cfg->r4fp = (cfg->opt & MONO_OPT_FLOAT32) ? 1 : 0;
cfg->r4_stack_type = cfg->r4fp ? STACK_R4 : STACK_R8;
if (cfg->gen_seq_points)
cfg->seq_points = g_ptr_array_new ();
cfg->error = (MonoError*)&cfg->error_value;
error_init (cfg->error);
if (cfg->compile_aot && !try_generic_shared && (method->is_generic || mono_class_is_gtd (method->klass) || method_is_gshared)) {
cfg->exception_type = MONO_EXCEPTION_GENERIC_SHARING_FAILED;
return cfg;
}
if (cfg->gshared && (gsharedvt_method || mini_is_gsharedvt_sharable_method (method))) {
MonoMethodInflated *inflated;
MonoGenericContext *context;
if (gsharedvt_method) {
g_assert (method->is_inflated);
inflated = (MonoMethodInflated*)method;
context = &inflated->context;
/* We are compiling a gsharedvt method directly */
g_assert (compile_aot);
} else {
g_assert (method_to_compile->is_inflated);
inflated = (MonoMethodInflated*)method_to_compile;
context = &inflated->context;
}
mini_init_gsctx (cfg->mempool, context, &cfg->gsctx);
cfg->gsctx_context = context;
cfg->gsharedvt = TRUE;
if (!cfg->llvm_only) {
cfg->disable_llvm = TRUE;
cfg->exception_message = g_strdup ("gsharedvt");
}
}
if (cfg->gshared) {
method_to_register = method_to_compile;
} else {
g_assert (method == method_to_compile);
method_to_register = method;
}
cfg->method_to_register = method_to_register;
ERROR_DECL (err);
sig = mono_method_signature_checked (cfg->method, err);
if (!sig) {
cfg->exception_type = MONO_EXCEPTION_TYPE_LOAD;
cfg->exception_message = g_strdup (mono_error_get_message (err));
mono_error_cleanup (err);
if (MONO_METHOD_COMPILE_END_ENABLED ())
MONO_PROBE_METHOD_COMPILE_END (method, FALSE);
return cfg;
}
header = cfg->header = mono_method_get_header_checked (cfg->method, cfg->error);
if (!header) {
mono_cfg_set_exception (cfg, MONO_EXCEPTION_MONO_ERROR);
if (MONO_METHOD_COMPILE_END_ENABLED ())
MONO_PROBE_METHOD_COMPILE_END (method, FALSE);
return cfg;
}
if (cfg->llvm_only && cfg->interp && !cfg->interp_entry_only && header->num_clauses) {
cfg->deopt = TRUE;
/* Can't reconstruct inlined state */
cfg->disable_inline = TRUE;
}
#ifdef ENABLE_LLVM
{
static gboolean inited;
if (!inited)
inited = TRUE;
/*
* Check for methods which cannot be compiled by LLVM early, to avoid
* the extra compilation pass.
*/
if (COMPILE_LLVM (cfg)) {
mono_llvm_check_method_supported (cfg);
if (cfg->disable_llvm) {
if (cfg->verbose_level > 0) {
//nm = mono_method_full_name (cfg->method, TRUE);
printf ("LLVM failed for '%s.%s': %s\n", m_class_get_name (method->klass), method->name, cfg->exception_message);
//g_free (nm);
}
if (cfg->llvm_only) {
g_free (cfg->exception_message);
cfg->disable_aot = TRUE;
return cfg;
}
mono_destroy_compile (cfg);
try_llvm = FALSE;
goto restart_compile;
}
}
}
#endif
cfg->prof_flags = mono_profiler_get_call_instrumentation_flags (cfg->method);
cfg->prof_coverage = mono_profiler_coverage_instrumentation_enabled (cfg->method);
gboolean trace = mono_jit_trace_calls != NULL && mono_trace_eval (cfg->method);
if (trace)
cfg->prof_flags = (MonoProfilerCallInstrumentationFlags)(
MONO_PROFILER_CALL_INSTRUMENTATION_ENTER | MONO_PROFILER_CALL_INSTRUMENTATION_ENTER_CONTEXT |
MONO_PROFILER_CALL_INSTRUMENTATION_LEAVE | MONO_PROFILER_CALL_INSTRUMENTATION_LEAVE_CONTEXT);
/* The debugger has no liveness information, so avoid sharing registers/stack slots */
if (mini_debug_options.mdb_optimizations || MONO_CFG_PROFILE_CALL_CONTEXT (cfg)) {
cfg->disable_reuse_registers = TRUE;
cfg->disable_reuse_stack_slots = TRUE;
/*
		 * This decreases the chance that the debugger will read registers/stack slots which are
* not yet initialized.
*/
cfg->disable_initlocals_opt = TRUE;
cfg->extend_live_ranges = TRUE;
/* The debugger needs all locals to be on the stack or in a global register */
cfg->disable_vreg_to_lvreg = TRUE;
/* Don't remove unused variables when running inside the debugger since the user
* may still want to view them. */
cfg->disable_deadce_vars = TRUE;
cfg->opt &= ~MONO_OPT_DEADCE;
cfg->opt &= ~MONO_OPT_INLINE;
cfg->opt &= ~MONO_OPT_COPYPROP;
cfg->opt &= ~MONO_OPT_CONSPROP;
/* This is needed for the soft debugger, which doesn't like code after the epilog */
cfg->disable_out_of_line_bblocks = TRUE;
}
mini_gc_init_cfg (cfg);
if (method->wrapper_type == MONO_WRAPPER_OTHER) {
WrapperInfo *info = mono_marshal_get_wrapper_info (method);
if ((info && (info->subtype == WRAPPER_SUBTYPE_GSHAREDVT_IN_SIG || info->subtype == WRAPPER_SUBTYPE_GSHAREDVT_OUT_SIG))) {
cfg->disable_gc_safe_points = TRUE;
/* This is safe, these wrappers only store to the stack */
cfg->gen_write_barriers = FALSE;
}
}
if (COMPILE_LLVM (cfg)) {
cfg->opt |= MONO_OPT_ABCREM;
}
if (!verbose_method_inited) {
char *env = g_getenv ("MONO_VERBOSE_METHOD");
if (env != NULL)
verbose_method_names = g_strsplit (env, ";", -1);
verbose_method_inited = TRUE;
}
if (verbose_method_names) {
int i;
for (i = 0; verbose_method_names [i] != NULL; i++){
const char *name = verbose_method_names [i];
if ((strchr (name, '.') > name) || strchr (name, ':') || strchr (name, '*')) {
MonoMethodDesc *desc;
desc = mono_method_desc_new (name, TRUE);
if (desc) {
if (mono_method_desc_full_match (desc, cfg->method)) {
cfg->verbose_level = 4;
}
mono_method_desc_free (desc);
}
} else {
if (strcmp (cfg->method->name, name) == 0)
cfg->verbose_level = 4;
}
}
}
cfg->intvars = (guint16 *)mono_mempool_alloc0 (cfg->mempool, sizeof (guint16) * STACK_MAX * header->max_stack);
if (cfg->verbose_level > 0) {
char *method_name;
method_name = mono_method_get_full_name (method);
g_print ("converting %s%s%s%smethod %s\n", COMPILE_LLVM (cfg) ? "llvm " : "", cfg->gsharedvt ? "gsharedvt " : "", (cfg->gshared && !cfg->gsharedvt) ? "gshared " : "", cfg->interp_entry_only ? "interp only " : "", method_name);
/*
if (COMPILE_LLVM (cfg))
g_print ("converting llvm method %s\n", method_name = mono_method_full_name (method, TRUE));
else if (cfg->gsharedvt)
g_print ("converting gsharedvt method %s\n", method_name = mono_method_full_name (method_to_compile, TRUE));
else if (cfg->gshared)
g_print ("converting shared method %s\n", method_name = mono_method_full_name (method_to_compile, TRUE));
else
g_print ("converting method %s\n", method_name = mono_method_full_name (method, TRUE));
*/
g_free (method_name);
}
if (cfg->opt & MONO_OPT_ABCREM)
cfg->opt |= MONO_OPT_SSA;
cfg->rs = mono_regstate_new ();
cfg->next_vreg = cfg->rs->next_vreg;
/* FIXME: Fix SSA to handle branches inside bblocks */
if (cfg->opt & MONO_OPT_SSA)
cfg->enable_extended_bblocks = FALSE;
/*
* FIXME: This confuses liveness analysis because variables which are assigned after
* a branch inside a bblock become part of the kill set, even though the assignment
* might not get executed. This causes the optimize_initlocals pass to delete some
* assignments which are needed.
* Also, the mono_if_conversion pass needs to be modified to recognize the code
* created by this.
*/
//cfg->enable_extended_bblocks = TRUE;
/*
* create MonoInst* which represents arguments and local variables
*/
mono_compile_create_vars (cfg);
mono_cfg_dump_create_context (cfg);
mono_cfg_dump_begin_group (cfg);
MONO_TIME_TRACK (mono_jit_stats.jit_method_to_ir, i = mono_method_to_ir (cfg, method_to_compile, NULL, NULL, NULL, NULL, 0, FALSE));
mono_cfg_dump_ir (cfg, "method-to-ir");
if (cfg->gdump_ctx != NULL) {
/* workaround for graph visualization, as it doesn't handle empty basic blocks properly */
mono_insert_nop_in_empty_bb (cfg);
mono_cfg_dump_ir (cfg, "mono_insert_nop_in_empty_bb");
}
if (i < 0) {
if (try_generic_shared && cfg->exception_type == MONO_EXCEPTION_GENERIC_SHARING_FAILED) {
if (compile_aot) {
if (MONO_METHOD_COMPILE_END_ENABLED ())
MONO_PROBE_METHOD_COMPILE_END (method, FALSE);
return cfg;
}
mono_destroy_compile (cfg);
try_generic_shared = FALSE;
goto restart_compile;
}
g_assert (cfg->exception_type != MONO_EXCEPTION_GENERIC_SHARING_FAILED);
if (MONO_METHOD_COMPILE_END_ENABLED ())
MONO_PROBE_METHOD_COMPILE_END (method, FALSE);
/* cfg contains the details of the failure, so let the caller cleanup */
return cfg;
}
cfg->stat_basic_blocks += cfg->num_bblocks;
if (COMPILE_LLVM (cfg)) {
MonoInst *ins;
/* The IR has to be in SSA form for LLVM */
cfg->opt |= MONO_OPT_SSA;
// FIXME:
if (cfg->ret) {
// Allow SSA on the result value
if (!cfg->interp_entry_only)
cfg->ret->flags &= ~MONO_INST_VOLATILE;
// Add an explicit return instruction referencing the return value
MONO_INST_NEW (cfg, ins, OP_SETRET);
ins->sreg1 = cfg->ret->dreg;
MONO_ADD_INS (cfg->bb_exit, ins);
}
cfg->opt &= ~MONO_OPT_LINEARS;
/* FIXME: */
cfg->opt &= ~MONO_OPT_BRANCH;
}
cfg->after_method_to_ir = TRUE;
	/* TODO: remove this code once we have verified that liveness for try/catch blocks
* works perfectly
*/
/*
* Currently, this can't be commented out since exception blocks are not
* processed during liveness analysis.
* It is also needed, because otherwise the local optimization passes would
* delete assignments in cases like this:
* r1 <- 1
* <something which throws>
* r1 <- 2
* This also allows SSA to be run on methods containing exception clauses, since
* SSA will ignore variables marked VOLATILE.
*/
MONO_TIME_TRACK (mono_jit_stats.jit_liveness_handle_exception_clauses, mono_liveness_handle_exception_clauses (cfg));
mono_cfg_dump_ir (cfg, "liveness_handle_exception_clauses");
MONO_TIME_TRACK (mono_jit_stats.jit_handle_out_of_line_bblock, mono_handle_out_of_line_bblock (cfg));
mono_cfg_dump_ir (cfg, "handle_out_of_line_bblock");
/*g_print ("numblocks = %d\n", cfg->num_bblocks);*/
if (!COMPILE_LLVM (cfg)) {
MONO_TIME_TRACK (mono_jit_stats.jit_decompose_long_opts, mono_decompose_long_opts (cfg));
mono_cfg_dump_ir (cfg, "decompose_long_opts");
}
/* Should be done before branch opts */
if (cfg->opt & (MONO_OPT_CONSPROP | MONO_OPT_COPYPROP)) {
MONO_TIME_TRACK (mono_jit_stats.jit_local_cprop, mono_local_cprop (cfg));
mono_cfg_dump_ir (cfg, "local_cprop");
}
if (cfg->flags & MONO_CFG_HAS_TYPE_CHECK) {
MONO_TIME_TRACK (mono_jit_stats.jit_decompose_typechecks, mono_decompose_typechecks (cfg));
if (cfg->gdump_ctx != NULL) {
/* workaround for graph visualization, as it doesn't handle empty basic blocks properly */
mono_insert_nop_in_empty_bb (cfg);
}
mono_cfg_dump_ir (cfg, "decompose_typechecks");
}
/*
* Should be done after cprop which can do strength reduction on
* some of these ops, after propagating immediates.
*/
if (cfg->has_emulated_ops) {
MONO_TIME_TRACK (mono_jit_stats.jit_local_emulate_ops, mono_local_emulate_ops (cfg));
mono_cfg_dump_ir (cfg, "local_emulate_ops");
}
if (cfg->opt & MONO_OPT_BRANCH) {
MONO_TIME_TRACK (mono_jit_stats.jit_optimize_branches, mono_optimize_branches (cfg));
mono_cfg_dump_ir (cfg, "optimize_branches");
}
/* This must be done _before_ global reg alloc and _after_ decompose */
MONO_TIME_TRACK (mono_jit_stats.jit_handle_global_vregs, mono_handle_global_vregs (cfg));
mono_cfg_dump_ir (cfg, "handle_global_vregs");
if (cfg->opt & MONO_OPT_DEADCE) {
MONO_TIME_TRACK (mono_jit_stats.jit_local_deadce, mono_local_deadce (cfg));
mono_cfg_dump_ir (cfg, "local_deadce");
}
if (cfg->opt & MONO_OPT_ALIAS_ANALYSIS) {
MONO_TIME_TRACK (mono_jit_stats.jit_local_alias_analysis, mono_local_alias_analysis (cfg));
mono_cfg_dump_ir (cfg, "local_alias_analysis");
}
/* Disable this for LLVM to make the IR easier to handle */
if (!COMPILE_LLVM (cfg)) {
MONO_TIME_TRACK (mono_jit_stats.jit_if_conversion, mono_if_conversion (cfg));
mono_cfg_dump_ir (cfg, "if_conversion");
}
remove_empty_finally_pass (cfg);
if (cfg->llvm_only && cfg->interp && !cfg->method->wrapper_type && !interp_entry_only && !cfg->deopt) {
/* Disable llvm if there are still finally clauses left */
for (int i = 0; i < cfg->header->num_clauses; ++i) {
MonoExceptionClause *clause = &header->clauses [i];
if (clause->flags == MONO_EXCEPTION_CLAUSE_FINALLY && !cfg->clause_is_dead [i]) {
cfg->exception_message = g_strdup ("finally clause.");
cfg->disable_llvm = TRUE;
break;
}
}
}
mono_threads_safepoint ();
MONO_TIME_TRACK (mono_jit_stats.jit_bb_ordering, mono_bb_ordering (cfg));
mono_cfg_dump_ir (cfg, "bb_ordering");
if (((cfg->num_varinfo > 2000) || (cfg->num_bblocks > 1000)) && !cfg->compile_aot) {
/*
* we disable some optimizations if there are too many variables,
* because JIT time may otherwise grow too large. The actual threshold needs
* to be tweaked and eventually the non-linear algorithms should be fixed.
*/
cfg->opt &= ~ (MONO_OPT_LINEARS | MONO_OPT_COPYPROP | MONO_OPT_CONSPROP);
cfg->disable_ssa = TRUE;
}
if (cfg->num_varinfo > 10000 && !cfg->llvm_only)
/* Disable SSA (and hence LLVM, which requires SSA) for overly complex methods */
cfg->disable_ssa = TRUE;
if (cfg->opt & MONO_OPT_LOOP) {
MONO_TIME_TRACK (mono_jit_stats.jit_compile_dominator_info, mono_compile_dominator_info (cfg, MONO_COMP_DOM | MONO_COMP_IDOM));
MONO_TIME_TRACK (mono_jit_stats.jit_compute_natural_loops, mono_compute_natural_loops (cfg));
}
if (mono_threads_are_safepoints_enabled ()) {
MONO_TIME_TRACK (mono_jit_stats.jit_insert_safepoints, insert_safepoints (cfg));
mono_cfg_dump_ir (cfg, "insert_safepoints");
}
/* after method_to_ir */
if (parts == 1) {
if (MONO_METHOD_COMPILE_END_ENABLED ())
MONO_PROBE_METHOD_COMPILE_END (method, TRUE);
return cfg;
}
/*
if (header->num_clauses)
cfg->disable_ssa = TRUE;
*/
//#define DEBUGSSA "logic_run"
//#define DEBUGSSA_CLASS "Tests"
#ifdef DEBUGSSA
if (!cfg->disable_ssa) {
mono_local_cprop (cfg);
#ifndef DISABLE_SSA
mono_ssa_compute (cfg);
#endif
}
#else
if (cfg->opt & MONO_OPT_SSA) {
if (!(cfg->comp_done & MONO_COMP_SSA) && !cfg->disable_ssa) {
#ifndef DISABLE_SSA
MONO_TIME_TRACK (mono_jit_stats.jit_ssa_compute, mono_ssa_compute (cfg));
mono_cfg_dump_ir (cfg, "ssa_compute");
#endif
if (cfg->verbose_level >= 2) {
print_dfn (cfg);
}
}
}
#endif
/* after SSA translation */
if (parts == 2) {
if (MONO_METHOD_COMPILE_END_ENABLED ())
MONO_PROBE_METHOD_COMPILE_END (method, TRUE);
return cfg;
}
if ((cfg->opt & MONO_OPT_CONSPROP) || (cfg->opt & MONO_OPT_COPYPROP)) {
if (cfg->comp_done & MONO_COMP_SSA && !COMPILE_LLVM (cfg)) {
#ifndef DISABLE_SSA
MONO_TIME_TRACK (mono_jit_stats.jit_ssa_cprop, mono_ssa_cprop (cfg));
mono_cfg_dump_ir (cfg, "ssa_cprop");
#endif
}
}
#ifndef DISABLE_SSA
if (cfg->comp_done & MONO_COMP_SSA && !COMPILE_LLVM (cfg)) {
//mono_ssa_strength_reduction (cfg);
if (cfg->opt & MONO_OPT_DEADCE) {
MONO_TIME_TRACK (mono_jit_stats.jit_ssa_deadce, mono_ssa_deadce (cfg));
mono_cfg_dump_ir (cfg, "ssa_deadce");
}
if ((cfg->flags & (MONO_CFG_HAS_LDELEMA|MONO_CFG_HAS_CHECK_THIS)) && (cfg->opt & MONO_OPT_ABCREM)) {
MONO_TIME_TRACK (mono_jit_stats.jit_perform_abc_removal, mono_perform_abc_removal (cfg));
mono_cfg_dump_ir (cfg, "perform_abc_removal");
}
MONO_TIME_TRACK (mono_jit_stats.jit_ssa_remove, mono_ssa_remove (cfg));
mono_cfg_dump_ir (cfg, "ssa_remove");
MONO_TIME_TRACK (mono_jit_stats.jit_local_cprop2, mono_local_cprop (cfg));
mono_cfg_dump_ir (cfg, "local_cprop2");
MONO_TIME_TRACK (mono_jit_stats.jit_handle_global_vregs2, mono_handle_global_vregs (cfg));
mono_cfg_dump_ir (cfg, "handle_global_vregs2");
if (cfg->opt & MONO_OPT_DEADCE) {
MONO_TIME_TRACK (mono_jit_stats.jit_local_deadce2, mono_local_deadce (cfg));
mono_cfg_dump_ir (cfg, "local_deadce2");
}
if (cfg->opt & MONO_OPT_BRANCH) {
MONO_TIME_TRACK (mono_jit_stats.jit_optimize_branches2, mono_optimize_branches (cfg));
mono_cfg_dump_ir (cfg, "optimize_branches2");
}
}
#endif
if (cfg->comp_done & MONO_COMP_SSA && COMPILE_LLVM (cfg)) {
mono_ssa_loop_invariant_code_motion (cfg);
mono_cfg_dump_ir (cfg, "loop_invariant_code_motion");
/* This removes MONO_INST_FAULT flags too so perform it unconditionally */
if (cfg->opt & MONO_OPT_ABCREM) {
mono_perform_abc_removal (cfg);
mono_cfg_dump_ir (cfg, "abc_removal");
}
}
/* after SSA removal */
if (parts == 3) {
if (MONO_METHOD_COMPILE_END_ENABLED ())
MONO_PROBE_METHOD_COMPILE_END (method, TRUE);
return cfg;
}
if (cfg->llvm_only && cfg->gsharedvt)
mono_ssa_remove_gsharedvt (cfg);
#ifdef MONO_ARCH_SOFT_FLOAT_FALLBACK
if (COMPILE_SOFT_FLOAT (cfg))
mono_decompose_soft_float (cfg);
#endif
MONO_TIME_TRACK (mono_jit_stats.jit_decompose_vtype_opts, mono_decompose_vtype_opts (cfg));
if (cfg->flags & MONO_CFG_NEEDS_DECOMPOSE) {
MONO_TIME_TRACK (mono_jit_stats.jit_decompose_array_access_opts, mono_decompose_array_access_opts (cfg));
mono_cfg_dump_ir (cfg, "decompose_array_access_opts");
}
if (cfg->got_var) {
#ifndef MONO_ARCH_GOT_REG
GList *regs;
#endif
int got_reg;
g_assert (cfg->got_var_allocated);
/*
* Always allocate the GOT var to a register, because keeping it
* in memory will increase the number of live temporaries in some
* code created by inssel.brg, leading to the well-known spills+
* branches problem. Testcase: mcs crash in
* System.MonoCustomAttrs:GetCustomAttributes.
*/
#ifdef MONO_ARCH_GOT_REG
got_reg = MONO_ARCH_GOT_REG;
#else
regs = mono_arch_get_global_int_regs (cfg);
g_assert (regs);
got_reg = GPOINTER_TO_INT (regs->data);
g_list_free (regs);
#endif
cfg->got_var->opcode = OP_REGVAR;
cfg->got_var->dreg = got_reg;
cfg->used_int_regs |= 1LL << cfg->got_var->dreg;
}
/*
* Have to call this again to process variables added since the first call.
*/
MONO_TIME_TRACK(mono_jit_stats.jit_liveness_handle_exception_clauses2, mono_liveness_handle_exception_clauses (cfg));
if (cfg->opt & MONO_OPT_LINEARS) {
GList *vars, *regs, *l;
/* FIXME: maybe we can avoid computing liveness here if it was already computed? */
cfg->comp_done &= ~MONO_COMP_LIVENESS;
if (!(cfg->comp_done & MONO_COMP_LIVENESS))
MONO_TIME_TRACK (mono_jit_stats.jit_analyze_liveness, mono_analyze_liveness (cfg));
if ((vars = mono_arch_get_allocatable_int_vars (cfg))) {
regs = mono_arch_get_global_int_regs (cfg);
/* Remove the reg reserved for holding the GOT address */
if (cfg->got_var) {
for (l = regs; l; l = l->next) {
if (GPOINTER_TO_UINT (l->data) == cfg->got_var->dreg) {
regs = g_list_delete_link (regs, l);
break;
}
}
}
MONO_TIME_TRACK (mono_jit_stats.jit_linear_scan, mono_linear_scan (cfg, vars, regs, &cfg->used_int_regs));
mono_cfg_dump_ir (cfg, "linear_scan");
}
}
//mono_print_code (cfg, "");
//print_dfn (cfg);
/* variables are allocated after decompose, since decompose could create temps */
if (!COMPILE_LLVM (cfg)) {
MONO_TIME_TRACK (mono_jit_stats.jit_arch_allocate_vars, mono_arch_allocate_vars (cfg));
mono_cfg_dump_ir (cfg, "arch_allocate_vars");
if (cfg->exception_type)
return cfg;
}
if (cfg->gsharedvt)
mono_allocate_gsharedvt_vars (cfg);
if (!COMPILE_LLVM (cfg)) {
gboolean need_local_opts;
MONO_TIME_TRACK (mono_jit_stats.jit_spill_global_vars, mono_spill_global_vars (cfg, &need_local_opts));
mono_cfg_dump_ir (cfg, "spill_global_vars");
if (need_local_opts || cfg->compile_aot) {
/* To optimize code created by spill_global_vars */
MONO_TIME_TRACK (mono_jit_stats.jit_local_cprop3, mono_local_cprop (cfg));
if (cfg->opt & MONO_OPT_DEADCE)
MONO_TIME_TRACK (mono_jit_stats.jit_local_deadce3, mono_local_deadce (cfg));
mono_cfg_dump_ir (cfg, "needs_local_opts");
}
}
mono_insert_branches_between_bblocks (cfg);
if (COMPILE_LLVM (cfg)) {
#ifdef ENABLE_LLVM
char *nm;
/* The IR has to be in SSA form for LLVM */
if (!(cfg->comp_done & MONO_COMP_SSA)) {
cfg->exception_message = g_strdup ("SSA disabled.");
cfg->disable_llvm = TRUE;
}
if (cfg->flags & MONO_CFG_NEEDS_DECOMPOSE)
mono_decompose_array_access_opts (cfg);
if (!cfg->disable_llvm)
mono_llvm_emit_method (cfg);
if (cfg->disable_llvm) {
if (cfg->verbose_level > 0) {
//nm = mono_method_full_name (cfg->method, TRUE);
printf ("LLVM failed for '%s.%s': %s\n", m_class_get_name (method->klass), method->name, cfg->exception_message);
//g_free (nm);
}
if (cfg->llvm_only && cfg->interp && !interp_entry_only) {
// If interp support is enabled, restart compilation, generating interp entry code only
interp_entry_only = TRUE;
mono_destroy_compile (cfg);
goto restart_compile;
}
if (cfg->llvm_only) {
cfg->disable_aot = TRUE;
return cfg;
}
mono_destroy_compile (cfg);
try_llvm = FALSE;
goto restart_compile;
}
if (cfg->verbose_level > 0 && !cfg->compile_aot) {
nm = mono_method_get_full_name (cfg->method);
g_print ("LLVM Method %s emitted at %p to %p (code length %d)\n",
nm,
cfg->native_code, cfg->native_code + cfg->code_len, cfg->code_len);
g_free (nm);
}
#endif
} else {
MONO_TIME_TRACK (mono_jit_stats.jit_codegen, mono_codegen (cfg));
mono_cfg_dump_ir (cfg, "codegen");
if (cfg->exception_type)
return cfg;
}
if (COMPILE_LLVM (cfg))
mono_atomic_inc_i32 (&mono_jit_stats.methods_with_llvm);
else
mono_atomic_inc_i32 (&mono_jit_stats.methods_without_llvm);
MONO_TIME_TRACK (mono_jit_stats.jit_create_jit_info, cfg->jit_info = create_jit_info (cfg, method_to_compile));
if (cfg->extend_live_ranges) {
/* Extend live ranges to cover the whole method */
for (i = 0; i < cfg->num_varinfo; ++i)
MONO_VARINFO (cfg, i)->live_range_end = cfg->code_len;
}
MONO_TIME_TRACK (mono_jit_stats.jit_gc_create_gc_map, mini_gc_create_gc_map (cfg));
MONO_TIME_TRACK (mono_jit_stats.jit_save_seq_point_info, mono_save_seq_point_info (cfg, cfg->jit_info));
if (!cfg->compile_aot)
mono_lldb_save_method_info (cfg);
if (cfg->verbose_level >= 2) {
char *id = mono_method_full_name (cfg->method, TRUE);
g_print ("\n*** ASM for %s ***\n", id);
mono_disassemble_code (cfg, cfg->native_code, cfg->code_len, id + 3);
g_print ("***\n\n");
g_free (id);
}
if (!cfg->compile_aot && !(flags & JIT_FLAG_DISCARD_RESULTS)) {
mono_jit_info_table_add (cfg->jit_info);
if (cfg->method->dynamic) {
MonoJitMemoryManager *jit_mm = (MonoJitMemoryManager*)cfg->jit_mm;
MonoJitDynamicMethodInfo *res;
jit_mm_lock (jit_mm);
g_assert (jit_mm->dynamic_code_hash);
res = (MonoJitDynamicMethodInfo *)g_hash_table_lookup (jit_mm->dynamic_code_hash, method);
jit_mm_unlock (jit_mm);
g_assert (res);
res->ji = cfg->jit_info;
}
mono_postprocess_patches_after_ji_publish (cfg);
}
#if 0
if (cfg->gsharedvt)
printf ("GSHAREDVT: %s\n", mono_method_full_name (cfg->method, TRUE));
#endif
/* collect statistics */
#ifndef DISABLE_PERFCOUNTERS
mono_atomic_inc_i32 (&mono_perfcounters->jit_methods);
mono_atomic_fetch_add_i32 (&mono_perfcounters->jit_bytes, header->code_size);
#endif
gint32 code_size_ratio = cfg->code_len;
mono_atomic_fetch_add_i32 (&mono_jit_stats.allocated_code_size, code_size_ratio);
mono_atomic_fetch_add_i32 (&mono_jit_stats.native_code_size, code_size_ratio);
/* FIXME: use an explicit function to read booleans */
if ((gboolean)mono_atomic_load_i32 ((gint32*)&mono_jit_stats.enabled)) {
if (code_size_ratio > mono_atomic_load_i32 (&mono_jit_stats.biggest_method_size)) {
mono_atomic_store_i32 (&mono_jit_stats.biggest_method_size, code_size_ratio);
char *biggest_method = g_strdup_printf ("%s::%s", m_class_get_name (method->klass), method->name);
biggest_method = (char*)mono_atomic_xchg_ptr ((gpointer*)&mono_jit_stats.biggest_method, biggest_method);
g_free (biggest_method);
}
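/*
 * Illustrative arithmetic (numbers are hypothetical, not from the sources):
 * 60 bytes of IL compiled into 300 bytes of native code gives
 * code_size_ratio = (300 * 100) / 60 = 500, i.e. 500 bytes of native code
 * per 100 bytes of IL.
 */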
code_size_ratio = (code_size_ratio * 100) / header->code_size;
if (code_size_ratio > mono_atomic_load_i32 (&mono_jit_stats.max_code_size_ratio)) {
mono_atomic_store_i32 (&mono_jit_stats.max_code_size_ratio, code_size_ratio);
char *max_ratio_method = g_strdup_printf ("%s::%s", m_class_get_name (method->klass), method->name);
max_ratio_method = (char*)mono_atomic_xchg_ptr ((gpointer*)&mono_jit_stats.max_ratio_method, max_ratio_method);
g_free (max_ratio_method);
}
}
if (MONO_METHOD_COMPILE_END_ENABLED ())
MONO_PROBE_METHOD_COMPILE_END (method, TRUE);
mono_cfg_dump_close_group (cfg);
return cfg;
}
gboolean
mini_class_has_reference_variant_generic_argument (MonoCompile *cfg, MonoClass *klass, int context_used)
{
int i;
MonoGenericContainer *container;
MonoGenericInst *ginst;
if (mono_class_is_ginst (klass)) {
container = mono_class_get_generic_container (mono_class_get_generic_class (klass)->container_class);
ginst = mono_class_get_generic_class (klass)->context.class_inst;
} else if (mono_class_is_gtd (klass) && context_used) {
container = mono_class_get_generic_container (klass);
ginst = container->context.class_inst;
} else {
return FALSE;
}
for (i = 0; i < container->type_argc; ++i) {
MonoType *type;
if (!(mono_generic_container_get_param_info (container, i)->flags & (MONO_GEN_PARAM_VARIANT|MONO_GEN_PARAM_COVARIANT)))
continue;
type = ginst->type_argv [i];
if (mini_type_is_reference (type))
return TRUE;
}
return FALSE;
}
void
mono_cfg_add_try_hole (MonoCompile *cfg, MonoExceptionClause *clause, guint8 *start, MonoBasicBlock *bb)
{
TryBlockHole *hole = (TryBlockHole *)mono_mempool_alloc (cfg->mempool, sizeof (TryBlockHole));
hole->clause = clause;
hole->start_offset = start - cfg->native_code;
hole->basic_block = bb;
cfg->try_block_holes = g_slist_append_mempool (cfg->mempool, cfg->try_block_holes, hole);
}
void
mono_cfg_set_exception (MonoCompile *cfg, MonoExceptionType type)
{
cfg->exception_type = type;
}
/* Assumes ownership of the MSG argument */
void
mono_cfg_set_exception_invalid_program (MonoCompile *cfg, char *msg)
{
mono_cfg_set_exception (cfg, MONO_EXCEPTION_MONO_ERROR);
mono_error_set_generic_error (cfg->error, "System", "InvalidProgramException", "%s", msg);
}
#endif /* DISABLE_JIT */
gint64 mono_time_track_start ()
{
return mono_100ns_ticks ();
}
/*
* mono_time_track_end:
*
* Uses UnlockedAdd64 () to add the elapsed time since \param start to \param time.
*/
void mono_time_track_end (gint64 *time, gint64 start)
{
UnlockedAdd64 (time, mono_100ns_ticks () - start);
}
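/*
 * Usage sketch (illustrative): a JIT phase is bracketed by these two calls,
 * accumulating 100ns ticks into one of the mono_jit_stats counters:
 *
 *   gint64 start = mono_time_track_start ();
 *   mono_codegen (cfg);
 *   mono_time_track_end (&mono_jit_stats.jit_codegen, start);
 *
 * This is essentially the pattern the MONO_TIME_TRACK macro wraps.
 */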
/*
* mono_update_jit_stats:
*
* Only call this function in locked environments to avoid data races.
*/
MONO_NO_SANITIZE_THREAD
void
mono_update_jit_stats (MonoCompile *cfg)
{
mono_jit_stats.allocate_var += cfg->stat_allocate_var;
mono_jit_stats.locals_stack_size += cfg->stat_locals_stack_size;
mono_jit_stats.basic_blocks += cfg->stat_basic_blocks;
mono_jit_stats.max_basic_blocks = MAX (cfg->stat_basic_blocks, mono_jit_stats.max_basic_blocks);
mono_jit_stats.cil_code_size += cfg->stat_cil_code_size;
mono_jit_stats.regvars += cfg->stat_n_regvars;
mono_jit_stats.inlineable_methods += cfg->stat_inlineable_methods;
mono_jit_stats.inlined_methods += cfg->stat_inlined_methods;
mono_jit_stats.code_reallocs += cfg->stat_code_reallocs;
}
/*
* mono_jit_compile_method_inner:
*
* Main entry point for the JIT.
*/
gpointer
mono_jit_compile_method_inner (MonoMethod *method, int opt, MonoError *error)
{
MonoCompile *cfg;
gpointer code = NULL;
MonoJitInfo *jinfo, *info;
MonoVTable *vtable;
MonoException *ex = NULL;
gint64 start;
MonoMethod *prof_method, *shared;
error_init (error);
start = mono_time_track_start ();
cfg = mini_method_compile (method, opt, JIT_FLAG_RUN_CCTORS, 0, -1);
gint64 jit_time = 0;
mono_time_track_end (&jit_time, start);
UnlockedAdd64 (&mono_jit_stats.jit_time, jit_time);
prof_method = cfg->method;
switch (cfg->exception_type) {
case MONO_EXCEPTION_NONE:
break;
case MONO_EXCEPTION_TYPE_LOAD:
case MONO_EXCEPTION_MISSING_FIELD:
case MONO_EXCEPTION_MISSING_METHOD:
case MONO_EXCEPTION_FILE_NOT_FOUND:
case MONO_EXCEPTION_BAD_IMAGE:
case MONO_EXCEPTION_INVALID_PROGRAM: {
/* Throw a type load exception if needed */
if (cfg->exception_ptr) {
ex = mono_class_get_exception_for_failure ((MonoClass *)cfg->exception_ptr);
} else {
if (cfg->exception_type == MONO_EXCEPTION_MISSING_FIELD)
ex = mono_exception_from_name_msg (mono_defaults.corlib, "System", "MissingFieldException", cfg->exception_message);
else if (cfg->exception_type == MONO_EXCEPTION_MISSING_METHOD)
ex = mono_exception_from_name_msg (mono_defaults.corlib, "System", "MissingMethodException", cfg->exception_message);
else if (cfg->exception_type == MONO_EXCEPTION_TYPE_LOAD)
ex = mono_exception_from_name_msg (mono_defaults.corlib, "System", "TypeLoadException", cfg->exception_message);
else if (cfg->exception_type == MONO_EXCEPTION_FILE_NOT_FOUND)
ex = mono_exception_from_name_msg (mono_defaults.corlib, "System.IO", "FileNotFoundException", cfg->exception_message);
else if (cfg->exception_type == MONO_EXCEPTION_BAD_IMAGE)
ex = mono_get_exception_bad_image_format (cfg->exception_message);
else if (cfg->exception_type == MONO_EXCEPTION_INVALID_PROGRAM)
ex = mono_exception_from_name_msg (mono_defaults.corlib, "System", "InvalidProgramException", cfg->exception_message);
else
g_assert_not_reached ();
}
break;
}
case MONO_EXCEPTION_MONO_ERROR:
// FIXME: MonoError has no copy ctor
g_assert (!is_ok (cfg->error));
ex = mono_error_convert_to_exception (cfg->error);
break;
default:
g_assert_not_reached ();
}
if (ex) {
MONO_PROFILER_RAISE (jit_failed, (method));
mono_destroy_compile (cfg);
mono_error_set_exception_instance (error, ex);
return NULL;
}
if (mono_method_is_generic_sharable (method, FALSE)) {
shared = mini_get_shared_method_full (method, SHARE_MODE_NONE, error);
if (!is_ok (error)) {
MONO_PROFILER_RAISE (jit_failed, (method));
mono_destroy_compile (cfg);
return NULL;
}
} else {
shared = NULL;
}
mono_loader_lock ();
if (mono_stats_method_desc && mono_method_desc_full_match (mono_stats_method_desc, method)) {
g_printf ("Printing runtime stats at method: %s\n", mono_method_get_full_name (method));
mono_runtime_print_stats ();
}
/* Check if some other thread already did the job. In this case, we can
discard the code this thread generated. */
info = mini_lookup_method (method, shared);
if (info) {
code = info->code_start;
discarded_code ++;
discarded_jit_time += jit_time;
}
if (code == NULL) {
MonoJitMemoryManager *jit_mm = (MonoJitMemoryManager*)cfg->jit_mm;
/* The lookup + insert is atomic since this is done inside the domain lock */
jit_code_hash_lock (jit_mm);
mono_internal_hash_table_insert (&jit_mm->jit_code_hash, cfg->jit_info->d.method, cfg->jit_info);
jit_code_hash_unlock (jit_mm);
code = cfg->native_code;
if (cfg->gshared && mono_method_is_generic_sharable (method, FALSE))
mono_atomic_inc_i32 (&mono_stats.generics_shared_methods);
if (cfg->gsharedvt)
mono_atomic_inc_i32 (&mono_stats.gsharedvt_methods);
}
jinfo = cfg->jit_info;
/*
* Update global stats while holding a lock, instead of doing many
* mono_atomic_inc_i32 operations during JITting.
*/
mono_update_jit_stats (cfg);
mono_destroy_compile (cfg);
mini_patch_llvm_jit_callees (method, code);
#ifndef DISABLE_JIT
mono_emit_jit_map (jinfo);
mono_emit_jit_dump (jinfo, code);
#endif
mono_loader_unlock ();
if (!is_ok (error))
return NULL;
vtable = mono_class_vtable_checked (method->klass, error);
return_val_if_nok (error, NULL);
if (method->wrapper_type == MONO_WRAPPER_MANAGED_TO_NATIVE) {
if (mono_marshal_method_from_wrapper (method)) {
/* Native func wrappers have no method */
/* The profiler doesn't know about wrappers, so pass the original icall method */
MONO_PROFILER_RAISE (jit_done, (mono_marshal_method_from_wrapper (method), jinfo));
}
}
MONO_PROFILER_RAISE (jit_done, (method, jinfo));
if (prof_method != method)
MONO_PROFILER_RAISE (jit_done, (prof_method, jinfo));
if (!mono_runtime_class_init_full (vtable, error))
return NULL;
return MINI_ADDR_TO_FTNPTR (code);
}
/*
* mini_get_underlying_type:
*
* Return the type the JIT will use during compilation.
* Handles: byref, enums, native types, bool/char, ref types, generic sharing.
* For gsharedvt types, it will return the original VAR/MVAR.
*/
MonoType*
mini_get_underlying_type (MonoType *type)
{
return mini_type_get_underlying_type (type);
}
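/*
 * For example (illustrative): an enum backed by int is lowered to
 * MONO_TYPE_I4, so the rest of the JIT only has to deal with the reduced
 * set of types.
 */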
void
mini_jit_init (void)
{
mono_os_mutex_init_recursive (&jit_mutex);
#ifndef DISABLE_JIT
mono_counters_register ("Discarded method code", MONO_COUNTER_JIT | MONO_COUNTER_INT, &discarded_code);
mono_counters_register ("Time spent JITting discarded code", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &discarded_jit_time);
mono_counters_register ("Try holes memory size", MONO_COUNTER_JIT | MONO_COUNTER_INT, &jinfo_try_holes_size);
mono_counters_register ("JIT/method_to_ir", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &mono_jit_stats.jit_method_to_ir);
mono_counters_register ("JIT/liveness_handle_exception_clauses", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &mono_jit_stats.jit_liveness_handle_exception_clauses);
mono_counters_register ("JIT/handle_out_of_line_bblock", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &mono_jit_stats.jit_handle_out_of_line_bblock);
mono_counters_register ("JIT/decompose_long_opts", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &mono_jit_stats.jit_decompose_long_opts);
mono_counters_register ("JIT/decompose_typechecks", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &mono_jit_stats.jit_decompose_typechecks);
mono_counters_register ("JIT/local_cprop", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &mono_jit_stats.jit_local_cprop);
mono_counters_register ("JIT/local_emulate_ops", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &mono_jit_stats.jit_local_emulate_ops);
mono_counters_register ("JIT/optimize_branches", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &mono_jit_stats.jit_optimize_branches);
mono_counters_register ("JIT/handle_global_vregs", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &mono_jit_stats.jit_handle_global_vregs);
mono_counters_register ("JIT/local_deadce", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &mono_jit_stats.jit_local_deadce);
mono_counters_register ("JIT/local_alias_analysis", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &mono_jit_stats.jit_local_alias_analysis);
mono_counters_register ("JIT/if_conversion", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &mono_jit_stats.jit_if_conversion);
mono_counters_register ("JIT/bb_ordering", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &mono_jit_stats.jit_bb_ordering);
mono_counters_register ("JIT/compile_dominator_info", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &mono_jit_stats.jit_compile_dominator_info);
mono_counters_register ("JIT/compute_natural_loops", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &mono_jit_stats.jit_compute_natural_loops);
mono_counters_register ("JIT/insert_safepoints", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &mono_jit_stats.jit_insert_safepoints);
mono_counters_register ("JIT/ssa_compute", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &mono_jit_stats.jit_ssa_compute);
mono_counters_register ("JIT/ssa_cprop", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &mono_jit_stats.jit_ssa_cprop);
mono_counters_register ("JIT/ssa_deadce", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &mono_jit_stats.jit_ssa_deadce);
mono_counters_register ("JIT/perform_abc_removal", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &mono_jit_stats.jit_perform_abc_removal);
mono_counters_register ("JIT/ssa_remove", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &mono_jit_stats.jit_ssa_remove);
mono_counters_register ("JIT/local_cprop2", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &mono_jit_stats.jit_local_cprop2);
mono_counters_register ("JIT/handle_global_vregs2", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &mono_jit_stats.jit_handle_global_vregs2);
mono_counters_register ("JIT/local_deadce2", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &mono_jit_stats.jit_local_deadce2);
mono_counters_register ("JIT/optimize_branches2", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &mono_jit_stats.jit_optimize_branches2);
mono_counters_register ("JIT/decompose_vtype_opts", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &mono_jit_stats.jit_decompose_vtype_opts);
mono_counters_register ("JIT/decompose_array_access_opts", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &mono_jit_stats.jit_decompose_array_access_opts);
mono_counters_register ("JIT/liveness_handle_exception_clauses2", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &mono_jit_stats.jit_liveness_handle_exception_clauses2);
mono_counters_register ("JIT/analyze_liveness", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &mono_jit_stats.jit_analyze_liveness);
mono_counters_register ("JIT/linear_scan", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &mono_jit_stats.jit_linear_scan);
mono_counters_register ("JIT/arch_allocate_vars", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &mono_jit_stats.jit_arch_allocate_vars);
mono_counters_register ("JIT/spill_global_var", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &mono_jit_stats.jit_spill_global_vars);
mono_counters_register ("JIT/local_cprop3", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &mono_jit_stats.jit_local_cprop3);
mono_counters_register ("JIT/local_deadce3", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &mono_jit_stats.jit_local_deadce3);
mono_counters_register ("JIT/codegen", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &mono_jit_stats.jit_codegen);
mono_counters_register ("JIT/create_jit_info", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &mono_jit_stats.jit_create_jit_info);
mono_counters_register ("JIT/gc_create_gc_map", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &mono_jit_stats.jit_gc_create_gc_map);
mono_counters_register ("JIT/save_seq_point_info", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &mono_jit_stats.jit_save_seq_point_info);
mono_counters_register ("Total time spent JITting", MONO_COUNTER_JIT | MONO_COUNTER_LONG | MONO_COUNTER_TIME, &mono_jit_stats.jit_time);
mono_counters_register ("Basic blocks", MONO_COUNTER_JIT | MONO_COUNTER_INT, &mono_jit_stats.basic_blocks);
mono_counters_register ("Max basic blocks", MONO_COUNTER_JIT | MONO_COUNTER_INT, &mono_jit_stats.max_basic_blocks);
mono_counters_register ("Allocated vars", MONO_COUNTER_JIT | MONO_COUNTER_INT, &mono_jit_stats.allocate_var);
mono_counters_register ("Code reallocs", MONO_COUNTER_JIT | MONO_COUNTER_INT, &mono_jit_stats.code_reallocs);
mono_counters_register ("Allocated code size", MONO_COUNTER_JIT | MONO_COUNTER_INT, &mono_jit_stats.allocated_code_size);
mono_counters_register ("Allocated seq points size", MONO_COUNTER_JIT | MONO_COUNTER_INT, &mono_jit_stats.allocated_seq_points_size);
mono_counters_register ("Inlineable methods", MONO_COUNTER_JIT | MONO_COUNTER_INT, &mono_jit_stats.inlineable_methods);
mono_counters_register ("Inlined methods", MONO_COUNTER_JIT | MONO_COUNTER_INT, &mono_jit_stats.inlined_methods);
mono_counters_register ("Regvars", MONO_COUNTER_JIT | MONO_COUNTER_INT, &mono_jit_stats.regvars);
mono_counters_register ("Locals stack size", MONO_COUNTER_JIT | MONO_COUNTER_INT, &mono_jit_stats.locals_stack_size);
mono_counters_register ("Method cache lookups", MONO_COUNTER_JIT | MONO_COUNTER_INT, &mono_jit_stats.methods_lookups);
mono_counters_register ("Compiled CIL code size", MONO_COUNTER_JIT | MONO_COUNTER_INT, &mono_jit_stats.cil_code_size);
mono_counters_register ("Native code size", MONO_COUNTER_JIT | MONO_COUNTER_INT, &mono_jit_stats.native_code_size);
mono_counters_register ("Aliases found", MONO_COUNTER_JIT | MONO_COUNTER_INT, &mono_jit_stats.alias_found);
mono_counters_register ("Aliases eliminated", MONO_COUNTER_JIT | MONO_COUNTER_INT, &mono_jit_stats.alias_removed);
mono_counters_register ("Aliased loads eliminated", MONO_COUNTER_JIT | MONO_COUNTER_INT, &mono_jit_stats.loads_eliminated);
mono_counters_register ("Aliased stores eliminated", MONO_COUNTER_JIT | MONO_COUNTER_INT, &mono_jit_stats.stores_eliminated);
mono_counters_register ("Optimized immediate divisions", MONO_COUNTER_JIT | MONO_COUNTER_INT, &mono_jit_stats.optimized_divisions);
current_backend = g_new0 (MonoBackend, 1);
init_backend (current_backend);
#endif
}
#ifndef ENABLE_LLVM
void
mono_llvm_emit_aot_file_info (MonoAotFileInfo *info, gboolean has_jitted_code)
{
g_assert_not_reached ();
}
gpointer
mono_llvm_emit_aot_data (const char *symbol, guint8 *data, int data_len)
{
g_assert_not_reached ();
}
gpointer
mono_llvm_emit_aot_data_aligned (const char *symbol, guint8 *data, int data_len, int align)
{
g_assert_not_reached ();
}
#endif
#if !defined(ENABLE_LLVM_RUNTIME) && !defined(ENABLE_LLVM)
void
mono_llvm_cpp_throw_exception (void)
{
g_assert_not_reached ();
}
void
mono_llvm_cpp_catch_exception (MonoLLVMInvokeCallback cb, gpointer arg, gboolean *out_thrown)
{
g_assert_not_reached ();
}
#endif
#ifdef DISABLE_JIT
MonoCompile*
mini_method_compile (MonoMethod *method, guint32 opts, JitFlags flags, int parts, int aot_method_index)
{
g_assert_not_reached ();
return NULL;
}
void
mono_destroy_compile (MonoCompile *cfg)
{
g_assert_not_reached ();
}
void
mono_add_patch_info (MonoCompile *cfg, int ip, MonoJumpInfoType type, gconstpointer target)
{
g_assert_not_reached ();
}
#else // DISABLE_JIT
guint8*
mini_realloc_code_slow (MonoCompile *cfg, int size)
{
const int EXTRA_CODE_SPACE = 16;
if (cfg->code_len + size > (cfg->code_size - EXTRA_CODE_SPACE)) {
while (cfg->code_len + size > (cfg->code_size - EXTRA_CODE_SPACE))
cfg->code_size = cfg->code_size * 2 + EXTRA_CODE_SPACE;
cfg->native_code = g_realloc (cfg->native_code, cfg->code_size);
cfg->stat_code_reallocs++;
}
return cfg->native_code + cfg->code_len;
}
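/*
 * Growth sketch (illustrative numbers): with code_size = 256 and a request
 * where code_len + size = 500, the loop runs once: 500 > 256 - 16, so
 * code_size becomes 256 * 2 + 16 = 528, and 500 <= 528 - 16 ends the loop.
 */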
#endif /* DISABLE_JIT */
gboolean
mini_class_is_system_array (MonoClass *klass)
{
return m_class_get_parent (klass) == mono_defaults.array_class;
}
/*
* mono_target_pagesize:
*
* Query the page size used to determine whether an implicit NRE (null reference exception) check can be used.
*/
int
mono_target_pagesize (void)
{
/* We could query the system's pagesize via mono_pagesize (), but there
* are pitfalls: sysconf (3) is called on some POSIX-like systems, and per
* POSIX.1-2008 this function doesn't have to be async-safe. Since this
* function can be called from a signal handler, we simplify things by
* using 4k on all targets. Implicit null-checks with an offset larger than
* 4k are _very_ uncommon, so we don't mind emitting an explicit null-check
* for those cases.
*/
return 4 * 1024;
}
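/*
 * Concretely (illustrative): a load at [obj + 0x10] can rely on the hardware
 * fault when obj is NULL, while a load at [obj + 0x2000] is above the assumed
 * 4k page and needs an explicit null check, since the access could hit a
 * mapped page even for a NULL obj.
 */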
MonoCPUFeatures
mini_get_cpu_features (MonoCompile* cfg)
{
MonoCPUFeatures features = (MonoCPUFeatures)0;
#if !defined(MONO_CROSS_COMPILE)
if (!cfg->compile_aot || cfg->use_current_cpu) {
// detect current CPU features if we are in JIT mode or AOT with use_current_cpu flag.
#if defined(ENABLE_LLVM)
features = mono_llvm_get_cpu_features (); // llvm has a nice built-in API to detect features
#elif defined(TARGET_AMD64) || defined(TARGET_X86)
features = mono_arch_get_cpu_features ();
#endif
}
#endif
#if defined(TARGET_ARM64)
// All Arm64 devices have this set
features |= MONO_CPU_ARM64_BASE;
// This is a standard part of ARMv8-A; see A1.5 in "ARM
// Architecture Reference Manual ARMv8, for ARMv8-A
// architecture profile"
features |= MONO_CPU_ARM64_NEON;
#endif
// apply parameters passed via -mattr
return (features | mono_cpu_features_enabled) & ~mono_cpu_features_disabled;
}
| 1 |
dotnet/runtime | 66,211 | [mono] Remove SkipVerification support from the runtime | CAS support was removed in .NET Core. This allows us removing a bunch of code that is unused, e.g. the dependency on libiconv. | akoeplinger | 2022-03-04T19:47:04Z | 2022-03-06T13:44:33Z | b463b1630dbf1be5b013208a9fa73e1ecd6c774c | be629f49a350d526de2c65981294734cee420b90 | [mono] Remove SkipVerification support from the runtime. CAS support was removed in .NET Core. This allows us removing a bunch of code that is unused, e.g. the dependency on libiconv. | ./src/mono/mono/mini/mini.h | /**
* \file
* Copyright 2002-2003 Ximian Inc
* Copyright 2003-2011 Novell Inc
* Copyright 2011 Xamarin Inc
* Licensed under the MIT license. See LICENSE file in the project root for full license information.
*/
#ifndef __MONO_MINI_H__
#define __MONO_MINI_H__
#include "config.h"
#include <glib.h>
#include <signal.h>
#ifdef HAVE_SYS_TYPES_H
#include <sys/types.h>
#endif
#include <mono/utils/mono-forward-internal.h>
#include <mono/metadata/loader.h>
#include <mono/metadata/mempool.h>
#include <mono/utils/monobitset.h>
#include <mono/metadata/class.h>
#include <mono/metadata/object.h>
#include <mono/metadata/opcodes.h>
#include <mono/metadata/tabledefs.h>
#include <mono/metadata/domain-internals.h>
#include "mono/metadata/class-internals.h"
#include "mono/metadata/class-init.h"
#include "mono/metadata/object-internals.h"
#include <mono/metadata/profiler-private.h>
#include <mono/metadata/debug-helpers.h>
#include <mono/metadata/abi-details.h>
#include <mono/metadata/jit-info.h>
#include <mono/utils/mono-compiler.h>
#include <mono/utils/mono-machine.h>
#include <mono/utils/mono-stack-unwinding.h>
#include <mono/utils/mono-threads.h>
#include <mono/utils/mono-threads-coop.h>
#include <mono/utils/mono-tls.h>
#include <mono/utils/atomic.h>
#include <mono/utils/mono-jemalloc.h>
#include <mono/utils/mono-conc-hashtable.h>
#include <mono/utils/mono-signal-handler.h>
#include <mono/utils/ftnptr.h>
#include <mono/metadata/icalls.h>
// Forward declare so that mini-*.h can have pointers to them.
// CallInfo is presently architecture specific.
typedef struct MonoInst MonoInst;
typedef struct CallInfo CallInfo;
typedef struct SeqPointInfo SeqPointInfo;
#include "mini-arch.h"
#include "regalloc.h"
#include "mini-unwind.h"
#include <mono/jit/jit.h>
#include "cfgdump.h"
#include "tiered.h"
#include "mono/metadata/tabledefs.h"
#include "mono/metadata/marshal.h"
#include "mono/metadata/exception.h"
#include "mono/metadata/callspec.h"
#include "mono/metadata/icall-signatures.h"
/*
* The mini code should not have any compile-time dependencies on the GC being used, so the same object file from mini/
* can be linked into both mono and mono-sgen.
*/
#if !defined(MONO_DLL_EXPORT) || !defined(_MSC_VER)
#if defined(HAVE_BOEHM_GC) || defined(HAVE_SGEN_GC)
#error "The code in mini/ should not depend on these defines."
#endif
#endif
#ifndef __GNUC__
/*#define __alignof__(a) sizeof(a)*/
#define __alignof__(type) G_STRUCT_OFFSET(struct { char c; type x; }, x)
#endif
#if DISABLE_LOGGING
#define MINI_DEBUG(level,limit,code)
#else
#define MINI_DEBUG(level,limit,code) do {if (G_UNLIKELY ((level) >= (limit))) code} while (0)
#endif
#if !defined(DISABLE_TASKLETS) && defined(MONO_ARCH_SUPPORT_TASKLETS)
#if defined(__GNUC__)
#define MONO_SUPPORT_TASKLETS 1
#elif defined(HOST_WIN32)
#define MONO_SUPPORT_TASKLETS 1
// Replace some gnu intrinsics needed for tasklets with MSVC equivalents.
#define __builtin_extract_return_addr(x) x
#define __builtin_return_address(x) _ReturnAddress()
#define __builtin_frame_address(x) _AddressOfReturnAddress()
#endif
#endif
#if ENABLE_LLVM
#define COMPILE_LLVM(cfg) ((cfg)->compile_llvm)
#define LLVM_ENABLED TRUE
#else
#define COMPILE_LLVM(cfg) (0)
#define LLVM_ENABLED FALSE
#endif
#ifdef MONO_ARCH_SOFT_FLOAT_FALLBACK
#define COMPILE_SOFT_FLOAT(cfg) (!COMPILE_LLVM ((cfg)) && mono_arch_is_soft_float ())
#else
#define COMPILE_SOFT_FLOAT(cfg) (0)
#endif
#define NOT_IMPLEMENTED do { g_assert_not_reached (); } while (0)
/* for 32 bit systems */
#if G_BYTE_ORDER == G_LITTLE_ENDIAN
#define MINI_LS_WORD_IDX 0
#define MINI_MS_WORD_IDX 1
#else
#define MINI_LS_WORD_IDX 1
#define MINI_MS_WORD_IDX 0
#endif
#define MINI_LS_WORD_OFFSET (MINI_LS_WORD_IDX * 4)
#define MINI_MS_WORD_OFFSET (MINI_MS_WORD_IDX * 4)
#define MONO_LVREG_LS(lvreg) ((lvreg) + 1)
#define MONO_LVREG_MS(lvreg) ((lvreg) + 2)
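/*
 * Sketch of the convention (illustrative): on a 32-bit little-endian target,
 * a 64-bit vreg R is split by the long decomposition pass into two word-sized
 * vregs: MONO_LVREG_LS (R) = R + 1 holds the low word (byte offset
 * MINI_LS_WORD_OFFSET = 0) and MONO_LVREG_MS (R) = R + 2 the high word
 * (offset 4).
 */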
#ifndef DISABLE_AOT
#define MONO_USE_AOT_COMPILER
#endif
//TODO: This is x86/amd64 specific.
#define mono_simd_shuffle_mask(a,b,c,d) ((a) | ((b) << 2) | ((c) << 4) | ((d) << 6))
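/*
 * Example (illustrative): mono_simd_shuffle_mask (3, 2, 1, 0) yields
 * 3 | (2 << 2) | (1 << 4) | (0 << 6) = 0x1b, the mask that reverses the four
 * 32-bit lanes in a PSHUFD-style shuffle.
 */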
/* Remap printf to g_print (we use a mix of these in the mini code) */
#ifdef HOST_ANDROID
#define printf g_print
#endif
#define MONO_TYPE_IS_PRIMITIVE(t) ((!m_type_is_byref ((t)) && ((((t)->type >= MONO_TYPE_BOOLEAN && (t)->type <= MONO_TYPE_R8) || ((t)->type >= MONO_TYPE_I && (t)->type <= MONO_TYPE_U)))))
#define MONO_TYPE_IS_VECTOR_PRIMITIVE(t) ((!m_type_is_byref ((t)) && ((((t)->type >= MONO_TYPE_I1 && (t)->type <= MONO_TYPE_R8) || ((t)->type >= MONO_TYPE_I && (t)->type <= MONO_TYPE_U)))))
//XXX this ignores whether t is byref
#define MONO_TYPE_IS_PRIMITIVE_SCALAR(t) ((((((t)->type >= MONO_TYPE_BOOLEAN && (t)->type <= MONO_TYPE_U8) || ((t)->type >= MONO_TYPE_I && (t)->type <= MONO_TYPE_U)))))
typedef struct
{
MonoClass *klass;
MonoMethod *method;
} MonoClassMethodPair;
typedef struct
{
MonoClass *klass;
MonoMethod *method;
gboolean is_virtual;
} MonoDelegateClassMethodPair;
typedef struct {
MonoJitInfo *ji;
MonoCodeManager *code_mp;
} MonoJitDynamicMethodInfo;
/* An extension of MonoGenericParamFull used in generic sharing */
typedef struct {
MonoGenericParamFull param;
MonoGenericParam *parent;
} MonoGSharedGenericParam;
/* Contains a list of ips which needs to be patched when a method is compiled */
typedef struct {
GSList *list;
} MonoJumpList;
/* Arch-specific */
typedef struct {
int dummy;
} MonoDynCallInfo;
typedef struct {
guint32 index;
MonoExceptionClause *clause;
} MonoLeaveClause;
/*
* Information about a stack frame.
* FIXME This typedef exists only to avoid tons of code rewriting
*/
typedef MonoStackFrameInfo StackFrameInfo;
#if 0
#define mono_bitset_foreach_bit(set,b,n) \
for (b = 0; b < n; b++)\
if (mono_bitset_test_fast(set,b))
#else
#define mono_bitset_foreach_bit(set,b,n) \
for (b = mono_bitset_find_start (set); b < n && b >= 0; b = mono_bitset_find_first (set, b))
#endif
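/*
 * Usage sketch (illustrative): iterate the set bits of a liveness set, with
 * 'b' receiving each bit index in turn:
 *
 *   int b;
 *   mono_bitset_foreach_bit (bb->live_in_set, b, cfg->num_varinfo) {
 *       ... variable b is live on entry ...
 *   }
 */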
/*
* Pull the list of opcodes
*/
#define OPDEF(a,b,c,d,e,f,g,h,i,j) \
a = i,
enum {
#include "mono/cil/opcode.def"
CEE_LASTOP
};
#undef OPDEF
#define MONO_VARINFO(cfg,varnum) (&(cfg)->vars [varnum])
#define MONO_INST_NULLIFY_SREGS(dest) do { \
(dest)->sreg1 = (dest)->sreg2 = (dest)->sreg3 = -1; \
} while (0)
#define MONO_INST_NEW(cfg,dest,op) do { \
(dest) = (MonoInst *)mono_mempool_alloc0 ((cfg)->mempool, sizeof (MonoInst)); \
(dest)->opcode = (op); \
(dest)->dreg = -1; \
MONO_INST_NULLIFY_SREGS ((dest)); \
(dest)->cil_code = (cfg)->ip; \
} while (0)
#define MONO_INST_NEW_CALL(cfg,dest,op) do { \
(dest) = (MonoCallInst *)mono_mempool_alloc0 ((cfg)->mempool, sizeof (MonoCallInst)); \
(dest)->inst.opcode = (op); \
(dest)->inst.dreg = -1; \
MONO_INST_NULLIFY_SREGS (&(dest)->inst); \
(dest)->inst.cil_code = (cfg)->ip; \
} while (0)
#define MONO_ADD_INS(b,inst) do { \
if ((b)->last_ins) { \
(b)->last_ins->next = (inst); \
(inst)->prev = (b)->last_ins; \
(b)->last_ins = (inst); \
} else { \
(b)->code = (b)->last_ins = (inst); \
} \
} while (0)
#define NULLIFY_INS(ins) do { \
(ins)->opcode = OP_NOP; \
(ins)->dreg = -1; \
MONO_INST_NULLIFY_SREGS ((ins)); \
} while (0)
/* Remove INS from BB */
#define MONO_REMOVE_INS(bb,ins) do { \
if ((ins)->prev) \
(ins)->prev->next = (ins)->next; \
if ((ins)->next) \
(ins)->next->prev = (ins)->prev; \
if ((bb)->code == (ins)) \
(bb)->code = (ins)->next; \
if ((bb)->last_ins == (ins)) \
(bb)->last_ins = (ins)->prev; \
} while (0)
/* Remove INS from BB and nullify it */
#define MONO_DELETE_INS(bb,ins) do { \
MONO_REMOVE_INS ((bb), (ins)); \
NULLIFY_INS ((ins)); \
} while (0)
/*
* this is used to determine when some branch optimizations are possible: we exclude FP compares
* because they have weird semantics with NaNs.
*/
#define MONO_IS_COND_BRANCH_OP(ins) (((ins)->opcode >= OP_LBEQ && (ins)->opcode <= OP_LBLT_UN) || ((ins)->opcode >= OP_FBEQ && (ins)->opcode <= OP_FBLT_UN) || ((ins)->opcode >= OP_IBEQ && (ins)->opcode <= OP_IBLT_UN))
#define MONO_IS_COND_BRANCH_NOFP(ins) (MONO_IS_COND_BRANCH_OP(ins) && !(((ins)->opcode >= OP_FBEQ) && ((ins)->opcode <= OP_FBLT_UN)))
#define MONO_IS_BRANCH_OP(ins) (MONO_IS_COND_BRANCH_OP(ins) || ((ins)->opcode == OP_BR) || ((ins)->opcode == OP_BR_REG) || ((ins)->opcode == OP_SWITCH))
#define MONO_IS_COND_EXC(ins) ((((ins)->opcode >= OP_COND_EXC_EQ) && ((ins)->opcode <= OP_COND_EXC_LT_UN)) || (((ins)->opcode >= OP_COND_EXC_IEQ) && ((ins)->opcode <= OP_COND_EXC_ILT_UN)))
#define MONO_IS_SETCC(ins) ((((ins)->opcode >= OP_CEQ) && ((ins)->opcode <= OP_CLT_UN)) || (((ins)->opcode >= OP_ICEQ) && ((ins)->opcode <= OP_ICLE_UN)) || (((ins)->opcode >= OP_LCEQ) && ((ins)->opcode <= OP_LCLT_UN)) || (((ins)->opcode >= OP_FCEQ) && ((ins)->opcode <= OP_FCLT_UN)))
#define MONO_HAS_CUSTOM_EMULATION(ins) (((ins)->opcode >= OP_FBEQ && (ins)->opcode <= OP_FBLT_UN) || ((ins)->opcode >= OP_FCEQ && (ins)->opcode <= OP_FCLT_UN))
#define MONO_IS_LOAD_MEMBASE(ins) (((ins)->opcode >= OP_LOAD_MEMBASE && (ins)->opcode <= OP_LOADV_MEMBASE) || ((ins)->opcode >= OP_ATOMIC_LOAD_I1 && (ins)->opcode <= OP_ATOMIC_LOAD_R8))
#define MONO_IS_STORE_MEMBASE(ins) (((ins)->opcode >= OP_STORE_MEMBASE_REG && (ins)->opcode <= OP_STOREV_MEMBASE) || ((ins)->opcode >= OP_ATOMIC_STORE_I1 && (ins)->opcode <= OP_ATOMIC_STORE_R8))
#define MONO_IS_STORE_MEMINDEX(ins) (((ins)->opcode >= OP_STORE_MEMINDEX) && ((ins)->opcode <= OP_STORER8_MEMINDEX))
// This is internal because it is easily confused with any enum or integer.
#define MONO_IS_TAILCALL_OPCODE_INTERNAL(opcode) ((opcode) == OP_TAILCALL || (opcode) == OP_TAILCALL_MEMBASE || (opcode) == OP_TAILCALL_REG)
#define MONO_IS_TAILCALL_OPCODE(call) (MONO_IS_TAILCALL_OPCODE_INTERNAL (call->inst.opcode))
// OP_DYN_CALL is not a MonoCallInst
#define MONO_IS_CALL(ins) (((ins)->opcode >= OP_VOIDCALL && (ins)->opcode <= OP_VCALL2_MEMBASE) || \
MONO_IS_TAILCALL_OPCODE_INTERNAL ((ins)->opcode))
#define MONO_IS_JUMP_TABLE(ins) (((ins)->opcode == OP_JUMP_TABLE) ? TRUE : ((((ins)->opcode == OP_AOTCONST) && (ins->inst_i1 == (gpointer)MONO_PATCH_INFO_SWITCH)) ? TRUE : ((ins)->opcode == OP_SWITCH) ? TRUE : ((((ins)->opcode == OP_GOT_ENTRY) && ((ins)->inst_right->inst_i1 == (gpointer)MONO_PATCH_INFO_SWITCH)) ? TRUE : FALSE)))
#define MONO_JUMP_TABLE_FROM_INS(ins) (((ins)->opcode == OP_JUMP_TABLE) ? (ins)->inst_p0 : (((ins)->opcode == OP_AOTCONST) && (ins->inst_i1 == (gpointer)MONO_PATCH_INFO_SWITCH) ? (ins)->inst_p0 : (((ins)->opcode == OP_SWITCH) ? (ins)->inst_p0 : ((((ins)->opcode == OP_GOT_ENTRY) && ((ins)->inst_right->inst_i1 == (gpointer)MONO_PATCH_INFO_SWITCH)) ? (ins)->inst_right->inst_p0 : NULL))))
#define MONO_INS_HAS_NO_SIDE_EFFECT(ins) (mono_ins_no_side_effects ((ins)))
#define MONO_INS_IS_PCONST_NULL(ins) ((ins)->opcode == OP_PCONST && (ins)->inst_p0 == 0)
#define MONO_METHOD_IS_FINAL(m) (((m)->flags & METHOD_ATTRIBUTE_FINAL) || ((m)->klass && (mono_class_get_flags ((m)->klass) & TYPE_ATTRIBUTE_SEALED)))
/* Determine whether 'ins' represents a load of the 'this' argument */
#define MONO_CHECK_THIS(ins) (mono_method_signature_internal (cfg->method)->hasthis && ((ins)->opcode == OP_MOVE) && ((ins)->sreg1 == cfg->args [0]->dreg))
#ifdef MONO_ARCH_SIMD_INTRINSICS
#define MONO_IS_PHI(ins) (((ins)->opcode == OP_PHI) || ((ins)->opcode == OP_FPHI) || ((ins)->opcode == OP_VPHI) || ((ins)->opcode == OP_XPHI))
#define MONO_IS_MOVE(ins) (((ins)->opcode == OP_MOVE) || ((ins)->opcode == OP_FMOVE) || ((ins)->opcode == OP_VMOVE) || ((ins)->opcode == OP_XMOVE) || ((ins)->opcode == OP_RMOVE))
#define MONO_IS_NON_FP_MOVE(ins) (((ins)->opcode == OP_MOVE) || ((ins)->opcode == OP_VMOVE) || ((ins)->opcode == OP_XMOVE))
#define MONO_IS_REAL_MOVE(ins) (((ins)->opcode == OP_MOVE) || ((ins)->opcode == OP_FMOVE) || ((ins)->opcode == OP_XMOVE) || ((ins)->opcode == OP_RMOVE))
#define MONO_IS_ZERO(ins) (((ins)->opcode == OP_VZERO) || ((ins)->opcode == OP_XZERO))
#ifdef TARGET_ARM64
/*
* SIMD is only supported on arm64 when using the LLVM backend. When not using
* the LLVM backend, treat SIMD datatypes as regular value types.
*/
#define MONO_CLASS_IS_SIMD(cfg, klass) (((cfg)->opt & MONO_OPT_SIMD) && COMPILE_LLVM (cfg) && m_class_is_simd_type (klass))
#else
#define MONO_CLASS_IS_SIMD(cfg, klass) (((cfg)->opt & MONO_OPT_SIMD) && m_class_is_simd_type (klass) && (COMPILE_LLVM (cfg) || mono_type_size (m_class_get_byval_arg (klass), NULL) == 16))
#endif
#else
#define MONO_IS_PHI(ins) (((ins)->opcode == OP_PHI) || ((ins)->opcode == OP_FPHI) || ((ins)->opcode == OP_VPHI))
#define MONO_IS_MOVE(ins) (((ins)->opcode == OP_MOVE) || ((ins)->opcode == OP_FMOVE) || ((ins)->opcode == OP_VMOVE) || ((ins)->opcode == OP_RMOVE))
#define MONO_IS_NON_FP_MOVE(ins) (((ins)->opcode == OP_MOVE) || ((ins)->opcode == OP_VMOVE))
/* A real MOVE is one that isn't decomposed, such as a VMOVE or LMOVE */
#define MONO_IS_REAL_MOVE(ins) (((ins)->opcode == OP_MOVE) || ((ins)->opcode == OP_FMOVE) || ((ins)->opcode == OP_RMOVE))
#define MONO_IS_ZERO(ins) ((ins)->opcode == OP_VZERO)
#define MONO_CLASS_IS_SIMD(cfg, klass) (0)
#endif
#if defined(TARGET_X86) || defined(TARGET_AMD64)
#define EMIT_NEW_X86_LEA(cfg,dest,sr1,sr2,shift,imm) do { \
MONO_INST_NEW (cfg, dest, OP_X86_LEA); \
(dest)->dreg = alloc_ireg_mp ((cfg)); \
(dest)->sreg1 = (sr1); \
(dest)->sreg2 = (sr2); \
(dest)->inst_imm = (imm); \
(dest)->backend.shift_amount = (shift); \
MONO_ADD_INS ((cfg)->cbb, (dest)); \
} while (0)
#endif
typedef struct MonoInstList MonoInstList;
typedef struct MonoCallInst MonoCallInst;
typedef struct MonoCallArgParm MonoCallArgParm;
typedef struct MonoMethodVar MonoMethodVar;
typedef struct MonoBasicBlock MonoBasicBlock;
typedef struct MonoSpillInfo MonoSpillInfo;
extern MonoCallSpec *mono_jit_trace_calls;
extern MonoMethodDesc *mono_inject_async_exc_method;
extern int mono_inject_async_exc_pos;
extern MonoMethodDesc *mono_break_at_bb_method;
extern int mono_break_at_bb_bb_num;
extern gboolean mono_do_x86_stack_align;
extern int mini_verbose;
extern int valgrind_register;
#define INS_INFO(opcode) (&mini_ins_info [((opcode) - OP_START - 1) * 4])
/* instruction description for use in regalloc/scheduling */
enum {
MONO_INST_DEST = 0,
MONO_INST_SRC1 = 1, /* we depend on the SRCs to be consecutive */
MONO_INST_SRC2 = 2,
MONO_INST_SRC3 = 3,
MONO_INST_LEN = 4,
MONO_INST_CLOB = 5,
/* Unused, commented out to reduce the size of the mdesc tables
MONO_INST_FLAGS,
MONO_INST_COST,
MONO_INST_DELAY,
MONO_INST_RES,
*/
MONO_INST_MAX = 6
};
typedef union MonoInstSpec { // instruction specification
struct {
char dest;
char src1;
char src2;
char src3;
unsigned char len;
char clob;
// char flags;
// char cost;
// char delay;
// char res;
};
struct {
char xdest;
char src [3];
unsigned char xlen;
char xclob;
};
char bytes[MONO_INST_MAX];
} MonoInstSpec;
extern const char mini_ins_info[];
extern const gint8 mini_ins_sreg_counts [];
#ifndef DISABLE_JIT
#define mono_inst_get_num_src_registers(ins) (mini_ins_sreg_counts [(ins)->opcode - OP_START - 1])
#else
#define mono_inst_get_num_src_registers(ins) 0
#endif
#define mono_inst_get_src_registers(ins, regs) (((regs) [0] = (ins)->sreg1), ((regs) [1] = (ins)->sreg2), ((regs) [2] = (ins)->sreg3), mono_inst_get_num_src_registers ((ins)))
#define MONO_BB_FOR_EACH_INS(bb, ins) for ((ins) = (bb)->code; (ins); (ins) = (ins)->next)
#define MONO_BB_FOR_EACH_INS_SAFE(bb, n, ins) for ((ins) = (bb)->code, n = (ins) ? (ins)->next : NULL; (ins); (ins) = (n), (n) = (ins) ? (ins)->next : NULL)
#define MONO_BB_FOR_EACH_INS_REVERSE(bb, ins) for ((ins) = (bb)->last_ins; (ins); (ins) = (ins)->prev)
#define MONO_BB_FOR_EACH_INS_REVERSE_SAFE(bb, p, ins) for ((ins) = (bb)->last_ins, p = (ins) ? (ins)->prev : NULL; (ins); (ins) = (p), (p) = (ins) ? (ins)->prev : NULL)
#define mono_bb_first_ins(bb) (bb)->code
/*
* Iterate through all used registers in the instruction.
* Relies on the existing order of the MONO_INST enum: MONO_INST_{DREG,SREG1,SREG2,SREG3,LEN}
* INS is the instruction, IDX is the register index, REG is the pointer to a register.
*/
#define MONO_INS_FOR_EACH_REG(ins, idx, reg) for ((idx) = INS_INFO ((ins)->opcode)[MONO_INST_DEST] != ' ' ? MONO_INST_DEST : \
(mono_inst_get_num_src_registers (ins) ? MONO_INST_SRC1 : MONO_INST_LEN); \
(reg) = (idx) == MONO_INST_DEST ? &(ins)->dreg : \
((idx) == MONO_INST_SRC1 ? &(ins)->sreg1 : \
((idx) == MONO_INST_SRC2 ? &(ins)->sreg2 : \
((idx) == MONO_INST_SRC3 ? &(ins)->sreg3 : NULL))), \
idx < MONO_INST_LEN; \
(idx) = (idx) > mono_inst_get_num_src_registers (ins) + (INS_INFO ((ins)->opcode)[MONO_INST_DEST] != ' ') ? MONO_INST_LEN : (idx) + 1)
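/*
 * Usage sketch (illustrative): renaming a vreg across dreg/sreg1..3 in place,
 * with 'idx' and 'reg' declared by the caller:
 *
 *   int idx;
 *   gint32 *reg;
 *   MONO_INS_FOR_EACH_REG (ins, idx, reg) {
 *       if (*reg == old_vreg)
 *           *reg = new_vreg;
 *   }
 */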
struct MonoSpillInfo {
int offset;
};
/*
* Information about a call site for the GC map creation code
*/
typedef struct {
/* The next offset after the call instruction */
int pc_offset;
/* The basic block containing the call site */
MonoBasicBlock *bb;
/*
* The set of variables live at the call site.
* Has length cfg->num_varinfo in bits.
*/
guint8 *liveness;
/*
* List of OP_GC_PARAM_SLOT_LIVENESS_DEF instructions defining the param slots
* used by this call.
*/
GSList *param_slots;
} GCCallSite;
/*
* The IR-level extended basic block.
*
* A basic block can have multiple exits just fine, as long as the point of
* 'departure' is the last instruction in the basic block. Extended basic
* blocks, on the other hand, may have instructions that leave the block
* midstream. The important thing is that they cannot be _entered_
* midstream, i.e., execution of a basic block (or extended bb) always starts
* at the beginning of the block, never in the middle.
*/
struct MonoBasicBlock {
MonoInst *last_ins;
/* the next basic block in the order it appears in IL */
MonoBasicBlock *next_bb;
/*
* Before instruction selection it is the first tree in the
* forest and the first item in the list of trees. After
* instruction selection it is the first instruction and the
* first item in the list of instructions.
*/
MonoInst *code;
/* unique block number identification */
gint32 block_num;
gint32 dfn;
/* Basic blocks: incoming and outgoing counts and pointers */
/* Each bb should only appear once in each array */
gint16 out_count, in_count;
MonoBasicBlock **in_bb;
MonoBasicBlock **out_bb;
/* Points to the start of the CIL code that initiated this BB */
unsigned char* cil_code;
/* Length of the CIL block */
gint32 cil_length;
/* The offset of the generated code, used for fixups */
int native_offset;
/* The length of the generated code, doesn't include alignment padding */
int native_length;
/* The real native offset, which includes alignment padding too */
int real_native_offset;
int max_offset;
int max_length;
/* Visited and reachable flags */
guint32 flags;
/*
* SSA and loop based flags
*/
MonoBitSet *dominators;
MonoBitSet *dfrontier;
MonoBasicBlock *idom;
GSList *dominated;
/* fast dominator algorithm */
MonoBasicBlock *df_parent, *ancestor, *child, *label;
int size, sdom, idomn;
/* loop nesting and recognition */
GList *loop_blocks;
gint8 nesting;
gint8 loop_body_start;
/*
* Whether the bblock is rarely executed, so it should be emitted after
* the function epilog.
*/
guint out_of_line : 1;
/* Caches the result of uselessness calculation during optimize_branches */
guint not_useless : 1;
/* Whether the decompose_array_access_opts () pass needs to process this bblock */
guint needs_decompose : 1;
/* Whether this bblock is extended, i.e. it has branches inside it */
guint extended : 1;
/* Whether this bblock contains an OP_JUMP_TABLE instruction */
guint has_jump_table : 1;
/* Whether this bblock contains an OP_CALL_HANDLER instruction */
guint has_call_handler : 1;
/* Whether this bblock starts a try block */
guint try_start : 1;
#ifdef ENABLE_LLVM
/* The offset of the CIL instruction in this bblock which ends a try block */
intptr_t try_end;
#endif
/*
* If this is set, extend the try range started by this bblock by an arch-specific
* number of bytes to encompass the end of the previous bblock (e.g. a Monitor.Enter
* call).
*/
guint extend_try_block : 1;
/* use for liveness analysis */
MonoBitSet *gen_set;
MonoBitSet *kill_set;
MonoBitSet *live_in_set;
MonoBitSet *live_out_set;
/* fields to deal with non-empty stack slots at bb boundary */
guint16 out_scount, in_scount;
MonoInst **out_stack;
MonoInst **in_stack;
/* we use this to prevent merging of bblocks covered by different clauses */
guint real_offset;
GSList *seq_points;
// The MonoInst of the last sequence point for the current basic block.
MonoInst *last_seq_point;
// This will hold a list of last sequence points of incoming basic blocks
MonoInst **pred_seq_points;
guint num_pred_seq_points;
GSList *spill_slot_defs;
/* List of call sites in this bblock sorted by pc_offset */
GSList *gc_callsites;
/*
* If this is not null, the basic block is a try hole for all the clauses
* in the list up to and including this element.
*/
GList *clause_holes;
/*
* The region encodes whether the basic block is inside
* a finally, catch, filter or none of these.
*
* If the value is -1, then it is neither finally, catch nor filter
*
* Otherwise the format is:
*
* Bits: | 0-3          | 4-7         | 8-31
*       | clause-flags | MONO_REGION | clause-index
*
*/
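/*
 * Worked decode (following the layout above, value is hypothetical): a region
 * of 0x00000234 carries clause-flags = 0x4 (bits 0-3), MONO_REGION = 0x3
 * (bits 4-7) and clause-index = 0x2 (bits 8-31).
 */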
guint region;
/* The current symbolic register number, used in local register allocation. */
guint32 max_vreg;
};
/* BBlock flags */
enum {
BB_VISITED = 1 << 0,
BB_REACHABLE = 1 << 1,
BB_EXCEPTION_DEAD_OBJ = 1 << 2,
BB_EXCEPTION_UNSAFE = 1 << 3,
BB_EXCEPTION_HANDLER = 1 << 4,
/* for Native Client, mark the blocks that can be jumped to indirectly */
BB_INDIRECT_JUMP_TARGET = 1 << 5,
/* Contains code with some side effects */
BB_HAS_SIDE_EFFECTS = 1 << 6,
};
typedef struct MonoMemcpyArgs {
int size, align;
} MonoMemcpyArgs;
typedef enum {
LLVMArgNone,
/* Scalar argument passed by value */
LLVMArgNormal,
/* Only in ainfo->pair_storage */
LLVMArgInIReg,
/* Only in ainfo->pair_storage */
LLVMArgInFPReg,
/* Valuetype passed in 1-2 consecutive register */
LLVMArgVtypeInReg,
LLVMArgVtypeByVal,
LLVMArgVtypeRetAddr, /* Only on cinfo->ret */
LLVMArgGSharedVt,
/* Fixed size argument passed to/returned from gsharedvt method by ref */
LLVMArgGsharedvtFixed,
/* Fixed size vtype argument passed to/returned from gsharedvt method by ref */
LLVMArgGsharedvtFixedVtype,
/* Variable sized argument passed to/returned from gsharedvt method by ref */
LLVMArgGsharedvtVariable,
/* Vtype passed/returned as one int array argument */
LLVMArgAsIArgs,
/* Vtype passed as a set of fp arguments */
LLVMArgAsFpArgs,
/*
* Only for returns, a structure which
* consists of floats/doubles.
*/
LLVMArgFpStruct,
LLVMArgVtypeByRef,
/* Vtype returned as an int */
LLVMArgVtypeAsScalar,
/* Address to local vtype passed as argument (using register or stack). */
LLVMArgVtypeAddr,
/*
* On WASM, a one element vtype is passed/returned as a scalar with the same
* type as the element.
* esize is the size of the value.
*/
LLVMArgWasmVtypeAsScalar
} LLVMArgStorage;
typedef struct {
LLVMArgStorage storage;
/*
* Only if storage == LLVMArgVtypeInReg/LLVMArgAsFpArgs.
* This contains how the parts of the vtype are passed.
*/
LLVMArgStorage pair_storage [8];
/*
* Only if storage == LLVMArgAsIArgs/LLVMArgAsFpArgs/LLVMArgFpStruct.
* If storage == LLVMArgAsFpArgs, this is the number of arguments
* used to pass the value.
* If storage == LLVMArgFpStruct, this is the number of fields
* in the structure.
*/
int nslots;
/* Only if storage == LLVMArgAsIArgs/LLVMArgAsFpArgs/LLVMArgFpStruct (4/8) */
int esize;
/* Parameter index in the LLVM signature */
int pindex;
MonoType *type;
/* Only if storage == LLVMArgAsFpArgs. Dummy fp args to insert before this arg */
int ndummy_fpargs;
} LLVMArgInfo;
typedef struct {
LLVMArgInfo ret;
/* Whether there is an rgctx argument */
gboolean rgctx_arg;
/* Whether there is an IMT argument */
gboolean imt_arg;
/* Whether there is a dummy extra argument */
gboolean dummy_arg;
/*
* The position of the vret arg in the argument list.
* Only if ret->storage == LLVMArgVtypeRetAddr.
* Should be 0 or 1.
*/
int vret_arg_index;
/* The indexes of various special arguments in the LLVM signature */
int vret_arg_pindex, this_arg_pindex, rgctx_arg_pindex, imt_arg_pindex, dummy_arg_pindex;
/* Inline array of argument info */
/* args [0] is for the this argument if it exists */
LLVMArgInfo args [1];
} LLVMCallInfo;
#define MONO_MAX_SRC_REGS 3
struct MonoInst {
guint16 opcode;
guint8 type; /* stack type */
guint8 flags;
/* used by the register allocator */
gint32 dreg, sreg1, sreg2, sreg3;
MonoInst *next, *prev;
union {
union {
MonoInst *src;
MonoMethodVar *var;
target_mgreg_t const_val;
#if (SIZEOF_REGISTER > TARGET_SIZEOF_VOID_P) && (G_BYTE_ORDER == G_BIG_ENDIAN)
struct {
gpointer p[SIZEOF_REGISTER/TARGET_SIZEOF_VOID_P];
} pdata;
#else
gpointer p;
#endif
MonoMethod *method;
MonoMethodSignature *signature;
MonoBasicBlock **many_blocks;
MonoBasicBlock *target_block;
MonoInst **args;
MonoType *vtype;
MonoClass *klass;
int *phi_args;
MonoCallInst *call_inst;
GList *exception_clauses;
const char *exc_name;
} op [2];
gint64 i8const;
double r8const;
} data;
const unsigned char* cil_code; /* for debugging and bblock splitting */
/* used mostly by the backend to store additional info it may need */
union {
gint32 reg3;
gint32 arg_info;
gint32 size;
MonoMemcpyArgs *memcpy_args; /* in OP_MEMSET and OP_MEMCPY */
gpointer data;
gint shift_amount;
gboolean is_pinvoke; /* for variables in the unmanaged marshal format */
gboolean record_cast_details; /* For CEE_CASTCLASS */
MonoInst *spill_var; /* for OP_MOVE_I4_TO_F/F_TO_I4 and OP_FCONV_TO_R8_X */
guint16 source_opcode; /* OP_XCONV_R8_TO_I4 needs to know which op was used to do proper widening */
int pc_offset; /* OP_GC_LIVERANGE_START/END */
/*
* memory_barrier: MONO_MEMORY_BARRIER_{ACQ,REL,SEQ}
* atomic_load_*: MONO_MEMORY_BARRIER_{ACQ,SEQ}
* atomic_store_*: MONO_MEMORY_BARRIER_{REL,SEQ}
*/
int memory_barrier_kind;
} backend;
MonoClass *klass;
};
struct MonoCallInst {
MonoInst inst;
MonoMethodSignature *signature;
MonoMethod *method;
MonoInst **args;
MonoInst *out_args;
MonoInst *vret_var;
gconstpointer fptr;
MonoJitICallId jit_icall_id;
guint stack_usage;
guint stack_align_amount;
regmask_t used_iregs;
regmask_t used_fregs;
GSList *out_ireg_args;
GSList *out_freg_args;
GSList *outarg_vts;
CallInfo *call_info;
#ifdef ENABLE_LLVM
LLVMCallInfo *cinfo;
int rgctx_arg_reg, imt_arg_reg;
#endif
#ifdef TARGET_ARM
/* See the comment in mini-arm.c!mono_arch_emit_call for RegTypeFP. */
GSList *float_args;
#endif
// Bitfields are at the end to minimize padding for alignment,
// unless an earlier placement would improve locality.
guint is_virtual : 1;
// FIXME tailcall field is written after read; prefer MONO_IS_TAILCALL_OPCODE.
guint tailcall : 1;
/* If this is TRUE, 'fptr' points to a MonoJumpInfo instead of an address. */
guint fptr_is_patch : 1;
/*
* If this is true, then the call returns a vtype in a register using the same
* calling convention as OP_CALL.
*/
guint vret_in_reg : 1;
/* Whether vret_in_reg returns fp values */
guint vret_in_reg_fp : 1;
/* Whether there is an IMT argument and it is dynamic */
guint dynamic_imt_arg : 1;
/* Whether there is an RGCTX argument */
guint32 rgctx_reg : 1;
/* Whether the call will need an unbox trampoline */
guint need_unbox_trampoline : 1;
};
struct MonoCallArgParm {
MonoInst ins;
gint32 size;
gint32 offset;
gint32 offPrm;
};
/*
* flags for MonoInst
* Note: some of the values overlap, because they can't appear
* in the same MonoInst.
*/
enum {
MONO_INST_HAS_METHOD = 1,
MONO_INST_INIT = 1, /* in localloc */
MONO_INST_SINGLE_STEP_LOC = 1, /* in SEQ_POINT */
MONO_INST_IS_DEAD = 2,
MONO_INST_TAILCALL = 4,
MONO_INST_VOLATILE = 4,
MONO_INST_NOTYPECHECK = 4,
MONO_INST_NONEMPTY_STACK = 4, /* in SEQ_POINT */
MONO_INST_UNALIGNED = 8,
MONO_INST_NESTED_CALL = 8, /* in SEQ_POINT */
MONO_INST_CFOLD_TAKEN = 8, /* On branches */
MONO_INST_CFOLD_NOT_TAKEN = 16, /* On branches */
MONO_INST_DEFINITION_HAS_SIDE_EFFECTS = 8,
/* the address of the variable has been taken */
MONO_INST_INDIRECT = 16,
MONO_INST_NORANGECHECK = 16,
/* On loads, the source address can be null */
MONO_INST_FAULT = 32,
/*
* On variables, identifies LMF variables. These variables have a dummy type (int), but
* require stack space for a MonoLMF struct.
*/
MONO_INST_LMF = 32,
/* On loads, the source address points to a constant value */
MONO_INST_INVARIANT_LOAD = 64,
/* On stores, the destination is the stack */
MONO_INST_STACK_STORE = 64,
/* On variables, the variable needs GC tracking */
MONO_INST_GC_TRACK = 128,
/*
 * Set on instructions during code emission which make calls, e.g. OP_CALL, OP_THROW.
* backend.pc_offset will be set to the pc offset at the end of the native call instructions.
*/
MONO_INST_GC_CALLSITE = 128,
/* On comparisons, mark the branch following the condition as likely to be taken */
MONO_INST_LIKELY = 128,
MONO_INST_NONULLCHECK = 128,
};
#define inst_c0 data.op[0].const_val
#define inst_c1 data.op[1].const_val
#define inst_i0 data.op[0].src
#define inst_i1 data.op[1].src
#if (SIZEOF_REGISTER > TARGET_SIZEOF_VOID_P) && (G_BYTE_ORDER == G_BIG_ENDIAN)
#define inst_p0 data.op[0].pdata.p[SIZEOF_REGISTER/TARGET_SIZEOF_VOID_P - 1]
#define inst_p1 data.op[1].pdata.p[SIZEOF_REGISTER/TARGET_SIZEOF_VOID_P - 1]
#else
#define inst_p0 data.op[0].p
#define inst_p1 data.op[1].p
#endif
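/*
 * Note on the pdata indirection above: when registers are wider than
 * pointers on a big-endian target, const_val fills the whole
 * register-sized union slot, so a pointer stored there overlaps the
 * low-order (last) word of pdata.p, which is what inst_p0/inst_p1
 * select.
 */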
#define inst_l data.i8const
#define inst_r data.r8const
#define inst_left data.op[0].src
#define inst_right data.op[1].src
#define inst_newa_len data.op[0].src
#define inst_newa_class data.op[1].klass
/* In _OVF opcodes */
#define inst_exc_name data.op[0].exc_name
#define inst_var data.op[0].var
#define inst_vtype data.op[1].vtype
/* in branch instructions */
#define inst_many_bb data.op[1].many_blocks
#define inst_target_bb data.op[0].target_block
#define inst_true_bb data.op[1].many_blocks[0]
#define inst_false_bb data.op[1].many_blocks[1]
#define inst_basereg sreg1
#define inst_indexreg sreg2
#define inst_destbasereg dreg
#define inst_offset data.op[0].const_val
#define inst_imm data.op[1].const_val
#define inst_call data.op[1].call_inst
#define inst_phi_args data.op[1].phi_args
#define inst_eh_blocks data.op[1].exception_clauses
/* Return the lower 32 bits of the 64 bit immediate in INS */
static inline guint32
ins_get_l_low (MonoInst *ins)
{
return (guint32)(ins->data.i8const & 0xffffffff);
}
/* Return the higher 32 bits of the 64 bit immediate in INS */
static inline guint32
ins_get_l_high (MonoInst *ins)
{
return (guint32)((ins->data.i8const >> 32) & 0xffffffff);
}
static inline void
mono_inst_set_src_registers (MonoInst *ins, int *regs)
{
ins->sreg1 = regs [0];
ins->sreg2 = regs [1];
ins->sreg3 = regs [2];
}
typedef union {
struct {
guint16 tid; /* tree number */
guint16 bid; /* block number */
} pos;
guint32 abs_pos;
} MonoPosition;
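/*
 * Illustrative: pos and abs_pos overlay each other, so a (tid, bid)
 * pair can be compared or hashed as a single 32 bit value via abs_pos;
 * the exact packing of the two 16 bit halves is endian-dependent.
 */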
typedef struct {
MonoPosition first_use, last_use;
} MonoLiveRange;
typedef struct MonoLiveRange2 MonoLiveRange2;
struct MonoLiveRange2 {
int from, to;
MonoLiveRange2 *next;
};
typedef struct {
/* List of live ranges sorted by 'from' */
MonoLiveRange2 *range;
MonoLiveRange2 *last_range;
} MonoLiveInterval;
/*
* Additional information about a variable
*/
struct MonoMethodVar {
guint idx; /* inside cfg->varinfo, cfg->vars */
MonoLiveRange range; /* generated by liveness analysis */
MonoLiveInterval *interval; /* generated by liveness analysis */
int reg; /* != -1 if allocated into a register */
int spill_costs;
MonoBitSet *def_in; /* used by SSA */
MonoInst *def; /* used by SSA */
MonoBasicBlock *def_bb; /* used by SSA */
GList *uses; /* used by SSA */
char cpstate; /* used by SSA conditional constant propagation */
/* The native offsets corresponding to the live range of the variable */
gint32 live_range_start, live_range_end;
/*
 * cfg->varinfo [idx]->dreg can be replaced by a hard reg for OP_REGVAR; this
 * contains the original vreg.
*/
gint32 vreg;
};
/* Generic sharing */
/*
* Flags for which contexts were used in inflating a generic.
*/
enum {
MONO_GENERIC_CONTEXT_USED_CLASS = 1,
MONO_GENERIC_CONTEXT_USED_METHOD = 2
};
enum {
/* Cannot be 0 since this is stored in rgctx slots, and 0 means an uninitialized rgctx slot */
MONO_GSHAREDVT_BOX_TYPE_VTYPE = 1,
MONO_GSHAREDVT_BOX_TYPE_REF = 2,
MONO_GSHAREDVT_BOX_TYPE_NULLABLE = 3
};
typedef enum {
MONO_RGCTX_INFO_STATIC_DATA = 0,
MONO_RGCTX_INFO_KLASS = 1,
MONO_RGCTX_INFO_ELEMENT_KLASS = 2,
MONO_RGCTX_INFO_VTABLE = 3,
MONO_RGCTX_INFO_TYPE = 4,
MONO_RGCTX_INFO_REFLECTION_TYPE = 5,
MONO_RGCTX_INFO_METHOD = 6,
MONO_RGCTX_INFO_GENERIC_METHOD_CODE = 7,
MONO_RGCTX_INFO_GSHAREDVT_OUT_WRAPPER = 8,
MONO_RGCTX_INFO_CLASS_FIELD = 9,
MONO_RGCTX_INFO_METHOD_RGCTX = 10,
MONO_RGCTX_INFO_METHOD_CONTEXT = 11,
MONO_RGCTX_INFO_REMOTING_INVOKE_WITH_CHECK = 12,
MONO_RGCTX_INFO_METHOD_DELEGATE_CODE = 13,
MONO_RGCTX_INFO_CAST_CACHE = 14,
MONO_RGCTX_INFO_ARRAY_ELEMENT_SIZE = 15,
MONO_RGCTX_INFO_VALUE_SIZE = 16,
/* +1 to avoid zero values in rgctx slots */
MONO_RGCTX_INFO_FIELD_OFFSET = 17,
/* Either the code for a gsharedvt method, or the address for a gsharedvt-out trampoline for the method */
/* In llvmonly mode, this is a function descriptor */
MONO_RGCTX_INFO_METHOD_GSHAREDVT_OUT_TRAMPOLINE = 18,
/* Same for virtual calls */
/* In llvmonly mode, this is a function descriptor */
MONO_RGCTX_INFO_METHOD_GSHAREDVT_OUT_TRAMPOLINE_VIRT = 19,
/* Same for calli, associated with a signature */
MONO_RGCTX_INFO_SIG_GSHAREDVT_OUT_TRAMPOLINE_CALLI = 20,
MONO_RGCTX_INFO_SIG_GSHAREDVT_IN_TRAMPOLINE_CALLI = 21,
/* One of MONO_GSHAREDVT_BOX_TYPE */
MONO_RGCTX_INFO_CLASS_BOX_TYPE = 22,
/* Resolves to a MonoGSharedVtMethodRuntimeInfo */
MONO_RGCTX_INFO_METHOD_GSHAREDVT_INFO = 23,
MONO_RGCTX_INFO_LOCAL_OFFSET = 24,
MONO_RGCTX_INFO_MEMCPY = 25,
MONO_RGCTX_INFO_BZERO = 26,
/* The address of Nullable<T>.Box () */
/* In llvmonly mode, this is a function descriptor */
MONO_RGCTX_INFO_NULLABLE_CLASS_BOX = 27,
MONO_RGCTX_INFO_NULLABLE_CLASS_UNBOX = 28,
/* MONO_PATCH_INFO_VCALL_METHOD */
/* In llvmonly mode, this is a function descriptor */
MONO_RGCTX_INFO_VIRT_METHOD_CODE = 29,
/*
* MONO_PATCH_INFO_VCALL_METHOD
* Same as MONO_RGCTX_INFO_CLASS_BOX_TYPE, but for the class
* which implements the method.
*/
MONO_RGCTX_INFO_VIRT_METHOD_BOX_TYPE = 30,
/* Resolves to 2 (TRUE) or 1 (FALSE) */
MONO_RGCTX_INFO_CLASS_IS_REF_OR_CONTAINS_REFS = 31,
/* The MonoDelegateTrampInfo instance */
MONO_RGCTX_INFO_DELEGATE_TRAMP_INFO = 32,
/* Same as MONO_PATCH_INFO_METHOD_FTNDESC */
MONO_RGCTX_INFO_METHOD_FTNDESC = 33,
/* mono_type_size () for a class */
MONO_RGCTX_INFO_CLASS_SIZEOF = 34,
/* The InterpMethod for a method */
MONO_RGCTX_INFO_INTERP_METHOD = 35,
/* The llvmonly interp entry for a method */
MONO_RGCTX_INFO_LLVMONLY_INTERP_ENTRY = 36
} MonoRgctxInfoType;
/* How an rgctx is passed to a method */
typedef enum {
MONO_RGCTX_ACCESS_NONE = 0,
/* Loaded from this->vtable->rgctx */
MONO_RGCTX_ACCESS_THIS = 1,
/* Loaded from an additional mrgctx argument */
MONO_RGCTX_ACCESS_MRGCTX = 2,
/* Loaded from an additional vtable argument */
MONO_RGCTX_ACCESS_VTABLE = 3
} MonoRgctxAccess;
typedef struct _MonoRuntimeGenericContextInfoTemplate {
MonoRgctxInfoType info_type;
gpointer data;
struct _MonoRuntimeGenericContextInfoTemplate *next;
} MonoRuntimeGenericContextInfoTemplate;
typedef struct {
MonoClass *next_subclass;
MonoRuntimeGenericContextInfoTemplate *infos;
GSList *method_templates;
} MonoRuntimeGenericContextTemplate;
typedef struct {
MonoVTable *class_vtable; /* must be the first element */
MonoGenericInst *method_inst;
gpointer infos [MONO_ZERO_LEN_ARRAY];
} MonoMethodRuntimeGenericContext;
/* MONO_ABI_SIZEOF () would include the 'infos' field as well */
#define MONO_SIZEOF_METHOD_RUNTIME_GENERIC_CONTEXT (TARGET_SIZEOF_VOID_P * 2)
#define MONO_RGCTX_SLOT_MAKE_RGCTX(i) (i)
#define MONO_RGCTX_SLOT_MAKE_MRGCTX(i) ((i) | 0x80000000)
#define MONO_RGCTX_SLOT_INDEX(s) ((s) & 0x7fffffff)
#define MONO_RGCTX_SLOT_IS_MRGCTX(s) (((s) & 0x80000000) ? TRUE : FALSE)
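/*
 * Illustrative round trip with the slot macros above: an mrgctx slot
 * keeps its index in the low 31 bits and tags the top bit:
 *
 *   guint32 slot = MONO_RGCTX_SLOT_MAKE_MRGCTX (5);
 *   MONO_RGCTX_SLOT_IS_MRGCTX (slot);  // -> TRUE
 *   MONO_RGCTX_SLOT_INDEX (slot);      // -> 5
 */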
#define MONO_GSHAREDVT_DEL_INVOKE_VT_OFFSET -2
typedef struct {
MonoMethod *method;
MonoRuntimeGenericContextInfoTemplate *entries;
int num_entries, count_entries;
} MonoGSharedVtMethodInfo;
/* This is used by gsharedvt methods to allocate locals and compute local offsets */
typedef struct {
int locals_size;
/*
 * The results of resolving the entries in MonoGSharedVtMethodInfo->entries.
* We use this instead of rgctx slots since these can be loaded using a load instead
* of a call to an rgctx fetch trampoline.
*/
gpointer entries [MONO_ZERO_LEN_ARRAY];
} MonoGSharedVtMethodRuntimeInfo;
typedef struct
{
MonoClass *klass;
MonoMethod *invoke;
MonoMethod *method;
MonoMethodSignature *invoke_sig;
MonoMethodSignature *sig;
gpointer method_ptr;
gpointer invoke_impl;
gpointer impl_this;
gpointer impl_nothis;
gboolean need_rgctx_tramp;
} MonoDelegateTrampInfo;
/*
* A function descriptor, which is a function address + argument pair.
* In llvm-only mode, these are used instead of trampolines to pass
* extra arguments to runtime functions/methods.
*/
typedef struct
{
gpointer addr;
gpointer arg;
MonoMethod *method;
/* Tagged InterpMethod* */
gpointer interp_method;
} MonoFtnDesc;
typedef enum {
#define PATCH_INFO(a,b) MONO_PATCH_INFO_ ## a,
#include "patch-info.h"
#undef PATCH_INFO
MONO_PATCH_INFO_NUM
} MonoJumpInfoType;
typedef struct MonoJumpInfoRgctxEntry MonoJumpInfoRgctxEntry;
typedef struct MonoJumpInfo MonoJumpInfo;
typedef struct MonoJumpInfoGSharedVtCall MonoJumpInfoGSharedVtCall;
// Subset of MonoJumpInfo.
typedef struct MonoJumpInfoTarget {
MonoJumpInfoType type;
gconstpointer target;
} MonoJumpInfoTarget;
// This ordering is mimicked in MONO_JIT_ICALLS.
typedef enum {
MONO_TRAMPOLINE_JIT = 0,
MONO_TRAMPOLINE_JUMP = 1,
MONO_TRAMPOLINE_RGCTX_LAZY_FETCH = 2,
MONO_TRAMPOLINE_AOT = 3,
MONO_TRAMPOLINE_AOT_PLT = 4,
MONO_TRAMPOLINE_DELEGATE = 5,
MONO_TRAMPOLINE_VCALL = 6,
MONO_TRAMPOLINE_NUM = 7,
} MonoTrampolineType;
// Assuming MONO_TRAMPOLINE_JIT / MONO_JIT_ICALL_generic_trampoline_jit are first.
#ifdef __cplusplus
g_static_assert (MONO_TRAMPOLINE_JIT == 0);
#endif
#define mono_trampoline_type_to_jit_icall_id(a) ((a) + MONO_JIT_ICALL_generic_trampoline_jit)
#define mono_jit_icall_id_to_trampoline_type(a) ((MonoTrampolineType)((a) - MONO_JIT_ICALL_generic_trampoline_jit))
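/*
 * Illustrative: the two conversion macros above are inverses, e.g.
 *
 *   mono_jit_icall_id_to_trampoline_type (
 *       mono_trampoline_type_to_jit_icall_id (MONO_TRAMPOLINE_AOT)) == MONO_TRAMPOLINE_AOT
 */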
/* These trampolines return normally to their caller */
#define MONO_TRAMPOLINE_TYPE_MUST_RETURN(t) \
((t) == MONO_TRAMPOLINE_RGCTX_LAZY_FETCH)
/* These trampolines receive an argument directly in a register */
#define MONO_TRAMPOLINE_TYPE_HAS_ARG(t) \
(FALSE)
/* optimization flags */
#define OPTFLAG(id,shift,name,descr) MONO_OPT_ ## id = 1 << shift,
enum {
#include "optflags-def.h"
MONO_OPT_LAST
};
/*
* This structure represents a JIT backend.
*/
typedef struct {
guint have_card_table_wb : 1;
guint have_op_generic_class_init : 1;
guint emulate_mul_div : 1;
guint emulate_div : 1;
guint emulate_long_shift_opts : 1;
guint have_objc_get_selector : 1;
guint have_generalized_imt_trampoline : 1;
gboolean have_op_tailcall_membase : 1;
gboolean have_op_tailcall_reg : 1;
gboolean have_volatile_non_param_register : 1;
guint gshared_supported : 1;
guint use_fpstack : 1;
guint ilp32 : 1;
guint need_got_var : 1;
guint need_div_check : 1;
guint no_unaligned_access : 1;
guint disable_div_with_mul : 1;
guint explicit_null_checks : 1;
guint optimized_div : 1;
guint force_float32 : 1;
int monitor_enter_adjustment;
int dyn_call_param_area;
} MonoBackend;
/* Flags for mini_method_compile () */
typedef enum {
/* Whether to run cctors during JITting */
JIT_FLAG_RUN_CCTORS = (1 << 0),
/* Whether this is an AOT compilation */
JIT_FLAG_AOT = (1 << 1),
/* Whether this is a full AOT compilation */
JIT_FLAG_FULL_AOT = (1 << 2),
/* Whether to compile with LLVM */
JIT_FLAG_LLVM = (1 << 3),
/* Whether to disable direct calls to icall functions */
JIT_FLAG_NO_DIRECT_ICALLS = (1 << 4),
/* Emit explicit null checks */
JIT_FLAG_EXPLICIT_NULL_CHECKS = (1 << 5),
/* Whether to compile in llvm-only mode */
JIT_FLAG_LLVM_ONLY = (1 << 6),
/* Whether calls to pinvoke functions are made directly */
JIT_FLAG_DIRECT_PINVOKE = (1 << 7),
/* Whether this is a compile-all run and the result should be discarded */
JIT_FLAG_DISCARD_RESULTS = (1 << 8),
/* Whether to generate code which can work with the interpreter */
JIT_FLAG_INTERP = (1 << 9),
/* Allow AOT to use all current CPU instructions */
JIT_FLAG_USE_CURRENT_CPU = (1 << 10),
/* Generate code to self-init the method for AOT */
JIT_FLAG_SELF_INIT = (1 << 11),
/* Assume code memory is exec only */
JIT_FLAG_CODE_EXEC_ONLY = (1 << 12),
} JitFlags;
/* Bit-fields in the MonoBasicBlock.region */
#define MONO_REGION_TRY 0
#define MONO_REGION_FINALLY 16
#define MONO_REGION_CATCH 32
#define MONO_REGION_FAULT 64
#define MONO_REGION_FILTER 128
#define MONO_BBLOCK_IS_IN_REGION(bblock, regtype) (((bblock)->region & (0xf << 4)) == (regtype))
#define MONO_REGION_FLAGS(region) ((region) & 0x7)
#define MONO_REGION_CLAUSE_INDEX(region) (((region) >> 8) - 1)
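/*
 * Illustrative decoding with the macros above, matching the bit layout
 * documented on MonoBasicBlock.region (clause index stored biased by +1):
 *
 *   guint region = (3 << 8) | MONO_REGION_CATCH;
 *   MONO_REGION_CLAUSE_INDEX (region);            // -> 2
 *   (region & (0xf << 4)) == MONO_REGION_CATCH;   // region type check
 */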
#define get_vreg_to_inst(cfg, vreg) ((vreg) < (cfg)->vreg_to_inst_len ? (cfg)->vreg_to_inst [(vreg)] : NULL)
#define vreg_is_volatile(cfg, vreg) (G_UNLIKELY (get_vreg_to_inst ((cfg), (vreg)) && (get_vreg_to_inst ((cfg), (vreg))->flags & (MONO_INST_VOLATILE|MONO_INST_INDIRECT))))
#define vreg_is_ref(cfg, vreg) ((vreg) < (cfg)->vreg_is_ref_len ? (cfg)->vreg_is_ref [(vreg)] : 0)
#define vreg_is_mp(cfg, vreg) ((vreg) < (cfg)->vreg_is_mp_len ? (cfg)->vreg_is_mp [(vreg)] : 0)
/*
* Control Flow Graph and compilation unit information
*/
typedef struct {
MonoMethod *method;
MonoMethodHeader *header;
MonoMemPool *mempool;
MonoInst **varinfo;
MonoMethodVar *vars;
MonoInst *ret;
MonoBasicBlock *bb_entry;
MonoBasicBlock *bb_exit;
MonoBasicBlock *bb_init;
MonoBasicBlock **bblocks;
MonoBasicBlock **cil_offset_to_bb;
MonoMemPool *state_pool; /* used by instruction selection */
MonoBasicBlock *cbb; /* used by instruction selection */
MonoInst *prev_ins; /* in decompose */
MonoJumpInfo *patch_info;
MonoJitInfo *jit_info;
MonoJitDynamicMethodInfo *dynamic_info;
guint num_bblocks, max_block_num;
guint locals_start;
guint num_varinfo; /* used items in varinfo */
guint varinfo_count; /* total storage in varinfo */
gint stack_offset;
gint max_ireg;
gint cil_offset_to_bb_len;
MonoRegState *rs;
MonoSpillInfo *spill_info [16]; /* machine register spills */
gint spill_count;
gint spill_info_len [16];
/* unsigned char *cil_code; */
MonoInst *got_var; /* Global Offset Table variable */
MonoInst **locals;
/* Variable holding the mrgctx/vtable address for gshared methods */
MonoInst *rgctx_var;
MonoInst **args;
MonoType **arg_types;
MonoMethod *current_method; /* The method currently processed by method_to_ir () */
MonoMethod *method_to_register; /* The method to register in JIT info tables */
MonoGenericContext *generic_context;
MonoInst *this_arg;
MonoBackend *backend;
/*
* This variable represents the hidden argument holding the vtype
* return address. If the method returns something other than a vtype, or
* the vtype is returned in registers this is NULL.
*/
MonoInst *vret_addr;
/*
* This is used to initialize the cil_code field of MonoInst's.
*/
const unsigned char *ip;
struct MonoAliasingInformation *aliasing_info;
/*
 * A hashtable of region ID -> SP var mappings.
 * An SP var is a place to store the stack pointer (used by handlers).
 * FIXME: We can potentially get rid of this, since it was mainly used
 * for hijacking the return address for handlers.
 */
GHashTable *spvars;
/*
* A hashtable of region ID -> EX var mappings
* An EX var stores the exception object passed to catch/filter blocks
* For finally blocks, it is set to TRUE if we should throw an abort
* once the execution of the finally block is over.
*/
GHashTable *exvars;
GList *ldstr_list; /* used by AOT */
guint real_offset;
GHashTable *cbb_hash;
/* The current virtual register number */
guint32 next_vreg;
MonoRgctxAccess rgctx_access;
MonoGenericSharingContext gsctx;
MonoGenericContext *gsctx_context;
MonoGSharedVtMethodInfo *gsharedvt_info;
gpointer jit_mm;
MonoMemoryManager *mem_manager;
/* Points to the gsharedvt locals area at runtime */
MonoInst *gsharedvt_locals_var;
/* The localloc instruction used to initialize gsharedvt_locals_var */
MonoInst *gsharedvt_locals_var_ins;
/* Points to a MonoGSharedVtMethodRuntimeInfo at runtime */
MonoInst *gsharedvt_info_var;
/* For native-to-managed wrappers, CEE_MONO_JIT_(AT|DE)TACH opcodes */
MonoInst *orig_domain_var;
MonoInst *lmf_var;
MonoInst *lmf_addr_var;
MonoInst *il_state_var;
MonoInst *stack_inbalance_var;
unsigned char *cil_start;
unsigned char *native_code;
guint code_size;
guint code_len;
guint prolog_end;
guint epilog_begin;
guint epilog_end;
regmask_t used_int_regs;
guint32 opt;
guint32 flags;
guint32 comp_done;
guint32 verbose_level;
guint32 stack_usage;
guint32 param_area;
guint32 frame_reg;
gint32 sig_cookie;
guint disable_aot : 1;
guint disable_ssa : 1;
guint disable_llvm : 1;
guint enable_extended_bblocks : 1;
guint run_cctors : 1;
guint need_lmf_area : 1;
guint compile_aot : 1;
guint full_aot : 1;
guint compile_llvm : 1;
guint got_var_allocated : 1;
guint ret_var_is_local : 1;
guint ret_var_set : 1;
guint unverifiable : 1;
guint skip_visibility : 1;
guint disable_llvm_implicit_null_checks : 1;
guint disable_reuse_registers : 1;
guint disable_reuse_stack_slots : 1;
guint disable_reuse_ref_stack_slots : 1;
guint disable_ref_noref_stack_slot_share : 1;
guint disable_initlocals_opt : 1;
guint disable_initlocals_opt_refs : 1;
guint disable_omit_fp : 1;
guint disable_vreg_to_lvreg : 1;
guint disable_deadce_vars : 1;
guint disable_out_of_line_bblocks : 1;
guint disable_direct_icalls : 1;
guint disable_gc_safe_points : 1;
guint direct_pinvoke : 1;
guint create_lmf_var : 1;
/*
* When this is set, the code to push/pop the LMF from the LMF stack is generated as IR
* instead of being generated in emit_prolog ()/emit_epilog ().
*/
guint lmf_ir : 1;
guint gen_write_barriers : 1;
guint init_ref_vars : 1;
guint extend_live_ranges : 1;
guint compute_precise_live_ranges : 1;
guint has_got_slots : 1;
guint uses_rgctx_reg : 1;
guint uses_vtable_reg : 1;
guint keep_cil_nops : 1;
guint gen_seq_points : 1;
/* Generate seq points for use by the debugger */
guint gen_sdb_seq_points : 1;
guint explicit_null_checks : 1;
guint compute_gc_maps : 1;
guint soft_breakpoints : 1;
guint arch_eh_jit_info : 1;
guint has_calls : 1;
guint has_emulated_ops : 1;
guint has_indirection : 1;
guint has_atomic_add_i4 : 1;
guint has_atomic_exchange_i4 : 1;
guint has_atomic_cas_i4 : 1;
guint check_pinvoke_callconv : 1;
guint has_unwind_info_for_epilog : 1;
guint disable_inline : 1;
/* Disable inlining into caller */
guint no_inline : 1;
guint gshared : 1;
guint gsharedvt : 1;
guint r4fp : 1;
guint llvm_only : 1;
guint interp : 1;
guint use_current_cpu : 1;
guint self_init : 1;
guint code_exec_only : 1;
guint interp_entry_only : 1;
guint after_method_to_ir : 1;
guint disable_inline_rgctx_fetch : 1;
guint deopt : 1;
guint8 uses_simd_intrinsics;
int r4_stack_type;
gpointer debug_info;
guint32 lmf_offset;
guint16 *intvars;
MonoProfilerCoverageInfo *coverage_info;
GHashTable *token_info_hash;
MonoCompileArch arch;
guint32 inline_depth;
/* Size of memory reserved for thunks */
int thunk_area;
/* Thunks */
guint8 *thunks;
/* Offset between the start of code and the thunks area */
int thunks_offset;
MonoExceptionType exception_type; /* MONO_EXCEPTION_* */
guint32 exception_data;
char* exception_message;
gpointer exception_ptr;
guint8 * encoded_unwind_ops;
guint32 encoded_unwind_ops_len;
GSList* unwind_ops;
GList* dont_inline;
/* Fields used by the local reg allocator */
void* reginfo;
int reginfo_len;
/* Maps vregs to their associated MonoInst's */
/* vregs with an associated MonoInst are 'global' while others are 'local' */
MonoInst **vreg_to_inst;
/* Size of above array */
guint32 vreg_to_inst_len;
/* Marks vregs which hold a GC ref */
/* FIXME: Use a bitmap */
gboolean *vreg_is_ref;
/* Size of above array */
guint32 vreg_is_ref_len;
/* Marks vregs which hold a managed pointer */
/* FIXME: Use a bitmap */
gboolean *vreg_is_mp;
/* Size of above array */
guint32 vreg_is_mp_len;
/*
* The original method to compile, differs from 'method' when doing generic
* sharing.
*/
MonoMethod *orig_method;
/* Patches which describe absolute addresses embedded into the native code */
GHashTable *abs_patches;
/* Used to implement move_i4_to_f on archs that can't do a raw
 * copy between an ireg and a freg. This is an int32 var. */
MonoInst *iconv_raw_var;
/* Used to implement fconv_to_r8_x. This is a double (8 bytes) var. */
MonoInst *fconv_to_r8_x_var;
/* Used to implement SIMD constructors. This is a vector (16 bytes) var. */
MonoInst *simd_ctor_var;
/* Used to implement dyn_call */
MonoInst *dyn_call_var;
MonoInst *last_seq_point;
/*
* List of sequence points represented as IL offset+native offset pairs.
* Allocated using glib.
* IL offset can be -1 or 0xffffff to refer to the sequence points
* inside the prolog and epilog used to implement method entry/exit events.
*/
GPtrArray *seq_points;
/* The encoded sequence point info */
struct MonoSeqPointInfo *seq_point_info;
/* Method headers which need to be freed after compilation */
GSList *headers_to_free;
/* Used by AOT */
guint32 got_offset, ex_info_offset, method_info_offset, method_index;
guint32 aot_method_flags;
/* For llvm */
guint32 got_access_count;
gpointer llvmonly_init_cond;
gpointer llvm_dummy_info_var, llvm_info_var;
/* Symbol used to refer to this method in generated assembly */
char *asm_symbol;
char *asm_debug_symbol;
char *llvm_method_name;
int castclass_cache_index;
MonoJitExceptionInfo *llvm_ex_info;
guint32 llvm_ex_info_len;
int llvm_this_reg, llvm_this_offset;
GSList *try_block_holes;
/* DWARF location list for 'this' */
GSList *this_loclist;
/* DWARF location list for 'rgctx_var' */
GSList *rgctx_loclist;
int *gsharedvt_vreg_to_idx;
GSList *signatures;
GSList *interp_in_signatures;
/* GC Maps */
/* The offsets of the locals area relative to the frame pointer */
gint locals_min_stack_offset, locals_max_stack_offset;
/* The current CFA rule */
int cur_cfa_reg, cur_cfa_offset;
/* The final CFA rule at the end of the prolog */
int cfa_reg, cfa_offset;
/* Points to a MonoCompileGC */
gpointer gc_info;
/*
* The encoded GC map along with its size. This contains binary data so it can be saved in an AOT
* image etc, but it requires a 4 byte alignment.
*/
guint8 *gc_map;
guint32 gc_map_size;
/* Error handling */
MonoError* error;
MonoErrorInternal error_value;
/* pointer to context datastructure used for graph dumping */
MonoGraphDumper *gdump_ctx;
gboolean *clause_is_dead;
/* Stats */
int stat_allocate_var;
int stat_locals_stack_size;
int stat_basic_blocks;
int stat_cil_code_size;
int stat_n_regvars;
int stat_inlineable_methods;
int stat_inlined_methods;
int stat_code_reallocs;
MonoProfilerCallInstrumentationFlags prof_flags;
gboolean prof_coverage;
/* For deduplication */
gboolean skip;
} MonoCompile;
#define MONO_CFG_PROFILE(cfg, flag) \
G_UNLIKELY ((cfg)->prof_flags & MONO_PROFILER_CALL_INSTRUMENTATION_ ## flag)
#define MONO_CFG_PROFILE_CALL_CONTEXT(cfg) \
(MONO_CFG_PROFILE (cfg, ENTER_CONTEXT) || MONO_CFG_PROFILE (cfg, LEAVE_CONTEXT))
typedef enum {
MONO_CFG_HAS_ALLOCA = 1 << 0,
MONO_CFG_HAS_CALLS = 1 << 1,
MONO_CFG_HAS_LDELEMA = 1 << 2,
MONO_CFG_HAS_VARARGS = 1 << 3,
MONO_CFG_HAS_TAILCALL = 1 << 4,
MONO_CFG_HAS_FPOUT = 1 << 5, /* there are fp values passed in int registers */
MONO_CFG_HAS_SPILLUP = 1 << 6, /* spill var slots are allocated from bottom to top */
MONO_CFG_HAS_CHECK_THIS = 1 << 7,
MONO_CFG_NEEDS_DECOMPOSE = 1 << 8,
MONO_CFG_HAS_TYPE_CHECK = 1 << 9
} MonoCompileFlags;
typedef enum {
MONO_CFG_USES_SIMD_INTRINSICS = 1 << 0,
MONO_CFG_USES_SIMD_INTRINSICS_SIMPLIFY_INDIRECTION = 1 << 1
} MonoSimdIntrinsicsFlags;
typedef struct {
gint32 methods_compiled;
gint32 methods_aot;
gint32 methods_aot_llvm;
gint32 methods_lookups;
gint32 allocate_var;
gint32 cil_code_size;
gint32 native_code_size;
gint32 code_reallocs;
gint32 max_code_size_ratio;
gint32 biggest_method_size;
gint32 allocated_code_size;
gint32 allocated_seq_points_size;
gint32 inlineable_methods;
gint32 inlined_methods;
gint32 basic_blocks;
gint32 max_basic_blocks;
gint32 locals_stack_size;
gint32 regvars;
gint32 generic_virtual_invocations;
gint32 alias_found;
gint32 alias_removed;
gint32 loads_eliminated;
gint32 stores_eliminated;
gint32 optimized_divisions;
gint32 methods_with_llvm;
gint32 methods_without_llvm;
gint32 methods_with_interp;
char *max_ratio_method;
char *biggest_method;
gint64 jit_method_to_ir;
gint64 jit_liveness_handle_exception_clauses;
gint64 jit_handle_out_of_line_bblock;
gint64 jit_decompose_long_opts;
gint64 jit_decompose_typechecks;
gint64 jit_local_cprop;
gint64 jit_local_emulate_ops;
gint64 jit_optimize_branches;
gint64 jit_handle_global_vregs;
gint64 jit_local_deadce;
gint64 jit_local_alias_analysis;
gint64 jit_if_conversion;
gint64 jit_bb_ordering;
gint64 jit_compile_dominator_info;
gint64 jit_compute_natural_loops;
gint64 jit_insert_safepoints;
gint64 jit_ssa_compute;
gint64 jit_ssa_cprop;
gint64 jit_ssa_deadce;
gint64 jit_perform_abc_removal;
gint64 jit_ssa_remove;
gint64 jit_local_cprop2;
gint64 jit_handle_global_vregs2;
gint64 jit_local_deadce2;
gint64 jit_optimize_branches2;
gint64 jit_decompose_vtype_opts;
gint64 jit_decompose_array_access_opts;
gint64 jit_liveness_handle_exception_clauses2;
gint64 jit_analyze_liveness;
gint64 jit_linear_scan;
gint64 jit_arch_allocate_vars;
gint64 jit_spill_global_vars;
gint64 jit_local_cprop3;
gint64 jit_local_deadce3;
gint64 jit_codegen;
gint64 jit_create_jit_info;
gint64 jit_gc_create_gc_map;
gint64 jit_save_seq_point_info;
gint64 jit_time;
gboolean enabled;
} MonoJitStats;
extern MonoJitStats mono_jit_stats;
static inline void
get_jit_stats (gint64 *methods_compiled, gint64 *cil_code_size_bytes, gint64 *native_code_size_bytes, gint64 *jit_time)
{
*methods_compiled = mono_jit_stats.methods_compiled;
*cil_code_size_bytes = mono_jit_stats.cil_code_size;
*native_code_size_bytes = mono_jit_stats.native_code_size;
*jit_time = mono_jit_stats.jit_time;
}
guint32
mono_get_exception_count (void);
static inline void
get_exception_stats (guint32 *exception_count)
{
*exception_count = mono_get_exception_count ();
}
/* opcodes: value assigned after all the CIL opcodes */
#ifdef MINI_OP
#undef MINI_OP
#endif
#ifdef MINI_OP3
#undef MINI_OP3
#endif
#define MINI_OP(a,b,dest,src1,src2) a,
#define MINI_OP3(a,b,dest,src1,src2,src3) a,
enum {
OP_START = MONO_CEE_LAST - 1,
#include "mini-ops.h"
OP_LAST
};
#undef MINI_OP
#undef MINI_OP3
#if TARGET_SIZEOF_VOID_P == 8
#define OP_PCONST OP_I8CONST
#define OP_DUMMY_PCONST OP_DUMMY_I8CONST
#define OP_PADD OP_LADD
#define OP_PADD_IMM OP_LADD_IMM
#define OP_PSUB_IMM OP_LSUB_IMM
#define OP_PAND_IMM OP_LAND_IMM
#define OP_PXOR_IMM OP_LXOR_IMM
#define OP_PSUB OP_LSUB
#define OP_PMUL OP_LMUL
#define OP_PMUL_IMM OP_LMUL_IMM
#define OP_POR_IMM OP_LOR_IMM
#define OP_PNEG OP_LNEG
#define OP_PCONV_TO_I1 OP_LCONV_TO_I1
#define OP_PCONV_TO_U1 OP_LCONV_TO_U1
#define OP_PCONV_TO_I2 OP_LCONV_TO_I2
#define OP_PCONV_TO_U2 OP_LCONV_TO_U2
#define OP_PCONV_TO_OVF_I1_UN OP_LCONV_TO_OVF_I1_UN
#define OP_PCONV_TO_OVF_I1 OP_LCONV_TO_OVF_I1
#define OP_PBEQ OP_LBEQ
#define OP_PCEQ OP_LCEQ
#define OP_PCLT OP_LCLT
#define OP_PCGT OP_LCGT
#define OP_PCLT_UN OP_LCLT_UN
#define OP_PCGT_UN OP_LCGT_UN
#define OP_PBNE_UN OP_LBNE_UN
#define OP_PBGE_UN OP_LBGE_UN
#define OP_PBLT_UN OP_LBLT_UN
#define OP_PBGE OP_LBGE
#define OP_STOREP_MEMBASE_REG OP_STOREI8_MEMBASE_REG
#define OP_STOREP_MEMBASE_IMM OP_STOREI8_MEMBASE_IMM
#else
#define OP_PCONST OP_ICONST
#define OP_DUMMY_PCONST OP_DUMMY_ICONST
#define OP_PADD OP_IADD
#define OP_PADD_IMM OP_IADD_IMM
#define OP_PSUB_IMM OP_ISUB_IMM
#define OP_PAND_IMM OP_IAND_IMM
#define OP_PXOR_IMM OP_IXOR_IMM
#define OP_PSUB OP_ISUB
#define OP_PMUL OP_IMUL
#define OP_PMUL_IMM OP_IMUL_IMM
#define OP_POR_IMM OP_IOR_IMM
#define OP_PNEG OP_INEG
#define OP_PCONV_TO_I1 OP_ICONV_TO_I1
#define OP_PCONV_TO_U1 OP_ICONV_TO_U1
#define OP_PCONV_TO_I2 OP_ICONV_TO_I2
#define OP_PCONV_TO_U2 OP_ICONV_TO_U2
#define OP_PCONV_TO_OVF_I1_UN OP_ICONV_TO_OVF_I1_UN
#define OP_PCONV_TO_OVF_I1 OP_ICONV_TO_OVF_I1
#define OP_PBEQ OP_IBEQ
#define OP_PCEQ OP_ICEQ
#define OP_PCLT OP_ICLT
#define OP_PCGT OP_ICGT
#define OP_PCLT_UN OP_ICLT_UN
#define OP_PCGT_UN OP_ICGT_UN
#define OP_PBNE_UN OP_IBNE_UN
#define OP_PBGE_UN OP_IBGE_UN
#define OP_PBLT_UN OP_IBLT_UN
#define OP_PBGE OP_IBGE
#define OP_STOREP_MEMBASE_REG OP_STOREI4_MEMBASE_REG
#define OP_STOREP_MEMBASE_IMM OP_STOREI4_MEMBASE_IMM
#endif
/* Opcodes to load/store regsize quantities */
#if defined (MONO_ARCH_ILP32)
#define OP_LOADR_MEMBASE OP_LOADI8_MEMBASE
#define OP_STORER_MEMBASE_REG OP_STOREI8_MEMBASE_REG
#else
#define OP_LOADR_MEMBASE OP_LOAD_MEMBASE
#define OP_STORER_MEMBASE_REG OP_STORE_MEMBASE_REG
#endif
typedef enum {
STACK_INV,
STACK_I4,
STACK_I8,
STACK_PTR,
STACK_R8,
STACK_MP,
STACK_OBJ,
STACK_VTYPE,
STACK_R4,
STACK_MAX
} MonoStackType;
typedef struct {
union {
double r8;
gint32 i4;
gint64 i8;
gpointer p;
MonoClass *klass;
} data;
int type;
} StackSlot;
extern const MonoInstSpec MONO_ARCH_CPU_SPEC [];
#define MONO_ARCH_CPU_SPEC_IDX_COMBINE(a) a ## _idx
#define MONO_ARCH_CPU_SPEC_IDX(a) MONO_ARCH_CPU_SPEC_IDX_COMBINE(a)
extern const guint16 MONO_ARCH_CPU_SPEC_IDX(MONO_ARCH_CPU_SPEC) [];
#define ins_get_spec(op) ((const char*)&MONO_ARCH_CPU_SPEC [MONO_ARCH_CPU_SPEC_IDX(MONO_ARCH_CPU_SPEC)[(op) - OP_LOAD]])
#ifndef DISABLE_JIT
static inline int
ins_get_size (int opcode)
{
return ((guint8 *)ins_get_spec (opcode))[MONO_INST_LEN];
}
guint8*
mini_realloc_code_slow (MonoCompile *cfg, int size);
static inline guint8*
realloc_code (MonoCompile *cfg, int size)
{
const int EXTRA_CODE_SPACE = 16;
const int code_len = cfg->code_len;
if (G_UNLIKELY ((guint)(code_len + size) > (cfg->code_size - EXTRA_CODE_SPACE)))
return mini_realloc_code_slow (cfg, size);
return cfg->native_code + code_len;
}
static inline void
set_code_len (MonoCompile *cfg, int len)
{
g_assert ((guint)len <= cfg->code_size);
cfg->code_len = len;
}
static inline void
set_code_cursor (MonoCompile *cfg, void* void_code)
{
guint8* code = (guint8*)void_code;
g_assert (code <= (cfg->native_code + cfg->code_size));
set_code_len (cfg, code - cfg->native_code);
}
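/*
 * Typical backend usage sketch for the two helpers above: reserve
 * space, emit (advancing the cursor), then publish the new length:
 *
 *   guint8 *code = realloc_code (cfg, max_len);
 *   ... emit at most max_len bytes, advancing 'code' ...
 *   set_code_cursor (cfg, code);
 */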
#endif
enum {
MONO_COMP_DOM = 1,
MONO_COMP_IDOM = 2,
MONO_COMP_DFRONTIER = 4,
MONO_COMP_DOM_REV = 8,
MONO_COMP_LIVENESS = 16,
MONO_COMP_SSA = 32,
MONO_COMP_SSA_DEF_USE = 64,
MONO_COMP_REACHABILITY = 128,
MONO_COMP_LOOPS = 256
};
typedef enum {
MONO_GRAPH_CFG = 1,
MONO_GRAPH_DTREE = 2,
MONO_GRAPH_CFG_CODE = 4,
MONO_GRAPH_CFG_SSA = 8,
MONO_GRAPH_CFG_OPTCODE = 16
} MonoGraphOptions;
typedef struct {
guint16 size;
guint16 offset;
guint8 pad;
} MonoJitArgumentInfo;
enum {
BRANCH_NOT_TAKEN,
BRANCH_TAKEN,
BRANCH_UNDEF
};
typedef enum {
CMP_EQ,
CMP_NE,
CMP_LE,
CMP_GE,
CMP_LT,
CMP_GT,
CMP_LE_UN,
CMP_GE_UN,
CMP_LT_UN,
CMP_GT_UN,
CMP_ORD,
CMP_UNORD
} CompRelation;
typedef enum {
CMP_TYPE_L,
CMP_TYPE_I,
CMP_TYPE_F
} CompType;
/* Implicit exceptions */
enum {
MONO_EXC_INDEX_OUT_OF_RANGE,
MONO_EXC_OVERFLOW,
MONO_EXC_ARITHMETIC,
MONO_EXC_DIVIDE_BY_ZERO,
MONO_EXC_INVALID_CAST,
MONO_EXC_NULL_REF,
MONO_EXC_ARRAY_TYPE_MISMATCH,
MONO_EXC_ARGUMENT,
MONO_EXC_ARGUMENT_OUT_OF_RANGE,
MONO_EXC_ARGUMENT_OUT_OF_MEMORY,
MONO_EXC_INTRINS_NUM
};
/*
* Information about a trampoline function.
*/
struct MonoTrampInfo
{
/*
* The native code of the trampoline. Not owned by this structure.
*/
guint8 *code;
guint32 code_size;
/*
* The name of the trampoline which can be used in AOT/xdebug. Owned by this
* structure.
*/
char *name;
/*
* Patches required by the trampoline when aot-ing. Owned by this structure.
*/
MonoJumpInfo *ji;
/*
* Unwind information. Owned by this structure.
*/
GSList *unwind_ops;
MonoJitICallInfo *jit_icall_info;
/*
* The method the trampoline is associated with, if any.
*/
MonoMethod *method;
/*
* Encoded unwind info loaded from AOT images
*/
guint8 *uw_info;
guint32 uw_info_len;
/* Whether uw_info is owned by this structure */
gboolean owns_uw_info;
};
typedef void (*MonoInstFunc) (MonoInst *tree, gpointer data);
enum {
FILTER_IL_SEQ_POINT = 1 << 0,
FILTER_NOP = 1 << 1,
};
static inline gboolean
mono_inst_filter (MonoInst *ins, int filter)
{
if (!ins || !filter)
return FALSE;
if ((filter & FILTER_IL_SEQ_POINT) && ins->opcode == OP_IL_SEQ_POINT)
return TRUE;
if ((filter & FILTER_NOP) && ins->opcode == OP_NOP)
return TRUE;
return FALSE;
}
static inline MonoInst*
mono_inst_next (MonoInst *ins, int filter)
{
do {
ins = ins->next;
} while (mono_inst_filter (ins, filter));
return ins;
}
static inline MonoInst*
mono_inst_prev (MonoInst *ins, int filter)
{
do {
ins = ins->prev;
} while (mono_inst_filter (ins, filter));
return ins;
}
static inline MonoInst*
mono_bb_first_inst (MonoBasicBlock *bb, int filter)
{
MonoInst *ins = bb->code;
if (mono_inst_filter (ins, filter))
ins = mono_inst_next (ins, filter);
return ins;
}
static inline MonoInst*
mono_bb_last_inst (MonoBasicBlock *bb, int filter)
{
MonoInst *ins = bb->last_ins;
if (mono_inst_filter (ins, filter))
ins = mono_inst_prev (ins, filter);
return ins;
}
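/*
 * Illustrative walk over the "real" instructions of a bblock using the
 * helpers above, skipping IL sequence points and nops:
 *
 *   int filter = FILTER_IL_SEQ_POINT | FILTER_NOP;
 *   for (MonoInst *ins = mono_bb_first_inst (bb, filter); ins; ins = mono_inst_next (ins, filter))
 *       ...;
 */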
/* profiler support */
void mini_add_profiler_argument (const char *desc);
void mini_profiler_emit_enter (MonoCompile *cfg);
void mini_profiler_emit_leave (MonoCompile *cfg, MonoInst *ret);
void mini_profiler_emit_tail_call (MonoCompile *cfg, MonoMethod *target);
void mini_profiler_emit_call_finally (MonoCompile *cfg, MonoMethodHeader *header, unsigned char *ip, guint32 index, MonoExceptionClause *clause);
void mini_profiler_context_enable (void);
gpointer mini_profiler_context_get_this (MonoProfilerCallContext *ctx);
gpointer mini_profiler_context_get_argument (MonoProfilerCallContext *ctx, guint32 pos);
gpointer mini_profiler_context_get_local (MonoProfilerCallContext *ctx, guint32 pos);
gpointer mini_profiler_context_get_result (MonoProfilerCallContext *ctx);
void mini_profiler_context_free_buffer (gpointer buffer);
/* graph dumping */
void mono_cfg_dump_create_context (MonoCompile *cfg);
void mono_cfg_dump_begin_group (MonoCompile *cfg);
void mono_cfg_dump_close_group (MonoCompile *cfg);
void mono_cfg_dump_ir (MonoCompile *cfg, const char *phase_name);
/* helper methods */
MonoInst* mono_find_spvar_for_region (MonoCompile *cfg, int region);
MonoInst* mono_find_exvar_for_offset (MonoCompile *cfg, int offset);
int mono_get_block_region_notry (MonoCompile *cfg, int region);
void mono_bblock_add_inst (MonoBasicBlock *bb, MonoInst *inst);
void mono_bblock_insert_after_ins (MonoBasicBlock *bb, MonoInst *ins, MonoInst *ins_to_insert);
void mono_bblock_insert_before_ins (MonoBasicBlock *bb, MonoInst *ins, MonoInst *ins_to_insert);
void mono_verify_bblock (MonoBasicBlock *bb);
void mono_verify_cfg (MonoCompile *cfg);
void mono_constant_fold (MonoCompile *cfg);
MonoInst* mono_constant_fold_ins (MonoCompile *cfg, MonoInst *ins, MonoInst *arg1, MonoInst *arg2, gboolean overwrite);
int mono_eval_cond_branch (MonoInst *branch);
int mono_is_power_of_two (guint32 val);
void mono_cprop_local (MonoCompile *cfg, MonoBasicBlock *bb, MonoInst **acp, int acp_size);
MonoInst* mono_compile_create_var (MonoCompile *cfg, MonoType *type, int opcode);
MonoInst* mono_compile_create_var_for_vreg (MonoCompile *cfg, MonoType *type, int opcode, int vreg);
void mono_compile_make_var_load (MonoCompile *cfg, MonoInst *dest, gssize var_index);
MonoInst* mini_get_int_to_float_spill_area (MonoCompile *cfg);
MonoType* mono_type_from_stack_type (MonoInst *ins);
guint32 mono_alloc_ireg (MonoCompile *cfg);
guint32 mono_alloc_lreg (MonoCompile *cfg);
guint32 mono_alloc_freg (MonoCompile *cfg);
guint32 mono_alloc_preg (MonoCompile *cfg);
guint32 mono_alloc_dreg (MonoCompile *cfg, MonoStackType stack_type);
guint32 mono_alloc_ireg_ref (MonoCompile *cfg);
guint32 mono_alloc_ireg_mp (MonoCompile *cfg);
guint32 mono_alloc_ireg_copy (MonoCompile *cfg, guint32 vreg);
void mono_mark_vreg_as_ref (MonoCompile *cfg, int vreg);
void mono_mark_vreg_as_mp (MonoCompile *cfg, int vreg);
void mono_link_bblock (MonoCompile *cfg, MonoBasicBlock *from, MonoBasicBlock* to);
void mono_unlink_bblock (MonoCompile *cfg, MonoBasicBlock *from, MonoBasicBlock* to);
gboolean mono_bblocks_linked (MonoBasicBlock *bb1, MonoBasicBlock *bb2);
void mono_remove_bblock (MonoCompile *cfg, MonoBasicBlock *bb);
void mono_nullify_basic_block (MonoBasicBlock *bb);
void mono_merge_basic_blocks (MonoCompile *cfg, MonoBasicBlock *bb, MonoBasicBlock *bbn);
void mono_optimize_branches (MonoCompile *cfg);
void mono_blockset_print (MonoCompile *cfg, MonoBitSet *set, const char *name, guint idom);
void mono_print_ins_index (int i, MonoInst *ins);
GString *mono_print_ins_index_strbuf (int i, MonoInst *ins);
void mono_print_ins (MonoInst *ins);
void mono_print_bb (MonoBasicBlock *bb, const char *msg);
void mono_print_code (MonoCompile *cfg, const char *msg);
const char* mono_inst_name (int op);
int mono_op_to_op_imm (int opcode);
int mono_op_imm_to_op (int opcode);
int mono_load_membase_to_load_mem (int opcode);
gboolean mono_op_no_side_effects (int opcode);
gboolean mono_ins_no_side_effects (MonoInst *ins);
guint mono_type_to_load_membase (MonoCompile *cfg, MonoType *type);
guint mono_type_to_store_membase (MonoCompile *cfg, MonoType *type);
guint32 mono_type_to_stloc_coerce (MonoType *type);
guint mini_type_to_stind (MonoCompile* cfg, MonoType *type);
MonoStackType mini_type_to_stack_type (MonoCompile *cfg, MonoType *t);
MonoJitInfo* mini_lookup_method (MonoMethod *method, MonoMethod *shared);
guint32 mono_reverse_branch_op (guint32 opcode);
void mono_disassemble_code (MonoCompile *cfg, guint8 *code, int size, char *id);
MonoJumpInfoTarget mono_call_to_patch (MonoCallInst *call);
void mono_call_add_patch_info (MonoCompile *cfg, MonoCallInst *call, int ip);
void mono_add_patch_info (MonoCompile *cfg, int ip, MonoJumpInfoType type, gconstpointer target);
void mono_add_patch_info_rel (MonoCompile *cfg, int ip, MonoJumpInfoType type, gconstpointer target, int relocation);
void mono_remove_patch_info (MonoCompile *cfg, int ip);
gpointer mono_jit_compile_method_inner (MonoMethod *method, int opt, MonoError *error);
GList *mono_varlist_insert_sorted (MonoCompile *cfg, GList *list, MonoMethodVar *mv, int sort_type);
GList *mono_varlist_sort (MonoCompile *cfg, GList *list, int sort_type);
void mono_analyze_liveness (MonoCompile *cfg);
void mono_analyze_liveness_gc (MonoCompile *cfg);
void mono_linear_scan (MonoCompile *cfg, GList *vars, GList *regs, regmask_t *used_mask);
void mono_global_regalloc (MonoCompile *cfg);
void mono_create_jump_table (MonoCompile *cfg, MonoInst *label, MonoBasicBlock **bbs, int num_blocks);
MonoCompile *mini_method_compile (MonoMethod *method, guint32 opts, JitFlags flags, int parts, int aot_method_index);
void mono_destroy_compile (MonoCompile *cfg);
void mono_empty_compile (MonoCompile *cfg);
MonoJitICallInfo *mono_find_jit_opcode_emulation (int opcode);
void mono_print_ins_index (int i, MonoInst *ins);
void mono_print_ins (MonoInst *ins);
gboolean mini_assembly_can_skip_verification (MonoMethod *method);
MonoInst *mono_get_got_var (MonoCompile *cfg);
void mono_add_seq_point (MonoCompile *cfg, MonoBasicBlock *bb, MonoInst *ins, int native_offset);
void mono_add_var_location (MonoCompile *cfg, MonoInst *var, gboolean is_reg, int reg, int offset, int from, int to);
MonoInst* mono_emit_jit_icall_id (MonoCompile *cfg, MonoJitICallId jit_icall_id, MonoInst **args);
#define mono_emit_jit_icall(cfg, name, args) (mono_emit_jit_icall_id ((cfg), MONO_JIT_ICALL_ ## name, (args)))
MonoInst* mono_emit_jit_icall_by_info (MonoCompile *cfg, int il_offset, MonoJitICallInfo *info, MonoInst **args);
MonoInst* mono_emit_method_call (MonoCompile *cfg, MonoMethod *method, MonoInst **args, MonoInst *this_ins);
gboolean mini_should_insert_breakpoint (MonoMethod *method);
int mono_target_pagesize (void);
gboolean mini_class_is_system_array (MonoClass *klass);
void mono_linterval_add_range (MonoCompile *cfg, MonoLiveInterval *interval, int from, int to);
void mono_linterval_print (MonoLiveInterval *interval);
void mono_linterval_print_nl (MonoLiveInterval *interval);
gboolean mono_linterval_covers (MonoLiveInterval *interval, int pos);
gint32 mono_linterval_get_intersect_pos (MonoLiveInterval *i1, MonoLiveInterval *i2);
void mono_linterval_split (MonoCompile *cfg, MonoLiveInterval *interval, MonoLiveInterval **i1, MonoLiveInterval **i2, int pos);
void mono_liveness_handle_exception_clauses (MonoCompile *cfg);
gpointer mono_realloc_native_code (MonoCompile *cfg);
void mono_register_opcode_emulation (int opcode, const char* name, MonoMethodSignature *sig, gpointer func, gboolean no_throw);
void mono_draw_graph (MonoCompile *cfg, MonoGraphOptions draw_options);
void mono_add_ins_to_end (MonoBasicBlock *bb, MonoInst *inst);
void mono_replace_ins (MonoCompile *cfg, MonoBasicBlock *bb, MonoInst *ins, MonoInst **prev, MonoBasicBlock *first_bb, MonoBasicBlock *last_bb);
void mini_register_opcode_emulation (int opcode, MonoJitICallInfo *jit_icall_info, const char *name, MonoMethodSignature *sig, gpointer func, const char *symbol, gboolean no_throw);
#ifdef __cplusplus
template <typename T>
inline void
mini_register_opcode_emulation (int opcode, MonoJitICallInfo *jit_icall_info, const char *name, MonoMethodSignature *sig, T func, const char *symbol, gboolean no_throw)
{
mini_register_opcode_emulation (opcode, jit_icall_info, name, sig, (gpointer)func, symbol, no_throw);
}
#endif // __cplusplus
void mono_trampolines_init (void);
guint8 * mono_get_trampoline_code (MonoTrampolineType tramp_type);
gpointer mono_create_specific_trampoline (MonoMemoryManager *mem_manager, gpointer arg1, MonoTrampolineType tramp_type, guint32 *code_len);
gpointer mono_create_jump_trampoline (MonoMethod *method,
gboolean add_sync_wrapper,
MonoError *error);
gpointer mono_create_jit_trampoline (MonoMethod *method, MonoError *error);
gpointer mono_create_jit_trampoline_from_token (MonoImage *image, guint32 token);
gpointer mono_create_delegate_trampoline (MonoClass *klass);
MonoDelegateTrampInfo* mono_create_delegate_trampoline_info (MonoClass *klass, MonoMethod *method);
gpointer mono_create_delegate_virtual_trampoline (MonoClass *klass, MonoMethod *method);
gpointer mono_create_rgctx_lazy_fetch_trampoline (guint32 offset);
gpointer mono_create_static_rgctx_trampoline (MonoMethod *m, gpointer addr);
gpointer mono_create_ftnptr_arg_trampoline (gpointer arg, gpointer addr);
guint32 mono_find_rgctx_lazy_fetch_trampoline_by_addr (gconstpointer addr);
gpointer mono_magic_trampoline (host_mgreg_t *regs, guint8 *code, gpointer arg, guint8* tramp);
gpointer mono_delegate_trampoline (host_mgreg_t *regs, guint8 *code, gpointer *tramp_data, guint8* tramp);
gpointer mono_aot_trampoline (host_mgreg_t *regs, guint8 *code, guint8 *token_info,
guint8* tramp);
gpointer mono_aot_plt_trampoline (host_mgreg_t *regs, guint8 *code, guint8 *token_info,
guint8* tramp);
gconstpointer mono_get_trampoline_func (MonoTrampolineType tramp_type);
gpointer mini_get_vtable_trampoline (MonoVTable *vt, int slot_index);
const char* mono_get_generic_trampoline_simple_name (MonoTrampolineType tramp_type);
const char* mono_get_generic_trampoline_name (MonoTrampolineType tramp_type);
char* mono_get_rgctx_fetch_trampoline_name (int slot);
gpointer mini_get_single_step_trampoline (void);
gpointer mini_get_breakpoint_trampoline (void);
gpointer mini_add_method_trampoline (MonoMethod *m, gpointer compiled_method, gboolean add_static_rgctx_tramp, gboolean add_unbox_tramp);
gboolean mini_jit_info_is_gsharedvt (MonoJitInfo *ji);
gpointer* mini_resolve_imt_method (MonoVTable *vt, gpointer *vtable_slot, MonoMethod *imt_method, MonoMethod **impl_method, gpointer *out_aot_addr,
gboolean *out_need_rgctx_tramp, MonoMethod **variant_iface,
MonoError *error);
void* mono_global_codeman_reserve (int size);
#define mono_global_codeman_reserve(size) (g_cast (mono_global_codeman_reserve ((size))))
void mono_global_codeman_foreach (MonoCodeManagerFunc func, void *user_data);
const char *mono_regname_full (int reg, int bank);
gint32* mono_allocate_stack_slots (MonoCompile *cfg, gboolean backward, guint32 *stack_size, guint32 *stack_align);
void mono_local_regalloc (MonoCompile *cfg, MonoBasicBlock *bb);
MonoInst *mono_branch_optimize_exception_target (MonoCompile *cfg, MonoBasicBlock *bb, const char * exname);
void mono_remove_critical_edges (MonoCompile *cfg);
gboolean mono_is_regsize_var (MonoType *t);
MonoJumpInfo * mono_patch_info_new (MonoMemPool *mp, int ip, MonoJumpInfoType type, gconstpointer target);
int mini_class_check_context_used (MonoCompile *cfg, MonoClass *klass);
int mini_method_check_context_used (MonoCompile *cfg, MonoMethod *method);
void mini_type_from_op (MonoCompile *cfg, MonoInst *ins, MonoInst *src1, MonoInst *src2);
void mini_set_inline_failure (MonoCompile *cfg, const char *msg);
void mini_test_tailcall (MonoCompile *cfg, gboolean tailcall);
gboolean mini_should_check_stack_pointer (MonoCompile *cfg);
MonoInst* mini_emit_box (MonoCompile *cfg, MonoInst *val, MonoClass *klass, int context_used);
void mini_emit_memcpy (MonoCompile *cfg, int destreg, int doffset, int srcreg, int soffset, int size, int align);
void mini_emit_memset (MonoCompile *cfg, int destreg, int offset, int size, int val, int align);
void mini_emit_stobj (MonoCompile *cfg, MonoInst *dest, MonoInst *src, MonoClass *klass, gboolean native);
void mini_emit_initobj (MonoCompile *cfg, MonoInst *dest, const guchar *ip, MonoClass *klass);
void mini_emit_init_rvar (MonoCompile *cfg, int dreg, MonoType *rtype);
int mini_emit_sext_index_reg (MonoCompile *cfg, MonoInst *index);
MonoInst* mini_emit_ldelema_1_ins (MonoCompile *cfg, MonoClass *klass, MonoInst *arr, MonoInst *index, gboolean bcheck, gboolean bounded);
MonoInst* mini_emit_get_gsharedvt_info_klass (MonoCompile *cfg, MonoClass *klass, MonoRgctxInfoType rgctx_type);
MonoInst* mini_emit_get_rgctx_method (MonoCompile *cfg, int context_used,
MonoMethod *cmethod, MonoRgctxInfoType rgctx_type);
void mini_emit_tailcall_parameters (MonoCompile *cfg, MonoMethodSignature *sig);
MonoCallInst * mini_emit_call_args (MonoCompile *cfg, MonoMethodSignature *sig,
MonoInst **args, gboolean calli, gboolean virtual_, gboolean tailcall,
gboolean rgctx, gboolean unbox_trampoline, MonoMethod *target);
MonoInst* mini_emit_calli (MonoCompile *cfg, MonoMethodSignature *sig, MonoInst **args, MonoInst *addr, MonoInst *imt_arg, MonoInst *rgctx_arg);
MonoInst* mini_emit_calli_full (MonoCompile *cfg, MonoMethodSignature *sig, MonoInst **args, MonoInst *addr,
MonoInst *imt_arg, MonoInst *rgctx_arg, gboolean tailcall);
MonoInst* mini_emit_method_call_full (MonoCompile *cfg, MonoMethod *method, MonoMethodSignature *sig, gboolean tailcall,
MonoInst **args, MonoInst *this_ins, MonoInst *imt_arg, MonoInst *rgctx_arg);
MonoInst* mini_emit_abs_call (MonoCompile *cfg, MonoJumpInfoType patch_type, gconstpointer data,
MonoMethodSignature *sig, MonoInst **args);
MonoInst* mini_emit_extra_arg_calli (MonoCompile *cfg, MonoMethodSignature *fsig, MonoInst **orig_args, int arg_reg, MonoInst *call_target);
MonoInst* mini_emit_llvmonly_calli (MonoCompile *cfg, MonoMethodSignature *fsig, MonoInst **args, MonoInst *addr);
MonoInst* mini_emit_llvmonly_virtual_call (MonoCompile *cfg, MonoMethod *cmethod, MonoMethodSignature *fsig, int context_used, MonoInst **sp);
MonoInst* mini_emit_memory_barrier (MonoCompile *cfg, int kind);
MonoInst* mini_emit_storing_write_barrier (MonoCompile *cfg, MonoInst *ptr, MonoInst *value);
void mini_emit_write_barrier (MonoCompile *cfg, MonoInst *ptr, MonoInst *value);
MonoInst* mini_emit_memory_load (MonoCompile *cfg, MonoType *type, MonoInst *src, int offset, int ins_flag);
void mini_emit_memory_store (MonoCompile *cfg, MonoType *type, MonoInst *dest, MonoInst *value, int ins_flag);
void mini_emit_memory_copy_bytes (MonoCompile *cfg, MonoInst *dest, MonoInst *src, MonoInst *size, int ins_flag);
void mini_emit_memory_init_bytes (MonoCompile *cfg, MonoInst *dest, MonoInst *value, MonoInst *size, int ins_flag);
void mini_emit_memory_copy (MonoCompile *cfg, MonoInst *dest, MonoInst *src, MonoClass *klass, gboolean native, int ins_flag);
MonoInst* mini_emit_array_store (MonoCompile *cfg, MonoClass *klass, MonoInst **sp, gboolean safety_checks);
MonoInst* mini_emit_inst_for_method (MonoCompile *cfg, MonoMethod *cmethod, MonoMethodSignature *fsig, MonoInst **args, gboolean *ins_type_initialized);
MonoInst* mini_emit_inst_for_ctor (MonoCompile *cfg, MonoMethod *cmethod, MonoMethodSignature *fsig, MonoInst **args);
MonoInst* mini_emit_inst_for_field_load (MonoCompile *cfg, MonoClassField *field);
MonoInst* mini_handle_enum_has_flag (MonoCompile *cfg, MonoClass *klass, MonoInst *enum_this, int enum_val_reg, MonoInst *enum_flag);
MonoInst* mini_handle_unbox (MonoCompile *cfg, MonoClass *klass, MonoInst *val, int context_used);
MonoMethod* mini_get_memcpy_method (void);
MonoMethod* mini_get_memset_method (void);
int mini_class_check_context_used (MonoCompile *cfg, MonoClass *klass);
MonoRgctxAccess mini_get_rgctx_access_for_method (MonoMethod *method);
CompRelation mono_opcode_to_cond (int opcode);
CompType mono_opcode_to_type (int opcode, int cmp_opcode);
CompRelation mono_negate_cond (CompRelation cond);
int mono_op_imm_to_op (int opcode);
void mono_decompose_op_imm (MonoCompile *cfg, MonoBasicBlock *bb, MonoInst *ins);
void mono_peephole_ins (MonoBasicBlock *bb, MonoInst *ins);
MonoUnwindOp *mono_create_unwind_op (int when,
int tag, int reg,
int val);
void mono_emit_unwind_op (MonoCompile *cfg, int when,
int tag, int reg,
int val);
MonoTrampInfo* mono_tramp_info_create (const char *name, guint8 *code, guint32 code_size, MonoJumpInfo *ji, GSList *unwind_ops);
void mono_tramp_info_free (MonoTrampInfo *info);
void mono_aot_tramp_info_register (MonoTrampInfo *info, MonoMemoryManager *mem_manager);
void mono_tramp_info_register (MonoTrampInfo *info, MonoMemoryManager *mem_manager);
int mini_exception_id_by_name (const char *name);
gboolean mini_type_is_hfa (MonoType *t, int *out_nfields, int *out_esize);
int mono_method_to_ir (MonoCompile *cfg, MonoMethod *method, MonoBasicBlock *start_bblock, MonoBasicBlock *end_bblock,
MonoInst *return_var, MonoInst **inline_args,
guint inline_offset, gboolean is_virtual_call);
// The following methods could just be renamed/moved from method-to-ir.c
int mini_inline_method (MonoCompile *cfg, MonoMethod *cmethod, MonoMethodSignature *fsig, MonoInst **sp, guchar *ip,
guint real_offset, gboolean inline_always);
MonoInst* mini_emit_get_rgctx_klass (MonoCompile *cfg, int context_used, MonoClass *klass, MonoRgctxInfoType rgctx_type);
MonoInst* mini_emit_runtime_constant (MonoCompile *cfg, MonoJumpInfoType patch_type, gpointer data);
void mini_save_cast_details (MonoCompile *cfg, MonoClass *klass, int obj_reg, gboolean null_check);
void mini_reset_cast_details (MonoCompile *cfg);
void mini_emit_class_check (MonoCompile *cfg, int klass_reg, MonoClass *klass);
gboolean mini_class_has_reference_variant_generic_argument (MonoCompile *cfg, MonoClass *klass, int context_used);
MonoInst *mono_decompose_opcode (MonoCompile *cfg, MonoInst *ins);
void mono_decompose_long_opts (MonoCompile *cfg);
void mono_decompose_vtype_opts (MonoCompile *cfg);
void mono_decompose_array_access_opts (MonoCompile *cfg);
void mono_decompose_soft_float (MonoCompile *cfg);
void mono_local_emulate_ops (MonoCompile *cfg);
void mono_handle_global_vregs (MonoCompile *cfg);
void mono_spill_global_vars (MonoCompile *cfg, gboolean *need_local_opts);
void mono_allocate_gsharedvt_vars (MonoCompile *cfg);
void mono_if_conversion (MonoCompile *cfg);
/* Delegates */
char* mono_get_delegate_virtual_invoke_impl_name (gboolean load_imt_reg, int offset);
gpointer mono_get_delegate_virtual_invoke_impl (MonoMethodSignature *sig, MonoMethod *method);
void mono_codegen (MonoCompile *cfg);
void mono_call_inst_add_outarg_reg (MonoCompile *cfg, MonoCallInst *call, int vreg, int hreg, int bank);
void mono_call_inst_add_outarg_vt (MonoCompile *cfg, MonoCallInst *call, MonoInst *outarg_vt);
/* methods that must be provided by the arch-specific port */
void mono_arch_init (void);
void mono_arch_finish_init (void);
void mono_arch_cleanup (void);
void mono_arch_cpu_init (void);
guint32 mono_arch_cpu_optimizations (guint32 *exclude_mask);
const char *mono_arch_regname (int reg);
const char *mono_arch_fregname (int reg);
void mono_arch_exceptions_init (void);
guchar* mono_arch_create_generic_trampoline (MonoTrampolineType tramp_type, MonoTrampInfo **info, gboolean aot);
gpointer mono_arch_create_rgctx_lazy_fetch_trampoline (guint32 slot, MonoTrampInfo **info, gboolean aot);
gpointer mono_arch_create_general_rgctx_lazy_fetch_trampoline (MonoTrampInfo **info, gboolean aot);
guint8* mono_arch_create_sdb_trampoline (gboolean single_step, MonoTrampInfo **info, gboolean aot);
guint8 *mono_arch_create_llvm_native_thunk (guint8* addr);
gpointer mono_arch_get_get_tls_tramp (void);
GList *mono_arch_get_allocatable_int_vars (MonoCompile *cfg);
GList *mono_arch_get_global_int_regs (MonoCompile *cfg);
guint32 mono_arch_regalloc_cost (MonoCompile *cfg, MonoMethodVar *vmv);
void mono_arch_patch_code_new (MonoCompile *cfg, guint8 *code, MonoJumpInfo *ji, gpointer target);
void mono_arch_flush_icache (guint8 *code, gint size);
guint8 *mono_arch_emit_prolog (MonoCompile *cfg);
void mono_arch_emit_epilog (MonoCompile *cfg);
void mono_arch_emit_exceptions (MonoCompile *cfg);
void mono_arch_lowering_pass (MonoCompile *cfg, MonoBasicBlock *bb);
void mono_arch_peephole_pass_1 (MonoCompile *cfg, MonoBasicBlock *bb);
void mono_arch_peephole_pass_2 (MonoCompile *cfg, MonoBasicBlock *bb);
void mono_arch_output_basic_block (MonoCompile *cfg, MonoBasicBlock *bb);
void mono_arch_fill_argument_info (MonoCompile *cfg);
void mono_arch_allocate_vars (MonoCompile *m);
int mono_arch_get_argument_info (MonoMethodSignature *csig, int param_count, MonoJitArgumentInfo *arg_info);
void mono_arch_emit_call (MonoCompile *cfg, MonoCallInst *call);
void mono_arch_emit_outarg_vt (MonoCompile *cfg, MonoInst *ins, MonoInst *src);
void mono_arch_emit_setret (MonoCompile *cfg, MonoMethod *method, MonoInst *val);
MonoDynCallInfo *mono_arch_dyn_call_prepare (MonoMethodSignature *sig);
void mono_arch_dyn_call_free (MonoDynCallInfo *info);
int mono_arch_dyn_call_get_buf_size (MonoDynCallInfo *info);
void mono_arch_start_dyn_call (MonoDynCallInfo *info, gpointer **args, guint8 *ret, guint8 *buf);
void mono_arch_finish_dyn_call (MonoDynCallInfo *info, guint8 *buf);
MonoInst *mono_arch_emit_inst_for_method (MonoCompile *cfg, MonoMethod *cmethod, MonoMethodSignature *fsig, MonoInst **args);
void mono_arch_decompose_opts (MonoCompile *cfg, MonoInst *ins);
void mono_arch_decompose_long_opts (MonoCompile *cfg, MonoInst *ins);
GSList* mono_arch_get_delegate_invoke_impls (void);
LLVMCallInfo* mono_arch_get_llvm_call_info (MonoCompile *cfg, MonoMethodSignature *sig);
guint8* mono_arch_emit_load_got_addr (guint8 *start, guint8 *code, MonoCompile *cfg, MonoJumpInfo **ji);
guint8* mono_arch_emit_load_aotconst (guint8 *start, guint8 *code, MonoJumpInfo **ji, MonoJumpInfoType tramp_type, gconstpointer target);
GSList* mono_arch_get_cie_program (void);
void mono_arch_set_target (char *mtriple);
gboolean mono_arch_gsharedvt_sig_supported (MonoMethodSignature *sig);
gpointer mono_arch_get_gsharedvt_trampoline (MonoTrampInfo **info, gboolean aot);
gpointer mono_arch_get_gsharedvt_call_info (MonoMemoryManager *mem_manager, gpointer addr, MonoMethodSignature *normal_sig, MonoMethodSignature *gsharedvt_sig, gboolean gsharedvt_in, gint32 vcall_offset, gboolean calli);
gboolean mono_arch_opcode_needs_emulation (MonoCompile *cfg, int opcode);
gboolean mono_arch_tailcall_supported (MonoCompile *cfg, MonoMethodSignature *caller_sig, MonoMethodSignature *callee_sig, gboolean virtual_);
int mono_arch_translate_tls_offset (int offset);
gboolean mono_arch_opcode_supported (int opcode);
MONO_COMPONENT_API void mono_arch_setup_resume_sighandler_ctx (MonoContext *ctx, gpointer func);
gboolean mono_arch_have_fast_tls (void);
#ifdef MONO_ARCH_HAS_REGISTER_ICALL
void mono_arch_register_icall (void);
#endif
#ifdef MONO_ARCH_SOFT_FLOAT_FALLBACK
gboolean mono_arch_is_soft_float (void);
#else
static inline MONO_ALWAYS_INLINE gboolean
mono_arch_is_soft_float (void)
{
return FALSE;
}
#endif
/* Soft Debug support */
#ifdef MONO_ARCH_SOFT_DEBUG_SUPPORTED
MONO_COMPONENT_API void mono_arch_set_breakpoint (MonoJitInfo *ji, guint8 *ip);
MONO_COMPONENT_API void mono_arch_clear_breakpoint (MonoJitInfo *ji, guint8 *ip);
MONO_COMPONENT_API void mono_arch_start_single_stepping (void);
MONO_COMPONENT_API void mono_arch_stop_single_stepping (void);
gboolean mono_arch_is_single_step_event (void *info, void *sigctx);
gboolean mono_arch_is_breakpoint_event (void *info, void *sigctx);
MONO_COMPONENT_API void mono_arch_skip_breakpoint (MonoContext *ctx, MonoJitInfo *ji);
MONO_COMPONENT_API void mono_arch_skip_single_step (MonoContext *ctx);
SeqPointInfo *mono_arch_get_seq_point_info (guint8 *code);
#endif
gboolean
mono_arch_unwind_frame (MonoJitTlsData *jit_tls,
MonoJitInfo *ji, MonoContext *ctx,
MonoContext *new_ctx, MonoLMF **lmf,
host_mgreg_t **save_locations,
StackFrameInfo *frame_info);
gpointer mono_arch_get_throw_exception_by_name (void);
gpointer mono_arch_get_call_filter (MonoTrampInfo **info, gboolean aot);
gpointer mono_arch_get_restore_context (MonoTrampInfo **info, gboolean aot);
gpointer mono_arch_get_throw_exception (MonoTrampInfo **info, gboolean aot);
gpointer mono_arch_get_rethrow_exception (MonoTrampInfo **info, gboolean aot);
gpointer mono_arch_get_rethrow_preserve_exception (MonoTrampInfo **info, gboolean aot);
gpointer mono_arch_get_throw_corlib_exception (MonoTrampInfo **info, gboolean aot);
gpointer mono_arch_get_throw_pending_exception (MonoTrampInfo **info, gboolean aot);
gboolean mono_arch_handle_exception (void *sigctx, gpointer obj);
void mono_arch_handle_altstack_exception (void *sigctx, MONO_SIG_HANDLER_INFO_TYPE *siginfo, gpointer fault_addr, gboolean stack_ovf);
gboolean mono_handle_soft_stack_ovf (MonoJitTlsData *jit_tls, MonoJitInfo *ji, void *ctx, MONO_SIG_HANDLER_INFO_TYPE *siginfo, guint8* fault_addr);
void mono_handle_hard_stack_ovf (MonoJitTlsData *jit_tls, MonoJitInfo *ji, MonoContext *mctx, guint8* fault_addr);
void mono_arch_undo_ip_adjustment (MonoContext *ctx);
void mono_arch_do_ip_adjustment (MonoContext *ctx);
gpointer mono_arch_ip_from_context (void *sigctx);
MONO_COMPONENT_API host_mgreg_t mono_arch_context_get_int_reg (MonoContext *ctx, int reg);
MONO_COMPONENT_API host_mgreg_t*mono_arch_context_get_int_reg_address (MonoContext *ctx, int reg);
MONO_COMPONENT_API void mono_arch_context_set_int_reg (MonoContext *ctx, int reg, host_mgreg_t val);
void mono_arch_flush_register_windows (void);
gboolean mono_arch_is_inst_imm (int opcode, int imm_opcode, gint64 imm);
gboolean mono_arch_is_int_overflow (void *sigctx, void *info);
void mono_arch_invalidate_method (MonoJitInfo *ji, void *func, gpointer func_arg);
guint32 mono_arch_get_patch_offset (guint8 *code);
gpointer*mono_arch_get_delegate_method_ptr_addr (guint8* code, host_mgreg_t *regs);
void mono_arch_create_vars (MonoCompile *cfg);
void mono_arch_save_unwind_info (MonoCompile *cfg);
void mono_arch_register_lowlevel_calls (void);
gpointer mono_arch_get_unbox_trampoline (MonoMethod *m, gpointer addr);
gpointer mono_arch_get_static_rgctx_trampoline (MonoMemoryManager *mem_manager, gpointer arg, gpointer addr);
gpointer mono_arch_get_ftnptr_arg_trampoline (MonoMemoryManager *mem_manager, gpointer arg, gpointer addr);
gpointer mono_arch_get_gsharedvt_arg_trampoline (gpointer arg, gpointer addr);
void mono_arch_patch_callsite (guint8 *method_start, guint8 *code, guint8 *addr);
void mono_arch_patch_plt_entry (guint8 *code, gpointer *got, host_mgreg_t *regs, guint8 *addr);
int mono_arch_get_this_arg_reg (guint8 *code);
gpointer mono_arch_get_this_arg_from_call (host_mgreg_t *regs, guint8 *code);
gpointer mono_arch_get_delegate_invoke_impl (MonoMethodSignature *sig, gboolean has_target);
gpointer mono_arch_get_delegate_virtual_invoke_impl (MonoMethodSignature *sig, MonoMethod *method, int offset, gboolean load_imt_reg);
gpointer mono_arch_create_specific_trampoline (gpointer arg1, MonoTrampolineType tramp_type, MonoMemoryManager *mem_manager, guint32 *code_len);
MonoMethod* mono_arch_find_imt_method (host_mgreg_t *regs, guint8 *code);
MonoVTable* mono_arch_find_static_call_vtable (host_mgreg_t *regs, guint8 *code);
gpointer mono_arch_build_imt_trampoline (MonoVTable *vtable, MonoIMTCheckItem **imt_entries, int count, gpointer fail_tramp);
void mono_arch_notify_pending_exc (MonoThreadInfo *info);
guint8* mono_arch_get_call_target (guint8 *code);
guint32 mono_arch_get_plt_info_offset (guint8 *plt_entry, host_mgreg_t *regs, guint8 *code);
GSList *mono_arch_get_trampolines (gboolean aot);
gpointer mono_arch_get_interp_to_native_trampoline (MonoTrampInfo **info);
gpointer mono_arch_get_native_to_interp_trampoline (MonoTrampInfo **info);
#ifdef MONO_ARCH_HAVE_INTERP_PINVOKE_TRAMP
// Moves data (arguments and return vt address) from the InterpFrame to the CallContext so a pinvoke call can be made.
void mono_arch_set_native_call_context_args (CallContext *ccontext, gpointer frame, MonoMethodSignature *sig);
// Moves the return value from the InterpFrame to the ccontext, or to the retp (if native code passed the retvt address)
void mono_arch_set_native_call_context_ret (CallContext *ccontext, gpointer frame, MonoMethodSignature *sig, gpointer retp);
// When entering interp from native, this moves the arguments from the ccontext to the InterpFrame. If we have a return
// vt address, we return it. This ret vt address needs to be passed to mono_arch_set_native_call_context_ret.
gpointer mono_arch_get_native_call_context_args (CallContext *ccontext, gpointer frame, MonoMethodSignature *sig);
// After the pinvoke call is done, this moves return value from the ccontext to the InterpFrame.
void mono_arch_get_native_call_context_ret (CallContext *ccontext, gpointer frame, MonoMethodSignature *sig);
#endif
/*New interruption machinery */
void
mono_setup_async_callback (MonoContext *ctx, void (*async_cb)(void *fun), gpointer user_data);
void
mono_arch_setup_async_callback (MonoContext *ctx, void (*async_cb)(void *fun), gpointer user_data);
gboolean
mono_thread_state_init_from_handle (MonoThreadUnwindState *tctx, MonoThreadInfo *info, /*optional*/ void *sigctx);
/* Exception handling */
typedef gboolean (*MonoJitStackWalk) (StackFrameInfo *frame, MonoContext *ctx, gpointer data);
void mono_exceptions_init (void);
gboolean mono_handle_exception (MonoContext *ctx, gpointer obj);
void mono_handle_native_crash (const char *signal, MonoContext *mctx, MONO_SIG_HANDLER_INFO_TYPE *siginfo);
MONO_API void mono_print_thread_dump (void *sigctx);
MONO_API void mono_print_thread_dump_from_ctx (MonoContext *ctx);
MONO_COMPONENT_API void mono_walk_stack_with_ctx (MonoJitStackWalk func, MonoContext *start_ctx, MonoUnwindOptions unwind_options, void *user_data);
MONO_COMPONENT_API void mono_walk_stack_with_state (MonoJitStackWalk func, MonoThreadUnwindState *state, MonoUnwindOptions unwind_options, void *user_data);
void mono_walk_stack (MonoJitStackWalk func, MonoUnwindOptions options, void *user_data);
gboolean mono_thread_state_init_from_sigctx (MonoThreadUnwindState *ctx, void *sigctx);
void mono_thread_state_init (MonoThreadUnwindState *ctx);
MONO_COMPONENT_API gboolean mono_thread_state_init_from_current (MonoThreadUnwindState *ctx);
MONO_COMPONENT_API gboolean mono_thread_state_init_from_monoctx (MonoThreadUnwindState *ctx, MonoContext *mctx);
void mono_setup_altstack (MonoJitTlsData *tls);
void mono_free_altstack (MonoJitTlsData *tls);
gpointer mono_altstack_restore_prot (host_mgreg_t *regs, guint8 *code, gpointer *tramp_data, guint8* tramp);
MONO_COMPONENT_API MonoJitInfo* mini_jit_info_table_find (gpointer addr);
MonoJitInfo* mini_jit_info_table_find_ext (gpointer addr, gboolean allow_trampolines);
G_EXTERN_C void mono_resume_unwind (MonoContext *ctx);
MonoJitInfo * mono_find_jit_info (MonoJitTlsData *jit_tls, MonoJitInfo *res, MonoJitInfo *prev_ji, MonoContext *ctx, MonoContext *new_ctx, char **trace, MonoLMF **lmf, int *native_offset, gboolean *managed);
typedef gboolean (*MonoExceptionFrameWalk) (MonoMethod *method, gpointer ip, size_t native_offset, gboolean managed, gpointer user_data);
MONO_API gboolean mono_exception_walk_trace (MonoException *ex, MonoExceptionFrameWalk func, gpointer user_data);
MONO_COMPONENT_API void mono_restore_context (MonoContext *ctx);
guint8* mono_jinfo_get_unwind_info (MonoJitInfo *ji, guint32 *unwind_info_len);
int mono_jinfo_get_epilog_size (MonoJitInfo *ji);
gboolean
mono_find_jit_info_ext (MonoJitTlsData *jit_tls,
MonoJitInfo *prev_ji, MonoContext *ctx,
MonoContext *new_ctx, char **trace, MonoLMF **lmf,
host_mgreg_t **save_locations,
StackFrameInfo *frame);
gpointer mono_get_throw_exception (void);
gpointer mono_get_rethrow_exception (void);
gpointer mono_get_rethrow_preserve_exception (void);
gpointer mono_get_call_filter (void);
gpointer mono_get_restore_context (void);
gpointer mono_get_throw_corlib_exception (void);
gpointer mono_get_throw_exception_addr (void);
gpointer mono_get_rethrow_preserve_exception_addr (void);
ICALL_EXPORT
MonoArray *ves_icall_get_trace (MonoException *exc, gint32 skip, MonoBoolean need_file_info);
ICALL_EXPORT
MonoBoolean ves_icall_get_frame_info (gint32 skip, MonoBoolean need_file_info,
MonoReflectionMethod **method,
gint32 *iloffset, gint32 *native_offset,
MonoString **file, gint32 *line, gint32 *column);
void mono_set_cast_details (MonoClass *from, MonoClass *to);
void mono_decompose_typechecks (MonoCompile *cfg);
/* Dominator/SSA methods */
void mono_compile_dominator_info (MonoCompile *cfg, int dom_flags);
void mono_compute_natural_loops (MonoCompile *cfg);
MonoBitSet* mono_compile_iterated_dfrontier (MonoCompile *cfg, MonoBitSet *set);
void mono_ssa_compute (MonoCompile *cfg);
void mono_ssa_remove (MonoCompile *cfg);
void mono_ssa_remove_gsharedvt (MonoCompile *cfg);
void mono_ssa_cprop (MonoCompile *cfg);
void mono_ssa_deadce (MonoCompile *cfg);
void mono_ssa_strength_reduction (MonoCompile *cfg);
void mono_free_loop_info (MonoCompile *cfg);
void mono_ssa_loop_invariant_code_motion (MonoCompile *cfg);
void mono_ssa_compute2 (MonoCompile *cfg);
void mono_ssa_remove2 (MonoCompile *cfg);
void mono_ssa_cprop2 (MonoCompile *cfg);
void mono_ssa_deadce2 (MonoCompile *cfg);
/* debugging support */
void mono_debug_init_method (MonoCompile *cfg, MonoBasicBlock *start_block,
guint32 breakpoint_id);
void mono_debug_open_method (MonoCompile *cfg);
void mono_debug_close_method (MonoCompile *cfg);
void mono_debug_free_method (MonoCompile *cfg);
void mono_debug_open_block (MonoCompile *cfg, MonoBasicBlock *bb, guint32 address);
void mono_debug_record_line_number (MonoCompile *cfg, MonoInst *ins, guint32 address);
void mono_debug_serialize_debug_info (MonoCompile *cfg, guint8 **out_buf, guint32 *buf_len);
void mono_debug_add_aot_method (MonoMethod *method, guint8 *code_start,
guint8 *debug_info, guint32 debug_info_len);
MONO_API void mono_debug_print_vars (gpointer ip, gboolean only_arguments);
MONO_API void mono_debugger_run_finally (MonoContext *start_ctx);
MONO_API gboolean mono_breakpoint_clean_code (guint8 *method_start, guint8 *code, int offset, guint8 *buf, int size);
/* Tracing */
MonoCallSpec *mono_trace_set_options (const char *options);
gboolean mono_trace_eval (MonoMethod *method);
gboolean
mono_tailcall_print_enabled (void);
void
mono_tailcall_print (const char *format, ...);
gboolean
mono_is_supported_tailcall_helper (gboolean value, const char *svalue);
#define IS_SUPPORTED_TAILCALL(x) (mono_is_supported_tailcall_helper((x), #x))
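/* Illustrative use: IS_SUPPORTED_TAILCALL (caller_sig->call_convention == callee_sig->call_convention)
 * passes both the condition's value and its source text (via #x), so the helper can report
 * exactly which tailcall requirement failed when tailcall printing is enabled. */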
extern void
mono_perform_abc_removal (MonoCompile *cfg);
extern void
mono_local_cprop (MonoCompile *cfg);
extern void
mono_local_deadce (MonoCompile *cfg);
void
mono_local_alias_analysis (MonoCompile *cfg);
/* Generic sharing */
void
mono_set_generic_sharing_supported (gboolean supported);
void
mono_set_generic_sharing_vt_supported (gboolean supported);
void
mono_set_partial_sharing_supported (gboolean supported);
gboolean
mono_class_generic_sharing_enabled (MonoClass *klass);
gpointer
mono_class_fill_runtime_generic_context (MonoVTable *class_vtable, guint32 slot, MonoError *error);
gpointer
mono_method_fill_runtime_generic_context (MonoMethodRuntimeGenericContext *mrgctx, guint32 slot, MonoError *error);
const char*
mono_rgctx_info_type_to_str (MonoRgctxInfoType type);
MonoJumpInfoType
mini_rgctx_info_type_to_patch_info_type (MonoRgctxInfoType info_type);
gboolean
mono_method_needs_static_rgctx_invoke (MonoMethod *method, gboolean allow_type_vars);
int
mono_class_rgctx_get_array_size (int n, gboolean mrgctx);
MonoGenericContext
mono_method_construct_object_context (MonoMethod *method);
MONO_COMPONENT_API MonoMethod*
mono_method_get_declaring_generic_method (MonoMethod *method);
int
mono_generic_context_check_used (MonoGenericContext *context);
int
mono_class_check_context_used (MonoClass *klass);
gboolean
mono_generic_context_is_sharable (MonoGenericContext *context, gboolean allow_type_vars);
gboolean
mono_generic_context_is_sharable_full (MonoGenericContext *context, gboolean allow_type_vars, gboolean allow_partial);
gboolean
mono_method_is_generic_impl (MonoMethod *method);
gboolean
mono_method_is_generic_sharable (MonoMethod *method, gboolean allow_type_vars);
gboolean
mono_method_is_generic_sharable_full (MonoMethod *method, gboolean allow_type_vars, gboolean allow_partial, gboolean allow_gsharedvt);
gboolean
mini_class_is_generic_sharable (MonoClass *klass);
gboolean
mini_generic_inst_is_sharable (MonoGenericInst *inst, gboolean allow_type_vars, gboolean allow_partial);
MonoMethod*
mono_class_get_method_generic (MonoClass *klass, MonoMethod *method, MonoError *error);
gboolean
mono_is_partially_sharable_inst (MonoGenericInst *inst);
gboolean
mini_is_gsharedvt_gparam (MonoType *t);
gboolean
mini_is_gsharedvt_inst (MonoGenericInst *inst);
MonoGenericContext* mini_method_get_context (MonoMethod *method);
int mono_method_check_context_used (MonoMethod *method);
gboolean mono_generic_context_equal_deep (MonoGenericContext *context1, MonoGenericContext *context2);
gpointer mono_helper_get_rgctx_other_ptr (MonoClass *caller_class, MonoVTable *vtable,
guint32 token, guint32 token_source, guint32 rgctx_type,
gint32 rgctx_index);
void mono_generic_sharing_init (void);
MonoClass* mini_class_get_container_class (MonoClass *klass);
MonoGenericContext* mini_class_get_context (MonoClass *klass);
typedef enum {
SHARE_MODE_NONE = 0x0,
SHARE_MODE_GSHAREDVT = 0x1,
} GetSharedMethodFlags;
MonoType* mini_get_underlying_type (MonoType *type);
MonoType* mini_type_get_underlying_type (MonoType *type);
MonoClass* mini_get_class (MonoMethod *method, guint32 token, MonoGenericContext *context);
MonoMethod* mini_get_shared_method_to_register (MonoMethod *method);
MonoMethod* mini_get_shared_method_full (MonoMethod *method, GetSharedMethodFlags flags, MonoError *error);
MonoType* mini_get_shared_gparam (MonoType *t, MonoType *constraint);
int mini_get_rgctx_entry_slot (MonoJumpInfoRgctxEntry *entry);
int mini_type_stack_size (MonoType *t, int *align);
int mini_type_stack_size_full (MonoType *t, guint32 *align, gboolean pinvoke);
void mini_type_to_eval_stack_type (MonoCompile *cfg, MonoType *type, MonoInst *inst);
guint mono_type_to_regmove (MonoCompile *cfg, MonoType *type);
void mono_cfg_add_try_hole (MonoCompile *cfg, MonoExceptionClause *clause, guint8 *start, MonoBasicBlock *bb);
void mono_cfg_set_exception (MonoCompile *cfg, MonoExceptionType type);
void mono_cfg_set_exception_invalid_program (MonoCompile *cfg, char *msg);
#define MONO_TIME_TRACK(a, phase) \
{ \
gint64 start = mono_time_track_start (); \
(phase) ; \
mono_time_track_end (&(a), start); \
}
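/* Typical use in the JIT pipeline (names illustrative):
 *   MONO_TIME_TRACK (mono_jit_stats.jit_liveness, mono_analyze_liveness (cfg));
 * runs the phase and accumulates its elapsed time into the first argument. */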
gint64 mono_time_track_start (void);
void mono_time_track_end (gint64 *time, gint64 start);
void mono_update_jit_stats (MonoCompile *cfg);
gboolean mini_type_is_reference (MonoType *type);
gboolean mini_type_is_vtype (MonoType *t);
gboolean mini_type_var_is_vt (MonoType *type);
gboolean mini_is_gsharedvt_type (MonoType *t);
gboolean mini_is_gsharedvt_klass (MonoClass *klass);
gboolean mini_is_gsharedvt_signature (MonoMethodSignature *sig);
gboolean mini_is_gsharedvt_variable_type (MonoType *t);
gboolean mini_is_gsharedvt_variable_klass (MonoClass *klass);
gboolean mini_is_gsharedvt_sharable_method (MonoMethod *method);
gboolean mini_is_gsharedvt_variable_signature (MonoMethodSignature *sig);
gboolean mini_is_gsharedvt_sharable_inst (MonoGenericInst *inst);
gboolean mini_method_is_default_method (MonoMethod *m);
gboolean mini_method_needs_mrgctx (MonoMethod *m);
gpointer mini_method_get_rgctx (MonoMethod *m);
void mini_init_gsctx (MonoMemPool *mp, MonoGenericContext *context, MonoGenericSharingContext *gsctx);
gpointer mini_get_gsharedvt_wrapper (gboolean gsharedvt_in, gpointer addr, MonoMethodSignature *normal_sig, MonoMethodSignature *gsharedvt_sig,
gint32 vcall_offset, gboolean calli);
MonoMethod* mini_get_gsharedvt_in_sig_wrapper (MonoMethodSignature *sig);
MonoMethod* mini_get_gsharedvt_out_sig_wrapper (MonoMethodSignature *sig);
MonoMethodSignature* mini_get_gsharedvt_out_sig_wrapper_signature (gboolean has_this, gboolean has_ret, int param_count);
gboolean mini_gsharedvt_runtime_invoke_supported (MonoMethodSignature *sig);
G_EXTERN_C void mono_interp_entry_from_trampoline (gpointer ccontext, gpointer imethod);
G_EXTERN_C void mono_interp_to_native_trampoline (gpointer addr, gpointer ccontext);
MonoMethod* mini_get_interp_in_wrapper (MonoMethodSignature *sig);
MonoMethod* mini_get_interp_lmf_wrapper (const char *name, gpointer target);
char* mono_get_method_from_ip (void *ip);
/* SIMD support */
typedef enum {
/* Used for lazy initialization */
MONO_CPU_INITED = 1 << 0,
#if defined(TARGET_X86) || defined(TARGET_AMD64)
MONO_CPU_X86_SSE = 1 << 1,
MONO_CPU_X86_SSE2 = 1 << 2,
MONO_CPU_X86_PCLMUL = 1 << 3,
MONO_CPU_X86_AES = 1 << 4,
MONO_CPU_X86_SSE3 = 1 << 5,
MONO_CPU_X86_SSSE3 = 1 << 6,
MONO_CPU_X86_SSE41 = 1 << 7,
MONO_CPU_X86_SSE42 = 1 << 8,
MONO_CPU_X86_POPCNT = 1 << 9,
MONO_CPU_X86_AVX = 1 << 10,
MONO_CPU_X86_AVX2 = 1 << 11,
MONO_CPU_X86_FMA = 1 << 12,
MONO_CPU_X86_LZCNT = 1 << 13,
MONO_CPU_X86_BMI1 = 1 << 14,
MONO_CPU_X86_BMI2 = 1 << 15,
//
// Dependencies (based on System.Runtime.Intrinsics.X86 class hierarchy):
//
// sse
//   sse2
//     pclmul
//     aes
//     sse3
//       ssse3 (doesn't include 'pclmul' and 'aes')
//         sse4.1
//           sse4.2
//             popcnt
//             avx (doesn't include 'popcnt')
//               avx2
//               fma
// lzcnt
// bmi1
// bmi2
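// The *_COMBINED values below encode this hierarchy: each combined flag ORs in all of its
// prerequisites, so one mask test checks a feature together with everything it implies.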
MONO_CPU_X86_SSE_COMBINED = MONO_CPU_X86_SSE,
MONO_CPU_X86_SSE2_COMBINED = MONO_CPU_X86_SSE_COMBINED | MONO_CPU_X86_SSE2,
MONO_CPU_X86_PCLMUL_COMBINED = MONO_CPU_X86_SSE2_COMBINED | MONO_CPU_X86_PCLMUL,
MONO_CPU_X86_AES_COMBINED = MONO_CPU_X86_SSE2_COMBINED | MONO_CPU_X86_AES,
MONO_CPU_X86_SSE3_COMBINED = MONO_CPU_X86_SSE2_COMBINED | MONO_CPU_X86_SSE3,
MONO_CPU_X86_SSSE3_COMBINED = MONO_CPU_X86_SSE3_COMBINED | MONO_CPU_X86_SSSE3,
MONO_CPU_X86_SSE41_COMBINED = MONO_CPU_X86_SSSE3_COMBINED | MONO_CPU_X86_SSE41,
MONO_CPU_X86_SSE42_COMBINED = MONO_CPU_X86_SSE41_COMBINED | MONO_CPU_X86_SSE42,
MONO_CPU_X86_POPCNT_COMBINED = MONO_CPU_X86_SSE42_COMBINED | MONO_CPU_X86_POPCNT,
MONO_CPU_X86_AVX_COMBINED = MONO_CPU_X86_SSE42_COMBINED | MONO_CPU_X86_AVX,
MONO_CPU_X86_AVX2_COMBINED = MONO_CPU_X86_AVX_COMBINED | MONO_CPU_X86_AVX2,
MONO_CPU_X86_FMA_COMBINED = MONO_CPU_X86_AVX_COMBINED | MONO_CPU_X86_FMA,
MONO_CPU_X86_FULL_SSEAVX_COMBINED = MONO_CPU_X86_FMA_COMBINED | MONO_CPU_X86_AVX2 | MONO_CPU_X86_PCLMUL
| MONO_CPU_X86_AES | MONO_CPU_X86_POPCNT | MONO_CPU_X86_FMA,
#endif
#ifdef TARGET_WASM
MONO_CPU_WASM_SIMD = 1 << 1,
#endif
#ifdef TARGET_ARM64
MONO_CPU_ARM64_BASE = 1 << 1,
MONO_CPU_ARM64_CRC = 1 << 2,
MONO_CPU_ARM64_CRYPTO = 1 << 3,
MONO_CPU_ARM64_NEON = 1 << 4,
MONO_CPU_ARM64_RDM = 1 << 5,
MONO_CPU_ARM64_DP = 1 << 6,
#endif
} MonoCPUFeatures;
G_ENUM_FUNCTIONS (MonoCPUFeatures)
MonoCPUFeatures mini_get_cpu_features (MonoCompile* cfg);
enum {
SIMD_COMP_EQ,
SIMD_COMP_LT,
SIMD_COMP_LE,
SIMD_COMP_UNORD,
SIMD_COMP_NEQ,
SIMD_COMP_NLT,
SIMD_COMP_NLE,
SIMD_COMP_ORD
};
enum {
SIMD_PREFETCH_MODE_NTA,
SIMD_PREFETCH_MODE_0,
SIMD_PREFETCH_MODE_1,
SIMD_PREFETCH_MODE_2,
};
const char *mono_arch_xregname (int reg);
MonoCPUFeatures mono_arch_get_cpu_features (void);
#ifdef MONO_ARCH_SIMD_INTRINSICS
void mono_simd_simplify_indirection (MonoCompile *cfg);
void mono_simd_decompose_intrinsic (MonoCompile *cfg, MonoBasicBlock *bb, MonoInst *ins);
MonoInst* mono_emit_simd_intrinsics (MonoCompile *cfg, MonoMethod *cmethod, MonoMethodSignature *fsig, MonoInst **args);
MonoInst* mono_emit_simd_field_load (MonoCompile *cfg, MonoClassField *field, MonoInst *addr);
void mono_simd_intrinsics_init (void);
#endif
MonoMethod*
mini_method_to_shared (MonoMethod *method); // null if not shared
static inline gboolean
mini_safepoints_enabled (void)
{
#if defined (TARGET_WASM)
return FALSE;
#else
return TRUE;
#endif
}
gpointer
mono_arch_load_function (MonoJitICallId jit_icall_id);
MONO_COMPONENT_API MonoGenericContext
mono_get_generic_context_from_stack_frame (MonoJitInfo *ji, gpointer generic_info);
MONO_COMPONENT_API gpointer
mono_get_generic_info_from_stack_frame (MonoJitInfo *ji, MonoContext *ctx);
MonoMemoryManager* mini_get_default_mem_manager (void);
MONO_COMPONENT_API int
mono_wasm_get_debug_level (void);
#endif /* __MONO_MINI_H__ */
| /**
* \file
* Copyright 2002-2003 Ximian Inc
* Copyright 2003-2011 Novell Inc
* Copyright 2011 Xamarin Inc
* Licensed under the MIT license. See LICENSE file in the project root for full license information.
*/
#ifndef __MONO_MINI_H__
#define __MONO_MINI_H__
#include "config.h"
#include <glib.h>
#include <signal.h>
#ifdef HAVE_SYS_TYPES_H
#include <sys/types.h>
#endif
#include <mono/utils/mono-forward-internal.h>
#include <mono/metadata/loader.h>
#include <mono/metadata/mempool.h>
#include <mono/utils/monobitset.h>
#include <mono/metadata/class.h>
#include <mono/metadata/object.h>
#include <mono/metadata/opcodes.h>
#include <mono/metadata/tabledefs.h>
#include <mono/metadata/domain-internals.h>
#include "mono/metadata/class-internals.h"
#include "mono/metadata/class-init.h"
#include "mono/metadata/object-internals.h"
#include <mono/metadata/profiler-private.h>
#include <mono/metadata/debug-helpers.h>
#include <mono/metadata/abi-details.h>
#include <mono/metadata/jit-info.h>
#include <mono/utils/mono-compiler.h>
#include <mono/utils/mono-machine.h>
#include <mono/utils/mono-stack-unwinding.h>
#include <mono/utils/mono-threads.h>
#include <mono/utils/mono-threads-coop.h>
#include <mono/utils/mono-tls.h>
#include <mono/utils/atomic.h>
#include <mono/utils/mono-jemalloc.h>
#include <mono/utils/mono-conc-hashtable.h>
#include <mono/utils/mono-signal-handler.h>
#include <mono/utils/ftnptr.h>
#include <mono/metadata/icalls.h>
// Forward declare so that mini-*.h can have pointers to them.
// CallInfo is presently architecture specific.
typedef struct MonoInst MonoInst;
typedef struct CallInfo CallInfo;
typedef struct SeqPointInfo SeqPointInfo;
#include "mini-arch.h"
#include "regalloc.h"
#include "mini-unwind.h"
#include <mono/jit/jit.h>
#include "cfgdump.h"
#include "tiered.h"
#include "mono/metadata/tabledefs.h"
#include "mono/metadata/marshal.h"
#include "mono/metadata/exception.h"
#include "mono/metadata/callspec.h"
#include "mono/metadata/icall-signatures.h"
/*
* The mini code should not have any compile time dependencies on the GC being used, so the same object file from mini/
* can be linked into both mono and mono-sgen.
*/
#if !defined(MONO_DLL_EXPORT) || !defined(_MSC_VER)
#if defined(HAVE_BOEHM_GC) || defined(HAVE_SGEN_GC)
#error "The code in mini/ should not depend on these defines."
#endif
#endif
#ifndef __GNUC__
/*#define __alignof__(a) sizeof(a)*/
#define __alignof__(type) G_STRUCT_OFFSET(struct { char c; type x; }, x)
#endif
#if DISABLE_LOGGING
#define MINI_DEBUG(level,limit,code)
#else
#define MINI_DEBUG(level,limit,code) do {if (G_UNLIKELY ((level) >= (limit))) code} while (0)
#endif
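/* Sketch (names illustrative):
 *   MINI_DEBUG (cfg->verbose_level, 2, { printf ("pass done\n"); });
 * the statement runs only once the level reaches the limit; 'code' must carry its own
 * terminating semicolon or braces since it is spliced in verbatim. */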
#if !defined(DISABLE_TASKLETS) && defined(MONO_ARCH_SUPPORT_TASKLETS)
#if defined(__GNUC__)
#define MONO_SUPPORT_TASKLETS 1
#elif defined(HOST_WIN32)
#define MONO_SUPPORT_TASKLETS 1
// Replace some gnu intrinsics needed for tasklets with MSVC equivalents.
#define __builtin_extract_return_addr(x) x
#define __builtin_return_address(x) _ReturnAddress()
#define __builtin_frame_address(x) _AddressOfReturnAddress()
#endif
#endif
#if ENABLE_LLVM
#define COMPILE_LLVM(cfg) ((cfg)->compile_llvm)
#define LLVM_ENABLED TRUE
#else
#define COMPILE_LLVM(cfg) (0)
#define LLVM_ENABLED FALSE
#endif
#ifdef MONO_ARCH_SOFT_FLOAT_FALLBACK
#define COMPILE_SOFT_FLOAT(cfg) (!COMPILE_LLVM ((cfg)) && mono_arch_is_soft_float ())
#else
#define COMPILE_SOFT_FLOAT(cfg) (0)
#endif
#define NOT_IMPLEMENTED do { g_assert_not_reached (); } while (0)
/* for 32 bit systems */
#if G_BYTE_ORDER == G_LITTLE_ENDIAN
#define MINI_LS_WORD_IDX 0
#define MINI_MS_WORD_IDX 1
#else
#define MINI_LS_WORD_IDX 1
#define MINI_MS_WORD_IDX 0
#endif
#define MINI_LS_WORD_OFFSET (MINI_LS_WORD_IDX * 4)
#define MINI_MS_WORD_OFFSET (MINI_MS_WORD_IDX * 4)
#define MONO_LVREG_LS(lvreg) ((lvreg) + 1)
#define MONO_LVREG_MS(lvreg) ((lvreg) + 2)
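/* For a 64-bit vreg on 32-bit targets, the two 32-bit halves live in consecutive vregs;
 * MONO_LVREG_LS/MS name the least and most significant half respectively. */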
#ifndef DISABLE_AOT
#define MONO_USE_AOT_COMPILER
#endif
//TODO: This is x86/amd64 specific.
#define mono_simd_shuffle_mask(a,b,c,d) ((a) | ((b) << 2) | ((c) << 4) | ((d) << 6))
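/* Example: mono_simd_shuffle_mask (0, 1, 2, 3) == 0xE4, the identity mask in the SSE shufps encoding. */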
/* Remap printf to g_print (we use a mix of these in the mini code) */
#ifdef HOST_ANDROID
#define printf g_print
#endif
#define MONO_TYPE_IS_PRIMITIVE(t) ((!m_type_is_byref ((t)) && ((((t)->type >= MONO_TYPE_BOOLEAN && (t)->type <= MONO_TYPE_R8) || ((t)->type >= MONO_TYPE_I && (t)->type <= MONO_TYPE_U)))))
#define MONO_TYPE_IS_VECTOR_PRIMITIVE(t) ((!m_type_is_byref ((t)) && ((((t)->type >= MONO_TYPE_I1 && (t)->type <= MONO_TYPE_R8) || ((t)->type >= MONO_TYPE_I && (t)->type <= MONO_TYPE_U)))))
// XXX: this ignores whether t is byref
#define MONO_TYPE_IS_PRIMITIVE_SCALAR(t) ((((((t)->type >= MONO_TYPE_BOOLEAN && (t)->type <= MONO_TYPE_U8) || ((t)->type >= MONO_TYPE_I && (t)->type <= MONO_TYPE_U)))))
typedef struct
{
MonoClass *klass;
MonoMethod *method;
} MonoClassMethodPair;
typedef struct
{
MonoClass *klass;
MonoMethod *method;
gboolean is_virtual;
} MonoDelegateClassMethodPair;
typedef struct {
MonoJitInfo *ji;
MonoCodeManager *code_mp;
} MonoJitDynamicMethodInfo;
/* An extension of MonoGenericParamFull used in generic sharing */
typedef struct {
MonoGenericParamFull param;
MonoGenericParam *parent;
} MonoGSharedGenericParam;
/* Contains a list of ips which needs to be patched when a method is compiled */
typedef struct {
GSList *list;
} MonoJumpList;
/* Arch-specific */
typedef struct {
int dummy;
} MonoDynCallInfo;
typedef struct {
guint32 index;
MonoExceptionClause *clause;
} MonoLeaveClause;
/*
* Information about a stack frame.
 * FIXME: This typedef exists only to avoid tons of code rewriting.
*/
typedef MonoStackFrameInfo StackFrameInfo;
#if 0
#define mono_bitset_foreach_bit(set,b,n) \
for (b = 0; b < n; b++)\
if (mono_bitset_test_fast(set,b))
#else
#define mono_bitset_foreach_bit(set,b,n) \
for (b = mono_bitset_find_start (set); b < n && b >= 0; b = mono_bitset_find_first (set, b))
#endif
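/* Usage sketch (illustrative): the macro expands to a for (+ if) header, so a statement or
 * block can follow it directly:
 *   int i;
 *   mono_bitset_foreach_bit (bb->live_in_set, i, cfg->num_varinfo) {
 *       ... process variable i ...
 *   }
 */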
/*
* Pull the list of opcodes
*/
#define OPDEF(a,b,c,d,e,f,g,h,i,j) \
a = i,
enum {
#include "mono/cil/opcode.def"
CEE_LASTOP
};
#undef OPDEF
#define MONO_VARINFO(cfg,varnum) (&(cfg)->vars [varnum])
#define MONO_INST_NULLIFY_SREGS(dest) do { \
(dest)->sreg1 = (dest)->sreg2 = (dest)->sreg3 = -1; \
} while (0)
#define MONO_INST_NEW(cfg,dest,op) do { \
(dest) = (MonoInst *)mono_mempool_alloc0 ((cfg)->mempool, sizeof (MonoInst)); \
(dest)->opcode = (op); \
(dest)->dreg = -1; \
MONO_INST_NULLIFY_SREGS ((dest)); \
(dest)->cil_code = (cfg)->ip; \
} while (0)
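/* Sketch: MONO_INST_NEW (cfg, ins, OP_NOP); yields a zeroed, mempool-allocated MonoInst with
 * dreg/sregs nullified and cil_code pointing at the IL offset currently being compiled. */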
#define MONO_INST_NEW_CALL(cfg,dest,op) do { \
(dest) = (MonoCallInst *)mono_mempool_alloc0 ((cfg)->mempool, sizeof (MonoCallInst)); \
(dest)->inst.opcode = (op); \
(dest)->inst.dreg = -1; \
MONO_INST_NULLIFY_SREGS (&(dest)->inst); \
(dest)->inst.cil_code = (cfg)->ip; \
} while (0)
#define MONO_ADD_INS(b,inst) do { \
if ((b)->last_ins) { \
(b)->last_ins->next = (inst); \
(inst)->prev = (b)->last_ins; \
(b)->last_ins = (inst); \
} else { \
(b)->code = (b)->last_ins = (inst); \
} \
} while (0)
#define NULLIFY_INS(ins) do { \
(ins)->opcode = OP_NOP; \
(ins)->dreg = -1; \
MONO_INST_NULLIFY_SREGS ((ins)); \
} while (0)
/* Remove INS from BB */
#define MONO_REMOVE_INS(bb,ins) do { \
if ((ins)->prev) \
(ins)->prev->next = (ins)->next; \
if ((ins)->next) \
(ins)->next->prev = (ins)->prev; \
if ((bb)->code == (ins)) \
(bb)->code = (ins)->next; \
if ((bb)->last_ins == (ins)) \
(bb)->last_ins = (ins)->prev; \
} while (0)
/* Remove INS from BB and nullify it */
#define MONO_DELETE_INS(bb,ins) do { \
MONO_REMOVE_INS ((bb), (ins)); \
NULLIFY_INS ((ins)); \
} while (0)
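/* Note: MONO_REMOVE_INS only unlinks INS from the bblock (it could be re-added elsewhere),
 * while MONO_DELETE_INS additionally turns it into an OP_NOP with no registers. */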
/*
* this is used to determine when some branch optimizations are possible: we exclude FP compares
* because they have weird semantics with NaNs.
*/
#define MONO_IS_COND_BRANCH_OP(ins) (((ins)->opcode >= OP_LBEQ && (ins)->opcode <= OP_LBLT_UN) || ((ins)->opcode >= OP_FBEQ && (ins)->opcode <= OP_FBLT_UN) || ((ins)->opcode >= OP_IBEQ && (ins)->opcode <= OP_IBLT_UN))
#define MONO_IS_COND_BRANCH_NOFP(ins) (MONO_IS_COND_BRANCH_OP(ins) && !(((ins)->opcode >= OP_FBEQ) && ((ins)->opcode <= OP_FBLT_UN)))
#define MONO_IS_BRANCH_OP(ins) (MONO_IS_COND_BRANCH_OP(ins) || ((ins)->opcode == OP_BR) || ((ins)->opcode == OP_BR_REG) || ((ins)->opcode == OP_SWITCH))
#define MONO_IS_COND_EXC(ins) ((((ins)->opcode >= OP_COND_EXC_EQ) && ((ins)->opcode <= OP_COND_EXC_LT_UN)) || (((ins)->opcode >= OP_COND_EXC_IEQ) && ((ins)->opcode <= OP_COND_EXC_ILT_UN)))
#define MONO_IS_SETCC(ins) ((((ins)->opcode >= OP_CEQ) && ((ins)->opcode <= OP_CLT_UN)) || (((ins)->opcode >= OP_ICEQ) && ((ins)->opcode <= OP_ICLE_UN)) || (((ins)->opcode >= OP_LCEQ) && ((ins)->opcode <= OP_LCLT_UN)) || (((ins)->opcode >= OP_FCEQ) && ((ins)->opcode <= OP_FCLT_UN)))
#define MONO_HAS_CUSTOM_EMULATION(ins) (((ins)->opcode >= OP_FBEQ && (ins)->opcode <= OP_FBLT_UN) || ((ins)->opcode >= OP_FCEQ && (ins)->opcode <= OP_FCLT_UN))
#define MONO_IS_LOAD_MEMBASE(ins) (((ins)->opcode >= OP_LOAD_MEMBASE && (ins)->opcode <= OP_LOADV_MEMBASE) || ((ins)->opcode >= OP_ATOMIC_LOAD_I1 && (ins)->opcode <= OP_ATOMIC_LOAD_R8))
#define MONO_IS_STORE_MEMBASE(ins) (((ins)->opcode >= OP_STORE_MEMBASE_REG && (ins)->opcode <= OP_STOREV_MEMBASE) || ((ins)->opcode >= OP_ATOMIC_STORE_I1 && (ins)->opcode <= OP_ATOMIC_STORE_R8))
#define MONO_IS_STORE_MEMINDEX(ins) (((ins)->opcode >= OP_STORE_MEMINDEX) && ((ins)->opcode <= OP_STORER8_MEMINDEX))
// This is internal because it is easily confused with any enum or integer.
#define MONO_IS_TAILCALL_OPCODE_INTERNAL(opcode) ((opcode) == OP_TAILCALL || (opcode) == OP_TAILCALL_MEMBASE || (opcode) == OP_TAILCALL_REG)
#define MONO_IS_TAILCALL_OPCODE(call) (MONO_IS_TAILCALL_OPCODE_INTERNAL (call->inst.opcode))
// OP_DYN_CALL is not a MonoCallInst
#define MONO_IS_CALL(ins) (((ins)->opcode >= OP_VOIDCALL && (ins)->opcode <= OP_VCALL2_MEMBASE) || \
MONO_IS_TAILCALL_OPCODE_INTERNAL ((ins)->opcode))
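/* A jump table can surface as OP_JUMP_TABLE, OP_SWITCH, or as an AOT constant/GOT entry
 * patched with MONO_PATCH_INFO_SWITCH; the two macros below recognize all of these forms,
 * and MONO_JUMP_TABLE_FROM_INS extracts the table pointer for each. */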
#define MONO_IS_JUMP_TABLE(ins) (((ins)->opcode == OP_JUMP_TABLE) ? TRUE : ((((ins)->opcode == OP_AOTCONST) && (ins->inst_i1 == (gpointer)MONO_PATCH_INFO_SWITCH)) ? TRUE : ((ins)->opcode == OP_SWITCH) ? TRUE : ((((ins)->opcode == OP_GOT_ENTRY) && ((ins)->inst_right->inst_i1 == (gpointer)MONO_PATCH_INFO_SWITCH)) ? TRUE : FALSE)))
#define MONO_JUMP_TABLE_FROM_INS(ins) (((ins)->opcode == OP_JUMP_TABLE) ? (ins)->inst_p0 : (((ins)->opcode == OP_AOTCONST) && (ins->inst_i1 == (gpointer)MONO_PATCH_INFO_SWITCH) ? (ins)->inst_p0 : (((ins)->opcode == OP_SWITCH) ? (ins)->inst_p0 : ((((ins)->opcode == OP_GOT_ENTRY) && ((ins)->inst_right->inst_i1 == (gpointer)MONO_PATCH_INFO_SWITCH)) ? (ins)->inst_right->inst_p0 : NULL))))
#define MONO_INS_HAS_NO_SIDE_EFFECT(ins) (mono_ins_no_side_effects ((ins)))
#define MONO_INS_IS_PCONST_NULL(ins) ((ins)->opcode == OP_PCONST && (ins)->inst_p0 == 0)
#define MONO_METHOD_IS_FINAL(m) (((m)->flags & METHOD_ATTRIBUTE_FINAL) || ((m)->klass && (mono_class_get_flags ((m)->klass) & TYPE_ATTRIBUTE_SEALED)))
/* Determine whether 'ins' represents a load of the 'this' argument */
#define MONO_CHECK_THIS(ins) (mono_method_signature_internal (cfg->method)->hasthis && ((ins)->opcode == OP_MOVE) && ((ins)->sreg1 == cfg->args [0]->dreg))
#ifdef MONO_ARCH_SIMD_INTRINSICS
#define MONO_IS_PHI(ins) (((ins)->opcode == OP_PHI) || ((ins)->opcode == OP_FPHI) || ((ins)->opcode == OP_VPHI) || ((ins)->opcode == OP_XPHI))
#define MONO_IS_MOVE(ins) (((ins)->opcode == OP_MOVE) || ((ins)->opcode == OP_FMOVE) || ((ins)->opcode == OP_VMOVE) || ((ins)->opcode == OP_XMOVE) || ((ins)->opcode == OP_RMOVE))
#define MONO_IS_NON_FP_MOVE(ins) (((ins)->opcode == OP_MOVE) || ((ins)->opcode == OP_VMOVE) || ((ins)->opcode == OP_XMOVE))
#define MONO_IS_REAL_MOVE(ins) (((ins)->opcode == OP_MOVE) || ((ins)->opcode == OP_FMOVE) || ((ins)->opcode == OP_XMOVE) || ((ins)->opcode == OP_RMOVE))
#define MONO_IS_ZERO(ins) (((ins)->opcode == OP_VZERO) || ((ins)->opcode == OP_XZERO))
#ifdef TARGET_ARM64
/*
* SIMD is only supported on arm64 when using the LLVM backend. When not using
* the LLVM backend, treat SIMD datatypes as regular value types.
*/
#define MONO_CLASS_IS_SIMD(cfg, klass) (((cfg)->opt & MONO_OPT_SIMD) && COMPILE_LLVM (cfg) && m_class_is_simd_type (klass))
#else
#define MONO_CLASS_IS_SIMD(cfg, klass) (((cfg)->opt & MONO_OPT_SIMD) && m_class_is_simd_type (klass) && (COMPILE_LLVM (cfg) || mono_type_size (m_class_get_byval_arg (klass), NULL) == 16))
#endif
#else
#define MONO_IS_PHI(ins) (((ins)->opcode == OP_PHI) || ((ins)->opcode == OP_FPHI) || ((ins)->opcode == OP_VPHI))
#define MONO_IS_MOVE(ins) (((ins)->opcode == OP_MOVE) || ((ins)->opcode == OP_FMOVE) || ((ins)->opcode == OP_VMOVE) || ((ins)->opcode == OP_RMOVE))
#define MONO_IS_NON_FP_MOVE(ins) (((ins)->opcode == OP_MOVE) || ((ins)->opcode == OP_VMOVE))
/* A real MOVE is one that isn't decomposed, such as a VMOVE or LMOVE */
#define MONO_IS_REAL_MOVE(ins) (((ins)->opcode == OP_MOVE) || ((ins)->opcode == OP_FMOVE) || ((ins)->opcode == OP_RMOVE))
#define MONO_IS_ZERO(ins) ((ins)->opcode == OP_VZERO)
#define MONO_CLASS_IS_SIMD(cfg, klass) (0)
#endif
#if defined(TARGET_X86) || defined(TARGET_AMD64)
#define EMIT_NEW_X86_LEA(cfg,dest,sr1,sr2,shift,imm) do { \
MONO_INST_NEW (cfg, dest, OP_X86_LEA); \
(dest)->dreg = alloc_ireg_mp ((cfg)); \
(dest)->sreg1 = (sr1); \
(dest)->sreg2 = (sr2); \
(dest)->inst_imm = (imm); \
(dest)->backend.shift_amount = (shift); \
MONO_ADD_INS ((cfg)->cbb, (dest)); \
} while (0)
#endif
typedef struct MonoInstList MonoInstList;
typedef struct MonoCallInst MonoCallInst;
typedef struct MonoCallArgParm MonoCallArgParm;
typedef struct MonoMethodVar MonoMethodVar;
typedef struct MonoBasicBlock MonoBasicBlock;
typedef struct MonoSpillInfo MonoSpillInfo;
extern MonoCallSpec *mono_jit_trace_calls;
extern MonoMethodDesc *mono_inject_async_exc_method;
extern int mono_inject_async_exc_pos;
extern MonoMethodDesc *mono_break_at_bb_method;
extern int mono_break_at_bb_bb_num;
extern gboolean mono_do_x86_stack_align;
extern int mini_verbose;
extern int valgrind_register;
#define INS_INFO(opcode) (&mini_ins_info [((opcode) - OP_START - 1) * 4])
/* instruction description for use in regalloc/scheduling */
enum {
MONO_INST_DEST = 0,
MONO_INST_SRC1 = 1, /* we depend on the SRCs to be consecutive */
MONO_INST_SRC2 = 2,
MONO_INST_SRC3 = 3,
MONO_INST_LEN = 4,
MONO_INST_CLOB = 5,
/* Unused, commented out to reduce the size of the mdesc tables
MONO_INST_FLAGS,
MONO_INST_COST,
MONO_INST_DELAY,
MONO_INST_RES,
*/
MONO_INST_MAX = 6
};
typedef union MonoInstSpec { // instruction specification
struct {
char dest;
char src1;
char src2;
char src3;
unsigned char len;
char clob;
// char flags;
// char cost;
// char delay;
// char res;
};
struct {
char xdest;
char src [3];
unsigned char xlen;
char xclob;
};
char bytes[MONO_INST_MAX];
} MonoInstSpec;
extern const char mini_ins_info[];
extern const gint8 mini_ins_sreg_counts [];
#ifndef DISABLE_JIT
#define mono_inst_get_num_src_registers(ins) (mini_ins_sreg_counts [(ins)->opcode - OP_START - 1])
#else
#define mono_inst_get_num_src_registers(ins) 0
#endif
#define mono_inst_get_src_registers(ins, regs) (((regs) [0] = (ins)->sreg1), ((regs) [1] = (ins)->sreg2), ((regs) [2] = (ins)->sreg3), mono_inst_get_num_src_registers ((ins)))
#define MONO_BB_FOR_EACH_INS(bb, ins) for ((ins) = (bb)->code; (ins); (ins) = (ins)->next)
#define MONO_BB_FOR_EACH_INS_SAFE(bb, n, ins) for ((ins) = (bb)->code, n = (ins) ? (ins)->next : NULL; (ins); (ins) = (n), (n) = (ins) ? (ins)->next : NULL)
#define MONO_BB_FOR_EACH_INS_REVERSE(bb, ins) for ((ins) = (bb)->last_ins; (ins); (ins) = (ins)->prev)
#define MONO_BB_FOR_EACH_INS_REVERSE_SAFE(bb, p, ins) for ((ins) = (bb)->last_ins, p = (ins) ? (ins)->prev : NULL; (ins); (ins) = (p), (p) = (ins) ? (ins)->prev : NULL)
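/* The _SAFE variants cache the next/prev pointer before the loop body runs, so the current
 * instruction may be removed (e.g. with MONO_DELETE_INS) without breaking the walk. */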
#define mono_bb_first_ins(bb) (bb)->code
/*
* Iterate through all used registers in the instruction.
 * Relies on the existing order of the MONO_INST enum: MONO_INST_{DEST,SRC1,SRC2,SRC3,LEN}
* INS is the instruction, IDX is the register index, REG is the pointer to a register.
*/
#define MONO_INS_FOR_EACH_REG(ins, idx, reg) for ((idx) = INS_INFO ((ins)->opcode)[MONO_INST_DEST] != ' ' ? MONO_INST_DEST : \
(mono_inst_get_num_src_registers (ins) ? MONO_INST_SRC1 : MONO_INST_LEN); \
(reg) = (idx) == MONO_INST_DEST ? &(ins)->dreg : \
((idx) == MONO_INST_SRC1 ? &(ins)->sreg1 : \
((idx) == MONO_INST_SRC2 ? &(ins)->sreg2 : \
((idx) == MONO_INST_SRC3 ? &(ins)->sreg3 : NULL))), \
idx < MONO_INST_LEN; \
(idx) = (idx) > mono_inst_get_num_src_registers (ins) + (INS_INFO ((ins)->opcode)[MONO_INST_DEST] != ' ') ? MONO_INST_LEN : (idx) + 1)
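/* Usage sketch (illustrative vreg renaming):
 *   int idx;
 *   gint32 *reg;
 *   MONO_INS_FOR_EACH_REG (ins, idx, reg) {
 *       if (*reg == old_vreg)
 *           *reg = new_vreg;
 *   }
 */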
struct MonoSpillInfo {
int offset;
};
/*
* Information about a call site for the GC map creation code
*/
typedef struct {
/* The next offset after the call instruction */
int pc_offset;
/* The basic block containing the call site */
MonoBasicBlock *bb;
/*
* The set of variables live at the call site.
* Has length cfg->num_varinfo in bits.
*/
guint8 *liveness;
/*
* List of OP_GC_PARAM_SLOT_LIVENESS_DEF instructions defining the param slots
* used by this call.
*/
GSList *param_slots;
} GCCallSite;
/*
* The IR-level extended basic block.
*
* A basic block can have multiple exits just fine, as long as the point of
* 'departure' is the last instruction in the basic block. Extended basic
* blocks, on the other hand, may have instructions that leave the block
* midstream. The important thing is that they cannot be _entered_
 * midstream, i.e., execution of a basic block (or extended bb) always starts
* at the beginning of the block, never in the middle.
*/
struct MonoBasicBlock {
MonoInst *last_ins;
/* the next basic block in the order it appears in IL */
MonoBasicBlock *next_bb;
/*
* Before instruction selection it is the first tree in the
* forest and the first item in the list of trees. After
* instruction selection it is the first instruction and the
* first item in the list of instructions.
*/
MonoInst *code;
/* unique block number identification */
gint32 block_num;
gint32 dfn;
/* Basic blocks: incoming and outgoing counts and pointers */
/* Each bb should only appear once in each array */
gint16 out_count, in_count;
MonoBasicBlock **in_bb;
MonoBasicBlock **out_bb;
/* Points to the start of the CIL code that initiated this BB */
unsigned char* cil_code;
/* Length of the CIL block */
gint32 cil_length;
/* The offset of the generated code, used for fixups */
int native_offset;
/* The length of the generated code, doesn't include alignment padding */
int native_length;
/* The real native offset, which includes alignment padding too */
int real_native_offset;
int max_offset;
int max_length;
/* Visited and reachable flags */
guint32 flags;
/*
* SSA and loop based flags
*/
MonoBitSet *dominators;
MonoBitSet *dfrontier;
MonoBasicBlock *idom;
GSList *dominated;
/* fast dominator algorithm */
MonoBasicBlock *df_parent, *ancestor, *child, *label;
int size, sdom, idomn;
/* loop nesting and recognition */
GList *loop_blocks;
gint8 nesting;
gint8 loop_body_start;
/*
 * Whether the bblock is rarely executed, so it should be emitted after
 * the function epilog.
*/
guint out_of_line : 1;
/* Caches the result of uselessness calculation during optimize_branches */
guint not_useless : 1;
/* Whether the decompose_array_access_opts () pass needs to process this bblock */
guint needs_decompose : 1;
/* Whether this bblock is extended, i.e. it has branches inside it */
guint extended : 1;
/* Whether this bblock contains an OP_JUMP_TABLE instruction */
guint has_jump_table : 1;
/* Whether this bblock contains an OP_CALL_HANDLER instruction */
guint has_call_handler : 1;
/* Whether this bblock starts a try block */
guint try_start : 1;
#ifdef ENABLE_LLVM
/* The offset of the CIL instruction in this bblock which ends a try block */
intptr_t try_end;
#endif
/*
* If this is set, extend the try range started by this bblock by an arch specific
* number of bytes to encompass the end of the previous bblock (e.g. a Monitor.Enter
* call).
*/
guint extend_try_block : 1;
/* use for liveness analysis */
MonoBitSet *gen_set;
MonoBitSet *kill_set;
MonoBitSet *live_in_set;
MonoBitSet *live_out_set;
/* fields to deal with non-empty stack slots at bb boundary */
guint16 out_scount, in_scount;
MonoInst **out_stack;
MonoInst **in_stack;
/* we use that to prevent merging of bblocks covered by different clauses */
guint real_offset;
GSList *seq_points;
// The MonoInst of the last sequence point for the current basic block.
MonoInst *last_seq_point;
// This will hold a list of last sequence points of incoming basic blocks
MonoInst **pred_seq_points;
guint num_pred_seq_points;
GSList *spill_slot_defs;
/* List of call sites in this bblock sorted by pc_offset */
GSList *gc_callsites;
/*
* If this is not null, the basic block is a try hole for all the clauses
* in the list previous to this element (including the element).
*/
GList *clause_holes;
/*
* The region encodes whether the basic block is inside
* a finally, catch, filter or none of these.
*
* If the value is -1, then it is neither finally, catch nor filter
*
* Otherwise the format is:
*
* Bits: | 0-3 | 4-7 | 8-31
* | | |
* | clause-flags | MONO_REGION | clause-index
*
*/
guint region;
/* The current symbolic register number, used in local register allocation. */
guint32 max_vreg;
};
/* BBlock flags */
enum {
BB_VISITED = 1 << 0,
BB_REACHABLE = 1 << 1,
BB_EXCEPTION_DEAD_OBJ = 1 << 2,
BB_EXCEPTION_UNSAFE = 1 << 3,
BB_EXCEPTION_HANDLER = 1 << 4,
/* for Native Client, mark the blocks that can be jumped to indirectly */
BB_INDIRECT_JUMP_TARGET = 1 << 5,
/* Contains code with some side effects */
BB_HAS_SIDE_EFFECTS = 1 << 6,
};
typedef struct MonoMemcpyArgs {
int size, align;
} MonoMemcpyArgs;
typedef enum {
LLVMArgNone,
/* Scalar argument passed by value */
LLVMArgNormal,
/* Only in ainfo->pair_storage */
LLVMArgInIReg,
/* Only in ainfo->pair_storage */
LLVMArgInFPReg,
/* Valuetype passed in 1-2 consecutive registers */
LLVMArgVtypeInReg,
LLVMArgVtypeByVal,
LLVMArgVtypeRetAddr, /* Only on cinfo->ret */
LLVMArgGSharedVt,
/* Fixed size argument passed to/returned from gsharedvt method by ref */
LLVMArgGsharedvtFixed,
/* Fixed size vtype argument passed to/returned from gsharedvt method by ref */
LLVMArgGsharedvtFixedVtype,
/* Variable sized argument passed to/returned from gsharedvt method by ref */
LLVMArgGsharedvtVariable,
/* Vtype passed/returned as one int array argument */
LLVMArgAsIArgs,
/* Vtype passed as a set of fp arguments */
LLVMArgAsFpArgs,
/*
* Only for returns, a structure which
* consists of floats/doubles.
*/
LLVMArgFpStruct,
LLVMArgVtypeByRef,
/* Vtype returned as an int */
LLVMArgVtypeAsScalar,
/* Address to local vtype passed as argument (using register or stack). */
LLVMArgVtypeAddr,
/*
* On WASM, a one element vtype is passed/returned as a scalar with the same
* type as the element.
* esize is the size of the value.
*/
LLVMArgWasmVtypeAsScalar
} LLVMArgStorage;
typedef struct {
LLVMArgStorage storage;
/*
 * Only if storage == LLVMArgVtypeInReg/LLVMArgAsFpArgs.
* This contains how the parts of the vtype are passed.
*/
LLVMArgStorage pair_storage [8];
/*
* Only if storage == LLVMArgAsIArgs/LLVMArgAsFpArgs/LLVMArgFpStruct.
* If storage == LLVMArgAsFpArgs, this is the number of arguments
* used to pass the value.
* If storage == LLVMArgFpStruct, this is the number of fields
* in the structure.
*/
int nslots;
/* Only if storage == LLVMArgAsIArgs/LLVMArgAsFpArgs/LLVMArgFpStruct (4/8) */
int esize;
/* Parameter index in the LLVM signature */
int pindex;
MonoType *type;
/* Only if storage == LLVMArgAsFpArgs. Dummy fp args to insert before this arg */
int ndummy_fpargs;
} LLVMArgInfo;
typedef struct {
LLVMArgInfo ret;
/* Whether there is an rgctx argument */
gboolean rgctx_arg;
/* Whether there is an IMT argument */
gboolean imt_arg;
/* Whether there is a dummy extra argument */
gboolean dummy_arg;
/*
* The position of the vret arg in the argument list.
 * Only if ret->storage == LLVMArgVtypeRetAddr.
* Should be 0 or 1.
*/
int vret_arg_index;
/* The indexes of various special arguments in the LLVM signature */
int vret_arg_pindex, this_arg_pindex, rgctx_arg_pindex, imt_arg_pindex, dummy_arg_pindex;
/* Inline array of argument info */
/* args [0] is for the this argument if it exists */
LLVMArgInfo args [1];
} LLVMCallInfo;
#define MONO_MAX_SRC_REGS 3
struct MonoInst {
guint16 opcode;
guint8 type; /* stack type */
guint8 flags;
/* used by the register allocator */
gint32 dreg, sreg1, sreg2, sreg3;
MonoInst *next, *prev;
union {
union {
MonoInst *src;
MonoMethodVar *var;
target_mgreg_t const_val;
#if (SIZEOF_REGISTER > TARGET_SIZEOF_VOID_P) && (G_BYTE_ORDER == G_BIG_ENDIAN)
struct {
gpointer p[SIZEOF_REGISTER/TARGET_SIZEOF_VOID_P];
} pdata;
#else
gpointer p;
#endif
MonoMethod *method;
MonoMethodSignature *signature;
MonoBasicBlock **many_blocks;
MonoBasicBlock *target_block;
MonoInst **args;
MonoType *vtype;
MonoClass *klass;
int *phi_args;
MonoCallInst *call_inst;
GList *exception_clauses;
const char *exc_name;
} op [2];
gint64 i8const;
double r8const;
} data;
const unsigned char* cil_code; /* for debugging and bblock splitting */
/* used mostly by the backend to store additional info it may need */
union {
gint32 reg3;
gint32 arg_info;
gint32 size;
MonoMemcpyArgs *memcpy_args; /* in OP_MEMSET and OP_MEMCPY */
gpointer data;
gint shift_amount;
gboolean is_pinvoke; /* for variables in the unmanaged marshal format */
gboolean record_cast_details; /* For CEE_CASTCLASS */
MonoInst *spill_var; /* for OP_MOVE_I4_TO_F/F_TO_I4 and OP_FCONV_TO_R8_X */
guint16 source_opcode; /* OP_XCONV_R8_TO_I4 needs to know which op was used to do proper widening */
int pc_offset; /* OP_GC_LIVERANGE_START/END */
/*
* memory_barrier: MONO_MEMORY_BARRIER_{ACQ,REL,SEQ}
* atomic_load_*: MONO_MEMORY_BARRIER_{ACQ,SEQ}
* atomic_store_*: MONO_MEMORY_BARRIER_{REL,SEQ}
*/
int memory_barrier_kind;
} backend;
MonoClass *klass;
};
struct MonoCallInst {
MonoInst inst;
MonoMethodSignature *signature;
MonoMethod *method;
MonoInst **args;
MonoInst *out_args;
MonoInst *vret_var;
gconstpointer fptr;
MonoJitICallId jit_icall_id;
guint stack_usage;
guint stack_align_amount;
regmask_t used_iregs;
regmask_t used_fregs;
GSList *out_ireg_args;
GSList *out_freg_args;
GSList *outarg_vts;
CallInfo *call_info;
#ifdef ENABLE_LLVM
LLVMCallInfo *cinfo;
int rgctx_arg_reg, imt_arg_reg;
#endif
#ifdef TARGET_ARM
/* See the comment in mini-arm.c!mono_arch_emit_call for RegTypeFP. */
GSList *float_args;
#endif
// Bitfields are at the end to minimize padding for alignment,
// unless there is a placement to increase locality.
guint is_virtual : 1;
// FIXME tailcall field is written after read; prefer MONO_IS_TAILCALL_OPCODE.
guint tailcall : 1;
/* If this is TRUE, 'fptr' points to a MonoJumpInfo instead of an address. */
guint fptr_is_patch : 1;
/*
* If this is true, then the call returns a vtype in a register using the same
* calling convention as OP_CALL.
*/
guint vret_in_reg : 1;
/* Whether vret_in_reg returns fp values */
guint vret_in_reg_fp : 1;
/* Whether there is an IMT argument and it is dynamic */
guint dynamic_imt_arg : 1;
/* Whether there is an RGCTX argument */
guint32 rgctx_reg : 1;
/* Whether the call will need an unbox trampoline */
guint need_unbox_trampoline : 1;
};
struct MonoCallArgParm {
MonoInst ins;
gint32 size;
gint32 offset;
gint32 offPrm;
};
/*
* flags for MonoInst
* Note: some of the values overlap, because they can't appear
* in the same MonoInst.
*/
enum {
MONO_INST_HAS_METHOD = 1,
MONO_INST_INIT = 1, /* in localloc */
MONO_INST_SINGLE_STEP_LOC = 1, /* in SEQ_POINT */
MONO_INST_IS_DEAD = 2,
MONO_INST_TAILCALL = 4,
MONO_INST_VOLATILE = 4,
MONO_INST_NOTYPECHECK = 4,
MONO_INST_NONEMPTY_STACK = 4, /* in SEQ_POINT */
MONO_INST_UNALIGNED = 8,
MONO_INST_NESTED_CALL = 8, /* in SEQ_POINT */
MONO_INST_CFOLD_TAKEN = 8, /* On branches */
MONO_INST_CFOLD_NOT_TAKEN = 16, /* On branches */
MONO_INST_DEFINITION_HAS_SIDE_EFFECTS = 8,
/* the address of the variable has been taken */
MONO_INST_INDIRECT = 16,
MONO_INST_NORANGECHECK = 16,
/* On loads, the source address can be null */
MONO_INST_FAULT = 32,
/*
* On variables, identifies LMF variables. These variables have a dummy type (int), but
* require stack space for a MonoLMF struct.
*/
MONO_INST_LMF = 32,
/* On loads, the source address points to a constant value */
MONO_INST_INVARIANT_LOAD = 64,
/* On stores, the destination is the stack */
MONO_INST_STACK_STORE = 64,
/* On variables, the variable needs GC tracking */
MONO_INST_GC_TRACK = 128,
/*
* Set on instructions during code emission which make calls, i.e. OP_CALL, OP_THROW.
* backend.pc_offset will be set to the pc offset at the end of the native call instructions.
*/
MONO_INST_GC_CALLSITE = 128,
/* On comparisons, mark the branch following the condition as likely to be taken */
MONO_INST_LIKELY = 128,
MONO_INST_NONULLCHECK = 128,
};
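/* Accessor macros for MonoInst.data: the union slots below are reused with opcode-dependent
 * meanings, so these aliases document the role each slot plays for a given opcode. */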
#define inst_c0 data.op[0].const_val
#define inst_c1 data.op[1].const_val
#define inst_i0 data.op[0].src
#define inst_i1 data.op[1].src
#if (SIZEOF_REGISTER > TARGET_SIZEOF_VOID_P) && (G_BYTE_ORDER == G_BIG_ENDIAN)
#define inst_p0 data.op[0].pdata.p[SIZEOF_REGISTER/TARGET_SIZEOF_VOID_P - 1]
#define inst_p1 data.op[1].pdata.p[SIZEOF_REGISTER/TARGET_SIZEOF_VOID_P - 1]
#else
#define inst_p0 data.op[0].p
#define inst_p1 data.op[1].p
#endif
#define inst_l data.i8const
#define inst_r data.r8const
#define inst_left data.op[0].src
#define inst_right data.op[1].src
#define inst_newa_len data.op[0].src
#define inst_newa_class data.op[1].klass
/* In _OVF opcodes */
#define inst_exc_name data.op[0].exc_name
#define inst_var data.op[0].var
#define inst_vtype data.op[1].vtype
/* in branch instructions */
#define inst_many_bb data.op[1].many_blocks
#define inst_target_bb data.op[0].target_block
#define inst_true_bb data.op[1].many_blocks[0]
#define inst_false_bb data.op[1].many_blocks[1]
#define inst_basereg sreg1
#define inst_indexreg sreg2
#define inst_destbasereg dreg
#define inst_offset data.op[0].const_val
#define inst_imm data.op[1].const_val
#define inst_call data.op[1].call_inst
#define inst_phi_args data.op[1].phi_args
#define inst_eh_blocks data.op[1].exception_clauses
/* Return the lower 32 bits of the 64 bit immediate in INS */
static inline guint32
ins_get_l_low (MonoInst *ins)
{
return (guint32)(ins->data.i8const & 0xffffffff);
}
/* Return the higher 32 bits of the 64 bit immediate in INS */
static inline guint32
ins_get_l_high (MonoInst *ins)
{
return (guint32)((ins->data.i8const >> 32) & 0xffffffff);
}
static inline void
mono_inst_set_src_registers (MonoInst *ins, int *regs)
{
ins->sreg1 = regs [0];
ins->sreg2 = regs [1];
ins->sreg3 = regs [2];
}
typedef union {
struct {
guint16 tid; /* tree number */
guint16 bid; /* block number */
} pos;
guint32 abs_pos;
} MonoPosition;
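/* abs_pos overlays the (tid, bid) pair, so a whole position can be stored and compared as a
 * single 32-bit value. */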
typedef struct {
MonoPosition first_use, last_use;
} MonoLiveRange;
typedef struct MonoLiveRange2 MonoLiveRange2;
struct MonoLiveRange2 {
int from, to;
MonoLiveRange2 *next;
};
typedef struct {
/* List of live ranges sorted by 'from' */
MonoLiveRange2 *range;
MonoLiveRange2 *last_range;
} MonoLiveInterval;
/*
* Additional information about a variable
*/
struct MonoMethodVar {
guint idx; /* inside cfg->varinfo, cfg->vars */
MonoLiveRange range; /* generated by liveness analysis */
MonoLiveInterval *interval; /* generated by liveness analysis */
int reg; /* != -1 if allocated into a register */
int spill_costs;
MonoBitSet *def_in; /* used by SSA */
MonoInst *def; /* used by SSA */
MonoBasicBlock *def_bb; /* used by SSA */
GList *uses; /* used by SSA */
char cpstate; /* used by SSA conditional constant propagation */
/* The native offsets corresponding to the live range of the variable */
gint32 live_range_start, live_range_end;
/*
* cfg->varinfo [idx]->dreg could be replaced for OP_REGVAR, this contains the
* original vreg.
*/
gint32 vreg;
};
/* Generic sharing */
/*
* Flags for which contexts were used in inflating a generic.
*/
enum {
MONO_GENERIC_CONTEXT_USED_CLASS = 1,
MONO_GENERIC_CONTEXT_USED_METHOD = 2
};
enum {
/* Cannot be 0 since this is stored in rgctx slots, and 0 means an uninitialized rgctx slot */
MONO_GSHAREDVT_BOX_TYPE_VTYPE = 1,
MONO_GSHAREDVT_BOX_TYPE_REF = 2,
MONO_GSHAREDVT_BOX_TYPE_NULLABLE = 3
};
typedef enum {
MONO_RGCTX_INFO_STATIC_DATA = 0,
MONO_RGCTX_INFO_KLASS = 1,
MONO_RGCTX_INFO_ELEMENT_KLASS = 2,
MONO_RGCTX_INFO_VTABLE = 3,
MONO_RGCTX_INFO_TYPE = 4,
MONO_RGCTX_INFO_REFLECTION_TYPE = 5,
MONO_RGCTX_INFO_METHOD = 6,
MONO_RGCTX_INFO_GENERIC_METHOD_CODE = 7,
MONO_RGCTX_INFO_GSHAREDVT_OUT_WRAPPER = 8,
MONO_RGCTX_INFO_CLASS_FIELD = 9,
MONO_RGCTX_INFO_METHOD_RGCTX = 10,
MONO_RGCTX_INFO_METHOD_CONTEXT = 11,
MONO_RGCTX_INFO_REMOTING_INVOKE_WITH_CHECK = 12,
MONO_RGCTX_INFO_METHOD_DELEGATE_CODE = 13,
MONO_RGCTX_INFO_CAST_CACHE = 14,
MONO_RGCTX_INFO_ARRAY_ELEMENT_SIZE = 15,
MONO_RGCTX_INFO_VALUE_SIZE = 16,
/* +1 to avoid zero values in rgctx slots */
MONO_RGCTX_INFO_FIELD_OFFSET = 17,
/* Either the code for a gsharedvt method, or the address for a gsharedvt-out trampoline for the method */
/* In llvmonly mode, this is a function descriptor */
MONO_RGCTX_INFO_METHOD_GSHAREDVT_OUT_TRAMPOLINE = 18,
/* Same for virtual calls */
/* In llvmonly mode, this is a function descriptor */
MONO_RGCTX_INFO_METHOD_GSHAREDVT_OUT_TRAMPOLINE_VIRT = 19,
/* Same for calli, associated with a signature */
MONO_RGCTX_INFO_SIG_GSHAREDVT_OUT_TRAMPOLINE_CALLI = 20,
MONO_RGCTX_INFO_SIG_GSHAREDVT_IN_TRAMPOLINE_CALLI = 21,
/* One of MONO_GSHAREDVT_BOX_TYPE */
MONO_RGCTX_INFO_CLASS_BOX_TYPE = 22,
/* Resolves to a MonoGSharedVtMethodRuntimeInfo */
MONO_RGCTX_INFO_METHOD_GSHAREDVT_INFO = 23,
MONO_RGCTX_INFO_LOCAL_OFFSET = 24,
MONO_RGCTX_INFO_MEMCPY = 25,
MONO_RGCTX_INFO_BZERO = 26,
/* The address of Nullable<T>.Box () */
/* In llvmonly mode, this is a function descriptor */
MONO_RGCTX_INFO_NULLABLE_CLASS_BOX = 27,
MONO_RGCTX_INFO_NULLABLE_CLASS_UNBOX = 28,
/* MONO_PATCH_INFO_VCALL_METHOD */
/* In llvmonly mode, this is a function descriptor */
MONO_RGCTX_INFO_VIRT_METHOD_CODE = 29,
/*
* MONO_PATCH_INFO_VCALL_METHOD
* Same as MONO_RGCTX_INFO_CLASS_BOX_TYPE, but for the class
* which implements the method.
*/
MONO_RGCTX_INFO_VIRT_METHOD_BOX_TYPE = 30,
/* Resolve to 2 (TRUE) or 1 (FALSE) */
MONO_RGCTX_INFO_CLASS_IS_REF_OR_CONTAINS_REFS = 31,
/* The MonoDelegateTrampInfo instance */
MONO_RGCTX_INFO_DELEGATE_TRAMP_INFO = 32,
/* Same as MONO_PATCH_INFO_METHOD_FTNDESC */
MONO_RGCTX_INFO_METHOD_FTNDESC = 33,
/* mono_type_size () for a class */
MONO_RGCTX_INFO_CLASS_SIZEOF = 34,
/* The InterpMethod for a method */
MONO_RGCTX_INFO_INTERP_METHOD = 35,
/* The llvmonly interp entry for a method */
MONO_RGCTX_INFO_LLVMONLY_INTERP_ENTRY = 36
} MonoRgctxInfoType;
/* How an rgctx is passed to a method */
typedef enum {
MONO_RGCTX_ACCESS_NONE = 0,
/* Loaded from this->vtable->rgctx */
MONO_RGCTX_ACCESS_THIS = 1,
/* Loaded from an additional mrgctx argument */
MONO_RGCTX_ACCESS_MRGCTX = 2,
/* Loaded from an additional vtable argument */
MONO_RGCTX_ACCESS_VTABLE = 3
} MonoRgctxAccess;
typedef struct _MonoRuntimeGenericContextInfoTemplate {
MonoRgctxInfoType info_type;
gpointer data;
struct _MonoRuntimeGenericContextInfoTemplate *next;
} MonoRuntimeGenericContextInfoTemplate;
typedef struct {
MonoClass *next_subclass;
MonoRuntimeGenericContextInfoTemplate *infos;
GSList *method_templates;
} MonoRuntimeGenericContextTemplate;
typedef struct {
MonoVTable *class_vtable; /* must be the first element */
MonoGenericInst *method_inst;
gpointer infos [MONO_ZERO_LEN_ARRAY];
} MonoMethodRuntimeGenericContext;
/* MONO_ABI_SIZEOF () would include the 'infos' field as well */
#define MONO_SIZEOF_METHOD_RUNTIME_GENERIC_CONTEXT (TARGET_SIZEOF_VOID_P * 2)
#define MONO_RGCTX_SLOT_MAKE_RGCTX(i) (i)
#define MONO_RGCTX_SLOT_MAKE_MRGCTX(i) ((i) | 0x80000000)
#define MONO_RGCTX_SLOT_INDEX(s) ((s) & 0x7fffffff)
#define MONO_RGCTX_SLOT_IS_MRGCTX(s) (((s) & 0x80000000) ? TRUE : FALSE)
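/*
 * Example (illustrative): encoding slot 5 of a method rgctx and decoding it back:
 *   guint32 s = MONO_RGCTX_SLOT_MAKE_MRGCTX (5);
 *   g_assert (MONO_RGCTX_SLOT_IS_MRGCTX (s));
 *   g_assert (MONO_RGCTX_SLOT_INDEX (s) == 5);
 */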
#define MONO_GSHAREDVT_DEL_INVOKE_VT_OFFSET -2
typedef struct {
MonoMethod *method;
MonoRuntimeGenericContextInfoTemplate *entries;
int num_entries, count_entries;
} MonoGSharedVtMethodInfo;
/* This is used by gsharedvt methods to allocate locals and compute local offsets */
typedef struct {
int locals_size;
/*
* The results of resolving the entries in MonoGSharedVtMethodInfo->entries.
* We use this instead of rgctx slots since these can be loaded using a load instead
* of a call to an rgctx fetch trampoline.
*/
gpointer entries [MONO_ZERO_LEN_ARRAY];
} MonoGSharedVtMethodRuntimeInfo;
typedef struct
{
MonoClass *klass;
MonoMethod *invoke;
MonoMethod *method;
MonoMethodSignature *invoke_sig;
MonoMethodSignature *sig;
gpointer method_ptr;
gpointer invoke_impl;
gpointer impl_this;
gpointer impl_nothis;
gboolean need_rgctx_tramp;
} MonoDelegateTrampInfo;
/*
* A function descriptor, which is a function address + argument pair.
* In llvm-only mode, these are used instead of trampolines to pass
* extra arguments to runtime functions/methods.
*/
typedef struct
{
gpointer addr;
gpointer arg;
MonoMethod *method;
/* Tagged InterpMethod* */
gpointer interp_method;
} MonoFtnDesc;
typedef enum {
#define PATCH_INFO(a,b) MONO_PATCH_INFO_ ## a,
#include "patch-info.h"
#undef PATCH_INFO
MONO_PATCH_INFO_NUM
} MonoJumpInfoType;
typedef struct MonoJumpInfoRgctxEntry MonoJumpInfoRgctxEntry;
typedef struct MonoJumpInfo MonoJumpInfo;
typedef struct MonoJumpInfoGSharedVtCall MonoJumpInfoGSharedVtCall;
// Subset of MonoJumpInfo.
typedef struct MonoJumpInfoTarget {
MonoJumpInfoType type;
gconstpointer target;
} MonoJumpInfoTarget;
// This ordering is mimicked in MONO_JIT_ICALLS.
typedef enum {
MONO_TRAMPOLINE_JIT = 0,
MONO_TRAMPOLINE_JUMP = 1,
MONO_TRAMPOLINE_RGCTX_LAZY_FETCH = 2,
MONO_TRAMPOLINE_AOT = 3,
MONO_TRAMPOLINE_AOT_PLT = 4,
MONO_TRAMPOLINE_DELEGATE = 5,
MONO_TRAMPOLINE_VCALL = 6,
MONO_TRAMPOLINE_NUM = 7,
} MonoTrampolineType;
// Assuming MONO_TRAMPOLINE_JIT / MONO_JIT_ICALL_generic_trampoline_jit are first.
#if __cplusplus
g_static_assert (MONO_TRAMPOLINE_JIT == 0);
#endif
#define mono_trampoline_type_to_jit_icall_id(a) ((a) + MONO_JIT_ICALL_generic_trampoline_jit)
#define mono_jit_icall_id_to_trampoline_type(a) ((MonoTrampolineType)((a) - MONO_JIT_ICALL_generic_trampoline_jit))
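/*
 * The two macros above are inverses of each other, e.g. (illustrative):
 *   mono_jit_icall_id_to_trampoline_type (mono_trampoline_type_to_jit_icall_id (MONO_TRAMPOLINE_AOT)) == MONO_TRAMPOLINE_AOT
 */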
/* These trampolines return normally to their caller */
#define MONO_TRAMPOLINE_TYPE_MUST_RETURN(t) \
((t) == MONO_TRAMPOLINE_RGCTX_LAZY_FETCH)
/* These trampolines receive an argument directly in a register */
#define MONO_TRAMPOLINE_TYPE_HAS_ARG(t) \
(FALSE)
/* optimization flags */
#define OPTFLAG(id,shift,name,descr) MONO_OPT_ ## id = 1 << shift,
enum {
#include "optflags-def.h"
MONO_OPT_LAST
};
/*
* This structure represents a JIT backend.
*/
typedef struct {
guint have_card_table_wb : 1;
guint have_op_generic_class_init : 1;
guint emulate_mul_div : 1;
guint emulate_div : 1;
guint emulate_long_shift_opts : 1;
guint have_objc_get_selector : 1;
guint have_generalized_imt_trampoline : 1;
gboolean have_op_tailcall_membase : 1;
gboolean have_op_tailcall_reg : 1;
gboolean have_volatile_non_param_register : 1;
guint gshared_supported : 1;
guint use_fpstack : 1;
guint ilp32 : 1;
guint need_got_var : 1;
guint need_div_check : 1;
guint no_unaligned_access : 1;
guint disable_div_with_mul : 1;
guint explicit_null_checks : 1;
guint optimized_div : 1;
guint force_float32 : 1;
int monitor_enter_adjustment;
int dyn_call_param_area;
} MonoBackend;
/* Flags for mini_method_compile () */
typedef enum {
/* Whether to run cctors during JITting */
JIT_FLAG_RUN_CCTORS = (1 << 0),
/* Whether this is an AOT compilation */
JIT_FLAG_AOT = (1 << 1),
/* Whether this is a full AOT compilation */
JIT_FLAG_FULL_AOT = (1 << 2),
/* Whether to compile with LLVM */
JIT_FLAG_LLVM = (1 << 3),
/* Whether to disable direct calls to icall functions */
JIT_FLAG_NO_DIRECT_ICALLS = (1 << 4),
/* Emit explicit null checks */
JIT_FLAG_EXPLICIT_NULL_CHECKS = (1 << 5),
/* Whether to compile in llvm-only mode */
JIT_FLAG_LLVM_ONLY = (1 << 6),
/* Whether calls to pinvoke functions are made directly */
JIT_FLAG_DIRECT_PINVOKE = (1 << 7),
/* Whether this is a compile-all run and the result should be discarded */
JIT_FLAG_DISCARD_RESULTS = (1 << 8),
/* Whether to generate code which can work with the interpreter */
JIT_FLAG_INTERP = (1 << 9),
/* Allow AOT to use all current CPU instructions */
JIT_FLAG_USE_CURRENT_CPU = (1 << 10),
/* Generate code to self-init the method for AOT */
JIT_FLAG_SELF_INIT = (1 << 11),
/* Assume code memory is exec only */
JIT_FLAG_CODE_EXEC_ONLY = (1 << 12),
} JitFlags;
/* Bit-fields in the MonoBasicBlock.region */
#define MONO_REGION_TRY 0
#define MONO_REGION_FINALLY 16
#define MONO_REGION_CATCH 32
#define MONO_REGION_FAULT 64
#define MONO_REGION_FILTER 128
#define MONO_BBLOCK_IS_IN_REGION(bblock, regtype) (((bblock)->region & (0xf << 4)) == (regtype))
#define MONO_REGION_FLAGS(region) ((region) & 0x7)
#define MONO_REGION_CLAUSE_INDEX(region) (((region) >> 8) - 1)
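/*
 * Encoding sketch (inferred from the macros above, not a separate contract):
 * bits 0-2 carry flags, bits 4-7 carry the region type (one of the
 * MONO_REGION_* values), and bits 8 and up carry the clause index + 1, i.e.
 * roughly: region = MONO_REGION_CATCH | ((clause_index + 1) << 8);
 */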
#define get_vreg_to_inst(cfg, vreg) ((vreg) < (cfg)->vreg_to_inst_len ? (cfg)->vreg_to_inst [(vreg)] : NULL)
#define vreg_is_volatile(cfg, vreg) (G_UNLIKELY (get_vreg_to_inst ((cfg), (vreg)) && (get_vreg_to_inst ((cfg), (vreg))->flags & (MONO_INST_VOLATILE|MONO_INST_INDIRECT))))
#define vreg_is_ref(cfg, vreg) ((vreg) < (cfg)->vreg_is_ref_len ? (cfg)->vreg_is_ref [(vreg)] : 0)
#define vreg_is_mp(cfg, vreg) ((vreg) < (cfg)->vreg_is_mp_len ? (cfg)->vreg_is_mp [(vreg)] : 0)
/*
* Control Flow Graph and compilation unit information
*/
typedef struct {
MonoMethod *method;
MonoMethodHeader *header;
MonoMemPool *mempool;
MonoInst **varinfo;
MonoMethodVar *vars;
MonoInst *ret;
MonoBasicBlock *bb_entry;
MonoBasicBlock *bb_exit;
MonoBasicBlock *bb_init;
MonoBasicBlock **bblocks;
MonoBasicBlock **cil_offset_to_bb;
MonoMemPool *state_pool; /* used by instruction selection */
MonoBasicBlock *cbb; /* used by instruction selection */
MonoInst *prev_ins; /* in decompose */
MonoJumpInfo *patch_info;
MonoJitInfo *jit_info;
MonoJitDynamicMethodInfo *dynamic_info;
guint num_bblocks, max_block_num;
guint locals_start;
guint num_varinfo; /* used items in varinfo */
guint varinfo_count; /* total storage in varinfo */
gint stack_offset;
gint max_ireg;
gint cil_offset_to_bb_len;
MonoRegState *rs;
MonoSpillInfo *spill_info [16]; /* machine register spills */
gint spill_count;
gint spill_info_len [16];
/* unsigned char *cil_code; */
MonoInst *got_var; /* Global Offset Table variable */
MonoInst **locals;
/* Variable holding the mrgctx/vtable address for gshared methods */
MonoInst *rgctx_var;
MonoInst **args;
MonoType **arg_types;
MonoMethod *current_method; /* The method currently processed by method_to_ir () */
MonoMethod *method_to_register; /* The method to register in JIT info tables */
MonoGenericContext *generic_context;
MonoInst *this_arg;
MonoBackend *backend;
/*
* This variable represents the hidden argument holding the vtype
* return address. If the method returns something other than a vtype, or
* the vtype is returned in registers this is NULL.
*/
MonoInst *vret_addr;
/*
* This is used to initialize the cil_code field of MonoInst's.
*/
const unsigned char *ip;
struct MonoAliasingInformation *aliasing_info;
/* A hashtable of region ID-> SP var mappings */
/* An SP var is a place to store the stack pointer (used by handlers) */
/*
* FIXME: We can potentially get rid of this, since it was mainly used
* for hijacking the return address for handlers.
*/
GHashTable *spvars;
/*
* A hashtable of region ID -> EX var mappings
* An EX var stores the exception object passed to catch/filter blocks
* For finally blocks, it is set to TRUE if we should throw an abort
* once the execution of the finally block is over.
*/
GHashTable *exvars;
GList *ldstr_list; /* used by AOT */
guint real_offset;
GHashTable *cbb_hash;
/* The current virtual register number */
guint32 next_vreg;
MonoRgctxAccess rgctx_access;
MonoGenericSharingContext gsctx;
MonoGenericContext *gsctx_context;
MonoGSharedVtMethodInfo *gsharedvt_info;
gpointer jit_mm;
MonoMemoryManager *mem_manager;
/* Points to the gsharedvt locals area at runtime */
MonoInst *gsharedvt_locals_var;
/* The localloc instruction used to initialize gsharedvt_locals_var */
MonoInst *gsharedvt_locals_var_ins;
/* Points to a MonoGSharedVtMethodRuntimeInfo at runtime */
MonoInst *gsharedvt_info_var;
/* For native-to-managed wrappers, CEE_MONO_JIT_(AT|DE)TACH opcodes */
MonoInst *orig_domain_var;
MonoInst *lmf_var;
MonoInst *lmf_addr_var;
MonoInst *il_state_var;
MonoInst *stack_inbalance_var;
unsigned char *cil_start;
unsigned char *native_code;
guint code_size;
guint code_len;
guint prolog_end;
guint epilog_begin;
guint epilog_end;
regmask_t used_int_regs;
guint32 opt;
guint32 flags;
guint32 comp_done;
guint32 verbose_level;
guint32 stack_usage;
guint32 param_area;
guint32 frame_reg;
gint32 sig_cookie;
guint disable_aot : 1;
guint disable_ssa : 1;
guint disable_llvm : 1;
guint enable_extended_bblocks : 1;
guint run_cctors : 1;
guint need_lmf_area : 1;
guint compile_aot : 1;
guint full_aot : 1;
guint compile_llvm : 1;
guint got_var_allocated : 1;
guint ret_var_is_local : 1;
guint ret_var_set : 1;
guint unverifiable : 1;
guint skip_visibility : 1;
guint disable_llvm_implicit_null_checks : 1;
guint disable_reuse_registers : 1;
guint disable_reuse_stack_slots : 1;
guint disable_reuse_ref_stack_slots : 1;
guint disable_ref_noref_stack_slot_share : 1;
guint disable_initlocals_opt : 1;
guint disable_initlocals_opt_refs : 1;
guint disable_omit_fp : 1;
guint disable_vreg_to_lvreg : 1;
guint disable_deadce_vars : 1;
guint disable_out_of_line_bblocks : 1;
guint disable_direct_icalls : 1;
guint disable_gc_safe_points : 1;
guint direct_pinvoke : 1;
guint create_lmf_var : 1;
/*
* When this is set, the code to push/pop the LMF from the LMF stack is generated as IR
* instead of being generated in emit_prolog ()/emit_epilog ().
*/
guint lmf_ir : 1;
guint gen_write_barriers : 1;
guint init_ref_vars : 1;
guint extend_live_ranges : 1;
guint compute_precise_live_ranges : 1;
guint has_got_slots : 1;
guint uses_rgctx_reg : 1;
guint uses_vtable_reg : 1;
guint keep_cil_nops : 1;
guint gen_seq_points : 1;
/* Generate seq points for use by the debugger */
guint gen_sdb_seq_points : 1;
guint explicit_null_checks : 1;
guint compute_gc_maps : 1;
guint soft_breakpoints : 1;
guint arch_eh_jit_info : 1;
guint has_calls : 1;
guint has_emulated_ops : 1;
guint has_indirection : 1;
guint has_atomic_add_i4 : 1;
guint has_atomic_exchange_i4 : 1;
guint has_atomic_cas_i4 : 1;
guint check_pinvoke_callconv : 1;
guint has_unwind_info_for_epilog : 1;
guint disable_inline : 1;
/* Disable inlining into caller */
guint no_inline : 1;
guint gshared : 1;
guint gsharedvt : 1;
guint r4fp : 1;
guint llvm_only : 1;
guint interp : 1;
guint use_current_cpu : 1;
guint self_init : 1;
guint code_exec_only : 1;
guint interp_entry_only : 1;
guint after_method_to_ir : 1;
guint disable_inline_rgctx_fetch : 1;
guint deopt : 1;
guint8 uses_simd_intrinsics;
int r4_stack_type;
gpointer debug_info;
guint32 lmf_offset;
guint16 *intvars;
MonoProfilerCoverageInfo *coverage_info;
GHashTable *token_info_hash;
MonoCompileArch arch;
guint32 inline_depth;
/* Size of memory reserved for thunks */
int thunk_area;
/* Thunks */
guint8 *thunks;
/* Offset between the start of code and the thunks area */
int thunks_offset;
MonoExceptionType exception_type; /* MONO_EXCEPTION_* */
guint32 exception_data;
char* exception_message;
gpointer exception_ptr;
guint8 * encoded_unwind_ops;
guint32 encoded_unwind_ops_len;
GSList* unwind_ops;
GList* dont_inline;
/* Fields used by the local reg allocator */
void* reginfo;
int reginfo_len;
/* Maps vregs to their associated MonoInst's */
/* vregs with an associated MonoInst are 'global' while others are 'local' */
MonoInst **vreg_to_inst;
/* Size of above array */
guint32 vreg_to_inst_len;
/* Marks vregs which hold a GC ref */
/* FIXME: Use a bitmap */
gboolean *vreg_is_ref;
/* Size of above array */
guint32 vreg_is_ref_len;
/* Marks vregs which hold a managed pointer */
/* FIXME: Use a bitmap */
gboolean *vreg_is_mp;
/* Size of above array */
guint32 vreg_is_mp_len;
/*
* The original method to compile, differs from 'method' when doing generic
* sharing.
*/
MonoMethod *orig_method;
/* Patches which describe absolute addresses embedded into the native code */
GHashTable *abs_patches;
/* Used to implement move_i4_to_f on archs that can't do a raw
copy between an ireg and a freg. This is an int32 var. */
MonoInst *iconv_raw_var;
/* Used to implement fconv_to_r8_x. This is a double (8 bytes) var. */
MonoInst *fconv_to_r8_x_var;
/* Used to implement simd constructors. This is a vector (16 bytes) var. */
MonoInst *simd_ctor_var;
/* Used to implement dyn_call */
MonoInst *dyn_call_var;
MonoInst *last_seq_point;
/*
* List of sequence points represented as IL offset+native offset pairs.
* Allocated using glib.
* IL offset can be -1 or 0xffffff to refer to the sequence points
* inside the prolog and epilog used to implement method entry/exit events.
*/
GPtrArray *seq_points;
/* The encoded sequence point info */
struct MonoSeqPointInfo *seq_point_info;
/* Method headers which need to be freed after compilation */
GSList *headers_to_free;
/* Used by AOT */
guint32 got_offset, ex_info_offset, method_info_offset, method_index;
guint32 aot_method_flags;
/* For llvm */
guint32 got_access_count;
gpointer llvmonly_init_cond;
gpointer llvm_dummy_info_var, llvm_info_var;
/* Symbol used to refer to this method in generated assembly */
char *asm_symbol;
char *asm_debug_symbol;
char *llvm_method_name;
int castclass_cache_index;
MonoJitExceptionInfo *llvm_ex_info;
guint32 llvm_ex_info_len;
int llvm_this_reg, llvm_this_offset;
GSList *try_block_holes;
/* DWARF location list for 'this' */
GSList *this_loclist;
/* DWARF location list for 'rgctx_var' */
GSList *rgctx_loclist;
int *gsharedvt_vreg_to_idx;
GSList *signatures;
GSList *interp_in_signatures;
/* GC Maps */
/* The offsets of the locals area relative to the frame pointer */
gint locals_min_stack_offset, locals_max_stack_offset;
/* The current CFA rule */
int cur_cfa_reg, cur_cfa_offset;
/* The final CFA rule at the end of the prolog */
int cfa_reg, cfa_offset;
/* Points to a MonoCompileGC */
gpointer gc_info;
/*
* The encoded GC map along with its size. This contains binary data so it can be saved in an AOT
* image etc, but it requires a 4 byte alignment.
*/
guint8 *gc_map;
guint32 gc_map_size;
/* Error handling */
MonoError* error;
MonoErrorInternal error_value;
/* pointer to context datastructure used for graph dumping */
MonoGraphDumper *gdump_ctx;
gboolean *clause_is_dead;
/* Stats */
int stat_allocate_var;
int stat_locals_stack_size;
int stat_basic_blocks;
int stat_cil_code_size;
int stat_n_regvars;
int stat_inlineable_methods;
int stat_inlined_methods;
int stat_code_reallocs;
MonoProfilerCallInstrumentationFlags prof_flags;
gboolean prof_coverage;
/* For deduplication */
gboolean skip;
} MonoCompile;
#define MONO_CFG_PROFILE(cfg, flag) \
G_UNLIKELY ((cfg)->prof_flags & MONO_PROFILER_CALL_INSTRUMENTATION_ ## flag)
#define MONO_CFG_PROFILE_CALL_CONTEXT(cfg) \
(MONO_CFG_PROFILE (cfg, ENTER_CONTEXT) || MONO_CFG_PROFILE (cfg, LEAVE_CONTEXT))
typedef enum {
MONO_CFG_HAS_ALLOCA = 1 << 0,
MONO_CFG_HAS_CALLS = 1 << 1,
MONO_CFG_HAS_LDELEMA = 1 << 2,
MONO_CFG_HAS_VARARGS = 1 << 3,
MONO_CFG_HAS_TAILCALL = 1 << 4,
MONO_CFG_HAS_FPOUT = 1 << 5, /* there are fp values passed in int registers */
MONO_CFG_HAS_SPILLUP = 1 << 6, /* spill var slots are allocated from bottom to top */
MONO_CFG_HAS_CHECK_THIS = 1 << 7,
MONO_CFG_NEEDS_DECOMPOSE = 1 << 8,
MONO_CFG_HAS_TYPE_CHECK = 1 << 9
} MonoCompileFlags;
typedef enum {
MONO_CFG_USES_SIMD_INTRINSICS = 1 << 0,
MONO_CFG_USES_SIMD_INTRINSICS_SIMPLIFY_INDIRECTION = 1 << 1
} MonoSimdIntrinsicsFlags;
typedef struct {
gint32 methods_compiled;
gint32 methods_aot;
gint32 methods_aot_llvm;
gint32 methods_lookups;
gint32 allocate_var;
gint32 cil_code_size;
gint32 native_code_size;
gint32 code_reallocs;
gint32 max_code_size_ratio;
gint32 biggest_method_size;
gint32 allocated_code_size;
gint32 allocated_seq_points_size;
gint32 inlineable_methods;
gint32 inlined_methods;
gint32 basic_blocks;
gint32 max_basic_blocks;
gint32 locals_stack_size;
gint32 regvars;
gint32 generic_virtual_invocations;
gint32 alias_found;
gint32 alias_removed;
gint32 loads_eliminated;
gint32 stores_eliminated;
gint32 optimized_divisions;
gint32 methods_with_llvm;
gint32 methods_without_llvm;
gint32 methods_with_interp;
char *max_ratio_method;
char *biggest_method;
gint64 jit_method_to_ir;
gint64 jit_liveness_handle_exception_clauses;
gint64 jit_handle_out_of_line_bblock;
gint64 jit_decompose_long_opts;
gint64 jit_decompose_typechecks;
gint64 jit_local_cprop;
gint64 jit_local_emulate_ops;
gint64 jit_optimize_branches;
gint64 jit_handle_global_vregs;
gint64 jit_local_deadce;
gint64 jit_local_alias_analysis;
gint64 jit_if_conversion;
gint64 jit_bb_ordering;
gint64 jit_compile_dominator_info;
gint64 jit_compute_natural_loops;
gint64 jit_insert_safepoints;
gint64 jit_ssa_compute;
gint64 jit_ssa_cprop;
gint64 jit_ssa_deadce;
gint64 jit_perform_abc_removal;
gint64 jit_ssa_remove;
gint64 jit_local_cprop2;
gint64 jit_handle_global_vregs2;
gint64 jit_local_deadce2;
gint64 jit_optimize_branches2;
gint64 jit_decompose_vtype_opts;
gint64 jit_decompose_array_access_opts;
gint64 jit_liveness_handle_exception_clauses2;
gint64 jit_analyze_liveness;
gint64 jit_linear_scan;
gint64 jit_arch_allocate_vars;
gint64 jit_spill_global_vars;
gint64 jit_local_cprop3;
gint64 jit_local_deadce3;
gint64 jit_codegen;
gint64 jit_create_jit_info;
gint64 jit_gc_create_gc_map;
gint64 jit_save_seq_point_info;
gint64 jit_time;
gboolean enabled;
} MonoJitStats;
extern MonoJitStats mono_jit_stats;
static inline void
get_jit_stats (gint64 *methods_compiled, gint64 *cil_code_size_bytes, gint64 *native_code_size_bytes, gint64 *jit_time)
{
*methods_compiled = mono_jit_stats.methods_compiled;
*cil_code_size_bytes = mono_jit_stats.cil_code_size;
*native_code_size_bytes = mono_jit_stats.native_code_size;
*jit_time = mono_jit_stats.jit_time;
}
guint32
mono_get_exception_count (void);
static inline void
get_exception_stats (guint32 *exception_count)
{
*exception_count = mono_get_exception_count ();
}
/* opcodes: values assigned after all the CIL opcodes */
#ifdef MINI_OP
#undef MINI_OP
#endif
#ifdef MINI_OP3
#undef MINI_OP3
#endif
#define MINI_OP(a,b,dest,src1,src2) a,
#define MINI_OP3(a,b,dest,src1,src2,src3) a,
enum {
OP_START = MONO_CEE_LAST - 1,
#include "mini-ops.h"
OP_LAST
};
#undef MINI_OP
#undef MINI_OP3
#if TARGET_SIZEOF_VOID_P == 8
#define OP_PCONST OP_I8CONST
#define OP_DUMMY_PCONST OP_DUMMY_I8CONST
#define OP_PADD OP_LADD
#define OP_PADD_IMM OP_LADD_IMM
#define OP_PSUB_IMM OP_LSUB_IMM
#define OP_PAND_IMM OP_LAND_IMM
#define OP_PXOR_IMM OP_LXOR_IMM
#define OP_PSUB OP_LSUB
#define OP_PMUL OP_LMUL
#define OP_PMUL_IMM OP_LMUL_IMM
#define OP_POR_IMM OP_LOR_IMM
#define OP_PNEG OP_LNEG
#define OP_PCONV_TO_I1 OP_LCONV_TO_I1
#define OP_PCONV_TO_U1 OP_LCONV_TO_U1
#define OP_PCONV_TO_I2 OP_LCONV_TO_I2
#define OP_PCONV_TO_U2 OP_LCONV_TO_U2
#define OP_PCONV_TO_OVF_I1_UN OP_LCONV_TO_OVF_I1_UN
#define OP_PCONV_TO_OVF_I1 OP_LCONV_TO_OVF_I1
#define OP_PBEQ OP_LBEQ
#define OP_PCEQ OP_LCEQ
#define OP_PCLT OP_LCLT
#define OP_PCGT OP_LCGT
#define OP_PCLT_UN OP_LCLT_UN
#define OP_PCGT_UN OP_LCGT_UN
#define OP_PBNE_UN OP_LBNE_UN
#define OP_PBGE_UN OP_LBGE_UN
#define OP_PBLT_UN OP_LBLT_UN
#define OP_PBGE OP_LBGE
#define OP_STOREP_MEMBASE_REG OP_STOREI8_MEMBASE_REG
#define OP_STOREP_MEMBASE_IMM OP_STOREI8_MEMBASE_IMM
#else
#define OP_PCONST OP_ICONST
#define OP_DUMMY_PCONST OP_DUMMY_ICONST
#define OP_PADD OP_IADD
#define OP_PADD_IMM OP_IADD_IMM
#define OP_PSUB_IMM OP_ISUB_IMM
#define OP_PAND_IMM OP_IAND_IMM
#define OP_PXOR_IMM OP_IXOR_IMM
#define OP_PSUB OP_ISUB
#define OP_PMUL OP_IMUL
#define OP_PMUL_IMM OP_IMUL_IMM
#define OP_POR_IMM OP_IOR_IMM
#define OP_PNEG OP_INEG
#define OP_PCONV_TO_I1 OP_ICONV_TO_I1
#define OP_PCONV_TO_U1 OP_ICONV_TO_U1
#define OP_PCONV_TO_I2 OP_ICONV_TO_I2
#define OP_PCONV_TO_U2 OP_ICONV_TO_U2
#define OP_PCONV_TO_OVF_I1_UN OP_ICONV_TO_OVF_I1_UN
#define OP_PCONV_TO_OVF_I1 OP_ICONV_TO_OVF_I1
#define OP_PBEQ OP_IBEQ
#define OP_PCEQ OP_ICEQ
#define OP_PCLT OP_ICLT
#define OP_PCGT OP_ICGT
#define OP_PCLT_UN OP_ICLT_UN
#define OP_PCGT_UN OP_ICGT_UN
#define OP_PBNE_UN OP_IBNE_UN
#define OP_PBGE_UN OP_IBGE_UN
#define OP_PBLT_UN OP_IBLT_UN
#define OP_PBGE OP_IBGE
#define OP_STOREP_MEMBASE_REG OP_STOREI4_MEMBASE_REG
#define OP_STOREP_MEMBASE_IMM OP_STOREI4_MEMBASE_IMM
#endif
/* Opcodes to load/store regsize quantities */
#if defined (MONO_ARCH_ILP32)
#define OP_LOADR_MEMBASE OP_LOADI8_MEMBASE
#define OP_STORER_MEMBASE_REG OP_STOREI8_MEMBASE_REG
#else
#define OP_LOADR_MEMBASE OP_LOAD_MEMBASE
#define OP_STORER_MEMBASE_REG OP_STORE_MEMBASE_REG
#endif
typedef enum {
STACK_INV,
STACK_I4,
STACK_I8,
STACK_PTR,
STACK_R8,
STACK_MP,
STACK_OBJ,
STACK_VTYPE,
STACK_R4,
STACK_MAX
} MonoStackType;
typedef struct {
union {
double r8;
gint32 i4;
gint64 i8;
gpointer p;
MonoClass *klass;
} data;
int type;
} StackSlot;
extern const MonoInstSpec MONO_ARCH_CPU_SPEC [];
#define MONO_ARCH_CPU_SPEC_IDX_COMBINE(a) a ## _idx
#define MONO_ARCH_CPU_SPEC_IDX(a) MONO_ARCH_CPU_SPEC_IDX_COMBINE(a)
extern const guint16 MONO_ARCH_CPU_SPEC_IDX(MONO_ARCH_CPU_SPEC) [];
#define ins_get_spec(op) ((const char*)&MONO_ARCH_CPU_SPEC [MONO_ARCH_CPU_SPEC_IDX(MONO_ARCH_CPU_SPEC)[(op) - OP_LOAD]])
#ifndef DISABLE_JIT
static inline int
ins_get_size (int opcode)
{
return ((guint8 *)ins_get_spec (opcode))[MONO_INST_LEN];
}
guint8*
mini_realloc_code_slow (MonoCompile *cfg, int size);
static inline guint8*
realloc_code (MonoCompile *cfg, int size)
{
const int EXTRA_CODE_SPACE = 16;
const int code_len = cfg->code_len;
if (G_UNLIKELY ((guint)(code_len + size) > (cfg->code_size - EXTRA_CODE_SPACE)))
return mini_realloc_code_slow (cfg, size);
return cfg->native_code + code_len;
}
static inline void
set_code_len (MonoCompile *cfg, int len)
{
g_assert ((guint)len <= cfg->code_size);
cfg->code_len = len;
}
static inline void
set_code_cursor (MonoCompile *cfg, void* void_code)
{
guint8* code = (guint8*)void_code;
g_assert (code <= (cfg->native_code + cfg->code_size));
set_code_len (cfg, code - cfg->native_code);
}
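/*
 * Usage sketch (illustrative; 'max_len' is a hypothetical bound): backends
 * reserve space before emitting and update the cursor afterwards:
 *   guint8 *code = realloc_code (cfg, max_len);
 *   // ... emit at most max_len bytes, advancing 'code' ...
 *   set_code_cursor (cfg, code);
 */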
#endif
enum {
MONO_COMP_DOM = 1,
MONO_COMP_IDOM = 2,
MONO_COMP_DFRONTIER = 4,
MONO_COMP_DOM_REV = 8,
MONO_COMP_LIVENESS = 16,
MONO_COMP_SSA = 32,
MONO_COMP_SSA_DEF_USE = 64,
MONO_COMP_REACHABILITY = 128,
MONO_COMP_LOOPS = 256
};
typedef enum {
MONO_GRAPH_CFG = 1,
MONO_GRAPH_DTREE = 2,
MONO_GRAPH_CFG_CODE = 4,
MONO_GRAPH_CFG_SSA = 8,
MONO_GRAPH_CFG_OPTCODE = 16
} MonoGraphOptions;
typedef struct {
guint16 size;
guint16 offset;
guint8 pad;
} MonoJitArgumentInfo;
enum {
BRANCH_NOT_TAKEN,
BRANCH_TAKEN,
BRANCH_UNDEF
};
typedef enum {
CMP_EQ,
CMP_NE,
CMP_LE,
CMP_GE,
CMP_LT,
CMP_GT,
CMP_LE_UN,
CMP_GE_UN,
CMP_LT_UN,
CMP_GT_UN,
CMP_ORD,
CMP_UNORD
} CompRelation;
typedef enum {
CMP_TYPE_L,
CMP_TYPE_I,
CMP_TYPE_F
} CompType;
/* Implicit exceptions */
enum {
MONO_EXC_INDEX_OUT_OF_RANGE,
MONO_EXC_OVERFLOW,
MONO_EXC_ARITHMETIC,
MONO_EXC_DIVIDE_BY_ZERO,
MONO_EXC_INVALID_CAST,
MONO_EXC_NULL_REF,
MONO_EXC_ARRAY_TYPE_MISMATCH,
MONO_EXC_ARGUMENT,
MONO_EXC_ARGUMENT_OUT_OF_RANGE,
MONO_EXC_ARGUMENT_OUT_OF_MEMORY,
MONO_EXC_INTRINS_NUM
};
/*
* Information about a trampoline function.
*/
struct MonoTrampInfo
{
/*
* The native code of the trampoline. Not owned by this structure.
*/
guint8 *code;
guint32 code_size;
/*
* The name of the trampoline which can be used in AOT/xdebug. Owned by this
* structure.
*/
char *name;
/*
* Patches required by the trampoline when aot-ing. Owned by this structure.
*/
MonoJumpInfo *ji;
/*
* Unwind information. Owned by this structure.
*/
GSList *unwind_ops;
MonoJitICallInfo *jit_icall_info;
/*
* The method the trampoline is associated with, if any.
*/
MonoMethod *method;
/*
* Encoded unwind info loaded from AOT images
*/
guint8 *uw_info;
guint32 uw_info_len;
/* Whether uw_info is owned by this structure */
gboolean owns_uw_info;
};
typedef void (*MonoInstFunc) (MonoInst *tree, gpointer data);
enum {
FILTER_IL_SEQ_POINT = 1 << 0,
FILTER_NOP = 1 << 1,
};
static inline gboolean
mono_inst_filter (MonoInst *ins, int filter)
{
if (!ins || !filter)
return FALSE;
if ((filter & FILTER_IL_SEQ_POINT) && ins->opcode == OP_IL_SEQ_POINT)
return TRUE;
if ((filter & FILTER_NOP) && ins->opcode == OP_NOP)
return TRUE;
return FALSE;
}
static inline MonoInst*
mono_inst_next (MonoInst *ins, int filter)
{
do {
ins = ins->next;
} while (mono_inst_filter (ins, filter));
return ins;
}
static inline MonoInst*
mono_inst_prev (MonoInst *ins, int filter)
{
do {
ins = ins->prev;
} while (mono_inst_filter (ins, filter));
return ins;
}
static inline MonoInst*
mono_bb_first_inst (MonoBasicBlock *bb, int filter)
{
MonoInst *ins = bb->code;
if (mono_inst_filter (ins, filter))
ins = mono_inst_next (ins, filter);
return ins;
}
static inline MonoInst*
mono_bb_last_inst (MonoBasicBlock *bb, int filter)
{
MonoInst *ins = bb->last_ins;
if (mono_inst_filter (ins, filter))
ins = mono_inst_prev (ins, filter);
return ins;
}
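/*
 * Usage sketch (illustrative): walk the instructions of a basic block while
 * skipping IL sequence points:
 *   for (MonoInst *ins = mono_bb_first_inst (bb, FILTER_IL_SEQ_POINT); ins;
 *        ins = mono_inst_next (ins, FILTER_IL_SEQ_POINT))
 *       mono_print_ins (ins);
 */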
/* profiler support */
void mini_add_profiler_argument (const char *desc);
void mini_profiler_emit_enter (MonoCompile *cfg);
void mini_profiler_emit_leave (MonoCompile *cfg, MonoInst *ret);
void mini_profiler_emit_tail_call (MonoCompile *cfg, MonoMethod *target);
void mini_profiler_emit_call_finally (MonoCompile *cfg, MonoMethodHeader *header, unsigned char *ip, guint32 index, MonoExceptionClause *clause);
void mini_profiler_context_enable (void);
gpointer mini_profiler_context_get_this (MonoProfilerCallContext *ctx);
gpointer mini_profiler_context_get_argument (MonoProfilerCallContext *ctx, guint32 pos);
gpointer mini_profiler_context_get_local (MonoProfilerCallContext *ctx, guint32 pos);
gpointer mini_profiler_context_get_result (MonoProfilerCallContext *ctx);
void mini_profiler_context_free_buffer (gpointer buffer);
/* graph dumping */
void mono_cfg_dump_create_context (MonoCompile *cfg);
void mono_cfg_dump_begin_group (MonoCompile *cfg);
void mono_cfg_dump_close_group (MonoCompile *cfg);
void mono_cfg_dump_ir (MonoCompile *cfg, const char *phase_name);
/* helper methods */
MonoInst* mono_find_spvar_for_region (MonoCompile *cfg, int region);
MonoInst* mono_find_exvar_for_offset (MonoCompile *cfg, int offset);
int mono_get_block_region_notry (MonoCompile *cfg, int region);
void mono_bblock_add_inst (MonoBasicBlock *bb, MonoInst *inst);
void mono_bblock_insert_after_ins (MonoBasicBlock *bb, MonoInst *ins, MonoInst *ins_to_insert);
void mono_bblock_insert_before_ins (MonoBasicBlock *bb, MonoInst *ins, MonoInst *ins_to_insert);
void mono_verify_bblock (MonoBasicBlock *bb);
void mono_verify_cfg (MonoCompile *cfg);
void mono_constant_fold (MonoCompile *cfg);
MonoInst* mono_constant_fold_ins (MonoCompile *cfg, MonoInst *ins, MonoInst *arg1, MonoInst *arg2, gboolean overwrite);
int mono_eval_cond_branch (MonoInst *branch);
int mono_is_power_of_two (guint32 val);
void mono_cprop_local (MonoCompile *cfg, MonoBasicBlock *bb, MonoInst **acp, int acp_size);
MonoInst* mono_compile_create_var (MonoCompile *cfg, MonoType *type, int opcode);
MonoInst* mono_compile_create_var_for_vreg (MonoCompile *cfg, MonoType *type, int opcode, int vreg);
void mono_compile_make_var_load (MonoCompile *cfg, MonoInst *dest, gssize var_index);
MonoInst* mini_get_int_to_float_spill_area (MonoCompile *cfg);
MonoType* mono_type_from_stack_type (MonoInst *ins);
guint32 mono_alloc_ireg (MonoCompile *cfg);
guint32 mono_alloc_lreg (MonoCompile *cfg);
guint32 mono_alloc_freg (MonoCompile *cfg);
guint32 mono_alloc_preg (MonoCompile *cfg);
guint32 mono_alloc_dreg (MonoCompile *cfg, MonoStackType stack_type);
guint32 mono_alloc_ireg_ref (MonoCompile *cfg);
guint32 mono_alloc_ireg_mp (MonoCompile *cfg);
guint32 mono_alloc_ireg_copy (MonoCompile *cfg, guint32 vreg);
void mono_mark_vreg_as_ref (MonoCompile *cfg, int vreg);
void mono_mark_vreg_as_mp (MonoCompile *cfg, int vreg);
void mono_link_bblock (MonoCompile *cfg, MonoBasicBlock *from, MonoBasicBlock* to);
void mono_unlink_bblock (MonoCompile *cfg, MonoBasicBlock *from, MonoBasicBlock* to);
gboolean mono_bblocks_linked (MonoBasicBlock *bb1, MonoBasicBlock *bb2);
void mono_remove_bblock (MonoCompile *cfg, MonoBasicBlock *bb);
void mono_nullify_basic_block (MonoBasicBlock *bb);
void mono_merge_basic_blocks (MonoCompile *cfg, MonoBasicBlock *bb, MonoBasicBlock *bbn);
void mono_optimize_branches (MonoCompile *cfg);
void mono_blockset_print (MonoCompile *cfg, MonoBitSet *set, const char *name, guint idom);
void mono_print_ins_index (int i, MonoInst *ins);
GString *mono_print_ins_index_strbuf (int i, MonoInst *ins);
void mono_print_ins (MonoInst *ins);
void mono_print_bb (MonoBasicBlock *bb, const char *msg);
void mono_print_code (MonoCompile *cfg, const char *msg);
const char* mono_inst_name (int op);
int mono_op_to_op_imm (int opcode);
int mono_op_imm_to_op (int opcode);
int mono_load_membase_to_load_mem (int opcode);
gboolean mono_op_no_side_effects (int opcode);
gboolean mono_ins_no_side_effects (MonoInst *ins);
guint mono_type_to_load_membase (MonoCompile *cfg, MonoType *type);
guint mono_type_to_store_membase (MonoCompile *cfg, MonoType *type);
guint32 mono_type_to_stloc_coerce (MonoType *type);
guint mini_type_to_stind (MonoCompile* cfg, MonoType *type);
MonoStackType mini_type_to_stack_type (MonoCompile *cfg, MonoType *t);
MonoJitInfo* mini_lookup_method (MonoMethod *method, MonoMethod *shared);
guint32 mono_reverse_branch_op (guint32 opcode);
void mono_disassemble_code (MonoCompile *cfg, guint8 *code, int size, char *id);
MonoJumpInfoTarget mono_call_to_patch (MonoCallInst *call);
void mono_call_add_patch_info (MonoCompile *cfg, MonoCallInst *call, int ip);
void mono_add_patch_info (MonoCompile *cfg, int ip, MonoJumpInfoType type, gconstpointer target);
void mono_add_patch_info_rel (MonoCompile *cfg, int ip, MonoJumpInfoType type, gconstpointer target, int relocation);
void mono_remove_patch_info (MonoCompile *cfg, int ip);
gpointer mono_jit_compile_method_inner (MonoMethod *method, int opt, MonoError *error);
GList *mono_varlist_insert_sorted (MonoCompile *cfg, GList *list, MonoMethodVar *mv, int sort_type);
GList *mono_varlist_sort (MonoCompile *cfg, GList *list, int sort_type);
void mono_analyze_liveness (MonoCompile *cfg);
void mono_analyze_liveness_gc (MonoCompile *cfg);
void mono_linear_scan (MonoCompile *cfg, GList *vars, GList *regs, regmask_t *used_mask);
void mono_global_regalloc (MonoCompile *cfg);
void mono_create_jump_table (MonoCompile *cfg, MonoInst *label, MonoBasicBlock **bbs, int num_blocks);
MonoCompile *mini_method_compile (MonoMethod *method, guint32 opts, JitFlags flags, int parts, int aot_method_index);
void mono_destroy_compile (MonoCompile *cfg);
void mono_empty_compile (MonoCompile *cfg);
MonoJitICallInfo *mono_find_jit_opcode_emulation (int opcode);
void mono_print_ins_index (int i, MonoInst *ins);
void mono_print_ins (MonoInst *ins);
MonoInst *mono_get_got_var (MonoCompile *cfg);
void mono_add_seq_point (MonoCompile *cfg, MonoBasicBlock *bb, MonoInst *ins, int native_offset);
void mono_add_var_location (MonoCompile *cfg, MonoInst *var, gboolean is_reg, int reg, int offset, int from, int to);
MonoInst* mono_emit_jit_icall_id (MonoCompile *cfg, MonoJitICallId jit_icall_id, MonoInst **args);
#define mono_emit_jit_icall(cfg, name, args) (mono_emit_jit_icall_id ((cfg), MONO_JIT_ICALL_ ## name, (args)))
MonoInst* mono_emit_jit_icall_by_info (MonoCompile *cfg, int il_offset, MonoJitICallInfo *info, MonoInst **args);
MonoInst* mono_emit_method_call (MonoCompile *cfg, MonoMethod *method, MonoInst **args, MonoInst *this_ins);
gboolean mini_should_insert_breakpoint (MonoMethod *method);
int mono_target_pagesize (void);
gboolean mini_class_is_system_array (MonoClass *klass);
void mono_linterval_add_range (MonoCompile *cfg, MonoLiveInterval *interval, int from, int to);
void mono_linterval_print (MonoLiveInterval *interval);
void mono_linterval_print_nl (MonoLiveInterval *interval);
gboolean mono_linterval_covers (MonoLiveInterval *interval, int pos);
gint32 mono_linterval_get_intersect_pos (MonoLiveInterval *i1, MonoLiveInterval *i2);
void mono_linterval_split (MonoCompile *cfg, MonoLiveInterval *interval, MonoLiveInterval **i1, MonoLiveInterval **i2, int pos);
void mono_liveness_handle_exception_clauses (MonoCompile *cfg);
gpointer mono_realloc_native_code (MonoCompile *cfg);
void mono_register_opcode_emulation (int opcode, const char* name, MonoMethodSignature *sig, gpointer func, gboolean no_throw);
void mono_draw_graph (MonoCompile *cfg, MonoGraphOptions draw_options);
void mono_add_ins_to_end (MonoBasicBlock *bb, MonoInst *inst);
void mono_replace_ins (MonoCompile *cfg, MonoBasicBlock *bb, MonoInst *ins, MonoInst **prev, MonoBasicBlock *first_bb, MonoBasicBlock *last_bb);
void mini_register_opcode_emulation (int opcode, MonoJitICallInfo *jit_icall_info, const char *name, MonoMethodSignature *sig, gpointer func, const char *symbol, gboolean no_throw);
#ifdef __cplusplus
template <typename T>
inline void
mini_register_opcode_emulation (int opcode, MonoJitICallInfo *jit_icall_info, const char *name, MonoMethodSignature *sig, T func, const char *symbol, gboolean no_throw)
{
mini_register_opcode_emulation (opcode, jit_icall_info, name, sig, (gpointer)func, symbol, no_throw);
}
#endif // __cplusplus
void mono_trampolines_init (void);
guint8 * mono_get_trampoline_code (MonoTrampolineType tramp_type);
gpointer mono_create_specific_trampoline (MonoMemoryManager *mem_manager, gpointer arg1, MonoTrampolineType tramp_type, guint32 *code_len);
gpointer mono_create_jump_trampoline (MonoMethod *method,
gboolean add_sync_wrapper,
MonoError *error);
gpointer mono_create_jit_trampoline (MonoMethod *method, MonoError *error);
gpointer mono_create_jit_trampoline_from_token (MonoImage *image, guint32 token);
gpointer mono_create_delegate_trampoline (MonoClass *klass);
MonoDelegateTrampInfo* mono_create_delegate_trampoline_info (MonoClass *klass, MonoMethod *method);
gpointer mono_create_delegate_virtual_trampoline (MonoClass *klass, MonoMethod *method);
gpointer mono_create_rgctx_lazy_fetch_trampoline (guint32 offset);
gpointer mono_create_static_rgctx_trampoline (MonoMethod *m, gpointer addr);
gpointer mono_create_ftnptr_arg_trampoline (gpointer arg, gpointer addr);
guint32 mono_find_rgctx_lazy_fetch_trampoline_by_addr (gconstpointer addr);
gpointer mono_magic_trampoline (host_mgreg_t *regs, guint8 *code, gpointer arg, guint8* tramp);
gpointer mono_delegate_trampoline (host_mgreg_t *regs, guint8 *code, gpointer *tramp_data, guint8* tramp);
gpointer mono_aot_trampoline (host_mgreg_t *regs, guint8 *code, guint8 *token_info,
guint8* tramp);
gpointer mono_aot_plt_trampoline (host_mgreg_t *regs, guint8 *code, guint8 *token_info,
guint8* tramp);
gconstpointer mono_get_trampoline_func (MonoTrampolineType tramp_type);
gpointer mini_get_vtable_trampoline (MonoVTable *vt, int slot_index);
const char* mono_get_generic_trampoline_simple_name (MonoTrampolineType tramp_type);
const char* mono_get_generic_trampoline_name (MonoTrampolineType tramp_type);
char* mono_get_rgctx_fetch_trampoline_name (int slot);
gpointer mini_get_single_step_trampoline (void);
gpointer mini_get_breakpoint_trampoline (void);
gpointer mini_add_method_trampoline (MonoMethod *m, gpointer compiled_method, gboolean add_static_rgctx_tramp, gboolean add_unbox_tramp);
gboolean mini_jit_info_is_gsharedvt (MonoJitInfo *ji);
gpointer* mini_resolve_imt_method (MonoVTable *vt, gpointer *vtable_slot, MonoMethod *imt_method, MonoMethod **impl_method, gpointer *out_aot_addr,
gboolean *out_need_rgctx_tramp, MonoMethod **variant_iface,
MonoError *error);
void* mono_global_codeman_reserve (int size);
#define mono_global_codeman_reserve(size) (g_cast (mono_global_codeman_reserve ((size))))
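/* The macro shadows the function so every call site goes through g_cast;
   presumably this lets the void* result convert implicitly to typed pointers
   in C++ builds, where implicit conversion from void* is an error. */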
void mono_global_codeman_foreach (MonoCodeManagerFunc func, void *user_data);
const char *mono_regname_full (int reg, int bank);
gint32* mono_allocate_stack_slots (MonoCompile *cfg, gboolean backward, guint32 *stack_size, guint32 *stack_align);
void mono_local_regalloc (MonoCompile *cfg, MonoBasicBlock *bb);
MonoInst *mono_branch_optimize_exception_target (MonoCompile *cfg, MonoBasicBlock *bb, const char * exname);
void mono_remove_critical_edges (MonoCompile *cfg);
gboolean mono_is_regsize_var (MonoType *t);
MonoJumpInfo * mono_patch_info_new (MonoMemPool *mp, int ip, MonoJumpInfoType type, gconstpointer target);
int mini_class_check_context_used (MonoCompile *cfg, MonoClass *klass);
int mini_method_check_context_used (MonoCompile *cfg, MonoMethod *method);
void mini_type_from_op (MonoCompile *cfg, MonoInst *ins, MonoInst *src1, MonoInst *src2);
void mini_set_inline_failure (MonoCompile *cfg, const char *msg);
void mini_test_tailcall (MonoCompile *cfg, gboolean tailcall);
gboolean mini_should_check_stack_pointer (MonoCompile *cfg);
MonoInst* mini_emit_box (MonoCompile *cfg, MonoInst *val, MonoClass *klass, int context_used);
void mini_emit_memcpy (MonoCompile *cfg, int destreg, int doffset, int srcreg, int soffset, int size, int align);
void mini_emit_memset (MonoCompile *cfg, int destreg, int offset, int size, int val, int align);
void mini_emit_stobj (MonoCompile *cfg, MonoInst *dest, MonoInst *src, MonoClass *klass, gboolean native);
void mini_emit_initobj (MonoCompile *cfg, MonoInst *dest, const guchar *ip, MonoClass *klass);
void mini_emit_init_rvar (MonoCompile *cfg, int dreg, MonoType *rtype);
int mini_emit_sext_index_reg (MonoCompile *cfg, MonoInst *index);
MonoInst* mini_emit_ldelema_1_ins (MonoCompile *cfg, MonoClass *klass, MonoInst *arr, MonoInst *index, gboolean bcheck, gboolean bounded);
MonoInst* mini_emit_get_gsharedvt_info_klass (MonoCompile *cfg, MonoClass *klass, MonoRgctxInfoType rgctx_type);
MonoInst* mini_emit_get_rgctx_method (MonoCompile *cfg, int context_used,
MonoMethod *cmethod, MonoRgctxInfoType rgctx_type);
void mini_emit_tailcall_parameters (MonoCompile *cfg, MonoMethodSignature *sig);
MonoCallInst * mini_emit_call_args (MonoCompile *cfg, MonoMethodSignature *sig,
MonoInst **args, gboolean calli, gboolean virtual_, gboolean tailcall,
gboolean rgctx, gboolean unbox_trampoline, MonoMethod *target);
MonoInst* mini_emit_calli (MonoCompile *cfg, MonoMethodSignature *sig, MonoInst **args, MonoInst *addr, MonoInst *imt_arg, MonoInst *rgctx_arg);
MonoInst* mini_emit_calli_full (MonoCompile *cfg, MonoMethodSignature *sig, MonoInst **args, MonoInst *addr,
MonoInst *imt_arg, MonoInst *rgctx_arg, gboolean tailcall);
MonoInst* mini_emit_method_call_full (MonoCompile *cfg, MonoMethod *method, MonoMethodSignature *sig, gboolean tailcall,
MonoInst **args, MonoInst *this_ins, MonoInst *imt_arg, MonoInst *rgctx_arg);
MonoInst* mini_emit_abs_call (MonoCompile *cfg, MonoJumpInfoType patch_type, gconstpointer data,
MonoMethodSignature *sig, MonoInst **args);
MonoInst* mini_emit_extra_arg_calli (MonoCompile *cfg, MonoMethodSignature *fsig, MonoInst **orig_args, int arg_reg, MonoInst *call_target);
MonoInst* mini_emit_llvmonly_calli (MonoCompile *cfg, MonoMethodSignature *fsig, MonoInst **args, MonoInst *addr);
MonoInst* mini_emit_llvmonly_virtual_call (MonoCompile *cfg, MonoMethod *cmethod, MonoMethodSignature *fsig, int context_used, MonoInst **sp);
MonoInst* mini_emit_memory_barrier (MonoCompile *cfg, int kind);
MonoInst* mini_emit_storing_write_barrier (MonoCompile *cfg, MonoInst *ptr, MonoInst *value);
void mini_emit_write_barrier (MonoCompile *cfg, MonoInst *ptr, MonoInst *value);
MonoInst* mini_emit_memory_load (MonoCompile *cfg, MonoType *type, MonoInst *src, int offset, int ins_flag);
void mini_emit_memory_store (MonoCompile *cfg, MonoType *type, MonoInst *dest, MonoInst *value, int ins_flag);
void mini_emit_memory_copy_bytes (MonoCompile *cfg, MonoInst *dest, MonoInst *src, MonoInst *size, int ins_flag);
void mini_emit_memory_init_bytes (MonoCompile *cfg, MonoInst *dest, MonoInst *value, MonoInst *size, int ins_flag);
void mini_emit_memory_copy (MonoCompile *cfg, MonoInst *dest, MonoInst *src, MonoClass *klass, gboolean native, int ins_flag);
MonoInst* mini_emit_array_store (MonoCompile *cfg, MonoClass *klass, MonoInst **sp, gboolean safety_checks);
MonoInst* mini_emit_inst_for_method (MonoCompile *cfg, MonoMethod *cmethod, MonoMethodSignature *fsig, MonoInst **args, gboolean *ins_type_initialized);
MonoInst* mini_emit_inst_for_ctor (MonoCompile *cfg, MonoMethod *cmethod, MonoMethodSignature *fsig, MonoInst **args);
MonoInst* mini_emit_inst_for_field_load (MonoCompile *cfg, MonoClassField *field);
MonoInst* mini_handle_enum_has_flag (MonoCompile *cfg, MonoClass *klass, MonoInst *enum_this, int enum_val_reg, MonoInst *enum_flag);
MonoInst* mini_handle_unbox (MonoCompile *cfg, MonoClass *klass, MonoInst *val, int context_used);
MonoMethod* mini_get_memcpy_method (void);
MonoMethod* mini_get_memset_method (void);
int mini_class_check_context_used (MonoCompile *cfg, MonoClass *klass);
MonoRgctxAccess mini_get_rgctx_access_for_method (MonoMethod *method);
CompRelation mono_opcode_to_cond (int opcode);
CompType mono_opcode_to_type (int opcode, int cmp_opcode);
CompRelation mono_negate_cond (CompRelation cond);
int mono_op_imm_to_op (int opcode);
void mono_decompose_op_imm (MonoCompile *cfg, MonoBasicBlock *bb, MonoInst *ins);
void mono_peephole_ins (MonoBasicBlock *bb, MonoInst *ins);
MonoUnwindOp *mono_create_unwind_op (int when,
int tag, int reg,
int val);
void mono_emit_unwind_op (MonoCompile *cfg, int when,
int tag, int reg,
int val);
MonoTrampInfo* mono_tramp_info_create (const char *name, guint8 *code, guint32 code_size, MonoJumpInfo *ji, GSList *unwind_ops);
void mono_tramp_info_free (MonoTrampInfo *info);
void mono_aot_tramp_info_register (MonoTrampInfo *info, MonoMemoryManager *mem_manager);
void mono_tramp_info_register (MonoTrampInfo *info, MonoMemoryManager *mem_manager);
int mini_exception_id_by_name (const char *name);
gboolean mini_type_is_hfa (MonoType *t, int *out_nfields, int *out_esize);
int mono_method_to_ir (MonoCompile *cfg, MonoMethod *method, MonoBasicBlock *start_bblock, MonoBasicBlock *end_bblock,
MonoInst *return_var, MonoInst **inline_args,
guint inline_offset, gboolean is_virtual_call);
// The following methods could just be renamed/moved from method-to-ir.c
int mini_inline_method (MonoCompile *cfg, MonoMethod *cmethod, MonoMethodSignature *fsig, MonoInst **sp, guchar *ip,
guint real_offset, gboolean inline_always);
MonoInst* mini_emit_get_rgctx_klass (MonoCompile *cfg, int context_used, MonoClass *klass, MonoRgctxInfoType rgctx_type);
MonoInst* mini_emit_runtime_constant (MonoCompile *cfg, MonoJumpInfoType patch_type, gpointer data);
void mini_save_cast_details (MonoCompile *cfg, MonoClass *klass, int obj_reg, gboolean null_check);
void mini_reset_cast_details (MonoCompile *cfg);
void mini_emit_class_check (MonoCompile *cfg, int klass_reg, MonoClass *klass);
gboolean mini_class_has_reference_variant_generic_argument (MonoCompile *cfg, MonoClass *klass, int context_used);
MonoInst *mono_decompose_opcode (MonoCompile *cfg, MonoInst *ins);
void mono_decompose_long_opts (MonoCompile *cfg);
void mono_decompose_vtype_opts (MonoCompile *cfg);
void mono_decompose_array_access_opts (MonoCompile *cfg);
void mono_decompose_soft_float (MonoCompile *cfg);
void mono_local_emulate_ops (MonoCompile *cfg);
void mono_handle_global_vregs (MonoCompile *cfg);
void mono_spill_global_vars (MonoCompile *cfg, gboolean *need_local_opts);
void mono_allocate_gsharedvt_vars (MonoCompile *cfg);
void mono_if_conversion (MonoCompile *cfg);
/* Delegates */
char* mono_get_delegate_virtual_invoke_impl_name (gboolean load_imt_reg, int offset);
gpointer mono_get_delegate_virtual_invoke_impl (MonoMethodSignature *sig, MonoMethod *method);
void mono_codegen (MonoCompile *cfg);
void mono_call_inst_add_outarg_reg (MonoCompile *cfg, MonoCallInst *call, int vreg, int hreg, int bank);
void mono_call_inst_add_outarg_vt (MonoCompile *cfg, MonoCallInst *call, MonoInst *outarg_vt);
/* methods that must be provided by the arch-specific port */
void mono_arch_init (void);
void mono_arch_finish_init (void);
void mono_arch_cleanup (void);
void mono_arch_cpu_init (void);
guint32 mono_arch_cpu_optimizations (guint32 *exclude_mask);
const char *mono_arch_regname (int reg);
const char *mono_arch_fregname (int reg);
void mono_arch_exceptions_init (void);
guchar* mono_arch_create_generic_trampoline (MonoTrampolineType tramp_type, MonoTrampInfo **info, gboolean aot);
gpointer mono_arch_create_rgctx_lazy_fetch_trampoline (guint32 slot, MonoTrampInfo **info, gboolean aot);
gpointer mono_arch_create_general_rgctx_lazy_fetch_trampoline (MonoTrampInfo **info, gboolean aot);
guint8* mono_arch_create_sdb_trampoline (gboolean single_step, MonoTrampInfo **info, gboolean aot);
guint8 *mono_arch_create_llvm_native_thunk (guint8* addr);
gpointer mono_arch_get_get_tls_tramp (void);
GList *mono_arch_get_allocatable_int_vars (MonoCompile *cfg);
GList *mono_arch_get_global_int_regs (MonoCompile *cfg);
guint32 mono_arch_regalloc_cost (MonoCompile *cfg, MonoMethodVar *vmv);
void mono_arch_patch_code_new (MonoCompile *cfg, guint8 *code, MonoJumpInfo *ji, gpointer target);
void mono_arch_flush_icache (guint8 *code, gint size);
guint8 *mono_arch_emit_prolog (MonoCompile *cfg);
void mono_arch_emit_epilog (MonoCompile *cfg);
void mono_arch_emit_exceptions (MonoCompile *cfg);
void mono_arch_lowering_pass (MonoCompile *cfg, MonoBasicBlock *bb);
void mono_arch_peephole_pass_1 (MonoCompile *cfg, MonoBasicBlock *bb);
void mono_arch_peephole_pass_2 (MonoCompile *cfg, MonoBasicBlock *bb);
void mono_arch_output_basic_block (MonoCompile *cfg, MonoBasicBlock *bb);
void mono_arch_fill_argument_info (MonoCompile *cfg);
void mono_arch_allocate_vars (MonoCompile *m);
int mono_arch_get_argument_info (MonoMethodSignature *csig, int param_count, MonoJitArgumentInfo *arg_info);
void mono_arch_emit_call (MonoCompile *cfg, MonoCallInst *call);
void mono_arch_emit_outarg_vt (MonoCompile *cfg, MonoInst *ins, MonoInst *src);
void mono_arch_emit_setret (MonoCompile *cfg, MonoMethod *method, MonoInst *val);
MonoDynCallInfo *mono_arch_dyn_call_prepare (MonoMethodSignature *sig);
void mono_arch_dyn_call_free (MonoDynCallInfo *info);
int mono_arch_dyn_call_get_buf_size (MonoDynCallInfo *info);
void mono_arch_start_dyn_call (MonoDynCallInfo *info, gpointer **args, guint8 *ret, guint8 *buf);
void mono_arch_finish_dyn_call (MonoDynCallInfo *info, guint8 *buf);
MonoInst *mono_arch_emit_inst_for_method (MonoCompile *cfg, MonoMethod *cmethod, MonoMethodSignature *fsig, MonoInst **args);
void mono_arch_decompose_opts (MonoCompile *cfg, MonoInst *ins);
void mono_arch_decompose_long_opts (MonoCompile *cfg, MonoInst *ins);
GSList* mono_arch_get_delegate_invoke_impls (void);
LLVMCallInfo* mono_arch_get_llvm_call_info (MonoCompile *cfg, MonoMethodSignature *sig);
guint8* mono_arch_emit_load_got_addr (guint8 *start, guint8 *code, MonoCompile *cfg, MonoJumpInfo **ji);
guint8* mono_arch_emit_load_aotconst (guint8 *start, guint8 *code, MonoJumpInfo **ji, MonoJumpInfoType tramp_type, gconstpointer target);
GSList* mono_arch_get_cie_program (void);
void mono_arch_set_target (char *mtriple);
gboolean mono_arch_gsharedvt_sig_supported (MonoMethodSignature *sig);
gpointer mono_arch_get_gsharedvt_trampoline (MonoTrampInfo **info, gboolean aot);
gpointer mono_arch_get_gsharedvt_call_info (MonoMemoryManager *mem_manager, gpointer addr, MonoMethodSignature *normal_sig, MonoMethodSignature *gsharedvt_sig, gboolean gsharedvt_in, gint32 vcall_offset, gboolean calli);
gboolean mono_arch_opcode_needs_emulation (MonoCompile *cfg, int opcode);
gboolean mono_arch_tailcall_supported (MonoCompile *cfg, MonoMethodSignature *caller_sig, MonoMethodSignature *callee_sig, gboolean virtual_);
int mono_arch_translate_tls_offset (int offset);
gboolean mono_arch_opcode_supported (int opcode);
MONO_COMPONENT_API void mono_arch_setup_resume_sighandler_ctx (MonoContext *ctx, gpointer func);
gboolean mono_arch_have_fast_tls (void);
#ifdef MONO_ARCH_HAS_REGISTER_ICALL
void mono_arch_register_icall (void);
#endif
#ifdef MONO_ARCH_SOFT_FLOAT_FALLBACK
gboolean mono_arch_is_soft_float (void);
#else
static inline MONO_ALWAYS_INLINE gboolean
mono_arch_is_soft_float (void)
{
return FALSE;
}
#endif
/* Soft Debug support */
#ifdef MONO_ARCH_SOFT_DEBUG_SUPPORTED
MONO_COMPONENT_API void mono_arch_set_breakpoint (MonoJitInfo *ji, guint8 *ip);
MONO_COMPONENT_API void mono_arch_clear_breakpoint (MonoJitInfo *ji, guint8 *ip);
MONO_COMPONENT_API void mono_arch_start_single_stepping (void);
MONO_COMPONENT_API void mono_arch_stop_single_stepping (void);
gboolean mono_arch_is_single_step_event (void *info, void *sigctx);
gboolean mono_arch_is_breakpoint_event (void *info, void *sigctx);
MONO_COMPONENT_API void mono_arch_skip_breakpoint (MonoContext *ctx, MonoJitInfo *ji);
MONO_COMPONENT_API void mono_arch_skip_single_step (MonoContext *ctx);
SeqPointInfo *mono_arch_get_seq_point_info (guint8 *code);
#endif
gboolean
mono_arch_unwind_frame (MonoJitTlsData *jit_tls,
MonoJitInfo *ji, MonoContext *ctx,
MonoContext *new_ctx, MonoLMF **lmf,
host_mgreg_t **save_locations,
StackFrameInfo *frame_info);
gpointer mono_arch_get_throw_exception_by_name (void);
gpointer mono_arch_get_call_filter (MonoTrampInfo **info, gboolean aot);
gpointer mono_arch_get_restore_context (MonoTrampInfo **info, gboolean aot);
gpointer mono_arch_get_throw_exception (MonoTrampInfo **info, gboolean aot);
gpointer mono_arch_get_rethrow_exception (MonoTrampInfo **info, gboolean aot);
gpointer mono_arch_get_rethrow_preserve_exception (MonoTrampInfo **info, gboolean aot);
gpointer mono_arch_get_throw_corlib_exception (MonoTrampInfo **info, gboolean aot);
gpointer mono_arch_get_throw_pending_exception (MonoTrampInfo **info, gboolean aot);
gboolean mono_arch_handle_exception (void *sigctx, gpointer obj);
void mono_arch_handle_altstack_exception (void *sigctx, MONO_SIG_HANDLER_INFO_TYPE *siginfo, gpointer fault_addr, gboolean stack_ovf);
gboolean mono_handle_soft_stack_ovf (MonoJitTlsData *jit_tls, MonoJitInfo *ji, void *ctx, MONO_SIG_HANDLER_INFO_TYPE *siginfo, guint8* fault_addr);
void mono_handle_hard_stack_ovf (MonoJitTlsData *jit_tls, MonoJitInfo *ji, MonoContext *mctx, guint8* fault_addr);
void mono_arch_undo_ip_adjustment (MonoContext *ctx);
void mono_arch_do_ip_adjustment (MonoContext *ctx);
gpointer mono_arch_ip_from_context (void *sigctx);
MONO_COMPONENT_API host_mgreg_t mono_arch_context_get_int_reg (MonoContext *ctx, int reg);
MONO_COMPONENT_API host_mgreg_t *mono_arch_context_get_int_reg_address (MonoContext *ctx, int reg);
MONO_COMPONENT_API void mono_arch_context_set_int_reg (MonoContext *ctx, int reg, host_mgreg_t val);
void mono_arch_flush_register_windows (void);
gboolean mono_arch_is_inst_imm (int opcode, int imm_opcode, gint64 imm);
gboolean mono_arch_is_int_overflow (void *sigctx, void *info);
void mono_arch_invalidate_method (MonoJitInfo *ji, void *func, gpointer func_arg);
guint32 mono_arch_get_patch_offset (guint8 *code);
gpointer *mono_arch_get_delegate_method_ptr_addr (guint8* code, host_mgreg_t *regs);
void mono_arch_create_vars (MonoCompile *cfg);
void mono_arch_save_unwind_info (MonoCompile *cfg);
void mono_arch_register_lowlevel_calls (void);
gpointer mono_arch_get_unbox_trampoline (MonoMethod *m, gpointer addr);
gpointer mono_arch_get_static_rgctx_trampoline (MonoMemoryManager *mem_manager, gpointer arg, gpointer addr);
gpointer mono_arch_get_ftnptr_arg_trampoline (MonoMemoryManager *mem_manager, gpointer arg, gpointer addr);
gpointer mono_arch_get_gsharedvt_arg_trampoline (gpointer arg, gpointer addr);
void mono_arch_patch_callsite (guint8 *method_start, guint8 *code, guint8 *addr);
void mono_arch_patch_plt_entry (guint8 *code, gpointer *got, host_mgreg_t *regs, guint8 *addr);
int mono_arch_get_this_arg_reg (guint8 *code);
gpointer mono_arch_get_this_arg_from_call (host_mgreg_t *regs, guint8 *code);
gpointer mono_arch_get_delegate_invoke_impl (MonoMethodSignature *sig, gboolean has_target);
gpointer mono_arch_get_delegate_virtual_invoke_impl (MonoMethodSignature *sig, MonoMethod *method, int offset, gboolean load_imt_reg);
gpointer mono_arch_create_specific_trampoline (gpointer arg1, MonoTrampolineType tramp_type, MonoMemoryManager *mem_manager, guint32 *code_len);
MonoMethod* mono_arch_find_imt_method (host_mgreg_t *regs, guint8 *code);
MonoVTable* mono_arch_find_static_call_vtable (host_mgreg_t *regs, guint8 *code);
gpointer mono_arch_build_imt_trampoline (MonoVTable *vtable, MonoIMTCheckItem **imt_entries, int count, gpointer fail_tramp);
void mono_arch_notify_pending_exc (MonoThreadInfo *info);
guint8* mono_arch_get_call_target (guint8 *code);
guint32 mono_arch_get_plt_info_offset (guint8 *plt_entry, host_mgreg_t *regs, guint8 *code);
GSList *mono_arch_get_trampolines (gboolean aot);
gpointer mono_arch_get_interp_to_native_trampoline (MonoTrampInfo **info);
gpointer mono_arch_get_native_to_interp_trampoline (MonoTrampInfo **info);
#ifdef MONO_ARCH_HAVE_INTERP_PINVOKE_TRAMP
// Moves data (arguments and return vt address) from the InterpFrame to the CallContext so a pinvoke call can be made.
void mono_arch_set_native_call_context_args (CallContext *ccontext, gpointer frame, MonoMethodSignature *sig);
// Moves the return value from the InterpFrame to the ccontext, or to the retp (if native code passed the retvt address)
void mono_arch_set_native_call_context_ret (CallContext *ccontext, gpointer frame, MonoMethodSignature *sig, gpointer retp);
// When entering interp from native, this moves the arguments from the ccontext to the InterpFrame. If we have a return
// vt address, we return it. This ret vt address needs to be passed to mono_arch_set_native_call_context_ret.
gpointer mono_arch_get_native_call_context_args (CallContext *ccontext, gpointer frame, MonoMethodSignature *sig);
// After the pinvoke call is done, this moves return value from the ccontext to the InterpFrame.
void mono_arch_get_native_call_context_ret (CallContext *ccontext, gpointer frame, MonoMethodSignature *sig);
#endif
/* New interruption machinery */
void
mono_setup_async_callback (MonoContext *ctx, void (*async_cb)(void *fun), gpointer user_data);
void
mono_arch_setup_async_callback (MonoContext *ctx, void (*async_cb)(void *fun), gpointer user_data);
gboolean
mono_thread_state_init_from_handle (MonoThreadUnwindState *tctx, MonoThreadInfo *info, /*optional*/ void *sigctx);
/* Exception handling */
typedef gboolean (*MonoJitStackWalk) (StackFrameInfo *frame, MonoContext *ctx, gpointer data);
void mono_exceptions_init (void);
gboolean mono_handle_exception (MonoContext *ctx, gpointer obj);
void mono_handle_native_crash (const char *signal, MonoContext *mctx, MONO_SIG_HANDLER_INFO_TYPE *siginfo);
MONO_API void mono_print_thread_dump (void *sigctx);
MONO_API void mono_print_thread_dump_from_ctx (MonoContext *ctx);
MONO_COMPONENT_API void mono_walk_stack_with_ctx (MonoJitStackWalk func, MonoContext *start_ctx, MonoUnwindOptions unwind_options, void *user_data);
MONO_COMPONENT_API void mono_walk_stack_with_state (MonoJitStackWalk func, MonoThreadUnwindState *state, MonoUnwindOptions unwind_options, void *user_data);
void mono_walk_stack (MonoJitStackWalk func, MonoUnwindOptions options, void *user_data);
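/*
 * Sketch of a MonoJitStackWalk callback for the walkers above (names are
 * hypothetical; returning TRUE stops the walk at the first managed frame):
 *
 *   static gboolean
 *   find_first_managed (StackFrameInfo *frame, MonoContext *ctx, gpointer data)
 *   {
 *       if (frame->type == FRAME_TYPE_MANAGED) {
 *           *(MonoMethod **) data = frame->method;
 *           return TRUE;
 *       }
 *       return FALSE;
 *   }
 */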
gboolean mono_thread_state_init_from_sigctx (MonoThreadUnwindState *ctx, void *sigctx);
void mono_thread_state_init (MonoThreadUnwindState *ctx);
MONO_COMPONENT_API gboolean mono_thread_state_init_from_current (MonoThreadUnwindState *ctx);
MONO_COMPONENT_API gboolean mono_thread_state_init_from_monoctx (MonoThreadUnwindState *ctx, MonoContext *mctx);
void mono_setup_altstack (MonoJitTlsData *tls);
void mono_free_altstack (MonoJitTlsData *tls);
gpointer mono_altstack_restore_prot (host_mgreg_t *regs, guint8 *code, gpointer *tramp_data, guint8* tramp);
MONO_COMPONENT_API MonoJitInfo* mini_jit_info_table_find (gpointer addr);
MonoJitInfo* mini_jit_info_table_find_ext (gpointer addr, gboolean allow_trampolines);
G_EXTERN_C void mono_resume_unwind (MonoContext *ctx);
MonoJitInfo * mono_find_jit_info (MonoJitTlsData *jit_tls, MonoJitInfo *res, MonoJitInfo *prev_ji, MonoContext *ctx, MonoContext *new_ctx, char **trace, MonoLMF **lmf, int *native_offset, gboolean *managed);
typedef gboolean (*MonoExceptionFrameWalk) (MonoMethod *method, gpointer ip, size_t native_offset, gboolean managed, gpointer user_data);
MONO_API gboolean mono_exception_walk_trace (MonoException *ex, MonoExceptionFrameWalk func, gpointer user_data);
MONO_COMPONENT_API void mono_restore_context (MonoContext *ctx);
guint8* mono_jinfo_get_unwind_info (MonoJitInfo *ji, guint32 *unwind_info_len);
int mono_jinfo_get_epilog_size (MonoJitInfo *ji);
gboolean
mono_find_jit_info_ext (MonoJitTlsData *jit_tls,
MonoJitInfo *prev_ji, MonoContext *ctx,
MonoContext *new_ctx, char **trace, MonoLMF **lmf,
host_mgreg_t **save_locations,
StackFrameInfo *frame);
gpointer mono_get_throw_exception (void);
gpointer mono_get_rethrow_exception (void);
gpointer mono_get_rethrow_preserve_exception (void);
gpointer mono_get_call_filter (void);
gpointer mono_get_restore_context (void);
gpointer mono_get_throw_corlib_exception (void);
gpointer mono_get_throw_exception_addr (void);
gpointer mono_get_rethrow_preserve_exception_addr (void);
ICALL_EXPORT
MonoArray *ves_icall_get_trace (MonoException *exc, gint32 skip, MonoBoolean need_file_info);
ICALL_EXPORT
MonoBoolean ves_icall_get_frame_info (gint32 skip, MonoBoolean need_file_info,
MonoReflectionMethod **method,
gint32 *iloffset, gint32 *native_offset,
MonoString **file, gint32 *line, gint32 *column);
void mono_set_cast_details (MonoClass *from, MonoClass *to);
void mono_decompose_typechecks (MonoCompile *cfg);
/* Dominator/SSA methods */
void mono_compile_dominator_info (MonoCompile *cfg, int dom_flags);
void mono_compute_natural_loops (MonoCompile *cfg);
MonoBitSet* mono_compile_iterated_dfrontier (MonoCompile *cfg, MonoBitSet *set);
void mono_ssa_compute (MonoCompile *cfg);
void mono_ssa_remove (MonoCompile *cfg);
void mono_ssa_remove_gsharedvt (MonoCompile *cfg);
void mono_ssa_cprop (MonoCompile *cfg);
void mono_ssa_deadce (MonoCompile *cfg);
void mono_ssa_strength_reduction (MonoCompile *cfg);
void mono_free_loop_info (MonoCompile *cfg);
void mono_ssa_loop_invariant_code_motion (MonoCompile *cfg);
void mono_ssa_compute2 (MonoCompile *cfg);
void mono_ssa_remove2 (MonoCompile *cfg);
void mono_ssa_cprop2 (MonoCompile *cfg);
void mono_ssa_deadce2 (MonoCompile *cfg);
/* debugging support */
void mono_debug_init_method (MonoCompile *cfg, MonoBasicBlock *start_block,
guint32 breakpoint_id);
void mono_debug_open_method (MonoCompile *cfg);
void mono_debug_close_method (MonoCompile *cfg);
void mono_debug_free_method (MonoCompile *cfg);
void mono_debug_open_block (MonoCompile *cfg, MonoBasicBlock *bb, guint32 address);
void mono_debug_record_line_number (MonoCompile *cfg, MonoInst *ins, guint32 address);
void mono_debug_serialize_debug_info (MonoCompile *cfg, guint8 **out_buf, guint32 *buf_len);
void mono_debug_add_aot_method (MonoMethod *method, guint8 *code_start,
guint8 *debug_info, guint32 debug_info_len);
MONO_API void mono_debug_print_vars (gpointer ip, gboolean only_arguments);
MONO_API void mono_debugger_run_finally (MonoContext *start_ctx);
MONO_API gboolean mono_breakpoint_clean_code (guint8 *method_start, guint8 *code, int offset, guint8 *buf, int size);
/* Tracing */
MonoCallSpec *mono_trace_set_options (const char *options);
gboolean mono_trace_eval (MonoMethod *method);
gboolean
mono_tailcall_print_enabled (void);
void
mono_tailcall_print (const char *format, ...);
gboolean
mono_is_supported_tailcall_helper (gboolean value, const char *svalue);
#define IS_SUPPORTED_TAILCALL(x) (mono_is_supported_tailcall_helper((x), #x))
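/*
 * Usage sketch (hypothetical call site): because the macro stringifies its
 * argument, a failing condition can be logged verbatim through
 * mono_tailcall_print before the tailcall is abandoned.
 *
 *   tailcall &= IS_SUPPORTED_TAILCALL (caller_sig->param_count >= callee_sig->param_count);
 */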
extern void
mono_perform_abc_removal (MonoCompile *cfg);
extern void
mono_local_cprop (MonoCompile *cfg);
extern void
mono_local_deadce (MonoCompile *cfg);
void
mono_local_alias_analysis (MonoCompile *cfg);
/* Generic sharing */
void
mono_set_generic_sharing_supported (gboolean supported);
void
mono_set_generic_sharing_vt_supported (gboolean supported);
void
mono_set_partial_sharing_supported (gboolean supported);
gboolean
mono_class_generic_sharing_enabled (MonoClass *klass);
gpointer
mono_class_fill_runtime_generic_context (MonoVTable *class_vtable, guint32 slot, MonoError *error);
gpointer
mono_method_fill_runtime_generic_context (MonoMethodRuntimeGenericContext *mrgctx, guint32 slot, MonoError *error);
const char*
mono_rgctx_info_type_to_str (MonoRgctxInfoType type);
MonoJumpInfoType
mini_rgctx_info_type_to_patch_info_type (MonoRgctxInfoType info_type);
gboolean
mono_method_needs_static_rgctx_invoke (MonoMethod *method, gboolean allow_type_vars);
int
mono_class_rgctx_get_array_size (int n, gboolean mrgctx);
MonoGenericContext
mono_method_construct_object_context (MonoMethod *method);
MONO_COMPONENT_API MonoMethod*
mono_method_get_declaring_generic_method (MonoMethod *method);
int
mono_generic_context_check_used (MonoGenericContext *context);
int
mono_class_check_context_used (MonoClass *klass);
gboolean
mono_generic_context_is_sharable (MonoGenericContext *context, gboolean allow_type_vars);
gboolean
mono_generic_context_is_sharable_full (MonoGenericContext *context, gboolean allow_type_vars, gboolean allow_partial);
gboolean
mono_method_is_generic_impl (MonoMethod *method);
gboolean
mono_method_is_generic_sharable (MonoMethod *method, gboolean allow_type_vars);
gboolean
mono_method_is_generic_sharable_full (MonoMethod *method, gboolean allow_type_vars, gboolean allow_partial, gboolean allow_gsharedvt);
gboolean
mini_class_is_generic_sharable (MonoClass *klass);
gboolean
mini_generic_inst_is_sharable (MonoGenericInst *inst, gboolean allow_type_vars, gboolean allow_partial);
MonoMethod*
mono_class_get_method_generic (MonoClass *klass, MonoMethod *method, MonoError *error);
gboolean
mono_is_partially_sharable_inst (MonoGenericInst *inst);
gboolean
mini_is_gsharedvt_gparam (MonoType *t);
gboolean
mini_is_gsharedvt_inst (MonoGenericInst *inst);
MonoGenericContext* mini_method_get_context (MonoMethod *method);
int mono_method_check_context_used (MonoMethod *method);
gboolean mono_generic_context_equal_deep (MonoGenericContext *context1, MonoGenericContext *context2);
gpointer mono_helper_get_rgctx_other_ptr (MonoClass *caller_class, MonoVTable *vtable,
guint32 token, guint32 token_source, guint32 rgctx_type,
gint32 rgctx_index);
void mono_generic_sharing_init (void);
MonoClass* mini_class_get_container_class (MonoClass *klass);
MonoGenericContext* mini_class_get_context (MonoClass *klass);
typedef enum {
SHARE_MODE_NONE = 0x0,
SHARE_MODE_GSHAREDVT = 0x1,
} GetSharedMethodFlags;
MonoType* mini_get_underlying_type (MonoType *type);
MonoType* mini_type_get_underlying_type (MonoType *type);
MonoClass* mini_get_class (MonoMethod *method, guint32 token, MonoGenericContext *context);
MonoMethod* mini_get_shared_method_to_register (MonoMethod *method);
MonoMethod* mini_get_shared_method_full (MonoMethod *method, GetSharedMethodFlags flags, MonoError *error);
MonoType* mini_get_shared_gparam (MonoType *t, MonoType *constraint);
int mini_get_rgctx_entry_slot (MonoJumpInfoRgctxEntry *entry);
int mini_type_stack_size (MonoType *t, int *align);
int mini_type_stack_size_full (MonoType *t, guint32 *align, gboolean pinvoke);
void mini_type_to_eval_stack_type (MonoCompile *cfg, MonoType *type, MonoInst *inst);
guint mono_type_to_regmove (MonoCompile *cfg, MonoType *type);
void mono_cfg_add_try_hole (MonoCompile *cfg, MonoExceptionClause *clause, guint8 *start, MonoBasicBlock *bb);
void mono_cfg_set_exception (MonoCompile *cfg, MonoExceptionType type);
void mono_cfg_set_exception_invalid_program (MonoCompile *cfg, char *msg);
#define MONO_TIME_TRACK(a, phase) \
{ \
gint64 start = mono_time_track_start (); \
(phase) ; \
mono_time_track_end (&(a), start); \
}
gint64 mono_time_track_start (void);
void mono_time_track_end (gint64 *time, gint64 start);
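/*
 * Usage sketch for MONO_TIME_TRACK (the stats field is hypothetical): the
 * macro samples mono_time_track_start, runs the phase, then accumulates the
 * elapsed time into its first argument via mono_time_track_end.
 *
 *   MONO_TIME_TRACK (mono_jit_stats.jit_ssa_compute, mono_ssa_compute (cfg));
 */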
void mono_update_jit_stats (MonoCompile *cfg);
gboolean mini_type_is_reference (MonoType *type);
gboolean mini_type_is_vtype (MonoType *t);
gboolean mini_type_var_is_vt (MonoType *type);
gboolean mini_is_gsharedvt_type (MonoType *t);
gboolean mini_is_gsharedvt_klass (MonoClass *klass);
gboolean mini_is_gsharedvt_signature (MonoMethodSignature *sig);
gboolean mini_is_gsharedvt_variable_type (MonoType *t);
gboolean mini_is_gsharedvt_variable_klass (MonoClass *klass);
gboolean mini_is_gsharedvt_sharable_method (MonoMethod *method);
gboolean mini_is_gsharedvt_variable_signature (MonoMethodSignature *sig);
gboolean mini_is_gsharedvt_sharable_inst (MonoGenericInst *inst);
gboolean mini_method_is_default_method (MonoMethod *m);
gboolean mini_method_needs_mrgctx (MonoMethod *m);
gpointer mini_method_get_rgctx (MonoMethod *m);
void mini_init_gsctx (MonoMemPool *mp, MonoGenericContext *context, MonoGenericSharingContext *gsctx);
gpointer mini_get_gsharedvt_wrapper (gboolean gsharedvt_in, gpointer addr, MonoMethodSignature *normal_sig, MonoMethodSignature *gsharedvt_sig,
gint32 vcall_offset, gboolean calli);
MonoMethod* mini_get_gsharedvt_in_sig_wrapper (MonoMethodSignature *sig);
MonoMethod* mini_get_gsharedvt_out_sig_wrapper (MonoMethodSignature *sig);
MonoMethodSignature* mini_get_gsharedvt_out_sig_wrapper_signature (gboolean has_this, gboolean has_ret, int param_count);
gboolean mini_gsharedvt_runtime_invoke_supported (MonoMethodSignature *sig);
G_EXTERN_C void mono_interp_entry_from_trampoline (gpointer ccontext, gpointer imethod);
G_EXTERN_C void mono_interp_to_native_trampoline (gpointer addr, gpointer ccontext);
MonoMethod* mini_get_interp_in_wrapper (MonoMethodSignature *sig);
MonoMethod* mini_get_interp_lmf_wrapper (const char *name, gpointer target);
char* mono_get_method_from_ip (void *ip);
/* SIMD support */
typedef enum {
/* Used for lazy initialization */
MONO_CPU_INITED = 1 << 0,
#if defined(TARGET_X86) || defined(TARGET_AMD64)
MONO_CPU_X86_SSE = 1 << 1,
MONO_CPU_X86_SSE2 = 1 << 2,
MONO_CPU_X86_PCLMUL = 1 << 3,
MONO_CPU_X86_AES = 1 << 4,
MONO_CPU_X86_SSE3 = 1 << 5,
MONO_CPU_X86_SSSE3 = 1 << 6,
MONO_CPU_X86_SSE41 = 1 << 7,
MONO_CPU_X86_SSE42 = 1 << 8,
MONO_CPU_X86_POPCNT = 1 << 9,
MONO_CPU_X86_AVX = 1 << 10,
MONO_CPU_X86_AVX2 = 1 << 11,
MONO_CPU_X86_FMA = 1 << 12,
MONO_CPU_X86_LZCNT = 1 << 13,
MONO_CPU_X86_BMI1 = 1 << 14,
MONO_CPU_X86_BMI2 = 1 << 15,
//
// Dependencies (based on System.Runtime.Intrinsics.X86 class hierarchy):
//
// sse
// sse2
// pclmul
// aes
// sse3
// ssse3 (doesn't include 'pclmul' and 'aes')
// sse4.1
// sse4.2
// popcnt
// avx (doesn't include 'popcnt')
// avx2
// fma
// lzcnt
// bmi1
// bmi2
MONO_CPU_X86_SSE_COMBINED = MONO_CPU_X86_SSE,
MONO_CPU_X86_SSE2_COMBINED = MONO_CPU_X86_SSE_COMBINED | MONO_CPU_X86_SSE2,
MONO_CPU_X86_PCLMUL_COMBINED = MONO_CPU_X86_SSE2_COMBINED | MONO_CPU_X86_PCLMUL,
MONO_CPU_X86_AES_COMBINED = MONO_CPU_X86_SSE2_COMBINED | MONO_CPU_X86_AES,
MONO_CPU_X86_SSE3_COMBINED = MONO_CPU_X86_SSE2_COMBINED | MONO_CPU_X86_SSE3,
MONO_CPU_X86_SSSE3_COMBINED = MONO_CPU_X86_SSE3_COMBINED | MONO_CPU_X86_SSSE3,
MONO_CPU_X86_SSE41_COMBINED = MONO_CPU_X86_SSSE3_COMBINED | MONO_CPU_X86_SSE41,
MONO_CPU_X86_SSE42_COMBINED = MONO_CPU_X86_SSE41_COMBINED | MONO_CPU_X86_SSE42,
MONO_CPU_X86_POPCNT_COMBINED = MONO_CPU_X86_SSE42_COMBINED | MONO_CPU_X86_POPCNT,
MONO_CPU_X86_AVX_COMBINED = MONO_CPU_X86_SSE42_COMBINED | MONO_CPU_X86_AVX,
MONO_CPU_X86_AVX2_COMBINED = MONO_CPU_X86_AVX_COMBINED | MONO_CPU_X86_AVX2,
MONO_CPU_X86_FMA_COMBINED = MONO_CPU_X86_AVX_COMBINED | MONO_CPU_X86_FMA,
MONO_CPU_X86_FULL_SSEAVX_COMBINED = MONO_CPU_X86_FMA_COMBINED | MONO_CPU_X86_AVX2 | MONO_CPU_X86_PCLMUL
| MONO_CPU_X86_AES | MONO_CPU_X86_POPCNT | MONO_CPU_X86_FMA,
#endif
#ifdef TARGET_WASM
MONO_CPU_WASM_SIMD = 1 << 1,
#endif
#ifdef TARGET_ARM64
MONO_CPU_ARM64_BASE = 1 << 1,
MONO_CPU_ARM64_CRC = 1 << 2,
MONO_CPU_ARM64_CRYPTO = 1 << 3,
MONO_CPU_ARM64_NEON = 1 << 4,
MONO_CPU_ARM64_RDM = 1 << 5,
MONO_CPU_ARM64_DP = 1 << 6,
#endif
} MonoCPUFeatures;
G_ENUM_FUNCTIONS (MonoCPUFeatures)
MonoCPUFeatures mini_get_cpu_features (MonoCompile* cfg);
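/*
 * Sketch: the *_COMBINED values fold in their ISA prerequisites from the
 * dependency comment above, so a single mask test answers "is SSE4.1 plus
 * everything it implies usable?" (`cfg` is assumed to be in scope).
 *
 *   MonoCPUFeatures feats = mini_get_cpu_features (cfg);
 *   gboolean sse41_ok = (feats & MONO_CPU_X86_SSE41_COMBINED) == MONO_CPU_X86_SSE41_COMBINED;
 */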
enum {
SIMD_COMP_EQ,
SIMD_COMP_LT,
SIMD_COMP_LE,
SIMD_COMP_UNORD,
SIMD_COMP_NEQ,
SIMD_COMP_NLT,
SIMD_COMP_NLE,
SIMD_COMP_ORD
};
enum {
SIMD_PREFETCH_MODE_NTA,
SIMD_PREFETCH_MODE_0,
SIMD_PREFETCH_MODE_1,
SIMD_PREFETCH_MODE_2,
};
const char *mono_arch_xregname (int reg);
MonoCPUFeatures mono_arch_get_cpu_features (void);
#ifdef MONO_ARCH_SIMD_INTRINSICS
void mono_simd_simplify_indirection (MonoCompile *cfg);
void mono_simd_decompose_intrinsic (MonoCompile *cfg, MonoBasicBlock *bb, MonoInst *ins);
MonoInst* mono_emit_simd_intrinsics (MonoCompile *cfg, MonoMethod *cmethod, MonoMethodSignature *fsig, MonoInst **args);
MonoInst* mono_emit_simd_field_load (MonoCompile *cfg, MonoClassField *field, MonoInst *addr);
void mono_simd_intrinsics_init (void);
#endif
MonoMethod*
mini_method_to_shared (MonoMethod *method); // null if not shared
static inline gboolean
mini_safepoints_enabled (void)
{
#if defined (TARGET_WASM)
return FALSE;
#else
return TRUE;
#endif
}
gpointer
mono_arch_load_function (MonoJitICallId jit_icall_id);
MONO_COMPONENT_API MonoGenericContext
mono_get_generic_context_from_stack_frame (MonoJitInfo *ji, gpointer generic_info);
MONO_COMPONENT_API gpointer
mono_get_generic_info_from_stack_frame (MonoJitInfo *ji, MonoContext *ctx);
MonoMemoryManager* mini_get_default_mem_manager (void);
MONO_COMPONENT_API int
mono_wasm_get_debug_level (void);
#endif /* __MONO_MINI_H__ */
| 1 |
dotnet/runtime | 66,211 | [mono] Remove SkipVerification support from the runtime | CAS support was removed in .NET Core. This allows us removing a bunch of code that is unused, e.g. the dependency on libiconv. | akoeplinger | 2022-03-04T19:47:04Z | 2022-03-06T13:44:33Z | b463b1630dbf1be5b013208a9fa73e1ecd6c774c | be629f49a350d526de2c65981294734cee420b90 | [mono] Remove SkipVerification support from the runtime. CAS support was removed in .NET Core. This allows us removing a bunch of code that is unused, e.g. the dependency on libiconv. | ./src/tests/Interop/StructMarshalling/ReversePInvoke/MarshalExpStruct/ExpStructAsParamNative.h | // Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <xplatform.h>
#include <platformdefines.h>
const int NumArrElements = 2;
struct InnerSequential
{
int f1;
float f2;
LPCSTR f3;
};
void PrintInnerSequential(InnerSequential* p, const char* name)
{
printf("\t%s.f1 = %d\n", name, p->f1);
printf("\t%s.f2 = %f\n", name, p->f2);
printf("\t%s.f3 = %s\n", name, p->f3);
}
void ChangeInnerSequential(InnerSequential* p)
{
p->f1 = 77;
p->f2 = 77.0;
const char* lpstr = "changed string";
size_t size = sizeof(char) * (strlen(lpstr) + 1);
LPSTR temp = (LPSTR)CoreClrAlloc( size );
memset(temp, 0, size);
if(temp)
{
strncpy_s((char*)temp,size,lpstr,size-1);
p->f3 = temp;
}
else
{
printf("Memory Allocated Failed!");
}
}
bool IsCorrectInnerSequential(InnerSequential* p)
{
if(p->f1 != 1)
return false;
if(p->f2 != 1.0)
return false;
if(strcmp((char*)p->f3,"some string") != 0 )
return false;
return true;
}
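/*
 * Sketch of how the Print/Change/IsCorrect triads in this file are typically
 * driven from an exported test entry point (the export macro and function
 * name are hypothetical, error handling elided):
 *
 *   extern "C" DLL_EXPORT BOOL STDMETHODCALLTYPE
 *   MarshalInnerSequentialByRef (InnerSequential *p)
 *   {
 *       PrintInnerSequential (p, "p");
 *       if (!IsCorrectInnerSequential (p))
 *           return FALSE;
 *       ChangeInnerSequential (p);   // managed caller re-validates the mutation
 *       return TRUE;
 *   }
 */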
struct INNER2 // size = 12 bytes
{
INT f1;
FLOAT f2;
LPCSTR f3;
};
void ChangeINNER2(INNER2* p)
{
p->f1 = 77;
p->f2 = 77.0;
const char* temp = "changed string";
size_t len = strlen(temp);
LPCSTR str = (LPCSTR)CoreClrAlloc( sizeof(char)*(len+1) );
memset((LPVOID)str,0,len+1);
strncpy_s((char*)str,len+1,temp,len);
p->f3 = str;
}
void PrintINNER2(INNER2* p, const char* name)
{
printf("\t%s.f1 = %d\n", name, p->f1);
printf("\t%s.f2 = %f\n", name, p->f2);
printf("\t%s.f3 = %s\n", name, p->f3);
}
bool IsCorrectINNER2(INNER2* p)
{
if(p->f1 != 1)
return false;
if(p->f2 != 1.0)
return false;
if(memcmp(p->f3, "some string",11*sizeof(char)) != 0 )
return false;
return true;
}
struct InnerExplicit
{
#ifdef WINDOWS
union
{
INT f1;
FLOAT f2;
};
CHAR _unused0[4];
LPCSTR f3;
#else
union
{
INT f1;
FLOAT f2;
};
INT _unused0;
LPCSTR f3;
#endif
};
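// Layout sanity sketch (illustrative assumption-check): on both branches
// above, f3 should land at byte offset 8 -- 4 bytes of anonymous union plus
// 4 bytes of explicit padding -- on 32-bit and 64-bit targets alike.
#include <stddef.h> // offsetof; harmless if already included transitively
static_assert(offsetof(InnerExplicit, f3) == 8, "unexpected InnerExplicit layout");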
void PrintInnerExplicit(InnerExplicit* p, const char* name)
{
printf("\t%s.f1 = %d\n", name, p->f1);
printf("\t%s.f2 = %f\n", name, p->f2);
printf("\t%s.f3 = %s\n", name, p->f3);
}
void ChangeInnerExplicit(InnerExplicit* p)
{
p->f1 = 77;
const char* temp = "changed string";
size_t len = strlen(temp);
LPCSTR str = (LPCSTR)CoreClrAlloc( sizeof(char)*(len+1) );
memset((LPVOID)str,0,len+1);
strncpy_s((char*)str,len+1,temp,len);
p->f3 = str;
}
struct InnerArraySequential
{
InnerSequential arr[NumArrElements];
};
void PrintInnerArraySequential(InnerArraySequential* p, const char* name)
{
for(int i = 0; i < NumArrElements; i++)
{
printf("\t%s.arr[%d].f1 = %d\n", name, i, (p->arr)[i].f1);
printf("\t%s.arr[%d].f2 = %f\n", name, i, (p->arr)[i].f2);
printf("\t%s.arr[%d].f2 = %s\n", name, i, (p->arr)[i].f3);
}
}
void ChangeInnerArraySequential(InnerArraySequential* p)
{
const char* lpstr = "changed string";
LPSTR temp;
for(int i = 0; i < NumArrElements; i++)
{
(p->arr)[i].f1 = 77;
(p->arr)[i].f2 = 77.0;
size_t size = sizeof(char) * (strlen(lpstr) + 1);
temp = (LPSTR)CoreClrAlloc( size );
memset(temp, 0, size);
if(temp)
{
strncpy_s((char*)temp,strlen(lpstr)+1,lpstr,strlen(lpstr));
(p->arr)[i].f3 = temp;
}
else
{
printf("Memory Allocated Failed!");
}
}
}
bool IsCorrectInnerArraySequential(InnerArraySequential* p)
{
for(int i = 0; i < NumArrElements; i++)
{
if( (p->arr)[i].f1 != 1 )
return false;
if( (p->arr)[i].f2 != 1.0 )
return false;
}
return true;
}
union InnerArrayExplicit // size = 32 bytes
{
struct InnerSequential arr[2];
struct
{
LONG64 _unused0;
LPCSTR f4;
};
};
#ifdef HOST_64BIT
union OUTER3 // size = 32 bytes
{
struct InnerSequential arr[2];
struct
{
CHAR _unused0[24];
LPCSTR f4;
};
};
#else
struct OUTER3 // size = 28 bytes
{
struct InnerSequential arr[2];
LPCSTR f4;
};
#endif
void PrintOUTER3(OUTER3* p, const char* name)
{
for(int i = 0; i < NumArrElements; i++)
{
printf("\t%s.arr[%d].f1 = %d\n", name, i, (p->arr)[i].f1);
printf("\t%s.arr[%d].f2 = %f\n", name, i, (p->arr)[i].f2);
printf("\t%s.arr[%d].f3 = %s\n", name, i, (p->arr)[i].f3);
}
printf("\t%s.f4 = %s\n",name,p->f4);
}
void ChangeOUTER3(OUTER3* p)
{
const char* temp = "changed string";
size_t len = strlen(temp);
LPCSTR str = NULL;
for(int i = 0; i < NumArrElements; i++)
{
(p->arr)[i].f1 = 77;
(p->arr)[i].f2 = 77.0;
str = (LPCSTR)CoreClrAlloc( sizeof(char)*(len+1) );
memset((LPVOID)str,0,len+1);
strncpy_s((char*)str,len+1,temp,len);
(p->arr)[i].f3 = str;
}
str = (LPCSTR)CoreClrAlloc( sizeof(char)*(len+1) );
memset((LPVOID)str,0,len+1);
strncpy_s((char*)str,len+1,temp,len);
p->f4 = str;
}
bool IsCorrectOUTER3(OUTER3* p)
{
for(int i = 0; i < NumArrElements; i++)
{
if( (p->arr)[i].f1 != 1 )
return false;
if( (p->arr)[i].f2 != 1.0 )
return false;
if( memcmp((p->arr)[i].f3, "some string",11*sizeof(char)) != 0 )
return false;
}
if(memcmp(p->f4,"some string",11*sizeof(char)) != 0)
{
return false;
}
return true;
}
struct CharSetAnsiSequential
{
LPCSTR f1;
char f2;
};
void PrintCharSetAnsiSequential(CharSetAnsiSequential* p, const char* name)
{
printf("\t%s.f1 = %s\n", name, p->f1);
printf("\t%s.f2 = %c\n", name, p->f2);
}
void ChangeCharSetAnsiSequential(CharSetAnsiSequential* p)
{
const char* strSource = "change string";
size_t size = strlen(strSource) + 1;
LPSTR temp = (LPSTR)CoreClrAlloc(size);
if(temp != NULL)
{
memset(temp,0,size);
strncpy_s((char*)temp,size,strSource,size-1);
p->f1 = temp;
p->f2 = 'n';
}
else
{
printf("Memory Allocated Failed!");
}
}
bool IsCorrectCharSetAnsiSequential(CharSetAnsiSequential* p)
{
if(strcmp((char*)p->f1, (char*)"some string") != 0 )
return false;
if(p->f2 != 'c')
return false;
return true;
}
struct CharSetUnicodeSequential
{
LPCWSTR f1;
WCHAR f2;
};
void PrintCharSetUnicodeSequential(CharSetUnicodeSequential* p, const char* name)
{
#ifdef _WIN32
wprintf(L"\t%S.first = %s\n", name, p->f1);
wprintf(L"\t%S.last = %c\n", name, p->f2);
#else
wprintf(L"\t%s.first = %s\n", name, p->f1);
wprintf(L"\t%s.last = %c\n", name, p->f2);
#endif
}
void ChangeCharSetUnicodeSequential(CharSetUnicodeSequential* p)
{
WCHAR* strSource = (WCHAR*)(W("change string"));
size_t len =TP_slen(strSource);
LPCWSTR temp = (LPCWSTR)CoreClrAlloc(sizeof(WCHAR)*(len+1));
if(temp != NULL)
{
memset((LPWSTR)temp,0,sizeof(WCHAR)*(len+1));
TP_wcsncpy_s((WCHAR*)temp, len+1, strSource, len);
p->f1 = temp;
p->f2 = L'n';
}
else
{
printf("Memory Allocated Failed!");
}
}
bool IsCorrectCharSetUnicodeSequential(CharSetUnicodeSequential* p)
{
WCHAR* expected= const_cast<WCHAR*>(W("some string"));
WCHAR* actual = const_cast<WCHAR*>(p->f1);
if(0 != TP_wcmp_s(actual, expected))
{
return false;
}
if(p->f2 != L'c')
{
return false;
}
return true;
}
struct NumberSequential // size = 64 bytes
{
LONG64 i64;
ULONG64 ui64;
DOUBLE d;
INT i32;
UINT ui32;
SHORT s1;
WORD us1;
SHORT i16;
WORD ui16;
FLOAT sgl;
BYTE b;
CHAR sb;
};
void PrintNumberSequential(NumberSequential* str, const char* name)
{
printf("\t%s.i32 = %d\n", name, str->i32);
printf("\t%s.ui32 = %d\n", name, str->ui32);
printf("\t%s.s1 = %d\n", name, str->s1);
printf("\t%s.us1 = %u\n", name, str->us1);
printf("\t%s.b = %u\n", name, str->b);
printf("\t%s.sb = %d\n", name, str->sb);
printf("\t%s.i16 = %d\n", name, str->i16);
printf("\t%s.ui16 = %u\n", name, str->ui16);
printf("\t%s.i64 = %lld\n", name, str->i64);
printf("\t%s.ui64 = %llu\n", name, str->ui64);
printf("\t%s.sgl = %f\n", name, str->sgl);
printf("\t%s.d = %f\n",name, str->d);
}
void ChangeNumberSequential(NumberSequential* p)
{
p->i32 = 0;
p->ui32 = 32;
p->s1 = 0;
p->us1 = 16;
p->b = 0;
p->sb = 8;
p->i16 = 0;
p->ui16 = 16;
p->i64 = 0;
p->ui64 = 64;
p->sgl = 64.0;
p->d = 6.4;
}
bool IsCorrectNumberSequential(NumberSequential* p)
{
if(p->i32 != INT_MIN || p->ui32 != 0xffffffff || p->s1 != -0x8000 || p->us1 != 0xffff || p->b != 0 ||
p->sb != 0x7f ||p->i16 != -0x8000 || p->ui16 != 0xffff || p->i64 != -1234567890 ||
p->ui64 != 1234567890 || (p->sgl) != 32.0 || p->d != 3.2)
{
return false;
}
return true;
}
struct S3 // size = 1032 bytes
{
BOOL flag;
LPCSTR str;
INT vals[256];
};
void PrintS3(S3* str, const char* name)
{
printf("\t%s.flag = %d\n", name, str->flag);
printf("\t%s.str = %s\n", name, str->str);
for(int i = 0; i<256 ;i++)
{
printf("\t%s.vals[%d] = %d\n",name,i,str->vals[i]);
}
}
void ChangeS3(S3* p)
{
p->flag = false;
const char* strSource = "change string";
size_t len =strlen(strSource)+1;
LPCSTR temp = (LPCSTR)CoreClrAlloc(sizeof(char)*len);
if(temp != NULL)
{
memset((LPVOID)temp,0,len);
strncpy_s((char*)temp,len,strSource,len-1);
p->str = temp;
}
for(int i = 1;i<257;i++)
{
p->vals[i-1] = i;
}
}
bool IsCorrectS3(S3* p)
{
int iflag = 0;
if(!p->flag || strcmp((char*)p->str,"some string") != 0)
return false;
for (int i = 0; i < 256; i++)
{
if (p->vals[i] != i)
{
printf("\tThe Index of %i is not expected",i);
iflag++;
}
}
if (iflag != 0)
{
return false;
}
return true;
}
struct S4 // size = 8 bytes
{
INT age;
LPCSTR name;
};
enum Enum1
{
e1 = 1,
e2 = 3
};
struct S5 // size = 8 bytes
{
struct S4 s4;
Enum1 ef;
};
void PrintS5(S5* str, const char* name)
{
printf("\t%s.s4.age = %d", name, str->s4.age);
printf("\t%s.s4.name = %s", name, str->s4.name);
printf("\t%s.ef = %d", name, str->ef);
}
void ChangeS5(S5* str)
{
Enum1 eInstance = e2;
const char* strSource = "change string";
size_t len =strlen(strSource)+1;
LPCSTR temp = (LPCSTR)CoreClrAlloc(sizeof(char)*len);
if(temp != NULL)
{
memset((LPVOID)temp,0,len);
strncpy_s((char*)temp,len,strSource,len-1);
str->s4.name = temp;
}
str->s4.age = 64;
str->ef = eInstance;
}
bool IsCorrectS5(S5* str)
{
Enum1 eInstance = e1;
if(str->s4.age != 32 || strcmp((char*)str->s4.name,"some string") != 0)
return false;
if(str->ef != eInstance)
{
return false;
}
return true;
}
struct StringStructSequentialAnsi // size = 8 bytes
{
LPCSTR first;
LPCSTR last;
};
void PrintStringStructSequentialAnsi(StringStructSequentialAnsi* str, const char* name)
{
printf("\t%s.first = %s\n", name, str->first);
printf("\t%s.last = %s\n", name, str->last);
}
bool IsCorrectStringStructSequentialAnsi(StringStructSequentialAnsi* str)
{
char strOne[512];
char strTwo[512];
for(int i = 0;i<512;i++)
{
strOne[i] = 'a';
strTwo[i] = 'b';
}
if(memcmp(str->first,strOne,512)!= 0)
return false;
if(memcmp(str->last,strTwo,512)!= 0)
return false;
return true;
}
void ChangeStringStructSequentialAnsi(StringStructSequentialAnsi* str)
{
char* newFirst = (char*)CoreClrAlloc(sizeof(char)*513);
char* newLast = (char*)CoreClrAlloc(sizeof(char)*513);
for (int i = 0; i < 512; ++i)
{
newFirst[i] = 'b';
newLast[i] = 'a';
}
newFirst[512] = '\0';
newLast[512] = '\0';
str->first = newFirst;
str->last = newLast;
}
struct StringStructSequentialUnicode // size = 8 bytes
{
LPCWSTR first;
LPCWSTR last;
};
void PrintStringStructSequentialUnicode(StringStructSequentialUnicode* str, const char* name)
{
#ifdef _WIN32
wprintf(L"\t%S.first = %s\n", name, str->first);
wprintf(L"\t%S.last = %s\n", name, str->last);
#else
wprintf(L"\t%s.first = %s\n", name, str->first);
wprintf(L"\t%s.last = %s\n", name, str->last);
#endif
}
bool IsCorrectStringStructSequentialUnicode(StringStructSequentialUnicode* str)
{
WCHAR strOne[256+1];
WCHAR strTwo[256+1];
for(int i = 0;i<256;++i)
{
strOne[i] = L'a';
strTwo[i] = L'b';
}
strOne[256] = L'\0';
strTwo[256] = L'\0';
if(memcmp(str->first,strOne,256*sizeof(WCHAR)) != 0)
return false;
if(memcmp(str->last,strTwo,256*sizeof(WCHAR)) != 0)
return false;
return true;
}
void ChangeStringStructSequentialUnicode(StringStructSequentialUnicode* str)
{
WCHAR* newFirst = (WCHAR*)CoreClrAlloc(sizeof(WCHAR)*257);
WCHAR* newLast = (WCHAR*)CoreClrAlloc(sizeof(WCHAR)*257);
for (int i = 0; i < 256; ++i)
{
newFirst[i] = L'b';
newLast[i] = L'a';
}
newFirst[256] = L'\0';
newLast[256] = L'\0';
str->first = (const WCHAR*)newFirst;
str->last = (const WCHAR*)newLast;
}
struct S8 // size = 32 bytes
{
LPCSTR name;
BOOL gender;
HRESULT i32;
HRESULT ui32;
WORD jobNum;
CHAR mySByte;
};
void PrintS8(S8* str, const char* name)
{
printf("\t%s.name = %s\n",name, str->name);
printf("\t%s.gender = %d\n", name, str->gender);
printf("\t%s.jobNum = %d\n",name, str->jobNum);
printf("\t%s.i32 = %d\n", name, (int)(str->i32));
printf("\t%s.ui32 = %u\n", name, (unsigned int)(str->ui32));
printf("\t%s.mySByte = %c\n", name, str->mySByte);
}
bool IsCorrectS8(S8* str)
{
if(memcmp( str->name,"hello", strlen("hello")*sizeof(char)+1 )!= 0)
return false;
if(!str->gender)
return false;
if(str->jobNum != 10)
return false;
if(str->i32!= 128 || str->ui32 != 128)
return false;
if(str->mySByte != 32)
return false;
return true;
}
void ChangeS8(S8* str)
{
const char* lpstr = "world";
size_t size = sizeof(char) * (strlen(lpstr) + 1);
LPSTR temp = (LPSTR)CoreClrAlloc( size );
memset(temp, 0, size);
if(temp)
{
strcpy_s((char*)temp,size,lpstr);
str->name = temp;
}
else
{
printf("Memory Allocated Failed!");
}
str->gender = false;
str->jobNum = 1;
str->i32 = 256;
str->ui32 = 256;
str->mySByte = 64;
}
#pragma pack (8)
struct S_int // size = 4 bytes
{
INT i;
};
struct S9;
typedef void (*TestDelegate1)(struct S9 myStruct);
struct S9 // size = 8 bytes
{
HRESULT i32;
TestDelegate1 myDelegate1;
};
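/*
 * Sketch (hypothetical driver): the managed delegate marshaled into
 * myDelegate1 can be invoked from native code with the struct passed by value.
 *
 *   void InvokeS9 (S9 s) { if (s.myDelegate1) s.myDelegate1 (s); }
 */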
struct S101 // size = 8 bytes
{
INT i;
struct S_int s_int;
};
struct S10 // size = 8 bytes
{
struct S101 s;
};
void PrintS10(S10* str, const char* name)
{
printf("\t%s.s.s_int.i = %d\n", name, str->s.s_int.i);
printf("\t%s.s.i = %d\n", name, str->s.i);
}
bool IsCorrectS10(S10* str)
{
if(str->s.s_int.i != 32)
return false;
if(str->s.i != 32)
return false;
return true;
}
void ChangeS10(S10* str)
{
str->s.s_int.i = 64;
str->s.i = 64;
}
#ifndef WINDOWS
typedef int* LPINT;
#endif
struct S11 // size = 8 bytes
{
LPINT i32;
INT i;
};
union U // size = 8 bytes
{
INT i32;
UINT ui32;
LPVOID iPtr;
LPVOID uiPtr;
SHORT s;
WORD us;
BYTE b;
CHAR sb;
LONG64 l;
ULONG64 ul;
FLOAT f;
DOUBLE d;
};
void PrintU(U* str, const char* name)
{
printf("\t%s.i32 = %d\n", name, str->i32);
printf("\t%s.ui32 = %u\n", name, str->ui32);
printf("\t%s.iPtr = %zu\n", name, (size_t)(str->iPtr));
printf("\t%s.uiPtr = %zu\n", name, (size_t)(str->uiPtr));
printf("\t%s.s = %d\n", name, str->s);
printf("\t%s.us = %u\n", name, str->us);
printf("\t%s.b = %u\n", name, str->b);
printf("\t%s.sb = %d\n", name, str->sb);
printf("\t%s.l = %lld\n", name, str->l);
printf("\t%s.ul = %llu\n", name, str->ul);
printf("\t%s.f = %f\n", name, str->f);
printf("\t%s.d = %f\n", name, str->d);
}
void ChangeU(U* p)
{
p->i32 = 2147483647;
p->ui32 = 0;
p->iPtr = (LPVOID)(-64);
p->uiPtr = (LPVOID)(64);
p->s = 32767;
p->us = 0;
p->b = 255;
p->sb = -128;
p->l = -1234567890;
p->ul = 0;
p->f = 64.0;
p->d = 6.4;
}
bool IsCorrectU(U* p)
{
if(p->d != 3.2)
{
return false;
}
return true;
}
struct ByteStructPack2Explicit // size = 2 bytes
{
BYTE b1;
BYTE b2;
};
void PrintByteStructPack2Explicit(ByteStructPack2Explicit* str, const char* name)
{
printf("\t%s.b1 = %d", name, str->b1);
printf("\t%s.b2 = %d", name, str->b2);
}
void ChangeByteStructPack2Explicit(ByteStructPack2Explicit* p)
{
p->b1 = 64;
p->b2 = 64;
}
bool IsCorrectByteStructPack2Explicit(ByteStructPack2Explicit* p)
{
if(p->b1 != 32 || p->b2 != 32)
return false;
return true;
}
struct ShortStructPack4Explicit // size = 4 bytes
{
SHORT s1;
SHORT s2;
};
void PrintShortStructPack4Explicit(ShortStructPack4Explicit* str, const char* name)
{
printf("\t%s.s1 = %d", name, str->s1);
printf("\t%s.s2 = %d", name, str->s2);
}
void ChangeShortStructPack4Explicit(ShortStructPack4Explicit* p)
{
p->s1 = 64;
p->s2 = 64;
}
bool IsCorrectShortStructPack4Explicit(ShortStructPack4Explicit* p)
{
if(p->s1 != 32 || p->s2 != 32)
return false;
return true;
}
struct IntStructPack8Explicit // size = 8 bytes
{
INT i1;
INT i2;
};
void PrintIntStructPack8Explicit(IntStructPack8Explicit* str, const char* name)
{
printf("\t%s.i1 = %d", name, str->i1);
printf("\t%s.i2 = %d", name, str->i2);
}
void ChangeIntStructPack8Explicit(IntStructPack8Explicit* p)
{
p->i1 = 64;
p->i2 = 64;
}
bool IsCorrectIntStructPack8Explicit(IntStructPack8Explicit* p)
{
if(p->i1 != 32 || p->i2 != 32)
return false;
return true;
}
struct LongStructPack16Explicit // size = 16 bytes
{
LONG64 l1;
LONG64 l2;
};
void PrintLongStructPack16Explicit(LongStructPack16Explicit* str, const char* name)
{
printf("\t%s.l1 = %lld", name, str->l1);
printf("\t%s.l2 = %lld", name, str->l2);
}
void ChangeLongStructPack16Explicit(LongStructPack16Explicit* p)
{
p->l1 = 64;
p->l2 = 64;
}
bool IsCorrectLongStructPack16Explicit(LongStructPack16Explicit* p)
{
if(p->l1 != 32 || p->l2 != 32)
return false;
return true;
}
LPSTR GetNativeString()
{
const char* lNativeStr = "Native";
const size_t lsize = strlen(lNativeStr);
LPSTR str = NULL;
str = (LPSTR)CoreClrAlloc( lsize+1 );
memset(str,0,lsize+1);
strcpy_s((char*)str,lsize+1,lNativeStr);
return str;
}
LPSTR GetSomeString()
{
const char* lNativeStr = "some string";
const size_t lsize = strlen(lNativeStr);
LPSTR str = NULL;
str = (LPSTR)CoreClrAlloc( lsize+1 );
memset(str,0,lsize+1);
strcpy_s((char*)str,lsize+1,lNativeStr);
return str;
}
|
| -1 |
dotnet/runtime | 66,211 | [mono] Remove SkipVerification support from the runtime | CAS support was removed in .NET Core. This allows us removing a bunch of code that is unused, e.g. the dependency on libiconv. | akoeplinger | 2022-03-04T19:47:04Z | 2022-03-06T13:44:33Z | b463b1630dbf1be5b013208a9fa73e1ecd6c774c | be629f49a350d526de2c65981294734cee420b90 | [mono] Remove SkipVerification support from the runtime. CAS support was removed in .NET Core. This allows us removing a bunch of code that is unused, e.g. the dependency on libiconv. | ./src/libraries/System.Private.Xml/tests/Xslt/TestFiles/TestData/xsltc/baseline/dft4.txt | Microsoft (R) XSLT Compiler version 2.0.61009
for Microsoft (R) Windows (R) 2005 Framework version 2.0.50727
Copyright (C) Microsoft Corporation 2007. All rights reserved.
fatal error : Error saving assembly 'D:\OASys\Working\dft4.dll'. ---> Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED))
|
| -1 |
dotnet/runtime | 66,211 | [mono] Remove SkipVerification support from the runtime | CAS support was removed in .NET Core. This allows us removing a bunch of code that is unused, e.g. the dependency on libiconv. | akoeplinger | 2022-03-04T19:47:04Z | 2022-03-06T13:44:33Z | b463b1630dbf1be5b013208a9fa73e1ecd6c774c | be629f49a350d526de2c65981294734cee420b90 | [mono] Remove SkipVerification support from the runtime. CAS support was removed in .NET Core. This allows us removing a bunch of code that is unused, e.g. the dependency on libiconv. | ./src/native/libs/System.Security.Cryptography.Native.Apple/pal_seckey_macos.h | // Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
#pragma once
#include "pal_types.h"
#include "pal_compiler.h"
#include <Security/Security.h>
/*
Export a key object.
Public keys are exported using the "OpenSSL" format option, which means, essentially,
"whatever format the openssl CLI would use for this algorithm by default".
Private keys are exported using the "Wrapped PKCS#8" format. These formats are available via
`openssl pkcs8 -topk8 ...`. While the PKCS#8 container is the same for all key types, the
payload is algorithm-dependent (though identified by the PKCS#8 wrapper).
An export passphrase is required for private keys, and ignored for public keys.
Follows pal_seckey return conventions.
*/
PALEXPORT int32_t AppleCryptoNative_SecKeyExport(
SecKeyRef pKey, int32_t exportPrivate, CFStringRef cfExportPassphrase, CFDataRef* ppDataOut, int32_t* pOSStatus);
/*
Import a key from a key blob.
Imports are always done using the "OpenSSL" format option, which means the format used for an
unencrypted private key via the openssl CLI verb of the algorithm being imported.
For public keys the "OpenSSL" format is NOT the format used by the openssl CLI for that algorithm,
but is in fact the X.509 SubjectPublicKeyInfo structure.
Returns 1 on success, 0 on failure (*pOSStatus should be set) and negative numbers for various
state machine errors.
*/
PALEXPORT int32_t AppleCryptoNative_SecKeyImportEphemeral(
uint8_t* pbKeyBlob, int32_t cbKeyBlob, int32_t isPrivateKey, SecKeyRef* ppKeyOut, int32_t* pOSStatus);
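/*
 * Round-trip sketch (hypothetical caller, error handling elided): import a
 * SubjectPublicKeyInfo blob as an ephemeral public key, then export it back
 * in the "OpenSSL" format described above.
 *
 *   SecKeyRef key = NULL;
 *   int32_t osStatus = 0;
 *   if (AppleCryptoNative_SecKeyImportEphemeral (spki, spkiLen, 0, &key, &osStatus) == 1)
 *   {
 *       CFDataRef exported = NULL;
 *       AppleCryptoNative_SecKeyExport (key, 0, NULL, &exported, &osStatus);
 *   }
 */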
|
| -1 |
dotnet/runtime | 66,211 | [mono] Remove SkipVerification support from the runtime | CAS support was removed in .NET Core. This allows us removing a bunch of code that is unused, e.g. the dependency on libiconv. | akoeplinger | 2022-03-04T19:47:04Z | 2022-03-06T13:44:33Z | b463b1630dbf1be5b013208a9fa73e1ecd6c774c | be629f49a350d526de2c65981294734cee420b90 | [mono] Remove SkipVerification support from the runtime. CAS support was removed in .NET Core. This allows us removing a bunch of code that is unused, e.g. the dependency on libiconv. | ./src/coreclr/inc/check.h | // Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
// ---------------------------------------------------------------------------
// Check.h
//
//
// Assertion checking infrastructure
// ---------------------------------------------------------------------------
#ifndef CHECK_H_
#define CHECK_H_
#include "static_assert.h"
#include "daccess.h"
#include "unreachable.h"
#ifdef _DEBUG
#ifdef _MSC_VER
// Make sure we can recurse deep enough for FORCEINLINE
#pragma inline_recursion(on)
#pragma inline_depth(16)
#pragma warning(disable:4714)
#endif // _MSC_VER
#if !defined(DISABLE_CONTRACTS)
#define CHECK_INVARIANTS 1
#define VALIDATE_OBJECTS 1
#endif
#endif // _DEBUG
#if defined(_DEBUG) && !defined(DACCESS_COMPILE)
#define _DEBUG_IMPL 1
#endif
#ifdef _DEBUG
#define DEBUG_ARG(x) , x
#else
#define DEBUG_ARG(x)
#endif
#define CHECK_STRESS 1
//--------------------------------------------------------------------------------
// A CHECK is an object which encapsulates a potential assertion
// failure. It not only contains the result of the check, but if the check fails,
// also records information about the condition and call site.
//
// CHECK also serves as a holder to prevent recursive CHECKS. These can be
// particularly common when putting preconditions inside predicates, especially
// routines called by an invariant.
//
// Note that using CHECK is perfectly efficient in a free build - the CHECK becomes
// a simple string constant pointer (typically either NULL or (LPCSTR)1, although some
// check failures may include messages)
//
// NOTE: you should NEVER use the CHECK class API directly - use the macros below.
//--------------------------------------------------------------------------------
class SString;
class CHECK
{
protected:
// On retail, this is a pointer to a string literal, null or (LPCSTR)1.
// On debug, this is a pointer to dynamically allocated memory - that
// lets us have formatted strings in debug builds.
LPCSTR m_message;
#ifdef _DEBUG
LPCSTR m_condition;
LPCSTR m_file;
INT m_line;
LONG *m_pCount;
// Keep leakage counters.
static size_t s_cLeakedBytes;
static size_t s_cNumFailures;
static thread_local LONG t_count;
#endif
static BOOL s_neverEnforceAsserts;
public: // !!! NOTE: Called from macros only!!!
// If we are not in a check, return TRUE and PushCheck; otherwise return FALSE
BOOL EnterAssert();
// Pops check count
void LeaveAssert();
// Just return if we are in a check
BOOL IsInAssert();
// Should we skip enforcing asserts
static BOOL EnforceAssert();
static BOOL EnforceAssert_StaticCheckOnly();
static void ResetAssert();
#ifdef _MSC_VER
#pragma warning(push)
#pragma warning(disable:4702) // Disable bogus unreachable code warning
#endif // _MSC_VER
CHECK() : m_message(NULL)
#ifdef _DEBUG
, m_condition (NULL)
, m_file(NULL)
              , m_line(0)
, m_pCount(NULL)
#endif
{}
#ifdef _MSC_VER
#pragma warning(pop)
#endif // _MSC_VER
// Fail records the result of a condition check. Can take either a
// boolean value or another check result
BOOL Fail(BOOL condition);
BOOL Fail(const CHECK &check);
// Setup records context info after a failure.
void Setup(LPCSTR message DEBUG_ARG(LPCSTR condition) DEBUG_ARG(LPCSTR file) DEBUG_ARG(INT line));
static LPCSTR FormatMessage(LPCSTR messageFormat, ...);
// Trigger triggers the actual check failure. The trigger may provide a reason
// to include in the failure message.
void Trigger(LPCSTR reason);
// Finally, convert to a BOOL to allow just testing the result of a Check function
operator BOOL();
BOOL operator!();
CHECK &operator()() { return *this; }
static inline const CHECK OK() {
return CHECK();
}
static void SetAssertEnforcement(BOOL value);
private:
#ifdef _DEBUG
static LPCSTR AllocateDynamicMessage(const SString &s);
#endif
};
//--------------------------------------------------------------------------------
// These CHECK macros are the correct way to propagate an assertion. These
// routines are designed for use inside "Check" routines. Such routines may
// be Invariants, Validate routines, or any other assertional predicates.
//
// A Check routine should return a value of type CHECK.
//
// It should consist of multiple CHECK or CHECK_MSG statements (along with appropriate
// control flow) and should end with CHECK_OK() if all other checks pass.
//
// It may contain a CONTRACT_CHECK contract, but this is only appropriate if the
// check is used for non-assertional purposes (otherwise the contract will never execute).
// Note that CONTRACT_CHECK contracts do not support postconditions.
//
// CHECK: Check the given condition, return a CHECK failure if FALSE
// CHECK_MSG: Same, but include a message parameter if the check fails
// CHECK_OK: Return a successful check value;
//--------------------------------------------------------------------------------
#ifdef _DEBUG
#define DEBUG_ONLY_MESSAGE(msg) msg
#else
// On retail, we don't want to add a bunch of string literals to the image,
// so we just use the same one everywhere.
#define DEBUG_ONLY_MESSAGE(msg) ((LPCSTR)1)
#endif
#define CHECK_MSG_EX(_condition, _message, _RESULT) \
do \
{ \
CHECK _check; \
if (_check.Fail(_condition)) \
{ \
ENTER_DEBUG_ONLY_CODE; \
_check.Setup(DEBUG_ONLY_MESSAGE(_message) \
DEBUG_ARG(#_condition) \
DEBUG_ARG(__FILE__) \
DEBUG_ARG(__LINE__)); \
_RESULT(_check); \
LEAVE_DEBUG_ONLY_CODE; \
} \
} while (0)
#define RETURN_RESULT(r) return r
#define CHECK_MSG(_condition, _message) \
CHECK_MSG_EX(_condition, _message, RETURN_RESULT)
#define CHECK(_condition) \
CHECK_MSG(_condition, "")
#define CHECK_MSGF(_condition, _args) \
CHECK_MSG(_condition, CHECK::FormatMessage _args)
#define CHECK_FAIL(_message) \
CHECK_MSG(FALSE, _message); UNREACHABLE()
#define CHECK_FAILF(_args) \
CHECK_MSGF(FALSE, _args); UNREACHABLE()
#define CHECK_OK \
return CHECK::OK()
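// Ex usage (illustrative sketch; CheckBuffer and its arguments are hypothetical,
// not part of this header):
//
// CHECK CheckBuffer(const void *start, UINT64 size)
// {
//     CHECK_MSG(start != NULL, "Buffer must not be null");
//     CHECK(CheckOverflow(start, size));
//     if (size == 0)
//     {
//         CHECK_FAIL("Empty buffers are not allowed");
//     }
//     CHECK_OK;
// }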
//--------------------------------------------------------------------------------
// ASSERT_CHECK is the proper way to trigger a check result. If the CHECK
// has failed, the diagnostic assertion routines will fire with appropriate
// context information.
//
// Note that the condition may either be a raw boolean expression or a CHECK result
// returned from a Check routine.
//
// Recursion note: ASSERT_CHECKs are only performed if there is no current check in
// progress.
//--------------------------------------------------------------------------------
#ifndef ENTER_DEBUG_ONLY_CODE
#define ENTER_DEBUG_ONLY_CODE
#endif
#ifndef LEAVE_DEBUG_ONLY_CODE
#define LEAVE_DEBUG_ONLY_CODE
#endif
#define ASSERT_CHECK(_condition, _message, _reason) \
do \
{ \
CHECK _check; \
if (_check.EnterAssert()) \
{ \
ENTER_DEBUG_ONLY_CODE; \
if (_check.Fail(_condition)) \
{ \
_check.Setup(_message \
DEBUG_ARG(#_condition) \
DEBUG_ARG(__FILE__) \
DEBUG_ARG(__LINE__)); \
_check.Trigger(_reason); \
} \
LEAVE_DEBUG_ONLY_CODE; \
_check.LeaveAssert(); \
} \
} while (0)
// ex: ASSERT_CHECKF(1+2==4, "my reason", ("Woah %d", 1+3));
// note the double parentheses: the 'args' param below will include one pair of parens.
#define ASSERT_CHECKF(_condition, _reason, _args) \
ASSERT_CHECK(_condition, CHECK::FormatMessage _args, _reason)
//--------------------------------------------------------------------------------
// INVARIANTS are descriptions of conditions which are always true at well defined
// points of execution. Invariants may be checked by the caller or callee at any
// time as paranoia requires.
//
// There are really two flavors of invariant. The "public invariant" describes
// to the caller invariant behavior about the abstraction which is visible from
// the public API (and of course it should be expressible in that public API).
//
// The "internal invariant" (or representation invariant), on the other hand, is
// a description of the private implementation of the abstraction, which may examine
// internal state of the abstraction or use private entry points.
//
// Classes with invariants should introduce methods called
// CHECK Invariant();
// and
// CHECK InternalInvariant();
// to allow invariant checks.
//--------------------------------------------------------------------------------
#if CHECK_INVARIANTS
template <typename TYPENAME>
CHECK CheckInvariant(TYPENAME &obj)
{
#if defined(_MSC_VER) || defined(__llvm__)
__if_exists(TYPENAME::Invariant)
{
CHECK(obj.Invariant());
}
__if_exists(TYPENAME::InternalInvariant)
{
CHECK(obj.InternalInvariant());
}
#endif
CHECK_OK;
}
#define CHECK_INVARIANT(o) \
ASSERT_CHECK(CheckInvariant(o), NULL, "Invariant failure")
#else
#define CHECK_INVARIANT(o) do { } while (0)
#endif
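// Ex usage (illustrative sketch; Range and its fields are hypothetical):
//
// CHECK Range::Invariant()
// {
//     // Publicly visible property: the range is well formed.
//     CHECK(m_start <= m_end);
//     CHECK_OK;
// }
//
// // ... later, wherever paranoia requires:
// CHECK_INVARIANT(*pRange);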
//--------------------------------------------------------------------------------
// VALIDATE is a check to be made on an object type which identifies a pointer as
// a valid instance of the object, by calling CheckPointer on it. Normally a null
// pointer is treated as an error; VALIDATE_NULL (or CheckPointer(o, NULL_OK))
// may be used when a null pointer is acceptable.
//
// In addition to the null/non-null check, a type may provide a specific Check method
// for more sophisticated identification. In general, the Check method
// should answer the question
// "Is this a valid instance of its declared compile-time type?". For instance, if
// runtime type identification were supported for the type, it should be invoked here.
//
// Note that CheckPointer will also check the invariant(s) if appropriate, so the
// invariants should NOT be explicitly invoked from the Check method.
//--------------------------------------------------------------------------------
enum IsNullOK
{
NULL_NOT_OK = 0,
NULL_OK = 1
};
#if CHECK_INVARIANTS
template <typename TYPENAME>
CHECK CheckPointer(TYPENAME *o, IsNullOK ok = NULL_NOT_OK)
{
if (o == NULL)
{
CHECK_MSG(ok, "Illegal null pointer");
}
else
{
#if defined(_MSC_VER) || defined(__llvm__)
__if_exists(TYPENAME::Check)
{
CHECK(o->Check());
}
#endif
}
CHECK_OK;
}
template <typename TYPENAME>
CHECK CheckValue(TYPENAME &val)
{
#if defined(_MSC_VER) || defined(__llvm__)
__if_exists(TYPENAME::Check)
{
CHECK(val.Check());
}
#endif
CHECK(CheckInvariant(val));
CHECK_OK;
}
#else // CHECK_INVARIANTS
#ifdef _DEBUG_IMPL
// Don't define these functions as no-ops for the non-debug
// build, as that may hide important checks
template <typename TYPENAME>
CHECK CheckPointer(TYPENAME *o, IsNullOK ok = NULL_NOT_OK)
{
if (o == NULL)
{
CHECK_MSG(ok, "Illegal null pointer");
}
CHECK_OK;
}
template <typename TYPENAME>
CHECK CheckValue(TYPENAME &val)
{
CHECK_OK;
}
#endif
#endif // CHECK_INVARIANTS
#if VALIDATE_OBJECTS
#define VALIDATE(o) \
ASSERT_CHECK(CheckPointer(o), "Validation failure")
#define VALIDATE_NULL(o) \
ASSERT_CHECK(CheckPointer(o, NULL_OK), "Validation failure")
#else
#define VALIDATE(o) do { } while (0)
#define VALIDATE_NULL(o) do { } while (0)
#endif
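// Ex usage (illustrative sketch; Widget and its signature field are hypothetical):
//
// CHECK Widget::Check()
// {
//     // "Is this a valid instance of its declared compile-time type?"
//     CHECK(m_signature == WIDGET_SIGNATURE);
//     CHECK_OK;
// }
//
// void ProcessWidget(Widget *pWidget, Widget *pOptionalWidget)
// {
//     VALIDATE(pWidget);              // null is an error here
//     VALIDATE_NULL(pOptionalWidget); // null is acceptable here
//     ...
// }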
//--------------------------------------------------------------------------------
// CONSISTENCY_CHECKS are ad-hoc assertions about the expected state of the program
// at a given time. A failure in one of these indicates a bug in the code.
//
// Note that the condition may either be a raw boolean expression or a CHECK result
// returned from a Check routine.
//--------------------------------------------------------------------------------
#define CONSISTENCY_CHECK(_condition) \
CONSISTENCY_CHECK_MSG(_condition, "")
#ifdef _DEBUG_IMPL
#define CONSISTENCY_CHECK_MSG(_condition, _message) \
ASSERT_CHECK(_condition, _message, "Consistency check failed")
#define CONSISTENCY_CHECK_MSGF(_condition, args) \
ASSERT_CHECKF(_condition, "Consistency check failed", args)
#else
#define CONSISTENCY_CHECK_MSG(_condition, _message) do { } while (0)
#define CONSISTENCY_CHECK_MSGF(_condition, args) do { } while (0)
#endif
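// Ex usage (illustrative; the names are hypothetical):
// CONSISTENCY_CHECK(pHeader != NULL);
// CONSISTENCY_CHECK_MSGF(m_refCount > 0, ("Bad ref count %d", m_refCount));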
//--------------------------------------------------------------------------------
// SIMPLIFYING_ASSUMPTIONS are workarounds which are placed in the code to allow progress
// to be made in the case of difficult corner cases. These should NOT be left in the
// code; they are really just markers of things which need to be fixed.
//
// Note that the condition may either be a raw boolean expression or a CHECK result
// returned from a Check routine.
//--------------------------------------------------------------------------------
// Ex usage:
// SIMPLIFYING_ASSUMPTION(SomeExpression());
#define SIMPLIFYING_ASSUMPTION(_condition) \
SIMPLIFYING_ASSUMPTION_MSG(_condition, "")
// Helper for HRs. Will provide formatted message showing the failure code.
#define SIMPLIFYING_ASSUMPTION_SUCCEEDED(__hr) \
{ \
HRESULT __hr2 = (__hr); \
(void)__hr2; \
SIMPLIFYING_ASSUMPTION_MSGF(SUCCEEDED(__hr2), ("HRESULT failed.\n Expected success.\n Actual=0x%x\n", __hr2)); \
}
#ifdef _DEBUG_IMPL
// Ex usage:
// SIMPLIFYING_ASSUMPTION_MSG(SUCCEEDED(hr), "It failed!");
#define SIMPLIFYING_ASSUMPTION_MSG(_condition, _message) \
ASSERT_CHECK(_condition, _message, "Unhandled special case detected")
// use a formatted string. Ex usage:
// SIMPLIFYING_ASSUMPTION_MSGF(SUCCEEDED(hr), ("Woah it failed! 0x%08x", hr));
#define SIMPLIFYING_ASSUMPTION_MSGF(_condition, args) \
ASSERT_CHECKF(_condition, "Unhandled special case detected", args)
#else // !_DEBUG_IMPL
#define SIMPLIFYING_ASSUMPTION_MSG(_condition, _message) do { } while (0)
#define SIMPLIFYING_ASSUMPTION_MSGF(_condition, args) do { } while (0)
#endif // !_DEBUG_IMPL
//--------------------------------------------------------------------------------
// COMPILER_ASSUME_MSG is a statement that tells the compiler to assume the
// condition is true. In a checked build these turn into asserts;
// in a free build they are passed through to the compiler to use in optimization.
//--------------------------------------------------------------------------------
#if defined(_PREFAST_) || defined(_PREFIX_) || defined(__clang_analyzer__)
#define COMPILER_ASSUME_MSG(_condition, _message) if (!(_condition)) __UNREACHABLE();
#define COMPILER_ASSUME_MSGF(_condition, args) if (!(_condition)) __UNREACHABLE();
#else
#if defined(DACCESS_COMPILE)
#define COMPILER_ASSUME_MSG(_condition, _message) do { } while (0)
#define COMPILER_ASSUME_MSGF(_condition, args) do { } while (0)
#else
#if defined(_DEBUG)
#define COMPILER_ASSUME_MSG(_condition, _message) \
ASSERT_CHECK(_condition, _message, "Compiler optimization assumption invalid")
#define COMPILER_ASSUME_MSGF(_condition, args) \
ASSERT_CHECKF(_condition, "Compiler optimization assumption invalid", args)
#else
#define COMPILER_ASSUME_MSG(_condition, _message) __assume(_condition)
#define COMPILER_ASSUME_MSGF(_condition, args) __assume(_condition)
#endif // _DEBUG
#endif // DACCESS_COMPILE
#endif // _PREFAST_ || _PREFIX_
#define COMPILER_ASSUME(_condition) \
COMPILER_ASSUME_MSG(_condition, "")
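// Ex usage (illustrative; 'index' and 'count' are hypothetical). This asserts in
// a checked build and becomes an __assume optimization hint in a free build:
// COMPILER_ASSUME_MSG(index < count, "index was range-checked by the caller");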
//--------------------------------------------------------------------------------
// PREFIX_ASSUME_MSG and PREFAST_ASSUME_MSG are just another name
// for COMPILER_ASSUME_MSG
// In a checked build these turn into asserts; in a free build
// they are passed through to the compiler to use in optimization;
// via an __assume(_condition) optimization hint.
//--------------------------------------------------------------------------------
#define PREFIX_ASSUME_MSG(_condition, _message) \
COMPILER_ASSUME_MSG(_condition, _message)
#define PREFIX_ASSUME_MSGF(_condition, args) \
COMPILER_ASSUME_MSGF(_condition, args)
#define PREFIX_ASSUME(_condition) \
COMPILER_ASSUME_MSG(_condition, "")
#define PREFAST_ASSUME_MSG(_condition, _message) \
COMPILER_ASSUME_MSG(_condition, _message)
#define PREFAST_ASSUME_MSGF(_condition, args) \
COMPILER_ASSUME_MSGF(_condition, args)
#define PREFAST_ASSUME(_condition) \
COMPILER_ASSUME_MSG(_condition, "")
//--------------------------------------------------------------------------------
// UNREACHABLE points are locations in the code which should not be able to be
// reached under any circumstances (e.g. a default in a switch which is supposed to
// cover all cases.). This macro tells the compiler this, and also embeds a check
// to make sure it is always true.
//--------------------------------------------------------------------------------
#define UNREACHABLE() \
UNREACHABLE_MSG("")
#ifdef __llvm__
// LLVM complains if a function does not return what it says.
#define UNREACHABLE_RET() do { UNREACHABLE(); return 0; } while (0)
#define UNREACHABLE_MSG_RET(_message) UNREACHABLE_MSG(_message); return 0;
#else // __llvm__
#define UNREACHABLE_RET() UNREACHABLE()
#define UNREACHABLE_MSG_RET(_message) UNREACHABLE_MSG(_message)
#endif // __llvm__ else
#ifdef _DEBUG_IMPL
// Note that the "do { } while (0)" syntax trick here doesn't work, as the compiler
// gives an error that the while(0) is unreachable code
#define UNREACHABLE_MSG(_message) \
{ \
CHECK _check; \
_check.Setup(_message, "<unreachable>", __FILE__, __LINE__); \
_check.Trigger("Reached the \"unreachable\""); \
} __UNREACHABLE()
#else
#define UNREACHABLE_MSG(_message) __UNREACHABLE()
#endif
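// Ex usage (illustrative; the switch and its cases are hypothetical):
// switch (kind)
// {
// case KIND_A: return HandleA();
// case KIND_B: return HandleB();
// default:
//     UNREACHABLE_MSG("all kinds should be handled above");
// }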
//--------------------------------------------------------------------------------
// STRESS_CHECK represents a check which is included in a free build
// @todo: behavior on trigger
//
// Note that the condition may either be a raw boolean expression or a CHECK result
// returned from a Check routine.
//
// Since Retail builds don't allow formatted checks, there's no STRESS_CHECK_MSGF.
//--------------------------------------------------------------------------------
#if CHECK_STRESS
#define STRESS_CHECK(_condition, _message) \
ASSERT_CHECK(_condition, _message, "Stress Assertion Failure")
#else
#define STRESS_CHECK(_condition, _message) do { } while (0)
#endif
//--------------------------------------------------------------------------------
// CONTRACT_CHECK is used to put contracts on Check functions. Note that it does
// not support postconditions.
//--------------------------------------------------------------------------------
#define CONTRACT_CHECK CONTRACTL
#define CONTRACT_CHECK_END CONTRACTL_END
//--------------------------------------------------------------------------------
// CCHECK is used for Check functions which may fail due to out of memory
// or other transient failures. These failures should be ignored when doing
// assertions, but they cannot be ignored when the Check function is used in
// normal code.
// @todo: really crufty to have 2 sets of CHECK macros
//--------------------------------------------------------------------------------
#ifdef _DEBUG
#define CCHECK_START \
{ \
BOOL ___exception = FALSE; \
BOOL ___transient = FALSE; \
CHECK ___result = CHECK::OK(); \
EX_TRY {
#define CCHECK_END \
} EX_CATCH { \
if (___result.IsInAssert()) \
{ \
___exception = TRUE; \
___transient = GET_EXCEPTION()->IsTransient(); \
} \
else \
EX_RETHROW; \
} EX_END_CATCH(RethrowTerminalExceptions); \
\
if (___exception) \
{ \
if (___transient) \
CHECK_OK; \
else \
CHECK_FAIL("Nontransient exception occurred during check"); \
} \
CHECK(___result); \
}
#define CRETURN_RESULT(r) ___result = r
#define CCHECK_MSG(_condition, _message) \
CHECK_MSG_EX(_condition, _message, CRETURN_RESULT)
#define CCHECK(_condition) \
CCHECK_MSG(_condition, "")
#define CCHECK_MSGF(_condition, _args) \
CCHECK_MSG(_condition, CHECK::FormatMessage _args)
#define CCHECK_FAIL(_message) \
CCHECK_MSG(FALSE, _message); UNREACHABLE()
#define CCHECK_FAILF(_args) \
CCHECK_MSGF(FALSE, _args); UNREACHABLE()
#else // _DEBUG
#define CCHECK_START
#define CCHECK_END
#define CCHECK CHECK
#define CCHECK_MSG CHECK_MSG
#define CCHECK_MSGF CHECK_MSGF
#define CCHECK_FAIL CHECK_FAIL
#define CCHECK_FAILF CHECK_FAILF
#endif
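// Ex usage (illustrative sketch; CheckMetadata and VerifyTables are hypothetical -
// the CCHECK macros guard a region whose code may throw, e.g. on OOM):
//
// CHECK CheckMetadata(Module *pModule)
// {
//     CCHECK_START
//         // Throws land in the EX_CATCH above: transient failures pass the
//         // check, nontransient ones fail it.
//         CCHECK(pModule->VerifyTables());
//     CCHECK_END
//     CHECK_OK;
// }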
//--------------------------------------------------------------------------------
// Common base level checks
//--------------------------------------------------------------------------------
CHECK CheckAlignment(UINT alignment);
CHECK CheckAligned(UINT value, UINT alignment);
#if defined(_MSC_VER)
CHECK CheckAligned(ULONG value, UINT alignment);
#endif
CHECK CheckAligned(UINT64 value, UINT alignment);
CHECK CheckAligned(const void *address, UINT alignment);
CHECK CheckOverflow(UINT value1, UINT value2);
#if defined(_MSC_VER)
CHECK CheckOverflow(ULONG value1, ULONG value2);
#endif
CHECK CheckOverflow(UINT64 value1, UINT64 value2);
CHECK CheckOverflow(PTR_CVOID address, UINT offset);
#if defined(_MSC_VER)
CHECK CheckOverflow(const void *address, ULONG offset);
#endif
CHECK CheckOverflow(const void *address, UINT64 offset);
CHECK CheckUnderflow(UINT value1, UINT value2);
#if defined(_MSC_VER)
CHECK CheckUnderflow(ULONG value1, ULONG value2);
#endif
CHECK CheckUnderflow(UINT64 value1, UINT64 value2);
CHECK CheckUnderflow(const void *address, UINT offset);
#if defined(_MSC_VER)
CHECK CheckUnderflow(const void *address, ULONG offset);
#endif
CHECK CheckUnderflow(const void *address, UINT64 offset);
CHECK CheckUnderflow(const void *address, void *address2);
CHECK CheckZeroedMemory(const void *memory, SIZE_T size);
// These include overflow checks
CHECK CheckBounds(const void *rangeBase, UINT32 rangeSize, UINT32 offset);
CHECK CheckBounds(const void *rangeBase, UINT32 rangeSize, UINT32 offset, UINT32 size);
void WINAPI ReleaseCheckTls(LPVOID pTlsData);
// ================================================================================
// Inline definitions
// ================================================================================
#include "check.inl"
#endif // CHECK_H_
| // Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
// ---------------------------------------------------------------------------
// Check.h
//
//
// Assertion checking infrastructure
// ---------------------------------------------------------------------------
#ifndef CHECK_H_
#define CHECK_H_
#include "static_assert.h"
#include "daccess.h"
#include "unreachable.h"
#ifdef _DEBUG
#ifdef _MSC_VER
// Make sure we can recurse deep enough for FORCEINLINE
#pragma inline_recursion(on)
#pragma inline_depth(16)
#pragma warning(disable:4714)
#endif // _MSC_VER
#if !defined(DISABLE_CONTRACTS)
#define CHECK_INVARIANTS 1
#define VALIDATE_OBJECTS 1
#endif
#endif // _DEBUG
#if defined(_DEBUG) && !defined(DACCESS_COMPILE)
#define _DEBUG_IMPL 1
#endif
#ifdef _DEBUG
#define DEBUG_ARG(x) , x
#else
#define DEBUG_ARG(x)
#endif
#define CHECK_STRESS 1
//--------------------------------------------------------------------------------
// A CHECK is an object which encapsulates a potential assertion
// failure. It not only contains the result of the check, but if the check fails,
// also records information about the condition and call site.
//
// CHECK also serves as a holder to prevent recursive CHECKS. These can be
// particularly common when putting preconditions inside predicates, especially
// routines called by an invariant.
//
// Note that using CHECK is perfectly efficient in a free build - the CHECK becomes
// a simple string constant pointer (typically either NULL or (LPCSTR)1, although some
// check failures may include messages)
//
// NOTE: you should NEVER use the CHECK class API directly - use the macros below.
//--------------------------------------------------------------------------------
class SString;
class CHECK
{
protected:
// On retail, this is a pointer to a string literal, null or (LPCSTR)1.
// On debug, this is a pointer to dynamically allocated memory - that
// lets us have formatted strings in debug builds.
LPCSTR m_message;
#ifdef _DEBUG
LPCSTR m_condition;
LPCSTR m_file;
INT m_line;
LONG *m_pCount;
// Keep leakage counters.
static size_t s_cLeakedBytes;
static size_t s_cNumFailures;
static thread_local LONG t_count;
#endif
static BOOL s_neverEnforceAsserts;
public: // !!! NOTE: Called from macros only!!!
// If we are not in a check, return TRUE and PushCheck; otherwise return FALSE
BOOL EnterAssert();
// Pops check count
void LeaveAssert();
// Just return if we are in a check
BOOL IsInAssert();
// Should we skip enforcing asserts
static BOOL EnforceAssert();
static BOOL EnforceAssert_StaticCheckOnly();
static void ResetAssert();
#ifdef _MSC_VER
#pragma warning(push)
#pragma warning(disable:4702) // Disable bogus unreachable code warning
#endif // _MSC_VER
CHECK() : m_message(NULL)
#ifdef _DEBUG
, m_condition (NULL)
, m_file(NULL)
              , m_line(0)
, m_pCount(NULL)
#endif
{}
#ifdef _MSC_VER
#pragma warning(pop)
#endif // _MSC_VER
// Fail records the result of a condition check. Can take either a
// boolean value or another check result
BOOL Fail(BOOL condition);
BOOL Fail(const CHECK &check);
// Setup records context info after a failure.
void Setup(LPCSTR message DEBUG_ARG(LPCSTR condition) DEBUG_ARG(LPCSTR file) DEBUG_ARG(INT line));
static LPCSTR FormatMessage(LPCSTR messageFormat, ...);
// Trigger triggers the actual check failure. The trigger may provide a reason
// to include in the failure message.
void Trigger(LPCSTR reason);
// Finally, convert to a BOOL to allow just testing the result of a Check function
operator BOOL();
BOOL operator!();
CHECK &operator()() { return *this; }
static inline const CHECK OK() {
return CHECK();
}
static void SetAssertEnforcement(BOOL value);
private:
#ifdef _DEBUG
static LPCSTR AllocateDynamicMessage(const SString &s);
#endif
};
//--------------------------------------------------------------------------------
// These CHECK macros are the correct way to propagate an assertion. These
// routines are designed for use inside "Check" routines. Such routines may
// be Invariants, Validate routines, or any other assertional predicates.
//
// A Check routine should return a value of type CHECK.
//
// It should consist of multiple CHECK or CHECK_MSG statements (along with appropriate
// control flow) and should end with CHECK_OK() if all other checks pass.
//
// It may contain a CONTRACT_CHECK contract, but this is only appropriate if the
// check is used for non-assertional purposes (otherwise the contract will never execute).
// Note that CONTRACT_CHECK contracts do not support postconditions.
//
// CHECK: Check the given condition, return a CHECK failure if FALSE
// CHECK_MSG: Same, but include a message parameter if the check fails
// CHECK_OK: Return a successful check value;
//--------------------------------------------------------------------------------
#ifdef _DEBUG
#define DEBUG_ONLY_MESSAGE(msg) msg
#else
// On retail, we don't want to add a bunch of string literals to the image,
// so we just use the same one everywhere.
#define DEBUG_ONLY_MESSAGE(msg) ((LPCSTR)1)
#endif
#define CHECK_MSG_EX(_condition, _message, _RESULT) \
do \
{ \
CHECK _check; \
if (_check.Fail(_condition)) \
{ \
ENTER_DEBUG_ONLY_CODE; \
_check.Setup(DEBUG_ONLY_MESSAGE(_message) \
DEBUG_ARG(#_condition) \
DEBUG_ARG(__FILE__) \
DEBUG_ARG(__LINE__)); \
_RESULT(_check); \
LEAVE_DEBUG_ONLY_CODE; \
} \
} while (0)
#define RETURN_RESULT(r) return r
#define CHECK_MSG(_condition, _message) \
CHECK_MSG_EX(_condition, _message, RETURN_RESULT)
#define CHECK(_condition) \
CHECK_MSG(_condition, "")
#define CHECK_MSGF(_condition, _args) \
CHECK_MSG(_condition, CHECK::FormatMessage _args)
#define CHECK_FAIL(_message) \
CHECK_MSG(FALSE, _message); UNREACHABLE()
#define CHECK_FAILF(_args) \
CHECK_MSGF(FALSE, _args); UNREACHABLE()
#define CHECK_OK \
return CHECK::OK()
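// Ex usage (illustrative sketch; CheckBuffer and its arguments are hypothetical,
// not part of this header):
//
// CHECK CheckBuffer(const void *start, UINT64 size)
// {
//     CHECK_MSG(start != NULL, "Buffer must not be null");
//     CHECK(CheckOverflow(start, size));
//     if (size == 0)
//     {
//         CHECK_FAIL("Empty buffers are not allowed");
//     }
//     CHECK_OK;
// }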
//--------------------------------------------------------------------------------
// ASSERT_CHECK is the proper way to trigger a check result. If the CHECK
// has failed, the diagnostic assertion routines will fire with appropriate
// context information.
//
// Note that the condition may either be a raw boolean expression or a CHECK result
// returned from a Check routine.
//
// Recursion note: ASSERT_CHECKs are only performed if there is no current check in
// progress.
//--------------------------------------------------------------------------------
#ifndef ENTER_DEBUG_ONLY_CODE
#define ENTER_DEBUG_ONLY_CODE
#endif
#ifndef LEAVE_DEBUG_ONLY_CODE
#define LEAVE_DEBUG_ONLY_CODE
#endif
#define ASSERT_CHECK(_condition, _message, _reason) \
do \
{ \
CHECK _check; \
if (_check.EnterAssert()) \
{ \
ENTER_DEBUG_ONLY_CODE; \
if (_check.Fail(_condition)) \
{ \
_check.Setup(_message \
DEBUG_ARG(#_condition) \
DEBUG_ARG(__FILE__) \
DEBUG_ARG(__LINE__)); \
_check.Trigger(_reason); \
} \
LEAVE_DEBUG_ONLY_CODE; \
_check.LeaveAssert(); \
} \
} while (0)
// ex: ASSERT_CHECKF(1+2==4, "my reason", ("Woah %d", 1+3));
// note the double parentheses: the 'args' param below will include one pair of parens.
#define ASSERT_CHECKF(_condition, _reason, _args) \
ASSERT_CHECK(_condition, CHECK::FormatMessage _args, _reason)
//--------------------------------------------------------------------------------
// INVARIANTS are descriptions of conditions which are always true at well defined
// points of execution. Invariants may be checked by the caller or callee at any
// time as paranoia requires.
//
// There are really two flavors of invariant. The "public invariant" describes
// to the caller invariant behavior about the abstraction which is visible from
// the public API (and of course it should be expressible in that public API).
//
// The "internal invariant" (or representation invariant), on the other hand, is
// a description of the private implementation of the abstraction, which may examine
// internal state of the abstraction or use private entry points.
//
// Classes with invariants should introduce methods called
// CHECK Invariant();
// and
// CHECK InternalInvariant();
// to allow invariant checks.
//--------------------------------------------------------------------------------
#if CHECK_INVARIANTS
template <typename TYPENAME>
CHECK CheckInvariant(TYPENAME &obj)
{
#if defined(_MSC_VER) || defined(__llvm__)
__if_exists(TYPENAME::Invariant)
{
CHECK(obj.Invariant());
}
__if_exists(TYPENAME::InternalInvariant)
{
CHECK(obj.InternalInvariant());
}
#endif
CHECK_OK;
}
#define CHECK_INVARIANT(o) \
ASSERT_CHECK(CheckInvariant(o), NULL, "Invariant failure")
#else
#define CHECK_INVARIANT(o) do { } while (0)
#endif
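// Ex usage (illustrative sketch; Range and its fields are hypothetical):
//
// CHECK Range::Invariant()
// {
//     // Publicly visible property: the range is well formed.
//     CHECK(m_start <= m_end);
//     CHECK_OK;
// }
//
// // ... later, wherever paranoia requires:
// CHECK_INVARIANT(*pRange);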
//--------------------------------------------------------------------------------
// VALIDATE is a check to be made on an object type which identifies a pointer as
// a valid instance of the object, by calling CheckPointer on it. Normally a null
// pointer is treated as an error; VALIDATE_NULL (or CheckPointer(o, NULL_OK))
// may be used when a null pointer is acceptable.
//
// In addition to the null/non-null check, a type may provide a specific Check method
// for more sophisticated identification. In general, the Check method
// should answer the question
// "Is this a valid instance of its declared compile-time type?". For instance, if
// runtime type identification were supported for the type, it should be invoked here.
//
// Note that CheckPointer will also check the invariant(s) if appropriate, so the
// invariants should NOT be explicitly invoked from the Check method.
//--------------------------------------------------------------------------------
enum IsNullOK
{
NULL_NOT_OK = 0,
NULL_OK = 1
};
#if CHECK_INVARIANTS
template <typename TYPENAME>
CHECK CheckPointer(TYPENAME *o, IsNullOK ok = NULL_NOT_OK)
{
if (o == NULL)
{
CHECK_MSG(ok, "Illegal null pointer");
}
else
{
#if defined(_MSC_VER) || defined(__llvm__)
__if_exists(TYPENAME::Check)
{
CHECK(o->Check());
}
#endif
}
CHECK_OK;
}
template <typename TYPENAME>
CHECK CheckValue(TYPENAME &val)
{
#if defined(_MSC_VER) || defined(__llvm__)
__if_exists(TYPENAME::Check)
{
CHECK(val.Check());
}
#endif
CHECK(CheckInvariant(val));
CHECK_OK;
}
#else // CHECK_INVARIANTS
#ifdef _DEBUG_IMPL
// Don't define these functions as no-ops for the non-debug
// build, as that may hide important checks
template <typename TYPENAME>
CHECK CheckPointer(TYPENAME *o, IsNullOK ok = NULL_NOT_OK)
{
if (o == NULL)
{
CHECK_MSG(ok, "Illegal null pointer");
}
CHECK_OK;
}
template <typename TYPENAME>
CHECK CheckValue(TYPENAME &val)
{
CHECK_OK;
}
#endif
#endif // CHECK_INVARIANTS
#if VALIDATE_OBJECTS
#define VALIDATE(o) \
ASSERT_CHECK(CheckPointer(o), "Validation failure")
#define VALIDATE_NULL(o) \
ASSERT_CHECK(CheckPointer(o, NULL_OK), "Validation failure")
#else
#define VALIDATE(o) do { } while (0)
#define VALIDATE_NULL(o) do { } while (0)
#endif
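// Ex usage (illustrative sketch; Widget and its signature field are hypothetical):
//
// CHECK Widget::Check()
// {
//     // "Is this a valid instance of its declared compile-time type?"
//     CHECK(m_signature == WIDGET_SIGNATURE);
//     CHECK_OK;
// }
//
// void ProcessWidget(Widget *pWidget, Widget *pOptionalWidget)
// {
//     VALIDATE(pWidget);              // null is an error here
//     VALIDATE_NULL(pOptionalWidget); // null is acceptable here
//     ...
// }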
//--------------------------------------------------------------------------------
// CONSISTENCY_CHECKS are ad-hoc assertions about the expected state of the program
// at a given time. A failure in one of these indicates a bug in the code.
//
// Note that the condition may either be a raw boolean expression or a CHECK result
// returned from a Check routine.
//--------------------------------------------------------------------------------
#define CONSISTENCY_CHECK(_condition) \
CONSISTENCY_CHECK_MSG(_condition, "")
#ifdef _DEBUG_IMPL
#define CONSISTENCY_CHECK_MSG(_condition, _message) \
ASSERT_CHECK(_condition, _message, "Consistency check failed")
#define CONSISTENCY_CHECK_MSGF(_condition, args) \
ASSERT_CHECKF(_condition, "Consistency check failed", args)
#else
#define CONSISTENCY_CHECK_MSG(_condition, _message) do { } while (0)
#define CONSISTENCY_CHECK_MSGF(_condition, args) do { } while (0)
#endif
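// Ex usage (illustrative; the names are hypothetical):
// CONSISTENCY_CHECK(pHeader != NULL);
// CONSISTENCY_CHECK_MSGF(m_refCount > 0, ("Bad ref count %d", m_refCount));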
//--------------------------------------------------------------------------------
// SIMPLIFYING_ASSUMPTIONS are workarounds which are placed in the code to allow progress
// to be made in the case of difficult corner cases. These should NOT be left in the
// code; they are really just markers of things which need to be fixed.
//
// Note that the condition may either be a raw boolean expression or a CHECK result
// returned from a Check routine.
//--------------------------------------------------------------------------------
// Ex usage:
// SIMPLIFYING_ASSUMPTION(SomeExpression());
#define SIMPLIFYING_ASSUMPTION(_condition) \
SIMPLIFYING_ASSUMPTION_MSG(_condition, "")
// Helper for HRs. Will provide formatted message showing the failure code.
#define SIMPLIFYING_ASSUMPTION_SUCCEEDED(__hr) \
{ \
HRESULT __hr2 = (__hr); \
(void)__hr2; \
SIMPLIFYING_ASSUMPTION_MSGF(SUCCEEDED(__hr2), ("HRESULT failed.\n Expected success.\n Actual=0x%x\n", __hr2)); \
}
#ifdef _DEBUG_IMPL
// Ex usage:
// SIMPLIFYING_ASSUMPTION_MSG(SUCCEEDED(hr), "It failed!");
#define SIMPLIFYING_ASSUMPTION_MSG(_condition, _message) \
ASSERT_CHECK(_condition, _message, "Unhandled special case detected")
// use a formatted string. Ex usage:
// SIMPLIFYING_ASSUMPTION_MSGF(SUCCEEDED(hr), ("Woah it failed! 0x%08x", hr));
#define SIMPLIFYING_ASSUMPTION_MSGF(_condition, args) \
ASSERT_CHECKF(_condition, "Unhandled special case detected", args)
#else // !_DEBUG_IMPL
#define SIMPLIFYING_ASSUMPTION_MSG(_condition, _message) do { } while (0)
#define SIMPLIFYING_ASSUMPTION_MSGF(_condition, args) do { } while (0)
#endif // !_DEBUG_IMPL
//--------------------------------------------------------------------------------
// COMPILER_ASSUME_MSG is a statement that tells the compiler to assume the
// condition is true. In a checked build these turn into asserts;
// in a free build they are passed through to the compiler to use in optimization.
//--------------------------------------------------------------------------------
#if defined(_PREFAST_) || defined(_PREFIX_) || defined(__clang_analyzer__)
#define COMPILER_ASSUME_MSG(_condition, _message) if (!(_condition)) __UNREACHABLE();
#define COMPILER_ASSUME_MSGF(_condition, args) if (!(_condition)) __UNREACHABLE();
#else
#if defined(DACCESS_COMPILE)
#define COMPILER_ASSUME_MSG(_condition, _message) do { } while (0)
#define COMPILER_ASSUME_MSGF(_condition, args) do { } while (0)
#else
#if defined(_DEBUG)
#define COMPILER_ASSUME_MSG(_condition, _message) \
ASSERT_CHECK(_condition, _message, "Compiler optimization assumption invalid")
#define COMPILER_ASSUME_MSGF(_condition, args) \
ASSERT_CHECKF(_condition, "Compiler optimization assumption invalid", args)
#else
#define COMPILER_ASSUME_MSG(_condition, _message) __assume(_condition)
#define COMPILER_ASSUME_MSGF(_condition, args) __assume(_condition)
#endif // _DEBUG
#endif // DACCESS_COMPILE
#endif // _PREFAST_ || _PREFIX_
#define COMPILER_ASSUME(_condition) \
COMPILER_ASSUME_MSG(_condition, "")
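// Ex usage (illustrative; 'index' and 'count' are hypothetical). This asserts in
// a checked build and becomes an __assume optimization hint in a free build:
// COMPILER_ASSUME_MSG(index < count, "index was range-checked by the caller");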
//--------------------------------------------------------------------------------
// PREFIX_ASSUME_MSG and PREFAST_ASSUME_MSG are just another name
// for COMPILER_ASSUME_MSG
// In a checked build these turn into asserts; in a free build
// they are passed through to the compiler to use in optimization;
// via an __assume(_condition) optimization hint.
//--------------------------------------------------------------------------------
#define PREFIX_ASSUME_MSG(_condition, _message) \
COMPILER_ASSUME_MSG(_condition, _message)
#define PREFIX_ASSUME_MSGF(_condition, args) \
COMPILER_ASSUME_MSGF(_condition, args)
#define PREFIX_ASSUME(_condition) \
COMPILER_ASSUME_MSG(_condition, "")
#define PREFAST_ASSUME_MSG(_condition, _message) \
COMPILER_ASSUME_MSG(_condition, _message)
#define PREFAST_ASSUME_MSGF(_condition, args) \
COMPILER_ASSUME_MSGF(_condition, args)
#define PREFAST_ASSUME(_condition) \
COMPILER_ASSUME_MSG(_condition, "")
//--------------------------------------------------------------------------------
// UNREACHABLE points are locations in the code which should not be able to be
// reached under any circumstances (e.g. a default in a switch which is supposed to
// cover all cases.). This macro tells the compiler this, and also embeds a check
// to make sure it is always true.
//--------------------------------------------------------------------------------
#define UNREACHABLE() \
UNREACHABLE_MSG("")
#ifdef __llvm__
// LLVM complains if a function does not return what it says.
#define UNREACHABLE_RET() do { UNREACHABLE(); return 0; } while (0)
#define UNREACHABLE_MSG_RET(_message) UNREACHABLE_MSG(_message); return 0;
#else // __llvm__
#define UNREACHABLE_RET() UNREACHABLE()
#define UNREACHABLE_MSG_RET(_message) UNREACHABLE_MSG(_message)
#endif // __llvm__ else
#ifdef _DEBUG_IMPL
// Note that the "do { } while (0)" syntax trick here doesn't work, as the compiler
// gives an error that the while(0) is unreachable code
#define UNREACHABLE_MSG(_message) \
{ \
CHECK _check; \
_check.Setup(_message, "<unreachable>", __FILE__, __LINE__); \
_check.Trigger("Reached the \"unreachable\""); \
} __UNREACHABLE()
#else
#define UNREACHABLE_MSG(_message) __UNREACHABLE()
#endif
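// Ex usage (illustrative; the switch and its cases are hypothetical):
// switch (kind)
// {
// case KIND_A: return HandleA();
// case KIND_B: return HandleB();
// default:
//     UNREACHABLE_MSG("all kinds should be handled above");
// }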
//--------------------------------------------------------------------------------
// STRESS_CHECK represents a check which is included in a free build
// @todo: behavior on trigger
//
// Note that the condition may either be a raw boolean expression or a CHECK result
// returned from a Check routine.
//
// Since Retail builds don't allow formatted checks, there's no STRESS_CHECK_MSGF.
//--------------------------------------------------------------------------------
#if CHECK_STRESS
#define STRESS_CHECK(_condition, _message) \
ASSERT_CHECK(_condition, _message, "Stress Assertion Failure")
#else
#define STRESS_CHECK(_condition, _message) do { } while (0)
#endif
//--------------------------------------------------------------------------------
// CONTRACT_CHECK is used to put contracts on Check functions. Note that it does
// not support postconditions.
//--------------------------------------------------------------------------------
#define CONTRACT_CHECK CONTRACTL
#define CONTRACT_CHECK_END CONTRACTL_END
//--------------------------------------------------------------------------------
// CCHECK is used for Check functions which may fail due to out of memory
// or other transient failures. These failures should be ignored when doing
// assertions, but they cannot be ignored when the Check function is used in
// normal code.
// @todo: really crufty to have 2 sets of CHECK macros
//--------------------------------------------------------------------------------
#ifdef _DEBUG
#define CCHECK_START \
{ \
BOOL ___exception = FALSE; \
BOOL ___transient = FALSE; \
CHECK ___result = CHECK::OK(); \
EX_TRY {
#define CCHECK_END \
} EX_CATCH { \
if (___result.IsInAssert()) \
{ \
___exception = TRUE; \
___transient = GET_EXCEPTION()->IsTransient(); \
} \
else \
EX_RETHROW; \
} EX_END_CATCH(RethrowTerminalExceptions); \
\
if (___exception) \
{ \
if (___transient) \
CHECK_OK; \
else \
CHECK_FAIL("Nontransient exception occurred during check"); \
} \
CHECK(___result); \
}
#define CRETURN_RESULT(r) ___result = r
#define CCHECK_MSG(_condition, _message) \
CHECK_MSG_EX(_condition, _message, CRETURN_RESULT)
#define CCHECK(_condition) \
CCHECK_MSG(_condition, "")
#define CCHECK_MSGF(_condition, _args) \
CCHECK_MSG(_condition, CHECK::FormatMessage _args)
#define CCHECK_FAIL(_message) \
CCHECK_MSG(FALSE, _message); UNREACHABLE()
#define CCHECK_FAILF(_args) \
CCHECK_MSGF(FALSE, _args); UNREACHABLE()
#else // _DEBUG
#define CCHECK_START
#define CCHECK_END
#define CCHECK CHECK
#define CCHECK_MSG CHECK_MSG
#define CCHECK_MSGF CHECK_MSGF
#define CCHECK_FAIL CHECK_FAIL
#define CCHECK_FAILF CHECK_FAILF
#endif
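// Ex usage (illustrative sketch; CheckMetadata and VerifyTables are hypothetical -
// the CCHECK macros guard a region whose code may throw, e.g. on OOM):
//
// CHECK CheckMetadata(Module *pModule)
// {
//     CCHECK_START
//         // Throws land in the EX_CATCH above: transient failures pass the
//         // check, nontransient ones fail it.
//         CCHECK(pModule->VerifyTables());
//     CCHECK_END
//     CHECK_OK;
// }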
//--------------------------------------------------------------------------------
// Common base level checks
//--------------------------------------------------------------------------------
CHECK CheckAlignment(UINT alignment);
CHECK CheckAligned(UINT value, UINT alignment);
#if defined(_MSC_VER)
CHECK CheckAligned(ULONG value, UINT alignment);
#endif
CHECK CheckAligned(UINT64 value, UINT alignment);
CHECK CheckAligned(const void *address, UINT alignment);
CHECK CheckOverflow(UINT value1, UINT value2);
#if defined(_MSC_VER)
CHECK CheckOverflow(ULONG value1, ULONG value2);
#endif
CHECK CheckOverflow(UINT64 value1, UINT64 value2);
CHECK CheckOverflow(PTR_CVOID address, UINT offset);
#if defined(_MSC_VER)
CHECK CheckOverflow(const void *address, ULONG offset);
#endif
CHECK CheckOverflow(const void *address, UINT64 offset);
CHECK CheckUnderflow(UINT value1, UINT value2);
#if defined(_MSC_VER)
CHECK CheckUnderflow(ULONG value1, ULONG value2);
#endif
CHECK CheckUnderflow(UINT64 value1, UINT64 value2);
CHECK CheckUnderflow(const void *address, UINT offset);
#if defined(_MSC_VER)
CHECK CheckUnderflow(const void *address, ULONG offset);
#endif
CHECK CheckUnderflow(const void *address, UINT64 offset);
CHECK CheckUnderflow(const void *address, void *address2);
CHECK CheckZeroedMemory(const void *memory, SIZE_T size);
// These include overflow checks
CHECK CheckBounds(const void *rangeBase, UINT32 rangeSize, UINT32 offset);
CHECK CheckBounds(const void *rangeBase, UINT32 rangeSize, UINT32 offset, UINT32 size);
void WINAPI ReleaseCheckTls(LPVOID pTlsData);
// ================================================================================
// Inline definitions
// ================================================================================
#include "check.inl"
#endif // CHECK_H_
| -1 |
dotnet/runtime | 66,211 | [mono] Remove SkipVerification support from the runtime | CAS support was removed in .NET Core. This allows us removing a bunch of code that is unused, e.g. the dependency on libiconv. | akoeplinger | 2022-03-04T19:47:04Z | 2022-03-06T13:44:33Z | b463b1630dbf1be5b013208a9fa73e1ecd6c774c | be629f49a350d526de2c65981294734cee420b90 | [mono] Remove SkipVerification support from the runtime. CAS support was removed in .NET Core. This allows us removing a bunch of code that is unused, e.g. the dependency on libiconv. | ./src/mono/mono/mini/mini-ppc.c | /**
* \file
* PowerPC backend for the Mono code generator
*
* Authors:
* Paolo Molaro ([email protected])
* Dietmar Maurer ([email protected])
* Andreas Faerber <[email protected]>
*
* (C) 2003 Ximian, Inc.
* (C) 2007-2008 Andreas Faerber
*/
#include "mini.h"
#include <string.h>
#include <mono/metadata/abi-details.h>
#include <mono/metadata/appdomain.h>
#include <mono/metadata/debug-helpers.h>
#include <mono/utils/mono-proclib.h>
#include <mono/utils/mono-mmap.h>
#include <mono/utils/mono-hwcap.h>
#include <mono/utils/unlocked.h>
#include "mono/utils/mono-tls-inline.h"
#include "mini-ppc.h"
#ifdef TARGET_POWERPC64
#include "cpu-ppc64.h"
#else
#include "cpu-ppc.h"
#endif
#include "ir-emit.h"
#include "aot-runtime.h"
#include "mini-runtime.h"
#ifdef __APPLE__
#include <sys/sysctl.h>
#endif
#ifdef __linux__
#include <unistd.h>
#endif
#ifdef _AIX
#include <sys/systemcfg.h>
#endif
static GENERATE_TRY_GET_CLASS_WITH_CACHE (math, "System", "Math")
static GENERATE_TRY_GET_CLASS_WITH_CACHE (mathf, "System", "MathF")
#define FORCE_INDIR_CALL 1
enum {
TLS_MODE_DETECT,
TLS_MODE_FAILED,
TLS_MODE_LTHREADS,
TLS_MODE_NPTL,
TLS_MODE_DARWIN_G4,
TLS_MODE_DARWIN_G5
};
/* cpu_hw_caps contains the flags defined below */
static int cpu_hw_caps = 0;
static int cachelinesize = 0;
static int cachelineinc = 0;
enum {
PPC_ICACHE_SNOOP = 1 << 0,
PPC_MULTIPLE_LS_UNITS = 1 << 1,
PPC_SMP_CAPABLE = 1 << 2,
PPC_ISA_2X = 1 << 3,
PPC_ISA_64 = 1 << 4,
PPC_MOVE_FPR_GPR = 1 << 5,
PPC_ISA_2_03 = 1 << 6,
PPC_HW_CAP_END
};
#define BREAKPOINT_SIZE (PPC_LOAD_SEQUENCE_LENGTH + 4)
/* This mutex protects architecture specific caches */
#define mono_mini_arch_lock() mono_os_mutex_lock (&mini_arch_mutex)
#define mono_mini_arch_unlock() mono_os_mutex_unlock (&mini_arch_mutex)
static mono_mutex_t mini_arch_mutex;
/*
* The code generated for sequence points reads from this location, which is
* made read-only when single stepping is enabled.
*/
static gpointer ss_trigger_page;
/* Enabled breakpoints read from this trigger page */
static gpointer bp_trigger_page;
#define MONO_EMIT_NEW_LOAD_R8(cfg,dr,addr) do { \
MonoInst *inst; \
MONO_INST_NEW ((cfg), (inst), OP_R8CONST); \
inst->type = STACK_R8; \
inst->dreg = (dr); \
inst->inst_p0 = (void*)(addr); \
mono_bblock_add_inst (cfg->cbb, inst); \
} while (0)
const char*
mono_arch_regname (int reg) {
static const char rnames[][4] = {
"r0", "sp", "r2", "r3", "r4",
"r5", "r6", "r7", "r8", "r9",
"r10", "r11", "r12", "r13", "r14",
"r15", "r16", "r17", "r18", "r19",
"r20", "r21", "r22", "r23", "r24",
"r25", "r26", "r27", "r28", "r29",
"r30", "r31"
};
if (reg >= 0 && reg < 32)
return rnames [reg];
return "unknown";
}
const char*
mono_arch_fregname (int reg) {
static const char rnames[][4] = {
"f0", "f1", "f2", "f3", "f4",
"f5", "f6", "f7", "f8", "f9",
"f10", "f11", "f12", "f13", "f14",
"f15", "f16", "f17", "f18", "f19",
"f20", "f21", "f22", "f23", "f24",
"f25", "f26", "f27", "f28", "f29",
"f30", "f31"
};
if (reg >= 0 && reg < 32)
return rnames [reg];
return "unknown";
}
/* this function overwrites r0, r11, r12 */
static guint8*
emit_memcpy (guint8 *code, int size, int dreg, int doffset, int sreg, int soffset)
{
	/* too big to fully unroll: use the count register to loop over large copies */
if (size > sizeof (target_mgreg_t) * 5) {
long shifted = size / TARGET_SIZEOF_VOID_P;
guint8 *copy_loop_start, *copy_loop_jump;
ppc_load (code, ppc_r0, shifted);
ppc_mtctr (code, ppc_r0);
//g_assert (sreg == ppc_r12);
ppc_addi (code, ppc_r11, dreg, (doffset - sizeof (target_mgreg_t)));
ppc_addi (code, ppc_r12, sreg, (soffset - sizeof (target_mgreg_t)));
copy_loop_start = code;
ppc_ldptr_update (code, ppc_r0, (unsigned int)sizeof (target_mgreg_t), ppc_r12);
ppc_stptr_update (code, ppc_r0, (unsigned int)sizeof (target_mgreg_t), ppc_r11);
copy_loop_jump = code;
ppc_bc (code, PPC_BR_DEC_CTR_NONZERO, 0, 0);
ppc_patch (copy_loop_jump, copy_loop_start);
size -= shifted * sizeof (target_mgreg_t);
doffset = soffset = 0;
dreg = ppc_r11;
}
#ifdef __mono_ppc64__
	/* If the hardware has multiple load/store units and the move is long
	   enough to use more than one register, use load/load/store/store
	   to execute 2 instructions per cycle. */
if ((cpu_hw_caps & PPC_MULTIPLE_LS_UNITS) && (dreg != ppc_r11) && (sreg != ppc_r11)) {
while (size >= 16) {
ppc_ldptr (code, ppc_r0, soffset, sreg);
ppc_ldptr (code, ppc_r11, soffset+8, sreg);
ppc_stptr (code, ppc_r0, doffset, dreg);
ppc_stptr (code, ppc_r11, doffset+8, dreg);
size -= 16;
soffset += 16;
doffset += 16;
}
}
while (size >= 8) {
ppc_ldr (code, ppc_r0, soffset, sreg);
ppc_str (code, ppc_r0, doffset, dreg);
size -= 8;
soffset += 8;
doffset += 8;
}
#else
if ((cpu_hw_caps & PPC_MULTIPLE_LS_UNITS) && (dreg != ppc_r11) && (sreg != ppc_r11)) {
while (size >= 8) {
ppc_lwz (code, ppc_r0, soffset, sreg);
ppc_lwz (code, ppc_r11, soffset+4, sreg);
ppc_stw (code, ppc_r0, doffset, dreg);
ppc_stw (code, ppc_r11, doffset+4, dreg);
size -= 8;
soffset += 8;
doffset += 8;
}
}
#endif
while (size >= 4) {
ppc_lwz (code, ppc_r0, soffset, sreg);
ppc_stw (code, ppc_r0, doffset, dreg);
size -= 4;
soffset += 4;
doffset += 4;
}
while (size >= 2) {
ppc_lhz (code, ppc_r0, soffset, sreg);
ppc_sth (code, ppc_r0, doffset, dreg);
size -= 2;
soffset += 2;
doffset += 2;
}
while (size >= 1) {
ppc_lbz (code, ppc_r0, soffset, sreg);
ppc_stb (code, ppc_r0, doffset, dreg);
size -= 1;
soffset += 1;
doffset += 1;
}
return code;
}
/*
* mono_arch_get_argument_info:
* @csig: a method signature
* @param_count: the number of parameters to consider
* @arg_info: an array to store the result infos
*
* Gathers information on parameters such as size, alignment and
 * padding. arg_info should be large enough to hold param_count + 1 entries.
*
* Returns the size of the activation frame.
*/
int
mono_arch_get_argument_info (MonoMethodSignature *csig, int param_count, MonoJitArgumentInfo *arg_info)
{
#ifdef __mono_ppc64__
NOT_IMPLEMENTED;
return -1;
#else
int k, frame_size = 0;
int size, align, pad;
int offset = 8;
if (MONO_TYPE_ISSTRUCT (csig->ret)) {
frame_size += sizeof (target_mgreg_t);
offset += 4;
}
arg_info [0].offset = offset;
if (csig->hasthis) {
frame_size += sizeof (target_mgreg_t);
offset += 4;
}
arg_info [0].size = frame_size;
for (k = 0; k < param_count; k++) {
if (csig->pinvoke && !csig->marshalling_disabled)
size = mono_type_native_stack_size (csig->params [k], (guint32*)&align);
else
size = mini_type_stack_size (csig->params [k], &align);
/* ignore alignment for now */
align = 1;
frame_size += pad = (align - (frame_size & (align - 1))) & (align - 1);
arg_info [k].pad = pad;
frame_size += size;
arg_info [k + 1].pad = 0;
arg_info [k + 1].size = size;
offset += pad;
arg_info [k + 1].offset = offset;
offset += size;
}
align = MONO_ARCH_FRAME_ALIGNMENT;
frame_size += pad = (align - (frame_size & (align - 1))) & (align - 1);
arg_info [k].pad = pad;
return frame_size;
#endif
}
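/*
 * Ex usage (illustrative sketch, 32-bit only since the ppc64 path is
 * NOT_IMPLEMENTED; 'sig' stands for a MonoMethodSignature* the caller
 * already has):
 *
 *   MonoJitArgumentInfo *arg_info = g_newa (MonoJitArgumentInfo, sig->param_count + 1);
 *   int frame_size = mono_arch_get_argument_info (sig, sig->param_count, arg_info);
 *   // arg_info [k + 1] now describes the size, offset and padding of parameter k.
 */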
#ifdef __mono_ppc64__
static gboolean
is_load_sequence (guint32 *seq)
{
return ppc_opcode (seq [0]) == 15 && /* lis */
ppc_opcode (seq [1]) == 24 && /* ori */
ppc_opcode (seq [2]) == 30 && /* sldi */
ppc_opcode (seq [3]) == 25 && /* oris */
ppc_opcode (seq [4]) == 24; /* ori */
}
#define ppc_load_get_dest(l) (((l)>>21) & 0x1f)
#define ppc_load_get_off(l) ((gint16)((l) & 0xffff))
#endif
/* ld || lwz */
#define ppc_is_load_op(opcode) (ppc_opcode ((opcode)) == 58 || ppc_opcode ((opcode)) == 32)
/* code must point to the blrl */
gboolean
mono_ppc_is_direct_call_sequence (guint32 *code)
{
#ifdef __mono_ppc64__
g_assert(*code == 0x4e800021 || *code == 0x4e800020 || *code == 0x4e800420);
/* the thunk-less direct call sequence: lis/ori/sldi/oris/ori/mtlr/blrl */
if (ppc_opcode (code [-1]) == 31) { /* mtlr */
if (ppc_is_load_op (code [-2]) && ppc_is_load_op (code [-3])) { /* ld/ld */
if (!is_load_sequence (&code [-8]))
return FALSE;
			/* one of the loads must be "ld r2,8(rX)" or "ld r2,4(rX)" for ilp32 */
return (ppc_load_get_dest (code [-2]) == ppc_r2 && ppc_load_get_off (code [-2]) == sizeof (target_mgreg_t)) ||
(ppc_load_get_dest (code [-3]) == ppc_r2 && ppc_load_get_off (code [-3]) == sizeof (target_mgreg_t));
}
if (ppc_opcode (code [-2]) == 24 && ppc_opcode (code [-3]) == 31) /* mr/nop */
return is_load_sequence (&code [-8]);
else
return is_load_sequence (&code [-6]);
}
return FALSE;
#else
g_assert(*code == 0x4e800021);
/* the thunk-less direct call sequence: lis/ori/mtlr/blrl */
return ppc_opcode (code [-1]) == 31 &&
ppc_opcode (code [-2]) == 24 &&
ppc_opcode (code [-3]) == 15;
#endif
}
#define MAX_ARCH_DELEGATE_PARAMS 7
static guint8*
get_delegate_invoke_impl (MonoTrampInfo **info, gboolean has_target, guint32 param_count, gboolean aot)
{
guint8 *code, *start;
if (has_target) {
int size = MONO_PPC_32_64_CASE (32, 32) + PPC_FTNPTR_SIZE;
start = code = mono_global_codeman_reserve (size);
if (!aot)
code = mono_ppc_create_pre_code_ftnptr (code);
/* Replace the this argument with the target */
ppc_ldptr (code, ppc_r0, MONO_STRUCT_OFFSET (MonoDelegate, method_ptr), ppc_r3);
#ifdef PPC_USES_FUNCTION_DESCRIPTOR
/* it's a function descriptor */
/* Can't use ldptr as it doesn't work with r0 */
ppc_ldptr_indexed (code, ppc_r0, 0, ppc_r0);
#endif
ppc_mtctr (code, ppc_r0);
ppc_ldptr (code, ppc_r3, MONO_STRUCT_OFFSET (MonoDelegate, target), ppc_r3);
ppc_bcctr (code, PPC_BR_ALWAYS, 0);
g_assert ((code - start) <= size);
mono_arch_flush_icache (start, size);
MONO_PROFILER_RAISE (jit_code_buffer, (start, code - start, MONO_PROFILER_CODE_BUFFER_DELEGATE_INVOKE, NULL));
} else {
int size, i;
size = MONO_PPC_32_64_CASE (32, 32) + param_count * 4 + PPC_FTNPTR_SIZE;
start = code = mono_global_codeman_reserve (size);
if (!aot)
code = mono_ppc_create_pre_code_ftnptr (code);
ppc_ldptr (code, ppc_r0, MONO_STRUCT_OFFSET (MonoDelegate, method_ptr), ppc_r3);
#ifdef PPC_USES_FUNCTION_DESCRIPTOR
/* it's a function descriptor */
ppc_ldptr_indexed (code, ppc_r0, 0, ppc_r0);
#endif
ppc_mtctr (code, ppc_r0);
/* slide down the arguments */
for (i = 0; i < param_count; ++i) {
ppc_mr (code, (ppc_r3 + i), (ppc_r3 + i + 1));
}
ppc_bcctr (code, PPC_BR_ALWAYS, 0);
g_assert ((code - start) <= size);
mono_arch_flush_icache (start, size);
MONO_PROFILER_RAISE (jit_code_buffer, (start, code - start, MONO_PROFILER_CODE_BUFFER_DELEGATE_INVOKE, NULL));
}
if (has_target) {
*info = mono_tramp_info_create ("delegate_invoke_impl_has_target", start, code - start, NULL, NULL);
} else {
char *name = g_strdup_printf ("delegate_invoke_impl_target_%d", param_count);
*info = mono_tramp_info_create (name, start, code - start, NULL, NULL);
g_free (name);
}
return start;
}
GSList*
mono_arch_get_delegate_invoke_impls (void)
{
GSList *res = NULL;
MonoTrampInfo *info;
int i;
get_delegate_invoke_impl (&info, TRUE, 0, TRUE);
res = g_slist_prepend (res, info);
for (i = 0; i <= MAX_ARCH_DELEGATE_PARAMS; ++i) {
get_delegate_invoke_impl (&info, FALSE, i, TRUE);
res = g_slist_prepend (res, info);
}
return res;
}
gpointer
mono_arch_get_delegate_invoke_impl (MonoMethodSignature *sig, gboolean has_target)
{
guint8 *code, *start;
/* FIXME: Support more cases */
if (MONO_TYPE_ISSTRUCT (sig->ret))
return NULL;
if (has_target) {
static guint8* cached = NULL;
if (cached)
return cached;
if (mono_ee_features.use_aot_trampolines) {
start = (guint8*)mono_aot_get_trampoline ("delegate_invoke_impl_has_target");
} else {
MonoTrampInfo *info;
start = get_delegate_invoke_impl (&info, TRUE, 0, FALSE);
mono_tramp_info_register (info, NULL);
}
mono_memory_barrier ();
cached = start;
} else {
static guint8* cache [MAX_ARCH_DELEGATE_PARAMS + 1] = {NULL};
int i;
if (sig->param_count > MAX_ARCH_DELEGATE_PARAMS)
return NULL;
for (i = 0; i < sig->param_count; ++i)
if (!mono_is_regsize_var (sig->params [i]))
return NULL;
code = cache [sig->param_count];
if (code)
return code;
if (mono_ee_features.use_aot_trampolines) {
char *name = g_strdup_printf ("delegate_invoke_impl_target_%d", sig->param_count);
start = (guint8*)mono_aot_get_trampoline (name);
g_free (name);
} else {
MonoTrampInfo *info;
start = get_delegate_invoke_impl (&info, FALSE, sig->param_count, FALSE);
mono_tramp_info_register (info, NULL);
}
mono_memory_barrier ();
cache [sig->param_count] = start;
}
return start;
}
gpointer
mono_arch_get_delegate_virtual_invoke_impl (MonoMethodSignature *sig, MonoMethod *method, int offset, gboolean load_imt_reg)
{
return NULL;
}
gpointer
mono_arch_get_this_arg_from_call (host_mgreg_t *r, guint8 *code)
{
return (gpointer)(gsize)r [ppc_r3];
}
typedef struct {
long int type;
long int value;
} AuxVec;
#define MAX_AUX_ENTRIES 128
/*
* PPC_FEATURE_POWER4, PPC_FEATURE_POWER5, PPC_FEATURE_POWER5_PLUS, PPC_FEATURE_CELL,
 * PPC_FEATURE_PA6T, PPC_FEATURE_ARCH_2_05 are considered to support 2X ISA features
*/
#define ISA_2X (0x00080000 | 0x00040000 | 0x00020000 | 0x00010000 | 0x00000800 | 0x00001000)
/* define PPC_FEATURE_64 HWCAP for 64-bit category. */
#define ISA_64 0x40000000
/* define PPC_FEATURE_POWER6_EXT HWCAP for power6x mffgpr/mftgpr instructions. */
#define ISA_MOVE_FPR_GPR 0x00000200
/*
* Initialize the cpu to execute managed code.
*/
void
mono_arch_cpu_init (void)
{
}
/*
* Initialize architecture specific code.
*/
void
mono_arch_init (void)
{
#if defined(MONO_CROSS_COMPILE)
#elif defined(__APPLE__)
int mib [3];
size_t len = sizeof (cachelinesize);
mib [0] = CTL_HW;
mib [1] = HW_CACHELINE;
if (sysctl (mib, 2, &cachelinesize, &len, NULL, 0) == -1) {
perror ("sysctl");
cachelinesize = 128;
} else {
cachelineinc = cachelinesize;
}
#elif defined(__linux__)
AuxVec vec [MAX_AUX_ENTRIES];
int i, vec_entries = 0;
/* sadly this will work only with 2.6 kernels... */
FILE* f = fopen ("/proc/self/auxv", "rb");
if (f) {
vec_entries = fread (&vec, sizeof (AuxVec), MAX_AUX_ENTRIES, f);
fclose (f);
}
for (i = 0; i < vec_entries; i++) {
int type = vec [i].type;
if (type == 19) { /* AT_DCACHEBSIZE */
cachelinesize = vec [i].value;
continue;
}
}
#elif defined(G_COMPILER_CODEWARRIOR)
cachelinesize = 32;
cachelineinc = 32;
#elif defined(_AIX)
/* FIXME: use block instead? */
cachelinesize = _system_configuration.icache_line;
cachelineinc = _system_configuration.icache_line;
#else
//#error Need a way to get cache line size
#endif
if (mono_hwcap_ppc_has_icache_snoop)
cpu_hw_caps |= PPC_ICACHE_SNOOP;
if (mono_hwcap_ppc_is_isa_2x)
cpu_hw_caps |= PPC_ISA_2X;
if (mono_hwcap_ppc_is_isa_2_03)
cpu_hw_caps |= PPC_ISA_2_03;
if (mono_hwcap_ppc_is_isa_64)
cpu_hw_caps |= PPC_ISA_64;
if (mono_hwcap_ppc_has_move_fpr_gpr)
cpu_hw_caps |= PPC_MOVE_FPR_GPR;
if (mono_hwcap_ppc_has_multiple_ls_units)
cpu_hw_caps |= PPC_MULTIPLE_LS_UNITS;
if (!cachelinesize)
cachelinesize = 32;
if (!cachelineinc)
cachelineinc = cachelinesize;
if (mono_cpu_count () > 1)
cpu_hw_caps |= PPC_SMP_CAPABLE;
mono_os_mutex_init_recursive (&mini_arch_mutex);
ss_trigger_page = mono_valloc (NULL, mono_pagesize (), MONO_MMAP_READ, MONO_MEM_ACCOUNT_OTHER);
bp_trigger_page = mono_valloc (NULL, mono_pagesize (), MONO_MMAP_READ, MONO_MEM_ACCOUNT_OTHER);
mono_mprotect (bp_trigger_page, mono_pagesize (), 0);
// FIXME: Fix partial sharing for power and remove this
mono_set_partial_sharing_supported (FALSE);
}
/*
* Cleanup architecture specific code.
*/
void
mono_arch_cleanup (void)
{
mono_os_mutex_destroy (&mini_arch_mutex);
}
gboolean
mono_arch_have_fast_tls (void)
{
return FALSE;
}
/*
* This function returns the optimizations supported on this cpu.
*/
guint32
mono_arch_cpu_optimizations (guint32 *exclude_mask)
{
guint32 opts = 0;
/* no ppc-specific optimizations yet */
*exclude_mask = 0;
return opts;
}
#ifdef __mono_ppc64__
#define CASE_PPC32(c)
#define CASE_PPC64(c) case c:
#else
#define CASE_PPC32(c) case c:
#define CASE_PPC64(c)
#endif
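/*
 * is_regsize_var:
 *
 *   Return whenever T is a type which can be allocated to a single
 * integer register, i.e. it fits in a native word.
 */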
static gboolean
is_regsize_var (MonoType *t) {
if (m_type_is_byref (t))
return TRUE;
t = mini_get_underlying_type (t);
switch (t->type) {
case MONO_TYPE_I4:
case MONO_TYPE_U4:
CASE_PPC64 (MONO_TYPE_I8)
CASE_PPC64 (MONO_TYPE_U8)
case MONO_TYPE_I:
case MONO_TYPE_U:
case MONO_TYPE_PTR:
case MONO_TYPE_FNPTR:
return TRUE;
case MONO_TYPE_OBJECT:
case MONO_TYPE_STRING:
case MONO_TYPE_CLASS:
case MONO_TYPE_SZARRAY:
case MONO_TYPE_ARRAY:
return TRUE;
case MONO_TYPE_GENERICINST:
if (!mono_type_generic_inst_is_valuetype (t))
return TRUE;
return FALSE;
case MONO_TYPE_VALUETYPE:
return FALSE;
}
return FALSE;
}
#ifndef DISABLE_JIT
GList *
mono_arch_get_allocatable_int_vars (MonoCompile *cfg)
{
GList *vars = NULL;
int i;
for (i = 0; i < cfg->num_varinfo; i++) {
MonoInst *ins = cfg->varinfo [i];
MonoMethodVar *vmv = MONO_VARINFO (cfg, i);
/* unused vars */
if (vmv->range.first_use.abs_pos >= vmv->range.last_use.abs_pos)
continue;
if (ins->flags & (MONO_INST_VOLATILE|MONO_INST_INDIRECT) || (ins->opcode != OP_LOCAL && ins->opcode != OP_ARG))
continue;
/* we can only allocate 32 bit values */
if (is_regsize_var (ins->inst_vtype)) {
g_assert (MONO_VARINFO (cfg, i)->reg == -1);
g_assert (i == vmv->idx);
vars = mono_varlist_insert_sorted (cfg, vars, vmv, FALSE);
}
}
return vars;
}
#endif /* ifndef DISABLE_JIT */
GList *
mono_arch_get_global_int_regs (MonoCompile *cfg)
{
GList *regs = NULL;
int i, top = 32;
if (cfg->frame_reg != ppc_sp)
top = 31;
/* ppc_r13 is used by the system on PPC EABI */
for (i = 14; i < top; ++i) {
/*
* Reserve r29 for holding the vtable address for virtual calls in AOT mode,
* since the trampolines can clobber r12.
*/
if (!(cfg->compile_aot && i == 29))
regs = g_list_prepend (regs, GUINT_TO_POINTER (i));
}
return regs;
}
/*
* mono_arch_regalloc_cost:
*
* Return the cost, in number of memory references, of the action of
* allocating the variable VMV into a register during global register
* allocation.
*/
guint32
mono_arch_regalloc_cost (MonoCompile *cfg, MonoMethodVar *vmv)
{
/* FIXME: */
return 2;
}
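/*
 * mono_arch_flush_icache:
 *
 *   Make newly generated code visible to instruction fetch. PPC has
 * separate, incoherent I/D caches, so each cache line is flushed from
 * the data cache (dcbst/dcbf), then invalidated in the instruction
 * cache (icbi), with sync/isync barriers in between.
 */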
void
mono_arch_flush_icache (guint8 *code, gint size)
{
#ifdef MONO_CROSS_COMPILE
/* do nothing */
#else
register guint8 *p;
guint8 *endp, *start;
p = start = code;
endp = p + size;
start = (guint8*)((gsize)start & ~(cachelinesize - 1));
/* use dcbf for smp support, later optimize for UP, see pem._64bit.d20030611.pdf page 211 */
#if defined(G_COMPILER_CODEWARRIOR)
if (cpu_hw_caps & PPC_SMP_CAPABLE) {
for (p = start; p < endp; p += cachelineinc) {
asm { dcbf 0, p };
}
} else {
for (p = start; p < endp; p += cachelineinc) {
asm { dcbst 0, p };
}
}
asm { sync };
p = code;
for (p = start; p < endp; p += cachelineinc) {
asm {
icbi 0, p
sync
}
}
asm {
sync
isync
}
#else
/* For POWER5/6 with ICACHE_SNOOPing only one icbi in the range is required.
	 * The sync is required to ensure that the store queue is completely empty.
* While the icbi performs no cache operations, icbi/isync is required to
* kill local prefetch.
*/
if (cpu_hw_caps & PPC_ICACHE_SNOOP) {
asm ("sync");
asm ("icbi 0,%0;" : : "r"(code) : "memory");
asm ("isync");
return;
}
/* use dcbf for smp support, see pem._64bit.d20030611.pdf page 211 */
if (cpu_hw_caps & PPC_SMP_CAPABLE) {
for (p = start; p < endp; p += cachelineinc) {
asm ("dcbf 0,%0;" : : "r"(p) : "memory");
}
} else {
for (p = start; p < endp; p += cachelineinc) {
asm ("dcbst 0,%0;" : : "r"(p) : "memory");
}
}
asm ("sync");
p = code;
for (p = start; p < endp; p += cachelineinc) {
/* for ISA2.0+ implementations we should not need any extra sync between the
* icbi instructions. Both the 2.0 PEM and the PowerISA-2.05 say this.
	 * So I am not sure which chip had this problem, but it's not an issue
	 * on any of the ISA V2 chips.
*/
if (cpu_hw_caps & PPC_ISA_2X)
asm ("icbi 0,%0;" : : "r"(p) : "memory");
else
asm ("icbi 0,%0; sync;" : : "r"(p) : "memory");
}
if (!(cpu_hw_caps & PPC_ISA_2X))
asm ("sync");
asm ("isync");
#endif
#endif
}
void
mono_arch_flush_register_windows (void)
{
}
#ifdef __APPLE__
#define ALWAYS_ON_STACK(s) s
#define FP_ALSO_IN_REG(s) s
#else
#ifdef __mono_ppc64__
#define ALWAYS_ON_STACK(s) s
#define FP_ALSO_IN_REG(s) s
#else
#define ALWAYS_ON_STACK(s)
#define FP_ALSO_IN_REG(s)
#endif
#define ALIGN_DOUBLES
#endif
enum {
RegTypeGeneral,
RegTypeBase,
RegTypeFP,
RegTypeStructByVal,
RegTypeStructByAddr,
RegTypeFPStructByVal, // For the v2 ABI, floats should be passed in FRs instead of GRs. Only valid for ABI v2!
};
typedef struct {
gint32 offset;
guint32 vtsize; /* in param area */
guint8 reg;
guint8 vtregs; /* number of registers used to pass a RegTypeStructByVal/RegTypeFPStructByVal */
guint8 regtype : 4; /* 0 general, 1 basereg, 2 floating point register, see RegType* */
guint8 size : 4; /* 1, 2, 4, 8, or regs used by RegTypeStructByVal/RegTypeFPStructByVal */
guint8 bytes : 4; /* size in bytes - only valid for
RegTypeStructByVal/RegTypeFPStructByVal if the struct fits
in one word, otherwise it's 0*/
} ArgInfo;
struct CallInfo {
int nargs;
guint32 stack_usage;
guint32 struct_ret;
ArgInfo ret;
ArgInfo sig_cookie;
gboolean vtype_retaddr;
int vret_arg_index;
ArgInfo args [1];
};
#define DEBUG(a)
#if PPC_RETURN_SMALL_FLOAT_STRUCTS_IN_FR_REGS
//
// Test if a structure is completely composed of either float XOR double fields and has no more than
// PPC_MOST_FLOAT_STRUCT_MEMBERS_TO_RETURN_VIA_REGISTERS members.
// If this is true the structure can be returned directly via float registers instead of by a hidden parameter
// pointing to where the return value should be stored.
// This is as per the ELF ABI v2.
//
static gboolean
is_float_struct_returnable_via_regs (MonoType *type, int* member_cnt, int* member_size)
{
int local_member_cnt, local_member_size;
if (!member_cnt) {
member_cnt = &local_member_cnt;
}
if (!member_size) {
member_size = &local_member_size;
}
gboolean is_all_floats = mini_type_is_hfa(type, member_cnt, member_size);
return is_all_floats && (*member_cnt <= PPC_MOST_FLOAT_STRUCT_MEMBERS_TO_RETURN_VIA_REGISTERS);
}
#else
#define is_float_struct_returnable_via_regs(a,b,c) (FALSE)
#endif
#if PPC_RETURN_SMALL_STRUCTS_IN_REGS
//
// Test if a structure is no larger than 2 doublewords (PPC_LARGEST_STRUCT_SIZE_TO_RETURN_VIA_REGISTERS) and is
// completely composed of fields all of basic types.
// If this is true the structure can be returned directly via registers r3/r4 instead of by a hidden parameter
// pointing to where the return value should be stored.
// This is as per the ELF ABI v2.
//
static gboolean
is_struct_returnable_via_regs (MonoClass *klass, gboolean is_pinvoke)
{
gboolean has_a_field = FALSE;
int size = 0;
if (klass) {
gpointer iter = NULL;
MonoClassField *f;
if (is_pinvoke)
size = mono_type_native_stack_size (m_class_get_byval_arg (klass), 0);
else
size = mini_type_stack_size (m_class_get_byval_arg (klass), 0);
if (size == 0)
return TRUE;
if (size > PPC_LARGEST_STRUCT_SIZE_TO_RETURN_VIA_REGISTERS)
return FALSE;
while ((f = mono_class_get_fields_internal (klass, &iter))) {
if (!(f->type->attrs & FIELD_ATTRIBUTE_STATIC)) {
// TBD: Is there a better way to check for the basic types?
if (m_type_is_byref (f->type)) {
return FALSE;
} else if ((f->type->type >= MONO_TYPE_BOOLEAN) && (f->type->type <= MONO_TYPE_R8)) {
has_a_field = TRUE;
} else if (MONO_TYPE_ISSTRUCT (f->type)) {
MonoClass *klass = mono_class_from_mono_type_internal (f->type);
if (is_struct_returnable_via_regs(klass, is_pinvoke)) {
has_a_field = TRUE;
} else {
return FALSE;
}
} else {
return FALSE;
}
}
}
}
return has_a_field;
}
#else
#define is_struct_returnable_via_regs(a,b) (FALSE)
#endif
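/*
 * add_general:
 *
 *   Assign the next integer argument either to a general purpose
 * register or to the caller's stack parameter area, updating GR and
 * STACK_SIZE. SIMPLE is FALSE only for 64 bit values on 32 bit
 * platforms, which take up two registers.
 */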
static inline void
add_general (guint *gr, guint *stack_size, ArgInfo *ainfo, gboolean simple)
{
#ifdef __mono_ppc64__
g_assert (simple);
#endif
if (simple) {
if (*gr >= 3 + PPC_NUM_REG_ARGS) {
ainfo->offset = PPC_STACK_PARAM_OFFSET + *stack_size;
ainfo->reg = ppc_sp; /* in the caller */
ainfo->regtype = RegTypeBase;
*stack_size += sizeof (target_mgreg_t);
} else {
ALWAYS_ON_STACK (*stack_size += sizeof (target_mgreg_t));
ainfo->reg = *gr;
}
} else {
if (*gr >= 3 + PPC_NUM_REG_ARGS - 1) {
#ifdef ALIGN_DOUBLES
//*stack_size += (*stack_size % 8);
#endif
ainfo->offset = PPC_STACK_PARAM_OFFSET + *stack_size;
ainfo->reg = ppc_sp; /* in the caller */
ainfo->regtype = RegTypeBase;
*stack_size += 8;
} else {
#ifdef ALIGN_DOUBLES
if (!((*gr) & 1))
(*gr) ++;
#endif
ALWAYS_ON_STACK (*stack_size += 8);
ainfo->reg = *gr;
}
(*gr) ++;
}
(*gr) ++;
}
#if defined(__APPLE__) || (defined(__mono_ppc64__) && !PPC_PASS_SMALL_FLOAT_STRUCTS_IN_FR_REGS)
static gboolean
has_only_a_r48_field (MonoClass *klass)
{
gpointer iter;
MonoClassField *f;
gboolean have_field = FALSE;
iter = NULL;
while ((f = mono_class_get_fields_internal (klass, &iter))) {
if (!(f->type->attrs & FIELD_ATTRIBUTE_STATIC)) {
if (have_field)
return FALSE;
if (!m_type_is_byref (f->type) && (f->type->type == MONO_TYPE_R4 || f->type->type == MONO_TYPE_R8))
have_field = TRUE;
else
return FALSE;
}
}
return have_field;
}
#endif
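/*
 * get_call_info:
 *
 *   Obtain information about a call according to the calling convention:
 * for each argument and for the return value, compute whenever it is
 * passed in a register, on the stack, by value or by address, and at
 * which offset. The returned structure should be freed with g_free ().
 */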
static CallInfo*
get_call_info (MonoMethodSignature *sig)
{
guint i, fr, gr, pstart;
int n = sig->hasthis + sig->param_count;
MonoType *simpletype;
guint32 stack_size = 0;
CallInfo *cinfo = g_malloc0 (sizeof (CallInfo) + sizeof (ArgInfo) * n);
gboolean is_pinvoke = sig->pinvoke;
fr = PPC_FIRST_FPARG_REG;
gr = PPC_FIRST_ARG_REG;
if (mini_type_is_vtype (sig->ret)) {
cinfo->vtype_retaddr = TRUE;
}
pstart = 0;
n = 0;
/*
* To simplify get_this_arg_reg () and LLVM integration, emit the vret arg after
* the first argument, allowing 'this' to be always passed in the first arg reg.
* Also do this if the first argument is a reference type, since virtual calls
* are sometimes made using calli without sig->hasthis set, like in the delegate
* invoke wrappers.
*/
if (cinfo->vtype_retaddr && !is_pinvoke && (sig->hasthis || (sig->param_count > 0 && MONO_TYPE_IS_REFERENCE (mini_get_underlying_type (sig->params [0]))))) {
if (sig->hasthis) {
add_general (&gr, &stack_size, cinfo->args + 0, TRUE);
n ++;
} else {
add_general (&gr, &stack_size, &cinfo->args [sig->hasthis + 0], TRUE);
pstart = 1;
n ++;
}
add_general (&gr, &stack_size, &cinfo->ret, TRUE);
cinfo->struct_ret = cinfo->ret.reg;
cinfo->vret_arg_index = 1;
} else {
/* this */
if (sig->hasthis) {
add_general (&gr, &stack_size, cinfo->args + 0, TRUE);
n ++;
}
if (cinfo->vtype_retaddr) {
add_general (&gr, &stack_size, &cinfo->ret, TRUE);
cinfo->struct_ret = cinfo->ret.reg;
}
}
DEBUG(printf("params: %d\n", sig->param_count));
for (i = pstart; i < sig->param_count; ++i) {
if (!sig->pinvoke && (sig->call_convention == MONO_CALL_VARARG) && (i == sig->sentinelpos)) {
/* Prevent implicit arguments and sig_cookie from
being passed in registers */
gr = PPC_LAST_ARG_REG + 1;
/* FIXME: don't we have to set fr, too? */
/* Emit the signature cookie just before the implicit arguments */
add_general (&gr, &stack_size, &cinfo->sig_cookie, TRUE);
}
DEBUG(printf("param %d: ", i));
if (m_type_is_byref (sig->params [i])) {
DEBUG(printf("byref\n"));
add_general (&gr, &stack_size, cinfo->args + n, TRUE);
n++;
continue;
}
simpletype = mini_get_underlying_type (sig->params [i]);
switch (simpletype->type) {
case MONO_TYPE_BOOLEAN:
case MONO_TYPE_I1:
case MONO_TYPE_U1:
cinfo->args [n].size = 1;
add_general (&gr, &stack_size, cinfo->args + n, TRUE);
n++;
break;
case MONO_TYPE_CHAR:
case MONO_TYPE_I2:
case MONO_TYPE_U2:
cinfo->args [n].size = 2;
add_general (&gr, &stack_size, cinfo->args + n, TRUE);
n++;
break;
case MONO_TYPE_I4:
case MONO_TYPE_U4:
cinfo->args [n].size = 4;
add_general (&gr, &stack_size, cinfo->args + n, TRUE);
n++;
break;
case MONO_TYPE_I:
case MONO_TYPE_U:
case MONO_TYPE_PTR:
case MONO_TYPE_FNPTR:
case MONO_TYPE_CLASS:
case MONO_TYPE_OBJECT:
case MONO_TYPE_STRING:
case MONO_TYPE_SZARRAY:
case MONO_TYPE_ARRAY:
cinfo->args [n].size = sizeof (target_mgreg_t);
add_general (&gr, &stack_size, cinfo->args + n, TRUE);
n++;
break;
case MONO_TYPE_GENERICINST:
if (!mono_type_generic_inst_is_valuetype (simpletype)) {
cinfo->args [n].size = sizeof (target_mgreg_t);
add_general (&gr, &stack_size, cinfo->args + n, TRUE);
n++;
break;
}
/* Fall through */
case MONO_TYPE_VALUETYPE:
case MONO_TYPE_TYPEDBYREF: {
gint size;
MonoClass *klass = mono_class_from_mono_type_internal (sig->params [i]);
if (simpletype->type == MONO_TYPE_TYPEDBYREF)
size = MONO_ABI_SIZEOF (MonoTypedRef);
else if (sig->pinvoke && !sig->marshalling_disabled)
size = mono_class_native_size (klass, NULL);
else
size = mono_class_value_size (klass, NULL);
#if defined(__APPLE__) || (defined(__mono_ppc64__) && !PPC_PASS_SMALL_FLOAT_STRUCTS_IN_FR_REGS)
if ((size == 4 || size == 8) && has_only_a_r48_field (klass)) {
cinfo->args [n].size = size;
			/* FP argument registers: it was 7, now it is 8 in LinuxPPC */
if (fr <= PPC_LAST_FPARG_REG) {
cinfo->args [n].regtype = RegTypeFP;
cinfo->args [n].reg = fr;
fr ++;
FP_ALSO_IN_REG (gr ++);
#if !defined(__mono_ppc64__)
if (size == 8)
FP_ALSO_IN_REG (gr ++);
#endif
ALWAYS_ON_STACK (stack_size += size);
} else {
cinfo->args [n].offset = PPC_STACK_PARAM_OFFSET + stack_size;
cinfo->args [n].regtype = RegTypeBase;
cinfo->args [n].reg = ppc_sp; /* in the caller*/
stack_size += 8;
}
n++;
break;
}
#endif
DEBUG(printf ("load %d bytes struct\n",
mono_class_native_size (sig->params [i]->data.klass, NULL)));
#if PPC_PASS_STRUCTS_BY_VALUE
{
int align_size = size;
int nregs = 0;
int rest = PPC_LAST_ARG_REG - gr + 1;
int n_in_regs = 0;
#if PPC_PASS_SMALL_FLOAT_STRUCTS_IN_FR_REGS
int mbr_cnt = 0;
int mbr_size = 0;
gboolean is_all_floats = is_float_struct_returnable_via_regs (sig->params [i], &mbr_cnt, &mbr_size);
if (is_all_floats) {
rest = PPC_LAST_FPARG_REG - fr + 1;
}
// Pass small (<= 8 member) structures entirely made up of either float or double members
// in FR registers. There have to be at least mbr_cnt registers left.
if (is_all_floats &&
(rest >= mbr_cnt)) {
nregs = mbr_cnt;
n_in_regs = MIN (rest, nregs);
cinfo->args [n].regtype = RegTypeFPStructByVal;
cinfo->args [n].vtregs = n_in_regs;
cinfo->args [n].size = mbr_size;
cinfo->args [n].vtsize = nregs - n_in_regs;
cinfo->args [n].reg = fr;
fr += n_in_regs;
if (mbr_size == 4) {
// floats
FP_ALSO_IN_REG (gr += (n_in_regs+1)/2);
} else {
// doubles
FP_ALSO_IN_REG (gr += (n_in_regs));
}
} else
#endif
{
align_size += (sizeof (target_mgreg_t) - 1);
align_size &= ~(sizeof (target_mgreg_t) - 1);
nregs = (align_size + sizeof (target_mgreg_t) -1 ) / sizeof (target_mgreg_t);
n_in_regs = MIN (rest, nregs);
if (n_in_regs < 0)
n_in_regs = 0;
#ifdef __APPLE__
/* FIXME: check this */
if (size >= 3 && size % 4 != 0)
n_in_regs = 0;
#endif
cinfo->args [n].regtype = RegTypeStructByVal;
cinfo->args [n].vtregs = n_in_regs;
cinfo->args [n].size = n_in_regs;
cinfo->args [n].vtsize = nregs - n_in_regs;
cinfo->args [n].reg = gr;
gr += n_in_regs;
}
#ifdef __mono_ppc64__
if (nregs == 1 && is_pinvoke)
cinfo->args [n].bytes = size;
else
#endif
cinfo->args [n].bytes = 0;
cinfo->args [n].offset = PPC_STACK_PARAM_OFFSET + stack_size;
/*g_print ("offset for arg %d at %d\n", n, PPC_STACK_PARAM_OFFSET + stack_size);*/
stack_size += nregs * sizeof (target_mgreg_t);
}
#else
add_general (&gr, &stack_size, cinfo->args + n, TRUE);
cinfo->args [n].regtype = RegTypeStructByAddr;
cinfo->args [n].vtsize = size;
#endif
n++;
break;
}
case MONO_TYPE_U8:
case MONO_TYPE_I8:
cinfo->args [n].size = 8;
add_general (&gr, &stack_size, cinfo->args + n, SIZEOF_REGISTER == 8);
n++;
break;
case MONO_TYPE_R4:
cinfo->args [n].size = 4;
			/* FP argument registers: it was 7, now it is 8 in LinuxPPC */
if (fr <= PPC_LAST_FPARG_REG
// For non-native vararg calls the parms must go in storage
&& !(!sig->pinvoke && (sig->call_convention == MONO_CALL_VARARG))
) {
cinfo->args [n].regtype = RegTypeFP;
cinfo->args [n].reg = fr;
fr ++;
FP_ALSO_IN_REG (gr ++);
ALWAYS_ON_STACK (stack_size += SIZEOF_REGISTER);
} else {
cinfo->args [n].offset = PPC_STACK_PARAM_OFFSET + stack_size + MONO_PPC_32_64_CASE (0, 4);
cinfo->args [n].regtype = RegTypeBase;
cinfo->args [n].reg = ppc_sp; /* in the caller*/
stack_size += SIZEOF_REGISTER;
}
n++;
break;
case MONO_TYPE_R8:
cinfo->args [n].size = 8;
			/* FP argument registers: it was 7, now it is 8 in LinuxPPC */
if (fr <= PPC_LAST_FPARG_REG
// For non-native vararg calls the parms must go in storage
&& !(!sig->pinvoke && (sig->call_convention == MONO_CALL_VARARG))
) {
cinfo->args [n].regtype = RegTypeFP;
cinfo->args [n].reg = fr;
fr ++;
FP_ALSO_IN_REG (gr += sizeof (double) / SIZEOF_REGISTER);
ALWAYS_ON_STACK (stack_size += 8);
} else {
cinfo->args [n].offset = PPC_STACK_PARAM_OFFSET + stack_size;
cinfo->args [n].regtype = RegTypeBase;
cinfo->args [n].reg = ppc_sp; /* in the caller*/
stack_size += 8;
}
n++;
break;
default:
g_error ("Can't trampoline 0x%x", sig->params [i]->type);
}
}
cinfo->nargs = n;
if (!sig->pinvoke && (sig->call_convention == MONO_CALL_VARARG) && (i == sig->sentinelpos)) {
/* Prevent implicit arguments and sig_cookie from
being passed in registers */
gr = PPC_LAST_ARG_REG + 1;
/* Emit the signature cookie just before the implicit arguments */
add_general (&gr, &stack_size, &cinfo->sig_cookie, TRUE);
}
{
simpletype = mini_get_underlying_type (sig->ret);
switch (simpletype->type) {
case MONO_TYPE_BOOLEAN:
case MONO_TYPE_I1:
case MONO_TYPE_U1:
case MONO_TYPE_I2:
case MONO_TYPE_U2:
case MONO_TYPE_CHAR:
case MONO_TYPE_I4:
case MONO_TYPE_U4:
case MONO_TYPE_I:
case MONO_TYPE_U:
case MONO_TYPE_PTR:
case MONO_TYPE_FNPTR:
case MONO_TYPE_CLASS:
case MONO_TYPE_OBJECT:
case MONO_TYPE_SZARRAY:
case MONO_TYPE_ARRAY:
case MONO_TYPE_STRING:
cinfo->ret.reg = ppc_r3;
break;
case MONO_TYPE_U8:
case MONO_TYPE_I8:
cinfo->ret.reg = ppc_r3;
break;
case MONO_TYPE_R4:
case MONO_TYPE_R8:
cinfo->ret.reg = ppc_f1;
cinfo->ret.regtype = RegTypeFP;
break;
case MONO_TYPE_GENERICINST:
if (!mono_type_generic_inst_is_valuetype (simpletype)) {
cinfo->ret.reg = ppc_r3;
break;
}
break;
case MONO_TYPE_VALUETYPE:
break;
case MONO_TYPE_TYPEDBYREF:
case MONO_TYPE_VOID:
break;
default:
g_error ("Can't handle as return value 0x%x", sig->ret->type);
}
}
/* align stack size to 16 */
DEBUG (printf (" stack size: %d (%d)\n", (stack_size + 15) & ~15, stack_size));
stack_size = (stack_size + 15) & ~15;
cinfo->stack_usage = stack_size;
return cinfo;
}
#ifndef DISABLE_JIT
gboolean
mono_arch_tailcall_supported (MonoCompile *cfg, MonoMethodSignature *caller_sig, MonoMethodSignature *callee_sig, gboolean virtual_)
{
CallInfo *caller_info = get_call_info (caller_sig);
CallInfo *callee_info = get_call_info (callee_sig);
gboolean res = IS_SUPPORTED_TAILCALL (callee_info->stack_usage <= caller_info->stack_usage)
&& IS_SUPPORTED_TAILCALL (memcmp (&callee_info->ret, &caller_info->ret, sizeof (caller_info->ret)) == 0);
	// FIXME ABIs vary as to whether this local is in the parameter area or not,
	// so this check might not be needed.
for (int i = 0; res && i < callee_info->nargs; ++i) {
res = IS_SUPPORTED_TAILCALL (callee_info->args [i].regtype != RegTypeStructByAddr);
/* An address on the callee's stack is passed as the argument */
}
g_free (caller_info);
g_free (callee_info);
return res;
}
#endif
/*
* Set var information according to the calling convention. ppc version.
* The locals var stuff should most likely be split in another method.
*/
void
mono_arch_allocate_vars (MonoCompile *m)
{
MonoMethodSignature *sig;
MonoMethodHeader *header;
MonoInst *inst;
int i, offset, size, align, curinst;
int frame_reg = ppc_sp;
gint32 *offsets;
guint32 locals_stack_size, locals_stack_align;
m->flags |= MONO_CFG_HAS_SPILLUP;
	/* this is bug #60332: remove when #59509 is fixed, so that no weird vararg
	 * call convs need to be handled this way.
*/
if (m->flags & MONO_CFG_HAS_VARARGS)
m->param_area = MAX (m->param_area, sizeof (target_mgreg_t)*8);
/* gtk-sharp and other broken code will dllimport vararg functions even with
* non-varargs signatures. Since there is little hope people will get this right
* we assume they won't.
*/
if (m->method->wrapper_type == MONO_WRAPPER_MANAGED_TO_NATIVE)
m->param_area = MAX (m->param_area, sizeof (target_mgreg_t)*8);
header = m->header;
/*
* We use the frame register also for any method that has
* exception clauses. This way, when the handlers are called,
* the code will reference local variables using the frame reg instead of
* the stack pointer: if we had to restore the stack pointer, we'd
* corrupt the method frames that are already on the stack (since
* filters get called before stack unwinding happens) when the filter
* code would call any method (this also applies to finally etc.).
*/
if ((m->flags & MONO_CFG_HAS_ALLOCA) || header->num_clauses)
frame_reg = ppc_r31;
m->frame_reg = frame_reg;
if (frame_reg != ppc_sp) {
m->used_int_regs |= 1 << frame_reg;
}
sig = mono_method_signature_internal (m->method);
offset = 0;
curinst = 0;
if (MONO_TYPE_ISSTRUCT (sig->ret)) {
m->ret->opcode = OP_REGVAR;
m->ret->inst_c0 = m->ret->dreg = ppc_r3;
} else {
/* FIXME: handle long values? */
switch (mini_get_underlying_type (sig->ret)->type) {
case MONO_TYPE_VOID:
break;
case MONO_TYPE_R4:
case MONO_TYPE_R8:
m->ret->opcode = OP_REGVAR;
m->ret->inst_c0 = m->ret->dreg = ppc_f1;
break;
default:
m->ret->opcode = OP_REGVAR;
m->ret->inst_c0 = m->ret->dreg = ppc_r3;
break;
}
}
/* local vars are at a positive offset from the stack pointer */
/*
* also note that if the function uses alloca, we use ppc_r31
* to point at the local variables.
*/
offset = PPC_MINIMAL_STACK_SIZE; /* linkage area */
/* align the offset to 16 bytes: not sure this is needed here */
//offset += 16 - 1;
//offset &= ~(16 - 1);
/* add parameter area size for called functions */
offset += m->param_area;
offset += 16 - 1;
offset &= ~(16 - 1);
/* the MonoLMF structure is stored just below the stack pointer */
if (MONO_TYPE_ISSTRUCT (sig->ret)) {
offset += sizeof(gpointer) - 1;
offset &= ~(sizeof(gpointer) - 1);
m->vret_addr->opcode = OP_REGOFFSET;
m->vret_addr->inst_basereg = frame_reg;
m->vret_addr->inst_offset = offset;
if (G_UNLIKELY (m->verbose_level > 1)) {
printf ("vret_addr =");
mono_print_ins (m->vret_addr);
}
offset += sizeof(gpointer);
}
offsets = mono_allocate_stack_slots (m, FALSE, &locals_stack_size, &locals_stack_align);
if (locals_stack_align) {
offset += (locals_stack_align - 1);
offset &= ~(locals_stack_align - 1);
}
for (i = m->locals_start; i < m->num_varinfo; i++) {
if (offsets [i] != -1) {
MonoInst *inst = m->varinfo [i];
inst->opcode = OP_REGOFFSET;
inst->inst_basereg = frame_reg;
inst->inst_offset = offset + offsets [i];
/*
g_print ("allocating local %d (%s) to %d\n",
i, mono_type_get_name (inst->inst_vtype), inst->inst_offset);
*/
}
}
offset += locals_stack_size;
curinst = 0;
if (sig->hasthis) {
inst = m->args [curinst];
if (inst->opcode != OP_REGVAR) {
inst->opcode = OP_REGOFFSET;
inst->inst_basereg = frame_reg;
offset += sizeof (target_mgreg_t) - 1;
offset &= ~(sizeof (target_mgreg_t) - 1);
inst->inst_offset = offset;
offset += sizeof (target_mgreg_t);
}
curinst++;
}
for (i = 0; i < sig->param_count; ++i) {
inst = m->args [curinst];
if (inst->opcode != OP_REGVAR) {
inst->opcode = OP_REGOFFSET;
inst->inst_basereg = frame_reg;
if (sig->pinvoke && !sig->marshalling_disabled) {
size = mono_type_native_stack_size (sig->params [i], (guint32*)&align);
inst->backend.is_pinvoke = 1;
} else {
size = mono_type_size (sig->params [i], &align);
}
if (MONO_TYPE_ISSTRUCT (sig->params [i]) && size < sizeof (target_mgreg_t))
size = align = sizeof (target_mgreg_t);
/*
* Use at least 4/8 byte alignment, since these might be passed in registers, and
* they are saved using std in the prolog.
*/
align = sizeof (target_mgreg_t);
offset += align - 1;
offset &= ~(align - 1);
inst->inst_offset = offset;
offset += size;
}
curinst++;
}
/* some storage for fp conversions */
offset += 8 - 1;
offset &= ~(8 - 1);
m->arch.fp_conv_var_offset = offset;
offset += 8;
/* align the offset to 16 bytes */
offset += 16 - 1;
offset &= ~(16 - 1);
/* change sign? */
m->stack_offset = offset;
if (sig->call_convention == MONO_CALL_VARARG) {
CallInfo *cinfo = get_call_info (m->method->signature);
m->sig_cookie = cinfo->sig_cookie.offset;
g_free(cinfo);
}
}
void
mono_arch_create_vars (MonoCompile *cfg)
{
MonoMethodSignature *sig = mono_method_signature_internal (cfg->method);
if (MONO_TYPE_ISSTRUCT (sig->ret)) {
cfg->vret_addr = mono_compile_create_var (cfg, mono_get_int_type (), OP_ARG);
}
}
/* Fixme: we need an alignment solution for enter_method and mono_arch_call_opcode,
* currently alignment in mono_arch_call_opcode is computed without arch_get_argument_info
*/
static void
emit_sig_cookie (MonoCompile *cfg, MonoCallInst *call, CallInfo *cinfo)
{
int sig_reg = mono_alloc_ireg (cfg);
/* FIXME: Add support for signature tokens to AOT */
cfg->disable_aot = TRUE;
MONO_EMIT_NEW_ICONST (cfg, sig_reg, (gulong)call->signature);
MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STORE_MEMBASE_REG,
ppc_r1, cinfo->sig_cookie.offset, sig_reg);
}
void
mono_arch_emit_call (MonoCompile *cfg, MonoCallInst *call)
{
MonoInst *in, *ins;
MonoMethodSignature *sig;
int i, n;
CallInfo *cinfo;
sig = call->signature;
n = sig->param_count + sig->hasthis;
cinfo = get_call_info (sig);
for (i = 0; i < n; ++i) {
ArgInfo *ainfo = cinfo->args + i;
MonoType *t;
if (i >= sig->hasthis)
t = sig->params [i - sig->hasthis];
else
t = mono_get_int_type ();
t = mini_get_underlying_type (t);
if (!sig->pinvoke && (sig->call_convention == MONO_CALL_VARARG) && (i == sig->sentinelpos))
emit_sig_cookie (cfg, call, cinfo);
in = call->args [i];
if (ainfo->regtype == RegTypeGeneral) {
#ifndef __mono_ppc64__
if (!m_type_is_byref (t) && ((t->type == MONO_TYPE_I8) || (t->type == MONO_TYPE_U8))) {
MONO_INST_NEW (cfg, ins, OP_MOVE);
ins->dreg = mono_alloc_ireg (cfg);
ins->sreg1 = MONO_LVREG_LS (in->dreg);
MONO_ADD_INS (cfg->cbb, ins);
mono_call_inst_add_outarg_reg (cfg, call, ins->dreg, ainfo->reg + 1, FALSE);
MONO_INST_NEW (cfg, ins, OP_MOVE);
ins->dreg = mono_alloc_ireg (cfg);
ins->sreg1 = MONO_LVREG_MS (in->dreg);
MONO_ADD_INS (cfg->cbb, ins);
mono_call_inst_add_outarg_reg (cfg, call, ins->dreg, ainfo->reg, FALSE);
} else
#endif
{
MONO_INST_NEW (cfg, ins, OP_MOVE);
ins->dreg = mono_alloc_ireg (cfg);
ins->sreg1 = in->dreg;
MONO_ADD_INS (cfg->cbb, ins);
mono_call_inst_add_outarg_reg (cfg, call, ins->dreg, ainfo->reg, FALSE);
}
} else if (ainfo->regtype == RegTypeStructByAddr) {
MONO_INST_NEW (cfg, ins, OP_OUTARG_VT);
ins->opcode = OP_OUTARG_VT;
ins->sreg1 = in->dreg;
ins->klass = in->klass;
ins->inst_p0 = call;
ins->inst_p1 = mono_mempool_alloc (cfg->mempool, sizeof (ArgInfo));
memcpy (ins->inst_p1, ainfo, sizeof (ArgInfo));
MONO_ADD_INS (cfg->cbb, ins);
} else if (ainfo->regtype == RegTypeStructByVal) {
/* this is further handled in mono_arch_emit_outarg_vt () */
MONO_INST_NEW (cfg, ins, OP_OUTARG_VT);
ins->opcode = OP_OUTARG_VT;
ins->sreg1 = in->dreg;
ins->klass = in->klass;
ins->inst_p0 = call;
ins->inst_p1 = mono_mempool_alloc (cfg->mempool, sizeof (ArgInfo));
memcpy (ins->inst_p1, ainfo, sizeof (ArgInfo));
MONO_ADD_INS (cfg->cbb, ins);
} else if (ainfo->regtype == RegTypeFPStructByVal) {
/* this is further handled in mono_arch_emit_outarg_vt () */
MONO_INST_NEW (cfg, ins, OP_OUTARG_VT);
ins->opcode = OP_OUTARG_VT;
ins->sreg1 = in->dreg;
ins->klass = in->klass;
ins->inst_p0 = call;
ins->inst_p1 = mono_mempool_alloc (cfg->mempool, sizeof (ArgInfo));
memcpy (ins->inst_p1, ainfo, sizeof (ArgInfo));
MONO_ADD_INS (cfg->cbb, ins);
cfg->flags |= MONO_CFG_HAS_FPOUT;
} else if (ainfo->regtype == RegTypeBase) {
if (!m_type_is_byref (t) && ((t->type == MONO_TYPE_I8) || (t->type == MONO_TYPE_U8))) {
MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STOREI8_MEMBASE_REG, ppc_r1, ainfo->offset, in->dreg);
} else if (!m_type_is_byref (t) && ((t->type == MONO_TYPE_R4) || (t->type == MONO_TYPE_R8))) {
if (t->type == MONO_TYPE_R8)
MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STORER8_MEMBASE_REG, ppc_r1, ainfo->offset, in->dreg);
else
MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STORER4_MEMBASE_REG, ppc_r1, ainfo->offset, in->dreg);
} else {
MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STORE_MEMBASE_REG, ppc_r1, ainfo->offset, in->dreg);
}
} else if (ainfo->regtype == RegTypeFP) {
if (t->type == MONO_TYPE_VALUETYPE) {
/* this is further handled in mono_arch_emit_outarg_vt () */
MONO_INST_NEW (cfg, ins, OP_OUTARG_VT);
ins->opcode = OP_OUTARG_VT;
ins->sreg1 = in->dreg;
ins->klass = in->klass;
ins->inst_p0 = call;
ins->inst_p1 = mono_mempool_alloc (cfg->mempool, sizeof (ArgInfo));
memcpy (ins->inst_p1, ainfo, sizeof (ArgInfo));
MONO_ADD_INS (cfg->cbb, ins);
cfg->flags |= MONO_CFG_HAS_FPOUT;
} else {
int dreg = mono_alloc_freg (cfg);
if (ainfo->size == 4) {
MONO_EMIT_NEW_UNALU (cfg, OP_FCONV_TO_R4, dreg, in->dreg);
} else {
MONO_INST_NEW (cfg, ins, OP_FMOVE);
ins->dreg = dreg;
ins->sreg1 = in->dreg;
MONO_ADD_INS (cfg->cbb, ins);
}
mono_call_inst_add_outarg_reg (cfg, call, dreg, ainfo->reg, TRUE);
cfg->flags |= MONO_CFG_HAS_FPOUT;
}
} else {
g_assert_not_reached ();
}
}
/* Emit the signature cookie in the case that there is no
additional argument */
if (!sig->pinvoke && (sig->call_convention == MONO_CALL_VARARG) && (n == sig->sentinelpos))
emit_sig_cookie (cfg, call, cinfo);
if (cinfo->struct_ret) {
MonoInst *vtarg;
MONO_INST_NEW (cfg, vtarg, OP_MOVE);
vtarg->sreg1 = call->vret_var->dreg;
vtarg->dreg = mono_alloc_preg (cfg);
MONO_ADD_INS (cfg->cbb, vtarg);
mono_call_inst_add_outarg_reg (cfg, call, vtarg->dreg, cinfo->struct_ret, FALSE);
}
call->stack_usage = cinfo->stack_usage;
cfg->param_area = MAX (PPC_MINIMAL_PARAM_AREA_SIZE, MAX (cfg->param_area, cinfo->stack_usage));
cfg->flags |= MONO_CFG_HAS_CALLS;
g_free (cinfo);
}
#ifndef DISABLE_JIT
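/*
 * mono_arch_emit_outarg_vt:
 *
 *   Emit the instructions which pass the valuetype in SRC as an
 * argument: the part which fits in registers is loaded register by
 * register, while the overflow part, if any, is copied to the stack
 * parameter area with a memcpy.
 */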
void
mono_arch_emit_outarg_vt (MonoCompile *cfg, MonoInst *ins, MonoInst *src)
{
MonoCallInst *call = (MonoCallInst*)ins->inst_p0;
ArgInfo *ainfo = (ArgInfo*)ins->inst_p1;
int ovf_size = ainfo->vtsize;
int doffset = ainfo->offset;
int i, soffset, dreg;
if (ainfo->regtype == RegTypeStructByVal) {
#ifdef __APPLE__
guint32 size = 0;
#endif
soffset = 0;
#ifdef __APPLE__
/*
		 * Darwin pinvokes need special handling for 1
		 * and 2 byte arguments
*/
g_assert (ins->klass);
if (call->signature->pinvoke && !call->signature->marshalling_disabled)
size = mono_class_native_size (ins->klass, NULL);
if (size == 2 || size == 1) {
int tmpr = mono_alloc_ireg (cfg);
if (size == 1)
MONO_EMIT_NEW_LOAD_MEMBASE_OP (cfg, OP_LOADI1_MEMBASE, tmpr, src->dreg, soffset);
else
MONO_EMIT_NEW_LOAD_MEMBASE_OP (cfg, OP_LOADI2_MEMBASE, tmpr, src->dreg, soffset);
dreg = mono_alloc_ireg (cfg);
MONO_EMIT_NEW_UNALU (cfg, OP_MOVE, dreg, tmpr);
mono_call_inst_add_outarg_reg (cfg, call, dreg, ainfo->reg, FALSE);
} else
#endif
for (i = 0; i < ainfo->vtregs; ++i) {
dreg = mono_alloc_ireg (cfg);
#if G_BYTE_ORDER == G_BIG_ENDIAN
int antipadding = 0;
if (ainfo->bytes) {
g_assert (i == 0);
antipadding = sizeof (target_mgreg_t) - ainfo->bytes;
}
MONO_EMIT_NEW_LOAD_MEMBASE (cfg, dreg, src->dreg, soffset);
if (antipadding)
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_SHR_UN_IMM, dreg, dreg, antipadding * 8);
#else
MONO_EMIT_NEW_LOAD_MEMBASE (cfg, dreg, src->dreg, soffset);
#endif
mono_call_inst_add_outarg_reg (cfg, call, dreg, ainfo->reg + i, FALSE);
soffset += sizeof (target_mgreg_t);
}
if (ovf_size != 0)
mini_emit_memcpy (cfg, ppc_r1, doffset + soffset, src->dreg, soffset, ovf_size * sizeof (target_mgreg_t), TARGET_SIZEOF_VOID_P);
} else if (ainfo->regtype == RegTypeFPStructByVal) {
soffset = 0;
for (i = 0; i < ainfo->vtregs; ++i) {
int tmpr = mono_alloc_freg (cfg);
if (ainfo->size == 4)
MONO_EMIT_NEW_LOAD_MEMBASE_OP (cfg, OP_LOADR4_MEMBASE, tmpr, src->dreg, soffset);
else // ==8
MONO_EMIT_NEW_LOAD_MEMBASE_OP (cfg, OP_LOADR8_MEMBASE, tmpr, src->dreg, soffset);
dreg = mono_alloc_freg (cfg);
MONO_EMIT_NEW_UNALU (cfg, OP_FMOVE, dreg, tmpr);
mono_call_inst_add_outarg_reg (cfg, call, dreg, ainfo->reg+i, TRUE);
soffset += ainfo->size;
}
if (ovf_size != 0)
mini_emit_memcpy (cfg, ppc_r1, doffset + soffset, src->dreg, soffset, ovf_size * sizeof (target_mgreg_t), TARGET_SIZEOF_VOID_P);
} else if (ainfo->regtype == RegTypeFP) {
int tmpr = mono_alloc_freg (cfg);
if (ainfo->size == 4)
MONO_EMIT_NEW_LOAD_MEMBASE_OP (cfg, OP_LOADR4_MEMBASE, tmpr, src->dreg, 0);
else
MONO_EMIT_NEW_LOAD_MEMBASE_OP (cfg, OP_LOADR8_MEMBASE, tmpr, src->dreg, 0);
dreg = mono_alloc_freg (cfg);
MONO_EMIT_NEW_UNALU (cfg, OP_FMOVE, dreg, tmpr);
mono_call_inst_add_outarg_reg (cfg, call, dreg, ainfo->reg, TRUE);
} else {
MonoInst *vtcopy = mono_compile_create_var (cfg, m_class_get_byval_arg (src->klass), OP_LOCAL);
MonoInst *load;
guint32 size;
/* FIXME: alignment? */
if (call->signature->pinvoke && !call->signature->marshalling_disabled) {
size = mono_type_native_stack_size (m_class_get_byval_arg (src->klass), NULL);
vtcopy->backend.is_pinvoke = 1;
} else {
size = mini_type_stack_size (m_class_get_byval_arg (src->klass), NULL);
}
if (size > 0)
g_assert (ovf_size > 0);
EMIT_NEW_VARLOADA (cfg, load, vtcopy, vtcopy->inst_vtype);
mini_emit_memcpy (cfg, load->dreg, 0, src->dreg, 0, size, TARGET_SIZEOF_VOID_P);
if (ainfo->offset)
MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STORE_MEMBASE_REG, ppc_r1, ainfo->offset, load->dreg);
else
mono_call_inst_add_outarg_reg (cfg, call, load->dreg, ainfo->reg, FALSE);
}
}
void
mono_arch_emit_setret (MonoCompile *cfg, MonoMethod *method, MonoInst *val)
{
MonoType *ret = mini_get_underlying_type (mono_method_signature_internal (method)->ret);
	if (!m_type_is_byref (ret)) {
#ifndef __mono_ppc64__
if (ret->type == MONO_TYPE_I8 || ret->type == MONO_TYPE_U8) {
MonoInst *ins;
MONO_INST_NEW (cfg, ins, OP_SETLRET);
ins->sreg1 = MONO_LVREG_LS (val->dreg);
ins->sreg2 = MONO_LVREG_MS (val->dreg);
MONO_ADD_INS (cfg->cbb, ins);
return;
}
#endif
if (ret->type == MONO_TYPE_R8 || ret->type == MONO_TYPE_R4) {
MONO_EMIT_NEW_UNALU (cfg, OP_FMOVE, cfg->ret->dreg, val->dreg);
return;
}
}
MONO_EMIT_NEW_UNALU (cfg, OP_MOVE, cfg->ret->dreg, val->dreg);
}
gboolean
mono_arch_is_inst_imm (int opcode, int imm_opcode, gint64 imm)
{
return TRUE;
}
#endif /* DISABLE_JIT */
/*
* Conditional branches have a small offset, so if it is likely overflowed,
* we do a branch to the end of the method (uncond branches have much larger
* offsets) where we perform the conditional and jump back unconditionally.
* It's slightly slower, since we add two uncond branches, but it's very simple
* with the current patch implementation and such large methods are likely not
* going to be perf critical anyway.
*/
typedef struct {
union {
MonoBasicBlock *bb;
const char *exception;
} data;
guint32 ip_offset;
guint16 b0_cond;
guint16 b1_cond;
} MonoOvfJump;
#define EMIT_COND_BRANCH_FLAGS(ins,b0,b1) \
if (0 && ins->inst_true_bb->native_offset) { \
ppc_bc (code, (b0), (b1), (code - cfg->native_code + ins->inst_true_bb->native_offset) & 0xffff); \
} else { \
int br_disp = ins->inst_true_bb->max_offset - offset; \
if (!ppc_is_imm16 (br_disp + 8 * 1024) || !ppc_is_imm16 (br_disp - 8 * 1024)) { \
MonoOvfJump *ovfj = mono_mempool_alloc (cfg->mempool, sizeof (MonoOvfJump)); \
ovfj->data.bb = ins->inst_true_bb; \
ovfj->ip_offset = 0; \
ovfj->b0_cond = (b0); \
ovfj->b1_cond = (b1); \
mono_add_patch_info (cfg, code - cfg->native_code, MONO_PATCH_INFO_BB_OVF, ovfj); \
ppc_b (code, 0); \
} else { \
mono_add_patch_info (cfg, code - cfg->native_code, MONO_PATCH_INFO_BB, ins->inst_true_bb); \
ppc_bc (code, (b0), (b1), 0); \
} \
}
#define EMIT_COND_BRANCH(ins,cond) EMIT_COND_BRANCH_FLAGS(ins, branch_b0_table [(cond)], branch_b1_table [(cond)])
/* emit an exception if condition is fail
*
* We assign the extra code used to throw the implicit exceptions
* to cfg->bb_exit as far as the big branch handling is concerned
*/
#define EMIT_COND_SYSTEM_EXCEPTION_FLAGS(b0,b1,exc_name) \
do { \
int br_disp = cfg->bb_exit->max_offset - offset; \
	if (!ppc_is_imm16 (br_disp + 1024) || !ppc_is_imm16 (br_disp - 1024)) {	\
MonoOvfJump *ovfj = mono_mempool_alloc (cfg->mempool, sizeof (MonoOvfJump)); \
ovfj->data.exception = (exc_name); \
ovfj->ip_offset = code - cfg->native_code; \
ovfj->b0_cond = (b0); \
ovfj->b1_cond = (b1); \
mono_add_patch_info (cfg, code - cfg->native_code, MONO_PATCH_INFO_EXC_OVF, ovfj); \
ppc_bl (code, 0); \
cfg->bb_exit->max_offset += 24; \
} else { \
mono_add_patch_info (cfg, code - cfg->native_code, \
MONO_PATCH_INFO_EXC, exc_name); \
ppc_bcl (code, (b0), (b1), 0); \
} \
} while (0);
#define EMIT_COND_SYSTEM_EXCEPTION(cond,exc_name) EMIT_COND_SYSTEM_EXCEPTION_FLAGS(branch_b0_table [(cond)], branch_b1_table [(cond)], (exc_name))
void
mono_arch_peephole_pass_1 (MonoCompile *cfg, MonoBasicBlock *bb)
{
}
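/*
 * normalize_opcode:
 *
 *   Map the 32 bit and 64 bit variants of some opcodes to a common
 * word sized opcode, so the peephole patterns below only need to be
 * written once.
 */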
static int
normalize_opcode (int opcode)
{
switch (opcode) {
#ifndef MONO_ARCH_ILP32
case MONO_PPC_32_64_CASE (OP_LOADI4_MEMBASE, OP_LOADI8_MEMBASE):
return OP_LOAD_MEMBASE;
case MONO_PPC_32_64_CASE (OP_LOADI4_MEMINDEX, OP_LOADI8_MEMINDEX):
return OP_LOAD_MEMINDEX;
case MONO_PPC_32_64_CASE (OP_STOREI4_MEMBASE_REG, OP_STOREI8_MEMBASE_REG):
return OP_STORE_MEMBASE_REG;
case MONO_PPC_32_64_CASE (OP_STOREI4_MEMBASE_IMM, OP_STOREI8_MEMBASE_IMM):
return OP_STORE_MEMBASE_IMM;
case MONO_PPC_32_64_CASE (OP_STOREI4_MEMINDEX, OP_STOREI8_MEMINDEX):
return OP_STORE_MEMINDEX;
#endif
case MONO_PPC_32_64_CASE (OP_ISHR_IMM, OP_LSHR_IMM):
return OP_SHR_IMM;
case MONO_PPC_32_64_CASE (OP_ISHR_UN_IMM, OP_LSHR_UN_IMM):
return OP_SHR_UN_IMM;
default:
return opcode;
}
}
void
mono_arch_peephole_pass_2 (MonoCompile *cfg, MonoBasicBlock *bb)
{
MonoInst *ins, *n, *last_ins = NULL;
MONO_BB_FOR_EACH_INS_SAFE (bb, n, ins) {
switch (normalize_opcode (ins->opcode)) {
case OP_MUL_IMM:
/* remove unnecessary multiplication with 1 */
if (ins->inst_imm == 1) {
if (ins->dreg != ins->sreg1) {
ins->opcode = OP_MOVE;
} else {
MONO_DELETE_INS (bb, ins);
continue;
}
			} else if (ins->inst_imm > 0) {
int power2 = mono_is_power_of_two (ins->inst_imm);
if (power2 > 0) {
ins->opcode = OP_SHL_IMM;
ins->inst_imm = power2;
}
}
break;
case OP_LOAD_MEMBASE:
/*
* OP_STORE_MEMBASE_REG reg, offset(basereg)
* OP_LOAD_MEMBASE offset(basereg), reg
*/
if (last_ins && normalize_opcode (last_ins->opcode) == OP_STORE_MEMBASE_REG &&
ins->inst_basereg == last_ins->inst_destbasereg &&
ins->inst_offset == last_ins->inst_offset) {
if (ins->dreg == last_ins->sreg1) {
MONO_DELETE_INS (bb, ins);
continue;
} else {
//static int c = 0; printf ("MATCHX %s %d\n", cfg->method->name,c++);
ins->opcode = OP_MOVE;
ins->sreg1 = last_ins->sreg1;
}
/*
* Note: reg1 must be different from the basereg in the second load
* OP_LOAD_MEMBASE offset(basereg), reg1
* OP_LOAD_MEMBASE offset(basereg), reg2
* -->
* OP_LOAD_MEMBASE offset(basereg), reg1
* OP_MOVE reg1, reg2
*/
} else if (last_ins && normalize_opcode (last_ins->opcode) == OP_LOAD_MEMBASE &&
ins->inst_basereg != last_ins->dreg &&
ins->inst_basereg == last_ins->inst_basereg &&
ins->inst_offset == last_ins->inst_offset) {
if (ins->dreg == last_ins->dreg) {
MONO_DELETE_INS (bb, ins);
continue;
} else {
ins->opcode = OP_MOVE;
ins->sreg1 = last_ins->dreg;
}
//g_assert_not_reached ();
#if 0
/*
* OP_STORE_MEMBASE_IMM imm, offset(basereg)
* OP_LOAD_MEMBASE offset(basereg), reg
* -->
* OP_STORE_MEMBASE_IMM imm, offset(basereg)
* OP_ICONST reg, imm
*/
} else if (last_ins && normalize_opcode (last_ins->opcode) == OP_STORE_MEMBASE_IMM &&
ins->inst_basereg == last_ins->inst_destbasereg &&
ins->inst_offset == last_ins->inst_offset) {
//static int c = 0; printf ("MATCHX %s %d\n", cfg->method->name,c++);
ins->opcode = OP_ICONST;
ins->inst_c0 = last_ins->inst_imm;
g_assert_not_reached (); // check this rule
#endif
}
break;
case OP_LOADU1_MEMBASE:
case OP_LOADI1_MEMBASE:
if (last_ins && (last_ins->opcode == OP_STOREI1_MEMBASE_REG) &&
ins->inst_basereg == last_ins->inst_destbasereg &&
ins->inst_offset == last_ins->inst_offset) {
ins->opcode = (ins->opcode == OP_LOADI1_MEMBASE) ? OP_ICONV_TO_I1 : OP_ICONV_TO_U1;
ins->sreg1 = last_ins->sreg1;
}
break;
case OP_LOADU2_MEMBASE:
case OP_LOADI2_MEMBASE:
if (last_ins && (last_ins->opcode == OP_STOREI2_MEMBASE_REG) &&
ins->inst_basereg == last_ins->inst_destbasereg &&
ins->inst_offset == last_ins->inst_offset) {
ins->opcode = (ins->opcode == OP_LOADI2_MEMBASE) ? OP_ICONV_TO_I2 : OP_ICONV_TO_U2;
ins->sreg1 = last_ins->sreg1;
}
break;
#ifdef __mono_ppc64__
case OP_LOADU4_MEMBASE:
case OP_LOADI4_MEMBASE:
if (last_ins && (last_ins->opcode == OP_STOREI4_MEMBASE_REG) &&
ins->inst_basereg == last_ins->inst_destbasereg &&
ins->inst_offset == last_ins->inst_offset) {
ins->opcode = (ins->opcode == OP_LOADI4_MEMBASE) ? OP_ICONV_TO_I4 : OP_ICONV_TO_U4;
ins->sreg1 = last_ins->sreg1;
}
break;
#endif
case OP_MOVE:
ins->opcode = OP_MOVE;
/*
* OP_MOVE reg, reg
*/
if (ins->dreg == ins->sreg1) {
MONO_DELETE_INS (bb, ins);
continue;
}
/*
* OP_MOVE sreg, dreg
* OP_MOVE dreg, sreg
*/
if (last_ins && last_ins->opcode == OP_MOVE &&
ins->sreg1 == last_ins->dreg &&
ins->dreg == last_ins->sreg1) {
MONO_DELETE_INS (bb, ins);
continue;
}
break;
}
last_ins = ins;
ins = ins->next;
}
bb->last_ins = last_ins;
}
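/*
 * The int->float conversions below use the classic double-word trick:
 * storing 0x43300000 in the most significant word and the 32 bit
 * integer in the least significant word yields the bit pattern of
 * 2^52 + n as an IEEE double, so subtracting 2^52 (the adjust_val
 * constants) leaves (double)n without needing a direct conversion
 * instruction.
 */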
void
mono_arch_decompose_opts (MonoCompile *cfg, MonoInst *ins)
{
switch (ins->opcode) {
case OP_ICONV_TO_R_UN: {
// This value is OK as-is for both big and little endian because of how it is stored
static const guint64 adjust_val = 0x4330000000000000ULL;
int msw_reg = mono_alloc_ireg (cfg);
int adj_reg = mono_alloc_freg (cfg);
int tmp_reg = mono_alloc_freg (cfg);
int basereg = ppc_sp;
int offset = -8;
MONO_EMIT_NEW_ICONST (cfg, msw_reg, 0x43300000);
if (!ppc_is_imm16 (offset + 4)) {
basereg = mono_alloc_ireg (cfg);
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_IADD_IMM, basereg, cfg->frame_reg, offset);
}
#if G_BYTE_ORDER == G_BIG_ENDIAN
MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STOREI4_MEMBASE_REG, basereg, offset, msw_reg);
MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STOREI4_MEMBASE_REG, basereg, offset + 4, ins->sreg1);
#else
// For little endian the words are reversed
MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STOREI4_MEMBASE_REG, basereg, offset + 4, msw_reg);
MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STOREI4_MEMBASE_REG, basereg, offset, ins->sreg1);
#endif
MONO_EMIT_NEW_LOAD_R8 (cfg, adj_reg, &adjust_val);
MONO_EMIT_NEW_LOAD_MEMBASE_OP (cfg, OP_LOADR8_MEMBASE, tmp_reg, basereg, offset);
MONO_EMIT_NEW_BIALU (cfg, OP_FSUB, ins->dreg, tmp_reg, adj_reg);
ins->opcode = OP_NOP;
break;
}
#ifndef __mono_ppc64__
case OP_ICONV_TO_R4:
case OP_ICONV_TO_R8: {
/* If we have a PPC_FEATURE_64 machine we can avoid
this and use the fcfid instruction. Otherwise
		   on an old 32-bit chip we have to do this the
hard way. */
if (!(cpu_hw_caps & PPC_ISA_64)) {
/* FIXME: change precision for CEE_CONV_R4 */
static const guint64 adjust_val = 0x4330000080000000ULL;
int msw_reg = mono_alloc_ireg (cfg);
int xored = mono_alloc_ireg (cfg);
int adj_reg = mono_alloc_freg (cfg);
int tmp_reg = mono_alloc_freg (cfg);
int basereg = ppc_sp;
int offset = -8;
if (!ppc_is_imm16 (offset + 4)) {
basereg = mono_alloc_ireg (cfg);
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_IADD_IMM, basereg, cfg->frame_reg, offset);
}
MONO_EMIT_NEW_ICONST (cfg, msw_reg, 0x43300000);
MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STOREI4_MEMBASE_REG, basereg, offset, msw_reg);
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_XOR_IMM, xored, ins->sreg1, 0x80000000);
MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STOREI4_MEMBASE_REG, basereg, offset + 4, xored);
MONO_EMIT_NEW_LOAD_R8 (cfg, adj_reg, (gpointer)&adjust_val);
MONO_EMIT_NEW_LOAD_MEMBASE_OP (cfg, OP_LOADR8_MEMBASE, tmp_reg, basereg, offset);
MONO_EMIT_NEW_BIALU (cfg, OP_FSUB, ins->dreg, tmp_reg, adj_reg);
if (ins->opcode == OP_ICONV_TO_R4)
MONO_EMIT_NEW_UNALU (cfg, OP_FCONV_TO_R4, ins->dreg, ins->dreg);
ins->opcode = OP_NOP;
}
break;
}
#endif
case OP_CKFINITE: {
int msw_reg = mono_alloc_ireg (cfg);
int basereg = ppc_sp;
int offset = -8;
if (!ppc_is_imm16 (offset + 4)) {
basereg = mono_alloc_ireg (cfg);
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_IADD_IMM, basereg, cfg->frame_reg, offset);
}
MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STORER8_MEMBASE_REG, basereg, offset, ins->sreg1);
#if G_BYTE_ORDER == G_BIG_ENDIAN
MONO_EMIT_NEW_LOAD_MEMBASE_OP (cfg, OP_LOADI4_MEMBASE, msw_reg, basereg, offset);
#else
MONO_EMIT_NEW_LOAD_MEMBASE_OP (cfg, OP_LOADI4_MEMBASE, msw_reg, basereg, offset+4);
#endif
MONO_EMIT_NEW_UNALU (cfg, OP_PPC_CHECK_FINITE, -1, msw_reg);
MONO_EMIT_NEW_UNALU (cfg, OP_FMOVE, ins->dreg, ins->sreg1);
ins->opcode = OP_NOP;
break;
}
#ifdef __mono_ppc64__
case OP_IADD_OVF:
case OP_IADD_OVF_UN:
case OP_ISUB_OVF: {
int shifted1_reg = mono_alloc_ireg (cfg);
int shifted2_reg = mono_alloc_ireg (cfg);
int result_shifted_reg = mono_alloc_ireg (cfg);
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_SHL_IMM, shifted1_reg, ins->sreg1, 32);
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_SHL_IMM, shifted2_reg, ins->sreg2, 32);
MONO_EMIT_NEW_BIALU (cfg, ins->opcode, result_shifted_reg, shifted1_reg, shifted2_reg);
if (ins->opcode == OP_IADD_OVF_UN)
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_SHR_UN_IMM, ins->dreg, result_shifted_reg, 32);
else
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_SHR_IMM, ins->dreg, result_shifted_reg, 32);
ins->opcode = OP_NOP;
break;
}
#endif
default:
break;
}
}
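/*
 * mono_arch_decompose_long_opts:
 *
 *   On 32 bit ppc, decompose 64 bit operations into pairs of 32 bit
 * operations on the low/high word vregs, using the carry-propagating
 * ppc opcodes.
 */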
void
mono_arch_decompose_long_opts (MonoCompile *cfg, MonoInst *ins)
{
switch (ins->opcode) {
case OP_LADD_OVF:
/* ADC sets the condition code */
MONO_EMIT_NEW_BIALU (cfg, OP_ADDCC, MONO_LVREG_LS (ins->dreg), MONO_LVREG_LS (ins->sreg1), MONO_LVREG_LS (ins->sreg2));
MONO_EMIT_NEW_BIALU (cfg, OP_ADD_OVF_CARRY, MONO_LVREG_MS (ins->dreg), MONO_LVREG_MS (ins->sreg1), MONO_LVREG_MS (ins->sreg2));
NULLIFY_INS (ins);
break;
case OP_LADD_OVF_UN:
/* ADC sets the condition code */
MONO_EMIT_NEW_BIALU (cfg, OP_ADDCC, MONO_LVREG_LS (ins->dreg), MONO_LVREG_LS (ins->sreg1), MONO_LVREG_LS (ins->sreg2));
MONO_EMIT_NEW_BIALU (cfg, OP_ADD_OVF_UN_CARRY, MONO_LVREG_MS (ins->dreg), MONO_LVREG_MS (ins->sreg1), MONO_LVREG_MS (ins->sreg2));
NULLIFY_INS (ins);
break;
case OP_LSUB_OVF:
/* SBB sets the condition code */
MONO_EMIT_NEW_BIALU (cfg, OP_SUBCC, MONO_LVREG_LS (ins->dreg), MONO_LVREG_LS (ins->sreg1), MONO_LVREG_LS (ins->sreg2));
MONO_EMIT_NEW_BIALU (cfg, OP_SUB_OVF_CARRY, MONO_LVREG_MS (ins->dreg), MONO_LVREG_MS (ins->sreg1), MONO_LVREG_MS (ins->sreg2));
NULLIFY_INS (ins);
break;
case OP_LSUB_OVF_UN:
/* SBB sets the condition code */
MONO_EMIT_NEW_BIALU (cfg, OP_SUBCC, MONO_LVREG_LS (ins->dreg), MONO_LVREG_LS (ins->sreg1), MONO_LVREG_LS (ins->sreg2));
MONO_EMIT_NEW_BIALU (cfg, OP_SUB_OVF_UN_CARRY, MONO_LVREG_MS (ins->dreg), MONO_LVREG_MS (ins->sreg1), MONO_LVREG_MS (ins->sreg2));
NULLIFY_INS (ins);
break;
case OP_LNEG:
/* From gcc generated code */
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_PPC_SUBFIC, MONO_LVREG_LS (ins->dreg), MONO_LVREG_LS (ins->sreg1), 0);
MONO_EMIT_NEW_UNALU (cfg, OP_PPC_SUBFZE, MONO_LVREG_MS (ins->dreg), MONO_LVREG_MS (ins->sreg1));
NULLIFY_INS (ins);
break;
default:
break;
}
}
/*
* the branch_b0_table should maintain the order of these
* opcodes.
case CEE_BEQ:
case CEE_BGE:
case CEE_BGT:
case CEE_BLE:
case CEE_BLT:
case CEE_BNE_UN:
case CEE_BGE_UN:
case CEE_BGT_UN:
case CEE_BLE_UN:
case CEE_BLT_UN:
*/
static const guchar
branch_b0_table [] = {
PPC_BR_TRUE,
PPC_BR_FALSE,
PPC_BR_TRUE,
PPC_BR_FALSE,
PPC_BR_TRUE,
PPC_BR_FALSE,
PPC_BR_FALSE,
PPC_BR_TRUE,
PPC_BR_FALSE,
PPC_BR_TRUE
};
static const guchar
branch_b1_table [] = {
PPC_BR_EQ,
PPC_BR_LT,
PPC_BR_GT,
PPC_BR_GT,
PPC_BR_LT,
PPC_BR_EQ,
PPC_BR_LT,
PPC_BR_GT,
PPC_BR_GT,
PPC_BR_LT
};
#define NEW_INS(cfg,dest,op) do { \
MONO_INST_NEW((cfg), (dest), (op)); \
mono_bblock_insert_after_ins (bb, last_ins, (dest)); \
} while (0)
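/*
 * map_to_reg_reg_op:
 *
 *   Return the register-register (or register-indexed) form of an
 * opcode taking an immediate or a membase operand; _MEMBASE_IMM stores
 * map to their _MEMBASE_REG form. Used by the lowering pass when an
 * immediate or offset doesn't fit in 16 bits.
 */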
static int
map_to_reg_reg_op (int op)
{
switch (op) {
case OP_ADD_IMM:
return OP_IADD;
case OP_SUB_IMM:
return OP_ISUB;
case OP_AND_IMM:
return OP_IAND;
case OP_COMPARE_IMM:
return OP_COMPARE;
case OP_ICOMPARE_IMM:
return OP_ICOMPARE;
case OP_LCOMPARE_IMM:
return OP_LCOMPARE;
case OP_ADDCC_IMM:
return OP_IADDCC;
case OP_ADC_IMM:
return OP_IADC;
case OP_SUBCC_IMM:
return OP_ISUBCC;
case OP_SBB_IMM:
return OP_ISBB;
case OP_OR_IMM:
return OP_IOR;
case OP_XOR_IMM:
return OP_IXOR;
case OP_MUL_IMM:
return OP_IMUL;
case OP_LMUL_IMM:
return OP_LMUL;
case OP_LOAD_MEMBASE:
return OP_LOAD_MEMINDEX;
case OP_LOADI4_MEMBASE:
return OP_LOADI4_MEMINDEX;
case OP_LOADU4_MEMBASE:
return OP_LOADU4_MEMINDEX;
case OP_LOADI8_MEMBASE:
return OP_LOADI8_MEMINDEX;
case OP_LOADU1_MEMBASE:
return OP_LOADU1_MEMINDEX;
case OP_LOADI2_MEMBASE:
return OP_LOADI2_MEMINDEX;
case OP_LOADU2_MEMBASE:
return OP_LOADU2_MEMINDEX;
case OP_LOADI1_MEMBASE:
return OP_LOADI1_MEMINDEX;
case OP_LOADR4_MEMBASE:
return OP_LOADR4_MEMINDEX;
case OP_LOADR8_MEMBASE:
return OP_LOADR8_MEMINDEX;
case OP_STOREI1_MEMBASE_REG:
return OP_STOREI1_MEMINDEX;
case OP_STOREI2_MEMBASE_REG:
return OP_STOREI2_MEMINDEX;
case OP_STOREI4_MEMBASE_REG:
return OP_STOREI4_MEMINDEX;
case OP_STOREI8_MEMBASE_REG:
return OP_STOREI8_MEMINDEX;
case OP_STORE_MEMBASE_REG:
return OP_STORE_MEMINDEX;
case OP_STORER4_MEMBASE_REG:
return OP_STORER4_MEMINDEX;
case OP_STORER8_MEMBASE_REG:
return OP_STORER8_MEMINDEX;
case OP_STORE_MEMBASE_IMM:
return OP_STORE_MEMBASE_REG;
case OP_STOREI1_MEMBASE_IMM:
return OP_STOREI1_MEMBASE_REG;
case OP_STOREI2_MEMBASE_IMM:
return OP_STOREI2_MEMBASE_REG;
case OP_STOREI4_MEMBASE_IMM:
return OP_STOREI4_MEMBASE_REG;
case OP_STOREI8_MEMBASE_IMM:
return OP_STOREI8_MEMBASE_REG;
}
if (mono_op_imm_to_op (op) == -1)
g_error ("mono_op_imm_to_op failed for %s\n", mono_inst_name (op));
return mono_op_imm_to_op (op);
}
//#define map_to_reg_reg_op(op) (cfg->new_ir? mono_op_imm_to_op (op): map_to_reg_reg_op (op))
#define compare_opcode_is_unsigned(opcode) \
(((opcode) >= CEE_BNE_UN && (opcode) <= CEE_BLT_UN) || \
((opcode) >= OP_IBNE_UN && (opcode) <= OP_IBLT_UN) || \
((opcode) >= OP_LBNE_UN && (opcode) <= OP_LBLT_UN) || \
((opcode) >= OP_COND_EXC_NE_UN && (opcode) <= OP_COND_EXC_LT_UN) || \
((opcode) >= OP_COND_EXC_INE_UN && (opcode) <= OP_COND_EXC_ILT_UN) || \
((opcode) == OP_CLT_UN || (opcode) == OP_CGT_UN || \
(opcode) == OP_ICLT_UN || (opcode) == OP_ICGT_UN || \
(opcode) == OP_LCLT_UN || (opcode) == OP_LCGT_UN))
/*
 * Rewrite the instructions that can't be encoded directly into
 * equivalent sequences of simple instructions with no special
 * register requirements.
*/
void
mono_arch_lowering_pass (MonoCompile *cfg, MonoBasicBlock *bb)
{
MonoInst *ins, *next, *temp, *last_ins = NULL;
int imm;
MONO_BB_FOR_EACH_INS (bb, ins) {
loop_start:
switch (ins->opcode) {
case OP_IDIV_UN_IMM:
case OP_IDIV_IMM:
case OP_IREM_IMM:
case OP_IREM_UN_IMM:
CASE_PPC64 (OP_LREM_IMM) {
NEW_INS (cfg, temp, OP_ICONST);
temp->inst_c0 = ins->inst_imm;
temp->dreg = mono_alloc_ireg (cfg);
ins->sreg2 = temp->dreg;
if (ins->opcode == OP_IDIV_IMM)
ins->opcode = OP_IDIV;
else if (ins->opcode == OP_IREM_IMM)
ins->opcode = OP_IREM;
else if (ins->opcode == OP_IDIV_UN_IMM)
ins->opcode = OP_IDIV_UN;
else if (ins->opcode == OP_IREM_UN_IMM)
ins->opcode = OP_IREM_UN;
else if (ins->opcode == OP_LREM_IMM)
ins->opcode = OP_LREM;
last_ins = temp;
/* handle rem separately */
goto loop_start;
}
case OP_IREM:
case OP_IREM_UN:
CASE_PPC64 (OP_LREM)
CASE_PPC64 (OP_LREM_UN) {
MonoInst *mul;
/* we change a rem dest, src1, src2 to
* div temp1, src1, src2
* mul temp2, temp1, src2
* sub dest, src1, temp2
*/
if (ins->opcode == OP_IREM || ins->opcode == OP_IREM_UN) {
NEW_INS (cfg, mul, OP_IMUL);
NEW_INS (cfg, temp, ins->opcode == OP_IREM? OP_IDIV: OP_IDIV_UN);
ins->opcode = OP_ISUB;
} else {
NEW_INS (cfg, mul, OP_LMUL);
NEW_INS (cfg, temp, ins->opcode == OP_LREM? OP_LDIV: OP_LDIV_UN);
ins->opcode = OP_LSUB;
}
temp->sreg1 = ins->sreg1;
temp->sreg2 = ins->sreg2;
temp->dreg = mono_alloc_ireg (cfg);
mul->sreg1 = temp->dreg;
mul->sreg2 = ins->sreg2;
mul->dreg = mono_alloc_ireg (cfg);
ins->sreg2 = mul->dreg;
break;
}
case OP_IADD_IMM:
CASE_PPC64 (OP_LADD_IMM)
case OP_ADD_IMM:
case OP_ADDCC_IMM:
if (!ppc_is_imm16 (ins->inst_imm)) {
NEW_INS (cfg, temp, OP_ICONST);
temp->inst_c0 = ins->inst_imm;
temp->dreg = mono_alloc_ireg (cfg);
ins->sreg2 = temp->dreg;
ins->opcode = map_to_reg_reg_op (ins->opcode);
}
break;
case OP_ISUB_IMM:
CASE_PPC64 (OP_LSUB_IMM)
case OP_SUB_IMM:
if (!ppc_is_imm16 (-ins->inst_imm)) {
NEW_INS (cfg, temp, OP_ICONST);
temp->inst_c0 = ins->inst_imm;
temp->dreg = mono_alloc_ireg (cfg);
ins->sreg2 = temp->dreg;
ins->opcode = map_to_reg_reg_op (ins->opcode);
}
break;
case OP_IAND_IMM:
case OP_IOR_IMM:
case OP_IXOR_IMM:
case OP_LAND_IMM:
case OP_LOR_IMM:
case OP_LXOR_IMM:
case OP_AND_IMM:
case OP_OR_IMM:
case OP_XOR_IMM: {
gboolean is_imm = ((ins->inst_imm & 0xffff0000) && (ins->inst_imm & 0xffff));
#ifdef __mono_ppc64__
if (ins->inst_imm & 0xffffffff00000000ULL)
is_imm = TRUE;
#endif
if (is_imm) {
NEW_INS (cfg, temp, OP_ICONST);
temp->inst_c0 = ins->inst_imm;
temp->dreg = mono_alloc_ireg (cfg);
ins->sreg2 = temp->dreg;
ins->opcode = map_to_reg_reg_op (ins->opcode);
}
break;
}
case OP_ISBB_IMM:
case OP_IADC_IMM:
case OP_SBB_IMM:
case OP_SUBCC_IMM:
case OP_ADC_IMM:
NEW_INS (cfg, temp, OP_ICONST);
temp->inst_c0 = ins->inst_imm;
temp->dreg = mono_alloc_ireg (cfg);
ins->sreg2 = temp->dreg;
ins->opcode = map_to_reg_reg_op (ins->opcode);
break;
case OP_COMPARE_IMM:
case OP_ICOMPARE_IMM:
CASE_PPC64 (OP_LCOMPARE_IMM)
next = ins->next;
/* Branch opts can eliminate the branch */
if (!next || (!(MONO_IS_COND_BRANCH_OP (next) || MONO_IS_COND_EXC (next) || MONO_IS_SETCC (next)))) {
ins->opcode = OP_NOP;
break;
}
g_assert(next);
if (compare_opcode_is_unsigned (next->opcode)) {
if (!ppc_is_uimm16 (ins->inst_imm)) {
NEW_INS (cfg, temp, OP_ICONST);
temp->inst_c0 = ins->inst_imm;
temp->dreg = mono_alloc_ireg (cfg);
ins->sreg2 = temp->dreg;
ins->opcode = map_to_reg_reg_op (ins->opcode);
}
} else {
if (!ppc_is_imm16 (ins->inst_imm)) {
NEW_INS (cfg, temp, OP_ICONST);
temp->inst_c0 = ins->inst_imm;
temp->dreg = mono_alloc_ireg (cfg);
ins->sreg2 = temp->dreg;
ins->opcode = map_to_reg_reg_op (ins->opcode);
}
}
break;
case OP_IMUL_IMM:
case OP_MUL_IMM:
CASE_PPC64 (OP_LMUL_IMM)
if (ins->inst_imm == 1) {
ins->opcode = OP_MOVE;
break;
}
if (ins->inst_imm == 0) {
ins->opcode = OP_ICONST;
ins->inst_c0 = 0;
break;
}
imm = (ins->inst_imm > 0) ? mono_is_power_of_two (ins->inst_imm) : -1;
if (imm > 0) {
ins->opcode = OP_SHL_IMM;
ins->inst_imm = imm;
break;
}
if (!ppc_is_imm16 (ins->inst_imm)) {
NEW_INS (cfg, temp, OP_ICONST);
temp->inst_c0 = ins->inst_imm;
temp->dreg = mono_alloc_ireg (cfg);
ins->sreg2 = temp->dreg;
ins->opcode = map_to_reg_reg_op (ins->opcode);
}
break;
case OP_LOCALLOC_IMM:
NEW_INS (cfg, temp, OP_ICONST);
temp->inst_c0 = ins->inst_imm;
temp->dreg = mono_alloc_ireg (cfg);
ins->sreg1 = temp->dreg;
ins->opcode = OP_LOCALLOC;
break;
case OP_LOAD_MEMBASE:
case OP_LOADI4_MEMBASE:
CASE_PPC64 (OP_LOADI8_MEMBASE)
case OP_LOADU4_MEMBASE:
case OP_LOADI2_MEMBASE:
case OP_LOADU2_MEMBASE:
case OP_LOADI1_MEMBASE:
case OP_LOADU1_MEMBASE:
case OP_LOADR4_MEMBASE:
case OP_LOADR8_MEMBASE:
case OP_STORE_MEMBASE_REG:
CASE_PPC64 (OP_STOREI8_MEMBASE_REG)
case OP_STOREI4_MEMBASE_REG:
case OP_STOREI2_MEMBASE_REG:
case OP_STOREI1_MEMBASE_REG:
case OP_STORER4_MEMBASE_REG:
case OP_STORER8_MEMBASE_REG:
/* we can do two things: load the immed in a register
* and use an indexed load, or see if the immed can be
			 * represented as an add_imm + a load with a smaller offset
* that fits. We just do the first for now, optimize later.
*/
if (ppc_is_imm16 (ins->inst_offset))
break;
NEW_INS (cfg, temp, OP_ICONST);
temp->inst_c0 = ins->inst_offset;
temp->dreg = mono_alloc_ireg (cfg);
ins->sreg2 = temp->dreg;
ins->opcode = map_to_reg_reg_op (ins->opcode);
break;
case OP_STORE_MEMBASE_IMM:
case OP_STOREI1_MEMBASE_IMM:
case OP_STOREI2_MEMBASE_IMM:
case OP_STOREI4_MEMBASE_IMM:
CASE_PPC64 (OP_STOREI8_MEMBASE_IMM)
NEW_INS (cfg, temp, OP_ICONST);
temp->inst_c0 = ins->inst_imm;
temp->dreg = mono_alloc_ireg (cfg);
ins->sreg1 = temp->dreg;
ins->opcode = map_to_reg_reg_op (ins->opcode);
last_ins = temp;
goto loop_start; /* make it handle the possibly big ins->inst_offset */
case OP_R8CONST:
case OP_R4CONST:
if (cfg->compile_aot) {
/* Keep these in the aot case */
break;
}
NEW_INS (cfg, temp, OP_ICONST);
temp->inst_c0 = (gulong)ins->inst_p0;
temp->dreg = mono_alloc_ireg (cfg);
ins->inst_basereg = temp->dreg;
ins->inst_offset = 0;
ins->opcode = ins->opcode == OP_R4CONST? OP_LOADR4_MEMBASE: OP_LOADR8_MEMBASE;
last_ins = temp;
/* make it handle the possibly big ins->inst_offset
* later optimize to use lis + load_membase
*/
goto loop_start;
}
last_ins = ins;
}
bb->last_ins = last_ins;
bb->max_vreg = cfg->next_vreg;
}
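/*
 * emit_float_to_int:
 *
 *   Convert the float in SREG to an integer of the given SIZE in DREG:
 * fctiwz/fctidz rounds towards zero into a FP register, the result is
 * stored to the fp_conv_var_offset scratch slot and reloaded into an
 * integer register, followed by a sign/zero extension to SIZE.
 */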
static guchar*
emit_float_to_int (MonoCompile *cfg, guchar *code, int dreg, int sreg, int size, gboolean is_signed)
{
long offset = cfg->arch.fp_conv_var_offset;
long sub_offset;
	/* sreg is a float, dreg is an integer reg. ppc_f0 is used as a scratch */
#ifdef __mono_ppc64__
if (size == 8) {
ppc_fctidz (code, ppc_f0, sreg);
sub_offset = 0;
} else
#endif
{
ppc_fctiwz (code, ppc_f0, sreg);
sub_offset = 4;
}
if (ppc_is_imm16 (offset + sub_offset)) {
ppc_stfd (code, ppc_f0, offset, cfg->frame_reg);
if (size == 8)
ppc_ldr (code, dreg, offset + sub_offset, cfg->frame_reg);
else
ppc_lwz (code, dreg, offset + sub_offset, cfg->frame_reg);
} else {
ppc_load (code, dreg, offset);
ppc_add (code, dreg, dreg, cfg->frame_reg);
ppc_stfd (code, ppc_f0, 0, dreg);
if (size == 8)
ppc_ldr (code, dreg, sub_offset, dreg);
else
ppc_lwz (code, dreg, sub_offset, dreg);
}
if (!is_signed) {
if (size == 1)
ppc_andid (code, dreg, dreg, 0xff);
else if (size == 2)
ppc_andid (code, dreg, dreg, 0xffff);
#ifdef __mono_ppc64__
else if (size == 4)
ppc_clrldi (code, dreg, dreg, 32);
#endif
} else {
if (size == 1)
ppc_extsb (code, dreg, dreg);
else if (size == 2)
ppc_extsh (code, dreg, dreg);
#ifdef __mono_ppc64__
else if (size == 4)
ppc_extsw (code, dreg, dreg);
#endif
}
return code;
}
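
/*
 * Branch thunks: the displacement of a PPC branch is limited to +/-32 MB.
 * When a target is out of range we branch to a nearby thunk instead, which
 * loads the full target address into r0 and jumps through CTR:
 *
 *    <load sequence>   (lis/ori on 32bit, 5 instructions on 64bit)
 *    mtctr r0
 *    bcctr
 */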
static void
emit_thunk (guint8 *code, gconstpointer target)
{
guint8 *p = code;
	/* 2 instructions on 32bit, 5 instructions on 64bit */
ppc_load_sequence (code, ppc_r0, target);
ppc_mtctr (code, ppc_r0);
ppc_bcctr (code, PPC_BR_ALWAYS, 0);
mono_arch_flush_icache (p, code - p);
}
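
/*
 * handle_thunk:
 *
 * Redirect the branch at CODE to TARGET through a thunk. During JITting the
 * thunk space reserved in cfg->thunks is consumed sequentially; at runtime
 * (cfg == NULL) a free or already matching slot is looked up in the thunk
 * area recorded in the method's MonoJitInfo.
 */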
static void
handle_thunk (MonoCompile *cfg, guchar *code, const guchar *target)
{
MonoJitInfo *ji = NULL;
MonoThunkJitInfo *info;
guint8 *thunks, *p;
int thunks_size;
guint8 *orig_target;
guint8 *target_thunk;
if (cfg) {
/*
* This can be called multiple times during JITting,
* save the current position in cfg->arch to avoid
		 * doing an O(n^2) search.
*/
if (!cfg->arch.thunks) {
cfg->arch.thunks = cfg->thunks;
cfg->arch.thunks_size = cfg->thunk_area;
}
thunks = cfg->arch.thunks;
thunks_size = cfg->arch.thunks_size;
if (!thunks_size) {
g_print ("thunk failed %p->%p, thunk space=%d method %s", code, target, thunks_size, mono_method_full_name (cfg->method, TRUE));
g_assert_not_reached ();
}
g_assert (*(guint32*)thunks == 0);
emit_thunk (thunks, target);
ppc_patch (code, thunks);
cfg->arch.thunks += THUNK_SIZE;
cfg->arch.thunks_size -= THUNK_SIZE;
} else {
ji = mini_jit_info_table_find (code);
g_assert (ji);
info = mono_jit_info_get_thunk_info (ji);
g_assert (info);
thunks = (guint8 *) ji->code_start + info->thunks_offset;
thunks_size = info->thunks_size;
orig_target = mono_arch_get_call_target (code + 4);
mono_mini_arch_lock ();
target_thunk = NULL;
if (orig_target >= thunks && orig_target < thunks + thunks_size) {
/* The call already points to a thunk, because of trampolines etc. */
target_thunk = orig_target;
} else {
for (p = thunks; p < thunks + thunks_size; p += THUNK_SIZE) {
if (((guint32 *) p) [0] == 0) {
/* Free entry */
target_thunk = p;
break;
} else {
/* ppc64 requires 5 instructions, 32bit two instructions */
#ifdef __mono_ppc64__
const int const_load_size = 5;
#else
const int const_load_size = 2;
#endif
guint32 load [const_load_size];
guchar *templ = (guchar *) load;
ppc_load_sequence (templ, ppc_r0, target);
				/* compare the whole load sequence (const_load_size words) */
				if (!memcmp (p, load, const_load_size * sizeof (guint32))) {
/* Thunk already points to target */
target_thunk = p;
break;
}
}
}
}
// g_print ("THUNK: %p %p %p\n", code, target, target_thunk);
if (!target_thunk) {
mono_mini_arch_unlock ();
g_print ("thunk failed %p->%p, thunk space=%d method %s", code, target, thunks_size, cfg ? mono_method_full_name (cfg->method, TRUE) : mono_method_full_name (jinfo_get_method (ji), TRUE));
g_assert_not_reached ();
}
emit_thunk (target_thunk, target);
ppc_patch (code, target_thunk);
mono_mini_arch_unlock ();
}
}
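
/* overwrite a single instruction and flush the icache for it */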
static void
patch_ins (guint8 *code, guint32 ins)
{
*(guint32*)code = ins;
mono_arch_flush_icache (code, 4);
}
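
/*
 * ppc_patch_full:
 *
 * Patch the instruction (sequence) at CODE to transfer control to TARGET.
 * Primary opcode 18 is an unconditional branch (b/bl), 16 a conditional
 * branch (bc); anything else is expected to be part of an indirect call
 * sequence ending in blrl/blr/bcctr, whose load sequence is rewritten.
 * is_fd means TARGET is a function descriptor rather than a code address.
 */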
static void
ppc_patch_full (MonoCompile *cfg, guchar *code, const guchar *target, gboolean is_fd)
{
guint32 ins = *(guint32*)code;
guint32 prim = ins >> 26;
guint32 ovf;
//g_print ("patching 0x%08x (0x%08x) to point to 0x%08x\n", code, ins, target);
if (prim == 18) {
// prefer relative branches, they are more position independent (e.g. for AOT compilation).
gint diff = target - code;
g_assert (!is_fd);
if (diff >= 0){
if (diff <= 33554431){
ins = (18 << 26) | (diff) | (ins & 1);
patch_ins (code, ins);
return;
}
} else {
/* diff between 0 and -33554432 */
if (diff >= -33554432){
ins = (18 << 26) | (diff & ~0xfc000000) | (ins & 1);
patch_ins (code, ins);
return;
}
}
if ((glong)target >= 0){
if ((glong)target <= 33554431){
ins = (18 << 26) | ((gulong) target) | (ins & 1) | 2;
patch_ins (code, ins);
return;
}
} else {
if ((glong)target >= -33554432){
ins = (18 << 26) | (((gulong)target) & ~0xfc000000) | (ins & 1) | 2;
patch_ins (code, ins);
return;
}
}
handle_thunk (cfg, code, target);
return;
}
if (prim == 16) {
g_assert (!is_fd);
// absolute address
if (ins & 2) {
guint32 li = (gulong)target;
ins = (ins & 0xffff0000) | (ins & 3);
ovf = li & 0xffff0000;
if (ovf != 0 && ovf != 0xffff0000)
g_assert_not_reached ();
li &= 0xffff;
ins |= li;
// FIXME: assert the top bits of li are 0
} else {
gint diff = target - code;
ins = (ins & 0xffff0000) | (ins & 3);
ovf = diff & 0xffff0000;
if (ovf != 0 && ovf != 0xffff0000)
g_assert_not_reached ();
diff &= 0xffff;
ins |= diff;
}
patch_ins (code, ins);
return;
}
if (prim == 15 || ins == 0x4e800021 || ins == 0x4e800020 || ins == 0x4e800420) {
#ifdef __mono_ppc64__
guint32 *seq = (guint32*)code;
guint32 *branch_ins;
/* the trampoline code will try to patch the blrl, blr, bcctr */
if (ins == 0x4e800021 || ins == 0x4e800020 || ins == 0x4e800420) {
branch_ins = seq;
if (ppc_is_load_op (seq [-3]) || ppc_opcode (seq [-3]) == 31) /* ld || lwz || mr */
code -= 32;
else
code -= 24;
} else {
if (ppc_is_load_op (seq [5])
#ifdef PPC_USES_FUNCTION_DESCRIPTOR
/* With function descs we need to do more careful
matches. */
|| ppc_opcode (seq [5]) == 31 /* ld || lwz || mr */
#endif
)
branch_ins = seq + 8;
else
branch_ins = seq + 6;
}
seq = (guint32*)code;
/* this is the lis/ori/sldi/oris/ori/(ld/ld|mr/nop)/mtlr/blrl sequence */
g_assert (mono_ppc_is_direct_call_sequence (branch_ins));
if (ppc_is_load_op (seq [5])) {
g_assert (ppc_is_load_op (seq [6]));
if (!is_fd) {
guint8 *buf = (guint8*)&seq [5];
ppc_mr (buf, PPC_CALL_REG, ppc_r12);
ppc_nop (buf);
}
} else {
if (is_fd)
target = (const guchar*)mono_get_addr_from_ftnptr ((gpointer)target);
}
/* FIXME: make this thread safe */
#ifdef PPC_USES_FUNCTION_DESCRIPTOR
/* FIXME: we're assuming we're using r12 here */
ppc_load_ptr_sequence (code, ppc_r12, target);
#else
ppc_load_ptr_sequence (code, PPC_CALL_REG, target);
#endif
mono_arch_flush_icache ((guint8*)seq, 28);
#else
guint32 *seq;
/* the trampoline code will try to patch the blrl, blr, bcctr */
if (ins == 0x4e800021 || ins == 0x4e800020 || ins == 0x4e800420) {
code -= 12;
}
/* this is the lis/ori/mtlr/blrl sequence */
seq = (guint32*)code;
g_assert ((seq [0] >> 26) == 15);
g_assert ((seq [1] >> 26) == 24);
g_assert ((seq [2] >> 26) == 31);
g_assert (seq [3] == 0x4e800021 || seq [3] == 0x4e800020 || seq [3] == 0x4e800420);
/* FIXME: make this thread safe */
ppc_lis (code, PPC_CALL_REG, (guint32)(target) >> 16);
ppc_ori (code, PPC_CALL_REG, PPC_CALL_REG, (guint32)(target) & 0xffff);
mono_arch_flush_icache (code - 8, 8);
#endif
} else {
g_assert_not_reached ();
}
// g_print ("patched with 0x%08x\n", ins);
}
void
ppc_patch (guchar *code, const guchar *target)
{
ppc_patch_full (NULL, code, target, FALSE);
}
void
mono_ppc_patch (guchar *code, const guchar *target)
{
ppc_patch (code, target);
}
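
/*
 * emit_move_return_value:
 *
 * Move the return value of the call INS into its destination register;
 * only float returns (arriving in f1) need fixing up here.
 */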
static guint8*
emit_move_return_value (MonoCompile *cfg, MonoInst *ins, guint8 *code)
{
switch (ins->opcode) {
case OP_FCALL:
case OP_FCALL_REG:
case OP_FCALL_MEMBASE:
if (ins->dreg != ppc_f1)
ppc_fmr (code, ins->dreg, ppc_f1);
break;
}
return code;
}
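
/*
 * The param area is reserved/unreserved dynamically inside exception
 * handlers: the frame is grown or shrunk with a store-with-update of the
 * saved back chain, so 0(sp) keeps pointing to the previous frame as the
 * ABI requires.
 */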
static guint8*
emit_reserve_param_area (MonoCompile *cfg, guint8 *code)
{
long size = cfg->param_area;
size += MONO_ARCH_FRAME_ALIGNMENT - 1;
size &= -MONO_ARCH_FRAME_ALIGNMENT;
if (!size)
return code;
ppc_ldptr (code, ppc_r0, 0, ppc_sp);
if (ppc_is_imm16 (-size)) {
ppc_stptr_update (code, ppc_r0, -size, ppc_sp);
} else {
ppc_load (code, ppc_r12, -size);
ppc_stptr_update_indexed (code, ppc_r0, ppc_sp, ppc_r12);
}
return code;
}
static guint8*
emit_unreserve_param_area (MonoCompile *cfg, guint8 *code)
{
long size = cfg->param_area;
size += MONO_ARCH_FRAME_ALIGNMENT - 1;
size &= -MONO_ARCH_FRAME_ALIGNMENT;
if (!size)
return code;
ppc_ldptr (code, ppc_r0, 0, ppc_sp);
if (ppc_is_imm16 (size)) {
ppc_stptr_update (code, ppc_r0, size, ppc_sp);
} else {
ppc_load (code, ppc_r12, size);
ppc_stptr_update_indexed (code, ppc_r0, ppc_sp, ppc_r12);
}
return code;
}
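
/* shift amounts are masked to the valid range: 5 bits on 32bit, 6 bits on 64bit */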
#define MASK_SHIFT_IMM(i) ((i) & MONO_PPC_32_64_CASE (0x1f, 0x3f))
#ifndef DISABLE_JIT
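
/*
 * mono_arch_output_basic_block:
 *
 * Emit native code for the instructions of BB; the emitted length of each
 * instruction is sanity-checked against the estimate from ins_get_size ().
 */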
void
mono_arch_output_basic_block (MonoCompile *cfg, MonoBasicBlock *bb)
{
MonoInst *ins, *next;
MonoCallInst *call;
guint8 *code = cfg->native_code + cfg->code_len;
MonoInst *last_ins = NULL;
int max_len, cpos;
int L;
/* we don't align basic blocks of loops on ppc */
if (cfg->verbose_level > 2)
g_print ("Basic block %d starting at offset 0x%x\n", bb->block_num, bb->native_offset);
cpos = bb->max_offset;
MONO_BB_FOR_EACH_INS (bb, ins) {
const guint offset = code - cfg->native_code;
set_code_cursor (cfg, code);
max_len = ins_get_size (ins->opcode);
code = realloc_code (cfg, max_len);
// if (ins->cil_code)
// g_print ("cil code\n");
mono_debug_record_line_number (cfg, ins, offset);
switch (normalize_opcode (ins->opcode)) {
case OP_RELAXED_NOP:
case OP_NOP:
case OP_DUMMY_USE:
case OP_DUMMY_ICONST:
case OP_DUMMY_I8CONST:
case OP_DUMMY_R8CONST:
case OP_DUMMY_R4CONST:
case OP_NOT_REACHED:
case OP_NOT_NULL:
break;
case OP_IL_SEQ_POINT:
mono_add_seq_point (cfg, bb, ins, code - cfg->native_code);
break;
case OP_SEQ_POINT: {
int i;
if (cfg->compile_aot)
NOT_IMPLEMENTED;
/*
* Read from the single stepping trigger page. This will cause a
* SIGSEGV when single stepping is enabled.
* We do this _before_ the breakpoint, so single stepping after
* a breakpoint is hit will step to the next IL offset.
*/
if (ins->flags & MONO_INST_SINGLE_STEP_LOC) {
ppc_load (code, ppc_r12, (gsize)ss_trigger_page);
ppc_ldptr (code, ppc_r12, 0, ppc_r12);
}
mono_add_seq_point (cfg, bb, ins, code - cfg->native_code);
/*
* A placeholder for a possible breakpoint inserted by
* mono_arch_set_breakpoint ().
*/
for (i = 0; i < BREAKPOINT_SIZE / 4; ++i)
ppc_nop (code);
break;
}
case OP_BIGMUL:
ppc_mullw (code, ppc_r0, ins->sreg1, ins->sreg2);
ppc_mulhw (code, ppc_r3, ins->sreg1, ins->sreg2);
ppc_mr (code, ppc_r4, ppc_r0);
break;
case OP_BIGMUL_UN:
ppc_mullw (code, ppc_r0, ins->sreg1, ins->sreg2);
ppc_mulhwu (code, ppc_r3, ins->sreg1, ins->sreg2);
ppc_mr (code, ppc_r4, ppc_r0);
break;
case OP_MEMORY_BARRIER:
ppc_sync (code);
break;
case OP_STOREI1_MEMBASE_REG:
if (ppc_is_imm16 (ins->inst_offset)) {
ppc_stb (code, ins->sreg1, ins->inst_offset, ins->inst_destbasereg);
} else {
if (ppc_is_imm32 (ins->inst_offset)) {
ppc_addis (code, ppc_r11, ins->inst_destbasereg, ppc_ha(ins->inst_offset));
ppc_stb (code, ins->sreg1, ins->inst_offset, ppc_r11);
} else {
ppc_load (code, ppc_r0, ins->inst_offset);
ppc_stbx (code, ins->sreg1, ins->inst_destbasereg, ppc_r0);
}
}
break;
case OP_STOREI2_MEMBASE_REG:
if (ppc_is_imm16 (ins->inst_offset)) {
ppc_sth (code, ins->sreg1, ins->inst_offset, ins->inst_destbasereg);
} else {
if (ppc_is_imm32 (ins->inst_offset)) {
ppc_addis (code, ppc_r11, ins->inst_destbasereg, ppc_ha(ins->inst_offset));
ppc_sth (code, ins->sreg1, ins->inst_offset, ppc_r11);
} else {
ppc_load (code, ppc_r0, ins->inst_offset);
ppc_sthx (code, ins->sreg1, ins->inst_destbasereg, ppc_r0);
}
}
break;
case OP_STORE_MEMBASE_REG:
if (ppc_is_imm16 (ins->inst_offset)) {
ppc_stptr (code, ins->sreg1, ins->inst_offset, ins->inst_destbasereg);
} else {
if (ppc_is_imm32 (ins->inst_offset)) {
ppc_addis (code, ppc_r11, ins->inst_destbasereg, ppc_ha(ins->inst_offset));
ppc_stptr (code, ins->sreg1, ins->inst_offset, ppc_r11);
} else {
ppc_load (code, ppc_r0, ins->inst_offset);
ppc_stptr_indexed (code, ins->sreg1, ins->inst_destbasereg, ppc_r0);
}
}
break;
#ifdef MONO_ARCH_ILP32
case OP_STOREI8_MEMBASE_REG:
if (ppc_is_imm16 (ins->inst_offset)) {
ppc_str (code, ins->sreg1, ins->inst_offset, ins->inst_destbasereg);
} else {
ppc_load (code, ppc_r0, ins->inst_offset);
ppc_str_indexed (code, ins->sreg1, ins->inst_destbasereg, ppc_r0);
}
break;
#endif
case OP_STOREI1_MEMINDEX:
ppc_stbx (code, ins->sreg1, ins->inst_destbasereg, ins->sreg2);
break;
case OP_STOREI2_MEMINDEX:
ppc_sthx (code, ins->sreg1, ins->inst_destbasereg, ins->sreg2);
break;
case OP_STORE_MEMINDEX:
ppc_stptr_indexed (code, ins->sreg1, ins->inst_destbasereg, ins->sreg2);
break;
case OP_LOADU4_MEM:
g_assert_not_reached ();
break;
case OP_LOAD_MEMBASE:
if (ppc_is_imm16 (ins->inst_offset)) {
ppc_ldptr (code, ins->dreg, ins->inst_offset, ins->inst_basereg);
} else {
if (ppc_is_imm32 (ins->inst_offset) && (ins->dreg > 0)) {
ppc_addis (code, ins->dreg, ins->inst_basereg, ppc_ha(ins->inst_offset));
ppc_ldptr (code, ins->dreg, ins->inst_offset, ins->dreg);
} else {
ppc_load (code, ppc_r0, ins->inst_offset);
ppc_ldptr_indexed (code, ins->dreg, ins->inst_basereg, ppc_r0);
}
}
break;
case OP_LOADI4_MEMBASE:
#ifdef __mono_ppc64__
if (ppc_is_imm16 (ins->inst_offset)) {
ppc_lwa (code, ins->dreg, ins->inst_offset, ins->inst_basereg);
} else {
if (ppc_is_imm32 (ins->inst_offset) && (ins->dreg > 0)) {
ppc_addis (code, ins->dreg, ins->inst_basereg, ppc_ha(ins->inst_offset));
ppc_lwa (code, ins->dreg, ins->inst_offset, ins->dreg);
} else {
ppc_load (code, ppc_r0, ins->inst_offset);
ppc_lwax (code, ins->dreg, ins->inst_basereg, ppc_r0);
}
}
break;
#endif
case OP_LOADU4_MEMBASE:
if (ppc_is_imm16 (ins->inst_offset)) {
ppc_lwz (code, ins->dreg, ins->inst_offset, ins->inst_basereg);
} else {
if (ppc_is_imm32 (ins->inst_offset) && (ins->dreg > 0)) {
ppc_addis (code, ins->dreg, ins->inst_basereg, ppc_ha(ins->inst_offset));
ppc_lwz (code, ins->dreg, ins->inst_offset, ins->dreg);
} else {
ppc_load (code, ppc_r0, ins->inst_offset);
ppc_lwzx (code, ins->dreg, ins->inst_basereg, ppc_r0);
}
}
break;
case OP_LOADI1_MEMBASE:
case OP_LOADU1_MEMBASE:
if (ppc_is_imm16 (ins->inst_offset)) {
ppc_lbz (code, ins->dreg, ins->inst_offset, ins->inst_basereg);
} else {
if (ppc_is_imm32 (ins->inst_offset) && (ins->dreg > 0)) {
ppc_addis (code, ins->dreg, ins->inst_basereg, ppc_ha(ins->inst_offset));
ppc_lbz (code, ins->dreg, ins->inst_offset, ins->dreg);
} else {
ppc_load (code, ppc_r0, ins->inst_offset);
ppc_lbzx (code, ins->dreg, ins->inst_basereg, ppc_r0);
}
}
if (ins->opcode == OP_LOADI1_MEMBASE)
ppc_extsb (code, ins->dreg, ins->dreg);
break;
case OP_LOADU2_MEMBASE:
if (ppc_is_imm16 (ins->inst_offset)) {
ppc_lhz (code, ins->dreg, ins->inst_offset, ins->inst_basereg);
} else {
if (ppc_is_imm32 (ins->inst_offset) && (ins->dreg > 0)) {
ppc_addis (code, ins->dreg, ins->inst_basereg, ppc_ha(ins->inst_offset));
ppc_lhz (code, ins->dreg, ins->inst_offset, ins->dreg);
} else {
ppc_load (code, ppc_r0, ins->inst_offset);
ppc_lhzx (code, ins->dreg, ins->inst_basereg, ppc_r0);
}
}
break;
case OP_LOADI2_MEMBASE:
if (ppc_is_imm16 (ins->inst_offset)) {
ppc_lha (code, ins->dreg, ins->inst_offset, ins->inst_basereg);
} else {
if (ppc_is_imm32 (ins->inst_offset) && (ins->dreg > 0)) {
ppc_addis (code, ins->dreg, ins->inst_basereg, ppc_ha(ins->inst_offset));
ppc_lha (code, ins->dreg, ins->inst_offset, ins->dreg);
} else {
ppc_load (code, ppc_r0, ins->inst_offset);
ppc_lhax (code, ins->dreg, ins->inst_basereg, ppc_r0);
}
}
break;
#ifdef MONO_ARCH_ILP32
case OP_LOADI8_MEMBASE:
if (ppc_is_imm16 (ins->inst_offset)) {
ppc_ldr (code, ins->dreg, ins->inst_offset, ins->inst_basereg);
} else {
ppc_load (code, ppc_r0, ins->inst_offset);
ppc_ldr_indexed (code, ins->dreg, ins->inst_basereg, ppc_r0);
}
break;
#endif
case OP_LOAD_MEMINDEX:
ppc_ldptr_indexed (code, ins->dreg, ins->inst_basereg, ins->sreg2);
break;
case OP_LOADI4_MEMINDEX:
#ifdef __mono_ppc64__
ppc_lwax (code, ins->dreg, ins->inst_basereg, ins->sreg2);
break;
#endif
case OP_LOADU4_MEMINDEX:
ppc_lwzx (code, ins->dreg, ins->inst_basereg, ins->sreg2);
break;
case OP_LOADU2_MEMINDEX:
ppc_lhzx (code, ins->dreg, ins->inst_basereg, ins->sreg2);
break;
case OP_LOADI2_MEMINDEX:
ppc_lhax (code, ins->dreg, ins->inst_basereg, ins->sreg2);
break;
case OP_LOADU1_MEMINDEX:
ppc_lbzx (code, ins->dreg, ins->inst_basereg, ins->sreg2);
break;
case OP_LOADI1_MEMINDEX:
ppc_lbzx (code, ins->dreg, ins->inst_basereg, ins->sreg2);
ppc_extsb (code, ins->dreg, ins->dreg);
break;
case OP_ICONV_TO_I1:
CASE_PPC64 (OP_LCONV_TO_I1)
ppc_extsb (code, ins->dreg, ins->sreg1);
break;
case OP_ICONV_TO_I2:
CASE_PPC64 (OP_LCONV_TO_I2)
ppc_extsh (code, ins->dreg, ins->sreg1);
break;
case OP_ICONV_TO_U1:
CASE_PPC64 (OP_LCONV_TO_U1)
ppc_clrlwi (code, ins->dreg, ins->sreg1, 24);
break;
case OP_ICONV_TO_U2:
CASE_PPC64 (OP_LCONV_TO_U2)
ppc_clrlwi (code, ins->dreg, ins->sreg1, 16);
break;
case OP_COMPARE:
case OP_ICOMPARE:
CASE_PPC64 (OP_LCOMPARE)
L = (sizeof (target_mgreg_t) == 4 || ins->opcode == OP_ICOMPARE) ? 0 : 1;
next = ins->next;
if (next && compare_opcode_is_unsigned (next->opcode))
ppc_cmpl (code, 0, L, ins->sreg1, ins->sreg2);
else
ppc_cmp (code, 0, L, ins->sreg1, ins->sreg2);
break;
case OP_COMPARE_IMM:
case OP_ICOMPARE_IMM:
CASE_PPC64 (OP_LCOMPARE_IMM)
L = (sizeof (target_mgreg_t) == 4 || ins->opcode == OP_ICOMPARE_IMM) ? 0 : 1;
next = ins->next;
if (next && compare_opcode_is_unsigned (next->opcode)) {
if (ppc_is_uimm16 (ins->inst_imm)) {
ppc_cmpli (code, 0, L, ins->sreg1, (ins->inst_imm & 0xffff));
} else {
g_assert_not_reached ();
}
} else {
if (ppc_is_imm16 (ins->inst_imm)) {
ppc_cmpi (code, 0, L, ins->sreg1, (ins->inst_imm & 0xffff));
} else {
g_assert_not_reached ();
}
}
break;
case OP_BREAK:
/*
* gdb does not like encountering a trap in the debugged code. So
			 * instead of emitting a trap, we emit a call to a C function and place a
* breakpoint there.
*/
//ppc_break (code);
ppc_mr (code, ppc_r3, ins->sreg1);
mono_add_patch_info (cfg, code - cfg->native_code, MONO_PATCH_INFO_JIT_ICALL_ID, GUINT_TO_POINTER (MONO_JIT_ICALL_mono_break));
if ((FORCE_INDIR_CALL || cfg->method->dynamic) && !cfg->compile_aot) {
ppc_load_func (code, PPC_CALL_REG, 0);
ppc_mtlr (code, PPC_CALL_REG);
ppc_blrl (code);
} else {
ppc_bl (code, 0);
}
break;
case OP_ADDCC:
case OP_IADDCC:
ppc_addco (code, ins->dreg, ins->sreg1, ins->sreg2);
break;
case OP_IADD:
CASE_PPC64 (OP_LADD)
ppc_add (code, ins->dreg, ins->sreg1, ins->sreg2);
break;
case OP_ADC:
case OP_IADC:
ppc_adde (code, ins->dreg, ins->sreg1, ins->sreg2);
break;
case OP_ADDCC_IMM:
if (ppc_is_imm16 (ins->inst_imm)) {
ppc_addic (code, ins->dreg, ins->sreg1, ins->inst_imm);
} else {
g_assert_not_reached ();
}
break;
case OP_ADD_IMM:
case OP_IADD_IMM:
CASE_PPC64 (OP_LADD_IMM)
if (ppc_is_imm16 (ins->inst_imm)) {
ppc_addi (code, ins->dreg, ins->sreg1, ins->inst_imm);
} else {
g_assert_not_reached ();
}
break;
case OP_IADD_OVF:
/* check XER [0-3] (SO, OV, CA): we can't use mcrxr
*/
ppc_addo (code, ins->dreg, ins->sreg1, ins->sreg2);
ppc_mfspr (code, ppc_r0, ppc_xer);
ppc_andisd (code, ppc_r0, ppc_r0, (1<<14));
EMIT_COND_SYSTEM_EXCEPTION_FLAGS (PPC_BR_FALSE, PPC_BR_EQ, "OverflowException");
break;
case OP_IADD_OVF_UN:
/* check XER [0-3] (SO, OV, CA): we can't use mcrxr
*/
ppc_addco (code, ins->dreg, ins->sreg1, ins->sreg2);
ppc_mfspr (code, ppc_r0, ppc_xer);
ppc_andisd (code, ppc_r0, ppc_r0, (1<<13));
EMIT_COND_SYSTEM_EXCEPTION_FLAGS (PPC_BR_FALSE, PPC_BR_EQ, "OverflowException");
break;
case OP_ISUB_OVF:
CASE_PPC64 (OP_LSUB_OVF)
/* check XER [0-3] (SO, OV, CA): we can't use mcrxr
*/
ppc_subfo (code, ins->dreg, ins->sreg2, ins->sreg1);
ppc_mfspr (code, ppc_r0, ppc_xer);
ppc_andisd (code, ppc_r0, ppc_r0, (1<<14));
EMIT_COND_SYSTEM_EXCEPTION_FLAGS (PPC_BR_FALSE, PPC_BR_EQ, "OverflowException");
break;
case OP_ISUB_OVF_UN:
CASE_PPC64 (OP_LSUB_OVF_UN)
/* check XER [0-3] (SO, OV, CA): we can't use mcrxr
*/
ppc_subfc (code, ins->dreg, ins->sreg2, ins->sreg1);
ppc_mfspr (code, ppc_r0, ppc_xer);
ppc_andisd (code, ppc_r0, ppc_r0, (1<<13));
EMIT_COND_SYSTEM_EXCEPTION_FLAGS (PPC_BR_TRUE, PPC_BR_EQ, "OverflowException");
break;
case OP_ADD_OVF_CARRY:
/* check XER [0-3] (SO, OV, CA): we can't use mcrxr
*/
ppc_addeo (code, ins->dreg, ins->sreg1, ins->sreg2);
ppc_mfspr (code, ppc_r0, ppc_xer);
ppc_andisd (code, ppc_r0, ppc_r0, (1<<14));
EMIT_COND_SYSTEM_EXCEPTION_FLAGS (PPC_BR_FALSE, PPC_BR_EQ, "OverflowException");
break;
case OP_ADD_OVF_UN_CARRY:
/* check XER [0-3] (SO, OV, CA): we can't use mcrxr
*/
ppc_addeo (code, ins->dreg, ins->sreg1, ins->sreg2);
ppc_mfspr (code, ppc_r0, ppc_xer);
ppc_andisd (code, ppc_r0, ppc_r0, (1<<13));
EMIT_COND_SYSTEM_EXCEPTION_FLAGS (PPC_BR_FALSE, PPC_BR_EQ, "OverflowException");
break;
case OP_SUB_OVF_CARRY:
/* check XER [0-3] (SO, OV, CA): we can't use mcrxr
*/
ppc_subfeo (code, ins->dreg, ins->sreg2, ins->sreg1);
ppc_mfspr (code, ppc_r0, ppc_xer);
ppc_andisd (code, ppc_r0, ppc_r0, (1<<14));
EMIT_COND_SYSTEM_EXCEPTION_FLAGS (PPC_BR_FALSE, PPC_BR_EQ, "OverflowException");
break;
case OP_SUB_OVF_UN_CARRY:
/* check XER [0-3] (SO, OV, CA): we can't use mcrxr
*/
ppc_subfeo (code, ins->dreg, ins->sreg2, ins->sreg1);
ppc_mfspr (code, ppc_r0, ppc_xer);
ppc_andisd (code, ppc_r0, ppc_r0, (1<<13));
EMIT_COND_SYSTEM_EXCEPTION_FLAGS (PPC_BR_TRUE, PPC_BR_EQ, "OverflowException");
break;
case OP_SUBCC:
case OP_ISUBCC:
ppc_subfco (code, ins->dreg, ins->sreg2, ins->sreg1);
break;
case OP_ISUB:
CASE_PPC64 (OP_LSUB)
ppc_subf (code, ins->dreg, ins->sreg2, ins->sreg1);
break;
case OP_SBB:
case OP_ISBB:
ppc_subfe (code, ins->dreg, ins->sreg2, ins->sreg1);
break;
case OP_SUB_IMM:
case OP_ISUB_IMM:
CASE_PPC64 (OP_LSUB_IMM)
// we add the negated value
if (ppc_is_imm16 (-ins->inst_imm))
ppc_addi (code, ins->dreg, ins->sreg1, -ins->inst_imm);
else {
g_assert_not_reached ();
}
break;
case OP_PPC_SUBFIC:
g_assert (ppc_is_imm16 (ins->inst_imm));
ppc_subfic (code, ins->dreg, ins->sreg1, ins->inst_imm);
break;
case OP_PPC_SUBFZE:
ppc_subfze (code, ins->dreg, ins->sreg1);
break;
case OP_IAND:
CASE_PPC64 (OP_LAND)
			/* FIXME: the ppc macros are inconsistent here: put dest as the first arg! */
ppc_and (code, ins->sreg1, ins->dreg, ins->sreg2);
break;
case OP_AND_IMM:
case OP_IAND_IMM:
CASE_PPC64 (OP_LAND_IMM)
if (!(ins->inst_imm & 0xffff0000)) {
ppc_andid (code, ins->sreg1, ins->dreg, ins->inst_imm);
} else if (!(ins->inst_imm & 0xffff)) {
ppc_andisd (code, ins->sreg1, ins->dreg, ((guint32)ins->inst_imm >> 16));
} else {
g_assert_not_reached ();
}
break;
case OP_IDIV:
CASE_PPC64 (OP_LDIV) {
guint8 *divisor_is_m1;
/* XER format: SO, OV, CA, reserved [21 bits], count [8 bits]
*/
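			/* dividing the most negative value by -1 overflows: when the
			 * divisor is -1, first check the dividend against 0x80000000
			 * (shifted up to the 64 bit minimum for LDIV) */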
ppc_compare_reg_imm (code, 0, ins->sreg2, -1);
divisor_is_m1 = code;
ppc_bc (code, PPC_BR_FALSE | PPC_BR_LIKELY, PPC_BR_EQ, 0);
ppc_lis (code, ppc_r0, 0x8000);
#ifdef __mono_ppc64__
if (ins->opcode == OP_LDIV)
ppc_sldi (code, ppc_r0, ppc_r0, 32);
#endif
ppc_compare (code, 0, ins->sreg1, ppc_r0);
EMIT_COND_SYSTEM_EXCEPTION_FLAGS (PPC_BR_TRUE, PPC_BR_EQ, "OverflowException");
ppc_patch (divisor_is_m1, code);
/* XER format: SO, OV, CA, reserved [21 bits], count [8 bits]
*/
if (ins->opcode == OP_IDIV)
ppc_divwod (code, ins->dreg, ins->sreg1, ins->sreg2);
#ifdef __mono_ppc64__
else
ppc_divdod (code, ins->dreg, ins->sreg1, ins->sreg2);
#endif
ppc_mfspr (code, ppc_r0, ppc_xer);
ppc_andisd (code, ppc_r0, ppc_r0, (1<<14));
EMIT_COND_SYSTEM_EXCEPTION_FLAGS (PPC_BR_FALSE, PPC_BR_EQ, "DivideByZeroException");
break;
}
case OP_IDIV_UN:
CASE_PPC64 (OP_LDIV_UN)
if (ins->opcode == OP_IDIV_UN)
ppc_divwuod (code, ins->dreg, ins->sreg1, ins->sreg2);
#ifdef __mono_ppc64__
else
ppc_divduod (code, ins->dreg, ins->sreg1, ins->sreg2);
#endif
ppc_mfspr (code, ppc_r0, ppc_xer);
ppc_andisd (code, ppc_r0, ppc_r0, (1<<14));
EMIT_COND_SYSTEM_EXCEPTION_FLAGS (PPC_BR_FALSE, PPC_BR_EQ, "DivideByZeroException");
break;
case OP_DIV_IMM:
case OP_IREM:
case OP_IREM_UN:
case OP_REM_IMM:
g_assert_not_reached ();
case OP_IOR:
CASE_PPC64 (OP_LOR)
ppc_or (code, ins->dreg, ins->sreg1, ins->sreg2);
break;
case OP_OR_IMM:
case OP_IOR_IMM:
CASE_PPC64 (OP_LOR_IMM)
if (!(ins->inst_imm & 0xffff0000)) {
ppc_ori (code, ins->sreg1, ins->dreg, ins->inst_imm);
} else if (!(ins->inst_imm & 0xffff)) {
ppc_oris (code, ins->dreg, ins->sreg1, ((guint32)(ins->inst_imm) >> 16));
} else {
g_assert_not_reached ();
}
break;
case OP_IXOR:
CASE_PPC64 (OP_LXOR)
ppc_xor (code, ins->dreg, ins->sreg1, ins->sreg2);
break;
case OP_IXOR_IMM:
case OP_XOR_IMM:
CASE_PPC64 (OP_LXOR_IMM)
if (!(ins->inst_imm & 0xffff0000)) {
ppc_xori (code, ins->sreg1, ins->dreg, ins->inst_imm);
} else if (!(ins->inst_imm & 0xffff)) {
ppc_xoris (code, ins->sreg1, ins->dreg, ((guint32)(ins->inst_imm) >> 16));
} else {
g_assert_not_reached ();
}
break;
case OP_ISHL:
CASE_PPC64 (OP_LSHL)
ppc_shift_left (code, ins->dreg, ins->sreg1, ins->sreg2);
break;
case OP_SHL_IMM:
case OP_ISHL_IMM:
CASE_PPC64 (OP_LSHL_IMM)
ppc_shift_left_imm (code, ins->dreg, ins->sreg1, MASK_SHIFT_IMM (ins->inst_imm));
break;
case OP_ISHR:
ppc_sraw (code, ins->dreg, ins->sreg1, ins->sreg2);
break;
case OP_SHR_IMM:
ppc_shift_right_arith_imm (code, ins->dreg, ins->sreg1, MASK_SHIFT_IMM (ins->inst_imm));
break;
case OP_SHR_UN_IMM:
if (MASK_SHIFT_IMM (ins->inst_imm))
ppc_shift_right_imm (code, ins->dreg, ins->sreg1, MASK_SHIFT_IMM (ins->inst_imm));
else
ppc_mr (code, ins->dreg, ins->sreg1);
break;
case OP_ISHR_UN:
ppc_srw (code, ins->dreg, ins->sreg1, ins->sreg2);
break;
case OP_INOT:
CASE_PPC64 (OP_LNOT)
ppc_not (code, ins->dreg, ins->sreg1);
break;
case OP_INEG:
CASE_PPC64 (OP_LNEG)
ppc_neg (code, ins->dreg, ins->sreg1);
break;
case OP_IMUL:
CASE_PPC64 (OP_LMUL)
ppc_multiply (code, ins->dreg, ins->sreg1, ins->sreg2);
break;
case OP_IMUL_IMM:
case OP_MUL_IMM:
CASE_PPC64 (OP_LMUL_IMM)
if (ppc_is_imm16 (ins->inst_imm)) {
ppc_mulli (code, ins->dreg, ins->sreg1, ins->inst_imm);
} else {
g_assert_not_reached ();
}
break;
case OP_IMUL_OVF:
CASE_PPC64 (OP_LMUL_OVF)
			/* we cannot use mcrxr, since it's not implemented on some processors
* XER format: SO, OV, CA, reserved [21 bits], count [8 bits]
*/
if (ins->opcode == OP_IMUL_OVF)
ppc_mullwo (code, ins->dreg, ins->sreg1, ins->sreg2);
#ifdef __mono_ppc64__
else
ppc_mulldo (code, ins->dreg, ins->sreg1, ins->sreg2);
#endif
ppc_mfspr (code, ppc_r0, ppc_xer);
ppc_andisd (code, ppc_r0, ppc_r0, (1<<14));
EMIT_COND_SYSTEM_EXCEPTION_FLAGS (PPC_BR_FALSE, PPC_BR_EQ, "OverflowException");
break;
case OP_IMUL_OVF_UN:
CASE_PPC64 (OP_LMUL_OVF_UN)
			/* we first multiply to get the high word and compare it to 0
			 * to set the flags; that result is discarded, then we multiply
			 * again to get the low word result
			 */
if (ins->opcode == OP_IMUL_OVF_UN)
ppc_mulhwu (code, ppc_r0, ins->sreg1, ins->sreg2);
#ifdef __mono_ppc64__
else
ppc_mulhdu (code, ppc_r0, ins->sreg1, ins->sreg2);
#endif
ppc_cmpi (code, 0, 0, ppc_r0, 0);
EMIT_COND_SYSTEM_EXCEPTION (CEE_BNE_UN - CEE_BEQ, "OverflowException");
ppc_multiply (code, ins->dreg, ins->sreg1, ins->sreg2);
break;
case OP_ICONST:
ppc_load (code, ins->dreg, ins->inst_c0);
break;
case OP_I8CONST: {
ppc_load (code, ins->dreg, ins->inst_l);
break;
}
case OP_LOAD_GOTADDR:
/* The PLT implementation depends on this */
g_assert (ins->dreg == ppc_r30);
code = mono_arch_emit_load_got_addr (cfg->native_code, code, cfg, NULL);
break;
case OP_GOT_ENTRY:
// FIXME: Fix max instruction length
/* XXX: This is hairy; we're casting a pointer from a union to an enum... */
mono_add_patch_info (cfg, offset, (MonoJumpInfoType)(intptr_t)ins->inst_right->inst_i1, ins->inst_right->inst_p0);
/* arch_emit_got_access () patches this */
ppc_load32 (code, ppc_r0, 0);
ppc_ldptr_indexed (code, ins->dreg, ins->inst_basereg, ppc_r0);
break;
case OP_AOTCONST:
mono_add_patch_info (cfg, offset, (MonoJumpInfoType)(intptr_t)ins->inst_i1, ins->inst_p0);
ppc_load_sequence (code, ins->dreg, 0);
break;
CASE_PPC32 (OP_ICONV_TO_I4)
CASE_PPC32 (OP_ICONV_TO_U4)
case OP_MOVE:
if (ins->dreg != ins->sreg1)
ppc_mr (code, ins->dreg, ins->sreg1);
break;
case OP_SETLRET: {
int saved = ins->sreg1;
if (ins->sreg1 == ppc_r3) {
ppc_mr (code, ppc_r0, ins->sreg1);
saved = ppc_r0;
}
if (ins->sreg2 != ppc_r3)
ppc_mr (code, ppc_r3, ins->sreg2);
if (saved != ppc_r4)
ppc_mr (code, ppc_r4, saved);
break;
}
case OP_FMOVE:
if (ins->dreg != ins->sreg1)
ppc_fmr (code, ins->dreg, ins->sreg1);
break;
case OP_MOVE_F_TO_I4:
ppc_stfs (code, ins->sreg1, -4, ppc_r1);
ppc_ldptr (code, ins->dreg, -4, ppc_r1);
break;
case OP_MOVE_I4_TO_F:
ppc_stw (code, ins->sreg1, -4, ppc_r1);
ppc_lfs (code, ins->dreg, -4, ppc_r1);
break;
#ifdef __mono_ppc64__
case OP_MOVE_F_TO_I8:
ppc_stfd (code, ins->sreg1, -8, ppc_r1);
ppc_ldptr (code, ins->dreg, -8, ppc_r1);
break;
case OP_MOVE_I8_TO_F:
ppc_stptr (code, ins->sreg1, -8, ppc_r1);
ppc_lfd (code, ins->dreg, -8, ppc_r1);
break;
#endif
case OP_FCONV_TO_R4:
ppc_frsp (code, ins->dreg, ins->sreg1);
break;
case OP_TAILCALL_PARAMETER:
// This opcode helps compute sizes, i.e.
// of the subsequent OP_TAILCALL, but contributes no code.
g_assert (ins->next);
break;
case OP_TAILCALL: {
int i, pos;
MonoCallInst *call = (MonoCallInst*)ins;
/*
* Keep in sync with mono_arch_emit_epilog
*/
g_assert (!cfg->method->save_lmf);
/*
* Note: we can use ppc_r12 here because it is dead anyway:
* we're leaving the method.
*/
if (1 || cfg->flags & MONO_CFG_HAS_CALLS) {
long ret_offset = cfg->stack_usage + PPC_RET_ADDR_OFFSET;
if (ppc_is_imm16 (ret_offset)) {
ppc_ldptr (code, ppc_r0, ret_offset, cfg->frame_reg);
} else {
ppc_load (code, ppc_r12, ret_offset);
ppc_ldptr_indexed (code, ppc_r0, cfg->frame_reg, ppc_r12);
}
ppc_mtlr (code, ppc_r0);
}
if (ppc_is_imm16 (cfg->stack_usage)) {
ppc_addi (code, ppc_r12, cfg->frame_reg, cfg->stack_usage);
} else {
/* cfg->stack_usage is an int, so we can use
* an addis/addi sequence here even in 64-bit. */
ppc_addis (code, ppc_r12, cfg->frame_reg, ppc_ha(cfg->stack_usage));
ppc_addi (code, ppc_r12, ppc_r12, cfg->stack_usage);
}
if (!cfg->method->save_lmf) {
pos = 0;
for (i = 31; i >= 13; --i) {
if (cfg->used_int_regs & (1 << i)) {
pos += sizeof (target_mgreg_t);
ppc_ldptr (code, i, -pos, ppc_r12);
}
}
} else {
/* FIXME restore from MonoLMF: though this can't happen yet */
}
/* Copy arguments on the stack to our argument area */
if (call->stack_usage) {
code = emit_memcpy (code, call->stack_usage, ppc_r12, PPC_STACK_PARAM_OFFSET, ppc_sp, PPC_STACK_PARAM_OFFSET);
/* r12 was clobbered */
g_assert (cfg->frame_reg == ppc_sp);
if (ppc_is_imm16 (cfg->stack_usage)) {
ppc_addi (code, ppc_r12, cfg->frame_reg, cfg->stack_usage);
} else {
/* cfg->stack_usage is an int, so we can use
* an addis/addi sequence here even in 64-bit. */
ppc_addis (code, ppc_r12, cfg->frame_reg, ppc_ha(cfg->stack_usage));
ppc_addi (code, ppc_r12, ppc_r12, cfg->stack_usage);
}
}
ppc_mr (code, ppc_sp, ppc_r12);
mono_add_patch_info (cfg, (guint8*) code - cfg->native_code, MONO_PATCH_INFO_METHOD_JUMP, call->method);
cfg->thunk_area += THUNK_SIZE;
if (cfg->compile_aot) {
/* arch_emit_got_access () patches this */
ppc_load32 (code, ppc_r0, 0);
#ifdef PPC_USES_FUNCTION_DESCRIPTOR
ppc_ldptr_indexed (code, ppc_r12, ppc_r30, ppc_r0);
ppc_ldptr (code, ppc_r0, 0, ppc_r12);
#else
ppc_ldptr_indexed (code, ppc_r0, ppc_r30, ppc_r0);
#endif
ppc_mtctr (code, ppc_r0);
ppc_bcctr (code, PPC_BR_ALWAYS, 0);
} else {
ppc_b (code, 0);
}
break;
}
case OP_CHECK_THIS:
/* ensure ins->sreg1 is not NULL */
ppc_ldptr (code, ppc_r0, 0, ins->sreg1);
break;
case OP_ARGLIST: {
long cookie_offset = cfg->sig_cookie + cfg->stack_usage;
if (ppc_is_imm16 (cookie_offset)) {
ppc_addi (code, ppc_r0, cfg->frame_reg, cookie_offset);
} else {
ppc_load (code, ppc_r0, cookie_offset);
ppc_add (code, ppc_r0, cfg->frame_reg, ppc_r0);
}
ppc_stptr (code, ppc_r0, 0, ins->sreg1);
break;
}
case OP_FCALL:
case OP_LCALL:
case OP_VCALL:
case OP_VCALL2:
case OP_VOIDCALL:
case OP_CALL:
call = (MonoCallInst*)ins;
mono_call_add_patch_info (cfg, call, offset);
if ((FORCE_INDIR_CALL || cfg->method->dynamic) && !cfg->compile_aot) {
ppc_load_func (code, PPC_CALL_REG, 0);
ppc_mtlr (code, PPC_CALL_REG);
ppc_blrl (code);
} else {
ppc_bl (code, 0);
}
/* FIXME: this should be handled somewhere else in the new jit */
code = emit_move_return_value (cfg, ins, code);
break;
case OP_FCALL_REG:
case OP_LCALL_REG:
case OP_VCALL_REG:
case OP_VCALL2_REG:
case OP_VOIDCALL_REG:
case OP_CALL_REG:
#ifdef PPC_USES_FUNCTION_DESCRIPTOR
ppc_ldptr (code, ppc_r0, 0, ins->sreg1);
/* FIXME: if we know that this is a method, we
can omit this load */
ppc_ldptr (code, ppc_r2, 8, ins->sreg1);
ppc_mtlr (code, ppc_r0);
#else
#if (_CALL_ELF == 2)
if (ins->flags & MONO_INST_HAS_METHOD) {
// Not a global entry point
} else {
// Need to set up r12 with function entry address for global entry point
if (ppc_r12 != ins->sreg1) {
				ppc_mr (code, ppc_r12, ins->sreg1);
}
}
#endif
ppc_mtlr (code, ins->sreg1);
#endif
ppc_blrl (code);
/* FIXME: this should be handled somewhere else in the new jit */
code = emit_move_return_value (cfg, ins, code);
break;
case OP_FCALL_MEMBASE:
case OP_LCALL_MEMBASE:
case OP_VCALL_MEMBASE:
case OP_VCALL2_MEMBASE:
case OP_VOIDCALL_MEMBASE:
case OP_CALL_MEMBASE:
if (cfg->compile_aot && ins->sreg1 == ppc_r12) {
/* The trampolines clobber this */
ppc_mr (code, ppc_r29, ins->sreg1);
ppc_ldptr (code, ppc_r0, ins->inst_offset, ppc_r29);
} else {
ppc_ldptr (code, ppc_r0, ins->inst_offset, ins->sreg1);
}
ppc_mtlr (code, ppc_r0);
ppc_blrl (code);
/* FIXME: this should be handled somewhere else in the new jit */
code = emit_move_return_value (cfg, ins, code);
break;
case OP_LOCALLOC: {
guint8 * zero_loop_jump, * zero_loop_start;
/* keep alignment */
int alloca_waste = PPC_STACK_PARAM_OFFSET + cfg->param_area + 31;
int area_offset = alloca_waste;
area_offset &= ~31;
ppc_addi (code, ppc_r12, ins->sreg1, alloca_waste + 31);
/* FIXME: should be calculated from MONO_ARCH_FRAME_ALIGNMENT */
ppc_clear_right_imm (code, ppc_r12, ppc_r12, 4);
/* use ctr to store the number of words to 0 if needed */
if (ins->flags & MONO_INST_INIT) {
/* we zero 4 bytes at a time:
* we add 7 instead of 3 so that we set the counter to
* at least 1, otherwise the bdnz instruction will make
* it negative and iterate billions of times.
*/
ppc_addi (code, ppc_r0, ins->sreg1, 7);
ppc_shift_right_arith_imm (code, ppc_r0, ppc_r0, 2);
ppc_mtctr (code, ppc_r0);
}
ppc_ldptr (code, ppc_r0, 0, ppc_sp);
ppc_neg (code, ppc_r12, ppc_r12);
ppc_stptr_update_indexed (code, ppc_r0, ppc_sp, ppc_r12);
/* FIXME: make this loop work in 8 byte
increments on PPC64 */
if (ins->flags & MONO_INST_INIT) {
/* adjust the dest reg by -4 so we can use stwu */
/* we actually adjust -8 because we let the loop
* run at least once
*/
ppc_addi (code, ins->dreg, ppc_sp, (area_offset - 8));
ppc_li (code, ppc_r12, 0);
zero_loop_start = code;
ppc_stwu (code, ppc_r12, 4, ins->dreg);
zero_loop_jump = code;
ppc_bc (code, PPC_BR_DEC_CTR_NONZERO, 0, 0);
ppc_patch (zero_loop_jump, zero_loop_start);
}
ppc_addi (code, ins->dreg, ppc_sp, area_offset);
break;
}
case OP_THROW: {
//ppc_break (code);
ppc_mr (code, ppc_r3, ins->sreg1);
mono_add_patch_info (cfg, code - cfg->native_code, MONO_PATCH_INFO_JIT_ICALL_ID, GUINT_TO_POINTER (MONO_JIT_ICALL_mono_arch_throw_exception));
if ((FORCE_INDIR_CALL || cfg->method->dynamic) && !cfg->compile_aot) {
ppc_load_func (code, PPC_CALL_REG, 0);
ppc_mtlr (code, PPC_CALL_REG);
ppc_blrl (code);
} else {
ppc_bl (code, 0);
}
break;
}
case OP_RETHROW: {
//ppc_break (code);
ppc_mr (code, ppc_r3, ins->sreg1);
mono_add_patch_info (cfg, code - cfg->native_code, MONO_PATCH_INFO_JIT_ICALL_ID,
GUINT_TO_POINTER (MONO_JIT_ICALL_mono_arch_rethrow_exception));
if ((FORCE_INDIR_CALL || cfg->method->dynamic) && !cfg->compile_aot) {
ppc_load_func (code, PPC_CALL_REG, 0);
ppc_mtlr (code, PPC_CALL_REG);
ppc_blrl (code);
} else {
ppc_bl (code, 0);
}
break;
}
case OP_START_HANDLER: {
MonoInst *spvar = mono_find_spvar_for_region (cfg, bb->region);
g_assert (spvar->inst_basereg != ppc_sp);
code = emit_reserve_param_area (cfg, code);
ppc_mflr (code, ppc_r0);
if (ppc_is_imm16 (spvar->inst_offset)) {
ppc_stptr (code, ppc_r0, spvar->inst_offset, spvar->inst_basereg);
} else {
ppc_load (code, ppc_r12, spvar->inst_offset);
ppc_stptr_indexed (code, ppc_r0, ppc_r12, spvar->inst_basereg);
}
break;
}
case OP_ENDFILTER: {
MonoInst *spvar = mono_find_spvar_for_region (cfg, bb->region);
g_assert (spvar->inst_basereg != ppc_sp);
code = emit_unreserve_param_area (cfg, code);
if (ins->sreg1 != ppc_r3)
ppc_mr (code, ppc_r3, ins->sreg1);
if (ppc_is_imm16 (spvar->inst_offset)) {
ppc_ldptr (code, ppc_r0, spvar->inst_offset, spvar->inst_basereg);
} else {
ppc_load (code, ppc_r12, spvar->inst_offset);
ppc_ldptr_indexed (code, ppc_r0, spvar->inst_basereg, ppc_r12);
}
ppc_mtlr (code, ppc_r0);
ppc_blr (code);
break;
}
case OP_ENDFINALLY: {
MonoInst *spvar = mono_find_spvar_for_region (cfg, bb->region);
g_assert (spvar->inst_basereg != ppc_sp);
code = emit_unreserve_param_area (cfg, code);
ppc_ldptr (code, ppc_r0, spvar->inst_offset, spvar->inst_basereg);
ppc_mtlr (code, ppc_r0);
ppc_blr (code);
break;
}
case OP_CALL_HANDLER:
mono_add_patch_info (cfg, code - cfg->native_code, MONO_PATCH_INFO_BB, ins->inst_target_bb);
ppc_bl (code, 0);
for (GList *tmp = ins->inst_eh_blocks; tmp != bb->clause_holes; tmp = tmp->prev)
mono_cfg_add_try_hole (cfg, ((MonoLeaveClause *) tmp->data)->clause, code, bb);
break;
case OP_LABEL:
ins->inst_c0 = code - cfg->native_code;
break;
case OP_BR:
/*if (ins->inst_target_bb->native_offset) {
ppc_b (code, 0);
//x86_jump_code (code, cfg->native_code + ins->inst_target_bb->native_offset);
} else*/ {
mono_add_patch_info (cfg, offset, MONO_PATCH_INFO_BB, ins->inst_target_bb);
ppc_b (code, 0);
}
break;
case OP_BR_REG:
ppc_mtctr (code, ins->sreg1);
ppc_bcctr (code, PPC_BR_ALWAYS, 0);
break;
case OP_ICNEQ:
ppc_li (code, ins->dreg, 0);
ppc_bc (code, PPC_BR_TRUE, PPC_BR_EQ, 2);
ppc_li (code, ins->dreg, 1);
break;
case OP_CEQ:
case OP_ICEQ:
CASE_PPC64 (OP_LCEQ)
ppc_li (code, ins->dreg, 0);
ppc_bc (code, PPC_BR_FALSE, PPC_BR_EQ, 2);
ppc_li (code, ins->dreg, 1);
break;
case OP_CLT:
case OP_CLT_UN:
case OP_ICLT:
case OP_ICLT_UN:
CASE_PPC64 (OP_LCLT)
CASE_PPC64 (OP_LCLT_UN)
ppc_li (code, ins->dreg, 1);
ppc_bc (code, PPC_BR_TRUE, PPC_BR_LT, 2);
ppc_li (code, ins->dreg, 0);
break;
case OP_ICGE:
case OP_ICGE_UN:
ppc_li (code, ins->dreg, 1);
ppc_bc (code, PPC_BR_FALSE, PPC_BR_LT, 2);
ppc_li (code, ins->dreg, 0);
break;
case OP_CGT:
case OP_CGT_UN:
case OP_ICGT:
case OP_ICGT_UN:
CASE_PPC64 (OP_LCGT)
CASE_PPC64 (OP_LCGT_UN)
ppc_li (code, ins->dreg, 1);
ppc_bc (code, PPC_BR_TRUE, PPC_BR_GT, 2);
ppc_li (code, ins->dreg, 0);
break;
case OP_ICLE:
case OP_ICLE_UN:
ppc_li (code, ins->dreg, 1);
ppc_bc (code, PPC_BR_FALSE, PPC_BR_GT, 2);
ppc_li (code, ins->dreg, 0);
break;
case OP_COND_EXC_EQ:
case OP_COND_EXC_NE_UN:
case OP_COND_EXC_LT:
case OP_COND_EXC_LT_UN:
case OP_COND_EXC_GT:
case OP_COND_EXC_GT_UN:
case OP_COND_EXC_GE:
case OP_COND_EXC_GE_UN:
case OP_COND_EXC_LE:
case OP_COND_EXC_LE_UN:
EMIT_COND_SYSTEM_EXCEPTION (ins->opcode - OP_COND_EXC_EQ, (const char*)ins->inst_p1);
break;
case OP_COND_EXC_IEQ:
case OP_COND_EXC_INE_UN:
case OP_COND_EXC_ILT:
case OP_COND_EXC_ILT_UN:
case OP_COND_EXC_IGT:
case OP_COND_EXC_IGT_UN:
case OP_COND_EXC_IGE:
case OP_COND_EXC_IGE_UN:
case OP_COND_EXC_ILE:
case OP_COND_EXC_ILE_UN:
EMIT_COND_SYSTEM_EXCEPTION (ins->opcode - OP_COND_EXC_IEQ, (const char*)ins->inst_p1);
break;
case OP_IBEQ:
case OP_IBNE_UN:
case OP_IBLT:
case OP_IBLT_UN:
case OP_IBGT:
case OP_IBGT_UN:
case OP_IBGE:
case OP_IBGE_UN:
case OP_IBLE:
case OP_IBLE_UN:
EMIT_COND_BRANCH (ins, ins->opcode - OP_IBEQ);
break;
/* floating point opcodes */
case OP_R8CONST:
g_assert (cfg->compile_aot);
/* FIXME: Optimize this */
ppc_bl (code, 1);
ppc_mflr (code, ppc_r12);
ppc_b (code, 3);
*(double*)code = *(double*)ins->inst_p0;
code += 8;
ppc_lfd (code, ins->dreg, 8, ppc_r12);
break;
case OP_R4CONST:
g_assert_not_reached ();
break;
case OP_STORER8_MEMBASE_REG:
if (ppc_is_imm16 (ins->inst_offset)) {
ppc_stfd (code, ins->sreg1, ins->inst_offset, ins->inst_destbasereg);
} else {
if (ppc_is_imm32 (ins->inst_offset)) {
ppc_addis (code, ppc_r11, ins->inst_destbasereg, ppc_ha(ins->inst_offset));
ppc_stfd (code, ins->sreg1, ins->inst_offset, ppc_r11);
} else {
ppc_load (code, ppc_r0, ins->inst_offset);
ppc_stfdx (code, ins->sreg1, ins->inst_destbasereg, ppc_r0);
}
}
break;
case OP_LOADR8_MEMBASE:
if (ppc_is_imm16 (ins->inst_offset)) {
ppc_lfd (code, ins->dreg, ins->inst_offset, ins->inst_basereg);
} else {
if (ppc_is_imm32 (ins->inst_offset)) {
					ppc_addis (code, ppc_r11, ins->inst_basereg, ppc_ha(ins->inst_offset));
ppc_lfd (code, ins->dreg, ins->inst_offset, ppc_r11);
} else {
ppc_load (code, ppc_r0, ins->inst_offset);
					ppc_lfdx (code, ins->dreg, ins->inst_basereg, ppc_r0);
}
}
break;
case OP_STORER4_MEMBASE_REG:
ppc_frsp (code, ins->sreg1, ins->sreg1);
if (ppc_is_imm16 (ins->inst_offset)) {
ppc_stfs (code, ins->sreg1, ins->inst_offset, ins->inst_destbasereg);
} else {
if (ppc_is_imm32 (ins->inst_offset)) {
ppc_addis (code, ppc_r11, ins->inst_destbasereg, ppc_ha(ins->inst_offset));
ppc_stfs (code, ins->sreg1, ins->inst_offset, ppc_r11);
} else {
ppc_load (code, ppc_r0, ins->inst_offset);
ppc_stfsx (code, ins->sreg1, ins->inst_destbasereg, ppc_r0);
}
}
break;
case OP_LOADR4_MEMBASE:
if (ppc_is_imm16 (ins->inst_offset)) {
ppc_lfs (code, ins->dreg, ins->inst_offset, ins->inst_basereg);
} else {
if (ppc_is_imm32 (ins->inst_offset)) {
					ppc_addis (code, ppc_r11, ins->inst_basereg, ppc_ha(ins->inst_offset));
ppc_lfs (code, ins->dreg, ins->inst_offset, ppc_r11);
} else {
ppc_load (code, ppc_r0, ins->inst_offset);
					ppc_lfsx (code, ins->dreg, ins->inst_basereg, ppc_r0);
}
}
break;
case OP_LOADR4_MEMINDEX:
ppc_lfsx (code, ins->dreg, ins->inst_basereg, ins->sreg2);
break;
case OP_LOADR8_MEMINDEX:
ppc_lfdx (code, ins->dreg, ins->inst_basereg, ins->sreg2);
break;
case OP_STORER4_MEMINDEX:
ppc_frsp (code, ins->sreg1, ins->sreg1);
ppc_stfsx (code, ins->sreg1, ins->inst_destbasereg, ins->sreg2);
break;
case OP_STORER8_MEMINDEX:
ppc_stfdx (code, ins->sreg1, ins->inst_destbasereg, ins->sreg2);
break;
case CEE_CONV_R_UN:
case CEE_CONV_R4: /* FIXME: change precision */
case CEE_CONV_R8:
g_assert_not_reached ();
case OP_FCONV_TO_I1:
code = emit_float_to_int (cfg, code, ins->dreg, ins->sreg1, 1, TRUE);
break;
case OP_FCONV_TO_U1:
code = emit_float_to_int (cfg, code, ins->dreg, ins->sreg1, 1, FALSE);
break;
case OP_FCONV_TO_I2:
code = emit_float_to_int (cfg, code, ins->dreg, ins->sreg1, 2, TRUE);
break;
case OP_FCONV_TO_U2:
code = emit_float_to_int (cfg, code, ins->dreg, ins->sreg1, 2, FALSE);
break;
case OP_FCONV_TO_I4:
case OP_FCONV_TO_I:
code = emit_float_to_int (cfg, code, ins->dreg, ins->sreg1, 4, TRUE);
break;
case OP_FCONV_TO_U4:
code = emit_float_to_int (cfg, code, ins->dreg, ins->sreg1, 4, FALSE);
break;
case OP_LCONV_TO_R_UN:
g_assert_not_reached ();
/* Implemented as helper calls */
break;
case OP_LCONV_TO_OVF_I4_2:
case OP_LCONV_TO_OVF_I: {
#ifdef __mono_ppc64__
NOT_IMPLEMENTED;
#else
guint8 *negative_branch, *msword_positive_branch, *msword_negative_branch, *ovf_ex_target;
			// Check if it's negative
ppc_cmpi (code, 0, 0, ins->sreg1, 0);
negative_branch = code;
ppc_bc (code, PPC_BR_TRUE, PPC_BR_LT, 0);
			// It's positive: msword must be 0
ppc_cmpi (code, 0, 0, ins->sreg2, 0);
msword_positive_branch = code;
ppc_bc (code, PPC_BR_TRUE, PPC_BR_EQ, 0);
ovf_ex_target = code;
EMIT_COND_SYSTEM_EXCEPTION_FLAGS (PPC_BR_ALWAYS, 0, "OverflowException");
// Negative
ppc_patch (negative_branch, code);
ppc_cmpi (code, 0, 0, ins->sreg2, -1);
msword_negative_branch = code;
ppc_bc (code, PPC_BR_FALSE, PPC_BR_EQ, 0);
ppc_patch (msword_negative_branch, ovf_ex_target);
ppc_patch (msword_positive_branch, code);
if (ins->dreg != ins->sreg1)
ppc_mr (code, ins->dreg, ins->sreg1);
break;
#endif
}
case OP_ROUND:
ppc_frind (code, ins->dreg, ins->sreg1);
break;
case OP_PPC_TRUNC:
ppc_frizd (code, ins->dreg, ins->sreg1);
break;
case OP_PPC_CEIL:
ppc_fripd (code, ins->dreg, ins->sreg1);
break;
case OP_PPC_FLOOR:
ppc_frimd (code, ins->dreg, ins->sreg1);
break;
case OP_ABS:
ppc_fabsd (code, ins->dreg, ins->sreg1);
break;
case OP_SQRTF:
ppc_fsqrtsd (code, ins->dreg, ins->sreg1);
break;
case OP_SQRT:
ppc_fsqrtd (code, ins->dreg, ins->sreg1);
break;
case OP_FADD:
ppc_fadd (code, ins->dreg, ins->sreg1, ins->sreg2);
break;
case OP_FSUB:
ppc_fsub (code, ins->dreg, ins->sreg1, ins->sreg2);
break;
case OP_FMUL:
ppc_fmul (code, ins->dreg, ins->sreg1, ins->sreg2);
break;
case OP_FDIV:
ppc_fdiv (code, ins->dreg, ins->sreg1, ins->sreg2);
break;
case OP_FNEG:
ppc_fneg (code, ins->dreg, ins->sreg1);
break;
case OP_FREM:
/* emulated */
g_assert_not_reached ();
break;
/* These min/max require POWER5 */
case OP_IMIN:
ppc_cmp (code, 0, 0, ins->sreg1, ins->sreg2);
ppc_isellt (code, ins->dreg, ins->sreg1, ins->sreg2);
break;
case OP_IMIN_UN:
ppc_cmpl (code, 0, 0, ins->sreg1, ins->sreg2);
ppc_isellt (code, ins->dreg, ins->sreg1, ins->sreg2);
break;
case OP_IMAX:
ppc_cmp (code, 0, 0, ins->sreg1, ins->sreg2);
ppc_iselgt (code, ins->dreg, ins->sreg1, ins->sreg2);
break;
case OP_IMAX_UN:
ppc_cmpl (code, 0, 0, ins->sreg1, ins->sreg2);
ppc_iselgt (code, ins->dreg, ins->sreg1, ins->sreg2);
break;
CASE_PPC64 (OP_LMIN)
ppc_cmp (code, 0, 1, ins->sreg1, ins->sreg2);
ppc_isellt (code, ins->dreg, ins->sreg1, ins->sreg2);
break;
CASE_PPC64 (OP_LMIN_UN)
ppc_cmpl (code, 0, 1, ins->sreg1, ins->sreg2);
ppc_isellt (code, ins->dreg, ins->sreg1, ins->sreg2);
break;
CASE_PPC64 (OP_LMAX)
ppc_cmp (code, 0, 1, ins->sreg1, ins->sreg2);
ppc_iselgt (code, ins->dreg, ins->sreg1, ins->sreg2);
break;
CASE_PPC64 (OP_LMAX_UN)
ppc_cmpl (code, 0, 1, ins->sreg1, ins->sreg2);
ppc_iselgt (code, ins->dreg, ins->sreg1, ins->sreg2);
break;
case OP_FCOMPARE:
ppc_fcmpu (code, 0, ins->sreg1, ins->sreg2);
break;
case OP_FCEQ:
case OP_FCNEQ:
ppc_fcmpo (code, 0, ins->sreg1, ins->sreg2);
ppc_li (code, ins->dreg, 1);
ppc_bc (code, ins->opcode == OP_FCEQ ? PPC_BR_TRUE : PPC_BR_FALSE, PPC_BR_EQ, 2);
ppc_li (code, ins->dreg, 0);
break;
case OP_FCLT:
case OP_FCGE:
ppc_fcmpo (code, 0, ins->sreg1, ins->sreg2);
ppc_li (code, ins->dreg, 1);
ppc_bc (code, ins->opcode == OP_FCLT ? PPC_BR_TRUE : PPC_BR_FALSE, PPC_BR_LT, 2);
ppc_li (code, ins->dreg, 0);
break;
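		/* for the unordered variants a NaN operand sets the FU (SO) bit of
		 * cr0 and the extra branch keeps the result at 1 */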
case OP_FCLT_UN:
ppc_fcmpu (code, 0, ins->sreg1, ins->sreg2);
ppc_li (code, ins->dreg, 1);
ppc_bc (code, PPC_BR_TRUE, PPC_BR_SO, 3);
ppc_bc (code, PPC_BR_TRUE, PPC_BR_LT, 2);
ppc_li (code, ins->dreg, 0);
break;
case OP_FCGT:
case OP_FCLE:
ppc_fcmpo (code, 0, ins->sreg1, ins->sreg2);
ppc_li (code, ins->dreg, 1);
ppc_bc (code, ins->opcode == OP_FCGT ? PPC_BR_TRUE : PPC_BR_FALSE, PPC_BR_GT, 2);
ppc_li (code, ins->dreg, 0);
break;
case OP_FCGT_UN:
ppc_fcmpu (code, 0, ins->sreg1, ins->sreg2);
ppc_li (code, ins->dreg, 1);
ppc_bc (code, PPC_BR_TRUE, PPC_BR_SO, 3);
ppc_bc (code, PPC_BR_TRUE, PPC_BR_GT, 2);
ppc_li (code, ins->dreg, 0);
break;
case OP_FBEQ:
EMIT_COND_BRANCH (ins, CEE_BEQ - CEE_BEQ);
break;
case OP_FBNE_UN:
EMIT_COND_BRANCH (ins, CEE_BNE_UN - CEE_BEQ);
break;
case OP_FBLT:
ppc_bc (code, PPC_BR_TRUE, PPC_BR_SO, 2);
EMIT_COND_BRANCH (ins, CEE_BLT - CEE_BEQ);
break;
case OP_FBLT_UN:
EMIT_COND_BRANCH_FLAGS (ins, PPC_BR_TRUE, PPC_BR_SO);
EMIT_COND_BRANCH (ins, CEE_BLT_UN - CEE_BEQ);
break;
case OP_FBGT:
ppc_bc (code, PPC_BR_TRUE, PPC_BR_SO, 2);
EMIT_COND_BRANCH (ins, CEE_BGT - CEE_BEQ);
break;
case OP_FBGT_UN:
EMIT_COND_BRANCH_FLAGS (ins, PPC_BR_TRUE, PPC_BR_SO);
EMIT_COND_BRANCH (ins, CEE_BGT_UN - CEE_BEQ);
break;
case OP_FBGE:
ppc_bc (code, PPC_BR_TRUE, PPC_BR_SO, 2);
EMIT_COND_BRANCH (ins, CEE_BGE - CEE_BEQ);
break;
case OP_FBGE_UN:
EMIT_COND_BRANCH (ins, CEE_BGE_UN - CEE_BEQ);
break;
case OP_FBLE:
ppc_bc (code, PPC_BR_TRUE, PPC_BR_SO, 2);
EMIT_COND_BRANCH (ins, CEE_BLE - CEE_BEQ);
break;
case OP_FBLE_UN:
EMIT_COND_BRANCH (ins, CEE_BLE_UN - CEE_BEQ);
break;
case OP_CKFINITE:
g_assert_not_reached ();
case OP_PPC_CHECK_FINITE: {
ppc_rlwinm (code, ins->sreg1, ins->sreg1, 0, 1, 31);
ppc_addis (code, ins->sreg1, ins->sreg1, -32752);
ppc_rlwinmd (code, ins->sreg1, ins->sreg1, 1, 31, 31);
EMIT_COND_SYSTEM_EXCEPTION (CEE_BEQ - CEE_BEQ, "ArithmeticException");
			break;
		}
		case OP_JUMP_TABLE:
			mono_add_patch_info (cfg, offset, (MonoJumpInfoType)ins->inst_c1, ins->inst_p0);
#ifdef __mono_ppc64__
			ppc_load_sequence (code, ins->dreg, (guint64)0x0f0f0f0f0f0f0f0fLL);
#else
			ppc_load_sequence (code, ins->dreg, (gulong)0x0f0f0f0fL);
#endif
			break;
#ifdef __mono_ppc64__
case OP_ICONV_TO_I4:
case OP_SEXT_I4:
ppc_extsw (code, ins->dreg, ins->sreg1);
break;
case OP_ICONV_TO_U4:
case OP_ZEXT_I4:
ppc_clrldi (code, ins->dreg, ins->sreg1, 32);
break;
case OP_ICONV_TO_R4:
case OP_ICONV_TO_R8:
case OP_LCONV_TO_R4:
case OP_LCONV_TO_R8: {
int tmp;
if (ins->opcode == OP_ICONV_TO_R4 || ins->opcode == OP_ICONV_TO_R8) {
ppc_extsw (code, ppc_r0, ins->sreg1);
tmp = ppc_r0;
} else {
tmp = ins->sreg1;
}
if (cpu_hw_caps & PPC_MOVE_FPR_GPR) {
ppc_mffgpr (code, ins->dreg, tmp);
} else {
ppc_str (code, tmp, -8, ppc_r1);
ppc_lfd (code, ins->dreg, -8, ppc_r1);
}
ppc_fcfid (code, ins->dreg, ins->dreg);
if (ins->opcode == OP_ICONV_TO_R4 || ins->opcode == OP_LCONV_TO_R4)
ppc_frsp (code, ins->dreg, ins->dreg);
break;
}
case OP_LSHR:
ppc_srad (code, ins->dreg, ins->sreg1, ins->sreg2);
break;
case OP_LSHR_UN:
ppc_srd (code, ins->dreg, ins->sreg1, ins->sreg2);
break;
case OP_COND_EXC_C:
/* check XER [0-3] (SO, OV, CA): we can't use mcrxr
*/
ppc_mfspr (code, ppc_r0, ppc_xer);
ppc_andisd (code, ppc_r0, ppc_r0, (1 << 13)); /* CA */
EMIT_COND_SYSTEM_EXCEPTION_FLAGS (PPC_BR_FALSE, PPC_BR_EQ, (const char*)ins->inst_p1);
break;
case OP_COND_EXC_OV:
ppc_mfspr (code, ppc_r0, ppc_xer);
ppc_andisd (code, ppc_r0, ppc_r0, (1 << 14)); /* OV */
EMIT_COND_SYSTEM_EXCEPTION_FLAGS (PPC_BR_FALSE, PPC_BR_EQ, (const char*)ins->inst_p1);
break;
case OP_LBEQ:
case OP_LBNE_UN:
case OP_LBLT:
case OP_LBLT_UN:
case OP_LBGT:
case OP_LBGT_UN:
case OP_LBGE:
case OP_LBGE_UN:
case OP_LBLE:
case OP_LBLE_UN:
EMIT_COND_BRANCH (ins, ins->opcode - OP_LBEQ);
break;
case OP_FCONV_TO_I8:
code = emit_float_to_int (cfg, code, ins->dreg, ins->sreg1, 8, TRUE);
break;
case OP_FCONV_TO_U8:
code = emit_float_to_int (cfg, code, ins->dreg, ins->sreg1, 8, FALSE);
break;
case OP_STOREI4_MEMBASE_REG:
if (ppc_is_imm16 (ins->inst_offset)) {
ppc_stw (code, ins->sreg1, ins->inst_offset, ins->inst_destbasereg);
} else {
ppc_load (code, ppc_r0, ins->inst_offset);
ppc_stwx (code, ins->sreg1, ins->inst_destbasereg, ppc_r0);
}
break;
case OP_STOREI4_MEMINDEX:
			ppc_stwx (code, ins->sreg1, ins->inst_destbasereg, ins->sreg2);
break;
case OP_ISHR_IMM:
ppc_srawi (code, ins->dreg, ins->sreg1, (ins->inst_imm & 0x1f));
break;
case OP_ISHR_UN_IMM:
if (ins->inst_imm & 0x1f)
ppc_srwi (code, ins->dreg, ins->sreg1, (ins->inst_imm & 0x1f));
else
ppc_mr (code, ins->dreg, ins->sreg1);
break;
#else
case OP_ICONV_TO_R4:
case OP_ICONV_TO_R8: {
if (cpu_hw_caps & PPC_ISA_64) {
				ppc_srawi (code, ppc_r0, ins->sreg1, 31);
ppc_stw (code, ppc_r0, -8, ppc_r1);
ppc_stw (code, ins->sreg1, -4, ppc_r1);
ppc_lfd (code, ins->dreg, -8, ppc_r1);
ppc_fcfid (code, ins->dreg, ins->dreg);
if (ins->opcode == OP_ICONV_TO_R4)
ppc_frsp (code, ins->dreg, ins->dreg);
}
break;
}
#endif
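		/* the atomic ops are built on the load-reserve/store-conditional
		 * pair (lwarx/stwcx., ldarx/stdcx. on 64bit): the loop retries
		 * until the conditional store succeeds, bracketed by sync barriers */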
case OP_ATOMIC_ADD_I4:
CASE_PPC64 (OP_ATOMIC_ADD_I8) {
int location = ins->inst_basereg;
int addend = ins->sreg2;
guint8 *loop, *branch;
g_assert (ins->inst_offset == 0);
loop = code;
ppc_sync (code);
if (ins->opcode == OP_ATOMIC_ADD_I4)
ppc_lwarx (code, ppc_r0, 0, location);
#ifdef __mono_ppc64__
else
ppc_ldarx (code, ppc_r0, 0, location);
#endif
ppc_add (code, ppc_r0, ppc_r0, addend);
if (ins->opcode == OP_ATOMIC_ADD_I4)
ppc_stwcxd (code, ppc_r0, 0, location);
#ifdef __mono_ppc64__
else
ppc_stdcxd (code, ppc_r0, 0, location);
#endif
branch = code;
ppc_bc (code, PPC_BR_FALSE, PPC_BR_EQ, 0);
ppc_patch (branch, loop);
ppc_sync (code);
ppc_mr (code, ins->dreg, ppc_r0);
break;
}
case OP_ATOMIC_CAS_I4:
CASE_PPC64 (OP_ATOMIC_CAS_I8) {
int location = ins->sreg1;
int value = ins->sreg2;
int comparand = ins->sreg3;
guint8 *start, *not_equal, *lost_reservation;
start = code;
ppc_sync (code);
if (ins->opcode == OP_ATOMIC_CAS_I4)
ppc_lwarx (code, ppc_r0, 0, location);
#ifdef __mono_ppc64__
else
ppc_ldarx (code, ppc_r0, 0, location);
#endif
ppc_cmp (code, 0, ins->opcode == OP_ATOMIC_CAS_I4 ? 0 : 1, ppc_r0, comparand);
not_equal = code;
ppc_bc (code, PPC_BR_FALSE, PPC_BR_EQ, 0);
if (ins->opcode == OP_ATOMIC_CAS_I4)
ppc_stwcxd (code, value, 0, location);
#ifdef __mono_ppc64__
else
ppc_stdcxd (code, value, 0, location);
#endif
lost_reservation = code;
ppc_bc (code, PPC_BR_FALSE, PPC_BR_EQ, 0);
ppc_patch (lost_reservation, start);
ppc_patch (not_equal, code);
ppc_sync (code);
ppc_mr (code, ins->dreg, ppc_r0);
break;
}
case OP_LIVERANGE_START: {
if (cfg->verbose_level > 1)
printf ("R%d START=0x%x\n", MONO_VARINFO (cfg, ins->inst_c0)->vreg, (int)(code - cfg->native_code));
MONO_VARINFO (cfg, ins->inst_c0)->live_range_start = code - cfg->native_code;
break;
}
case OP_LIVERANGE_END: {
if (cfg->verbose_level > 1)
printf ("R%d END=0x%x\n", MONO_VARINFO (cfg, ins->inst_c0)->vreg, (int)(code - cfg->native_code));
MONO_VARINFO (cfg, ins->inst_c0)->live_range_end = code - cfg->native_code;
break;
}
case OP_GC_SAFE_POINT:
break;
default:
g_warning ("unknown opcode %s in %s()\n", mono_inst_name (ins->opcode), __FUNCTION__);
g_assert_not_reached ();
}
if ((cfg->opt & MONO_OPT_BRANCH) && ((code - cfg->native_code - offset) > max_len)) {
g_warning ("wrong maximal instruction length of instruction %s (expected %d, got %ld)",
mono_inst_name (ins->opcode), max_len, (glong)(code - cfg->native_code - offset));
g_assert_not_reached ();
}
cpos += max_len;
last_ins = ins;
}
set_code_cursor (cfg, code);
}
#endif /* !DISABLE_JIT */
void
mono_arch_register_lowlevel_calls (void)
{
/* The signature doesn't matter */
mono_register_jit_icall (mono_ppc_throw_exception, mono_icall_sig_void, TRUE);
}
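
/*
 * patch_load_sequence:
 *
 * Rewrite in place the 16 bit immediates of the lis/ori (32bit) or
 * lis/ori/sldi/oris/ori (64bit) sequence emitted by ppc_load_sequence.
 * Which halfwords hold the immediates depends on the host byte order.
 */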
#ifdef __mono_ppc64__
#if G_BYTE_ORDER == G_LITTLE_ENDIAN
#define patch_load_sequence(ip,val) do {\
guint16 *__load = (guint16*)(ip); \
g_assert (sizeof (val) == sizeof (gsize)); \
__load [0] = (((guint64)(gsize)(val)) >> 48) & 0xffff; \
__load [2] = (((guint64)(gsize)(val)) >> 32) & 0xffff; \
__load [6] = (((guint64)(gsize)(val)) >> 16) & 0xffff; \
__load [8] = ((guint64)(gsize)(val)) & 0xffff; \
} while (0)
#elif G_BYTE_ORDER == G_BIG_ENDIAN
#define patch_load_sequence(ip,val) do {\
guint16 *__load = (guint16*)(ip); \
g_assert (sizeof (val) == sizeof (gsize)); \
__load [1] = (((guint64)(gsize)(val)) >> 48) & 0xffff; \
__load [3] = (((guint64)(gsize)(val)) >> 32) & 0xffff; \
__load [7] = (((guint64)(gsize)(val)) >> 16) & 0xffff; \
__load [9] = ((guint64)(gsize)(val)) & 0xffff; \
} while (0)
#else
#error huh? No endianness defined by compiler
#endif
#else
#define patch_load_sequence(ip,val) do {\
guint16 *__lis_ori = (guint16*)(ip); \
__lis_ori [1] = (((gulong)(val)) >> 16) & 0xffff; \
__lis_ori [3] = ((gulong)(val)) & 0xffff; \
} while (0)
#endif
#ifndef DISABLE_JIT
void
mono_arch_patch_code_new (MonoCompile *cfg, guint8 *code, MonoJumpInfo *ji, gpointer target)
{
unsigned char *ip = ji->ip.i + code;
gboolean is_fd = FALSE;
switch (ji->type) {
case MONO_PATCH_INFO_IP:
patch_load_sequence (ip, ip);
break;
case MONO_PATCH_INFO_SWITCH: {
gpointer *table = (gpointer *)ji->data.table->table;
int i;
patch_load_sequence (ip, table);
for (i = 0; i < ji->data.table->table_size; i++) {
table [i] = (glong)ji->data.table->table [i] + code;
}
/* we put into the table the absolute address, no need for ppc_patch in this case */
break;
}
case MONO_PATCH_INFO_METHODCONST:
case MONO_PATCH_INFO_CLASS:
case MONO_PATCH_INFO_IMAGE:
case MONO_PATCH_INFO_FIELD:
case MONO_PATCH_INFO_VTABLE:
case MONO_PATCH_INFO_IID:
case MONO_PATCH_INFO_SFLDA:
case MONO_PATCH_INFO_LDSTR:
case MONO_PATCH_INFO_TYPE_FROM_HANDLE:
case MONO_PATCH_INFO_LDTOKEN:
/* from OP_AOTCONST : lis + ori */
patch_load_sequence (ip, target);
break;
case MONO_PATCH_INFO_R4:
case MONO_PATCH_INFO_R8:
g_assert_not_reached ();
*((gconstpointer *)(ip + 2)) = ji->data.target;
break;
case MONO_PATCH_INFO_EXC_NAME:
g_assert_not_reached ();
*((gconstpointer *)(ip + 1)) = ji->data.name;
break;
case MONO_PATCH_INFO_NONE:
case MONO_PATCH_INFO_BB_OVF:
case MONO_PATCH_INFO_EXC_OVF:
/* everything is dealt with at epilog output time */
break;
#ifdef PPC_USES_FUNCTION_DESCRIPTOR
case MONO_PATCH_INFO_JIT_ICALL_ID:
case MONO_PATCH_INFO_ABS:
case MONO_PATCH_INFO_RGCTX_FETCH:
case MONO_PATCH_INFO_JIT_ICALL_ADDR:
case MONO_PATCH_INFO_SPECIFIC_TRAMPOLINE_LAZY_FETCH_ADDR:
is_fd = TRUE;
/* fall through */
#endif
default:
ppc_patch_full (cfg, ip, (const guchar*)target, is_fd);
break;
}
}
/*
* Emit code to save the registers in used_int_regs or the registers in the MonoLMF
* structure at positive offset pos from register base_reg. pos is guaranteed to fit into
* the instruction offset immediate for all the registers.
*/
static guint8*
save_registers (MonoCompile *cfg, guint8* code, int pos, int base_reg, gboolean save_lmf, guint32 used_int_regs, int cfa_offset)
{
int i;
if (!save_lmf) {
for (i = 13; i <= 31; i++) {
if (used_int_regs & (1 << i)) {
ppc_str (code, i, pos, base_reg);
mono_emit_unwind_op_offset (cfg, code, i, pos - cfa_offset);
pos += sizeof (target_mgreg_t);
}
}
} else {
/* pos is the start of the MonoLMF structure */
int offset = pos + G_STRUCT_OFFSET (MonoLMF, iregs);
for (i = 13; i <= 31; i++) {
ppc_str (code, i, offset, base_reg);
mono_emit_unwind_op_offset (cfg, code, i, offset - cfa_offset);
offset += sizeof (target_mgreg_t);
}
offset = pos + G_STRUCT_OFFSET (MonoLMF, fregs);
for (i = 14; i < 32; i++) {
ppc_stfd (code, i, offset, base_reg);
offset += sizeof (gdouble);
}
}
return code;
}
/*
* Stack frame layout:
*
* ------------------- sp
* MonoLMF structure or saved registers
* -------------------
* spilled regs
* -------------------
* locals
* -------------------
* param area size is cfg->param_area
* -------------------
* linkage area size is PPC_STACK_PARAM_OFFSET
* ------------------- sp
* red zone
*/
guint8 *
mono_arch_emit_prolog (MonoCompile *cfg)
{
MonoMethod *method = cfg->method;
MonoBasicBlock *bb;
MonoMethodSignature *sig;
MonoInst *inst;
long alloc_size, pos, max_offset, cfa_offset;
int i;
guint8 *code;
CallInfo *cinfo;
int lmf_offset = 0;
int tailcall_struct_index;
sig = mono_method_signature_internal (method);
cfg->code_size = 512 + sig->param_count * 32;
code = cfg->native_code = g_malloc (cfg->code_size);
cfa_offset = 0;
/* We currently emit unwind info for aot, but don't use it */
mono_emit_unwind_op_def_cfa (cfg, code, ppc_r1, 0);
if (1 || cfg->flags & MONO_CFG_HAS_CALLS) {
ppc_mflr (code, ppc_r0);
ppc_str (code, ppc_r0, PPC_RET_ADDR_OFFSET, ppc_sp);
mono_emit_unwind_op_offset (cfg, code, ppc_lr, PPC_RET_ADDR_OFFSET);
}
alloc_size = cfg->stack_offset;
pos = 0;
if (!method->save_lmf) {
for (i = 31; i >= 13; --i) {
if (cfg->used_int_regs & (1 << i)) {
pos += sizeof (target_mgreg_t);
}
}
} else {
pos += sizeof (MonoLMF);
lmf_offset = pos;
}
alloc_size += pos;
// align to MONO_ARCH_FRAME_ALIGNMENT bytes
if (alloc_size & (MONO_ARCH_FRAME_ALIGNMENT - 1)) {
alloc_size += MONO_ARCH_FRAME_ALIGNMENT - 1;
alloc_size &= ~(MONO_ARCH_FRAME_ALIGNMENT - 1);
}
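	/* Worked example (assuming MONO_ARCH_FRAME_ALIGNMENT == 16): an
	 * alloc_size of 40 becomes (40 + 15) & ~15 == 48, i.e. sizes are
	 * rounded up to the next multiple of the alignment; already-aligned
	 * sizes skip this block entirely. */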
cfg->stack_usage = alloc_size;
g_assert ((alloc_size & (MONO_ARCH_FRAME_ALIGNMENT-1)) == 0);
if (alloc_size) {
if (ppc_is_imm16 (-alloc_size)) {
ppc_str_update (code, ppc_sp, -alloc_size, ppc_sp);
cfa_offset = alloc_size;
mono_emit_unwind_op_def_cfa_offset (cfg, code, alloc_size);
code = save_registers (cfg, code, alloc_size - pos, ppc_sp, method->save_lmf, cfg->used_int_regs, cfa_offset);
} else {
if (pos)
ppc_addi (code, ppc_r12, ppc_sp, -pos);
ppc_load (code, ppc_r0, -alloc_size);
ppc_str_update_indexed (code, ppc_sp, ppc_sp, ppc_r0);
cfa_offset = alloc_size;
mono_emit_unwind_op_def_cfa_offset (cfg, code, alloc_size);
code = save_registers (cfg, code, 0, ppc_r12, method->save_lmf, cfg->used_int_regs, cfa_offset);
}
}
if (cfg->frame_reg != ppc_sp) {
ppc_mr (code, cfg->frame_reg, ppc_sp);
mono_emit_unwind_op_def_cfa_reg (cfg, code, cfg->frame_reg);
}
/* store runtime generic context */
if (cfg->rgctx_var) {
g_assert (cfg->rgctx_var->opcode == OP_REGOFFSET &&
(cfg->rgctx_var->inst_basereg == ppc_r1 || cfg->rgctx_var->inst_basereg == ppc_r31));
ppc_stptr (code, MONO_ARCH_RGCTX_REG, cfg->rgctx_var->inst_offset, cfg->rgctx_var->inst_basereg);
}
/* compute max_offset in order to use short forward jumps
* we always do it on ppc because the immediate displacement
* for jumps is too small
*/
max_offset = 0;
for (bb = cfg->bb_entry; bb; bb = bb->next_bb) {
MonoInst *ins;
bb->max_offset = max_offset;
MONO_BB_FOR_EACH_INS (bb, ins)
max_offset += ins_get_size (ins->opcode);
}
/* load arguments allocated to register from the stack */
pos = 0;
cinfo = get_call_info (sig);
if (MONO_TYPE_ISSTRUCT (sig->ret)) {
ArgInfo *ainfo = &cinfo->ret;
inst = cfg->vret_addr;
g_assert (inst);
if (ppc_is_imm16 (inst->inst_offset)) {
ppc_stptr (code, ainfo->reg, inst->inst_offset, inst->inst_basereg);
} else {
ppc_load (code, ppc_r12, inst->inst_offset);
ppc_stptr_indexed (code, ainfo->reg, ppc_r12, inst->inst_basereg);
}
}
tailcall_struct_index = 0;
for (i = 0; i < sig->param_count + sig->hasthis; ++i) {
ArgInfo *ainfo = cinfo->args + i;
inst = cfg->args [pos];
if (cfg->verbose_level > 2)
g_print ("Saving argument %d (type: %d)\n", i, ainfo->regtype);
if (inst->opcode == OP_REGVAR) {
if (ainfo->regtype == RegTypeGeneral)
ppc_mr (code, inst->dreg, ainfo->reg);
else if (ainfo->regtype == RegTypeFP)
ppc_fmr (code, inst->dreg, ainfo->reg);
else if (ainfo->regtype == RegTypeBase) {
ppc_ldr (code, ppc_r12, 0, ppc_sp);
ppc_ldptr (code, inst->dreg, ainfo->offset, ppc_r12);
} else
g_assert_not_reached ();
if (cfg->verbose_level > 2)
g_print ("Argument %ld assigned to register %s\n", pos, mono_arch_regname (inst->dreg));
} else {
/* the argument should be put on the stack: FIXME handle size != word */
if (ainfo->regtype == RegTypeGeneral) {
switch (ainfo->size) {
case 1:
if (ppc_is_imm16 (inst->inst_offset)) {
ppc_stb (code, ainfo->reg, inst->inst_offset, inst->inst_basereg);
} else {
if (ppc_is_imm32 (inst->inst_offset)) {
ppc_addis (code, ppc_r12, inst->inst_basereg, ppc_ha(inst->inst_offset));
ppc_stb (code, ainfo->reg, inst->inst_offset, ppc_r12);
} else {
ppc_load (code, ppc_r12, inst->inst_offset);
ppc_stbx (code, ainfo->reg, inst->inst_basereg, ppc_r12);
}
}
break;
case 2:
if (ppc_is_imm16 (inst->inst_offset)) {
ppc_sth (code, ainfo->reg, inst->inst_offset, inst->inst_basereg);
} else {
if (ppc_is_imm32 (inst->inst_offset)) {
ppc_addis (code, ppc_r12, inst->inst_basereg, ppc_ha(inst->inst_offset));
ppc_sth (code, ainfo->reg, inst->inst_offset, ppc_r12);
} else {
ppc_load (code, ppc_r12, inst->inst_offset);
ppc_sthx (code, ainfo->reg, inst->inst_basereg, ppc_r12);
}
}
break;
#ifdef __mono_ppc64__
case 4:
if (ppc_is_imm16 (inst->inst_offset)) {
ppc_stw (code, ainfo->reg, inst->inst_offset, inst->inst_basereg);
} else {
if (ppc_is_imm32 (inst->inst_offset)) {
ppc_addis (code, ppc_r12, inst->inst_basereg, ppc_ha(inst->inst_offset));
ppc_stw (code, ainfo->reg, inst->inst_offset, ppc_r12);
} else {
ppc_load (code, ppc_r12, inst->inst_offset);
ppc_stwx (code, ainfo->reg, inst->inst_basereg, ppc_r12);
}
}
break;
case 8:
if (ppc_is_imm16 (inst->inst_offset)) {
ppc_str (code, ainfo->reg, inst->inst_offset, inst->inst_basereg);
} else {
ppc_load (code, ppc_r12, inst->inst_offset);
ppc_str_indexed (code, ainfo->reg, ppc_r12, inst->inst_basereg);
}
break;
#else
case 8:
if (ppc_is_imm16 (inst->inst_offset + 4)) {
ppc_stw (code, ainfo->reg, inst->inst_offset, inst->inst_basereg);
ppc_stw (code, ainfo->reg + 1, inst->inst_offset + 4, inst->inst_basereg);
} else {
ppc_addis (code, ppc_r12, inst->inst_basereg, ppc_ha(inst->inst_offset));
ppc_addi (code, ppc_r12, ppc_r12, inst->inst_offset);
ppc_stw (code, ainfo->reg, 0, ppc_r12);
ppc_stw (code, ainfo->reg + 1, 4, ppc_r12);
}
break;
#endif
default:
if (ppc_is_imm16 (inst->inst_offset)) {
ppc_stptr (code, ainfo->reg, inst->inst_offset, inst->inst_basereg);
} else {
if (ppc_is_imm32 (inst->inst_offset)) {
ppc_addis (code, ppc_r12, inst->inst_basereg, ppc_ha(inst->inst_offset));
ppc_stptr (code, ainfo->reg, inst->inst_offset, ppc_r12);
} else {
ppc_load (code, ppc_r12, inst->inst_offset);
ppc_stptr_indexed (code, ainfo->reg, inst->inst_basereg, ppc_r12);
}
}
break;
}
} else if (ainfo->regtype == RegTypeBase) {
g_assert (ppc_is_imm16 (ainfo->offset));
/* load the previous stack pointer in r12 */
ppc_ldr (code, ppc_r12, 0, ppc_sp);
ppc_ldptr (code, ppc_r0, ainfo->offset, ppc_r12);
switch (ainfo->size) {
case 1:
if (ppc_is_imm16 (inst->inst_offset)) {
ppc_stb (code, ppc_r0, inst->inst_offset, inst->inst_basereg);
} else {
if (ppc_is_imm32 (inst->inst_offset)) {
ppc_addis (code, ppc_r12, inst->inst_basereg, ppc_ha(inst->inst_offset));
ppc_stb (code, ppc_r0, inst->inst_offset, ppc_r12);
} else {
ppc_load (code, ppc_r12, inst->inst_offset);
ppc_stbx (code, ppc_r0, inst->inst_basereg, ppc_r12);
}
}
break;
case 2:
if (ppc_is_imm16 (inst->inst_offset)) {
ppc_sth (code, ppc_r0, inst->inst_offset, inst->inst_basereg);
} else {
if (ppc_is_imm32 (inst->inst_offset)) {
ppc_addis (code, ppc_r12, inst->inst_basereg, ppc_ha(inst->inst_offset));
ppc_sth (code, ppc_r0, inst->inst_offset, ppc_r12);
} else {
ppc_load (code, ppc_r12, inst->inst_offset);
ppc_sthx (code, ppc_r0, inst->inst_basereg, ppc_r12);
}
}
break;
#ifdef __mono_ppc64__
case 4:
if (ppc_is_imm16 (inst->inst_offset)) {
ppc_stw (code, ppc_r0, inst->inst_offset, inst->inst_basereg);
} else {
if (ppc_is_imm32 (inst->inst_offset)) {
ppc_addis (code, ppc_r12, inst->inst_basereg, ppc_ha(inst->inst_offset));
ppc_stw (code, ppc_r0, inst->inst_offset, ppc_r12);
} else {
ppc_load (code, ppc_r12, inst->inst_offset);
ppc_stwx (code, ppc_r0, inst->inst_basereg, ppc_r12);
}
}
break;
case 8:
if (ppc_is_imm16 (inst->inst_offset)) {
ppc_str (code, ppc_r0, inst->inst_offset, inst->inst_basereg);
} else {
ppc_load (code, ppc_r12, inst->inst_offset);
ppc_str_indexed (code, ppc_r0, ppc_r12, inst->inst_basereg);
}
break;
#else
case 8:
g_assert (ppc_is_imm16 (ainfo->offset + 4));
if (ppc_is_imm16 (inst->inst_offset + 4)) {
ppc_stw (code, ppc_r0, inst->inst_offset, inst->inst_basereg);
ppc_lwz (code, ppc_r0, ainfo->offset + 4, ppc_r12);
ppc_stw (code, ppc_r0, inst->inst_offset + 4, inst->inst_basereg);
} else {
/* use r11 to load the 2nd half of the long before we clobber r12. */
ppc_lwz (code, ppc_r11, ainfo->offset + 4, ppc_r12);
ppc_addis (code, ppc_r12, inst->inst_basereg, ppc_ha(inst->inst_offset));
ppc_addi (code, ppc_r12, ppc_r12, inst->inst_offset);
ppc_stw (code, ppc_r0, 0, ppc_r12);
ppc_stw (code, ppc_r11, 4, ppc_r12);
}
break;
#endif
default:
if (ppc_is_imm16 (inst->inst_offset)) {
ppc_stptr (code, ppc_r0, inst->inst_offset, inst->inst_basereg);
} else {
if (ppc_is_imm32 (inst->inst_offset)) {
ppc_addis (code, ppc_r12, inst->inst_basereg, ppc_ha(inst->inst_offset));
ppc_stptr (code, ppc_r0, inst->inst_offset, ppc_r12);
} else {
ppc_load (code, ppc_r12, inst->inst_offset);
ppc_stptr_indexed (code, ppc_r0, inst->inst_basereg, ppc_r12);
}
}
break;
}
} else if (ainfo->regtype == RegTypeFP) {
g_assert (ppc_is_imm16 (inst->inst_offset));
if (ainfo->size == 8)
ppc_stfd (code, ainfo->reg, inst->inst_offset, inst->inst_basereg);
else if (ainfo->size == 4)
ppc_stfs (code, ainfo->reg, inst->inst_offset, inst->inst_basereg);
else
g_assert_not_reached ();
} else if (ainfo->regtype == RegTypeFPStructByVal) {
int doffset = inst->inst_offset;
int soffset = 0;
int cur_reg;
int size = 0;
g_assert (ppc_is_imm16 (inst->inst_offset));
g_assert (ppc_is_imm16 (inst->inst_offset + ainfo->vtregs * sizeof (target_mgreg_t)));
/* FIXME: what if there is no class? */
if (sig->pinvoke && !sig->marshalling_disabled && mono_class_from_mono_type_internal (inst->inst_vtype))
size = mono_class_native_size (mono_class_from_mono_type_internal (inst->inst_vtype), NULL);
for (cur_reg = 0; cur_reg < ainfo->vtregs; ++cur_reg) {
if (ainfo->size == 4) {
ppc_stfs (code, ainfo->reg + cur_reg, doffset, inst->inst_basereg);
} else {
ppc_stfd (code, ainfo->reg + cur_reg, doffset, inst->inst_basereg);
}
soffset += ainfo->size;
doffset += ainfo->size;
}
} else if (ainfo->regtype == RegTypeStructByVal) {
int doffset = inst->inst_offset;
int soffset = 0;
int cur_reg;
int size = 0;
g_assert (ppc_is_imm16 (inst->inst_offset));
g_assert (ppc_is_imm16 (inst->inst_offset + ainfo->vtregs * sizeof (target_mgreg_t)));
/* FIXME: what if there is no class? */
if (sig->pinvoke && !sig->marshalling_disabled && mono_class_from_mono_type_internal (inst->inst_vtype))
size = mono_class_native_size (mono_class_from_mono_type_internal (inst->inst_vtype), NULL);
for (cur_reg = 0; cur_reg < ainfo->vtregs; ++cur_reg) {
#if __APPLE__
/*
* Darwin handles 1 and 2 byte
* structs specially by
* loading h/b into the arg
* register. Only done for
* pinvokes.
*/
if (size == 2)
ppc_sth (code, ainfo->reg + cur_reg, doffset, inst->inst_basereg);
else if (size == 1)
ppc_stb (code, ainfo->reg + cur_reg, doffset, inst->inst_basereg);
else
#endif
{
#ifdef __mono_ppc64__
if (ainfo->bytes) {
g_assert (cur_reg == 0);
#if G_BYTE_ORDER == G_BIG_ENDIAN
ppc_sldi (code, ppc_r0, ainfo->reg,
(sizeof (target_mgreg_t) - ainfo->bytes) * 8);
ppc_stptr (code, ppc_r0, doffset, inst->inst_basereg);
#else
if (mono_class_native_size (inst->klass, NULL) == 1) {
ppc_stb (code, ainfo->reg + cur_reg, doffset, inst->inst_basereg);
} else if (mono_class_native_size (inst->klass, NULL) == 2) {
ppc_sth (code, ainfo->reg + cur_reg, doffset, inst->inst_basereg);
} else if (mono_class_native_size (inst->klass, NULL) == 4) { // WDS -- maybe <=4?
ppc_stw (code, ainfo->reg + cur_reg, doffset, inst->inst_basereg);
} else {
ppc_stptr (code, ainfo->reg + cur_reg, doffset, inst->inst_basereg); // WDS -- Better way?
}
#endif
} else
#endif
{
ppc_stptr (code, ainfo->reg + cur_reg, doffset,
inst->inst_basereg);
}
}
soffset += sizeof (target_mgreg_t);
doffset += sizeof (target_mgreg_t);
}
if (ainfo->vtsize) {
/* FIXME: we need to do the shifting here, too */
if (ainfo->bytes)
NOT_IMPLEMENTED;
/* load the previous stack pointer in r12 (r0 gets overwritten by the memcpy) */
ppc_ldr (code, ppc_r12, 0, ppc_sp);
if ((size & MONO_PPC_32_64_CASE (3, 7)) != 0) {
code = emit_memcpy (code, size - soffset,
inst->inst_basereg, doffset,
ppc_r12, ainfo->offset + soffset);
} else {
code = emit_memcpy (code, ainfo->vtsize * sizeof (target_mgreg_t),
inst->inst_basereg, doffset,
ppc_r12, ainfo->offset + soffset);
}
}
} else if (ainfo->regtype == RegTypeStructByAddr) {
/* if it was originally a RegTypeBase */
if (ainfo->offset) {
/* load the previous stack pointer in r12 */
ppc_ldr (code, ppc_r12, 0, ppc_sp);
ppc_ldptr (code, ppc_r12, ainfo->offset, ppc_r12);
} else {
ppc_mr (code, ppc_r12, ainfo->reg);
}
g_assert (ppc_is_imm16 (inst->inst_offset));
code = emit_memcpy (code, ainfo->vtsize, inst->inst_basereg, inst->inst_offset, ppc_r12, 0);
/*g_print ("copy in %s: %d bytes from %d to offset: %d\n", method->name, ainfo->vtsize, ainfo->reg, inst->inst_offset);*/
} else
g_assert_not_reached ();
}
pos++;
}
if (method->save_lmf) {
if (cfg->compile_aot) {
/* Compute the got address which is needed by the PLT entry */
code = mono_arch_emit_load_got_addr (cfg->native_code, code, cfg, NULL);
}
mono_add_patch_info (cfg, code - cfg->native_code, MONO_PATCH_INFO_JIT_ICALL_ID,
GUINT_TO_POINTER (MONO_JIT_ICALL_mono_tls_get_lmf_addr_extern));
if ((FORCE_INDIR_CALL || cfg->method->dynamic) && !cfg->compile_aot) {
ppc_load_func (code, PPC_CALL_REG, 0);
ppc_mtlr (code, PPC_CALL_REG);
ppc_blrl (code);
} else {
ppc_bl (code, 0);
}
/* we build the MonoLMF structure on the stack - see mini-ppc.h */
/* lmf_offset is the offset from the previous stack pointer,
* alloc_size is the total stack space allocated, so the offset
* of MonoLMF from the current stack ptr is alloc_size - lmf_offset.
* The pointer to the struct is put in ppc_r12 (new_lmf).
* The callee-saved registers are already in the MonoLMF structure
*/
ppc_addi (code, ppc_r12, ppc_sp, alloc_size - lmf_offset);
/* ppc_r3 is the result from mono_get_lmf_addr () */
ppc_stptr (code, ppc_r3, G_STRUCT_OFFSET(MonoLMF, lmf_addr), ppc_r12);
/* new_lmf->previous_lmf = *lmf_addr */
ppc_ldptr (code, ppc_r0, G_STRUCT_OFFSET(MonoLMF, previous_lmf), ppc_r3);
ppc_stptr (code, ppc_r0, G_STRUCT_OFFSET(MonoLMF, previous_lmf), ppc_r12);
/* *(lmf_addr) = r12 */
ppc_stptr (code, ppc_r12, G_STRUCT_OFFSET(MonoLMF, previous_lmf), ppc_r3);
/* save method info */
if (cfg->compile_aot)
// FIXME:
ppc_load (code, ppc_r0, 0);
else
ppc_load_ptr (code, ppc_r0, method);
ppc_stptr (code, ppc_r0, G_STRUCT_OFFSET(MonoLMF, method), ppc_r12);
ppc_stptr (code, ppc_sp, G_STRUCT_OFFSET(MonoLMF, ebp), ppc_r12);
/* save the current IP */
if (cfg->compile_aot) {
ppc_bl (code, 1);
ppc_mflr (code, ppc_r0);
} else {
mono_add_patch_info (cfg, code - cfg->native_code, MONO_PATCH_INFO_IP, NULL);
#ifdef __mono_ppc64__
ppc_load_sequence (code, ppc_r0, (guint64)0x0101010101010101LL);
#else
ppc_load_sequence (code, ppc_r0, (gulong)0x01010101L);
#endif
}
ppc_stptr (code, ppc_r0, G_STRUCT_OFFSET(MonoLMF, eip), ppc_r12);
}
set_code_cursor (cfg, code);
g_free (cinfo);
return code;
}
void
mono_arch_emit_epilog (MonoCompile *cfg)
{
MonoMethod *method = cfg->method;
int pos, i;
int max_epilog_size = 16 + 20*4;
guint8 *code;
if (cfg->method->save_lmf)
max_epilog_size += 128;
code = realloc_code (cfg, max_epilog_size);
pos = 0;
if (method->save_lmf) {
int lmf_offset;
pos += sizeof (MonoLMF);
lmf_offset = pos;
/* save the frame reg in r8 */
ppc_mr (code, ppc_r8, cfg->frame_reg);
ppc_addi (code, ppc_r12, cfg->frame_reg, cfg->stack_usage - lmf_offset);
/* r5 = previous_lmf */
ppc_ldptr (code, ppc_r5, G_STRUCT_OFFSET(MonoLMF, previous_lmf), ppc_r12);
/* r6 = lmf_addr */
ppc_ldptr (code, ppc_r6, G_STRUCT_OFFSET(MonoLMF, lmf_addr), ppc_r12);
/* *(lmf_addr) = previous_lmf */
ppc_stptr (code, ppc_r5, G_STRUCT_OFFSET(MonoLMF, previous_lmf), ppc_r6);
/* FIXME: speedup: there is no actual need to restore the registers if
* we didn't actually change them (idea from Zoltan).
*/
/* restore iregs */
ppc_ldr_multiple (code, ppc_r13, G_STRUCT_OFFSET(MonoLMF, iregs), ppc_r12);
/* restore fregs */
/*for (i = 14; i < 32; i++) {
ppc_lfd (code, i, G_STRUCT_OFFSET(MonoLMF, fregs) + ((i-14) * sizeof (gdouble)), ppc_r12);
}*/
g_assert (ppc_is_imm16 (cfg->stack_usage + PPC_RET_ADDR_OFFSET));
/* use the saved copy of the frame reg in r8 */
if (1 || cfg->flags & MONO_CFG_HAS_CALLS) {
ppc_ldr (code, ppc_r0, cfg->stack_usage + PPC_RET_ADDR_OFFSET, ppc_r8);
ppc_mtlr (code, ppc_r0);
}
ppc_addic (code, ppc_sp, ppc_r8, cfg->stack_usage);
} else {
if (1 || cfg->flags & MONO_CFG_HAS_CALLS) {
long return_offset = cfg->stack_usage + PPC_RET_ADDR_OFFSET;
if (ppc_is_imm16 (return_offset)) {
ppc_ldr (code, ppc_r0, return_offset, cfg->frame_reg);
} else {
ppc_load (code, ppc_r12, return_offset);
ppc_ldr_indexed (code, ppc_r0, cfg->frame_reg, ppc_r12);
}
ppc_mtlr (code, ppc_r0);
}
if (ppc_is_imm16 (cfg->stack_usage)) {
int offset = cfg->stack_usage;
for (i = 13; i <= 31; i++) {
if (cfg->used_int_regs & (1 << i))
offset -= sizeof (target_mgreg_t);
}
if (cfg->frame_reg != ppc_sp)
ppc_mr (code, ppc_r12, cfg->frame_reg);
/* note r31 (possibly the frame register) is restored last */
for (i = 13; i <= 31; i++) {
if (cfg->used_int_regs & (1 << i)) {
ppc_ldr (code, i, offset, cfg->frame_reg);
offset += sizeof (target_mgreg_t);
}
}
if (cfg->frame_reg != ppc_sp)
ppc_addi (code, ppc_sp, ppc_r12, cfg->stack_usage);
else
ppc_addi (code, ppc_sp, ppc_sp, cfg->stack_usage);
} else {
ppc_load32 (code, ppc_r12, cfg->stack_usage);
if (cfg->used_int_regs) {
ppc_add (code, ppc_r12, cfg->frame_reg, ppc_r12);
for (i = 31; i >= 13; --i) {
if (cfg->used_int_regs & (1 << i)) {
pos += sizeof (target_mgreg_t);
ppc_ldr (code, i, -pos, ppc_r12);
}
}
ppc_mr (code, ppc_sp, ppc_r12);
} else {
ppc_add (code, ppc_sp, cfg->frame_reg, ppc_r12);
}
}
}
ppc_blr (code);
set_code_cursor (cfg, code);
}
#endif /* ifndef DISABLE_JIT */
/* remove once throw_exception_by_name is eliminated */
static int
exception_id_by_name (const char *name)
{
if (strcmp (name, "IndexOutOfRangeException") == 0)
return MONO_EXC_INDEX_OUT_OF_RANGE;
if (strcmp (name, "OverflowException") == 0)
return MONO_EXC_OVERFLOW;
if (strcmp (name, "ArithmeticException") == 0)
return MONO_EXC_ARITHMETIC;
if (strcmp (name, "DivideByZeroException") == 0)
return MONO_EXC_DIVIDE_BY_ZERO;
if (strcmp (name, "InvalidCastException") == 0)
return MONO_EXC_INVALID_CAST;
if (strcmp (name, "NullReferenceException") == 0)
return MONO_EXC_NULL_REF;
if (strcmp (name, "ArrayTypeMismatchException") == 0)
return MONO_EXC_ARRAY_TYPE_MISMATCH;
if (strcmp (name, "ArgumentException") == 0)
return MONO_EXC_ARGUMENT;
g_error ("Unknown intrinsic exception %s\n", name);
return 0;
}
#ifndef DISABLE_JIT
void
mono_arch_emit_exceptions (MonoCompile *cfg)
{
MonoJumpInfo *patch_info;
int i;
guint8 *code;
guint8* exc_throw_pos [MONO_EXC_INTRINS_NUM];
guint8 exc_throw_found [MONO_EXC_INTRINS_NUM];
int max_epilog_size = 50;
for (i = 0; i < MONO_EXC_INTRINS_NUM; i++) {
exc_throw_pos [i] = NULL;
exc_throw_found [i] = 0;
}
	/* count the exception infos and make sure we have enough space to emit them */
for (patch_info = cfg->patch_info; patch_info; patch_info = patch_info->next) {
if (patch_info->type == MONO_PATCH_INFO_EXC) {
i = exception_id_by_name ((const char*)patch_info->data.target);
if (!exc_throw_found [i]) {
max_epilog_size += (2 * PPC_LOAD_SEQUENCE_LENGTH) + 5 * 4;
exc_throw_found [i] = TRUE;
}
} else if (patch_info->type == MONO_PATCH_INFO_BB_OVF)
max_epilog_size += 12;
else if (patch_info->type == MONO_PATCH_INFO_EXC_OVF) {
MonoOvfJump *ovfj = (MonoOvfJump*)patch_info->data.target;
i = exception_id_by_name (ovfj->data.exception);
if (!exc_throw_found [i]) {
max_epilog_size += (2 * PPC_LOAD_SEQUENCE_LENGTH) + 5 * 4;
exc_throw_found [i] = TRUE;
}
max_epilog_size += 8;
}
}
code = realloc_code (cfg, max_epilog_size);
/* add code to raise exceptions */
for (patch_info = cfg->patch_info; patch_info; patch_info = patch_info->next) {
switch (patch_info->type) {
case MONO_PATCH_INFO_BB_OVF: {
MonoOvfJump *ovfj = (MonoOvfJump*)patch_info->data.target;
unsigned char *ip = patch_info->ip.i + cfg->native_code;
/* patch the initial jump */
ppc_patch (ip, code);
ppc_bc (code, ovfj->b0_cond, ovfj->b1_cond, 2);
ppc_b (code, 0);
			ppc_patch (code - 4, ip + 4); /* jump back after the initial branch */
/* jump back to the true target */
ppc_b (code, 0);
ip = ovfj->data.bb->native_offset + cfg->native_code;
ppc_patch (code - 4, ip);
patch_info->type = MONO_PATCH_INFO_NONE;
break;
}
case MONO_PATCH_INFO_EXC_OVF: {
MonoOvfJump *ovfj = (MonoOvfJump*)patch_info->data.target;
MonoJumpInfo *newji;
unsigned char *ip = patch_info->ip.i + cfg->native_code;
unsigned char *bcl = code;
/* patch the initial jump: we arrived here with a call */
ppc_patch (ip, code);
ppc_bc (code, ovfj->b0_cond, ovfj->b1_cond, 0);
ppc_b (code, 0);
			ppc_patch (code - 4, ip + 4); /* jump back after the initial branch */
/* patch the conditional jump to the right handler */
/* make it processed next */
newji = mono_mempool_alloc (cfg->mempool, sizeof (MonoJumpInfo));
newji->type = MONO_PATCH_INFO_EXC;
newji->ip.i = bcl - cfg->native_code;
newji->data.target = ovfj->data.exception;
newji->next = patch_info->next;
patch_info->next = newji;
patch_info->type = MONO_PATCH_INFO_NONE;
break;
}
case MONO_PATCH_INFO_EXC: {
MonoClass *exc_class;
unsigned char *ip = patch_info->ip.i + cfg->native_code;
i = exception_id_by_name ((const char*)patch_info->data.target);
if (exc_throw_pos [i] && !(ip > exc_throw_pos [i] && ip - exc_throw_pos [i] > 50000)) {
ppc_patch (ip, exc_throw_pos [i]);
patch_info->type = MONO_PATCH_INFO_NONE;
break;
} else {
exc_throw_pos [i] = code;
}
exc_class = mono_class_load_from_name (mono_defaults.corlib, "System", patch_info->data.name);
ppc_patch (ip, code);
/*mono_add_patch_info (cfg, code - cfg->native_code, MONO_PATCH_INFO_EXC_NAME, patch_info->data.target);*/
ppc_load (code, ppc_r3, m_class_get_type_token (exc_class));
/* we got here from a conditional call, so the calling ip is set in lr */
ppc_mflr (code, ppc_r4);
patch_info->type = MONO_PATCH_INFO_JIT_ICALL_ID;
patch_info->data.jit_icall_id = MONO_JIT_ICALL_mono_arch_throw_corlib_exception;
patch_info->ip.i = code - cfg->native_code;
if (FORCE_INDIR_CALL || cfg->method->dynamic) {
ppc_load_func (code, PPC_CALL_REG, 0);
ppc_mtctr (code, PPC_CALL_REG);
ppc_bcctr (code, PPC_BR_ALWAYS, 0);
} else {
ppc_bl (code, 0);
}
break;
}
default:
/* do nothing */
break;
}
}
set_code_cursor (cfg, code);
}
#endif
#if DEAD_CODE
static int
try_offset_access (void *value, guint32 idx)
{
register void* me __asm__ ("r2");
void ***p = (void***)((char*)me + 284);
int idx1 = idx / 32;
int idx2 = idx % 32;
if (!p [idx1])
return 0;
if (value != p[idx1][idx2])
return 0;
return 1;
}
#endif
void
mono_arch_finish_init (void)
{
}
#define CMP_SIZE (PPC_LOAD_SEQUENCE_LENGTH + 4)
#define BR_SIZE 4
#define LOADSTORE_SIZE 4
#define JUMP_IMM_SIZE 12
#define JUMP_IMM32_SIZE (PPC_LOAD_SEQUENCE_LENGTH + 8)
#define ENABLE_WRONG_METHOD_CHECK 0
gpointer
mono_arch_build_imt_trampoline (MonoVTable *vtable, MonoIMTCheckItem **imt_entries, int count,
gpointer fail_tramp)
{
int i;
int size = 0;
guint8 *code, *start;
MonoMemoryManager *mem_manager = m_class_get_mem_manager (vtable->klass);
for (i = 0; i < count; ++i) {
MonoIMTCheckItem *item = imt_entries [i];
if (item->is_equals) {
if (item->check_target_idx) {
if (!item->compare_done)
item->chunk_size += CMP_SIZE;
if (item->has_target_code)
item->chunk_size += BR_SIZE + JUMP_IMM32_SIZE;
else
item->chunk_size += LOADSTORE_SIZE + BR_SIZE + JUMP_IMM_SIZE;
} else {
if (fail_tramp) {
item->chunk_size += CMP_SIZE + BR_SIZE + JUMP_IMM32_SIZE * 2;
if (!item->has_target_code)
item->chunk_size += LOADSTORE_SIZE;
} else {
item->chunk_size += LOADSTORE_SIZE + JUMP_IMM_SIZE;
#if ENABLE_WRONG_METHOD_CHECK
item->chunk_size += CMP_SIZE + BR_SIZE + 4;
#endif
}
}
} else {
item->chunk_size += CMP_SIZE + BR_SIZE;
imt_entries [item->check_target_idx]->compare_done = TRUE;
}
size += item->chunk_size;
}
/* the initial load of the vtable address */
size += PPC_LOAD_SEQUENCE_LENGTH + LOADSTORE_SIZE;
if (fail_tramp) {
code = (guint8 *)mini_alloc_generic_virtual_trampoline (vtable, size);
} else {
code = mono_mem_manager_code_reserve (mem_manager, size);
}
start = code;
/*
* We need to save and restore r12 because it might be
* used by the caller as the vtable register, so
* clobbering it will trip up the magic trampoline.
*
* FIXME: Get rid of this by making sure that r12 is
* not used as the vtable register in interface calls.
*/
ppc_stptr (code, ppc_r12, PPC_RET_ADDR_OFFSET, ppc_sp);
ppc_load (code, ppc_r12, (gsize)(& (vtable->vtable [0])));
for (i = 0; i < count; ++i) {
MonoIMTCheckItem *item = imt_entries [i];
item->code_target = code;
if (item->is_equals) {
if (item->check_target_idx) {
if (!item->compare_done) {
ppc_load (code, ppc_r0, (gsize)item->key);
ppc_compare_log (code, 0, MONO_ARCH_IMT_REG, ppc_r0);
}
item->jmp_code = code;
ppc_bc (code, PPC_BR_FALSE, PPC_BR_EQ, 0);
if (item->has_target_code) {
ppc_load_ptr (code, ppc_r0, item->value.target_code);
} else {
ppc_ldptr (code, ppc_r0, (sizeof (target_mgreg_t) * item->value.vtable_slot), ppc_r12);
ppc_ldptr (code, ppc_r12, PPC_RET_ADDR_OFFSET, ppc_sp);
}
ppc_mtctr (code, ppc_r0);
ppc_bcctr (code, PPC_BR_ALWAYS, 0);
} else {
if (fail_tramp) {
ppc_load (code, ppc_r0, (gulong)item->key);
ppc_compare_log (code, 0, MONO_ARCH_IMT_REG, ppc_r0);
item->jmp_code = code;
ppc_bc (code, PPC_BR_FALSE, PPC_BR_EQ, 0);
if (item->has_target_code) {
ppc_load_ptr (code, ppc_r0, item->value.target_code);
} else {
g_assert (vtable);
ppc_load_ptr (code, ppc_r0, & (vtable->vtable [item->value.vtable_slot]));
ppc_ldptr_indexed (code, ppc_r0, 0, ppc_r0);
}
ppc_mtctr (code, ppc_r0);
ppc_bcctr (code, PPC_BR_ALWAYS, 0);
ppc_patch (item->jmp_code, code);
ppc_load_ptr (code, ppc_r0, fail_tramp);
ppc_mtctr (code, ppc_r0);
ppc_bcctr (code, PPC_BR_ALWAYS, 0);
item->jmp_code = NULL;
} else {
/* enable the commented code to assert on wrong method */
#if ENABLE_WRONG_METHOD_CHECK
ppc_load (code, ppc_r0, (guint32)item->key);
ppc_compare_log (code, 0, MONO_ARCH_IMT_REG, ppc_r0);
item->jmp_code = code;
ppc_bc (code, PPC_BR_FALSE, PPC_BR_EQ, 0);
#endif
ppc_ldptr (code, ppc_r0, (sizeof (target_mgreg_t) * item->value.vtable_slot), ppc_r12);
ppc_ldptr (code, ppc_r12, PPC_RET_ADDR_OFFSET, ppc_sp);
ppc_mtctr (code, ppc_r0);
ppc_bcctr (code, PPC_BR_ALWAYS, 0);
#if ENABLE_WRONG_METHOD_CHECK
ppc_patch (item->jmp_code, code);
ppc_break (code);
item->jmp_code = NULL;
#endif
}
}
} else {
ppc_load (code, ppc_r0, (gulong)item->key);
ppc_compare_log (code, 0, MONO_ARCH_IMT_REG, ppc_r0);
item->jmp_code = code;
ppc_bc (code, PPC_BR_FALSE, PPC_BR_LT, 0);
}
}
/* patch the branches to get to the target items */
for (i = 0; i < count; ++i) {
MonoIMTCheckItem *item = imt_entries [i];
if (item->jmp_code) {
if (item->check_target_idx) {
ppc_patch (item->jmp_code, imt_entries [item->check_target_idx]->code_target);
}
}
}
if (!fail_tramp)
UnlockedAdd (&mono_stats.imt_trampolines_size, code - start);
g_assert (code - start <= size);
mono_arch_flush_icache (start, size);
MONO_PROFILER_RAISE (jit_code_buffer, (start, code - start, MONO_PROFILER_CODE_BUFFER_IMT_TRAMPOLINE, NULL));
mono_tramp_info_register (mono_tramp_info_create (NULL, start, code - start, NULL, NULL), mem_manager);
return start;
}
MonoMethod*
mono_arch_find_imt_method (host_mgreg_t *regs, guint8 *code)
{
host_mgreg_t *r = (host_mgreg_t*)regs;
return (MonoMethod*)(gsize) r [MONO_ARCH_IMT_REG];
}
MonoVTable*
mono_arch_find_static_call_vtable (host_mgreg_t *regs, guint8 *code)
{
return (MonoVTable*)(gsize) regs [MONO_ARCH_RGCTX_REG];
}
GSList*
mono_arch_get_cie_program (void)
{
GSList *l = NULL;
mono_add_unwind_op_def_cfa (l, (guint8*)NULL, (guint8*)NULL, ppc_r1, 0);
return l;
}
MonoInst*
mono_arch_emit_inst_for_method (MonoCompile *cfg, MonoMethod *cmethod, MonoMethodSignature *fsig, MonoInst **args)
{
MonoInst *ins = NULL;
int opcode = 0;
if (cmethod->klass == mono_class_try_get_math_class ()) {
if (strcmp (cmethod->name, "Sqrt") == 0) {
opcode = OP_SQRT;
} else if (strcmp (cmethod->name, "Abs") == 0 && fsig->params [0]->type == MONO_TYPE_R8) {
opcode = OP_ABS;
}
if (opcode && fsig->param_count == 1) {
MONO_INST_NEW (cfg, ins, opcode);
ins->type = STACK_R8;
ins->dreg = mono_alloc_freg (cfg);
ins->sreg1 = args [0]->dreg;
MONO_ADD_INS (cfg->cbb, ins);
}
/* Check for Min/Max for (u)int(32|64) */
opcode = 0;
if (cpu_hw_caps & PPC_ISA_2_03) {
if (strcmp (cmethod->name, "Min") == 0) {
if (fsig->params [0]->type == MONO_TYPE_I4)
opcode = OP_IMIN;
if (fsig->params [0]->type == MONO_TYPE_U4)
opcode = OP_IMIN_UN;
#ifdef __mono_ppc64__
else if (fsig->params [0]->type == MONO_TYPE_I8)
opcode = OP_LMIN;
else if (fsig->params [0]->type == MONO_TYPE_U8)
opcode = OP_LMIN_UN;
#endif
} else if (strcmp (cmethod->name, "Max") == 0) {
if (fsig->params [0]->type == MONO_TYPE_I4)
opcode = OP_IMAX;
if (fsig->params [0]->type == MONO_TYPE_U4)
opcode = OP_IMAX_UN;
#ifdef __mono_ppc64__
else if (fsig->params [0]->type == MONO_TYPE_I8)
opcode = OP_LMAX;
else if (fsig->params [0]->type == MONO_TYPE_U8)
opcode = OP_LMAX_UN;
#endif
}
		/*
		 * TODO: Floating point version with fsel, but fsel has
		 * some peculiarities: it needs a scratch reg unless
		 * comparing with 0, and has different NaN/Inf behaviour
		 * (and then MathF too).
		 */
}
if (opcode && fsig->param_count == 2) {
MONO_INST_NEW (cfg, ins, opcode);
ins->type = fsig->params [0]->type == MONO_TYPE_I4 ? STACK_I4 : STACK_I8;
ins->dreg = mono_alloc_ireg (cfg);
ins->sreg1 = args [0]->dreg;
ins->sreg2 = args [1]->dreg;
MONO_ADD_INS (cfg->cbb, ins);
}
/* Rounding instructions */
opcode = 0;
if ((cpu_hw_caps & PPC_ISA_2X) && (fsig->param_count == 1) && (fsig->params [0]->type == MONO_TYPE_R8)) {
/*
* XXX: sysmath.c and the POWER ISA documentation for
* frin[.] imply rounding is a little more complicated
* than expected; the semantics are slightly different,
* so just "frin." isn't a drop-in replacement. Floor,
* Truncate, and Ceiling seem to work normally though.
* (also, no float versions of these ops, but frsp
			 * could be prepended?)
*/
//if (!strcmp (cmethod->name, "Round"))
// opcode = OP_ROUND;
if (!strcmp (cmethod->name, "Floor"))
opcode = OP_PPC_FLOOR;
else if (!strcmp (cmethod->name, "Ceiling"))
opcode = OP_PPC_CEIL;
else if (!strcmp (cmethod->name, "Truncate"))
opcode = OP_PPC_TRUNC;
if (opcode != 0) {
MONO_INST_NEW (cfg, ins, opcode);
ins->type = STACK_R8;
ins->dreg = mono_alloc_freg (cfg);
ins->sreg1 = args [0]->dreg;
MONO_ADD_INS (cfg->cbb, ins);
}
}
}
if (cmethod->klass == mono_class_try_get_mathf_class ()) {
if (strcmp (cmethod->name, "Sqrt") == 0) {
opcode = OP_SQRTF;
} /* XXX: POWER has no single-precision normal FPU abs? */
if (opcode && fsig->param_count == 1) {
MONO_INST_NEW (cfg, ins, opcode);
ins->type = STACK_R4;
ins->dreg = mono_alloc_freg (cfg);
ins->sreg1 = args [0]->dreg;
MONO_ADD_INS (cfg->cbb, ins);
}
}
return ins;
}
host_mgreg_t
mono_arch_context_get_int_reg (MonoContext *ctx, int reg)
{
if (reg == ppc_r1)
return (host_mgreg_t)(gsize)MONO_CONTEXT_GET_SP (ctx);
return ctx->regs [reg];
}
host_mgreg_t*
mono_arch_context_get_int_reg_address (MonoContext *ctx, int reg)
{
if (reg == ppc_r1)
		return (host_mgreg_t*)(gsize)&MONO_CONTEXT_GET_SP (ctx);
return &ctx->regs [reg];
}
guint32
mono_arch_get_patch_offset (guint8 *code)
{
return 0;
}
/*
 * mono_arch_emit_load_got_addr:
*
* Emit code to load the got address.
* On PPC, the result is placed into r30.
*/
guint8*
mono_arch_emit_load_got_addr (guint8 *start, guint8 *code, MonoCompile *cfg, MonoJumpInfo **ji)
{
ppc_bl (code, 1);
ppc_mflr (code, ppc_r30);
if (cfg)
mono_add_patch_info (cfg, code - start, MONO_PATCH_INFO_GOT_OFFSET, NULL);
else
*ji = mono_patch_info_list_prepend (*ji, code - start, MONO_PATCH_INFO_GOT_OFFSET, NULL);
/* arch_emit_got_address () patches this */
#if defined(TARGET_POWERPC64)
ppc_nop (code);
ppc_nop (code);
ppc_nop (code);
ppc_nop (code);
#else
ppc_load32 (code, ppc_r0, 0);
ppc_add (code, ppc_r30, ppc_r30, ppc_r0);
#endif
set_code_cursor (cfg, code);
return code;
}
/*
 * mono_arch_emit_load_aotconst:
*
* Emit code to load the contents of the GOT slot identified by TRAMP_TYPE and
* TARGET from the mscorlib GOT in full-aot code.
* On PPC, the GOT address is assumed to be in r30, and the result is placed into
* r12.
*/
guint8*
mono_arch_emit_load_aotconst (guint8 *start, guint8 *code, MonoJumpInfo **ji, MonoJumpInfoType tramp_type, gconstpointer target)
{
/* Load the mscorlib got address */
ppc_ldptr (code, ppc_r12, sizeof (target_mgreg_t), ppc_r30);
*ji = mono_patch_info_list_prepend (*ji, code - start, tramp_type, target);
/* arch_emit_got_access () patches this */
ppc_load32 (code, ppc_r0, 0);
ppc_ldptr_indexed (code, ppc_r12, ppc_r12, ppc_r0);
return code;
}
/* Soft Debug support */
#ifdef MONO_ARCH_SOFT_DEBUG_SUPPORTED
/*
* BREAKPOINTS
*/
/*
* mono_arch_set_breakpoint:
*
* See mini-amd64.c for docs.
*/
void
mono_arch_set_breakpoint (MonoJitInfo *ji, guint8 *ip)
{
guint8 *code = ip;
guint8 *orig_code = code;
ppc_load_sequence (code, ppc_r12, (gsize)bp_trigger_page);
ppc_ldptr (code, ppc_r12, 0, ppc_r12);
g_assert (code - orig_code == BREAKPOINT_SIZE);
mono_arch_flush_icache (orig_code, code - orig_code);
}
/*
* mono_arch_clear_breakpoint:
*
* See mini-amd64.c for docs.
*/
void
mono_arch_clear_breakpoint (MonoJitInfo *ji, guint8 *ip)
{
guint8 *code = ip;
int i;
for (i = 0; i < BREAKPOINT_SIZE / 4; ++i)
ppc_nop (code);
mono_arch_flush_icache (ip, code - ip);
}
/*
* mono_arch_is_breakpoint_event:
*
* See mini-amd64.c for docs.
*/
gboolean
mono_arch_is_breakpoint_event (void *info, void *sigctx)
{
siginfo_t* sinfo = (siginfo_t*) info;
/* Sometimes the address is off by 4 */
if (sinfo->si_addr >= bp_trigger_page && (guint8*)sinfo->si_addr <= (guint8*)bp_trigger_page + 128)
return TRUE;
else
return FALSE;
}
/*
* mono_arch_skip_breakpoint:
*
* See mini-amd64.c for docs.
*/
void
mono_arch_skip_breakpoint (MonoContext *ctx, MonoJitInfo *ji)
{
/* skip the ldptr */
MONO_CONTEXT_SET_IP (ctx, (guint8*)MONO_CONTEXT_GET_IP (ctx) + 4);
}
/*
* SINGLE STEPPING
*/
/*
* mono_arch_start_single_stepping:
*
* See mini-amd64.c for docs.
*/
void
mono_arch_start_single_stepping (void)
{
mono_mprotect (ss_trigger_page, mono_pagesize (), 0);
}
/*
* mono_arch_stop_single_stepping:
*
* See mini-amd64.c for docs.
*/
void
mono_arch_stop_single_stepping (void)
{
mono_mprotect (ss_trigger_page, mono_pagesize (), MONO_MMAP_READ);
}
/*
* mono_arch_is_single_step_event:
*
* See mini-amd64.c for docs.
*/
gboolean
mono_arch_is_single_step_event (void *info, void *sigctx)
{
siginfo_t* sinfo = (siginfo_t*) info;
/* Sometimes the address is off by 4 */
if (sinfo->si_addr >= ss_trigger_page && (guint8*)sinfo->si_addr <= (guint8*)ss_trigger_page + 128)
return TRUE;
else
return FALSE;
}
/*
* mono_arch_skip_single_step:
*
* See mini-amd64.c for docs.
*/
void
mono_arch_skip_single_step (MonoContext *ctx)
{
/* skip the ldptr */
MONO_CONTEXT_SET_IP (ctx, (guint8*)MONO_CONTEXT_GET_IP (ctx) + 4);
}
/*
* mono_arch_create_seq_point_info:
*
* See mini-amd64.c for docs.
*/
SeqPointInfo*
mono_arch_get_seq_point_info (guint8 *code)
{
NOT_IMPLEMENTED;
return NULL;
}
#endif
gboolean
mono_arch_opcode_supported (int opcode)
{
switch (opcode) {
case OP_ATOMIC_ADD_I4:
case OP_ATOMIC_CAS_I4:
#ifdef TARGET_POWERPC64
case OP_ATOMIC_ADD_I8:
case OP_ATOMIC_CAS_I8:
#endif
return TRUE;
default:
return FALSE;
}
}
gpointer
mono_arch_load_function (MonoJitICallId jit_icall_id)
{
gpointer target = NULL;
switch (jit_icall_id) {
#undef MONO_AOT_ICALL
#define MONO_AOT_ICALL(x) case MONO_JIT_ICALL_ ## x: target = (gpointer)x; break;
MONO_AOT_ICALL (mono_ppc_throw_exception)
}
return target;
}
| /**
* \file
* PowerPC backend for the Mono code generator
*
* Authors:
* Paolo Molaro ([email protected])
* Dietmar Maurer ([email protected])
* Andreas Faerber <[email protected]>
*
* (C) 2003 Ximian, Inc.
* (C) 2007-2008 Andreas Faerber
*/
#include "mini.h"
#include <string.h>
#include <mono/metadata/abi-details.h>
#include <mono/metadata/appdomain.h>
#include <mono/metadata/debug-helpers.h>
#include <mono/utils/mono-proclib.h>
#include <mono/utils/mono-mmap.h>
#include <mono/utils/mono-hwcap.h>
#include <mono/utils/unlocked.h>
#include "mono/utils/mono-tls-inline.h"
#include "mini-ppc.h"
#ifdef TARGET_POWERPC64
#include "cpu-ppc64.h"
#else
#include "cpu-ppc.h"
#endif
#include "ir-emit.h"
#include "aot-runtime.h"
#include "mini-runtime.h"
#ifdef __APPLE__
#include <sys/sysctl.h>
#endif
#ifdef __linux__
#include <unistd.h>
#endif
#ifdef _AIX
#include <sys/systemcfg.h>
#endif
static GENERATE_TRY_GET_CLASS_WITH_CACHE (math, "System", "Math")
static GENERATE_TRY_GET_CLASS_WITH_CACHE (mathf, "System", "MathF")
#define FORCE_INDIR_CALL 1
enum {
TLS_MODE_DETECT,
TLS_MODE_FAILED,
TLS_MODE_LTHREADS,
TLS_MODE_NPTL,
TLS_MODE_DARWIN_G4,
TLS_MODE_DARWIN_G5
};
/* cpu_hw_caps contains the flags defined below */
static int cpu_hw_caps = 0;
static int cachelinesize = 0;
static int cachelineinc = 0;
enum {
PPC_ICACHE_SNOOP = 1 << 0,
PPC_MULTIPLE_LS_UNITS = 1 << 1,
PPC_SMP_CAPABLE = 1 << 2,
PPC_ISA_2X = 1 << 3,
PPC_ISA_64 = 1 << 4,
PPC_MOVE_FPR_GPR = 1 << 5,
PPC_ISA_2_03 = 1 << 6,
PPC_HW_CAP_END
};
#define BREAKPOINT_SIZE (PPC_LOAD_SEQUENCE_LENGTH + 4)
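/*
 * A breakpoint site consists of the constant load of the trigger page address
 * (PPC_LOAD_SEQUENCE_LENGTH bytes) followed by one 4-byte load from that page;
 * mono_arch_set_breakpoint () asserts exactly this size.
 */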
/* This mutex protects architecture specific caches */
#define mono_mini_arch_lock() mono_os_mutex_lock (&mini_arch_mutex)
#define mono_mini_arch_unlock() mono_os_mutex_unlock (&mini_arch_mutex)
static mono_mutex_t mini_arch_mutex;
/*
* The code generated for sequence points reads from this location, which is
* made read-only when single stepping is enabled.
*/
static gpointer ss_trigger_page;
/* Enabled breakpoints read from this trigger page */
static gpointer bp_trigger_page;
#define MONO_EMIT_NEW_LOAD_R8(cfg,dr,addr) do { \
MonoInst *inst; \
MONO_INST_NEW ((cfg), (inst), OP_R8CONST); \
inst->type = STACK_R8; \
inst->dreg = (dr); \
inst->inst_p0 = (void*)(addr); \
mono_bblock_add_inst (cfg->cbb, inst); \
} while (0)
const char*
mono_arch_regname (int reg) {
static const char rnames[][4] = {
"r0", "sp", "r2", "r3", "r4",
"r5", "r6", "r7", "r8", "r9",
"r10", "r11", "r12", "r13", "r14",
"r15", "r16", "r17", "r18", "r19",
"r20", "r21", "r22", "r23", "r24",
"r25", "r26", "r27", "r28", "r29",
"r30", "r31"
};
if (reg >= 0 && reg < 32)
return rnames [reg];
return "unknown";
}
const char*
mono_arch_fregname (int reg) {
static const char rnames[][4] = {
"f0", "f1", "f2", "f3", "f4",
"f5", "f6", "f7", "f8", "f9",
"f10", "f11", "f12", "f13", "f14",
"f15", "f16", "f17", "f18", "f19",
"f20", "f21", "f22", "f23", "f24",
"f25", "f26", "f27", "f28", "f29",
"f30", "f31"
};
if (reg >= 0 && reg < 32)
return rnames [reg];
return "unknown";
}
/* this function overwrites r0, r11, r12 */
static guint8*
emit_memcpy (guint8 *code, int size, int dreg, int doffset, int sreg, int soffset)
{
	/* for big copies, drive an inline copy loop with the count register (CTR) */
if (size > sizeof (target_mgreg_t) * 5) {
long shifted = size / TARGET_SIZEOF_VOID_P;
guint8 *copy_loop_start, *copy_loop_jump;
ppc_load (code, ppc_r0, shifted);
ppc_mtctr (code, ppc_r0);
//g_assert (sreg == ppc_r12);
ppc_addi (code, ppc_r11, dreg, (doffset - sizeof (target_mgreg_t)));
ppc_addi (code, ppc_r12, sreg, (soffset - sizeof (target_mgreg_t)));
copy_loop_start = code;
ppc_ldptr_update (code, ppc_r0, (unsigned int)sizeof (target_mgreg_t), ppc_r12);
ppc_stptr_update (code, ppc_r0, (unsigned int)sizeof (target_mgreg_t), ppc_r11);
copy_loop_jump = code;
ppc_bc (code, PPC_BR_DEC_CTR_NONZERO, 0, 0);
ppc_patch (copy_loop_jump, copy_loop_start);
size -= shifted * sizeof (target_mgreg_t);
doffset = soffset = 0;
dreg = ppc_r11;
}
#ifdef __mono_ppc64__
	/* if the hardware has multiple load/store units and the move is long
	   enough to use more than one register, use load/load/store/store
	   to execute 2 instructions per cycle. */
if ((cpu_hw_caps & PPC_MULTIPLE_LS_UNITS) && (dreg != ppc_r11) && (sreg != ppc_r11)) {
while (size >= 16) {
ppc_ldptr (code, ppc_r0, soffset, sreg);
ppc_ldptr (code, ppc_r11, soffset+8, sreg);
ppc_stptr (code, ppc_r0, doffset, dreg);
ppc_stptr (code, ppc_r11, doffset+8, dreg);
size -= 16;
soffset += 16;
doffset += 16;
}
}
while (size >= 8) {
ppc_ldr (code, ppc_r0, soffset, sreg);
ppc_str (code, ppc_r0, doffset, dreg);
size -= 8;
soffset += 8;
doffset += 8;
}
#else
if ((cpu_hw_caps & PPC_MULTIPLE_LS_UNITS) && (dreg != ppc_r11) && (sreg != ppc_r11)) {
while (size >= 8) {
ppc_lwz (code, ppc_r0, soffset, sreg);
ppc_lwz (code, ppc_r11, soffset+4, sreg);
ppc_stw (code, ppc_r0, doffset, dreg);
ppc_stw (code, ppc_r11, doffset+4, dreg);
size -= 8;
soffset += 8;
doffset += 8;
}
}
#endif
while (size >= 4) {
ppc_lwz (code, ppc_r0, soffset, sreg);
ppc_stw (code, ppc_r0, doffset, dreg);
size -= 4;
soffset += 4;
doffset += 4;
}
while (size >= 2) {
ppc_lhz (code, ppc_r0, soffset, sreg);
ppc_sth (code, ppc_r0, doffset, dreg);
size -= 2;
soffset += 2;
doffset += 2;
}
while (size >= 1) {
ppc_lbz (code, ppc_r0, soffset, sreg);
ppc_stb (code, ppc_r0, doffset, dreg);
size -= 1;
soffset += 1;
doffset += 1;
}
return code;
}
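/*
 * Copy strategy sketch: for copies larger than 5 words a CTR-driven loop moves
 * one word per iteration; what remains (and small copies to begin with) is
 * handled by the straight-line tails above, using paired loads/stores on
 * multi-LSU hardware and then progressively narrower 8/4/2/1-byte moves.
 */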
/*
* mono_arch_get_argument_info:
* @csig: a method signature
* @param_count: the number of parameters to consider
* @arg_info: an array to store the result infos
*
* Gathers information on parameters such as size, alignment and
 * padding. arg_info should be large enough to hold param_count + 1 entries.
*
* Returns the size of the activation frame.
*/
int
mono_arch_get_argument_info (MonoMethodSignature *csig, int param_count, MonoJitArgumentInfo *arg_info)
{
#ifdef __mono_ppc64__
NOT_IMPLEMENTED;
return -1;
#else
int k, frame_size = 0;
int size, align, pad;
int offset = 8;
if (MONO_TYPE_ISSTRUCT (csig->ret)) {
frame_size += sizeof (target_mgreg_t);
offset += 4;
}
arg_info [0].offset = offset;
if (csig->hasthis) {
frame_size += sizeof (target_mgreg_t);
offset += 4;
}
arg_info [0].size = frame_size;
for (k = 0; k < param_count; k++) {
if (csig->pinvoke && !csig->marshalling_disabled)
size = mono_type_native_stack_size (csig->params [k], (guint32*)&align);
else
size = mini_type_stack_size (csig->params [k], &align);
/* ignore alignment for now */
align = 1;
frame_size += pad = (align - (frame_size & (align - 1))) & (align - 1);
arg_info [k].pad = pad;
frame_size += size;
arg_info [k + 1].pad = 0;
arg_info [k + 1].size = size;
offset += pad;
arg_info [k + 1].offset = offset;
offset += size;
}
align = MONO_ARCH_FRAME_ALIGNMENT;
frame_size += pad = (align - (frame_size & (align - 1))) & (align - 1);
arg_info [k].pad = pad;
return frame_size;
#endif
}
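/*
 * Padding arithmetic sketch: pad = (align - (frame_size & (align - 1))) &
 * (align - 1) gives the bytes needed to reach the next multiple of align, and
 * 0 when frame_size is already aligned: with align == 8, frame_size == 12
 * yields pad == 4, while frame_size == 16 yields pad == 0.
 */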
#ifdef __mono_ppc64__
static gboolean
is_load_sequence (guint32 *seq)
{
return ppc_opcode (seq [0]) == 15 && /* lis */
ppc_opcode (seq [1]) == 24 && /* ori */
ppc_opcode (seq [2]) == 30 && /* sldi */
ppc_opcode (seq [3]) == 25 && /* oris */
ppc_opcode (seq [4]) == 24; /* ori */
}
#define ppc_load_get_dest(l) (((l)>>21) & 0x1f)
#define ppc_load_get_off(l) ((gint16)((l) & 0xffff))
#endif
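/*
 * Field-extraction sketch: in a D-form load such as "ld r2,8(r12)" the target
 * register occupies instruction bits 6-10 (hence (l >> 21) & 0x1f on the
 * 32-bit instruction word) and the displacement the low 16 bits, which
 * ppc_load_get_dest and ppc_load_get_off recover above.
 */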
/* ld || lwz */
#define ppc_is_load_op(opcode) (ppc_opcode ((opcode)) == 58 || ppc_opcode ((opcode)) == 32)
/* code must point to the blrl */
gboolean
mono_ppc_is_direct_call_sequence (guint32 *code)
{
#ifdef __mono_ppc64__
g_assert(*code == 0x4e800021 || *code == 0x4e800020 || *code == 0x4e800420);
/* the thunk-less direct call sequence: lis/ori/sldi/oris/ori/mtlr/blrl */
if (ppc_opcode (code [-1]) == 31) { /* mtlr */
if (ppc_is_load_op (code [-2]) && ppc_is_load_op (code [-3])) { /* ld/ld */
if (!is_load_sequence (&code [-8]))
return FALSE;
			/* one of the loads must be "ld r2,8(rX)" or "ld r2,4(rX)" for ilp32 */
return (ppc_load_get_dest (code [-2]) == ppc_r2 && ppc_load_get_off (code [-2]) == sizeof (target_mgreg_t)) ||
(ppc_load_get_dest (code [-3]) == ppc_r2 && ppc_load_get_off (code [-3]) == sizeof (target_mgreg_t));
}
if (ppc_opcode (code [-2]) == 24 && ppc_opcode (code [-3]) == 31) /* mr/nop */
return is_load_sequence (&code [-8]);
else
return is_load_sequence (&code [-6]);
}
return FALSE;
#else
g_assert(*code == 0x4e800021);
/* the thunk-less direct call sequence: lis/ori/mtlr/blrl */
return ppc_opcode (code [-1]) == 31 &&
ppc_opcode (code [-2]) == 24 &&
ppc_opcode (code [-3]) == 15;
#endif
}
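/*
 * For reference, the 32-bit thunk-less sequence being matched is,
 * schematically:
 *
 *   lis   rX, addr@h     ; opcode 15
 *   ori   rX, rX, addr@l ; opcode 24
 *   mtlr  rX             ; opcode 31
 *   blrl                 ; 0x4e800021
 *
 * with code pointing at the blrl, so code [-1]..code [-3] walk backwards
 * through mtlr/ori/lis.
 */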
#define MAX_ARCH_DELEGATE_PARAMS 7
static guint8*
get_delegate_invoke_impl (MonoTrampInfo **info, gboolean has_target, guint32 param_count, gboolean aot)
{
guint8 *code, *start;
if (has_target) {
int size = MONO_PPC_32_64_CASE (32, 32) + PPC_FTNPTR_SIZE;
start = code = mono_global_codeman_reserve (size);
if (!aot)
code = mono_ppc_create_pre_code_ftnptr (code);
/* Replace the this argument with the target */
ppc_ldptr (code, ppc_r0, MONO_STRUCT_OFFSET (MonoDelegate, method_ptr), ppc_r3);
#ifdef PPC_USES_FUNCTION_DESCRIPTOR
/* it's a function descriptor */
/* Can't use ldptr as it doesn't work with r0 */
ppc_ldptr_indexed (code, ppc_r0, 0, ppc_r0);
#endif
ppc_mtctr (code, ppc_r0);
ppc_ldptr (code, ppc_r3, MONO_STRUCT_OFFSET (MonoDelegate, target), ppc_r3);
ppc_bcctr (code, PPC_BR_ALWAYS, 0);
g_assert ((code - start) <= size);
mono_arch_flush_icache (start, size);
MONO_PROFILER_RAISE (jit_code_buffer, (start, code - start, MONO_PROFILER_CODE_BUFFER_DELEGATE_INVOKE, NULL));
} else {
int size, i;
size = MONO_PPC_32_64_CASE (32, 32) + param_count * 4 + PPC_FTNPTR_SIZE;
start = code = mono_global_codeman_reserve (size);
if (!aot)
code = mono_ppc_create_pre_code_ftnptr (code);
ppc_ldptr (code, ppc_r0, MONO_STRUCT_OFFSET (MonoDelegate, method_ptr), ppc_r3);
#ifdef PPC_USES_FUNCTION_DESCRIPTOR
/* it's a function descriptor */
ppc_ldptr_indexed (code, ppc_r0, 0, ppc_r0);
#endif
ppc_mtctr (code, ppc_r0);
/* slide down the arguments */
for (i = 0; i < param_count; ++i) {
ppc_mr (code, (ppc_r3 + i), (ppc_r3 + i + 1));
}
ppc_bcctr (code, PPC_BR_ALWAYS, 0);
g_assert ((code - start) <= size);
mono_arch_flush_icache (start, size);
MONO_PROFILER_RAISE (jit_code_buffer, (start, code - start, MONO_PROFILER_CODE_BUFFER_DELEGATE_INVOKE, NULL));
}
if (has_target) {
*info = mono_tramp_info_create ("delegate_invoke_impl_has_target", start, code - start, NULL, NULL);
} else {
char *name = g_strdup_printf ("delegate_invoke_impl_target_%d", param_count);
*info = mono_tramp_info_create (name, start, code - start, NULL, NULL);
g_free (name);
}
return start;
}
GSList*
mono_arch_get_delegate_invoke_impls (void)
{
GSList *res = NULL;
MonoTrampInfo *info;
int i;
get_delegate_invoke_impl (&info, TRUE, 0, TRUE);
res = g_slist_prepend (res, info);
for (i = 0; i <= MAX_ARCH_DELEGATE_PARAMS; ++i) {
get_delegate_invoke_impl (&info, FALSE, i, TRUE);
res = g_slist_prepend (res, info);
}
return res;
}
gpointer
mono_arch_get_delegate_invoke_impl (MonoMethodSignature *sig, gboolean has_target)
{
guint8 *code, *start;
/* FIXME: Support more cases */
if (MONO_TYPE_ISSTRUCT (sig->ret))
return NULL;
if (has_target) {
static guint8* cached = NULL;
if (cached)
return cached;
if (mono_ee_features.use_aot_trampolines) {
start = (guint8*)mono_aot_get_trampoline ("delegate_invoke_impl_has_target");
} else {
MonoTrampInfo *info;
start = get_delegate_invoke_impl (&info, TRUE, 0, FALSE);
mono_tramp_info_register (info, NULL);
}
mono_memory_barrier ();
cached = start;
} else {
static guint8* cache [MAX_ARCH_DELEGATE_PARAMS + 1] = {NULL};
int i;
if (sig->param_count > MAX_ARCH_DELEGATE_PARAMS)
return NULL;
for (i = 0; i < sig->param_count; ++i)
if (!mono_is_regsize_var (sig->params [i]))
return NULL;
code = cache [sig->param_count];
if (code)
return code;
if (mono_ee_features.use_aot_trampolines) {
char *name = g_strdup_printf ("delegate_invoke_impl_target_%d", sig->param_count);
start = (guint8*)mono_aot_get_trampoline (name);
g_free (name);
} else {
MonoTrampInfo *info;
start = get_delegate_invoke_impl (&info, FALSE, sig->param_count, FALSE);
mono_tramp_info_register (info, NULL);
}
mono_memory_barrier ();
cache [sig->param_count] = start;
}
return start;
}
gpointer
mono_arch_get_delegate_virtual_invoke_impl (MonoMethodSignature *sig, MonoMethod *method, int offset, gboolean load_imt_reg)
{
return NULL;
}
gpointer
mono_arch_get_this_arg_from_call (host_mgreg_t *r, guint8 *code)
{
return (gpointer)(gsize)r [ppc_r3];
}
typedef struct {
long int type;
long int value;
} AuxVec;
#define MAX_AUX_ENTRIES 128
/*
* PPC_FEATURE_POWER4, PPC_FEATURE_POWER5, PPC_FEATURE_POWER5_PLUS, PPC_FEATURE_CELL,
* PPC_FEATURE_PA6T, PPC_FEATURE_ARCH_2_05 are considered supporting 2X ISA features
*/
#define ISA_2X (0x00080000 | 0x00040000 | 0x00020000 | 0x00010000 | 0x00000800 | 0x00001000)
/* define PPC_FEATURE_64 HWCAP for 64-bit category. */
#define ISA_64 0x40000000
/* define PPC_FEATURE_POWER6_EXT HWCAP for power6x mffgpr/mftgpr instructions. */
#define ISA_MOVE_FPR_GPR 0x00000200
/*
* Initialize the cpu to execute managed code.
*/
void
mono_arch_cpu_init (void)
{
}
/*
* Initialize architecture specific code.
*/
void
mono_arch_init (void)
{
#if defined(MONO_CROSS_COMPILE)
#elif defined(__APPLE__)
int mib [3];
size_t len = sizeof (cachelinesize);
mib [0] = CTL_HW;
mib [1] = HW_CACHELINE;
if (sysctl (mib, 2, &cachelinesize, &len, NULL, 0) == -1) {
perror ("sysctl");
cachelinesize = 128;
} else {
cachelineinc = cachelinesize;
}
#elif defined(__linux__)
AuxVec vec [MAX_AUX_ENTRIES];
int i, vec_entries = 0;
/* sadly this will work only with 2.6 kernels... */
FILE* f = fopen ("/proc/self/auxv", "rb");
if (f) {
vec_entries = fread (&vec, sizeof (AuxVec), MAX_AUX_ENTRIES, f);
fclose (f);
}
for (i = 0; i < vec_entries; i++) {
int type = vec [i].type;
if (type == 19) { /* AT_DCACHEBSIZE */
cachelinesize = vec [i].value;
continue;
}
}
#elif defined(G_COMPILER_CODEWARRIOR)
cachelinesize = 32;
cachelineinc = 32;
#elif defined(_AIX)
/* FIXME: use block instead? */
cachelinesize = _system_configuration.icache_line;
cachelineinc = _system_configuration.icache_line;
#else
//#error Need a way to get cache line size
#endif
if (mono_hwcap_ppc_has_icache_snoop)
cpu_hw_caps |= PPC_ICACHE_SNOOP;
if (mono_hwcap_ppc_is_isa_2x)
cpu_hw_caps |= PPC_ISA_2X;
if (mono_hwcap_ppc_is_isa_2_03)
cpu_hw_caps |= PPC_ISA_2_03;
if (mono_hwcap_ppc_is_isa_64)
cpu_hw_caps |= PPC_ISA_64;
if (mono_hwcap_ppc_has_move_fpr_gpr)
cpu_hw_caps |= PPC_MOVE_FPR_GPR;
if (mono_hwcap_ppc_has_multiple_ls_units)
cpu_hw_caps |= PPC_MULTIPLE_LS_UNITS;
if (!cachelinesize)
cachelinesize = 32;
if (!cachelineinc)
cachelineinc = cachelinesize;
if (mono_cpu_count () > 1)
cpu_hw_caps |= PPC_SMP_CAPABLE;
mono_os_mutex_init_recursive (&mini_arch_mutex);
ss_trigger_page = mono_valloc (NULL, mono_pagesize (), MONO_MMAP_READ, MONO_MEM_ACCOUNT_OTHER);
bp_trigger_page = mono_valloc (NULL, mono_pagesize (), MONO_MMAP_READ, MONO_MEM_ACCOUNT_OTHER);
mono_mprotect (bp_trigger_page, mono_pagesize (), 0);
// FIXME: Fix partial sharing for power and remove this
mono_set_partial_sharing_supported (FALSE);
}
/*
* Cleanup architecture specific code.
*/
void
mono_arch_cleanup (void)
{
mono_os_mutex_destroy (&mini_arch_mutex);
}
gboolean
mono_arch_have_fast_tls (void)
{
return FALSE;
}
/*
* This function returns the optimizations supported on this cpu.
*/
guint32
mono_arch_cpu_optimizations (guint32 *exclude_mask)
{
guint32 opts = 0;
/* no ppc-specific optimizations yet */
*exclude_mask = 0;
return opts;
}
#ifdef __mono_ppc64__
#define CASE_PPC32(c)
#define CASE_PPC64(c) case c:
#else
#define CASE_PPC32(c) case c:
#define CASE_PPC64(c)
#endif
static gboolean
is_regsize_var (MonoType *t) {
if (m_type_is_byref (t))
return TRUE;
t = mini_get_underlying_type (t);
switch (t->type) {
case MONO_TYPE_I4:
case MONO_TYPE_U4:
CASE_PPC64 (MONO_TYPE_I8)
CASE_PPC64 (MONO_TYPE_U8)
case MONO_TYPE_I:
case MONO_TYPE_U:
case MONO_TYPE_PTR:
case MONO_TYPE_FNPTR:
return TRUE;
case MONO_TYPE_OBJECT:
case MONO_TYPE_STRING:
case MONO_TYPE_CLASS:
case MONO_TYPE_SZARRAY:
case MONO_TYPE_ARRAY:
return TRUE;
case MONO_TYPE_GENERICINST:
if (!mono_type_generic_inst_is_valuetype (t))
return TRUE;
return FALSE;
case MONO_TYPE_VALUETYPE:
return FALSE;
}
return FALSE;
}
#ifndef DISABLE_JIT
GList *
mono_arch_get_allocatable_int_vars (MonoCompile *cfg)
{
GList *vars = NULL;
int i;
for (i = 0; i < cfg->num_varinfo; i++) {
MonoInst *ins = cfg->varinfo [i];
MonoMethodVar *vmv = MONO_VARINFO (cfg, i);
/* unused vars */
if (vmv->range.first_use.abs_pos >= vmv->range.last_use.abs_pos)
continue;
if (ins->flags & (MONO_INST_VOLATILE|MONO_INST_INDIRECT) || (ins->opcode != OP_LOCAL && ins->opcode != OP_ARG))
continue;
/* we can only allocate 32 bit values */
if (is_regsize_var (ins->inst_vtype)) {
g_assert (MONO_VARINFO (cfg, i)->reg == -1);
g_assert (i == vmv->idx);
vars = mono_varlist_insert_sorted (cfg, vars, vmv, FALSE);
}
}
return vars;
}
#endif /* ifndef DISABLE_JIT */
GList *
mono_arch_get_global_int_regs (MonoCompile *cfg)
{
GList *regs = NULL;
int i, top = 32;
if (cfg->frame_reg != ppc_sp)
top = 31;
/* ppc_r13 is used by the system on PPC EABI */
for (i = 14; i < top; ++i) {
/*
* Reserve r29 for holding the vtable address for virtual calls in AOT mode,
* since the trampolines can clobber r12.
*/
if (!(cfg->compile_aot && i == 29))
regs = g_list_prepend (regs, GUINT_TO_POINTER (i));
}
return regs;
}
/*
* mono_arch_regalloc_cost:
*
* Return the cost, in number of memory references, of the action of
* allocating the variable VMV into a register during global register
* allocation.
*/
guint32
mono_arch_regalloc_cost (MonoCompile *cfg, MonoMethodVar *vmv)
{
/* FIXME: */
return 2;
}
void
mono_arch_flush_icache (guint8 *code, gint size)
{
#ifdef MONO_CROSS_COMPILE
/* do nothing */
#else
register guint8 *p;
guint8 *endp, *start;
p = start = code;
endp = p + size;
start = (guint8*)((gsize)start & ~(cachelinesize - 1));
/* use dcbf for smp support, later optimize for UP, see pem._64bit.d20030611.pdf page 211 */
#if defined(G_COMPILER_CODEWARRIOR)
if (cpu_hw_caps & PPC_SMP_CAPABLE) {
for (p = start; p < endp; p += cachelineinc) {
asm { dcbf 0, p };
}
} else {
for (p = start; p < endp; p += cachelineinc) {
asm { dcbst 0, p };
}
}
asm { sync };
p = code;
for (p = start; p < endp; p += cachelineinc) {
asm {
icbi 0, p
sync
}
}
asm {
sync
isync
}
#else
/* For POWER5/6 with ICACHE_SNOOPing only one icbi in the range is required.
	 * The sync is required to ensure that the store queue is completely empty.
* While the icbi performs no cache operations, icbi/isync is required to
* kill local prefetch.
*/
if (cpu_hw_caps & PPC_ICACHE_SNOOP) {
asm ("sync");
asm ("icbi 0,%0;" : : "r"(code) : "memory");
asm ("isync");
return;
}
/* use dcbf for smp support, see pem._64bit.d20030611.pdf page 211 */
if (cpu_hw_caps & PPC_SMP_CAPABLE) {
for (p = start; p < endp; p += cachelineinc) {
asm ("dcbf 0,%0;" : : "r"(p) : "memory");
}
} else {
for (p = start; p < endp; p += cachelineinc) {
asm ("dcbst 0,%0;" : : "r"(p) : "memory");
}
}
asm ("sync");
p = code;
for (p = start; p < endp; p += cachelineinc) {
/* for ISA2.0+ implementations we should not need any extra sync between the
* icbi instructions. Both the 2.0 PEM and the PowerISA-2.05 say this.
		 * So I am not sure which chip had this problem, but it's not an issue
		 * on any of the ISA V2 chips.
*/
if (cpu_hw_caps & PPC_ISA_2X)
asm ("icbi 0,%0;" : : "r"(p) : "memory");
else
asm ("icbi 0,%0; sync;" : : "r"(p) : "memory");
}
if (!(cpu_hw_caps & PPC_ISA_2X))
asm ("sync");
asm ("isync");
#endif
#endif
}
void
mono_arch_flush_register_windows (void)
{
}
#ifdef __APPLE__
#define ALWAYS_ON_STACK(s) s
#define FP_ALSO_IN_REG(s) s
#else
#ifdef __mono_ppc64__
#define ALWAYS_ON_STACK(s) s
#define FP_ALSO_IN_REG(s) s
#else
#define ALWAYS_ON_STACK(s)
#define FP_ALSO_IN_REG(s)
#endif
#define ALIGN_DOUBLES
#endif
enum {
RegTypeGeneral,
RegTypeBase,
RegTypeFP,
RegTypeStructByVal,
RegTypeStructByAddr,
RegTypeFPStructByVal, // For the v2 ABI, floats should be passed in FRs instead of GRs. Only valid for ABI v2!
};
typedef struct {
gint32 offset;
guint32 vtsize; /* in param area */
guint8 reg;
guint8 vtregs; /* number of registers used to pass a RegTypeStructByVal/RegTypeFPStructByVal */
guint8 regtype : 4; /* 0 general, 1 basereg, 2 floating point register, see RegType* */
guint8 size : 4; /* 1, 2, 4, 8, or regs used by RegTypeStructByVal/RegTypeFPStructByVal */
guint8 bytes : 4; /* size in bytes - only valid for
RegTypeStructByVal/RegTypeFPStructByVal if the struct fits
in one word, otherwise it's 0*/
} ArgInfo;
struct CallInfo {
int nargs;
guint32 stack_usage;
guint32 struct_ret;
ArgInfo ret;
ArgInfo sig_cookie;
gboolean vtype_retaddr;
int vret_arg_index;
ArgInfo args [1];
};
#define DEBUG(a)
#if PPC_RETURN_SMALL_FLOAT_STRUCTS_IN_FR_REGS
//
// Test if a structure is completely composed of either float XOR double fields and has no more than
// PPC_MOST_FLOAT_STRUCT_MEMBERS_TO_RETURN_VIA_REGISTERS members.
// If this is true the structure can be returned directly via float registers instead of by a hidden parameter
// pointing to where the return value should be stored.
// This is as per the ELF ABI v2.
//
static gboolean
is_float_struct_returnable_via_regs (MonoType *type, int* member_cnt, int* member_size)
{
int local_member_cnt, local_member_size;
if (!member_cnt) {
member_cnt = &local_member_cnt;
}
if (!member_size) {
member_size = &local_member_size;
}
gboolean is_all_floats = mini_type_is_hfa(type, member_cnt, member_size);
return is_all_floats && (*member_cnt <= PPC_MOST_FLOAT_STRUCT_MEMBERS_TO_RETURN_VIA_REGISTERS);
}
#else
#define is_float_struct_returnable_via_regs(a,b,c) (FALSE)
#endif
#if PPC_RETURN_SMALL_STRUCTS_IN_REGS
//
// Test if a structure is smaller in size than 2 doublewords (PPC_LARGEST_STRUCT_SIZE_TO_RETURN_VIA_REGISTERS) and is
// completely composed of fields all of basic types.
// If this is true the structure can be returned directly via registers r3/r4 instead of by a hidden parameter
// pointing to where the return value should be stored.
// This is as per the ELF ABI v2.
//
static gboolean
is_struct_returnable_via_regs (MonoClass *klass, gboolean is_pinvoke)
{
gboolean has_a_field = FALSE;
int size = 0;
if (klass) {
gpointer iter = NULL;
MonoClassField *f;
if (is_pinvoke)
size = mono_type_native_stack_size (m_class_get_byval_arg (klass), 0);
else
size = mini_type_stack_size (m_class_get_byval_arg (klass), 0);
if (size == 0)
return TRUE;
if (size > PPC_LARGEST_STRUCT_SIZE_TO_RETURN_VIA_REGISTERS)
return FALSE;
while ((f = mono_class_get_fields_internal (klass, &iter))) {
if (!(f->type->attrs & FIELD_ATTRIBUTE_STATIC)) {
// TBD: Is there a better way to check for the basic types?
if (m_type_is_byref (f->type)) {
return FALSE;
} else if ((f->type->type >= MONO_TYPE_BOOLEAN) && (f->type->type <= MONO_TYPE_R8)) {
has_a_field = TRUE;
} else if (MONO_TYPE_ISSTRUCT (f->type)) {
MonoClass *klass = mono_class_from_mono_type_internal (f->type);
if (is_struct_returnable_via_regs(klass, is_pinvoke)) {
has_a_field = TRUE;
} else {
return FALSE;
}
} else {
return FALSE;
}
}
}
}
return has_a_field;
}
#else
#define is_struct_returnable_via_regs(a,b) (FALSE)
#endif
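/*
 * add_general:
 *
 * Assign the next integer argument either to a general purpose register or,
 * once the argument registers are exhausted, to a stack slot in the caller's
 * param area. With SIMPLE set a single register/word is used; otherwise (a
 * 64-bit value on 32-bit PPC) a register pair or an 8-byte slot is consumed.
 */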
static inline void
add_general (guint *gr, guint *stack_size, ArgInfo *ainfo, gboolean simple)
{
#ifdef __mono_ppc64__
g_assert (simple);
#endif
if (simple) {
if (*gr >= 3 + PPC_NUM_REG_ARGS) {
ainfo->offset = PPC_STACK_PARAM_OFFSET + *stack_size;
ainfo->reg = ppc_sp; /* in the caller */
ainfo->regtype = RegTypeBase;
*stack_size += sizeof (target_mgreg_t);
} else {
ALWAYS_ON_STACK (*stack_size += sizeof (target_mgreg_t));
ainfo->reg = *gr;
}
} else {
if (*gr >= 3 + PPC_NUM_REG_ARGS - 1) {
#ifdef ALIGN_DOUBLES
//*stack_size += (*stack_size % 8);
#endif
ainfo->offset = PPC_STACK_PARAM_OFFSET + *stack_size;
ainfo->reg = ppc_sp; /* in the caller */
ainfo->regtype = RegTypeBase;
*stack_size += 8;
} else {
#ifdef ALIGN_DOUBLES
if (!((*gr) & 1))
(*gr) ++;
#endif
ALWAYS_ON_STACK (*stack_size += 8);
ainfo->reg = *gr;
}
(*gr) ++;
}
(*gr) ++;
}
#if defined(__APPLE__) || (defined(__mono_ppc64__) && !PPC_PASS_SMALL_FLOAT_STRUCTS_IN_FR_REGS)
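/* Return TRUE if KLASS has exactly one instance field and it is an R4 or R8,
 * in which case some ABIs pass/return the struct directly in a float register. */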
static gboolean
has_only_a_r48_field (MonoClass *klass)
{
gpointer iter;
MonoClassField *f;
gboolean have_field = FALSE;
iter = NULL;
while ((f = mono_class_get_fields_internal (klass, &iter))) {
if (!(f->type->attrs & FIELD_ATTRIBUTE_STATIC)) {
if (have_field)
return FALSE;
if (!m_type_is_byref (f->type) && (f->type->type == MONO_TYPE_R4 || f->type->type == MONO_TYPE_R8))
have_field = TRUE;
else
return FALSE;
}
}
return have_field;
}
#endif
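/*
 * get_call_info:
 *
 * Compute the calling convention info (the register or stack slot of each
 * argument and of the return value) for signature SIG. The caller owns the
 * returned CallInfo and must g_free () it.
 */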
static CallInfo*
get_call_info (MonoMethodSignature *sig)
{
guint i, fr, gr, pstart;
int n = sig->hasthis + sig->param_count;
MonoType *simpletype;
guint32 stack_size = 0;
CallInfo *cinfo = g_malloc0 (sizeof (CallInfo) + sizeof (ArgInfo) * n);
gboolean is_pinvoke = sig->pinvoke;
fr = PPC_FIRST_FPARG_REG;
gr = PPC_FIRST_ARG_REG;
if (mini_type_is_vtype (sig->ret)) {
cinfo->vtype_retaddr = TRUE;
}
pstart = 0;
n = 0;
/*
* To simplify get_this_arg_reg () and LLVM integration, emit the vret arg after
* the first argument, allowing 'this' to be always passed in the first arg reg.
* Also do this if the first argument is a reference type, since virtual calls
* are sometimes made using calli without sig->hasthis set, like in the delegate
* invoke wrappers.
*/
if (cinfo->vtype_retaddr && !is_pinvoke && (sig->hasthis || (sig->param_count > 0 && MONO_TYPE_IS_REFERENCE (mini_get_underlying_type (sig->params [0]))))) {
if (sig->hasthis) {
add_general (&gr, &stack_size, cinfo->args + 0, TRUE);
n ++;
} else {
add_general (&gr, &stack_size, &cinfo->args [sig->hasthis + 0], TRUE);
pstart = 1;
n ++;
}
add_general (&gr, &stack_size, &cinfo->ret, TRUE);
cinfo->struct_ret = cinfo->ret.reg;
cinfo->vret_arg_index = 1;
} else {
/* this */
if (sig->hasthis) {
add_general (&gr, &stack_size, cinfo->args + 0, TRUE);
n ++;
}
if (cinfo->vtype_retaddr) {
add_general (&gr, &stack_size, &cinfo->ret, TRUE);
cinfo->struct_ret = cinfo->ret.reg;
}
}
DEBUG(printf("params: %d\n", sig->param_count));
for (i = pstart; i < sig->param_count; ++i) {
if (!sig->pinvoke && (sig->call_convention == MONO_CALL_VARARG) && (i == sig->sentinelpos)) {
/* Prevent implicit arguments and sig_cookie from
being passed in registers */
gr = PPC_LAST_ARG_REG + 1;
/* FIXME: don't we have to set fr, too? */
/* Emit the signature cookie just before the implicit arguments */
add_general (&gr, &stack_size, &cinfo->sig_cookie, TRUE);
}
DEBUG(printf("param %d: ", i));
if (m_type_is_byref (sig->params [i])) {
DEBUG(printf("byref\n"));
add_general (&gr, &stack_size, cinfo->args + n, TRUE);
n++;
continue;
}
simpletype = mini_get_underlying_type (sig->params [i]);
switch (simpletype->type) {
case MONO_TYPE_BOOLEAN:
case MONO_TYPE_I1:
case MONO_TYPE_U1:
cinfo->args [n].size = 1;
add_general (&gr, &stack_size, cinfo->args + n, TRUE);
n++;
break;
case MONO_TYPE_CHAR:
case MONO_TYPE_I2:
case MONO_TYPE_U2:
cinfo->args [n].size = 2;
add_general (&gr, &stack_size, cinfo->args + n, TRUE);
n++;
break;
case MONO_TYPE_I4:
case MONO_TYPE_U4:
cinfo->args [n].size = 4;
add_general (&gr, &stack_size, cinfo->args + n, TRUE);
n++;
break;
case MONO_TYPE_I:
case MONO_TYPE_U:
case MONO_TYPE_PTR:
case MONO_TYPE_FNPTR:
case MONO_TYPE_CLASS:
case MONO_TYPE_OBJECT:
case MONO_TYPE_STRING:
case MONO_TYPE_SZARRAY:
case MONO_TYPE_ARRAY:
cinfo->args [n].size = sizeof (target_mgreg_t);
add_general (&gr, &stack_size, cinfo->args + n, TRUE);
n++;
break;
case MONO_TYPE_GENERICINST:
if (!mono_type_generic_inst_is_valuetype (simpletype)) {
cinfo->args [n].size = sizeof (target_mgreg_t);
add_general (&gr, &stack_size, cinfo->args + n, TRUE);
n++;
break;
}
/* Fall through */
case MONO_TYPE_VALUETYPE:
case MONO_TYPE_TYPEDBYREF: {
gint size;
MonoClass *klass = mono_class_from_mono_type_internal (sig->params [i]);
if (simpletype->type == MONO_TYPE_TYPEDBYREF)
size = MONO_ABI_SIZEOF (MonoTypedRef);
else if (sig->pinvoke && !sig->marshalling_disabled)
size = mono_class_native_size (klass, NULL);
else
size = mono_class_value_size (klass, NULL);
#if defined(__APPLE__) || (defined(__mono_ppc64__) && !PPC_PASS_SMALL_FLOAT_STRUCTS_IN_FR_REGS)
if ((size == 4 || size == 8) && has_only_a_r48_field (klass)) {
cinfo->args [n].size = size;
/* It was 7, now it is 8 in LinuxPPC */
if (fr <= PPC_LAST_FPARG_REG) {
cinfo->args [n].regtype = RegTypeFP;
cinfo->args [n].reg = fr;
fr ++;
FP_ALSO_IN_REG (gr ++);
#if !defined(__mono_ppc64__)
if (size == 8)
FP_ALSO_IN_REG (gr ++);
#endif
ALWAYS_ON_STACK (stack_size += size);
} else {
cinfo->args [n].offset = PPC_STACK_PARAM_OFFSET + stack_size;
cinfo->args [n].regtype = RegTypeBase;
cinfo->args [n].reg = ppc_sp; /* in the caller*/
stack_size += 8;
}
n++;
break;
}
#endif
DEBUG(printf ("load %d bytes struct\n",
mono_class_native_size (sig->params [i]->data.klass, NULL)));
#if PPC_PASS_STRUCTS_BY_VALUE
{
int align_size = size;
int nregs = 0;
int rest = PPC_LAST_ARG_REG - gr + 1;
int n_in_regs = 0;
#if PPC_PASS_SMALL_FLOAT_STRUCTS_IN_FR_REGS
int mbr_cnt = 0;
int mbr_size = 0;
gboolean is_all_floats = is_float_struct_returnable_via_regs (sig->params [i], &mbr_cnt, &mbr_size);
if (is_all_floats) {
rest = PPC_LAST_FPARG_REG - fr + 1;
}
// Pass small (<= 8 member) structures entirely made up of either float or double members
// in FR registers. There have to be at least mbr_cnt registers left.
if (is_all_floats &&
(rest >= mbr_cnt)) {
nregs = mbr_cnt;
n_in_regs = MIN (rest, nregs);
cinfo->args [n].regtype = RegTypeFPStructByVal;
cinfo->args [n].vtregs = n_in_regs;
cinfo->args [n].size = mbr_size;
cinfo->args [n].vtsize = nregs - n_in_regs;
cinfo->args [n].reg = fr;
fr += n_in_regs;
if (mbr_size == 4) {
// floats
FP_ALSO_IN_REG (gr += (n_in_regs+1)/2);
} else {
// doubles
FP_ALSO_IN_REG (gr += (n_in_regs));
}
} else
#endif
{
align_size += (sizeof (target_mgreg_t) - 1);
align_size &= ~(sizeof (target_mgreg_t) - 1);
nregs = (align_size + sizeof (target_mgreg_t) -1 ) / sizeof (target_mgreg_t);
n_in_regs = MIN (rest, nregs);
if (n_in_regs < 0)
n_in_regs = 0;
#ifdef __APPLE__
/* FIXME: check this */
if (size >= 3 && size % 4 != 0)
n_in_regs = 0;
#endif
cinfo->args [n].regtype = RegTypeStructByVal;
cinfo->args [n].vtregs = n_in_regs;
cinfo->args [n].size = n_in_regs;
cinfo->args [n].vtsize = nregs - n_in_regs;
cinfo->args [n].reg = gr;
gr += n_in_regs;
}
#ifdef __mono_ppc64__
if (nregs == 1 && is_pinvoke)
cinfo->args [n].bytes = size;
else
#endif
cinfo->args [n].bytes = 0;
cinfo->args [n].offset = PPC_STACK_PARAM_OFFSET + stack_size;
/*g_print ("offset for arg %d at %d\n", n, PPC_STACK_PARAM_OFFSET + stack_size);*/
stack_size += nregs * sizeof (target_mgreg_t);
}
#else
add_general (&gr, &stack_size, cinfo->args + n, TRUE);
cinfo->args [n].regtype = RegTypeStructByAddr;
cinfo->args [n].vtsize = size;
#endif
n++;
break;
}
case MONO_TYPE_U8:
case MONO_TYPE_I8:
cinfo->args [n].size = 8;
add_general (&gr, &stack_size, cinfo->args + n, SIZEOF_REGISTER == 8);
n++;
break;
case MONO_TYPE_R4:
cinfo->args [n].size = 4;
/* It was 7, now it is 8 in LinuxPPC */
if (fr <= PPC_LAST_FPARG_REG
// For non-native vararg calls the parms must go in storage
&& !(!sig->pinvoke && (sig->call_convention == MONO_CALL_VARARG))
) {
cinfo->args [n].regtype = RegTypeFP;
cinfo->args [n].reg = fr;
fr ++;
FP_ALSO_IN_REG (gr ++);
ALWAYS_ON_STACK (stack_size += SIZEOF_REGISTER);
} else {
cinfo->args [n].offset = PPC_STACK_PARAM_OFFSET + stack_size + MONO_PPC_32_64_CASE (0, 4);
cinfo->args [n].regtype = RegTypeBase;
cinfo->args [n].reg = ppc_sp; /* in the caller*/
stack_size += SIZEOF_REGISTER;
}
n++;
break;
case MONO_TYPE_R8:
cinfo->args [n].size = 8;
/* It was 7, now it is 8 in LinuxPPC */
if (fr <= PPC_LAST_FPARG_REG
// For non-native vararg calls the parms must go in storage
&& !(!sig->pinvoke && (sig->call_convention == MONO_CALL_VARARG))
) {
cinfo->args [n].regtype = RegTypeFP;
cinfo->args [n].reg = fr;
fr ++;
FP_ALSO_IN_REG (gr += sizeof (double) / SIZEOF_REGISTER);
ALWAYS_ON_STACK (stack_size += 8);
} else {
cinfo->args [n].offset = PPC_STACK_PARAM_OFFSET + stack_size;
cinfo->args [n].regtype = RegTypeBase;
cinfo->args [n].reg = ppc_sp; /* in the caller*/
stack_size += 8;
}
n++;
break;
default:
g_error ("Can't trampoline 0x%x", sig->params [i]->type);
}
}
cinfo->nargs = n;
if (!sig->pinvoke && (sig->call_convention == MONO_CALL_VARARG) && (i == sig->sentinelpos)) {
/* Prevent implicit arguments and sig_cookie from
being passed in registers */
gr = PPC_LAST_ARG_REG + 1;
/* Emit the signature cookie just before the implicit arguments */
add_general (&gr, &stack_size, &cinfo->sig_cookie, TRUE);
}
{
simpletype = mini_get_underlying_type (sig->ret);
switch (simpletype->type) {
case MONO_TYPE_BOOLEAN:
case MONO_TYPE_I1:
case MONO_TYPE_U1:
case MONO_TYPE_I2:
case MONO_TYPE_U2:
case MONO_TYPE_CHAR:
case MONO_TYPE_I4:
case MONO_TYPE_U4:
case MONO_TYPE_I:
case MONO_TYPE_U:
case MONO_TYPE_PTR:
case MONO_TYPE_FNPTR:
case MONO_TYPE_CLASS:
case MONO_TYPE_OBJECT:
case MONO_TYPE_SZARRAY:
case MONO_TYPE_ARRAY:
case MONO_TYPE_STRING:
cinfo->ret.reg = ppc_r3;
break;
case MONO_TYPE_U8:
case MONO_TYPE_I8:
cinfo->ret.reg = ppc_r3;
break;
case MONO_TYPE_R4:
case MONO_TYPE_R8:
cinfo->ret.reg = ppc_f1;
cinfo->ret.regtype = RegTypeFP;
break;
case MONO_TYPE_GENERICINST:
if (!mono_type_generic_inst_is_valuetype (simpletype)) {
cinfo->ret.reg = ppc_r3;
break;
}
break;
case MONO_TYPE_VALUETYPE:
break;
case MONO_TYPE_TYPEDBYREF:
case MONO_TYPE_VOID:
break;
default:
g_error ("Can't handle as return value 0x%x", sig->ret->type);
}
}
/* align stack size to 16 */
DEBUG (printf (" stack size: %d (%d)\n", (stack_size + 15) & ~15, stack_size));
stack_size = (stack_size + 15) & ~15;
cinfo->stack_usage = stack_size;
return cinfo;
}
#ifndef DISABLE_JIT
gboolean
mono_arch_tailcall_supported (MonoCompile *cfg, MonoMethodSignature *caller_sig, MonoMethodSignature *callee_sig, gboolean virtual_)
{
CallInfo *caller_info = get_call_info (caller_sig);
CallInfo *callee_info = get_call_info (callee_sig);
gboolean res = IS_SUPPORTED_TAILCALL (callee_info->stack_usage <= caller_info->stack_usage)
&& IS_SUPPORTED_TAILCALL (memcmp (&callee_info->ret, &caller_info->ret, sizeof (caller_info->ret)) == 0);
// FIXME ABIs vary as to if this local is in the parameter area or not,
// so this check might not be needed.
for (int i = 0; res && i < callee_info->nargs; ++i) {
res = IS_SUPPORTED_TAILCALL (callee_info->args [i].regtype != RegTypeStructByAddr);
/* An address on the callee's stack is passed as the argument */
}
g_free (caller_info);
g_free (callee_info);
return res;
}
#endif
/*
* Set var information according to the calling convention. ppc version.
* The locals var stuff should most likely be split in another method.
*/
void
mono_arch_allocate_vars (MonoCompile *m)
{
MonoMethodSignature *sig;
MonoMethodHeader *header;
MonoInst *inst;
int i, offset, size, align, curinst;
int frame_reg = ppc_sp;
gint32 *offsets;
guint32 locals_stack_size, locals_stack_align;
m->flags |= MONO_CFG_HAS_SPILLUP;
/* this is bug #60332: remove when #59509 is fixed, so no weird vararg
	 * call convs need to be handled this way.
*/
if (m->flags & MONO_CFG_HAS_VARARGS)
m->param_area = MAX (m->param_area, sizeof (target_mgreg_t)*8);
/* gtk-sharp and other broken code will dllimport vararg functions even with
* non-varargs signatures. Since there is little hope people will get this right
* we assume they won't.
*/
if (m->method->wrapper_type == MONO_WRAPPER_MANAGED_TO_NATIVE)
m->param_area = MAX (m->param_area, sizeof (target_mgreg_t)*8);
header = m->header;
/*
* We use the frame register also for any method that has
* exception clauses. This way, when the handlers are called,
* the code will reference local variables using the frame reg instead of
* the stack pointer: if we had to restore the stack pointer, we'd
* corrupt the method frames that are already on the stack (since
* filters get called before stack unwinding happens) when the filter
* code would call any method (this also applies to finally etc.).
*/
if ((m->flags & MONO_CFG_HAS_ALLOCA) || header->num_clauses)
frame_reg = ppc_r31;
m->frame_reg = frame_reg;
if (frame_reg != ppc_sp) {
m->used_int_regs |= 1 << frame_reg;
}
sig = mono_method_signature_internal (m->method);
offset = 0;
curinst = 0;
if (MONO_TYPE_ISSTRUCT (sig->ret)) {
m->ret->opcode = OP_REGVAR;
m->ret->inst_c0 = m->ret->dreg = ppc_r3;
} else {
/* FIXME: handle long values? */
switch (mini_get_underlying_type (sig->ret)->type) {
case MONO_TYPE_VOID:
break;
case MONO_TYPE_R4:
case MONO_TYPE_R8:
m->ret->opcode = OP_REGVAR;
m->ret->inst_c0 = m->ret->dreg = ppc_f1;
break;
default:
m->ret->opcode = OP_REGVAR;
m->ret->inst_c0 = m->ret->dreg = ppc_r3;
break;
}
}
/* local vars are at a positive offset from the stack pointer */
/*
* also note that if the function uses alloca, we use ppc_r31
* to point at the local variables.
*/
offset = PPC_MINIMAL_STACK_SIZE; /* linkage area */
/* align the offset to 16 bytes: not sure this is needed here */
//offset += 16 - 1;
//offset &= ~(16 - 1);
/* add parameter area size for called functions */
offset += m->param_area;
offset += 16 - 1;
offset &= ~(16 - 1);
/* the MonoLMF structure is stored just below the stack pointer */
if (MONO_TYPE_ISSTRUCT (sig->ret)) {
offset += sizeof(gpointer) - 1;
offset &= ~(sizeof(gpointer) - 1);
m->vret_addr->opcode = OP_REGOFFSET;
m->vret_addr->inst_basereg = frame_reg;
m->vret_addr->inst_offset = offset;
if (G_UNLIKELY (m->verbose_level > 1)) {
printf ("vret_addr =");
mono_print_ins (m->vret_addr);
}
offset += sizeof(gpointer);
}
offsets = mono_allocate_stack_slots (m, FALSE, &locals_stack_size, &locals_stack_align);
if (locals_stack_align) {
offset += (locals_stack_align - 1);
offset &= ~(locals_stack_align - 1);
}
for (i = m->locals_start; i < m->num_varinfo; i++) {
if (offsets [i] != -1) {
MonoInst *inst = m->varinfo [i];
inst->opcode = OP_REGOFFSET;
inst->inst_basereg = frame_reg;
inst->inst_offset = offset + offsets [i];
/*
g_print ("allocating local %d (%s) to %d\n",
i, mono_type_get_name (inst->inst_vtype), inst->inst_offset);
*/
}
}
offset += locals_stack_size;
curinst = 0;
if (sig->hasthis) {
inst = m->args [curinst];
if (inst->opcode != OP_REGVAR) {
inst->opcode = OP_REGOFFSET;
inst->inst_basereg = frame_reg;
offset += sizeof (target_mgreg_t) - 1;
offset &= ~(sizeof (target_mgreg_t) - 1);
inst->inst_offset = offset;
offset += sizeof (target_mgreg_t);
}
curinst++;
}
for (i = 0; i < sig->param_count; ++i) {
inst = m->args [curinst];
if (inst->opcode != OP_REGVAR) {
inst->opcode = OP_REGOFFSET;
inst->inst_basereg = frame_reg;
if (sig->pinvoke && !sig->marshalling_disabled) {
size = mono_type_native_stack_size (sig->params [i], (guint32*)&align);
inst->backend.is_pinvoke = 1;
} else {
size = mono_type_size (sig->params [i], &align);
}
if (MONO_TYPE_ISSTRUCT (sig->params [i]) && size < sizeof (target_mgreg_t))
size = align = sizeof (target_mgreg_t);
/*
* Use at least 4/8 byte alignment, since these might be passed in registers, and
* they are saved using std in the prolog.
*/
align = sizeof (target_mgreg_t);
offset += align - 1;
offset &= ~(align - 1);
inst->inst_offset = offset;
offset += size;
}
curinst++;
}
/* some storage for fp conversions */
offset += 8 - 1;
offset &= ~(8 - 1);
m->arch.fp_conv_var_offset = offset;
offset += 8;
/* align the offset to 16 bytes */
offset += 16 - 1;
offset &= ~(16 - 1);
/* change sign? */
m->stack_offset = offset;
if (sig->call_convention == MONO_CALL_VARARG) {
CallInfo *cinfo = get_call_info (m->method->signature);
m->sig_cookie = cinfo->sig_cookie.offset;
g_free(cinfo);
}
}
void
mono_arch_create_vars (MonoCompile *cfg)
{
MonoMethodSignature *sig = mono_method_signature_internal (cfg->method);
if (MONO_TYPE_ISSTRUCT (sig->ret)) {
cfg->vret_addr = mono_compile_create_var (cfg, mono_get_int_type (), OP_ARG);
}
}
/* Fixme: we need an alignment solution for enter_method and mono_arch_call_opcode,
* currently alignment in mono_arch_call_opcode is computed without arch_get_argument_info
*/
static void
emit_sig_cookie (MonoCompile *cfg, MonoCallInst *call, CallInfo *cinfo)
{
int sig_reg = mono_alloc_ireg (cfg);
/* FIXME: Add support for signature tokens to AOT */
cfg->disable_aot = TRUE;
MONO_EMIT_NEW_ICONST (cfg, sig_reg, (gulong)call->signature);
MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STORE_MEMBASE_REG,
ppc_r1, cinfo->sig_cookie.offset, sig_reg);
}
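/*
 * mono_arch_emit_call:
 *
 * Emit the IR to set up the outgoing arguments of CALL, moving each argument
 * into the register or stack location picked by get_call_info () and marking
 * vtype arguments with OP_OUTARG_VT for mono_arch_emit_outarg_vt () below.
 */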
void
mono_arch_emit_call (MonoCompile *cfg, MonoCallInst *call)
{
MonoInst *in, *ins;
MonoMethodSignature *sig;
int i, n;
CallInfo *cinfo;
sig = call->signature;
n = sig->param_count + sig->hasthis;
cinfo = get_call_info (sig);
for (i = 0; i < n; ++i) {
ArgInfo *ainfo = cinfo->args + i;
MonoType *t;
if (i >= sig->hasthis)
t = sig->params [i - sig->hasthis];
else
t = mono_get_int_type ();
t = mini_get_underlying_type (t);
if (!sig->pinvoke && (sig->call_convention == MONO_CALL_VARARG) && (i == sig->sentinelpos))
emit_sig_cookie (cfg, call, cinfo);
in = call->args [i];
if (ainfo->regtype == RegTypeGeneral) {
#ifndef __mono_ppc64__
if (!m_type_is_byref (t) && ((t->type == MONO_TYPE_I8) || (t->type == MONO_TYPE_U8))) {
MONO_INST_NEW (cfg, ins, OP_MOVE);
ins->dreg = mono_alloc_ireg (cfg);
ins->sreg1 = MONO_LVREG_LS (in->dreg);
MONO_ADD_INS (cfg->cbb, ins);
mono_call_inst_add_outarg_reg (cfg, call, ins->dreg, ainfo->reg + 1, FALSE);
MONO_INST_NEW (cfg, ins, OP_MOVE);
ins->dreg = mono_alloc_ireg (cfg);
ins->sreg1 = MONO_LVREG_MS (in->dreg);
MONO_ADD_INS (cfg->cbb, ins);
mono_call_inst_add_outarg_reg (cfg, call, ins->dreg, ainfo->reg, FALSE);
} else
#endif
{
MONO_INST_NEW (cfg, ins, OP_MOVE);
ins->dreg = mono_alloc_ireg (cfg);
ins->sreg1 = in->dreg;
MONO_ADD_INS (cfg->cbb, ins);
mono_call_inst_add_outarg_reg (cfg, call, ins->dreg, ainfo->reg, FALSE);
}
} else if (ainfo->regtype == RegTypeStructByAddr) {
MONO_INST_NEW (cfg, ins, OP_OUTARG_VT);
ins->opcode = OP_OUTARG_VT;
ins->sreg1 = in->dreg;
ins->klass = in->klass;
ins->inst_p0 = call;
ins->inst_p1 = mono_mempool_alloc (cfg->mempool, sizeof (ArgInfo));
memcpy (ins->inst_p1, ainfo, sizeof (ArgInfo));
MONO_ADD_INS (cfg->cbb, ins);
} else if (ainfo->regtype == RegTypeStructByVal) {
/* this is further handled in mono_arch_emit_outarg_vt () */
MONO_INST_NEW (cfg, ins, OP_OUTARG_VT);
ins->opcode = OP_OUTARG_VT;
ins->sreg1 = in->dreg;
ins->klass = in->klass;
ins->inst_p0 = call;
ins->inst_p1 = mono_mempool_alloc (cfg->mempool, sizeof (ArgInfo));
memcpy (ins->inst_p1, ainfo, sizeof (ArgInfo));
MONO_ADD_INS (cfg->cbb, ins);
} else if (ainfo->regtype == RegTypeFPStructByVal) {
/* this is further handled in mono_arch_emit_outarg_vt () */
MONO_INST_NEW (cfg, ins, OP_OUTARG_VT);
ins->opcode = OP_OUTARG_VT;
ins->sreg1 = in->dreg;
ins->klass = in->klass;
ins->inst_p0 = call;
ins->inst_p1 = mono_mempool_alloc (cfg->mempool, sizeof (ArgInfo));
memcpy (ins->inst_p1, ainfo, sizeof (ArgInfo));
MONO_ADD_INS (cfg->cbb, ins);
cfg->flags |= MONO_CFG_HAS_FPOUT;
} else if (ainfo->regtype == RegTypeBase) {
if (!m_type_is_byref (t) && ((t->type == MONO_TYPE_I8) || (t->type == MONO_TYPE_U8))) {
MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STOREI8_MEMBASE_REG, ppc_r1, ainfo->offset, in->dreg);
} else if (!m_type_is_byref (t) && ((t->type == MONO_TYPE_R4) || (t->type == MONO_TYPE_R8))) {
if (t->type == MONO_TYPE_R8)
MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STORER8_MEMBASE_REG, ppc_r1, ainfo->offset, in->dreg);
else
MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STORER4_MEMBASE_REG, ppc_r1, ainfo->offset, in->dreg);
} else {
MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STORE_MEMBASE_REG, ppc_r1, ainfo->offset, in->dreg);
}
} else if (ainfo->regtype == RegTypeFP) {
if (t->type == MONO_TYPE_VALUETYPE) {
/* this is further handled in mono_arch_emit_outarg_vt () */
MONO_INST_NEW (cfg, ins, OP_OUTARG_VT);
ins->opcode = OP_OUTARG_VT;
ins->sreg1 = in->dreg;
ins->klass = in->klass;
ins->inst_p0 = call;
ins->inst_p1 = mono_mempool_alloc (cfg->mempool, sizeof (ArgInfo));
memcpy (ins->inst_p1, ainfo, sizeof (ArgInfo));
MONO_ADD_INS (cfg->cbb, ins);
cfg->flags |= MONO_CFG_HAS_FPOUT;
} else {
int dreg = mono_alloc_freg (cfg);
if (ainfo->size == 4) {
MONO_EMIT_NEW_UNALU (cfg, OP_FCONV_TO_R4, dreg, in->dreg);
} else {
MONO_INST_NEW (cfg, ins, OP_FMOVE);
ins->dreg = dreg;
ins->sreg1 = in->dreg;
MONO_ADD_INS (cfg->cbb, ins);
}
mono_call_inst_add_outarg_reg (cfg, call, dreg, ainfo->reg, TRUE);
cfg->flags |= MONO_CFG_HAS_FPOUT;
}
} else {
g_assert_not_reached ();
}
}
/* Emit the signature cookie in the case that there is no
additional argument */
if (!sig->pinvoke && (sig->call_convention == MONO_CALL_VARARG) && (n == sig->sentinelpos))
emit_sig_cookie (cfg, call, cinfo);
if (cinfo->struct_ret) {
MonoInst *vtarg;
MONO_INST_NEW (cfg, vtarg, OP_MOVE);
vtarg->sreg1 = call->vret_var->dreg;
vtarg->dreg = mono_alloc_preg (cfg);
MONO_ADD_INS (cfg->cbb, vtarg);
mono_call_inst_add_outarg_reg (cfg, call, vtarg->dreg, cinfo->struct_ret, FALSE);
}
call->stack_usage = cinfo->stack_usage;
cfg->param_area = MAX (PPC_MINIMAL_PARAM_AREA_SIZE, MAX (cfg->param_area, cinfo->stack_usage));
cfg->flags |= MONO_CFG_HAS_CALLS;
g_free (cinfo);
}
#ifndef DISABLE_JIT
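/*
 * mono_arch_emit_outarg_vt:
 *
 * Emit the loads/moves which transfer the vtype in SRC to the registers
 * and/or caller param area slots recorded in the ArgInfo attached to INS.
 */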
void
mono_arch_emit_outarg_vt (MonoCompile *cfg, MonoInst *ins, MonoInst *src)
{
MonoCallInst *call = (MonoCallInst*)ins->inst_p0;
ArgInfo *ainfo = (ArgInfo*)ins->inst_p1;
int ovf_size = ainfo->vtsize;
int doffset = ainfo->offset;
int i, soffset, dreg;
if (ainfo->regtype == RegTypeStructByVal) {
#ifdef __APPLE__
guint32 size = 0;
#endif
soffset = 0;
#ifdef __APPLE__
/*
		 * Darwin pinvokes need some special handling for 1
		 * and 2 byte arguments
*/
g_assert (ins->klass);
if (call->signature->pinvoke && !call->signature->marshalling_disabled)
size = mono_class_native_size (ins->klass, NULL);
if (size == 2 || size == 1) {
int tmpr = mono_alloc_ireg (cfg);
if (size == 1)
MONO_EMIT_NEW_LOAD_MEMBASE_OP (cfg, OP_LOADI1_MEMBASE, tmpr, src->dreg, soffset);
else
MONO_EMIT_NEW_LOAD_MEMBASE_OP (cfg, OP_LOADI2_MEMBASE, tmpr, src->dreg, soffset);
dreg = mono_alloc_ireg (cfg);
MONO_EMIT_NEW_UNALU (cfg, OP_MOVE, dreg, tmpr);
mono_call_inst_add_outarg_reg (cfg, call, dreg, ainfo->reg, FALSE);
} else
#endif
for (i = 0; i < ainfo->vtregs; ++i) {
dreg = mono_alloc_ireg (cfg);
#if G_BYTE_ORDER == G_BIG_ENDIAN
int antipadding = 0;
if (ainfo->bytes) {
g_assert (i == 0);
antipadding = sizeof (target_mgreg_t) - ainfo->bytes;
}
MONO_EMIT_NEW_LOAD_MEMBASE (cfg, dreg, src->dreg, soffset);
if (antipadding)
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_SHR_UN_IMM, dreg, dreg, antipadding * 8);
#else
MONO_EMIT_NEW_LOAD_MEMBASE (cfg, dreg, src->dreg, soffset);
#endif
mono_call_inst_add_outarg_reg (cfg, call, dreg, ainfo->reg + i, FALSE);
soffset += sizeof (target_mgreg_t);
}
if (ovf_size != 0)
mini_emit_memcpy (cfg, ppc_r1, doffset + soffset, src->dreg, soffset, ovf_size * sizeof (target_mgreg_t), TARGET_SIZEOF_VOID_P);
} else if (ainfo->regtype == RegTypeFPStructByVal) {
soffset = 0;
for (i = 0; i < ainfo->vtregs; ++i) {
int tmpr = mono_alloc_freg (cfg);
if (ainfo->size == 4)
MONO_EMIT_NEW_LOAD_MEMBASE_OP (cfg, OP_LOADR4_MEMBASE, tmpr, src->dreg, soffset);
else // ==8
MONO_EMIT_NEW_LOAD_MEMBASE_OP (cfg, OP_LOADR8_MEMBASE, tmpr, src->dreg, soffset);
dreg = mono_alloc_freg (cfg);
MONO_EMIT_NEW_UNALU (cfg, OP_FMOVE, dreg, tmpr);
mono_call_inst_add_outarg_reg (cfg, call, dreg, ainfo->reg+i, TRUE);
soffset += ainfo->size;
}
if (ovf_size != 0)
mini_emit_memcpy (cfg, ppc_r1, doffset + soffset, src->dreg, soffset, ovf_size * sizeof (target_mgreg_t), TARGET_SIZEOF_VOID_P);
} else if (ainfo->regtype == RegTypeFP) {
int tmpr = mono_alloc_freg (cfg);
if (ainfo->size == 4)
MONO_EMIT_NEW_LOAD_MEMBASE_OP (cfg, OP_LOADR4_MEMBASE, tmpr, src->dreg, 0);
else
MONO_EMIT_NEW_LOAD_MEMBASE_OP (cfg, OP_LOADR8_MEMBASE, tmpr, src->dreg, 0);
dreg = mono_alloc_freg (cfg);
MONO_EMIT_NEW_UNALU (cfg, OP_FMOVE, dreg, tmpr);
mono_call_inst_add_outarg_reg (cfg, call, dreg, ainfo->reg, TRUE);
} else {
MonoInst *vtcopy = mono_compile_create_var (cfg, m_class_get_byval_arg (src->klass), OP_LOCAL);
MonoInst *load;
guint32 size;
/* FIXME: alignment? */
if (call->signature->pinvoke && !call->signature->marshalling_disabled) {
size = mono_type_native_stack_size (m_class_get_byval_arg (src->klass), NULL);
vtcopy->backend.is_pinvoke = 1;
} else {
size = mini_type_stack_size (m_class_get_byval_arg (src->klass), NULL);
}
if (size > 0)
g_assert (ovf_size > 0);
EMIT_NEW_VARLOADA (cfg, load, vtcopy, vtcopy->inst_vtype);
mini_emit_memcpy (cfg, load->dreg, 0, src->dreg, 0, size, TARGET_SIZEOF_VOID_P);
if (ainfo->offset)
MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STORE_MEMBASE_REG, ppc_r1, ainfo->offset, load->dreg);
else
mono_call_inst_add_outarg_reg (cfg, call, load->dreg, ainfo->reg, FALSE);
}
}
void
mono_arch_emit_setret (MonoCompile *cfg, MonoMethod *method, MonoInst *val)
{
MonoType *ret = mini_get_underlying_type (mono_method_signature_internal (method)->ret);
	if (!m_type_is_byref (ret)) {
#ifndef __mono_ppc64__
if (ret->type == MONO_TYPE_I8 || ret->type == MONO_TYPE_U8) {
MonoInst *ins;
MONO_INST_NEW (cfg, ins, OP_SETLRET);
ins->sreg1 = MONO_LVREG_LS (val->dreg);
ins->sreg2 = MONO_LVREG_MS (val->dreg);
MONO_ADD_INS (cfg->cbb, ins);
return;
}
#endif
if (ret->type == MONO_TYPE_R8 || ret->type == MONO_TYPE_R4) {
MONO_EMIT_NEW_UNALU (cfg, OP_FMOVE, cfg->ret->dreg, val->dreg);
return;
}
}
MONO_EMIT_NEW_UNALU (cfg, OP_MOVE, cfg->ret->dreg, val->dreg);
}
gboolean
mono_arch_is_inst_imm (int opcode, int imm_opcode, gint64 imm)
{
return TRUE;
}
#endif /* DISABLE_JIT */
/*
 * Conditional branches have a small offset, so if it is likely to overflow,
* we do a branch to the end of the method (uncond branches have much larger
* offsets) where we perform the conditional and jump back unconditionally.
* It's slightly slower, since we add two uncond branches, but it's very simple
* with the current patch implementation and such large methods are likely not
* going to be perf critical anyway.
*/
typedef struct {
union {
MonoBasicBlock *bb;
const char *exception;
} data;
guint32 ip_offset;
guint16 b0_cond;
guint16 b1_cond;
} MonoOvfJump;
#define EMIT_COND_BRANCH_FLAGS(ins,b0,b1) \
if (0 && ins->inst_true_bb->native_offset) { \
ppc_bc (code, (b0), (b1), (code - cfg->native_code + ins->inst_true_bb->native_offset) & 0xffff); \
} else { \
int br_disp = ins->inst_true_bb->max_offset - offset; \
if (!ppc_is_imm16 (br_disp + 8 * 1024) || !ppc_is_imm16 (br_disp - 8 * 1024)) { \
MonoOvfJump *ovfj = mono_mempool_alloc (cfg->mempool, sizeof (MonoOvfJump)); \
ovfj->data.bb = ins->inst_true_bb; \
ovfj->ip_offset = 0; \
ovfj->b0_cond = (b0); \
ovfj->b1_cond = (b1); \
mono_add_patch_info (cfg, code - cfg->native_code, MONO_PATCH_INFO_BB_OVF, ovfj); \
ppc_b (code, 0); \
} else { \
mono_add_patch_info (cfg, code - cfg->native_code, MONO_PATCH_INFO_BB, ins->inst_true_bb); \
ppc_bc (code, (b0), (b1), 0); \
} \
}
#define EMIT_COND_BRANCH(ins,cond) EMIT_COND_BRANCH_FLAGS(ins, branch_b0_table [(cond)], branch_b1_table [(cond)])
/* emit an exception if condition is fail
*
* We assign the extra code used to throw the implicit exceptions
* to cfg->bb_exit as far as the big branch handling is concerned
*/
#define EMIT_COND_SYSTEM_EXCEPTION_FLAGS(b0,b1,exc_name) \
do { \
int br_disp = cfg->bb_exit->max_offset - offset; \
	if (!ppc_is_imm16 (br_disp + 1024) || !ppc_is_imm16 (br_disp - 1024)) {	\
MonoOvfJump *ovfj = mono_mempool_alloc (cfg->mempool, sizeof (MonoOvfJump)); \
ovfj->data.exception = (exc_name); \
ovfj->ip_offset = code - cfg->native_code; \
ovfj->b0_cond = (b0); \
ovfj->b1_cond = (b1); \
mono_add_patch_info (cfg, code - cfg->native_code, MONO_PATCH_INFO_EXC_OVF, ovfj); \
ppc_bl (code, 0); \
cfg->bb_exit->max_offset += 24; \
} else { \
mono_add_patch_info (cfg, code - cfg->native_code, \
MONO_PATCH_INFO_EXC, exc_name); \
ppc_bcl (code, (b0), (b1), 0); \
} \
} while (0);
#define EMIT_COND_SYSTEM_EXCEPTION(cond,exc_name) EMIT_COND_SYSTEM_EXCEPTION_FLAGS(branch_b0_table [(cond)], branch_b1_table [(cond)], (exc_name))
void
mono_arch_peephole_pass_1 (MonoCompile *cfg, MonoBasicBlock *bb)
{
}
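/*
 * Fold the 32-bit and 64-bit variants of load/store/shift opcodes into a
 * single canonical opcode so the peephole patterns below only need to match
 * each form once.
 */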
static int
normalize_opcode (int opcode)
{
switch (opcode) {
#ifndef MONO_ARCH_ILP32
case MONO_PPC_32_64_CASE (OP_LOADI4_MEMBASE, OP_LOADI8_MEMBASE):
return OP_LOAD_MEMBASE;
case MONO_PPC_32_64_CASE (OP_LOADI4_MEMINDEX, OP_LOADI8_MEMINDEX):
return OP_LOAD_MEMINDEX;
case MONO_PPC_32_64_CASE (OP_STOREI4_MEMBASE_REG, OP_STOREI8_MEMBASE_REG):
return OP_STORE_MEMBASE_REG;
case MONO_PPC_32_64_CASE (OP_STOREI4_MEMBASE_IMM, OP_STOREI8_MEMBASE_IMM):
return OP_STORE_MEMBASE_IMM;
case MONO_PPC_32_64_CASE (OP_STOREI4_MEMINDEX, OP_STOREI8_MEMINDEX):
return OP_STORE_MEMINDEX;
#endif
case MONO_PPC_32_64_CASE (OP_ISHR_IMM, OP_LSHR_IMM):
return OP_SHR_IMM;
case MONO_PPC_32_64_CASE (OP_ISHR_UN_IMM, OP_LSHR_UN_IMM):
return OP_SHR_UN_IMM;
default:
return opcode;
}
}
void
mono_arch_peephole_pass_2 (MonoCompile *cfg, MonoBasicBlock *bb)
{
MonoInst *ins, *n, *last_ins = NULL;
MONO_BB_FOR_EACH_INS_SAFE (bb, n, ins) {
switch (normalize_opcode (ins->opcode)) {
case OP_MUL_IMM:
/* remove unnecessary multiplication with 1 */
if (ins->inst_imm == 1) {
if (ins->dreg != ins->sreg1) {
ins->opcode = OP_MOVE;
} else {
MONO_DELETE_INS (bb, ins);
continue;
}
			} else if (ins->inst_imm > 0) {
int power2 = mono_is_power_of_two (ins->inst_imm);
if (power2 > 0) {
ins->opcode = OP_SHL_IMM;
ins->inst_imm = power2;
}
}
break;
case OP_LOAD_MEMBASE:
/*
* OP_STORE_MEMBASE_REG reg, offset(basereg)
* OP_LOAD_MEMBASE offset(basereg), reg
*/
if (last_ins && normalize_opcode (last_ins->opcode) == OP_STORE_MEMBASE_REG &&
ins->inst_basereg == last_ins->inst_destbasereg &&
ins->inst_offset == last_ins->inst_offset) {
if (ins->dreg == last_ins->sreg1) {
MONO_DELETE_INS (bb, ins);
continue;
} else {
//static int c = 0; printf ("MATCHX %s %d\n", cfg->method->name,c++);
ins->opcode = OP_MOVE;
ins->sreg1 = last_ins->sreg1;
}
/*
* Note: reg1 must be different from the basereg in the second load
* OP_LOAD_MEMBASE offset(basereg), reg1
* OP_LOAD_MEMBASE offset(basereg), reg2
* -->
* OP_LOAD_MEMBASE offset(basereg), reg1
* OP_MOVE reg1, reg2
*/
} else if (last_ins && normalize_opcode (last_ins->opcode) == OP_LOAD_MEMBASE &&
ins->inst_basereg != last_ins->dreg &&
ins->inst_basereg == last_ins->inst_basereg &&
ins->inst_offset == last_ins->inst_offset) {
if (ins->dreg == last_ins->dreg) {
MONO_DELETE_INS (bb, ins);
continue;
} else {
ins->opcode = OP_MOVE;
ins->sreg1 = last_ins->dreg;
}
//g_assert_not_reached ();
#if 0
/*
* OP_STORE_MEMBASE_IMM imm, offset(basereg)
* OP_LOAD_MEMBASE offset(basereg), reg
* -->
* OP_STORE_MEMBASE_IMM imm, offset(basereg)
* OP_ICONST reg, imm
*/
} else if (last_ins && normalize_opcode (last_ins->opcode) == OP_STORE_MEMBASE_IMM &&
ins->inst_basereg == last_ins->inst_destbasereg &&
ins->inst_offset == last_ins->inst_offset) {
//static int c = 0; printf ("MATCHX %s %d\n", cfg->method->name,c++);
ins->opcode = OP_ICONST;
ins->inst_c0 = last_ins->inst_imm;
g_assert_not_reached (); // check this rule
#endif
}
break;
case OP_LOADU1_MEMBASE:
case OP_LOADI1_MEMBASE:
if (last_ins && (last_ins->opcode == OP_STOREI1_MEMBASE_REG) &&
ins->inst_basereg == last_ins->inst_destbasereg &&
ins->inst_offset == last_ins->inst_offset) {
ins->opcode = (ins->opcode == OP_LOADI1_MEMBASE) ? OP_ICONV_TO_I1 : OP_ICONV_TO_U1;
ins->sreg1 = last_ins->sreg1;
}
break;
case OP_LOADU2_MEMBASE:
case OP_LOADI2_MEMBASE:
if (last_ins && (last_ins->opcode == OP_STOREI2_MEMBASE_REG) &&
ins->inst_basereg == last_ins->inst_destbasereg &&
ins->inst_offset == last_ins->inst_offset) {
ins->opcode = (ins->opcode == OP_LOADI2_MEMBASE) ? OP_ICONV_TO_I2 : OP_ICONV_TO_U2;
ins->sreg1 = last_ins->sreg1;
}
break;
#ifdef __mono_ppc64__
case OP_LOADU4_MEMBASE:
case OP_LOADI4_MEMBASE:
if (last_ins && (last_ins->opcode == OP_STOREI4_MEMBASE_REG) &&
ins->inst_basereg == last_ins->inst_destbasereg &&
ins->inst_offset == last_ins->inst_offset) {
ins->opcode = (ins->opcode == OP_LOADI4_MEMBASE) ? OP_ICONV_TO_I4 : OP_ICONV_TO_U4;
ins->sreg1 = last_ins->sreg1;
}
break;
#endif
case OP_MOVE:
ins->opcode = OP_MOVE;
/*
* OP_MOVE reg, reg
*/
if (ins->dreg == ins->sreg1) {
MONO_DELETE_INS (bb, ins);
continue;
}
/*
* OP_MOVE sreg, dreg
* OP_MOVE dreg, sreg
*/
if (last_ins && last_ins->opcode == OP_MOVE &&
ins->sreg1 == last_ins->dreg &&
ins->dreg == last_ins->sreg1) {
MONO_DELETE_INS (bb, ins);
continue;
}
break;
}
last_ins = ins;
ins = ins->next;
}
bb->last_ins = last_ins;
}
void
mono_arch_decompose_opts (MonoCompile *cfg, MonoInst *ins)
{
switch (ins->opcode) {
case OP_ICONV_TO_R_UN: {
// This value is OK as-is for both big and little endian because of how it is stored
static const guint64 adjust_val = 0x4330000000000000ULL;
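		/* The classic double-word trick: storing 0x43300000 as the high
		 * word and the operand as the low word forms the IEEE double
		 * 2^52 + sreg1; subtracting 2^52 (adjust_val) then leaves exactly
		 * the converted value. */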
int msw_reg = mono_alloc_ireg (cfg);
int adj_reg = mono_alloc_freg (cfg);
int tmp_reg = mono_alloc_freg (cfg);
int basereg = ppc_sp;
int offset = -8;
MONO_EMIT_NEW_ICONST (cfg, msw_reg, 0x43300000);
if (!ppc_is_imm16 (offset + 4)) {
basereg = mono_alloc_ireg (cfg);
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_IADD_IMM, basereg, cfg->frame_reg, offset);
}
#if G_BYTE_ORDER == G_BIG_ENDIAN
MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STOREI4_MEMBASE_REG, basereg, offset, msw_reg);
MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STOREI4_MEMBASE_REG, basereg, offset + 4, ins->sreg1);
#else
// For little endian the words are reversed
MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STOREI4_MEMBASE_REG, basereg, offset + 4, msw_reg);
MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STOREI4_MEMBASE_REG, basereg, offset, ins->sreg1);
#endif
MONO_EMIT_NEW_LOAD_R8 (cfg, adj_reg, &adjust_val);
MONO_EMIT_NEW_LOAD_MEMBASE_OP (cfg, OP_LOADR8_MEMBASE, tmp_reg, basereg, offset);
MONO_EMIT_NEW_BIALU (cfg, OP_FSUB, ins->dreg, tmp_reg, adj_reg);
ins->opcode = OP_NOP;
break;
}
#ifndef __mono_ppc64__
case OP_ICONV_TO_R4:
case OP_ICONV_TO_R8: {
		/* If we have a PPC_FEATURE_64 machine we can avoid
		   this and use the fcfid instruction. Otherwise, on
		   an old 32-bit chip, we have to do this the
		   hard way. */
if (!(cpu_hw_caps & PPC_ISA_64)) {
/* FIXME: change precision for CEE_CONV_R4 */
static const guint64 adjust_val = 0x4330000080000000ULL;
int msw_reg = mono_alloc_ireg (cfg);
int xored = mono_alloc_ireg (cfg);
int adj_reg = mono_alloc_freg (cfg);
int tmp_reg = mono_alloc_freg (cfg);
int basereg = ppc_sp;
int offset = -8;
if (!ppc_is_imm16 (offset + 4)) {
basereg = mono_alloc_ireg (cfg);
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_IADD_IMM, basereg, cfg->frame_reg, offset);
}
MONO_EMIT_NEW_ICONST (cfg, msw_reg, 0x43300000);
MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STOREI4_MEMBASE_REG, basereg, offset, msw_reg);
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_XOR_IMM, xored, ins->sreg1, 0x80000000);
MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STOREI4_MEMBASE_REG, basereg, offset + 4, xored);
MONO_EMIT_NEW_LOAD_R8 (cfg, adj_reg, (gpointer)&adjust_val);
MONO_EMIT_NEW_LOAD_MEMBASE_OP (cfg, OP_LOADR8_MEMBASE, tmp_reg, basereg, offset);
MONO_EMIT_NEW_BIALU (cfg, OP_FSUB, ins->dreg, tmp_reg, adj_reg);
if (ins->opcode == OP_ICONV_TO_R4)
MONO_EMIT_NEW_UNALU (cfg, OP_FCONV_TO_R4, ins->dreg, ins->dreg);
ins->opcode = OP_NOP;
}
break;
}
#endif
case OP_CKFINITE: {
int msw_reg = mono_alloc_ireg (cfg);
int basereg = ppc_sp;
int offset = -8;
if (!ppc_is_imm16 (offset + 4)) {
basereg = mono_alloc_ireg (cfg);
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_IADD_IMM, basereg, cfg->frame_reg, offset);
}
MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STORER8_MEMBASE_REG, basereg, offset, ins->sreg1);
#if G_BYTE_ORDER == G_BIG_ENDIAN
MONO_EMIT_NEW_LOAD_MEMBASE_OP (cfg, OP_LOADI4_MEMBASE, msw_reg, basereg, offset);
#else
MONO_EMIT_NEW_LOAD_MEMBASE_OP (cfg, OP_LOADI4_MEMBASE, msw_reg, basereg, offset+4);
#endif
MONO_EMIT_NEW_UNALU (cfg, OP_PPC_CHECK_FINITE, -1, msw_reg);
MONO_EMIT_NEW_UNALU (cfg, OP_FMOVE, ins->dreg, ins->sreg1);
ins->opcode = OP_NOP;
break;
}
#ifdef __mono_ppc64__
case OP_IADD_OVF:
case OP_IADD_OVF_UN:
case OP_ISUB_OVF: {
int shifted1_reg = mono_alloc_ireg (cfg);
int shifted2_reg = mono_alloc_ireg (cfg);
int result_shifted_reg = mono_alloc_ireg (cfg);
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_SHL_IMM, shifted1_reg, ins->sreg1, 32);
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_SHL_IMM, shifted2_reg, ins->sreg2, 32);
MONO_EMIT_NEW_BIALU (cfg, ins->opcode, result_shifted_reg, shifted1_reg, shifted2_reg);
if (ins->opcode == OP_IADD_OVF_UN)
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_SHR_UN_IMM, ins->dreg, result_shifted_reg, 32);
else
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_SHR_IMM, ins->dreg, result_shifted_reg, 32);
ins->opcode = OP_NOP;
break;
}
#endif
default:
break;
}
}
void
mono_arch_decompose_long_opts (MonoCompile *cfg, MonoInst *ins)
{
switch (ins->opcode) {
case OP_LADD_OVF:
/* ADC sets the condition code */
MONO_EMIT_NEW_BIALU (cfg, OP_ADDCC, MONO_LVREG_LS (ins->dreg), MONO_LVREG_LS (ins->sreg1), MONO_LVREG_LS (ins->sreg2));
MONO_EMIT_NEW_BIALU (cfg, OP_ADD_OVF_CARRY, MONO_LVREG_MS (ins->dreg), MONO_LVREG_MS (ins->sreg1), MONO_LVREG_MS (ins->sreg2));
NULLIFY_INS (ins);
break;
case OP_LADD_OVF_UN:
/* ADC sets the condition code */
MONO_EMIT_NEW_BIALU (cfg, OP_ADDCC, MONO_LVREG_LS (ins->dreg), MONO_LVREG_LS (ins->sreg1), MONO_LVREG_LS (ins->sreg2));
MONO_EMIT_NEW_BIALU (cfg, OP_ADD_OVF_UN_CARRY, MONO_LVREG_MS (ins->dreg), MONO_LVREG_MS (ins->sreg1), MONO_LVREG_MS (ins->sreg2));
NULLIFY_INS (ins);
break;
case OP_LSUB_OVF:
/* SBB sets the condition code */
MONO_EMIT_NEW_BIALU (cfg, OP_SUBCC, MONO_LVREG_LS (ins->dreg), MONO_LVREG_LS (ins->sreg1), MONO_LVREG_LS (ins->sreg2));
MONO_EMIT_NEW_BIALU (cfg, OP_SUB_OVF_CARRY, MONO_LVREG_MS (ins->dreg), MONO_LVREG_MS (ins->sreg1), MONO_LVREG_MS (ins->sreg2));
NULLIFY_INS (ins);
break;
case OP_LSUB_OVF_UN:
/* SBB sets the condition code */
MONO_EMIT_NEW_BIALU (cfg, OP_SUBCC, MONO_LVREG_LS (ins->dreg), MONO_LVREG_LS (ins->sreg1), MONO_LVREG_LS (ins->sreg2));
MONO_EMIT_NEW_BIALU (cfg, OP_SUB_OVF_UN_CARRY, MONO_LVREG_MS (ins->dreg), MONO_LVREG_MS (ins->sreg1), MONO_LVREG_MS (ins->sreg2));
NULLIFY_INS (ins);
break;
case OP_LNEG:
/* From gcc generated code */
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_PPC_SUBFIC, MONO_LVREG_LS (ins->dreg), MONO_LVREG_LS (ins->sreg1), 0);
MONO_EMIT_NEW_UNALU (cfg, OP_PPC_SUBFZE, MONO_LVREG_MS (ins->dreg), MONO_LVREG_MS (ins->sreg1));
NULLIFY_INS (ins);
break;
default:
break;
}
}
/*
* the branch_b0_table should maintain the order of these
* opcodes.
case CEE_BEQ:
case CEE_BGE:
case CEE_BGT:
case CEE_BLE:
case CEE_BLT:
case CEE_BNE_UN:
case CEE_BGE_UN:
case CEE_BGT_UN:
case CEE_BLE_UN:
case CEE_BLT_UN:
*/
static const guchar
branch_b0_table [] = {
PPC_BR_TRUE,
PPC_BR_FALSE,
PPC_BR_TRUE,
PPC_BR_FALSE,
PPC_BR_TRUE,
PPC_BR_FALSE,
PPC_BR_FALSE,
PPC_BR_TRUE,
PPC_BR_FALSE,
PPC_BR_TRUE
};
static const guchar
branch_b1_table [] = {
PPC_BR_EQ,
PPC_BR_LT,
PPC_BR_GT,
PPC_BR_GT,
PPC_BR_LT,
PPC_BR_EQ,
PPC_BR_LT,
PPC_BR_GT,
PPC_BR_GT,
PPC_BR_LT
};
#define NEW_INS(cfg,dest,op) do { \
MONO_INST_NEW((cfg), (dest), (op)); \
mono_bblock_insert_after_ins (bb, last_ins, (dest)); \
} while (0)
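/* Map an opcode taking an immediate to its register-register equivalent;
 * used by the lowering pass when an immediate doesn't fit in 16 bits and
 * has to be materialized into a register first. */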
static int
map_to_reg_reg_op (int op)
{
switch (op) {
case OP_ADD_IMM:
return OP_IADD;
case OP_SUB_IMM:
return OP_ISUB;
case OP_AND_IMM:
return OP_IAND;
case OP_COMPARE_IMM:
return OP_COMPARE;
case OP_ICOMPARE_IMM:
return OP_ICOMPARE;
case OP_LCOMPARE_IMM:
return OP_LCOMPARE;
case OP_ADDCC_IMM:
return OP_IADDCC;
case OP_ADC_IMM:
return OP_IADC;
case OP_SUBCC_IMM:
return OP_ISUBCC;
case OP_SBB_IMM:
return OP_ISBB;
case OP_OR_IMM:
return OP_IOR;
case OP_XOR_IMM:
return OP_IXOR;
case OP_MUL_IMM:
return OP_IMUL;
case OP_LMUL_IMM:
return OP_LMUL;
case OP_LOAD_MEMBASE:
return OP_LOAD_MEMINDEX;
case OP_LOADI4_MEMBASE:
return OP_LOADI4_MEMINDEX;
case OP_LOADU4_MEMBASE:
return OP_LOADU4_MEMINDEX;
case OP_LOADI8_MEMBASE:
return OP_LOADI8_MEMINDEX;
case OP_LOADU1_MEMBASE:
return OP_LOADU1_MEMINDEX;
case OP_LOADI2_MEMBASE:
return OP_LOADI2_MEMINDEX;
case OP_LOADU2_MEMBASE:
return OP_LOADU2_MEMINDEX;
case OP_LOADI1_MEMBASE:
return OP_LOADI1_MEMINDEX;
case OP_LOADR4_MEMBASE:
return OP_LOADR4_MEMINDEX;
case OP_LOADR8_MEMBASE:
return OP_LOADR8_MEMINDEX;
case OP_STOREI1_MEMBASE_REG:
return OP_STOREI1_MEMINDEX;
case OP_STOREI2_MEMBASE_REG:
return OP_STOREI2_MEMINDEX;
case OP_STOREI4_MEMBASE_REG:
return OP_STOREI4_MEMINDEX;
case OP_STOREI8_MEMBASE_REG:
return OP_STOREI8_MEMINDEX;
case OP_STORE_MEMBASE_REG:
return OP_STORE_MEMINDEX;
case OP_STORER4_MEMBASE_REG:
return OP_STORER4_MEMINDEX;
case OP_STORER8_MEMBASE_REG:
return OP_STORER8_MEMINDEX;
case OP_STORE_MEMBASE_IMM:
return OP_STORE_MEMBASE_REG;
case OP_STOREI1_MEMBASE_IMM:
return OP_STOREI1_MEMBASE_REG;
case OP_STOREI2_MEMBASE_IMM:
return OP_STOREI2_MEMBASE_REG;
case OP_STOREI4_MEMBASE_IMM:
return OP_STOREI4_MEMBASE_REG;
case OP_STOREI8_MEMBASE_IMM:
return OP_STOREI8_MEMBASE_REG;
}
if (mono_op_imm_to_op (op) == -1)
g_error ("mono_op_imm_to_op failed for %s\n", mono_inst_name (op));
return mono_op_imm_to_op (op);
}
//#define map_to_reg_reg_op(op) (cfg->new_ir? mono_op_imm_to_op (op): map_to_reg_reg_op (op))
#define compare_opcode_is_unsigned(opcode) \
(((opcode) >= CEE_BNE_UN && (opcode) <= CEE_BLT_UN) || \
((opcode) >= OP_IBNE_UN && (opcode) <= OP_IBLT_UN) || \
((opcode) >= OP_LBNE_UN && (opcode) <= OP_LBLT_UN) || \
((opcode) >= OP_COND_EXC_NE_UN && (opcode) <= OP_COND_EXC_LT_UN) || \
((opcode) >= OP_COND_EXC_INE_UN && (opcode) <= OP_COND_EXC_ILT_UN) || \
((opcode) == OP_CLT_UN || (opcode) == OP_CGT_UN || \
(opcode) == OP_ICLT_UN || (opcode) == OP_ICGT_UN || \
(opcode) == OP_LCLT_UN || (opcode) == OP_LCGT_UN))
/*
* Remove from the instruction list the instructions that can't be
* represented with very simple instructions with no register
* requirements.
*/
void
mono_arch_lowering_pass (MonoCompile *cfg, MonoBasicBlock *bb)
{
MonoInst *ins, *next, *temp, *last_ins = NULL;
int imm;
MONO_BB_FOR_EACH_INS (bb, ins) {
loop_start:
switch (ins->opcode) {
case OP_IDIV_UN_IMM:
case OP_IDIV_IMM:
case OP_IREM_IMM:
case OP_IREM_UN_IMM:
CASE_PPC64 (OP_LREM_IMM) {
NEW_INS (cfg, temp, OP_ICONST);
temp->inst_c0 = ins->inst_imm;
temp->dreg = mono_alloc_ireg (cfg);
ins->sreg2 = temp->dreg;
if (ins->opcode == OP_IDIV_IMM)
ins->opcode = OP_IDIV;
else if (ins->opcode == OP_IREM_IMM)
ins->opcode = OP_IREM;
else if (ins->opcode == OP_IDIV_UN_IMM)
ins->opcode = OP_IDIV_UN;
else if (ins->opcode == OP_IREM_UN_IMM)
ins->opcode = OP_IREM_UN;
else if (ins->opcode == OP_LREM_IMM)
ins->opcode = OP_LREM;
last_ins = temp;
/* handle rem separately */
goto loop_start;
}
case OP_IREM:
case OP_IREM_UN:
CASE_PPC64 (OP_LREM)
CASE_PPC64 (OP_LREM_UN) {
MonoInst *mul;
/* we change a rem dest, src1, src2 to
* div temp1, src1, src2
* mul temp2, temp1, src2
* sub dest, src1, temp2
*/
if (ins->opcode == OP_IREM || ins->opcode == OP_IREM_UN) {
NEW_INS (cfg, mul, OP_IMUL);
NEW_INS (cfg, temp, ins->opcode == OP_IREM? OP_IDIV: OP_IDIV_UN);
ins->opcode = OP_ISUB;
} else {
NEW_INS (cfg, mul, OP_LMUL);
NEW_INS (cfg, temp, ins->opcode == OP_LREM? OP_LDIV: OP_LDIV_UN);
ins->opcode = OP_LSUB;
}
temp->sreg1 = ins->sreg1;
temp->sreg2 = ins->sreg2;
temp->dreg = mono_alloc_ireg (cfg);
mul->sreg1 = temp->dreg;
mul->sreg2 = ins->sreg2;
mul->dreg = mono_alloc_ireg (cfg);
ins->sreg2 = mul->dreg;
break;
}
case OP_IADD_IMM:
CASE_PPC64 (OP_LADD_IMM)
case OP_ADD_IMM:
case OP_ADDCC_IMM:
if (!ppc_is_imm16 (ins->inst_imm)) {
NEW_INS (cfg, temp, OP_ICONST);
temp->inst_c0 = ins->inst_imm;
temp->dreg = mono_alloc_ireg (cfg);
ins->sreg2 = temp->dreg;
ins->opcode = map_to_reg_reg_op (ins->opcode);
}
break;
case OP_ISUB_IMM:
CASE_PPC64 (OP_LSUB_IMM)
case OP_SUB_IMM:
if (!ppc_is_imm16 (-ins->inst_imm)) {
NEW_INS (cfg, temp, OP_ICONST);
temp->inst_c0 = ins->inst_imm;
temp->dreg = mono_alloc_ireg (cfg);
ins->sreg2 = temp->dreg;
ins->opcode = map_to_reg_reg_op (ins->opcode);
}
break;
case OP_IAND_IMM:
case OP_IOR_IMM:
case OP_IXOR_IMM:
case OP_LAND_IMM:
case OP_LOR_IMM:
case OP_LXOR_IMM:
case OP_AND_IMM:
case OP_OR_IMM:
case OP_XOR_IMM: {
gboolean is_imm = ((ins->inst_imm & 0xffff0000) && (ins->inst_imm & 0xffff));
#ifdef __mono_ppc64__
if (ins->inst_imm & 0xffffffff00000000ULL)
is_imm = TRUE;
#endif
if (is_imm) {
NEW_INS (cfg, temp, OP_ICONST);
temp->inst_c0 = ins->inst_imm;
temp->dreg = mono_alloc_ireg (cfg);
ins->sreg2 = temp->dreg;
ins->opcode = map_to_reg_reg_op (ins->opcode);
}
break;
}
case OP_ISBB_IMM:
case OP_IADC_IMM:
case OP_SBB_IMM:
case OP_SUBCC_IMM:
case OP_ADC_IMM:
NEW_INS (cfg, temp, OP_ICONST);
temp->inst_c0 = ins->inst_imm;
temp->dreg = mono_alloc_ireg (cfg);
ins->sreg2 = temp->dreg;
ins->opcode = map_to_reg_reg_op (ins->opcode);
break;
case OP_COMPARE_IMM:
case OP_ICOMPARE_IMM:
CASE_PPC64 (OP_LCOMPARE_IMM)
next = ins->next;
/* Branch opts can eliminate the branch */
if (!next || (!(MONO_IS_COND_BRANCH_OP (next) || MONO_IS_COND_EXC (next) || MONO_IS_SETCC (next)))) {
ins->opcode = OP_NOP;
break;
}
g_assert(next);
if (compare_opcode_is_unsigned (next->opcode)) {
if (!ppc_is_uimm16 (ins->inst_imm)) {
NEW_INS (cfg, temp, OP_ICONST);
temp->inst_c0 = ins->inst_imm;
temp->dreg = mono_alloc_ireg (cfg);
ins->sreg2 = temp->dreg;
ins->opcode = map_to_reg_reg_op (ins->opcode);
}
} else {
if (!ppc_is_imm16 (ins->inst_imm)) {
NEW_INS (cfg, temp, OP_ICONST);
temp->inst_c0 = ins->inst_imm;
temp->dreg = mono_alloc_ireg (cfg);
ins->sreg2 = temp->dreg;
ins->opcode = map_to_reg_reg_op (ins->opcode);
}
}
break;
case OP_IMUL_IMM:
case OP_MUL_IMM:
CASE_PPC64 (OP_LMUL_IMM)
if (ins->inst_imm == 1) {
ins->opcode = OP_MOVE;
break;
}
if (ins->inst_imm == 0) {
ins->opcode = OP_ICONST;
ins->inst_c0 = 0;
break;
}
imm = (ins->inst_imm > 0) ? mono_is_power_of_two (ins->inst_imm) : -1;
if (imm > 0) {
ins->opcode = OP_SHL_IMM;
ins->inst_imm = imm;
break;
}
if (!ppc_is_imm16 (ins->inst_imm)) {
NEW_INS (cfg, temp, OP_ICONST);
temp->inst_c0 = ins->inst_imm;
temp->dreg = mono_alloc_ireg (cfg);
ins->sreg2 = temp->dreg;
ins->opcode = map_to_reg_reg_op (ins->opcode);
}
break;
case OP_LOCALLOC_IMM:
NEW_INS (cfg, temp, OP_ICONST);
temp->inst_c0 = ins->inst_imm;
temp->dreg = mono_alloc_ireg (cfg);
ins->sreg1 = temp->dreg;
ins->opcode = OP_LOCALLOC;
break;
case OP_LOAD_MEMBASE:
case OP_LOADI4_MEMBASE:
CASE_PPC64 (OP_LOADI8_MEMBASE)
case OP_LOADU4_MEMBASE:
case OP_LOADI2_MEMBASE:
case OP_LOADU2_MEMBASE:
case OP_LOADI1_MEMBASE:
case OP_LOADU1_MEMBASE:
case OP_LOADR4_MEMBASE:
case OP_LOADR8_MEMBASE:
case OP_STORE_MEMBASE_REG:
CASE_PPC64 (OP_STOREI8_MEMBASE_REG)
case OP_STOREI4_MEMBASE_REG:
case OP_STOREI2_MEMBASE_REG:
case OP_STOREI1_MEMBASE_REG:
case OP_STORER4_MEMBASE_REG:
case OP_STORER8_MEMBASE_REG:
/* we can do two things: load the immed in a register
* and use an indexed load, or see if the immed can be
			 * represented as an add_imm + a load with a smaller offset
* that fits. We just do the first for now, optimize later.
*/
if (ppc_is_imm16 (ins->inst_offset))
break;
NEW_INS (cfg, temp, OP_ICONST);
temp->inst_c0 = ins->inst_offset;
temp->dreg = mono_alloc_ireg (cfg);
ins->sreg2 = temp->dreg;
ins->opcode = map_to_reg_reg_op (ins->opcode);
break;
case OP_STORE_MEMBASE_IMM:
case OP_STOREI1_MEMBASE_IMM:
case OP_STOREI2_MEMBASE_IMM:
case OP_STOREI4_MEMBASE_IMM:
CASE_PPC64 (OP_STOREI8_MEMBASE_IMM)
NEW_INS (cfg, temp, OP_ICONST);
temp->inst_c0 = ins->inst_imm;
temp->dreg = mono_alloc_ireg (cfg);
ins->sreg1 = temp->dreg;
ins->opcode = map_to_reg_reg_op (ins->opcode);
last_ins = temp;
goto loop_start; /* make it handle the possibly big ins->inst_offset */
case OP_R8CONST:
case OP_R4CONST:
if (cfg->compile_aot) {
/* Keep these in the aot case */
break;
}
NEW_INS (cfg, temp, OP_ICONST);
temp->inst_c0 = (gulong)ins->inst_p0;
temp->dreg = mono_alloc_ireg (cfg);
ins->inst_basereg = temp->dreg;
ins->inst_offset = 0;
ins->opcode = ins->opcode == OP_R4CONST? OP_LOADR4_MEMBASE: OP_LOADR8_MEMBASE;
last_ins = temp;
/* make it handle the possibly big ins->inst_offset
* later optimize to use lis + load_membase
*/
goto loop_start;
}
last_ins = ins;
}
bb->last_ins = last_ins;
bb->max_vreg = cfg->next_vreg;
}
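/*
 * Convert the float in SREG to an integer in DREG by rounding towards zero
 * (fctiwz/fctidz), spilling the result through the fp conversion slot in the
 * frame, and then sign- or zero-extending it to SIZE bytes.
 */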
static guchar*
emit_float_to_int (MonoCompile *cfg, guchar *code, int dreg, int sreg, int size, gboolean is_signed)
{
long offset = cfg->arch.fp_conv_var_offset;
long sub_offset;
	/* sreg is a float, dreg is an integer reg. ppc_f0 is used as a scratch */
#ifdef __mono_ppc64__
if (size == 8) {
ppc_fctidz (code, ppc_f0, sreg);
sub_offset = 0;
} else
#endif
{
ppc_fctiwz (code, ppc_f0, sreg);
sub_offset = 4;
}
if (ppc_is_imm16 (offset + sub_offset)) {
ppc_stfd (code, ppc_f0, offset, cfg->frame_reg);
if (size == 8)
ppc_ldr (code, dreg, offset + sub_offset, cfg->frame_reg);
else
ppc_lwz (code, dreg, offset + sub_offset, cfg->frame_reg);
} else {
ppc_load (code, dreg, offset);
ppc_add (code, dreg, dreg, cfg->frame_reg);
ppc_stfd (code, ppc_f0, 0, dreg);
if (size == 8)
ppc_ldr (code, dreg, sub_offset, dreg);
else
ppc_lwz (code, dreg, sub_offset, dreg);
}
if (!is_signed) {
if (size == 1)
ppc_andid (code, dreg, dreg, 0xff);
else if (size == 2)
ppc_andid (code, dreg, dreg, 0xffff);
#ifdef __mono_ppc64__
else if (size == 4)
ppc_clrldi (code, dreg, dreg, 32);
#endif
} else {
if (size == 1)
ppc_extsb (code, dreg, dreg);
else if (size == 2)
ppc_extsh (code, dreg, dreg);
#ifdef __mono_ppc64__
else if (size == 4)
ppc_extsw (code, dreg, dreg);
#endif
}
return code;
}
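/*
 * Branch thunks: a PPC relative branch only reaches +/-32MB, so calls whose
 * target is out of range are redirected through a small thunk (load target,
 * mtctr, bcctr) allocated in the method's thunk area.
 */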
static void
emit_thunk (guint8 *code, gconstpointer target)
{
guint8 *p = code;
	/* 2 instructions on 32bit, 5 instructions on 64bit */
ppc_load_sequence (code, ppc_r0, target);
ppc_mtctr (code, ppc_r0);
ppc_bcctr (code, PPC_BR_ALWAYS, 0);
mono_arch_flush_icache (p, code - p);
}
static void
handle_thunk (MonoCompile *cfg, guchar *code, const guchar *target)
{
MonoJitInfo *ji = NULL;
MonoThunkJitInfo *info;
guint8 *thunks, *p;
int thunks_size;
guint8 *orig_target;
guint8 *target_thunk;
if (cfg) {
/*
		 * This can be called multiple times during JITting;
		 * save the current position in cfg->arch to avoid
		 * doing an O(n^2) search.
*/
if (!cfg->arch.thunks) {
cfg->arch.thunks = cfg->thunks;
cfg->arch.thunks_size = cfg->thunk_area;
}
thunks = cfg->arch.thunks;
thunks_size = cfg->arch.thunks_size;
if (!thunks_size) {
g_print ("thunk failed %p->%p, thunk space=%d method %s", code, target, thunks_size, mono_method_full_name (cfg->method, TRUE));
g_assert_not_reached ();
}
g_assert (*(guint32*)thunks == 0);
emit_thunk (thunks, target);
ppc_patch (code, thunks);
cfg->arch.thunks += THUNK_SIZE;
cfg->arch.thunks_size -= THUNK_SIZE;
} else {
ji = mini_jit_info_table_find (code);
g_assert (ji);
info = mono_jit_info_get_thunk_info (ji);
g_assert (info);
thunks = (guint8 *) ji->code_start + info->thunks_offset;
thunks_size = info->thunks_size;
orig_target = mono_arch_get_call_target (code + 4);
mono_mini_arch_lock ();
target_thunk = NULL;
if (orig_target >= thunks && orig_target < thunks + thunks_size) {
/* The call already points to a thunk, because of trampolines etc. */
target_thunk = orig_target;
} else {
for (p = thunks; p < thunks + thunks_size; p += THUNK_SIZE) {
if (((guint32 *) p) [0] == 0) {
/* Free entry */
target_thunk = p;
break;
} else {
/* ppc64 requires 5 instructions, 32bit two instructions */
#ifdef __mono_ppc64__
const int const_load_size = 5;
#else
const int const_load_size = 2;
#endif
guint32 load [const_load_size];
guchar *templ = (guchar *) load;
ppc_load_sequence (templ, ppc_r0, target);
				/* compare the whole load sequence, not just its first const_load_size bytes */
				if (!memcmp (p, load, sizeof (load))) {
/* Thunk already points to target */
target_thunk = p;
break;
}
}
}
}
// g_print ("THUNK: %p %p %p\n", code, target, target_thunk);
if (!target_thunk) {
mono_mini_arch_unlock ();
g_print ("thunk failed %p->%p, thunk space=%d method %s", code, target, thunks_size, cfg ? mono_method_full_name (cfg->method, TRUE) : mono_method_full_name (jinfo_get_method (ji), TRUE));
g_assert_not_reached ();
}
emit_thunk (target_thunk, target);
ppc_patch (code, target_thunk);
mono_mini_arch_unlock ();
}
}
static void
patch_ins (guint8 *code, guint32 ins)
{
*(guint32*)code = ins;
mono_arch_flush_icache (code, 4);
}
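/*
 * Patch the branch or load sequence at CODE to transfer to TARGET. Direct
 * branches (primary opcode 18) and conditional branches (16) are rewritten
 * in place when the displacement fits; otherwise we go through a thunk. For
 * indirect call sequences the register load of the target is rewritten.
 */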
static void
ppc_patch_full (MonoCompile *cfg, guchar *code, const guchar *target, gboolean is_fd)
{
guint32 ins = *(guint32*)code;
guint32 prim = ins >> 26;
guint32 ovf;
//g_print ("patching 0x%08x (0x%08x) to point to 0x%08x\n", code, ins, target);
if (prim == 18) {
// prefer relative branches, they are more position independent (e.g. for AOT compilation).
gint diff = target - code;
g_assert (!is_fd);
if (diff >= 0){
if (diff <= 33554431){
ins = (18 << 26) | (diff) | (ins & 1);
patch_ins (code, ins);
return;
}
} else {
/* diff between 0 and -33554432 */
if (diff >= -33554432){
ins = (18 << 26) | (diff & ~0xfc000000) | (ins & 1);
patch_ins (code, ins);
return;
}
}
if ((glong)target >= 0){
if ((glong)target <= 33554431){
ins = (18 << 26) | ((gulong) target) | (ins & 1) | 2;
patch_ins (code, ins);
return;
}
} else {
if ((glong)target >= -33554432){
ins = (18 << 26) | (((gulong)target) & ~0xfc000000) | (ins & 1) | 2;
patch_ins (code, ins);
return;
}
}
handle_thunk (cfg, code, target);
		return;
}
if (prim == 16) {
g_assert (!is_fd);
// absolute address
if (ins & 2) {
guint32 li = (gulong)target;
ins = (ins & 0xffff0000) | (ins & 3);
ovf = li & 0xffff0000;
if (ovf != 0 && ovf != 0xffff0000)
g_assert_not_reached ();
li &= 0xffff;
ins |= li;
// FIXME: assert the top bits of li are 0
} else {
gint diff = target - code;
ins = (ins & 0xffff0000) | (ins & 3);
ovf = diff & 0xffff0000;
if (ovf != 0 && ovf != 0xffff0000)
g_assert_not_reached ();
diff &= 0xffff;
ins |= diff;
}
patch_ins (code, ins);
return;
}
if (prim == 15 || ins == 0x4e800021 || ins == 0x4e800020 || ins == 0x4e800420) {
#ifdef __mono_ppc64__
guint32 *seq = (guint32*)code;
guint32 *branch_ins;
/* the trampoline code will try to patch the blrl, blr, bcctr */
if (ins == 0x4e800021 || ins == 0x4e800020 || ins == 0x4e800420) {
branch_ins = seq;
if (ppc_is_load_op (seq [-3]) || ppc_opcode (seq [-3]) == 31) /* ld || lwz || mr */
code -= 32;
else
code -= 24;
} else {
if (ppc_is_load_op (seq [5])
#ifdef PPC_USES_FUNCTION_DESCRIPTOR
/* With function descs we need to do more careful
matches. */
|| ppc_opcode (seq [5]) == 31 /* ld || lwz || mr */
#endif
)
branch_ins = seq + 8;
else
branch_ins = seq + 6;
}
seq = (guint32*)code;
/* this is the lis/ori/sldi/oris/ori/(ld/ld|mr/nop)/mtlr/blrl sequence */
g_assert (mono_ppc_is_direct_call_sequence (branch_ins));
if (ppc_is_load_op (seq [5])) {
g_assert (ppc_is_load_op (seq [6]));
if (!is_fd) {
guint8 *buf = (guint8*)&seq [5];
ppc_mr (buf, PPC_CALL_REG, ppc_r12);
ppc_nop (buf);
}
} else {
if (is_fd)
target = (const guchar*)mono_get_addr_from_ftnptr ((gpointer)target);
}
/* FIXME: make this thread safe */
#ifdef PPC_USES_FUNCTION_DESCRIPTOR
/* FIXME: we're assuming we're using r12 here */
ppc_load_ptr_sequence (code, ppc_r12, target);
#else
ppc_load_ptr_sequence (code, PPC_CALL_REG, target);
#endif
mono_arch_flush_icache ((guint8*)seq, 28);
#else
guint32 *seq;
/* the trampoline code will try to patch the blrl, blr, bcctr */
if (ins == 0x4e800021 || ins == 0x4e800020 || ins == 0x4e800420) {
code -= 12;
}
/* this is the lis/ori/mtlr/blrl sequence */
seq = (guint32*)code;
g_assert ((seq [0] >> 26) == 15);
g_assert ((seq [1] >> 26) == 24);
g_assert ((seq [2] >> 26) == 31);
g_assert (seq [3] == 0x4e800021 || seq [3] == 0x4e800020 || seq [3] == 0x4e800420);
/* FIXME: make this thread safe */
ppc_lis (code, PPC_CALL_REG, (guint32)(target) >> 16);
ppc_ori (code, PPC_CALL_REG, PPC_CALL_REG, (guint32)(target) & 0xffff);
mono_arch_flush_icache (code - 8, 8);
#endif
} else {
g_assert_not_reached ();
}
// g_print ("patched with 0x%08x\n", ins);
}
void
ppc_patch (guchar *code, const guchar *target)
{
ppc_patch_full (NULL, code, target, FALSE);
}
void
mono_ppc_patch (guchar *code, const guchar *target)
{
ppc_patch (code, target);
}
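
/*
 * Move a call's return value into ins->dreg.  Only the floating point call
 * opcodes need an explicit move here, from the ABI return register ppc_f1.
 */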
static guint8*
emit_move_return_value (MonoCompile *cfg, MonoInst *ins, guint8 *code)
{
switch (ins->opcode) {
case OP_FCALL:
case OP_FCALL_REG:
case OP_FCALL_MEMBASE:
if (ins->dreg != ppc_f1)
ppc_fmr (code, ins->dreg, ppc_f1);
break;
}
return code;
}
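
/*
 * Grow (emit_reserve_param_area) or shrink (emit_unreserve_param_area) the
 * frame by the aligned cfg->param_area, keeping the back chain valid: the
 * old back-chain word is reloaded from 0(r1) and re-stored with the
 * stack-pointer update.
 */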
static guint8*
emit_reserve_param_area (MonoCompile *cfg, guint8 *code)
{
long size = cfg->param_area;
size += MONO_ARCH_FRAME_ALIGNMENT - 1;
size &= -MONO_ARCH_FRAME_ALIGNMENT;
if (!size)
return code;
ppc_ldptr (code, ppc_r0, 0, ppc_sp);
if (ppc_is_imm16 (-size)) {
ppc_stptr_update (code, ppc_r0, -size, ppc_sp);
} else {
ppc_load (code, ppc_r12, -size);
ppc_stptr_update_indexed (code, ppc_r0, ppc_sp, ppc_r12);
}
return code;
}
static guint8*
emit_unreserve_param_area (MonoCompile *cfg, guint8 *code)
{
long size = cfg->param_area;
size += MONO_ARCH_FRAME_ALIGNMENT - 1;
size &= -MONO_ARCH_FRAME_ALIGNMENT;
if (!size)
return code;
ppc_ldptr (code, ppc_r0, 0, ppc_sp);
if (ppc_is_imm16 (size)) {
ppc_stptr_update (code, ppc_r0, size, ppc_sp);
} else {
ppc_load (code, ppc_r12, size);
ppc_stptr_update_indexed (code, ppc_r0, ppc_sp, ppc_r12);
}
return code;
}
#define MASK_SHIFT_IMM(i) ((i) & MONO_PPC_32_64_CASE (0x1f, 0x3f))
#ifndef DISABLE_JIT
void
mono_arch_output_basic_block (MonoCompile *cfg, MonoBasicBlock *bb)
{
MonoInst *ins, *next;
MonoCallInst *call;
guint8 *code = cfg->native_code + cfg->code_len;
MonoInst *last_ins = NULL;
int max_len, cpos;
int L;
/* we don't align basic blocks of loops on ppc */
if (cfg->verbose_level > 2)
g_print ("Basic block %d starting at offset 0x%x\n", bb->block_num, bb->native_offset);
cpos = bb->max_offset;
MONO_BB_FOR_EACH_INS (bb, ins) {
const guint offset = code - cfg->native_code;
set_code_cursor (cfg, code);
max_len = ins_get_size (ins->opcode);
code = realloc_code (cfg, max_len);
// if (ins->cil_code)
// g_print ("cil code\n");
mono_debug_record_line_number (cfg, ins, offset);
switch (normalize_opcode (ins->opcode)) {
case OP_RELAXED_NOP:
case OP_NOP:
case OP_DUMMY_USE:
case OP_DUMMY_ICONST:
case OP_DUMMY_I8CONST:
case OP_DUMMY_R8CONST:
case OP_DUMMY_R4CONST:
case OP_NOT_REACHED:
case OP_NOT_NULL:
break;
case OP_IL_SEQ_POINT:
mono_add_seq_point (cfg, bb, ins, code - cfg->native_code);
break;
case OP_SEQ_POINT: {
int i;
if (cfg->compile_aot)
NOT_IMPLEMENTED;
/*
* Read from the single stepping trigger page. This will cause a
* SIGSEGV when single stepping is enabled.
* We do this _before_ the breakpoint, so single stepping after
* a breakpoint is hit will step to the next IL offset.
*/
if (ins->flags & MONO_INST_SINGLE_STEP_LOC) {
ppc_load (code, ppc_r12, (gsize)ss_trigger_page);
ppc_ldptr (code, ppc_r12, 0, ppc_r12);
}
mono_add_seq_point (cfg, bb, ins, code - cfg->native_code);
/*
* A placeholder for a possible breakpoint inserted by
* mono_arch_set_breakpoint ().
*/
for (i = 0; i < BREAKPOINT_SIZE / 4; ++i)
ppc_nop (code);
break;
}
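		/*
		 * 32x32->64 multiply: mulhw/mulhwu computes the high word into r3
		 * and the low word is moved to r4, the register pair the front end
		 * expects for the 64-bit result.
		 */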
case OP_BIGMUL:
ppc_mullw (code, ppc_r0, ins->sreg1, ins->sreg2);
ppc_mulhw (code, ppc_r3, ins->sreg1, ins->sreg2);
ppc_mr (code, ppc_r4, ppc_r0);
break;
case OP_BIGMUL_UN:
ppc_mullw (code, ppc_r0, ins->sreg1, ins->sreg2);
ppc_mulhwu (code, ppc_r3, ins->sreg1, ins->sreg2);
ppc_mr (code, ppc_r4, ppc_r0);
break;
case OP_MEMORY_BARRIER:
ppc_sync (code);
break;
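		/*
		 * The *_MEMBASE stores and loads below share one addressing
		 * pattern: the 16-bit displacement form when the offset fits,
		 * addis plus a displacement for 32-bit offsets, and an indexed
		 * access with the offset materialized in r0 otherwise.
		 */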
case OP_STOREI1_MEMBASE_REG:
if (ppc_is_imm16 (ins->inst_offset)) {
ppc_stb (code, ins->sreg1, ins->inst_offset, ins->inst_destbasereg);
} else {
if (ppc_is_imm32 (ins->inst_offset)) {
ppc_addis (code, ppc_r11, ins->inst_destbasereg, ppc_ha(ins->inst_offset));
ppc_stb (code, ins->sreg1, ins->inst_offset, ppc_r11);
} else {
ppc_load (code, ppc_r0, ins->inst_offset);
ppc_stbx (code, ins->sreg1, ins->inst_destbasereg, ppc_r0);
}
}
break;
case OP_STOREI2_MEMBASE_REG:
if (ppc_is_imm16 (ins->inst_offset)) {
ppc_sth (code, ins->sreg1, ins->inst_offset, ins->inst_destbasereg);
} else {
if (ppc_is_imm32 (ins->inst_offset)) {
ppc_addis (code, ppc_r11, ins->inst_destbasereg, ppc_ha(ins->inst_offset));
ppc_sth (code, ins->sreg1, ins->inst_offset, ppc_r11);
} else {
ppc_load (code, ppc_r0, ins->inst_offset);
ppc_sthx (code, ins->sreg1, ins->inst_destbasereg, ppc_r0);
}
}
break;
case OP_STORE_MEMBASE_REG:
if (ppc_is_imm16 (ins->inst_offset)) {
ppc_stptr (code, ins->sreg1, ins->inst_offset, ins->inst_destbasereg);
} else {
if (ppc_is_imm32 (ins->inst_offset)) {
ppc_addis (code, ppc_r11, ins->inst_destbasereg, ppc_ha(ins->inst_offset));
ppc_stptr (code, ins->sreg1, ins->inst_offset, ppc_r11);
} else {
ppc_load (code, ppc_r0, ins->inst_offset);
ppc_stptr_indexed (code, ins->sreg1, ins->inst_destbasereg, ppc_r0);
}
}
break;
#ifdef MONO_ARCH_ILP32
case OP_STOREI8_MEMBASE_REG:
if (ppc_is_imm16 (ins->inst_offset)) {
ppc_str (code, ins->sreg1, ins->inst_offset, ins->inst_destbasereg);
} else {
ppc_load (code, ppc_r0, ins->inst_offset);
ppc_str_indexed (code, ins->sreg1, ins->inst_destbasereg, ppc_r0);
}
break;
#endif
case OP_STOREI1_MEMINDEX:
ppc_stbx (code, ins->sreg1, ins->inst_destbasereg, ins->sreg2);
break;
case OP_STOREI2_MEMINDEX:
ppc_sthx (code, ins->sreg1, ins->inst_destbasereg, ins->sreg2);
break;
case OP_STORE_MEMINDEX:
ppc_stptr_indexed (code, ins->sreg1, ins->inst_destbasereg, ins->sreg2);
break;
case OP_LOADU4_MEM:
g_assert_not_reached ();
break;
case OP_LOAD_MEMBASE:
if (ppc_is_imm16 (ins->inst_offset)) {
ppc_ldptr (code, ins->dreg, ins->inst_offset, ins->inst_basereg);
} else {
if (ppc_is_imm32 (ins->inst_offset) && (ins->dreg > 0)) {
ppc_addis (code, ins->dreg, ins->inst_basereg, ppc_ha(ins->inst_offset));
ppc_ldptr (code, ins->dreg, ins->inst_offset, ins->dreg);
} else {
ppc_load (code, ppc_r0, ins->inst_offset);
ppc_ldptr_indexed (code, ins->dreg, ins->inst_basereg, ppc_r0);
}
}
break;
case OP_LOADI4_MEMBASE:
#ifdef __mono_ppc64__
if (ppc_is_imm16 (ins->inst_offset)) {
ppc_lwa (code, ins->dreg, ins->inst_offset, ins->inst_basereg);
} else {
if (ppc_is_imm32 (ins->inst_offset) && (ins->dreg > 0)) {
ppc_addis (code, ins->dreg, ins->inst_basereg, ppc_ha(ins->inst_offset));
ppc_lwa (code, ins->dreg, ins->inst_offset, ins->dreg);
} else {
ppc_load (code, ppc_r0, ins->inst_offset);
ppc_lwax (code, ins->dreg, ins->inst_basereg, ppc_r0);
}
}
break;
#endif
case OP_LOADU4_MEMBASE:
if (ppc_is_imm16 (ins->inst_offset)) {
ppc_lwz (code, ins->dreg, ins->inst_offset, ins->inst_basereg);
} else {
if (ppc_is_imm32 (ins->inst_offset) && (ins->dreg > 0)) {
ppc_addis (code, ins->dreg, ins->inst_basereg, ppc_ha(ins->inst_offset));
ppc_lwz (code, ins->dreg, ins->inst_offset, ins->dreg);
} else {
ppc_load (code, ppc_r0, ins->inst_offset);
ppc_lwzx (code, ins->dreg, ins->inst_basereg, ppc_r0);
}
}
break;
case OP_LOADI1_MEMBASE:
case OP_LOADU1_MEMBASE:
if (ppc_is_imm16 (ins->inst_offset)) {
ppc_lbz (code, ins->dreg, ins->inst_offset, ins->inst_basereg);
} else {
if (ppc_is_imm32 (ins->inst_offset) && (ins->dreg > 0)) {
ppc_addis (code, ins->dreg, ins->inst_basereg, ppc_ha(ins->inst_offset));
ppc_lbz (code, ins->dreg, ins->inst_offset, ins->dreg);
} else {
ppc_load (code, ppc_r0, ins->inst_offset);
ppc_lbzx (code, ins->dreg, ins->inst_basereg, ppc_r0);
}
}
if (ins->opcode == OP_LOADI1_MEMBASE)
ppc_extsb (code, ins->dreg, ins->dreg);
break;
case OP_LOADU2_MEMBASE:
if (ppc_is_imm16 (ins->inst_offset)) {
ppc_lhz (code, ins->dreg, ins->inst_offset, ins->inst_basereg);
} else {
if (ppc_is_imm32 (ins->inst_offset) && (ins->dreg > 0)) {
ppc_addis (code, ins->dreg, ins->inst_basereg, ppc_ha(ins->inst_offset));
ppc_lhz (code, ins->dreg, ins->inst_offset, ins->dreg);
} else {
ppc_load (code, ppc_r0, ins->inst_offset);
ppc_lhzx (code, ins->dreg, ins->inst_basereg, ppc_r0);
}
}
break;
case OP_LOADI2_MEMBASE:
if (ppc_is_imm16 (ins->inst_offset)) {
ppc_lha (code, ins->dreg, ins->inst_offset, ins->inst_basereg);
} else {
if (ppc_is_imm32 (ins->inst_offset) && (ins->dreg > 0)) {
ppc_addis (code, ins->dreg, ins->inst_basereg, ppc_ha(ins->inst_offset));
ppc_lha (code, ins->dreg, ins->inst_offset, ins->dreg);
} else {
ppc_load (code, ppc_r0, ins->inst_offset);
ppc_lhax (code, ins->dreg, ins->inst_basereg, ppc_r0);
}
}
break;
#ifdef MONO_ARCH_ILP32
case OP_LOADI8_MEMBASE:
if (ppc_is_imm16 (ins->inst_offset)) {
ppc_ldr (code, ins->dreg, ins->inst_offset, ins->inst_basereg);
} else {
ppc_load (code, ppc_r0, ins->inst_offset);
ppc_ldr_indexed (code, ins->dreg, ins->inst_basereg, ppc_r0);
}
break;
#endif
case OP_LOAD_MEMINDEX:
ppc_ldptr_indexed (code, ins->dreg, ins->inst_basereg, ins->sreg2);
break;
case OP_LOADI4_MEMINDEX:
#ifdef __mono_ppc64__
ppc_lwax (code, ins->dreg, ins->inst_basereg, ins->sreg2);
break;
#endif
case OP_LOADU4_MEMINDEX:
ppc_lwzx (code, ins->dreg, ins->inst_basereg, ins->sreg2);
break;
case OP_LOADU2_MEMINDEX:
ppc_lhzx (code, ins->dreg, ins->inst_basereg, ins->sreg2);
break;
case OP_LOADI2_MEMINDEX:
ppc_lhax (code, ins->dreg, ins->inst_basereg, ins->sreg2);
break;
case OP_LOADU1_MEMINDEX:
ppc_lbzx (code, ins->dreg, ins->inst_basereg, ins->sreg2);
break;
case OP_LOADI1_MEMINDEX:
ppc_lbzx (code, ins->dreg, ins->inst_basereg, ins->sreg2);
ppc_extsb (code, ins->dreg, ins->dreg);
break;
case OP_ICONV_TO_I1:
CASE_PPC64 (OP_LCONV_TO_I1)
ppc_extsb (code, ins->dreg, ins->sreg1);
break;
case OP_ICONV_TO_I2:
CASE_PPC64 (OP_LCONV_TO_I2)
ppc_extsh (code, ins->dreg, ins->sreg1);
break;
case OP_ICONV_TO_U1:
CASE_PPC64 (OP_LCONV_TO_U1)
ppc_clrlwi (code, ins->dreg, ins->sreg1, 24);
break;
case OP_ICONV_TO_U2:
CASE_PPC64 (OP_LCONV_TO_U2)
ppc_clrlwi (code, ins->dreg, ins->sreg1, 16);
break;
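		/*
		 * For compares, L selects the word (L=0) or doubleword (L=1) form
		 * of cmp/cmpl; whether the signed or unsigned compare is emitted
		 * is decided by the opcode of the consuming instruction.
		 */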
case OP_COMPARE:
case OP_ICOMPARE:
CASE_PPC64 (OP_LCOMPARE)
L = (sizeof (target_mgreg_t) == 4 || ins->opcode == OP_ICOMPARE) ? 0 : 1;
next = ins->next;
if (next && compare_opcode_is_unsigned (next->opcode))
ppc_cmpl (code, 0, L, ins->sreg1, ins->sreg2);
else
ppc_cmp (code, 0, L, ins->sreg1, ins->sreg2);
break;
case OP_COMPARE_IMM:
case OP_ICOMPARE_IMM:
CASE_PPC64 (OP_LCOMPARE_IMM)
L = (sizeof (target_mgreg_t) == 4 || ins->opcode == OP_ICOMPARE_IMM) ? 0 : 1;
next = ins->next;
if (next && compare_opcode_is_unsigned (next->opcode)) {
if (ppc_is_uimm16 (ins->inst_imm)) {
ppc_cmpli (code, 0, L, ins->sreg1, (ins->inst_imm & 0xffff));
} else {
g_assert_not_reached ();
}
} else {
if (ppc_is_imm16 (ins->inst_imm)) {
ppc_cmpi (code, 0, L, ins->sreg1, (ins->inst_imm & 0xffff));
} else {
g_assert_not_reached ();
}
}
break;
case OP_BREAK:
			/*
			 * gdb does not like encountering a trap in the debugged code. So
			 * instead of emitting a trap, we emit a call to a C function and
			 * place a breakpoint there.
			 */
//ppc_break (code);
ppc_mr (code, ppc_r3, ins->sreg1);
mono_add_patch_info (cfg, code - cfg->native_code, MONO_PATCH_INFO_JIT_ICALL_ID, GUINT_TO_POINTER (MONO_JIT_ICALL_mono_break));
if ((FORCE_INDIR_CALL || cfg->method->dynamic) && !cfg->compile_aot) {
ppc_load_func (code, PPC_CALL_REG, 0);
ppc_mtlr (code, PPC_CALL_REG);
ppc_blrl (code);
} else {
ppc_bl (code, 0);
}
break;
case OP_ADDCC:
case OP_IADDCC:
ppc_addco (code, ins->dreg, ins->sreg1, ins->sreg2);
break;
case OP_IADD:
CASE_PPC64 (OP_LADD)
ppc_add (code, ins->dreg, ins->sreg1, ins->sreg2);
break;
case OP_ADC:
case OP_IADC:
ppc_adde (code, ins->dreg, ins->sreg1, ins->sreg2);
break;
case OP_ADDCC_IMM:
if (ppc_is_imm16 (ins->inst_imm)) {
ppc_addic (code, ins->dreg, ins->sreg1, ins->inst_imm);
} else {
g_assert_not_reached ();
}
break;
case OP_ADD_IMM:
case OP_IADD_IMM:
CASE_PPC64 (OP_LADD_IMM)
if (ppc_is_imm16 (ins->inst_imm)) {
ppc_addi (code, ins->dreg, ins->sreg1, ins->inst_imm);
} else {
g_assert_not_reached ();
}
break;
case OP_IADD_OVF:
/* check XER [0-3] (SO, OV, CA): we can't use mcrxr
*/
ppc_addo (code, ins->dreg, ins->sreg1, ins->sreg2);
ppc_mfspr (code, ppc_r0, ppc_xer);
ppc_andisd (code, ppc_r0, ppc_r0, (1<<14));
EMIT_COND_SYSTEM_EXCEPTION_FLAGS (PPC_BR_FALSE, PPC_BR_EQ, "OverflowException");
break;
case OP_IADD_OVF_UN:
/* check XER [0-3] (SO, OV, CA): we can't use mcrxr
*/
ppc_addco (code, ins->dreg, ins->sreg1, ins->sreg2);
ppc_mfspr (code, ppc_r0, ppc_xer);
ppc_andisd (code, ppc_r0, ppc_r0, (1<<13));
EMIT_COND_SYSTEM_EXCEPTION_FLAGS (PPC_BR_FALSE, PPC_BR_EQ, "OverflowException");
break;
case OP_ISUB_OVF:
CASE_PPC64 (OP_LSUB_OVF)
/* check XER [0-3] (SO, OV, CA): we can't use mcrxr
*/
ppc_subfo (code, ins->dreg, ins->sreg2, ins->sreg1);
ppc_mfspr (code, ppc_r0, ppc_xer);
ppc_andisd (code, ppc_r0, ppc_r0, (1<<14));
EMIT_COND_SYSTEM_EXCEPTION_FLAGS (PPC_BR_FALSE, PPC_BR_EQ, "OverflowException");
break;
case OP_ISUB_OVF_UN:
CASE_PPC64 (OP_LSUB_OVF_UN)
/* check XER [0-3] (SO, OV, CA): we can't use mcrxr
*/
ppc_subfc (code, ins->dreg, ins->sreg2, ins->sreg1);
ppc_mfspr (code, ppc_r0, ppc_xer);
ppc_andisd (code, ppc_r0, ppc_r0, (1<<13));
EMIT_COND_SYSTEM_EXCEPTION_FLAGS (PPC_BR_TRUE, PPC_BR_EQ, "OverflowException");
break;
case OP_ADD_OVF_CARRY:
/* check XER [0-3] (SO, OV, CA): we can't use mcrxr
*/
ppc_addeo (code, ins->dreg, ins->sreg1, ins->sreg2);
ppc_mfspr (code, ppc_r0, ppc_xer);
ppc_andisd (code, ppc_r0, ppc_r0, (1<<14));
EMIT_COND_SYSTEM_EXCEPTION_FLAGS (PPC_BR_FALSE, PPC_BR_EQ, "OverflowException");
break;
case OP_ADD_OVF_UN_CARRY:
/* check XER [0-3] (SO, OV, CA): we can't use mcrxr
*/
ppc_addeo (code, ins->dreg, ins->sreg1, ins->sreg2);
ppc_mfspr (code, ppc_r0, ppc_xer);
ppc_andisd (code, ppc_r0, ppc_r0, (1<<13));
EMIT_COND_SYSTEM_EXCEPTION_FLAGS (PPC_BR_FALSE, PPC_BR_EQ, "OverflowException");
break;
case OP_SUB_OVF_CARRY:
/* check XER [0-3] (SO, OV, CA): we can't use mcrxr
*/
ppc_subfeo (code, ins->dreg, ins->sreg2, ins->sreg1);
ppc_mfspr (code, ppc_r0, ppc_xer);
ppc_andisd (code, ppc_r0, ppc_r0, (1<<14));
EMIT_COND_SYSTEM_EXCEPTION_FLAGS (PPC_BR_FALSE, PPC_BR_EQ, "OverflowException");
break;
case OP_SUB_OVF_UN_CARRY:
/* check XER [0-3] (SO, OV, CA): we can't use mcrxr
*/
ppc_subfeo (code, ins->dreg, ins->sreg2, ins->sreg1);
ppc_mfspr (code, ppc_r0, ppc_xer);
ppc_andisd (code, ppc_r0, ppc_r0, (1<<13));
EMIT_COND_SYSTEM_EXCEPTION_FLAGS (PPC_BR_TRUE, PPC_BR_EQ, "OverflowException");
break;
case OP_SUBCC:
case OP_ISUBCC:
ppc_subfco (code, ins->dreg, ins->sreg2, ins->sreg1);
break;
case OP_ISUB:
CASE_PPC64 (OP_LSUB)
ppc_subf (code, ins->dreg, ins->sreg2, ins->sreg1);
break;
case OP_SBB:
case OP_ISBB:
ppc_subfe (code, ins->dreg, ins->sreg2, ins->sreg1);
break;
case OP_SUB_IMM:
case OP_ISUB_IMM:
CASE_PPC64 (OP_LSUB_IMM)
// we add the negated value
if (ppc_is_imm16 (-ins->inst_imm))
ppc_addi (code, ins->dreg, ins->sreg1, -ins->inst_imm);
else {
g_assert_not_reached ();
}
break;
case OP_PPC_SUBFIC:
g_assert (ppc_is_imm16 (ins->inst_imm));
ppc_subfic (code, ins->dreg, ins->sreg1, ins->inst_imm);
break;
case OP_PPC_SUBFZE:
ppc_subfze (code, ins->dreg, ins->sreg1);
break;
case OP_IAND:
CASE_PPC64 (OP_LAND)
			/* FIXME: the ppc macros are inconsistent here: put dest as the first arg! */
ppc_and (code, ins->sreg1, ins->dreg, ins->sreg2);
break;
case OP_AND_IMM:
case OP_IAND_IMM:
CASE_PPC64 (OP_LAND_IMM)
if (!(ins->inst_imm & 0xffff0000)) {
ppc_andid (code, ins->sreg1, ins->dreg, ins->inst_imm);
} else if (!(ins->inst_imm & 0xffff)) {
ppc_andisd (code, ins->sreg1, ins->dreg, ((guint32)ins->inst_imm >> 16));
} else {
g_assert_not_reached ();
}
break;
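		/*
		 * Signed division has one overflowing input pair (INT_MIN / -1),
		 * checked explicitly below; after the divide, the XER OV bit set
		 * by divwod/divdod signals division by zero.
		 */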
case OP_IDIV:
CASE_PPC64 (OP_LDIV) {
guint8 *divisor_is_m1;
/* XER format: SO, OV, CA, reserved [21 bits], count [8 bits]
*/
ppc_compare_reg_imm (code, 0, ins->sreg2, -1);
divisor_is_m1 = code;
ppc_bc (code, PPC_BR_FALSE | PPC_BR_LIKELY, PPC_BR_EQ, 0);
ppc_lis (code, ppc_r0, 0x8000);
#ifdef __mono_ppc64__
if (ins->opcode == OP_LDIV)
ppc_sldi (code, ppc_r0, ppc_r0, 32);
#endif
ppc_compare (code, 0, ins->sreg1, ppc_r0);
EMIT_COND_SYSTEM_EXCEPTION_FLAGS (PPC_BR_TRUE, PPC_BR_EQ, "OverflowException");
ppc_patch (divisor_is_m1, code);
/* XER format: SO, OV, CA, reserved [21 bits], count [8 bits]
*/
if (ins->opcode == OP_IDIV)
ppc_divwod (code, ins->dreg, ins->sreg1, ins->sreg2);
#ifdef __mono_ppc64__
else
ppc_divdod (code, ins->dreg, ins->sreg1, ins->sreg2);
#endif
ppc_mfspr (code, ppc_r0, ppc_xer);
ppc_andisd (code, ppc_r0, ppc_r0, (1<<14));
EMIT_COND_SYSTEM_EXCEPTION_FLAGS (PPC_BR_FALSE, PPC_BR_EQ, "DivideByZeroException");
break;
}
case OP_IDIV_UN:
CASE_PPC64 (OP_LDIV_UN)
if (ins->opcode == OP_IDIV_UN)
ppc_divwuod (code, ins->dreg, ins->sreg1, ins->sreg2);
#ifdef __mono_ppc64__
else
ppc_divduod (code, ins->dreg, ins->sreg1, ins->sreg2);
#endif
ppc_mfspr (code, ppc_r0, ppc_xer);
ppc_andisd (code, ppc_r0, ppc_r0, (1<<14));
EMIT_COND_SYSTEM_EXCEPTION_FLAGS (PPC_BR_FALSE, PPC_BR_EQ, "DivideByZeroException");
break;
case OP_DIV_IMM:
case OP_IREM:
case OP_IREM_UN:
case OP_REM_IMM:
g_assert_not_reached ();
case OP_IOR:
CASE_PPC64 (OP_LOR)
ppc_or (code, ins->dreg, ins->sreg1, ins->sreg2);
break;
case OP_OR_IMM:
case OP_IOR_IMM:
CASE_PPC64 (OP_LOR_IMM)
if (!(ins->inst_imm & 0xffff0000)) {
ppc_ori (code, ins->sreg1, ins->dreg, ins->inst_imm);
} else if (!(ins->inst_imm & 0xffff)) {
ppc_oris (code, ins->dreg, ins->sreg1, ((guint32)(ins->inst_imm) >> 16));
} else {
g_assert_not_reached ();
}
break;
case OP_IXOR:
CASE_PPC64 (OP_LXOR)
ppc_xor (code, ins->dreg, ins->sreg1, ins->sreg2);
break;
case OP_IXOR_IMM:
case OP_XOR_IMM:
CASE_PPC64 (OP_LXOR_IMM)
if (!(ins->inst_imm & 0xffff0000)) {
ppc_xori (code, ins->sreg1, ins->dreg, ins->inst_imm);
} else if (!(ins->inst_imm & 0xffff)) {
ppc_xoris (code, ins->sreg1, ins->dreg, ((guint32)(ins->inst_imm) >> 16));
} else {
g_assert_not_reached ();
}
break;
case OP_ISHL:
CASE_PPC64 (OP_LSHL)
ppc_shift_left (code, ins->dreg, ins->sreg1, ins->sreg2);
break;
case OP_SHL_IMM:
case OP_ISHL_IMM:
CASE_PPC64 (OP_LSHL_IMM)
ppc_shift_left_imm (code, ins->dreg, ins->sreg1, MASK_SHIFT_IMM (ins->inst_imm));
break;
case OP_ISHR:
ppc_sraw (code, ins->dreg, ins->sreg1, ins->sreg2);
break;
case OP_SHR_IMM:
ppc_shift_right_arith_imm (code, ins->dreg, ins->sreg1, MASK_SHIFT_IMM (ins->inst_imm));
break;
case OP_SHR_UN_IMM:
if (MASK_SHIFT_IMM (ins->inst_imm))
ppc_shift_right_imm (code, ins->dreg, ins->sreg1, MASK_SHIFT_IMM (ins->inst_imm));
else
ppc_mr (code, ins->dreg, ins->sreg1);
break;
case OP_ISHR_UN:
ppc_srw (code, ins->dreg, ins->sreg1, ins->sreg2);
break;
case OP_INOT:
CASE_PPC64 (OP_LNOT)
ppc_not (code, ins->dreg, ins->sreg1);
break;
case OP_INEG:
CASE_PPC64 (OP_LNEG)
ppc_neg (code, ins->dreg, ins->sreg1);
break;
case OP_IMUL:
CASE_PPC64 (OP_LMUL)
ppc_multiply (code, ins->dreg, ins->sreg1, ins->sreg2);
break;
case OP_IMUL_IMM:
case OP_MUL_IMM:
CASE_PPC64 (OP_LMUL_IMM)
if (ppc_is_imm16 (ins->inst_imm)) {
ppc_mulli (code, ins->dreg, ins->sreg1, ins->inst_imm);
} else {
g_assert_not_reached ();
}
break;
case OP_IMUL_OVF:
CASE_PPC64 (OP_LMUL_OVF)
			/* we cannot use mcrxr, since it's not implemented on some processors
			 * XER format: SO, OV, CA, reserved [21 bits], count [8 bits]
			 */
if (ins->opcode == OP_IMUL_OVF)
ppc_mullwo (code, ins->dreg, ins->sreg1, ins->sreg2);
#ifdef __mono_ppc64__
else
ppc_mulldo (code, ins->dreg, ins->sreg1, ins->sreg2);
#endif
ppc_mfspr (code, ppc_r0, ppc_xer);
ppc_andisd (code, ppc_r0, ppc_r0, (1<<14));
EMIT_COND_SYSTEM_EXCEPTION_FLAGS (PPC_BR_FALSE, PPC_BR_EQ, "OverflowException");
break;
case OP_IMUL_OVF_UN:
CASE_PPC64 (OP_LMUL_OVF_UN)
			/* we first multiply to get the high word and compare it to 0
			 * to set the flags, then that result is discarded and we
			 * multiply again to get the lower bits of the result
			 */
if (ins->opcode == OP_IMUL_OVF_UN)
ppc_mulhwu (code, ppc_r0, ins->sreg1, ins->sreg2);
#ifdef __mono_ppc64__
else
ppc_mulhdu (code, ppc_r0, ins->sreg1, ins->sreg2);
#endif
ppc_cmpi (code, 0, 0, ppc_r0, 0);
EMIT_COND_SYSTEM_EXCEPTION (CEE_BNE_UN - CEE_BEQ, "OverflowException");
ppc_multiply (code, ins->dreg, ins->sreg1, ins->sreg2);
break;
case OP_ICONST:
ppc_load (code, ins->dreg, ins->inst_c0);
break;
case OP_I8CONST: {
ppc_load (code, ins->dreg, ins->inst_l);
break;
}
case OP_LOAD_GOTADDR:
/* The PLT implementation depends on this */
g_assert (ins->dreg == ppc_r30);
code = mono_arch_emit_load_got_addr (cfg->native_code, code, cfg, NULL);
break;
case OP_GOT_ENTRY:
// FIXME: Fix max instruction length
/* XXX: This is hairy; we're casting a pointer from a union to an enum... */
mono_add_patch_info (cfg, offset, (MonoJumpInfoType)(intptr_t)ins->inst_right->inst_i1, ins->inst_right->inst_p0);
/* arch_emit_got_access () patches this */
ppc_load32 (code, ppc_r0, 0);
ppc_ldptr_indexed (code, ins->dreg, ins->inst_basereg, ppc_r0);
break;
case OP_AOTCONST:
mono_add_patch_info (cfg, offset, (MonoJumpInfoType)(intptr_t)ins->inst_i1, ins->inst_p0);
ppc_load_sequence (code, ins->dreg, 0);
break;
CASE_PPC32 (OP_ICONV_TO_I4)
CASE_PPC32 (OP_ICONV_TO_U4)
case OP_MOVE:
if (ins->dreg != ins->sreg1)
ppc_mr (code, ins->dreg, ins->sreg1);
break;
case OP_SETLRET: {
int saved = ins->sreg1;
if (ins->sreg1 == ppc_r3) {
ppc_mr (code, ppc_r0, ins->sreg1);
saved = ppc_r0;
}
if (ins->sreg2 != ppc_r3)
ppc_mr (code, ppc_r3, ins->sreg2);
if (saved != ppc_r4)
ppc_mr (code, ppc_r4, saved);
break;
}
case OP_FMOVE:
if (ins->dreg != ins->sreg1)
ppc_fmr (code, ins->dreg, ins->sreg1);
break;
case OP_MOVE_F_TO_I4:
ppc_stfs (code, ins->sreg1, -4, ppc_r1);
ppc_ldptr (code, ins->dreg, -4, ppc_r1);
break;
case OP_MOVE_I4_TO_F:
ppc_stw (code, ins->sreg1, -4, ppc_r1);
ppc_lfs (code, ins->dreg, -4, ppc_r1);
break;
#ifdef __mono_ppc64__
case OP_MOVE_F_TO_I8:
ppc_stfd (code, ins->sreg1, -8, ppc_r1);
ppc_ldptr (code, ins->dreg, -8, ppc_r1);
break;
case OP_MOVE_I8_TO_F:
ppc_stptr (code, ins->sreg1, -8, ppc_r1);
ppc_lfd (code, ins->dreg, -8, ppc_r1);
break;
#endif
case OP_FCONV_TO_R4:
ppc_frsp (code, ins->dreg, ins->sreg1);
break;
case OP_TAILCALL_PARAMETER:
			// This opcode helps compute the sizes of the subsequent
			// OP_TAILCALL, but contributes no code itself.
g_assert (ins->next);
break;
case OP_TAILCALL: {
int i, pos;
MonoCallInst *call = (MonoCallInst*)ins;
/*
* Keep in sync with mono_arch_emit_epilog
*/
g_assert (!cfg->method->save_lmf);
/*
* Note: we can use ppc_r12 here because it is dead anyway:
* we're leaving the method.
*/
if (1 || cfg->flags & MONO_CFG_HAS_CALLS) {
long ret_offset = cfg->stack_usage + PPC_RET_ADDR_OFFSET;
if (ppc_is_imm16 (ret_offset)) {
ppc_ldptr (code, ppc_r0, ret_offset, cfg->frame_reg);
} else {
ppc_load (code, ppc_r12, ret_offset);
ppc_ldptr_indexed (code, ppc_r0, cfg->frame_reg, ppc_r12);
}
ppc_mtlr (code, ppc_r0);
}
if (ppc_is_imm16 (cfg->stack_usage)) {
ppc_addi (code, ppc_r12, cfg->frame_reg, cfg->stack_usage);
} else {
/* cfg->stack_usage is an int, so we can use
* an addis/addi sequence here even in 64-bit. */
ppc_addis (code, ppc_r12, cfg->frame_reg, ppc_ha(cfg->stack_usage));
ppc_addi (code, ppc_r12, ppc_r12, cfg->stack_usage);
}
if (!cfg->method->save_lmf) {
pos = 0;
for (i = 31; i >= 13; --i) {
if (cfg->used_int_regs & (1 << i)) {
pos += sizeof (target_mgreg_t);
ppc_ldptr (code, i, -pos, ppc_r12);
}
}
} else {
/* FIXME restore from MonoLMF: though this can't happen yet */
}
/* Copy arguments on the stack to our argument area */
if (call->stack_usage) {
code = emit_memcpy (code, call->stack_usage, ppc_r12, PPC_STACK_PARAM_OFFSET, ppc_sp, PPC_STACK_PARAM_OFFSET);
/* r12 was clobbered */
g_assert (cfg->frame_reg == ppc_sp);
if (ppc_is_imm16 (cfg->stack_usage)) {
ppc_addi (code, ppc_r12, cfg->frame_reg, cfg->stack_usage);
} else {
/* cfg->stack_usage is an int, so we can use
* an addis/addi sequence here even in 64-bit. */
ppc_addis (code, ppc_r12, cfg->frame_reg, ppc_ha(cfg->stack_usage));
ppc_addi (code, ppc_r12, ppc_r12, cfg->stack_usage);
}
}
ppc_mr (code, ppc_sp, ppc_r12);
mono_add_patch_info (cfg, (guint8*) code - cfg->native_code, MONO_PATCH_INFO_METHOD_JUMP, call->method);
cfg->thunk_area += THUNK_SIZE;
if (cfg->compile_aot) {
/* arch_emit_got_access () patches this */
ppc_load32 (code, ppc_r0, 0);
#ifdef PPC_USES_FUNCTION_DESCRIPTOR
ppc_ldptr_indexed (code, ppc_r12, ppc_r30, ppc_r0);
ppc_ldptr (code, ppc_r0, 0, ppc_r12);
#else
ppc_ldptr_indexed (code, ppc_r0, ppc_r30, ppc_r0);
#endif
ppc_mtctr (code, ppc_r0);
ppc_bcctr (code, PPC_BR_ALWAYS, 0);
} else {
ppc_b (code, 0);
}
break;
}
case OP_CHECK_THIS:
/* ensure ins->sreg1 is not NULL */
ppc_ldptr (code, ppc_r0, 0, ins->sreg1);
break;
case OP_ARGLIST: {
long cookie_offset = cfg->sig_cookie + cfg->stack_usage;
if (ppc_is_imm16 (cookie_offset)) {
ppc_addi (code, ppc_r0, cfg->frame_reg, cookie_offset);
} else {
ppc_load (code, ppc_r0, cookie_offset);
ppc_add (code, ppc_r0, cfg->frame_reg, ppc_r0);
}
ppc_stptr (code, ppc_r0, 0, ins->sreg1);
break;
}
case OP_FCALL:
case OP_LCALL:
case OP_VCALL:
case OP_VCALL2:
case OP_VOIDCALL:
case OP_CALL:
call = (MonoCallInst*)ins;
mono_call_add_patch_info (cfg, call, offset);
if ((FORCE_INDIR_CALL || cfg->method->dynamic) && !cfg->compile_aot) {
ppc_load_func (code, PPC_CALL_REG, 0);
ppc_mtlr (code, PPC_CALL_REG);
ppc_blrl (code);
} else {
ppc_bl (code, 0);
}
/* FIXME: this should be handled somewhere else in the new jit */
code = emit_move_return_value (cfg, ins, code);
break;
case OP_FCALL_REG:
case OP_LCALL_REG:
case OP_VCALL_REG:
case OP_VCALL2_REG:
case OP_VOIDCALL_REG:
case OP_CALL_REG:
#ifdef PPC_USES_FUNCTION_DESCRIPTOR
ppc_ldptr (code, ppc_r0, 0, ins->sreg1);
/* FIXME: if we know that this is a method, we
can omit this load */
ppc_ldptr (code, ppc_r2, 8, ins->sreg1);
ppc_mtlr (code, ppc_r0);
#else
#if (_CALL_ELF == 2)
if (ins->flags & MONO_INST_HAS_METHOD) {
// Not a global entry point
} else {
// Need to set up r12 with function entry address for global entry point
if (ppc_r12 != ins->sreg1) {
ppc_mr(code,ppc_r12,ins->sreg1);
}
}
#endif
ppc_mtlr (code, ins->sreg1);
#endif
ppc_blrl (code);
/* FIXME: this should be handled somewhere else in the new jit */
code = emit_move_return_value (cfg, ins, code);
break;
case OP_FCALL_MEMBASE:
case OP_LCALL_MEMBASE:
case OP_VCALL_MEMBASE:
case OP_VCALL2_MEMBASE:
case OP_VOIDCALL_MEMBASE:
case OP_CALL_MEMBASE:
if (cfg->compile_aot && ins->sreg1 == ppc_r12) {
/* The trampolines clobber this */
ppc_mr (code, ppc_r29, ins->sreg1);
ppc_ldptr (code, ppc_r0, ins->inst_offset, ppc_r29);
} else {
ppc_ldptr (code, ppc_r0, ins->inst_offset, ins->sreg1);
}
ppc_mtlr (code, ppc_r0);
ppc_blrl (code);
/* FIXME: this should be handled somewhere else in the new jit */
code = emit_move_return_value (cfg, ins, code);
break;
case OP_LOCALLOC: {
guint8 * zero_loop_jump, * zero_loop_start;
/* keep alignment */
int alloca_waste = PPC_STACK_PARAM_OFFSET + cfg->param_area + 31;
int area_offset = alloca_waste;
area_offset &= ~31;
ppc_addi (code, ppc_r12, ins->sreg1, alloca_waste + 31);
/* FIXME: should be calculated from MONO_ARCH_FRAME_ALIGNMENT */
ppc_clear_right_imm (code, ppc_r12, ppc_r12, 4);
/* use ctr to store the number of words to 0 if needed */
if (ins->flags & MONO_INST_INIT) {
/* we zero 4 bytes at a time:
* we add 7 instead of 3 so that we set the counter to
* at least 1, otherwise the bdnz instruction will make
* it negative and iterate billions of times.
*/
ppc_addi (code, ppc_r0, ins->sreg1, 7);
ppc_shift_right_arith_imm (code, ppc_r0, ppc_r0, 2);
ppc_mtctr (code, ppc_r0);
}
ppc_ldptr (code, ppc_r0, 0, ppc_sp);
ppc_neg (code, ppc_r12, ppc_r12);
ppc_stptr_update_indexed (code, ppc_r0, ppc_sp, ppc_r12);
/* FIXME: make this loop work in 8 byte
increments on PPC64 */
if (ins->flags & MONO_INST_INIT) {
/* adjust the dest reg by -4 so we can use stwu */
/* we actually adjust -8 because we let the loop
* run at least once
*/
ppc_addi (code, ins->dreg, ppc_sp, (area_offset - 8));
ppc_li (code, ppc_r12, 0);
zero_loop_start = code;
ppc_stwu (code, ppc_r12, 4, ins->dreg);
zero_loop_jump = code;
ppc_bc (code, PPC_BR_DEC_CTR_NONZERO, 0, 0);
ppc_patch (zero_loop_jump, zero_loop_start);
}
ppc_addi (code, ins->dreg, ppc_sp, area_offset);
break;
}
case OP_THROW: {
//ppc_break (code);
ppc_mr (code, ppc_r3, ins->sreg1);
mono_add_patch_info (cfg, code - cfg->native_code, MONO_PATCH_INFO_JIT_ICALL_ID, GUINT_TO_POINTER (MONO_JIT_ICALL_mono_arch_throw_exception));
if ((FORCE_INDIR_CALL || cfg->method->dynamic) && !cfg->compile_aot) {
ppc_load_func (code, PPC_CALL_REG, 0);
ppc_mtlr (code, PPC_CALL_REG);
ppc_blrl (code);
} else {
ppc_bl (code, 0);
}
break;
}
case OP_RETHROW: {
//ppc_break (code);
ppc_mr (code, ppc_r3, ins->sreg1);
mono_add_patch_info (cfg, code - cfg->native_code, MONO_PATCH_INFO_JIT_ICALL_ID,
GUINT_TO_POINTER (MONO_JIT_ICALL_mono_arch_rethrow_exception));
if ((FORCE_INDIR_CALL || cfg->method->dynamic) && !cfg->compile_aot) {
ppc_load_func (code, PPC_CALL_REG, 0);
ppc_mtlr (code, PPC_CALL_REG);
ppc_blrl (code);
} else {
ppc_bl (code, 0);
}
break;
}
case OP_START_HANDLER: {
MonoInst *spvar = mono_find_spvar_for_region (cfg, bb->region);
g_assert (spvar->inst_basereg != ppc_sp);
code = emit_reserve_param_area (cfg, code);
ppc_mflr (code, ppc_r0);
if (ppc_is_imm16 (spvar->inst_offset)) {
ppc_stptr (code, ppc_r0, spvar->inst_offset, spvar->inst_basereg);
} else {
ppc_load (code, ppc_r12, spvar->inst_offset);
ppc_stptr_indexed (code, ppc_r0, ppc_r12, spvar->inst_basereg);
}
break;
}
case OP_ENDFILTER: {
MonoInst *spvar = mono_find_spvar_for_region (cfg, bb->region);
g_assert (spvar->inst_basereg != ppc_sp);
code = emit_unreserve_param_area (cfg, code);
if (ins->sreg1 != ppc_r3)
ppc_mr (code, ppc_r3, ins->sreg1);
if (ppc_is_imm16 (spvar->inst_offset)) {
ppc_ldptr (code, ppc_r0, spvar->inst_offset, spvar->inst_basereg);
} else {
ppc_load (code, ppc_r12, spvar->inst_offset);
ppc_ldptr_indexed (code, ppc_r0, spvar->inst_basereg, ppc_r12);
}
ppc_mtlr (code, ppc_r0);
ppc_blr (code);
break;
}
case OP_ENDFINALLY: {
MonoInst *spvar = mono_find_spvar_for_region (cfg, bb->region);
g_assert (spvar->inst_basereg != ppc_sp);
code = emit_unreserve_param_area (cfg, code);
ppc_ldptr (code, ppc_r0, spvar->inst_offset, spvar->inst_basereg);
ppc_mtlr (code, ppc_r0);
ppc_blr (code);
break;
}
case OP_CALL_HANDLER:
mono_add_patch_info (cfg, code - cfg->native_code, MONO_PATCH_INFO_BB, ins->inst_target_bb);
ppc_bl (code, 0);
for (GList *tmp = ins->inst_eh_blocks; tmp != bb->clause_holes; tmp = tmp->prev)
mono_cfg_add_try_hole (cfg, ((MonoLeaveClause *) tmp->data)->clause, code, bb);
break;
case OP_LABEL:
ins->inst_c0 = code - cfg->native_code;
break;
case OP_BR:
			mono_add_patch_info (cfg, offset, MONO_PATCH_INFO_BB, ins->inst_target_bb);
			ppc_b (code, 0);
break;
case OP_BR_REG:
ppc_mtctr (code, ins->sreg1);
ppc_bcctr (code, PPC_BR_ALWAYS, 0);
break;
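		/*
		 * The compare-and-set opcodes below materialize a boolean with a
		 * load/conditional-skip/load triple: load one constant, then
		 * conditionally branch over the instruction loading the other.
		 */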
case OP_ICNEQ:
ppc_li (code, ins->dreg, 0);
ppc_bc (code, PPC_BR_TRUE, PPC_BR_EQ, 2);
ppc_li (code, ins->dreg, 1);
break;
case OP_CEQ:
case OP_ICEQ:
CASE_PPC64 (OP_LCEQ)
ppc_li (code, ins->dreg, 0);
ppc_bc (code, PPC_BR_FALSE, PPC_BR_EQ, 2);
ppc_li (code, ins->dreg, 1);
break;
case OP_CLT:
case OP_CLT_UN:
case OP_ICLT:
case OP_ICLT_UN:
CASE_PPC64 (OP_LCLT)
CASE_PPC64 (OP_LCLT_UN)
ppc_li (code, ins->dreg, 1);
ppc_bc (code, PPC_BR_TRUE, PPC_BR_LT, 2);
ppc_li (code, ins->dreg, 0);
break;
case OP_ICGE:
case OP_ICGE_UN:
ppc_li (code, ins->dreg, 1);
ppc_bc (code, PPC_BR_FALSE, PPC_BR_LT, 2);
ppc_li (code, ins->dreg, 0);
break;
case OP_CGT:
case OP_CGT_UN:
case OP_ICGT:
case OP_ICGT_UN:
CASE_PPC64 (OP_LCGT)
CASE_PPC64 (OP_LCGT_UN)
ppc_li (code, ins->dreg, 1);
ppc_bc (code, PPC_BR_TRUE, PPC_BR_GT, 2);
ppc_li (code, ins->dreg, 0);
break;
case OP_ICLE:
case OP_ICLE_UN:
ppc_li (code, ins->dreg, 1);
ppc_bc (code, PPC_BR_FALSE, PPC_BR_GT, 2);
ppc_li (code, ins->dreg, 0);
break;
case OP_COND_EXC_EQ:
case OP_COND_EXC_NE_UN:
case OP_COND_EXC_LT:
case OP_COND_EXC_LT_UN:
case OP_COND_EXC_GT:
case OP_COND_EXC_GT_UN:
case OP_COND_EXC_GE:
case OP_COND_EXC_GE_UN:
case OP_COND_EXC_LE:
case OP_COND_EXC_LE_UN:
EMIT_COND_SYSTEM_EXCEPTION (ins->opcode - OP_COND_EXC_EQ, (const char*)ins->inst_p1);
break;
case OP_COND_EXC_IEQ:
case OP_COND_EXC_INE_UN:
case OP_COND_EXC_ILT:
case OP_COND_EXC_ILT_UN:
case OP_COND_EXC_IGT:
case OP_COND_EXC_IGT_UN:
case OP_COND_EXC_IGE:
case OP_COND_EXC_IGE_UN:
case OP_COND_EXC_ILE:
case OP_COND_EXC_ILE_UN:
EMIT_COND_SYSTEM_EXCEPTION (ins->opcode - OP_COND_EXC_IEQ, (const char*)ins->inst_p1);
break;
case OP_IBEQ:
case OP_IBNE_UN:
case OP_IBLT:
case OP_IBLT_UN:
case OP_IBGT:
case OP_IBGT_UN:
case OP_IBGE:
case OP_IBGE_UN:
case OP_IBLE:
case OP_IBLE_UN:
EMIT_COND_BRANCH (ins, ins->opcode - OP_IBEQ);
break;
/* floating point opcodes */
case OP_R8CONST:
			g_assert (!cfg->compile_aot);
/* FIXME: Optimize this */
ppc_bl (code, 1);
ppc_mflr (code, ppc_r12);
ppc_b (code, 3);
*(double*)code = *(double*)ins->inst_p0;
code += 8;
ppc_lfd (code, ins->dreg, 8, ppc_r12);
break;
case OP_R4CONST:
g_assert_not_reached ();
break;
case OP_STORER8_MEMBASE_REG:
if (ppc_is_imm16 (ins->inst_offset)) {
ppc_stfd (code, ins->sreg1, ins->inst_offset, ins->inst_destbasereg);
} else {
if (ppc_is_imm32 (ins->inst_offset)) {
ppc_addis (code, ppc_r11, ins->inst_destbasereg, ppc_ha(ins->inst_offset));
ppc_stfd (code, ins->sreg1, ins->inst_offset, ppc_r11);
} else {
ppc_load (code, ppc_r0, ins->inst_offset);
ppc_stfdx (code, ins->sreg1, ins->inst_destbasereg, ppc_r0);
}
}
break;
case OP_LOADR8_MEMBASE:
if (ppc_is_imm16 (ins->inst_offset)) {
ppc_lfd (code, ins->dreg, ins->inst_offset, ins->inst_basereg);
} else {
if (ppc_is_imm32 (ins->inst_offset)) {
					ppc_addis (code, ppc_r11, ins->inst_basereg, ppc_ha(ins->inst_offset));
ppc_lfd (code, ins->dreg, ins->inst_offset, ppc_r11);
} else {
ppc_load (code, ppc_r0, ins->inst_offset);
					ppc_lfdx (code, ins->dreg, ins->inst_basereg, ppc_r0);
}
}
break;
case OP_STORER4_MEMBASE_REG:
ppc_frsp (code, ins->sreg1, ins->sreg1);
if (ppc_is_imm16 (ins->inst_offset)) {
ppc_stfs (code, ins->sreg1, ins->inst_offset, ins->inst_destbasereg);
} else {
if (ppc_is_imm32 (ins->inst_offset)) {
ppc_addis (code, ppc_r11, ins->inst_destbasereg, ppc_ha(ins->inst_offset));
ppc_stfs (code, ins->sreg1, ins->inst_offset, ppc_r11);
} else {
ppc_load (code, ppc_r0, ins->inst_offset);
ppc_stfsx (code, ins->sreg1, ins->inst_destbasereg, ppc_r0);
}
}
break;
case OP_LOADR4_MEMBASE:
if (ppc_is_imm16 (ins->inst_offset)) {
ppc_lfs (code, ins->dreg, ins->inst_offset, ins->inst_basereg);
} else {
if (ppc_is_imm32 (ins->inst_offset)) {
					ppc_addis (code, ppc_r11, ins->inst_basereg, ppc_ha(ins->inst_offset));
ppc_lfs (code, ins->dreg, ins->inst_offset, ppc_r11);
} else {
ppc_load (code, ppc_r0, ins->inst_offset);
					ppc_lfsx (code, ins->dreg, ins->inst_basereg, ppc_r0);
}
}
break;
case OP_LOADR4_MEMINDEX:
ppc_lfsx (code, ins->dreg, ins->inst_basereg, ins->sreg2);
break;
case OP_LOADR8_MEMINDEX:
ppc_lfdx (code, ins->dreg, ins->inst_basereg, ins->sreg2);
break;
case OP_STORER4_MEMINDEX:
ppc_frsp (code, ins->sreg1, ins->sreg1);
ppc_stfsx (code, ins->sreg1, ins->inst_destbasereg, ins->sreg2);
break;
case OP_STORER8_MEMINDEX:
ppc_stfdx (code, ins->sreg1, ins->inst_destbasereg, ins->sreg2);
break;
case CEE_CONV_R_UN:
case CEE_CONV_R4: /* FIXME: change precision */
case CEE_CONV_R8:
g_assert_not_reached ();
case OP_FCONV_TO_I1:
code = emit_float_to_int (cfg, code, ins->dreg, ins->sreg1, 1, TRUE);
break;
case OP_FCONV_TO_U1:
code = emit_float_to_int (cfg, code, ins->dreg, ins->sreg1, 1, FALSE);
break;
case OP_FCONV_TO_I2:
code = emit_float_to_int (cfg, code, ins->dreg, ins->sreg1, 2, TRUE);
break;
case OP_FCONV_TO_U2:
code = emit_float_to_int (cfg, code, ins->dreg, ins->sreg1, 2, FALSE);
break;
case OP_FCONV_TO_I4:
case OP_FCONV_TO_I:
code = emit_float_to_int (cfg, code, ins->dreg, ins->sreg1, 4, TRUE);
break;
case OP_FCONV_TO_U4:
code = emit_float_to_int (cfg, code, ins->dreg, ins->sreg1, 4, FALSE);
break;
case OP_LCONV_TO_R_UN:
g_assert_not_reached ();
/* Implemented as helper calls */
break;
case OP_LCONV_TO_OVF_I4_2:
case OP_LCONV_TO_OVF_I: {
#ifdef __mono_ppc64__
NOT_IMPLEMENTED;
#else
guint8 *negative_branch, *msword_positive_branch, *msword_negative_branch, *ovf_ex_target;
			// Check if it's negative
ppc_cmpi (code, 0, 0, ins->sreg1, 0);
negative_branch = code;
ppc_bc (code, PPC_BR_TRUE, PPC_BR_LT, 0);
			// It's positive: the msword must be 0
ppc_cmpi (code, 0, 0, ins->sreg2, 0);
msword_positive_branch = code;
ppc_bc (code, PPC_BR_TRUE, PPC_BR_EQ, 0);
ovf_ex_target = code;
EMIT_COND_SYSTEM_EXCEPTION_FLAGS (PPC_BR_ALWAYS, 0, "OverflowException");
// Negative
ppc_patch (negative_branch, code);
ppc_cmpi (code, 0, 0, ins->sreg2, -1);
msword_negative_branch = code;
ppc_bc (code, PPC_BR_FALSE, PPC_BR_EQ, 0);
ppc_patch (msword_negative_branch, ovf_ex_target);
ppc_patch (msword_positive_branch, code);
if (ins->dreg != ins->sreg1)
ppc_mr (code, ins->dreg, ins->sreg1);
break;
#endif
}
case OP_ROUND:
ppc_frind (code, ins->dreg, ins->sreg1);
break;
case OP_PPC_TRUNC:
ppc_frizd (code, ins->dreg, ins->sreg1);
break;
case OP_PPC_CEIL:
ppc_fripd (code, ins->dreg, ins->sreg1);
break;
case OP_PPC_FLOOR:
ppc_frimd (code, ins->dreg, ins->sreg1);
break;
case OP_ABS:
ppc_fabsd (code, ins->dreg, ins->sreg1);
break;
case OP_SQRTF:
ppc_fsqrtsd (code, ins->dreg, ins->sreg1);
break;
case OP_SQRT:
ppc_fsqrtd (code, ins->dreg, ins->sreg1);
break;
case OP_FADD:
ppc_fadd (code, ins->dreg, ins->sreg1, ins->sreg2);
break;
case OP_FSUB:
ppc_fsub (code, ins->dreg, ins->sreg1, ins->sreg2);
break;
case OP_FMUL:
ppc_fmul (code, ins->dreg, ins->sreg1, ins->sreg2);
break;
case OP_FDIV:
ppc_fdiv (code, ins->dreg, ins->sreg1, ins->sreg2);
break;
case OP_FNEG:
ppc_fneg (code, ins->dreg, ins->sreg1);
break;
case OP_FREM:
/* emulated */
g_assert_not_reached ();
break;
/* These min/max require POWER5 */
case OP_IMIN:
ppc_cmp (code, 0, 0, ins->sreg1, ins->sreg2);
ppc_isellt (code, ins->dreg, ins->sreg1, ins->sreg2);
break;
case OP_IMIN_UN:
ppc_cmpl (code, 0, 0, ins->sreg1, ins->sreg2);
ppc_isellt (code, ins->dreg, ins->sreg1, ins->sreg2);
break;
case OP_IMAX:
ppc_cmp (code, 0, 0, ins->sreg1, ins->sreg2);
ppc_iselgt (code, ins->dreg, ins->sreg1, ins->sreg2);
break;
case OP_IMAX_UN:
ppc_cmpl (code, 0, 0, ins->sreg1, ins->sreg2);
ppc_iselgt (code, ins->dreg, ins->sreg1, ins->sreg2);
break;
CASE_PPC64 (OP_LMIN)
ppc_cmp (code, 0, 1, ins->sreg1, ins->sreg2);
ppc_isellt (code, ins->dreg, ins->sreg1, ins->sreg2);
break;
CASE_PPC64 (OP_LMIN_UN)
ppc_cmpl (code, 0, 1, ins->sreg1, ins->sreg2);
ppc_isellt (code, ins->dreg, ins->sreg1, ins->sreg2);
break;
CASE_PPC64 (OP_LMAX)
ppc_cmp (code, 0, 1, ins->sreg1, ins->sreg2);
ppc_iselgt (code, ins->dreg, ins->sreg1, ins->sreg2);
break;
CASE_PPC64 (OP_LMAX_UN)
ppc_cmpl (code, 0, 1, ins->sreg1, ins->sreg2);
ppc_iselgt (code, ins->dreg, ins->sreg1, ins->sreg2);
break;
case OP_FCOMPARE:
ppc_fcmpu (code, 0, ins->sreg1, ins->sreg2);
break;
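		/*
		 * For the unordered (_UN) FP variants a NaN operand sets the
		 * unordered bit of the CR field (tested here as PPC_BR_SO), which
		 * is checked before the ordered condition.
		 */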
case OP_FCEQ:
case OP_FCNEQ:
ppc_fcmpo (code, 0, ins->sreg1, ins->sreg2);
ppc_li (code, ins->dreg, 1);
ppc_bc (code, ins->opcode == OP_FCEQ ? PPC_BR_TRUE : PPC_BR_FALSE, PPC_BR_EQ, 2);
ppc_li (code, ins->dreg, 0);
break;
case OP_FCLT:
case OP_FCGE:
ppc_fcmpo (code, 0, ins->sreg1, ins->sreg2);
ppc_li (code, ins->dreg, 1);
ppc_bc (code, ins->opcode == OP_FCLT ? PPC_BR_TRUE : PPC_BR_FALSE, PPC_BR_LT, 2);
ppc_li (code, ins->dreg, 0);
break;
case OP_FCLT_UN:
ppc_fcmpu (code, 0, ins->sreg1, ins->sreg2);
ppc_li (code, ins->dreg, 1);
ppc_bc (code, PPC_BR_TRUE, PPC_BR_SO, 3);
ppc_bc (code, PPC_BR_TRUE, PPC_BR_LT, 2);
ppc_li (code, ins->dreg, 0);
break;
case OP_FCGT:
case OP_FCLE:
ppc_fcmpo (code, 0, ins->sreg1, ins->sreg2);
ppc_li (code, ins->dreg, 1);
ppc_bc (code, ins->opcode == OP_FCGT ? PPC_BR_TRUE : PPC_BR_FALSE, PPC_BR_GT, 2);
ppc_li (code, ins->dreg, 0);
break;
case OP_FCGT_UN:
ppc_fcmpu (code, 0, ins->sreg1, ins->sreg2);
ppc_li (code, ins->dreg, 1);
ppc_bc (code, PPC_BR_TRUE, PPC_BR_SO, 3);
ppc_bc (code, PPC_BR_TRUE, PPC_BR_GT, 2);
ppc_li (code, ins->dreg, 0);
break;
case OP_FBEQ:
EMIT_COND_BRANCH (ins, CEE_BEQ - CEE_BEQ);
break;
case OP_FBNE_UN:
EMIT_COND_BRANCH (ins, CEE_BNE_UN - CEE_BEQ);
break;
case OP_FBLT:
ppc_bc (code, PPC_BR_TRUE, PPC_BR_SO, 2);
EMIT_COND_BRANCH (ins, CEE_BLT - CEE_BEQ);
break;
case OP_FBLT_UN:
EMIT_COND_BRANCH_FLAGS (ins, PPC_BR_TRUE, PPC_BR_SO);
EMIT_COND_BRANCH (ins, CEE_BLT_UN - CEE_BEQ);
break;
case OP_FBGT:
ppc_bc (code, PPC_BR_TRUE, PPC_BR_SO, 2);
EMIT_COND_BRANCH (ins, CEE_BGT - CEE_BEQ);
break;
case OP_FBGT_UN:
EMIT_COND_BRANCH_FLAGS (ins, PPC_BR_TRUE, PPC_BR_SO);
EMIT_COND_BRANCH (ins, CEE_BGT_UN - CEE_BEQ);
break;
case OP_FBGE:
ppc_bc (code, PPC_BR_TRUE, PPC_BR_SO, 2);
EMIT_COND_BRANCH (ins, CEE_BGE - CEE_BEQ);
break;
case OP_FBGE_UN:
EMIT_COND_BRANCH (ins, CEE_BGE_UN - CEE_BEQ);
break;
case OP_FBLE:
ppc_bc (code, PPC_BR_TRUE, PPC_BR_SO, 2);
EMIT_COND_BRANCH (ins, CEE_BLE - CEE_BEQ);
break;
case OP_FBLE_UN:
EMIT_COND_BRANCH (ins, CEE_BLE_UN - CEE_BEQ);
break;
case OP_CKFINITE:
g_assert_not_reached ();
case OP_PPC_CHECK_FINITE: {
ppc_rlwinm (code, ins->sreg1, ins->sreg1, 0, 1, 31);
ppc_addis (code, ins->sreg1, ins->sreg1, -32752);
ppc_rlwinmd (code, ins->sreg1, ins->sreg1, 1, 31, 31);
EMIT_COND_SYSTEM_EXCEPTION (CEE_BEQ - CEE_BEQ, "ArithmeticException");
			break;
		}
		case OP_JUMP_TABLE:
			mono_add_patch_info (cfg, offset, (MonoJumpInfoType)ins->inst_c1, ins->inst_p0);
#ifdef __mono_ppc64__
			ppc_load_sequence (code, ins->dreg, (guint64)0x0f0f0f0f0f0f0f0fLL);
#else
			ppc_load_sequence (code, ins->dreg, (gulong)0x0f0f0f0fL);
#endif
			break;
#ifdef __mono_ppc64__
case OP_ICONV_TO_I4:
case OP_SEXT_I4:
ppc_extsw (code, ins->dreg, ins->sreg1);
break;
case OP_ICONV_TO_U4:
case OP_ZEXT_I4:
ppc_clrldi (code, ins->dreg, ins->sreg1, 32);
break;
case OP_ICONV_TO_R4:
case OP_ICONV_TO_R8:
case OP_LCONV_TO_R4:
case OP_LCONV_TO_R8: {
int tmp;
if (ins->opcode == OP_ICONV_TO_R4 || ins->opcode == OP_ICONV_TO_R8) {
ppc_extsw (code, ppc_r0, ins->sreg1);
tmp = ppc_r0;
} else {
tmp = ins->sreg1;
}
if (cpu_hw_caps & PPC_MOVE_FPR_GPR) {
ppc_mffgpr (code, ins->dreg, tmp);
} else {
ppc_str (code, tmp, -8, ppc_r1);
ppc_lfd (code, ins->dreg, -8, ppc_r1);
}
ppc_fcfid (code, ins->dreg, ins->dreg);
if (ins->opcode == OP_ICONV_TO_R4 || ins->opcode == OP_LCONV_TO_R4)
ppc_frsp (code, ins->dreg, ins->dreg);
break;
}
case OP_LSHR:
ppc_srad (code, ins->dreg, ins->sreg1, ins->sreg2);
break;
case OP_LSHR_UN:
ppc_srd (code, ins->dreg, ins->sreg1, ins->sreg2);
break;
case OP_COND_EXC_C:
/* check XER [0-3] (SO, OV, CA): we can't use mcrxr
*/
ppc_mfspr (code, ppc_r0, ppc_xer);
ppc_andisd (code, ppc_r0, ppc_r0, (1 << 13)); /* CA */
EMIT_COND_SYSTEM_EXCEPTION_FLAGS (PPC_BR_FALSE, PPC_BR_EQ, (const char*)ins->inst_p1);
break;
case OP_COND_EXC_OV:
ppc_mfspr (code, ppc_r0, ppc_xer);
ppc_andisd (code, ppc_r0, ppc_r0, (1 << 14)); /* OV */
EMIT_COND_SYSTEM_EXCEPTION_FLAGS (PPC_BR_FALSE, PPC_BR_EQ, (const char*)ins->inst_p1);
break;
case OP_LBEQ:
case OP_LBNE_UN:
case OP_LBLT:
case OP_LBLT_UN:
case OP_LBGT:
case OP_LBGT_UN:
case OP_LBGE:
case OP_LBGE_UN:
case OP_LBLE:
case OP_LBLE_UN:
EMIT_COND_BRANCH (ins, ins->opcode - OP_LBEQ);
break;
case OP_FCONV_TO_I8:
code = emit_float_to_int (cfg, code, ins->dreg, ins->sreg1, 8, TRUE);
break;
case OP_FCONV_TO_U8:
code = emit_float_to_int (cfg, code, ins->dreg, ins->sreg1, 8, FALSE);
break;
case OP_STOREI4_MEMBASE_REG:
if (ppc_is_imm16 (ins->inst_offset)) {
ppc_stw (code, ins->sreg1, ins->inst_offset, ins->inst_destbasereg);
} else {
ppc_load (code, ppc_r0, ins->inst_offset);
ppc_stwx (code, ins->sreg1, ins->inst_destbasereg, ppc_r0);
}
break;
case OP_STOREI4_MEMINDEX:
			ppc_stwx (code, ins->sreg1, ins->inst_destbasereg, ins->sreg2);
break;
case OP_ISHR_IMM:
ppc_srawi (code, ins->dreg, ins->sreg1, (ins->inst_imm & 0x1f));
break;
case OP_ISHR_UN_IMM:
if (ins->inst_imm & 0x1f)
ppc_srwi (code, ins->dreg, ins->sreg1, (ins->inst_imm & 0x1f));
else
ppc_mr (code, ins->dreg, ins->sreg1);
break;
#else
case OP_ICONV_TO_R4:
case OP_ICONV_TO_R8: {
if (cpu_hw_caps & PPC_ISA_64) {
ppc_srawi(code, ppc_r0, ins->sreg1, 31);
ppc_stw (code, ppc_r0, -8, ppc_r1);
ppc_stw (code, ins->sreg1, -4, ppc_r1);
ppc_lfd (code, ins->dreg, -8, ppc_r1);
ppc_fcfid (code, ins->dreg, ins->dreg);
if (ins->opcode == OP_ICONV_TO_R4)
ppc_frsp (code, ins->dreg, ins->dreg);
}
break;
}
#endif
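		/*
		 * The atomic opcodes below use the classic load-reserve/
		 * store-conditional loop (lwarx/stwcx., or ldarx/stdcx. on ppc64),
		 * bracketed by sync instructions for full-barrier semantics.
		 */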
case OP_ATOMIC_ADD_I4:
CASE_PPC64 (OP_ATOMIC_ADD_I8) {
int location = ins->inst_basereg;
int addend = ins->sreg2;
guint8 *loop, *branch;
g_assert (ins->inst_offset == 0);
loop = code;
ppc_sync (code);
if (ins->opcode == OP_ATOMIC_ADD_I4)
ppc_lwarx (code, ppc_r0, 0, location);
#ifdef __mono_ppc64__
else
ppc_ldarx (code, ppc_r0, 0, location);
#endif
ppc_add (code, ppc_r0, ppc_r0, addend);
if (ins->opcode == OP_ATOMIC_ADD_I4)
ppc_stwcxd (code, ppc_r0, 0, location);
#ifdef __mono_ppc64__
else
ppc_stdcxd (code, ppc_r0, 0, location);
#endif
branch = code;
ppc_bc (code, PPC_BR_FALSE, PPC_BR_EQ, 0);
ppc_patch (branch, loop);
ppc_sync (code);
ppc_mr (code, ins->dreg, ppc_r0);
break;
}
case OP_ATOMIC_CAS_I4:
CASE_PPC64 (OP_ATOMIC_CAS_I8) {
int location = ins->sreg1;
int value = ins->sreg2;
int comparand = ins->sreg3;
guint8 *start, *not_equal, *lost_reservation;
start = code;
ppc_sync (code);
if (ins->opcode == OP_ATOMIC_CAS_I4)
ppc_lwarx (code, ppc_r0, 0, location);
#ifdef __mono_ppc64__
else
ppc_ldarx (code, ppc_r0, 0, location);
#endif
ppc_cmp (code, 0, ins->opcode == OP_ATOMIC_CAS_I4 ? 0 : 1, ppc_r0, comparand);
not_equal = code;
ppc_bc (code, PPC_BR_FALSE, PPC_BR_EQ, 0);
if (ins->opcode == OP_ATOMIC_CAS_I4)
ppc_stwcxd (code, value, 0, location);
#ifdef __mono_ppc64__
else
ppc_stdcxd (code, value, 0, location);
#endif
lost_reservation = code;
ppc_bc (code, PPC_BR_FALSE, PPC_BR_EQ, 0);
ppc_patch (lost_reservation, start);
ppc_patch (not_equal, code);
ppc_sync (code);
ppc_mr (code, ins->dreg, ppc_r0);
break;
}
case OP_LIVERANGE_START: {
if (cfg->verbose_level > 1)
printf ("R%d START=0x%x\n", MONO_VARINFO (cfg, ins->inst_c0)->vreg, (int)(code - cfg->native_code));
MONO_VARINFO (cfg, ins->inst_c0)->live_range_start = code - cfg->native_code;
break;
}
case OP_LIVERANGE_END: {
if (cfg->verbose_level > 1)
printf ("R%d END=0x%x\n", MONO_VARINFO (cfg, ins->inst_c0)->vreg, (int)(code - cfg->native_code));
MONO_VARINFO (cfg, ins->inst_c0)->live_range_end = code - cfg->native_code;
break;
}
case OP_GC_SAFE_POINT:
break;
default:
g_warning ("unknown opcode %s in %s()\n", mono_inst_name (ins->opcode), __FUNCTION__);
g_assert_not_reached ();
}
if ((cfg->opt & MONO_OPT_BRANCH) && ((code - cfg->native_code - offset) > max_len)) {
g_warning ("wrong maximal instruction length of instruction %s (expected %d, got %ld)",
mono_inst_name (ins->opcode), max_len, (glong)(code - cfg->native_code - offset));
g_assert_not_reached ();
}
cpos += max_len;
last_ins = ins;
}
set_code_cursor (cfg, code);
}
#endif /* !DISABLE_JIT */
void
mono_arch_register_lowlevel_calls (void)
{
/* The signature doesn't matter */
mono_register_jit_icall (mono_ppc_throw_exception, mono_icall_sig_void, TRUE);
}
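
/*
 * patch_load_sequence rewrites the 16-bit immediate fields of an emitted
 * register load sequence in place: lis/ori on 32-bit, the five-instruction
 * lis/ori/sldi/oris/ori form on 64-bit.  The guint16 indices select the
 * immediate halfword of each instruction, which is why the byte order
 * matters.
 */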
#ifdef __mono_ppc64__
#if G_BYTE_ORDER == G_LITTLE_ENDIAN
#define patch_load_sequence(ip,val) do {\
guint16 *__load = (guint16*)(ip); \
g_assert (sizeof (val) == sizeof (gsize)); \
__load [0] = (((guint64)(gsize)(val)) >> 48) & 0xffff; \
__load [2] = (((guint64)(gsize)(val)) >> 32) & 0xffff; \
__load [6] = (((guint64)(gsize)(val)) >> 16) & 0xffff; \
__load [8] = ((guint64)(gsize)(val)) & 0xffff; \
} while (0)
#elif G_BYTE_ORDER == G_BIG_ENDIAN
#define patch_load_sequence(ip,val) do {\
guint16 *__load = (guint16*)(ip); \
g_assert (sizeof (val) == sizeof (gsize)); \
__load [1] = (((guint64)(gsize)(val)) >> 48) & 0xffff; \
__load [3] = (((guint64)(gsize)(val)) >> 32) & 0xffff; \
__load [7] = (((guint64)(gsize)(val)) >> 16) & 0xffff; \
__load [9] = ((guint64)(gsize)(val)) & 0xffff; \
} while (0)
#else
#error huh? No endianness defined by compiler
#endif
#else
#define patch_load_sequence(ip,val) do {\
guint16 *__lis_ori = (guint16*)(ip); \
__lis_ori [1] = (((gulong)(val)) >> 16) & 0xffff; \
__lis_ori [3] = ((gulong)(val)) & 0xffff; \
} while (0)
#endif
#ifndef DISABLE_JIT
void
mono_arch_patch_code_new (MonoCompile *cfg, guint8 *code, MonoJumpInfo *ji, gpointer target)
{
unsigned char *ip = ji->ip.i + code;
gboolean is_fd = FALSE;
switch (ji->type) {
case MONO_PATCH_INFO_IP:
patch_load_sequence (ip, ip);
break;
case MONO_PATCH_INFO_SWITCH: {
gpointer *table = (gpointer *)ji->data.table->table;
int i;
patch_load_sequence (ip, table);
for (i = 0; i < ji->data.table->table_size; i++) {
table [i] = (glong)ji->data.table->table [i] + code;
}
		/* we put the absolute address into the table, so there is no need for ppc_patch in this case */
break;
}
case MONO_PATCH_INFO_METHODCONST:
case MONO_PATCH_INFO_CLASS:
case MONO_PATCH_INFO_IMAGE:
case MONO_PATCH_INFO_FIELD:
case MONO_PATCH_INFO_VTABLE:
case MONO_PATCH_INFO_IID:
case MONO_PATCH_INFO_SFLDA:
case MONO_PATCH_INFO_LDSTR:
case MONO_PATCH_INFO_TYPE_FROM_HANDLE:
case MONO_PATCH_INFO_LDTOKEN:
/* from OP_AOTCONST : lis + ori */
patch_load_sequence (ip, target);
break;
case MONO_PATCH_INFO_R4:
case MONO_PATCH_INFO_R8:
g_assert_not_reached ();
*((gconstpointer *)(ip + 2)) = ji->data.target;
break;
case MONO_PATCH_INFO_EXC_NAME:
g_assert_not_reached ();
*((gconstpointer *)(ip + 1)) = ji->data.name;
break;
case MONO_PATCH_INFO_NONE:
case MONO_PATCH_INFO_BB_OVF:
case MONO_PATCH_INFO_EXC_OVF:
/* everything is dealt with at epilog output time */
break;
#ifdef PPC_USES_FUNCTION_DESCRIPTOR
case MONO_PATCH_INFO_JIT_ICALL_ID:
case MONO_PATCH_INFO_ABS:
case MONO_PATCH_INFO_RGCTX_FETCH:
case MONO_PATCH_INFO_JIT_ICALL_ADDR:
case MONO_PATCH_INFO_SPECIFIC_TRAMPOLINE_LAZY_FETCH_ADDR:
is_fd = TRUE;
/* fall through */
#endif
default:
ppc_patch_full (cfg, ip, (const guchar*)target, is_fd);
break;
}
}
/*
* Emit code to save the registers in used_int_regs or the registers in the MonoLMF
* structure at positive offset pos from register base_reg. pos is guaranteed to fit into
* the instruction offset immediate for all the registers.
*/
static guint8*
save_registers (MonoCompile *cfg, guint8* code, int pos, int base_reg, gboolean save_lmf, guint32 used_int_regs, int cfa_offset)
{
int i;
if (!save_lmf) {
for (i = 13; i <= 31; i++) {
if (used_int_regs & (1 << i)) {
ppc_str (code, i, pos, base_reg);
mono_emit_unwind_op_offset (cfg, code, i, pos - cfa_offset);
pos += sizeof (target_mgreg_t);
}
}
} else {
/* pos is the start of the MonoLMF structure */
int offset = pos + G_STRUCT_OFFSET (MonoLMF, iregs);
for (i = 13; i <= 31; i++) {
ppc_str (code, i, offset, base_reg);
mono_emit_unwind_op_offset (cfg, code, i, offset - cfa_offset);
offset += sizeof (target_mgreg_t);
}
offset = pos + G_STRUCT_OFFSET (MonoLMF, fregs);
for (i = 14; i < 32; i++) {
ppc_stfd (code, i, offset, base_reg);
offset += sizeof (gdouble);
}
}
return code;
}
/*
* Stack frame layout:
*
* ------------------- sp
* MonoLMF structure or saved registers
* -------------------
* spilled regs
* -------------------
* locals
* -------------------
* param area size is cfg->param_area
* -------------------
* linkage area size is PPC_STACK_PARAM_OFFSET
* ------------------- sp
* red zone
*/
guint8 *
mono_arch_emit_prolog (MonoCompile *cfg)
{
MonoMethod *method = cfg->method;
MonoBasicBlock *bb;
MonoMethodSignature *sig;
MonoInst *inst;
long alloc_size, pos, max_offset, cfa_offset;
int i;
guint8 *code;
CallInfo *cinfo;
int lmf_offset = 0;
int tailcall_struct_index;
sig = mono_method_signature_internal (method);
cfg->code_size = 512 + sig->param_count * 32;
code = cfg->native_code = g_malloc (cfg->code_size);
cfa_offset = 0;
/* We currently emit unwind info for aot, but don't use it */
mono_emit_unwind_op_def_cfa (cfg, code, ppc_r1, 0);
if (1 || cfg->flags & MONO_CFG_HAS_CALLS) {
ppc_mflr (code, ppc_r0);
ppc_str (code, ppc_r0, PPC_RET_ADDR_OFFSET, ppc_sp);
mono_emit_unwind_op_offset (cfg, code, ppc_lr, PPC_RET_ADDR_OFFSET);
}
alloc_size = cfg->stack_offset;
pos = 0;
if (!method->save_lmf) {
for (i = 31; i >= 13; --i) {
if (cfg->used_int_regs & (1 << i)) {
pos += sizeof (target_mgreg_t);
}
}
} else {
pos += sizeof (MonoLMF);
lmf_offset = pos;
}
alloc_size += pos;
// align to MONO_ARCH_FRAME_ALIGNMENT bytes
if (alloc_size & (MONO_ARCH_FRAME_ALIGNMENT - 1)) {
alloc_size += MONO_ARCH_FRAME_ALIGNMENT - 1;
alloc_size &= ~(MONO_ARCH_FRAME_ALIGNMENT - 1);
}
cfg->stack_usage = alloc_size;
g_assert ((alloc_size & (MONO_ARCH_FRAME_ALIGNMENT-1)) == 0);
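	/*
	 * Allocate the frame with a single store-with-update of the back chain.
	 * When the frame is too large for a 16-bit displacement, r12 is pointed
	 * at the register save area before r1 moves so save_registers can still
	 * use short offsets.
	 */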
if (alloc_size) {
if (ppc_is_imm16 (-alloc_size)) {
ppc_str_update (code, ppc_sp, -alloc_size, ppc_sp);
cfa_offset = alloc_size;
mono_emit_unwind_op_def_cfa_offset (cfg, code, alloc_size);
code = save_registers (cfg, code, alloc_size - pos, ppc_sp, method->save_lmf, cfg->used_int_regs, cfa_offset);
} else {
if (pos)
ppc_addi (code, ppc_r12, ppc_sp, -pos);
ppc_load (code, ppc_r0, -alloc_size);
ppc_str_update_indexed (code, ppc_sp, ppc_sp, ppc_r0);
cfa_offset = alloc_size;
mono_emit_unwind_op_def_cfa_offset (cfg, code, alloc_size);
code = save_registers (cfg, code, 0, ppc_r12, method->save_lmf, cfg->used_int_regs, cfa_offset);
}
}
if (cfg->frame_reg != ppc_sp) {
ppc_mr (code, cfg->frame_reg, ppc_sp);
mono_emit_unwind_op_def_cfa_reg (cfg, code, cfg->frame_reg);
}
/* store runtime generic context */
if (cfg->rgctx_var) {
g_assert (cfg->rgctx_var->opcode == OP_REGOFFSET &&
(cfg->rgctx_var->inst_basereg == ppc_r1 || cfg->rgctx_var->inst_basereg == ppc_r31));
ppc_stptr (code, MONO_ARCH_RGCTX_REG, cfg->rgctx_var->inst_offset, cfg->rgctx_var->inst_basereg);
}
/* compute max_offset in order to use short forward jumps
* we always do it on ppc because the immediate displacement
* for jumps is too small
*/
max_offset = 0;
for (bb = cfg->bb_entry; bb; bb = bb->next_bb) {
MonoInst *ins;
bb->max_offset = max_offset;
MONO_BB_FOR_EACH_INS (bb, ins)
max_offset += ins_get_size (ins->opcode);
}
/* load arguments allocated to register from the stack */
pos = 0;
cinfo = get_call_info (sig);
if (MONO_TYPE_ISSTRUCT (sig->ret)) {
ArgInfo *ainfo = &cinfo->ret;
inst = cfg->vret_addr;
g_assert (inst);
if (ppc_is_imm16 (inst->inst_offset)) {
ppc_stptr (code, ainfo->reg, inst->inst_offset, inst->inst_basereg);
} else {
ppc_load (code, ppc_r12, inst->inst_offset);
ppc_stptr_indexed (code, ainfo->reg, ppc_r12, inst->inst_basereg);
}
}
tailcall_struct_index = 0;
for (i = 0; i < sig->param_count + sig->hasthis; ++i) {
ArgInfo *ainfo = cinfo->args + i;
inst = cfg->args [pos];
if (cfg->verbose_level > 2)
g_print ("Saving argument %d (type: %d)\n", i, ainfo->regtype);
if (inst->opcode == OP_REGVAR) {
if (ainfo->regtype == RegTypeGeneral)
ppc_mr (code, inst->dreg, ainfo->reg);
else if (ainfo->regtype == RegTypeFP)
ppc_fmr (code, inst->dreg, ainfo->reg);
else if (ainfo->regtype == RegTypeBase) {
ppc_ldr (code, ppc_r12, 0, ppc_sp);
ppc_ldptr (code, inst->dreg, ainfo->offset, ppc_r12);
} else
g_assert_not_reached ();
if (cfg->verbose_level > 2)
g_print ("Argument %ld assigned to register %s\n", pos, mono_arch_regname (inst->dreg));
} else {
/* the argument should be put on the stack: FIXME handle size != word */
if (ainfo->regtype == RegTypeGeneral) {
switch (ainfo->size) {
case 1:
if (ppc_is_imm16 (inst->inst_offset)) {
ppc_stb (code, ainfo->reg, inst->inst_offset, inst->inst_basereg);
} else {
if (ppc_is_imm32 (inst->inst_offset)) {
ppc_addis (code, ppc_r12, inst->inst_basereg, ppc_ha(inst->inst_offset));
ppc_stb (code, ainfo->reg, inst->inst_offset, ppc_r12);
} else {
ppc_load (code, ppc_r12, inst->inst_offset);
ppc_stbx (code, ainfo->reg, inst->inst_basereg, ppc_r12);
}
}
break;
case 2:
if (ppc_is_imm16 (inst->inst_offset)) {
ppc_sth (code, ainfo->reg, inst->inst_offset, inst->inst_basereg);
} else {
if (ppc_is_imm32 (inst->inst_offset)) {
ppc_addis (code, ppc_r12, inst->inst_basereg, ppc_ha(inst->inst_offset));
ppc_sth (code, ainfo->reg, inst->inst_offset, ppc_r12);
} else {
ppc_load (code, ppc_r12, inst->inst_offset);
ppc_sthx (code, ainfo->reg, inst->inst_basereg, ppc_r12);
}
}
break;
#ifdef __mono_ppc64__
case 4:
if (ppc_is_imm16 (inst->inst_offset)) {
ppc_stw (code, ainfo->reg, inst->inst_offset, inst->inst_basereg);
} else {
if (ppc_is_imm32 (inst->inst_offset)) {
ppc_addis (code, ppc_r12, inst->inst_basereg, ppc_ha(inst->inst_offset));
ppc_stw (code, ainfo->reg, inst->inst_offset, ppc_r12);
} else {
ppc_load (code, ppc_r12, inst->inst_offset);
ppc_stwx (code, ainfo->reg, inst->inst_basereg, ppc_r12);
}
}
break;
case 8:
if (ppc_is_imm16 (inst->inst_offset)) {
ppc_str (code, ainfo->reg, inst->inst_offset, inst->inst_basereg);
} else {
ppc_load (code, ppc_r12, inst->inst_offset);
ppc_str_indexed (code, ainfo->reg, ppc_r12, inst->inst_basereg);
}
break;
#else
case 8:
if (ppc_is_imm16 (inst->inst_offset + 4)) {
ppc_stw (code, ainfo->reg, inst->inst_offset, inst->inst_basereg);
ppc_stw (code, ainfo->reg + 1, inst->inst_offset + 4, inst->inst_basereg);
} else {
ppc_addis (code, ppc_r12, inst->inst_basereg, ppc_ha(inst->inst_offset));
ppc_addi (code, ppc_r12, ppc_r12, inst->inst_offset);
ppc_stw (code, ainfo->reg, 0, ppc_r12);
ppc_stw (code, ainfo->reg + 1, 4, ppc_r12);
}
break;
#endif
default:
if (ppc_is_imm16 (inst->inst_offset)) {
ppc_stptr (code, ainfo->reg, inst->inst_offset, inst->inst_basereg);
} else {
if (ppc_is_imm32 (inst->inst_offset)) {
ppc_addis (code, ppc_r12, inst->inst_basereg, ppc_ha(inst->inst_offset));
ppc_stptr (code, ainfo->reg, inst->inst_offset, ppc_r12);
} else {
ppc_load (code, ppc_r12, inst->inst_offset);
ppc_stptr_indexed (code, ainfo->reg, inst->inst_basereg, ppc_r12);
}
}
break;
}
} else if (ainfo->regtype == RegTypeBase) {
g_assert (ppc_is_imm16 (ainfo->offset));
/* load the previous stack pointer in r12 */
ppc_ldr (code, ppc_r12, 0, ppc_sp);
ppc_ldptr (code, ppc_r0, ainfo->offset, ppc_r12);
switch (ainfo->size) {
case 1:
if (ppc_is_imm16 (inst->inst_offset)) {
ppc_stb (code, ppc_r0, inst->inst_offset, inst->inst_basereg);
} else {
if (ppc_is_imm32 (inst->inst_offset)) {
ppc_addis (code, ppc_r12, inst->inst_basereg, ppc_ha(inst->inst_offset));
ppc_stb (code, ppc_r0, inst->inst_offset, ppc_r12);
} else {
ppc_load (code, ppc_r12, inst->inst_offset);
ppc_stbx (code, ppc_r0, inst->inst_basereg, ppc_r12);
}
}
break;
case 2:
if (ppc_is_imm16 (inst->inst_offset)) {
ppc_sth (code, ppc_r0, inst->inst_offset, inst->inst_basereg);
} else {
if (ppc_is_imm32 (inst->inst_offset)) {
ppc_addis (code, ppc_r12, inst->inst_basereg, ppc_ha(inst->inst_offset));
ppc_sth (code, ppc_r0, inst->inst_offset, ppc_r12);
} else {
ppc_load (code, ppc_r12, inst->inst_offset);
ppc_sthx (code, ppc_r0, inst->inst_basereg, ppc_r12);
}
}
break;
#ifdef __mono_ppc64__
case 4:
if (ppc_is_imm16 (inst->inst_offset)) {
ppc_stw (code, ppc_r0, inst->inst_offset, inst->inst_basereg);
} else {
if (ppc_is_imm32 (inst->inst_offset)) {
ppc_addis (code, ppc_r12, inst->inst_basereg, ppc_ha(inst->inst_offset));
ppc_stw (code, ppc_r0, inst->inst_offset, ppc_r12);
} else {
ppc_load (code, ppc_r12, inst->inst_offset);
ppc_stwx (code, ppc_r0, inst->inst_basereg, ppc_r12);
}
}
break;
case 8:
if (ppc_is_imm16 (inst->inst_offset)) {
ppc_str (code, ppc_r0, inst->inst_offset, inst->inst_basereg);
} else {
ppc_load (code, ppc_r12, inst->inst_offset);
ppc_str_indexed (code, ppc_r0, ppc_r12, inst->inst_basereg);
}
break;
#else
case 8:
g_assert (ppc_is_imm16 (ainfo->offset + 4));
if (ppc_is_imm16 (inst->inst_offset + 4)) {
ppc_stw (code, ppc_r0, inst->inst_offset, inst->inst_basereg);
ppc_lwz (code, ppc_r0, ainfo->offset + 4, ppc_r12);
ppc_stw (code, ppc_r0, inst->inst_offset + 4, inst->inst_basereg);
} else {
/* use r11 to load the 2nd half of the long before we clobber r12. */
ppc_lwz (code, ppc_r11, ainfo->offset + 4, ppc_r12);
ppc_addis (code, ppc_r12, inst->inst_basereg, ppc_ha(inst->inst_offset));
ppc_addi (code, ppc_r12, ppc_r12, inst->inst_offset);
ppc_stw (code, ppc_r0, 0, ppc_r12);
ppc_stw (code, ppc_r11, 4, ppc_r12);
}
break;
#endif
default:
if (ppc_is_imm16 (inst->inst_offset)) {
ppc_stptr (code, ppc_r0, inst->inst_offset, inst->inst_basereg);
} else {
if (ppc_is_imm32 (inst->inst_offset)) {
ppc_addis (code, ppc_r12, inst->inst_basereg, ppc_ha(inst->inst_offset));
ppc_stptr (code, ppc_r0, inst->inst_offset, ppc_r12);
} else {
ppc_load (code, ppc_r12, inst->inst_offset);
ppc_stptr_indexed (code, ppc_r0, inst->inst_basereg, ppc_r12);
}
}
break;
}
} else if (ainfo->regtype == RegTypeFP) {
g_assert (ppc_is_imm16 (inst->inst_offset));
if (ainfo->size == 8)
ppc_stfd (code, ainfo->reg, inst->inst_offset, inst->inst_basereg);
else if (ainfo->size == 4)
ppc_stfs (code, ainfo->reg, inst->inst_offset, inst->inst_basereg);
else
g_assert_not_reached ();
} else if (ainfo->regtype == RegTypeFPStructByVal) {
int doffset = inst->inst_offset;
int soffset = 0;
int cur_reg;
int size = 0;
g_assert (ppc_is_imm16 (inst->inst_offset));
g_assert (ppc_is_imm16 (inst->inst_offset + ainfo->vtregs * sizeof (target_mgreg_t)));
/* FIXME: what if there is no class? */
if (sig->pinvoke && !sig->marshalling_disabled && mono_class_from_mono_type_internal (inst->inst_vtype))
size = mono_class_native_size (mono_class_from_mono_type_internal (inst->inst_vtype), NULL);
for (cur_reg = 0; cur_reg < ainfo->vtregs; ++cur_reg) {
if (ainfo->size == 4) {
ppc_stfs (code, ainfo->reg + cur_reg, doffset, inst->inst_basereg);
} else {
ppc_stfd (code, ainfo->reg + cur_reg, doffset, inst->inst_basereg);
}
soffset += ainfo->size;
doffset += ainfo->size;
}
} else if (ainfo->regtype == RegTypeStructByVal) {
int doffset = inst->inst_offset;
int soffset = 0;
int cur_reg;
int size = 0;
g_assert (ppc_is_imm16 (inst->inst_offset));
g_assert (ppc_is_imm16 (inst->inst_offset + ainfo->vtregs * sizeof (target_mgreg_t)));
/* FIXME: what if there is no class? */
if (sig->pinvoke && !sig->marshalling_disabled && mono_class_from_mono_type_internal (inst->inst_vtype))
size = mono_class_native_size (mono_class_from_mono_type_internal (inst->inst_vtype), NULL);
for (cur_reg = 0; cur_reg < ainfo->vtregs; ++cur_reg) {
#if __APPLE__
/*
* Darwin handles 1 and 2 byte
* structs specially by
						 * loading the halfword/byte into
						 * the arg register. Only done for
* pinvokes.
*/
if (size == 2)
ppc_sth (code, ainfo->reg + cur_reg, doffset, inst->inst_basereg);
else if (size == 1)
ppc_stb (code, ainfo->reg + cur_reg, doffset, inst->inst_basereg);
else
#endif
{
#ifdef __mono_ppc64__
if (ainfo->bytes) {
g_assert (cur_reg == 0);
#if G_BYTE_ORDER == G_BIG_ENDIAN
ppc_sldi (code, ppc_r0, ainfo->reg,
(sizeof (target_mgreg_t) - ainfo->bytes) * 8);
ppc_stptr (code, ppc_r0, doffset, inst->inst_basereg);
#else
if (mono_class_native_size (inst->klass, NULL) == 1) {
ppc_stb (code, ainfo->reg + cur_reg, doffset, inst->inst_basereg);
} else if (mono_class_native_size (inst->klass, NULL) == 2) {
ppc_sth (code, ainfo->reg + cur_reg, doffset, inst->inst_basereg);
} else if (mono_class_native_size (inst->klass, NULL) == 4) { // WDS -- maybe <=4?
ppc_stw (code, ainfo->reg + cur_reg, doffset, inst->inst_basereg);
} else {
ppc_stptr (code, ainfo->reg + cur_reg, doffset, inst->inst_basereg); // WDS -- Better way?
}
#endif
} else
#endif
{
ppc_stptr (code, ainfo->reg + cur_reg, doffset,
inst->inst_basereg);
}
}
soffset += sizeof (target_mgreg_t);
doffset += sizeof (target_mgreg_t);
}
if (ainfo->vtsize) {
/* FIXME: we need to do the shifting here, too */
if (ainfo->bytes)
NOT_IMPLEMENTED;
/* load the previous stack pointer in r12 (r0 gets overwritten by the memcpy) */
ppc_ldr (code, ppc_r12, 0, ppc_sp);
if ((size & MONO_PPC_32_64_CASE (3, 7)) != 0) {
code = emit_memcpy (code, size - soffset,
inst->inst_basereg, doffset,
ppc_r12, ainfo->offset + soffset);
} else {
code = emit_memcpy (code, ainfo->vtsize * sizeof (target_mgreg_t),
inst->inst_basereg, doffset,
ppc_r12, ainfo->offset + soffset);
}
}
} else if (ainfo->regtype == RegTypeStructByAddr) {
/* if it was originally a RegTypeBase */
if (ainfo->offset) {
/* load the previous stack pointer in r12 */
ppc_ldr (code, ppc_r12, 0, ppc_sp);
ppc_ldptr (code, ppc_r12, ainfo->offset, ppc_r12);
} else {
ppc_mr (code, ppc_r12, ainfo->reg);
}
g_assert (ppc_is_imm16 (inst->inst_offset));
code = emit_memcpy (code, ainfo->vtsize, inst->inst_basereg, inst->inst_offset, ppc_r12, 0);
/*g_print ("copy in %s: %d bytes from %d to offset: %d\n", method->name, ainfo->vtsize, ainfo->reg, inst->inst_offset);*/
} else
g_assert_not_reached ();
}
pos++;
}
if (method->save_lmf) {
if (cfg->compile_aot) {
/* Compute the got address which is needed by the PLT entry */
code = mono_arch_emit_load_got_addr (cfg->native_code, code, cfg, NULL);
}
mono_add_patch_info (cfg, code - cfg->native_code, MONO_PATCH_INFO_JIT_ICALL_ID,
GUINT_TO_POINTER (MONO_JIT_ICALL_mono_tls_get_lmf_addr_extern));
if ((FORCE_INDIR_CALL || cfg->method->dynamic) && !cfg->compile_aot) {
ppc_load_func (code, PPC_CALL_REG, 0);
ppc_mtlr (code, PPC_CALL_REG);
ppc_blrl (code);
} else {
ppc_bl (code, 0);
}
/* we build the MonoLMF structure on the stack - see mini-ppc.h */
/* lmf_offset is the offset from the previous stack pointer,
* alloc_size is the total stack space allocated, so the offset
* of MonoLMF from the current stack ptr is alloc_size - lmf_offset.
* The pointer to the struct is put in ppc_r12 (new_lmf).
* The callee-saved registers are already in the MonoLMF structure
*/
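		/* Example with made-up numbers: alloc_size == 0x120 and lmf_offset == 0x80
		 * put new_lmf at sp + 0xa0, i.e. 0x80 bytes below the caller's sp. */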
ppc_addi (code, ppc_r12, ppc_sp, alloc_size - lmf_offset);
/* ppc_r3 is the result from mono_get_lmf_addr () */
ppc_stptr (code, ppc_r3, G_STRUCT_OFFSET(MonoLMF, lmf_addr), ppc_r12);
/* new_lmf->previous_lmf = *lmf_addr */
ppc_ldptr (code, ppc_r0, G_STRUCT_OFFSET(MonoLMF, previous_lmf), ppc_r3);
ppc_stptr (code, ppc_r0, G_STRUCT_OFFSET(MonoLMF, previous_lmf), ppc_r12);
/* *(lmf_addr) = r12 */
ppc_stptr (code, ppc_r12, G_STRUCT_OFFSET(MonoLMF, previous_lmf), ppc_r3);
/* save method info */
if (cfg->compile_aot)
// FIXME:
ppc_load (code, ppc_r0, 0);
else
ppc_load_ptr (code, ppc_r0, method);
ppc_stptr (code, ppc_r0, G_STRUCT_OFFSET(MonoLMF, method), ppc_r12);
ppc_stptr (code, ppc_sp, G_STRUCT_OFFSET(MonoLMF, ebp), ppc_r12);
/* save the current IP */
if (cfg->compile_aot) {
ppc_bl (code, 1);
ppc_mflr (code, ppc_r0);
} else {
mono_add_patch_info (cfg, code - cfg->native_code, MONO_PATCH_INFO_IP, NULL);
#ifdef __mono_ppc64__
ppc_load_sequence (code, ppc_r0, (guint64)0x0101010101010101LL);
#else
ppc_load_sequence (code, ppc_r0, (gulong)0x01010101L);
#endif
}
ppc_stptr (code, ppc_r0, G_STRUCT_OFFSET(MonoLMF, eip), ppc_r12);
}
set_code_cursor (cfg, code);
g_free (cinfo);
return code;
}
void
mono_arch_emit_epilog (MonoCompile *cfg)
{
MonoMethod *method = cfg->method;
int pos, i;
int max_epilog_size = 16 + 20*4;
guint8 *code;
if (cfg->method->save_lmf)
max_epilog_size += 128;
code = realloc_code (cfg, max_epilog_size);
pos = 0;
if (method->save_lmf) {
int lmf_offset;
pos += sizeof (MonoLMF);
lmf_offset = pos;
/* save the frame reg in r8 */
ppc_mr (code, ppc_r8, cfg->frame_reg);
ppc_addi (code, ppc_r12, cfg->frame_reg, cfg->stack_usage - lmf_offset);
/* r5 = previous_lmf */
ppc_ldptr (code, ppc_r5, G_STRUCT_OFFSET(MonoLMF, previous_lmf), ppc_r12);
/* r6 = lmf_addr */
ppc_ldptr (code, ppc_r6, G_STRUCT_OFFSET(MonoLMF, lmf_addr), ppc_r12);
/* *(lmf_addr) = previous_lmf */
ppc_stptr (code, ppc_r5, G_STRUCT_OFFSET(MonoLMF, previous_lmf), ppc_r6);
/* FIXME: speedup: there is no actual need to restore the registers if
* we didn't actually change them (idea from Zoltan).
*/
/* restore iregs */
ppc_ldr_multiple (code, ppc_r13, G_STRUCT_OFFSET(MonoLMF, iregs), ppc_r12);
/* restore fregs */
/*for (i = 14; i < 32; i++) {
ppc_lfd (code, i, G_STRUCT_OFFSET(MonoLMF, fregs) + ((i-14) * sizeof (gdouble)), ppc_r12);
}*/
g_assert (ppc_is_imm16 (cfg->stack_usage + PPC_RET_ADDR_OFFSET));
/* use the saved copy of the frame reg in r8 */
if (1 || cfg->flags & MONO_CFG_HAS_CALLS) {
ppc_ldr (code, ppc_r0, cfg->stack_usage + PPC_RET_ADDR_OFFSET, ppc_r8);
ppc_mtlr (code, ppc_r0);
}
ppc_addic (code, ppc_sp, ppc_r8, cfg->stack_usage);
} else {
if (1 || cfg->flags & MONO_CFG_HAS_CALLS) {
long return_offset = cfg->stack_usage + PPC_RET_ADDR_OFFSET;
if (ppc_is_imm16 (return_offset)) {
ppc_ldr (code, ppc_r0, return_offset, cfg->frame_reg);
} else {
ppc_load (code, ppc_r12, return_offset);
ppc_ldr_indexed (code, ppc_r0, cfg->frame_reg, ppc_r12);
}
ppc_mtlr (code, ppc_r0);
}
if (ppc_is_imm16 (cfg->stack_usage)) {
int offset = cfg->stack_usage;
for (i = 13; i <= 31; i++) {
if (cfg->used_int_regs & (1 << i))
offset -= sizeof (target_mgreg_t);
}
if (cfg->frame_reg != ppc_sp)
ppc_mr (code, ppc_r12, cfg->frame_reg);
/* note r31 (possibly the frame register) is restored last */
for (i = 13; i <= 31; i++) {
if (cfg->used_int_regs & (1 << i)) {
ppc_ldr (code, i, offset, cfg->frame_reg);
offset += sizeof (target_mgreg_t);
}
}
if (cfg->frame_reg != ppc_sp)
ppc_addi (code, ppc_sp, ppc_r12, cfg->stack_usage);
else
ppc_addi (code, ppc_sp, ppc_sp, cfg->stack_usage);
} else {
ppc_load32 (code, ppc_r12, cfg->stack_usage);
if (cfg->used_int_regs) {
ppc_add (code, ppc_r12, cfg->frame_reg, ppc_r12);
for (i = 31; i >= 13; --i) {
if (cfg->used_int_regs & (1 << i)) {
pos += sizeof (target_mgreg_t);
ppc_ldr (code, i, -pos, ppc_r12);
}
}
ppc_mr (code, ppc_sp, ppc_r12);
} else {
ppc_add (code, ppc_sp, cfg->frame_reg, ppc_r12);
}
}
}
ppc_blr (code);
set_code_cursor (cfg, code);
}
#endif /* ifndef DISABLE_JIT */
/* remove once throw_exception_by_name is eliminated */
static int
exception_id_by_name (const char *name)
{
if (strcmp (name, "IndexOutOfRangeException") == 0)
return MONO_EXC_INDEX_OUT_OF_RANGE;
if (strcmp (name, "OverflowException") == 0)
return MONO_EXC_OVERFLOW;
if (strcmp (name, "ArithmeticException") == 0)
return MONO_EXC_ARITHMETIC;
if (strcmp (name, "DivideByZeroException") == 0)
return MONO_EXC_DIVIDE_BY_ZERO;
if (strcmp (name, "InvalidCastException") == 0)
return MONO_EXC_INVALID_CAST;
if (strcmp (name, "NullReferenceException") == 0)
return MONO_EXC_NULL_REF;
if (strcmp (name, "ArrayTypeMismatchException") == 0)
return MONO_EXC_ARRAY_TYPE_MISMATCH;
if (strcmp (name, "ArgumentException") == 0)
return MONO_EXC_ARGUMENT;
g_error ("Unknown intrinsic exception %s\n", name);
return 0;
}
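/* e.g. exception_id_by_name ("OverflowException") yields MONO_EXC_OVERFLOW;
 * any name missing from the list above aborts via g_error (). */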
#ifndef DISABLE_JIT
void
mono_arch_emit_exceptions (MonoCompile *cfg)
{
MonoJumpInfo *patch_info;
int i;
guint8 *code;
guint8* exc_throw_pos [MONO_EXC_INTRINS_NUM];
guint8 exc_throw_found [MONO_EXC_INTRINS_NUM];
int max_epilog_size = 50;
for (i = 0; i < MONO_EXC_INTRINS_NUM; i++) {
exc_throw_pos [i] = NULL;
exc_throw_found [i] = 0;
}
/* count the number of exception infos */
/*
* make sure we have enough space for exceptions
*/
for (patch_info = cfg->patch_info; patch_info; patch_info = patch_info->next) {
if (patch_info->type == MONO_PATCH_INFO_EXC) {
i = exception_id_by_name ((const char*)patch_info->data.target);
if (!exc_throw_found [i]) {
max_epilog_size += (2 * PPC_LOAD_SEQUENCE_LENGTH) + 5 * 4;
exc_throw_found [i] = TRUE;
}
} else if (patch_info->type == MONO_PATCH_INFO_BB_OVF)
max_epilog_size += 12;
else if (patch_info->type == MONO_PATCH_INFO_EXC_OVF) {
MonoOvfJump *ovfj = (MonoOvfJump*)patch_info->data.target;
i = exception_id_by_name (ovfj->data.exception);
if (!exc_throw_found [i]) {
max_epilog_size += (2 * PPC_LOAD_SEQUENCE_LENGTH) + 5 * 4;
exc_throw_found [i] = TRUE;
}
max_epilog_size += 8;
}
}
code = realloc_code (cfg, max_epilog_size);
/* add code to raise exceptions */
for (patch_info = cfg->patch_info; patch_info; patch_info = patch_info->next) {
switch (patch_info->type) {
case MONO_PATCH_INFO_BB_OVF: {
MonoOvfJump *ovfj = (MonoOvfJump*)patch_info->data.target;
unsigned char *ip = patch_info->ip.i + cfg->native_code;
/* patch the initial jump */
ppc_patch (ip, code);
ppc_bc (code, ovfj->b0_cond, ovfj->b1_cond, 2);
ppc_b (code, 0);
			ppc_patch (code - 4, ip + 4); /* jump back after the initial branch */
/* jump back to the true target */
ppc_b (code, 0);
ip = ovfj->data.bb->native_offset + cfg->native_code;
ppc_patch (code - 4, ip);
patch_info->type = MONO_PATCH_INFO_NONE;
break;
}
case MONO_PATCH_INFO_EXC_OVF: {
MonoOvfJump *ovfj = (MonoOvfJump*)patch_info->data.target;
MonoJumpInfo *newji;
unsigned char *ip = patch_info->ip.i + cfg->native_code;
unsigned char *bcl = code;
/* patch the initial jump: we arrived here with a call */
ppc_patch (ip, code);
ppc_bc (code, ovfj->b0_cond, ovfj->b1_cond, 0);
ppc_b (code, 0);
			ppc_patch (code - 4, ip + 4); /* jump back after the initial branch */
/* patch the conditional jump to the right handler */
/* make it processed next */
newji = mono_mempool_alloc (cfg->mempool, sizeof (MonoJumpInfo));
newji->type = MONO_PATCH_INFO_EXC;
newji->ip.i = bcl - cfg->native_code;
newji->data.target = ovfj->data.exception;
newji->next = patch_info->next;
patch_info->next = newji;
patch_info->type = MONO_PATCH_INFO_NONE;
break;
}
case MONO_PATCH_INFO_EXC: {
MonoClass *exc_class;
unsigned char *ip = patch_info->ip.i + cfg->native_code;
i = exception_id_by_name ((const char*)patch_info->data.target);
if (exc_throw_pos [i] && !(ip > exc_throw_pos [i] && ip - exc_throw_pos [i] > 50000)) {
ppc_patch (ip, exc_throw_pos [i]);
patch_info->type = MONO_PATCH_INFO_NONE;
break;
} else {
exc_throw_pos [i] = code;
}
exc_class = mono_class_load_from_name (mono_defaults.corlib, "System", patch_info->data.name);
ppc_patch (ip, code);
/*mono_add_patch_info (cfg, code - cfg->native_code, MONO_PATCH_INFO_EXC_NAME, patch_info->data.target);*/
ppc_load (code, ppc_r3, m_class_get_type_token (exc_class));
/* we got here from a conditional call, so the calling ip is set in lr */
ppc_mflr (code, ppc_r4);
patch_info->type = MONO_PATCH_INFO_JIT_ICALL_ID;
patch_info->data.jit_icall_id = MONO_JIT_ICALL_mono_arch_throw_corlib_exception;
patch_info->ip.i = code - cfg->native_code;
if (FORCE_INDIR_CALL || cfg->method->dynamic) {
ppc_load_func (code, PPC_CALL_REG, 0);
ppc_mtctr (code, PPC_CALL_REG);
ppc_bcctr (code, PPC_BR_ALWAYS, 0);
} else {
ppc_bl (code, 0);
}
break;
}
default:
/* do nothing */
break;
}
}
set_code_cursor (cfg, code);
}
#endif
#if DEAD_CODE
static int
try_offset_access (void *value, guint32 idx)
{
register void* me __asm__ ("r2");
void ***p = (void***)((char*)me + 284);
int idx1 = idx / 32;
int idx2 = idx % 32;
if (!p [idx1])
return 0;
if (value != p[idx1][idx2])
return 0;
return 1;
}
#endif
void
mono_arch_finish_init (void)
{
}
#define CMP_SIZE (PPC_LOAD_SEQUENCE_LENGTH + 4)
#define BR_SIZE 4
#define LOADSTORE_SIZE 4
#define JUMP_IMM_SIZE 12
#define JUMP_IMM32_SIZE (PPC_LOAD_SEQUENCE_LENGTH + 8)
#define ENABLE_WRONG_METHOD_CHECK 0
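/* Worked example of the chunk-size accounting below: an is_equals item with
 * a check_target_idx, no cached target code and no prior compare costs at
 * most CMP_SIZE + LOADSTORE_SIZE + BR_SIZE + JUMP_IMM_SIZE bytes. */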
gpointer
mono_arch_build_imt_trampoline (MonoVTable *vtable, MonoIMTCheckItem **imt_entries, int count,
gpointer fail_tramp)
{
int i;
int size = 0;
guint8 *code, *start;
MonoMemoryManager *mem_manager = m_class_get_mem_manager (vtable->klass);
for (i = 0; i < count; ++i) {
MonoIMTCheckItem *item = imt_entries [i];
if (item->is_equals) {
if (item->check_target_idx) {
if (!item->compare_done)
item->chunk_size += CMP_SIZE;
if (item->has_target_code)
item->chunk_size += BR_SIZE + JUMP_IMM32_SIZE;
else
item->chunk_size += LOADSTORE_SIZE + BR_SIZE + JUMP_IMM_SIZE;
} else {
if (fail_tramp) {
item->chunk_size += CMP_SIZE + BR_SIZE + JUMP_IMM32_SIZE * 2;
if (!item->has_target_code)
item->chunk_size += LOADSTORE_SIZE;
} else {
item->chunk_size += LOADSTORE_SIZE + JUMP_IMM_SIZE;
#if ENABLE_WRONG_METHOD_CHECK
item->chunk_size += CMP_SIZE + BR_SIZE + 4;
#endif
}
}
} else {
item->chunk_size += CMP_SIZE + BR_SIZE;
imt_entries [item->check_target_idx]->compare_done = TRUE;
}
size += item->chunk_size;
}
/* the initial load of the vtable address */
size += PPC_LOAD_SEQUENCE_LENGTH + LOADSTORE_SIZE;
if (fail_tramp) {
code = (guint8 *)mini_alloc_generic_virtual_trampoline (vtable, size);
} else {
code = mono_mem_manager_code_reserve (mem_manager, size);
}
start = code;
/*
* We need to save and restore r12 because it might be
* used by the caller as the vtable register, so
* clobbering it will trip up the magic trampoline.
*
* FIXME: Get rid of this by making sure that r12 is
* not used as the vtable register in interface calls.
*/
ppc_stptr (code, ppc_r12, PPC_RET_ADDR_OFFSET, ppc_sp);
ppc_load (code, ppc_r12, (gsize)(& (vtable->vtable [0])));
for (i = 0; i < count; ++i) {
MonoIMTCheckItem *item = imt_entries [i];
item->code_target = code;
if (item->is_equals) {
if (item->check_target_idx) {
if (!item->compare_done) {
ppc_load (code, ppc_r0, (gsize)item->key);
ppc_compare_log (code, 0, MONO_ARCH_IMT_REG, ppc_r0);
}
item->jmp_code = code;
ppc_bc (code, PPC_BR_FALSE, PPC_BR_EQ, 0);
if (item->has_target_code) {
ppc_load_ptr (code, ppc_r0, item->value.target_code);
} else {
ppc_ldptr (code, ppc_r0, (sizeof (target_mgreg_t) * item->value.vtable_slot), ppc_r12);
ppc_ldptr (code, ppc_r12, PPC_RET_ADDR_OFFSET, ppc_sp);
}
ppc_mtctr (code, ppc_r0);
ppc_bcctr (code, PPC_BR_ALWAYS, 0);
} else {
if (fail_tramp) {
ppc_load (code, ppc_r0, (gulong)item->key);
ppc_compare_log (code, 0, MONO_ARCH_IMT_REG, ppc_r0);
item->jmp_code = code;
ppc_bc (code, PPC_BR_FALSE, PPC_BR_EQ, 0);
if (item->has_target_code) {
ppc_load_ptr (code, ppc_r0, item->value.target_code);
} else {
g_assert (vtable);
ppc_load_ptr (code, ppc_r0, & (vtable->vtable [item->value.vtable_slot]));
ppc_ldptr_indexed (code, ppc_r0, 0, ppc_r0);
}
ppc_mtctr (code, ppc_r0);
ppc_bcctr (code, PPC_BR_ALWAYS, 0);
ppc_patch (item->jmp_code, code);
ppc_load_ptr (code, ppc_r0, fail_tramp);
ppc_mtctr (code, ppc_r0);
ppc_bcctr (code, PPC_BR_ALWAYS, 0);
item->jmp_code = NULL;
} else {
/* enable the commented code to assert on wrong method */
#if ENABLE_WRONG_METHOD_CHECK
ppc_load (code, ppc_r0, (guint32)item->key);
ppc_compare_log (code, 0, MONO_ARCH_IMT_REG, ppc_r0);
item->jmp_code = code;
ppc_bc (code, PPC_BR_FALSE, PPC_BR_EQ, 0);
#endif
ppc_ldptr (code, ppc_r0, (sizeof (target_mgreg_t) * item->value.vtable_slot), ppc_r12);
ppc_ldptr (code, ppc_r12, PPC_RET_ADDR_OFFSET, ppc_sp);
ppc_mtctr (code, ppc_r0);
ppc_bcctr (code, PPC_BR_ALWAYS, 0);
#if ENABLE_WRONG_METHOD_CHECK
ppc_patch (item->jmp_code, code);
ppc_break (code);
item->jmp_code = NULL;
#endif
}
}
} else {
ppc_load (code, ppc_r0, (gulong)item->key);
ppc_compare_log (code, 0, MONO_ARCH_IMT_REG, ppc_r0);
item->jmp_code = code;
ppc_bc (code, PPC_BR_FALSE, PPC_BR_LT, 0);
}
}
/* patch the branches to get to the target items */
for (i = 0; i < count; ++i) {
MonoIMTCheckItem *item = imt_entries [i];
if (item->jmp_code) {
if (item->check_target_idx) {
ppc_patch (item->jmp_code, imt_entries [item->check_target_idx]->code_target);
}
}
}
if (!fail_tramp)
UnlockedAdd (&mono_stats.imt_trampolines_size, code - start);
g_assert (code - start <= size);
mono_arch_flush_icache (start, size);
MONO_PROFILER_RAISE (jit_code_buffer, (start, code - start, MONO_PROFILER_CODE_BUFFER_IMT_TRAMPOLINE, NULL));
mono_tramp_info_register (mono_tramp_info_create (NULL, start, code - start, NULL, NULL), mem_manager);
return start;
}
MonoMethod*
mono_arch_find_imt_method (host_mgreg_t *regs, guint8 *code)
{
host_mgreg_t *r = (host_mgreg_t*)regs;
return (MonoMethod*)(gsize) r [MONO_ARCH_IMT_REG];
}
MonoVTable*
mono_arch_find_static_call_vtable (host_mgreg_t *regs, guint8 *code)
{
return (MonoVTable*)(gsize) regs [MONO_ARCH_RGCTX_REG];
}
GSList*
mono_arch_get_cie_program (void)
{
GSList *l = NULL;
mono_add_unwind_op_def_cfa (l, (guint8*)NULL, (guint8*)NULL, ppc_r1, 0);
return l;
}
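/* The single opcode above states that on entry to any method the CFA is
 * r1 + 0, i.e. the stack pointer itself, matching the PPC ABI. */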
MonoInst*
mono_arch_emit_inst_for_method (MonoCompile *cfg, MonoMethod *cmethod, MonoMethodSignature *fsig, MonoInst **args)
{
MonoInst *ins = NULL;
int opcode = 0;
if (cmethod->klass == mono_class_try_get_math_class ()) {
if (strcmp (cmethod->name, "Sqrt") == 0) {
opcode = OP_SQRT;
} else if (strcmp (cmethod->name, "Abs") == 0 && fsig->params [0]->type == MONO_TYPE_R8) {
opcode = OP_ABS;
}
if (opcode && fsig->param_count == 1) {
MONO_INST_NEW (cfg, ins, opcode);
ins->type = STACK_R8;
ins->dreg = mono_alloc_freg (cfg);
ins->sreg1 = args [0]->dreg;
MONO_ADD_INS (cfg->cbb, ins);
}
/* Check for Min/Max for (u)int(32|64) */
opcode = 0;
if (cpu_hw_caps & PPC_ISA_2_03) {
if (strcmp (cmethod->name, "Min") == 0) {
if (fsig->params [0]->type == MONO_TYPE_I4)
opcode = OP_IMIN;
if (fsig->params [0]->type == MONO_TYPE_U4)
opcode = OP_IMIN_UN;
#ifdef __mono_ppc64__
else if (fsig->params [0]->type == MONO_TYPE_I8)
opcode = OP_LMIN;
else if (fsig->params [0]->type == MONO_TYPE_U8)
opcode = OP_LMIN_UN;
#endif
} else if (strcmp (cmethod->name, "Max") == 0) {
if (fsig->params [0]->type == MONO_TYPE_I4)
opcode = OP_IMAX;
if (fsig->params [0]->type == MONO_TYPE_U4)
opcode = OP_IMAX_UN;
#ifdef __mono_ppc64__
else if (fsig->params [0]->type == MONO_TYPE_I8)
opcode = OP_LMAX;
else if (fsig->params [0]->type == MONO_TYPE_U8)
opcode = OP_LMAX_UN;
#endif
}
/*
* TODO: Floating point version with fsel, but fsel has
				 * some peculiarities (it needs a scratch reg unless
				 * comparing with 0, and NaN/Inf behaviour differs);
				 * a MathF version would face the same issues.
*/
}
if (opcode && fsig->param_count == 2) {
MONO_INST_NEW (cfg, ins, opcode);
			ins->type = (fsig->params [0]->type == MONO_TYPE_I4 || fsig->params [0]->type == MONO_TYPE_U4) ? STACK_I4 : STACK_I8;
ins->dreg = mono_alloc_ireg (cfg);
ins->sreg1 = args [0]->dreg;
ins->sreg2 = args [1]->dreg;
MONO_ADD_INS (cfg->cbb, ins);
}
/* Rounding instructions */
opcode = 0;
if ((cpu_hw_caps & PPC_ISA_2X) && (fsig->param_count == 1) && (fsig->params [0]->type == MONO_TYPE_R8)) {
/*
* XXX: sysmath.c and the POWER ISA documentation for
* frin[.] imply rounding is a little more complicated
* than expected; the semantics are slightly different,
* so just "frin." isn't a drop-in replacement. Floor,
* Truncate, and Ceiling seem to work normally though.
* (also, no float versions of these ops, but frsp
			 * could be prepended?)
*/
//if (!strcmp (cmethod->name, "Round"))
// opcode = OP_ROUND;
if (!strcmp (cmethod->name, "Floor"))
opcode = OP_PPC_FLOOR;
else if (!strcmp (cmethod->name, "Ceiling"))
opcode = OP_PPC_CEIL;
else if (!strcmp (cmethod->name, "Truncate"))
opcode = OP_PPC_TRUNC;
if (opcode != 0) {
MONO_INST_NEW (cfg, ins, opcode);
ins->type = STACK_R8;
ins->dreg = mono_alloc_freg (cfg);
ins->sreg1 = args [0]->dreg;
MONO_ADD_INS (cfg->cbb, ins);
}
}
}
if (cmethod->klass == mono_class_try_get_mathf_class ()) {
if (strcmp (cmethod->name, "Sqrt") == 0) {
opcode = OP_SQRTF;
} /* XXX: POWER has no single-precision normal FPU abs? */
if (opcode && fsig->param_count == 1) {
MONO_INST_NEW (cfg, ins, opcode);
ins->type = STACK_R4;
ins->dreg = mono_alloc_freg (cfg);
ins->sreg1 = args [0]->dreg;
MONO_ADD_INS (cfg->cbb, ins);
}
}
return ins;
}
host_mgreg_t
mono_arch_context_get_int_reg (MonoContext *ctx, int reg)
{
if (reg == ppc_r1)
return (host_mgreg_t)(gsize)MONO_CONTEXT_GET_SP (ctx);
return ctx->regs [reg];
}
host_mgreg_t*
mono_arch_context_get_int_reg_address (MonoContext *ctx, int reg)
{
if (reg == ppc_r1)
		return (host_mgreg_t*)&MONO_CONTEXT_GET_SP (ctx);
return &ctx->regs [reg];
}
guint32
mono_arch_get_patch_offset (guint8 *code)
{
return 0;
}
/*
 * mono_arch_emit_load_got_addr:
*
* Emit code to load the got address.
* On PPC, the result is placed into r30.
*/
guint8*
mono_arch_emit_load_got_addr (guint8 *start, guint8 *code, MonoCompile *cfg, MonoJumpInfo **ji)
{
ppc_bl (code, 1);
ppc_mflr (code, ppc_r30);
if (cfg)
mono_add_patch_info (cfg, code - start, MONO_PATCH_INFO_GOT_OFFSET, NULL);
else
*ji = mono_patch_info_list_prepend (*ji, code - start, MONO_PATCH_INFO_GOT_OFFSET, NULL);
/* arch_emit_got_address () patches this */
#if defined(TARGET_POWERPC64)
ppc_nop (code);
ppc_nop (code);
ppc_nop (code);
ppc_nop (code);
#else
ppc_load32 (code, ppc_r0, 0);
ppc_add (code, ppc_r30, ppc_r30, ppc_r0);
#endif
set_code_cursor (cfg, code);
return code;
}
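/* Illustrative shape of the emitted sequence (32-bit case):
 *
 *     bl   .+4          ; lr <- address of the next instruction
 *     mflr r30          ; r30 <- that address
 *     lis/ori r0, ...   ; GOT displacement, patched later
 *     add  r30, r30, r0
 */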
/*
 * mono_arch_emit_load_aotconst:
*
* Emit code to load the contents of the GOT slot identified by TRAMP_TYPE and
* TARGET from the mscorlib GOT in full-aot code.
* On PPC, the GOT address is assumed to be in r30, and the result is placed into
* r12.
*/
guint8*
mono_arch_emit_load_aotconst (guint8 *start, guint8 *code, MonoJumpInfo **ji, MonoJumpInfoType tramp_type, gconstpointer target)
{
/* Load the mscorlib got address */
ppc_ldptr (code, ppc_r12, sizeof (target_mgreg_t), ppc_r30);
*ji = mono_patch_info_list_prepend (*ji, code - start, tramp_type, target);
/* arch_emit_got_access () patches this */
ppc_load32 (code, ppc_r0, 0);
ppc_ldptr_indexed (code, ppc_r12, ppc_r12, ppc_r0);
return code;
}
/* Soft Debug support */
#ifdef MONO_ARCH_SOFT_DEBUG_SUPPORTED
/*
* BREAKPOINTS
*/
/*
* mono_arch_set_breakpoint:
*
* See mini-amd64.c for docs.
*/
void
mono_arch_set_breakpoint (MonoJitInfo *ji, guint8 *ip)
{
guint8 *code = ip;
guint8 *orig_code = code;
ppc_load_sequence (code, ppc_r12, (gsize)bp_trigger_page);
ppc_ldptr (code, ppc_r12, 0, ppc_r12);
g_assert (code - orig_code == BREAKPOINT_SIZE);
mono_arch_flush_icache (orig_code, code - orig_code);
}
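/* Mechanism: the sequence above loads from the read-protected bp_trigger_page,
 * which faults; the signal handler recognizes the faulting address via
 * mono_arch_is_breakpoint_event () below and reports a breakpoint event. */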
/*
* mono_arch_clear_breakpoint:
*
* See mini-amd64.c for docs.
*/
void
mono_arch_clear_breakpoint (MonoJitInfo *ji, guint8 *ip)
{
guint8 *code = ip;
int i;
for (i = 0; i < BREAKPOINT_SIZE / 4; ++i)
ppc_nop (code);
mono_arch_flush_icache (ip, code - ip);
}
/*
* mono_arch_is_breakpoint_event:
*
* See mini-amd64.c for docs.
*/
gboolean
mono_arch_is_breakpoint_event (void *info, void *sigctx)
{
siginfo_t* sinfo = (siginfo_t*) info;
/* Sometimes the address is off by 4 */
if (sinfo->si_addr >= bp_trigger_page && (guint8*)sinfo->si_addr <= (guint8*)bp_trigger_page + 128)
return TRUE;
else
return FALSE;
}
/*
* mono_arch_skip_breakpoint:
*
* See mini-amd64.c for docs.
*/
void
mono_arch_skip_breakpoint (MonoContext *ctx, MonoJitInfo *ji)
{
/* skip the ldptr */
MONO_CONTEXT_SET_IP (ctx, (guint8*)MONO_CONTEXT_GET_IP (ctx) + 4);
}
/*
* SINGLE STEPPING
*/
/*
* mono_arch_start_single_stepping:
*
* See mini-amd64.c for docs.
*/
void
mono_arch_start_single_stepping (void)
{
mono_mprotect (ss_trigger_page, mono_pagesize (), 0);
}
/*
* mono_arch_stop_single_stepping:
*
* See mini-amd64.c for docs.
*/
void
mono_arch_stop_single_stepping (void)
{
mono_mprotect (ss_trigger_page, mono_pagesize (), MONO_MMAP_READ);
}
/*
* mono_arch_is_single_step_event:
*
* See mini-amd64.c for docs.
*/
gboolean
mono_arch_is_single_step_event (void *info, void *sigctx)
{
siginfo_t* sinfo = (siginfo_t*) info;
/* Sometimes the address is off by 4 */
if (sinfo->si_addr >= ss_trigger_page && (guint8*)sinfo->si_addr <= (guint8*)ss_trigger_page + 128)
return TRUE;
else
return FALSE;
}
/*
* mono_arch_skip_single_step:
*
* See mini-amd64.c for docs.
*/
void
mono_arch_skip_single_step (MonoContext *ctx)
{
/* skip the ldptr */
MONO_CONTEXT_SET_IP (ctx, (guint8*)MONO_CONTEXT_GET_IP (ctx) + 4);
}
/*
 * mono_arch_get_seq_point_info:
*
* See mini-amd64.c for docs.
*/
SeqPointInfo*
mono_arch_get_seq_point_info (guint8 *code)
{
NOT_IMPLEMENTED;
return NULL;
}
#endif
gboolean
mono_arch_opcode_supported (int opcode)
{
switch (opcode) {
case OP_ATOMIC_ADD_I4:
case OP_ATOMIC_CAS_I4:
#ifdef TARGET_POWERPC64
case OP_ATOMIC_ADD_I8:
case OP_ATOMIC_CAS_I8:
#endif
return TRUE;
default:
return FALSE;
}
}
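/* The supported entries map onto PPC's load-reserve/store-conditional pairs
 * (lwarx/stwcx., plus ldarx/stdcx. on ppc64), hence only the 4- and 8-byte
 * atomic add/cas variants are advertised. */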
gpointer
mono_arch_load_function (MonoJitICallId jit_icall_id)
{
gpointer target = NULL;
switch (jit_icall_id) {
#undef MONO_AOT_ICALL
#define MONO_AOT_ICALL(x) case MONO_JIT_ICALL_ ## x: target = (gpointer)x; break;
MONO_AOT_ICALL (mono_ppc_throw_exception)
}
return target;
}
| -1 |
dotnet/runtime | 66,211 | [mono] Remove SkipVerification support from the runtime | CAS support was removed in .NET Core. This allows us removing a bunch of code that is unused, e.g. the dependency on libiconv. | akoeplinger | 2022-03-04T19:47:04Z | 2022-03-06T13:44:33Z | b463b1630dbf1be5b013208a9fa73e1ecd6c774c | be629f49a350d526de2c65981294734cee420b90 | [mono] Remove SkipVerification support from the runtime. CAS support was removed in .NET Core. This allows us removing a bunch of code that is unused, e.g. the dependency on libiconv. | ./src/libraries/System.Private.Xml/tests/Xslt/TestFiles/TestData/xsltc/baseline/sft36.txt | Microsoft (R) XSLT Compiler version 2.0.61009
for Microsoft (R) Windows (R) 2005 Framework version 2.0.50727
Copyright (C) Microsoft Corporation 2007. All rights reserved.
sft36a.xsl(17,22) : warning : Execution of scripts is prohibited and will cause a runtime error. For trusted stylesheets use /settings:script option to enable this feature.
| Microsoft (R) XSLT Compiler version 2.0.61009
for Microsoft (R) Windows (R) 2005 Framework version 2.0.50727
Copyright (C) Microsoft Corporation 2007. All rights reserved.
sft36a.xsl(17,22) : warning : Execution of scripts is prohibited and will cause a runtime error. For trusted stylesheets use /settings:script option to enable this feature.
| -1 |
dotnet/runtime | 66,211 | [mono] Remove SkipVerification support from the runtime | CAS support was removed in .NET Core. This allows us removing a bunch of code that is unused, e.g. the dependency on libiconv. | akoeplinger | 2022-03-04T19:47:04Z | 2022-03-06T13:44:33Z | b463b1630dbf1be5b013208a9fa73e1ecd6c774c | be629f49a350d526de2c65981294734cee420b90 | [mono] Remove SkipVerification support from the runtime. CAS support was removed in .NET Core. This allows us removing a bunch of code that is unused, e.g. the dependency on libiconv. | ./src/coreclr/pal/src/libunwind/src/tilegx/Gget_save_loc.c | /* libunwind - a platform-independent unwind library
Copyright (C) 2008 CodeSourcery
Copyright (C) 2014 Tilera Corp.
This file is part of libunwind.
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. */
#include "unwind_i.h"
int
unw_get_save_loc (unw_cursor_t *cursor, int reg, unw_save_loc_t *sloc)
{
struct cursor *c = (struct cursor *) cursor;
dwarf_loc_t loc;
loc = DWARF_NULL_LOC; /* default to "not saved" */
if (reg <= UNW_TILEGX_R55)
loc = c->dwarf.loc[reg - UNW_TILEGX_R0];
else
printf("\nInvalid register!");
memset (sloc, 0, sizeof (*sloc));
if (DWARF_IS_NULL_LOC (loc))
{
sloc->type = UNW_SLT_NONE;
return 0;
}
#if !defined(UNW_LOCAL_ONLY)
if (DWARF_IS_REG_LOC (loc))
{
sloc->type = UNW_SLT_REG;
sloc->u.regnum = DWARF_GET_LOC (loc);
}
else
#endif
{
sloc->type = UNW_SLT_MEMORY;
sloc->u.addr = DWARF_GET_LOC (loc);
}
return 0;
}
| /* libunwind - a platform-independent unwind library
Copyright (C) 2008 CodeSourcery
Copyright (C) 2014 Tilera Corp.
This file is part of libunwind.
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. */
#include "unwind_i.h"
int
unw_get_save_loc (unw_cursor_t *cursor, int reg, unw_save_loc_t *sloc)
{
struct cursor *c = (struct cursor *) cursor;
dwarf_loc_t loc;
loc = DWARF_NULL_LOC; /* default to "not saved" */
if (reg <= UNW_TILEGX_R55)
loc = c->dwarf.loc[reg - UNW_TILEGX_R0];
else
printf("\nInvalid register!");
memset (sloc, 0, sizeof (*sloc));
if (DWARF_IS_NULL_LOC (loc))
{
sloc->type = UNW_SLT_NONE;
return 0;
}
#if !defined(UNW_LOCAL_ONLY)
if (DWARF_IS_REG_LOC (loc))
{
sloc->type = UNW_SLT_REG;
sloc->u.regnum = DWARF_GET_LOC (loc);
}
else
#endif
{
sloc->type = UNW_SLT_MEMORY;
sloc->u.addr = DWARF_GET_LOC (loc);
}
return 0;
}
| -1 |
dotnet/runtime | 66,211 | [mono] Remove SkipVerification support from the runtime | CAS support was removed in .NET Core. This allows us removing a bunch of code that is unused, e.g. the dependency on libiconv. | akoeplinger | 2022-03-04T19:47:04Z | 2022-03-06T13:44:33Z | b463b1630dbf1be5b013208a9fa73e1ecd6c774c | be629f49a350d526de2c65981294734cee420b90 | [mono] Remove SkipVerification support from the runtime. CAS support was removed in .NET Core. This allows us removing a bunch of code that is unused, e.g. the dependency on libiconv. | ./src/coreclr/vm/ecall.h | // Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
// ECALL.H -
//
// Handles our private native calling interface.
//
#ifndef _ECALL_H_
#define _ECALL_H_
#include "fcall.h"
class MethodDesc;
// CoreCLR defines fewer FCalls so make the hashtable even smaller.
#define FCALL_HASH_SIZE 127
typedef DPTR(struct ECHash) PTR_ECHash;
struct ECHash
{
PTR_ECHash m_pNext;
PCODE m_pImplementation;
PTR_MethodDesc m_pMD; // for reverse mapping
};
#ifdef DACCESS_COMPILE
GVAL_DECL(TADDR, gLowestFCall);
GVAL_DECL(TADDR, gHighestFCall);
GARY_DECL(PTR_ECHash, gFCallMethods, FCALL_HASH_SIZE);
#endif
enum {
FCFuncFlag_EndOfArray = 0x01,
FCFuncFlag_HasSignature = 0x02,
FCFuncFlag_Unreferenced = 0x04, // Suppress unused fcall check
};
struct ECFunc {
UINT_PTR m_dwFlags;
LPVOID m_pImplementation;
LPCSTR m_szMethodName;
LPHARDCODEDMETASIG m_pMethodSig; // Optional field. It is valid only if HasSignature() is set.
bool IsEndOfArray() { LIMITED_METHOD_CONTRACT; return !!(m_dwFlags & FCFuncFlag_EndOfArray); }
bool HasSignature() { LIMITED_METHOD_CONTRACT; return !!(m_dwFlags & FCFuncFlag_HasSignature); }
bool IsUnreferenced(){ LIMITED_METHOD_CONTRACT; return !!(m_dwFlags & FCFuncFlag_Unreferenced); }
int DynamicID() { LIMITED_METHOD_CONTRACT; return (int) ((INT8)(m_dwFlags >> 24)); }
ECFunc* NextInArray()
{
LIMITED_METHOD_CONTRACT;
return (ECFunc*)((BYTE*)this +
(HasSignature() ? sizeof(ECFunc) : offsetof(ECFunc, m_pMethodSig)));
}
};
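// Layout note: ECFunc entries are variable-length; an entry without a
// signature omits the trailing m_pMethodSig field, and NextInArray()
// advances by whichever of the two sizes applies.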
struct ECClass
{
LPCSTR m_szClassName;
LPCSTR m_szNameSpace;
const LPVOID * m_pECFunc;
};
//=======================================================================
// Collects code and data pertaining to the ECall interface.
//=======================================================================
class ECall
{
public:
//---------------------------------------------------------
// One-time init
//---------------------------------------------------------
static void Init();
static PCODE GetFCallImpl(MethodDesc* pMD, BOOL * pfSharedOrDynamicFCallImpl = NULL);
static MethodDesc* MapTargetBackToMethod(PCODE pTarg, PCODE * ppAdjustedEntryPoint = NULL);
static DWORD GetIDForMethod(MethodDesc *pMD);
    // Some fcalls (delegate ctors and tlbimpl ctors) share one implementation.
    // We should never patch the vtable for these since they have a 1:N mapping between
// MethodDesc and the actual implementation
static BOOL IsSharedFCallImpl(PCODE pImpl);
static BOOL CheckUnusedECalls(SetSHash<DWORD>& usedIDs);
static void DynamicallyAssignFCallImpl(PCODE impl, DWORD index);
static void PopulateManagedStringConstructors();
static void PopulateManagedCastHelpers();
#ifdef DACCESS_COMPILE
// Enumerates all gFCallMethods for minidumps.
static void EnumFCallMethods();
#endif // DACCESS_COMPILE
#define _DYNAMICALLY_ASSIGNED_FCALLS_BASE() \
DYNAMICALLY_ASSIGNED_FCALL_IMPL(FastAllocateString, FramedAllocateString) \
DYNAMICALLY_ASSIGNED_FCALL_IMPL(CtorCharArrayManaged, NULL) \
DYNAMICALLY_ASSIGNED_FCALL_IMPL(CtorCharArrayStartLengthManaged, NULL) \
DYNAMICALLY_ASSIGNED_FCALL_IMPL(CtorCharCountManaged, NULL) \
DYNAMICALLY_ASSIGNED_FCALL_IMPL(CtorCharPtrManaged, NULL) \
DYNAMICALLY_ASSIGNED_FCALL_IMPL(CtorCharPtrStartLengthManaged, NULL) \
DYNAMICALLY_ASSIGNED_FCALL_IMPL(CtorReadOnlySpanOfCharManaged, NULL) \
DYNAMICALLY_ASSIGNED_FCALL_IMPL(CtorSBytePtrManaged, NULL) \
DYNAMICALLY_ASSIGNED_FCALL_IMPL(CtorSBytePtrStartLengthManaged, NULL) \
DYNAMICALLY_ASSIGNED_FCALL_IMPL(CtorSBytePtrStartLengthEncodingManaged, NULL) \
#define DYNAMICALLY_ASSIGNED_FCALLS() _DYNAMICALLY_ASSIGNED_FCALLS_BASE()
enum
{
#undef DYNAMICALLY_ASSIGNED_FCALL_IMPL
#define DYNAMICALLY_ASSIGNED_FCALL_IMPL(id,defaultimpl) id,
DYNAMICALLY_ASSIGNED_FCALLS()
NUM_DYNAMICALLY_ASSIGNED_FCALL_IMPLEMENTATIONS,
InvalidDynamicFCallId = -1
};
};
extern "C" FCDECL1(VOID, FCComCtor, LPVOID pV);
#endif // _ECALL_H_
| // Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
// ECALL.H -
//
// Handles our private native calling interface.
//
#ifndef _ECALL_H_
#define _ECALL_H_
#include "fcall.h"
class MethodDesc;
// CoreCLR defines fewer FCalls so make the hashtable even smaller.
#define FCALL_HASH_SIZE 127
typedef DPTR(struct ECHash) PTR_ECHash;
struct ECHash
{
PTR_ECHash m_pNext;
PCODE m_pImplementation;
PTR_MethodDesc m_pMD; // for reverse mapping
};
#ifdef DACCESS_COMPILE
GVAL_DECL(TADDR, gLowestFCall);
GVAL_DECL(TADDR, gHighestFCall);
GARY_DECL(PTR_ECHash, gFCallMethods, FCALL_HASH_SIZE);
#endif
enum {
FCFuncFlag_EndOfArray = 0x01,
FCFuncFlag_HasSignature = 0x02,
FCFuncFlag_Unreferenced = 0x04, // Suppress unused fcall check
};
struct ECFunc {
UINT_PTR m_dwFlags;
LPVOID m_pImplementation;
LPCSTR m_szMethodName;
LPHARDCODEDMETASIG m_pMethodSig; // Optional field. It is valid only if HasSignature() is set.
bool IsEndOfArray() { LIMITED_METHOD_CONTRACT; return !!(m_dwFlags & FCFuncFlag_EndOfArray); }
bool HasSignature() { LIMITED_METHOD_CONTRACT; return !!(m_dwFlags & FCFuncFlag_HasSignature); }
bool IsUnreferenced(){ LIMITED_METHOD_CONTRACT; return !!(m_dwFlags & FCFuncFlag_Unreferenced); }
int DynamicID() { LIMITED_METHOD_CONTRACT; return (int) ((INT8)(m_dwFlags >> 24)); }
ECFunc* NextInArray()
{
LIMITED_METHOD_CONTRACT;
return (ECFunc*)((BYTE*)this +
(HasSignature() ? sizeof(ECFunc) : offsetof(ECFunc, m_pMethodSig)));
}
};
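// Layout note: ECFunc entries are variable-length; an entry without a
// signature omits the trailing m_pMethodSig field, and NextInArray()
// advances by whichever of the two sizes applies.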
struct ECClass
{
LPCSTR m_szClassName;
LPCSTR m_szNameSpace;
const LPVOID * m_pECFunc;
};
//=======================================================================
// Collects code and data pertaining to the ECall interface.
//=======================================================================
class ECall
{
public:
//---------------------------------------------------------
// One-time init
//---------------------------------------------------------
static void Init();
static PCODE GetFCallImpl(MethodDesc* pMD, BOOL * pfSharedOrDynamicFCallImpl = NULL);
static MethodDesc* MapTargetBackToMethod(PCODE pTarg, PCODE * ppAdjustedEntryPoint = NULL);
static DWORD GetIDForMethod(MethodDesc *pMD);
    // Some fcalls (delegate ctors and tlbimpl ctors) share one implementation.
    // We should never patch the vtable for these since they have a 1:N mapping between
// MethodDesc and the actual implementation
static BOOL IsSharedFCallImpl(PCODE pImpl);
static BOOL CheckUnusedECalls(SetSHash<DWORD>& usedIDs);
static void DynamicallyAssignFCallImpl(PCODE impl, DWORD index);
static void PopulateManagedStringConstructors();
static void PopulateManagedCastHelpers();
#ifdef DACCESS_COMPILE
// Enumerates all gFCallMethods for minidumps.
static void EnumFCallMethods();
#endif // DACCESS_COMPILE
#define _DYNAMICALLY_ASSIGNED_FCALLS_BASE() \
DYNAMICALLY_ASSIGNED_FCALL_IMPL(FastAllocateString, FramedAllocateString) \
DYNAMICALLY_ASSIGNED_FCALL_IMPL(CtorCharArrayManaged, NULL) \
DYNAMICALLY_ASSIGNED_FCALL_IMPL(CtorCharArrayStartLengthManaged, NULL) \
DYNAMICALLY_ASSIGNED_FCALL_IMPL(CtorCharCountManaged, NULL) \
DYNAMICALLY_ASSIGNED_FCALL_IMPL(CtorCharPtrManaged, NULL) \
DYNAMICALLY_ASSIGNED_FCALL_IMPL(CtorCharPtrStartLengthManaged, NULL) \
DYNAMICALLY_ASSIGNED_FCALL_IMPL(CtorReadOnlySpanOfCharManaged, NULL) \
DYNAMICALLY_ASSIGNED_FCALL_IMPL(CtorSBytePtrManaged, NULL) \
DYNAMICALLY_ASSIGNED_FCALL_IMPL(CtorSBytePtrStartLengthManaged, NULL) \
DYNAMICALLY_ASSIGNED_FCALL_IMPL(CtorSBytePtrStartLengthEncodingManaged, NULL) \
#define DYNAMICALLY_ASSIGNED_FCALLS() _DYNAMICALLY_ASSIGNED_FCALLS_BASE()
enum
{
#undef DYNAMICALLY_ASSIGNED_FCALL_IMPL
#define DYNAMICALLY_ASSIGNED_FCALL_IMPL(id,defaultimpl) id,
DYNAMICALLY_ASSIGNED_FCALLS()
NUM_DYNAMICALLY_ASSIGNED_FCALL_IMPLEMENTATIONS,
InvalidDynamicFCallId = -1
};
};
extern "C" FCDECL1(VOID, FCComCtor, LPVOID pV);
#endif // _ECALL_H_
| -1 |
dotnet/runtime | 66,211 | [mono] Remove SkipVerification support from the runtime | CAS support was removed in .NET Core. This allows us removing a bunch of code that is unused, e.g. the dependency on libiconv. | akoeplinger | 2022-03-04T19:47:04Z | 2022-03-06T13:44:33Z | b463b1630dbf1be5b013208a9fa73e1ecd6c774c | be629f49a350d526de2c65981294734cee420b90 | [mono] Remove SkipVerification support from the runtime. CAS support was removed in .NET Core. This allows us removing a bunch of code that is unused, e.g. the dependency on libiconv. | ./src/native/corehost/CMakeLists.txt | cmake_minimum_required(VERSION 3.6.2)
if (CMAKE_VERSION VERSION_GREATER 3.15 OR CMAKE_VERSION VERSION_EQUAL 3.15)
cmake_policy(SET CMP0091 NEW)
endif()
project(corehost)
include(../../../eng/native/configurepaths.cmake)
include(${CLR_ENG_NATIVE_DIR}/configurecompiler.cmake)
if (MSVC)
add_compile_options($<$<COMPILE_LANGUAGE:C,CXX>:/wd4996>)
add_compile_options($<$<COMPILE_LANGUAGE:C,CXX>:/wd4267>)
add_compile_options($<$<COMPILE_LANGUAGE:C,CXX>:/wd4018>)
add_compile_options($<$<COMPILE_LANGUAGE:C,CXX>:/wd4200>)
add_compile_options($<$<COMPILE_LANGUAGE:C,CXX>:/wd4244>)
# Host components don't try to handle asynchronous exceptions
add_compile_options($<$<COMPILE_LANGUAGE:CXX>:/EHsc>)
elseif (CMAKE_CXX_COMPILER_ID MATCHES GNU)
# Prevents libc from calling pthread_cond_destroy on static objects in
# dlopen()'ed library which we dlclose() in pal::unload_library.
add_compile_options($<$<COMPILE_LANGUAGE:CXX>:-fno-use-cxa-atexit>)
endif()
add_subdirectory(hostcommon)
add_subdirectory(fxr)
add_subdirectory(hostpolicy)
add_subdirectory(apphost)
add_subdirectory(dotnet)
add_subdirectory(nethost)
add_subdirectory(test)
if (NOT RUNTIME_FLAVOR STREQUAL Mono)
if(CLR_CMAKE_TARGET_WIN32)
add_subdirectory(comhost)
add_subdirectory(ijwhost)
endif()
endif()
| cmake_minimum_required(VERSION 3.6.2)
if (CMAKE_VERSION VERSION_GREATER 3.15 OR CMAKE_VERSION VERSION_EQUAL 3.15)
cmake_policy(SET CMP0091 NEW)
endif()
project(corehost)
include(../../../eng/native/configurepaths.cmake)
include(${CLR_ENG_NATIVE_DIR}/configurecompiler.cmake)
if (MSVC)
add_compile_options($<$<COMPILE_LANGUAGE:C,CXX>:/wd4996>)
add_compile_options($<$<COMPILE_LANGUAGE:C,CXX>:/wd4267>)
add_compile_options($<$<COMPILE_LANGUAGE:C,CXX>:/wd4018>)
add_compile_options($<$<COMPILE_LANGUAGE:C,CXX>:/wd4200>)
add_compile_options($<$<COMPILE_LANGUAGE:C,CXX>:/wd4244>)
# Host components don't try to handle asynchronous exceptions
add_compile_options($<$<COMPILE_LANGUAGE:CXX>:/EHsc>)
elseif (CMAKE_CXX_COMPILER_ID MATCHES GNU)
# Prevents libc from calling pthread_cond_destroy on static objects in
# dlopen()'ed library which we dlclose() in pal::unload_library.
add_compile_options($<$<COMPILE_LANGUAGE:CXX>:-fno-use-cxa-atexit>)
endif()
add_subdirectory(hostcommon)
add_subdirectory(fxr)
add_subdirectory(hostpolicy)
add_subdirectory(apphost)
add_subdirectory(dotnet)
add_subdirectory(nethost)
add_subdirectory(test)
if (NOT RUNTIME_FLAVOR STREQUAL Mono)
if(CLR_CMAKE_TARGET_WIN32)
add_subdirectory(comhost)
add_subdirectory(ijwhost)
endif()
endif()
| -1 |
dotnet/runtime | 66,211 | [mono] Remove SkipVerification support from the runtime | CAS support was removed in .NET Core. This allows us removing a bunch of code that is unused, e.g. the dependency on libiconv. | akoeplinger | 2022-03-04T19:47:04Z | 2022-03-06T13:44:33Z | b463b1630dbf1be5b013208a9fa73e1ecd6c774c | be629f49a350d526de2c65981294734cee420b90 | [mono] Remove SkipVerification support from the runtime. CAS support was removed in .NET Core. This allows us removing a bunch of code that is unused, e.g. the dependency on libiconv. | ./src/libraries/System.Private.Xml/tests/Xslt/TestFiles/TestData/xsltc/baseline/cnt10.txt | DIFFERENT
Hello, world! | DIFFERENT
Hello, world! | -1 |
dotnet/runtime | 66,211 | [mono] Remove SkipVerification support from the runtime | CAS support was removed in .NET Core. This allows us removing a bunch of code that is unused, e.g. the dependency on libiconv. | akoeplinger | 2022-03-04T19:47:04Z | 2022-03-06T13:44:33Z | b463b1630dbf1be5b013208a9fa73e1ecd6c774c | be629f49a350d526de2c65981294734cee420b90 | [mono] Remove SkipVerification support from the runtime. CAS support was removed in .NET Core. This allows us removing a bunch of code that is unused, e.g. the dependency on libiconv. | ./src/native/libs/System.Globalization.Native/pal_collation.c | // Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>
#include <stdint.h>
#include <search.h>
#include <string.h>
#include "pal_errors_internal.h"
#include "pal_collation.h"
#include "pal_atomic.h"
c_static_assert_msg(UCOL_EQUAL == 0, "managed side requires 0 for equal strings");
c_static_assert_msg(UCOL_LESS < 0, "managed side requires less than zero for a < b");
c_static_assert_msg(UCOL_GREATER > 0, "managed side requires greater than zero for a > b");
c_static_assert_msg(USEARCH_DONE == -1, "managed side requires -1 for not found");
#define UCOL_IGNORABLE 0
#define UCOL_PRIMARYORDERMASK ((int32_t)0xFFFF0000)
#define UCOL_SECONDARYORDERMASK 0x0000FF00
#define UCOL_TERTIARYORDERMASK 0x000000FF
#define CompareOptionsIgnoreCase 0x1
#define CompareOptionsIgnoreNonSpace 0x2
#define CompareOptionsIgnoreSymbols 0x4
#define CompareOptionsIgnoreKanaType 0x8
#define CompareOptionsIgnoreWidth 0x10
#define CompareOptionsMask 0x1f
// #define CompareOptionsStringSort 0x20000000
// ICU's default is to use "StringSort", i.e. nonalphanumeric symbols come before alphanumeric.
// When StringSort is not specified (.NET's default), the sort order will be different between
// Windows and Unix platforms. The nonalphanumeric symbols will come after alphanumeric
// characters on Windows, but before on Unix.
// Since locale - specific string sort order can change from one version of Windows to the next,
// there is no reason to guarantee string sort order between Windows and ICU. Thus trying to
// change ICU's default behavior here isn't really justified unless someone has a strong reason
// for !StringSort to behave differently.
#define USED_STRING_SEARCH ((UStringSearch*) (-1))
typedef struct { int32_t key; UCollator* UCollator; } TCollatorMap;
typedef struct SearchIteratorNode
{
UStringSearch* searchIterator;
struct SearchIteratorNode* next;
} SearchIteratorNode;
/*
* For increased performance, we cache the UCollator objects for a locale and
* share them across threads. This is safe (and supported in ICU) if we ensure
* multiple threads are only ever dealing with const UCollators.
*/
struct SortHandle
{
UCollator* collatorsPerOption[CompareOptionsMask + 1];
SearchIteratorNode searchIteratorList[CompareOptionsMask + 1];
};
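// Caching scheme implied above: each CompareOptions subset (5 option bits,
// so 32 slots) gets a lazily created UCollator plus a list of reusable
// UStringSearch iterators, shared read-only across threads.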
// Hiragana character range
static const UChar hiraganaStart = 0x3041;
static const UChar hiraganaEnd = 0x309e;
static const UChar hiraganaToKatakanaOffset = 0x30a1 - 0x3041;
// Length of the fullwidth characters from 'A' to 'Z'
// We'll use it to map the casing of the full width 'A' to 'Z' characters
static const int32_t FullWidthAlphabetRangeLength = 0xFF3A - 0xFF21 + 1;
// Mapping between half- and fullwidth characters.
// LowerChars are the characters that should sort lower than HigherChars
static const UChar g_HalfFullLowerChars[] = {
// halfwidth characters
0x0021, 0x0022, 0x0023, 0x0024, 0x0025, 0x0026, 0x0027, 0x0028, 0x0029, 0x002a, 0x002b, 0x002c, 0x002d, 0x002e, 0x002f,
0x0030, 0x0031, 0x0032, 0x0033, 0x0034, 0x0035, 0x0036, 0x0037, 0x0038, 0x0039, 0x003a, 0x003b, 0x003c, 0x003d, 0x003e,
0x003f, 0x0040, 0x0041, 0x0042, 0x0043, 0x0044, 0x0045, 0x0046, 0x0047, 0x0048, 0x0049, 0x004a, 0x004b, 0x004c, 0x004d,
0x004e, 0x004f, 0x0050, 0x0051, 0x0052, 0x0053, 0x0054, 0x0055, 0x0056, 0x0057, 0x0058, 0x0059, 0x005a, 0x005b, 0x005d,
0x005e, 0x005f, 0x0060, 0x0061, 0x0062, 0x0063, 0x0064, 0x0065, 0x0066, 0x0067, 0x0068, 0x0069, 0x006a, 0x006b, 0x006c,
0x006d, 0x006e, 0x006f, 0x0070, 0x0071, 0x0072, 0x0073, 0x0074, 0x0075, 0x0076, 0x0077, 0x0078, 0x0079, 0x007a, 0x007b,
0x007c, 0x007d, 0x007e, 0x00a2, 0x00a3, 0x00ac, 0x00af, 0x00a6, 0x00a5, 0x20a9,
// fullwidth characters
0x3002, 0x300c, 0x300d, 0x3001, 0x30fb, 0x30f2, 0x30a1, 0x30a3, 0x30a5, 0x30a7, 0x30a9, 0x30e3, 0x30e5, 0x30e7, 0x30c3,
0x30a2, 0x30a4, 0x30a6, 0x30a8, 0x30aa, 0x30ab, 0x30ad, 0x30af, 0x30b1, 0x30b3, 0x30b5, 0x30b7, 0x30b9, 0x30bb, 0x30bd,
0x30bf, 0x30c1, 0x30c4, 0x30c6, 0x30c8, 0x30ca, 0x30cb, 0x30cc, 0x30cd, 0x30ce, 0x30cf, 0x30d2, 0x30d5, 0x30d8, 0x30db,
0x30de, 0x30df, 0x30e0, 0x30e1, 0x30e2, 0x30e4, 0x30e6, 0x30e8, 0x30e9, 0x30ea, 0x30eb, 0x30ec, 0x30ed, 0x30ef, 0x30f3,
0x3164, 0x3131, 0x3132, 0x3133, 0x3134, 0x3135, 0x3136, 0x3137, 0x3138, 0x3139, 0x313a, 0x313b, 0x313c, 0x313d, 0x313e,
0x313f, 0x3140, 0x3141, 0x3142, 0x3143, 0x3144, 0x3145, 0x3146, 0x3147, 0x3148, 0x3149, 0x314a, 0x314b, 0x314c, 0x314d,
0x314e, 0x314f, 0x3150, 0x3151, 0x3152, 0x3153, 0x3154, 0x3155, 0x3156, 0x3157, 0x3158, 0x3159, 0x315a, 0x315b, 0x315c,
0x315d, 0x315e, 0x315f, 0x3160, 0x3161, 0x3162, 0x3163
};
static const UChar g_HalfFullHigherChars[] = {
// fullwidth characters
0xff01, 0xff02, 0xff03, 0xff04, 0xff05, 0xff06, 0xff07, 0xff08, 0xff09, 0xff0a, 0xff0b, 0xff0c, 0xff0d, 0xff0e, 0xff0f,
0xff10, 0xff11, 0xff12, 0xff13, 0xff14, 0xff15, 0xff16, 0xff17, 0xff18, 0xff19, 0xff1a, 0xff1b, 0xff1c, 0xff1d, 0xff1e,
0xff1f, 0xff20, 0xff21, 0xff22, 0xff23, 0xff24, 0xff25, 0xff26, 0xff27, 0xff28, 0xff29, 0xff2a, 0xff2b, 0xff2c, 0xff2d,
0xff2e, 0xff2f, 0xff30, 0xff31, 0xff32, 0xff33, 0xff34, 0xff35, 0xff36, 0xff37, 0xff38, 0xff39, 0xff3a, 0xff3b, 0xff3d,
0xff3e, 0xff3f, 0xff40, 0xff41, 0xff42, 0xff43, 0xff44, 0xff45, 0xff46, 0xff47, 0xff48, 0xff49, 0xff4a, 0xff4b, 0xff4c,
0xff4d, 0xff4e, 0xff4f, 0xff50, 0xff51, 0xff52, 0xff53, 0xff54, 0xff55, 0xff56, 0xff57, 0xff58, 0xff59, 0xff5a, 0xff5b,
0xff5c, 0xff5d, 0xff5e, 0xffe0, 0xffe1, 0xffe2, 0xffe3, 0xffe4, 0xffe5, 0xffe6,
// halfwidth characters
0xff61, 0xff62, 0xff63, 0xff64, 0xff65, 0xff66, 0xff67, 0xff68, 0xff69, 0xff6a, 0xff6b, 0xff6c, 0xff6d, 0xff6e, 0xff6f,
0xff71, 0xff72, 0xff73, 0xff74, 0xff75, 0xff76, 0xff77, 0xff78, 0xff79, 0xff7a, 0xff7b, 0xff7c, 0xff7d, 0xff7e, 0xff7f,
0xff80, 0xff81, 0xff82, 0xff83, 0xff84, 0xff85, 0xff86, 0xff87, 0xff88, 0xff89, 0xff8a, 0xff8b, 0xff8c, 0xff8d, 0xff8e,
0xff8f, 0xff90, 0xff91, 0xff92, 0xff93, 0xff94, 0xff95, 0xff96, 0xff97, 0xff98, 0xff99, 0xff9a, 0xff9b, 0xff9c, 0xff9d,
0xffa0, 0xffa1, 0xffa2, 0xffa3, 0xffa4, 0xffa5, 0xffa6, 0xffa7, 0xffa8, 0xffa9, 0xffaa, 0xffab, 0xffac, 0xffad, 0xffae,
0xffaf, 0xffb0, 0xffb1, 0xffb2, 0xffb3, 0xffb4, 0xffb5, 0xffb6, 0xffb7, 0xffb8, 0xffb9, 0xffba, 0xffbb, 0xffbc, 0xffbd,
0xffbe, 0xffc2, 0xffc3, 0xffc4, 0xffc5, 0xffc6, 0xffc7, 0xffca, 0xffcb, 0xffcc, 0xffcd, 0xffce, 0xffcf, 0xffd2, 0xffd3,
0xffd4, 0xffd5, 0xffd6, 0xffd7, 0xffda, 0xffdb, 0xffdc
};
static const int32_t g_HalfFullCharsLength = (sizeof(g_HalfFullHigherChars) / sizeof(UChar));
// Hiragana without [semi-]voiced sound mark for custom collation rules
// If Hiragana with [semi-]voiced sound mark is added to custom collation rules, there is a conflict
// between the custom rule and some default rule.
static const UChar g_HiraganaWithoutVoicedSoundMarkChars[] = {
0x3041, 0x3042, 0x3043, 0x3044, 0x3045, 0x3046, 0x3047, 0x3048, 0x3049, 0x304A, 0x304B, 0x304D, 0x304F, 0x3051, 0x3053,
0x3055, 0x3057, 0x3059, 0x305B, 0x305D, 0x305F, 0x3061, 0x3063, 0x3064, 0x3066, 0x3068, 0x306A, 0x306B, 0x306C, 0x306D,
0x306E, 0x306F, 0x3072, 0x3075, 0x3078, 0x307B, 0x307E, 0x307F, 0x3080, 0x3081, 0x3082, 0x3083, 0x3084, 0x3085, 0x3086,
0x3087, 0x3088, 0x3089, 0x308A, 0x308B, 0x308C, 0x308D, 0x308E, 0x308F, 0x3090, 0x3091, 0x3092, 0x3093, 0x3095, 0x3096, 0x309D,
};
static const int32_t g_HiraganaWithoutVoicedSoundMarkCharsLength = (sizeof(g_HiraganaWithoutVoicedSoundMarkChars) / sizeof(UChar));
/*
ICU collation rules reserve any punctuation and whitespace characters for use in the syntax.
Thus, to use these characters in a rule, they need to be escaped.
This rule was taken from http://www.unicode.org/reports/tr35/tr35-collation.html#Rules.
*/
static int NeedsEscape(UChar character)
{
return ((0x21 <= character && character <= 0x2f)
|| (0x3a <= character && character <= 0x40)
|| (0x5b <= character && character <= 0x60)
|| (0x7b <= character && character <= 0x7e));
}
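// For reference, the escaped ranges above are the ASCII punctuation runs
// !"#$%&'()*+,-./ (0x21-0x2f), :;<=>?@ (0x3a-0x40), [\]^_` (0x5b-0x60) and {|}~ (0x7b-0x7e).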
/*
Gets a value indicating whether the HalfFullHigher character is considered a symbol character.
The ranges specified here only check for characters in the g_HalfFullHigherChars list and need
to be combined with NeedsEscape above (applied to g_HalfFullLowerChars) to cover all the IgnoreSymbols characters.
This is done so we can use range checks instead of comparing individual characters.
These ranges were obtained by running the above characters through .NET CompareInfo.Compare
with CompareOptions.IgnoreSymbols on Windows.
*/
static int IsHalfFullHigherSymbol(UChar character)
{
return (0xffe0 <= character && character <= 0xffe6)
|| (0xff61 <= character && character <= 0xff65);
}
/*
Fill custom collation rules for ignoreKana cases.
Since the CompareOptions flags don't map 1:1 with ICU default functionality, we need to fall back to using
custom rules in order to support IgnoreKanaType and IgnoreWidth CompareOptions correctly.
*/
static void FillIgnoreKanaRules(UChar* completeRules, int32_t* fillIndex, int32_t completeRulesLength, int32_t isIgnoreKanaType)
{
assert((*fillIndex) + (4 * (hiraganaEnd - hiraganaStart + 1)) <= completeRulesLength);
if ((*fillIndex) + (4 * (hiraganaEnd - hiraganaStart + 1)) > completeRulesLength) // check the allocated size
{
return;
}
if (isIgnoreKanaType)
{
for (UChar hiraganaChar = hiraganaStart; hiraganaChar <= hiraganaEnd; hiraganaChar++)
{
// Hiragana is the range 3041 to 3096 & 309D & 309E
if (hiraganaChar <= 0x3096 || hiraganaChar >= 0x309D) // characters between 3096 and 309D are not mapped to katakana
{
completeRules[*fillIndex] = '&';
completeRules[(*fillIndex) + 1] = hiraganaChar;
completeRules[(*fillIndex) + 2] = '=';
completeRules[(*fillIndex) + 3] = hiraganaChar + hiraganaToKatakanaOffset;
(*fillIndex) += 4;
}
}
}
else
{
// Avoid conflicts between default [semi-]voiced sound mark rules and custom rules
for (int i = 0; i < g_HiraganaWithoutVoicedSoundMarkCharsLength; i++)
{
UChar hiraganaChar = g_HiraganaWithoutVoicedSoundMarkChars[i];
completeRules[*fillIndex] = '&';
completeRules[(*fillIndex) + 1] = hiraganaChar;
completeRules[(*fillIndex) + 2] = '<';
completeRules[(*fillIndex) + 3] = hiraganaChar + hiraganaToKatakanaOffset;
(*fillIndex) += 4;
}
}
}
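// For illustration: with isIgnoreKanaType set, the loop above appends rule fragments of the form
// "&<hiragana>=<katakana>", e.g. "&\u3042=\u30A2" (あ = ア); without it, the fragments use '<'
// instead of '=' to force a primary-level difference between the two kana forms.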
/*
Fill custom collation rules for ignoreWidth cases.
Since the CompareOptions flags don't map 1:1 with ICU default functionality, we need to fall back to using
custom rules in order to support IgnoreKanaType and IgnoreWidth CompareOptions correctly.
*/
static void FillIgnoreWidthRules(UChar* completeRules, int32_t* fillIndex, int32_t completeRulesLength, int32_t isIgnoreWidth, int32_t isIgnoreCase, int32_t isIgnoreSymbols)
{
UChar compareChar = isIgnoreWidth ? '=' : '<';
UChar lowerChar;
UChar higherChar;
int needsEscape;
assert((*fillIndex) + (5 * g_HalfFullCharsLength) <= completeRulesLength);
if ((*fillIndex) + (5 * g_HalfFullCharsLength) > completeRulesLength)
{
return;
}
for (int i = 0; i < g_HalfFullCharsLength; i++)
{
lowerChar = g_HalfFullLowerChars[i];
higherChar = g_HalfFullHigherChars[i];
// the lower chars need to be checked for escaping since they contain ASCII punctuation
needsEscape = NeedsEscape(lowerChar);
// when isIgnoreSymbols is true and we are not ignoring width, check to see if
// this character is a symbol, and if so skip it
if (!(isIgnoreSymbols && (!isIgnoreWidth) && (needsEscape || IsHalfFullHigherSymbol(higherChar))))
{
completeRules[*fillIndex] = '&';
(*fillIndex)++;
if (needsEscape)
{
completeRules[*fillIndex] = '\\';
(*fillIndex)++;
}
completeRules[*fillIndex] = lowerChar;
completeRules[(*fillIndex) + 1] = compareChar;
completeRules[(*fillIndex) + 2] = higherChar;
(*fillIndex) += 3;
}
}
// When isIgnoreWidth is false, we sort the normal-width Latin alphabet characters before the full-width Latin alphabet characters,
// e.g. `a` < `a` (\uFF41).
// This breaks the casing of the full-width Latin alphabet characters,
// e.g. `a` (\uFF41) == `A` (\uFF21).
// We restore that case mapping here.
if (isIgnoreCase && (!isIgnoreWidth))
{
assert((*fillIndex) + (FullWidthAlphabetRangeLength * 4) <= completeRulesLength);
const int UpperCaseToLowerCaseOffset = 0xFF41 - 0xFF21;
for (UChar ch = 0xFF21; ch <= 0xFF3A; ch++)
{
completeRules[*fillIndex] = '&';
completeRules[(*fillIndex) + 1] = ch + UpperCaseToLowerCaseOffset;
completeRules[(*fillIndex) + 2] = '=';
completeRules[(*fillIndex) + 3] = ch;
(*fillIndex) += 4;
}
}
}
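// For illustration: for the first table entry (halfwidth '!' 0x0021 vs fullwidth '！' 0xff01),
// the loop above emits "&\!=！" when width is ignored, or '<' in place of '=' when it is not;
// the backslash is required because '!' is punctuation reserved by the rule syntax.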
/*
* The collator returned by this function is owned by the caller and must be
* closed when this method returns with a U_SUCCESS UErrorCode.
*
* On error, the return value is undefined.
*/
static UCollator* CloneCollatorWithOptions(const UCollator* pCollator, int32_t options, UErrorCode* pErr)
{
UColAttributeValue strength = ucol_getStrength(pCollator);
int32_t isIgnoreCase = (options & CompareOptionsIgnoreCase) == CompareOptionsIgnoreCase;
int32_t isIgnoreNonSpace = (options & CompareOptionsIgnoreNonSpace) == CompareOptionsIgnoreNonSpace;
int32_t isIgnoreSymbols = (options & CompareOptionsIgnoreSymbols) == CompareOptionsIgnoreSymbols;
int32_t isIgnoreKanaType = (options & CompareOptionsIgnoreKanaType) == CompareOptionsIgnoreKanaType;
int32_t isIgnoreWidth = (options & CompareOptionsIgnoreWidth) == CompareOptionsIgnoreWidth;
if (isIgnoreCase)
{
strength = UCOL_SECONDARY;
}
if (isIgnoreNonSpace)
{
strength = UCOL_PRIMARY;
}
UCollator* pClonedCollator;
// IgnoreWidth - it would be easy to implement IgnoreWidth by just setting Strength <= Secondary.
// For any strength under that, the width of the characters will be ignored.
// For strength above that, the width of the characters will be used in differentiation.
// a. However, this doesn’t play nice with IgnoreCase, since these Strength levels are overloaded.
// b. So the plan to support IgnoreWidth is to use customized rules.
// i. Since the character width is differentiated at “Tertiary” strength, we only need to use custom rules in specific cases.
// ii. If (IgnoreWidth == true && Strength > “Secondary”)
// 1. Build up a custom rule set for each half-width character and say that it is equal to the corresponding full-width character.
// a. ex: “0x30F2 = 0xFF66 & 0x30F3 = 0xFF9D & …”
// iii. If (IgnoreWidth == false && Strength <= “Secondary”)
// 1. Build up a custom rule set saying that the half-width and full-width characters have a primary level difference (which will cause it always to be unequal)
// a. Ex. “0x30F2 < 0xFF66 & 0x30F3 < 0xFF9D & …”
// IgnoreKanaType – this works the same way as IgnoreWidth, but it uses the set of Hiragana and Katakana characters instead of half-width vs full-width characters to build the rules.
int32_t applyIgnoreKanaTypeCustomRule = isIgnoreKanaType ^ (strength < UCOL_TERTIARY); // kana differs at the tertiary level
int32_t applyIgnoreWidthTypeCustomRule = isIgnoreWidth ^ (strength < UCOL_TERTIARY); // character width differs at the tertiary level
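// The XOR covers both directions: a custom rule is needed either when the option is set but the
// strength still distinguishes the property (tertiary or above), or when the option is not set
// but the strength would otherwise ignore the property (below tertiary).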
int32_t customRuleLength = 0;
if (applyIgnoreKanaTypeCustomRule || applyIgnoreWidthTypeCustomRule)
{
// If we need to create customRules, the KanaType custom rule will be 88 kana characters * 4 = 352 chars long
// and the Width custom rule will be at most 212 halfwidth characters * 5 = 1060 chars long.
customRuleLength = (applyIgnoreKanaTypeCustomRule ? 4 * (hiraganaEnd - hiraganaStart + 1) : 0) +
(applyIgnoreWidthTypeCustomRule ? ((5 * g_HalfFullCharsLength) + (isIgnoreCase ? 4 * FullWidthAlphabetRangeLength : 0)) : 0) +
1; // Add an extra terminator rule at the end to force ICU to apply the last entered rule; otherwise the last rule gets ignored.
}
if (customRuleLength == 0)
{
pClonedCollator = ucol_safeClone(pCollator, NULL, NULL, pErr);
}
else
{
int32_t rulesLength;
const UChar* localeRules = ucol_getRules(pCollator, &rulesLength);
int32_t completeRulesLength = rulesLength + customRuleLength + 1;
UChar* completeRules = (UChar*)calloc((size_t)completeRulesLength, sizeof(UChar));
if (completeRules == NULL)
{
*pErr = U_MEMORY_ALLOCATION_ERROR;
return NULL;
}
for (int i = 0; i < rulesLength; i++)
{
completeRules[i] = localeRules[i];
}
if (applyIgnoreKanaTypeCustomRule)
{
FillIgnoreKanaRules(completeRules, &rulesLength, completeRulesLength, isIgnoreKanaType);
}
assert(rulesLength <= completeRulesLength);
if (applyIgnoreWidthTypeCustomRule)
{
FillIgnoreWidthRules(completeRules, &rulesLength, completeRulesLength, isIgnoreWidth, isIgnoreCase, isIgnoreSymbols);
}
assert(rulesLength + 4 <= completeRulesLength);
// Add an extra terminator rule at the end to force ICU to apply the last entered rule; otherwise the last rule gets ignored.
completeRules[rulesLength] = '&';
completeRules[rulesLength + 1] = 'a';
completeRules[rulesLength + 2] = '=';
completeRules[rulesLength + 3] = 'a';
rulesLength += 4;
pClonedCollator = ucol_openRules(completeRules, rulesLength, UCOL_DEFAULT, strength, NULL, pErr);
free(completeRules);
}
if (isIgnoreSymbols)
{
ucol_setAttribute(pClonedCollator, UCOL_ALTERNATE_HANDLING, UCOL_SHIFTED, pErr);
#if !defined(STATIC_ICU)
if (ucol_setMaxVariable_ptr != NULL)
{
// by default, ICU alternate shifted handling only ignores punctuation, but
// IgnoreSymbols needs symbols and currency as well, so change the "variable top"
// to include all symbols and currency
ucol_setMaxVariable(pClonedCollator, UCOL_REORDER_CODE_CURRENCY, pErr);
}
else
{
assert(ucol_setVariableTop_ptr != NULL);
// 0xfdfc is the last currency character before the first digit character
// in http://source.icu-project.org/repos/icu/icu/tags/release-52-1/source/data/unidata/FractionalUCA.txt
const UChar ignoreSymbolsVariableTop[] = { 0xfdfc };
ucol_setVariableTop_ptr(pClonedCollator, ignoreSymbolsVariableTop, 1, pErr);
}
#else // !defined(STATIC_ICU)
// by default, ICU alternate shifted handling only ignores punctuation, but
// IgnoreSymbols needs symbols and currency as well, so change the "variable top"
// to include all symbols and currency
#if HAVE_SET_MAX_VARIABLE
ucol_setMaxVariable(pClonedCollator, UCOL_REORDER_CODE_CURRENCY, pErr);
#else
// 0xfdfc is the last currency character before the first digit character
// in http://source.icu-project.org/repos/icu/icu/tags/release-52-1/source/data/unidata/FractionalUCA.txt
const UChar ignoreSymbolsVariableTop[] = { 0xfdfc };
ucol_setVariableTop(pClonedCollator, ignoreSymbolsVariableTop, 1, pErr);
#endif
#endif //!defined(STATIC_ICU)
}
ucol_setAttribute(pClonedCollator, UCOL_STRENGTH, strength, pErr);
// casing differs at the tertiary level.
// if strength is less than tertiary, but we are not ignoring case, then we need to flip CASE_LEVEL On
if (strength < UCOL_TERTIARY && !isIgnoreCase)
{
ucol_setAttribute(pClonedCollator, UCOL_CASE_LEVEL, UCOL_ON, pErr);
}
return pClonedCollator;
}
// Returns TRUE if all the collation elements in str are completely ignorable
static int CanIgnoreAllCollationElements(const UCollator* pColl, const UChar* lpStr, int32_t length)
{
int result = true;
UErrorCode err = U_ZERO_ERROR;
UCollationElements* pCollElem = ucol_openElements(pColl, lpStr, length, &err);
if (U_SUCCESS(err))
{
int32_t curCollElem = UCOL_NULLORDER;
while ((curCollElem = ucol_next(pCollElem, &err)) != UCOL_NULLORDER)
{
if (curCollElem != UCOL_IGNORABLE)
{
result = false;
break;
}
}
ucol_closeElements(pCollElem);
}
return U_SUCCESS(err) ? result : false;
}
static void CreateSortHandle(SortHandle** ppSortHandle)
{
// calloc already zero-initializes the handle, so no additional memset is needed
*ppSortHandle = (SortHandle*)calloc(1, sizeof(SortHandle));
}
ResultCode GlobalizationNative_GetSortHandle(const char* lpLocaleName, SortHandle** ppSortHandle)
{
assert(ppSortHandle != NULL);
CreateSortHandle(ppSortHandle);
if ((*ppSortHandle) == NULL)
{
return GetResultCode(U_MEMORY_ALLOCATION_ERROR);
}
UErrorCode err = U_ZERO_ERROR;
(*ppSortHandle)->collatorsPerOption[0] = ucol_open(lpLocaleName, &err);
if (U_FAILURE(err))
{
free(*ppSortHandle);
(*ppSortHandle) = NULL;
}
return GetResultCode(err);
}
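// Illustrative call sequence (a sketch only; error handling elided, and it assumes the Success
// value of ResultCode from pal_errors_internal.h):
//
//   SortHandle* pSortHandle = NULL;
//   if (GlobalizationNative_GetSortHandle("en_US", &pSortHandle) == Success)
//   {
//       // ... use the handle with the compare/search functions below ...
//       GlobalizationNative_CloseSortHandle(pSortHandle);
//   }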
static const char* BreakIteratorRuleOld = // supported on older ICU versions, like 52
"$CR = [\\p{Grapheme_Cluster_Break = CR}]; \n" \
"$LF = [\\p{Grapheme_Cluster_Break = LF}]; \n" \
"$Control = [\\p{Grapheme_Cluster_Break = Control}]; \n" \
"$Extend = [\\p{Grapheme_Cluster_Break = Extend}]; \n" \
"$SpacingMark = [\\p{Grapheme_Cluster_Break = SpacingMark}]; \n" \
"$Regional_Indicator = [\\p{Grapheme_Cluster_Break = Regional_Indicator}]; \n" \
"$L = [\\p{Grapheme_Cluster_Break = L}]; \n" \
"$V = [\\p{Grapheme_Cluster_Break = V}]; \n" \
"$T = [\\p{Grapheme_Cluster_Break = T}]; \n" \
"$LV = [\\p{Grapheme_Cluster_Break = LV}]; \n" \
"$LVT = [\\p{Grapheme_Cluster_Break = LVT}]; \n" \
"!!chain; \n" \
"!!forward; \n" \
"$L ($L | $V | $LV | $LVT); \n" \
"($LV | $V) ($V | $T); \n" \
"($LVT | $T) $T; \n" \
"$Regional_Indicator $Regional_Indicator; \n" \
"[^$Control $CR $LF] $Extend; \n" \
"[^$Control $CR $LF] $SpacingMark; \n" \
"!!reverse; \n" \
"($L | $V | $LV | $LVT) $L; \n" \
"($V | $T) ($LV | $V); \n" \
"$T ($LVT | $T); \n" \
"$Regional_Indicator $Regional_Indicator; \n" \
"$Extend [^$Control $CR $LF]; \n" \
"$SpacingMark [^$Control $CR $LF]; \n" \
"!!safe_reverse; \n" \
"!!safe_forward; \n";
static const char* BreakIteratorRuleNew = // supported on newer ICU versions like 62 and up
"!!quoted_literals_only; \n" \
"$CR = [\\p{Grapheme_Cluster_Break = CR}]; \n" \
"$LF = [\\p{Grapheme_Cluster_Break = LF}]; \n" \
"$Control = [[\\p{Grapheme_Cluster_Break = Control}]]; \n" \
"$Extend = [[\\p{Grapheme_Cluster_Break = Extend}]]; \n" \
"$ZWJ = [\\p{Grapheme_Cluster_Break = ZWJ}]; \n" \
"$Regional_Indicator = [\\p{Grapheme_Cluster_Break = Regional_Indicator}]; \n" \
"$Prepend = [\\p{Grapheme_Cluster_Break = Prepend}]; \n" \
"$SpacingMark = [\\p{Grapheme_Cluster_Break = SpacingMark}]; \n" \
"$Virama = [\\p{Gujr}\\p{sc=Telu}\\p{sc=Mlym}\\p{sc=Orya}\\p{sc=Beng}\\p{sc=Deva}&\\p{Indic_Syllabic_Category=Virama}]; \n" \
"$LinkingConsonant = [\\p{Gujr}\\p{sc=Telu}\\p{sc=Mlym}\\p{sc=Orya}\\p{sc=Beng}\\p{sc=Deva}&\\p{Indic_Syllabic_Category=Consonant}]; \n" \
"$ExtCccZwj = [[\\p{gcb=Extend}-\\p{ccc=0}] \\p{gcb=ZWJ}]; \n" \
"$L = [\\p{Grapheme_Cluster_Break = L}]; \n" \
"$V = [\\p{Grapheme_Cluster_Break = V}]; \n" \
"$T = [\\p{Grapheme_Cluster_Break = T}]; \n" \
"$LV = [\\p{Grapheme_Cluster_Break = LV}]; \n" \
"$LVT = [\\p{Grapheme_Cluster_Break = LVT}]; \n" \
"$Extended_Pict = [:ExtPict:]; \n" \
"!!chain; \n" \
"!!lookAheadHardBreak; \n" \
"$L ($L | $V | $LV | $LVT); \n" \
"($LV | $V) ($V | $T); \n" \
"($LVT | $T) $T; \n" \
"[^$Control $CR $LF] ($Extend | $ZWJ); \n" \
"[^$Control $CR $LF] $SpacingMark; \n" \
"$Prepend [^$Control $CR $LF]; \n" \
"$LinkingConsonant $ExtCccZwj* $Virama $ExtCccZwj* $LinkingConsonant; \n" \
"$Extended_Pict $Extend* $ZWJ $Extended_Pict; \n" \
"^$Prepend* $Regional_Indicator $Regional_Indicator / $Regional_Indicator; \n" \
"^$Prepend* $Regional_Indicator $Regional_Indicator; \n" \
".;";
static UChar* s_breakIteratorRules = NULL;
// When doing string search operations, ICU internally uses a break iterator which doesn't allow breaking between some characters according to
// the Grapheme Cluster Boundary Rules specified in http://www.unicode.org/reports/tr29/#Grapheme_Cluster_Boundary_Rules.
// Unfortunately, not all rules have the behavior we need in .NET. For example, the rules don't allow breaking between the CR '\r' and LF '\n' characters,
// so searching for "\n" in a string like "\r\n" yields a not-found result.
// We customize the break iterator to exclude the CRxLF rule, which disallows breaking between CR and LF.
// The general rule syntax is explained in https://unicode-org.github.io/icu/userguide/boundaryanalysis/break-rules.html.
// The latest ICU rule definitions live at https://github.com/unicode-org/icu/blob/main/icu4c/source/data/brkitr/rules/char.txt.
static UBreakIterator* CreateCustomizedBreakIterator(void)
{
static UChar emptyString[1];
UBreakIterator* breaker;
UErrorCode status = U_ZERO_ERROR;
if (s_breakIteratorRules != NULL)
{
breaker = ubrk_openRules(s_breakIteratorRules, -1, emptyString, 0, NULL, &status);
return U_FAILURE(status) ? NULL : breaker;
}
int32_t oldRulesLength = (int32_t)strlen(BreakIteratorRuleOld);
int32_t newRulesLength = (int32_t)strlen(BreakIteratorRuleNew);
int32_t breakIteratorRulesLength = newRulesLength > oldRulesLength ? newRulesLength : oldRulesLength;
UChar* rules = (UChar*)calloc((size_t)breakIteratorRulesLength + 1, sizeof(UChar));
if (rules == NULL)
{
return NULL;
}
u_uastrncpy(rules, BreakIteratorRuleNew, newRulesLength);
rules[newRulesLength] = '\0';
breaker = ubrk_openRules(rules, newRulesLength, emptyString, 0, NULL, &status);
if (U_FAILURE(status))
{
status = U_ZERO_ERROR;
u_uastrncpy(rules, BreakIteratorRuleOld, oldRulesLength);
rules[oldRulesLength] = '\0';
breaker = ubrk_openRules(rules, oldRulesLength, emptyString, 0, NULL, &status);
}
if (U_FAILURE(status))
{
free(rules);
return NULL;
}
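// Publish the compiled rules for reuse by later calls. If another thread won the race, free our
// copy; the break iterator opened above stays valid either way.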
UChar* pNull = NULL;
if (!pal_atomic_cas_ptr((void* volatile*)&s_breakIteratorRules, rules, pNull))
{
free(rules);
assert(s_breakIteratorRules != NULL);
}
return breaker;
}
static void CloseSearchIterator(UStringSearch* pSearch)
{
assert(pSearch != NULL);
#if !defined(TARGET_WINDOWS)
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wcast-qual"
#endif
UBreakIterator* breakIterator = (UBreakIterator*)usearch_getBreakIterator(pSearch);
#if !defined(TARGET_WINDOWS)
#pragma GCC diagnostic pop
#endif
usearch_close(pSearch);
if (breakIterator != NULL)
{
ubrk_close(breakIterator);
}
}
void GlobalizationNative_CloseSortHandle(SortHandle* pSortHandle)
{
for (int i = 0; i <= CompareOptionsMask; i++)
{
if (pSortHandle->collatorsPerOption[i] != NULL)
{
UStringSearch* pSearch = pSortHandle->searchIteratorList[i].searchIterator;
if (pSearch != NULL)
{
if (pSearch != USED_STRING_SEARCH)
{
CloseSearchIterator(pSearch);
}
pSortHandle->searchIteratorList[i].searchIterator = NULL;
SearchIteratorNode* pNext = pSortHandle->searchIteratorList[i].next;
pSortHandle->searchIteratorList[i].next = NULL;
while (pNext != NULL)
{
if (pNext->searchIterator != NULL && pNext->searchIterator != USED_STRING_SEARCH)
{
CloseSearchIterator(pNext->searchIterator);
}
SearchIteratorNode* pCurrent = pNext;
pNext = pCurrent->next;
free(pCurrent);
}
}
ucol_close(pSortHandle->collatorsPerOption[i]);
pSortHandle->collatorsPerOption[i] = NULL;
}
}
free(pSortHandle);
}
static const UCollator* GetCollatorFromSortHandle(SortHandle* pSortHandle, int32_t options, UErrorCode* pErr)
{
if (options == 0)
{
return pSortHandle->collatorsPerOption[0];
}
else
{
options &= CompareOptionsMask;
UCollator* pCollator = pSortHandle->collatorsPerOption[options];
if (pCollator != NULL)
{
return pCollator;
}
pCollator = CloneCollatorWithOptions(pSortHandle->collatorsPerOption[0], options, pErr);
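// Cache the clone for this options bitmask. If another thread raced us and already
// published a collator, close our clone and use the cached instance instead.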
UCollator* pNull = NULL;
if (!pal_atomic_cas_ptr((void* volatile*)&pSortHandle->collatorsPerOption[options], pCollator, pNull))
{
ucol_close(pCollator);
pCollator = pSortHandle->collatorsPerOption[options];
assert(pCollator != NULL && "pCollator not expected to be null here.");
}
return pCollator;
}
}
// CreateNewSearchNode creates a new node in the linked list and marks the node's search handle as borrowed.
static inline int32_t CreateNewSearchNode(SortHandle* pSortHandle, int32_t options)
{
SearchIteratorNode* node = (SearchIteratorNode*)calloc(1, sizeof(SearchIteratorNode));
if (node == NULL)
{
return false;
}
node->searchIterator = USED_STRING_SEARCH; // Mark the new node search handle as borrowed.
node->next = NULL;
SearchIteratorNode* pCurrent = &pSortHandle->searchIteratorList[options];
assert(pCurrent->searchIterator != NULL && "Search iterator not expected to be NULL at this stage.");
SearchIteratorNode* pNull = NULL;
do
{
if (pCurrent->next == NULL && pal_atomic_cas_ptr((void* volatile*)&(pCurrent->next), node, pNull))
{
break;
}
assert(pCurrent->next != NULL && "next pointer shouldn't be null.");
pCurrent = pCurrent->next;
} while (true);
return true;
}
// Restore previously borrowed search handle to the linked list.
static inline int32_t RestoreSearchHandle(SortHandle* pSortHandle, UStringSearch* pSearchIterator, int32_t options)
{
SearchIteratorNode* pCurrent = &pSortHandle->searchIteratorList[options];
while (pCurrent != NULL)
{
if (pCurrent->searchIterator == USED_STRING_SEARCH && pal_atomic_cas_ptr((void* volatile*)&(pCurrent->searchIterator), pSearchIterator, USED_STRING_SEARCH))
{
return true;
}
pCurrent = pCurrent->next;
}
return false;
}
// Returns -1 if a search handle couldn't be borrowed from the SortHandle cache; otherwise, returns the cache slot number.
static int32_t GetSearchIteratorUsingCollator(
SortHandle* pSortHandle,
const UCollator* pColl,
const UChar* lpTarget,
int32_t cwTargetLength,
const UChar* lpSource,
int32_t cwSourceLength,
int32_t options,
UStringSearch** pSearchIterator)
{
options &= CompareOptionsMask;
*pSearchIterator = pSortHandle->searchIteratorList[options].searchIterator;
UErrorCode err = U_ZERO_ERROR;
if (*pSearchIterator == NULL)
{
UBreakIterator* breakIterator = CreateCustomizedBreakIterator();
*pSearchIterator = usearch_openFromCollator(lpTarget, cwTargetLength, lpSource, cwSourceLength, pColl, breakIterator, &err);
if (!U_SUCCESS(err))
{
if (breakIterator != NULL)
{
ubrk_close(breakIterator);
}
assert(false && "Couldn't open the search iterator.");
return -1;
}
UStringSearch* pNull = NULL;
if (!pal_atomic_cas_ptr((void* volatile*)&(pSortHandle->searchIteratorList[options].searchIterator), USED_STRING_SEARCH, pNull))
{
if (!CreateNewSearchNode(pSortHandle, options))
{
CloseSearchIterator(*pSearchIterator);
return -1;
}
}
return options;
}
assert(*pSearchIterator != NULL && "Should have a valid search handle at this stage.");
SearchIteratorNode* pCurrent = &pSortHandle->searchIteratorList[options];
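// Walk the node list looking for an idle iterator; claim one by atomically swapping the
// USED_STRING_SEARCH sentinel into its slot so that other threads skip it.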
while (*pSearchIterator == USED_STRING_SEARCH || !pal_atomic_cas_ptr((void* volatile*)&(pCurrent->searchIterator), USED_STRING_SEARCH, *pSearchIterator))
{
pCurrent = pCurrent->next;
if (pCurrent == NULL)
{
*pSearchIterator = NULL;
break;
}
*pSearchIterator = pCurrent->searchIterator;
}
if (*pSearchIterator == NULL) // Couldn't find any available handle to borrow, so create a new one.
{
UBreakIterator* breakIterator = CreateCustomizedBreakIterator();
*pSearchIterator = usearch_openFromCollator(lpTarget, cwTargetLength, lpSource, cwSourceLength, pColl, breakIterator, &err);
if (!U_SUCCESS(err))
{
if (breakIterator != NULL)
{
ubrk_close(breakIterator);
}
assert(false && "Couldn't open a new search iterator.");
return -1;
}
if (!CreateNewSearchNode(pSortHandle, options))
{
CloseSearchIterator(*pSearchIterator);
return -1;
}
return options;
}
usearch_setText(*pSearchIterator, lpSource, cwSourceLength, &err);
if (!U_SUCCESS(err))
{
int32_t r;
(void)r;
r = RestoreSearchHandle(pSortHandle, *pSearchIterator, options);
assert(r && "restoring search handle shouldn't fail.");
return -1;
}
usearch_setPattern(*pSearchIterator, lpTarget, cwTargetLength, &err);
if (!U_SUCCESS(err))
{
int32_t r;
(void)r;
r = RestoreSearchHandle(pSortHandle, *pSearchIterator, options);
assert(r && "restoring search handle shouldn't fail.");
return -1;
}
return options;
}
// Returns -1 if a search handle couldn't be borrowed from the SortHandle cache; otherwise, returns the cache slot number.
static inline int32_t GetSearchIterator(
SortHandle* pSortHandle,
const UChar* lpTarget,
int32_t cwTargetLength,
const UChar* lpSource,
int32_t cwSourceLength,
int32_t options,
UStringSearch** pSearchIterator)
{
UErrorCode err = U_ZERO_ERROR;
const UCollator* pColl = GetCollatorFromSortHandle(pSortHandle, options, &err);
if (!U_SUCCESS(err))
{
assert(false && "Couldn't get the collator.");
return -1;
}
return GetSearchIteratorUsingCollator(
pSortHandle,
pColl,
lpTarget,
cwTargetLength,
lpSource,
cwSourceLength,
options,
pSearchIterator);
}
int32_t GlobalizationNative_GetSortVersion(SortHandle* pSortHandle)
{
UErrorCode err = U_ZERO_ERROR;
const UCollator* pColl = GetCollatorFromSortHandle(pSortHandle, 0, &err);
int32_t result = -1;
if (U_SUCCESS(err))
{
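// UVersionInfo is a uint8_t[4], so the version bytes pack exactly into the int32_t result.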
ucol_getVersion(pColl, (uint8_t *) &result);
}
else
{
assert(false && "ucol_getVersion is not expected to fail.");
}
return result;
}
/*
Function:
CompareString
*/
int32_t GlobalizationNative_CompareString(
SortHandle* pSortHandle, const UChar* lpStr1, int32_t cwStr1Length, const UChar* lpStr2, int32_t cwStr2Length, int32_t options)
{
UCollationResult result = UCOL_EQUAL;
UErrorCode err = U_ZERO_ERROR;
const UCollator* pColl = GetCollatorFromSortHandle(pSortHandle, options, &err);
if (U_SUCCESS(err))
{
// Workaround for https://unicode-org.atlassian.net/projects/ICU/issues/ICU-9396
// The ucol_strcoll routine on some older versions of ICU doesn't correctly
// handle nullptr inputs. We'll play defensively and always flow a non-nullptr.
UChar dummyChar = 0;
if (lpStr1 == NULL)
{
lpStr1 = &dummyChar;
}
if (lpStr2 == NULL)
{
lpStr2 = &dummyChar;
}
result = ucol_strcoll(pColl, lpStr1, cwStr1Length, lpStr2, cwStr2Length);
}
return result;
}
/*
Function:
IndexOf
*/
int32_t GlobalizationNative_IndexOf(
SortHandle* pSortHandle,
const UChar* lpTarget,
int32_t cwTargetLength,
const UChar* lpSource,
int32_t cwSourceLength,
int32_t options,
int32_t* pMatchedLength)
{
assert(cwTargetLength > 0);
int32_t result = USEARCH_DONE;
// It's possible somebody passed us (source = <empty>, target = <non-empty>).
// ICU's usearch_* APIs don't handle empty source inputs properly. However,
// if this occurs the user really just wanted us to perform an equality check.
// We can't short-circuit the operation because depending on the collation in
// use, certain code points may have zero weight, which means that empty
// strings may compare as equal to non-empty strings.
if (cwSourceLength == 0)
{
result = GlobalizationNative_CompareString(pSortHandle, lpTarget, cwTargetLength, lpSource, cwSourceLength, options);
if (result == UCOL_EQUAL && pMatchedLength != NULL)
{
*pMatchedLength = cwSourceLength;
}
return (result == UCOL_EQUAL) ? 0 : -1;
}
UErrorCode err = U_ZERO_ERROR;
UStringSearch* pSearch;
int32_t searchCacheSlot = GetSearchIterator(pSortHandle, lpTarget, cwTargetLength, lpSource, cwSourceLength, options, &pSearch);
if (searchCacheSlot < 0)
{
return result;
}
result = usearch_first(pSearch, &err);
// if the search was successful,
// we'll try to get the matched string length.
if (result != USEARCH_DONE && pMatchedLength != NULL)
{
*pMatchedLength = usearch_getMatchedLength(pSearch);
}
RestoreSearchHandle(pSortHandle, pSearch, searchCacheSlot);
return result;
}
/*
Function:
LastIndexOf
*/
int32_t GlobalizationNative_LastIndexOf(
SortHandle* pSortHandle,
const UChar* lpTarget,
int32_t cwTargetLength,
const UChar* lpSource,
int32_t cwSourceLength,
int32_t options,
int32_t* pMatchedLength)
{
assert(cwTargetLength > 0);
int32_t result = USEARCH_DONE;
// It's possible somebody passed us (source = <empty>, target = <non-empty>).
// ICU's usearch_* APIs don't handle empty source inputs properly. However,
// if this occurs the user really just wanted us to perform an equality check.
// We can't short-circuit the operation because depending on the collation in
// use, certain code points may have zero weight, which means that empty
// strings may compare as equal to non-empty strings.
if (cwSourceLength == 0)
{
result = GlobalizationNative_CompareString(pSortHandle, lpTarget, cwTargetLength, lpSource, cwSourceLength, options);
if (result == UCOL_EQUAL && pMatchedLength != NULL)
{
*pMatchedLength = cwSourceLength;
}
return (result == UCOL_EQUAL) ? 0 : -1;
}
UErrorCode err = U_ZERO_ERROR;
UStringSearch* pSearch;
int32_t searchCacheSlot = GetSearchIterator(pSortHandle, lpTarget, cwTargetLength, lpSource, cwSourceLength, options, &pSearch);
if (searchCacheSlot < 0)
{
return result;
}
result = usearch_last(pSearch, &err);
// if the search was successful, we'll try to get the matched string length.
if (result != USEARCH_DONE)
{
int32_t matchLength = -1;
if (pMatchedLength != NULL)
{
matchLength = usearch_getMatchedLength(pSearch);
*pMatchedLength = matchLength;
}
// If the search result points at the last character (including the surrogate-pair case) of the source string, we need to check whether the target
// string was constructed from characters which have no sort weights. The way we do that is to check that the matched length is 0.
// We need to update the returned index to have consistent behavior with Ordinal and NLS operations, and to satisfy the condition:
//     index = source.LastIndexOf(value, comparisonType);
//     originalString.Substring(index).StartsWith(value, comparisonType) == true.
// https://github.com/dotnet/runtime/issues/13383
if (result >= cwSourceLength - 2)
{
if (pMatchedLength == NULL)
{
matchLength = usearch_getMatchedLength(pSearch);
}
if (matchLength == 0)
{
result = cwSourceLength;
}
}
}
RestoreSearchHandle(pSortHandle, pSearch, searchCacheSlot);
return result;
}
/*
A collation element is an int used for sorting. It consists of 3 components:
* primary - first 16 bits, representing the base letter
* secondary - next 8 bits, typically an accent
* tertiary - last 8 bits, typically the case
An example (the numbers are made up to keep it simple)
a: 1 0 0
ą: 1 1 0
A: 1 0 1
Ą: 1 1 1
This method returns a mask that allows characters to be compared at the specified collator strength.
*/
static int32_t GetCollationElementMask(UColAttributeValue strength)
{
assert(strength >= UCOL_SECONDARY);
switch (strength)
{
case UCOL_PRIMARY:
return UCOL_PRIMARYORDERMASK;
case UCOL_SECONDARY:
return UCOL_PRIMARYORDERMASK | UCOL_SECONDARYORDERMASK;
default:
return UCOL_PRIMARYORDERMASK | UCOL_SECONDARYORDERMASK | UCOL_TERTIARYORDERMASK;
}
}
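// In the example above, masking at UCOL_SECONDARY makes 'a' and 'A' compare equal (the tertiary
// case bits are stripped) while 'a' and 'ą' still differ in their secondary (accent) bits.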
static inline int32_t SimpleAffix_Iterators(UCollationElements* pPatternIterator, UCollationElements* pSourceIterator, UColAttributeValue strength, int32_t forwardSearch, int32_t* pCapturedOffset)
{
assert(strength >= UCOL_SECONDARY);
UErrorCode errorCode = U_ZERO_ERROR;
int32_t movePattern = true, moveSource = true;
int32_t patternElement = UCOL_IGNORABLE, sourceElement = UCOL_IGNORABLE;
int32_t capturedOffset = 0;
int32_t collationElementMask = GetCollationElementMask(strength);
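// Walk the pattern and source element streams in lockstep, holding one stream still while the
// other skips an ignorable element, until the pattern is exhausted or a masked element differs.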
while (true)
{
if (movePattern)
{
patternElement = forwardSearch ? ucol_next(pPatternIterator, &errorCode) : ucol_previous(pPatternIterator, &errorCode);
}
if (moveSource)
{
if (pCapturedOffset != NULL)
{
capturedOffset = ucol_getOffset(pSourceIterator); // need to capture offset before advancing iterator
}
sourceElement = forwardSearch ? ucol_next(pSourceIterator, &errorCode) : ucol_previous(pSourceIterator, &errorCode);
}
movePattern = true; moveSource = true;
if (patternElement == UCOL_NULLORDER)
{
if (sourceElement == UCOL_NULLORDER)
{
goto ReturnTrue; // source is equal to pattern, we have reached both ends|beginnings at the same time
}
else if (sourceElement == UCOL_IGNORABLE)
{
goto ReturnTrue; // the next|previous character in source is an ignorable character, an example: "o\u0000".StartsWith("o")
}
else if (forwardSearch && ((sourceElement & UCOL_PRIMARYORDERMASK) == 0) && (sourceElement & UCOL_SECONDARYORDERMASK) != 0)
{
return false; // the next character in source text is a combining character, an example: "o\u0308".StartsWith("o")
}
else
{
goto ReturnTrue;
}
}
else if (patternElement == UCOL_IGNORABLE)
{
moveSource = false;
}
else if (sourceElement == UCOL_IGNORABLE)
{
movePattern = false;
}
else if ((patternElement & collationElementMask) != (sourceElement & collationElementMask))
{
return false;
}
}
ReturnTrue:
if (pCapturedOffset != NULL)
{
*pCapturedOffset = capturedOffset;
}
return true;
}
static int32_t SimpleAffix(const UCollator* pCollator, UErrorCode* pErrorCode, const UChar* pPattern, int32_t patternLength, const UChar* pText, int32_t textLength, int32_t forwardSearch, int32_t* pMatchedLength)
{
int32_t result = false;
UCollationElements* pPatternIterator = ucol_openElements(pCollator, pPattern, patternLength, pErrorCode);
if (U_SUCCESS(*pErrorCode))
{
UCollationElements* pSourceIterator = ucol_openElements(pCollator, pText, textLength, pErrorCode);
if (U_SUCCESS(*pErrorCode))
{
UColAttributeValue strength = ucol_getStrength(pCollator);
int32_t capturedOffset = 0;
result = SimpleAffix_Iterators(pPatternIterator, pSourceIterator, strength, forwardSearch, (pMatchedLength != NULL) ? &capturedOffset : NULL);
if (result && pMatchedLength != NULL)
{
// depending on whether we're searching forward or backward, the matching substring
// is [start of source string .. curIdx] or [curIdx .. end of source string]
*pMatchedLength = (forwardSearch) ? capturedOffset : (textLength - capturedOffset);
}
ucol_closeElements(pSourceIterator);
}
ucol_closeElements(pPatternIterator);
}
return result;
}
static int32_t ComplexStartsWith(SortHandle* pSortHandle, const UChar* pPattern, int32_t patternLength, const UChar* pText, int32_t textLength, int32_t options, int32_t* pMatchedLength)
{
int32_t result = false;
UErrorCode err = U_ZERO_ERROR;
const UCollator* pCollator = GetCollatorFromSortHandle(pSortHandle, options, &err);
if (!U_SUCCESS(err))
{
return result;
}
UStringSearch* pSearch;
int32_t searchCacheSlot = GetSearchIteratorUsingCollator(pSortHandle, pCollator, pPattern, patternLength, pText, textLength, options, &pSearch);
if (searchCacheSlot < 0)
{
return result;
}
int32_t idx = usearch_first(pSearch, &err);
if (idx != USEARCH_DONE)
{
if (idx == 0)
{
result = true;
}
else
{
result = CanIgnoreAllCollationElements(pCollator, pText, idx);
}
if (result && pMatchedLength != NULL)
{
// adjust matched length to account for all the elements we implicitly consumed at beginning of string
*pMatchedLength = idx + usearch_getMatchedLength(pSearch);
}
}
RestoreSearchHandle(pSortHandle, pSearch, searchCacheSlot);
return result;
}
/*
Return value is a "Win32 BOOL" (1 = true, 0 = false)
*/
int32_t GlobalizationNative_StartsWith(
SortHandle* pSortHandle,
const UChar* lpTarget,
int32_t cwTargetLength,
const UChar* lpSource,
int32_t cwSourceLength,
int32_t options,
int32_t* pMatchedLength)
{
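// Only None (0) and IgnoreCase (0x1) can take the fast prefix walk over collation elements
// (SimpleAffix); any other option bit requires the search-iterator based implementation.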
if (options > CompareOptionsIgnoreCase)
{
return ComplexStartsWith(pSortHandle, lpTarget, cwTargetLength, lpSource, cwSourceLength, options, pMatchedLength);
}
UErrorCode err = U_ZERO_ERROR;
const UCollator* pCollator = GetCollatorFromSortHandle(pSortHandle, options, &err);
if (!U_SUCCESS(err))
{
return false;
}
return SimpleAffix(pCollator, &err, lpTarget, cwTargetLength, lpSource, cwSourceLength, true, pMatchedLength);
}
static int32_t ComplexEndsWith(SortHandle* pSortHandle, const UChar* pPattern, int32_t patternLength, const UChar* pText, int32_t textLength, int32_t options, int32_t* pMatchedLength)
{
int32_t result = false;
UErrorCode err = U_ZERO_ERROR;
const UCollator* pCollator = GetCollatorFromSortHandle(pSortHandle, options, &err);
if (!U_SUCCESS(err))
{
return result;
}
UStringSearch* pSearch;
int32_t searchCacheSlot = GetSearchIteratorUsingCollator(pSortHandle, pCollator, pPattern, patternLength, pText, textLength, options, &pSearch);
if (searchCacheSlot < 0)
{
return result;
}
int32_t idx = usearch_last(pSearch, &err);
if (idx != USEARCH_DONE)
{
int32_t matchEnd = idx + usearch_getMatchedLength(pSearch);
assert(matchEnd <= textLength);
if (matchEnd == textLength)
{
result = true;
}
else
{
int32_t remainingStringLength = textLength - matchEnd;
result = CanIgnoreAllCollationElements(pCollator, pText + matchEnd, remainingStringLength);
}
if (result && pMatchedLength != NULL)
{
// adjust matched length to account for all the elements we implicitly consumed at end of string
*pMatchedLength = textLength - idx;
}
}
RestoreSearchHandle(pSortHandle, pSearch, searchCacheSlot);
return result;
}
/*
Return value is a "Win32 BOOL" (1 = true, 0 = false)
*/
int32_t GlobalizationNative_EndsWith(
SortHandle* pSortHandle,
const UChar* lpTarget,
int32_t cwTargetLength,
const UChar* lpSource,
int32_t cwSourceLength,
int32_t options,
int32_t* pMatchedLength)
{
if (options > CompareOptionsIgnoreCase)
{
return ComplexEndsWith(pSortHandle, lpTarget, cwTargetLength, lpSource, cwSourceLength, options, pMatchedLength);
}
UErrorCode err = U_ZERO_ERROR;
const UCollator* pCollator = GetCollatorFromSortHandle(pSortHandle, options, &err);
if (!U_SUCCESS(err))
{
return false;
}
return SimpleAffix(pCollator, &err, lpTarget, cwTargetLength, lpSource, cwSourceLength, false, pMatchedLength);
}
int32_t GlobalizationNative_GetSortKey(
SortHandle* pSortHandle,
const UChar* lpStr,
int32_t cwStrLength,
uint8_t* sortKey,
int32_t cbSortKeyLength,
int32_t options)
{
UErrorCode err = U_ZERO_ERROR;
const UCollator* pColl = GetCollatorFromSortHandle(pSortHandle, options, &err);
int32_t result = 0;
if (U_SUCCESS(err))
{
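// ucol_getSortKey returns the full key length even when cbSortKeyLength is too small (or
// zero), so callers can probe for the required buffer size before allocating.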
result = ucol_getSortKey(pColl, lpStr, cwStrLength, sortKey, cbSortKeyLength);
}
return result;
}
| // Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>
#include <stdint.h>
#include <search.h>
#include <string.h>
#include "pal_errors_internal.h"
#include "pal_collation.h"
#include "pal_atomic.h"
c_static_assert_msg(UCOL_EQUAL == 0, "managed side requires 0 for equal strings");
c_static_assert_msg(UCOL_LESS < 0, "managed side requires less than zero for a < b");
c_static_assert_msg(UCOL_GREATER > 0, "managed side requires greater than zero for a > b");
c_static_assert_msg(USEARCH_DONE == -1, "managed side requires -1 for not found");
#define UCOL_IGNORABLE 0
#define UCOL_PRIMARYORDERMASK ((int32_t)0xFFFF0000)
#define UCOL_SECONDARYORDERMASK 0x0000FF00
#define UCOL_TERTIARYORDERMASK 0x000000FF
#define CompareOptionsIgnoreCase 0x1
#define CompareOptionsIgnoreNonSpace 0x2
#define CompareOptionsIgnoreSymbols 0x4
#define CompareOptionsIgnoreKanaType 0x8
#define CompareOptionsIgnoreWidth 0x10
#define CompareOptionsMask 0x1f
// #define CompareOptionsStringSort 0x20000000
// ICU's default is to use "StringSort", i.e. nonalphanumeric symbols come before alphanumeric.
// When StringSort is not specified (.NET's default), the sort order will be different between
// Windows and Unix platforms. The nonalphanumeric symbols will come after alphanumeric
// characters on Windows, but before on Unix.
// Since locale - specific string sort order can change from one version of Windows to the next,
// there is no reason to guarantee string sort order between Windows and ICU. Thus trying to
// change ICU's default behavior here isn't really justified unless someone has a strong reason
// for !StringSort to behave differently.
#define USED_STRING_SEARCH ((UStringSearch*) (-1))
typedef struct { int32_t key; UCollator* UCollator; } TCollatorMap;
typedef struct SearchIteratorNode
{
UStringSearch* searchIterator;
struct SearchIteratorNode* next;
} SearchIteratorNode;
/*
* For increased performance, we cache the UCollator objects for a locale and
* share them across threads. This is safe (and supported in ICU) if we ensure
* multiple threads are only ever dealing with const UCollators.
*/
struct SortHandle
{
UCollator* collatorsPerOption[CompareOptionsMask + 1];
SearchIteratorNode searchIteratorList[CompareOptionsMask + 1];
};
// Hiragana character range
static const UChar hiraganaStart = 0x3041;
static const UChar hiraganaEnd = 0x309e;
static const UChar hiraganaToKatakanaOffset = 0x30a1 - 0x3041;
// Length of the fullwidth characters from 'A' to 'Z'
// We'll use it to map the casing of the full width 'A' to 'Z' characters
static const int32_t FullWidthAlphabetRangeLength = 0xFF3A - 0xFF21 + 1;
// Mapping between half- and fullwidth characters.
// LowerChars are the characters that should sort lower than HigherChars
static const UChar g_HalfFullLowerChars[] = {
// halfwidth characters
0x0021, 0x0022, 0x0023, 0x0024, 0x0025, 0x0026, 0x0027, 0x0028, 0x0029, 0x002a, 0x002b, 0x002c, 0x002d, 0x002e, 0x002f,
0x0030, 0x0031, 0x0032, 0x0033, 0x0034, 0x0035, 0x0036, 0x0037, 0x0038, 0x0039, 0x003a, 0x003b, 0x003c, 0x003d, 0x003e,
0x003f, 0x0040, 0x0041, 0x0042, 0x0043, 0x0044, 0x0045, 0x0046, 0x0047, 0x0048, 0x0049, 0x004a, 0x004b, 0x004c, 0x004d,
0x004e, 0x004f, 0x0050, 0x0051, 0x0052, 0x0053, 0x0054, 0x0055, 0x0056, 0x0057, 0x0058, 0x0059, 0x005a, 0x005b, 0x005d,
0x005e, 0x005f, 0x0060, 0x0061, 0x0062, 0x0063, 0x0064, 0x0065, 0x0066, 0x0067, 0x0068, 0x0069, 0x006a, 0x006b, 0x006c,
0x006d, 0x006e, 0x006f, 0x0070, 0x0071, 0x0072, 0x0073, 0x0074, 0x0075, 0x0076, 0x0077, 0x0078, 0x0079, 0x007a, 0x007b,
0x007c, 0x007d, 0x007e, 0x00a2, 0x00a3, 0x00ac, 0x00af, 0x00a6, 0x00a5, 0x20a9,
// fullwidth characters
0x3002, 0x300c, 0x300d, 0x3001, 0x30fb, 0x30f2, 0x30a1, 0x30a3, 0x30a5, 0x30a7, 0x30a9, 0x30e3, 0x30e5, 0x30e7, 0x30c3,
0x30a2, 0x30a4, 0x30a6, 0x30a8, 0x30aa, 0x30ab, 0x30ad, 0x30af, 0x30b1, 0x30b3, 0x30b5, 0x30b7, 0x30b9, 0x30bb, 0x30bd,
0x30bf, 0x30c1, 0x30c4, 0x30c6, 0x30c8, 0x30ca, 0x30cb, 0x30cc, 0x30cd, 0x30ce, 0x30cf, 0x30d2, 0x30d5, 0x30d8, 0x30db,
0x30de, 0x30df, 0x30e0, 0x30e1, 0x30e2, 0x30e4, 0x30e6, 0x30e8, 0x30e9, 0x30ea, 0x30eb, 0x30ec, 0x30ed, 0x30ef, 0x30f3,
0x3164, 0x3131, 0x3132, 0x3133, 0x3134, 0x3135, 0x3136, 0x3137, 0x3138, 0x3139, 0x313a, 0x313b, 0x313c, 0x313d, 0x313e,
0x313f, 0x3140, 0x3141, 0x3142, 0x3143, 0x3144, 0x3145, 0x3146, 0x3147, 0x3148, 0x3149, 0x314a, 0x314b, 0x314c, 0x314d,
0x314e, 0x314f, 0x3150, 0x3151, 0x3152, 0x3153, 0x3154, 0x3155, 0x3156, 0x3157, 0x3158, 0x3159, 0x315a, 0x315b, 0x315c,
0x315d, 0x315e, 0x315f, 0x3160, 0x3161, 0x3162, 0x3163
};
static const UChar g_HalfFullHigherChars[] = {
// fullwidth characters
0xff01, 0xff02, 0xff03, 0xff04, 0xff05, 0xff06, 0xff07, 0xff08, 0xff09, 0xff0a, 0xff0b, 0xff0c, 0xff0d, 0xff0e, 0xff0f,
0xff10, 0xff11, 0xff12, 0xff13, 0xff14, 0xff15, 0xff16, 0xff17, 0xff18, 0xff19, 0xff1a, 0xff1b, 0xff1c, 0xff1d, 0xff1e,
0xff1f, 0xff20, 0xff21, 0xff22, 0xff23, 0xff24, 0xff25, 0xff26, 0xff27, 0xff28, 0xff29, 0xff2a, 0xff2b, 0xff2c, 0xff2d,
0xff2e, 0xff2f, 0xff30, 0xff31, 0xff32, 0xff33, 0xff34, 0xff35, 0xff36, 0xff37, 0xff38, 0xff39, 0xff3a, 0xff3b, 0xff3d,
0xff3e, 0xff3f, 0xff40, 0xff41, 0xff42, 0xff43, 0xff44, 0xff45, 0xff46, 0xff47, 0xff48, 0xff49, 0xff4a, 0xff4b, 0xff4c,
0xff4d, 0xff4e, 0xff4f, 0xff50, 0xff51, 0xff52, 0xff53, 0xff54, 0xff55, 0xff56, 0xff57, 0xff58, 0xff59, 0xff5a, 0xff5b,
0xff5c, 0xff5d, 0xff5e, 0xffe0, 0xffe1, 0xffe2, 0xffe3, 0xffe4, 0xffe5, 0xffe6,
// halfwidth characters
0xff61, 0xff62, 0xff63, 0xff64, 0xff65, 0xff66, 0xff67, 0xff68, 0xff69, 0xff6a, 0xff6b, 0xff6c, 0xff6d, 0xff6e, 0xff6f,
0xff71, 0xff72, 0xff73, 0xff74, 0xff75, 0xff76, 0xff77, 0xff78, 0xff79, 0xff7a, 0xff7b, 0xff7c, 0xff7d, 0xff7e, 0xff7f,
0xff80, 0xff81, 0xff82, 0xff83, 0xff84, 0xff85, 0xff86, 0xff87, 0xff88, 0xff89, 0xff8a, 0xff8b, 0xff8c, 0xff8d, 0xff8e,
0xff8f, 0xff90, 0xff91, 0xff92, 0xff93, 0xff94, 0xff95, 0xff96, 0xff97, 0xff98, 0xff99, 0xff9a, 0xff9b, 0xff9c, 0xff9d,
0xffa0, 0xffa1, 0xffa2, 0xffa3, 0xffa4, 0xffa5, 0xffa6, 0xffa7, 0xffa8, 0xffa9, 0xffaa, 0xffab, 0xffac, 0xffad, 0xffae,
0xffaf, 0xffb0, 0xffb1, 0xffb2, 0xffb3, 0xffb4, 0xffb5, 0xffb6, 0xffb7, 0xffb8, 0xffb9, 0xffba, 0xffbb, 0xffbc, 0xffbd,
0xffbe, 0xffc2, 0xffc3, 0xffc4, 0xffc5, 0xffc6, 0xffc7, 0xffca, 0xffcb, 0xffcc, 0xffcd, 0xffce, 0xffcf, 0xffd2, 0xffd3,
0xffd4, 0xffd5, 0xffd6, 0xffd7, 0xffda, 0xffdb, 0xffdc
};
static const int32_t g_HalfFullCharsLength = (sizeof(g_HalfFullHigherChars) / sizeof(UChar));
// Hiragana without [semi-]voiced sound mark for custom collation rules
// If Hiragana with [semi-]voiced sound mark is added to custom collation rules, there is a conflict
// between the custom rule and some default rule.
static const UChar g_HiraganaWithoutVoicedSoundMarkChars[] = {
0x3041, 0x3042, 0x3043, 0x3044, 0x3045, 0x3046, 0x3047, 0x3048, 0x3049, 0x304A, 0x304B, 0x304D, 0x304F, 0x3051, 0x3053,
0x3055, 0x3057, 0x3059, 0x305B, 0x305D, 0x305F, 0x3061, 0x3063, 0x3064, 0x3066, 0x3068, 0x306A, 0x306B, 0x306C, 0x306D,
0x306E, 0x306F, 0x3072, 0x3075, 0x3078, 0x307B, 0x307E, 0x307F, 0x3080, 0x3081, 0x3082, 0x3083, 0x3084, 0x3085, 0x3086,
0x3087, 0x3088, 0x3089, 0x308A, 0x308B, 0x308C, 0x308D, 0x308E, 0x308F, 0x3090, 0x3091, 0x3092, 0x3093, 0x3095, 0x3096, 0x309D,
};
static const int32_t g_HiraganaWithoutVoicedSoundMarkCharsLength = (sizeof(g_HiraganaWithoutVoicedSoundMarkChars) / sizeof(UChar));
/*
ICU collation rules reserve any punctuation and whitespace characters for use in the syntax.
Thus, to use these characters in a rule, they need to be escaped.
This rule was taken from http://www.unicode.org/reports/tr35/tr35-collation.html#Rules.
*/
static int NeedsEscape(UChar character)
{
return ((0x21 <= character && character <= 0x2f)
|| (0x3a <= character && character <= 0x40)
|| (0x5b <= character && character <= 0x60)
|| (0x7b <= character && character <= 0x7e));
}
/*
Gets a value indicating whether the HalfFullHigher character is considered a symbol character.
The ranges specified here are only checking for characters in the g_HalfFullHigherChars list and needs
to be combined with NeedsEscape above with the g_HalfFullLowerChars for all the IgnoreSymbols characters.
This is done so we can use range checks instead of comparing individual characters.
These ranges were obtained by running the above characters through .NET CompareInfo.Compare
with CompareOptions.IgnoreSymbols on Windows.
*/
static int IsHalfFullHigherSymbol(UChar character)
{
return (0xffe0 <= character && character <= 0xffe6)
|| (0xff61 <= character && character <= 0xff65);
}
/*
Fill custom collation rules for ignoreKana cases.
Since the CompareOptions flags don't map 1:1 with ICU default functionality, we need to fall back to using
custom rules in order to support IgnoreKanaType and IgnoreWidth CompareOptions correctly.
*/
static void FillIgnoreKanaRules(UChar* completeRules, int32_t* fillIndex, int32_t completeRulesLength, int32_t isIgnoreKanaType)
{
assert((*fillIndex) + (4 * (hiraganaEnd - hiraganaStart + 1)) <= completeRulesLength);
if ((*fillIndex) + (4 * (hiraganaEnd - hiraganaStart + 1)) > completeRulesLength) // check the allocated the size
{
return;
}
if (isIgnoreKanaType)
{
for (UChar hiraganaChar = hiraganaStart; hiraganaChar <= hiraganaEnd; hiraganaChar++)
{
// Hiragana is the range 3041 to 3096 & 309D & 309E
if (hiraganaChar <= 0x3096 || hiraganaChar >= 0x309D) // characters between 3096 and 309D are not mapped to katakana
{
completeRules[*fillIndex] = '&';
completeRules[(*fillIndex) + 1] = hiraganaChar;
completeRules[(*fillIndex) + 2] = '=';
completeRules[(*fillIndex) + 3] = hiraganaChar + hiraganaToKatakanaOffset;
(*fillIndex) += 4;
}
}
}
else
{
// Avoid conflicts between default [semi-]voiced sound mark rules and custom rules
for (int i = 0; i < g_HiraganaWithoutVoicedSoundMarkCharsLength; i++)
{
UChar hiraganaChar = g_HiraganaWithoutVoicedSoundMarkChars[i];
completeRules[*fillIndex] = '&';
completeRules[(*fillIndex) + 1] = hiraganaChar;
completeRules[(*fillIndex) + 2] = '<';
completeRules[(*fillIndex) + 3] = hiraganaChar + hiraganaToKatakanaOffset;
(*fillIndex) += 4;
}
}
}
/*
Fill custom collation rules for ignoreWidth cases.
Since the CompareOptions flags don't map 1:1 with ICU default functionality, we need to fall back to using
custom rules in order to support IgnoreKanaType and IgnoreWidth CompareOptions correctly.
*/
static void FillIgnoreWidthRules(UChar* completeRules, int32_t* fillIndex, int32_t completeRulesLength, int32_t isIgnoreWidth, int32_t isIgnoreCase, int32_t isIgnoreSymbols)
{
UChar compareChar = isIgnoreWidth ? '=' : '<';
UChar lowerChar;
UChar higherChar;
int needsEscape;
assert((*fillIndex) + (5 * g_HalfFullCharsLength) <= completeRulesLength);
if ((*fillIndex) + (5 * g_HalfFullCharsLength) > completeRulesLength)
{
return;
}
for (int i = 0; i < g_HalfFullCharsLength; i++)
{
lowerChar = g_HalfFullLowerChars[i];
higherChar = g_HalfFullHigherChars[i];
// the lower chars need to be checked for escaping since they contain ASCII punctuation
needsEscape = NeedsEscape(lowerChar);
// when isIgnoreSymbols is true and we are not ignoring width, check to see if
// this character is a symbol, and if so skip it
if (!(isIgnoreSymbols && (!isIgnoreWidth) && (needsEscape || IsHalfFullHigherSymbol(higherChar))))
{
completeRules[*fillIndex] = '&';
(*fillIndex)++;
if (needsEscape)
{
completeRules[*fillIndex] = '\\';
(*fillIndex)++;
}
completeRules[*fillIndex] = lowerChar;
completeRules[(*fillIndex) + 1] = compareChar;
completeRules[(*fillIndex) + 2] = higherChar;
(*fillIndex) += 3;
}
}
// When we have isIgnoreWidth is false, we sort the normal width latin alphabet characters before the full width latin alphabet characters
// e.g. `a` < `a` (\uFF41)
// This break the casing of the full width latin alphabet characters.
// e.g. `a` (\uFF41) == `A` (\uFF21).
// we are fixing back this case mapping here.
if (isIgnoreCase && (!isIgnoreWidth))
{
assert((*fillIndex) + (FullWidthAlphabetRangeLength * 4) <= completeRulesLength);
const int UpperCaseToLowerCaseOffset = 0xFF41 - 0xFF21;
for (UChar ch = 0xFF21; ch <= 0xFF3A; ch++)
{
completeRules[*fillIndex] = '&';
completeRules[(*fillIndex) + 1] = ch + UpperCaseToLowerCaseOffset;
completeRules[(*fillIndex) + 2] = '=';
completeRules[(*fillIndex) + 3] = ch;
(*fillIndex) += 4;
}
}
}
/*
* The collator returned by this function is owned by the callee and must be
* closed when this method returns with a U_SUCCESS UErrorCode.
*
* On error, the return value is undefined.
*/
static UCollator* CloneCollatorWithOptions(const UCollator* pCollator, int32_t options, UErrorCode* pErr)
{
UColAttributeValue strength = ucol_getStrength(pCollator);
int32_t isIgnoreCase = (options & CompareOptionsIgnoreCase) == CompareOptionsIgnoreCase;
int32_t isIgnoreNonSpace = (options & CompareOptionsIgnoreNonSpace) == CompareOptionsIgnoreNonSpace;
int32_t isIgnoreSymbols = (options & CompareOptionsIgnoreSymbols) == CompareOptionsIgnoreSymbols;
int32_t isIgnoreKanaType = (options & CompareOptionsIgnoreKanaType) == CompareOptionsIgnoreKanaType;
int32_t isIgnoreWidth = (options & CompareOptionsIgnoreWidth) == CompareOptionsIgnoreWidth;
if (isIgnoreCase)
{
strength = UCOL_SECONDARY;
}
if (isIgnoreNonSpace)
{
strength = UCOL_PRIMARY;
}
UCollator* pClonedCollator;
// IgnoreWidth - it would be easy to IgnoreWidth by just setting Strength <= Secondary.
// For any strength under that, the width of the characters will be ignored.
// For strength above that, the width of the characters will be used in differentiation.
// a. However, this doesn’t play nice with IgnoreCase, since these Strength levels are overloaded.
// b. So the plan to support IgnoreWidth is to use customized rules.
// i. Since the character width is differentiated at “Tertiary” strength, we only need to use custom rules in specific cases.
// ii. If (IgnoreWidth == true && Strength > “Secondary”)
// 1. Build up a custom rule set for each half-width character and say that it is equal to the corresponding full-width character.
// a. ex: “0x30F2 = 0xFF66 & 0x30F3 = 0xFF9D & …”
// iii. If (IgnoreWidth == false && Strength <= “Secondary”)
// 1. Build up a custom rule set saying that the half-width and full-width characters have a primary level difference (which will cause it always to be unequal)
// a. Ex. “0x30F2 < 0xFF66 & 0x30F3 < 0xFF9D & …”
// IgnoreKanaType – this works the same way as IgnoreWidth, it uses the set of Hiragana and Katakana characters instead of half-width vs full-width characters to build the rules.
int32_t applyIgnoreKanaTypeCustomRule = isIgnoreKanaType ^ (strength < UCOL_TERTIARY); // kana differs at the tertiary level
int32_t applyIgnoreWidthTypeCustomRule = isIgnoreWidth ^ (strength < UCOL_TERTIARY); // character width differs at the tertiary level
int32_t customRuleLength = 0;
if (applyIgnoreKanaTypeCustomRule || applyIgnoreWidthTypeCustomRule)
{
// If we need to create customRules, the KanaType custom rule will be 88 kana characters * 4 = 352 chars long
// and the Width custom rule will be at most 212 halfwidth characters * 5 = 1060 chars long.
customRuleLength = (applyIgnoreKanaTypeCustomRule ? 4 * (hiraganaEnd - hiraganaStart + 1) : 0) +
(applyIgnoreWidthTypeCustomRule ? ((5 * g_HalfFullCharsLength) + (isIgnoreCase ? 4 * FullWidthAlphabetRangeLength : 0)) : 0) +
1; // Adding extra terminator rule at the end to force ICU apply last actual entered rule, otherwise last actual rule get ignored.
}
if (customRuleLength == 0)
{
pClonedCollator = ucol_safeClone(pCollator, NULL, NULL, pErr);
}
else
{
int32_t rulesLength;
const UChar* localeRules = ucol_getRules(pCollator, &rulesLength);
int32_t completeRulesLength = rulesLength + customRuleLength + 1;
UChar* completeRules = (UChar*)calloc((size_t)completeRulesLength, sizeof(UChar));
for (int i = 0; i < rulesLength; i++)
{
completeRules[i] = localeRules[i];
}
if (applyIgnoreKanaTypeCustomRule)
{
FillIgnoreKanaRules(completeRules, &rulesLength, completeRulesLength, isIgnoreKanaType);
}
assert(rulesLength <= completeRulesLength);
if (applyIgnoreWidthTypeCustomRule)
{
FillIgnoreWidthRules(completeRules, &rulesLength, completeRulesLength, isIgnoreWidth, isIgnoreCase, isIgnoreSymbols);
}
assert(rulesLength + 4 <= completeRulesLength);
// Adding extra terminator rule at the end to force ICU apply last actual entered rule, otherwise last actual rule get ignored.
completeRules[rulesLength] = '&';
completeRules[rulesLength + 1] = 'a';
completeRules[rulesLength + 2] = '=';
completeRules[rulesLength + 3] = 'a';
rulesLength += 4;
pClonedCollator = ucol_openRules(completeRules, rulesLength, UCOL_DEFAULT, strength, NULL, pErr);
free(completeRules);
}
if (isIgnoreSymbols)
{
ucol_setAttribute(pClonedCollator, UCOL_ALTERNATE_HANDLING, UCOL_SHIFTED, pErr);
#if !defined(STATIC_ICU)
if (ucol_setMaxVariable_ptr != NULL)
{
// by default, ICU alternate shifted handling only ignores punctuation, but
// IgnoreSymbols needs symbols and currency as well, so change the "variable top"
// to include all symbols and currency
ucol_setMaxVariable(pClonedCollator, UCOL_REORDER_CODE_CURRENCY, pErr);
}
else
{
assert(ucol_setVariableTop_ptr != NULL);
// 0xfdfc is the last currency character before the first digit character
// in http://source.icu-project.org/repos/icu/icu/tags/release-52-1/source/data/unidata/FractionalUCA.txt
const UChar ignoreSymbolsVariableTop[] = { 0xfdfc };
ucol_setVariableTop_ptr(pClonedCollator, ignoreSymbolsVariableTop, 1, pErr);
}
#else // !defined(STATIC_ICU)
// by default, ICU alternate shifted handling only ignores punctuation, but
// IgnoreSymbols needs symbols and currency as well, so change the "variable top"
// to include all symbols and currency
#if HAVE_SET_MAX_VARIABLE
ucol_setMaxVariable(pClonedCollator, UCOL_REORDER_CODE_CURRENCY, pErr);
#else
// 0xfdfc is the last currency character before the first digit character
// in http://source.icu-project.org/repos/icu/icu/tags/release-52-1/source/data/unidata/FractionalUCA.txt
const UChar ignoreSymbolsVariableTop[] = { 0xfdfc };
ucol_setVariableTop(pClonedCollator, ignoreSymbolsVariableTop, 1, pErr);
#endif
#endif //!defined(STATIC_ICU)
}
ucol_setAttribute(pClonedCollator, UCOL_STRENGTH, strength, pErr);
// casing differs at the tertiary level.
// if strength is less than tertiary, but we are not ignoring case, then we need to flip CASE_LEVEL On
if (strength < UCOL_TERTIARY && !isIgnoreCase)
{
ucol_setAttribute(pClonedCollator, UCOL_CASE_LEVEL, UCOL_ON, pErr);
}
return pClonedCollator;
}
// Returns TRUE if all the collation elements in str are completely ignorable
static int CanIgnoreAllCollationElements(const UCollator* pColl, const UChar* lpStr, int32_t length)
{
int result = true;
UErrorCode err = U_ZERO_ERROR;
UCollationElements* pCollElem = ucol_openElements(pColl, lpStr, length, &err);
if (U_SUCCESS(err))
{
int32_t curCollElem = UCOL_NULLORDER;
while ((curCollElem = ucol_next(pCollElem, &err)) != UCOL_NULLORDER)
{
if (curCollElem != UCOL_IGNORABLE)
{
result = false;
break;
}
}
ucol_closeElements(pCollElem);
}
return U_SUCCESS(err) ? result : false;
}
static void CreateSortHandle(SortHandle** ppSortHandle)
{
*ppSortHandle = (SortHandle*)calloc(1, sizeof(SortHandle));
if ((*ppSortHandle) == NULL)
{
return;
}
memset(*ppSortHandle, 0, sizeof(SortHandle));
}
ResultCode GlobalizationNative_GetSortHandle(const char* lpLocaleName, SortHandle** ppSortHandle)
{
assert(ppSortHandle != NULL);
CreateSortHandle(ppSortHandle);
if ((*ppSortHandle) == NULL)
{
return GetResultCode(U_MEMORY_ALLOCATION_ERROR);
}
UErrorCode err = U_ZERO_ERROR;
(*ppSortHandle)->collatorsPerOption[0] = ucol_open(lpLocaleName, &err);
if (U_FAILURE(err))
{
free(*ppSortHandle);
(*ppSortHandle) = NULL;
}
return GetResultCode(err);
}
static const char* BreakIteratorRuleOld = // supported on ICU like versions 52
"$CR = [\\p{Grapheme_Cluster_Break = CR}]; \n" \
"$LF = [\\p{Grapheme_Cluster_Break = LF}]; \n" \
"$Control = [\\p{Grapheme_Cluster_Break = Control}]; \n" \
"$Extend = [\\p{Grapheme_Cluster_Break = Extend}]; \n" \
"$SpacingMark = [\\p{Grapheme_Cluster_Break = SpacingMark}]; \n" \
"$Regional_Indicator = [\\p{Grapheme_Cluster_Break = Regional_Indicator}]; \n" \
"$L = [\\p{Grapheme_Cluster_Break = L}]; \n" \
"$V = [\\p{Grapheme_Cluster_Break = V}]; \n" \
"$T = [\\p{Grapheme_Cluster_Break = T}]; \n" \
"$LV = [\\p{Grapheme_Cluster_Break = LV}]; \n" \
"$LVT = [\\p{Grapheme_Cluster_Break = LVT}]; \n" \
"!!chain; \n" \
"!!forward; \n" \
"$L ($L | $V | $LV | $LVT); \n" \
"($LV | $V) ($V | $T); \n" \
"($LVT | $T) $T; \n" \
"$Regional_Indicator $Regional_Indicator; \n" \
"[^$Control $CR $LF] $Extend; \n" \
"[^$Control $CR $LF] $SpacingMark; \n" \
"!!reverse; \n" \
"($L | $V | $LV | $LVT) $L; \n" \
"($V | $T) ($LV | $V); \n" \
"$T ($LVT | $T); \n" \
"$Regional_Indicator $Regional_Indicator; \n" \
"$Extend [^$Control $CR $LF]; \n" \
"$SpacingMark [^$Control $CR $LF]; \n" \
"!!safe_reverse; \n" \
"!!safe_forward; \n";
static const char* BreakIteratorRuleNew = // supported on newer ICU versions like 62 and up
"!!quoted_literals_only; \n" \
"$CR = [\\p{Grapheme_Cluster_Break = CR}]; \n" \
"$LF = [\\p{Grapheme_Cluster_Break = LF}]; \n" \
"$Control = [[\\p{Grapheme_Cluster_Break = Control}]]; \n" \
"$Extend = [[\\p{Grapheme_Cluster_Break = Extend}]]; \n" \
"$ZWJ = [\\p{Grapheme_Cluster_Break = ZWJ}]; \n" \
"$Regional_Indicator = [\\p{Grapheme_Cluster_Break = Regional_Indicator}]; \n" \
"$Prepend = [\\p{Grapheme_Cluster_Break = Prepend}]; \n" \
"$SpacingMark = [\\p{Grapheme_Cluster_Break = SpacingMark}]; \n" \
"$Virama = [\\p{Gujr}\\p{sc=Telu}\\p{sc=Mlym}\\p{sc=Orya}\\p{sc=Beng}\\p{sc=Deva}&\\p{Indic_Syllabic_Category=Virama}]; \n" \
"$LinkingConsonant = [\\p{Gujr}\\p{sc=Telu}\\p{sc=Mlym}\\p{sc=Orya}\\p{sc=Beng}\\p{sc=Deva}&\\p{Indic_Syllabic_Category=Consonant}]; \n" \
"$ExtCccZwj = [[\\p{gcb=Extend}-\\p{ccc=0}] \\p{gcb=ZWJ}]; \n" \
"$L = [\\p{Grapheme_Cluster_Break = L}]; \n" \
"$V = [\\p{Grapheme_Cluster_Break = V}]; \n" \
"$T = [\\p{Grapheme_Cluster_Break = T}]; \n" \
"$LV = [\\p{Grapheme_Cluster_Break = LV}]; \n" \
"$LVT = [\\p{Grapheme_Cluster_Break = LVT}]; \n" \
"$Extended_Pict = [:ExtPict:]; \n" \
"!!chain; \n" \
"!!lookAheadHardBreak; \n" \
"$L ($L | $V | $LV | $LVT); \n" \
"($LV | $V) ($V | $T); \n" \
"($LVT | $T) $T; \n" \
"[^$Control $CR $LF] ($Extend | $ZWJ); \n" \
"[^$Control $CR $LF] $SpacingMark; \n" \
"$Prepend [^$Control $CR $LF]; \n" \
"$LinkingConsonant $ExtCccZwj* $Virama $ExtCccZwj* $LinkingConsonant; \n" \
"$Extended_Pict $Extend* $ZWJ $Extended_Pict; \n" \
"^$Prepend* $Regional_Indicator $Regional_Indicator / $Regional_Indicator; \n" \
"^$Prepend* $Regional_Indicator $Regional_Indicator; \n" \
".;";
static UChar* s_breakIteratorRules = NULL;
// When doing string search operations using ICU, it internally uses a break iterator which doesn't allow breaking between some characters according to
// the Grapheme Cluster Boundary Rules specified in http://www.unicode.org/reports/tr29/#Grapheme_Cluster_Boundary_Rules.
// Unfortunately, not all rules have the behavior we need in .NET. For example, the rules don't allow breaking between the CR '\r' and LF '\n' characters,
// so searching for "\n" in a string like "\r\n" would produce a not-found result.
// We customize the break iterator to exclude the CR x LF rule, which doesn't allow breaking between CR and LF.
// The general rule syntax is explained in https://unicode-org.github.io/icu/userguide/boundaryanalysis/break-rules.html.
// The latest ICU rule definitions exist at https://github.com/unicode-org/icu/blob/main/icu4c/source/data/brkitr/rules/char.txt.
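//
// A minimal usage sketch (illustrative only; it mirrors what GetSearchIteratorUsingCollator
// below does when it wires the customized iterator into a string search):
//
//   UErrorCode status = U_ZERO_ERROR;
//   UBreakIterator* breaker = CreateCustomizedBreakIterator();
//   UStringSearch* search = usearch_openFromCollator(target, targetLength, source, sourceLength,
//                                                    collator, breaker, &status);
//   int32_t idx = usearch_first(search, &status); // with the customized rules, "\n" is found inside "\r\n"
//   usearch_close(search);
//   if (breaker != NULL)
//       ubrk_close(breaker);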
static UBreakIterator* CreateCustomizedBreakIterator(void)
{
static UChar emptyString[1];
UBreakIterator* breaker;
UErrorCode status = U_ZERO_ERROR;
if (s_breakIteratorRules != NULL)
{
breaker = ubrk_openRules(s_breakIteratorRules, -1, emptyString, 0, NULL, &status);
return U_FAILURE(status) ? NULL : breaker;
}
int32_t oldRulesLength = (int32_t)strlen(BreakIteratorRuleOld);
int32_t newRulesLength = (int32_t)strlen(BreakIteratorRuleNew);
int32_t breakIteratorRulesLength = newRulesLength > oldRulesLength ? newRulesLength : oldRulesLength;
UChar* rules = (UChar*)calloc((size_t)breakIteratorRulesLength + 1, sizeof(UChar));
if (rules == NULL)
{
return NULL;
}
u_uastrncpy(rules, BreakIteratorRuleNew, newRulesLength);
rules[newRulesLength] = '\0';
breaker = ubrk_openRules(rules, newRulesLength, emptyString, 0, NULL, &status);
if (U_FAILURE(status))
{
status = U_ZERO_ERROR;
u_uastrncpy(rules, BreakIteratorRuleOld, oldRulesLength);
rules[oldRulesLength] = '\0';
breaker = ubrk_openRules(rules, oldRulesLength, emptyString, 0, NULL, &status);
}
if (U_FAILURE(status))
{
free(rules);
return NULL;
}
UChar* pNull = NULL;
if (!pal_atomic_cas_ptr((void* volatile*)&s_breakIteratorRules, rules, pNull))
{
free(rules);
assert(s_breakIteratorRules != NULL);
}
return breaker;
}
static void CloseSearchIterator(UStringSearch* pSearch)
{
assert(pSearch != NULL);
#if !defined(TARGET_WINDOWS)
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wcast-qual"
#endif
UBreakIterator* breakIterator = (UBreakIterator*)usearch_getBreakIterator(pSearch);
#if !defined(TARGET_WINDOWS)
#pragma GCC diagnostic pop
#endif
usearch_close(pSearch);
if (breakIterator != NULL)
{
ubrk_close(breakIterator);
}
}
void GlobalizationNative_CloseSortHandle(SortHandle* pSortHandle)
{
for (int i = 0; i <= CompareOptionsMask; i++)
{
if (pSortHandle->collatorsPerOption[i] != NULL)
{
UStringSearch* pSearch = pSortHandle->searchIteratorList[i].searchIterator;
if (pSearch != NULL)
{
if (pSearch != USED_STRING_SEARCH)
{
CloseSearchIterator(pSearch);
}
pSortHandle->searchIteratorList[i].searchIterator = NULL;
SearchIteratorNode* pNext = pSortHandle->searchIteratorList[i].next;
pSortHandle->searchIteratorList[i].next = NULL;
while (pNext != NULL)
{
if (pNext->searchIterator != NULL && pNext->searchIterator != USED_STRING_SEARCH)
{
CloseSearchIterator(pNext->searchIterator);
}
SearchIteratorNode* pCurrent = pNext;
pNext = pCurrent->next;
free(pCurrent);
}
}
ucol_close(pSortHandle->collatorsPerOption[i]);
pSortHandle->collatorsPerOption[i] = NULL;
}
}
free(pSortHandle);
}
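// Returns the collator matching the requested options, lazily cloning it from the default
// collator (slot 0) on first use. A compare-and-swap publishes the clone into the per-options
// slot; if another thread won the race, the losing clone is closed and the winner is used.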
static const UCollator* GetCollatorFromSortHandle(SortHandle* pSortHandle, int32_t options, UErrorCode* pErr)
{
if (options == 0)
{
return pSortHandle->collatorsPerOption[0];
}
else
{
options &= CompareOptionsMask;
UCollator* pCollator = pSortHandle->collatorsPerOption[options];
if (pCollator != NULL)
{
return pCollator;
}
pCollator = CloneCollatorWithOptions(pSortHandle->collatorsPerOption[0], options, pErr);
UCollator* pNull = NULL;
if (!pal_atomic_cas_ptr((void* volatile*)&pSortHandle->collatorsPerOption[options], pCollator, pNull))
{
ucol_close(pCollator);
pCollator = pSortHandle->collatorsPerOption[options];
assert(pCollator != NULL && "pCollator not expected to be null here.");
}
return pCollator;
}
}
// CreateNewSearchNode creates a new node in the linked list and marks that node's search handle as borrowed.
static inline int32_t CreateNewSearchNode(SortHandle* pSortHandle, int32_t options)
{
SearchIteratorNode* node = (SearchIteratorNode*)calloc(1, sizeof(SearchIteratorNode));
if (node == NULL)
{
return false;
}
node->searchIterator = USED_STRING_SEARCH; // Mark the new node search handle as borrowed.
node->next = NULL;
SearchIteratorNode* pCurrent = &pSortHandle->searchIteratorList[options];
assert(pCurrent->searchIterator != NULL && "Search iterator not expected to be NULL at this stage.");
SearchIteratorNode* pNull = NULL;
do
{
if (pCurrent->next == NULL && pal_atomic_cas_ptr((void* volatile*)&(pCurrent->next), node, pNull))
{
break;
}
assert(pCurrent->next != NULL && "next pointer shouldn't be null.");
pCurrent = pCurrent->next;
} while (true);
return true;
}
// Restores a previously borrowed search handle to the linked list.
static inline int32_t RestoreSearchHandle(SortHandle* pSortHandle, UStringSearch* pSearchIterator, int32_t options)
{
SearchIteratorNode* pCurrent = &pSortHandle->searchIteratorList[options];
while (pCurrent != NULL)
{
if (pCurrent->searchIterator == USED_STRING_SEARCH && pal_atomic_cas_ptr((void* volatile*)&(pCurrent->searchIterator), pSearchIterator, USED_STRING_SEARCH))
{
return true;
}
pCurrent = pCurrent->next;
}
return false;
}
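// Typical borrow/restore flow (a sketch of how the search-iterator cache is used by the
// IndexOf/LastIndexOf callers below):
//
//   UStringSearch* pSearch;
//   int32_t slot = GetSearchIterator(pSortHandle, target, targetLength, source, sourceLength, options, &pSearch);
//   if (slot >= 0)
//   {
//       UErrorCode err = U_ZERO_ERROR;
//       int32_t idx = usearch_first(pSearch, &err);
//       RestoreSearchHandle(pSortHandle, pSearch, slot);
//   }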
// Returns -1 if a search handle couldn't be borrowed from the SortHandle cache; otherwise, returns the slot number of the cache entry.
static int32_t GetSearchIteratorUsingCollator(
SortHandle* pSortHandle,
const UCollator* pColl,
const UChar* lpTarget,
int32_t cwTargetLength,
const UChar* lpSource,
int32_t cwSourceLength,
int32_t options,
UStringSearch** pSearchIterator)
{
options &= CompareOptionsMask;
*pSearchIterator = pSortHandle->searchIteratorList[options].searchIterator;
UErrorCode err = U_ZERO_ERROR;
if (*pSearchIterator == NULL)
{
UBreakIterator* breakIterator = CreateCustomizedBreakIterator();
*pSearchIterator = usearch_openFromCollator(lpTarget, cwTargetLength, lpSource, cwSourceLength, pColl, breakIterator, &err);
if (!U_SUCCESS(err))
{
if (breakIterator != NULL)
{
ubrk_close(breakIterator);
}
assert(false && "Couldn't open the search iterator.");
return -1;
}
UStringSearch* pNull = NULL;
if (!pal_atomic_cas_ptr((void* volatile*)&(pSortHandle->searchIteratorList[options].searchIterator), USED_STRING_SEARCH, pNull))
{
if (!CreateNewSearchNode(pSortHandle, options))
{
CloseSearchIterator(*pSearchIterator);
return -1;
}
}
return options;
}
    assert(*pSearchIterator != NULL && "Should have a valid search handle at this stage.");
SearchIteratorNode* pCurrent = &pSortHandle->searchIteratorList[options];
while (*pSearchIterator == USED_STRING_SEARCH || !pal_atomic_cas_ptr((void* volatile*)&(pCurrent->searchIterator), USED_STRING_SEARCH, *pSearchIterator))
{
pCurrent = pCurrent->next;
if (pCurrent == NULL)
{
*pSearchIterator = NULL;
break;
}
*pSearchIterator = pCurrent->searchIterator;
}
    if (*pSearchIterator == NULL) // Couldn't find an available handle to borrow, so create a new one.
{
UBreakIterator* breakIterator = CreateCustomizedBreakIterator();
*pSearchIterator = usearch_openFromCollator(lpTarget, cwTargetLength, lpSource, cwSourceLength, pColl, breakIterator, &err);
if (!U_SUCCESS(err))
{
if (breakIterator != NULL)
{
ubrk_close(breakIterator);
}
assert(false && "Couldn't open a new search iterator.");
return -1;
}
if (!CreateNewSearchNode(pSortHandle, options))
{
CloseSearchIterator(*pSearchIterator);
return -1;
}
return options;
}
usearch_setText(*pSearchIterator, lpSource, cwSourceLength, &err);
if (!U_SUCCESS(err))
{
int32_t r;
(void)r;
r = RestoreSearchHandle(pSortHandle, *pSearchIterator, options);
assert(r && "restoring search handle shouldn't fail.");
return -1;
}
usearch_setPattern(*pSearchIterator, lpTarget, cwTargetLength, &err);
if (!U_SUCCESS(err))
{
int32_t r;
(void)r;
r = RestoreSearchHandle(pSortHandle, *pSearchIterator, options);
assert(r && "restoring search handle shouldn't fail.");
return -1;
}
return options;
}
// Returns -1 if a search handle couldn't be borrowed from the SortHandle cache; otherwise, returns the slot number of the cache entry.
static inline int32_t GetSearchIterator(
SortHandle* pSortHandle,
const UChar* lpTarget,
int32_t cwTargetLength,
const UChar* lpSource,
int32_t cwSourceLength,
int32_t options,
UStringSearch** pSearchIterator)
{
UErrorCode err = U_ZERO_ERROR;
const UCollator* pColl = GetCollatorFromSortHandle(pSortHandle, options, &err);
if (!U_SUCCESS(err))
{
assert(false && "Couldn't get the collator.");
return -1;
}
return GetSearchIteratorUsingCollator(
pSortHandle,
pColl,
lpTarget,
cwTargetLength,
lpSource,
cwSourceLength,
options,
pSearchIterator);
}
int32_t GlobalizationNative_GetSortVersion(SortHandle* pSortHandle)
{
UErrorCode err = U_ZERO_ERROR;
const UCollator* pColl = GetCollatorFromSortHandle(pSortHandle, 0, &err);
int32_t result = -1;
if (U_SUCCESS(err))
{
ucol_getVersion(pColl, (uint8_t *) &result);
}
else
{
assert(false && "Unexpected ucol_getVersion to fail.");
}
return result;
}
/*
Function:
CompareString
*/
int32_t GlobalizationNative_CompareString(
SortHandle* pSortHandle, const UChar* lpStr1, int32_t cwStr1Length, const UChar* lpStr2, int32_t cwStr2Length, int32_t options)
{
UCollationResult result = UCOL_EQUAL;
UErrorCode err = U_ZERO_ERROR;
const UCollator* pColl = GetCollatorFromSortHandle(pSortHandle, options, &err);
if (U_SUCCESS(err))
{
// Workaround for https://unicode-org.atlassian.net/projects/ICU/issues/ICU-9396
// The ucol_strcoll routine on some older versions of ICU doesn't correctly
// handle nullptr inputs. We'll play defensively and always flow a non-nullptr.
UChar dummyChar = 0;
if (lpStr1 == NULL)
{
lpStr1 = &dummyChar;
}
if (lpStr2 == NULL)
{
lpStr2 = &dummyChar;
}
result = ucol_strcoll(pColl, lpStr1, cwStr1Length, lpStr2, cwStr2Length);
}
return result;
}
/*
Function:
IndexOf
*/
int32_t GlobalizationNative_IndexOf(
SortHandle* pSortHandle,
const UChar* lpTarget,
int32_t cwTargetLength,
const UChar* lpSource,
int32_t cwSourceLength,
int32_t options,
int32_t* pMatchedLength)
{
assert(cwTargetLength > 0);
int32_t result = USEARCH_DONE;
// It's possible somebody passed us (source = <empty>, target = <non-empty>).
// ICU's usearch_* APIs don't handle empty source inputs properly. However,
// if this occurs the user really just wanted us to perform an equality check.
// We can't short-circuit the operation because depending on the collation in
// use, certain code points may have zero weight, which means that empty
// strings may compare as equal to non-empty strings.
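    // For example, a target consisting solely of characters that carry no collation weight
    // (many default-ignorable code points behave this way under typical collations) compares
    // equal to an empty source, so we must report a match rather than -1.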
if (cwSourceLength == 0)
{
result = GlobalizationNative_CompareString(pSortHandle, lpTarget, cwTargetLength, lpSource, cwSourceLength, options);
if (result == UCOL_EQUAL && pMatchedLength != NULL)
{
*pMatchedLength = cwSourceLength;
}
return (result == UCOL_EQUAL) ? 0 : -1;
}
UErrorCode err = U_ZERO_ERROR;
UStringSearch* pSearch;
int32_t searchCacheSlot = GetSearchIterator(pSortHandle, lpTarget, cwTargetLength, lpSource, cwSourceLength, options, &pSearch);
if (searchCacheSlot < 0)
{
return result;
}
result = usearch_first(pSearch, &err);
    // if the search was successful, we'll try to get the matched string length.
if (result != USEARCH_DONE && pMatchedLength != NULL)
{
*pMatchedLength = usearch_getMatchedLength(pSearch);
}
RestoreSearchHandle(pSortHandle, pSearch, searchCacheSlot);
return result;
}
/*
Function:
LastIndexOf
*/
int32_t GlobalizationNative_LastIndexOf(
SortHandle* pSortHandle,
const UChar* lpTarget,
int32_t cwTargetLength,
const UChar* lpSource,
int32_t cwSourceLength,
int32_t options,
int32_t* pMatchedLength)
{
assert(cwTargetLength > 0);
int32_t result = USEARCH_DONE;
// It's possible somebody passed us (source = <empty>, target = <non-empty>).
// ICU's usearch_* APIs don't handle empty source inputs properly. However,
// if this occurs the user really just wanted us to perform an equality check.
// We can't short-circuit the operation because depending on the collation in
// use, certain code points may have zero weight, which means that empty
// strings may compare as equal to non-empty strings.
if (cwSourceLength == 0)
{
result = GlobalizationNative_CompareString(pSortHandle, lpTarget, cwTargetLength, lpSource, cwSourceLength, options);
if (result == UCOL_EQUAL && pMatchedLength != NULL)
{
*pMatchedLength = cwSourceLength;
}
return (result == UCOL_EQUAL) ? 0 : -1;
}
UErrorCode err = U_ZERO_ERROR;
UStringSearch* pSearch;
int32_t searchCacheSlot = GetSearchIterator(pSortHandle, lpTarget, cwTargetLength, lpSource, cwSourceLength, options, &pSearch);
if (searchCacheSlot < 0)
{
return result;
}
result = usearch_last(pSearch, &err);
// if the search was successful, we'll try to get the matched string length.
if (result != USEARCH_DONE)
{
int32_t matchLength = -1;
if (pMatchedLength != NULL)
{
matchLength = usearch_getMatchedLength(pSearch);
*pMatchedLength = matchLength;
}
        // In case the search result points at the last character (including the surrogate-pair case) of the source string, we need to check whether the target string
        // was constructed from characters that have no sort weights. The way we do that is to check that the matched length is 0.
// We need to update the returned index to have consistent behavior with Ordinal and NLS operations, and satisfy the condition:
// index = source.LastIndexOf(value, comparisonType);
// originalString.Substring(index).StartsWith(value, comparisonType) == true.
// https://github.com/dotnet/runtime/issues/13383
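        // For example, with source "abc" and a zero-weight target, the search can report a
        // match near the end with matched length 0; returning cwSourceLength (3) keeps
        // source.Substring(index).StartsWith(target) == true.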
if (result >= cwSourceLength - 2)
{
if (pMatchedLength == NULL)
{
matchLength = usearch_getMatchedLength(pSearch);
}
if (matchLength == 0)
{
result = cwSourceLength;
}
}
}
RestoreSearchHandle(pSortHandle, pSearch, searchCacheSlot);
return result;
}
/*
A collation element is an int32_t used for sorting. It consists of 3 components:
* primary - first 16 bits, representing the base letter
* secondary - next 8 bits, typically an accent
* tertiary - last 8 bits, typically the case
An example (the numbers are made up to keep it simple)
a: 1 0 0
ą: 1 1 0
A: 1 0 1
Ą: 1 1 1
this method returns a mask that allows for character comparison at the specified collator strength
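For example, masking with the UCOL_SECONDARY mask (primary + secondary bits) leaves
'a' (1 0 0) equal to 'A' (1 0 1) but different from 'ą' (1 1 0), i.e. a comparison
that ignores case but not accents.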
*/
static int32_t GetCollationElementMask(UColAttributeValue strength)
{
assert(strength >= UCOL_SECONDARY);
switch (strength)
{
case UCOL_PRIMARY:
return UCOL_PRIMARYORDERMASK;
case UCOL_SECONDARY:
return UCOL_PRIMARYORDERMASK | UCOL_SECONDARYORDERMASK;
default:
return UCOL_PRIMARYORDERMASK | UCOL_SECONDARYORDERMASK | UCOL_TERTIARYORDERMASK;
}
}
static inline int32_t SimpleAffix_Iterators(UCollationElements* pPatternIterator, UCollationElements* pSourceIterator, UColAttributeValue strength, int32_t forwardSearch, int32_t* pCapturedOffset)
{
assert(strength >= UCOL_SECONDARY);
UErrorCode errorCode = U_ZERO_ERROR;
int32_t movePattern = true, moveSource = true;
int32_t patternElement = UCOL_IGNORABLE, sourceElement = UCOL_IGNORABLE;
int32_t capturedOffset = 0;
int32_t collationElementMask = GetCollationElementMask(strength);
while (true)
{
if (movePattern)
{
patternElement = forwardSearch ? ucol_next(pPatternIterator, &errorCode) : ucol_previous(pPatternIterator, &errorCode);
}
if (moveSource)
{
if (pCapturedOffset != NULL)
{
capturedOffset = ucol_getOffset(pSourceIterator); // need to capture offset before advancing iterator
}
sourceElement = forwardSearch ? ucol_next(pSourceIterator, &errorCode) : ucol_previous(pSourceIterator, &errorCode);
}
movePattern = true; moveSource = true;
if (patternElement == UCOL_NULLORDER)
{
if (sourceElement == UCOL_NULLORDER)
{
goto ReturnTrue; // source is equal to pattern, we have reached both ends|beginnings at the same time
}
else if (sourceElement == UCOL_IGNORABLE)
{
goto ReturnTrue; // the next|previous character in source is an ignorable character, an example: "o\u0000".StartsWith("o")
}
else if (forwardSearch && ((sourceElement & UCOL_PRIMARYORDERMASK) == 0) && (sourceElement & UCOL_SECONDARYORDERMASK) != 0)
{
return false; // the next character in source text is a combining character, an example: "o\u0308".StartsWith("o")
}
else
{
goto ReturnTrue;
}
}
else if (patternElement == UCOL_IGNORABLE)
{
moveSource = false;
}
else if (sourceElement == UCOL_IGNORABLE)
{
movePattern = false;
}
else if ((patternElement & collationElementMask) != (sourceElement & collationElementMask))
{
return false;
}
}
ReturnTrue:
if (pCapturedOffset != NULL)
{
*pCapturedOffset = capturedOffset;
}
return true;
}
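// SimpleAffix opens collation-element iterators over the pattern and the text and defers to
// SimpleAffix_Iterators above. It is the fast path used by StartsWith/EndsWith when the only
// options in play are None or IgnoreCase.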
static int32_t SimpleAffix(const UCollator* pCollator, UErrorCode* pErrorCode, const UChar* pPattern, int32_t patternLength, const UChar* pText, int32_t textLength, int32_t forwardSearch, int32_t* pMatchedLength)
{
int32_t result = false;
UCollationElements* pPatternIterator = ucol_openElements(pCollator, pPattern, patternLength, pErrorCode);
if (U_SUCCESS(*pErrorCode))
{
UCollationElements* pSourceIterator = ucol_openElements(pCollator, pText, textLength, pErrorCode);
if (U_SUCCESS(*pErrorCode))
{
UColAttributeValue strength = ucol_getStrength(pCollator);
int32_t capturedOffset = 0;
result = SimpleAffix_Iterators(pPatternIterator, pSourceIterator, strength, forwardSearch, (pMatchedLength != NULL) ? &capturedOffset : NULL);
if (result && pMatchedLength != NULL)
{
// depending on whether we're searching forward or backward, the matching substring
// is [start of source string .. curIdx] or [curIdx .. end of source string]
*pMatchedLength = (forwardSearch) ? capturedOffset : (textLength - capturedOffset);
}
ucol_closeElements(pSourceIterator);
}
ucol_closeElements(pPatternIterator);
}
return result;
}
static int32_t ComplexStartsWith(SortHandle* pSortHandle, const UChar* pPattern, int32_t patternLength, const UChar* pText, int32_t textLength, int32_t options, int32_t* pMatchedLength)
{
int32_t result = false;
UErrorCode err = U_ZERO_ERROR;
const UCollator* pCollator = GetCollatorFromSortHandle(pSortHandle, options, &err);
if (!U_SUCCESS(err))
{
return result;
}
UStringSearch* pSearch;
int32_t searchCacheSlot = GetSearchIteratorUsingCollator(pSortHandle, pCollator, pPattern, patternLength, pText, textLength, options, &pSearch);
if (searchCacheSlot < 0)
{
return result;
}
int32_t idx = usearch_first(pSearch, &err);
if (idx != USEARCH_DONE)
{
if (idx == 0)
{
result = true;
}
else
{
result = CanIgnoreAllCollationElements(pCollator, pText, idx);
}
if (result && pMatchedLength != NULL)
{
// adjust matched length to account for all the elements we implicitly consumed at beginning of string
*pMatchedLength = idx + usearch_getMatchedLength(pSearch);
}
}
RestoreSearchHandle(pSortHandle, pSearch, searchCacheSlot);
return result;
}
/*
Return value is a "Win32 BOOL" (1 = true, 0 = false)
*/
int32_t GlobalizationNative_StartsWith(
SortHandle* pSortHandle,
const UChar* lpTarget,
int32_t cwTargetLength,
const UChar* lpSource,
int32_t cwSourceLength,
int32_t options,
int32_t* pMatchedLength)
{
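    // Options beyond None/IgnoreCase (e.g. IgnoreSymbols) need the full string-search based
    // path; the plain and ignore-case cases can use the cheaper collation-element walk.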
if (options > CompareOptionsIgnoreCase)
{
return ComplexStartsWith(pSortHandle, lpTarget, cwTargetLength, lpSource, cwSourceLength, options, pMatchedLength);
}
UErrorCode err = U_ZERO_ERROR;
const UCollator* pCollator = GetCollatorFromSortHandle(pSortHandle, options, &err);
if (!U_SUCCESS(err))
{
return false;
}
return SimpleAffix(pCollator, &err, lpTarget, cwTargetLength, lpSource, cwSourceLength, true, pMatchedLength);
}
static int32_t ComplexEndsWith(SortHandle* pSortHandle, const UChar* pPattern, int32_t patternLength, const UChar* pText, int32_t textLength, int32_t options, int32_t* pMatchedLength)
{
int32_t result = false;
UErrorCode err = U_ZERO_ERROR;
const UCollator* pCollator = GetCollatorFromSortHandle(pSortHandle, options, &err);
if (!U_SUCCESS(err))
{
return result;
}
UStringSearch* pSearch;
int32_t searchCacheSlot = GetSearchIteratorUsingCollator(pSortHandle, pCollator, pPattern, patternLength, pText, textLength, options, &pSearch);
if (searchCacheSlot < 0)
{
return result;
}
int32_t idx = usearch_last(pSearch, &err);
if (idx != USEARCH_DONE)
{
int32_t matchEnd = idx + usearch_getMatchedLength(pSearch);
assert(matchEnd <= textLength);
if (matchEnd == textLength)
{
result = true;
}
else
{
int32_t remainingStringLength = textLength - matchEnd;
result = CanIgnoreAllCollationElements(pCollator, pText + matchEnd, remainingStringLength);
}
if (result && pMatchedLength != NULL)
{
// adjust matched length to account for all the elements we implicitly consumed at end of string
*pMatchedLength = textLength - idx;
}
}
RestoreSearchHandle(pSortHandle, pSearch, searchCacheSlot);
return result;
}
/*
Return value is a "Win32 BOOL" (1 = true, 0 = false)
*/
int32_t GlobalizationNative_EndsWith(
SortHandle* pSortHandle,
const UChar* lpTarget,
int32_t cwTargetLength,
const UChar* lpSource,
int32_t cwSourceLength,
int32_t options,
int32_t* pMatchedLength)
{
if (options > CompareOptionsIgnoreCase)
{
return ComplexEndsWith(pSortHandle, lpTarget, cwTargetLength, lpSource, cwSourceLength, options, pMatchedLength);
}
UErrorCode err = U_ZERO_ERROR;
const UCollator* pCollator = GetCollatorFromSortHandle(pSortHandle, options, &err);
if (!U_SUCCESS(err))
{
return false;
}
return SimpleAffix(pCollator, &err, lpTarget, cwTargetLength, lpSource, cwSourceLength, false, pMatchedLength);
}
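// Gets the sort key for the given string. Note that per the ICU contract, ucol_getSortKey
// returns the required buffer size when called with a NULL buffer / zero length, so callers
// can probe for the size first and then call again with a sufficiently large buffer.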
int32_t GlobalizationNative_GetSortKey(
SortHandle* pSortHandle,
const UChar* lpStr,
int32_t cwStrLength,
uint8_t* sortKey,
int32_t cbSortKeyLength,
int32_t options)
{
UErrorCode err = U_ZERO_ERROR;
const UCollator* pColl = GetCollatorFromSortHandle(pSortHandle, options, &err);
int32_t result = 0;
if (U_SUCCESS(err))
{
result = ucol_getSortKey(pColl, lpStr, cwStrLength, sortKey, cbSortKeyLength);
}
return result;
}
| -1 |
dotnet/runtime | 66,211 | [mono] Remove SkipVerification support from the runtime | CAS support was removed in .NET Core. This allows us removing a bunch of code that is unused, e.g. the dependency on libiconv. | akoeplinger | 2022-03-04T19:47:04Z | 2022-03-06T13:44:33Z | b463b1630dbf1be5b013208a9fa73e1ecd6c774c | be629f49a350d526de2c65981294734cee420b90 | [mono] Remove SkipVerification support from the runtime. CAS support was removed in .NET Core. This allows us removing a bunch of code that is unused, e.g. the dependency on libiconv. | ./src/coreclr/jit/targetamd64.h | // Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
#pragma once
#if !defined(TARGET_AMD64)
#error The file should not be included for this platform.
#endif
// clang-format off
// TODO-AMD64-CQ: Fine tune the following xxBlk threshold values:
#define CPU_LOAD_STORE_ARCH 0
#define ROUND_FLOAT              0       // Do not round intermediate float expression results
#define CPU_HAS_BYTE_REGS 0
#define CPBLK_UNROLL_LIMIT       64      // Upper bound for letting the code generator loop-unroll CpBlk.
#define INITBLK_UNROLL_LIMIT     128     // Upper bound for letting the code generator loop-unroll InitBlk.
#define CPOBJ_NONGC_SLOTS_LIMIT  4       // For CpObj code generation, this is the threshold of the number
// of contiguous non-gc slots that trigger generating rep movsq instead of
// sequences of movsq instructions
#ifdef FEATURE_SIMD
#define ALIGN_SIMD_TYPES 1 // whether SIMD type locals are to be aligned
#if defined(UNIX_AMD64_ABI)
#define FEATURE_PARTIAL_SIMD_CALLEE_SAVE 0 // Whether SIMD registers are partially saved at calls
#else // !UNIX_AMD64_ABI
#define FEATURE_PARTIAL_SIMD_CALLEE_SAVE 1 // Whether SIMD registers are partially saved at calls
#endif // !UNIX_AMD64_ABI
#endif
#define FEATURE_FIXED_OUT_ARGS 1 // Preallocate the outgoing arg area in the prolog
#define FEATURE_STRUCTPROMOTE 1 // JIT Optimization to promote fields of structs into registers
#define FEATURE_FASTTAILCALL 1 // Tail calls made as epilog+jmp
#define FEATURE_TAILCALL_OPT 1 // opportunistic Tail calls (i.e. without ".tail" prefix) made as fast tail calls.
#define FEATURE_SET_FLAGS 0 // Set to true to force the JIT to mark the trees with GTF_SET_FLAGS when the flags need to be set
#define MAX_PASS_SINGLEREG_BYTES 8 // Maximum size of a struct passed in a single register (double).
#ifdef UNIX_AMD64_ABI
#define FEATURE_MULTIREG_ARGS_OR_RET 1 // Support for passing and/or returning single values in more than one register
#define FEATURE_MULTIREG_ARGS 1 // Support for passing a single argument in more than one register
#define FEATURE_MULTIREG_RET 1 // Support for returning a single value in more than one register
#define FEATURE_MULTIREG_STRUCT_PROMOTE 1 // True when we want to promote fields of a multireg struct into registers
#define FEATURE_STRUCT_CLASSIFIER 1 // Uses a classifier function to determine if structs are passed/returned in more than one register
#define MAX_PASS_MULTIREG_BYTES 32 // Maximum size of a struct that could be passed in more than one register (Max is two SIMD16s)
#define MAX_RET_MULTIREG_BYTES 32 // Maximum size of a struct that could be returned in more than one register (Max is two SIMD16s)
#define MAX_ARG_REG_COUNT 2 // Maximum registers used to pass a single argument in multiple registers.
#define MAX_RET_REG_COUNT 2 // Maximum registers used to return a value.
#define MAX_MULTIREG_COUNT      2  // Maximum number of registers defined by a single instruction (including calls).
// This is also the maximum number of registers for a MultiReg node.
#else // !UNIX_AMD64_ABI
#define WINDOWS_AMD64_ABI // Uses the Windows ABI for AMD64
#define FEATURE_MULTIREG_ARGS_OR_RET 0 // Support for passing and/or returning single values in more than one register
#define FEATURE_MULTIREG_ARGS 0 // Support for passing a single argument in more than one register
#define FEATURE_MULTIREG_RET 0 // Support for returning a single value in more than one register
#define FEATURE_MULTIREG_STRUCT_PROMOTE 0 // True when we want to promote fields of a multireg struct into registers
#define MAX_PASS_MULTIREG_BYTES 0 // No multireg arguments
#define MAX_RET_MULTIREG_BYTES 0 // No multireg return values
#define MAX_ARG_REG_COUNT 1 // Maximum registers used to pass a single argument (no arguments are passed using multiple registers)
#define MAX_RET_REG_COUNT 1 // Maximum registers used to return a value.
#define MAX_MULTIREG_COUNT      2  // Maximum number of registers defined by a single instruction (including calls).
// This is also the maximum number of registers for a MultiReg node.
// Note that this must be greater than 1 so that GenTreeLclVar can have an array of
// MAX_MULTIREG_COUNT - 1.
#endif // !UNIX_AMD64_ABI
#define NOGC_WRITE_BARRIERS 0 // We DO-NOT have specialized WriteBarrier JIT Helpers that DO-NOT trash the RBM_CALLEE_TRASH registers
#define USER_ARGS_COME_LAST 1
#define EMIT_TRACK_STACK_DEPTH 1
#define TARGET_POINTER_SIZE 8 // equal to sizeof(void*) and the managed pointer size in bytes for this target
#define FEATURE_EH 1 // To aid platform bring-up, eliminate exceptional EH clauses (catch, filter, filter-handler, fault) and directly execute 'finally' clauses.
#define FEATURE_EH_CALLFINALLY_THUNKS 1 // Generate call-to-finally code in "thunks" in the enclosing EH region, protected by "cloned finally" clauses.
#ifdef UNIX_AMD64_ABI
#define ETW_EBP_FRAMED 1 // if 1 we cannot use EBP as a scratch register and must create EBP based frames for most methods
#else // !UNIX_AMD64_ABI
#define ETW_EBP_FRAMED 0 // if 1 we cannot use EBP as a scratch register and must create EBP based frames for most methods
#endif // !UNIX_AMD64_ABI
#define CSE_CONSTS 1 // Enable if we want to CSE constants
#define RBM_ALLFLOAT (RBM_XMM0 | RBM_XMM1 | RBM_XMM2 | RBM_XMM3 | RBM_XMM4 | RBM_XMM5 | RBM_XMM6 | RBM_XMM7 | RBM_XMM8 | RBM_XMM9 | RBM_XMM10 | RBM_XMM11 | RBM_XMM12 | RBM_XMM13 | RBM_XMM14 | RBM_XMM15)
#define RBM_ALLDOUBLE RBM_ALLFLOAT
#define REG_FP_FIRST REG_XMM0
#define REG_FP_LAST REG_XMM15
#define FIRST_FP_ARGREG REG_XMM0
#ifdef UNIX_AMD64_ABI
#define LAST_FP_ARGREG REG_XMM7
#else // !UNIX_AMD64_ABI
#define LAST_FP_ARGREG REG_XMM3
#endif // !UNIX_AMD64_ABI
#define REGNUM_BITS 6 // number of bits in a REG_*
#define REGSIZE_BYTES 8 // number of bytes in one register
#define XMM_REGSIZE_BYTES 16 // XMM register size in bytes
#define YMM_REGSIZE_BYTES 32 // YMM register size in bytes
#define CODE_ALIGN 1 // code alignment requirement
#define STACK_ALIGN 16 // stack alignment requirement
#define STACK_ALIGN_SHIFT 4 // Shift-right amount to convert size in bytes to size in STACK_ALIGN units == log2(STACK_ALIGN)
#if ETW_EBP_FRAMED
#define RBM_ETW_FRAMED_EBP RBM_NONE
#define RBM_ETW_FRAMED_EBP_LIST
#define REG_ETW_FRAMED_EBP_LIST
#define REG_ETW_FRAMED_EBP_COUNT 0
#else // !ETW_EBP_FRAMED
#define RBM_ETW_FRAMED_EBP RBM_EBP
#define RBM_ETW_FRAMED_EBP_LIST RBM_EBP,
#define REG_ETW_FRAMED_EBP_LIST REG_EBP,
#define REG_ETW_FRAMED_EBP_COUNT 1
#endif // !ETW_EBP_FRAMED
#ifdef UNIX_AMD64_ABI
#define MIN_ARG_AREA_FOR_CALL 0 // Minimum required outgoing argument space for a call.
#define RBM_INT_CALLEE_SAVED (RBM_EBX|RBM_ETW_FRAMED_EBP|RBM_R12|RBM_R13|RBM_R14|RBM_R15)
#define RBM_INT_CALLEE_TRASH (RBM_EAX|RBM_RDI|RBM_RSI|RBM_EDX|RBM_ECX|RBM_R8|RBM_R9|RBM_R10|RBM_R11)
#define RBM_FLT_CALLEE_SAVED (0)
#define RBM_FLT_CALLEE_TRASH (RBM_XMM0|RBM_XMM1|RBM_XMM2|RBM_XMM3|RBM_XMM4|RBM_XMM5|RBM_XMM6|RBM_XMM7| \
RBM_XMM8|RBM_XMM9|RBM_XMM10|RBM_XMM11|RBM_XMM12|RBM_XMM13|RBM_XMM14|RBM_XMM15)
#define REG_PROFILER_ENTER_ARG_0 REG_R14
#define RBM_PROFILER_ENTER_ARG_0 RBM_R14
#define REG_PROFILER_ENTER_ARG_1 REG_R15
#define RBM_PROFILER_ENTER_ARG_1 RBM_R15
#define REG_DEFAULT_PROFILER_CALL_TARGET REG_R11
#else // !UNIX_AMD64_ABI
#define MIN_ARG_AREA_FOR_CALL (4 * REGSIZE_BYTES) // Minimum required outgoing argument space for a call.
#define RBM_INT_CALLEE_SAVED (RBM_EBX|RBM_ESI|RBM_EDI|RBM_ETW_FRAMED_EBP|RBM_R12|RBM_R13|RBM_R14|RBM_R15)
#define RBM_INT_CALLEE_TRASH (RBM_EAX|RBM_ECX|RBM_EDX|RBM_R8|RBM_R9|RBM_R10|RBM_R11)
#define RBM_FLT_CALLEE_SAVED (RBM_XMM6|RBM_XMM7|RBM_XMM8|RBM_XMM9|RBM_XMM10|RBM_XMM11|RBM_XMM12|RBM_XMM13|RBM_XMM14|RBM_XMM15)
#define RBM_FLT_CALLEE_TRASH (RBM_XMM0|RBM_XMM1|RBM_XMM2|RBM_XMM3|RBM_XMM4|RBM_XMM5)
#endif // !UNIX_AMD64_ABI
#define RBM_OSR_INT_CALLEE_SAVED (RBM_INT_CALLEE_SAVED | RBM_EBP)
#define REG_FLT_CALLEE_SAVED_FIRST REG_XMM6
#define REG_FLT_CALLEE_SAVED_LAST REG_XMM15
#define RBM_CALLEE_TRASH (RBM_INT_CALLEE_TRASH | RBM_FLT_CALLEE_TRASH)
#define RBM_CALLEE_SAVED (RBM_INT_CALLEE_SAVED | RBM_FLT_CALLEE_SAVED)
#define RBM_CALLEE_TRASH_NOGC RBM_CALLEE_TRASH
#define RBM_ALLINT (RBM_INT_CALLEE_SAVED | RBM_INT_CALLEE_TRASH)
#if 0
#define REG_VAR_ORDER REG_EAX,REG_EDX,REG_ECX,REG_ESI,REG_EDI,REG_EBX,REG_ETW_FRAMED_EBP_LIST \
REG_R8,REG_R9,REG_R10,REG_R11,REG_R14,REG_R15,REG_R12,REG_R13
#else
// TEMPORARY ORDER TO AVOID CALLEE-SAVES
// TODO-CQ: Review this and set appropriately
#ifdef UNIX_AMD64_ABI
#define REG_VAR_ORDER REG_EAX,REG_EDI,REG_ESI, \
REG_EDX,REG_ECX,REG_R8,REG_R9, \
REG_R10,REG_R11,REG_EBX,REG_ETW_FRAMED_EBP_LIST \
REG_R14,REG_R15,REG_R12,REG_R13
#else // !UNIX_AMD64_ABI
#define REG_VAR_ORDER REG_EAX,REG_EDX,REG_ECX, \
REG_R8,REG_R9,REG_R10,REG_R11, \
REG_ESI,REG_EDI,REG_EBX,REG_ETW_FRAMED_EBP_LIST \
REG_R14,REG_R15,REG_R12,REG_R13
#endif // !UNIX_AMD64_ABI
#endif
#define REG_VAR_ORDER_FLT REG_XMM0,REG_XMM1,REG_XMM2,REG_XMM3,REG_XMM4,REG_XMM5,REG_XMM6,REG_XMM7,REG_XMM8,REG_XMM9,REG_XMM10,REG_XMM11,REG_XMM12,REG_XMM13,REG_XMM14,REG_XMM15
#ifdef UNIX_AMD64_ABI
#define CNT_CALLEE_SAVED (5 + REG_ETW_FRAMED_EBP_COUNT)
#define CNT_CALLEE_TRASH (9)
#define CNT_CALLEE_ENREG (CNT_CALLEE_SAVED)
#define CNT_CALLEE_SAVED_FLOAT (0)
#define CNT_CALLEE_TRASH_FLOAT (16)
#define REG_CALLEE_SAVED_ORDER REG_EBX,REG_ETW_FRAMED_EBP_LIST REG_R12,REG_R13,REG_R14,REG_R15
#define RBM_CALLEE_SAVED_ORDER RBM_EBX,RBM_ETW_FRAMED_EBP_LIST RBM_R12,RBM_R13,RBM_R14,RBM_R15
#else // !UNIX_AMD64_ABI
#define CNT_CALLEE_SAVED (7 + REG_ETW_FRAMED_EBP_COUNT)
#define CNT_CALLEE_TRASH (7)
#define CNT_CALLEE_ENREG (CNT_CALLEE_SAVED)
#define CNT_CALLEE_SAVED_FLOAT (10)
#define CNT_CALLEE_TRASH_FLOAT (6)
#define REG_CALLEE_SAVED_ORDER REG_EBX,REG_ESI,REG_EDI,REG_ETW_FRAMED_EBP_LIST REG_R12,REG_R13,REG_R14,REG_R15
#define RBM_CALLEE_SAVED_ORDER RBM_EBX,RBM_ESI,RBM_EDI,RBM_ETW_FRAMED_EBP_LIST RBM_R12,RBM_R13,RBM_R14,RBM_R15
#endif // !UNIX_AMD64_ABI
#define CALLEE_SAVED_REG_MAXSZ (CNT_CALLEE_SAVED*REGSIZE_BYTES)
#define CALLEE_SAVED_FLOAT_MAXSZ (CNT_CALLEE_SAVED_FLOAT*16)
// register to hold shift amount
#define REG_SHIFT REG_ECX
#define RBM_SHIFT RBM_ECX
// This is a general scratch register that does not conflict with the argument registers
#define REG_SCRATCH REG_EAX
// Where is the exception object on entry to the handler block?
#ifdef UNIX_AMD64_ABI
#define REG_EXCEPTION_OBJECT REG_ESI
#define RBM_EXCEPTION_OBJECT RBM_ESI
#else // !UNIX_AMD64_ABI
#define REG_EXCEPTION_OBJECT REG_EDX
#define RBM_EXCEPTION_OBJECT RBM_EDX
#endif // !UNIX_AMD64_ABI
#define REG_JUMP_THUNK_PARAM REG_EAX
#define RBM_JUMP_THUNK_PARAM RBM_EAX
// Register to be used for emitting helper calls whose call target is an indir of an
// absolute memory address in case of Rel32 overflow i.e. a data address could not be
// encoded as PC-relative 32-bit offset.
//
// Notes:
// 1) RAX is a callee-trash register that is not used for passing parameters and
// also results in smaller instruction encoding.
// 2) Profiler Leave callback requires the return value to be preserved
//    in some form. We can use a custom calling convention for the Leave callback.
//    For example, the return value could be preserved in rcx so that it is available
//    to the profiler.
#define REG_DEFAULT_HELPER_CALL_TARGET REG_RAX
#define RBM_DEFAULT_HELPER_CALL_TARGET RBM_RAX
#define REG_R2R_INDIRECT_PARAM REG_RAX // Indirection cell for R2R fast tailcall
// See ImportThunk.Kind.DelayLoadHelperWithExistingIndirectionCell in crossgen2.
#define RBM_R2R_INDIRECT_PARAM RBM_RAX
// GenericPInvokeCalliHelper VASigCookie Parameter
#define REG_PINVOKE_COOKIE_PARAM REG_R11
#define RBM_PINVOKE_COOKIE_PARAM RBM_R11
// GenericPInvokeCalliHelper unmanaged target Parameter
#define REG_PINVOKE_TARGET_PARAM REG_R10
#define RBM_PINVOKE_TARGET_PARAM RBM_R10
// IL stub's secret MethodDesc parameter (JitFlags::JIT_FLAG_PUBLISH_SECRET_PARAM)
#define REG_SECRET_STUB_PARAM REG_R10
#define RBM_SECRET_STUB_PARAM RBM_R10
// Registers used by PInvoke frame setup
#define REG_PINVOKE_FRAME REG_EDI
#define RBM_PINVOKE_FRAME RBM_EDI
#define REG_PINVOKE_TCB REG_EAX
#define RBM_PINVOKE_TCB RBM_EAX
#define REG_PINVOKE_SCRATCH REG_EAX
#define RBM_PINVOKE_SCRATCH RBM_EAX
// The following defines are useful for iterating a regNumber
#define REG_FIRST REG_EAX
#define REG_INT_FIRST REG_EAX
#define REG_INT_LAST REG_R15
#define REG_INT_COUNT (REG_INT_LAST - REG_INT_FIRST + 1)
#define REG_NEXT(reg) ((regNumber)((unsigned)(reg) + 1))
#define REG_PREV(reg) ((regNumber)((unsigned)(reg) - 1))
// Which registers are int and long values returned in?
#define REG_INTRET REG_EAX
#define RBM_INTRET RBM_EAX
#define RBM_LNGRET RBM_EAX
#ifdef UNIX_AMD64_ABI
#define REG_INTRET_1 REG_RDX
#define RBM_INTRET_1 RBM_RDX
#define REG_LNGRET_1 REG_RDX
#define RBM_LNGRET_1 RBM_RDX
#endif // UNIX_AMD64_ABI
#define REG_FLOATRET REG_XMM0
#define RBM_FLOATRET RBM_XMM0
#define REG_DOUBLERET REG_XMM0
#define RBM_DOUBLERET RBM_XMM0
#ifdef UNIX_AMD64_ABI
#define REG_FLOATRET_1 REG_XMM1
#define RBM_FLOATRET_1 RBM_XMM1
#define REG_DOUBLERET_1 REG_XMM1
#define RBM_DOUBLERET_1 RBM_XMM1
#endif // UNIX_AMD64_ABI
#define REG_FPBASE REG_EBP
#define RBM_FPBASE RBM_EBP
#define STR_FPBASE "rbp"
#define REG_SPBASE REG_ESP
#define RBM_SPBASE RBM_ESP
#define STR_SPBASE "rsp"
#define FIRST_ARG_STACK_OFFS (REGSIZE_BYTES) // return address
#ifdef UNIX_AMD64_ABI
#define MAX_REG_ARG 6
#define MAX_FLOAT_REG_ARG 8
#define REG_ARG_FIRST REG_EDI
#define REG_ARG_LAST REG_R9
#define INIT_ARG_STACK_SLOT 0 // No outgoing reserved stack slots
#define REG_ARG_0 REG_EDI
#define REG_ARG_1 REG_ESI
#define REG_ARG_2 REG_EDX
#define REG_ARG_3 REG_ECX
#define REG_ARG_4 REG_R8
#define REG_ARG_5 REG_R9
extern const regNumber intArgRegs [MAX_REG_ARG];
extern const regMaskTP intArgMasks[MAX_REG_ARG];
extern const regNumber fltArgRegs [MAX_FLOAT_REG_ARG];
extern const regMaskTP fltArgMasks[MAX_FLOAT_REG_ARG];
#define RBM_ARG_0 RBM_RDI
#define RBM_ARG_1 RBM_RSI
#define RBM_ARG_2 RBM_EDX
#define RBM_ARG_3 RBM_ECX
#define RBM_ARG_4 RBM_R8
#define RBM_ARG_5 RBM_R9
#else // !UNIX_AMD64_ABI
#define MAX_REG_ARG 4
#define MAX_FLOAT_REG_ARG 4
#define REG_ARG_FIRST REG_ECX
#define REG_ARG_LAST REG_R9
#define INIT_ARG_STACK_SLOT 4 // 4 outgoing reserved stack slots
#define REG_ARG_0 REG_ECX
#define REG_ARG_1 REG_EDX
#define REG_ARG_2 REG_R8
#define REG_ARG_3 REG_R9
extern const regNumber intArgRegs [MAX_REG_ARG];
extern const regMaskTP intArgMasks[MAX_REG_ARG];
extern const regNumber fltArgRegs [MAX_FLOAT_REG_ARG];
extern const regMaskTP fltArgMasks[MAX_FLOAT_REG_ARG];
#define RBM_ARG_0 RBM_ECX
#define RBM_ARG_1 RBM_EDX
#define RBM_ARG_2 RBM_R8
#define RBM_ARG_3 RBM_R9
#endif // !UNIX_AMD64_ABI
#define REG_FLTARG_0 REG_XMM0
#define REG_FLTARG_1 REG_XMM1
#define REG_FLTARG_2 REG_XMM2
#define REG_FLTARG_3 REG_XMM3
#define RBM_FLTARG_0 RBM_XMM0
#define RBM_FLTARG_1 RBM_XMM1
#define RBM_FLTARG_2 RBM_XMM2
#define RBM_FLTARG_3 RBM_XMM3
#ifdef UNIX_AMD64_ABI
#define REG_FLTARG_4 REG_XMM4
#define REG_FLTARG_5 REG_XMM5
#define REG_FLTARG_6 REG_XMM6
#define REG_FLTARG_7 REG_XMM7
#define RBM_FLTARG_4 RBM_XMM4
#define RBM_FLTARG_5 RBM_XMM5
#define RBM_FLTARG_6 RBM_XMM6
#define RBM_FLTARG_7 RBM_XMM7
#define RBM_ARG_REGS (RBM_ARG_0|RBM_ARG_1|RBM_ARG_2|RBM_ARG_3|RBM_ARG_4|RBM_ARG_5)
#define RBM_FLTARG_REGS (RBM_FLTARG_0|RBM_FLTARG_1|RBM_FLTARG_2|RBM_FLTARG_3|RBM_FLTARG_4|RBM_FLTARG_5|RBM_FLTARG_6|RBM_FLTARG_7)
#else // !UNIX_AMD64_ABI
#define RBM_ARG_REGS (RBM_ARG_0|RBM_ARG_1|RBM_ARG_2|RBM_ARG_3)
#define RBM_FLTARG_REGS (RBM_FLTARG_0|RBM_FLTARG_1|RBM_FLTARG_2|RBM_FLTARG_3)
#endif // !UNIX_AMD64_ABI
// The registers trashed by profiler enter/leave/tailcall hook
// See vm\amd64\asmhelpers.asm for more details.
#define RBM_PROFILER_ENTER_TRASH RBM_CALLEE_TRASH
#define RBM_PROFILER_TAILCALL_TRASH RBM_PROFILER_LEAVE_TRASH
// The registers trashed by the CORINFO_HELP_STOP_FOR_GC helper.
#ifdef UNIX_AMD64_ABI
// See vm\amd64\unixasmhelpers.S for more details.
//
// On Unix a struct between 9 and 16 bytes in size is returned in two return registers.
// The return registers could be any two from the set { RAX, RDX, XMM0, XMM1 }.
// STOP_FOR_GC helper preserves all the 4 possible return registers.
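// For example, under the SysV classification a struct { long l; double d; } (16 bytes) is
// returned with 'l' in RAX and 'd' in XMM0, so all four registers must survive the helper.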
#define RBM_STOP_FOR_GC_TRASH (RBM_CALLEE_TRASH & ~(RBM_FLOATRET | RBM_INTRET | RBM_FLOATRET_1 | RBM_INTRET_1))
#define RBM_PROFILER_LEAVE_TRASH (RBM_CALLEE_TRASH & ~(RBM_FLOATRET | RBM_INTRET | RBM_FLOATRET_1 | RBM_INTRET_1))
#else
// See vm\amd64\asmhelpers.asm for more details.
#define RBM_STOP_FOR_GC_TRASH (RBM_CALLEE_TRASH & ~(RBM_FLOATRET | RBM_INTRET))
#define RBM_PROFILER_LEAVE_TRASH (RBM_CALLEE_TRASH & ~(RBM_FLOATRET | RBM_INTRET))
#endif
// The registers trashed by the CORINFO_HELP_INIT_PINVOKE_FRAME helper.
#define RBM_INIT_PINVOKE_FRAME_TRASH RBM_CALLEE_TRASH
#define RBM_VALIDATE_INDIRECT_CALL_TRASH (RBM_INT_CALLEE_TRASH & ~(RBM_R10 | RBM_RCX))
#define REG_VALIDATE_INDIRECT_CALL_ADDR REG_RCX
#define REG_DISPATCH_INDIRECT_CALL_ADDR REG_RAX
// What sort of reloc do we use for [disp32] address mode
#define IMAGE_REL_BASED_DISP32 IMAGE_REL_BASED_REL32
// What sort of reloc do we use for 'moffset' address mode (for 'mov eax, moffset' or 'mov moffset, eax')
#define IMAGE_REL_BASED_MOFFSET IMAGE_REL_BASED_DIR64
// Pointer-sized string move instructions
#define INS_movsp INS_movsq
#define INS_r_movsp INS_r_movsq
#define INS_stosp INS_stosq
#define INS_r_stosp INS_r_stosq
// AMD64 uses FEATURE_FIXED_OUT_ARGS so this can be zero.
#define STACK_PROBE_BOUNDARY_THRESHOLD_BYTES 0
#define REG_STACK_PROBE_HELPER_ARG REG_R11
#define RBM_STACK_PROBE_HELPER_ARG RBM_R11
#ifdef UNIX_AMD64_ABI
#define RBM_STACK_PROBE_HELPER_TRASH RBM_NONE
#else // !UNIX_AMD64_ABI
#define RBM_STACK_PROBE_HELPER_TRASH RBM_RAX
#endif // !UNIX_AMD64_ABI
// clang-format on
| // Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
#pragma once
#if !defined(TARGET_AMD64)
#error The file should not be included for this platform.
#endif
// clang-format off
// TODO-AMD64-CQ: Fine tune the following xxBlk threshold values:
#define CPU_LOAD_STORE_ARCH 0
#define ROUND_FLOAT              0       // Do not round intermediate float expression results
#define CPU_HAS_BYTE_REGS 0
#define CPBLK_UNROLL_LIMIT       64      // Upper bound for letting the code generator loop-unroll CpBlk.
#define INITBLK_UNROLL_LIMIT     128     // Upper bound for letting the code generator loop-unroll InitBlk.
#define CPOBJ_NONGC_SLOTS_LIMIT  4       // For CpObj code generation, this is the threshold of the number
// of contiguous non-gc slots that trigger generating rep movsq instead of
// sequences of movsq instructions
#ifdef FEATURE_SIMD
#define ALIGN_SIMD_TYPES 1 // whether SIMD type locals are to be aligned
#if defined(UNIX_AMD64_ABI)
#define FEATURE_PARTIAL_SIMD_CALLEE_SAVE 0 // Whether SIMD registers are partially saved at calls
#else // !UNIX_AMD64_ABI
#define FEATURE_PARTIAL_SIMD_CALLEE_SAVE 1 // Whether SIMD registers are partially saved at calls
#endif // !UNIX_AMD64_ABI
#endif
#define FEATURE_FIXED_OUT_ARGS 1 // Preallocate the outgoing arg area in the prolog
#define FEATURE_STRUCTPROMOTE 1 // JIT Optimization to promote fields of structs into registers
#define FEATURE_FASTTAILCALL 1 // Tail calls made as epilog+jmp
#define FEATURE_TAILCALL_OPT 1 // opportunistic Tail calls (i.e. without ".tail" prefix) made as fast tail calls.
#define FEATURE_SET_FLAGS 0 // Set to true to force the JIT to mark the trees with GTF_SET_FLAGS when the flags need to be set
#define MAX_PASS_SINGLEREG_BYTES 8 // Maximum size of a struct passed in a single register (double).
#ifdef UNIX_AMD64_ABI
#define FEATURE_MULTIREG_ARGS_OR_RET 1 // Support for passing and/or returning single values in more than one register
#define FEATURE_MULTIREG_ARGS 1 // Support for passing a single argument in more than one register
#define FEATURE_MULTIREG_RET 1 // Support for returning a single value in more than one register
#define FEATURE_MULTIREG_STRUCT_PROMOTE 1 // True when we want to promote fields of a multireg struct into registers
#define FEATURE_STRUCT_CLASSIFIER 1 // Uses a classifier function to determine if structs are passed/returned in more than one register
#define MAX_PASS_MULTIREG_BYTES 32 // Maximum size of a struct that could be passed in more than one register (Max is two SIMD16s)
#define MAX_RET_MULTIREG_BYTES 32 // Maximum size of a struct that could be returned in more than one register (Max is two SIMD16s)
#define MAX_ARG_REG_COUNT 2 // Maximum registers used to pass a single argument in multiple registers.
#define MAX_RET_REG_COUNT 2 // Maximum registers used to return a value.
#define MAX_MULTIREG_COUNT      2  // Maximum number of registers defined by a single instruction (including calls).
// This is also the maximum number of registers for a MultiReg node.
#else // !UNIX_AMD64_ABI
#define WINDOWS_AMD64_ABI // Uses the Windows ABI for AMD64
#define FEATURE_MULTIREG_ARGS_OR_RET 0 // Support for passing and/or returning single values in more than one register
#define FEATURE_MULTIREG_ARGS 0 // Support for passing a single argument in more than one register
#define FEATURE_MULTIREG_RET 0 // Support for returning a single value in more than one register
#define FEATURE_MULTIREG_STRUCT_PROMOTE 0 // True when we want to promote fields of a multireg struct into registers
#define MAX_PASS_MULTIREG_BYTES 0 // No multireg arguments
#define MAX_RET_MULTIREG_BYTES 0 // No multireg return values
#define MAX_ARG_REG_COUNT 1 // Maximum registers used to pass a single argument (no arguments are passed using multiple registers)
#define MAX_RET_REG_COUNT 1 // Maximum registers used to return a value.
#define MAX_MULTIREG_COUNT      2  // Maximum number of registers defined by a single instruction (including calls).
// This is also the maximum number of registers for a MultiReg node.
// Note that this must be greater than 1 so that GenTreeLclVar can have an array of
// MAX_MULTIREG_COUNT - 1.
#endif // !UNIX_AMD64_ABI
#define NOGC_WRITE_BARRIERS 0 // We DO-NOT have specialized WriteBarrier JIT Helpers that DO-NOT trash the RBM_CALLEE_TRASH registers
#define USER_ARGS_COME_LAST 1
#define EMIT_TRACK_STACK_DEPTH 1
#define TARGET_POINTER_SIZE 8 // equal to sizeof(void*) and the managed pointer size in bytes for this target
#define FEATURE_EH 1 // To aid platform bring-up, eliminate exceptional EH clauses (catch, filter, filter-handler, fault) and directly execute 'finally' clauses.
#define FEATURE_EH_CALLFINALLY_THUNKS 1 // Generate call-to-finally code in "thunks" in the enclosing EH region, protected by "cloned finally" clauses.
#ifdef UNIX_AMD64_ABI
#define ETW_EBP_FRAMED 1 // if 1 we cannot use EBP as a scratch register and must create EBP based frames for most methods
#else // !UNIX_AMD64_ABI
#define ETW_EBP_FRAMED 0 // if 1 we cannot use EBP as a scratch register and must create EBP based frames for most methods
#endif // !UNIX_AMD64_ABI
#define CSE_CONSTS 1 // Enable if we want to CSE constants
#define RBM_ALLFLOAT (RBM_XMM0 | RBM_XMM1 | RBM_XMM2 | RBM_XMM3 | RBM_XMM4 | RBM_XMM5 | RBM_XMM6 | RBM_XMM7 | RBM_XMM8 | RBM_XMM9 | RBM_XMM10 | RBM_XMM11 | RBM_XMM12 | RBM_XMM13 | RBM_XMM14 | RBM_XMM15)
#define RBM_ALLDOUBLE RBM_ALLFLOAT
#define REG_FP_FIRST REG_XMM0
#define REG_FP_LAST REG_XMM15
#define FIRST_FP_ARGREG REG_XMM0
#ifdef UNIX_AMD64_ABI
#define LAST_FP_ARGREG REG_XMM7
#else // !UNIX_AMD64_ABI
#define LAST_FP_ARGREG REG_XMM3
#endif // !UNIX_AMD64_ABI
#define REGNUM_BITS 6 // number of bits in a REG_*
#define REGSIZE_BYTES 8 // number of bytes in one register
#define XMM_REGSIZE_BYTES 16 // XMM register size in bytes
#define YMM_REGSIZE_BYTES 32 // YMM register size in bytes
#define CODE_ALIGN 1 // code alignment requirement
#define STACK_ALIGN 16 // stack alignment requirement
#define STACK_ALIGN_SHIFT 4 // Shift-right amount to convert size in bytes to size in STACK_ALIGN units == log2(STACK_ALIGN)
#if ETW_EBP_FRAMED
#define RBM_ETW_FRAMED_EBP RBM_NONE
#define RBM_ETW_FRAMED_EBP_LIST
#define REG_ETW_FRAMED_EBP_LIST
#define REG_ETW_FRAMED_EBP_COUNT 0
#else // !ETW_EBP_FRAMED
#define RBM_ETW_FRAMED_EBP RBM_EBP
#define RBM_ETW_FRAMED_EBP_LIST RBM_EBP,
#define REG_ETW_FRAMED_EBP_LIST REG_EBP,
#define REG_ETW_FRAMED_EBP_COUNT 1
#endif // !ETW_EBP_FRAMED
#ifdef UNIX_AMD64_ABI
#define MIN_ARG_AREA_FOR_CALL 0 // Minimum required outgoing argument space for a call.
#define RBM_INT_CALLEE_SAVED (RBM_EBX|RBM_ETW_FRAMED_EBP|RBM_R12|RBM_R13|RBM_R14|RBM_R15)
#define RBM_INT_CALLEE_TRASH (RBM_EAX|RBM_RDI|RBM_RSI|RBM_EDX|RBM_ECX|RBM_R8|RBM_R9|RBM_R10|RBM_R11)
#define RBM_FLT_CALLEE_SAVED (0)
#define RBM_FLT_CALLEE_TRASH (RBM_XMM0|RBM_XMM1|RBM_XMM2|RBM_XMM3|RBM_XMM4|RBM_XMM5|RBM_XMM6|RBM_XMM7| \
RBM_XMM8|RBM_XMM9|RBM_XMM10|RBM_XMM11|RBM_XMM12|RBM_XMM13|RBM_XMM14|RBM_XMM15)
#define REG_PROFILER_ENTER_ARG_0 REG_R14
#define RBM_PROFILER_ENTER_ARG_0 RBM_R14
#define REG_PROFILER_ENTER_ARG_1 REG_R15
#define RBM_PROFILER_ENTER_ARG_1 RBM_R15
#define REG_DEFAULT_PROFILER_CALL_TARGET REG_R11
#else // !UNIX_AMD64_ABI
#define MIN_ARG_AREA_FOR_CALL (4 * REGSIZE_BYTES) // Minimum required outgoing argument space for a call.
#define RBM_INT_CALLEE_SAVED (RBM_EBX|RBM_ESI|RBM_EDI|RBM_ETW_FRAMED_EBP|RBM_R12|RBM_R13|RBM_R14|RBM_R15)
#define RBM_INT_CALLEE_TRASH (RBM_EAX|RBM_ECX|RBM_EDX|RBM_R8|RBM_R9|RBM_R10|RBM_R11)
#define RBM_FLT_CALLEE_SAVED (RBM_XMM6|RBM_XMM7|RBM_XMM8|RBM_XMM9|RBM_XMM10|RBM_XMM11|RBM_XMM12|RBM_XMM13|RBM_XMM14|RBM_XMM15)
#define RBM_FLT_CALLEE_TRASH (RBM_XMM0|RBM_XMM1|RBM_XMM2|RBM_XMM3|RBM_XMM4|RBM_XMM5)
#endif // !UNIX_AMD64_ABI
#define RBM_OSR_INT_CALLEE_SAVED (RBM_INT_CALLEE_SAVED | RBM_EBP)
#define REG_FLT_CALLEE_SAVED_FIRST REG_XMM6
#define REG_FLT_CALLEE_SAVED_LAST REG_XMM15
#define RBM_CALLEE_TRASH (RBM_INT_CALLEE_TRASH | RBM_FLT_CALLEE_TRASH)
#define RBM_CALLEE_SAVED (RBM_INT_CALLEE_SAVED | RBM_FLT_CALLEE_SAVED)
#define RBM_CALLEE_TRASH_NOGC RBM_CALLEE_TRASH
#define RBM_ALLINT (RBM_INT_CALLEE_SAVED | RBM_INT_CALLEE_TRASH)
#if 0
#define REG_VAR_ORDER REG_EAX,REG_EDX,REG_ECX,REG_ESI,REG_EDI,REG_EBX,REG_ETW_FRAMED_EBP_LIST \
REG_R8,REG_R9,REG_R10,REG_R11,REG_R14,REG_R15,REG_R12,REG_R13
#else
// TEMPORARY ORDER TO AVOID CALLEE-SAVES
// TODO-CQ: Review this and set appropriately
#ifdef UNIX_AMD64_ABI
#define REG_VAR_ORDER REG_EAX,REG_EDI,REG_ESI, \
REG_EDX,REG_ECX,REG_R8,REG_R9, \
REG_R10,REG_R11,REG_EBX,REG_ETW_FRAMED_EBP_LIST \
REG_R14,REG_R15,REG_R12,REG_R13
#else // !UNIX_AMD64_ABI
#define REG_VAR_ORDER REG_EAX,REG_EDX,REG_ECX, \
REG_R8,REG_R9,REG_R10,REG_R11, \
REG_ESI,REG_EDI,REG_EBX,REG_ETW_FRAMED_EBP_LIST \
REG_R14,REG_R15,REG_R12,REG_R13
#endif // !UNIX_AMD64_ABI
#endif
#define REG_VAR_ORDER_FLT REG_XMM0,REG_XMM1,REG_XMM2,REG_XMM3,REG_XMM4,REG_XMM5,REG_XMM6,REG_XMM7,REG_XMM8,REG_XMM9,REG_XMM10,REG_XMM11,REG_XMM12,REG_XMM13,REG_XMM14,REG_XMM15
#ifdef UNIX_AMD64_ABI
#define CNT_CALLEE_SAVED (5 + REG_ETW_FRAMED_EBP_COUNT)
#define CNT_CALLEE_TRASH (9)
#define CNT_CALLEE_ENREG (CNT_CALLEE_SAVED)
#define CNT_CALLEE_SAVED_FLOAT (0)
#define CNT_CALLEE_TRASH_FLOAT (16)
#define REG_CALLEE_SAVED_ORDER REG_EBX,REG_ETW_FRAMED_EBP_LIST REG_R12,REG_R13,REG_R14,REG_R15
#define RBM_CALLEE_SAVED_ORDER RBM_EBX,RBM_ETW_FRAMED_EBP_LIST RBM_R12,RBM_R13,RBM_R14,RBM_R15
#else // !UNIX_AMD64_ABI
#define CNT_CALLEE_SAVED (7 + REG_ETW_FRAMED_EBP_COUNT)
#define CNT_CALLEE_TRASH (7)
#define CNT_CALLEE_ENREG (CNT_CALLEE_SAVED)
#define CNT_CALLEE_SAVED_FLOAT (10)
#define CNT_CALLEE_TRASH_FLOAT (6)
#define REG_CALLEE_SAVED_ORDER REG_EBX,REG_ESI,REG_EDI,REG_ETW_FRAMED_EBP_LIST REG_R12,REG_R13,REG_R14,REG_R15
#define RBM_CALLEE_SAVED_ORDER RBM_EBX,RBM_ESI,RBM_EDI,RBM_ETW_FRAMED_EBP_LIST RBM_R12,RBM_R13,RBM_R14,RBM_R15
#endif // !UNIX_AMD64_ABI
#define CALLEE_SAVED_REG_MAXSZ (CNT_CALLEE_SAVED*REGSIZE_BYTES)
#define CALLEE_SAVED_FLOAT_MAXSZ (CNT_CALLEE_SAVED_FLOAT*16)
// register to hold shift amount
#define REG_SHIFT REG_ECX
#define RBM_SHIFT RBM_ECX
// This is a general scratch register that does not conflict with the argument registers
#define REG_SCRATCH REG_EAX
// Where is the exception object on entry to the handler block?
#ifdef UNIX_AMD64_ABI
#define REG_EXCEPTION_OBJECT REG_ESI
#define RBM_EXCEPTION_OBJECT RBM_ESI
#else // !UNIX_AMD64_ABI
#define REG_EXCEPTION_OBJECT REG_EDX
#define RBM_EXCEPTION_OBJECT RBM_EDX
#endif // !UNIX_AMD64_ABI
#define REG_JUMP_THUNK_PARAM REG_EAX
#define RBM_JUMP_THUNK_PARAM RBM_EAX
// Register to be used for emitting helper calls whose call target is an indir of an
// absolute memory address in case of Rel32 overflow i.e. a data address could not be
// encoded as PC-relative 32-bit offset.
//
// Notes:
// 1) RAX is a callee-trash register that is not used for passing parameters and
// also results in smaller instruction encoding.
// 2) Profiler Leave callback requires the return value to be preserved
//    in some form. We can use a custom calling convention for the Leave callback.
//    For example, the return value could be preserved in rcx so that it is available
//    to the profiler.
#define REG_DEFAULT_HELPER_CALL_TARGET REG_RAX
#define RBM_DEFAULT_HELPER_CALL_TARGET RBM_RAX
#define REG_R2R_INDIRECT_PARAM REG_RAX // Indirection cell for R2R fast tailcall
// See ImportThunk.Kind.DelayLoadHelperWithExistingIndirectionCell in crossgen2.
#define RBM_R2R_INDIRECT_PARAM RBM_RAX
// GenericPInvokeCalliHelper VASigCookie Parameter
#define REG_PINVOKE_COOKIE_PARAM REG_R11
#define RBM_PINVOKE_COOKIE_PARAM RBM_R11
// GenericPInvokeCalliHelper unmanaged target Parameter
#define REG_PINVOKE_TARGET_PARAM REG_R10
#define RBM_PINVOKE_TARGET_PARAM RBM_R10
// IL stub's secret MethodDesc parameter (JitFlags::JIT_FLAG_PUBLISH_SECRET_PARAM)
#define REG_SECRET_STUB_PARAM REG_R10
#define RBM_SECRET_STUB_PARAM RBM_R10
// Registers used by PInvoke frame setup
#define REG_PINVOKE_FRAME REG_EDI
#define RBM_PINVOKE_FRAME RBM_EDI
#define REG_PINVOKE_TCB REG_EAX
#define RBM_PINVOKE_TCB RBM_EAX
#define REG_PINVOKE_SCRATCH REG_EAX
#define RBM_PINVOKE_SCRATCH RBM_EAX
// The following defines are useful for iterating a regNumber
#define REG_FIRST REG_EAX
#define REG_INT_FIRST REG_EAX
#define REG_INT_LAST REG_R15
#define REG_INT_COUNT (REG_INT_LAST - REG_INT_FIRST + 1)
#define REG_NEXT(reg) ((regNumber)((unsigned)(reg) + 1))
#define REG_PREV(reg) ((regNumber)((unsigned)(reg) - 1))
// Which registers are int and long values returned in?
#define REG_INTRET REG_EAX
#define RBM_INTRET RBM_EAX
#define RBM_LNGRET RBM_EAX
#ifdef UNIX_AMD64_ABI
#define REG_INTRET_1 REG_RDX
#define RBM_INTRET_1 RBM_RDX
#define REG_LNGRET_1 REG_RDX
#define RBM_LNGRET_1 RBM_RDX
#endif // UNIX_AMD64_ABI
#define REG_FLOATRET REG_XMM0
#define RBM_FLOATRET RBM_XMM0
#define REG_DOUBLERET REG_XMM0
#define RBM_DOUBLERET RBM_XMM0
#ifdef UNIX_AMD64_ABI
#define REG_FLOATRET_1 REG_XMM1
#define RBM_FLOATRET_1 RBM_XMM1
#define REG_DOUBLERET_1 REG_XMM1
#define RBM_DOUBLERET_1 RBM_XMM1
#endif // UNIX_AMD64_ABI
#define REG_FPBASE REG_EBP
#define RBM_FPBASE RBM_EBP
#define STR_FPBASE "rbp"
#define REG_SPBASE REG_ESP
#define RBM_SPBASE RBM_ESP
#define STR_SPBASE "rsp"
#define FIRST_ARG_STACK_OFFS (REGSIZE_BYTES) // return address
#ifdef UNIX_AMD64_ABI
#define MAX_REG_ARG 6
#define MAX_FLOAT_REG_ARG 8
#define REG_ARG_FIRST REG_EDI
#define REG_ARG_LAST REG_R9
#define INIT_ARG_STACK_SLOT 0 // No outgoing reserved stack slots
#define REG_ARG_0 REG_EDI
#define REG_ARG_1 REG_ESI
#define REG_ARG_2 REG_EDX
#define REG_ARG_3 REG_ECX
#define REG_ARG_4 REG_R8
#define REG_ARG_5 REG_R9
extern const regNumber intArgRegs [MAX_REG_ARG];
extern const regMaskTP intArgMasks[MAX_REG_ARG];
extern const regNumber fltArgRegs [MAX_FLOAT_REG_ARG];
extern const regMaskTP fltArgMasks[MAX_FLOAT_REG_ARG];
#define RBM_ARG_0 RBM_RDI
#define RBM_ARG_1 RBM_RSI
#define RBM_ARG_2 RBM_EDX
#define RBM_ARG_3 RBM_ECX
#define RBM_ARG_4 RBM_R8
#define RBM_ARG_5 RBM_R9
#else // !UNIX_AMD64_ABI
#define MAX_REG_ARG 4
#define MAX_FLOAT_REG_ARG 4
#define REG_ARG_FIRST REG_ECX
#define REG_ARG_LAST REG_R9
#define INIT_ARG_STACK_SLOT 4 // 4 outgoing reserved stack slots
#define REG_ARG_0 REG_ECX
#define REG_ARG_1 REG_EDX
#define REG_ARG_2 REG_R8
#define REG_ARG_3 REG_R9
extern const regNumber intArgRegs [MAX_REG_ARG];
extern const regMaskTP intArgMasks[MAX_REG_ARG];
extern const regNumber fltArgRegs [MAX_FLOAT_REG_ARG];
extern const regMaskTP fltArgMasks[MAX_FLOAT_REG_ARG];
#define RBM_ARG_0 RBM_ECX
#define RBM_ARG_1 RBM_EDX
#define RBM_ARG_2 RBM_R8
#define RBM_ARG_3 RBM_R9
#endif // !UNIX_AMD64_ABI
#define REG_FLTARG_0 REG_XMM0
#define REG_FLTARG_1 REG_XMM1
#define REG_FLTARG_2 REG_XMM2
#define REG_FLTARG_3 REG_XMM3
#define RBM_FLTARG_0 RBM_XMM0
#define RBM_FLTARG_1 RBM_XMM1
#define RBM_FLTARG_2 RBM_XMM2
#define RBM_FLTARG_3 RBM_XMM3
#ifdef UNIX_AMD64_ABI
#define REG_FLTARG_4 REG_XMM4
#define REG_FLTARG_5 REG_XMM5
#define REG_FLTARG_6 REG_XMM6
#define REG_FLTARG_7 REG_XMM7
#define RBM_FLTARG_4 RBM_XMM4
#define RBM_FLTARG_5 RBM_XMM5
#define RBM_FLTARG_6 RBM_XMM6
#define RBM_FLTARG_7 RBM_XMM7
#define RBM_ARG_REGS (RBM_ARG_0|RBM_ARG_1|RBM_ARG_2|RBM_ARG_3|RBM_ARG_4|RBM_ARG_5)
#define RBM_FLTARG_REGS (RBM_FLTARG_0|RBM_FLTARG_1|RBM_FLTARG_2|RBM_FLTARG_3|RBM_FLTARG_4|RBM_FLTARG_5|RBM_FLTARG_6|RBM_FLTARG_7)
#else // !UNIX_AMD64_ABI
#define RBM_ARG_REGS (RBM_ARG_0|RBM_ARG_1|RBM_ARG_2|RBM_ARG_3)
#define RBM_FLTARG_REGS (RBM_FLTARG_0|RBM_FLTARG_1|RBM_FLTARG_2|RBM_FLTARG_3)
#endif // !UNIX_AMD64_ABI
// The registers trashed by the profiler enter/leave/tailcall hooks.
// See vm\amd64\asmhelpers.asm for more details.
#define RBM_PROFILER_ENTER_TRASH RBM_CALLEE_TRASH
#define RBM_PROFILER_TAILCALL_TRASH RBM_PROFILER_LEAVE_TRASH
// The registers trashed by the CORINFO_HELP_STOP_FOR_GC helper.
#ifdef UNIX_AMD64_ABI
// See vm\amd64\unixasmhelpers.S for more details.
//
// On Unix, a struct that is 9 to 16 bytes in size is returned in two return registers.
// The return registers could be any two from the set { RAX, RDX, XMM0, XMM1 }.
// The STOP_FOR_GC helper preserves all four possible return registers.
#define RBM_STOP_FOR_GC_TRASH (RBM_CALLEE_TRASH & ~(RBM_FLOATRET | RBM_INTRET | RBM_FLOATRET_1 | RBM_INTRET_1))
#define RBM_PROFILER_LEAVE_TRASH (RBM_CALLEE_TRASH & ~(RBM_FLOATRET | RBM_INTRET | RBM_FLOATRET_1 | RBM_INTRET_1))
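// Worked example (illustrative, not from this header): under the SysV classification a
// 16-byte struct { double d; int64_t i; } is returned with `d` in XMM0 and `i` in RAX,
// so an integer and a float return register can be live at the same time; hence all
// four registers are masked out of the trash set above.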
#else
// See vm\amd64\asmhelpers.asm for more details.
#define RBM_STOP_FOR_GC_TRASH (RBM_CALLEE_TRASH & ~(RBM_FLOATRET | RBM_INTRET))
#define RBM_PROFILER_LEAVE_TRASH (RBM_CALLEE_TRASH & ~(RBM_FLOATRET | RBM_INTRET))
#endif
// The registers trashed by the CORINFO_HELP_INIT_PINVOKE_FRAME helper.
#define RBM_INIT_PINVOKE_FRAME_TRASH RBM_CALLEE_TRASH
#define RBM_VALIDATE_INDIRECT_CALL_TRASH (RBM_INT_CALLEE_TRASH & ~(RBM_R10 | RBM_RCX))
#define REG_VALIDATE_INDIRECT_CALL_ADDR REG_RCX
#define REG_DISPATCH_INDIRECT_CALL_ADDR REG_RAX
// What sort of reloc do we use for [disp32] address mode
#define IMAGE_REL_BASED_DISP32 IMAGE_REL_BASED_REL32
// What sort of reloc do we use for 'moffset' address mode (for 'mov eax, moffset' or 'mov moffset, eax')
#define IMAGE_REL_BASED_MOFFSET IMAGE_REL_BASED_DIR64
// Pointer-sized string move instructions
#define INS_movsp INS_movsq
#define INS_r_movsp INS_r_movsq
#define INS_stosp INS_stosq
#define INS_r_stosp INS_r_stosq
// AMD64 uses FEATURE_FIXED_OUT_ARGS so this can be zero.
#define STACK_PROBE_BOUNDARY_THRESHOLD_BYTES 0
#define REG_STACK_PROBE_HELPER_ARG REG_R11
#define RBM_STACK_PROBE_HELPER_ARG RBM_R11
#ifdef UNIX_AMD64_ABI
#define RBM_STACK_PROBE_HELPER_TRASH RBM_NONE
#else // !UNIX_AMD64_ABI
#define RBM_STACK_PROBE_HELPER_TRASH RBM_RAX
#endif // !UNIX_AMD64_ABI
// clang-format on
| -1 |
dotnet/runtime | 66211 | [mono] Remove SkipVerification support from the runtime | CAS support was removed in .NET Core. This allows us to remove a bunch of unused code, e.g. the dependency on libiconv. | akoeplinger | 2022-03-04T19:47:04Z | 2022-03-06T13:44:33Z | b463b1630dbf1be5b013208a9fa73e1ecd6c774c | be629f49a350d526de2c65981294734cee420b90 | [mono] Remove SkipVerification support from the runtime. CAS support was removed in .NET Core. This allows us to remove a bunch of unused code, e.g. the dependency on libiconv. | ./src/coreclr/pal/src/libunwind/src/aarch64/Lget_save_loc.c | #define UNW_LOCAL_ONLY
#include <libunwind.h>
#if defined(UNW_LOCAL_ONLY) && !defined(UNW_REMOTE_ONLY)
#include "Gget_save_loc.c"
#endif
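/* Background note on the naming convention (descriptive comment, not in the original
   file): libunwind compiles each generic G*.c implementation a second time with
   UNW_LOCAL_ONLY defined to produce the L*.c local-only variant, which lets the
   local unwinder skip the remote address-space indirection. */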
| -1 |
dotnet/runtime | 66211 | [mono] Remove SkipVerification support from the runtime | CAS support was removed in .NET Core. This allows us to remove a bunch of unused code, e.g. the dependency on libiconv. | akoeplinger | 2022-03-04T19:47:04Z | 2022-03-06T13:44:33Z | b463b1630dbf1be5b013208a9fa73e1ecd6c774c | be629f49a350d526de2c65981294734cee420b90 | [mono] Remove SkipVerification support from the runtime. CAS support was removed in .NET Core. This allows us to remove a bunch of unused code, e.g. the dependency on libiconv. | ./src/native/eventpipe/ep-block.h | #ifndef __EVENTPIPE_BLOCK_H__
#define __EVENTPIPE_BLOCK_H__
#include "ep-rt-config.h"
#ifdef ENABLE_PERFTRACING
#include "ep-types.h"
#include "ep-stream.h"
#undef EP_IMPL_GETTER_SETTER
#ifdef EP_IMPL_BLOCK_GETTER_SETTER
#define EP_IMPL_GETTER_SETTER
#endif
#include "ep-getter-setter.h"
/*
* EventPipeBlock
*/
typedef void (*EventPipeBlockClearFunc)(void *object);
typedef uint32_t (*EventPipeBlockGetHeaderSizeFunc)(void *object);
typedef void (*EventPipeBlockSerializeHeaderFunc)(void *object, FastSerializer *fast_serializer);
struct _EventPipeBlockVtable {
FastSerializableObjectVtable fast_serializable_object_vtable;
EventPipeBlockClearFunc clear_func;
EventPipeBlockGetHeaderSizeFunc get_header_size_func;
EventPipeBlockSerializeHeaderFunc serialize_header_func;
};
// The base type for all file blocks in the Nettrace file format
// This class handles memory management to buffer the block data,
// bookkeeping, block version numbers, and serializing the data
// to the file with correct alignment.
// Sub-types decide the format of the block contents and how
// the blocks are named.
#if defined(EP_INLINE_GETTER_SETTER) || defined(EP_IMPL_BLOCK_GETTER_SETTER)
struct _EventPipeBlock {
#else
struct _EventPipeBlock_Internal {
#endif
FastSerializableObject fast_serializer_object;
uint8_t *block;
uint8_t *write_pointer;
uint8_t *end_of_the_buffer;
EventPipeSerializationFormat format;
};
#if !defined(EP_INLINE_GETTER_SETTER) && !defined(EP_IMPL_BLOCK_GETTER_SETTER)
struct _EventPipeBlock {
uint8_t _internal [sizeof (struct _EventPipeBlock_Internal)];
};
#endif
EP_DEFINE_GETTER(EventPipeBlock *, block, uint8_t*, block)
EP_DEFINE_GETTER(EventPipeBlock *, block, uint8_t*, write_pointer)
EP_DEFINE_SETTER(EventPipeBlock *, block, uint8_t*, write_pointer)
EP_DEFINE_GETTER(EventPipeBlock *, block, uint8_t*, end_of_the_buffer)
EP_DEFINE_GETTER(EventPipeBlock *, block, EventPipeSerializationFormat, format)
static
inline
uint32_t
ep_block_get_bytes_written (const EventPipeBlock *block)
{
return block == NULL ? 0 : (uint32_t)(ep_block_get_write_pointer (block) - ep_block_get_block (block));
}
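// Hedged sketch (the EventPipeBlockVtable layout is from the struct above; the
// FastSerializableObjectVtable member names are assumed): a concrete block type
// typically supplies a static vtable and hands it to ep_block_init so that the
// *_vcall helpers can dispatch on it:
//     static EventPipeBlockVtable my_block_vtable = {
//         { my_free_func, my_fast_serialize_func, my_type_name_func }, // FastSerializableObjectVtable (names assumed)
//         my_block_clear,           // clear_func
//         my_block_get_header_size, // get_header_size_func
//         my_block_serialize_header // serialize_header_func
//     };
//     ep_block_init ((EventPipeBlock *)&my_block, &my_block_vtable, max_block_size, format);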
EventPipeBlock *
ep_block_init (
EventPipeBlock *block,
EventPipeBlockVtable *vtable,
uint32_t max_block_size,
EventPipeSerializationFormat format);
void
ep_block_fini (EventPipeBlock *block);
void
ep_block_clear (EventPipeBlock *block);
uint32_t
ep_block_get_header_size (EventPipeBlock *block);
void
ep_block_serialize_header (
EventPipeBlock *block,
FastSerializer *fast_serializer);
void
ep_block_fast_serialize (
EventPipeBlock *block,
FastSerializer *fast_serializer);
void
ep_block_clear_vcall (EventPipeBlock *block);
uint32_t
ep_block_get_header_size_vcall (EventPipeBlock *block);
void
ep_block_serialize_header_vcall (
EventPipeBlock *block,
FastSerializer *fast_serializer);
void
ep_block_fast_serialize_vcall (
EventPipeBlock *block,
FastSerializer *fast_serializer);
/*
* EventPipeEventHeader.
*/
struct _EventPipeEventHeader {
uint8_t activity_id [EP_ACTIVITY_ID_SIZE];
uint8_t related_activity_id [EP_ACTIVITY_ID_SIZE];
ep_timestamp_t timestamp;
uint64_t thread_id;
uint64_t capture_thread_id;
uint32_t metadata_id;
uint32_t sequence_number;
uint32_t capture_proc_number;
uint32_t stack_id;
uint32_t data_len;
};
/*
* EventPipeEventBlockBase
*/
// The base type for blocks that contain events (EventBlock and EventMetadataBlock).
#if defined(EP_INLINE_GETTER_SETTER) || defined(EP_IMPL_BLOCK_GETTER_SETTER)
struct _EventPipeEventBlockBase {
#else
struct _EventPipeEventBlockBase_Internal {
#endif
EventPipeBlock block;
EventPipeEventHeader last_header;
uint8_t compressed_header [100];
ep_timestamp_t min_timestamp;
ep_timestamp_t max_timestamp;
bool use_header_compression;
};
#if !defined(EP_INLINE_GETTER_SETTER) && !defined(EP_IMPL_BLOCK_GETTER_SETTER)
struct _EventPipeEventBlockBase {
uint8_t _internal [sizeof (struct _EventPipeEventBlockBase_Internal)];
};
#endif
EP_DEFINE_GETTER_REF(EventPipeEventBlockBase *, event_block_base, EventPipeBlock *, block)
EP_DEFINE_GETTER_REF(EventPipeEventBlockBase *, event_block_base, EventPipeEventHeader *, last_header)
EP_DEFINE_GETTER(EventPipeEventBlockBase *, event_block_base, ep_timestamp_t, min_timestamp)
EP_DEFINE_SETTER(EventPipeEventBlockBase *, event_block_base, ep_timestamp_t, min_timestamp)
EP_DEFINE_GETTER(EventPipeEventBlockBase *, event_block_base, ep_timestamp_t, max_timestamp)
EP_DEFINE_SETTER(EventPipeEventBlockBase *, event_block_base, ep_timestamp_t, max_timestamp)
EP_DEFINE_GETTER(EventPipeEventBlockBase *, event_block_base, bool, use_header_compression)
EP_DEFINE_GETTER_ARRAY_REF(EventPipeEventBlockBase *, event_block_base, uint8_t *, const uint8_t *, compressed_header, compressed_header[0])
EventPipeEventBlockBase *
ep_event_block_base_init (
EventPipeEventBlockBase *event_block_base,
EventPipeBlockVtable *vtable,
uint32_t max_block_size,
EventPipeSerializationFormat format,
bool use_header_compression);
void
ep_event_block_base_fini (EventPipeEventBlockBase *event_block_base);
void
ep_event_block_base_clear (EventPipeEventBlockBase *event_block_base);
uint32_t
ep_event_block_base_get_header_size (const EventPipeEventBlockBase *event_block_base);
void
ep_event_block_base_serialize_header (
EventPipeEventBlockBase *event_block_base,
FastSerializer *fast_serializer);
bool
ep_event_block_base_write_event (
EventPipeEventBlockBase *event_block_base,
EventPipeEventInstance *event_instance,
uint64_t capture_thread_id,
uint32_t sequence_number,
uint32_t stack_id,
bool is_sorted_event);
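// Hedged usage sketch (hypothetical caller; only the signature above is given here,
// and the assumption that a false return means the event did not fit is unverified):
//     if (!ep_event_block_base_write_event (base, instance, thread_id, seq_num, stack_id, true)) {
//         ep_block_fast_serialize_vcall ((EventPipeBlock *)base, fast_serializer); // flush the full block
//         ep_block_clear_vcall ((EventPipeBlock *)base);                           // then reset and retry
//     }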
/*
* EventPipeEventBlock.
*/
#if defined(EP_INLINE_GETTER_SETTER) || defined(EP_IMPL_BLOCK_GETTER_SETTER)
struct _EventPipeEventBlock {
#else
struct _EventPipeEventBlock_Internal {
#endif
EventPipeEventBlockBase event_block_base;
};
#if !defined(EP_INLINE_GETTER_SETTER) && !defined(EP_IMPL_BLOCK_GETTER_SETTER)
struct _EventPipeEventBlock {
uint8_t _internal [sizeof (struct _EventPipeEventBlock_Internal)];
};
#endif
EventPipeEventBlock *
ep_event_block_alloc (
uint32_t max_block_size,
EventPipeSerializationFormat format);
void
ep_event_block_free (EventPipeEventBlock *event_block);
static
inline
uint32_t
ep_event_block_get_bytes_written (EventPipeEventBlock *event_block)
{
return ep_block_get_bytes_written ((const EventPipeBlock *)event_block);
}
static
inline
void
ep_event_block_serialize (EventPipeEventBlock *event_block, FastSerializer *fast_serializer)
{
ep_fast_serializer_write_object (fast_serializer, (FastSerializableObject*)event_block);
}
static
inline
void
ep_event_block_clear (EventPipeEventBlock *event_block)
{
ep_block_clear_vcall ((EventPipeBlock *)event_block);
}
/*
* EventPipeMetadataBlock.
*/
#if defined(EP_INLINE_GETTER_SETTER) || defined(EP_IMPL_BLOCK_GETTER_SETTER)
struct _EventPipeMetadataBlock {
#else
struct _EventPipeMetadataBlock_Internal {
#endif
EventPipeEventBlockBase event_block_base;
};
#if !defined(EP_INLINE_GETTER_SETTER) && !defined(EP_IMPL_BLOCK_GETTER_SETTER)
struct _EventPipeMetadataBlock {
uint8_t _internal [sizeof (struct _EventPipeMetadataBlock_Internal)];
};
#endif
EventPipeMetadataBlock *
ep_metadata_block_alloc (uint32_t max_block_size);
void
ep_metadata_block_free (EventPipeMetadataBlock *metadata_block);
static
inline
uint32_t
ep_metadata_block_get_bytes_written (EventPipeMetadataBlock *metadata_block)
{
return ep_block_get_bytes_written ((const EventPipeBlock *)metadata_block);
}
static
inline
void
ep_metadata_block_serialize (EventPipeMetadataBlock *metadata_block, FastSerializer *fast_serializer)
{
ep_fast_serializer_write_object (fast_serializer, (FastSerializableObject *)metadata_block);
}
static
inline
void
ep_metadata_block_clear (EventPipeMetadataBlock *metadata_block)
{
ep_block_clear_vcall ((EventPipeBlock *)metadata_block);
}
/*
* EventPipeSequencePointBlock.
*/
#if defined(EP_INLINE_GETTER_SETTER) || defined(EP_IMPL_BLOCK_GETTER_SETTER)
struct _EventPipeSequencePointBlock {
#else
struct _EventPipeSequencePointBlock_Internal {
#endif
EventPipeBlock block;
};
#if !defined(EP_INLINE_GETTER_SETTER) && !defined(EP_IMPL_BLOCK_GETTER_SETTER)
struct _EventPipeSequencePointBlock {
uint8_t _internal [sizeof (struct _EventPipeSequencePointBlock_Internal)];
};
#endif
EventPipeSequencePointBlock *
ep_sequence_point_block_alloc (EventPipeSequencePoint *sequence_point);
EventPipeSequencePointBlock *
ep_sequence_point_block_init (
EventPipeSequencePointBlock *sequence_point_block,
EventPipeSequencePoint *sequence_point);
void
ep_sequence_point_block_fini (EventPipeSequencePointBlock *sequence_point_block);
void
ep_sequence_point_block_free (EventPipeSequencePointBlock *sequence_point_block);
/*
* EventPipeStackBlock.
*/
#if defined(EP_INLINE_GETTER_SETTER) || defined(EP_IMPL_BLOCK_GETTER_SETTER)
struct _EventPipeStackBlock {
#else
struct _EventPipeStackBlock_Internal {
#endif
EventPipeBlock block;
uint32_t initial_index;
uint32_t count;
bool has_initial_index;
};
#if !defined(EP_INLINE_GETTER_SETTER) && !defined(EP_IMPL_BLOCK_GETTER_SETTER)
struct _EventPipeStackBlock {
uint8_t _internal [sizeof (struct _EventPipeStackBlock_Internal)];
};
#endif
EventPipeStackBlock *
ep_stack_block_alloc (uint32_t max_block_size);
void
ep_stack_block_free (EventPipeStackBlock *stack_block);
bool
ep_stack_block_write_stack (
EventPipeStackBlock *stack_block,
uint32_t stack_id,
EventPipeStackContents *stack);
static
inline
uint32_t
ep_stack_block_get_bytes_written (EventPipeStackBlock *stack_block)
{
return ep_block_get_bytes_written ((const EventPipeBlock *)stack_block);
}
static
inline
void
ep_stack_block_serialize (EventPipeStackBlock *stack_block, FastSerializer *fast_serializer)
{
ep_fast_serializer_write_object (fast_serializer, (FastSerializableObject *)stack_block);
}
static
inline
void
ep_stack_block_clear (EventPipeStackBlock *stack_block)
{
ep_block_clear_vcall ((EventPipeBlock *)stack_block);
}
#endif /* ENABLE_PERFTRACING */
#endif /* __EVENTPIPE_BLOCK_H__ */
| -1 |
dotnet/runtime | 66211 | [mono] Remove SkipVerification support from the runtime | CAS support was removed in .NET Core. This allows us to remove a bunch of unused code, e.g. the dependency on libiconv. | akoeplinger | 2022-03-04T19:47:04Z | 2022-03-06T13:44:33Z | b463b1630dbf1be5b013208a9fa73e1ecd6c774c | be629f49a350d526de2c65981294734cee420b90 | [mono] Remove SkipVerification support from the runtime. CAS support was removed in .NET Core. This allows us to remove a bunch of unused code, e.g. the dependency on libiconv. | ./eng/native/sanitizerblacklist.txt | # This file has exclusions to the Clang address sanitizer to suppress error reports
# When Clang 3.8 is available, convert these to suppression list instead as that is preferred for internal code
# CMiniMdBase::UsesAllocatedMemory - suppress stack-buffer-underflow (code backs up pointer by -1 to check allocation ownership)
fun:_ZN11CMiniMdBase19UsesAllocatedMemoryEP11CMiniColDef
# JIT_InitPInvokeFrame - suppress unknown sanitizer issue causing SEGV on unknown address 0x000000000000
# 0 0x4e8a0c in __ubsan::checkDynamicType(void*, void*, unsigned long)
# 1 0x4e807f in HandleDynamicTypeCacheMiss(__ubsan::DynamicTypeCacheMissData*, unsigned long, unsigned long, __ubsan::ReportOptions)
# 2 0x4e8051 in __ubsan_handle_dynamic_type_cache_miss
# 3 0x7f02ce676cd8 in JIT_InitPInvokeFrame(InlinedCallFrame*, void*) /home/steveharter/git/dotnet_coreclr/vm/jithelpers.cpp:6491:9
# 4 0x7f0252bbceb2 (<unknown module>)
fun:_Z20JIT_InitPInvokeFrameP16InlinedCallFramePv
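# Hedged usage note (not part of the original file): a blacklist file like this is
# typically passed to clang with a flag such as
#     clang -fsanitize=address -fsanitize-blacklist=eng/native/sanitizerblacklist.txt ...
# (newer clang versions spell it -fsanitize-ignorelist); the exact build wiring in
# this repo may differ.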
| -1 |