Is it possible to create a system-wide global variable / semaphore / mutex in C++ on Linux?
Here's the reason: I've got a system that often runs multiple copies of the same software on unrelated data. It's common to have 4 jobs, each running the same software. The software has a small section where it creates a huge graph that takes a lot of memory; outside that section memory usage is moderate.
It so happens sometimes that 2 jobs simultaneously hit the same memory-hungry section and the whole system starts swapping. Thus we want to prevent that by creating something like a critical section mutex between different jobs so that no more than one of them would allocate a lot of memory at a time.
If these were threads of the same job, pthread locks would do the job.
What would be a good way to implement such mutex between different jobs?
| You can use a named semaphore if you can get all the processes to agree on a common name.
A named semaphore is identified by a name of the form
/somename; that is, a null-terminated string of up to
NAME_MAX-4 (i.e., 251) characters consisting of an initial
slash, followed by one or more characters, none of which are
slashes. Two processes can operate on the same named
semaphore by passing the same name to sem_open(3).
| Semaphore | 32,338,732 | 19 |
POSIX anonymous semaphores allow me to create and use a semaphore in shared memory. This works great, because this means other processes that shouldn't have access to the semaphore don't have access to the semaphore. In addition, it's easy to incorporate into existing structures.
Does Windows offer something like this? The closest thing I've found is calling CreateSemaphore without a name, then having it inherited or DuplicateHandle()d to a child.
Edit: @IInspectable wanted me to give context. I'm planning to implement a sort of RPC system through shared memory. The data in shared memory looks like this:
struct shm_rpc_block {
    enum rpc_state {
        send = 0,
        receive
    };
    rpc_state state;
    sem_t lock;
    char data[2048 - sizeof(rpc_state) - sizeof(sem_t)];
};
The state variable begins in the "send" state. Process 1 writes its data to the data array, then changes the state to "receive" and unlocks, allowing process 2 to lock. Process 2 reads in the sent data, generates a response, then sends it back. The first process then reads the result, and resets the RPC state to "send".
| No. You cannot place a semaphore at an address of your choosing with the Windows API.
The documentation describes only three ways to use a semaphore between processes: besides the two you mentioned (handle inheritance and DuplicateHandle()), a process can specify the name of a semaphore object in a call to the OpenSemaphore or CreateSemaphoreEx function.
| Semaphore | 73,643,031 | 17 |
I'm working on an implementation of the "Fair Barbershop" problem in Ruby. This is for a class assignment, but I'm not looking for any handouts. I've been searching like crazy, but I cannot seem to find a Ruby implementation of Semaphores that mirror those found in C.
I know there is Mutex, and that's great. Single implementation, does exactly what that kind of semaphore should do.
Then there's Condition Variables. I thought that this was going to work out great, but looking at these, they require a Mutex for every wait call, which looks to me like I can't put numerical values to the semaphore (as in, I have seven barbershops, 3 barbers, etc.).
I think I need a counting semaphore, but it seems a little bizarre that Ruby doesn't (from what I can find) contain such a class in its core. Can anyone help point me in the right direction?
| If you are using JRuby, you can import semaphores from Java as shown in this article.
require 'java'
java_import 'java.util.concurrent.Semaphore'
SEM = Semaphore.new(limit_of_simultaneous_threads)
SEM.acquire #To decrement the number available
SEM.release #To increment the number available
| Semaphore | 5,478,789 | 17 |
I'm trying to understand the similarities and differences between named and unnamed semaphores, so my Google searches yielded me this. I had a question about the wording on the page, though; it says:
Unnamed semaphores might be usable by more than one process
Named semaphores are sharable by several processes
Do those two words create any important distinction between those two types of semaphores or are they irrelevant?
So far, here's what I have:
Similarities
- Several processes can do something with the semaphore
Difference
- Named are referenced with a pathname; unnamed are referenced by a pshared value
That's all I could glean from that definition. Is that everything and are they correct? Or am I missing some significant concept?
| Think in terms of who can access the semaphore.
Unnamed semaphores (lacking any name or handle to locate them) must exist in some pre-existing, agreed upon memory location. Usually that is (1) shared memory (inherited by children after fork) in the case of child processes; or (2) shared memory, global variable or the heap in the case where they are shared between threads of a single process. The essential thing here is that the code in parent, child, or threads already knows the address of the semaphore.
Named semaphores are necessary for unrelated processes. For example a producer and consumer might be written by two different developers and run as completely unrelated processes. But they have to share some resource that needs to be protected by a semaphore. The named semaphore gives them a path to the semaphore.
In reality you can use a named semaphore in all scenarios but they come with a little extra baggage because you have to deal with the paths and permissions and such that are unnecessary if the programs are related and already know how to access an unnamed semaphore. It's a little silly, for instance, to use a named semaphore to share a resource between threads. The threads already have access to the same memory where an unnamed semaphore could reside.
| Semaphore | 13,145,885 | 16 |
In a class, we've had to use semaphores to accomplish work with threads.
The prototype (and header file) of sem_init is the following:
int sem_init(sem_t *sem, int pshared, unsigned int value);
but I don't understand what the value variable is used for. According to opengroup.org:
value is an initial value to set the semaphore to
"value is a value..." How does that help, what is it used for?
| The semaphore's value represents the number of common resources available to be shared among the threads. If the value is greater than 0, a thread calling sem_wait need not wait: it just decrements the value by 1 and proceeds to access the common resource. sem_post adds a resource back to the pool, so it increments the value by 1. If the value is 0, sem_wait blocks until somebody calls sem_post.
| Semaphore | 9,369,873 | 16 |
I need to create two child processes, each of which calls execvp after being forked, and the executables share POSIX semaphores between them.
Do I need to create a shared memory or just implement named semaphores?
I got two answers from the following links:
Do forked child processes use the same semaphore?
How to share semaphores between processes using shared memory
But I am confused about how to proceed in the implementation.
|
Do I need to create a shared memory or just implement named
semaphores?
Either approach will work. Pick one and go with it - though I personally prefer named semaphores because you don't have to deal with memory allocation and with setting up the shared memory segments. The interface for creating and using named semaphores is way more friendly, in my opinion.
With named semaphores, in your example scenario, here's what happens:
You create and initialize the semaphore in the parent process with sem_open(3). Give it a well-known name that the child processes will know; this name is used to find the semaphore in the system.
Close the semaphore in the parent, since it won't be using it.
Fork and execute
Unlink the semaphore with sem_unlink(3). This must be done exactly once; it doesn't really matter where (any process that has a reference to the semaphore object can do it). It is ok to unlink a semaphore if other processes still have it open: the semaphore is destroyed only when all other processes have closed it, but keep in mind that the name is removed immediately, so new processes won't be able to find and open the semaphore.
The child processes call sem_open(3) with the well-known name to find and obtain a reference to the semaphore. Once a process is done with the semaphore, you need to close it with sem_close(3).
Below is an example of what I just described. A parent process creates a named semaphore, and forks + executes 2 child processes, each of which finds and opens the semaphore, using it to synchronize between each other.
It assumes that the parent forks and executes the ./sem_chld binary. Keep in mind that a name for a semaphore must begin with a forward slash, followed by one or more characters that are not a slash (see man sem_overview). In this example, the semaphore's name is /semaphore_example.
Here's the code for the parent process:
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <semaphore.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <sys/wait.h>
#define SEM_NAME "/semaphore_example"
#define SEM_PERMS (S_IRUSR | S_IWUSR | S_IRGRP | S_IWGRP)
#define INITIAL_VALUE 1
#define CHILD_PROGRAM "./sem_chld"
int main(void) {
    /* We initialize the semaphore counter to 1 (INITIAL_VALUE) */
    sem_t *semaphore = sem_open(SEM_NAME, O_CREAT | O_EXCL, SEM_PERMS, INITIAL_VALUE);
    if (semaphore == SEM_FAILED) {
        perror("sem_open(3) error");
        exit(EXIT_FAILURE);
    }

    /* Close the semaphore as we won't be using it in the parent process */
    if (sem_close(semaphore) < 0) {
        perror("sem_close(3) failed");
        /* We ignore possible sem_unlink(3) errors here */
        sem_unlink(SEM_NAME);
        exit(EXIT_FAILURE);
    }

    pid_t pids[2];
    size_t i;

    for (i = 0; i < sizeof(pids)/sizeof(pids[0]); i++) {
        if ((pids[i] = fork()) < 0) {
            perror("fork(2) failed");
            exit(EXIT_FAILURE);
        }

        if (pids[i] == 0) {
            if (execl(CHILD_PROGRAM, CHILD_PROGRAM, NULL) < 0) {
                perror("execl(2) failed");
                exit(EXIT_FAILURE);
            }
        }
    }

    for (i = 0; i < sizeof(pids)/sizeof(pids[0]); i++)
        if (waitpid(pids[i], NULL, 0) < 0)
            perror("waitpid(2) failed");

    if (sem_unlink(SEM_NAME) < 0)
        perror("sem_unlink(3) failed");

    return 0;
}
Note that sem_unlink(3) is called after both children terminate; although this is not required, if it was called before there would be a race condition between the parent process unlinking the semaphore and both child processes starting up and opening the semaphore. In general, though, you can unlink as soon as you know that all required processes have opened the semaphore and no new processes will need to find it.
Here's the code for sem_chld, it's just a small toy program to show the usage of a shared semaphore:
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <semaphore.h>
#include <unistd.h>
#include <sys/stat.h>
#include <sys/types.h>
#define SEM_NAME "/semaphore_example"
#define ITERS 10
int main(void) {
    sem_t *semaphore = sem_open(SEM_NAME, O_RDWR);
    if (semaphore == SEM_FAILED) {
        perror("sem_open(3) failed");
        exit(EXIT_FAILURE);
    }

    int i;
    for (i = 0; i < ITERS; i++) {
        if (sem_wait(semaphore) < 0) {
            perror("sem_wait(3) failed on child");
            continue;
        }

        printf("PID %ld acquired semaphore\n", (long) getpid());

        if (sem_post(semaphore) < 0) {
            perror("sem_post(3) error on child");
        }

        sleep(1);
    }

    if (sem_close(semaphore) < 0)
        perror("sem_close(3) failed");

    return 0;
}
You can eliminate the need to keep the semaphore name synchronized between the two source files by defining it in a common header file and including it in the code for each program.
Note that error handling is not ideal in this example (it's merely illustrative), there is a lot of room for improvement. It's just there to make sure you don't forget to do proper error handling when you decide to change this example to suit your needs.
| Semaphore | 32,205,396 | 16 |
I have two methods:
-(void) a {
    @synchronized(self) {
        // critical section 1
    }
}

-(void) b {
    @synchronized(self) {
        // critical section 2
    }
}
Now my question is: if a thread is in critical section 1, will critical section 2 be locked for other threads, or can other threads access critical section 2?
| Critical section 2 will be blocked to other threads, as well, since you're synchronizing on the same object (self).
| Semaphore | 2,810,459 | 16 |
I need two threads to progress in a "tick tock" pattern. When implemented with a semaphore this looks fine:
Semaphore tick_sem(1);
Semaphore tock_sem(0);
void ticker( void )
{
    while( true )
    {
        P( tick_sem );
        do_tick();
        V( tock_sem );
    }
}

void tocker( void )
{
    while( true )
    {
        P( tock_sem );
        do_tock();
        V( tick_sem );
    }
}
However, if I do the same thing with a mutex ( which is technically a binary semaphore ), it has an odd code smell.
std::mutex tick_mutex;
std::mutex tock_mutex;
tock_mutex.lock();
void ticker( void )
{
    while( true )
    {
        tick_mutex.lock();
        do_tick();
        tock_mutex.unlock();
    }
}

void tocker( void )
{
    while( true )
    {
        tock_mutex.lock();
        do_tock();
        tick_mutex.unlock();
    }
}
I think the smell is that a mutex isn't meant to convey information to another thread. (The c++11 standard committee added a spurious fail to try_lock to defeat unexpected information transfer; §30.4.1/14.) It seems like mutexes are meant to synchronize access to a variable, which can then convey information to another thread.
Lastly, when implemented with a std::condition_variable, it looks correct but it's more complicated ( a tick_vs_tock variable, a mutex, and a condition variable). I've omitted the implementation for brevity, but it's really straight forward.
Is the mutex solution fine? Or is there something subtly wrong with it?
Is there a good pattern for solving my tick/tock problem that I haven't thought of?
| A mutex is not just a binary semaphore: it also has the restriction that only the thread that locked it is allowed to unlock it.
You are breaking that rule.
Edit:
From MSDN:
The ReleaseMutex function fails if the calling thread does not own the
mutex object.
From some site that google turned up for pthread_mutex_unlock:
The pthread_mutex_unlock() function may fail if:
EPERM The current thread does not own the mutex.
And you will find the same on other mutex implementations. It makes sense because a mutex is supposed to guard a thread's access to a resource, so another thread should not be able to unlock it.
| Semaphore | 6,804,044 | 16 |
I am trying to port a project from Linux to Mac OS X. The project uses semaphores, but some of the POSIX semaphore functions are not implemented on Mac OS X.
The one that I hit in this port is sem_timedwait()
I don't know much about semaphores but from the man pages sem_wait() seems to be close to sem_timedwait and it is implemented
From the man pages
The sem_timedwait() function shall lock the semaphore referenced
by sem as in the sem_wait() function. However, if the semaphore
cannot be locked without waiting for another process or thread to
unlock the semaphore by performing a sem_post() function, this
wait shall be terminated when the specified timeout expires.
From my limited understanding of how semaphores work, I can see that sem_timedwait() is safer, but I should still be able to use sem_wait().
Is this correct? If not what other alternatives do I have...
Thanks
| It's likely that the timeout is important to the operation of the algorithm. Therefore just using sem_wait() might not work.
You could use sem_trywait(), which returns right away in all cases. You can then loop, and use a sleep interval that you choose, each time decrementing the total timeout until you either run out of timeout or the semaphore is acquired.
A much better solution is to rewrite the algorithm to use a condition variable, and then you can use pthread_cond_timedwait() to get the appropriate timeout.
| Semaphore | 641,126 | 16 |
I want some clarification regarding mutexes and semaphores.
My questions are:
1. What does a mutex actually do when a thread tries to enter a region locked by it:
a. does the thread busy-wait for the lock to be released?
or b. does it go to sleep until the lock is released? In that case, how is it woken up again when the lock is released?
2. Same question as 1, but for a semaphore.
3. Can you give me some code showing busy waiting with pthreads in C, and also a case where a thread goes to sleep instead of busy waiting? Does sleeping mean the thread is blocked, or is sleeping another kind of busy waiting?
I want to know of some programs where these situations are covered, for example some C source code where busy waiting, blocking, etc. are implemented.
| When a thread tries to acquire a lock on a mutex, if that mutex is already held then typically it will use a call to the OS kernel to indicate that it is waiting, and then when the thread that currently holds the lock unlocks the mutex then it will make a call to the OS kernel to wake one of the waiting threads.
The same applies to a semaphore, except it only blocks if the count is decremented below zero, and threads are only woken when the count is increased back above zero.
A busy wait is where you don't block or sleep when waiting for something, but repeatedly poll in a loop, so the processor is always busy, but not doing anything useful.
To truly achieve a busy wait, you need an atomic variable, but POSIX threads does not provide such a thing, so you cannot truly write a busy wait in pthreads. The closest you can get is to lock a mutex, read a flag, unlock the mutex, loop if the flag was not set. This repeatedly locks and unlocks the mutex, but does not wait for the data to be ready. In this situation you should use a condition variable instead.
Typically, you say a thread is sleeping if it has called something like usleep to suspend its own execution for a specified period of time. This is as opposed to blocking, where it is waiting for a specific signal which will be provided by another thread.
| Semaphore | 9,427,276 | 15 |
In a graduate class, we've had to use semaphores to accomplish work with threads.
We were directed to use sem_init along with a bunch of other sem_* procedure but we were not given much information about the details of each of these sem_* methods.
The prototype (and header file) of sem_init is the following:
#include <semaphore.h>
int sem_init(sem_t *sem, int pshared, unsigned int value);
but I don't understand what the pshared value is used for. According to opengroup.org:
If the pshared argument has a non-zero
value, then the semaphore is shared
between processes; in this case, any
process that can access the semaphore
sem can use sem for performing
sem_wait(), sem_trywait(), sem_post(),
and sem_destroy() operations.
but I guess I don't understand the difference between say 1,2, 10, 25, 50000, etc. I think it is saying that if the value is 0 then the semaphore is not shared. (But then, what is the point?)
How do I appropriately use this pshared parameter?
| The GLIBC version of sem_init (what you get if you man sem_init on Linux) has this to say:
"The pshared argument indicates whether this semaphore is to be
shared between the threads of a process, or between processes."
So pshared is a boolean value: in practice meaningful values passed to it are false (0) and true (1), though any non-0 value will be treated as true. If you pass it 0 you will get a semaphore that can be accessed by other threads in the same process -- essentially an in-process lock. You can use this as a mutex, or you can use it more generally for the resource-counting properties of a semaphore. Arguably if pthreads supported a semaphore API you wouldn't need this feature of sem_init, but semaphores in Unix precede pthreads by quite a bit of time.
It would be better if the boolean was some kind of enumeration (e.g. SEM_PROCESS_PRIVATE vs SEM_PROCESS_SHARED), because then you wouldn't have had this question, but POSIX semaphores are a fairly old API as these things go.
| Semaphore | 1,291,566 | 15 |
I'm designing a .net core web api that consumes an external api that I do not control. I've found some excellent answers on stack overflow that allowed me to throttle my requests to this external API while in the same thread using semaphoreslim. I'm wondering how best to extend this throttling to be application wide instead of just throttling for a specific list of Tasks. I've been learning about HttpMessageHandlers and this seems to be a possible way to intercept all outgoing messages and apply throttling. But I'm concerned about thread safety and locking issues I may not understand. I'm including my current throttling code and hope that may be helpful in understanding what I'm trying to do, but across multiple threads, and with tasks being continuously added instead of a pre-defined list of tasks.
private static async Task<List<PagedResultResponse>> GetAsyncThrottled(List<int> pages, int throttle, IiMISClient client, string url, int limit)
{
    var rtn = new List<PagedResultResponse>();
    var allTasks = new List<Task>();
    var throttler = new SemaphoreSlim(initialCount: throttle);
    foreach (var page in pages)
    {
        await throttler.WaitAsync();
        allTasks.Add(
            Task.Run(async () =>
            {
                try
                {
                    var result = await GetPagedResult(client, url, page);
                    return result;
                }
                finally
                {
                    throttler.Release();
                }
            }));
    }
    await Task.WhenAll(allTasks);
    foreach (var task in allTasks)
    {
        var result = ((Task<PagedResultResponse>)task).Result;
        rtn.Add(result);
    }
    return rtn;
}
| Conceptual questions
SemaphoreSlim is thread-safe so there are no thread-safety or locking concerns about using it as a parallelism throttle across multiple threads.
HttpMessageHandlers are indeed an outbound middleware mechanism to intercept calls placed through HttpClient. So they are an ideal way to apply parallelism-throttling to Http calls using SemaphoreSlim.
Simple implementation
So a ThrottlingDelegatingHandler might look like this:
public class ThrottlingDelegatingHandler : DelegatingHandler
{
    private readonly SemaphoreSlim _throttler;

    public ThrottlingDelegatingHandler(SemaphoreSlim throttler)
    {
        _throttler = throttler ?? throw new ArgumentNullException(nameof(throttler));
    }

    protected override async Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        if (request == null) throw new ArgumentNullException(nameof(request));

        await _throttler.WaitAsync(cancellationToken);
        try
        {
            return await base.SendAsync(request, cancellationToken);
        }
        finally
        {
            _throttler.Release();
        }
    }
}
Create and maintain an instance as a singleton:
int maxParallelism = 10;
var throttle = new ThrottlingDelegatingHandler(new SemaphoreSlim(maxParallelism));
Apply that DelegatingHandler to all instances of HttpClient through which you want to parallel-throttle calls:
HttpClient throttledClient = new HttpClient(throttle);
That HttpClient does not need to be a singleton: only the throttle instance does.
I've omitted the Dot Net Core DI code for brevity, but you would register the singleton ThrottlingDelegatingHandler instance with .Net Core's container, obtain that singleton by DI at point-of-use, and use it in HttpClients you construct as shown above.
But:
Better implementation: Using HttpClientFactory (.NET Core 2.1+)
The above still leaves open the question of how you are going to manage HttpClient lifetimes:
Singleton (app-scoped) HttpClients do not pick up DNS updates. Your app will be ignorant of DNS updates unless you kill and restart it (perhaps undesirable).
A frequently-create-and-dispose pattern, using (HttpClient client = ) { }, on the other hand, can cause socket exhaustion.
One of the design goals of HttpClientFactory was to manage the lifecycles of HttpClient instances and their delegating handlers, to avoid these problems.
In .NET Core 2.1, you could use HttpClientFactory to wire it all up in ConfigureServices(IServiceCollection services) in the Startup class, like this:
int maxParallelism = 10;
services.AddSingleton<ThrottlingDelegatingHandler>(new ThrottlingDelegatingHandler(new SemaphoreSlim(maxParallelism)));
services.AddHttpClient("MyThrottledClient")
.AddHttpMessageHandler<ThrottlingDelegatingHandler>();
("MyThrottledClient" here is a named-client approach just to keep this example short; typed clients avoid string-naming.)
At point-of-use, obtain an IHttpClientFactory by DI (reference), then call
var client = _clientFactory.CreateClient("MyThrottledClient");
to obtain an HttpClient instance pre-configured with the singleton ThrottlingDelegatingHandler.
All calls through an HttpClient instance obtained in this manner will be throttled (in common, across the app) to the originally configured int maxParallelism.
And HttpClientFactory magically deals with all the HttpClient lifetime issues.
Even better implementation: Using Polly with IHttpClientFactory to get all this 'out-of-the-box'
Polly is deeply integrated with IHttpClientFactory and Polly also provides Bulkhead policy which works as a parallelism throttle by an identical SemaphoreSlim mechanism.
So, as an alternative to hand-rolling a ThrottlingDelegatingHandler, you can also just use Polly Bulkhead policy with IHttpClientFactory out of the box. In your Startup class, simply:
int maxParallelism = 10;
var throttler = Policy.BulkheadAsync<HttpResponseMessage>(maxParallelism, Int32.MaxValue);
services.AddHttpClient("MyThrottledClient")
.AddPolicyHandler(throttler);
Obtain the pre-configured HttpClient instance from HttpClientFactory as earlier. As before, all calls through such a "MyThrottledClient" HttpClient instance will be parallel-throttled to the configured maxParallelism.
The Polly Bulkhead policy additionally offers the ability to configure how many operations you want to allow simultaneously to 'queue' for an execution slot in the main semaphore. So, for instance:
var throttler = Policy.BulkheadAsync<HttpResponseMessage>(10, 100);
when configured as above into an HttpClient, would allow 10 parallel http calls, and up to 100 http calls to 'queue' for an execution slot. This can offer extra resilience for high-throughput systems by preventing a faulting downstream system causing an excessive resource bulge of queuing calls upstream.
To use the Polly options with HttpClientFactory, pull in the Microsoft.Extensions.Http.Polly and Polly nuget packages.
References: Polly deep doco on Polly and IHttpClientFactory; Bulkhead policy.
Addendum re Tasks
The question uses Task.Run(...) and mentions:
a .net core web api that consumes an external api
and:
with tasks being continuously added instead of a pre-defined list of tasks.
If your .net core web api only consumes the external API once per request the .net core web api handles, and you adopt the approaches discussed in the rest of this answer, offloading the downstream external http call to a new Task with Task.Run(...) will be unnecessary and only create overhead in additional Task instances and thread-switching. Dot net core will already be running the incoming requests on multiple threads on the thread pool.
| Semaphore | 52,044,186 | 15 |
So I am getting the error: "undefined reference to sem_open()" even though I have included the <semaphore.h> header. The same thing is happening for all my pthread function calls (mutex, pthread_create, etc). Any thoughts? I am using the following command to compile:
g++ '/home/robin/Desktop/main.cpp' -o '/home/robin/Desktop/main.out'
#include <iostream>
using namespace std;

#include <pthread.h>
#include <semaphore.h>
#include <fcntl.h>

const char *serverControl = "/serverControl";
sem_t* semID;

int main ( int argc, char *argv[] )
{
    //create semaphore used to control servers
    semID = sem_open(serverControl, O_CREAT, O_RDWR, 0);

    return 0;
}
| You need to link with the pthread library, using the -lpthread option (e.g. g++ main.cpp -o main.out -lpthread).
| Semaphore | 4,916,881 | 15 |
I have a script in Python which uses a resource that cannot be used by more than a certain number of concurrently running scripts.
Classically, this would be solved by a named semaphore, but I cannot find those in the documentation of the multiprocessing or threading modules.
Am I missing something, or are named semaphores not implemented/exposed by Python? And more importantly, if the answer is no, what is the best way to emulate one?
Thanks,
Boaz
PS. For reasons not so relevant to this question, I cannot aggregate the tasks into a continuously running process/daemon or work with spawned processes - both of which, it seems, would have worked with the Python API.
| I suggest a third party extension like these, ideally the posix_ipc one -- see in particular the semaphore section in the docs.
These modules are mostly about exposing the "system V IPC" (including semaphores) in a unixy way, but at least one of them (posix_ipc specifically) is claimed to work with Cygwin on Windows (I haven't verified that claim). There are some documented limitations on FreeBSD 7.2 and Mac OSX 10.5, so take care if those platforms are important to you.
| Semaphore | 2,798,727 | 15 |
(I think that) the consensus number for a mutex is 2.
What is the consensus number for semaphores (like in pthread_sem_*)?
What is the consensus number for condition variables (like in pthread_cond_*)?
| The consensus number for a mutex would be 1. It's trivially clear that a mutex will be wait-free for a single thread. From its definition, it's also clear that a mutex is no longer wait-free for two threads. The consensus number therefore is >=1 and <2, so it must be 1.
Likewise, other synchronization mechanisms that work by halting one thread in favor of another also have consensus number 1, and therefore cannot be used to construct a wait-free object shared by 2 threads.
| Semaphore | 773,212 | 14 |
From the Java java.util.concurrent.Semaphore docs it wasn't quite clear to me what happens if semaphore.acquire() blocks the thread and later gets interrupted by an InterruptedException. Has the semaphore value been decreased and so is there a need to release the semaphore?
Currently I am using code like this:
try {
    // use semaphore to limit number of parallel threads
    semaphore.acquire();
    doMyWork();
}
finally {
    semaphore.release();
}
Or should I rather not call release() when an InterruptedException occurs during acquire() ?
|
call release() when an InterruptedException occurs during acquire() ?
You should not. If .acquire() is interrupted, the semaphore is not acquired, so you likely should not release it.
Your code should be
// use semaphore to limit number of parallel threads
semaphore.acquire();
try {
    doMyWork();
}
finally {
    semaphore.release();
}
| Semaphore | 12,104,978 | 14 |
I am working on a MATLAB project where I would like to have two instances of MATLAB running in parallel and sharing data. I will call these instances MAT_1 and MAT_2. More specifically, the architecture of the system is:
MAT_1 processes images sequentially, reading them one by one using imread, and outputs the result for each image using imwrite.
MAT_2 reads the images output by MAT_1 using imread and outputs its result somewhere else.
One of the problems I think I need to address is to guarantee that MAT_2 only reads an image output by MAT_1 once MAT_1 has fully finished writing to it.
My questions are:
How would you approach this problem? Do I need to use semaphores or locks to prevent race conditions?
Does MATLAB provide any mechanism to lock files? (i.e. something similar to flock, but provided by MATLAB directly, and that works on multiple platforms, e.g. Windows & Linux). If not, do you know of any third-party library that I can use to build this mechanism in MATLAB?
EDIT:
As @yoda points out below, the Parallel Computing Toolbox (PCT) allows for blocking calls between MATLAB workers, which is great. That said, I am particularly interested in solutions that do not require the PCT.
Why do I require MAT_1 and MAT_2 to run in parallel threads?:
The processing done in MAT_2 is slower on average (and more prone to crashing) than MAT_1, and the output of MAT_1 feeds other programs and processes (including human inspection) that do not need to wait for MAT_2 to do its job.
Answers :
For a solution that allows for the implementation of semaphores but does not rely on the PCT see Jonas' answer below
For other good approaches to the problem, see Yoda's answer below
| I would approach this using semaphores; in my experience the PCT is unreasonably slow at synchronization.
dfacto (another answer) has a great implementation of semaphores for MATLAB; however, it will not work on MS Windows. I improved on that work so that it would. The improved work is here: http://www.mathworks.com/matlabcentral/fileexchange/45504-semaphoreposixandwindows
This will be better performing than interfacing with Java, .NET, the PCT, or file locks. This does not use the Parallel Computing Toolbox (PCT), and AFAIK semaphore functionality isn't in the PCT anyway (puzzling that they left it out!). It is possible to use the PCT for synchronization but everything I'd tried in it was unreasonably slow.
To install this high-performance semaphore library into MATLAB, run this within the MATLAB interpreter:
mex -O -v semaphore.c
You'll need a C++ compiler installed to compile semaphore.c into a binary MEX-file. That MEX-file is then callable from your MATLAB code as shown in the example below.
Usage example:
function Example()
semkey=1234;
semaphore('create',semkey,1);
funList = {@fun,@fun,@fun};
parfor i=1:length(funList)
funList{i}(semkey);
end
end
function fun(semkey)
semaphore('wait',semkey)
disp('hey');
semaphore('post',semkey)
end
| Semaphore | 6,415,283 | 14 |
I wanted to know which would be better/faster to use: POSIX calls like pthread_once() and sem_wait(), or the dispatch_* functions. So I created a little test and am surprised at the results (questions and results are at the end).
In the test code I am using mach_absolute_time() to time the calls. I really don’t care that this is not exactly matching up with nano-seconds; I am comparing the values with each other, so the exact time units don't matter, only the differences between the intervals do. The numbers in the results section are repeatable and not averaged; I could have averaged the times, but I am not looking for exact numbers.
test.m (simple console application; easy to compile):
#import <Foundation/Foundation.h>
#import <dispatch/dispatch.h>
#include <semaphore.h>
#include <pthread.h>
#include <time.h>
#include <mach/mach_time.h>
// *sigh* OSX does not have pthread_barrier (you can ignore the pthread_barrier
// code, the interesting stuff is lower)
typedef int pthread_barrierattr_t;
typedef struct
{
pthread_mutex_t mutex;
pthread_cond_t cond;
int count;
int tripCount;
} pthread_barrier_t;
int pthread_barrier_init(pthread_barrier_t *barrier, const pthread_barrierattr_t *attr, unsigned int count)
{
if(count == 0)
{
errno = EINVAL;
return -1;
}
if(pthread_mutex_init(&barrier->mutex, 0) < 0)
{
return -1;
}
if(pthread_cond_init(&barrier->cond, 0) < 0)
{
pthread_mutex_destroy(&barrier->mutex);
return -1;
}
barrier->tripCount = count;
barrier->count = 0;
return 0;
}
int pthread_barrier_destroy(pthread_barrier_t *barrier)
{
pthread_cond_destroy(&barrier->cond);
pthread_mutex_destroy(&barrier->mutex);
return 0;
}
int pthread_barrier_wait(pthread_barrier_t *barrier)
{
pthread_mutex_lock(&barrier->mutex);
++(barrier->count);
if(barrier->count >= barrier->tripCount)
{
barrier->count = 0;
pthread_cond_broadcast(&barrier->cond);
pthread_mutex_unlock(&barrier->mutex);
return 1;
}
else
{
pthread_cond_wait(&barrier->cond, &(barrier->mutex));
pthread_mutex_unlock(&barrier->mutex);
return 0;
}
}
//
// ok you can start paying attention now...
//
void onceFunction(void)
{
}
@interface SemaphoreTester : NSObject
{
sem_t *sem1;
sem_t *sem2;
pthread_barrier_t *startBarrier;
pthread_barrier_t *finishBarrier;
}
@property (nonatomic, assign) sem_t *sem1;
@property (nonatomic, assign) sem_t *sem2;
@property (nonatomic, assign) pthread_barrier_t *startBarrier;
@property (nonatomic, assign) pthread_barrier_t *finishBarrier;
@end
@implementation SemaphoreTester
@synthesize sem1, sem2, startBarrier, finishBarrier;
- (void)thread1
{
pthread_barrier_wait(startBarrier);
for(int i = 0; i < 100000; i++)
{
sem_wait(sem1);
sem_post(sem2);
}
pthread_barrier_wait(finishBarrier);
}
- (void)thread2
{
pthread_barrier_wait(startBarrier);
for(int i = 0; i < 100000; i++)
{
sem_wait(sem2);
sem_post(sem1);
}
pthread_barrier_wait(finishBarrier);
}
@end
int main (int argc, const char * argv[])
{
NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init];
int64_t start;
int64_t stop;
// semaphore non contention test
{
// grrr, OSX doesn't have sem_init
sem_t *sem1 = sem_open("sem1", O_CREAT, 0777, 0);
start = mach_absolute_time();
for(int i = 0; i < 100000; i++)
{
sem_post(sem1);
sem_wait(sem1);
}
stop = mach_absolute_time();
sem_close(sem1);
NSLog(@"0 Contention time = %lld", stop - start);
}
// semaphore contention test
{
__block sem_t *sem1 = sem_open("sem1", O_CREAT, 0777, 0);
__block sem_t *sem2 = sem_open("sem2", O_CREAT, 0777, 0);
__block pthread_barrier_t startBarrier;
pthread_barrier_init(&startBarrier, NULL, 3);
__block pthread_barrier_t finishBarrier;
pthread_barrier_init(&finishBarrier, NULL, 3);
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0);
dispatch_async(queue, ^{
pthread_barrier_wait(&startBarrier);
for(int i = 0; i < 100000; i++)
{
sem_wait(sem1);
sem_post(sem2);
}
pthread_barrier_wait(&finishBarrier);
});
dispatch_async(queue, ^{
pthread_barrier_wait(&startBarrier);
for(int i = 0; i < 100000; i++)
{
sem_wait(sem2);
sem_post(sem1);
}
pthread_barrier_wait(&finishBarrier);
});
pthread_barrier_wait(&startBarrier);
// start timing, everyone hit this point
start = mach_absolute_time();
// kick it off
sem_post(sem2);
pthread_barrier_wait(&finishBarrier);
// stop timing, everyone hit the finish point
stop = mach_absolute_time();
sem_close(sem1);
sem_close(sem2);
NSLog(@"2 Threads always contenting time = %lld", stop - start);
pthread_barrier_destroy(&startBarrier);
pthread_barrier_destroy(&finishBarrier);
}
// NSTask semaphore contention test
{
sem_t *sem1 = sem_open("sem1", O_CREAT, 0777, 0);
sem_t *sem2 = sem_open("sem2", O_CREAT, 0777, 0);
pthread_barrier_t startBarrier;
pthread_barrier_init(&startBarrier, NULL, 3);
pthread_barrier_t finishBarrier;
pthread_barrier_init(&finishBarrier, NULL, 3);
SemaphoreTester *tester = [[[SemaphoreTester alloc] init] autorelease];
tester.sem1 = sem1;
tester.sem2 = sem2;
tester.startBarrier = &startBarrier;
tester.finishBarrier = &finishBarrier;
[NSThread detachNewThreadSelector:@selector(thread1) toTarget:tester withObject:nil];
[NSThread detachNewThreadSelector:@selector(thread2) toTarget:tester withObject:nil];
pthread_barrier_wait(&startBarrier);
// start timing, everyone hit this point
start = mach_absolute_time();
// kick it off
sem_post(sem2);
pthread_barrier_wait(&finishBarrier);
// stop timing, everyone hit the finish point
stop = mach_absolute_time();
sem_close(sem1);
sem_close(sem2);
NSLog(@"2 NSTasks always contenting time = %lld", stop - start);
pthread_barrier_destroy(&startBarrier);
pthread_barrier_destroy(&finishBarrier);
}
// dispatch_semaphore non contention test
{
dispatch_semaphore_t sem1 = dispatch_semaphore_create(0);
start = mach_absolute_time();
for(int i = 0; i < 100000; i++)
{
dispatch_semaphore_signal(sem1);
dispatch_semaphore_wait(sem1, DISPATCH_TIME_FOREVER);
}
stop = mach_absolute_time();
NSLog(@"Dispatch 0 Contention time = %lld", stop - start);
}
// dispatch_semaphore contention test
{
__block dispatch_semaphore_t sem1 = dispatch_semaphore_create(0);
__block dispatch_semaphore_t sem2 = dispatch_semaphore_create(0);
__block pthread_barrier_t startBarrier;
pthread_barrier_init(&startBarrier, NULL, 3);
__block pthread_barrier_t finishBarrier;
pthread_barrier_init(&finishBarrier, NULL, 3);
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0);
dispatch_async(queue, ^{
pthread_barrier_wait(&startBarrier);
for(int i = 0; i < 100000; i++)
{
dispatch_semaphore_wait(sem1, DISPATCH_TIME_FOREVER);
dispatch_semaphore_signal(sem2);
}
pthread_barrier_wait(&finishBarrier);
});
dispatch_async(queue, ^{
pthread_barrier_wait(&startBarrier);
for(int i = 0; i < 100000; i++)
{
dispatch_semaphore_wait(sem2, DISPATCH_TIME_FOREVER);
dispatch_semaphore_signal(sem1);
}
pthread_barrier_wait(&finishBarrier);
});
pthread_barrier_wait(&startBarrier);
// start timing, everyone hit this point
start = mach_absolute_time();
// kick it off
dispatch_semaphore_signal(sem2);
pthread_barrier_wait(&finishBarrier);
// stop timing, everyone hit the finish point
stop = mach_absolute_time();
NSLog(@"Dispatch 2 Threads always contenting time = %lld", stop - start);
pthread_barrier_destroy(&startBarrier);
pthread_barrier_destroy(&finishBarrier);
}
// pthread_once time
{
pthread_once_t once = PTHREAD_ONCE_INIT;
start = mach_absolute_time();
for(int i = 0; i <100000; i++)
{
pthread_once(&once, onceFunction);
}
stop = mach_absolute_time();
NSLog(@"pthread_once time = %lld", stop - start);
}
// dispatch_once time
{
dispatch_once_t once = 0;
start = mach_absolute_time();
for(int i = 0; i <100000; i++)
{
dispatch_once(&once, ^{});
}
stop = mach_absolute_time();
NSLog(@"dispatch_once time = %lld", stop - start);
}
[pool drain];
return 0;
}
On My iMac (Snow Leopard Server 10.6.4):
Model Identifier: iMac7,1
Processor Name: Intel Core 2 Duo
Processor Speed: 2.4 GHz
Number Of Processors: 1
Total Number Of Cores: 2
L2 Cache: 4 MB
Memory: 4 GB
Bus Speed: 800 MHz
I get:
0 Contention time = 101410439
2 Threads always contenting time = 109748686
2 NSTasks always contenting time = 113225207
0 Contention named semaphore time = 166061832
2 Threads named semaphore contention time = 203913476
2 NSTasks named semaphore contention time = 204988744
Dispatch 0 Contention time = 3411439
Dispatch 2 Threads always contenting time = 708073977
pthread_once time = 2707770
dispatch_once time = 87433
On my MacbookPro (Snow Leopard 10.6.4):
Model Identifier: MacBookPro6,2
Processor Name: Intel Core i5
Processor Speed: 2.4 GHz
Number Of Processors: 1
Total Number Of Cores: 2 (though HT is enabled)
L2 Cache (per core): 256 KB
L3 Cache: 3 MB
Memory: 8 GB
Processor Interconnect Speed: 4.8 GT/s
I got:
0 Contention time = 74172042
2 Threads always contenting time = 82975742
2 NSTasks always contenting time = 82996716
0 Contention named semaphore time = 106772641
2 Threads named semaphore contention time = 162761973
2 NSTasks named semaphore contention time = 162919844
Dispatch 0 Contention time = 1634941
Dispatch 2 Threads always contenting time = 759753865
pthread_once time = 1516787
dispatch_once time = 120778
on an iPhone 3GS 4.0.2 I got:
0 Contention time = 5971929
2 Threads always contenting time = 11989710
2 NSTasks always contenting time = 11950564
0 Contention named semaphore time = 16721876
2 Threads named semaphore contention time = 35333045
2 NSTasks named semaphore contention time = 35296579
Dispatch 0 Contention time = 151909
Dispatch 2 Threads always contenting time = 46946548
pthread_once time = 193592
dispatch_once time = 25071
Questions and statements:
sem_wait() and sem_post() are slow when not under contention
why is this the case?
does OSX not care about compatible APIs? is there some legacy code that forces this to be slow?
Why aren't these numbers the same as the dispatch_semaphore functions?
sem_wait() and sem_post() are just as slow when under contention as when they are not (there is a difference but I thought that it would be a huge difference between under contention and not; I expected numbers like what was in the dispatch_semaphore code)
sem_wait() and sem_post() are slower when using named semaphores.
Why? is this because the semaphore has to be synced between processes? maybe there is more baggage when doing that.
dispatch_semaphore_wait() and dispatch_semaphore_signal() are crazy fast when not under contention (no surprise here since apple is touting this a lot).
dispatch_semaphore_wait() and dispatch_semaphore_signal() are 3x slower than sem_wait() and sem_post() when under contention
Why is this so slow? this does not make sense to me. I would have expected this to be on par with the sem_t under contention.
dispatch_once() is faster than pthread_once(), around 10x; why? The only thing I can tell from the headers is that there is less function-call overhead with dispatch_once() than with pthread_once().
Motivation:
I am presented with 2 sets of tools to get the job done for semaphores or once calls (I actually found other semaphore variants in the meantime, but I will ignore those unless brought up as a better option). I just want to know what the best tool for the job is (if you have the option of screwing in a screw with a Phillips or a flathead, I would choose the Phillips if I don't have to torque the screw and the flathead if I do).
It seems that if I start writing utilities with libdispatch I might not be able to port them to other operating systems that do not have libdispatch working yet... but it is so enticing to use ;)
As it stands:
I will be using libdispatch when I don't have to worry about portability and POSIX calls when I do.
Thanks!
| sem_wait() and sem_post() are heavyweight synchronization facilities that can be used between processes. They always involve round trips to the kernel, and probably always require your thread to be rescheduled. They are generally not the right choice for in-process synchronization. I'm not sure why the named variants would be slower than the anonymous ones...
Mac OS X is actually pretty good about Posix compatibility... But the Posix specifications have a lot of optional functions, and the Mac doesn't have them all. Your post is actually the first I've ever heard of pthread_barriers, so I'm guessing they're either relatively recent, or not all that common. (I haven't paid much attention to pthreads evolution for the past ten years or so.)
The reason the dispatch stuff falls apart under forced extreme contention is probably because under the covers the behavior is similar to spin locks. Your dispatch worker threads are very likely wasting a good chunk of their quanta under the optimistic assumption that the resource under contention is going to be available any cycle now... A bit of time with Shark would tell you for sure. The take-home point, though, should be that "optimizing" the thrashing during contention is a poor investment of programmer time. Instead spend the time optimizing the code to avoid heavy contention in the first place.
If you really have a resource that is an unavoidable bottleneck within your process, putting a semaphore around it is massively sub-optimal. Put it on its own serial dispatch queue and, as much as possible, dispatch_async blocks to be executed on that queue.
Finally, dispatch_once() is faster than pthread_once() because it's spec'd and implemented to be fast on current processors. Probably Apple could speed up the pthread_once() implementation, as I suspect the reference implementation uses pthread synchronization primitives, but... well... they've provided all of the libdispatch goodness instead. :-)
| Semaphore | 3,640,853 | 14 |
The true power of semaphore is :
Limits the number of threads that can access a resource or pool of
resources concurrently
That is understood and clear.
But I never got a chance to play with the overload of Wait which accepts a timeout integer; however, this seems to allow multiple threads into the critical section although I've explicitly set the semaphore not to allow more than one thread at a time:
private readonly SemaphoreSlim _mutex = new SemaphoreSlim(1);
private void Main()
{
Task.Run(() => DelayAndIncrementAsync());
Task.Run(() => DelayAndIncrementAsync());
}
private void DelayAndIncrementAsync()
{
_mutex.Wait(2000);
try
{
Console.WriteLine(0);
Thread.Sleep(TimeSpan.FromSeconds(5));
Console.WriteLine(1);
}
finally
{
_mutex.Release();
}
}
The first thread enters the mutex zone, prints "0", and waits 5 seconds; meanwhile, after 2 seconds, the other thread ALSO enters the critical section?
Question
Isn't it defeating the whole purpose of semaphore?
What are the real-life scenarios in which I would use this timeout, especially when the basic rule is:

"Semaphore = Limits the number of threads that can access a resource or pool of resources concurrently"?
| You need to check the return value of the wait. The timeout-based wait will try for 2 seconds to take the mutex and then return. You need to check whether the return value is true (i.e. you have the mutex) or not.
Edit: Also keep in mind that the timeout-based wait returns immediately if the semaphore is available, so you can't use the timeout to guard against an infinite loop inside the critical section.
private readonly SemaphoreSlim _mutex = new SemaphoreSlim(1);
void Main()
{
Task.Run(()=>DelayAndIncrementAsync());
Task.Run(()=>DelayAndIncrementAsync());
}
public void DelayAndIncrementAsync()
{
if (_mutex.Wait(2000))
{
try
{
Console.WriteLine(0);
Thread.Sleep(TimeSpan.FromSeconds(5));
Console.WriteLine(1);
}
finally
{
_mutex.Release();
}
} else {
//oh noes I don't have the mutex
}
}
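The pitfall is language-agnostic. For readers coming from java.util.concurrent, here is the same mistake and its fix sketched with Java's Semaphore.tryAcquire(timeout, unit); this is my own illustration, not part of the original answer:

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

public class TimedAcquireDemo {
    // Returns what a timed acquire reports while the only permit is held
    // elsewhere. The timed wait does NOT throw on timeout; it just returns
    // false, so ignoring the return value silently "enters" without a permit.
    static boolean timedAcquireWhileHeld() throws InterruptedException {
        Semaphore mutex = new Semaphore(1);
        mutex.acquireUninterruptibly();               // hold the only permit
        return mutex.tryAcquire(100, TimeUnit.MILLISECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        boolean gotIt = timedAcquireWhileHeld();
        System.out.println("entered critical section: " + gotIt);   // false
    }
}
```

Ignoring the boolean is exactly the bug in the question: after the timeout the caller proceeds as if it held the permit, and the later release over-increments the count.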
| Semaphore | 32,624,497 | 14 |
Can the semaphore be lower than 0? I mean, say I have a semaphore with N=3 and I call "down" 4 times, then N will remain 0 but one process will be blocked?
And the same the other way: if in the beginning I call up, can N be higher than 3? Because as I see it, if N can be higher than 3 after calling up a couple of times at the beginning, then later on I could call down more times than I otherwise could, thus putting more processes in the critical section than the semaphore allows.
If someone could clarify this a bit for me, I would much appreciate it.
Greg
| (Using the terminology from java.util.concurrent.Semaphore given the Java tag. Some of these details are implementation-specific. I suspect your "down" is the Java semaphore's acquire() method, and your "up" is release().)
Yes, your last call to acquire() will block until another thread calls release() or your thread is interrupted.
Yes, you can call release() more times, then down more times - at least with java.util.concurrent.Semaphore.
Some other implementations of a semaphore may have an idea of a "maximum" number of permits, and a call to release beyond that maximum would fail. The Java Semaphore class allows a reverse situation, where a semaphore can start off with a negative number of permits, and all acquire() calls will fail until there have been enough release() calls. Once the number of permits has become non-negative, it will never become negative again.
| Semaphore | 1,221,322 | 14 |
Can I add more permits to a semaphore in Java?
Semaphore s = new Semaphore(3);
After this, somewhere in the code, I want to change the permits to 4. Is this possible?
| Yes. The release method (confusingly named, IMO) can be used to increment permits since, from the docs:
There is no requirement that a thread that releases a permit must
have acquired that permit by calling acquire.
Correct usage of a semaphore is established by programming convention
in the application.
In other words:
semaphore.release(10);
Will add 10 more permits if the thread calling hasn't acquired any.
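A runnable sketch (names mine) of release() acting as a pure increment, with no prior acquire needed:

```java
import java.util.concurrent.Semaphore;

public class ReleaseAsIncrementDemo {
    // Releases without matching acquires simply add permits.
    static int permitsAfterExtraReleases() {
        Semaphore s = new Semaphore(3);
        s.release();          // 3 -> 4: no prior acquire required
        s.release(10);        // bulk-add: 4 -> 14
        return s.availablePermits();
    }

    public static void main(String[] args) {
        System.out.println(permitsAfterExtraReleases());
    }
}
```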
| Semaphore | 9,789,073 | 14 |
In one of our classes, we make heavy use of SemaphoreSlim.WaitAsync(CancellationToken) and cancellation of it.
I appear to have hit a problem: when a pending call to WaitAsync is cancelled shortly after a call to SemaphoreSlim.Release() (by shortly, I mean before the ThreadPool has had a chance to process a queued item), it puts the semaphore in a state where no further locks may be acquired.
Due to the non-deterministic nature of whether a ThreadPool item executes between the call to Release() and Cancel(), the following example does not always demonstrate the problem; for those circumstances, I have explicitly said to ignore that run.
This is my example which attempts to demonstrate the problem:
void Main()
{
for(var i = 0; i < 100000; ++i)
Task.Run(new Func<Task>(SemaphoreSlimWaitAsyncCancellationBug)).Wait();
}
private static async Task SemaphoreSlimWaitAsyncCancellationBug()
{
// Only allow one thread at a time
using (var semaphore = new SemaphoreSlim(1, 1))
{
// Block any waits
semaphore.Wait();
using(var cts1 = new CancellationTokenSource())
{
var wait2 = semaphore.WaitAsync(cts1.Token);
Debug.Assert(!wait2.IsCompleted, "Should be blocked by the existing wait");
// Release the existing wait
// After this point, wait2 may get completed or it may not (depending upon the execution of a ThreadPool item)
semaphore.Release();
// If wait2 was not completed, it should now be cancelled
cts1.Cancel();
if(wait2.Status == TaskStatus.RanToCompletion)
{
// Ignore this run; the lock was acquired before cancellation
return;
}
var wasCanceled = false;
try
{
await wait2.ConfigureAwait(false);
// Ignore this run; this should only be hit if the wait lock was acquired
return;
}
catch(OperationCanceledException)
{
wasCanceled = true;
}
Debug.Assert(wasCanceled, "Should have been canceled");
Debug.Assert(semaphore.CurrentCount > 0, "The first wait was released, and the second was canceled so why can no threads enter?");
}
}
}
And here a link to the LINQPad implementation.
Run the previous sample a few times and sometimes you will see the cancellation of WaitAsync no longer allows any threads to enter.
Update
It appears this is not reproducible on every machine, if you manage to reproduce the problem, please leave a comment saying so.
I have managed to reproduce the problem on the following:
3x 64 bit Windows 7 machines running an i7-2600
64 bit Windows 8 machine running an i7-3630QM
I have been unable to reproduce the problem on the following:
64 bit Windows 8 machine running an i5-2500k
Update 2
I have filed a bug with Microsoft here, however so far they are unable to reproduce so it would really be helpful if as many as possible could try and run the sample project, it can be found on the attachments tab of the linked issue.
| SemaphoreSlim was changed in .NET 4.5.1
The .NET 4.5 version of the WaitUntilCountOrTimeoutAsync method is:
private async Task<bool> WaitUntilCountOrTimeoutAsync(TaskNode asyncWaiter, int millisecondsTimeout, CancellationToken cancellationToken)
{
[...]
// If the await completed synchronously, we still hold the lock. If it didn't,
// we no longer hold the lock. As such, acquire it.
lock (m_lockObj)
{
RemoveAsyncWaiter(asyncWaiter);
if (asyncWaiter.IsCompleted)
{
Contract.Assert(asyncWaiter.Status == TaskStatus.RanToCompletion && asyncWaiter.Result,
"Expected waiter to complete successfully");
return true; // successfully acquired
}
cancellationToken.ThrowIfCancellationRequested(); // cancellation occurred
return false; // timeout occurred
}
}
Same method in 4.5.1:
private async Task<bool> WaitUntilCountOrTimeoutAsync(TaskNode asyncWaiter, int millisecondsTimeout, CancellationToken cancellationToken)
{
[...]
lock (m_lockObj)
{
if (RemoveAsyncWaiter(asyncWaiter))
{
cancellationToken.ThrowIfCancellationRequested();
return false;
}
}
return await asyncWaiter.ConfigureAwait(false);
}
asyncWaiter is basically a task that always returns true (it completes on a separate thread, always with a true result).
Release method calls RemoveAsyncWaiter and schedules worker to complete with true.
Here is a possible issue in 4.5:
RemoveAsyncWaiter(asyncWaiter);
if (asyncWaiter.IsCompleted)
{
Contract.Assert(asyncWaiter.Status == TaskStatus.RanToCompletion && asyncWaiter.Result,
"Expected waiter to complete successfully");
return true; // successfully acquired
}
//! another thread calls Release
//! asyncWaiter completes with true, Wait should return true
//! CurrentCount will be 0
cancellationToken.ThrowIfCancellationRequested(); // cancellation occurred,
//! throws OperationCanceledException
//! wasCanceled will be true
return false; // timeout occurred
In 4.5.1 RemoveAsyncWaiter will return false, and WaitAsync will return true.
| Semaphore | 21,019,895 | 14 |
In the man page it appears that even if you initialise a semaphore to a value of one:
sem_init(&mySem, 0, 1);
It could still be incremented to a value greater than 1 with multiple calls to
sem_post(&mySem);
But in this code example the comment seems to think differently:
sem_init(&mutex, 0, 1); /* initialize mutex to 1 - binary semaphore */
Is it possible to initialise a strictly binary semaphore in C?
Note: The reason for doing this instead of using a mutex in this case is the sem_post and sem_wait may be called by different threads.
| If you want a strictly binary semaphore on Linux, I suggest building one out of mutexes and condition variables.
#include <pthread.h>
#include <stdbool.h>
#include <stdlib.h>   /* for abort() */

struct binary_semaphore {
pthread_mutex_t mutex;
pthread_cond_t cvar;
bool v;
};
void mysem_post(struct binary_semaphore *p)
{
pthread_mutex_lock(&p->mutex);
if (p->v)
abort(); // error
p->v = true;
pthread_cond_signal(&p->cvar);
pthread_mutex_unlock(&p->mutex);
}
void mysem_wait(struct binary_semaphore *p)
{
pthread_mutex_lock(&p->mutex);
while (!p->v)
pthread_cond_wait(&p->cvar, &p->mutex);
p->v = false;
pthread_mutex_unlock(&p->mutex);
}
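For comparison, the same construction translates almost line for line to Java's intrinsic monitors; this is my own sketch, not part of the original answer. The while loop plays the same role as the C code's while (!p->v) guard against spurious wakeups:

```java
public class BinarySemaphore {
    private boolean v;

    public BinarySemaphore(boolean available) { v = available; }

    // post: error if the semaphore is already "up", mirroring the C abort()
    public synchronized void post() {
        if (v) throw new IllegalStateException("already posted");
        v = true;
        notifyAll();                 // wake a waiter
    }

    // wait: loop guards against spurious wakeups, like the C while (!p->v)
    public synchronized void acquire() throws InterruptedException {
        while (!v) wait();
        v = false;
    }

    public static void main(String[] args) throws InterruptedException {
        BinarySemaphore sem = new BinarySemaphore(false);
        new Thread(() -> sem.post()).start();
        sem.acquire();               // blocks until the other thread posts
        System.out.println("handoff complete");
    }
}
```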
| Semaphore | 7,478,684 | 14 |
I have the following Java code:
import java.util.concurrent.*;
class Foo{
static Semaphore s = new Semaphore(1);
public void fun(final char c, final int r){
new Thread(new Runnable(){
public void run(){
try{
s.acquire(r);
System.out.println(c+"_"+r);
s.release(r+1);
} catch(Exception e){ e.printStackTrace(); }
}
}).start();
}
}
class ths{
public static void main(String[]args) throws Exception{
Foo f = new Foo();
f.fun('B',2);
f.fun('F',6);
f.fun('A',1);
f.fun('C',3);
f.fun('D',4);
f.fun('E',5);
}
}
Ideally, this should print A_1 through F_6 in order and exit, but for some reason that doesn't happen. It usually prints A_1 and B_2 and then it gets stuck.
I can't find anything obviously wrong with my code. Any suggestions?
| The basic problem is that acquire(int permits) gives no guarantee about which waiting thread newly released permits will go to. A thread waiting for more permits than are currently available blocks at the head of the wait queue, and the available permits are effectively reserved for it rather than being handed to a later waiter that could be satisfied with fewer.
Let's consider your code. When, say, three permits become available there's nothing to guarantee that they will be given to thread C. They could, in fact, be given to thread D to partially satisfy its acquire(4) request, resulting in a deadlock.
If you change the code like so, this fixes the problem for me:
public void fun(final char c, final int r){
new Thread(new Runnable(){
public void run(){
try{
while (!s.tryAcquire(r, 1, TimeUnit.MILLISECONDS)) {};
System.out.println(c+"_"+r);
s.release(r+1);
} catch(Exception e){ e.printStackTrace(); }
}
}).start();
}
(On second thoughts, the above is also broken since there's no guarantee the correct thread will ever get the permits -- it could keep trying and timing out indefinitely.)
| Semaphore | 7,743,203 | 13 |
I need to check whether my algorithm for solving the dining philosophers problem guarantees that all of the following are satisfied:
No possibility of deadlock.
No possibility of starvation.
I am using a semaphore per chopstick to solve the problem.
Here is my code (the algorithm):
while(true)
{
// He is Hungry
pickup_chopsticks(i);
// He is Eating...
drop_chopsticks(i);
// He is thinking
}
// ...
void pickup_chopsticks(int i)
{
if(i % 2 == 0) /* Even number: Left, then right */
{
semaphore_wait(chopstick[(i+1) % NUM_PHILOSOPHERS]);
semaphore_wait(chopstick[i]);
}
else /* Odd number: Right, then left */
{
semaphore_wait(chopstick[i]);
semaphore_wait(chopstick[(i+1) % NUM_PHILOSOPHERS]);
}
}
void drop_chopsticks(int i)
{
semaphore_signal(chopstick[i]);
semaphore_signal(chopstick[(i+1) % NUM_PHILOSOPHERS]);
}
I am sure there is no possibility of deadlock here, but is it possible to have a starvation problem? If yes, how can I solve it?
| Definitions. A philosopher is enabled iff he is not waiting for an unavailable semaphore. An execution is an infinite sequence of steps taken by enabled philosophers. An execution is strongly fair iff every philosopher enabled infinitely often takes infinitely many steps. A dining philosophers solution is starvation-free iff, in every strongly fair execution, every philosopher dines infinitely often.
Theorem. Every loop-free deadlock-free dining philosophers solution in which non-dining philosophers do not hold semaphores is starvation-free.
Proof. Assume for the sake of obtaining a contradiction that there exists a strongly fair execution in which some philosopher, call him Phil, dines only finitely often. We show that this execution is in fact deadlocked.
Since pickup_chopsticks and drop_chopsticks have no loops, Phil takes only finitely many steps. The last step is a semaphore_wait, say on chopstick i. Because the execution is strongly fair, chopstick i is necessarily continuously unavailable from some finite time onward. Let Quentin be the last holder of chopstick i. If Quentin took infinitely many steps, then he would semaphore_signal chopstick i, so Quentin takes finitely many steps as well. Quentin, in turn, is waiting on a chopstick j, which, by the same argument, is continuously unavailable until the end of time and held by (say) Robert. By following the chain of semaphore_waits among finitely many philosophers, we necessarily arrive at a cycle, which is a deadlock.
QED
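The asymmetric pickup order from the question can also be exercised empirically. The sketch below (my own, written in Java for brevity) runs five philosophers through many rounds; it completes because the odd/even ordering breaks the circular-wait condition, whereas a deadlock would leave threads alive past the join timeout:

```java
import java.util.concurrent.Semaphore;

public class AsymmetricDiners {
    static final int N = 5;

    // Runs every philosopher for `meals` rounds; returns true iff all
    // finish (a deadlock would make the joins time out instead).
    static boolean runDinner(int meals) throws InterruptedException {
        Semaphore[] chopstick = new Semaphore[N];
        for (int i = 0; i < N; i++) chopstick[i] = new Semaphore(1);
        Thread[] phils = new Thread[N];
        for (int i = 0; i < N; i++) {
            final int id = i;
            phils[i] = new Thread(() -> {
                try {
                    for (int m = 0; m < meals; m++) {
                        if (id % 2 == 0) {                    // even: one order
                            chopstick[(id + 1) % N].acquire();
                            chopstick[id].acquire();
                        } else {                              // odd: the other
                            chopstick[id].acquire();
                            chopstick[(id + 1) % N].acquire();
                        }
                        // eating elided
                        chopstick[id].release();
                        chopstick[(id + 1) % N].release();
                    }
                } catch (InterruptedException ignored) { }
            });
            phils[i].setDaemon(true);   // don't keep the JVM alive on deadlock
            phils[i].start();
        }
        boolean allDone = true;
        for (Thread t : phils) {
            t.join(10_000);
            allDone &= !t.isAlive();
        }
        return allDone;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("all finished: " + runDinner(1000));
    }
}
```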
| Semaphore | 8,274,098 | 13 |
I'm late to the party, but I recently learned about SemaphoreSlim.
I used to use lock for synchronous locking, and a busy boolean for asynchronous locking. Now I just use SemaphoreSlim for everything.
private SemaphoreSlim semaphoreSlim = new SemaphoreSlim(1, 1);
private void DoStuff()
{
semaphoreSlim.Wait();
try
{
DoBlockingStuff();
}
finally
{
semaphoreSlim.Release();
}
}
vs
private object locker = new object();
private void DoStuff()
{
lock(locker)
{
DoBlockingStuff();
}
}
Are there any synchronous cases where I should prefer using lock over SemaphoreSlim? If so, what are they?
| Here are the advantages of the lock over the SemaphoreSlim:
The lock is reentrant, while the SemaphoreSlim is not. So programming with the lock is more forgiving. In case there is a rare path in your app where you are acquiring the same lock twice, the lock will acquire it successfully, while the SemaphoreSlim will deadlock.
The lock is syntactic sugar around the Monitor class. In other words there is language support for the Monitor in C#, and there isn't for the SemaphoreSlim. So using the lock is comparatively more convenient and less verbose.
You can write more robust code with the lock, because you can add debugging assertions in auxiliary methods that the lock has been acquired: Debug.Assert(Monitor.IsEntered(_locker));
You can get contention statistics with the Monitor.LockContentionCount property: "Gets the number of times there was contention when trying to take the monitor's lock." There are no statistics available for the SemaphoreSlim class.
The SemaphoreSlim is IDisposable, so you have to think about when (and whether) to dispose it. Can you get away without disposing it? Are you disposing it prematurely and risk an ObjectDisposedException? These are questions that you don't have to answer with the lock.
The lock can survive in the scenario of an aborted thread. It is translated by the C# compiler like this:
bool lockTaken = false;
try
{
Monitor.Enter(obj, ref lockTaken);
/* Here goes the code inside the `lock`. */
}
finally
{
if (lockTaken)
{
Monitor.Exit(obj);
}
}
The Monitor.Enter has been coded carefully so that in case the thread is aborted, the lockTaken will have the correct value. On the contrary the SemaphoreSlim.Wait is typically called outside of the try/finally block, so there is a small window that the current thread can be aborted without releasing the lock, resulting in a deadlock.
The .NET Core/5+ platform has dropped support for the Thread.Abort method, so you could rightfully say that the last point has only theoretical value.
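Point 1 (reentrancy) is not specific to C#; the same contrast shows up between Java's ReentrantLock and a binary Semaphore. The sketch below is my own illustration of that contrast, not part of the original answer:

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.locks.ReentrantLock;

public class ReentrancyDemo {
    // A reentrant lock lets the owning thread nest acquisitions.
    static int nestedHoldCount() {
        ReentrantLock lock = new ReentrantLock();
        lock.lock();
        lock.lock();                        // fine: hold count goes to 2
        int holds = lock.getHoldCount();
        lock.unlock();
        lock.unlock();
        return holds;
    }

    // A binary semaphore has no owner: a second acquire by the same
    // thread is just another wait, so a blocking call would deadlock.
    static boolean reacquireGranted() {
        Semaphore sem = new Semaphore(1);
        sem.acquireUninterruptibly();       // take the only permit
        boolean granted = sem.tryAcquire(); // false: a blocking acquire would hang
        sem.release();
        return granted;
    }

    public static void main(String[] args) {
        System.out.println("nested holds: " + nestedHoldCount());
        System.out.println("reacquire granted: " + reacquireGranted());
    }
}
```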
| Semaphore | 74,522,878 | 13 |
What are the differences between the functions included in <semaphore.h> and <sys/sem.h>?
Is there a situation where it is better to use one header rather than the other?
| <sys/sem.h> provides the interface for XSI (originally Unix System V) semaphores. These are not part of the base POSIX standard (they're in the XSI option which is largely for traditional Unix compatibility) and while they are not considered obsolescent/deprecated yet, many programmers consider them deprecated, and POSIX advises:
APPLICATION USAGE
The POSIX Realtime Extension defines alternative interfaces for interprocess communication. Application developers who need to use IPC should design their applications so that modules using the IPC routines described in XSI Interprocess Communication can be easily modified to use the alternative interfaces.
The advantages and disadvantages of XSI semaphores is that they are, and must be due to the way their interface works, kernel-space objects. The main benefit this gives you is the ability to set them up so that the kernel can back-out operations if the process exits or is killed unexpectedly. The main cost is that every operation is a round-trip to kernel-space, i.e. they're very slow. The interfaces for using them are also very obtuse and hard to learn, and they're necessarily a process-shared resource, meaning you have to deal with a shared namespace and resource cleanup issues.
<semaphore.h> defines POSIX semaphores, which are designed in such a way that they can be implemented entirely in userspace, except in the contended case where the process will call into the kernel to go to sleep. Their performance should be near-optimal (i.e. almost impossible to beat rolling your own) but they're not quite as featureful as XSI semaphores. POSIX semaphores also offer you the choice of whether you want a process-local semaphore (for use in a multi-threaded environment, or even, in some conditions, a signal handler in a single-threaded program) or a process-shared one, and in the latter case, you also have the choice whether to let the system handle allocating it in a shared namespace by name, or to obtain shared memory yourself and initialize it in shared memory.
| Semaphore | 11,058,045 | 12 |
I'm trying to implement a pool of workers in Go. The go-wiki (and Effective Go in the Channels section) feature excellent examples of bounding resource use. Simply make a channel with a buffer that's as large as the worker pool. Then fill that channel with workers, and send them back into the channel when they're done. Receiving from the channel blocks until a worker is available. So the channel and a loop is the entire implementation -- very cool!
Alternatively one could block on sending into the channel, but same idea.
My question is about changing the size of the worker pool while it's running. I don't believe there's a way to change the size of a channel. I have some ideas, but most of them seem way too complicated. This page actually implements a semaphore using a channel and empty structs in much the same way, but it has the same problem (these things come up all the time while Googling for "golang semaphore").
| I would do it the other way round. Instead of spawning many goroutines (which still require a considerable amount of memory) and use a channel to block them, I would model the workers as goroutines and use a channel to distribute the work. Something like this:
package main

import (
    "fmt"
    "sync"
)

type Task string

func worker(tasks <-chan Task, quit <-chan bool, wg *sync.WaitGroup) {
    defer wg.Done()
    for {
        select {
        case task, ok := <-tasks:
            if !ok {
                return
            }
            fmt.Println("processing task", task)
        case <-quit:
            return
        }
    }
}

func main() {
    tasks := make(chan Task, 128)
    quit := make(chan bool)
    var wg sync.WaitGroup

    // spawn 5 workers
    for i := 0; i < 5; i++ {
        wg.Add(1)
        go worker(tasks, quit, &wg)
    }

    // distribute some tasks
    tasks <- Task("foo")
    tasks <- Task("bar")

    // remove two workers
    quit <- true
    quit <- true

    // add three more workers
    for i := 0; i < 3; i++ {
        wg.Add(1)
        go worker(tasks, quit, &wg)
    }

    // distribute more tasks
    for i := 0; i < 20; i++ {
        tasks <- Task(fmt.Sprintf("additional_%d", i+1))
    }

    // end of tasks. the workers should quit afterwards
    close(tasks)
    // use "close(quit)", if you do not want to wait for the remaining tasks

    // wait for all workers to shut down properly
    wg.Wait()
}
It might be a good idea to create a separate WorkerPool type with some convenient methods. Also, instead of type Task string it is quite common to use a struct that also contains a done channel that is used to signal that the task had been executed successfully.
Edit: I've played around a bit more and came up with the following: http://play.golang.org/p/VlEirPRk8V. It's basically the same example, with a nicer API.
| Semaphore | 23,837,368 | 12 |
I am not an advanced developer. I'm just trying to get a hold on the task library and just googling. I've never used the class SemaphoreSlim so I would like to know what it does. Here I present code where SemaphoreSlim is used with async & await but which I do not understand. Could someone help me to understand the code below.
1st set of code
await WorkerMainAsync();
async Task WorkerMainAsync()
{
    SemaphoreSlim ss = new SemaphoreSlim(10);
    while (true)
    {
        await ss.WaitAsync();
        // you should probably store this task somewhere and then await it
        var task = DoPollingThenWorkAsync();
    }
}

async Task DoPollingThenWorkAsync(SemaphoreSlim semaphore)
{
    var msg = Poll();
    if (msg != null)
    {
        await Task.Delay(3000); // process the I/O-bound job
    }

    // this assumes you don't have to worry about exceptions
    // otherwise consider try-finally
    semaphore.Release();
}
Firstly, the WorkerMainAsync will be called and a SemaphoreSlim is used. Why is 10 passed to the constructor of SemaphoreSlim?
When does the control come out of the while loop again?
What does ss.WaitAsync(); do?
The DoPollingThenWorkAsync() function is expecting a SemaphoreSlim but is not passed anything when it is called. Is this a typo?
Why is await Task.Delay(3000); used?
They could simply use Task.Delay(3000) but why do they use await here instead?
2nd set of code for same purpose
async Task WorkerMainAsync()
{
    SemaphoreSlim ss = new SemaphoreSlim(10);
    List<Task> trackedTasks = new List<Task>();
    while (DoMore())
    {
        await ss.WaitAsync();
        trackedTasks.Add(Task.Run(() =>
        {
            DoPollingThenWorkAsync();
            ss.Release();
        }));
    }
    await Task.WhenAll(trackedTasks);
}

void DoPollingThenWorkAsync()
{
    var msg = Poll();
    if (msg != null)
    {
        Thread.Sleep(2000); // process the long running CPU-bound job
    }
}
Here a task & ss.Release are added to a list. I really do not understand how the tasks can run after being added to a list?
trackedTasks.Add(Task.Run(async () =>
{
    await DoPollingThenWorkAsync();
    ss.Release();
}));
I am looking forward for a good explanation & help to understand the two sets of code. Thanks
|
why 10 is passing to SemaphoreSlim constructor.
They are using SemaphoreSlim to limit to 10 tasks at a time. The semaphore is "taken" before each task is started, and each task "releases" it when it finishes. For more about semaphores, see MSDN.
they can use simply Task.Delay(3000) but why they use await here.
Task.Delay creates a task that completes after the specified time interval and returns it. Like most Task-returning methods, Task.Delay returns immediately; it is the returned Task that has the delay. So if the code did not await it, there would be no delay.
just really do not understand after adding task to list how they can run?
In the Task-based Asynchronous Pattern, Task objects are returned "hot". This means they're already running by the time they're returned. The await Task.WhenAll at the end is waiting for them all to complete.
| Semaphore | 19,998,779 | 12 |
Does anyone know how .NET handles a timeout on a call to Semaphore.WaitOne(timeout)?
I'd expect a TimeoutException, but the MSDN documentation doesn't list this in the list of expected exceptions, and I can't seem to find it documented anywhere.
| The method will return false if it times out, and true if it returns a signal:
if (mySemaphore.WaitOne(1000))
{
    // signal received
}
else
{
    // wait timed out
}
| Semaphore | 1,431,349 | 12 |
For an OS class, I currently have to create a thread-safe queue in the linux kernel that one interacts with using syscalls.
Now for the critical sections my gut feeling is that I would want to use the mutex_lock and mutex_unlock functions in the mutex.h header. However, I was told that I could instead use a binary semaphore with down_interruptible and up in the semaphore.h header, and that it'd be better.
I've read through Difference between binary semaphore and mutex: From it, I understand that the main advantage of a mutex is how strongly it enforces ownership, and that the advantage of the semaphore is that since it doesn't enforce ownership you can use it as a synchronization mechanism between two (multiple?) different threads.
My question is: what are the advantages of the binary semaphore if you use it in exactly the same way as a mutex? More explicitly, if I wrote:
down()
/* critical */
up()
in the same way that I would do
mutex_lock()
/* critical */
mutex_unlock()
Is there some performance advantage because it's less safe than a mutex? Am I missing something?
Here's a small snippet of the code that I want to make thread-safe if you want more context (this is my first C proj ever):
#define MESSAGE_MAX_SIZE 512

typedef struct list_head list_node;

/* Create message struct */
typedef struct {
    size_t size;
    list_node node;
    char data[MESSAGE_MAX_SIZE];
} Message;

/* Create the linked list queue with dummy head */
struct {
    size_t size;
    list_node head;
} my_q = { 0, LIST_HEAD_INIT(my_q.head) };

/*
  Adds a new item to the tail of the queue.
  @data: pointer to data to add to list
  @len: size of the data
*/
asmlinkage long sys_enqueue(const void __user *data, long len)
{
    long res = 0;
    Message *msg = 0;

    if (len < 0) return EINVAL;
    if (len > MESSAGE_MAX_SIZE) return E2BIG;

    msg = kmalloc(sizeof(Message), GFP_KERNEL);
    if (msg == 0) return ENOMEM;

    res = copy_from_user(msg->data, data, len);
    if (res != 0) {
        kfree(msg); /* don't leak the allocation on a failed copy */
        return EFAULT;
    }

    /* Start Critical Section */
    my_q.size++;
    list_add_tail(&msg->node, &my_q.head);
    /* End Critical Section */

    return 0;
}
| In absence of empirical evidence, I'd quote from the book Linux Kernel Development
It (i.e. mutex) behaves similar to a semaphore with a count of one, but it has a
simpler interface, more efficient performance, and additional
constraints on its use.
Additionally, there are many constraints that apply to mutexes but not to semaphores: for example, a process cannot exit while holding a mutex. Moreover, if the CONFIG_DEBUG_MUTEXES kernel option is enabled, all the constraints that apply to mutexes are enforced by debugging checks.
So, unless there is a good reason not to use mutex, that should be first choice.
| Semaphore | 40,291,858 | 11 |
Are the P() and V() operations that can be performed on a semaphore guaranteed to be atomic? Can a semaphore prevent two processes from getting into P() at the same time?
| Suppose we have a binary semaphore, s, which has the value 1, and two processes simultaneously attempt to execute P on s. Only one of these operations will be able to complete before the next V operation on s; the other process attempting to perform a P operation is suspended.
Taken from my university notes:
We can think of P and V as controlling access to a resource:
When a process wants to use the resource, it performs a P operation: if this succeeds, it decrements the amount of resource available and the process continues; if all the resource is currently in use, the process has to wait.
When a process is finished with the resource, it performs a V operation: if there were processes waiting on the resource, one of these is woken up; if there were no waiting processes, the semaphore is incremented, indicating that there is now more of the resource free. Note that the definition of V doesn't specify which process is woken up if more than one process has been suspended on the same semaphore.
Semaphores can solve both mutual exclusion and condition synchronization problems. So the answer to both your questions is: yes.
| Semaphore | 5,094,440 | 11 |
Is there any way to query the javascript synchronously from the main thread?
Javascript is queried from the native code using an asynchronous function with a callback parameter to handle the response:
func evaluateJavaScript(_ javaScriptString: String, completionHandler completionHandler: ((AnyObject!, NSError!) -> Void)?)
Asynchronous behavior can usually be turned synchronous by pausing the thread & controlling execution with a semaphore:
// Executing in the main thread
let sema = dispatch_semaphore_create(0)

dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0)) {
    // Background thread
    self.evaluateJavaScript("navigator.userAgent", completionHandler: { (value: AnyObject!, error: NSError!) -> Void in
        if let ua = value as? String {
            userAgent = ua
        } else {
            ERROR("ERROR There was an error retrieving the default user agent, using hardcoded value \(error)")
        }
        dispatch_semaphore_signal(sema)
    })
}

dispatch_semaphore_wait(sema, DISPATCH_TIME_FOREVER)
...however in this case, because the completionHandler is always called on the main thread, the code deadlocks because the completionHandler block will never execute (main thread was paused by the dispatch_semaphore_wait on the last line)
Any suggestions?
EDIT
I would rather not be blocking the main thread to execute that code. But I can't decouple from the main thread without changing my APIs from synchronous to asynchronous, with a domino effect all the way up the stack (e.g. from let ua = computeUserAgent() to computeUserAgent() {(ua: String)->Void in /*Use ua value here */}). So I need to choose between 2 approaches that both have downsides, and I would rather pick the approach that doesn't mess up my internal APIs, especially for a task as trivial as looking up the user agent.
| If you must do this...
As suggested in a comment to this answer you could run a tight loop around your semaphore wait like this.
while (dispatch_semaphore_wait(sema, DISPATCH_TIME_NOW)) {
    [[NSRunLoop currentRunLoop] runMode:NSDefaultRunLoopMode
                             beforeDate:[NSDate dateWithTimeIntervalSinceNow:10]];
}
| Semaphore | 28,388,197 | 11 |
Is it possible to query a semaphore created with sem_get without actually blocking like the sem_acquire function does?
Cheers,
Dan.
| Unfortunately, PHP does not currently support non-blocking semaphores.
If something like this is necessary you can utilize semaphores together with shared memory to create your own non-blocking lock mechanisms.
Use a shared memory variable to mark whether or not a lock exists and then use a semaphore around operations against that variable.
| Semaphore | 1,940,759 | 11 |
Going over this sample semaphore implementation (for SMP systems), I understand that test-and-set is required for atomic checks on a multiprocessor. However, once we add the atomic checks, isn't disabling interrupts redundant? Disabling interrupts, after all, only offers atomicity over one processor. Addition to the semaphore queue also needs to be protected.
class semaphore {
    private int t;
    private int count;
    private queue q;

    public semaphore(int init)
    {
        t = 0;
        count = init;
        q = new queue();
    }

    public void P()
    {
        Disable interrupts;
        while (TAS(t) != 0) { /* just spin */ };
        if (count > 0) {
            count--;
            t = 0;
            Enable interrupts;
            return;
        }
        Add process to q;
        t = 0;
        Enable interrupts;
        Redispatch;
    }

    public V()
    {
        Disable interrupts;
        while (TAS(t) != 0) { /* just spin */ };
        if (q == empty) {
            count++;
        } else {
            Remove first process from q;
            Wake it up;
        }
        t = 0;
        Enable interrupts;
    }
}
| While it is true that turning interrupts off on one processor is insufficient to guarantee atomic memory access in a multiprocessor system (because, as you mention, threads on other processors can still access shared resources), we turn interrupts off for part of the multiprocessor semaphore implementation because we do not want to be descheduled while we are doing a test and set.
If a thread holding the test-and-set is descheduled, no other thread can do anything with the semaphore while that thread is asleep (which is not good), because the semaphore's count is protected by that test-and-set. To guarantee that this doesn't happen, we turn interrupts off on our processor while using the test-and-set.
| Semaphore | 27,561,084 | 11 |
This is a follow-up to Can C++11 condition_variables be used to synchronize processes?.
Can std::condition_variable objects be used as counting semaphores?
Methinks not because the object seems bound to a std::mutex, which implies it can only be used as a binary semaphore. I've looked online, including here, here, and here, but can't find reference or example of using these objects as counting semaphores.
| Yes.
struct counting_sem {
    counting_sem(std::ptrdiff_t init = 0) : count(init) {}

    // remove in C++17:
    counting_sem(counting_sem&& src) {
        auto l = src.lock(); // maybe drop, as src is supposed to be dead
        count = src.count;
    }
    counting_sem& operator=(counting_sem&& src) = delete;

    void take(std::size_t N = 1) {
        if (N == 0) return;
        auto l = lock();
        cv.wait(l, [&]{
            if (count > 0 && count < (std::ptrdiff_t)N) {
                N -= count;
                count = 0;
            } else if (count >= (std::ptrdiff_t)N) {
                count -= N;
                N = 0;
            }
            return N == 0;
        });
    }

    void give(std::size_t N = 1) {
        if (N == 0) return;
        {
            auto l = lock();
            count += N;
        }
        cv.notify_all();
    }

    // reduce the count without waiting for it
    void reduce(std::size_t N = 1) {
        if (N == 0) return;
        auto l = lock();
        count -= N;
    }

private:
    std::mutex m;
    std::condition_variable cv;
    std::ptrdiff_t count;

    auto lock() {
        return std::unique_lock<std::mutex>(m);
    }
    auto unlocked() {
        return std::unique_lock<std::mutex>(m, std::defer_lock_t{});
    }
};
Code not tested or compiled, but the design is sound.
take(7) is not equivalent to for(repeat 7 times) take(): instead, it takes as much as it can then blocks if that wasn't enough.
Modifying so that it doesn't take anything until there is enough is easy:
if (count >= (std::ptrdiff_t)N) {
count -= N;
N = 0;
}
| Semaphore | 40,335,671 | 11 |
A seemingly straightforward problem: I have a java.util.concurrent.Semaphore, and I want to acquire a permit using acquire().
The acquire() method is specified to throw InterruptedException if the thread is interrupted:
If the current thread:
has its interrupted status set on entry to this method; or
is interrupted while waiting for a permit,
then InterruptedException is thrown and the current thread's interrupted status is cleared.
However, the usual pattern with methods that may throw InterruptedException is to call them in a loop, since threads can be subject to spurious wakeups that look the same as being interrupted. For example, the documentation for Object.wait(long) says:
A thread can also wake up without being notified, interrupted, or timing out, a so-called spurious wakeup. While this will rarely occur in practice, applications must guard against it by testing for the condition that should have caused the thread to be awakened, and continuing to wait if the condition is not satisfied. In other words, waits should always occur in loops.
So the question is, is Semaphore.acquire() subject to the same kind of spurious wakeup? The logical answer would be "no", but I can't find any evidence for that, and in fact the evidence seems to point in the other direction.
Looking at the source for Semaphore, it appears that it delegates the actual acquire to an AbstractQueuedSynchronizer, which according to its source delegates to LockSupport.park().
The documentation for LockSupport.park() explicitly mentions spurious wakeup, but the implementation of AbstractQueuedSynchronizer.doAcquireInterruptibly() appears to just check Thread.interrupted() and then throw InterruptedException.
So, unless I'm missing something (which is very possible), it appears that Semaphore.acquire() can throw InterruptedException spuriously?
Is that correct? More importantly, is there anything I can do about that? I could use Semaphore.acquireUninterruptibly(), but I don't want an uninterruptible wait, just one that doesn't get interrupted spuriously. Is there any alternative?
| It is "spurious wakeup" not "spurious interrupt": "A thread can also wake up without being notified, interrupted, or timing out, a so-called spurious wakeup." There is no InterruptedException thrown during a spurious wakeup. As you say in the comments: The thread wakes up but the interrupted flag is not set.
| Semaphore | 12,165,030 | 11 |
I am reading the book Java Concurrency in Practice. In a section about java.util.concurrent.Semaphore, the below lines are present in the book. It is a comment about its implementation of "virtual permit" objects
The implementation has no actual permit objects, and Semaphore does
not associate dispensed permits with threads, so a permit acquired in
one thread can be released from another thread. You can think of
acquire as consuming a permit and release as creating one; a
Semaphore is not limited to the number of permits it was created with.
Can somebody explain this? I am having trouble understanding this. If we create a pool of fixed size, we create a fixed number of "permits". From the above statement, it looks like the "permits" can keep growing. Why is it designed this way?
| Instead of "handing out" permit objects, the implementation just has a counter. When a new permit is "created" the counter is increased, when a permit is "returned" the counter is decreased.
This makes for much better performance than creating actual objects all the time.
The tradeoff is that the Semaphore itself cannot detect certain kinds of programming errors (such as unauthorized permit cash-ins, or semaphore leaks). As the coder, you have to make sure to follow the rules on your own.
| Semaphore | 7,554,839 | 11 |
I need to do some process synchronization in C. I want to use a monitor, and I have read a lot about them. However I have been unable to find out how to implement one in C. I have seen them done in Java and other languages like C++, but I am unable to find examples in C.
I have looked through K&R and there is no example in there. I skimmed through Unix Systems Programming, Communication, Concurrency and Threads, but was unable to find a monitor implementation in there.
This brings me here. Where and how do I define a monitor? How do I implement it within the rest of the code?
/* I am coding in a *nix environment */
| I did this recently for a project, the concept I implemented was to have one thread start all of the others and then use semaphores and mutexes to control the inter process sync issues while dealing with shared memory.
The concept of a monitor, in the context of the monitor design pattern, is a construct that is basically there to hide mutual exclusion. This concept is expressed in C++ Boost but it doesn't exist in core C++ or C. The way you handle this type of job in C is with good old-fashioned mutexes (binary semaphores) and semaphores. You can read more about this here.
Below is a basic way to initialize a semaphore and mutex, you may need to do more research of your own, but here is a link to get you started.
#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

pthread_mutex_t myMutex;
sem_t mySemaphore;
int status;

status = pthread_mutex_init(&myMutex, NULL);
if (status != 0)
    exit_with_error("There was an Error Initializing the Mutex\n");

status = sem_init(&mySemaphore, 0, 0);
if (status != 0)
    printf("There was an Error Initializing the Semaphore\n");
| Semaphore | 3,827,598 | 11 |
The documentation for the .NET Semaphore class states that:
There is no guaranteed order, such as FIFO or LIFO, in which blocked threads enter the semaphore.
In this case, if I want a guaranteed order (either FIFO or LIFO), what are my options? Is this something that just isn't easily possible? Would I have to write my own Semaphore? I assume that would be pretty advanced.
| See this:
The FifoSemaphore works exactly like a normal Semaphore but also guarantees that tokens are served out to acquirers in the order that they manage to acquire the internal lock. The usage of a FifoSemaphore is identical to a Semaphore.
| Semaphore | 2,553,982 | 11 |
A fairly basic question, but I don't see it asked anywhere.
Let's say we have a global struct (in C) like so:
struct foo {
int written_frequently1;
int read_only;
int written_frequently2;
};
It seems clear to me that if we have lots of threads reading and writing, we need a semaphore (or other lock) on the written_frequently members, even for reading, since we can't be 100% sure that assignments to this struct will be atomic.
If we want lots of threads to read the read_only member, and none to write, to we need a semaphore on the struct access just for reading?
(I'm inclined to say no, because the fact that the locations immediately before and after are constantly changed shouldn't affect the read_only member, and multiple threads reading the value shouldn't interfere with each other. But I'm not sure.)
[Edit: I realize now I should have asked this question much better, in order to clarify very specifically what I meant. Naturally, I didn't really grok all of the issues involved when I first asked the question. Of course, if I comprehensively edit the question now, I will ruin all of these great answers. What I meant is more like:
struct bar {
char written_frequently1[LONGISH_LEN];
char read_only[LONGISH_LEN];
char written_frequently2[LONGISH_LEN];
};
The major issue I asked about is, since this data is part of a struct, is it at all influenced by the other struct members, and might it influence them in return?
The fact that the members were ints, and therefore writes are likely atomic, is really just a red herring in this case.]
| You need a mutex to guarantee that an operation is atomic. So in this particular case, you may not need a mutex at all. Specifically, if each thread writes to one element and the write is atomic and the new value is independent of the current value of any element (including itself), there is no problem.
Example: each of several threads updates a "last_updated_by" variable that simply records the last thread that updated it. Clearly, as long as the variable itself is updated atomically, no errors will occur.
However, you do need a mutex to guarantee consistency if a thread reads or writes more than one element at a time, particularly because you mention locking an element rather than the entire structure.
Example: a thread updates the "day", "month" and "year" elements of a structure. This must happen atomically, lest another thread read the structure after the "month" increments but before the "day" wraps to 1, to avoid dates such as February 31. Note that you must honor the mutex when reading; otherwise you may read an erroneous, half-updated value.
| Semaphore | 265,708 | 11 |
AFAIK, the mutex API was introduced to the kernel after LDD3 (Linux device drivers 3rd edition) was written so it's not described in the book.
The book describes how to use the kernel's semaphore API for mutex functionality.
It suggest to use down_interruptable() instead of down():
You do not, as a general rule, want to use noninterruptible operations unless there truly is no alternative. Non-interruptible operations are a good way to create unkillable processes (the dreaded "D state" seen in ps), and annoy your users. [Linux Device Drivers, 3rd ed.]
Now. here's my question:
The mutex API has two "similar" functions:
mutex_lock_killable() an mutex_lock_interruptable(). Which one should I choose?
| Use mutex_lock_interruptible() function to allow your driver to be interrupted by any signal.
This implies that your system call should be written so that it can be restarted.
(Also see ERESTARTSYS.)
Use mutex_lock_killable() to allow your driver to be interrupted only by signals that actually kill the process, i.e., when the process has no opportunity to look at the results of your system call, or even to try it again.
Use mutex_lock() when you can guarantee that the mutex will not be held for a long time.
| Semaphore | 18,298,962 | 10 |
I have multiple threads accessing an external resource, a browser, but only one thread can access it at a time, so I am using a semaphore to synchronise them. However, one thread, which takes input from the GUI and then accesses the browser for the results, should have priority over the other threads, and I am not sure how to use a semaphore to achieve that.
I was thinking that every thread after acquiring the semaphore checks if there is the priority thread waiting in the queue and if yes, then it releases it and waits again. Only the priority thread doesn't release it once it is acquired.
Is this a good solution or is there anything else in Java API I could use?
| There're no synchronization primitives in Java that would allow you to prioritise one thread over others in the manner you want.
But you could use another approach to solving your problem. Instead of synchronizing threads, make them produce small tasks (for instance, Runnable objects) and put those tasks into a PriorityBlockingQueue with tasks from the GUI thread having the highest priority. A single working thread will poll tasks from this queue and execute them. That would guarantee both mutual exclusion and prioritization.
There're special constructors in ThreadPoolExecutor that accept blocking queues. So, all you need is such an executor with a single thread provided with your PriorityBlockingQueue<Runnable>. Then submit your tasks to this executor and it will take care of the rest.
Should you decide to choose this approach, this post might be of interest to you: How to implement PriorityBlockingQueue with ThreadPoolExecutor and custom tasks
| Semaphore | 39,437,411 | 10 |
How should I structure the try/finally when using a SemaphoreSlim with a cancellation token so that OperationCanceledException is handled correctly? In Option A, cancelling the token source throws OperationCanceledException but does not call Release(). In Option B, cancelling the token source throws OperationCanceledException and DOES call Release().
// option A:
_semaphorSlim.Wait( _cancellationTokenSource.Token );
try
{
    // do work here
}
finally
{
    _semaphorSlim.Release();
}

// option B:
try
{
    _semaphorSlim.Wait( _cancellationTokenSource.Token );
    // do work here
}
finally
{
    _semaphorSlim.Release();
}
| Option A is more correct here. You do not need to Release the SemaphoreSlim when you cancel, as you never actually acquire and increment its count. As such, you don't want to release unless your Wait call actually succeeded.
From this MSDN Page on using Semaphore and SemaphoreSlim:
It is the programmer's responsibility to ensure that a thread does not release the semaphore too many times. For example, suppose a semaphore has a maximum count of two, and that thread A and thread B both enter the semaphore. If a programming error in thread B causes it to call Release twice, both calls succeed. The count on the semaphore is full, and when thread A eventually calls Release, a SemaphoreFullException is thrown.
| Semaphore | 6,233,774 | 10 |
When dealing with threads (specifically in C++) using mutex locks and semaphores is there a simple rule of thumb to avoid Dead Locks and have nice clean Synchronization?
| A good simple rule of thumb is to always obtain your locks in a consistent predictable order from everywhere in your application. For example, if your resources have names, always lock them in alphabetical order. If they have numeric ids, always lock from lowest to highest. The exact order or criteria is arbitrary. The key is to be consistent. That way you'll never have a deadlock situation. eg.
Thread 1 locks resource A
Thread 2 locks resource B
Thread 1 waits to obtain a lock on B
Thread 2 waits to obtain a lock on A
Deadlock
The above can never happen if you follow the rule of thumb outlined above. For a more detailed discussion, see the Wikipedia entry on the Dining Philosophers problem.
| Semaphore | 1,892,619 | 10 |
Why am I deadlocking?
- (void)foo
{
static dispatch_once_t onceToken;
dispatch_once(&onceToken, ^{
[self foo];
});
// whatever...
}
I expect foo to be executed twice on first call.
| Neither of the existing answers are quite accurate (one is dead wrong, the other is a bit misleading and misses some critical details). First, let's go right to the source:
void
dispatch_once_f(dispatch_once_t *val, void *ctxt, dispatch_function_t func)
{
struct _dispatch_once_waiter_s * volatile *vval =
(struct _dispatch_once_waiter_s**)val;
struct _dispatch_once_waiter_s dow = { NULL, 0 };
struct _dispatch_once_waiter_s *tail, *tmp;
_dispatch_thread_semaphore_t sema;
if (dispatch_atomic_cmpxchg(vval, NULL, &dow)) {
dispatch_atomic_acquire_barrier();
_dispatch_client_callout(ctxt, func);
dispatch_atomic_maximally_synchronizing_barrier();
//dispatch_atomic_release_barrier(); // assumed contained in above
tmp = dispatch_atomic_xchg(vval, DISPATCH_ONCE_DONE);
tail = &dow;
while (tail != tmp) {
while (!tmp->dow_next) {
_dispatch_hardware_pause();
}
sema = tmp->dow_sema;
tmp = (struct _dispatch_once_waiter_s*)tmp->dow_next;
_dispatch_thread_semaphore_signal(sema);
}
} else {
dow.dow_sema = _dispatch_get_thread_semaphore();
for (;;) {
tmp = *vval;
if (tmp == DISPATCH_ONCE_DONE) {
break;
}
dispatch_atomic_store_barrier();
if (dispatch_atomic_cmpxchg(vval, tmp, &dow)) {
dow.dow_next = tmp;
_dispatch_thread_semaphore_wait(dow.dow_sema);
}
}
_dispatch_put_thread_semaphore(dow.dow_sema);
}
}
So what really happens is, contrary to the other answers, the onceToken is changed from its initial state of NULL to point to an address on the stack of the first caller &dow (call this caller 1). This happens before the block is called. If more callers arrive before the block is completed, they get added to a linked list of waiters, the head of which is contained in onceToken until the block completes (call them callers 2..N). After being added to this list, callers 2..N wait on a semaphore for caller 1 to complete execution of the block, at which point caller 1 will walk the linked list signaling the semaphore once for each caller 2..N. At the beginning of that walk, onceToken is changed again to be DISPATCH_ONCE_DONE (which is conveniently defined to be a value that could never be a valid pointer, and therefore could never be the head of a linked list of blocked callers.) Changing it to DISPATCH_ONCE_DONE is what makes it cheap for subsequent callers (for the rest of the lifetime of the process) to check the completed state.
So in your case, what's happening is this:
The first time you call -foo, onceToken is nil (which is guaranteed by virtue of statics being guaranteed to be initialized to 0), and gets atomically changed to become the head of the linked list of waiters.
When you call -foo recursively from inside the block, your thread is considered to be "a second caller" and a waiter structure, which exists in this new, lower stack frame, is added to the list and then you go to wait on the semaphore.
The problem here is that this semaphore will never be signaled because in order for it to be signaled, your block would have to finish executing (in the higher stack frame), which now can't happen due to a deadlock.
So, in short, yes, you're deadlocked, and the practical takeaway here is, "don't try to call recursively into a dispatch_once block." But the problem is most definitely NOT "infinite recursion", and the flag is most definitely not only changed after the block completes execution -- changing it before the block executes is exactly how it knows to make callers 2..N wait for caller 1 to finish.
| Semaphore | 19,176,219 | 10 |
I have threads which are each given a random number (1 to n) and are instructed to print the numbers in sorted order. I used a semaphore such that each thread acquires a number of permits equal to its random number and releases one more permit than it acquired:
acquired = random number; released = 1 + random number
The initial permit count for the semaphore is 1, so the thread with random number 1 should get its permits first, then 2, and so on.
This is supported as per the documentation given below
There is no requirement that a thread that releases a permit must have acquired that permit by calling acquire().
The problem is my program gets stuck after 1 for n>2.
My program is given below:
import java.util.concurrent.Semaphore;
public class MultiThreading {
public static void main(String[] args) {
Semaphore sem = new Semaphore(1,false);
for(int i=5;i>=1;i--)
new MyThread(i, sem);
}
}
class MyThread implements Runnable {
int var;Semaphore sem;
public MyThread(int a, Semaphore s) {
var =a;sem=s;
new Thread(this).start();
}
@Override
public void run() {
System.out.println("Acquiring lock -- "+var);
try {
sem.acquire(var);
} catch (InterruptedException e) {
e.printStackTrace();
}
System.out.println(var);
System.out.println("Releasing lock -- "+var);
sem.release(var+1);
}
}
Output is :
Acquiring lock -- 4
Acquiring lock -- 5
Acquiring lock -- 3
Acquiring lock -- 2
Acquiring lock -- 1
1
Releasing lock -- 1
However, if I modify my code to use tryAcquire, it runs perfectly well.
Below is new run implementation
@Override
public void run() {
boolean acquired = false;
while(!acquired) {
acquired = sem.tryAcquire(var);
}
System.out.println(var);
sem.release(var+1);
}
Can someone please explain the semaphore's permit-acquisition mechanism when multiple threads are waiting with different permit requests?
| It's a clever strategy, but you're misunderstanding how Semaphore hands out permits. If you run your code enough times you'll actually see it reach step two:
Acquiring lock -- 5
Acquiring lock -- 1
1
Releasing lock -- 1
Acquiring lock -- 3
Acquiring lock -- 2
2
Acquiring lock -- 4
Releasing lock -- 2
If you keep on re-running it enough times you'd actually see it successfully finish. This happens because of how Semaphore hands out permits. You're assuming Semaphore will try to accommodate an acquire() call as soon as it has enough permits to do so. If we look carefully at the documentation for Semaphore.acquire(int) we'll see that is not the case (emphasis mine):
If insufficient permits are available then the current thread becomes disabled for thread scheduling purposes and lies dormant until ... some other thread invokes one of the release methods for this semaphore, the current thread is next to be assigned permits and the number of available permits satisfies this request.
In other words Semaphore keeps a queue of pending acquire request and, upon each call to .release(), only checks the head of the queue. In particular if you enable fair queuing (set the second constructor argument to true) you'll see even step one doesn't occur, because step 5 is (usually) the first in the queue and even new acquire() calls that could be fulfilled will be queued up behind the other pending calls.
In short this means you cannot rely on .acquire() to return as soon as possible, as your code assumes.
By using .tryAcquire() in a loop instead you avoid making any blocking calls (and therefore put a lot more load on your Semaphore) and as soon as the necessary number of permits becomes available a tryAcquire() call will successfully obtain them. This works but is wasteful.
Picture a wait-list at a restaurant. Using .acquire() is like putting your name on the list and waiting to be called. It may not be perfectly efficient, but they'll get to you in a (reasonably) fair amount of time. Imagine instead if everyone just shouted at the host "Do you have a table for n yet?" as often as they could - that's your tryAcquire() loop. It may still work out (as it does in your example) but it's certainly not the right way to go about it.
So what should you do instead? There's a number of possibly useful tools in java.util.concurrent, and which is best somewhat depends on what exactly you're trying to do. Seeing as you're effectively having each thread start the next one I might use a BlockingQueue as the synchronization aid, pushing the next step into the queue each time. Each thread would then poll the queue, and if it's not the activated thread's turn replace the value and wait again.
Here's an example:
public class MultiThreading {
    public static void main(String[] args) throws Exception {
        // Use fair queuing to prevent an out-of-order task
        // from jumping to the head of the line again
        // try setting this to false - you'll see far more re-queuing calls
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(1, true);
        for (int i = 5; i >= 1; i--) {
            Thread.sleep(100); // not necessary, just helps demonstrate the queuing behavior
            new MyThread(i, queue).start();
        }
        queue.add(1); // work starts now
    }

    static class MyThread extends Thread {
        int var;
        BlockingQueue<Integer> queue;

        public MyThread(int var, BlockingQueue<Integer> queue) {
            this.var = var;
            this.queue = queue;
        }

        @Override
        public void run() {
            System.out.println("Task " + var + " is now pending...");
            try {
                while (true) {
                    int task = queue.take();
                    if (task != var) {
                        System.out.println(
                            "Task " + var + " got task " + task + " instead - re-queuing");
                        queue.add(task);
                    } else {
                        break;
                    }
                }
            } catch (InterruptedException e) {
                // If a thread is interrupted, re-mark the thread interrupted and terminate
                Thread.currentThread().interrupt();
                return;
            }
            System.out.println("Finished task " + var);
            System.out.println("Registering task " + (var + 1) + " to run next");
            queue.add(var + 1);
        }
    }
}
This prints the following and terminates successfully:
Task 5 is now pending...
Task 4 is now pending...
Task 3 is now pending...
Task 2 is now pending...
Task 1 is now pending...
Task 5 got task 1 instead - re-queuing
Task 4 got task 1 instead - re-queuing
Task 3 got task 1 instead - re-queuing
Task 2 got task 1 instead - re-queuing
Finished task 1
Registering task 2 to run next
Task 5 got task 2 instead - re-queuing
Task 4 got task 2 instead - re-queuing
Task 3 got task 2 instead - re-queuing
Finished task 2
Registering task 3 to run next
Task 5 got task 3 instead - re-queuing
Task 4 got task 3 instead - re-queuing
Finished task 3
Registering task 4 to run next
Task 5 got task 4 instead - re-queuing
Finished task 4
Registering task 5 to run next
Finished task 5
Registering task 6 to run next
| Semaphore | 36,992,758 | 10 |
I am currently optimizing an existing, very slow and timing out production application. There is no option to re-write it.
In short, it is a WCF service that currently calls 4 other "worker" WCF services sequentially. None of the worker services are dependent on results from the other. So we would like it to call them all at once (not sequentially). I will reiterate that we don't have the luxury of re-writing it.
The optimization involves making it call all worker services at once. This is where asynchrony came to mind.
I have limited experience with asynchronous programming, but I have read as widely as I can on the topic, with respect to my solution.
The problem is, on testing, it works but maxes out my CPU. I would appreciate your help.
The following is a simplified version of the essential code in main WCF Service
// The service operation belonging to main WCF Service
public void ProcessAllPendingWork()
{
var workerTasks = new List<Task<bool>>();
foreach(var workerService in _workerServices)
{
//DoWorkAsync is the worker method with the following signature:
// Task<bool> DoWorkAsync()
var workerTask = workerService.DoWorkAsync();
workerTasks.Add(workerTask);
}
var task = Task.Run(async ()=>
{
await RunWorkerTasks(workerTasks);
});
task.Wait();
}
private async Task RunWorkerTasks(IEnumerable<Task<bool>> workerTasks)
{
using(var semaphore = new SemaphoreSlim(initialCount:30))
{
foreach (var workerTask in workerTasks)
{
await semaphore.WaitAsync();
try
{
await workerTask;
}
catch (System.Exception ex)
{
//assume 'Log' is a predefined logging service
Log.Error(ex);
}
}
}
}
What I have read:
Multiple ways how to limit parallel tasks processing
How to limit the amount of concurrent async I/O operations?
Approaches for throttling asynchronous methods in C#
Constraining Concurrent Threads in C#
Limiting Number of Concurrent Threads With SemaphoresSlim
Async WCF call with ChannelFactory and CreateChannel
| You didn't explain how you wanted to limit the concurrent calls. Do you want 30 concurrent worker tasks running, or do you want 30 WCF calls, each of which have all their worker tasks running concurrently, or do you want concurrent WCF calls to each have their own limit of concurrent worker tasks? Given you said that each WCF call has only 4 worker tasks and looking at your sample code, I assume you want a global limit of 30 concurrent worker tasks.
Firstly, as @mjwills implied, you need to use the SemaphoreSlim to limit calls to workerService.DoWorkAsync(). Your code currently starts all of them, and only tried to throttle how many you'll wait to finish. I assume this is why you max out CPU. The number of worker tasks started remains unbounded. Note however you'll also need to await the worker task while you hold the semaphore, otherwise you'll only throttle how fast you create tasks, not how many run concurrently.
Secondly, you're creating a new SemaphoreSlim for each WCF request. Hence my question from my first paragraph. The only way this will throttle anything is if you have more worker services than the initial count, which in your sample is 30, but you said there are only 4 workers. To have a "global" limit, you need to use a singleton SemaphoreSlim.
Thirdly, you never call .Release() on the SemaphoreSlim, so if you did make it a singleton, your code will hang once 30 workers have been started over the lifetime of the process. Make sure to do it in a try-finally block, so that if the worker crashes, it still gets released.
Here's some hastily written sample code:
public async Task ProcessAllPendingWork()
{
    var workerTasks = new List<Task<bool>>();
    foreach (var workerService in _workerServices)
    {
        var workerTask = RunWorker(workerService);
        workerTasks.Add(workerTask);
    }

    await Task.WhenAll(workerTasks);
}

// IWorkerService stands in for whatever type exposes DoWorkAsync()
private async Task<bool> RunWorker(IWorkerService workerService)
{
    // use singleton semaphore.
    await _semaphore.WaitAsync();
    try
    {
        return await workerService.DoWorkAsync();
    }
    catch (System.Exception ex)
    {
        //assume 'Log' is a predefined logging service
        Log.Error(ex);
        return false; // ??
    }
    finally
    {
        _semaphore.Release();
    }
}
| Semaphore | 57,572,902 | 10 |
class myclass
{
private Semaphore _sync;
myclass ()
{
_sync = new Semaphore(1,1);
}
void doasync()
{
_sync.WaitOne();
//do some stuff
_sync.Release();
}
}
void somefunctionsomewhere()
{
var myobject = new myclass();
//spawn 100 threads that all call myobject.doasync()
}
Is the above safe + clean? Will both _sync get disposed and myclass be collected when all 100 threads have finished doing their async work?
If not, what's the best way to know when to call _sync.Dispose() ?
| Short answer: No, you don't need to call Dispose().
Long answer:
Though it is recommended to call Dispose() when you are finished using the Semaphore, the garbage collector will take care of the semaphore's resources sooner or later.
An explicit call to Dispose() will guarantee an early release of the associated resource, but it's only required if you are creating and forgetting a lot of Semaphore instances in a short amount of time.
And if you call Dispose(), it is perfectly fine to use a simple Dispose() call. You don't need to create a finalizer ~myclass() because that would just duplicate what's already done by the Semaphore implementation.
| Semaphore | 13,452,978 | 10 |
Does using a lock have better performance than using a local (single application) semaphore?
I read this blog from msdn : Producer consumer solution on msdn
and I didn't like their solution to the problem because there are always 20 elements left in the queue.
So instead, I thought about using a 'Semaphore' that will be available only in my app (I just won't name it in the constructor), but I don't know how it will affect the app's performance.
Does anyone have an idea if it'll affect the performance? What are the other considerations to use a lock and not 'Semaphore'?
| Lock(obj) is essentially Monitor.Enter(obj)/Monitor.Exit(obj) in a try/finally. A lock is basically a unary semaphore. If you have a number of instances of the same resource (N) you use a semaphore with the initialization value N. A lock is mainly used to ensure that a code section is not executed by two threads at the same time.
So a lock can be implemented using a semaphore with an initialization value of 1. I guess that Monitor.Enter is more performant here, but I have no real information about that; a test would help. Here is a SO thread that deals with performance.
For your problem a blocking queue (producer-consumer) would be the solution. I suggest this very good SO thread.
Here is another good source of information about Reusable Parallel Data Structures.
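To illustrate the blocking-queue route, here is a minimal sketch, shown in Java with ArrayBlockingQueue for brevity (in .NET the direct analogue is BlockingCollection<T>; the names and sizes here are arbitrary). The bounded queue does the empty/full semaphore bookkeeping for you, blocking the producer when the queue is full and the consumer when it is empty — so no items are ever left stranded.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class Main {
    public static void main(String[] args) throws Exception {
        // Bounded queue: producers block when it is full, consumers block
        // when it is empty - exactly the semaphore bookkeeping, done for you.
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(2);

        Thread producer = new Thread(() -> {
            for (int i = 1; i <= 5; i++) {
                try { queue.put(i); } catch (InterruptedException e) { return; }
            }
        });

        StringBuilder seen = new StringBuilder();
        Thread consumer = new Thread(() -> {
            for (int i = 0; i < 5; i++) {
                try { seen.append(queue.take()); } catch (InterruptedException e) { return; }
            }
        });

        producer.start(); consumer.start();
        producer.join(); consumer.join();

        if (!seen.toString().equals("12345")) throw new AssertionError(seen);
        System.out.println("consumed: " + seen);
    }
}
```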
| Semaphore | 3,489,382 | 10 |
I am trying to limit the number of simultaneous async functions running using a semaphore, but I cannot get it to work. My code boils down to this:
import asyncio

async def send(i):
    print(f"starting {i}")
    await asyncio.sleep(4)
    print(f"ending {i}")

async def helper():
    async with asyncio.Semaphore(value=5):
        await asyncio.gather(*[
            send(1),
            send(2),
            send(3),
            send(4),
            send(5),
            send(6),
            send(7),
            send(8),
            send(9),
            send(10),
        ])

if __name__ == "__main__":
    loop = asyncio.get_event_loop()
    loop.run_until_complete(helper())
    loop.close()
The output is:
starting 1
starting 2
starting 3
starting 4
starting 5
starting 6
starting 7
starting 8
starting 9
starting 10
ending 1
ending 2
ending 3
ending 4
ending 5
ending 6
ending 7
ending 8
ending 9
ending 10
I hope and expect that only 5 will run at time, however all 10 start and stop at the same time. What am I doing wrong?
| Please find the working example below, feel free to ask questions:
import asyncio

async def send(i: int, semaphore: asyncio.Semaphore):
    # to demonstrate that all tasks start nearly together
    print(f"Hello: {i}")
    # only two tasks can run code inside the block below simultaneously
    async with semaphore:
        print(f"starting {i}")
        await asyncio.sleep(4)
        print(f"ending {i}")

async def async_main():
    s = asyncio.Semaphore(value=2)
    await asyncio.gather(*[send(i, semaphore=s) for i in range(1, 11)])

if __name__ == "__main__":
    loop = asyncio.get_event_loop()
    loop.run_until_complete(async_main())
    loop.close()
VERSION FROM 18.08.2023:
I see that many people are interested in how to use asyncio.Semaphore and I decided to extend my answer.
The new version illustrates how to use procuder-consumers pattern with asyncio.Semaphore. If you want something very simple, you are fine to use code from the original answer above. If you want more robust solution, which allows to limit number of asyncio.Tasks to work with, you can use this more robust solution.
import asyncio
from typing import List

CONSUMERS_NUMBER = 10  # number of workers/consumers
TASKS_NUMBER = 20  # number of tasks to do

async def producer(tasks_to_do: List[str], q: asyncio.Queue) -> None:
    print("Producer started working!")
    for task in tasks_to_do:
        await q.put(task)  # put tasks to Queue

    # poison pill technique
    for _ in range(CONSUMERS_NUMBER):
        await q.put(None)  # put poison pill to all workers/consumers

    print("Producer finished working!")

async def consumer(
    consumer_name: str,
    q: asyncio.Queue,
    semaphore: asyncio.Semaphore,
) -> None:
    print(f"{consumer_name} started working!")
    while True:
        task = await q.get()
        if task is None:  # stop if poison pill was received
            break

        print(f"{consumer_name} took {task} from queue!")
        # number of tasks which could be processed simultaneously
        # is limited by semaphore
        async with semaphore:
            print(f"{consumer_name} started working with {task}!")
            await asyncio.sleep(4)
            print(f"{consumer_name} finished working with {task}!")

    print(f"{consumer_name} finished working!")

async def async_main() -> None:
    """Main entrypoint of async app."""
    tasks = [f"TheTask#{i + 1}" for i in range(TASKS_NUMBER)]
    q = asyncio.Queue(maxsize=2)
    s = asyncio.Semaphore(value=2)
    consumers = [
        consumer(
            consumer_name=f"Consumer#{i + 1}",
            q=q,
            semaphore=s,
        ) for i in range(CONSUMERS_NUMBER)
    ]

    await asyncio.gather(producer(tasks_to_do=tasks, q=q), *consumers)

if __name__ == "__main__":
    asyncio.run(async_main())
| Semaphore | 66,724,841 | 10 |
I have a fairly complex WPF application that (much like VS2013) has IDocuments and ITools docked within the main shell of the application. One of these Tools needs to be shutdown safely when the main Window is closed to avoid getting into a "bad" state. So I use Caliburn Micro's public override void CanClose(Action<bool> callback) method to perform some database updates etc. The problem I have is all of the update code in this method uses MongoDB Driver 2.0 and this stuff is async. Some code; currently I am attempting to perform
public override void CanClose(Action<bool> callback)
{
if (BackTestCollection.Any(bt => bt.TestStatus == TestStatus.Running))
{
using (ManualResetEventSlim tareDownCompleted = new ManualResetEventSlim(false))
{
// Update running test.
Task.Run(async () =>
{
StatusMessage = "Stopping running backtest...";
await SaveBackTestEventsAsync(SelectedBackTest);
Log.Trace(String.Format(
"Shutdown requested: saved backtest \"{0}\" with events",
SelectedBackTest.Name));
this.source = new CancellationTokenSource();
this.token = this.source.Token;
var filter = Builders<BsonDocument>.Filter.Eq(
BackTestFields.ID, DocIdSerializer.Write(SelectedBackTest.Id));
var update = Builders<BsonDocument>.Update.Set(BackTestFields.STATUS, TestStatus.Cancelled);
IMongoDatabase database = client.GetDatabase(Constants.DatabaseMappings[Database.Backtests]);
await MongoDataService.UpdateAsync<BsonDocument>(
database, Constants.Backtests, filter, update, token);
Log.Trace(String.Format(
"Shutdown requested: updated backtest \"{0}\" status to \"Cancelled\"",
SelectedBackTest.Name));
}).ContinueWith(ant =>
{
StatusMessage = "Disposing backtest engine...";
if (engine != null)
engine.Dispose();
Log.Trace("Shutdown requested: disposed backtest engine successfully");
callback(true);
tareDownCompleted.Set();
});
tareDownCompleted.Wait();
}
}
}
Now, to start with I did not have the ManualResetEventSlim and this would obviously return to the CanClose caller before I updated my database on the background [thread-pool] thread. In an attempt to prevent the return until I have finished my updates I tried to block the return, but this freezes the UI thread and prevents anything from happening.
How can I get my clean-up code to run without returning to the caller too early?
Thank for your time.
Note, I cannot override the OnClose method using async signature as the calling code would not await it (I have no control over this).
| I don't think you have much choice than to block the return. However your updates should still run despite the UI thread being locked. I wouldn't use a ManualResetEventSlim, but just a simple wait() and a single task without a continuation. The reason for that is by default Task.Run prevents the child task (your continuation) from being attached to the parent and so your continuation may not have time to complete before the window closes, see this post.
public override void CanClose(Action<bool> callback)
{
if (BackTestCollection.Any(bt => bt.TestStatus == TestStatus.Running))
{
// Update running test.
var cleanupTask = Task.Run(async () =>
{
StatusMessage = "Stopping running backtest...";
await SaveBackTestEventsAsync(SelectedBackTest);
// other cleanup tasks
// No continuation
StatusMessage = "Disposing backtest engine...";
if (engine != null)
engine.Dispose();
Log.Trace("Shutdown requested: disposed backtest engine successfully");
callback(true);
});
cleanupTask.Wait();
}
}
You can also use TaskFactory.StartNew with TaskCreationOptions.AttachedToParent if you really need to use a continuation.
| Semaphore | 32,167,520 | 10 |
Let's say I create a semaphore. If I fork a bunch of child processes, will they all still use that same semaphore?
Also, suppose I create a struct with semaphores inside and fork. Do all the child processes still use the same semaphores? If not, would storing that struct and its semaphores in shared memory allow the child processes to use the same semaphores?
I'm really confused about how my forked child processes can use the same semaphores.
|
Let's say I create a semaphore. If I fork a bunch of child processes, will they all still use that same semaphore?
If you are using a SysV IPC semaphore (semctl), then yes. If you are using POSIX semaphores (sem_init), then yes, but only if you pass a true value for the pshared argument on creation and place it in shared memory.
Also, suppose I create a struct with semaphores inside and forked. Do all the child processes still use that same semaphore? If not, would storing that struct+semaphores in shared memory allow the child processes to use the same semaphores?
What do you mean by 'semaphores inside'? References to SysV IPC semaphores will be shared, because the semaphores don't belong to any process. If you're using POSIX semaphores, or constructing something out of pthreads mutexes and condvars, you will need to use shared memory, and the pshared attribute (pthreads has a pshared attribute for condvars and mutexes as well).
Note that anonymous mmaps created with the MAP_SHARED flag count as (anonymous) shared memory for these purposes, so it's not necessary to actually create a named shared memory segment. Ordinary heap memory will not be shared after a fork.
| Semaphore | 6,847,973 | 10 |
This past semester I was taking an OS practicum in C, in which the first project involved making a threads package, then writing a multiple producer-consumer program to demonstrate the functionality. However, after getting grading feedback, I lost points for "The usage of semaphores is subtly wrong" and "The program assumes preemption (e.g. uses yield to change control)" (We started with a non-preemptive threads package then added preemption later. Note that the comment and example contradict each other. I believe it doesn't assume either, and would work in both environments).
This has been bugging me for a long time - the course staff was kind of overwhelmed, so I couldn't ask them what's wrong with this over the semester. I've spent a long time thinking about this and I can't see the issues. If anyone could take a look and point out the error, or reassure me that there actually isn't a problem, I'd really appreciate it.
I believe the syntax should be pretty standard in terms of the thread package functions (minithreads and semaphores), but let me know if anything is confusing.
#include <stdio.h>
#include <stdlib.h>
#include "minithread.h"
#include "synch.h"
#define BUFFER_SIZE 16
#define MAXCOUNT 100
int buffer[BUFFER_SIZE];
int size, head, tail;
int count = 1;
int out = 0;
int toadd = 0;
int toremove = 0;
semaphore_t empty;
semaphore_t full;
semaphore_t count_lock; // Semaphore to keep a lock on the
// global variables for maintaining the counts
/* Method to handle the working of a student
* The ID of a student is the corresponding minithread_id */
int student(int total_burgers) {
int n, i;
semaphore_P(count_lock);
while ((out+toremove) < total_burgers) {
n = genintrand(BUFFER_SIZE);
n = (n <= total_burgers - (out + toremove)) ? n : total_burgers - (out + toremove);
printf("Student %d wants to get %d burgers ...\n", minithread_id(), n);
toremove += n;
semaphore_V(count_lock);
for (i=0; i<n; i++) {
semaphore_P(empty);
out = buffer[tail];
printf("Student %d is taking burger %d.\n", minithread_id(), out);
tail = (tail + 1) % BUFFER_SIZE;
size--;
toremove--;
semaphore_V(full);
}
semaphore_P(count_lock);
}
semaphore_V(count_lock);
printf("Student %d is done.\n", minithread_id());
return 0;
}
/* Method to handle the working of a cook
* The ID of a cook is the corresponding minithread_id */
int cook(int total_burgers) {
int n, i;
printf("Creating Cook %d\n",minithread_id());
semaphore_P(count_lock);
while ((count+toadd) <= total_burgers) {
n = genintrand(BUFFER_SIZE);
n = (n <= total_burgers - (count + toadd) + 1) ? n : total_burgers - (count + toadd) + 1;
printf("Cook %d wants to put %d burgers into the burger stack ...\n", minithread_id(),n);
toadd += n;
semaphore_V(count_lock);
for (i=0; i<n; i++) {
semaphore_P(full);
printf("Cook %d is putting burger %d into the burger stack.\n", minithread_id(), count);
buffer[head] = count++;
head = (head + 1) % BUFFER_SIZE;
size++;
toadd--;
semaphore_V(empty);
}
semaphore_P(count_lock);
}
semaphore_V(count_lock);
printf("Cook %d is done.\n", minithread_id());
return 0;
}
/* Method to create our multiple producers and consumers
* and start their respective threads by fork */
void starter(int* c){
int i;
for (i=0;i<c[2];i++){
minithread_fork(cook, c[0]);
}
for (i=0;i<c[1];i++){
minithread_fork(student, c[0]);
}
}
/* The arguments are passed as command line parameters
* argv[1] is the no of students
* argv[2] is the no of cooks */
void main(int argc, char *argv[]) {
int pass_args[3];
pass_args[0] = MAXCOUNT;
pass_args[1] = atoi(argv[1]);
pass_args[2] = atoi(argv[2]);
size = head = tail = 0;
empty = semaphore_create();
semaphore_initialize(empty, 0);
full = semaphore_create();
semaphore_initialize(full, BUFFER_SIZE);
count_lock = semaphore_create();
semaphore_initialize(count_lock, 1);
minithread_system_initialize(starter, pass_args);
}
| Your semaphores do nothing to protect buffer, head, etc in the innermost loop. One thread acquires sempahore "empty", the other acquires semaphore "full" while no other semaphores are held. This seems to guarantee eventual corruption.
| Semaphore | 4,514,778 | 10 |
I've created a service, and the module for it looks like this:
launchdarkly.module.ts
@Module({
providers: [LaunchdarklyService],
exports: [LaunchdarklyService],
imports: [ConfigService],
})
export class LaunchdarklyModule {}
(this service/module is to let the application use LaunchDarkly feature-flagging)
I'm happy to show the service-implementation if you'd like, but to keep this question shorter I skipped it. The important point is that this service imports the ConfigService (which it uses to grab the LaunchDarkly SDK key).
But how can I test the Launchdarkly service? It reads a key from ConfigService so I want to write tests where ConfigService has various values, but after hours of trying I can't figure out how to configure ConfigService in a test.
Here's the test:
launchdarkly.service.spec.ts
describe('LaunchdarklyService', () => {
  let service: LaunchdarklyService;

  beforeEach(async () => {
    const module: TestingModule = await Test.createTestingModule({
      providers: [LaunchdarklyService],
      imports: [ConfigModule],
    }).compile();

    service = module.get<LaunchdarklyService>(LaunchdarklyService);
  });

  it("should not create a client if there's no key", async () => {
    // somehow I need ConfigService to have key FOO=undefined for this test
    expect(service.client).toBeUndefined();
  });

  it("should create a client if an SDK key is specified", async () => {
    // For this test ConfigService needs to specify FOO=123
    expect(service.client).toBeDefined();
  });
})
I'm open for any non-hacky suggestions, I just want to feature-flag my application!
| Assuming the LaunchdarklyService needs the ConfigService and that is injected into the constructor, you can provide a mock variation of the ConfigService by using a Custom Provider to give back the custom credentials you need. For example, a mock for your test could look like
describe('LaunchdarklyService', () => {
let service: LaunchdarklyService;
let config: ConfigService;
beforeEach(async () => {
const module: TestingModule = await Test.createTestingModule({
providers: [LaunchdarklyService, {
provide: ConfigService,
useValue: {
get: jest.fn((key: string) => {
// this is being super extra, in the case that you need multiple keys with the `get` method
if (key === 'FOO') {
return 123;
}
return null;
})
}
],
}).compile();
service = module.get<LaunchdarklyService>(LaunchdarklyService);
config = module.get<ConfigService>(ConfigService);
});
it("should not create a client if there's no key", async () => {
// somehow I need ConfigService to have key FOO=undefined for this test
// we can use jest spies to change the return value of a method
jest.spyOn(config, 'get').mockReturnValueOnce(undefined);
expect(service.client).toBeUndefined();
});
it("should create a client if an SDK key is specified", async () => {
// For this test ConfigService needs to specify FOO=123
// the pre-configured mock takes care of this case
expect(service.client).toBeDefined();
});
})
| LaunchDarkly | 65,636,980 | 23 |
gitlab-ci-multi-runner register
gave me
couldn't execute POST against https://xxxx/ci/api/v1/runners/register.json:
Post https://xxxx/ci/api/v1/runners/register.json:
x509: cannot validate certificate for xxxx because it doesn't contain any IP SANs
Is there a way to disable certification validation?
I'm using Gitlab 8.13.1 and gitlab-ci-multi-runner 1.11.2.
| Based on Wassim's answer, and gitlab documentation about tls-self-signed and custom CA-signed certificates, here's to save some time if you're not the admin of the gitlab server but just of the server with the runners (and if the runner is run as root):
SERVER=gitlab.example.com
PORT=443
CERTIFICATE=/etc/gitlab-runner/certs/${SERVER}.crt
# Create the certificates hierarchy expected by gitlab
sudo mkdir -p $(dirname "$CERTIFICATE")
# Get the certificate in PEM format and store it
openssl s_client -connect ${SERVER}:${PORT} -showcerts </dev/null 2>/dev/null | sed -e '/-----BEGIN/,/-----END/!d' | sudo tee "$CERTIFICATE" >/dev/null
# Register your runner
gitlab-runner register --tls-ca-file="$CERTIFICATE" [your other options]
Update 1: CERTIFICATE must be an absolute path to the certificate file.
Update 2: it might still fail with custom CA-signed because of gitlab-runner bug #2675
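The `sed` filter above is easy to sanity-check without touching a real server. The sketch below (assumes `openssl` is installed; the CN is a made-up example) fakes the `s_client` chatter around a throwaway self-signed certificate and confirms that only the PEM block survives:

```shell
# Generate a throwaway self-signed cert, wrap it in s_client-style output,
# and run it through the same sed filter used for the runner registration.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=gitlab.example.com" \
  -keyout "$tmp/key.pem" -out "$tmp/cert.pem" 2>/dev/null
{ echo "CONNECTED(00000003)"; cat "$tmp/cert.pem"; echo "DONE"; } \
  | sed -e '/-----BEGIN/,/-----END/!d' > "$tmp/extracted.pem"
# The extracted file should be a valid certificate on its own:
openssl x509 -in "$tmp/extracted.pem" -noout -subject
```

If the subject prints cleanly, the same pipeline against the real `openssl s_client` output will give you a certificate file the runner can trust.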
| GitLab | 44,458,410 | 45 |
I've created project and repo on my gitlab.com account, generated private key, now I'm trying to do API call to get list of commits.
Now I want to get list of projects via API, from documentation
https://docs.gitlab.com/ce/api/projects.html#list-projects
GET /projects
So I'm doing:
curl --header "PRIVATE-TOKEN: XXXXXXX -c" "https://gitlab.com/projects"
And getting 404.
I've tried several combinations and can't find correct base URL.
Same for repository commits, documentation https://docs.gitlab.com/ce/api/commits.html says
https://gitlab.example.com/api/v3/projects/5/repository/commits
fine, I'm trying (with myusername/projectname as project id)
https://gitlab.com/api/v3/projects/myusername/projectname/repository/commits
And got 404 as well
| The correct base url for the hosted GitLab is https://gitlab.com/api/v4/ so your request to
GET /projects would be
curl --header "PRIVATE-TOKEN: XXXXXX" "https://gitlab.com/api/v4/projects"
That would return all projects that are visible to you, including other user's public projects.
If you wish to view just your projects, then you should use the GET /users/:user_id/projects endpoint, where :user_id is your user ID that can be found on your GitLab profile page or in the response to your request to GET /user if you're authenticated.
# Get :user_id from this request
curl --header "PRIVATE-TOKEN: XXXXXX" "https://gitlab.com/api/v4/user"
# See your projects by replacing :user_id with id value from previous request
curl --header "PRIVATE-TOKEN: XXXXXX" "https://gitlab.com/api/v4/users/:user_id/projects"
Also, the project ID is not the same as the project name. You can retrieve the project ID from the response of your request to GET /users/:user_id/projects, or from the project's settings page.
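The response to `GET /users/:user_id/projects` is a JSON array, so pulling out each project's ID is a one-liner once you have the body saved. A sketch using fabricated sample data (python3 assumed available, to avoid a jq dependency):

```shell
# Save a sample of what the endpoint returns (values here are made up),
# then list each project's id next to its path.
cat > /tmp/projects.json <<'EOF'
[{"id": 278964, "path_with_namespace": "gitlab-org/gitlab"},
 {"id": 13083,  "path_with_namespace": "gitlab-org/gitlab-foss"}]
EOF
python3 - <<'EOF'
import json
with open("/tmp/projects.json") as f:
    for p in json.load(f):
        print(p["id"], p["path_with_namespace"])
EOF
```

In practice you would pipe the `curl` output straight into the parser instead of saving it first.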
| GitLab | 39,751,840 | 45 |
I have a problem with my releases in GitLab.
I created them in my project with tags. Now I want to remove them, so I deleted the associated tags, but my releases are still displayed. I searched on Google and Stack Overflow but couldn't find any solution.
How can I remove these releases without their tags?
|
Go to Project Overview -> Releases
Click the release you want to delete
Scroll to the bottom. Find the tag icon. Click on the tag.
There is a trash can button for the tag. Deleting the tag will delete the release as well.
| GitLab | 54,418,978 | 44 |
I'm trying to delete a branch both locally and in a remote GitLab repository. Its name is origin/feat. I tried git push --delete origin feat. Git complains:
remote: error: By default, deleting the current branch is denied, because the next
remote: 'git clone' won't result in any file checked out, causing confusion.
remote:
remote: You can set 'receive.denyDeleteCurrent' configuration variable to
remote: 'warn' or 'ignore' in the remote repository to allow deleting the
remote: current branch, with or without a warning message.
remote:
remote: To squelch this message, you can set it to 'refuse'.
remote: error: refusing to delete the current branch: refs/heads/feat
OK makes sense, so I tried switching to origin/master with git checkout master and it tells me: Already on 'master'. Does the current branch also need to be set in the remote directory? How would I do that?
| Try
git push origin --delete <branch-name>
| GitLab | 44,657,989 | 44 |
I'm not able run the gitlab pipeline due to this error
Invalid CI config YAML file
jobs:run tests:artifacts:reports config contains unknown keys: cobertura
| Check the latest correct doc here: https://docs.gitlab.com/ee/ci/yaml/artifacts_reports.html#artifactsreportscoverage_report
Some of the docs are in somewhat of a messy state right now, due to the new release as mentioned.
This was the fix for me:
artifacts:
expire_in: 2 days
reports:
coverage_report:
coverage_format: cobertura
path: python_app/coverage.xml
| GitLab | 72,138,080 | 43 |
I have a GitLab CI build process with 4 steps, in which artifacts produced in the first step are packaged into a Docker image in the 2nd step, the output image is then given as an artifact to the 3rd step, and a 4th step afterwards notifies an external service.
The 2nd step needs artifacts from step 1 and the 3rd step needs artifacts from step 2. This is done with the 'dependencies' parameter and it works fine.
What is not working is step 4, which needs no artifacts. I skipped the 'dependencies' block, then declared dependencies: [], but in both cases both artifacts are downloaded!
How do I correctly inform GitLab CI that the step has no dependencies? Or is this a bug in GitLab CI?
| As per the gitlab-ci documentation:
To disable artifact passing, define the job with empty dependencies:
job:
stage: build
script: make build
dependencies: []
I've found the same issue here: https://gitlab.com/gitlab-org/gitlab-runner/issues/228
This seems to be fixed in: https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/10359
Please update your CI Runner to a newer version as that should fix it.
| GitLab | 47,657,634 | 43 |
I am trying to get GitLab working on my server (running CentOS 6.5). I followed the GitLab recipe to the line, but I just can't get it working. I am able to access the web interface and create new projects, but pushing to the master branch returns the following error:
fatal: protocol error: bad line length character: This
I have done checks on the production environment, here are the results :
Checking Environment ...
Git configured for git user? ... yes
Checking Environment ... Finished
Checking GitLab Shell ...
GitLab Shell version >= 1.7.9 ? ... OK (1.8.0)
Repo base directory exists? ... yes
Repo base directory is a symlink? ... no
Repo base owned by git:git? ... yes
Repo base access is drwxrws---? ... yes
update hook up-to-date? ... yes
update hooks in repos are links: ...
ASC / Wiki ... repository is empty
Running /home/git/gitlab-shell/bin/check
Check GitLab API access: OK
Check directories and files:
/home/git/repositories: OK
/home/git/.ssh/authorized_keys: OK
Test redis-cli executable: redis-cli 2.4.10
Send ping to redis server: PONG
gitlab-shell self-check successful
Checking GitLab Shell ... Finished
Checking Sidekiq ...
Running? ... yes
Number of Sidekiq processes ... 1
Checking Sidekiq ... Finished
Checking LDAP ...
LDAP is disabled in config/gitlab.yml
Checking LDAP ... Finished
Checking GitLab ...
Database config exists? ... yes
Database is SQLite ... no
All migrations up? ... yes
GitLab config exists? ... yes
GitLab config outdated? ... no
Log directory writable? ... yes
Tmp directory writable? ... yes
Init script exists? ... yes
Init script up-to-date? ... no
Try fixing it:
Redownload the init script
For more information see:
doc/install/installation.md in section "Install Init Script"
Please fix the error above and rerun the checks.
projects have namespace: ...
ASC / Wiki ... yes
Projects have satellites? ...
ASC / Wiki ... can't create, repository is empty
Redis version >= 2.0.0? ... yes
Your git bin path is "/usr/bin/git"
Git version >= 1.7.10 ? ... yes (1.8.3)
Checking GitLab ... Finished
For the init script error, the receipt says
Do not mind about that error if you are sure that you have downloaded
the up-to-date
so as I have downloaded the latest one, I can't really do much about it.
I've been banging my head for the past week, and can not figure out why this error is occurring, any help would appreciated!!
| If anyone else has this problem, the solution is to change the login shell of the user 'git' (or whatever your user is called) to /bin/bash. This can be done via the command : usermod -s /bin/bash git (Link). The reason for changing the login shell is because the default shell for the git user is /sbin/nologin (or similar, depending on environment), which prevents the git application from logging in as the git user on the git server.
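To confirm the change took effect, you can read the user's login shell back out of the passwd database. The real check on the server is `getent passwd git | cut -d: -f7`; since that needs a `git` user to exist, the sketch below demonstrates the same field extraction against a sample passwd-style line:

```shell
# Sample passwd entry (uid/gid values are illustrative); field 7 is the login shell.
line='git:x:998:998:GitLab:/home/git:/bin/bash'
shell=$(echo "$line" | cut -d: -f7)
echo "$shell"
```

If the last field still reads `/sbin/nologin` after `usermod`, the change did not apply and pushes will keep failing.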
| GitLab | 22,314,298 | 43 |
I have tried searching for it everywhere, but I can’t find anything.
It would be really awesome if someone could define it straight out of the box.
I don’t know what an instance of GitLab URL is. I’m asking if someone could clarify what it is, and where can I get it. I am currently trying to add it in Visual Studio Code extension GitLab Workflow. The extension is asking for my GitLab instance URL, and I don’t know where to get it.
| The instance URL of any GitLab install is basically the link to the GitLab you're trying to connect to.
For example, if your project is hosted on gitlab.example.com/yourname/yourproject then for the instance URL enter https://gitlab.example.com.
Another example, if your project is hosted on gitlab.com/username/project then the instance URL is https://gitlab.com. Though note that in the VS Code extension specifically, gitlab.com is the default so you can leave it blank in this case.
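In other words, the instance URL is just the scheme plus host of any project URL. A small sketch of stripping off the rest (example URL taken from above):

```shell
# Keep only "scheme://host" from a full project URL.
project_url="https://gitlab.example.com/yourname/yourproject"
instance_url=$(echo "$project_url" | sed -E 's#^(https?://[^/]+).*#\1#')
echo "$instance_url"
```

Whatever that prints is the value to paste into the extension's instance URL field.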
| GitLab | 58,236,175 | 42 |
I am trying to set an environment variable for my GitLab Runner based on the branch that the commit originated from.
I have 4 kubernetes clusters: staging, integration, production, and qa. Essentially I want to deploy my application to the proper cluster based on the branch I am pushing to.
image: google/cloud-sdk:latest
variables:
DOCKER_HOST: tcp://docker:2375/
DOCKER_DRIVER: overlay2
services:
- docker:dind
before_script:
- docker info
stages:
- publish
publish:
stage: publish
script:
- if [ "$CI_COMMIT_REF_NAME" = "master" ]; then $ENVIRONMENT="production"; else $ENVIRONMENT="$CI_COMMIT_REF_NAME"; fi
- echo $ENVIRONMENT
.
.
.
- kubectl apply -f cfg/${ENVIRONMENT}/web-deployment.yaml
only:
- master
- integration
- qa
- staging
Any time I run my script with a different form of the if statement I get the following error:
/bin/bash: line 83: =integration: command not found
ERROR: Job failed: exit code 1
So from what I can tell the variable is being set, but the script exits. I've seen several SO questions related to this problem, but nothing about how to set a variable and then continue a script. How can I fix this issue?
| The comment above helped me figure it out. So I use a VERSION file that right now contains 0.0.0 which I manipulate to create other variables
# determine what branch I am on
- if [ "$CI_COMMIT_REF_NAME" = "master" ]; then ENVIRONMENT="qa"; else ENVIRONMENT="$CI_COMMIT_REF_NAME"; fi
# determine patch number for semver
- PATCH=`git log --pretty=oneline | wc -l | sed -e 's/^[[:space:]]*//'`
- VERSION=`cat VERSION`
# drop trailing 0 from VERSION
- VERSION=${VERSION%?}
# set all env variables
- TAG="${VERSION}${PATCH}"
- IMAGE="${TAG}-${ENVIRONMENT}" # used for Kubernetes
- API_HOST="https://api.${ENVIRONMENT}.my-app.com/"
- WEB_HOST="https://www.${ENVIRONMENT}.my-app.com/"
# pass enviornment variables to make
- ENVIRONMENT="$ENVIRONMENT" IMAGE="$IMAGE" API_HOST="$API_HOST" WEB_HOST="$WEB_HOST" make
# make has a step that calls sed and replaces text inline in this file to prepare Kubernetes
- kubectl apply -f cfg/web-deployment.yaml
# create a tag in the repo after deployment is done
- curl -X POST --silent --insecure --show-error --fail "https://gitlab.com/api/v4/projects/${CI_PROJECT_ID}/repository/tags?tag_name=${TAG}&ref=${CI_COMMIT_SHA}&private_token=${GITLAB_TOKEN}"
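The version/tag manipulation above is easy to sanity-check in isolation. With a VERSION file containing `0.0.0` and a stand-in for the commit count, the resulting tag comes out like this:

```shell
VERSION="0.0.0"            # contents of the VERSION file
PATCH=417                  # stand-in for the `git log --pretty=oneline | wc -l` count
VERSION=${VERSION%?}       # drop the trailing 0, leaving "0.0."
TAG="${VERSION}${PATCH}"
echo "$TAG"
```

So every pipeline produces a tag like `0.0.417` whose patch component grows with the commit count.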
| GitLab | 53,965,695 | 42 |
Is it possible to invalidate or clear a pipeline cache with the Gitlab CI after a pipeline completes?
My .gitlab-ci.yml file has the following global cache definition
cache:
key: "%CI_PIPELINE_ID%"
paths:
- './msvc/Project1/bin/Debug'
- './msvc/Project2/bin/Debug'
- './msvc/Project3/bin/Debug'
The cache-key value specifies that each pipeline should maintain it's own cache, which is working fine, but the cache file continues to exist after the pipeline completes. With hundreds of pipelines being run, the size starts to add up and manually deleting the cache folder on our machine isn't a great solution.
I tried adding a cleanup job at the end of the pipeline
cleanup:
stage: cleanup
script:
- rm -rf './msvc/Project1/bin'
- rm -rf './msvc/Project2/bin'
- rm -rf './msvc/Project3/bin'
when: always
Which deletes the local files, but won't delete them from the cache.
Am I missing something here?
Currently running Gitlab-EE 10.3.3
| Artifacts are the solution as mentioned in the comments. However there is an option to clear caches in the Pipelines page as shown in the image below.
| GitLab | 48,469,675 | 42 |
I have a Dockerfile that starts with installing the texlive-full package, which is huge and takes a long time. If I docker build it locally, the intermedate image created after installation is cached, and subsequent builds are fast.
However, if I push to my own GitLab install and the GitLab-CI build runner starts, this always seems to start from scratch, redownloading the FROM image, and doing the apt-get install again. This seems like a huge waste to me, so I'm trying to figure out how to get the GitLab DinD image to cache the intermediate images between builds, without luck so far.
I have tried using the --cache-dir and --docker-cache-dir for the gitlab-runner register command, to no avail.
Is this even something the gitlab-runner DinD image is supposed to be able to do?
My .gitlab-ci.yml:
build_job:
script:
- docker build --tag=example/foo .
My Dockerfile:
FROM php:5.6-fpm
MAINTAINER Roel Harbers <[email protected]>
RUN apt-get update && apt-get install -qq -y --fix-missing --no-install-recommends texlive-full
RUN echo Do other stuff that has to be done every build.
I use GitLab CE 8.4.0 and gitlab/gitlab-runner:latest as runner, started as
docker run -d --name gitlab-runner --restart always \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /usr/local/gitlab-ci-runner/config:/etc/gitlab-runner \
gitlab/gitlab-runner:latest \
; \
The runner is registered using:
docker exec -it gitlab-runner gitlab-runner register \
--name foo.example.com \
--url https://gitlab.example.com/ci \
--cache-dir /cache/build/ \
--executor docker \
--docker-image gitlab/dind:latest \
--docker-privileged \
--docker-disable-cache false \
--docker-cache-dir /cache/docker/ \
; \
This creates the following config.toml:
concurrent = 1
[[runners]]
name = "foo.example.com"
url = "https://gitlab.example.com/ci"
token = "foobarsldkflkdsjfkldsj"
tls-ca-file = ""
executor = "docker"
cache_dir = "/cache/build/"
[runners.docker]
image = "gitlab/dind:latest"
privileged = true
disable_cache = false
volumes = ["/cache"]
cache_dir = "/cache/docker/"
(I have experimented with different values for cache_dir, docker_cache_dir and disable_cache, all with the same result: no caching whatsoever)
| I suppose there's no simple answer to your question. Before adding some details, I strongly suggest to read this blog article from the maintainer of DinD, which was originally named "do not use Docker in Docker for CI".
What you might try is declaring /var/lib/docker as a volume for your GitLab runner. But be warned, depending on your file-system drivers you may use AUFS in the container on an AUFS filesystem on your host, which is very likely to cause problems.
What I'd suggest to you is creating a separate Docker-VM, only for the runner(s), and bind-mount docker.sock from the VM into your runner-container.
We are using this setup with GitLab with great success (>27.000 builds in about 12 months).
You can take a look at our runner with docker-compose support which is actually based on the shell-executor of GitLab's runner.
| GitLab | 35,556,649 | 42 |
There are 3 stages - build, test and deploy in .gitlab-ci.yml.
A nightly regression test stage needs to be run nightly.
Here's the relevant .gitlab-ci.yml code:
stages:
- build
- test
- deploy
build_project:
stage: build
script:
- cd ./some-dir
- build-script.sh
except:
- tags
#Run this only when say variable 'NIGHTLY_TEST == True'. But HOW?
nightly_regression_test_project:
stage: test
script:
- cd ./some-dir
- execute test-script
Tagging daily to only run the test stage is not preferable.
Any other ideas?
| except and only can specify variables that will trigger them.
You can use the following in your .gitlab-ci.yml:
build1:
stage: build
script:
- echo "Only when NIGHTLY_TEST is false"
except:
variables:
- $NIGHTLY_TEST
test1:
stage: test
script:
- echo "Only when NIGHTLY_TEST is true"
only:
variables:
- $NIGHTLY_TEST
| GitLab | 39,988,497 | 41 |
I can't seem to find any documentation on manual stages in GitLab CI version 8.9. How do I do a manual stage such as "Deploy to Test"?
I'd like Gitlab CI to deploy a successful RPM to dev, and then once I've reviewed it, push to Test, and from there generate a release. Is this possible with Gitlab CI currently?
| You can set tasks to be manual by using when: manual in the job (documentation).
So for example, if you want to want the deployment to happen at every push but give the option to manually tear down the infrastructure, this is how you would do it:
stages:
- deploy
- destroy
deploy:
stage: deploy
script:
- [STEPS TO DEPLOY]
destroy:
stage: destroy
script:
- [STEPS TO DESTROY]
when: manual
With the above config, if you go to the GitLab project > Pipelines, you should see a play button next to the last commit. When you click the play button you can see the destroy option.
| GitLab | 31,904,686 | 41 |
I have a GitLab installation running, and I have a repository that I want to share with my friends. I can't understand the flow of sending pull requests in GitLab.
A user can't fork my repository or access my project (unless he is on my team). A merge request can be from one branch to another in my repository.
How do pull requests work in GitLab?
| GitLab.com co-founder here. Forking should work fine in recent versions of GitLab (6.x). You can fork a repo belonging to someone else and then create a merge request (the properly named version of the GitHub pull request).
| GitLab | 15,396,753 | 41 |
According to the documentation, it should be possible to access GitLab repos with project access tokens:
The username is set to project_{project_id}_bot, such as project_123_bot.
Never mind that that's a lie -- the actual user is called project_4194_bot1 in my case; apparently they increment a number for subsequent tokens.
Either way -- and I have tried both with and without the trailing 1 -- I would expect
git clone "https://project_4194_bot1:[email protected]/my-group/my-project.git"
to succeed, same as with my.username:$PERSONAL_TOKEN (which works perfectly). However, I get
remote: HTTP Basic: Access denied
fatal: Authentication failed for '<snip>'
What may be going on here? How can I access GitLab repositories using project access tokens?
It's not as if we'd get that far, but FWIW, the token seems to have sufficient permissions:
| It seems that using the project name as username works. In your case replacing project_4194_bot1 with my-project should work:
git clone "https://my-project:[email protected]/my-group/my-project.git"
EDIT: One can actually use any non-blank value as a username (see docs), as others correctly pointed out.
| GitLab | 63,924,723 | 40 |
So, in addition to GitKraken won't let me clone from a private repo on GitHub
I get this screen when opening my GitLab Repo:
Anyone got a solution of how to make my Repo 'non-private' or how to make GitKraken let me open this without the Pro Plan?
Already tried:
Generating new SSH Key in GitKraken
Removing Repo, Generate new GitLab connection, Clone Repo
Checked GitLab: GitKraken is an Authorized applications
Git Pull via command line gives no trouble, so no permission issue
...
| 6.5.1 is the last version to support private repo. You can see the release details at this link https://blog.axosoft.com/gitkraken-v6-0/#pricing-changes OR https://support.gitkraken.com/release-notes/6x/
And you can also download it (Mac version) from Axosoft https://release.axocdn.com/darwin/GitKraken-v6.5.1.zip OR https://release.gitkraken.com/darwin/GitKraken-v6.5.1.zip
I'm not sure how to turn off the automatic update function, so if you close GitKraken completely and reopen it, it will update to the latest version.
=======
Updated
Block IP Address for updating
For MacOS
echo "127.0.0.1 release.gitkraken.com" >> /private/etc/hosts
Windows 10 – “C:\Windows\System32\drivers\etc\hosts”
Linux – “/etc/hosts”
Mac OS X – “/private/etc/hosts”
| GitLab | 58,095,592 | 40 |
recently my runners have been stopped and I don't know why?
I've just upgraded nodejs on the server and it did happen.
after this problem, I've tried to update gitlab to the latest version and check the runner status but the problem still persists and in the title of grey icon shows:
Runner is offline, the last contact was about 22 hours ago.
What should I do?
And when I try to retry stuck jobs, I see this error:
This job is stuck, because you don't have any active runners online with any of these tags assigned to them: 'my label'.
Any Help is appreciated!
| To me, the following solved the problem:
gitlab-runner restart
Where gitlab-runner is a symlink to gitlab-ci-multi-runner:
GitLab Runner is the open source project that is used to run your jobs and send the results back to GitLab. It is used in conjunction with GitLab CI, the open-source continuous integration service included with GitLab that coordinates the jobs.
| GitLab | 44,746,357 | 40 |
Dear stackoverflow community, once more I turn to you :)
I've recently come across the wonder of GitLab and their very nice bundled CI/CD solution. It works gallantly. However, we all need to sign our binaries, don't we, and I've found no way to upload a key as I would to a Jenkins server for doing this.
So, how can I sign my Android (actually Flutter) application when building a release, without checking in my keys and secrets?
From what I see, most people define the build job with signing settings referring to a non-committed key.properties file specifying a local keystore.jks. This works fine when building APKs locally, but how do I do it when building and archiving as part of the CI/CD job?
For secret keys, for example the passwords to the keystore itself, I've found that I can simply store them as protected variables but the actual keystore file itself. What can I do about that?
Any ideas, suggestions are dearly welcome.
Cheers
Edit:
I apologise for never marking a right answer here and as @IvanP proposed the solution of writing individual values to a file was what I used for a long time. But as @VonC added later, Gitlab now has the capability to data as actual files which simplifies this so I am marking that as the correct answer.
| Usually I store keystore file (as base64 string), alias and passwords to Gitlab's secrets variables.
In the .gitlab-ci.yml do something like:
create_property_files:
stage: prepare
only:
- master
script:
- echo $KEYSTORE | base64 -d > my.keystore
- echo "keystorePath=my.keystore" > signing.properties
- echo "keystorePassword=$KEYSTORE_PASSWORD" >> signing.properties
- echo "keyAlias=$ALIAS" >> signing.properties
- echo "keyPassword=$KEY_PASSWORD" >> signing.properties
artifacts:
paths:
- my.keystore
- signing.properties
expire_in: 10 mins
And, finally, in your build gradle:
signingConfigs {
release {
file("../signing.properties").with { propFile ->
if (propFile.canRead()) {
def properties = new Properties()
properties.load(new FileInputStream(propFile))
storeFile file(properties['keystorePath'])
storePassword properties['keystorePassword']
keyAlias properties['keyAlias']
keyPassword properties['keyPassword']
} else {
println 'Unable to read signing.properties'
}
}
}
}
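The `echo $KEYSTORE | base64 -d` step assumes the secret variable was created by base64-encoding the keystore file in the first place. A round-trip sketch with a dummy file standing in for the real keystore (GNU `base64` flags assumed):

```shell
tmp=$(mktemp -d)
printf 'dummy-keystore-bytes' > "$tmp/my.keystore"
# Value you would paste into the project's CI/CD secret variables:
KEYSTORE=$(base64 < "$tmp/my.keystore" | tr -d '\n')
# What the CI job does to get the file back:
echo "$KEYSTORE" | base64 -d > "$tmp/restored.keystore"
cmp -s "$tmp/my.keystore" "$tmp/restored.keystore" && echo identical
```

Run the encode half locally against your real `.jks`, paste the output into GitLab's secret variables, and the decode half in the pipeline reproduces the exact bytes.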
| GitLab | 51,725,339 | 39 |
Look at this picture showing gitlab ce memory consumption.
I really dont need all of those workers, sidekiq or unicorn or all of those daemon. This is on IDLE. I mean, I installed this to manage 1 project, with like 4 people, I dont need all those daemon. Is there any way to reduce this ?
| I also had problems with gitlab's high memory consumption. So I ran the linux tool htop.
In my case I found out that the postgresl service used most of the memory.
With postgres service running 14.5G of 16G were used
I stopped one gitlab service after the other and found out that when I stop postgres a lot of memory was freed.
You can try it
gitlab-ctl stop postgresql
and start the service again with
gitlab-ctl start postgresql
Finally I came across the following configuration in /etc/gitlab/gitlab.rb
##! **recommend value is 1/4 of total RAM, up to 14GB.**
# postgresql['shared_buffers'] = "256MB"
I just set the shared buffers to 256MB by removing the comment #, because 256MB is sufficient for me.
postgresql['shared_buffers'] = "256MB"
and executed gitlab-ctl reconfigure. gitlab-ctl restarts the affected services and the memory consumption is now very moderate.
Hopefully that helps someone else.
| GitLab | 36,122,421 | 39 |
When having one gitlab runner serving multiple projects, it can only run one CI pipeline while the other project pipelines have to queue.
Is it possible to make a gitlab runner run pipelines from all projects in parallel?
I don't seem to find anywhere a configuration explanation for this.
| I believe the configuration options you are looking for is concurrent and limit, which you'd change in the GitLab Runners config.toml file.
From the documentation:
concurrent: limits how many jobs globally can be run concurrently. The most upper limit of jobs using all defined runners. 0 does not mean unlimited
limit: limit how many jobs can be handled concurrently by this token.
The location for the config.toml file:
/etc/gitlab-runner/config.toml on *nix systems when GitLab Runner is
executed as root (this is also path for service configuration)
~/.gitlab-runner/config.toml on *nix systems when GitLab Runner is
executed as non-root
./config.toml on other systems
Useful issue as well.
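Put together, an illustrative config.toml fragment (the numbers are examples, not recommendations):

```toml
# Allow up to 4 jobs across all runners defined in this file,
# but at most 2 concurrent jobs on this particular runner.
concurrent = 4

[[runners]]
  name = "example-runner"
  limit = 2
```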
| GitLab | 51,828,805 | 38 |
I would like to clone my Android code from a GitLab repository in Android Studio 0.8.1. I went to VCS >> Checkout from Version Control >> Git and added the HTTP URL there. It prompts me that "Repository test has failed". Kindly help me sort out the issue. I have checked the plugins as well. Thanks a lot.
| You need to download and install git from http://git-scm.com/downloads
Then you need to track the git.exe on AndroidStudio:
Go to Settings > Project Settings > Version Control > VCSs > Git > Path to Git executable
Select (or type) executable path, eg: D:\Program Files (x86)\Git\cmd\git.exe
If you installed GitHub Desktop for Windows
In case you have GitHub Desktop for Windows, git.exe will be found at C:\Users\YOUR_USER_NAME\AppData\Local\GitHub\PortableGit_c7e0cbde92ba565zz218z5214zzz0e854zzza28\cmd.
| GitLab | 24,625,335 | 38 |
My Gitlab (version 5) is not sending any e-mails and I am lost trying to figure out what is happening. The logs give no useful information. I configured it to use sendmail.
I wrote a small script that sends e-mail through ActionMailer (I guess it is what gitlab uses to send e-mail, right?). And it sends the e-mail correctly.
But, on my Gitlab, I can guarantee that sendmail is not even being called.
Do I need to enable something to get e-mail notifications? How can I debug my issue?
Update
The problem is that I can not find any information anywhere. The thing just fails silently. Where can I find some kind of log? The logs in the log dir provide no useful information.
My question is, how can I make Gitlab be more verbose? How can I make it tell me what is going on?
Update 2
I just found a lot of mails scheduled on the Background jobs section. A lot of unprocessed Sidekiq::Extensions::DelayedMailer. What does it mean? Why were these jobs not processed?
| Stumbled upon this issue today, here's my research:
Debugging SMTP connections in the GitLab GUI is not supported yet. However there is a pending feature request and a command line solution.
Set the desired SMTP settings /etc/gitlab/gitlab.rb and run gitlab-ctl reconfigure (see https://docs.gitlab.com/omnibus/settings/smtp.html).
Start the console running gitlab-rails console -e production.
Show the configured delivery method (should be :smtp) running the command ActionMailer::Base.delivery_method. Show all configured SMTP settings running ActionMailer::Base.smtp_settings.
To send a test mail run
Notify.test_email('[email protected]', 'Hello World', 'This is a test message').deliver_now
On the admin page in GitLab, the section »Background jobs« shows information about all jobs. Failing SMTP connections are listed there as well.
Please note, you may need to restart the GitLab instance in order to use the newly configured SMTP settings (on my instance the console was able to send mails, the GUI required a restart). Run gitlab-ctl restart to restart your instance.
| GitLab | 16,125,623 | 38 |
I'm using Hosted Gitlab to host my Git repositories, and more recently I've been using it to build/deploy PHP and Java applications to servers.
What I'd like to do is once a build is complete, deploy the application using SSH. Sometimes this might just be uploading the contents of the final build (PHP files) to a server via SSH, or other times it may be uploading a compiled .jar file and then executing a command on the remote server to restart a service.
I've set up my own Docker container as a build environment, this includes things such as Java, PHP, Composer, and Maven all that I need for builds to complete. I'm using this image to run builds.
What I'd like to know is, how can I SSH into an external server in order to perform deployment commands that I can specify in my gitlab-ci.yaml file?
| You can store your SSH key as a secret variable within gitlab-ci.yaml and use it during your build to execute SSH commands, for more details please see our documentation here.
Once you have SSH access you can then use commands such as rsync and scp to copy files onto your server. I found an example of this in another post here which you can use as a reference.
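A common pattern is to store the private key in a secret variable (here called `SSH_PRIVATE_KEY`, a name you choose yourself) and load it into an agent in `before_script`. The sketch below generates a throwaway key so it can run anywhere; in CI the variable would already hold your deploy key (requires OpenSSH):

```shell
tmp=$(mktemp -d)
ssh-keygen -q -t ed25519 -N '' -f "$tmp/id_ed25519"   # throwaway key for the demo
SSH_PRIVATE_KEY=$(cat "$tmp/id_ed25519")              # in CI this is a secret variable
eval "$(ssh-agent -s)" > /dev/null                    # start an agent for this job
echo "$SSH_PRIVATE_KEY" | ssh-add - 2>/dev/null       # load the key from the variable
ssh-add -l                                            # confirm the key is loaded
```

Once `ssh-add -l` lists the key, subsequent `ssh`, `scp`, and `rsync` commands in the job authenticate with it automatically.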
| GitLab | 42,676,369 | 37 |
Here is what my dashboard looks like:
Not really sure where to add an SSH key. Anyone have any idea?
| Go to your GitLab account: https://gitlab.com/
Click on Settings on the top right drop-down, which will appear once you select the icon(white-fox image [specific to my profile]).
Click on Settings on the top right drop-down, which will appear once you select the icon(white-fox image).
Click on SSH Keys:
Add/Paste the SSH Key.
How to generate the ssh key: Download gitbash or putty:
After downloading gitbash/putty follow the steps:
Open a terminal on Linux or macOS, or Git Bash / WSL on Windows.
Generate a new ED25519 SSH key pair:
ssh-keygen -t ed25519 -C "[email protected]"
Or, if you want to use RSA:
ssh-keygen -t rsa -b 4096 -C "[email protected]"
It will generate the key in the C:\Users\yourname\.ssh directory.
Copy the public key and paste in the gitlab location:
Command to run on gitbash to clone the repository:
ssh-agent bash -c 'ssh-add ~/.ssh/id_rsa; git clone [email protected]:xyz/SpringBootStarter.git'
| GitLab | 35,901,982 | 37 |
I'm running GitLab in a Docker container and it's working fine so far, no problem with that at all. I'm just in doubt about the creation of repositories in projects. I created my first project in GitLab and was then redirected to a page with some commands to use in the terminal. One of the sections there was "Create a repository"; I used those commands and so was able to create the repository for my project. However, that page of commands went away afterwards, and I only see it again when I create a new project. So here is my question again: is it possible to create two or more repositories inside only one project?
| I only have time to give a short answer right now, but I hope it helps:
In short: NO
But also: YES, after a fashion
There is a one-to-one correspondence between repositories and projects (which would perhaps better be called repositories as well).
One Solution: Gitlab supports the creation of groups of projects/repos, which can be managed as a project consisting of multiple repos.
Git-based/local Options
If you are interested in git-based solutions to including a repository inside of another repository check out my answer here. If you use either the subtree merge method (at least a variant of it that tracks history) or subrepository method in this answer, your subprojects will appear in your master project in Gitlab, but the master project will also track changes in the subprojects.
Alternative Solution: Create a dummy repo that contains all of your desired repos as subrepos. This master repo will then track all subrepo changes. However, there are a few logistical issues: the .git files for the subrepos will not exist on Gitlab, so you might want a dedicated client with these files to pull the master repo from Gitlab (probably one commit at a time, if you want the subrepo histories to match the main repo history) and update the corresponding local subrepos (these could also be stored independently on GitLab).
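Sketched locally, the dummy-master-repo idea from the alternative solution looks like the following. The repo names libfoo and appbar are made up; with real projects you would use your GitLab clone URLs instead of local paths:

```shell
set -e
work=$(mktemp -d)
cd "$work"

# Two standalone repos standing in for independent GitLab projects
for name in libfoo appbar; do
  git init -q "$name"
  git -C "$name" -c user.email=demo@example.com -c user.name=demo \
      commit -q --allow-empty -m "initial commit"
done

# The dummy "master" repo that ties them together as subrepos
git init -q master-repo
cd master-repo
git -c protocol.file.allow=always submodule add "$work/libfoo" libfoo
git -c protocol.file.allow=always submodule add "$work/appbar" appbar
git -c user.email=demo@example.com -c user.name=demo commit -q -m "track subrepos"

git submodule status   # lists both subrepos with their pinned commits
```

The protocol.file.allow setting is only needed for this local demo (newer git versions block file-protocol submodules by default); real remote URLs do not need it.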
| GitLab | 28,416,576 | 37 |
I would like to create a webhook within Gitlab to automatically update a mirror repository on Github, whenever a push event happens. I've checked this page, but I didn't understand how it is done.
My Gitlab version is 6.5. Here is the configuration page:
What should I put in URL? Where do I need to place the script to update the repository?
| You don't need a webhook for that. A regular post-receive hook will work very well.
To create and use such a hook you just have to login on the server where your gitlab is installed and create an ssh key for git user.
sudo -u git ssh-keygen -f /home/git/.ssh/reponame_key
(do not type any passphrase when prompted)
Go to your github account and add the public key (it's been created as /home/git/.ssh/reponame_key.pub) to your project as a deploy key.
Have a look at https://help.github.com/articles/managing-deploy-keys if you need help with that.
Once that is done, you just have to configure the connection between your git server and github's:
add an alias to the git user's ssh configuration (add the following lines to /home/git/.ssh/config, creating it if it's not present)
Host reponame
IdentityFile /home/git/.ssh/reponame_key
HostName github.com
User git
Now add the new remote (using the alias you just created) to your repository:
cd /home/git/repositories/namespace/reponame.git
git remote add --mirror github reponame:youruser/reponame.git
Now that everything is in place you'll have to create the actual hook:
cd /home/git/repositories/namespace/reponame.git/hooks
echo "exec git push --quiet github &" >> post-receive
chmod 755 post-receive
The last command is very important because git will check whether a hook is executable before running it.
That's it!
(Replace reponame, namespace and youruser according to your real accounts and enjoy).
Last note: if you want your name and avatar near commits on github, make sure that the email address you are using on gitlab is one of the addresses linked to your github account as well. You'll see your gitlab username otherwise.
| GitLab | 21,962,872 | 37 |
I've been using Git for the past few months. Recently when I try to clone or to push, I keep on getting this error. I've researched on the internet but so far no solution has worked for me. Does anyone have an idea?
Side note: I recently moved to a different country; it was working perfectly where I was before.
Git Version : 2.11.0 , OS : Debian GNU/Linux 9.11 (stretch)
Error :
git push
fatal: unable to access 'https://**************/': gnutls_handshake() failed: Handshake failed
| This is the solution to fix this issue on Ubuntu Server 14.04.x:
1, Edit file:
sudo nano /etc/apt/sources.list
2, Add to file sources.list
deb http://security.ubuntu.com/ubuntu xenial-security main
deb http://cz.archive.ubuntu.com/ubuntu xenial main universe
3, Run the update command and install the new curl version
apt-get update && apt-get install curl
4, Check version (Optional):
curl -V
Response :
curl 7.47.0 (x86_64-pc-linux-gnu) libcurl/7.47.0 GnuTLS/3.4.10 zlib/1.2.8 libidn/1.28 librtmp/2.3
Protocols: dict file ftp ftps gopher http https imap imaps ldap ldaps pop3 pop3s rtmp rtsp smb smbs smtp smtps telnet tftp
Features: AsynchDNS IDN IPv6 Largefile GSS-API Kerberos SPNEGO NTLM NTLM_WB SSL libz TLS-SRP UnixSockets
5, Test the connection with Bitbucket (Optional)
GIT_CURL_VERBOSE=1 git ls-remote https://bitbucket.org/
Response:
* Closing connection 0
fatal: repository 'https://bitbucket.org/' not found
That's it.
| GitLab | 60,262,230 | 36 |
I started to look into SSL certificates when I stumbled upon Let's Encrypt, and I wanted to use it with GitLab. However, it is running on a Raspberry Pi 2 and running quite perfectly now (so I don't want to mess anything up), so how would I go about installing a Let's Encrypt SSL certificate properly?
PS: My installation is omnibus
| The by far best solution I was able to find for now is described in this blog post. I won't recite everything, but the key points are:
Use the webroot authenticator for Let's Encrypt
Create the folder /var/www/letsencrypt and use this directory as webroot-path for Let's Encrypt
Change the following config values in /etc/gitlab/gitlab.rb and run gitlab-ctl reconfigure after that:
nginx['redirect_http_to_https'] = true
nginx['ssl_certificate']= "/etc/letsencrypt/live/gitlab.yourdomain.com/fullchain.pem"
nginx['ssl_certificate_key'] = "/etc/letsencrypt/live/gitlab.yourdomain.com/privkey.pem"
nginx['custom_gitlab_server_config']="location ^~ /.well-known {\n alias /var/www/letsencrypt/.well-known;\n}\n"
If you are using Mattermost which is shipped with the Omnibus package then you can additionally set these options in /etc/gitlab/gitlab.rb:
mattermost_nginx['redirect_http_to_https'] = true
mattermost_nginx['ssl_certificate']= "/etc/letsencrypt/live/gitlab.yourdomain.com/fullchain.pem"
mattermost_nginx['ssl_certificate_key'] = "/etc/letsencrypt/live/gitlab.yourdomain.com/privkey.pem"
mattermost_nginx['custom_gitlab_mattermost_server_config']="location ^~ /.well-known {\n alias /var/www/letsencrypt/.well-known;\n}\n"
After requesting your first certificate remember to change the external_url to https://... and again run gitlab-ctl reconfigure
This method is very elegant since it just mounts the directory /var/www/letsencrypt/.well-known used by the Let's Encrypt authenticator into the Gitlab web-root via a custom Nginx configuration and authentication is always possible when Gitlab is running. This means that you can automatically renew the Let's Encrypt certificates.
| GitLab | 34,189,199 | 36 |
My institution recently installed GitLab for us. I've figured out how to install R packages from the GitLab server using devtools::install_git and it works as long as the project is public.
#* When modeltable project has Public status
devtools::install_git('https://mini-me2.lerner.ccf.org/nutterb/modeltable.git')
However, if I have a package that is listed as either "Internal" or "Private," I can't install the package without some form of authentication. As of yet, I haven't figured out how to pass authentication via the URL. Does anyone have experience with downloading packages from GitLab?
#* After changing the 'modeltable' project to Private status
devtools::install_git('https://mini-me2.lerner.ccf.org/nutterb/modeltable.git')
Preparing installation of modeltable using the Git-URL: https://mini-me2.lerner.ccf.org/nutterb/modeltable.git
'/usr/bin/git' clone --depth 1 --no-hardlinks https://mini-me2.lerner.ccf.org/nutterb/modeltable.git /tmp/Rtmp5aj1cU/file24493dc03a32
Error: There seems to be a problem retrieving this Git-URL.
| I'd highly recommend going the SSH route, and the below works for that. I found making the leap to SSH was easy, especially with R and RStudio. I'm using Windows in the below example. Edits from code I use in practice are in all caps.
creds = git2r::cred_ssh_key("C:\\Users\\MYSELF\\.ssh\\id_rsa.pub",
"C:\\Users\\MYSELF\\.ssh\\id_rsa")
devtools::install_git("[email protected]:GITLABGROUP/PACKAGE.git",
credentials = creds)
Two quick additional comments:
git2r is imported with devtools, you shouldn't need to install it separately.
Also I don't think this should need mentioning, but passwords in plaintext in your script is a very bad idea.
| GitLab | 27,319,207 | 36 |
I've signed up to Gitlab using the connection they have with Google Accounts. Once that is made and I have permission to clone from a git repository, I try to clone using the https:// link (not the git: SSH one)
Now to complete this process, I am asked my username and password, but what is that in this scenario? Please don't recommend using SSH link instead as ssh is not straightforward on a Windows OS.
| You can actually use the https link if you login using a Google, Twitter or GitHub link but you have to have an actual GitLab password. If you've already created your account by logging in with a social network, all you have to do is use the Forgot Password feature.
log out and then use the "Forgot your password?" button and put in the email address of the social network you logged in with.
Once you get the email, "change" your password which will have actually set your GitLab password
You should now be able to use the https: clone url with your username and the password you just set.
| GitLab | 22,436,827 | 36 |
GitLab offers the project access levels:
"Guest"
"Reporter"
"Developer"
"Master"
for "team members" co-operating with a specific project.
"Master" and "Guest" are self-explanatory, but the others aren't quite clear to me, in their extents as well as in their granularity.
What is the difference between these levels?
| 2013: The project_security_spec.rb tests each profile's capabilities, which are listed in ability.rb:
(2017 GitLab 10.x: this would be more likely in app/policies/project_policy.rb)
See also, as noted in jdhao's answer: "Project members permissions"
Those rules are quite explicit:
def public_project_rules
[
:download_code,
:fork_project,
:read_project,
:read_wiki,
:read_issue,
:read_milestone,
:read_project_snippet,
:read_team_member,
:read_merge_request,
:read_note,
:write_issue,
:write_note
]
end
def project_guest_rules
[
:read_project,
:read_wiki,
:read_issue,
:read_milestone,
:read_project_snippet,
:read_team_member,
:read_merge_request,
:read_note,
:write_project,
:write_issue,
:write_note
]
end
def project_report_rules
project_guest_rules + [
:download_code,
:fork_project,
:write_project_snippet
]
end
def project_dev_rules
project_report_rules + [
:write_merge_request,
:write_wiki,
:push_code
]
end
That means:
a reporter is a guest who can also:
download code,
fork a project,
write project snippet
a developer is a reporter who can also:
write merge request,
write wiki pages,
push code
Note: with GitLab 15.0 (May 2022):
Users with the Reporter role can manage iterations and milestones
We’ve changed the permissions necessary to create, edit, and delete milestones and iterations from the Developer to Reporter role.
This change better reflects the typical day-to-day Reporter responsibilities of managing and tracking planning timeboxes.
See Documentation and Issue.
And GitLab 17.0 (May 2024) adds, still for the Reporter role:
Design Management features extended to Product teams
GitLab is expanding collaboration by updating our permissions. Now, users with the Reporter role can access Design Management features, enabling product teams to engage more directly in the design process. This change simplifies workflows and accelerates innovation by inviting broader participation from across your organization.
See Documentation and Issue.
And, still with GitLab 17.0 (May 2024)
Guests in groups can link issues
We reduced the minimum role required to relate issues and tasks from Reporter to Guest, giving you more flexibility to organize work across your GitLab instance while maintaining permissions.
See Documentation and Epic.
| GitLab | 17,657,781 | 36 |
I would like to add my gitlab account to sourcetree. Inside Preferences -> Accounts, I tried the 'add' button
host: GitLab.com
Auth type: greyed out
username xxxxxx
password: xxxxxx
protocol: https
when I go to save. I get a pop up screen that says: "We couldn't connect to GitLab with your (XXXXXX) credentials. Check your username and try the password again."
I've double checked both username and password.
| Someone on the GitLab forum had a similar issue recently, and they documented the steps to solve it:
I eventually noticed that for GitHub and Bitbucket the credentials are through "OAuth", and for GitLab a "Personal access token". I had generated a token yesterday, but hadn't used it anywhere.
Steps to add a repo from GitLab on SourceTree:
On your browser, go to your account and > User settings > Personal Access Tokens (https://gitlab.com/profile/personal_access_tokens)
Generate and copy the token
On Sourcetree,
a) leave https as preferred protocol
b) click on Refresh Personal Access Token
c) type your username
d) use the copied token as password
Refer to the image below.
| GitLab | 53,184,950 | 35 |
I have several developers working on a local Gitlab instance. The client requires that their Github repo is kept updated. So our Gitlab repo should push any commits directly to Github. Any commits to Github should likewise be pulled into Gitlab.
I could do the first part (dev --> gitlab --> github) with jenkins or something, but am stuck on the reverse. Our Gitlab and Jenkins run inside our firewall.
Any hints or pointers (or drop in solutions!) would be very appreciated.
| It's only in the enterprise edition and on GitLab.com, but GitLab has introduced this feature directly, without any workarounds.
They've documented pulling/pushing from/to a remote repository in GitLab Docs → User Docs → Projects → Repositories → Mirroring.
It's in the same section of configuration that you can push, too:
From within a project use the gear icon to select Mirror Repository
Scroll down to Push to a remote repository
Checkmark Remote mirror repository: Automatically update the remote mirror's branches, tags, and commits from this repository every hour.
Enter the repository you want to update; for GitHub you can include your username and password in the URL, like so: https://yourgithubusername:[email protected]/agaric/guts_discuss_resource.git
Note that I haven't tried it, but you should be able to push to and pull from the same repository. It's working great for me pulling from a remote repository (drupal.org), and pushing to a different remote repository (gitlab.com).
| GitLab | 32,762,024 | 35 |
Project cannot be transferred, because tags are present in its container registry
I am encountering the above error when I try to transfer my git repository to a group.
I have checked in Repository/Tags but there are none. I have also checked in CI/CD tabs and there's nothing outstanding there either. So I'm wondering what tags is it referring to.
I am currently using Netlify for my frontend hosting and Heroku for my backend hosting.
Could either of these be applying some tags that I can't see/find?
Is my only option to export the project?
| Issue 33301 mentions:
the only way to move a project with containers in the registry is to first delete them all.
Meaning delete the container tags in the registry (not the repository or CI/CD)
Navigate to sidebar menu Package->Container Registry on a project where Container registry is enabled
Click on the button "Remove Tag"
(since GitLab 12.3, Sept. 2019, you can delete the tags you want)
| GitLab | 61,557,101 | 34 |
I want to run a script that is needed for both my test_integration and build stages. Is there a way to specify this in the before_script so I don't have to write it out twice?
before_script:
stage: ['test_integration', 'build']
this does not seem to work i get the following error in gitlab ci linter.
Status: syntax is incorrect
Error: before_script config should be an array of strings
.gitlab-ci.yml
stages:
- security
- quality
- test
- build
- deploy
image: node:10.15.0
before_script:
stage: ['test_integration', 'build']
script:
- apt-get update
- apt-get -y install apt-transport-https ca-certificates curl gnupg2 software-properties-common
- curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -
- add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"
- apt-get update
- apt-get -y install docker-ce
- curl -L "https://github.com/docker/compose/releases/download/1.23.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
- chmod +x /usr/local/bin/docker-compose
- docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
services:
- mongo
- docker:dind
security:
stage: security
script:
- npm audit
quality:
stage: quality
script:
- npm install
- npm run-script lint
test_unit:
stage: test
script:
- npm install
- npm run-script unit-test
test_integration:
stage: test
script:
- docker-compose -f CI/backend-service/docker-compose.yml up -d
- npm install
- npm run-script integration-test
build:
stage: build
script:
- npm install
- export VERSION=`git describe --tags --always`
- docker build -t $CI_REGISTRY_IMAGE:$VERSION .
- docker push $CI_REGISTRY_IMAGE
deploy:
stage: deploy
script: echo 'deploy'
| The before_script syntax does not support a stages section. You could use before_script as you have done, without the stages section; however, the before_script would then run for every single job in the pipeline.
Instead, what you could do is use YAML's anchor's feature (supported by Gitlab), which allows you to duplicate content across the .gitlab-ci file.
So in your scenario, it would look something like:
stages:
- security
- quality
- test
- build
- deploy
image: node:10.15.0
.before_script_template: &build_test-integration
before_script:
- apt-get update
- apt-get -y install apt-transport-https ca-certificates curl gnupg2 software-properties-common
- curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -
- add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"
- apt-get update
- apt-get -y install docker-ce
- curl -L "https://github.com/docker/compose/releases/download/1.23.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
- chmod +x /usr/local/bin/docker-compose
- docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
services:
- mongo
- docker:dind
security:
stage: security
script:
- npm audit
quality:
stage: quality
script:
- npm install
- npm run-script lint
test_unit:
stage: test
script:
- npm install
- npm run-script unit-test
test_integration:
stage: test
<<: *build_test-integration
script:
- docker-compose -f CI/backend-service/docker-compose.yml up -d
- npm install
- npm run-script integration-test
build:
stage: build
<<: *build_test-integration
script:
- npm install
- export VERSION=`git describe --tags --always`
- docker build -t $CI_REGISTRY_IMAGE:$VERSION .
- docker push $CI_REGISTRY_IMAGE
deploy:
stage: deploy
script: echo 'deploy'
Edit: there is another way, instead of using anchors, you could also use extends syntax:
.before_script_template:
before_script:
- apt-get update
- apt-get -y install apt-transport-https ca-certificates curl gnupg2 software-properties-common
- curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -
- add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"
- apt-get update
- apt-get -y install docker-ce
- curl -L "https://github.com/docker/compose/releases/download/1.23.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
- chmod +x /usr/local/bin/docker-compose
- docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
test_integration:
extends: .before_script_template
stage: test
script:
- docker-compose -f CI/backend-service/docker-compose.yml up -d
- npm install
- npm run-script integration-test
build:
extends: .before_script_template
stage: build
script:
- npm install
- export VERSION=`git describe --tags --always`
- docker build -t $CI_REGISTRY_IMAGE:$VERSION .
- docker push $CI_REGISTRY_IMAGE
etc
| GitLab | 54,074,433 | 34 |
On a private repository from gitlab, when I run git clone [email protected]:group/project-submodule.git the clone completes successfully.
As part of the cloning process, I'm asked for the passphrase of my private key.
When I run
submodule update --init "group/project-submodule"
It fails with:
Permission denied, please try again.
Permission denied, please try again.
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
fatal: Could not read from remote repository.
While trying to process the submodule getting, I'm not asked for the passphrase for my private key.
(I had to anonymize it)
fatal: clone of '[email protected]:group/project-submodule.git' into submodule path 'C:/Users/user/repos/project-module/project-submodule' failed
I've checked the .gitmodules file and it contains the right data (I think it can be confirmed by the error message).
The main thing that catches my attention is that I'm not asked for my private key passphrase, which is even weirder because, when I use git clone directly, it runs as expected.
I also diagnosed this by connecting over ssh, and it asks me for the passphrase just like it does when I execute a pull or a clone.
Using git for windows "git version 2.16.2.windows.1"
| Git tries to clone the submodule using ssh and not https.
If you haven't configured your ssh key this will fail.
You can setup ssh-agent to cache the password for the ssh key and get git to use that. Or change to https.
Git is a bit confusing regarding submodules. They are configured in the .gitmodules file in the directory, but changing the url here from ssh to https won't help. Git uses the url that is configured in .git/config.
Open this file and you will find something like this.
[submodule "project-submodule"]
url = [email protected]:project-submodule.git
Change this url to the https equivalent and try again.
| GitLab | 49,191,565 | 34 |
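If editing .git/config by hand is tedious (for example with many submodules), git's URL rewriting can translate the ssh-style URLs to HTTPS in one rule. This is a sketch; gitlab.example.com and the group/project names are placeholders for your own host:

```shell
# Use a throwaway HOME so the demo does not touch your real config
export HOME="$(mktemp -d)"

# Every URL starting with "[email protected]:" is fetched as
# "https://gitlab.example.com/" instead, submodule clones included
git config --global url."https://gitlab.example.com/".insteadOf "[email protected]:"

# Show the rule that was stored; prints: [email protected]:
git config --global --get url."https://gitlab.example.com/".insteadOf
```

With this rule in place, `git submodule update --init` resolves `[email protected]:group/project-submodule.git` as `https://gitlab.example.com/group/project-submodule.git` without changing .gitmodules or .git/config.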
I'm currently using gitlab.com (not local installation) with their multi-runner for CI integration. This works great on one of my projects but fails for another.
I'm using 2012R2 for my host with MSBuild version 14.0.23107.0. I know the error below shows 403 which is an access denied message. My problem is finding the permission setting to change.
Error message:
Running with gitlab-ci-multi-runner 1.5.3 (fb49c47) Using Shell
executor... Running on WIN-E0ORPCQUFHS...
Fetching changes...
HEAD is now at 6a70d96 update runner file remote: Access denied fatal:
unable to access
'https://gitlab-ci-token:[email protected]/##REDACTED##/ADInactiveObjectCleanup.git/':
The requested URL returned error: 403 Checking out 60ea1410 as
Production...
fatal: reference is not a tree:
60ea1410dd7586f6ed9535d058f07c5bea2ba9c7 ERROR: Build failed: exit
status 128
gitlab-ci.yml file:
variables:
Solution: ADInactiveObjectCleanup.sln
before_script:
#- "echo off"
#- 'call "%VS120COMNTOOLS%\vsvars32.bat"'
## output environment variables (usefull for debugging, propably not what you want to do if your ci server is public)
#- echo.
#- set
#- echo.
stages:
- build
#- test
#- deploy
build:
stage: build
script:
- echo building...
- '"%ProgramFiles(x86)%\MSBuild\14.0\Bin\msbuild.exe" "%Solution%" /p:Configuration=Release'
except:
#- tags
| To resolve this issue I had to add myself as a project member. This is a private repo. I'm not sure if that caused the runner to fail with the different permission setup or not, but it is highly possible.
This help article at gitlab outlines this issue.
With the new permission model in place, there may be times that your
build will fail. This is most likely because your project tries to
access other project's sources, and you don't have the appropriate
permissions. In the build log look for information about 403 or
forbidden access messages.
As an Administrator, you can verify that the
user is a member of the group or project they're trying to have access
to, and you can impersonate the user to retry the failing build in
order to verify that everything is correct.
From the project page click the settings gear and then click members. Add yourself (or user generating builds) as a member to the project. I used the "Master" Role, but based off of this document you can probably use the "Reporter" role as a minimum. The reporter role is the least privilege that still has access to "Pull project code." This removed my 403 error and allowed me to continue on.
| GitLab | 40,006,690 | 34 |
I have lost my Phone and do not have the recovery code for my 2FA for GitLab.
So I am locked out of my account.
What are my options?
| I know this is an old question, but the following, which I have tested only with gitlab.com free hosted accounts, may be useful for others with GitLab 2fa problems.
IF
you have set up 2fa but then lost access to your 2fa device for some reason, and
you have lost (or never saved) your recovery codes, and
you had previously configured your ssh key in your gitlab.com account
THEN ...
You can create a brand new list of recovery codes via ssh:
ssh [email protected] 2fa_recovery_codes
Answer the questions and save the list of recovery codes somewhere safe this time! I'm guilty of all of the above and this solution provided by GitLab is both simple and elegant.
Source: https://gitlab.com/gitlab-org/gitlab-ce/issues/3765
| GitLab | 39,142,153 | 34 |
Is it possible to mark gitlab ci jobs to start manually?
I need it for deploying an application, but I want to decide when it gets deployed.
| This has changed since the first answer has been posted. Here's the link to the original Gitlab Issue. It is now supported to do something like
production:
stage: deploy
script: run-deployment $OMNIBUS_GITLAB_PACKAGE
environment: production
when: manual
Note the when: manual attribute. The UI updates itself to provide a way for users to trigger the job.
| GitLab | 36,663,765 | 34 |
I see two possibilities of doing this:
Do a replace of the local branch with the changes from remote master
Follow the work flow that I get using Gitlab by creating a merge request and merge the changes from master branch into the branch that I wish to update to the latest from master
What are the advantages and disadvantages of both these approaches? I'm leaning more towards the first approach. What do you guys say?
| The simple answer - there are plenty of more complicated ones - is to just do a merge, so:
git checkout master
git pull
git checkout <your-branch>
git merge master
(This is effectively the same as you describe in option 2)
Depending on your settings, you might not need all of those steps (but doing them all won't hurt) - I'd recommend reading up on each of the commands to find the precise workflow that fits you best.
This will merge the changes from master into your branch and probably create a new commit, with a comment making it clear that it's a merge.
The alternative, and slightly more advanced option would be to rebase, instead of merge, which will effectively rewind time to the point at which your branch diverged from master, then pull in the changes on master, bringing your branch in-line with master, but without your commits, and finally apply your commits at the end. The advantage of this is that it keeps the history more simple - you just get a straight line of changes, with the changes from your branch right at the end, rather than two separate branches that join at the point of the merge.
To do that, you'd do:
git checkout <your-branch>
git rebase master
I'd recommend reading the docs on rebase, because there are lots of cases where it gets difficult, and if you're new to git, definitely go for merge, but come back to rebase when you're more confident - it's a very powerful feature, and more like what I think you're describing in your option 1.
| GitLab | 34,656,523 | 34 |
There is a Git branch which was deleted by GitLab when closing a merge request. I would like to restore (undelete) that branch; however, I'm not seeing an option in the UI to do so.
In GitHub it is possible to restore a branch deleted by a pull request after the fact (via the "Restore branch" button on the pull request). I'm wondering if GitLab has an analogous feature for closed merge requests.
I can of course do this manually through the Git command line, checking out the last commit hash for the deleted branch, and pushing it back up to the deleted branch name. However, I would rather use the UI for something like this if it exists.
How can I restore a branch which GitLab deleted when closing a merge request?
| Restoring a deleted branch is an open issue, so GitLab has not implemented this feature at the time of this writing.
However, if you know the commit ID (and it hasn't been pruned), you can create a new branch from that commit:
From the Web UI, go to Repository > Commits
Find the commit you want and copy the SHA to your clipboard
Go to Repository > Branches
Click "New Branch"
Enter a branch name
Click the "Create from" drop-down menu and enter the commit SHA. Press enter to apply it.
Click "Create Branch"
| GitLab | 69,761,824 | 33 |
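The command-line route the question alludes to is short enough to show. This local sketch uses a made-up branch name; against a real remote you would finish with `git push origin feature-x`:

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "last commit of the deleted branch"
sha=$(git rev-parse HEAD)        # the SHA copied from Repository > Commits

git branch feature-x "$sha"      # recreate the deleted branch at that commit
git rev-parse feature-x          # same SHA: the branch is back
```

This works only while the commit is still reachable (i.e., it has not been garbage-collected on the server), which matches the caveat in the answer above.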
When a non-owner dev pushes a branch to our Gitlab repo, it returns a "pipeline failed" message, with the detail "Pipeline failed due to the user not being verified". On the dev's account, he's getting a prompt to add a credit card to verify him to be eligible for free pipeline minutes.
But I haven't set up any pipelines - I don't have a gitlab-ci.yml file in my repo, neither does the new branch. There are no jobs or schedules under the CI/CD tab of the project on Gitlab. So why is there a marker saying the branch failed in the pipeline?
| In my case, I was using my own runner for my project. In that case also, I got this error.
I fixed the error by disabling the shared runner in my project.
Under
Settings -> CI/CD -> Runners (expand) -> under the Shared runners section, disable shared runners.
| GitLab | 67,875,196 | 33 |
My GitLab pipelines execute automatically on every push, I want to manually run pipeline and not on every push.
Pipeline docs: https://docs.gitlab.com/ee/ci/yaml/#workflowrules
I tried this in
.gitlab-ci.yml
workflow:
rules:
- when: manual # Error: workflow:rules:rule when unknown value: manual
| You should specify a condition that tells Gitlab to not run the pipeline specifically on push events like so:
workflow:
  rules:
    - if: '$CI_PIPELINE_SOURCE == "push"'
      when: never # Prevent pipeline run for push event
    - when: always # Run pipeline for all other cases
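With push pipelines blocked this way, a run started by hand still goes through, because its pipeline source is "web" (UI) or "api", not "push". One way to start it is GitLab's pipeline-creation API; the project ID, token variable, and ref below are placeholders, and the request is only echoed as a dry run so the snippet executes without network access.

```shell
# Build the API call that creates a pipeline for a given ref by hand.
PROJECT_ID=12345
REF="main"
cmd="curl -X POST --header 'PRIVATE-TOKEN: \$GITLAB_TOKEN' \
  'https://gitlab.com/api/v4/projects/$PROJECT_ID/pipeline?ref=$REF'"
echo "$cmd"   # dry run: print the request instead of sending it
```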
| GitLab | 64,557,223 | 33 |
I currently set up a Jenkins Multibranch Pipeline job that is based on a Git repository hosted on our GitLab server. Jenkins can read the branches in the repository and creates a job for every branch in the repository. But I can't figure out how to trigger the jobs with webhooks in GitLab.
My questions are:
How can I trigger the creation of a new branch job in Jenkins from our GitLab server? I can't see a webhook for a new branch being pushed.
How do I trigger the actual build job for a single branch? I can only add a webhook for push events but then I would have to add the branch name which I don't know how to do.
How can I make sure that GitLab always triggers the "creation of the branch job" before a push to a branch triggers the build job itself.
What I tried so far is triggering the multi-branch job, but this has no effect and following this post does not work at all.
| You need to install the GitLab Plugin on Jenkins.
This will add a /project endpoint on Jenkins. (See it in Jenkins => Manage Jenkins => Configure System => GitLab)
Now add a webhook to your GitLab project => Settings => Integrations. (or in older GitLab versions: GitLab project => Wheel icon => Integrations, it seems you need to be owner of the project in this case)
Set the URL to http://yourjenkins.com/project/yourprojectname (or http://yourjenkins.com/project/foldername/yourprojectname if the job lives in a folder), then click "Add webhook".
When you click "Test" on your webhook it should trigger your Jenkins pipeline build (you should get a 200 HTTP response).
This works without authentication configured in the GitLab plugin; configurations with authentication enabled are welcome.
| GitLab | 40,979,405 | 33 |
I am trying to connect my GitLab repository with IntelliJ-IDEA, and it still cant connect to the repo. I have tried the next things:
I have msysgit installed correctly
Generated the SSH keys (https://help.github.com/articles/generating-ssh-keys/)
Added the key to my GitLab SSH keys
Defined the environment variables HOME and USERPROFILE to point to C:\Users\sebastian.garces, so the keys are looked up in %USERPROFILE%\.ssh / %HOME%\.ssh
In IntelliJ changed SSH executable to Native
I did a lot of things from these links:
How do I connect IntelliJ to GitHub using SSH
How to store SSH host key in IntelliJ IDEA
And many other Google searches.
I don't know what else to do; nothing is working.
UPDATE: When I try to clone the repository and press the Test button, it loads and loads, and after a while it gives me this error: repository test has failed
 | Try installing this plugin:
Settings -> Plugins -> Browse repositories -> search for "GitLab Projects Plugin" (minimum version 1.3.0)
and go to Settings -> Other Settings -> GitLab Settings
Fill GitLab Server Url with https://gitlab.com/ (make sure it ends with a slash)
and
fill GitLab API Key with your private token (shown at the top of https://gitlab.com/profile/account).
Then the test in IntelliJ IDEA will pass, and you will be able to use SSH or HTTPS for each repository.
UPDATE: GitLab UI changed, so private tokens are created at https://gitlab.com/profile/personal_access_tokens
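Independent of the plugin, the failing clone/Test can be narrowed down by checking the SSH pieces outside the IDE. The sketch below generates a throwaway ed25519 key into a temp directory (a hedged modern choice; any key type GitLab accepts works); the final gitlab.com check needs network access and the key registered in GitLab, so it is left commented out.

```shell
# Generate a test key and print the public half for pasting into GitLab.
tmp=$(mktemp -d)
ssh-keygen -q -t ed25519 -N "" -f "$tmp/id_ed25519" -C "gitlab-test"
cat "$tmp/id_ed25519.pub"   # paste this into GitLab > Settings > SSH Keys
# ssh -T git@gitlab.com     # prints a welcome message once the key is registered
```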
| GitLab | 31,975,143 | 33 |