Dataset columns: question (string, 11 to 28.2k chars), answer (string, 26 to 27.7k chars), tag (string, 130 classes), question_id (int64, 935 to 78.4M), score (int64, 10 to 5.49k)
I have a pika callback:

    def callback(ch, method, properties, body):
        prop = properties
        print prop
        #print prop[1]
        #print prop["headers"]
        #print prop.headers()
        print body

This is the list from prop:

    <BasicProperties(['delivery_mode=2', "headers={'BIProto.ClickEvent': 'BIProto.ClickEvent'}", 'content_type=application/x-protobuf'])>

I'm able to print body and the list of basic properties. But how can I get just the headers? All the commented-out print statements in the function errored.
Nevermind, all I had to do was print prop.headers
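For reference, a minimal sketch of the working approach inside a pika callback. properties here is a pika BasicProperties object, and its headers attribute is a plain dict (or None if no headers were set):

    def callback(ch, method, properties, body):
        headers = properties.headers  # e.g. {'BIProto.ClickEvent': 'BIProto.ClickEvent'}
        if headers:
            for key, value in headers.items():
                print key, value
        print body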
RabbitMQ
13,460,352
11
I'm using RabbitMQ as my message broker and my workers are Celery tasks. I'm trying to diagnose an issue where I enqueue tasks to RabbitMQ but Celery doesn't pick them up. Is there a way I can check what tasks are enqueued in RabbitMQ? I'd like to see the date and time when they were enqueued, whether an ETA is specified, the arguments, and the task name. I haven't been able to find this information in the docs (maybe I've overlooked it) and was hoping that some of you might know an easy way to inspect the task queue. Thanks.
You can use Flower to monitor tasks in real time: https://github.com/mher/flower. Check out also the rabbitmqctl command, which inspects RabbitMQ server status: http://www.rabbitmq.com/man/rabbitmqctl.1.man.html, e.g.

    rabbitmqctl list_queues
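If you want to see the actual task payloads (task name, args, ETA) rather than just counts, the management plugin's HTTP API can peek at a few messages and requeue them. A sketch, assuming the management plugin on localhost:15672, the default vhost "/", and Celery's default queue name "celery" (the request body fields vary a little between RabbitMQ versions; older ones take a boolean "requeue" instead of "ackmode"):

    import json
    import requests

    url = 'http://localhost:15672/api/queues/%2F/celery/get'
    body = {'count': 5, 'ackmode': 'ack_requeue_true', 'encoding': 'auto'}
    r = requests.post(url, auth=('guest', 'guest'), data=json.dumps(body))
    for msg in json.loads(r.content):
        payload = json.loads(msg['payload'])  # a Celery task message
        print payload.get('task'), payload.get('args'), payload.get('eta')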
RabbitMQ
13,049,829
11
I am using Spring AMQP 1.1 as my Java client. I have a queue which has around 2000 messages. I want to have a service which checks this queue's size and, if it is empty, sends out a message saying "All items processed". I don't know how to get the current queue size. Please help. I googled and found a class "RabbitBrokerAdmin" that was present in the earlier 1.0 version; I think it is not present in 1.1 now. Any pointers on getting the current queue size?
So I know this is a little late and a solution has already been found, but here is another way to look at the message counts in your queues. This solution assumes that you are using the Spring RabbitMQ framework and have defined your queues in your application config with the <rabbit:queue> and <rabbit:admin> tags. The Java class:

    public class QueueStatsProcessor {
        @Autowired
        private RabbitAdmin admin;
        @Autowired
        private List<Queue> rabbitQueues;

        public void getCounts(){
            Properties props;
            Integer messageCount;
            for(Queue queue : rabbitQueues){
                props = admin.getQueueProperties(queue.getName());
                messageCount = Integer.parseInt(props.get("QUEUE_MESSAGE_COUNT").toString());
                System.out.println(queue.getName() + " has " + messageCount + " messages");
            }
        }
    }

You can also use this solution to read the current consumers attached to the queue: http://docs.spring.io/spring-amqp/docs/1.2.1.RELEASE/api/org/springframework/amqp/rabbit/core/RabbitAdmin.html#getQueueProperties(java.lang.String)
RabbitMQ
11,446,443
11
I have just installed RabbitMQ on my Windows XP PC and fulfilled the Erlang OTP R15 prereq as well. My RabbitMQ seems to be working: I did a simple test using pika in Python and it works, and the service is running. The problem is that I cannot do anything with rabbitmqctl.bat. I always get the response:

    Status of node rabbit@MYPCNAME ...
    Error: unable to connect to node rabbit@MYPCNAME: nodedown
    diagnostics:
    - nodes and their ports on MYPCNAME: [{rabbit,3097},{rabbitmqctl17251,1132}]
    - current node: rabbitmqctl17251@mypcname
    - current node home dir: C:\Documents and Settings\Myuser
    - current node cookie hash: NOTSUREIFTHISISSENSITIVESOREMOVED==

In my rabbitmq log file I get:

    =ERROR REPORT==== 12-Feb-2012::17:01:22 ===
    ** Connection attempt from disallowed node rabbitmqctl17251@mypcname **

From various forums I deduce this has something to do with cookies. What cookies are we talking about? What do I need to do to be able to manage my RabbitMQ instance using rabbitmqctl.bat? Please word your answer in a way that a non-Erlang, non-functional programmer would understand.
Had the same problem; this instruction straight out of the manual installation guide solved it:

"Synchronise Erlang Cookies (when running a manually installed Windows Service): Erlang security cookies used by the service account and the user running rabbitmqctl.bat must be synchronised for rabbitmqctl.bat to function. To ensure Erlang cookie files contain the same string, copy the .erlang.cookie file from the Windows directory (normally C:\WINDOWS\.erlang.cookie) to replace the user .erlang.cookie. The user cookie will be in the user's home directory (%HOMEDRIVE%%HOMEPATH%), e.g. C:\Documents and Settings\%USERNAME%\.erlang.cookie or C:\Users\%USERNAME%\.erlang.cookie (Windows Vista and later)."
RabbitMQ
9,250,364
11
I need to develop a realtime recent-activity feed in Django (with AJAX long-polling), and I'm wondering what the best strategy is for the server side. Pseudocode:

    def recent_activity_post_save():
        notify_view()

    # in the view:
    while not new_activity():
        sleep(1)
    return HttpResponse(new_activity())

The first thing that comes to mind is querying the DB every second. Not feasible. Other options: using the cache as a notification service, or using a specialized tool like Celery (I'd rather not, because it seems like overkill). What's the best way to go here?
I would suggest keeping it simple... Create a database table to store your events, insert into that table when appropriate, then just implement a simple AJAX polling technique to hit the server every x seconds on the client side. I have concerns with the other solutions considered here, using a push-notification approach or a NoSQL data store. It's a whole lot more complicated than a traditional pull-notification system using the tools that are built into the Django framework and, except for very rare exceptions, is overkill. Unless you specifically require a strict real-time solution, keep it simple and use the tools that already exist in the framework; for people with objections based on database or network performance, all I have to say is that premature optimization is the root of all evil. Build a model that contains recent-activity data specific to your application; then, whenever your application does something that should log new activity, you can just insert into this table. Your view would simply be like any other view, pulling the top x rows from this RecentActivity table (optionally based on query parameters and whatever), as sketched below. Then, on the client side, you'd just have a simple AJAX poller hitting your view every x seconds. There is no shortage of complicated plugins and technologies you can use, but writing your own isn't that complicated either:

    function simplePoll() {
        $.get("your-url", {query-parameters}, function(data){
            // do stuff with the data, replacing a div or updating json or whatever
            setTimeout(simplePoll, delay);
        });
    }

My opinion is that performance issues aren't really issues until your site is successful enough for them to be an issue. A traditional relational database can scale up fairly well until you start reaching the level of success of Twitter, Google, etc. Most of us aren't at that level :)
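For completeness, a minimal sketch of the server side of this, assuming a hypothetical RecentActivity model with a created timestamp field (older Django versions spell the content_type argument mimetype):

    import json
    from django.http import HttpResponse
    from myapp.models import RecentActivity  # hypothetical model

    def recent_activity(request):
        # pull the newest 20 events; the client-side poller hits this view
        events = RecentActivity.objects.order_by('-created')[:20]
        data = [{'id': e.id, 'text': unicode(e), 'created': e.created.isoformat()}
                for e in events]
        return HttpResponse(json.dumps(data), content_type='application/json')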
RabbitMQ
7,460,149
11
Our requirement is very simple: send messages to users subscribed to a topic. We need our messaging system to be able to support millions of topics, and maybe millions of subscribers to any given topic, in near real time. Our application is built with Java. We almost decided on RabbitMQ because of the community support, documentation, and features (possibly it will deliver everything we need). But I am very inclined towards using Redis because it looks promising and lightweight. Honestly, I have a limited understanding of Redis as a messaging system, but looking at a growing number of companies using it for queuing (with Ruby Resque), I want to know if there is an offering like Resque in Java, and what the advantages or disadvantages are of using Redis as an MQ over RabbitMQ.
RabbitMQ supports clustering and now has active/active highly available queues, allowing for greater scale-out and availability options than are possible with Redis out of the box. RabbitMQ gives you a greater amount of control over everything from users/permissions on exchanges/queues, to the durability of a specific exchange or queue (disk vs memory), to the guarantees of delivery (transactions, publisher confirms). It also allows for more flexibility and options in your topologies (fanout, topic, direct), routing to multiple queues, RPC with private queues and reply-to, etc.
RabbitMQ
7,382,655
11
UPDATE 3: found the issue. See the answer below.

UPDATE 2: It seems I might have been dealing with an automatic naming and relative imports problem by running the djcelery tutorial through the manage.py shell; see below. It is still not working for me, but now I get new log error messages. See below.

UPDATE: I added the log at the bottom of the post. It seems the example task is not registered?

Original Post: I am trying to get django-celery up and running. I was not able to get through the example. I installed rabbitmq successfully and went through the tutorials without trouble: http://www.rabbitmq.com/getstarted.html

I then tried to go through the djcelery tutorial. When I run python manage.py celeryd -l info I get the message:

    [Tasks]
    - app.module.add
    [2011-07-27 21:17:19,990: WARNING/MainProcess] celery@sequoia has started.

So that looks good. I put this at the top of my settings file:

    import djcelery
    djcelery.setup_loader()
    BROKER_HOST = "localhost"
    BROKER_PORT = 5672
    BROKER_USER = "guest"
    BROKER_PASSWORD = "guest"
    BROKER_VHOST = "/"

I added 'djcelery' to my installed apps. Here is my tasks.py file in the tasks folder of my app:

    from celery.task import task

    @task()
    def add(x, y):
        return x + y

I added this to my django.wsgi file:

    os.environ["CELERY_LOADER"] = "django"

Then I entered this at the command line:

    >>> from app.module.tasks import add
    >>> result = add.delay(4,4)
    >>> result
    (AsyncResult: 7auathu945gry48- a bunch of stuff)
    >>> result.ready()
    False

So it looks like it worked, but here is the problem:

    >>> result.result
    (nothing is returned)
    >>> result.get()

When I put in result.get() it just hangs. What am I doing wrong?

UPDATE: This is what running the logger in the foreground says when I start up the worker server:

    No handlers could be found for logger "multiprocessing"
    [Configuration]
    - broker: amqplib://guest@localhost:5672/
    - loader: djcelery.loaders.DjangoLoader
    - logfile: [stderr]@INFO
    - concurrency: 4
    - events: OFF
    - beat: OFF
    [Queues]
    - celery: exchange: celery (direct) binding: celery
    [Tasks]
    - app.module.add
    [2011-07-27 21:17:19,990: WARNING/MainProcess] celery@sequoia has started.
    C:\Python27\lib\site-packages\django-celery-2.2.4-py2.7.egg\djcelery\loaders.py:80:
    UserWarning: Using settings.DEBUG leads to a memory leak, never use this setting in
    production environments! warnings.warn("Using settings.DEBUG leads to a memory leak, never")

Then when I put in the command:

    >>> result = add(4,4)

this appears in the error log:

    [2011-07-28 11:00:39,352: ERROR/MainProcess] Unknown task ignored: Task of kind 'task.add'
    is not registered, please make sure it's imported.
    Body->"{'retries': 0, 'task': 'tasks.add', 'args': (4,4), 'expires': None, 'eta': None,
    'kwargs': {}, 'id': '225ec0ad-195e-438b-8905-ce28e7b6ad9'}"
    Traceback (most recent call last):
      File "C:\Python27\..\celery\worker\consumer.py", line 368, in receive_message
        eventer=self.event_dispatcher)
      File "C:\Python27\..\celery\worker\job.py", line 306, in from_message
        **kw)
      File "C:\Python27\..\celery\worker\job.py", line 275, in __init__
        self.task = tasks[self.task_name]
      File "C:\Python27\...\celery\registry.py", line 59, in __getitem__
        raise self.NotRegistered(key)
    NotRegistered: 'tasks.add'

How do I get this task to be registered and handled properly? Thanks.
UPDATE 2: This link suggested that the "not registered" error can be due to task name mismatches between client and worker: http://celeryproject.org/docs/userguide/tasks.html#automatic-naming-and-relative-imports

I exited the manage.py shell, entered a plain python shell, and entered the following:

    >>> from app.module.tasks import add
    >>> result = add.delay(4,4)
    >>> result.ready()
    False
    >>> result.result
    (nothing returned)
    >>> result.get()
    (it just hangs there)

So I am getting the same behavior, but a new log message. From the log, it appears the server is working but it won't feed the result back out:

    [2011-07-28 11:39:21,706: INFO/MainProcess] Got task from broker: app.module.tasks.add[7e794740-63c4-42fb-acd5-b9c6fcd545c3]
    [2011-07-28 11:39:21,706: INFO/MainProcess] Task app.module.tasks.add[7e794740-63c4-42fb-acd5-b9c6fcd545c3] succeeded in 0.04600000038147s: 8

So the server got the task and it computed the correct answer, but it won't send it back? Why not?
I found the solution to my problem in another Stack Overflow post: Why does Celery work in Python shell, but not in my Django views? (import problem)

I had to add these lines to my settings file:

    CELERY_RESULT_BACKEND = "amqp"
    CELERY_IMPORTS = ("app.module.tasks", )

Then, in the tasks.py file, I named the task as such:

    @task(name="module.tasks.add")

The server and the client had to be informed of the task names. The celery and django-celery tutorials omit these lines.
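Putting the fix together, a minimal sketch of the matching pieces, using the question's app.module layout (the point being that the name the worker registers and the name the client sends must be identical):

    # settings.py
    CELERY_RESULT_BACKEND = "amqp"
    CELERY_IMPORTS = ("app.module.tasks", )

    # app/module/tasks.py
    from celery.task import task

    @task(name="module.tasks.add")  # explicit name shared by worker and client
    def add(x, y):
        return x + y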
RabbitMQ
6,854,133
11
Is it possible with RabbitMQ and Python to do content-based routing? The AMQP standard and RabbitMQ claim to support content-based routing, but are there any libraries for Python which support specifying content-based bindings etc.? The library I am currently using (py-amqplib, http://barryp.org/software/py-amqplib/) seems to only support topic-based routing with simple pattern matching (#, *).
The answer is "yes", but there's more to it... :)

Let's first agree on what content-based routing means. There are two possible meanings: some people say that it is based on the header portion of a message, others say it's based on the data portion of a message.

If we take the first definition, these are more or less the assumptions we make: the data comes into existence somewhere, and it gets sent to the AMQP broker by some piece of software. We assume that this piece of software knows enough about the data to put key-value (KV) pairs in the header of the message that describe the content. Ideally, the sender is also the producer of the data, so it has as much information as we could ever want. Let's say the data is an image. We could then have the sender put KV pairs in the message header like this:

    width=1024
    height=768
    mode=bw
    photographer=John Doe

Now we can implement content-based routing by creating appropriate queues. Let's say we have a separate operation to perform on black-and-white images and a separate one on colour images. We can create two queues, one that receives messages with mode=bw and another with mode=colour. Then we have separate clients listening on those queues. The broker performs the routing, and there is nothing in our client that needs to be aware of the routing.

If we take the second definition, we go from different assumptions. We assume that the data comes into existence somewhere, and it gets sent to the AMQP broker by some piece of software. But we assume that it's not sensible to demand that that software should populate the header with KV pairs. Instead, we want to make a routing decision based on the data itself.

There are two options for this in AMQP: you can decide to implement a new exchange for your particular data format, or you can delegate the routing to a client. In RabbitMQ, there are direct (1-to-1), fanout (1-to-N), headers (header-filtered 1-to-N) and topic (topic-filtered 1-to-N) exchanges, but you can implement your own according to the AMQP standard. This would require reading a lot of RabbitMQ documentation and implementing the exchange in Erlang.

The other option is to make an AMQP client in Python that listens to a special "content routing queue". Whenever a message arrives at the queue, your router-client picks it up, does whatever is needed to make a routing decision, and sends the message back to the broker to a suitable queue. So to implement the scenario above, your Python program would detect whether an image is in black-and-white or colour, and would (re)send it to a "black-and-white" or a "colour" queue, where some suitable client would take over.

So on your second question, there's really nothing that you do in your client that does any content-based binding. Either your client(s) work as described above, or you create a new exchange type in RabbitMQ itself. Then, in your client setup code, you define the exchange type to be your new type.

Hope this answers your question!
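For the header-based flavour, RabbitMQ's built-in headers exchange already does this, so no custom Erlang exchange is needed. A sketch with pika (exchange and queue names are made up; older pika spells exchange_type as type):

    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
    channel = connection.channel()

    channel.exchange_declare(exchange='images', exchange_type='headers')
    channel.queue_declare(queue='bw_images')
    # x-match=all: every listed header must match for a message to be routed here
    channel.queue_bind(queue='bw_images', exchange='images',
                       arguments={'x-match': 'all', 'mode': 'bw'})

    channel.basic_publish(exchange='images', routing_key='',
                          properties=pika.BasicProperties(headers={'mode': 'bw'}),
                          body='...image bytes...')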
RabbitMQ
3,280,676
11
What's the difference between SimpleMessageListenerContainer and DirectMessageListenerContainer in Spring AMQP? I checked both of their documentation pages. SimpleMessageListenerContainer has almost no explanation of its inner workings, while DirectMessageListenerContainer has the following explanation:

"The SimpleMessageListenerContainer is not so simple. Recent changes to the rabbitmq java client has facilitated a much simpler listener container that invokes the listener directly on the rabbit client consumer thread. There is no txSize property - each message is acked (or nacked) individually."

I don't really understand what these mean. It says the listener container invokes the listener directly on the rabbit client consumer thread. If so, then how does SimpleMessageListenerContainer do the invocation? I wrote a small application and used DirectMessageListenerContainer, and just to see the difference I switched to SimpleMessageListenerContainer, but as far as I could see there was no difference on the RabbitMQ side. From the Java side the difference was in the methods (SimpleMessageListenerContainer provides more) and the logs (DirectMessageListenerContainer logged more). I would like to know the scenarios in which to use each of them.
The SMLC has a dedicated thread for each consumer (concurrency) which polls an internal queue. When a new message arrives for a consumer on the client thread, it is put in the internal queue and the consumer thread picks it up and invokes the listener. This was required with early versions of the client to provide multi-threading. With the newer client that is not a problem so we can invoke the listener directly (hence the name). There are a few other differences aside from txSize. See Choosing a Container.
RabbitMQ
56,438,819
10
I'm trying to set up a subscription to a RabbitMQ queue and pass it a custom event handler. So I have a class called RabbitMQClient which contains the following method:

    public void Subscribe(string queueName, EventHandler<BasicDeliverEventArgs> receivedHandler)
    {
        using (var connection = factory.CreateConnection())
        {
            using (var channel = connection.CreateModel())
            {
                channel.QueueDeclare(
                    queue: queueName,
                    durable: false,
                    exclusive: false,
                    autoDelete: false,
                    arguments: null
                );
                var consumer = new EventingBasicConsumer(channel);
                consumer.Received += receivedHandler;
                channel.BasicConsume(
                    queue: queueName,
                    autoAck: false,
                    consumer: consumer
                );
            }
        }
    }

I'm using dependency injection, so I have a RabbitMQClient (singleton) interface for it. In my consuming class, I have this method which I want to act as the EventHandler:

    public void Consumer_Received(object sender, BasicDeliverEventArgs e)
    {
        var message = e.Body.FromByteArray<ProgressQueueMessage>();
    }

And I'm trying to subscribe to the queue like this:

    rabbitMQClient.Subscribe(Consts.RabbitMQ.ProgressQueue, Consumer_Received);

I can see that the queue starts to get messages, but the Consumer_Received method is not firing. What am I missing here?
"Using" calls dispose on your connection and your event wont be triggered. Just remove your "using" block from the code so that it doesn't close the connection. var connection = factory.CreateConnection(); var channel = connection.CreateModel(); channel.QueueDeclare( queue: queueName, durable: false, exclusive: false, autoDelete: false, arguments: null); var consumer = new EventingBasicConsumer(channel); consumer.Received += receivedHandler; channel.BasicConsume( queue: queueName, autoAck: false, consumer: consumer);
RabbitMQ
52,159,857
10
Apache Pulsar (by Yahoo) seems to be the next generation of Apache Kafka. Apache RocketMQ (by Alibaba) seems to be the next generation of Apache ActiveMQ. Both are open source distributed messaging and streaming data platforms. But how do they compare? When should I prefer one over another in terms of features and performance? Is Pulsar (like Kafka) strictly better at streaming, and RocketMQ (like ActiveMQ) strictly better at messaging?
It looks like you answered your own question. To be fair, the main advantages of Pulsar over RocketMQ are:

- Pulsar is oriented to topics and multi-topic use; RocketMQ is more interested in batching and keeps an index of the messages.
- With RocketMQ you still need an adaptor to keep backwards compatibility; Pulsar, on the other hand, comes with one built in.
- RabbitMQ is a push model, while RocketMQ is a pull model, since it has zero-loss tolerance.
- Pulsar offers message priority; RocketMQ, since it's a queue, doesn't support that.
RabbitMQ
50,826,968
10
We have been working with celery for the last year, with ~15 workers, each one defined with concurrency between 1-4. Recently we upgraded our celery from v3.1 to v4.1. Now we are getting the following error in each one of the workers' logs; any ideas what can cause such an error?

    2017-08-21 18:33:19,780 94794 ERROR Control command error: error(104, 'Connection reset by peer') [file: pidbox.py, line: 46]
    Traceback (most recent call last):
      File "/srv/dy/venv/lib/python2.7/site-packages/celery/worker/pidbox.py", line 42, in on_message
        self.node.handle_message(body, message)
      File "/srv/dy/venv/lib/python2.7/site-packages/kombu/pidbox.py", line 129, in handle_message
        return self.dispatch(**body)
      File "/srv/dy/venv/lib/python2.7/site-packages/kombu/pidbox.py", line 112, in dispatch
        ticket=ticket)
      File "/srv/dy/venv/lib/python2.7/site-packages/kombu/pidbox.py", line 135, in reply
        serializer=self.mailbox.serializer)
      File "/srv/dy/venv/lib/python2.7/site-packages/kombu/pidbox.py", line 265, in _publish_reply
        **opts
      File "/srv/dy/venv/lib/python2.7/site-packages/kombu/messaging.py", line 181, in publish
        exchange_name, declare,
      File "/srv/dy/venv/lib/python2.7/site-packages/kombu/messaging.py", line 203, in _publish
        mandatory=mandatory, immediate=immediate,
      File "/srv/dy/venv/lib/python2.7/site-packages/amqp/channel.py", line 1748, in _basic_publish
        (0, exchange, routing_key, mandatory, immediate), msg
      File "/srv/dy/venv/lib/python2.7/site-packages/amqp/abstract_channel.py", line 64, in send_method
        conn.frame_writer(1, self.channel_id, sig, args, content)
      File "/srv/dy/venv/lib/python2.7/site-packages/amqp/method_framing.py", line 178, in write_frame
        write(view[:offset])
      File "/srv/dy/venv/lib/python2.7/site-packages/amqp/transport.py", line 272, in write
        self._write(s)
      File "/usr/lib64/python2.7/socket.py", line 224, in meth
        return getattr(self._sock,name)(*args)
    error: [Errno 104] Connection reset by peer

BTW: our tasks are of the form:

    @app.task(name='EXAMPLE_TASK', bind=True, base=ConnectionHolderTask)
    def example_task(self, arg1, arg2, **kwargs):
        # task code
We are also having massive issues with celery... I spend 20% of my time just dancing around weird idle-hang/crash issues with our workers, sigh. We had a similar case that was caused by high concurrency combined with a high worker_prefetch_multiplier; as it turns out, fetching thousands of tasks is a good way to frack the connection. If that's not the case: try to disable the broker pool by setting broker_pool_limit to None (both settings are sketched below). Just some quick ideas that might (hopefully) help :-)
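For reference, the two knobs mentioned above as they would appear in a Celery 4 settings module (the values are illustrative, not recommendations):

    # celeryconfig.py (hypothetical settings module)
    worker_prefetch_multiplier = 1   # don't prefetch thousands of tasks per worker
    broker_pool_limit = None         # disable the broker connection pool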
RabbitMQ
45,803,728
10
I'm running a site using Django 1.10, RabbitMQ, and Celery 4 on CentOS 7. My Celery Beat and Celery Worker instances are controlled by supervisor, and I'm using the django-celery database scheduler. I've scheduled a cron-style task using the cron scheduler in Django admin. When I start the celery beat and worker instances, the job fires as expected. But if I change the schedule time in Django admin, the change is not picked up unless I restart the celery-beat instance. Is there something I am missing, or do I need to write my own scheduler?

Celery Beat, with 'django_celery_beat.schedulers.DatabaseScheduler', loads the schedule from the database. According to the following doc, this should force Celery Beat to reload: https://media.readthedocs.org/pdf/django-celery-beat/latest/django-celery-beat.pdf

    django_celery_beat.models.IntervalSchedule - a schedule that runs at a specific interval (e.g. every 5 seconds).
    django_celery_beat.models.CrontabSchedule - a schedule with fields like entries in cron: minute hour day-of-week day_of_month month_of_year.
    django_celery_beat.models.PeriodicTasks - this model is only used as an index to keep track of when the schedule has changed. Whenever you update a PeriodicTask, a counter in this table is also incremented, which tells the celery beat service to reload the schedule from the database. If you update periodic tasks in bulk, you will need to update the counter manually:

        from django_celery_beat.models import PeriodicTasks
        PeriodicTasks.changed()

From the above I would expect the Celery Beat process to check the table regularly for any changes.
I changed celery from 4.0 to 3.1.25 and django to 1.9.11, and installed djcelery 3.1.17. Then I tested again, and it's OK. So maybe it's a bug.
RabbitMQ
40,579,804
10
We are using celery to make third-party HTTP calls. We have around 100+ tasks which simply call third-party HTTP APIs. Some tasks call the APIs in bulk (for example, half a million requests at 4 AM in the morning), while some are a continuous stream of API calls, receiving requests almost once or twice per second. Most API call response times are between 500-800 ms. We are seeing very slow delivery rates with celery. For most of the above tasks, the delivery rate ranges from around 100/s (max) down to almost 1/s (min). I believe this is very poor and something is definitely wrong, but I am not able to figure out what it is. We started with a cluster of 3 servers and incrementally made it a cluster of 7 servers, but with no improvement. We have tried different concurrency settings, from autoscale to a fixed 10, 20, 50, 100 workers. There is no result backend and our broker is RabbitMQ. Since our task execution time is very small (less than a second for most), we have also tried setting the prefetch count to values from unlimited down to various fixed numbers. Our worker options:

    --time-limit=1800 --maxtasksperchild=1000 -Ofair -c 64 --config=celeryconfig_production

Servers are 64 GB RAM, CentOS 6.6. Can you give me an idea of what could be wrong, or pointers on how to solve it? Should we go with gevent? Though I have little idea of what it is.
First of all, the GIL: that should not be the cause, since adding more machines should make things go faster. But please check whether the load lands on only one core of the server... I'm not sure if the whole of Celery is a good idea in your case. It is great software, with a lot of functionality. But if that functionality is not needed, it is better to use something simpler, just in case some of those features interfere. I would write a small PoC and check other client software, like pika (see the sketch below). If that does not help, the problem is with your infrastructure. If it helps, you have your solution. :) It is really hard to tell what is going on. It could be something with IO, or too many network calls... I would step back to find something that works. Write integration tests, but be sure to use 2-3 machines just to exercise the full TCP stack. Be sure to have CI, and run those tests once a day or so, to see if things are going in the right direction.
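A minimal pika PoC of the kind suggested above: a bare publisher/consumer pair with no framework in between (the queue name is made up; the basic_consume argument order follows the older pika 0.x API used elsewhere in this document):

    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
    channel = connection.channel()
    channel.queue_declare(queue='poc')

    # publish a batch to measure how fast the broker alone can absorb it
    for i in range(10000):
        channel.basic_publish(exchange='', routing_key='poc', body=str(i))

    def on_message(ch, method, properties, body):
        ch.basic_ack(delivery_tag=method.delivery_tag)

    channel.basic_consume(on_message, queue='poc')
    channel.start_consuming()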
RabbitMQ
36,106,216
10
I've got a problem connecting from Python code using pika to dockerized RabbitMQ. I'm using this code to connect to the queue:

    @retry(wait_exponential_multiplier=1000, wait_exponential_max=10000, stop_max_attempt_number=2)
    def rabbit_connect():
        connection_uri = cfg.get("System", "rabbit_uri", raw=True)
        queue = cfg.get("System", "queue")
        username = cfg.get("System", "username")
        password = cfg.get("System", "password")
        host = cfg.get("System", "rabbit_host")
        port = cfg.get("System", "rabbit_port")
        credentials = pika.PlainCredentials(username, password)
        log.info("Connecting queue %s at %s:%s", queue, host, port)
        connection = None
        try:
            connection = pika.BlockingConnection(pika.ConnectionParameters(credentials=credentials, host=host, port=int(port)))
        except Exception, e:
            log.error("Can't connect to RabbitMQ")
            log.error(e.message)
            raise

And these are my docker containers:

    root@pc:~# docker ps
    CONTAINER ID  IMAGE                  COMMAND                 CREATED            STATUS            PORTS                                                                    NAMES
    2063ad939823  rabbitmq:3-management  "/docker-entrypoint.s"  About an hour ago  Up About an hour  4369/tcp, 5671-5672/tcp, 15671/tcp, 25672/tcp, 0.0.0.0:8080->15672/tcp   new-rabbitmg
    94628f1fb33f  rabbitmq               "/docker-entrypoint.s"  About an hour ago  Up About an hour  4369/tcp, 5671-5672/tcp, 25672/tcp                                       new-rabbit

When I try to connect to localhost:8080 with any available credentials, pika retries the connection until this error comes:

    Traceback (most recent call last):
      File "script.py", line 146, in worker
        connection = rabbit_connect()
      File "build/bdist.linux-x86_64/egg/retrying.py", line 49, in wrapped_f
        return Retrying(*dargs, **dkw).call(f, *args, **kw)
      File "build/bdist.linux-x86_64/egg/retrying.py", line 212, in call
        raise attempt.get()
      File "build/bdist.linux-x86_64/egg/retrying.py", line 247, in get
        six.reraise(self.value[0], self.value[1], self.value[2])
      File "build/bdist.linux-x86_64/egg/retrying.py", line 200, in call
        attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
      File "script.py", line 175, in rabbit_connect
        connection = pika.BlockingConnection(pika.ConnectionParameters(credentials=credentials, host=host, port=int(port)))
      File "build/bdist.linux-x86_64/egg/pika/adapters/blocking_connection.py", line 339, in __init__
        self._process_io_for_connection_setup()
      File "build/bdist.linux-x86_64/egg/pika/adapters/blocking_connection.py", line 374, in _process_io_for_connection_setup
        self._open_error_result.is_ready)
      File "build/bdist.linux-x86_64/egg/pika/adapters/blocking_connection.py", line 410, in _flush_output
        self._impl.ioloop.poll()
      File "build/bdist.linux-x86_64/egg/pika/adapters/select_connection.py", line 602, in poll
        self._process_fd_events(fd_event_map, write_only)
      File "build/bdist.linux-x86_64/egg/pika/adapters/select_connection.py", line 443, in _process_fd_events
        handler(fileno, events, write_only=write_only)
      File "build/bdist.linux-x86_64/egg/pika/adapters/base_connection.py", line 364, in _handle_events
        self._handle_read()
      File "build/bdist.linux-x86_64/egg/pika/adapters/base_connection.py", line 412, in _handle_read
        return self._handle_disconnect()
      File "build/bdist.linux-x86_64/egg/pika/adapters/base_connection.py", line 288, in _handle_disconnect
        self._adapter_disconnect()
      File "build/bdist.linux-x86_64/egg/pika/adapters/select_connection.py", line 95, in _adapter_disconnect
        super(SelectConnection, self)._adapter_disconnect()
      File "build/bdist.linux-x86_64/egg/pika/adapters/base_connection.py", line 154, in _adapter_disconnect
        self._check_state_on_disconnect()
      File "build/bdist.linux-x86_64/egg/pika/adapters/base_connection.py", line 169, in _check_state_on_disconnect
        raise exceptions.IncompatibleProtocolError
    IncompatibleProtocolError

Is it some kind of bug? Or am I doing something wrong?
You have mapped localhost:8080 to the docker container's (new-rabbitmg) port 15672, which is actually the port for the management web UI. The port for AMQP communication is 5672 (or 5671 for TLS).
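So the fix is either to publish the AMQP port from the container or to connect from inside the Docker network. A sketch of the client side once the AMQP port is published (the docker command in the comment is an assumption about how the container would be restarted):

    import pika

    # assumes the container now publishes the AMQP port, e.g.:
    #   docker run -d -p 5672:5672 -p 8080:15672 rabbitmq:3-management
    credentials = pika.PlainCredentials('guest', 'guest')
    params = pika.ConnectionParameters(host='localhost', port=5672,
                                       credentials=credentials)
    connection = pika.BlockingConnection(params)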
RabbitMQ
35,773,789
10
We are in need of a queuing system in our Ruby on Rails 4 web application. What are the differences, and why would/wouldn't you pick Sidekiq over RabbitMQ?
They are quite different things, with different uses. Sidekiq is a full-featured solution for job queueing and processing, while RabbitMQ is just a message broker upon which you can build your own stuff.
RabbitMQ
34,525,941
10
It seems that since spring-amqp version 1.5 there is a new annotation, @QueueBinding. But I don't know how to use it: can it be used on a class or a method? Does any example exist?
Not sure what problem you have, but here is a sample exactly from the Reference Manual:

    @Component
    public class MyService {

        @RabbitListener(bindings = @QueueBinding(
                value = @Queue(value = "myQueue", durable = "true"),
                exchange = @Exchange(value = "auto.exch"),
                key = "orderRoutingKey")
        )
        public void processOrder(String data) {
            ...
        }
    }

And yes, it can be used on the class level as well as on the method level.
RabbitMQ
33,239,347
10
I was playing around with the rabbitmq HTTP API and came across a weird scenario. When I look at my queues through the web interface, the status of both of them shows as IDLE. However, when I use the HTTP API, both queues report their state as 'running'. The code I'm using is below:

    import requests
    import json

    uri = 'http://localhost:15672/api/queues'
    r = requests.get(uri, auth=("guest","guest"))
    parsed = json.loads(r.content)
    #print json.dumps(parsed, indent=4)
    for i in parsed:
        print '{:<20} : {}'.format(i.get('name'), i.get('state'))

Output:

    test queue           : running
    test2                : running

Can someone explain this behaviour to me?
Check the management console source code here: https://github.com/rabbitmq/rabbitmq-management/blob/master/priv/www/js/formatters.js#L479

    function fmt_object_state(obj) {
        if (obj.state == undefined) return '';

        var colour = 'green';
        var text = obj.state;
        var explanation;

        if (obj.idle_since !== undefined) {
            colour = 'grey';
            explanation = 'Idle since ' + obj.idle_since;
            text = 'idle';
        }

The console shows "idle" if the field idle_since is present. If there is "traffic" in your queue you will have JSON like this:

    "policy":"",
    "exclusive_consumer_tag":"",
    "consumers":0,
    "consumer_utilisation":"",
    "memory":176456,
    "recoverable_slaves":"",
    "state":"running",

If the queue is idle (without traffic) you will have JSON like this:

    "idle_since":"2015-06-25 10:15:07",
    "consumer_utilisation":"",
    "policy":"",
    "exclusive_consumer_tag":"",
    "consumers":0,
    "recoverable_slaves":"",
    "state":"running",

As you can see, the field "idle_since" is not null. In both cases the queue is always in the running state. In conclusion, it is just web-view formatting.
RabbitMQ
31,038,064
10
I have 100 clients. Each client has a unique username, password, and two channels (users can't connect to channels other than their own). Should I create a virtual host for each user? How do I write proper user permissions for the situation below?

- my_user can connect only to the vhost called user_vhost using its username and password
- my_user can consume only from the user_channel channel
- my_user can publish only to the user_channel channel
- my_user can connect remotely

Thank you!
A virtual host in RabbitMQ is more like a logical container: a user connected to a particular virtual host cannot access any resource (exchange, queue, ...) from another virtual host. I always think about it as an administrative-domain kind of thing. Based on what you have explained, I think having a virtual host per user is a good way to keep things simple and clean. Also, this way you do not need to come up with complicated permission rules; just grant permissions based on the virtual host, as sketched below.
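A sketch of that setup with rabbitmqctl. The first variant gives the user full rights inside its own dedicated vhost; the second shows the three permission regexes (configure, write, read) restricting a user to resources named user_channel, in case you keep a shared vhost instead (vhost and resource names here are taken from the question):

    rabbitmqctl add_vhost user_vhost
    rabbitmqctl add_user my_user my_password
    rabbitmqctl set_permissions -p user_vhost my_user ".*" ".*" ".*"

    # shared-vhost alternative: only resources whose names start with user_channel
    rabbitmqctl set_permissions -p shared_vhost my_user "^user_channel" "^user_channel" "^user_channel"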
RabbitMQ
28,253,641
10
When trying to install rabbitmq-server on RHEL:

    [ec2-user@ip-172-31-34-1XX ~]$ sudo rpm -i rabbitmq-server-3.3.5-1.noarch.rpm
    error: Failed dependencies:
        erlang >= R13B-03 is needed by rabbitmq-server-3.3.5-1.noarch
    [ec2-user@ip-172-31-34-1XX ~]$ rpm -i rabbitmq-server-3.3.5-1.noarch.rpm
    error: Failed dependencies:
        erlang >= R13B-03 is needed by rabbitmq-server-3.3.5-1.noarch

I'm unsure why the rpm install isn't recognizing my erlang install, since erl is clearly present:

    [ec2-user@ip-172-31-34-1XX ~]$ which erl
    /usr/local/bin/erl
    [ec2-user@ip-172-31-34-1XX ~]$ sudo which erl
    /bin/erl
You need to install Erlang via RPM for it to recognise the dependency; an Erlang built from source (as your /usr/local/bin/erl suggests) is invisible to the RPM database. The Erlang RPMs are available in the EPEL repository: https://www.rabbitmq.com/install-rpm.html
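Concretely, something like this, assuming the EPEL repository is available for your RHEL release:

    sudo yum install epel-release
    sudo yum install erlang
    sudo rpm -i rabbitmq-server-3.3.5-1.noarch.rpm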
RabbitMQ
25,855,331
10
I am trying to set up rabbitmq so it can be accessed externally (from non-localhost) through nginx.

nginx-rabbitmq.conf:

    server {
        listen 5672;
        server_name x.x.x.x;
        location / {
            proxy_pass http://localhost:55672/;
        }
    }

rabbitmq.conf:

    [
      {rabbit,
        [
          {tcp_listeners, [{"127.0.0.1", 55672}]}
        ]
      }
    ]

By default the guest user can only interact from localhost, so we need to create another user with the required permissions, like so:

    sudo rabbitmqctl add_user my_user my_password
    sudo rabbitmqctl set_permissions my_user ".*" ".*" ".*"

However, when I attempt a connection to rabbitmq through pika (via nginx) I get a ConnectionClosed exception:

    import pika
    credentials = pika.credentials.PlainCredentials('my_username', 'my_password')
    pika.BlockingConnection(
        pika.ConnectionParameters(host=ip_address, port=5672, credentials=credentials)
    )
    # --[raises ConnectionClosed exception]--

If I use the same parameters but change the host to localhost and the port to 55672, then I connect fine:

    pika.ConnectionParameters(host='localhost', port=55672, credentials=credentials)

I have opened port 5672 on the GCE web console, and communication through nginx is happening: the nginx access.log file shows

    [30/Apr/2014:22:59:41 +0000] "AMQP\x00\x00\x09\x01" 400 172 "-" "-" "-"

which is a 400 (bad request) status code response. So by the looks of it the request fails when going through nginx but works when we request rabbitmq directly. Has anyone else had similar problems / got rabbitmq working for external users through nginx? Is there a rabbitmq log file where I can see each request, to help further troubleshooting?
Since nginx 1.9 there is a stream module for TCP or UDP (not compiled in by default). I configured my nginx (1.13.3) with an ssl stream:

    stream {
        upstream rabbitmq_backend {
            server rabbitmq.server:5672;
        }

        server {
            listen 5671 ssl;

            ssl_protocols TLSv1.2 TLSv1.1 TLSv1;
            ssl_ciphers RC4:HIGH:!aNULL:!MD5;
            ssl_handshake_timeout 30s;
            ssl_certificate /path/to.crt;
            ssl_certificate_key /path/to.key;

            proxy_connect_timeout 1s;
            proxy_pass rabbitmq_backend;
        }
    }

https://docs.nginx.com/nginx/admin-guide/security-controls/terminating-ssl-tcp/
RabbitMQ
23,399,604
10
As I understood AMQP 0.9.1, the main benefit was that you could send and receive messages and configure your exchanges / bindings / queues in a broker-independent way, thus you were able to switch your broker implementation without too much headache. Now, AMQP 1.0 only defines a wire-level protocol, so you actually have to know your broker specifics in order to implement most messaging patterns. Then why would I favour a message broker that is AMQP compliant over one that is not? If the broker implements AMQP 1.0, I'm still locked in with broker specific client code. With AMQP 0.9.1, I am theoretically broker independent but would most likely end up with RabbitMQ, since they seem to be the only ones to sincerely maintain the full support for AMQP 0.9.1.
Unfortunately, your concerns are very valid. Pieter Hintjens has a great post on that, claiming that "AMQP/1.0 will go down in history as a prime documented example of how to fail a standardization effort and hurt a lot of people in the process."

To your question "why would I favour a message broker that is AMQP compliant over one that is not?": definitely not because you get the option to change your message broker provider, as that is no longer guaranteed. There are only two situations in which I would think of favouring an AMQP message broker: 1) you are already used to one and have it easily available; 2) AMQP still brings the advantage of having clients available in multiple programming languages, but STOMP would serve here too.
RabbitMQ
22,882,108
10
DefaultConsumer: My DemoConsumer inherits from DefaultConsumer. I have noticed that, working this way, handleDelivery() is invoked from a thread pool (printing Thread.currentThread().getName() I see pool-1-thread-1/2/3/4 each time). I have also tested it several times and saw that the order is preserved. Just to make sure: since different threads call handleDelivery(), can it mess up my message order?

QueueingConsumer: All of the Java tutorials use QueueingConsumer to consume messages. In the API docs it is mentioned as a deprecated class. Should I change my code to inherit from DefaultConsumer instead? Is the tutorial outdated?

Thanks.
Yes, DefaultConsumer uses an internal thread pool that can be changed. Use an ExecutorService as:

    ExecutorService es = Executors.newFixedThreadPool(20);
    Connection conn = factory.newConnection(es);

Read http://www.rabbitmq.com/api-guide.html, "Advanced Connection options". As you can read in the QueueingConsumer doc: "As such, it is now safe to implement Consumer directly or to extend DefaultConsumer." I never used QueueingConsumer, because it isn't properly event-driven. As you can see here:

    QueueingConsumer consumer = new QueueingConsumer(channel);
    channel.basicConsume(QUEUE_NAME, true, consumer);
    while (true) {
        QueueingConsumer.Delivery delivery = consumer.nextDelivery();
        // here you are blocked, waiting for the next message
        String message = new String(delivery.getBody());
    }

A typical problem in this case is how to close the subscription, and a common workaround is to send a tagged close message on localhost. Actually, I don't like it so much. If you extend DefaultConsumer instead, you can correctly close the subscription and the channel:

    public class MyConsumer extends DefaultConsumer {...}

then

    public static void main(String[] args) {
        MyConsumer consumer = new MyConsumer(channel);
        String consumerTag = channel.basicConsume(Constants.queue, false, consumer);
        System.out.println("press any key to terminate");
        System.in.read();
        channel.basicCancel(consumerTag);
        channel.close();
        ....

In conclusion, you shouldn't worry about the message order because, if all works correctly, the message order is correct; but I think you can't rely on it, because if there is some problem you can lose the message order. If you absolutely need to maintain message order, you should include a sequential tag to reconstruct the message order on the consumer side. And you should extend DefaultConsumer.
RabbitMQ
22,840,247
10
I have a simple publisher done in MassTransit. I'm sending the message at an interval and am able to receive it from a .NET client using MassTransit. But when I try to observe something from Python, it is silent. Is there a way to consume MassTransit from Python or other languages? Examples appreciated.

Publisher:

    builder.Register(c => ServiceBusFactory.New(sbc =>
    {
        sbc.UseRabbitMq();
        sbc.UseBsonSerializer();
        sbc.UseLog4Net();
        sbc.ReceiveFrom("rabbitmq://localhost/masstransit");
    }));

.NET client:

    public void Execute(IJobExecutionContext context)
    {
        using (var scope = ServiceLocator.Current.GetInstance<ILifetimeScope>().BeginLifetimeScope())
        {
            var log = scope.Resolve<ILog>();
            log.Debug("Sending queue message");
            var bus = scope.Resolve<IServiceBus>();
            bus.Publish(new SimpleTextMessage{Text = "some text"});
        }
    }

Python client:

    import pika

    print('Stating consumer')
    connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
    channel = connection.channel()
    channel.queue_declare(queue='python_consumer_1')
    print ' [*] Waiting for messages. To exit press CTRL+C'

    def callback(ch, method, properties, body):
        print " [x] Received %r" % (body,)

    channel.basic_consume(callback, queue='python_consumer_1')
    channel.start_consuming()

The trace from the C# app:

    Configuration Result:
    [Success] Name MyApp
    [Success] ServiceName MyApp
    Topshelf v3.1.122.0, .NET Framework v4.0.30319.34003
    INFO (MassTransit.BusConfigurators.ServiceBusConfiguratorImpl) 209 - MassTransit v2.9.2/v2.9.0.0, .NET Framework v4.0.30319.34003
    DEBUG(MassTransit.Transports.RabbitMq.RabbitMqTransportFactory) 245 - Creating RabbitMQ connection: rabbitmq://localhost/groups_error
    DEBUG(MassTransit.Transports.RabbitMq.RabbitMqTransportFactory) 246 - Using default configurator for connection: rabbitmq://localhost/groups_error
    DEBUG(MassTransit.Transports.RabbitMq.RabbitMqTransportFactory) 251 - RabbitMQ connection created: localhost:5672//
    DEBUG(MassTransit.Transports.RabbitMq.RabbitMqTransportFactory) 921 - Creating RabbitMQ connection: rabbitmq://localhost/groups
    DEBUG(MassTransit.Transports.RabbitMq.RabbitMqTransportFactory) 922 - Using default configurator for connection: rabbitmq://localhost/groups
    DEBUG(MassTransit.Transports.RabbitMq.RabbitMqTransportFactory) 924 - RabbitMQ connection created: localhost:5672//
    DEBUG(MassTransit.ServiceContainer) 1056 - Starting bus service: MassTransit.Subscriptions.Coordinator.SubscriptionRouterService
    DEBUG(MassTransit.ServiceContainer) 1062 - Starting bus service: MassTransit.Subscriptions.SubscriptionBusService
    DEBUG(MassTransit.Threading.ThreadPoolConsumerPool) 1080 - Starting Consumer Pool for rabbitmq://localhost/groups
    [Topshelf.Quartz] Scheduled Job: DEFAULT.ea637337-950a-4281-99c0-f10b842814c9
    [Topshelf.Quartz] Job Schedule: Trigger 'DEFAULT.8a1d0b7c-d670-440b-974f-31ec8be6f294': triggerClass: 'Quartz.Impl.Triggers.SimpleTriggerImpl' calendar: '' misfireInstruction: 0 nextFireTime: 01/28/2014 07:12:35 +00:00 - Next Fire Time (local): 28.01.2014 9:12:35 +02:00
    [Topshelf.Quartz] Scheduler started...
    DEBUG(MassTransit.Transports.RabbitMq.RabbitMqTransportFactory) 1248 - Creating RabbitMQ connection: rabbitmq://localhost/MyApp.Transit:SimpleTextMessage_error
    DEBUG(MassTransit.Transports.RabbitMq.RabbitMqTransportFactory) 1250 - Using default configurator for connection: rabbitmq://localhost/MyApp.Transit:SimpleTextMessage_error
    DEBUG(Global) 1254 - Sending queue message
    DEBUG(MassTransit.Transports.RabbitMq.RabbitMqTransportFactory) 1254 - RabbitMQ connection created: localhost:5672//
    DEBUG(MassTransit.Transports.RabbitMq.RabbitMqTransportFactory) 1272 - Creating RabbitMQ connection: rabbitmq://localhost/MyApp.Transit:SimpleTextMessage
    DEBUG(MassTransit.Transports.RabbitMq.RabbitMqTransportFactory) 1273 - Using default configurator for connection: rabbitmq://localhost/MyApp.Transit:SimpleTextMessage
    DEBUG(MassTransit.Transports.RabbitMq.RabbitMqTransportFactory) 1277 - RabbitMQ connection created: localhost:5672//
    DEBUG(MassTransit.Messages) 1439 - SEND:rabbitmq://localhost/MyApp.Transit:SimpleTextMessage:935a0000-5d93-0015-961c-08d0ea0f744a:MyApp.Transit.SimpleTextMessage, MyApp
    DEBUG(MassTransit.Messages) 1441 - SEND:rabbitmq://localhost/MyApp.Transit:SimpleTextMessage:935a0000-5d93-0015-8965-08d0ea0f744b:MyApp.Transit.SimpleTextMessage, MyApp
    The MyApp service is now running, press Control+C to exit.
    DEBUG(Global) 21212 - Sending queue message
    DEBUG(MassTransit.Messages) 21214 - SEND:rabbitmq://localhost/MyApp.Transit:SimpleTextMessage:935a0000-5d93-0015-d4fb-08d0ea0f77c4:MyApp.Transit.SimpleTextMessage, MyApp
    DEBUG(Global) 41213 - Sending queue message
    DEBUG(MassTransit.Messages) 41214 - SEND:rabbitmq://localhost/MyApp.Transit:SimpleTextMessage:935a0000-5d93-0015-077b-08d0ea0f7b40:MyApp.Transit.SimpleTextMessage, MyApp
    DEBUG(Global) 61212 - Sending queue message
    DEBUG(MassTransit.Messages) 61214 - SEND:rabbitmq://localhost/MyApp.Transit:SimpleTextMessage:935a0000-5d93-0015-2bed-08d0ea0f7ebb:MyApp.Transit.SimpleTextMessage, MyApp
    DEBUG(Global) 81213 - Sending queue message
    DEBUG(MassTransit.Messages) 81215 - SEND:rabbitmq://localhost/MyApp.Transit:SimpleTextMessage:935a0000-5d93-0015-5f44-08d0ea0f8236:MyApp.Transit.SimpleTextMessage, MyApp
    DEBUG(Global) 101212 - Sending queue message
    DEBUG(MassTransit.Messages) 101214 - SEND:rabbitmq://localhost/MyApp.Transit:SimpleTextMessage:935a0000-5d93-0015-80f8-08d0ea0f85b1:MyApp.Transit.SimpleTextMessage, MyApp
    DEBUG(Global) 121212 - Sending queue message
    DEBUG(MassTransit.Messages) 121213 - SEND:rabbitmq://localhost/MyApp.Transit:SimpleTextMessage:935a0000-5d93-0015-a971-08d0ea0f892c:MyApp.Transit.SimpleTextMessage, MyApp
    DEBUG(Global) 141212 - Sending queue message
    DEBUG(MassTransit.Messages) 141214 - SEND:rabbitmq://localhost/MyApp.Transit:SimpleTextMessage:935a0000-5d93-0015-d53e-08d0ea0f8ca7:MyApp.Transit.SimpleTextMessage, MyApp
    DEBUG(Global) 161212 - Sending queue message
    DEBUG(MassTransit.Messages) 172109 - SEND:rabbitmq://localhost/MyApp.Transit:SimpleTextMessage:935a0000-5d93-0015-7504-08d0ea0f9208:MyApp.Transit.SimpleTextMessage, MyApp
    DEBUG(Global) 181212 - Sending queue message
    DEBUG(MassTransit.Messages) 193461 - SEND:rabbitmq://localhost/MyApp.Transit:SimpleTextMessage:935a0000-5d93-0015-dd26-08d0ea0f95bf:MyApp.Transit.SimpleTextMessage, MyApp
It seems that the easiest way is to bind the Python queue to the exchange in the RabbitMQ management UI. After doing so I successfully received the messages. The Python consumer now looks the following way:

    import pika

    print('Stating consumer')
    connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
    channel = connection.channel()
    channel.queue_declare('python_consumer_1')
    print ' [*] Waiting for messages. To exit press CTRL+C'

    def callback(ch, method, properties, body):
        print " [x] Received %r" % (body,)
        ch.basic_ack(delivery_tag = method.delivery_tag)

    channel.queue_bind(queue='python_consumer_1', exchange='MyApp.Transit:SimpleTextMessage')
    channel.basic_consume(callback, queue='python_consumer_1')
    channel.start_consuming()
RabbitMQ
21,385,143
10
ActiveMQ / JMS has a built-in mechanism for ensuring that messages that share a common header (namely, the JMSXGroupID header) are always consumed by the same consumer of a queue when using a competing-consumers pattern. The consumers of a queue are completely agnostic of the actual header values, as the grouping guarantee is enforced server-side, not consumer-side. For more details on how this works, see http://activemq.apache.org/message-groups.html. Is such a thing possible with AMQP, or with something RabbitMQ-specific?
As of the time this answer was written, this is not possible with AMQP alone and requires work on the application side of things. RabbitMQ plans on implementing something like this in the future, but it is not slated for release or development anytime soon. One application-side workaround is sketched below. References: https://github.com/rabbitmq/rabbitmq-server/issues/262 and https://twitter.com/old_sound/status/410898209788411904
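Since the grouping has to live in the application for now, one workaround is to hash the group key onto a fixed set of queues, so that every message in a group lands on the same queue and therefore the same consumer. A sketch with pika (queue names and the group key are made up):

    import pika

    NUM_QUEUES = 4  # one competing consumer per queue

    connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
    channel = connection.channel()
    for i in range(NUM_QUEUES):
        channel.queue_declare(queue='work.%d' % i)

    def publish(group_id, body):
        # same group_id -> same queue -> same consumer, preserving per-group order
        queue = 'work.%d' % (hash(group_id) % NUM_QUEUES)
        channel.basic_publish(exchange='', routing_key=queue, body=body)

    publish('group-42', 'first')
    publish('group-42', 'second')  # lands on the same queue as 'first'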
RabbitMQ
20,530,591
10
I'd like to know whether it is possible to move/merge messages from one queue to another. For example:

main-queue contains the messages ['cat-1','cat-2','cat-3','cat-4','dog-1','dog-2','cat-5'] and dog-queue contains the messages ['dog-1', 'dog-2', 'dog-3', 'dog-4'].

So the question is (assuming both queues are on the same cluster and vhost): is it possible to move messages from dog-queue to main-queue using rabbitmqctl? At the end I'm looking to get something like:

Ideally:

    main-queue : ['cat-1','cat-2','cat-3','cat-4','dog-1','dog-2','cat-5','dog-3','dog-4']

But this is OK too:

    main-queue : ['cat-1','cat-2','cat-3','cat-4','dog-1','dog-2','cat-5','dog-1','dog-2','dog-3','dog-4']
What you are/were looking for is the 'shovel' plugin. The shovel plugin comes built into the core, but you have to explicitly enable it. It's really easy to use as it does everything for you (no manually consuming/republishing to another queue).

Enable the shovel plugin via the CLI:

    sudo rabbitmq-plugins enable rabbitmq_shovel

If you manage RabbitMQ via the GUI, install the shovel management plugin too:

    sudo rabbitmq-plugins enable rabbitmq_shovel_management

Log in to the GUI and you will see Shovel Management under the Admin section. You can create shovels to move messages from any queue to another queue, even remotely hosted queues. You can delete the shovel when it's finished, or leave it there and it'll continually move the messages as they come in.

If you manage RabbitMQ via the CLI, you can execute a shovel directly from rabbitmqctl:

    sudo rabbitmqctl set_parameter shovel cats-and-dogs \
      '{"src-uri": "amqp://user:pass@host/vhost", "src-queue": "dog-queue", \
        "dest-uri": "amqp://user:pass@host/vhost", "dest-queue": "main-queue"}'

Official plugin docs:
Shovel Plugin - https://www.rabbitmq.com/shovel.html
Creating Shovels - https://www.rabbitmq.com/shovel-dynamic.html
RabbitMQ
17,075,116
10
Is there a way to send data to RabbitMQ from $.ajax? My application is made up of several thousand web clients (written in JS) and a WCF REST service, and now I am trying to figure out how to create a scalable entry point for my application. The idea is to have a rabbitmq instance which receives messages from the JS clients placed on one side, and instances of WCF Workflow Services which take pending messages from the queue. I understand that AMQP and HTTP are pretty different things. So the question is: is there a REST interface for RabbitMQ, or some sort of gateway for it?
There are lots of 3rd-party HTTP plugins listed on RabbitMQ's developer tools page, and they also offer an experimental JSON-RPC plugin that allows for AMQP over HTTP access. You should also take a look at RabbitJS and SockJS to see what the Rabbit team is doing to bring messaging to the worlds of node.js and WebSockets, respectively.
RabbitMQ
10,080,718
10
I'm currently using Socket.IO with the redis store, and I'm using the Room feature with it. So I'm totally okay with Room join (subscribe) and leave (unsubscribe) with Socket.IO. I just saw this page, http://www.rabbitmq.com/blog/2010/11/12/rabbitmq-nodejs-rabbitjs/, and found that some people are using Socket.IO with RabbitMQ. Why is using Socket.IO alone not good enough? Is there any good reason to use Socket.IO with RabbitMQ?
SocketIO is a browser --> server transport mechanism whereas RabbitMQ is a server --> server message bus. The two can be implemented together to create a very responsive system in scenarios where a user journey consists of a message starting life on a browser and ending up in, say, some persistence layer (such as a database). A message would be transported to the web server via socketIO and then, instead of the web server being responsible for persisting the message, it would drop it on a Rabbit queue and leave some other process responsible for persisting it. This way, the web server is free to return to its web serving responsibilities and, crucially, lessening its load.
RabbitMQ
9,824,952
10
Is there a way to get the timestamp when a message was placed on the queue, from a consumer. Not when it was published, but when it actually made it to the queue.
No, there's no way to figure this out unless, as you state yourself, you write a plugin for it. There is nothing in the AMQP specification that says the message must know when it arrived in the queue; from the AMQP point of view there is no need to know this. There are also many cases where a message passes through several queues, and then which queue should provide the relevant timestamp?
RabbitMQ
9,216,712
10
I am using celery 2.4.1 with python 2.6, the rabbitmq backend, and django. I would like my task to be able to clean up properly if the worker shuts down. As far as I am aware you cannot supply a task destructor, so I tried hooking into the worker_shutdown signal. Note: AbortableTask only works with the database backend, so I can't use that.

    from celery.signals import worker_shutdown

    @task
    def mytask(*args):
        obj = DoStuff()

        def shutdown_hook(*args):
            print "Worker shutting down"
            # cleanup nicely
            obj.stop()

        worker_shutdown.connect(shutdown_hook)
        # blocking call that monitors a network connection
        obj.stuff()

However, the shutdown hook never gets called. Ctrl-C'ing the worker doesn't kill the task and I have to manually kill it from the shell. So, if this is not the proper way to go about it, how do I allow tasks to shut down gracefully?
worker_shutdown is only sent by the MainProcess, not the child pool workers. All worker_* signals except worker_process_init refer to the MainProcess.

"However, the shutdown hook never gets called. Ctrl-C'ing the worker doesn't kill the task and I have to manually kill it from the shell."

The worker never terminates a task under normal (warm) shutdown. Even if a task takes days to complete, the worker won't complete shutdown until it's completed. You can set --soft-time-limit or --time-limit to tell the instance when it's OK to terminate the task. So to add any kind of cleanup you first need to make sure that the tasks can actually complete, as the cleanup wouldn't be called before that happens. To add a cleanup step to the pool worker processes you can use something like:

    from celery import platforms
    from celery.signals import worker_process_init

    def cleanup_after_tasks(signum, frame):
        # reentrant code here (see http://docs.python.org/library/signal.html)

    def install_pool_process_sighandlers(**kwargs):
        platforms.signals["TERM"] = cleanup_after_tasks
        platforms.signals["INT"] = cleanup_after_tasks

    worker_process_init.connect(install_pool_process_sighandlers)
RabbitMQ
8,138,642
10
In my models.py:

    from django.db import models
    from core import tasks

    class Image(models.Model):
        image = models.ImageField(upload_to='images/orig')
        thumbnail = models.ImageField(upload_to='images/thumbnails', editable=False)

        def save(self, *args, **kwargs):
            super(Image, self).save(*args, **kwargs)
            tasks.create_thumbnail.delay(self.id)

In my tasks.py:

    from celery.decorators import task
    from core.models import Image

    @task()
    def create_thumbnail(image_id):
        ImageObj = Image.objects.get(id=image_id)
        # other stuff here

This is returning the following:

    Exception Type: ImportError
    Exception Value: cannot import name tasks

The error disappears if I comment out from core.models import Image in tasks.py; however, this obviously will cause a problem since Image has no meaning there. I have tried to import it inside create_thumbnail, but it still won't recognize Image. I have read somewhere that usually the object itself can be passed as an argument to a task and that would solve my problem. However, a friend once told me that it is considered best practice to send as little data as possible in a RabbitMQ message, so to achieve that I'm trying to pass only the image ID and then retrieve it again in the task.

1) Is what I'm trying to do considered a best practice? If yes, how do I work it out?
2) I have noticed that in all the examples I found around the web, they execute the task from a view and never from a model. I'm trying to create a thumbnail whenever a new image is uploaded; I don't want to call create_thumbnail in every form/view I have. Any idea about that? Is executing a task from a model not recommended or a common practice?
1) Is what I'm trying to do considered a best practice? If yes, how do I work it out?

Yes, passing only a little information to the task is generally a good thing like you mentioned.

2) I have noticed in all the examples I found around the web, they execute the task from a view and never from a model. I'm trying to create a thumbnail whenever a new image is uploaded, I don't want to call create_thumbnail in every form/view I have. Any idea about that? Is executing a task from a model not recommended or a common practice?

I've noticed the same thing, and feel that tutorials and documentation call tasks from their views because it is easier to demonstrate how things work using simple views than with models or forms.

To eliminate circular imports, you should think about which way the imports should happen. Generally, tasks.py will need to import many things from models.py, whereas models.py rarely needs to know anything about tasks.py. The standard should be that models.py does not import from tasks.py. Thus, if you do need to do this and are calling a task from a model method, make the import in the method, as so:

from django.db import models

class Image(models.Model):
    image = models.ImageField(upload_to='images/orig')
    thumbnail = models.ImageField(upload_to='images/thumbnails', editable=False)

    def save(self, *args, **kwargs):
        super(Image, self).save(*args, **kwargs)
        from core.tasks import create_thumbnail
        create_thumbnail.delay(self.id)
RabbitMQ
8,107,085
10
I'm looking at the repos and there are so many projects, not sure which are wrappers/clients and which is the actual project. Is it Erlang?
Yes, it is Erlang. You can check this out by downloading the source for the server here: http://www.rabbitmq.com/releases/rabbitmq-server/v2.3.1/
RabbitMQ
5,363,401
10
I want to have a task that will execute every 5 minutes, but it will wait for the last execution to finish and then start to count these 5 minutes. (This way I can also be sure that there is only one task running.) The easiest way I found is to run the django application's manage.py shell and run this:

while True:
    result = task.delay()
    result.wait()
    sleep(5)

but for each task that I want to execute this way I have to run its own shell. Is there an easy way to do it? Maybe some kind of custom django celery scheduler?
Wow it's amazing how no one understands this person's question. They are asking not about running tasks periodically, but how to ensure that Celery does not run two instances of the same task simultaneously. I don't think there's a way to do this with Celery directly, but what you can do is have one of the tasks acquire a lock right when it begins, and if it fails, to try again in a few seconds (using retry). The task would release the lock right before it returns; you can make the lock auto-expire after a few minutes if it ever crashes or times out. For the lock you can probably just use your database or something like Redis.
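A minimal sketch of that lock pattern, here using Django's cache as the lock store (which can be backed by Redis); the lock key, timeout, retry delay, and the do_work() helper are illustrative assumptions, not part of the original answer:

from celery.task import task
from django.core.cache import cache

LOCK_EXPIRE = 60 * 5  # auto-expire after 5 minutes in case the task crashes

@task(max_retries=None)
def my_singleton_task():
    # cache.add is atomic: it only sets the key if it does not already exist
    if not cache.add("my_singleton_task-lock", "locked", LOCK_EXPIRE):
        # another instance is still running; try again in a few seconds
        raise my_singleton_task.retry(countdown=5)
    try:
        do_work()  # hypothetical work function
    finally:
        cache.delete("my_singleton_task-lock")  # release the lock

Scheduling this task every 5 minutes (e.g. with celerybeat) then gives the "at most one at a time" behavior the question asks for.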
RabbitMQ
5,361,521
10
I am currently evaluating message queue systems and RabbitMQ seems like a good candidate, so I'm digging a little more into it. To give a little context, I'm looking to have something like one exchange load balancing the message publishing to multiple queues. I don't want to replicate the messages, so a fanout exchange is not an option. Also, the reason I'm thinking of having multiple queues vs one queue handling the round-robin w/ the consumers is that I don't want our single point of failure to be at the queue level.

Sounds like I could add some logic on the publisher side to simulate that behavior by editing the routing key and having the appropriate bindings in place. But that's kind of a passive approach that wouldn't take the pace of the message consumption on each queue into account, potentially leading to one queue filling up if the consumer applications for that queue are dead. I was looking for a more pro-active way from the exchange entity side, that would decide where to send the next message based on each queue's size or something of that nature.

I read about Alice and the available RESTful APIs, but that seems like a heavy-duty solution for implementing fast routing decisions. Does anyone know if round-robin between the exchange and the queues is feasible w/ RabbitMQ? Thanks.
Exchanges are generally stateless in the AMQP model, though there have been some recent experiments in stateful exchanges now that there's both a system for managing RabbitMQ plugins and for providing new experimental exchange types. There's nothing that does quite what you want, I don't think, though I'm not completely sure I understand the requirement. Aside from the single-point-of-failure point, would having a single queue with workers reading from it solve your problem? If so, then your problem reduces to configuring RabbitMQ in an HA configuration that permits you to use that solution. There are a couple of approaches to doing that: either use HALinux and a shared store to get active/passive HA with quick failover, or set up more than one parallel broker and deduplicate on the client, perhaps using redis or similar to do so. I suggest asking your question again on the rabbitmq-discuss mailing list, where more people will be able to offer suggestions, and where the discussion can be archived for posterity.
RabbitMQ
2,596,208
10
Once upon a time, there was a file in my project that I would now like to be able to get. The problem is: I have no idea of when have I deleted it and on which path it was. How can I locate the commits of this file when it existed?
If you do not know the exact path you may use

git log --all --full-history -- "**/thefile.*"

If you know the path the file was at, you can do this:

git log --all --full-history -- <path-to-file>

This should show a list of commits in all branches which touched that file. Then, you can find the version of the file you want, and display it with...

git show <SHA> -- <path-to-file>

Or restore it into your working copy with:

git checkout <SHA>^ -- <path-to-file>

Note the caret symbol (^), which gets the checkout prior to the one identified, because at the moment of the <SHA> commit the file is deleted; we need to look at the previous commit to get the deleted file's contents.
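Another standard trick (not in the original answer) is to list every deletion recorded in history and search that output for the filename:

# show all commits that deleted files, with the deleted paths
git log --diff-filter=D --summary

# narrow it down to a filename fragment
git log --diff-filter=D --summary | grep thefile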
GitLab
7,203,515
1,930
What is the difference between a Pull request and a Merge request? In GitHub, it's a Pull Request while in GitLab, for example, it's a Merge Request. So, is there a difference between both of these?
GitLab's "merge request" feature is equivalent to GitHub's "pull request" feature. Both are means of pulling changes from another branch or fork into your branch and merging the changes with your existing code. They are useful tools for code review and change management. An article from GitLab discusses the differences in naming the feature: Merge or pull requests are created in a git management application and ask an assigned person to merge two branches. Tools such as GitHub and Bitbucket choose the name pull request since the first manual action would be to pull the feature branch. Tools such as GitLab and Gitorious choose the name merge request since that is the final action that is requested of the assignee. In this article we'll refer to them as merge requests. A "merge request" should not be confused with the git merge command. Neither should a "pull request" be confused with the git pull command. Both git commands are used behind the scenes in both pull requests and merge requests, but a merge/pull request refers to a much broader topic than just these two commands.
GitLab
22,199,432
833
I have a problem when I push my code to git while I have developer access in my project, but everything is okay when I have master access. Where is the problem come from? And how to fix it? Error message: error: You are not allowed to push code to protected branches on this project. ... error: failed to push some refs to ...
There's no problem - everything works as expected.

In GitLab some branches can be protected. By default only Maintainer/Owner users can commit to protected branches (see permissions docs). The master branch is protected by default - it forces developers to issue merge requests to be validated by project maintainers before integrating them into the main code.

You can turn protection on and off for selected branches in Project Settings (where exactly depends on the GitLab version - see instructions below).

On the same settings page you can also allow developers to push into the protected branches. With this setting on, protection will be limited to rejecting operations requiring git push --force (rebase etc.)

Since GitLab 9.3

Go to project: "Settings" → "Repository" → "Expand" on "Protected branches"

I'm not really sure when this change was introduced; the screenshots are from the 10.3 version.

Now you can select who is allowed to merge or push into selected branches (for example: you can turn off pushes to master entirely, forcing all changes to the branch to be made via Merge Requests). Or you can click "Unprotect" to completely remove protection from a branch.

Since GitLab 9.0

Similarly to GitLab 9.3, but no need to click "Expand" - everything is already expanded:

Go to project: "Settings" → "Repository" → scroll down to "Protected branches".

Pre GitLab 9.0

Project: "Settings" → "Protected branches" (if you are at least 'Master' of the given project). Then click on "Unprotect" or "Developers can push":
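If you'd rather script this than click through the UI, the protected-branches REST API can do the same job. A sketch, assuming API v4 and a personal access token with api scope; the host and project ID are placeholders:

# list the currently protected branches
curl --header "PRIVATE-TOKEN: <your_token>" \
     "https://gitlab.example.com/api/v4/projects/<project_id>/protected_branches"

# unprotect master
curl --request DELETE --header "PRIVATE-TOKEN: <your_token>" \
     "https://gitlab.example.com/api/v4/projects/<project_id>/protected_branches/master"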
GitLab
32,246,503
620
How to check which version of GitLab is installed on the server? I am about version specified in GitLab changelog: https://gitlab.com/gitlab-org/gitlab-foss/blob/master/CHANGELOG.md For example: "6.5.0", "6.4.3", etc. Сan this be done only through the terminal? Is there a way to do that remotely (with browser instead of terminal)?
I have updated my server to GitLab 6.6.4 and finally found the way to get version of GitLab remotely without SSH access to server. You should be logged in to access the following page: https://your.domain.name/help It shows something similar to: GitLab 6.6.4 42e34ae GitLab is open source software to collaborate on code. ... etc.
GitLab
21,068,773
401
I have created several repositories in GitLab. One of those was for testing purposes and has some commits and branches. I want to delete or remove this repository. How can I do this?
Go to the project page Select "Settings" Select the "General" section (you must be in the repository you want to delete it) If you have enough rights, then at the bottom of the page will be a button for "Advanced settings" (i.e. project settings that may result in data loss) or "Remove project" (in newer GitLab versions) Push this button and follow the instructions This is only available for admins/owner. As a mere project maintainer, you do not see the "Remove project" button.
GitLab
24,032,232
387
Can one transfer repositories from GitLab to GitHub if the need be. If so, how exactly can I go about doing the same? Also, are there any pitfalls in doing so or precautionary measures that I need to keep in mind before doing so given that I may decide to eventually move them to GitHub (as it has more features at the moment that I might find handy for my project).
You can transfer those (simply by adding a remote to a GitHub repo and pushing them):

create an empty repo on GitHub

git remote add github https://yourLogin@github.com/yourLogin/yourRepoName.git
git push --mirror github

The history will be the same.

But you will lose the access control (teams defined in GitLab with specific access rights on your repo).

If you face any issue with the https URL of the GitHub repo:

The requested URL returned an error: 403

All you need to do is to enter your GitHub password, but the OP suggests:

Then you might need to push it the ssh way. You can read more on how to do it here.

See "Pushing to Git returning Error Code 403 fatal: HTTP request failed".

Note that mike also adds in the comments:

GitLab can also be set to push mirror to downstream repositories, and there are specific instructions for push mirroring to GitHub. This can use a GitHub Personal Access Token and also be set to periodically push. You might use this option to share on GitHub, but keep your main development activity in your GitLab instance.

tswaehn suggests in the comments the tool piceaTech/node-gitlab-2-github:

It is possible to migrate issues, labels, ... with this tool github.com/piceaTech/node-gitlab-2-github: I tested it, not bad. But had issues when transferring attachments of the issues itself. Still worth a try, maybe.

Frotz notes in the comments that:

I've since discovered that there's a wait option available to use in the user-editable settings.ts file that's not documented anywhere. I discovered it while implementing my own quick-and-dirty delay, which did work in stopping the "Abuse detected" slowdowns.

Does this mean that when I want to push new changes I would have to do git push github [branch_name] instead of using origin?

No, you can:

delete origin (git remote remove origin),
rename the github remote as origin (git remote rename github origin),
and go on with git push (to origin, which is now GitHub): the transfer from GitLab to GitHub is complete.
GitLab
22,265,837
366
I have run gitlabhq rails server on virtual machine, following 1-6 steps from this tutorial https://github.com/gitlabhq/gitlab-recipes/blob/master/install/centos/README.md and starts rails server executing command sudo -u git -H bundle exec rails s -e production. After that I created user, using admin tools and created new project under this user. Then I'm trying to push the existing project to this repo as always. But in the last step, git push origin master fails with the error [remote rejected] master -> master (pre-receive hook declined) Additional info: 1) I haven't activated user (project owner) via email activation link, because I haven't configured post service on server-side and I didn't find instructions how to do that in this manual. 2) Gitlab server generates tips how to push project to repo and there is not repositories/ in path. I mean it generates git@mygitlabhost:user/repo.git instead of git@mygitlabhost:repositories/user/repo.git which is correct. 3) When i tried to debug it, I opened pre-receive script inside repo on server and tried to output variables (there is 3 of them): refs = ARGF.read, key_id = ENV['GL_ID'] and repo_path = Dir.pwd and found, that key_id is always empty. Maybe the problem is here... If so, please give me suggestions on how to fix that. Thanks
GitLab by default marks the master branch as protected (see the part "Protecting your code" in https://about.gitlab.com/2014/11/26/keeping-your-code-protected/ for why). If that is the case for you, then this can help:

Open your project > Settings > Repository and go to "Protected branches", find the "master" branch in the list, click "Unprotect" and try again.

via https://gitlab.com/gitlab-com/support-forum/issues/40

For version 8.11 and above, the how-to is here: https://docs.gitlab.com/ee/user/project/protected_branches.html#restricting-push-and-merge-access-to-certain-users
GitLab
28,318,599
342
I accidentally pushed my local master to a branch called origin on gitlab and now it is the default. Is there a way to rename this branch or set a new master branch to master?
Updated: Prerequisites: You have the Owner or Maintainer role in the project. To update the default branch for an individual project: On the left sidebar, select Search or go to find your project. Select Settings > Repository. Expand Branch defaults. For Default branch, select a new default branch. Optional. Select the Auto-close referenced issues on default branch checkbox to close issues when a merge request uses a closing pattern. Select Save changes. To change a default branch name for an instance or group: On the left sidebar, at the bottom, select Admin Area. Select Settings > Repository. Expand Default branch. For Initial default branch name, select a new default branch. Select Save changes.
GitLab
30,987,216
290
My problem is that I can't push or fetch from GitLab. However, I can clone (via HTTP or via SSH). I get this error when I try to push : Permission denied (publickey) fatal : Could not read from remote repository From all the threads I've looked, here is what I have done : Set up an SSH key on my computer and added the public key to GitLab Done the config --global for username and email Cloned via SSH and via HTTP to check if it would resolve the issue Done the ssh -T [email protected] command If you have any insight about how to resolve my issue, it would be greatly appreciated.
I found this after searching a lot, and it works perfectly fine for me.

Open "Git Bash" (it works just like cmd). Right-click and "Run as Administrator".
Type ssh-keygen and press enter.
It will ask you to save the key to a specific directory; press enter to accept the default.
It will prompt you for a passphrase; type one or press enter to go without a password.
The public key will be created in that directory.
Now go to the directory and open the .ssh folder. You'll see a file id_rsa.pub. Open it in Notepad and copy all the text from it.
Go to https://gitlab.com/-/profile/keys and paste it into the "Key" text field.
Now click on the "Title" field below; it will automatically get filled.
Then click "Add key".

Now give it a shot and it will work for sure.
GitLab
40,427,498
289
I'm facing this error when I try to clone a repository from GitLab (GitLab 6.6.2 4ef8369): remote: Counting objects: 66352, done. remote: Compressing objects: 100% (10417/10417), done. error: RPC failed; curl 18 transfer closed with outstanding read data remaining fatal: The remote end hung up unexpectedly fatal: early EOF fatal: index-pack failed The clone is then aborted. How can I avoid this?
It happens more often than not that I am on a slow internet connection and I have to clone a decently huge git repository. The most common issue is that the connection closes and the whole clone is cancelled.

Cloning into 'large-repository'...
remote: Counting objects: 20248, done.
remote: Compressing objects: 100% (10204/10204), done.
error: RPC failed; curl 18 transfer closed with outstanding read data remaining
fatal: The remote end hung up unexpectedly
fatal: early EOF
fatal: index-pack failed

After a lot of trial and error and a lot of "remote end hung up unexpectedly", I have a way that works for me. The idea is to do a shallow clone first and then update the repository with its history.

$ git clone http://github.com/large-repository --depth 1
$ cd large-repository
$ git fetch --unshallow
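If a shallow clone isn't an option (for example, you need the full history in one go), another commonly suggested workaround (not from the original answer, and results vary with the server setup) is to raise git's HTTP buffer and disable compression before cloning:

git config --global core.compression 0
git config --global http.postBuffer 524288000   # 500 MB
git clone https://gitlab.example.com/group/large-repository.git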
GitLab
38,618,885
254
I am learning GitLab CI/CD. I installed GitLab and GitLab Runner from Officials. Whenever I run the pipeline during the Maven build, the job gets stuck. I have a registered runner and it is available to my project, but jobs get stuck.

.gitlab-ci.yml

image: docker:latest
services:
  - docker:dind

variables:
  DOCKER_DRIVER: overlay
  SPRING_PROFILES_ACTIVE: gitlab-ci

stages:
  - build
  - package
  - deploy

maven-build:
  image: maven:3-jdk-8
  stage: build
  script: "mvn package -B"
  artifacts:
    paths:
      - target/*.jar

docker-build:
  stage: package
  script:
    - docker build -t registry.com/ci-cd-demo .
    - docker push registry.com/ci-cd-demo

k8s-deploy:
  image: google/cloud-sdk
  stage: deploy
  script:
    - echo "$GOOGLE_KEY" > key.json
    - gcloud container clusters get-credentials standard-cluster-demo --zone us-east1-c --project ascendant-study-222206
    - kubectl apply -f deployment.yml

My runner settings

Error message when runner is already associated with project:

Can you help me?
The job is stuck because your runners have tags and your jobs don't. Follow these 4 steps to enable your runner to run without tags (menu names may differ slightly between GitLab versions):

1. Go to your project's Settings → CI/CD and expand the Runners section.
2. Click the edit (pencil) button next to the runner.
3. Check the "Run untagged jobs" box ("Indicates whether this runner can pick jobs without tags").
4. Save your changes.

Or set tags to your jobs (see the sketch below). For more info: Configuration of your jobs with .gitlab-ci.yml - Tags
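For the second option, a minimal sketch of tagging a job so it matches a runner; the docker tag here is an assumption, so use whatever tags your runner is actually registered with:

maven-build:
  image: maven:3-jdk-8
  stage: build
  tags:
    - docker   # must match one of the runner's tags
  script:
    - mvn package -B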
GitLab
53,370,840
227
Suppose that I would like to implement a fix to a project of someone else. That project resides on GitHub. I could create a fork on GitHub and implement the fix. However, I would like to create my fork on GitLab rather than on GitHub. Is that possible? How? I have read this article: https://about.gitlab.com/2016/12/01/how-to-keep-your-fork-up-to-date-with-its-origin/ Anyway, I am not sure what should I do in my case. Should I just create a fork on GitLab of the project from GitHub somehow? Or should I create a mirror on GitLab of the project from GitHub? Or should I create a mirror on GitLab and then fork the mirror? Or should I do something completely different? What is the correct approach. Thanks. UPDATE Repository mirroring on GitLab does not make sense probably. I can create a mirror of MY GitHub repository on GitLab but I cannot create a mirror of a repository of someone else. https://docs.gitlab.com/ee/user/project/repository/mirror/ This is what I have done so far: I have cloned the original GitHub project to my local machine. I have commited the fix to a new branch in my local repository. I have created an empty project on GitLab. I have set origin in my local repository to that empty project on GitLab and pushed both branches to GitLab. I have set upstream in my local repository to the GitHub repository. When I want to get new commits from the original GitHub repository to the repository on GitLab (i.e. sync the repositories), I can do this using my local repo as an intermediate step. However, there is no direct connection between the repo on GitHub and the repo on GitLab. Is my setup correct? Is there any difference if I make a fork on GitHub?
If you just want to track changes, first make an empty repository in GitLab (or whatever else you may be using) and clone it to your computer. Then add the GitHub project as the "upstream" remote with: git remote add upstream https://github.com/user/repo Now you can fetch and pull from the upstream should there be any changes. (You can also push or merge to it if you have access rights.) git pull upstream master Finally, push back to your own GitLab repository: git push origin master If you don't want to manually pull upstream/push origin, GitLab offers a mirroring ability in Settings => Repository => Mirroring repositories.
GitLab
50,973,048
203
In my GitLab repository, I have a group with 20 projects. I want to clone all projects at once. Is that possible?
Update Dec. 2022, use glab repo clone glab repo clone -g <group> -a=false -p --paginate With: -p, --preserve-namespace: Clone the repo in a subdirectory based on namespace --paginate: Make additional HTTP requests to fetch all pages of projects before cloning. Respects --per-page -a, --archived: Limit by archived status. Use with -a=false to exclude archived repositories. Used with --group flag That does support cloning more than 100 repositories (since MR 1030, and glab v1.24.0, Dec. 2022) This is for gitlab.com or for a self-managed GitLab instance, provided you set the environment variable GITLAB_URI or GITLAB_HOST: it specifies the URL of the GitLab server if self-managed (eg: https://gitlab.example.com). Original answer and updates (starting March 2015): Not really, unless: you have a 21st project which references the other 20 as submodules. (in which case a clone followed by a git submodule update --init would be enough to get all 20 projects cloned and checked out) or you somehow list the projects you have access (GitLab API for projects), and loop on that result to clone each one (meaning that can be scripted, and then executed as "one" command) Since 2015, Jay Gabez mentions in the comments (August 2019) the tool gabrie30/ghorg ghorg allows you to quickly clone all of an org's or user's repos into a single directory. Usage: $ ghorg clone someorg $ ghorg clone someuser --clone-type=user --protocol=ssh --branch=develop $ ghorg clone gitlab-org --scm=gitlab --namespace=gitlab-org/security-products $ ghorg clone --help Also (2020): https://github.com/ezbz/gitlabber usage: gitlabber [-h] [-t token] [-u url] [--debug] [-p] [--print-format {json,yaml,tree}] [-i csv] [-x csv] [--version] [dest] Gitlabber - clones or pulls entire groups/projects tree from gitlab
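For the "list the projects via the API and loop" option from the original answer, a minimal sketch (assumptions: API v4, a personal access token, jq installed, and at most 100 projects in the group; beyond that you would need to paginate):

#!/bin/sh
GROUP="my-group"          # group path or numeric ID
TOKEN="<your_token>"

curl -s --header "PRIVATE-TOKEN: $TOKEN" \
  "https://gitlab.com/api/v4/groups/$GROUP/projects?per_page=100" |
jq -r '.[].ssh_url_to_repo' |
while read -r url; do
  git clone "$url"
done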
GitLab
29,099,456
202
I have an account of a Gitlab installation where I created the repository "ffki-startseite" Now I want to clone the repository git://freifunk.in-kiel.de/ffki-startseite.git into that repository with all commits and branches, so I can start working on it in my own scope. How can I import it?
I was able to fully export my project along with all commits, branches and tags to gitlab via following commands run locally on my computer: To illustrate my example, I will be using https://github.com/raveren/kint as the source repository that I want to import into gitlab. I created an empty project named Kint (under namespace raveren) in gitlab beforehand and it told me the http git url of the newly created project there is http://gitlab.example.com/raveren/kint.git The commands are OS agnostic. In a new directory: git clone --mirror https://github.com/raveren/kint cd kint.git git remote add gitlab http://gitlab.example.com/raveren/kint.git git push gitlab --mirror Now if you have a locally cloned repository that you want to keep using with the new remote, just run the following commands* there: git remote remove origin git remote add origin http://gitlab.example.com/raveren/kint.git git fetch --all *This assumes that you did not rename your remote master from origin, otherwise, change the first two lines to reflect it.
GitLab
20,359,936
186
We are using GitLab for our private project. There are some forked libraries from github that we want to install as an npm module. Installing that module directly from npm is ok, and for example this:

npm install git://github.com/FredyC/grunt-stylus-sprite.git

...works correctly too, but doing the same for GitLab, just changing the domain, gets me this error.

npm WARN `git config --get remote.origin.url` returned wrong result (git://git.domain.com/library/grunt-stylus-sprite.git)
npm ERR! git clone git://git.domain.com/library/grunt-stylus-sprite.git Cloning into bare repository 'D:\users\Fredy\AppData\Roaming\npm-cache\_git-remotes\git-git-domain-com-library-grunt-stylus-sprite-git-6f33bc59'...
npm ERR! git clone git://git.domain.com/library/grunt-stylus-sprite.git fatal: unable to connect to git.domain.com:
npm ERR! git clone git://git.domain.com/library/grunt-stylus-sprite.git git.domain.com[0: 77.93.195.214]: errno=No error
npm ERR! Error: Command failed: Cloning into bare repository 'D:\users\Fredy\AppData\Roaming\npm-cache\_git-remotes\git-git-domain-com-library-grunt-stylus-sprite-git-6f33bc59'...
npm ERR! fatal: unable to connect to git.domain.com:
npm ERR! git.domain.com[0: xx.xx.xx.xx]: errno=No error

From the web interface of GitLab, I have this URL: git@git.domain.com:library/grunt-stylus-sprite.git. Running this against npm install, it tries to install a git module from the npm registry. However, using the URL git+ssh://git@git.domain.com:library/grunt-stylus-sprite.git it is suddenly asking me for the password. My SSH key doesn't include a passphrase, so I assume it wasn't able to load that key. Maybe there is some configuration for that I have missed? The key is located at the standard location in my home directory with the name "id_rsa". I am on Windows 7 x64.

UPDATE

Since NPM v3 there is built-in support for GitLab and other sources (BitBucket, Gist), from where you can install packages. It works for public and private ones, so it's not exactly related to this, but some might find it useful.

npm install gitlab:<gitlabname>/<gitlabrepo>[#<commit-ish>]

Check out the documentation: https://docs.npmjs.com/cli/install

If you want to work with private repos in GitLab, you are required to manage your credentials/auth-token in your .npmrc. See here: https://docs.gitlab.com/ee/user/packages/npm_registry/#authenticate-to-the-package-registry
You have the following methods for connecting to a private gitlab repository With SSH git+ssh://[email protected]:Username/Repository#{branch|tag} git+ssh://[email protected]/Username/Repository#{branch|tag} With HTTPS git+https://[email protected]/Username/Repository#{branch|tag} With HTTPS and deploy token git+https://<token-name>:<token>@gitlab.com/Username/Repository#{branch|tag}
GitLab
22,988,876
158
I'm trying to add a ruby rails file to my repository in gitlab, but it doesn't allow me to add the file saying that my file does not have a commit checked out. I've tried git pull, and making the the file again and git adding it, but it still doesn't work. This is the error message I get: error: '172069/08_lab_routes_controllers_views_172069_172188-Copy/adventure_game/' does not have a commit checked out fatal: adding files failed
If you have a subdirectory with a .git directory and try to git add . you will see this message. This can happen if you have a git repo and then create/clone another repo in a subdirectory under that repo.
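Two common ways out, depending on what you actually want. A sketch, run from the directory that contains the nested repo; the path and URL are placeholders based on the question's error message:

# if the inner repo was created by accident, delete its .git directory
# so its files become ordinary files tracked by the outer repo
rm -rf adventure_game/.git
git add adventure_game

# or, if a repo-in-a-repo is intentional, register it as a submodule
git submodule add <url-of-adventure_game-repo> adventure_game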
GitLab
56,873,278
155
We have recently started to use GitLab. Currently using a "centralized" workflow. We are considering moving to the github-flow but I want to make sure. What are the pros and cons of git-flow vs github-flow?
As discussed in GitMinutes episode 17, by Nicholas Zakas in his article on "GitHub workflows inside of a company": Git-flow is a process for managing changes in Git that was created by Vincent Driessen and accompanied by some Git extensions for managing that flow. The general idea behind git-flow is to have several separate branches that always exist, each for a different purpose: master, develop, feature, release, and hotfix. The process of feature or bug development flows from one branch into another before it’s finally released. Some of the respondents indicated that they use git-flow in general. Some started out with git-flow and moved away from it. The primary reason for moving away is that the git-flow process is hard to deal with in a continuous (or near-continuous) deployment model. The general feeling is that git-flow works well for products in a more traditional release model, where releases are done once every few weeks, but that this process breaks down considerably when you’re releasing once a day or more. In short: Start with a model as simple as possible (like GitHub flow tends to be), and move towards a more complex model if you need to. You can see an interesting illustration of a simple workflow, based on GitHub-Flow at: "A simple git branching model", with the main elements being: master must always be deployable. all changes made through feature branches (pull-request + merge) rebase to avoid/resolve conflicts; merge in to master For an actual more complete and robust workflow, see gitworkflow (one word).
GitLab
18,188,492
152
I'm using GitLab to write a read.me file. I tried to create a link to a header. According to the wiki an id should be automatically created: see here I created a header using: ### 1. This is my Header and tried to create a link to it: [link](#1--this-is-my-header) but it is not working. What am I doing wrong?
In the Documentation you link to we learn that... The IDs are generated from the content of the header according to the following rules: All text is converted to lowercase. All non-word text (e.g., punctuation, HTML) is removed. All spaces are converted to hyphens. Two or more hyphens in a row are converted to one. If a header with the same ID has already been generated, a unique incrementing number is appended, starting at 1. Note rule 4: "Two or more hyphens in a row are converted to one." However, the example you tried has two hyphens in a row (after the 1). Remove one of them and you should have it. [link](#1-this-is-my-header) From time to time I have encountered a unique header which is converted into an ID in some non-obvious way. A quick way to work out the ID is to use your browser's view source and/or inspect tools to view the HTML source code. For example, you might find the following HTML for your example: <h3 id="1-this-is-my-header">1. This is my Header</h3> Then just use the contents of the id attribute with a hash to link to that header: #1-this-is-my-header.
GitLab
51,221,730
150
I created a custom docker image and push it to docker hub but when I run it in CI/CD it gives me this error. exec /usr/bin/sh: exec format error Where : Dockerfile FROM ubuntu:20.04 RUN apt-get update RUN apt-get install -y software-properties-common RUN apt-get install -y python3-pip RUN pip3 install robotframework .gitlab-ci.yml robot-framework: image: rethkevin/rf:v1 allow_failure: true script: - ls - pip3 --version Output Running with gitlab-runner 15.1.0 (76984217) on runner zgjy8gPC Preparing the "docker" executor Using Docker executor with image rethkevin/rf:v1 ... Pulling docker image rethkevin/rf:v1 ... Using docker image sha256:d2db066f04bd0c04f69db1622cd73b2fc2e78a5d95a68445618fe54b87f1d31f for rethkevin/rf:v1 with digest rethkevin/rf@sha256:58a500afcbd75ba477aa3076955967cebf66e2f69d4a5c1cca23d69f6775bf6a ... Preparing environment 00:01 Running on runner-zgjy8gpc-project-1049-concurrent-0 via 1c8189df1d47... Getting source from Git repository 00:01 Fetching changes with git depth set to 20... Reinitialized existing Git repository in /builds/reth.bagares/test-rf/.git/ Checking out 339458a3 as main... Skipping Git submodules setup Executing "step_script" stage of the job script 00:00 Using docker image sha256:d2db066f04bd0c04f69db1622cd73b2fc2e78a5d95a68445618fe54b87f1d31f for rethkevin/rf:v1 with digest rethkevin/rf@sha256:58a500afcbd75ba477aa3076955967cebf66e2f69d4a5c1cca23d69f6775bf6a ... exec /usr/bin/sh: exec format error Cleaning up project directory and file based variables 00:01 ERROR: Job failed: exit code 1 any thoughts on this to resolve the error?
The problem is that you built this image for arm64/v8 -- but your runner is using a different architecture. If you run: docker image inspect rethkevin/rf:v1 You will see this in the output: ... "Architecture": "arm64", "Variant": "v8", "Os": "linux", ... Try building and pushing your image from your GitLab CI runner so the architecture of the image will match your runner's architecture. Alternatively, you can build for multiple architectures using docker buildx . Alternatively still, you could also run a GitLab runner on ARM architecture so that it can run the image for the architecture you built it on. With modern versions of docker, you may also explicitly control the platform docker uses. Docker will use platform emulation if the specified platform is different from your native platform. For example: Using the DOCKER_DEFAULT_PLATFORM environment variable: DOCKER_DEFAULT_PLATFORM="linux/amd64" docker build -t test . Using the --platform argument, either in the CLI or in your dockerfile: docker build --platform="linux/amd64" -t test . FROM --platform=linux/amd64 ubuntu:jammy Systems with docker desktop installed should already be able to do this. If your system is using docker without docker desktop, you may need to install the docker-buildx plugins explicitly.
GitLab
73,285,601
150
What is the difference between Jenkins and other CI like GitLab CI, drone.io coming with the Git distribution. On some research I could only come up that GitLab community edition doesn't allow Jenkins to be added, but GitLab enterprise edition does. Are there any other significant differences?
This is my experience: At my work we manage our repositories with GitLab EE and we have a Jenkins server (1.6) running. In the basis they do pretty much the same. They will run some scripts on a server/Docker image. TL;DR; Jenkins is easier to use/learn, but it has the risk to become a plugin hell Jenkins has a GUI (this can be preferred if it has to be accessible/maintainable by other people) Integration with GitLab is less than with GitLab CI Jenkins can be split off your repository Most CI servers are pretty straight forward (concourse.ci, gitlab-ci, circle-ci, travis-ci, drone.io, gocd and what else have you). They allow you to execute shell/bat scripts from a YAML file definition. Jenkins is much more pluggable, and comes with a UI. This can be either an advantage or disadvantage, depending on your needs. Jenkins is very configurable because of all the plugins that are available. The downside of this is that your CI server can become a spaghetti of plugins. In my opinion chaining and orchestrating of jobs in Jenkins is much simpler (because of the UI) than via YAML (calling curl commands). Besides that Jenkins supports plugins that will install certain binaries when they are not available on your server (don't know about that for the others). Nowadays (Jenkins 2 also supports more "proper ci" with the Jenkinsfile and the pipline plugin which comes default as from Jenkins 2), but used to be less coupled to the repository than i.e. GitLab CI. Using YAML files to define your build pipeline (and in the end running pure shell/bat) is cleaner. The plug-ins available for Jenkins allow you to visualize all kinds of reporting, such as test results, coverage and other static analyzers. Of course, you can always write or use a tool to do this for you, but it is definitely a plus for Jenkins (especially for managers who tend to value these reports too much). Lately I have been working more and more with GitLab CI. At GitLab they are doing a really great job making the whole experience fun. I understand that people use Jenkins, but when you have GitLab running and available it is really easy to get started with GitLab CI. There won't be anything that will integrate as seamlessly as GitLab CI, even though they put quite some effort in third-party integrations. Their documentation should get you started in no time. The threshold to get started is very low. Maintenance is easy (no plugins). Scaling runners is simple. CI fully part of your repository. Jenkins jobs/views can get messy. Some perks at the time of writing: Only support for a single file in the community edition. Multiples files in the enterprise edition.
GitLab
37,429,453
140
Question: Is there a way to automatically checkout git submodules via the same method (ssh or https) as the main repository? Background: We have a non-public gitlab repository (main) that has a submodule (utils) which is also hosted as a non-public gitlab repository on the same server. Those repositories can be accessed either via ssh or https: [email protected]:my/path/repo.git https://gitlabserver.com/my/path/repo.git Both variants obviously require different forms of authentication and depending on the client computer and the user, one or the other is preferred. For the top level repository (main) that is not an issue, as anyone can choose the method he or she prefers, but for the sub module this depends on the .gitmodules file and hence is (initially) the same for all. Now instead of everyone having to adapt the .gitmodules file to whatever they prefer and make sure they don't accidentally commit those changes, it would be nice, if there was a way to just specify the server and repo path and git chooses either the same method that is used for the main repo, or something that can be set in gitconfig.
I finally solved this problem by specifying the submodules url as a relative path: So lets say your main git repository can be reached either via https://gitlabserver.com/my/path/main.git or via [email protected]:my/path/main.git And the .gitmodules file looks like this: [submodule "utils"] path = libs/utils url = https://gitlabserver.com/my/path/utils.git That would mean that even when you check out the main application via ssh, the submodule utils would still be accessed via https. However, you can replace the absolute path with a relative one like this: [submodule "utils"] path = libs/utils url = ../utils.git and from now on use either git clone --recursive https://gitlabserver.com/my/path/main.git or git clone --recursive [email protected]:my/path/main.git to get the whole repository structure which ever way you want. Obviously that doesn't work for cases where the relative ssh and the https paths are not the same, but at least for gitlab hosted repositories this is the case. This is also handy if you (for whatever reason) mirror your repository structure at two different remote sites.
GitLab
40,841,882
140
I use GitLab on their servers. I would like to download my latest built artifacts (build via GitLab CI) via the API like this: curl --header "PRIVATE-TOKEN: 9koXpg98eAheJpvBs5tK" "https://gitlab.com/api/v3/projects/1/builds/8/artifacts" Where do I find this project ID? Or is this way of using the API not intended for hosted GitLab projects?
I just found out an even easier way to get the project id: just see the HTML content of the gitlab page hosting your project. There is an input with a field called project_id, e.g: <input type="hidden" name="project_id" id="project_id" value="335" />
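If digging through the page source feels fragile, the same ID can be fetched from the API by URL-encoding the project path. A sketch; <your_token> is a personal access token, and %2F encodes the / between namespace and project:

curl --header "PRIVATE-TOKEN: <your_token>" \
     "https://gitlab.com/api/v4/projects/myuser%2Fmyrepo"

The JSON response includes the numeric id field you can then use in other API calls.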
GitLab
39,559,689
137
I have one project on Gitlab and I worked with it for the last few days! Now i want pull project on my home PC but show me below error : Invocation failed Unexpected Response from Server: Unauthorized java.lang.RuntimeException: Invocation failed Unexpected Response from Server: Unauthorized at org.jetbrains.git4idea.nativessh.GitNativeSshAskPassXmlRpcClient.handleInput(GitNativeSshAskPassXmlRpcClient.java:34) at org.jetbrains.git4idea.nativessh.GitNativeSshAskPassApp.main(GitNativeSshAskPassApp.java:30) Caused by: java.io.IOException: Unexpected Response from Server: Unauthorized at org.apache.xmlrpc.LiteXmlRpcTransport.sendRequest(LiteXmlRpcTransport.java:231) at org.apache.xmlrpc.LiteXmlRpcTransport.sendXmlRpc(LiteXmlRpcTransport.java:90) at org.apache.xmlrpc.XmlRpcClientWorker.execute(XmlRpcClientWorker.java:72) at org.apache.xmlrpc.XmlRpcClient.execute(XmlRpcClient.java:194) at org.apache.xmlrpc.XmlRpcClient.execute(XmlRpcClient.java:185) at org.apache.xmlrpc.XmlRpcClient.execute(XmlRpcClient.java:178) My android studio version is 3.4 !
Enabling credentials helper worked for me, using Android Studio 3.6.2 on Windows 10 AndroidStudio -> File -> Settings -> Git -> Use credential helper
GitLab
55,783,219
132
I had created a private repository which I then changed to public repository. However, I could not find any way to release. Is it possible to create releases in GitLab? If so, how are they done?
To create a release on the GitLab website: Go to your repository In the menu choose Repository > Tags Add a tag for the version of your app. For example, v1.3.1. Add a message (title) about the release. For example, Release 1.3.1. Add a note that describes the details of the release. (Not optional. Adding a note to a tag is what makes it a release.) Click Create tag. The release will now show up under Project > Releases. Read more at the GitLab documentation. GitLab recommends that you use the Release API now, but their documentation is hard to follow. It would be the preferred method for automating everything with CI/CD, though.
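For reference, a minimal sketch of that API route (assumptions: API v4, a token with api scope, an existing tag v1.3.1, and placeholder host/project ID):

curl --request POST \
     --header "PRIVATE-TOKEN: <your_token>" \
     --header "Content-Type: application/json" \
     --data '{"tag_name": "v1.3.1", "name": "Release 1.3.1", "description": "Notes for this release"}' \
     "https://gitlab.example.com/api/v4/projects/<project_id>/releases"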
GitLab
29,520,905
127
I want to create a public repo to put some sample files from my main repo (private). Is there any way to soft link few folders from a git repo to another git repo?
Then you should use submodules for this task. Submodules are separate git repositories under the same root; this way you can manage two different projects at folder level inside the root repository.

Submodules allow foreign repositories to be embedded within a dedicated subdirectory of the source tree, always pointed at a particular commit.

git submodule

Break your big project into subprojects as you did so far. Now add each subproject to your main project using:

git submodule add <url>

Once the project is added to your repo, you have to init and update it:

git submodule init
git submodule update

As of Git 1.8.2 the new option --remote was added:

git submodule update --remote --merge

It will fetch the latest changes from upstream in each submodule, merge them in, and check out the latest revision of the submodule. As the docs describe it:

--remote

This option is only valid for the update command. Instead of using the superproject's recorded SHA-1 to update the submodule, use the status of the submodule's remote-tracking branch. This is equivalent to running git pull in each submodule.

However, how would I push a commit in the scenario of a bug fix in C which affects the code shared with the parent layers?

Again: using a submodule will place your code inside your main project as part of its content. The difference between having it locally inside a folder and having it as part of a submodule is that in a submodule the content is managed (committed) in a different standalone repository.

This is an illustration of a submodule - a project inside another project in which each project is a standalone project.

git subtree

Git subtree allows you to insert any repository as a sub-directory of another one.

Very similar to submodule, but the main difference is where your code is managed. In submodules the content is placed inside a separate repo and is managed there, which also allows you to clone it into many other repos.

subtree manages the content as part of the root project, not in a separate project.

Instead of writing down how to set it up and how to use it, you can simply read this excellent post which explains it all: https://developer.atlassian.com/blog/2015/05/the-power-of-git-subtree/
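Since the subtree section above defers to an external post for details, here is a minimal sketch of the two core commands; the prefix path and URL are illustrative placeholders:

# add the other repository's content under sample/ as a subtree
git subtree add --prefix=sample https://gitlab.example.com/user/sample-repo.git master --squash

# later, pull upstream changes into that folder
git subtree pull --prefix=sample https://gitlab.example.com/user/sample-repo.git master --squash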
GitLab
36,554,810
127
Today I've enabled Gitlab's 2nd-factor authentication. After that, since I logged in the Gitlab website, I need to use my cell phone to pass a 6-digits plus my password, that's good, it makes me feel safe. However, when I use the general operations, for example git clone some-repo.git, I got the error: Cloning into 'some-repo'... remote: HTTP Basic: Access denied remote: You must use a personal access token with 'api' scope for Git over HTTP. remote: You can generate one at https://gitlab.com/profile/personal_access_tokens fatal: Authentication failed for 'some-repo.git' Then I try existing cloned local repo, using git pull, the same error occurs. Before I enabled the 2nd-factor authentication, all the above operation worked fine. Flowing the above error's instructions, I went to the mentioned address: https://gitlab.com/profile/personal_access_tokens. I created the following token, and save the token's key. However, I don't know what to do with this key. Can someone tell me how to use this key to enable the basic operations like git pull, git clone, git push etc... Edit I had many repos on local before I enabled the 2nd-factor authentication. I want these to work too.
As explained in "using gitlab token to clone without authentication", you can clone a GitLab repo using your Personal Access Token like this (substitute your own token):

git clone https://oauth2:<your_access_token>@gitlab.com/yourself/yourproject.git

As for how to update your existing clones to use the GitLab Personal Access Token, you should edit your .git/config file in each local git directory, which will have an entry something like this:

[remote "origin"]
    url = https://<your_username>@gitlab.com/yourself/yourproject.git

Change the url:

[remote "origin"]
    url = https://oauth2:<your_access_token>@gitlab.com/yourself/yourproject.git

Now you can continue using this existing git clone as you did before you enabled 2FA.
GitLab
51,658,549
121
We have a project that is composed of multiple (non-public) repositories. To build the whole project, the build system needs to have the files of all repositories (master branches). Is there a way I can configure GitLab CI to provide the repositories I need? I guess I could do a git fetch or similar during the CI build, but how to deal with authentication then?
If you are running GitLab version 8.12 or later, the permissions model was reworked. Along with this new permission model comes the CI environment variable CI_JOB_TOKEN. The premium version of GitLab uses this environment variable for triggers, but you can use it to clone repos:

dummy_stage:
  script:
    - git clone https://gitlab-ci-token:${CI_JOB_TOKEN}@gitlab.instance/group/project.git
GitLab
32,995,578
116
I have stored a Markdown file and an image file in a Git repo as follows: readme.markdown images/ image.png I reference the image from readme.markdown like this: ![](./images/image.png) This renders as expected in ReText, but does not render when I push the repo to GitLab. How can I reference the image from the Markdown file so that it renders when viewed in GitLab?
![](images/image.png) without the ./ works for me: https://gitlab.com/cirosantilli/test/blob/bffbcc928282ede14dcb42768f10a7ef21a665f1/markdown.md#image I have opened a request for this to be allowed at: http://feedback.gitlab.com/forums/176466-general/suggestions/6746307-support-markdown-image-path-in-current-directory-s , but it entered into Internet void when GitLab dumped UserVoice.
GitLab
27,016,052
115
I want to publish some programming documentation I have in a public available repository. This documentation has formatted text, some UML diagrams, and a lot of code examples. I think that GitHub or GitLab are good places to publish this. To publish the UML diagrams, I would like to have some easy way to keep them updated into the repository and visible as images in the wiki. I don't want to keep the diagrams in my computer (or on the cloud), edit them, generate an image, and then publish it every time. Is there a way to put the diagrams in the repository (in PlantUML syntax would be ideal), link them in the markdown text, and make the images auto-update every time the diagram is updated?
Edit: Alternative with Proxy service This way is significantly different and simpler than the answer below; it uses the PlantUML proxy service: http://www.plantuml.com/plantuml/proxy?cache=no&src=https://raw.github.com/plantuml/plantuml-server/master/src/main/webapp/resource/test2diagrams.txt The GitHub markdown for this would be: ![alternative text](http://www.plantuml.com/plantuml/proxy?cache=no&src=https://raw.github.com/plantuml/plantuml-server/master/src/main/webapp/resource/test2diagrams.txt) This method suffers from not being able to specify the SVG format (it defaults to PNG), and it is perhaps not possible to work-around the caching bug mentioned in the comments. After trying the other answer, I discovered the service to be slow and seemingly not up to the latest version of PlantUML. I've found a different way that's not quite as straightforward, but it works via PlantUML.com's server (in the cloud). As such, it should work anywhere you can hotlink to an image. It exploits the !includeurl function and is essentially an indirection. The markdown file links to a PlantUML source that includes the diagram's source. This method allows modifying the source in GitHub, and any images in the GitHub markdown files will automatically update. But it requires a tricky step to create the URL to the indirection. Get the URL to the raw PlantUML source, e.g., https://raw.githubusercontent.com/linux-china/plantuml-gist/master/src/main/uml/plantuml_gist.puml (using the example in the joanq's answer) Go to http://plantuml.com/plantuml/form (or PlantText.com) and create a one-line PlantUML source that uses the !includeurl URL-TO-RAW-PLANTUML-SOURCE-ON-GITHUB operation. Continuing with the example URL, the PlantUML (meta)source is: !includeurl https://raw.githubusercontent.com/linux-china/plantuml-gist/master/src/main/uml/plantuml_gist.puml Copy the image URL from PlantUML.com's image, e.g., http://plantuml.com:80/plantuml/png/FSfB2e0m303Hg-W1RFPUHceiDf36aWzwVEl6tOEPcGGvZXBAKtNljW9eljD9NcCFAugNU15FU3LWadWMh2GPEcVnQBoSP0ujcnS5KnmaWH7-O_kEr8TU and paste it into your GitHub markdown file. This URL won't change. ![PlantUML model](http://plantuml.com:80/plantuml/png/3SNB4K8n2030LhI0XBlTy0YQpF394D2nUztBtfUHrE0AkStCVHu0WP_-MZdhgiD1RicMdLpXMJCK3TC3o2iEDwHSxvNVjWNDE43nv3zt731SSLbJ7onzbyeF) Bonus: You can even get access to the SVG format by modifying the plantuml/png/ part of the URL to be plantuml/svg/ as follows ![PlantUML model](http://plantuml.com:80/plantuml/svg/3SNB4K8n2030LhI0XBlTy0YQpF394D2nUztBtfUHrE0AkStCVHu0WP_-MZdhgiD1RicMdLpXMJCK3TC3o2iEDwHSxvNVjWNDE43nv3zt731SSLbJ7onzbyeF) Example on GitHub https://github.com/fuhrmanator/course-activity-planner/blob/master/ooad/overview.md Caveat with private repos As davidbak pointed out in a comment, the raw file in a private repo will have a URL with token=<LONGSTRINGHERE> in it, and this token changes as the source file updates. Unfortunately, the markdown breaks when this happens, so you have to update the Readme file after you commit the file to GitHub, which is not a great solution.
GitLab
32,203,610
114
I formatted my Windows 7 laptop and in an attempt to have git setup working again, I installed git and source tree application. I deleted the SSH Key from gitlab and regenerated the key using ssh-keygen. But when I try to add the SSH Key at gitlab, it throws the following exception : Key is invalid Fingerprint has already been taken Fingerprint cannot be generated Because of this I am unable to clone the git repository from the source tree application since gitlab is unable to authenticate the SSH key.I followed queries at google groups of gitlab but none of them seem to resolve my issue. Is there any workaround or steps to get the SSH key accepted by gitlab?
In my case, the public key I was trying to add was already used with my 'work' GitLab account, and I received the said error upon trying to use the same key with my 'personal' GitLab account.

Solution - add another public key on the same machine and use that with the 'personal' GitLab account (both on the same machine).

Navigate to the .ssh folder in your profile (this even works on Windows) and run the command:

ssh-keygen -t rsa

When asked for a file name, give another file name, id_rsa_2 (or any other). Press enter for no passphrase (or otherwise). You will end up with id_rsa_2 and id_rsa_2.pub.

Use the command:

cat id_rsa_2.pub

Copy and save the key in the 'personal' GitLab account.

Create a file with no extension in the .ssh folder named 'config', and put this block of configuration in it:

Host gitlab.com
  HostName gitlab.com
  IdentityFile C:\Users\<user name>\.ssh\id_rsa
  User <user name>

Host gitlab_2
  HostName gitlab.com
  IdentityFile C:\Users\<user name>\.ssh\id_rsa_2
  User <user name>

Now whenever you want to use the 'personal' GitLab account, simply change the alias in the git URLs for actions against remote servers. For example, instead of using

git clone git@gitlab.com:...

simply use

git clone git@gitlab_2:...

Doing that will use the second configuration for gitlab.com (from the 'config' file) and will use the new id_rsa_2 key pair for authentication.

Find more about the above commands at this link: https://clubmate.fi/how-to-setup-and-manage-multiple-ssh-keys/
GitLab
23,537,881
112
According to the official gitlab documentation, one way to enable docker build within ci pipelines, is to make use of the dind service (in terms of gitlab-ci services). However, as it is always the case with ci jobs running on docker executors, the docker:latest image is also needed. Could someone explain: what is the difference between the docker:dind and the docker:latest images? (most importantly): why are both the service and the docker image needed (e.g. as indicated in this example, linked to from the github documentation) to perform e.g. a docker build whithin a ci job? doesn't the docker:latest image (within which the job will be executed!) incorporate the docker daemon (and I think the docker-compose also), which are the tools necessary for the commands we need (e.g. docker build, docker push etc)? Unless I am wrong, the question more or less becomes: Why a docker client and a docker daemon cannot reside in the same docker (enabled) container
what is the difference between the docker:dind and the docker:latest images? docker:latest contains everything necessary to connect to a docker daemon, i.e., to run docker build, docker run and such. It also contains the docker daemon but it's not started as its entrypoint. docker:dind builds on docker:latest and starts a docker daemon as its entrypoint. So, their content is almost the same but through their entrypoints one is configured to connect to tcp://docker:2375 as a client while the other is meant to be used for a daemon. why are both the service and the docker image needed […]? You don't need both. You can just use either of the two, start dockerd as a first step, and then run your docker build and docker run commands as usual like I did here; apparently this was the original approach in gitlab at some point. But I find it cleaner to just write services: docker:dind instead of having a before_script to setup dockerd. Also you don't have to figure out how to start & install dockerd properly in your base image (if you are not using docker:latest.) Declaring the service in your .gitlab-ci.yml also lets you swap out the docker-in-docker easily if you know that your runner is mounting its /var/run/docker.sock into your image. You can set the protected variable DOCKER_HOST to unix:///var/run/docker.sock to get faster builds. Others who don't have access to such a runner can still fork your repository and fallback to the dind service without modifying your .gitlab-ci.yml.
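To make the dind setup concrete, here is a minimal sketch of a job that builds an image with the service; the image tags and the TLS-disabled port 2375 are assumptions (newer docker:dind versions default to TLS on 2376), so adjust for your runner configuration:

build-image:
  image: docker:latest        # provides the docker client
  services:
    - docker:dind             # provides the daemon, reachable at tcp://docker:2375
  variables:
    DOCKER_HOST: tcp://docker:2375
  script:
    - docker info             # sanity check: client can reach the daemon
    - docker build -t my-image .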
GitLab
47,280,922
111
I have set up and we are running a default install of GitLab v6.0.1 (we're about to upgrade as well). It was a "Production" setup, following this guide precisely to the letter: https://github.com/gitlabhq/gitlabhq/blob/master/doc/install/installation.md

Now, how do we safely change the URL of a working install? Apparently our URL is very long and we've come up with a new URL. I've edited a number of configuration files and the "Application Status Checks" report everything is OK. I've rebooted the server to ensure things are still working. I can access Nginx just fine, over our original SSL. I can browse the GitLab site, create a repository, etc. I can fork and commit just fine. It all seems to be OK; but, since this is not a native environment for me, I wanted to double check that I have done everything to rename a GitLab site. The files I've edited are:

/etc/hosts

    127.0.0.1 localhost
    10.0.0.10 wake.domain.com wake
    10.0.0.10 git.domain.com git

/home/git/gitlab/config/gitlab.yml

    production: &base
      gitlab:
        host: git.domain.com

/home/git/gitlab-shell/config.yml

    gitlab_url: "https://git.domain.com"

^- yes, we are on SSL and that is working, even on a new URL

/etc/nginx/sites-available/gitlab

    server {
      server_name git.domain.com
GitLab Omnibus

For an Omnibus install, it is a little different. The correct place in an Omnibus install is:

/etc/gitlab/gitlab.rb

    external_url 'http://gitlab.example.com'

Finally, you'll need to execute sudo gitlab-ctl reconfigure and sudo gitlab-ctl restart so the changes apply.

I was making changes in the wrong places and they were getting blown away. The incorrect paths are:

    /opt/gitlab/embedded/service/gitlab-rails/config/gitlab.yml
    /var/opt/gitlab/.gitconfig
    /var/opt/gitlab/nginx/conf/gitlab-http.conf

Pay attention to those warnings that read:

    # This file is managed by gitlab-ctl. Manual changes will be
    # erased! To change the contents below, edit /etc/gitlab/gitlab.rb
    # and run `sudo gitlab-ctl reconfigure`.
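Put together, the whole rename on an Omnibus install is a short sketch like this (the editor and the URL are placeholders):

    sudo nano /etc/gitlab/gitlab.rb        # set: external_url 'https://git.new-domain.com'
    sudo gitlab-ctl reconfigure
    sudo gitlab-ctl restart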
GitLab
19,456,129
101
Is it possible to move a GitLab repository from one group to another in GitLab? For example, I have https://gitlab.com/my-user/my-repo. I'd like to move it to https://gitlab.com/my-group/another-group/my-repo. Ideally, I'd like to keep all the issues associated with it.
Yes, you can move your GitLab project from one namespace to another:

Your project -> Settings -> General -> Advanced

Then, almost at the end of the list, choose Transfer project.
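If you would rather script it, the GitLab API also exposes a project transfer endpoint; a sketch, where the project ID, target namespace, and token are placeholders (note the namespace path is URL-encoded):

    curl --request PUT --header "PRIVATE-TOKEN: <your-token>" \
         "https://gitlab.com/api/v4/projects/<project-id>/transfer?namespace=my-group%2Fanother-group"

The transferring user needs sufficient permissions in both the source and the target namespace.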
GitLab
54,758,093
98
I implemented the OAuth2 web flow in order to get an access_token from users of my app. With the access_token, I would like to do the following actions:

Get user information
Create a repo for this user
Push code to this repo (using git push)

I already successfully get the user information (1) and create a repo (2). The problem is I can't push code (3); I get an "Unauthorized" error. The commands I run:

    git remote add origin https://gitlab-ci-token<mytoken>@gitlab.com/myuser/myrepo.git
    git push origin master
You should do

    git remote add origin https://<access-token-name>:<access-token>@gitlab.com/myuser/myrepo.git

Note that this stores the access token as plain text in the .git\config file. To avoid this you can use the git credential system, providing the access token name as the "username" and the access token as the "password". This stores the credentials in the git credential system in a more secure way.
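For instance, one way to go through the credential system is a credential helper; a sketch using the built-in store helper (the cache helper is an alternative that keeps the token in memory only):

    git config --global credential.helper store
    git clone https://gitlab.com/myuser/myrepo.git
    # when prompted: username = <access-token-name>, password = <access-token>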
GitLab
42,074,414
96
In my CI pipeline I am generating an artifact public/graph.png that visualises some aspect of my code. In a later step I want to commit that to the repo from within the CI pipeline. Here's the pertinent part of .gitlab-ci.yml:

commit-graph:
  stage: pages
  script:
    - git config user.email "ci@example.com"
    - git config user.name "CI Pipeline"
    - cd /group/project
    - mv public/graph.png .
    - git add graph.png
    - git commit -m "committing graph.png [ci skip]"
    - echo $CI_COMMIT_REF_NAME
    - git push origin HEAD:$CI_COMMIT_REF_NAME

When the pipeline runs within GitLab it fails with:

    $ git config user.email "ci@example.com"
    $ git config user.name "CI Pipeline"
    $ cd /group/project
    $ mv public/graph.png .
    $ git add graph.png
    $ git commit -m "committing graph.png [ci skip]"
    [detached HEAD 22a50d1] committing graph.png [ci skip]
     1 file changed, 0 insertions(+), 0 deletions(-)
     create mode 100644 graph.png
    $ echo $CI_COMMIT_REF_NAME
    jamiet/my-branch
    $ git push origin HEAD:$CI_COMMIT_REF_NAME
    fatal: unable to access 'https://gitlab-ci-token:xxxxxxxx@gitlab.example.com/group/project/project.git/': server certificate verification failed. CAfile: /etc/ssl/certs/ca-certificates.crt CRLfile: none

Not sure what I'm doing wrong, and I don't know enough about SSL to understand that error. Can anyone advise? We are hosting GitLab ourselves, by the way.
Nowadays there is a much cleaner way to solve this without using SSH: use a project-scoped access token (also see this answer). In the GitLab project, create a project-scoped access token so it is linked to the project, not to an individual. Next, store this token as a GitLab CI/CD variable. You can now connect using the following:

push-back-to-remote:
  script:
    - git config user.email "ci-bot@example.com"
    - git config user.name "ci-bot"
    - git remote add gitlab_origin https://oauth2:${ACCESS_TOKEN}@gitlab.example.com/path-to-project.git
    - git add .
    - git commit -m "push back from pipeline"
    - git push gitlab_origin HEAD:main -o ci.skip # prevent triggering pipeline again
GitLab
51,716,044
95
I am following this tutorial [link] to install GitLab on a dedicated server. I need to:

    sudo -u git -H bundle install --deployment --without development test postgres aws

But an error occurred while installing rugged:

    Gem::Installer::ExtensionBuildError: ERROR: Failed to build gem native extension.
        /usr/local/bin/ruby extconf.rb
    checking for cmake... no
    ERROR: CMake is required to build Rugged.
    *** extconf.rb failed ***
    Could not create Makefile due to some reason, probably lack of necessary
    libraries and/or headers. Check the mkmf.log file for more details. You may
    need configuration options.

    Provided configuration options:
        --with-opt-dir
        --without-opt-dir
        --with-opt-include
        --without-opt-include=${opt-dir}/include
        --with-opt-lib
        --without-opt-lib=${opt-dir}/lib
        --with-make-prog
        --without-make-prog
        --srcdir=.
        --curdir
        --ruby=/usr/local/bin/ruby

    Gem files will remain installed in /home/git/gitlab/vendor/bundle/ruby/2.0.0/gems/rugged-0.21.2 for inspection.
    Results logged to /home/git/gitlab/vendor/bundle/ruby/2.0.0/gems/rugged-0.21.2/ext/rugged/gem_make.out

    An error occurred while installing rugged (0.21.2), and Bundler cannot continue.
    Make sure that `gem install rugged -v '0.21.2'` succeeds before bundling.

So I installed rugged -> I installed CMake & pkg-config:

    /home/git/gitlab$ sudo gem install rugged
    Building native extensions. This could take a while...
    Successfully installed rugged-0.21.2
    Parsing documentation for rugged-0.21.2
    unable to convert "\xC0" from ASCII-8BIT to UTF-8 for lib/rugged/rugged.so, skipping
    1 gem installed

But it doesn't change anything:

    Errno::EACCES: Permission denied - /home/git/gitlab/vendor/bundle/ruby/2.0.0/gems/rugged-0.21.2/LICENSE
    An error occurred while installing rugged (0.21.2), and Bundler cannot continue.
    Make sure that `gem install rugged -v '0.21.2'` succeeds before bundling.

Any idea?
For OS X, if you're using Homebrew:

    brew install cmake
    bundle install
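The asker's dedicated server is more likely Linux than OS X; on a Debian/Ubuntu box the equivalent would presumably be:

    sudo apt-get update
    sudo apt-get install cmake pkg-config
    sudo -u git -H bundle install --deployment --without development test postgres aws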
GitLab
27,472,234
93
I've been following this guide on configuring GitLab continuous integration with Jenkins. As part of the process, it is necessary to set the refspec as follows:

    +refs/heads/*:refs/remotes/origin/* +refs/merge-requests/*/head:refs/remotes/origin/merge-requests/*

Why this is necessary is not explained in the post, so I began looking online for an explanation and looked at the official documentation as well as some related StackOverflow questions like this one. In spite of this, I'm still confused: What exactly is refspec? And why is the above refspec necessary – what does it do?
A refspec tells git how to map references from a remote to the local repo.

The value you listed was +refs/heads/*:refs/remotes/origin/* +refs/merge-requests/*/head:refs/remotes/origin/merge-requests/*; so let's break that down.

You have two patterns with a space between them; this just means you're giving multiple rules. (The pro git book refers to this as two refspecs, which is probably technically more correct. However, you just about always have the ability to list multiple refspecs if you need to, so in day to day life it likely makes little difference.)

The first pattern, then, is +refs/heads/*:refs/remotes/origin/* which has three parts:

The + means to apply the rule without failure even if doing so would move a target ref in a non-fast-forward manner. I'll come back to that.
The part before the : (but after the + if there is one) is the "source" pattern. That's refs/heads/*, meaning this rule applies to any remote reference under refs/heads (meaning, branches).
The part after the : is the "destination" pattern. That's refs/remotes/origin/*.

So if the origin has a branch master, represented as refs/heads/master, this will create a remote branch reference origin/master represented as refs/remotes/origin/master. And so on for any branch name (*).

So back to that +... suppose the origin has

    A --- B <--(master)

You fetch and, applying that refspec, you get

    A --- B <--(origin/master)

(If you applied typical tracking rules and did a pull you also have master pointed at B.)

    A --- B <--(origin/master)(master)

Now some things happen on the remote. Someone maybe did a reset that erased B, then committed C, then forced a push. So the remote says

    A --- C <--(master)

When you fetch, you get

    A --- B
     \
      C

and git must decide whether to allow the move of origin/master from B to C. By default it wouldn't allow this because it's not a fast-forward (it would tell you it rejected the pull for that ref), but because the rule starts with + it will accept it.

    A --- B <--(master)
     \
      C <--(origin/master)

(A pull will in this case result in a merge commit.)

The second pattern is similar, but for merge-requests refs (which I assume is related to your server's implementation of PR's; I'm not familiar with it).

More about refspecs: https://git-scm.com/book/en/v2/Git-Internals-The-Refspec
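For context, these fetch rules normally live in the repository's .git/config under the remote; a sketch of what Jenkins effectively configures (the URL is a placeholder):

    [remote "origin"]
        url = https://gitlab.example.com/group/project.git
        fetch = +refs/heads/*:refs/remotes/origin/*
        fetch = +refs/merge-requests/*/head:refs/remotes/origin/merge-requests/*

With the second rule in place, git fetch origin also downloads each merge request's head commit, which is what lets Jenkins build merge requests that don't exist as ordinary branches.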
GitLab
44,333,437
93
I have a private repository in GitLab, and I need to give members of my team access to it. How can I do that using the GitLab web interface? I know how to do this in GitHub, but in GitLab it's somehow different.
Update 2021: This answer is out of date; scroll down for the 2021 info.

UPDATE: GitLab has changed a bit in 2 years, so here is the updated flow:

Click on the project that you want to share.
Click on the Settings tab (the gear icon on the left).
Click on the Members subtab.
Add member, and find the user if it exists on GitLab, or insert an email address to send an invitation.
Select the access level for the user. The possible levels are:
  Guest: can see wiki pages, view and create issues.
  Reporter: can do what the guest can, plus view the code.
  Developer: normal developer access; can develop, but cannot push or merge to protected branches by default.
  Maintainer: can do everything except managing the project.
For more info about the user access levels, refer to the official GitLab help.
(Optional) Set an expiration date for the user's access.

Old instructions:

Click on the project that you want to share.
Click on members.
Add member, and find the user if it exists on GitLab, or insert an email address to send an invitation.
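If you need to do this for many users, the members API may help; a sketch where the project ID, user ID, and token are placeholders (access_level=30 corresponds to Developer):

    curl --request POST --header "PRIVATE-TOKEN: <your-token>" \
         --data "user_id=<user-id>&access_level=30" \
         "https://gitlab.com/api/v4/projects/<project-id>/members"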
GitLab
31,908,222
92
How can I change the project owner in GitLab? There are options in project settings, but in the "transfer" field, it does not recognize any username or anything. Is it possible to change the owner-permissions and root-privileges?
TL;DR

Move your project to a new group where both you and the other user are owners; then the other user must transfer it to his own namespace.

Background

The other answers obviously do not work to transfer a project to a different user, although the comments section of one is enough for someone to figure it out. Also, there is this issue on GitLab itself that provides some insights.

My Situation

I installed and now administer a few instances of GitLab for a few small developer teams, as well as one for my personal projects. As a result, I have run into numerous questions about this. I keep coming back to this question only to realize that it was never actually answered correctly.

The Namespace Problem

The issue that you face when doing this is that there can only be one owner of a project, but to transfer a project you have to own the namespace that you are transferring it to. To my knowledge there is no other way to move a project. For completeness, I'll add that the namespace here is, e.g., "gitlab.com/my-user-name/..." or "gitlab.com/my-group-name/...".

Solution

Because one user cannot "own" another namespace (not even admins), the only option to set up a scenario where two users own the same namespace is with a group. Perform the following steps to accomplish this:

Create a new group.
Add the user that you want to transfer your project to as an owner member of that group.
Transfer your project to that group (a namespace you manage because you are an owner).
Log in as the other user, then transfer the group project to the "other user" namespace.

At this point you will be left as a master in the project. You can now remove yourself from the project entirely if desired.
GitLab
21,579,693
90
While working on a project using GitHub I've fallen in love with GitHub for Windows as a client. Now a new project beckons where I'll be using GitLab instead of GitHub. Will I still be able to use GitHub for Windows as a client for GitLab? After all, they're both based on git, right? If not, what clients are available for GitLab?
Yes, you can use the Windows GitHub client and the GitHub Desktop client with GitLab, BitBucket or any other hosted Git solution. We only use it with HTTPS, and you'll need a valid certificate if you do use HTTPS. It may work with HTTP as well. We never did get SSH to work completely right, since it's tough to inject your own SSH keys into the application.

If you want to clone a repository, you have to drag and drop the HTTP URL onto the GitHub application. I was unable to get the drag-and-drop trick to work on OS X. But you can add locally cloned repositories into the OS X version and then the application works like normal. And OS X supports SSH keys, unlike the Windows version.
GitLab
22,639,815
89
I am building a workflow with GitLab, Jenkins and, probably, Nexus (I need artifact storage). I would like GitLab to store releases/binaries; is that possible in a convenient way? I would not like to have another service from which a release (and documentation) could be downloaded, but to have it somehow integrated with the repository manager, just like releases are handled in e.g. GitHub. Any clues?
Update Oct. 2020: GitLab 13.5 now offers:

Attach binary assets to Releases

If you aren't currently using GitLab for your releases because you can't attach binaries to releases, your workflow just got a lot simpler. You now have the ability to attach binaries to a release tag from the .gitlab-ci.yml. This extends support of Release Assets to include binaries, rather than just asset links or source code. This makes it even easier for your development teams to adopt GitLab and use it to automate your release process.

See Documentation and Issue.

Update Nov. 2015: GitLab 8.2 now supports releases.

With its API, you now can create and update a release associated to a tag. For now, it is only the ability to add release notes (markdown text and attachments) to git tags (aka Releases).

first upload the release binary
create a new release and place a link to the uploaded binary in the description

Update May 2019: GitLab 11.11 introduces an interesting "Guest access to Releases":

It is now possible for guest users of your projects to view releases that you have published on the Releases page. They will be able to download your published artifacts, but are not allowed to download the source code nor see repository information such as tags and commits.

Original answer March 2015

This is in progress, and suggested in suggestion 4156755:

We're accepting merge requests for the minimal proposal by Ciro:

1. For each repository tag under https://github.com/cirosantilli/test/releases/tag/3.0, allow to upload and download a list of files.
2. The upload and download can be done directly from the tag list view.
3. If a tag gets removed, the uploads are destroyed.

(we're not accepting the tag message edit mentioned in recent comments)

The comments to that suggestion include:

What makes it more interesting is the next step. I would really like a way to let the public download artifacts from "releases" without being able to access source code (i.e. make sources private for the project team only, except anything else like wiki, "releases", issue tracker). However, such an additional feature looks more generic and I submitted a separate feature request for that.

Nevertheless, I repeat my point here: while the simplistic version of "releases" is still nice to have, many people can easily set up an external file server and point URLs in the release/tag description to this server outside GitLab. In other words, "releases" may not look attractive now without some future picture of integration.
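On recent GitLab versions, creating a release from CI looks something like this sketch using the release-cli image and the release keyword (the job name and description are placeholders; binary assets would point at files you uploaded, e.g. to the generic package registry):

release_job:
  stage: release
  image: registry.gitlab.com/gitlab-org/release-cli:latest
  rules:
    - if: $CI_COMMIT_TAG
  script:
    - echo "Creating release for $CI_COMMIT_TAG"
  release:
    tag_name: $CI_COMMIT_TAG
    description: 'Release $CI_COMMIT_TAG'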
GitLab
29,013,457
85
Just started out using self-hosted GitLab... it looks like it's going to be really useful moving towards a DevOps workflow. Anyway, after migrating about 20 local Git repositories to the new GitLab server, neatly arranged into 4 groups, I then noticed you can actually have sub-groups within the groups. This would help organisation even further, but I'm struggling to work out how to move the existing projects, which I've spent a day importing and configuring, into a newly created sub-group. Sure, I could just create a new project, copy the files over, commit them into the new project, and spend the time reconfiguring the project. Is there an easy way of moving the existing configured project from the group into the new subgroup?
Turns out the "slug" for a project... the part of the URL after the GitLab server domain name is made up of the "namespace" and the project name. The name space is the group/subgroup path, so I was looking to transfer project to new namespace. So for example if the group is "important-group" and project is called "project". Then the slug will be something like /important-group/project. To then move that to /important-group/sub-group/project, we need to create the new subgroup (down arrow next to the "New project" button). Then change the project namespace. To do this, go to the project page, click the settings button (cog bottom left). Go to the Advanced settings section. And it's just below the rename project option. note: you need to have "owner" privileges on the project otherwise the settings don't appear... maintainer or developer is not enough. Just select the new subgroup and your done! Here is the GitLab docs link with more info on managing projects in GitLab, in case that is useful to anyone.
GitLab
52,778,548
84
We are working on integrating GitLab (Enterprise Edition) into our tooling, but one thing that is still on our wishlist is to create a merge request in GitLab via the command line (or a batch file or similar, for that matter). We would like to integrate this into our tooling. Searching here and on the web led me to believe that this is not possible with native GitLab, and that we need additional tooling for that. Am I correct? And what kind of tooling would I want to use for this?
As of GitLab 11.10, if you're using git 2.10 or newer, you can automatically create a merge request from the command line like this:

    git push -o merge_request.create

More information can be found in the docs.
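GitLab supports further merge-request push options that can be combined with the one above; a sketch (the branch names and title are made up):

    git push -o merge_request.create \
             -o merge_request.target=develop \
             -o merge_request.title="Add feature X" \
             origin my-feature-branch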
GitLab
37,410,262
82
When you work on your .gitlab-ci.yml for a big project, having, for example, a time-consuming testing stage causes a lot of delay. Is there an easy way to disable that stage? Just removing it from the stages definition will make the YAML invalid from GitLab's point of view (since a job still references the removed stage), and in my case results in:

    test job: chosen stage does not exist; available stages are .pre, build, deploy, .post

Since YAML does not support block comments, you'd need to comment out every line of the offending stage. Are there quicker ways?
You could disable all the jobs from your stage using this trick of starting the job name with a dot ('.'). See https://docs.gitlab.com/ee/ci/jobs/index.html#hide-jobs for more details.

.hidden_job:
  script:
    - run test
GitLab
64,992,049
82
How can I delete a commit that I made on GitLab? This commit is no longer the HEAD. If I can't delete it, can I edit it? When it was the HEAD, I tried:

    git reset --soft HEAD
    git reset --soft HEAD^1
    git revert HEAD
    git rebase -i HEAD
    git rebase -i HEAD~1
    git reset --hard HEAD
    git reset --hard Id-Code

I already tried to rebase it, but it still stays on the branch. Now I have just removed it from the HEAD, but it is still there. Is there another command?
    git reset --hard CommitId
    git push -f origin master

The first command will reset your HEAD to CommitId, and the second command will delete all commits after that commit ID on the master branch.

Note: don't forget to add -f to the push, otherwise it will be rejected.
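If the branch is shared with others, rewriting history like this can disrupt their clones. A non-destructive sketch of an alternative is to revert the unwanted commit, which adds a new commit that undoes it (the commit id is a placeholder):

    git revert <commit-id>
    git push origin master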
GitLab
40,245,767
80
I have a GitLab repository and I need to test every merge request locally before merging to the target branch. How can I pull/fetch a merge request as a new branch?
Pull the merge request into a new branch:

    git fetch origin merge-requests/REQUESTID/head:BRANCHNAME

e.g.

    git fetch origin merge-requests/10/head:file_upload

Check out the newly created branch:

    git checkout BRANCHNAME

e.g.

    git checkout file_upload

OR with a single command:

    git fetch origin merge-requests/REQUESTID/head:BRANCHNAME && git checkout BRANCHNAME

e.g.

    git fetch origin merge-requests/18/head:file_upload && git checkout file_upload
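If you review merge requests regularly, you can also add the merge-request refspec to your clone once, so that every git fetch downloads all MR heads; a sketch (the MR number is a placeholder):

    git config --add remote.origin.fetch '+refs/merge-requests/*/head:refs/remotes/origin/merge-requests/*'
    git fetch origin
    git checkout origin/merge-requests/10    # detached HEAD at MR 10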
GitLab
44,992,512
77
I would like to run specific jobs in the .gitlab-ci.yaml if and only if files within specific directories of the repository have changed. Is there a way to do this with GitLab's CI/CD tooling, or would it be easier just to run a custom build script?
The changes policy was introduced in GitLab 11.4. For example:

docker build:
  script: docker build -t my-image:$CI_COMMIT_REF_SLUG .
  only:
    changes:
      - Dockerfile
      - docker/scripts/*
      - dockerfiles/**/*
      - more_scripts/*.{rb,py,sh}

In the scenario above, if you are pushing multiple commits to GitLab to an existing branch, GitLab creates and triggers the docker build job, provided that one of the commits contains changes to either:

The Dockerfile file.
Any of the files inside the docker/scripts/ directory.
Any of the files and subdirectories inside the dockerfiles directory.
Any of the files with rb, py, or sh extensions inside the more_scripts directory.

You can read more in the documentation, which has some more examples.
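On newer GitLab versions the same behaviour is usually expressed with rules:, which supersedes only/except; a rough sketch of the equivalent job:

docker build:
  script: docker build -t my-image:$CI_COMMIT_REF_SLUG .
  rules:
    - changes:
        - Dockerfile
        - docker/scripts/*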
GitLab
51,661,076
77
I am trying to take a git clone from a particular branch of my Bitbucket repository using the below command:

    git clone <url> --branch <branchname>

However, I am getting the below error while taking the clone:

    error: unable to create file foldername/nodemodules/......: Filename too long

I tried resolving this by running the below command in my git cmd:

    git config --system core.longpaths true

But I am getting:

    error: could not lock config file c://.gitconfig: Permission denied
    error: could not lock config file c://.gitconfig: Invalid argument

How do I solve these two errors?
Start Git Bash as Administrator.

Run the command:

    git config --system core.longpaths true

Another way (only for this clone):

    git clone -c core.longpaths=true <repo-url>
GitLab
52,699,177
77
I've read the documentation and some articles, and you might call me dumb, but this is my first time working with a concept like this:

I registered a runner with the tag "testing"
I created a tag "testing" in GitLab
I bound this runner to a particular project
I've also added the same tag, e.g. "testing", in my local repo.

BUT how exactly is running my jobs dependent on those tags? Are all these operations necessary? If I push new code to the repo, the *.yml file is executed anyway, as far as I tested. So what if I want to run a build only when I define a version in a commit? IDK...

    git commit --tags "v. 2.0" -m "this is version 2.0" (probably not right)

But of course it should be universal, so I don't have to always tell which tag to use to trigger the runner, but for example let it recognize numeric values. As you can see, I'm fairly confused... If you could elaborate on how exactly tags work, so I would be able to understand the concept, I would be really grateful.
Tags for GitLab CI and tags for Git are two different concepts.

When you write your .gitlab-ci.yml, you can specify some jobs with the tag testing. If a runner with this tag associated is available, it will pick up the job.

In Git, within your repository, tags are used to mark a specific commit. They are often used to tag a version.

The two concepts can be mixed up when you use tags (in Git) to start your pipeline in GitLab CI. In your .gitlab-ci.yml, you can specify the section only with tags. Refer to the GitLab documentation for tags and only.

An example is when you push a tag with git:

    $ git tag -a 1.0.0 -m "1.0.0"
    $ git push origin 1.0.0

And a job in .gitlab-ci.yml like this:

compile:
  stage: build
  only: [tags]
  script:
    - echo Working...
  tags: [testing]

would start using a runner with the testing tag.

By my understanding, what is missing in your steps is to assign the tag testing to your runner. To do this, go in GitLab into your project. Next to Wiki, click on Settings. Go to CI/CD Pipelines and there you have your runner(s). Next to its GUID, click on the pen icon. On the next page the tags can be modified.
GitLab
43,638,979
72
I have GitLab & GitLab CI set up to host and test some of my private repos. For my composer modules under this system, I have Satis set up to resolve my private packages. Obviously these private packages require an ssh key to clone them, and I have this working in the terminal - I can run composer install and get these packages, so long as I have the key added with ssh-add in the shell. However, when running my tests in GitLab CI, if a project has any of these dependencies the tests will not complete as my GitLab instance needs authentication to get the deps (obviously), and the test fails saying Host key verification failed. My question is how do I set this up so that when the runner runs the test it can authenticate to gitlab without a password? I have tried putting a password-less ssh-key in my runners ~/.ssh folder, however the build wont even add the key, "eval ssh-agent -s" followed by ssh-add seems to fail saying the agent isn't running...
See also other solutions:

git submodule permission (see Marco A.'s answer)
job token and override repo in git config (see a544jh's answer)

Here is a full howto with SSH keys:

General Design

generating a pair of SSH keys
adding the private one as a secure environment variable of your project
making the private one available to your test scripts on GitLab-CI
adding the public one as a deploy key on each of your private dependencies

Generating a pair of public and private SSH keys

Generate a pair of public and private SSH keys without passphrase:

    ssh-keygen -b 4096 -C "<name of your project>" -N "" -f /tmp/name_of_your_project.key

Adding the private SSH key to your project

You need to add the key as a secure environment variable to your project as follows:

browse https://<gitlab_host>/<group>/<project_name>/variables
click on "Add a variable"
fill the text field Key with SSH_PRIVATE_KEY
fill the text field Value with the private SSH key itself
click on "Save changes"

Exposing the private SSH key to your test scripts

In order to make your private key available to your test scripts you need to add the following to your .gitlab-ci.yml file:

before_script:
  # install ssh-agent
  - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
  # run ssh-agent
  - eval $(ssh-agent -s)
  # add ssh key stored in SSH_PRIVATE_KEY variable to the agent store
  - ssh-add <(echo "$SSH_PRIVATE_KEY")
  # disable host key checking (NOTE: makes you susceptible to man-in-the-middle attacks)
  # WARNING: use only in docker container, if you use it with shell you will overwrite your user's ssh config
  - mkdir -p ~/.ssh
  - echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config

The code snippet comes from the GitLab documentation.

Adding the public SSH key as a deploy key to all your private dependencies

You need to register the public SSH key as a deploy key to all your private dependencies as follows:

browse https://<gitlab_host>/<group>/<dependency_name>/deploy_keys
click on "New deploy key"
fill the text field Title with the name of your project
fill the text field Key with the public SSH key itself
click on "Create deploy key"
GitLab
25,689,231
71
I have a directory that I want to turn into a git project. I created a new project in GitLab and then did the following:

    git init
    git remote add origin git@gitlab.com:a/b/c.git
    git add .
    git commit -m "Initial commit"
    git push -u origin master

In addition, I created the following .gitignore file:

    *
    !*/scripts
    !*/jobs

After running git push -u origin master I got the following error:

    Counting objects: 33165, done.
    Delta compression using up to 2 threads.
    Compressing objects: 100% (32577/32577), done.
    Writing objects: 100% (33165/33165), 359.84 MiB | 1.70 MiB/s, done.
    Total 33165 (delta 21011), reused 0 (delta 0)
    remote: Resolving deltas: 100% (21011/21011), done.
    remote: GitLab:
    remote: A default branch (e.g. master) does not yet exist for a/b/c
    remote: Ask a project Owner or Maintainer to create a default branch:
    remote:
    remote: https://gitlab.com/a/b/c/project_members
    remote:
    To gitlab.com:a/b/c.git
    ! [remote rejected] master -> master (pre-receive hook declined)
    error: failed to push some refs to 'git@gitlab.com:a/b/c.git'

What could be the issue? Please advise.
This is linked to issue 27456 and merge request 6608:

document the need to be owner or have the master permission level for the initial push

So it might be a permission-level issue, not a branch issue. See commit 81ee443:

You will need to be owner or have the master permission level for the initial push, as the master branch is automatically protected.
GitLab
52,026,119
71
Recently, I created newbranch and opened a merge request to the master branch. Before the team lead accepted the merge request into master, another team member committed another fix to the same branch (newbranch). After that, I committed my local changes, pulled the changes in newbranch into my local branch, and pushed my local commit to newbranch. My team lead told me to rebase my branch to an earlier version and resolve conflicts. I don't know what to do now. Any idea?
Starting on your newBranch:

git checkout master to get back on the master branch
git pull origin master to get the most up-to-date version of the master branch
git checkout newBranch to get back on your newBranch
git rebase origin/master -i to perform an interactive rebase. The command will take you through and let you pick commits, rename them, squash them, etc. Assuming you will want to keep them all, it will pause whenever there are merge conflicts; you'll then have to resolve them in your text editor, and it will tell you where the conflicts occur. You will have to add those files after fixing them, then do git rebase --continue to proceed with the rebase.

When you're done with the rebase, your newBranch will be synced up with master and have any commits in master that weren't there when you started your work, and all merge conflicts will be resolved so that you can easily merge your newBranch.
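The conflict-resolution loop during the rebase typically looks like this sketch (the file name is a placeholder):

    # fix the conflicted files in your editor, then:
    git add path/to/conflicted-file
    git rebase --continue
    # repeat until the rebase finishes; to give up and restore the branch:
    git rebase --abort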
GitLab
53,066,369
71
I have a job in my pipeline whose script has a few very important steps:

mvn test to run JUnit tests against my code
junit2html to convert the XML result of the tests to an HTML format (the only possible way to see the results, as my pipelines aren't done through MRs) that is uploaded to GitLab as an artifact
docker rm to destroy a container created earlier in the pipeline

My problem is that when my tests fail, the script stops immediately at mvn test, so the junit2html step is never reached, meaning the test results are never uploaded in the event of failure, and docker rm is never executed either, so the container remains and messes up subsequent pipelines as a result.

What I want is to be able to keep a job going till the end even if the script fails at some point. Basically, the job should still count as failed in GitLab CI/CD, but its entire script should be executed. How can I configure this?
In each step that you need to continue even if the step fails, you can add a flag to your .gitlab-ci.yml file in that step. For example:

...
Unit Tests:
  stage: tests
  only:
    - branches
  allow_failure: true
  script:
    - ...

It's that allow_failure: true flag that will continue the pipeline even if that specific step fails. The GitLab CI documentation about allow_failure is here: https://docs.gitlab.com/ee/ci/yaml/#allow_failure

Update from comments:

If you need the step to keep going after a failure, and be aware that something failed, this has worked well for me:

./script_that_fails.sh || FAILED=true

if [ $FAILED ]
then
  ./do_something.sh
fi
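Note that allow_failure lets the pipeline continue but marks the job as allowed to fail, whereas the asker wants the job to still count as failed. A sketch that may fit that case better combines after_script, which runs even when the main script fails, with artifacts:when: always (the file names and container name are placeholders):

test:
  script:
    - mvn test                                    # the job still fails if this fails
  after_script:
    - junit2html test-results.xml report.html || true
    - docker rm -f my-container || true
  artifacts:
    when: always                                  # upload the report even on failure
    paths:
      - report.html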
GitLab
59,180,675
71
GitLab is a free, open-source way to host private .git repositories, but it does not seem to work with Go. When you create a project it generates a URL of the form:

    git@1.2.3.4:private-developers/project.git

where:

1.2.3.4 is the IP address of the GitLab server
private-developers is a user group which has access to the private repo

Golang 1.2.1 doesn't seem to understand this syntax.

    go get git@1.2.3.4:private-developers/project.git

results in:

    package git@1.2.3.4:private-developers/project.git: unrecognized import path "git@1.2.3.4:private-developers/project.git"

Is there a way to get this to work?
Run this command:

    git config --global url."git@1.2.3.4:".insteadOf "https://1.2.3.4/"

Assuming you have the correct privileges to git clone the repository, this will make go get work for all repos on server 1.2.3.4.

I tested this with go versions 1.6.2, 1.8, and 1.9.1.
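On module-aware Go (1.13+), you would presumably also tell the toolchain not to go through the public module proxy for that host; a sketch, assuming the same server:

    go env -w GOPRIVATE=1.2.3.4/*
    git config --global url."git@1.2.3.4:".insteadOf "https://1.2.3.4/"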
GitLab
29,707,689
68
I'm new to source control; in the past, I've manually backed up copies of files and made changes on clones, then transferred changes manually to the master files once debugged. I realize this is similar to how branches work in Git repositories; however, I've never used one. I downloaded Git and made an account on GitLab, and started a new project. My site is hosted on a local server and my files are saved locally. How do I connect these files to a Git repository and continue developing with branches?
To register a project as a local Git repository, the first thing you need to do is perform the following command at your project root:

    git init

This will create a .git folder at your project root and will allow you to start using Git in that repository.

If you want to "push" your local Git repository to a remote Git server (in your case, to GitLab), you'll need to perform the following command first:

    git remote add origin <Repository_Location>

You can call origin whatever you like, really, but origin is the standard name for Git remote repositories. <Repository_Location> is the URL to your remote repository. For example, if I had a new project called MyNewProject that I wanted to push to GitLab, I'd perform:

    git remote add origin https://gitlab.com/Harmelodic/MyNewProject.git

You can then "push" your changes from your local machine to your remote repo using the following command:

    git push origin <branch_name>

where branch_name is the name of the branch you want to push, e.g. master.

You can find a good beginners guide to Git here.
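Putting it all together for a first push, a sketch (the URL is a placeholder):

    cd /path/to/your/site
    git init
    git add .
    git commit -m "Initial commit"
    git remote add origin https://gitlab.com/<user>/<project>.git
    git push -u origin master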
GitLab
36,132,956
68
In my GitLab CI, I have a stage that triggers another stage by an API call trigger, and I want to pass the current branch name as a parameter to the other project holding the trigger. I used CI_COMMIT_REF_NAME for this; it seemed to work, but now that I call the stage only when merging the branch to master, CI_COMMIT_REF_NAME always says "master". In the documentation it says "The branch or tag name for which project is built"; do I understand it correctly that it kind of holds the target branch of my working branch?

I tried to get the current branch in GitLab CI with

    git symbolic-ref HEAD | sed 's!refs\/heads\/!!'

as well, but it was empty. Is CI_COMMIT_REF_NAME the variable I am looking for and something is going wrong, or do I need something else? Thanks in advance.
I'm not sure what you mean by "a stage that triggers another stage by API call trigger". But, generally speaking, GitLab CI jobs are part of a CI pipeline, and CI pipelines are created for a branch or tag. The CI_COMMIT_REF_NAME variable contains the name of the branch or tag that the pipeline was created for.

CI_MERGE_REQUEST_TARGET_BRANCH_NAME is "the target branch name of the merge request", from the GitLab predefined variables reference.
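If the downstream pipeline is started through GitLab's pipeline trigger API, the branch name can be forwarded explicitly as a variable; a sketch where the host, project id, and variable name are placeholders:

    curl --request POST \
         --form "token=$TRIGGER_TOKEN" \
         --form "ref=master" \
         --form "variables[UPSTREAM_BRANCH]=$CI_COMMIT_REF_NAME" \
         "https://gitlab.example.com/api/v4/projects/<project-id>/trigger/pipeline"

The triggered pipeline then sees UPSTREAM_BRANCH as an ordinary CI variable.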
GitLab
52,169,219
67
I would like to set up my project_dev CI to run only for 3 branches and for specific kinds of tags like: dev_1.0, dev_1.1, dev_1.2. How can I achieve that? This is what I have now:

project_dev:
  stage: dev
  script:
    - export
    - bundle exec pod repo update
    - bundle exec pod install
    - bundle exec fastlane crashlytics_project_dev
  after_script:
    - rm -rf ~/Library/Developer/Xcode/Archives || true
  when: manual
  only:
    - develop
    - release
    - master
    # here I need to add a condition to fire that stage additionally only for specific tags. How can I set up a regexp here?
  tags:
    - iOS

When I type it like:

only:
  - branches
  - /^dev_[0-9.]*$/

it also runs the CI for tags like dev1.2, but it should not. Why? Is there a regexp for tags at all?
Sounds like a regular expression question. I just created a project on gitlab.com to test the regular expression.

File: .gitlab-ci.yml

project_dev:
  # Irrelevant keys skipped
  script:
    - echo "Hello World"
  only:
    - develop
    - release
    - master
    - /^dev_[0-9]+(?:\.[0-9]+)+$/ # regular expression

I pushed all of the tags you mentioned to test this regular expression. As you can see, it will match tags like dev_1.0 and dev_1.1, but the job project_dev will not be triggered by the tag dev1.2. You can check the result on the pipeline pages.
GitLab
52,830,653
67
Is it possible to have GitLab set up to automatically sync (mirror) a repository hosted at another location? At the moment, the easiest way I know of doing this involves manually pushing to the two repositories (GitLab and the other one), but this is time-consuming and error-prone.

The greatest problem is that a mirror can get out of sync if two users concurrently push changes to the two different repositories. The best method I can come up with to prevent this issue is to ensure users can only push to one of the repositories.
Update Dec 2016: Mirroring is supported with GitLab EE 8.2+: see "Repository mirroring".

As commented by Xiaodong Qi:

This answer can be simplified without using any command lines (just set it up in the GitLab repo management interface)

Original answer (January 2013):

If your remote mirror repo is a bare repo, then you can add a post-receive hook to your gitlab-managed repo, and push to your remote repo in it:

    #!/bin/bash
    git push --mirror git@example.com:/path/to/repo.git

As Gitolite (used by GitLab) mentions:

if you want to install a hook in only a few specific repositories, do it directly on the server.

which would be in:

    ~git/repositories/yourRepo.git/hook/post-receive

Caveat (Update October 2014)

Ciro Santilli points out in the comments:

Today (Q4 2014) this will fail because GitLab automatically symlinks github.com/gitlabhq/gitlab-shell/tree/… into every repository it manages.
So if you make this change, every repository you modify will try to push. Not to mention possible conflicts when upgrading gitlab-shell, and that the current script is a ruby script, not bash (and you should not remove it!).
You could correct this by reading the current directory name and ensuring bijection between that and the remote, but I recommend people to stay far far away from those things.

See (and vote for) the feedback "Automatic push to remote mirror repo after push to GitLab Repo".

Update July 2016: I see this kind of feature added for GitLab EE (Enterprise Edition): MR 249

Add ability to enter remote push URL under Mirror Repository settings
Add implementation code to push to remote repository
Add new background worker
Show latest update date and sync errors if they exist.
Sync remote mirror every hour.

Note that the recent Remote Mirror Repository (issue 17940) can be tricky:

I'm currently trying to shift main development of the Open Source npm modules of my company Lossless GmbH (https://www.npmjs.com/~lossless) from GitHub.com to GitLab.com.
I'm importing all the repos from GitHub; however, when I try to switch off Mirror Repository and switch on Remote Mirror Repository with the original GitHub URL, I get an error saying:

Remote mirrors url is already in use

Here is one of the repos this fails with: https://gitlab.com/pushrocks/npmts

Edited 2 months ago: turns out, it just requires multiple steps:

disable the Mirror Repository
press save
remove the URL
press save
then add the Remote Mirror
GitLab
14,288,288
66
I have just installed GitLab. I created a project called project-x. I created a few users and assigned them to the project. Now I tried to clone:

    git clone git@my-gitlab-server:project-x.git

It prompted me for a password. What password should I use?
Not strictly related to the current scenario, but sometimes when you are prompted for a password, it is because you added the wrong origin format (HTTPS instead of SSH).

The HTTP(S) protocol is commonly used for public repos with a strong username+password
SSH authentication is more common for internal projects where you can authenticate with an SSH key file and a simple passphrase
GitLab users are more likely to use the SSH protocol

View your remote info with:

    git remote -v

If you see an HTTP(S) address, this is the command to change it to SSH:

    git remote set-url origin git@my_domain.com:example-project.git
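If the SSH remote still prompts for a password, the public key is probably not registered with the server yet; a sketch of the usual setup (the email and host are placeholders):

    ssh-keygen -t ed25519 -C "you@example.com"
    cat ~/.ssh/id_ed25519.pub     # paste into GitLab: Settings > SSH Keys
    ssh -T git@my_domain.com      # should greet you without asking for a password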
GitLab
15,495,843
66
I worked on github and integrated it to sourcetree (MAC version) for one of my project. I would like to use sourcetree for GITLAB. But I am not able to add remote of gitlab to source tree. In Repository settings, Only I can see host type as "unknown", "bitbucket", "github" & "stash". I used unknown but it won't help me. Sourcetree Version 2.0.4 (2.0.4)
This worked for me.

Step 1: Click on + New Repository > Clone from URL.

Step 2: In Source URL, provide the URL with your user name added, for example:

GitLab repo URL: http://git.zaid-labs.info/zaid/iosapp.git
GitLab user name: zaid.pathan

So the final URL should be:

    http://zaid.pathan@git.zaid-labs.info/zaid/iosapp.git

Note: zaid.pathan@ added before git.

Step 3: Enjoy cloning :).
GitLab
27,570,370
64
Recently I've found three concepts of a workflow in Git:

GitFlow
GitHub Flow
GitLab Flow

I've read the nice articles about them, but I don't understand GitLab Flow very well. Briefly:

GitFlow

We have a master branch as a production branch. Also we have a develop branch where every developer merges his features. Sometimes we create a release branch to deploy our features in production. If we have a bug in the release branch, we fix it and pull the changes into the develop branch. If we have a critical bug in production, we create a new hotfix branch, fix the bug, and merge the branch with the production (master) and develop branches. This approach works well if we seldom publish the results of our work (maybe once every 2 weeks).

GitHub Flow

We have a master branch as a production branch. And we (as developers) can create branches for adding new features or fixing bugs and merge them with the production (master) branch. It sounds very simple. This approach fits extreme programming, where the production branch is deployed several times a day.

GitLab Flow

I've seen new terms like a pre-production branch, a production branch, a release (stable) branch, and a staging environment, a pre-production environment, a production environment. What relationships do they have between them?

I understand it this way: if we need to add a new feature, we deploy a pre-production branch from the master branch. When we have finished the feature, we deploy a production branch from the pre-production branch. The pre-production branch is the intermediate stage. And then the master branch pulls all changes from the production branch.

The approach is good if we want to see each separate feature; we just check out the branch we need to look at. But if we need to show our work, we create a release branch with a tag as late as possible. If later we fix bugs in the master branch, we need to cherry-pick them to the last release branch. At the end we have the release branch with tags that can help us to move between versions.

Is my understanding correct? What is the difference between pull and cherry-pick?
It has been a year now since this post was raised, but considering future readers and the fact that things have changed a bit, I think it's worth refreshing.

GitHub Flow, as originally depicted by Scott Chacon in 2011, assumed that each change, once reviewed on a feature branch and merged into master, should be deployed to production immediately. While this worked at the time and conformed to the only GitHub Flow rule (anything in the master branch is deployable), it was quickly discovered that in order to keep master a true record of known working production code, the actual deployment to production should happen from the feature branch before merging it into master. Deploying from the feature branch makes perfect sense, as in the case of any issue production can be instantaneously reverted by deploying master to it. Please take a look at a short visual introduction to GitHub Flow.

GitLab Flow is kind of an extension to GitHub Flow, accompanied by a set of guidelines and best practices that aim to further standardize the process. Aside from promoting a ready-to-deploy master branch and feature branches (same as GitHub Flow), it introduces three other kinds of branches:

Production branch
Environment branches: uat, pre-production, production
Release branches: 1-5-stable, 1-6-stable

I believe the above names and examples are self-descriptive, thus I won't elaborate further.
GitLab
39,917,843
64
I've just started using GitLab, and have created a set of issues in order to keep an overview of what needs to be done for my application. I was wondering whether it is possible to create a branch from these issues, such that the branch and issue are linked, similar to Jira and Stash from Atlassian?
If you create a branch with the name <issue-number>-issue-description and push that branch to GitLab, it will automatically be linked to that issue. For instance, if you have an issue with id 654 and you create a branch named 654-some-feature and push it to GitLab, it will be linked to issue 654.

GitLab will even ask you if you want to create a merge request, and will automatically add Closes #654 to the merge request description, which will close issue 654 when the merge request is accepted.

Also, if you go to a given issue page on GitLab, you should see a New Branch button, which will automatically create a branch with a name of the form <issue-number>-issue-description.
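Using the issue id from the example above, the manual flow is just:

    git checkout -b 654-some-feature
    # ...commit your work...
    git push -u origin 654-some-feature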
GitLab
43,295,151
64
I'm trying to connect to a GitLab repository using the I/O preview of Android Studio. Does anyone know how to do this/if it is possible yet?
How to add an Android Studio project to GitLab

This answer shows how to do it using the Android Studio GUI.

1. Create a new project on GitLab

Choose the + button on the menu bar. Add a project name and then click "Create project".

This will give you a new project address. Choose the https version. It will look something like this:

    https://gitlab.com/MyUserName/my-project.git

2. Create a Git repository in Android Studio

In the Android Studio menu go to VCS > Import into Version Control > Create Git Repository...

Select the root directory of your project. (It will be automatically selected if you already have it highlighted in the Project view. Otherwise you will have to browse up to find it.)

3. Add remote

Go to VCS > Git > Remotes.... Then paste in the https address you got from GitLab in step one. You may need to log in with your GitLab username and password.

4. Add, commit, and push your files

Make sure you have the top level of the project selected. If you are in the Android view you can switch it to the Project view.

Add: Go to VCS > Git > Add.
Commit: After adding, do VCS > Git > Commit Directory. (You will need to write a commit message, something like initial commit.)
Push: Finally, go to VCS > Git > Push.

Finished! You should be able to view your files in GitLab now.

See also

There is a plugin that would probably streamline the process. Check it out here.
GitLab
16,677,931
63
I have accounts in GitHub and GitLab. I generated and added an RSA key to my account in GitLab, but now I need to work with GitHub on a second project. I know that GitLab and GitHub both use git. Please tell me if it's possible to use GitHub and GitLab on one machine?
Yes you can: you can share the same SSH key between them both, or create a new one per git server.

Create an SSH config file

When you have multiple identity files (in your case, one for GitLab and one for GitHub), create an SSH config file to store your various identities. The format for the alias entries used in this example is:

Host alias
    HostName github.com
    IdentityFile ~/.ssh/identity

To create a config file for two identities (workid and personalid), you would do the following:

Open a terminal window.
Edit the ~/.ssh/config file. If you don't have a config file, create one.
Add an alias for each identity combination, for example:

Host github
    HostName github.com
    IdentityFile ~/.ssh/github

Host gitlab
    HostName gitlab.com
    IdentityFile ~/.ssh/gitlab

This way you can have as many accounts as you wish, each one with a different SSH key attached to it.
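With aliases like those above, you then clone using the alias in place of the real host name and SSH picks the matching key; the repo paths here are placeholders:

    git clone git@github:some-user/some-repo.git
    git clone git@gitlab:some-user/some-repo.git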
GitLab
40,549,348
63
After I switched from HTTPS to SSH for my repo, I received this error when pushing to origin master:

    ssh: Could not resolve hostname git: Name or service not known
    fatal: Could not read from remote repository.

    Please make sure you have the correct access rights and the repository exists.

I also added my SSH key in GitLab. What should I do?
I was getting the same error. This error typically occurs when there is an issue with the SSH configuration or when the Git remote repository's hostname cannot be resolved. You can check your remote by running:

    git remote -v

There are a few reasons why this error occurs:

Incorrect hostname
SSH configuration
Network connection
DNS resolution
Firewall or proxy

Typically, you can solve this by changing the [remote "origin"] URL value in the .gitconfig or .config/git/config file.

Previously:

    https://git@gitlab.com:userName/repo.git
    OR
    ssh://git@gitlab.com:userName/repo.git

New:

    git@gitlab.com:userName/repo.git
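Instead of editing the config file by hand, the same fix can be applied with a single command (the URL is a placeholder):

    git remote set-url origin git@gitlab.com:userName/repo.git
    git remote -v    # verify the new URL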
GitLab
53,129,706
63
I use GitLab Community Edition 9.1.3 2e4e522 on Windows 10 Pro x64, with a Git client.

Error:

    Cloning into 'project_name'...
    remote: HTTP Basic: Access denied
    fatal: Authentication failed for 'http://my_user_name@gitlab.example.com/my_user_name/project_name.git/'

How do I fix it?
Open CMD (Run as administrator) and type this command:

    git config --system --unset credential.helper

Then enter the new password for the Git remote server when prompted.
GitLab
44,514,728
61