Dataset schema:
question: string (lengths 11 to 28.2k)
answer: string (lengths 26 to 27.7k)
tag: string (130 classes)
question_id: int64 (935 to 78.4M)
score: int64 (10 to 5.49k)
I have celery beat and celery (four workers) to do some processing steps in bulk. One of those tasks is roughly along the lines of, "for each X that hasn't had a Y created, create a Y." The task is run periodically at a semi-rapid rate (10sec). The task completes very quickly. There are other tasks going on as well.

I've run into the issue multiple times in which the beat tasks apparently become backlogged, and so the same task (from different beat times) gets executed simultaneously, causing incorrectly duplicated work. It also appears that the tasks are executed out-of-order.

Is it possible to limit celery beat to ensure only one outstanding instance of a task at a time? Is setting something like rate_limit=5 on the task the "correct" way of doing this? Is it possible to ensure that beat tasks are executed in-order, e.g. instead of dispatching a task, beat adds it to a task chain? What's the best way of handling this, short of making those tasks themselves execute atomically and be safe to execute concurrently? That was not a restriction I would have expected of beat tasks…

The task itself is defined naïvely:

@periodic_task(run_every=timedelta(seconds=10))
def add_y_to_xs():
    # Do things in a database
    return

Here's an actual (cleaned) log:

[00:00.000] foocorp.tasks.add_y_to_xs sent. id->#1
[00:00.001] Received task: foocorp.tasks.add_y_to_xs[#1]
[00:10.009] foocorp.tasks.add_y_to_xs sent. id->#2
[00:20.024] foocorp.tasks.add_y_to_xs sent. id->#3
[00:26.747] Received task: foocorp.tasks.add_y_to_xs[#2]
[00:26.748] TaskPool: Apply #2
[00:26.752] Received task: foocorp.tasks.add_y_to_xs[#3]
[00:26.769] Task accepted: foocorp.tasks.add_y_to_xs[#2] pid:26528
[00:26.775] Task foocorp.tasks.add_y_to_xs[#2] succeeded in 0.0197986490093s: None
[00:26.806] TaskPool: Apply #1
[00:26.836] TaskPool: Apply #3
[01:30.020] Task accepted: foocorp.tasks.add_y_to_xs[#1] pid:26526
[01:30.053] Task accepted: foocorp.tasks.add_y_to_xs[#3] pid:26529
[01:30.055] foocorp.tasks.add_y_to_xs[#1]: Adding Y for X id #9725
[01:30.070] foocorp.tasks.add_y_to_xs[#3]: Adding Y for X id #9725
[01:30.074] Task foocorp.tasks.add_y_to_xs[#1] succeeded in 0.0594762689434s: None
[01:30.087] Task foocorp.tasks.add_y_to_xs[#3] succeeded in 0.0352867960464s: None

We're currently using Celery 3.1.4 with RabbitMQ as the transport.

EDIT: Dan, here's what I ended up using:

from sqlalchemy import func, select, text
from sqlalchemy.exc import DBAPIError
from contextlib import contextmanager


def _psql_advisory_lock_blocking(conn, lock_id, shared, timeout):
    lock_fn = (func.pg_advisory_xact_lock_shared
               if shared
               else func.pg_advisory_xact_lock)
    if timeout:
        conn.execute(text('SET statement_timeout TO :timeout'), timeout=timeout)
    try:
        conn.execute(select([lock_fn(lock_id)]))
    except DBAPIError:
        return False
    return True


def _psql_advisory_lock_nonblocking(conn, lock_id, shared):
    lock_fn = (func.pg_try_advisory_xact_lock_shared
               if shared
               else func.pg_try_advisory_xact_lock)
    return conn.execute(select([lock_fn(lock_id)])).scalar()


class DatabaseLockFailed(Exception):
    pass


@contextmanager
def db_lock(engine, name, shared=False, block=True, timeout=None):
    """
    Context manager which acquires a PSQL advisory transaction lock with a
    specified name.
    """
    lock_id = hash(name)
    with engine.begin() as conn, conn.begin():
        if block:
            locked = _psql_advisory_lock_blocking(conn, lock_id, shared, timeout)
        else:
            locked = _psql_advisory_lock_nonblocking(conn, lock_id, shared)
        if not locked:
            raise DatabaseLockFailed()
        yield

And the celery task decorator (used only for periodic tasks):

from functools import wraps
from preo.extensions import db


def locked(name=None, block=True, timeout='1s'):
    """
    Using a PostgreSQL advisory transaction lock, only runs this task if the
    lock is available. Otherwise logs a message and returns `None`.
    """
    def with_task(fn):
        lock_id = name or 'celery:{}.{}'.format(fn.__module__, fn.__name__)

        @wraps(fn)
        def f(*args, **kwargs):
            try:
                with db_lock(db.engine, name=lock_id, block=block, timeout=timeout):
                    return fn(*args, **kwargs)
            except DatabaseLockFailed:
                logger.error('Failed to get lock.')
                return None
        return f
    return with_task
from functools import wraps

from celery import shared_task


def skip_if_running(f):
    task_name = f'{f.__module__}.{f.__name__}'

    @wraps(f)
    def wrapped(self, *args, **kwargs):
        workers = self.app.control.inspect().active()
        for worker, tasks in workers.items():
            for task in tasks:
                if (task_name == task['name'] and
                        tuple(args) == tuple(task['args']) and
                        kwargs == task['kwargs'] and
                        self.request.id != task['id']):
                    print(f'task {task_name} ({args}, {kwargs}) is running on {worker}, skipping')
                    return None
        return f(self, *args, **kwargs)
    return wrapped


@shared_task(bind=True)
@skip_if_running
def test_single_task(self):
    pass


test_single_task.delay()
RabbitMQ
20,894,771
13
I have below versions of celery and rabbitmq installed - celery 3.1.6 rabbitmq 3.1.1 I can post a task to the default queue from PHP - //client.php <?php require 'celery-php/celery.php'; $c = new Celery('localhost', 'guest', 'guest', '/'); $result = $c->PostTask('tasks.add', array(2,2)); My worker module is in python - # tasks.py from celery import Celery celery = Celery('tasks', broker='amqp://guest:guest@localhost:5672//') @celery.task(queue='demo', name='add') def add(x, y): return x + y I run the celery worker and client like this - # terminal window 1 $ celery -A tasks worker --loglevel=info # terminal window 2 $ php -f client.php This works. I see below output in terminal window 1 : Received task: tasks.add[php_52b1759141a8b3.43107845] Task tasks.add[php_52b1759141a8b3.43107845] succeeded in 0.000701383920386s: 4 But I want to have different queues. For a demonstration, let's say I only want one queue called demo. So I run my celery worker like this - $ celery -A tasks worker --loglevel=info -Q demo But it's not working. The task is not getting executed. I guess it's probably because PHP code is posting the task on default queue : celery (apparently not on demo queue). How do I post my task on a particular queue in PHP? Please help.
By default, your PHP client for Celery takes the queue name as "celery". In order to change the queue to publish to, you must specify the queue name while instantiating a connection to Celery. So, if you are starting your Celery worker with "-Q demo" option, then your connection to Celery in PHP should be - $exchange = 'demo'; $binding = 'demo'; $c = new Celery('localhost', 'guest', 'guest', '/', $exchange, $binding); Note: With -Q option, the exchange and routing_key value is same as queue_name. Please try this and share the results. About exchange and binding : With analogy to Telephone Services, Exchange is like "Telephone Operator", whose only job is to "Direct the call to YOU" with the help of routing_key. Binding is then "Your Telephone Number", which acts as routing_key to your telephone. Note : This process where exchange is redirecting the incoming message to the queue based on binding (routing_key), is a DIRECT exchange type. AMQP has few other types of exchanges, which you can read in AMQP documentation. You can also refer this Celery page
RabbitMQ
20,655,367
13
I only created the last 2 queue names that show in the RabbitMQ management web UI. The rest of the queue list is full of hash-like queue names, which I don't know about: 1- Who created them? (I know it is celery, but which process, task, etc.) 2- Why are they created, and what are they created for? I can notice that when the number of pushed messages increases, the number of those hash-like queues increases.
When using celery, RabbitMQ is used as the default result backend, and also to store errors of failing tasks (those that raised exceptions). Every new task creates a new queue on the server; with thousands of tasks the broker may be overloaded with queues and this will affect performance in negative ways. Each queue in Rabbit will be a separate Erlang process, so if you're planning to keep many results simultaneously you may have to increase the Erlang process limit, and the maximum number of file descriptors your OS allows. Old results will not be cleaned automatically, so we have to tell Rabbit to do so. The conf line below dictates the time to live of these temp queues (the default is 1 day): CELERY_AMQP_TASK_RESULT_EXPIRES = number of seconds. Or, we can change the result backend entirely so results are not kept in Rabbit at all (the default is CELERY_BACKEND = "amqp"; point this at a database or cache backend instead). We may also ignore results: CELERY_IGNORE_RESULT = True. Also, when ignoring the result, we can still keep the errors stored for later usage, which means one more queue for the failing tasks: CELERY_STORE_ERRORS_EVEN_IF_IGNORED = True. I will not mark this question as answered, waiting for a better answer. References: this SO link, the Celery documentation and the RabbitMQ documentation.
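To make those settings concrete, a minimal celeryconfig sketch might look like this (the values are illustrative only, not taken from the original answer):

# celeryconfig.py -- sketch combining the options discussed above;
# setting names follow the old (pre-4.0) upper-case style used here.

# Let the AMQP result queues expire instead of piling up (seconds).
CELERY_AMQP_TASK_RESULT_EXPIRES = 3600

# Or skip storing results entirely if you never read them...
CELERY_IGNORE_RESULT = True
# ...while still keeping tracebacks of failed tasks around.
CELERY_STORE_ERRORS_EVEN_IF_IGNORED = True

# Or move results out of RabbitMQ altogether, e.g. into Redis
# (hypothetical URL -- any supported result backend works here).
# CELERY_RESULT_BACKEND = 'redis://localhost:6379/0'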
RabbitMQ
20,442,580
13
Is there a way to receive multiple message using a single synchronous call ? When I know that there are N messages( N could be a small value less than 10) in the queue, then I should be able to do something like channel.basic_get(String queue, boolean autoAck , int numberofMsg ). I don't want to make multiple requests to the server .
RabbitMQ's basic.get doesn't support multiple messages unfortunately as seen in the docs. The preferred method to retrieve multiple messages is to use basic.consume which will push the messages to the client avoiding multiple round trips. acks are asynchronous so your client won't be waiting for the server to respond. basic.consume also has the benefit of allowing RabbitMQ to redeliver the message if the client disconnects, something that basic.get cannot do. This can be turned off as well setting no-ack to true. Setting basic.qos prefetch-count will set the number of messages to push to the client at any time. If there isn't a message waiting on the client side (which would return immediately) client libraries tend to block with an optional timeout.
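For illustration, here is roughly what that looks like with the Python pika client (pika 1.x-style API; the queue name is an assumption, and the same basic.qos / basic.consume calls exist in other client libraries):

import pika  # assumed client library, purely for illustration

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

# Ask the broker to push at most 10 unacknowledged messages at a time.
channel.basic_qos(prefetch_count=10)

def handle(ch, method, properties, body):
    print('got', body)
    ch.basic_ack(delivery_tag=method.delivery_tag)  # ack after processing

# basic.consume: the broker pushes messages, no per-message round trip.
channel.basic_consume(queue='my_queue', on_message_callback=handle)
channel.start_consuming()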
RabbitMQ
17,005,515
13
Perhaps I'm being silly asking the question but I need to wrap my head around the basic concepts before I do further work. I am processing a few thousand RSS feeds, using multiple Celery worker nodes and a RabbitMQ node as the broker. The URL of each feed is being written as a message in the queue. A worker just reads the URL from the queue and starts processing it. I have to ensure that a single RSS feed does not get processed by two workers at the same time. The article Ensuring a task is only executed one at a time suggests a Memcached-based solution for locking the feed when it's being processed. But what I'm trying to understand is why I need to use Memcached (or something else) to ensure that a message on a RabbitMQ queue is not consumed by multiple workers at the same time. Is there some configuration change in RabbitMQ (or Celery) that I can do to achieve this goal?
A single MQ message will certainly not be seen by multiple consumers in a normal working setup. You'll have to do some work for the cases involving failing/crashing workers, read up on auto-acks and message rejections, but the basic case is sound. I don't see a synchronized queue (read: MQ) in the article you've linked, so (as far as I can tell) they're using the lock mechanism (read: memcache) to synchronize, as an alternative. And I can think of a few problems which wouldn't be there in a proper MQ setup.
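To put the ack behaviour in Celery terms (a sketch, not part of the linked article; the broker URL and the processing function are made up): by default a worker acknowledges the message just before running the task, so a crash mid-task can lose it, while acks_late moves the ack to after completion so RabbitMQ redelivers on a crash:

from celery import Celery

app = Celery('feeds', broker='amqp://guest:guest@localhost//')  # assumed broker URL

@app.task(acks_late=True)
def process_feed(url):
    # With acks_late the message is acknowledged only after this function returns,
    # so if the worker dies mid-task RabbitMQ redelivers the message to another worker.
    fetch_and_parse(url)  # hypothetical processing function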
RabbitMQ
12,153,451
13
I have been trying to share a connection between threads and have channels open only on thread creation, but after researching a bit more, I think I also want to try connection pooling. How can I do this with rabbitmq? Or is this a general idea I can apply more broadly? My goal is to spawn X threads and then have them not have to open new channels (which requires round robin establishment between client and server). Since the threads are their own class, I'm not sure if I need to put the pool in the class itself that spawns the threads, or where it should go. I also have multiple types of threads I would want to share these connections between (not just a single one). Is that possible? Just to give you a general idea, here's how connections/channels are established in rabbitmq:

ConnectionFactory factory = new ConnectionFactory();
factory.setHost("localhost");
Connection connection = factory.newConnection();
Channel channel = connection.createChannel(); //I want to share several of these between threads
All you need is a pool of Channel objects that your threads can pull from. Apache Commons actually already has a generic ObjectPool you can use. The javadoc for the interface can be found here: http://commons.apache.org/pool/api-1.6/org/apache/commons/pool/ObjectPool.html The javadoc for one of their pre-built implementations can be found here: http://commons.apache.org/pool/api-1.6/org/apache/commons/pool/impl/GenericObjectPool.html A tutorial for using it can be found here: http://commons.apache.org/pool/examples.html If this is over-complicated for your simple needs, really all you need to do is write a class that manages a set of Channel objects, allowing threads to check them out and return them to the pool, with the appropriate synchronization to prevent two threads from getting hold of the same Channel.
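That last paragraph describes the pattern generically; purely as an illustration (sketched in Python rather than Java, assuming a client whose connection object exposes a channel() call, and leaving your library's own thread-safety caveats to you), a check-out/check-in pool can be as small as:

import queue

class ChannelPool:
    """Minimal check-out/check-in pool; blocks when all channels are in use."""
    def __init__(self, connection, size=10):
        self._pool = queue.Queue()
        for _ in range(size):
            self._pool.put(connection.channel())  # pre-open a fixed number of channels

    def acquire(self):
        return self._pool.get()   # blocks until another thread returns a channel

    def release(self, channel):
        self._pool.put(channel)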
RabbitMQ
10,365,867
13
Basically my consumers are producers as well. We get an initial dataset and it gets sent to the queue. A consumer takes an item and processes it; from that point there are 3 possibilities:

Data is good and gets put in a 'good' queue for storage
Data is bad and is discarded
Data is not good (yet) or bad (yet), so it is broken down into smaller parts and sent back to the queue for further processing.

My problem is with step 3. Because the queue grows very quickly at first, it's possible that a piece of data is broken down into a part that's duplicated in the queue, and the consumers continue to process it and end up in an infinite loop. I think the way to prevent this is to prevent duplicates from going into the queue. I can't do this on the client side because over the course of an hour I may have many cores dealing with billions of data points (to have each client scan it before submitting would slow me down too much). I think this needs to be done on the server side but, like I mentioned, the data is quite large and I don't know how to efficiently ensure no duplicates. I might be asking the impossible but thought I'd give it a shot. Any ideas would be greatly appreciated.
I think even if you could fix the issue of not sending duplicates to the queue, you will sooner or later hit this issue: From RabbitMQ Documentation: "Recovery from failure: in the event that a client is disconnected from the broker owing to failure of the node to which the client was connected, if the client was a publishing client, it's possible for the broker to have accepted and passed on messages from the client without the client having received confirmation for them; and likewise on the consuming side it's possible for the client to have issued acknowledgements for messages and have no idea whether or not those acknowledgements made it to the broker and were processed before the failure occurred. In short, you still need to make sure your consuming clients can identify and deal with duplicate messages." Basically, it looks like this, you send a request to rabbitmq, rabbitmq replies with an ACK but for 1 reason or another, your consumer or producer does not receive this ACK. Rabbitmq has no way of knowing the ack was not received and your producer will end up re-sending the message, having never received an ack. It is a pain to handle duplicate messages especially in apps where messaging is used as a kind of RPC, but it looks like this is unavoidable when using this kind of messaging architecture.
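One common way to "identify and deal with duplicate messages" is to make the consumer idempotent by recording processed message ids in a shared, atomic store. A sketch (Redis, the key format and the TTL are assumptions, not part of the original answer):

import redis  # assumed store for processed-message ids; any shared atomic store works

r = redis.StrictRedis()

def handle_message(message_id, payload):
    # SET with nx=True is atomic: only the first consumer to see this id gets a truthy result.
    # The TTL keeps the dedup keys from growing forever (tune it to your redelivery window).
    if not r.set('processed:%s' % message_id, 1, nx=True, ex=86400):
        return  # duplicate delivery -- drop it
    process(payload)  # hypothetical business logic; should itself be safe to run once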
RabbitMQ
10,155,114
13
I'm using celery with django and rabbitmq to create a message queue. I also have a worker, which is originating from a different machine. In a django view I'm starting a process like this: def processtask(request, name): args = ["ls", "-l"] MyTask.delay(args) return HttpResponse("Task set to execute.") My task is configured like this: class MyTask(Task): def run(self, args): p = subprocess.Popen(args, stdout=subprocess.PIPE, stderr=subprocess.PIPE) (out, err) = p.communicate() return out My question now is how can a broker (my django project) now receive the output from the "ls -l" command that the worker executed on his computer. I guess the best thing would be for worker to call a function in broker whenever it's ready to send the output from the executed command. I would like to receive the output from worker asynchronously, then update the webpage with the output, but that's for another time. For now I would only like to receive the output from worker. Update Right now I've added a HTTP GET request that is triggered at the end of task notifying the web application that the task is done - I'm also sending the task_id in the http GET. The http GET method calls django view, which creates AsyncResult and gets the result, but the problem is that when calling result.get() I get the following error: /usr/lib64/python2.6/site-packages/django_celery-2.5.1-py2.6.egg/djcelery/managers.py:178: TxIsolationWarning: Polling results with transaction isolation level repeatable-read within the same transaction may give outdated results. Be sure to commit the transaction for each poll iteration. "Polling results with transaction isolation level" Any ideas why? I'm not using database, because I'm using rabbitmq with AMQP. Update. I would very much like to use third option, which seems like the best option - for small and big return values. My whole task looks like this: class MyTask(Task): def __call__(self, *args, **kwargs): return self.run(*args, **kwargs) def after_return(self, status, retval, task_id, args, kwargs, einfo): if self.webhost is not None: conn = httplib.HTTPConnection(self.webhost, self.webport) conn.request("HEAD", "/vuln/task/output/"+task_id) def run(self, args, webhost=None, webport=None): self.webhost = webhost self.webport = webport r = "This is a basic result string used for code clarity" return r So I've overridden the after_return function, which should also release the lock on my task, since the task's run() function already returned a value. In the HEAD request I'm basically calling a django function, which calls AsyncResult on task_id, which should provide with the result of the task. I've used arbitrary result for testing purposes in my case, since it's only for testing. I would like to know why the above code doesn't work. I can use on_success, but I don't think it will make a difference - or will it?
If you look here you will find the following: Django-celery uses MySQL to keep track of all tasks/results, rabbit-mq is used as a communication bus basically. What really is happening is that you are trying to fetch the ASyncResult of the worker while the task is still running (the task invoked an HTTP request to your server and since it didn't return yet, the db locking session from the worker is still active and the result row is still locked). When Django tries to read the task result (its state and the actual return value of the run function) it finds the row locked and issues you a warning. There are a few ways to go about resolving this: Set up another celery task to reap the result and chain it to your processing task. That way original task will finish, release the lock on db and the new one will acquire it, read the result in django and do whatever you need it to do. Look up celery docs on this. Don't bother at all, and simply do a POST to Django with full processing result attached as a payload, rather than trying to fetch it via db. Override on_success in your task class and POST your notification request to Django then at which point the lock should be released on the db table. Notice that you need to store the whole processing result (no matter how big it is) in the return of the run method (possibly pickled). You didn't mention how big the result can be so it might make sense to actually just do scenario #2 above (which is what I would do). Alternatively I would go with #3. Also don't forget to handle on_failure method as well in your task.
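A rough sketch of option 3, reusing the task and URL path from the question (the host and port are placeholder values, and the exact point at which the db lock is released follows the reasoning above):

import subprocess
import httplib  # Python 2, as in the question

from celery import Task


class MyTask(Task):
    def run(self, args):
        p = subprocess.Popen(args, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        out, err = p.communicate()
        return out

    def on_success(self, retval, task_id, args, kwargs):
        # Called once the task has returned successfully; notify the web app,
        # which can then read the result via AsyncResult(task_id).
        conn = httplib.HTTPConnection('webhost.example', 8000)  # placeholder host/port
        conn.request('HEAD', '/vuln/task/output/' + task_id)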
RabbitMQ
9,576,160
13
Our company has a Python based web site and some Python based worker nodes which communicate via Django/Celery and RabbitMQ. I have a Java based application which needs to submit tasks to the Celery based workers. I can send jobs to RabbitMQ from Java just fine, but the Celery based workers are never picking up the jobs. From looking at the packet captures of both types of job submissions, there are differences, but I cannot fathom how to account for them because a lot of it is binary that I cannot find documentation about decoding. Does anyone here have any reference or experience with having Java/RabbitMQ and Celery working together?
I found the solution. The Java library for RabbitMQ refers to exchanges/queues/routekeys. In Celery, the queue name is actually mapping to the exchange referred to in the Java library. By default, the queue for Celery is simply "celery". If your Django settings define a queue called "myqueue" using the following syntax: CELERY_ROUTES = { 'mypackage.myclass.runworker' : {'queue':'myqueue'}, } Then the Java based code needs to do something like the following: ConnectionFactory factory = new ConnectionFactory(); Connection connection = null ; try { connection = factory.newConnection(mqHost, mqPort); } catch (IOException ioe) { log.error("Unable to create new MQ connection from factory.", ioe) ; } Channel channel = null ; try { channel = connection.createChannel(); } catch (IOException ioe) { log.error("Unable to create new channel for MQ connection.", ioe) ; } try { channel.queueDeclare("celery", false, false, false, true, null); } catch (IOException ioe) { log.error("Unable to declare queue for MQ channel.", ioe) ; } try { channel.exchangeDeclare("myqueue", "direct") ; } catch (IOException ioe) { log.error("Unable to declare exchange for MQ channel.", ioe) ; } try { channel.queueBind("celery", "myqueue", "myqueue") ; } catch (IOException ioe) { log.error("Unable to bind queue for channel.", ioe) ; } // Generate the message body as a string here. try { channel.basicPublish(mqExchange, mqRouteKey, new AMQP.BasicProperties("application/json", "ASCII", null, null, null, null, null, null, null, null, null, "guest", null, null), messageBody.getBytes("ASCII")); } catch (IOException ioe) { log.error("IOException encountered while trying to publish task via MQ.", ioe) ; } It turns out that it is just a difference in terminology.
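The "// Generate the message body as a string here." part is left open in the answer; for Celery 3.x (task protocol v1) the JSON body is shaped roughly like the sketch below (written in Python only for readability, with illustrative values -- the Java side just needs to produce the equivalent JSON string and publish it with content type application/json):

import json
import uuid

# Rough shape of a Celery protocol-v1 task message body; further optional
# fields (eta, expires, ...) exist but can usually be omitted or left null.
message_body = json.dumps({
    'id': str(uuid.uuid4()),                  # unique task id
    'task': 'mypackage.myclass.runworker',    # registered task name from the example above
    'args': [],
    'kwargs': {},
    'retries': 0,
})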
RabbitMQ
6,933,833
13
I am trying to get RabbitMQ with Celery and Django going on an EC2 instance to do some pretty basic background processing. I'm running rabbitmq-server 2.5.0 on a large EC2 instance. I downloaded and installed the test client per the instructions here (at the very bottom of the page). I have been just letting the test script go and am getting the expected output: recving rate: 2350 msg/s, min/avg/max latency: 588078478/588352905/588588968 microseconds recving rate: 1844 msg/s, min/avg/max latency: 588589350/588845737/589195341 microseconds recving rate: 1562 msg/s, min/avg/max latency: 589182735/589571192/589959071 microseconds recving rate: 2080 msg/s, min/avg/max latency: 589959557/590284302/590679611 microseconds The problem is that it is consuming an incredible amount of CPU: PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 668 rabbitmq 20 0 618m 506m 2340 S 166 6.8 2:31.53 beam.smp 1301 ubuntu 20 0 2142m 90m 9128 S 17 1.2 0:24.75 java I was testing on a micro instance earlier and it was completely consuming all resources on the instance. Is this to be expected? Am I doing something wrong? Thanks. Edit: The real reason for this post was that celerybeat seemed to run okay for awhile and then suddenly consume all resources on the system. I installed the rabbitmq management tools and have been investigating how the queues are created from celery and from the rabbitmq test suite. It seems to me that celery is orphaning these queues and they are not going away. Here is the queue as generated by the test suite. One queue is created and all the messages go into it and come out: Celerybeat creates a new queue for every time it runs the task: It sets the auto-delete parameter to true, but I'm not entirely sure when these queues will get deleted. They seem to just slowly build up and eat resources. Does anyone have an idea? Thanks.
Ok, I figured it out. Here's the relevant piece of documentation: http://readthedocs.org/docs/celery/latest/userguide/tasks.html#amqp-result-backend Old results will not be cleaned automatically, so you must make sure to consume the results or else the number of queues will eventually go out of control. If you’re running RabbitMQ 2.1.1 or higher you can take advantage of the x-expires argument to queues, which will expire queues after a certain time limit after they are unused. The queue expiry can be set (in seconds) by the CELERY_AMQP_TASK_RESULT_EXPIRES setting (not enabled by default).
RabbitMQ
6,362,829
13
I am currently running a docker-compose stack for basic integration tests with a protractor test runner, a nodejs server serving a web page and a wildfly server serving a java backend. The stack is run from a dind (docker in docker) container in my build server (concourse ci). But it appears that the containers do not terminate when the protractor tests finish. So since the containers for wildfly and nodejs are still running, the build task never finishes... How can I make the compose end in success or failure when the tests are finished?

# Test runner
test-runner:
  image: "${RUNNER_IMG}"
  privileged: true
  links:
    - client
    - server
  volumes:
    - /Users/me/frontend_test/client-devops:/protractor/project
    - /dev/shm:/dev/shm
  entrypoint:
    - /entrypoint.sh
    - --baseUrl=http://client:9000/dist/
    - /protractor/conf-dev.js
    - --suite=remember

# Client deployment
client:
  image: "${CLIENT_IMG}"
  links:
    - server

# Server deployment
server:
  image: "${SERVER_IMG}"
You can use these docker-compose parameters to achieve that: --abort-on-container-exit Stops all containers if any container was stopped. --exit-code-from Return the exit code of the selected service container. For example, having this docker-compose.yml: version: '2.3' services: elasticsearch: ... service-api: ... service-test: ... depends_on: - elasticsearch - service-api The following command ensures that elasticsearch and service-api go down after service-test is finished, and returns the exit code from the service-test container: docker-compose -f docker-compose.yml up \ --abort-on-container-exit \ --exit-code-from service-test
Concourse
40,907,954
73
It's not clear for me from the documentation if it's even possible to pass one job's output to the another job (not from task to task, but from job to job). I don't know if conceptually I'm doing the right thing, maybe it should be modeled differently in Concourse, but what I'm trying to achieve is having pipeline for Java project split into several granular jobs, which can be executed in parallel, and triggered independently if I need to re-run some job. How I see the pipeline: First job: pulls the code from github repo builds the project with maven deploys artifacts to the maven repository (mvn deploy) updates SNAPSHOT versions of the Maven project submodules copies artifacts (jar files) to the output directory (output of the task) Second job: picks up jar's from the output builds docker containers for all of them (in parallel) Pipeline goes on I was unable to pass the output from job 1 to job 2. Also, I am curious if any changes I introduce to the original git repo resource will be present in the next job (from job 1 to job 2). So the questions are: What is a proper way to pass build state from job to job (I know, jobs might get scheduled on different nodes, and definitely in different containers)? Is it necessary to store the state in a resource (say, S3/git)? Is the Concourse stateless by design (in this context)? Where's the best place to get more info? I've tried the manual, it's just not that detailed. What I've found so far: outputs are not passed from job to job Any changes to the resource (put to the github repo) are fetched in the next job, but changes in working copy are not Minimal example (it fails if commented lines are uncommented with error: missing inputs: gist-upd, gist-out): --- resources: - name: gist type: git source: uri: "[email protected]:snippets/foo/bar.git" branch: master private_key: {{private_git_key}} jobs: - name: update plan: - get: gist trigger: true - task: update-gist config: platform: linux image_resource: type: docker-image source: {repository: concourse/bosh-cli} inputs: - name: gist outputs: - name: gist-upd - name: gist-out run: path: sh args: - -exc - | git config --global user.email "[email protected]" git config --global user.name "Concourse" git clone gist gist-upd cd gist-upd echo `date` > test git commit -am "upd" cd ../gist echo "foo" > test cd ../gist-out echo "out" > test - put: gist params: {repository: gist-upd} - name: fetch-updated plan: - get: gist passed: [update] trigger: true - task: check-gist config: platform: linux image_resource: type: docker-image source: {repository: alpine} inputs: - name: gist #- name: gist-upd #- name: gist-out run: path: sh args: - -exc - | ls -l gist cat gist/test #ls -l gist-upd #cat gist-upd/test #ls -l gist-out #cat gist-out/test
To answer your questions one by one. All build state needs to be passed from job to job in the form of a resource which must be stored on some sort of external store. It is necessary to store on some sort of external store. Each resource type handles this upload and download itself, so for your specific case I would check out this maven custom resource type, which seems to do what you want it to. Yes, this statelessness is the defining trait behind concourse. The only stateful element in concourse is a resource, which must be strictly versioned and stored on an external data store. When you combine the containerization of tasks with the external store of resources, you get the guaranteed reproducibility that concourse provides. Each version of a resource is going to be backed up on some sort of data store, and so even if the data center that your ci runs on is to completely fall down, you can still have strict reproducibility of each of your ci builds. In order to get more info I would recommend doing a tutorial of some kind to get your hands dirty and build a pipeline yourself. Stark and wayne have a tutorial that could be useful. In order to help understand resources there is also a resources tutorial, which might be helpful for you specifically. Also, to get to your specific error, the reason that you are seeing missing inputs is because concourse will look for directories (made by resource gets) named each of those inputs. So you would need to get resource instances named gist-upd and gist-out prior to to starting the task.
Concourse
42,634,934
20
When I configure the following pipeline: resources: - name: my-image-src type: git source: uri: https://github.com/concourse/static-golang - name: my-image type: docker-image source: repository: concourse/static-golang username: {{username}} password: {{password}} jobs: - name: "my-job" plan: - get: my-image-src - put: my-image After building and pushing the image to the Docker registry, it subsequently fetches the image. This can take some time and ultimately doesn't really add anything to the build. Is there a way to disable it?
Every put implies a get of the version that was created. There are a few reasons for this: The primary reason for this is so that the newly created resource can be used by later steps in the build plan. Without the get there is no way to introduce "new" resources during a build's execution, as they're all resolved to a particular version to fetch when the build starts. There are some side-benefits to doing this as well. For one, it immediately warms the cache on one worker. So it's at least not totally worthless; later jobs won't have to fetch it. It also acts as validation that the put actually had the desired effect. In this particular case, as it's the last step in the build plan, the primary reason doesn't really apply. But we didn't bother optimizing it away since in most cases the side benefits make it worth not having the secondary question arise ("why do only SOME put steps imply a get?"). It also cannot be disabled as we resist adding so many knobs that you'll want to turn one day and then have to go back and turn back off once you actually do need it back to the default. Docs: https://concourse-ci.org/put-step.html
Concourse
38,964,299
13
I created a repository on hub.docker.com and now want to push my image to the Dockerhub using my credentials. I am wondering whether I have to use my username and password or whether I can create some kind of access token to push the docker image. What I want to do is using the docker-image resource from Concourse to push an image to Dockerhub. Therefore I have to configure credentials like: type: docker-image source: email: {{docker-hub-email}} username: {{docker-hub-username}} password: {{docker-hub-password}} repository: {{docker-hub-image-dummy-resource}} and I don't want to use my Dockerhub password for that.
In short, you can't. There are some solutions that may appeal to you, but it may ease your mind first to know there's a structural reason for this: Resources are configured via their source and params, which are defined at the pipeline level (in your yml file). Any authentication information has to be defined there, because there's no way to get information from an earlier step in your build into the get step (it has no inputs). Since bearer tokens usually time out after "not that long" (i.e. hours or days) which is also true of DockerHub tokens, the concourse instance needs to be able to fetch a new token from the authentication service every time the build runs if necessary. This requires some form of persistent auth to be stored in the concourse server anyway, and currently Dockerhub does not support CI access tokens a la github. All that is to say, you will need to provide a username and password to Concourse one way or another. If you're worried about security, there are some steps you can most likely take to reduce risk: you can use --load-vars-from to protect your credentials from being saved in your pipeline, storing them elsewhere (LastPass, local file, etc). you might be able to create a user on Dockerhub that only has access to the particular repo(s) you want to push, a "CI bot user" if you will.
Concourse
41,834,554
13
My goal is to be able to build, package and test a java project that is built with maven using a concourse build pipeline. The setup as such is in place, and everything runs fine, but build times are much too long due to poor maven download rates from our nexus. My build job yml file uses the following resource as base for the maven build:

# ...
image_resource:
  type: docker-image
  source:
    repository: maven
    tag: '3.3-jdk-8'
# ...

I am aware of the fact that having a "blank slate" for every build is somewhat built into concourse by design. Now my question is: what would be a good way to cache a local maven repository (say, with at least some basic stuff inside like Spring and its dependencies)? The following options come to my mind:

Using a docker image that has the dependencies built in already
Creating a resource that provides me the needed dependencies

As far as I can see, option 1) will not make the download size for the build smaller, as concourse seems to not cache docker images used as base for the build jobs (or am I wrong here?). Before I move on, I would like to make sure that following option 2) gives me any advantage - does concourse cache docker images used as resources? I might be missing something, as I am relatively new to concourse. So forgive me if I force you to state the obvious here. :)
Assuming that your Nexus is local, I would look into why there are poor download rates from that, as using something like Nexus and Artifactory locally is currently the easiest way to do caching. They will manage the lifetime of your cached dependencies, so that you don't have dependencies being cached longer that they are needed and new dependencies are adding as they are used. If you want to share a cache between tasks of a job, then output the cached dependencies folder (.m2 folder for maven) of a task and use that as an input of another task. For reference, see following example: --- jobs: - name: create-and-consume public: true plan: - task: make-a-file config: platform: linux run: # ... outputs: # ensure that relative .m2 folder is used: https://stackoverflow.com/a/16649787/5088458 - name: .m2 - task: consume-the-file config: platform: linux inputs: - name: .m2 run: # ... If you want to share a cache between all executions of a single task in a job, tasks can also be configured to cache folders. If you want to cached between jobs then you could: build a docker image with the cached folder, but then you'll need to manage that when dependencies are updated, although that may be possible via another pipeline. create a resource that manages the cache for you. For example look at gradle-cache-resource or npm-cache-resource, although they require that the input is from git-resource. I think Concourse CI does cache docker images used for tasks, but can also have them as resources of your pipeline and then use the image parameter of the task to pass that resource. You can see what is cached and for how long using the volumes command of fly.
Concourse
40,736,296
10
During Concourse build of Java application I want to: Checkout git master branch Run mvn package If it was successful: increment the SNAPSHOT version in Maven's pom.xml commit it back to the master branch with [skip ci] commit message prefix push local branch to the upstream I haven't found the recommended way of dealing with git except git-resource, which can only get or put resources, but not produce new commits.
You should make your commit inside of a task. You do that by making a task which has your repo as an input, and declares a modified repo as an output. After cloning from input to output, change into your output folder, make your changes and commit. Here's an example pipeline.yml: resources: - name: some-repo type: git source: uri: [email protected]:myorg/project jobs: - name: commit-and-push plan: - get: some-repo - task: commit config: platform: linux image_resource: type: docker-image source: repository: concourse/buildroot tag: git inputs: - name: some-repo outputs: - name: some-modified-repo run: path: /bin/bash args: - -c - | set -eux git clone some-repo some-modified-repo cd some-modified-repo echo "new line" >> some-file.txt git add . git config --global user.name "YOUR NAME" git config --global user.email "YOUR EMAIL ADDRESS" git commit -m "Changed some-file.txt" - put: some-repo params: {repository: some-modified-repo}
Concourse
42,607,033
10
I get the following error output while running the Maven release plugin prepare step i.e. mvn release:prepare --batch-mode -DreleaseVersion=1.1.2 -DdevelopmentVersion=1.2.0-SNAPSHOT -Dtag=v1.1.2 -X from an Atlassian Bamboo plan. However doing the same in the command line works fine. The full error stack is below. Any ideas how can this be solved? [ERROR] Failed to execute goal org.apache.maven.plugins:maven-release-plugin:2.4.2:prepare (default-cli) on project hpcmom: An error is occurred in the checkin process: Exception while executing SCM command. Detecting the current branch failed: fatal: ref HEAD is not a symbolic ref -> [Help 1] org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal org.apache.maven.plugins:maven-release-plugin:2.4.2:prepare (default-cli) on project hpcmom: An error is occurred in the checkin process: Exception while executing SCM command. at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:217) at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153) at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145) at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84) at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59) at org.apache.maven.lifecycle.internal.LifecycleStarter.singleThreadedBuild(LifecycleStarter.java:183) at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:161) at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:320) at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:156) at org.apache.maven.cli.MavenCli.execute(MavenCli.java:537) at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:196) at org.apache.maven.cli.MavenCli.main(MavenCli.java:141) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:290) at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:230) at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:409) at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:352) Caused by: org.apache.maven.plugin.MojoExecutionException: An error is occurred in the checkin process: Exception while executing SCM command. at org.apache.maven.plugins.release.PrepareReleaseMojo.prepareRelease(PrepareReleaseMojo.java:281) at org.apache.maven.plugins.release.PrepareReleaseMojo.execute(PrepareReleaseMojo.java:232) at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:101) at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:209) ... 19 more Caused by: org.apache.maven.shared.release.ReleaseExecutionException: An error is occurred in the checkin process: Exception while executing SCM command. 
at org.apache.maven.shared.release.phase.AbstractScmCommitPhase.checkin(AbstractScmCommitPhase.java:160) at org.apache.maven.shared.release.phase.AbstractScmCommitPhase.performCheckins(AbstractScmCommitPhase.java:145) at org.apache.maven.shared.release.phase.ScmCommitPreparationPhase.runLogic(ScmCommitPreparationPhase.java:76) at org.apache.maven.shared.release.phase.AbstractScmCommitPhase.execute(AbstractScmCommitPhase.java:78) at org.apache.maven.shared.release.DefaultReleaseManager.prepare(DefaultReleaseManager.java:234) at org.apache.maven.shared.release.DefaultReleaseManager.prepare(DefaultReleaseManager.java:169) at org.apache.maven.shared.release.DefaultReleaseManager.prepare(DefaultReleaseManager.java:146) at org.apache.maven.shared.release.DefaultReleaseManager.prepare(DefaultReleaseManager.java:107) at org.apache.maven.plugins.release.PrepareReleaseMojo.prepareRelease(PrepareReleaseMojo.java:277) ... 22 more Caused by: org.apache.maven.scm.ScmException: Exception while executing SCM command. at org.apache.maven.scm.command.AbstractCommand.execute(AbstractCommand.java:63) at org.apache.maven.scm.provider.git.AbstractGitScmProvider.executeCommand(AbstractGitScmProvider.java:291) at org.apache.maven.scm.provider.git.AbstractGitScmProvider.checkin(AbstractGitScmProvider.java:217) at org.apache.maven.scm.provider.AbstractScmProvider.checkIn(AbstractScmProvider.java:410) at org.apache.maven.shared.release.phase.AbstractScmCommitPhase.checkin(AbstractScmCommitPhase.java:156) ... 30 more Caused by: org.apache.maven.scm.ScmException: Detecting the current branch failed: fatal: ref HEAD is not a symbolic ref at org.apache.maven.scm.provider.git.gitexe.command.branch.GitBranchCommand.getCurrentBranch(GitBranchCommand.java:147) at org.apache.maven.scm.provider.git.gitexe.command.checkin.GitCheckInCommand.createPushCommandLine(GitCheckInCommand.java:192) at org.apache.maven.scm.provider.git.gitexe.command.checkin.GitCheckInCommand.executeCheckInCommand(GitCheckInCommand.java:132) at org.apache.maven.scm.command.checkin.AbstractCheckInCommand.executeCommand(AbstractCheckInCommand.java:54) at org.apache.maven.scm.command.AbstractCommand.execute(AbstractCommand.java:59) ... 34 more [ERROR] [ERROR] [ERROR] For more information about the errors and possible solutions, please read the following articles: [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException simple 02-Dec-2013 17:18:09 Failing task since return code of [/opt/dev/apache-maven/3.0.5//bin/mvn -Djava.io.tmpdir=/opt/atlassian/bamboo/5.2.1/temp/HPCMOM-RELEASE-JOB1 release:prepare --batch-mode -DignoreSnapshots=false -DreleaseVersion=1.1.2 -DdevelopmentVersion=1.2.0-SNAPSHOT -Dtag=v1.1.2 -X] was 1 while expected 0 UPDATE: Doing git ls-remote . on a local workspace clone produces: azg@olympus:~/code/hpcmom$ git ls-remote . 
7894eea08a0afecb99515d1339623be63a7539d4 HEAD 7894eea08a0afecb99515d1339623be63a7539d4 refs/heads/master 7894eea08a0afecb99515d1339623be63a7539d4 refs/remotes/origin/HEAD 7894eea08a0afecb99515d1339623be63a7539d4 refs/remotes/origin/master 6a7095b86cccdfd4b28e4dea633d0930809ae9ac refs/tags/v1.0 1a53462b1ecf0abfea8245016304cda9c78b420d refs/tags/v1.0^{} 5113a7cbcf35c47b680a9c36e15e5fa01ef1d2e6 refs/tags/v1.1 79a3073ecabe65d3c8051520f8007d9e49a65a06 refs/tags/v1.1^{} a00249209597ea1214d80ee38f228c40db7022c2 refs/tags/v1.1.0 e892bce8d25d87368ab557fee0d30810bef7e31e refs/tags/v1.1.0^{} b491a312c39088533cb069e4ab1ae8a00d1f6bfe refs/tags/v1.1.2 a3f7618dada7ed60d8190426152ffd90e0e40a86 refs/tags/v1.1.2^{} Doing git ls-remote . on the Bamboo clone produces: azg@olympus:/var/atlassian/application-data/bamboo/xml-data/build-dir/HPCMOM-RELEASE-JOB1$ git ls-remote . 2422ce066ac35dae3c54f1435ef8dae5008a9a14 HEAD 57c08d581c0fd9e788049733fbdc9c22b9a6ae00 refs/heads/master 57c08d581c0fd9e788049733fbdc9c22b9a6ae00 refs/remotes/origin/HEAD 57c08d581c0fd9e788049733fbdc9c22b9a6ae00 refs/remotes/origin/master 7539f9700d78a1b766fca7ed9f409914f1ea9d08 refs/tags/vnull 6bfa8c3fdb1f8f56a385035f01b1b77b6e88da8b refs/tags/vnull^{} and this is very weird why is the local development clone output so different from the Bamboo one?
I ran into the same error on Jenkins in combination with maven release plugin, we fixed it by going to Additional behaviours, Check out to specific local branch and enter 'master' I realise this is not a solution but it might give you some direction in where to look.
Bamboo
20,351,051
131
Anyone out there have experience with both Hudson and Bamboo? Any thoughts on the relative strengths and weaknesses of these products? Okay, since folks keep mentioning other CI products I'll open this up further. Here are my general problem. I want to setup a CI system for a new project. This project will likely have Java components (WARs and JARs), some python modules, and possibly even a .NET component. So I want a CI server that can: Handle multiple languages, Deploy artifacts to servers (i.e. deploy the war if all the unit tests pass.) I would also like something that integrated with a decent code coverage tool. Good looking reports are nice, but not essential. Multiple notification mechanisms when things go wrong. I'm not worried about hosting. I'll either run it on a local server or on an Amazon instance. Also, this maybe pie in the sky, but is there something that can also build iPhone apps?
Disclaimer: I work on Bamboo and therefore I am not going to comment on features of other CI products since my experience with them is limited. To answer your specific requirements: Handle multiple languages Bamboo has out of the box support for multiple languages. Customers use it with Java, .Net, PHP, JavaScript etc. That being said, most build servers are generic enough to at least execute a script that can kick off your build process. Deploy artifacts to servers (i.e. deploy the war if all the unit tests pass.) Bamboo 2.7 supports Build Stages, which allow you to break up your build into a Unit Test Stage and a Deploy Stage. Only if the Unit Test Stage succeeds, the build will move on to the Deploy Stage. In Bamboo 3.0 we will support Artifact sharing between stages, allowing you to create an Artifact (e.g. your war) in the first Stage and use this Artifact in the following Stages for testing and deployment. I would also like something that integrated with a decent code coverage tool. Bamboo comes with support for Clover and also has a plugin available for Cobertura. Good looking reports are nice, but not essential. Bamboo has a whole bunch of reports which are nice, but not essential :) Multiple notification mechanisms when things go wrong. Bamboo can notify you via email, RSS, IM, an IDE plugin or a nice wallboard that is visible to the whole team. I'm not worried about hosting. I'll either run it on a local server or on an Amazon instance. From experience, it is generally cheaper to host your own CI server. But if you need to scale, Bamboo makes it easy to distribute your builds to additional local agents or scale out to Amazon via Elastic agents. Also, this maybe pie in the sky, but is there something that can also build IPhone apps? Similar to the answer to your first question, most CI servers will be able to build iPhone apps in some ways. It's possible that there is a little more scripting required though. Price: Bamboo is not free(apart from our free starter license)/libre/open-source, but you will get Bamboo's source-code if you purchase a commercial license and full support. Compared to the cost of computing power and potential maintenance required for a CI server, the cost of a Bamboo license is rather small. Hope this helps.
Bamboo
4,806,331
117
I have a webapp build plan running on a Continuous Integration system (Atlassian Bamboo 2.5). I need to incorporate QUnit-based JavaScript unit tests into the build plan so that on each build, the Javascript tests would be run and Bamboo would interpret the test results. Preferably I would like to be able to make the build process "standalone" so that no connections to external servers would be required. Good ideas on how to accomplish this? The CI system running the build process is on an Ubuntu Linux server.
As I managed to come up with a solution myself, I thought it would be a good idea to share it. The approach might not be flawless, but it's the first one that seemed to work. Feel free to post improvements and suggestions. What I did in a nutshell: Launch an instance of Xvfb, a virtual framebuffer Using JsTestDriver: launch an instance of Firefox into the virtual framebuffer (headlessly) capture the Firefox instance and run the test suite generate JUnit-compliant test results .XML Use Bamboo to inspect the results file to pass or fail the build I will next go through the more detailed phases. This is what my my directory structure ended up looking like: lib/ JsTestDriver.jar test/ qunit/ equiv.js QUnitAdapter.js jsTestDriver.conf run_js_tests.sh tests.js test-reports/ build.xml On the build server: Install Xvfb (apt-get install Xvfb) Install Firefox (apt-get install firefox) Into your application to be built: Install JsTestDriver: http://code.google.com/p/js-test-driver/ add the QUnit adapters equiv.js and QUnitAdapter.js configure JsTestDriver (jsTestDriver.conf): server: http://localhost:4224 load: # Load QUnit adapters (may be omitted if QUnit is not used) - qunit/equiv.js - qunit/QUnitAdapter.js # Tests themselves (you'll want to add more files) - tests.js Create a script file for running the unit tests and generating test results (example in Bash, run_js_tests.sh): #!/bin/bash # directory to write output XML (if this doesn't exist, the results will not be generated!) OUTPUT_DIR="../test-reports" mkdir $OUTPUT_DIR XVFB=`which Xvfb` if [ "$?" -eq 1 ]; then echo "Xvfb not found." exit 1 fi FIREFOX=`which firefox` if [ "$?" -eq 1 ]; then echo "Firefox not found." exit 1 fi $XVFB :99 -ac & # launch virtual framebuffer into the background PID_XVFB="$!" # take the process ID export DISPLAY=:99 # set display to use that of the xvfb # run the tests java -jar ../lib/JsTestDriver.jar --config jsTestDriver.conf --port 4224 --browser $FIREFOX --tests all --testOutput $OUTPUT_DIR kill $PID_XVFB # shut down xvfb (firefox will shut down cleanly by JsTestDriver) echo "Done." Create an Ant target that calls the script: <target name="test"> <exec executable="cmd" osfamily="windows"> <!-- This might contain something different in a Windows environment --> </exec> <exec executable="/bin/bash" dir="test" osfamily="unix"> <arg value="run_js_tests.sh" /> </exec> </target> Finally, tell the Bamboo build plan to both invoke the test target and look for JUnit test results. Here the default "**/test-reports/*.xml" will do fine.
Bamboo
2,070,499
59
At my company, we currently use Atlassian Bamboo for our continuous integration tool. We currently use Java for all of our projects, so it works great. However, we are considering using a Django + Python for one of our new applications. I was wondering if it is possible to use Bamboo for this. First off, let me say that I have a low level of familiarity with Bamboo, as I've only ever used it, not configured it (other than simple changes like changing the svn checkout directory for a build). Obviously there isn't a lot of point in just running a build (since Python projects don't really build), but I'd like to be able to use Bamboo for running the test suite, as well as use bamboo to deploy the latest code to our various test environments the way we do with our Java projects. Does Bamboo support this type of thing with a Python project?
Bamboo essentially just runs a shell script, so this could just as easily be: ./manage.py test as it typically is: mvn clean install or: ant compile You may have to massage the output of the Django test runner into traditional JUnit XML output, so that Bamboo can give you pretty graphs on how many tests passed. Look at this post about using xmlrunner.py to get Python working with Hudson. Also take a look at NoseXUnit.
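For that "massage the output" step, one option is the unittest-xml-reporting package (xmlrunner); a minimal sketch, assuming that package is installed and your tests are discoverable as plain unittest test cases (the path is hypothetical):

# run_tests.py -- sketch using the xmlrunner package (unittest-xml-reporting)
import unittest
import xmlrunner

if __name__ == '__main__':
    suite = unittest.defaultTestLoader.discover('myapp/tests')  # hypothetical test path
    # Writes JUnit-style XML files into test-reports/, which Bamboo's
    # JUnit result parser can then pick up.
    xmlrunner.XMLTestRunner(output='test-reports').run(suite)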
Bamboo
1,419,629
37
I have SVN configured in Linux at a different location and I need to check in a shell script to SVN with the executable attribute ON from Windows. I use Bamboo as CI, which checks out sources from SVN and does the periodic build. It throws an error that the shell script is not executable. (Bamboo runs as root.) What is the best way to set the executable permission? I don't use any standalone SVN client; I use Eclipse to check in and check out. If a standalone SVN client is the only option, how do I find a version that is compatible with the SVN plugin that I use in Eclipse? I had a compatibility problem earlier: when I checked in a file from Tortoise, I couldn't check out that file from Eclipse.
svn propset svn:executable "*" someScript The syntax is propset key value so svn:executable is the key and "*" is the value someScript is the filename
Bamboo
6,874,085
33
I'm trying to tag the git repo of a ruby gem in a Bamboo build. I thought doing something like this in ruby would do the job `git tag v#{current_version}` `git push --tags` But the problem is that the repo does not have the origin remote. Somehow Bamboo is getting rid of the origin. Any clue?
Yes, if you navigate to the job workspace, you will find that Bamboo does not do a straightforward git clone "under the hood", and the the remote is set to an internal file path. Fortunately, Bamboo does store the original repository URL as ${bamboo.repository.git.repositoryUrl}, so all you need to do is set a remote pointing back at the original and push to there. This is what I've been using with both basic Git repositories and Stash, creating a tag based on the build number. git tag -f -a ${bamboo.buildNumber} -m "${bamboo.planName} build number ${bamboo.buildNumber} passed automated acceptance testing." ${bamboo.planRepository.revision} git remote add central ${bamboo.planRepository.repositoryUrl} git push central ${bamboo.buildNumber} git ls-remote --exit-code --tags central ${bamboo.buildNumber} The final line is simply to cause the task to fail if the newly created tag cannot be read back. EDIT: Do not be tempted to use the variable ${bamboo.repository.git.repositoryUrl}, as this will not necessarily point to the repo checked out in your job. Also bear in mind that if you're checking out from multiple sources, ${bamboo.planRepository.repositoryUrl} points to the first repo in your "Source Code Checkout" task. The more specific URLs are referenced via: ${bamboo.planRepository.1.repositoryUrl} ${bamboo.planRepository.2.repositoryUrl} ... and so on.
Bamboo
27,371,629
28
Is it possible for TeamCity to integrate with JIRA the way Bamboo integrates with JIRA? I couldn't find any documentation on the JetBrains website that talks about issue-tracker integration. FYI: I heard that TeamCity is coming out with their own tracker called Charisma. Is that true?
TeamCity 5 EAP has support for showing issues from Jira on the tabs of your build (see the EAP Release Notes). You still don't have the integration in Jira itself, which is what I would prefer.
Bamboo
754,195
27
I'm in the process of writing a Bamboo plugin, the bulk of which is complete. The plugin works by starting a remote process off via a POST request to a server and then polling the same server until it gets a message saying the process has completed or an error occurred - this part works. I would like to add some extra logic where I can notify this server if the user cancels the job; however, I'm unsure how to go about this. I have played about with creating another task which runs as a final task; however, I don't know how to detect if any of the previous tasks failed or were cancelled. I have tried using List<TaskResult> taskResults = taskContext.getBuildContext().getBuildResult().getTaskResults(); to get a list of the previous tasks' results, however this always appears to return 0 Task Results. I have also tried using a Post-Build Completed Action Module, however I'm unsure how I would add this to a job and the documentation on this has me slightly mystified. If anyone could point me in the right direction I would appreciate it.
From reading what you have written, I think that using an event listener is definitely the correct way to approach your problem. Below I have provided an image of my own creation that seems to describe what you have constructed and that shows where it might be best to place the event listener. Essentially, your client will issue a cancel notification to the server via its network controller mechanism. The server will then receive that cancellation notification via its network controller, which is already connected to the client via some network protocol (I am assuming TCP). When that cancellation notification from the client network controller reaches the network controller of the server, the event listener in the server's network controller will then signal the server build manager to terminate the remote build. [Diagram of your program] I hope this helps.
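To make that flow concrete, here is a small, purely illustrative Python sketch of the event-listener wiring being described; every class and method name below is hypothetical and not part of the Bamboo or Stash APIs:

# Illustrative only -- hypothetical names, not an Atlassian API.
class ServerNetworkController:
    def __init__(self):
        self._cancel_listeners = []  # callbacks interested in cancellation events

    def add_cancel_listener(self, callback):
        self._cancel_listeners.append(callback)

    def on_message(self, message):
        # Called whenever the client's network controller sends us something (e.g. over TCP).
        if message == "CANCEL_BUILD":
            for listener in self._cancel_listeners:
                listener()

class BuildManager:
    def terminate_remote_build(self):
        print("terminating remote build")

# Wiring: the build manager subscribes to the network controller's cancel events,
# so a cancel notification from the client is turned into a terminate call.
controller = ServerNetworkController()
manager = BuildManager()
controller.add_cancel_listener(manager.terminate_remote_build)
controller.on_message("CANCEL_BUILD")  # -> "terminating remote build"

The point is only that the server's network controller owns the listener list and the build manager subscribes to it, which is where the event listener in the diagram sits.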
Bamboo
18,514,084
22
Is there a way to display all test names in Bamboo, instead of only the names of the failed/fixed tests? When I browse to the tests section of the result page of the build, only the total number of tests is displayed, e.g. '30 tests in total'. What I actually want to see is a list of all tests performed.
Go to your build results, choose the test tab and click on the small arrow on the left of the screen. A navigator will show up. Click on the job for which you want to see the test results and a list with all tests will show up in the right part of the screen.
Bamboo
9,211,999
16
I want to know if it is possible to configure something similar to what is accomplished by Jenkins+GitHub with the request builder plugin. Specifically, triggering a build on Bamboo when a pull request is created on Stash, using the pull request branch for the build. Bonus points for triggering new builds when the pull request is updated, or if some command is given through comments (like with the Jenkins plugin). I can't see a way to do that, and I can't even see a way to create a plugin that will make it possible. Maybe the Merge-checks trigger for plugins would work, but it looks like something triggered when someone goes to look at the pull request, not something triggered when a pull request arrives.
We solved this by writing a Stash plugin, which has now been open sourced and is available on github. The trick is to annotate methods with com.atlassian.event.api.EventListener, which will get Stash to call them when a corresponding event happens. Then just listen to events such as: com.atlassian.stash.event.pull.PullRequestCommentAddedEvent com.atlassian.stash.event.pull.PullRequestOpenedEvent com.atlassian.stash.event.pull.PullRequestReopenedEvent com.atlassian.stash.event.pull.PullRequestRescopedEvent Aside from that, just follow Atlassian guidelines to create plugins. The open sourced plugin can serve as a reference.
Bamboo
17,581,061
16
We're running Atlassian's Bamboo build server 4.1.2 on a Windows machine. I've created a batch file that is executed within a Task. The script is just referenced in a .bat file and not inline in the task (e.g. createimage.bat). Within createimage.bat, I'd like to use Bamboo's PLAN variables. The usual variable syntax is not working, meaning the variables are not replaced. A line in the script could be, for example: goq-image-${bamboo.INTERNALVERSION}-SB${bamboo.buildNumber} Any ideas?
You are using the internal Bamboo variables syntax, but the Script Task passes those into the operating system's script environment and they need to be referenced with the respective syntax accordingly, e.g. (please note the underscores between terms): Unix - goq-image-$bamboo_INTERNALVERSION-SB$bamboo_buildNumber Windows - goq-image-%bamboo_INTERNALVERSION%-SB%bamboo_buildNumber% Surprisingly, I'm unable to find an official reference for the Windows variation; there's only Using variables in bash right now: Bamboo variables are exported as bash shell variables. All full stops (periods) are converted to underscores. For example, the variable bamboo.my.variable is $bamboo_my_variable in bash. This is related to File Script tasks (not Inline Script tasks). However, I figured out the Windows syntax from Atlassian's documentation at some point, and tested and used it as documented in Bamboo Variable Substitution/Definition: these variables are also available as environment variables in the Script Task, for example, albeit named slightly differently, e.g. $bamboo_custom_aws_cfn_stack_StringWithRegex (Unix) or %bamboo_custom_aws_cfn_stack_StringWithRegex% (Windows)
Bamboo
12,196,936
15
My setup: git-repository on an Atlassian Stash-server and Atlassian Bamboo. I'm using Maven 3.1.1 with the release-plugin 2.3.2. The plan in Bamboo looks like this: Check out from git-repository perform a clean install perform release:prepare and release:perform with ignoreSnapshots=true and resume=false Everything up to the last step works fine, but Maven states, that it can't tag the release, because the tag already exists. Here is the log: build 26-Nov-2013 10:36:37 [ERROR] Failed to execute goal org.apache.maven.plugins:maven-release-plugin:2.3.2:prepare (default-cli) on project [PROJECT-NAME]: Unable to tag SCM build 26-Nov-2013 10:36:37 [ERROR] Provider message: build 26-Nov-2013 10:36:37 [ERROR] The git-tag command failed. build 26-Nov-2013 10:36:37 [ERROR] Command output: build 26-Nov-2013 10:36:37 [ERROR] fatal: tag '[PROJECT-NAME]-6.2.2' already exists Well, obviously the tag already exists, no big deal. However, this is what git tag looks like for my repository: bash:~/git/repositories/PROJECT-NAME$ git tag [PROJECT-NAME]-5.2.5 [PROJECT-NAME]-5.3.0 [PROJECT-NAME]-5.3.1 [PROJECT-NAME]-5.4.0 [PROJECT-NAME]-5.5.0 [PROJECT-NAME]-5.5.1 [PROJECT-NAME]-5.5.2 [PROJECT-NAME]-5.5.3 [PROJECT-NAME]-5.5.4 [PROJECT-NAME]-5.6.0 [PROJECT-NAME]-5.6.1 [PROJECT-NAME]-5.6.2 [PROJECT-NAME]-5.6.3 [PROJECT-NAME]-5.6.4 [PROJECT-NAME]-5.6.5 [PROJECT-NAME]-5.6.6 [PROJECT-NAME]-6.0.0 [PROJECT-NAME]-6.0.1 [PROJECT-NAME]-6.0.2 [PROJECT-NAME]-6.1.0 [PROJECT-NAME]-6.1.1 [PROJECT-NAME]-6.1.2 [PROJECT-NAME]-6.2.0 [PROJECT-NAME]-6.2.1 The git-repository is cloned via svn2git from an svn-repository. I've tried multiple times reimporting the repository and deleting and re-cloning it on the stash-server. Yet the tag 6.2.2 seems to exist somewhere in the depths for Maven. What's going on here? Update: I just tried removing ALL tags from the repository. Same result. Changing the version from 6.2.2 to 6.2.3 showed positive results. Another update: It seems to have something to do with the name of the repository. Creating a new repository with the same name but adding -2 at the end helped.
mvn release:clean before release:prepare is what worked for me
Bamboo
20,213,557
14
Our CI server fails to restore our NuGet packages when attempting to build a project. It thinks they are already installed. Here are the logs: build 16-Apr-2015 12:56:38 C:\build-dir\IOP-IOL-JOB1>nuget restore IOHandlerLibrary.sln -NoCache build 16-Apr-2015 12:56:39 All packages listed in packages.config are already installed. What causes NuGet to believe that the packages are installed? Is it something in the solution or in the project file?
I had the same issue. When I ran nuget restore for sln: > nuget restore MySolution.sln MSBuild auto-detection: using msbuild version '14.0' from 'C:\Program Files (x86)\MSBuild\14.0\bin'. All packages listed in packages.config are already installed. When I ran the restore command individually for each project in solution: > nuget restore MySolution.Common\packages.config -PackagesDirectory .\packages Restoring NuGet package Microsoft.Azure.KeyVault.WebKey.3.0.0. Restoring NuGet package Microsoft.Rest.ClientRuntime.2.3.13. Restoring NuGet package Microsoft.Azure.KeyVault.3.0.0. Restoring NuGet package Microsoft.Rest.ClientRuntime.Azure.3.3.15. Restoring NuGet package NLog.4.5.9. Restoring NuGet package Autofac.4.8.1. Restoring NuGet package Microsoft.WindowsAzure.ConfigurationManager.3.2.3. Restoring NuGet package System.IO.4.3.0. .... All references are back, and the solution builds correctly after this. Seems like a Nuget bug.
Bamboo
29,684,996
14
I've been looking at TFS, TeamCity, Jenkins and Bamboo and to be honest, none of them were convincing. I want Good reporting Good Git support Gated/delayed check-in/commit Integration with Visual Studio and/or Atlassian products The solution shouldn't require regular developers to use command line or terminal (Git Extensions FTW) TFS is a mess to configure and work with in general, it doesn't support Git obviously, but it has gated check-ins (although it seems to unnecessarily check out the whole project every time and so it is slow?). Also really lacking in the reporting department. TeamCity has really bad gated check-in support when it comes to Git, otherwise it's my favorite. Supports a lot of stuff out of the box. The reporting in Jenkins is bad (historical trends and so on), it seems to have more bugs than the others, and the plugin quality can be scary. On the other hand it's free and versatile. How is the support for Git and gated check-ins? Bamboo obviously has great Atlassian integration, but no support for gated check-ins. :( Any advice?
@arex1337 All the answers provided here have their merits. Experience tells us no project/organization is ever happy with a single vendor for all their needs. What you will probably end up with is a base CI tool with a mix of plugins/additions from other vendors who have their own USPs. As an example: Jenkins as a base tool. @Aura and @sti have already mentioned all the good things; while we can agree the plugin development is a little uncontrolled, there are still a lot of them out there which provide excellent quality. The main thing being the community is active, really agile (they have 1 release per week normally) and any problems you might have are easily solved. An additional benefit is easy plugin development, so if push comes to shove, you can write your own. @Mark O'Connor is bang on with the SONAR suggestion. It is one of the best you can get in terms of reporting, with cool reports. And @Thomas has cleared the air about gated commits. In favor of Jenkins: Good reporting - You got it with SONAR+Jenkins Good Git support - Jenkins gives that Gated/delayed check-in/commit - Jenkins Gerrit plugin Integration with Visual Studio and/or Atlassian products - The Jenkins wiki itself runs on Atlassian. Here is a list of some integrations already there: Clover, Crowd, Confluence, JIRA: Plugin1 Plugin2 Plugin3 Shouldn't require regular developers to use CLI - Jenkins doesn't Now you may replace Jenkins with Bamboo in the above example and might come close to what you want. But as of now it seems your best bet is Jenkins. TFS and TeamCity: they are not yet in the league of Jenkins and Bamboo.
Bamboo
12,155,401
13
Many of my project builds utilize the same stages, jobs and tasks over and over again. Is there any way to define a "template" plan and use it to make other templated plans from? I'm not talking about cloning, because with cloning, you are then able to make independent changes to all the clones. What I want is a way to template, say, 10 different plans, and then if I want to add a new job/task to all of them, I would only need to change the template and that would ripple out into all the plans utilizing the template. Is this possible, and if so, how?
That isn't currently possible, unfortunately: A fairly old feature request for plan templates to reuse across projects (BAM-907) has been resolved as Fixed due to the introduction of plan branches in Bamboo 4.0 (see Using plan branches for details): Plan Branches are a Bamboo Plan configuration that represent a branch in your version control system. They inherit all of the configuration defined by the parent Plan, except that instead of building against the repository's main line, they build against a specified branch. It is also worth noting that only users with edit access to the Plan can create Plan Branches that inherit from that plan. While plan branches are indeed a killer simplification for typical Git workflows around feature branches and pull requests and might help accordingly, they presumably neither fully cover the original request nor yours - that aspect is meanwhile tracked via Add possibility to create plan templates and choose a template when creating a plan (BAM-11380) and esp. Build and deployment templates (BAM-13600), with the latter featuring a somewhat promising comment from January 2014: Thank you for reporting this issue. We've been thinking about templates a lot over the last few months. When we've got more news to share on this, we will be sure to update this ticket.
Bamboo
23,083,779
13
I just installed nodejs on one of my build servers (Win Server 2008 R2) which hosts a Bamboo remote agent. After completing the installation and doing a reboot I got stuck in the following situation: The remote Bamboo build agent is running as a Windows service with user MyDomain\MyUser. When a build with an inline PowerShell task is executing, it fails with the error (from the build agent log): com.atlassian.utils.process.ProcessNotStartedException: powershell could not be started ... java.io.IOException: Cannot run program "powershell" ... java.io.IOException: CreateProcess error=2, The system cannot find the file specified Logging on to the server as MyDomain\MyUser, I have checked that powershell is in the path: where powershell C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe I have tried to restart the service and reboot the machine multiple times. No luck. The only thing that works is if I execute my scripts as a bat file with an absolute path to powershell - but I do not want that. I have searched for solutions on this, but even though this one seems related: Hudson cannot find powershell after update to powershell 3 - the proposed solutions do not work. What am I missing here?
If you do a default installation of nodejs you will see that it adds nodejs and npm to the path. Sometimes I have seen that the installer adds a user variable named PATH - it might be that the Bamboo agent decides to read the user path without "merging" it with the system path. I think it would be worth a try to give that a look.
Bamboo
30,183,168
12
We use Bamboo CI. There are multiple bamboo local agents and parallel builds across many plans. The build-dir in bamboo-home is many hundreds of gigabytes, and analysis shows that it just continually grows as new feature branches are added. Plans seem to be duplicated in each local agent directory, and also directly in build-dir. Unlike expiring artifacts, Bamboo does not seem to clean this up by itself. For example, if a local agent is removed then the local agents build directory sits there forever taking up a significant amount of space. Plans can be set to clean up at the end of a build, however this impacts problem analysis in the event of needing to do a post-mortem on the build. Due to the directory running out of space I have just added a daily cron task to periodically remove files and directories that haven't been accessed for more than 21 days. When I first ran this manually I reclaimed 300GB from a 600GB partition. I want to know if others have encountered this same issue, and if it is safe to externally clean the build-dir in the long term. Could it impact bamboo builds? Is there some bamboo option that I have missed that would do this for me? Searching on the Atlassian site has not been helpful and yields no answers... what are others doing to tame this space hog?
The cron job has been running for a while now without any issues, and it is keeping the space usage under control. I have reduced the parameter to 15 days. My crontab looks like this: # clean up old files from working directory 0 20 * * * find /<path_to>/bamboo-home/xml-data/build-dir/ -depth -not -path *repositories-cache* -atime +15 -delete # clean up old backups every Sunday 0 21 * * 0 find /<path_to>/bamboo-home/backups -type f -mtime +28 -delete # remove any old logs from install directory after 15 days 0 22 * * * find /<path_to>/bamboo/logs/ -type f -mtime +15 -delete # quick and dirty truncate catalina.out to stop it growing too large (or better still use logrotate) 0 23 * * * cat /dev/null > /<path_to>/bamboo/logs/catalina.out I hope this is useful for others trying to tame Bamboo's disk space usage. The first job is the important one; the last three are just housekeeping. N.B. logrotate is not used on catalina.out due to unique circumstances in my company's outsourced Linux environment. I would generally recommend logrotate if possible rather than my quick and dirty truncate method - see the answer by Jon V.
Bamboo
35,444,951
12
I am new to this continuous integration tool named Bamboo. Could someone point me in the right direction where I can get information about how to set up Bamboo and how to write scripts for automatic deployment to different environments? Thank you in advance.
You will use your Ant script or Maven pom.xml to deploy, and Bamboo will schedule it. You will find a getting-started tutorial here, with a guide that shows you how to install Bamboo (really easy): https://confluence.atlassian.com/bamboo/bamboo-installation-guide-289276785.html
Bamboo
1,403,309
11
I am trying to access Bamboo's variables as environment variables in my build script (PowerShell). For example, this works fine in TeamCity $buildNumber = "$env:BUILD_NUMBER" And I expected this to work in Bamboo $buildNumber = "$env:bamboo_buildNumber"
In the current version of Bamboo (5.x), the following environment variables work for me in a Bash script on an Amazon EC2 Linux client. It should be very similar in PowerShell. ${bamboo.buildKey} -- The job key for the current job, in the form PROJECT-PLAN-JOB, e.g. BAM-MAIN-JOBX ${bamboo.buildResultsUrl} -- The URL of the result in Bamboo once the job has finished executing. ${bamboo.buildNumber} -- The Bamboo build number, e.g. 123 ${bamboo.buildPlanName} -- The Bamboo plan name, e.g. Some Project name - Some plan name You can see the full list of Bamboo build variables on the Atlassian Bamboo build variable documentation page.
Bamboo
15,987,684
11
We are using Bamboo 5.2 for continuous integration. The source plan has several additional branches. Each branch is triggered by commits in the git repo. The deployment project is configured with a separate environment for each branch; deployment happens automatically on a successful build of the source plan. When the default branch is deployed automatically, a new release is created correctly with the naming schema defined in "Release versioning" (we use source plan variables to create the release version). The problem appears when any other branch is deployed automatically - we get a new release with the default version. As Bamboo states: "Releases from branches will default to using the branch name suffixed with the build number of the build result." Is there any possibility to override this approach? The target is to set the release version from the plan's variables (no matter whether they are default plan variables or branch plan variables); the reason is that we have a single plan with several stable branches configured.
In Bamboo 6.1.0 Atlassian has solved the problem! Please see https://jira.atlassian.com/browse/BAM-14422. From now on, naming for releases created on non-default branches follows the defined naming rules.
Bamboo
20,760,542
11
This seemed like a good idea at the time: public static final String MY_CONFIG_FILE = System.getenv("APP_HOME") + "/cfg/app.properties"; When I pushed code to Bamboo, some tests failed with java.io.FileNotFoundException: ./cfg/app.properties (No such file or directory) I did set an environment variable in Bamboo as APP_HOME=. Still, however, Bamboo can't seem to find the file. What am I doing wrong?
In case someone is interested, in order to reference the current working directory, APP_HOME should be set to: APP_HOME=${bamboo.build.working.directory}
Bamboo
10,417,486
10
I have a bamboo build with 2 stages: Build&Test and Publish. The way bamboo works, if Build&Test fails, Publish is not run. This is usually the way that I want things. However, sometimes, Build&Test will fail, but I still want Publish to run. Typically, this is a manual process where even though there is a failing test, I want to push a button so that I can just run the Publish stage. In the past, I had two separate plans, but I want to keep them together as one. Is this possible?
From the Atlassian help forum, here: https://answers.atlassian.com/questions/52863/how-do-i-run-just-a-single-stage-of-a-build Short answer: no. If you want to run a stage, all prior stages have to finish successfully, sorry. What you could do is to use the Quarantine functionality, but that involves re-running the failed job (in yet-unreleased Bamboo 4.1, you may have to press "Show more" on the build result screen to see the re-run button). Another thing that could be helpful in such situation (but not for OP) is disabling jobs.
Bamboo
10,459,130
10
While building a single page app with AngularJS, I'm trying to integrate Jasmine tests in my build. I did something similar before with the Maven Jasmine plugin, but I don't like to wrap my project in maven just to run the Jasmine tests. It seems cleaner to use Karma (was Testacular) for this somehow. I'm comfortable that I'll get things running from a shell command, and my guess is that I can then run the command from Bamboo. My questions: Am I on the right track? How can I best fail the build from a script, or does Bamboo recognize the Karma output automatically?
Great question. Make sure testacular.conf.js is configured to output JUnit XML for consumption by Bamboo: junitReporter = { // will be resolved to basePath (in the same way as files/exclude patterns) outputFile: 'test-results.xml' }; Testacular can run against many browsers and is pre-configured to use Chrome; we've chosen to go headless with PhantomJS for unit testing. Testacular already has Jasmine inside. For CI we are following the recommendation in // Continuous Integration mode // if true, it capture browsers, run tests and exit singleRun = true; If you use Ant a lot (and we do) sometimes you just want to stick with what you know... so you may want to check out ANT, Windows and NodeJS Modules to run node modules (i.e. Testacular). One note: if you are running Testacular on Windows, the npm install of Testacular fails on the hiredis module, which seems to be *nix-only. So far it works fine without it. It took us a couple of hours to prove all of this works. Hope this helps --dan
Bamboo
13,134,463
10
During a build with Bamboo we create the file /var/atlassian/bamboo/xml-data/build-dir/T4-TGDP-RD/release/dev_patch_release.tar.bz2. This file exists; I checked it with the command line. At 'Artifact definitions' I have the following pattern: **/release/*.bz2. But unfortunately, after the build is done, Bamboo -> Build -> Artifact shows "No artifacts have been found for this build result." At the same time, I have unit tests with results at **/extra/build/logs/*.xml that are successfully parsed by JUnit. So, I also created another artifact pattern with **/extra/build/logs/*.xml - still Bamboo does not see it, but JUnit parses it. How do I create an artifact dev_patch_release.tar.bz2 with Bamboo? Bamboo build Log: simple 08-May-2014 23:11:33 Build Dev Patch - Release and Deploy #17 (T4-TGDP-RD-17) started building on agent Agent2 simple 08-May-2014 23:11:33 Build working directory is /var/atlassian/bamboo/xml-data/build-dir/T4-TGDP-RD simple 08-May-2014 23:11:33 Executing build Dev Patch - Release and Deploy #17 (T4-TGDP-RD-17) simple 08-May-2014 23:11:33 Starting task 'Source Code Checkout' of type 'com.atlassian.bamboo.plugins.vcs:task.vcs.checkout' simple 08-May-2014 23:11:33 Updating source code to revision: c100a20080b08f79b6d1f566dc55a1f5154ff069 simple 08-May-2014 23:11:37 Updated source code to revision: c100a20080b08f79b6d1f566dc55a1f5154ff069 simple 08-May-2014 23:11:37 Finished task 'Source Code Checkout' simple 08-May-2014 23:11:37 Running pre-build action: Clover Grails PreBuild Action simple 08-May-2014 23:11:37 Running pre-build action: VCS Version Collector command 08-May-2014 23:11:37 Substituting variable: ${bamboo.build.working.directory} with /var/atlassian/bamboo/xml-data/build-dir/T4-TGDP-RD command 08-May-2014 23:11:37 Substituting variable: ${bamboo.buildResultKey} with T4-TGDP-RD-17 command 08-May-2014 23:11:37 Substituting variable: ${bamboo.repository.revision.number} with c100a20080b08f79b6d1f566dc55a1f5154ff069 simple 08-May-2014 23:11:37 Starting task 'Run Phing' of type 'com.atlassian.bamboo.plugins.scripttask:task.builder.command' command 08-May-2014 23:11:37 Beginning to execute external process for build 'Dev Patch - Release and Deploy #17 (T4-TGDP-RD-17)'\n ... running command line: \n/usr/bin/phing -buildfile /var/atlassian/bamboo/xml-data/build-dir/T4-TGDP-RD/bamboo-dev-patch.xml test\n ... in: /var/atlassian/bamboo/xml-data/build-dir/T4-TGDP-RD\n ... 
using extra environment variables: \nrevision=c100a20080b08f79b6d1f566dc55a1f5154ff069\nbuild_result_key=T4-TGDP-RD-17\n build 08-May-2014 23:11:39 [00;36mBuildfile: /var/atlassian/bamboo/xml-data/build-dir/T4-TGDP-RD/bamboo-dev-patch.xml[0m build 08-May-2014 23:11:39 [00;32m build 08-May-2014 23:11:39 Dev Patch Build Plan > prepare: build 08-May-2014 23:11:39 [0m build 08-May-2014 23:11:39 [00;36m [mkdir] Created dir: /var/atlassian/bamboo/xml-data/build-dir/T4-TGDP-RD/release[0m build 08-May-2014 23:12:05 Dev Patch Build Plan > test: build 08-May-2014 23:12:05 [0m build 08-May-2014 23:12:05 [00;36m [echo] tar cfj /var/atlassian/bamboo/xml-data/build-dir/T4-TGDP-RD/release/dev_patch_release.tar.bz2 ./[0m build 08-May-2014 23:12:48 [00;32m build 08-May-2014 23:12:48 BUILD FINISHED build 08-May-2014 23:12:48 build 08-May-2014 23:12:48 Total time: 1 minutes 9.67 seconds build 08-May-2014 23:12:48 [0m simple 08-May-2014 23:12:48 Finished task 'Run Phing' simple 08-May-2014 23:12:48 Running post build plugin 'NCover Results Collector' simple 08-May-2014 23:12:48 Running post build plugin 'Clover Results Collector' simple 08-May-2014 23:12:48 Finalising the build... simple 08-May-2014 23:12:48 Stopping timer. simple 08-May-2014 23:12:48 Build T4-TGDP-RD-17 completed. simple 08-May-2014 23:12:48 Running on server: post build plugin 'NCover Results Collector' simple 08-May-2014 23:12:48 Running on server: post build plugin 'Clover Delta Calculator' simple 08-May-2014 23:12:48 All post build plugins have finished simple 08-May-2014 23:12:48 Generating build results summary... simple 08-May-2014 23:12:48 Saving build results to disk... simple 08-May-2014 23:12:48 Indexing build results... simple 08-May-2014 23:12:48 Finished building T4-TGDP-RD-17.
In the artifact definition screen: For Location, specify a relative path to the files you want to turn into an artifact. For Copy pattern, specify the pattern to be copied. For your case, put ./release into the Location box, then specify *.bz2 as the copy pattern. For more info, see this issue: https://jira.atlassian.com/browse/BAM-2149
Bamboo
23,544,905
10
I've got a branched build in Bamboo which is configured to delete builds after 14 days. Usually branches aren't inactive that long in our project, however with Christmas leave and some early New Year priorities one branch has been inactive for more than 14 days. As a result it has dropped out of the branched build list. How do I add it back in Bamboo?
Have a look at this post: Deleted plans are not remade when code is pushed to existing branch. Go to Plan Configuration -> Branches, and click the button in the upper-right to manually add a branch. From the branch add modal, select the desired branch and check the box for 'Enable branches'. (Optional) On the following screen for the new plan branch, check the box to exempt this branch from getting pruned again.
Bamboo
34,867,230
10
I have a PowerShell script which I intend to use as a deployment step in Bamboo. Opening PowerShell and running the script with ./script.ps1 works fine, but using powershell.exe -command ./script.ps1 fails with error Unable to find type [Microsoft.PowerShell.Commands.WebRequestMethod]. What is the difference between running the script directly from PowerShell and by using powershell.exe -command? What am I missing? MWE for the issue in question: function Test-RestMethod { param([string]$Uri, [Microsoft.PowerShell.Commands.WebRequestMethod] $Method = 'Get') $result = Invoke-RestMethod $uri -Method $Method return $result } Test-RestMethod -Uri https://blogs.msdn.microsoft.com/powershell/feed/ -Method 'Get' | Format-Table -Property Title, pubDate
I guess it is an issue with PowerShell.exe itself; I can reproduce the issue in PowerShell 2.0, 3.0, 4.0 and 5.0. The issue is that you can't use a type constraint from the Microsoft.PowerShell.Commands namespace if you don't run any other command first when you are running your script via PowerShell.exe. I found two workarounds for you. a. Run a harmless cmdlet at the beginning of your script, for example Start-Sleep -Milliseconds 1 function Test-RestMethod { param([string]$Uri, [Microsoft.PowerShell.Commands.WebRequestMethod] $Method = 'Get') $result = Invoke-RestMethod $uri -Method $Method return $result } Test-RestMethod -Uri https://blogs.msdn.microsoft.com/powershell/feed/ -Method 'Get' | Format-Table -Property Title, pubDate b. Remove the type constraint; it still works fine: function Test-RestMethod { param([string]$Uri, $Method = 'Get') $result = Invoke-RestMethod $uri -Method $Method return $result } Test-RestMethod -Uri https://blogs.msdn.microsoft.com/powershell/feed/ -Method 'Get' | Format-Table -Property Title, pubDate
Bamboo
49,852,915
10
I updated the gradle plugin to the latest version : com.android.tools.build:gradle:3.0.0-alpha1 and this error occured in AS: export TERM="dumb" if [ -e ./gradlew ]; then ./gradlew test;else gradle test;fi FAILURE: Build failed with an exception. What went wrong: A problem occurred configuring root project 'Android-app'. Could not resolve all dependencies for configuration ':classpath'. Could not find com.android.tools.build:gradle:3.0.0-alpha1. Searched in the following locations: https://jcenter.bintray.com/com/android/tools/build/gradle/3.0.0-alpha1/gradle-3.0.0-alpha1.pom https://jcenter.bintray.com/com/android/tools/build/gradle/3.0.0-alpha1/gradle-3.0.0-alpha1.jar Required by: Current CI config circle.yml dependencies: pre: - mkdir -p $ANDROID_HOME"/licenses" - echo $ANDROID_SDK_LICENSE > $ANDROID_HOME"/licenses/android-sdk-license" - source environmentSetup.sh && get_android_sdk_25 cache_directories: - /usr/local/android-sdk-linux - ~/.android - ~/.gradle override: - ./gradlew dependencies || true test: post: - mkdir -p $CIRCLE_TEST_REPORTS/junit/ - find . -type f -regex ".*/target/surefire-reports/.*xml" -exec cp {} $CIRCLE_TEST_REPORTS/junit/ \; machine: java: version: oraclejdk8 Edit: My gradle file : buildscript { repositories { jcenter() maven { url 'https://maven.google.com' } } dependencies { classpath 'com.android.tools.build:gradle:3.0.0-alpha1' classpath 'com.google.gms:google-services:3.0.0' classpath "io.realm:realm-gradle-plugin:3.1.3" } } allprojects { repositories { mavenCentral() jcenter() } } task clean(type: Delete) { delete rootProject.buildDir }
Google has a new Maven repo: https://android-developers.googleblog.com/2017/10/android-studio-30.html > section Google's Maven Repository https://developer.android.com/studio/preview/features/new-android-plugin-migration.html https://developer.android.com/studio/build/dependencies.html#google-maven So add the Google Maven repo to your buildscript repositories: buildscript { repositories { ... // You need to add the following repository to download the // new plugin. google() // new, replaces https://maven.google.com jcenter() } dependencies { classpath 'com.android.tools.build:gradle:3.6.3' // Minimum supported Gradle version is 4.6. } }
CircleCI
44,071,080
176
Is it possible to install an npm package only if it has not already been installed? I need this to speed up tests on CircleCI, but when I run npm install [email protected] etc. it always downloads things and installs them from scratch; however, the node_modules folder with all modules is already present at the moment of running the commands (cached from a previous build) and protractor --version etc. shows the needed version of the package. It would be perfect to have some one-line command like this: protractor --version || npm install -g [email protected] but one that will also check the version of the package.
You could try npm list protractor || npm install [email protected] where npm list protractor is used to find the protractor package. If the package is not found, it will return npm ERR! code 1 and run npm install [email protected] to install it.
CircleCI
30,667,239
56
I am trying to integrate my springboot tutorial project with CircleCi. My project is inside a subdirectory inside a Github repository and I get the following error from CircleCi. Goal requires a project to execute but there is no POM in this directory (/home/circleci/recipe). Please verify you invoked Maven from the correct directory. I can't figure out how to tell circle-ci my project is inside a subdirectory. I have tried a couple of things, as well as trying to cd inside 'recipe' but it does not work or even feel right. Here is the structure of my project: Spring-tutorials | +-- projectA | +-- recipe | | +--pom.xml Here is my config.yml # Java Maven CircleCI 2.0 configuration file # # Check https://circleci.com/docs/2.0/language-java/ for more details # version: 2 jobs: build: docker: # specify the version you desire here - image: circleci/openjdk:8-jdk # Specify service dependencies here if necessary # CircleCI maintains a library of pre-built images # documented at https://circleci.com/docs/2.0/circleci-images/ # - image: circleci/postgres:9.4 working_directory: ~/recipe environment: # Customize the JVM maximum heap limit MAVEN_OPTS: -Xmx3200m steps: - checkout - run: cd recipe/; ls -la; pwd; # Download and cache dependencies - restore_cache: keys: - recipe-{{ checksum "pom.xml" }} # fallback to using the latest cache if no exact match is found - recipe- - run: cd recipe; mvn dependency:go-offline - save_cache: paths: - ~/recipe/.m2 key: recipe-{{ checksum "pom.xml" }} # run tests! - run: mvn integration-test
I managed to fix the issue. I believe the combination of working_directory: ~/spring-tutorial/recipe and of - checkout: path: ~/spring-tutorial made it work. Here is my working config.yml: # Java Maven CircleCI 2.0 configuration file # # Check https://circleci.com/docs/2.0/language-java/ for more details # version: 2 jobs: build: working_directory: ~/spring-tutorial/recipe docker: # specify the version you desire here - image: circleci/openjdk:8-jdk # Specify service dependencies here if necessary # CircleCI maintains a library of pre-built images # documented at https://circleci.com/docs/2.0/circleci-images/ # - image: circleci/postgres:9.4 environment: # Customize the JVM maximum heap limit MAVEN_OPTS: -Xmx3200m steps: - checkout: path: ~/spring-tutorial # Download and cache dependencies - restore_cache: keys: - recipe-{{ checksum "pom.xml" }} # fallback to using the latest cache if no exact match is found - recipe- - run: mvn dependency:go-offline - save_cache: paths: - ~/.m2 key: recipe-{{ checksum "pom.xml" }} # run tests! - run: mvn integration-test
CircleCI
50,570,221
47
My iOS certificate is stored in GitHub and it has expired; the failure message in the CircleCI build is ‘Your certificate 'xxxxxxx.cer' is not valid, please check end date and renew it if necessary’. Do I need to create a new certificate, or download an existing one? I don’t remember how this was originally created; I thought it was done by Fastlane as part of the build. But I don't know how to modify the Fastlane command; I have tried to add 'cert', but it fails.
You can use fastlane match development after deleting the development profiles and certificates from your git repo. Alternatively, you can delete everything from the git repo and run fastlane match. If you do not care about existing profiles and certificates, just run fastlane match nuke development and fastlane match nuke appstore, then fastlane match development and fastlane match appstore. The first two commands will delete everything from your git repo and the Apple Developer portal, and the next two commands will create everything on your Apple Developer portal and push it to your git repo. Read up on this here.
CircleCI
56,179,677
43
I have tested with my React-app in typescript, using ts-jest like below. import * as React from "react"; import * as renderer from "react-test-renderer"; import { ChartTitle } from "Components/chart_title"; describe("Component: ChartTitle", () => { it("will be rendered with no error", () => { const chartTitle = "My Chart 1"; renderer.create(<ChartTitle title={chartTitle} />); }); }); and it has passed in my local environment but failed in CircleCI. FAIL __tests__/components/chart_title.tsx ● Test suite failed to run TypeScript diagnostics (customize using `[jest-config].globals.ts-jest.diagnostics` option): __tests__/components/chart_title.tsx:4:28 - error TS2307: Cannot find module 'Components/chart_title'. 4 import { ChartTitle } from "Components/chart_title"; ~~~~~~~~~~~~~~~~~~~~~~~~ This Components/ is an alias expression by moduleNameMapper, and I think it doesn't work in only CircleCI. jest --showConfig option tells me there is no difference between local and CI environment. Is there any fault in my settings? app/frontend/jest.config.js module.exports = { globals: { "ts-jest": { tsConfig: "tsconfig.json", diagnostics: true }, NODE_ENV: "test" }, moduleNameMapper: { "^Components/(.+)$": "<rootDir>/src/components/$1" }, moduleDirectories: ["node_modules", 'src'], moduleFileExtensions: ["ts", "tsx", "js", "jsx", "json"], transform: { "^.+\\.tsx?$": "ts-jest" }, verbose: true }; app/frontend/tsconfig.json { "compilerOptions": { "baseUrl": "src", "outDir": "dist", "allowJs": true, "checkJs": true, "moduleResolution": "node", "sourceMap": true, "noImplicitAny": true, "target": "esnext", "module": "commonjs", "lib": ["es6", "dom"], "jsx": "react", "strict": false, "removeComments": true, "types": ["jest"] }, "typeRoots": ["node_modules/@types"], "paths": { "Components/*": ["src/components/*"] }, "include": ["src/**/*"], "exclude": ["node_modules", "__tests__"] } app/frontend/package.json { "scripts": { "build": "webpack --mode development --watch", "build-production": "node_modules/.bin/webpack --mode production", "test": "jest", "lint": "npx eslint src/**/* __tests__/**/* --ext \".ts, .tsx\"", }, } app/.circleci/.config.yml version: 2 jobs: build: ... steps: - run: name: run tests for frontend command: npm test -- -u working_directory: frontend
tsconfig-paths-jest is not usable in Jest >23. For current Jest 26 I got it working via: https://kulshekhar.github.io/ts-jest/docs/getting-started/paths-mapping/ jest.config.js const { pathsToModuleNameMapper } = require('ts-jest'); const { compilerOptions } = require('./tsconfig'); module.exports = { preset: 'ts-jest', testEnvironment: 'node', moduleNameMapper: pathsToModuleNameMapper(compilerOptions.paths, { prefix: '<rootDir>/src/' } ) }; tsconfig.json "compilerOptions" "baseUrl": "./src", "paths": { "@models/*": [ "./models/*" ], "@inputs/*": [ "./inputs/*" ], "@tests/*": [ "./__tests__/*" ], "@resolvers/*": [ "./resolvers/*" ], "@seeds/*": [ "./seeds/*" ], "@index": [ "./index.ts" ], "@ormconfig":[ "../ormconfig.ts" ] },
CircleCI
55,488,882
41
I'm trying to build my Android project using Gradle and CircleCI, but I get this error: * What went wrong: A problem occurred configuring root project '<myproject>'. > Could not resolve all dependencies for configuration ':classpath'. > Could not find com.android.tools.build:gradle:2.2.3. Searched in the following locations: file:/home/ubuntu/.m2/repository/com/android/tools/build/gradle/2.2.3/gradle-2.2.3.pom file:/home/ubuntu/.m2/repository/com/android/tools/build/gradle/2.2.3/gradle-2.2.3.jar https://repo1.maven.org/maven2/com/android/tools/build/gradle/2.2.3/gradle-2.2.3.pom https://repo1.maven.org/maven2/com/android/tools/build/gradle/2.2.3/gradle-2.2.3.jar https://oss.sonatype.org/content/repositories/snapshots/com/android/tools/build/gradle/2.2.3/gradle-2.2.3.pom https://oss.sonatype.org/content/repositories/snapshots/com/android/tools/build/gradle/2.2.3/gradle-2.2.3.jar Required by: :<myproject>:unspecified Can someone explain why I'm getting this problem, please?
It seems the current versions of the Android Gradle plugin are not added to Maven Central, but they are present on jcenter. Add jcenter() to your list of repositories and Gradle should find version 2.2.3. On Maven Central the newest available version is 2.1.3: http://search.maven.org/#search%7Cgav%7C1%7Cg%3A%22com.android.tools.build%22%20AND%20a%3A%22gradle%22. You can also complain to the authors that the current versions are missing on Maven Central.
CircleCI
41,570,435
40
I want to use CircleCI just to push my Docker image to Docker Hub when I merge with master. I am using CircleCI in other projects where it is more useful and want to be consistent (and I am planning to add tests later). However, all my builds fail because CircleCI says: "NO TESTS!", which is true. How can I stop CircleCI from checking for the presence of tests?
I solved the problem by overriding the test section of circle.yml: test: override: - echo "test" This works for CircleCI 1.0
CircleCI
35,304,343
30
I have a circle.yml file like so: dependencies: override: - meteor || curl https://install.meteor.com | /bin/sh deployment: production: branch: "master" commands: - ./deploy.sh When I push to Github, I get the error: /home/ubuntu/myproject/deploy.sh returned exit code 126 bash: line 1: /home/ubuntu/myproject/deploy.sh: Permission denied Action failed: /home/ubuntu/myproject/deploy.sh When I run the commands that live inside deploy.sh outside of the file (under commands) everything runs fine. Everything in the circle.yml file seems to be in line with the examples in the CircleCI docs.. What am I doing wrong?
Several possible problems: deploy.sh might not be marked as executable (chmod +x deploy.sh would fix this). The first line of deploy.sh might not be a runnable shell (shebang)... If the first fix doesn't work, can we please see the contents of deploy.sh?
CircleCI
33,942,926
29
I've upgraded a project to Go 1.11 and enabled module support for my project, but it seems that CircleCI is re-downloading the dependencies on every build. I know CircleCI allows caching between rebuilds, so I've looked at the documentation for Go modules, and while it mentions a cache, I can't seem to find where it actually exists. Where is the source cache for Go modules?
As of the final 1.11 release, the go module cache (used for storing downloaded modules and source code), is in the $GOPATH/pkg/mod location (see the docs here). For clarification, the go build cache (used for storing recent compilation results) is in a different location. This article, indicated that it's in the $GOPATH/src/mod, but in the timespan of the recent ~40 days, the golang team must have changed that target location. This message thread has some discussion on why the downloaded items ended up in $GOPATH/pkg. You can also use the go mod download -json command to see the downloaded modules/source metadata and their location on your local disk. Example output below: $ go mod download -json go: finding github.com/aws/aws-sdk-go v1.14.5 go: finding github.com/aws/aws-lambda-go v1.2.0 { "Path": "github.com/aws/aws-lambda-go", "Version": "v1.2.0", "Info": "/go/pkg/mod/cache/download/github.com/aws/aws-lambda-go/@v/v1.2.0.info", "GoMod": "/go/pkg/mod/cache/download/github.com/aws/aws-lambda-go/@v/v1.2.0.mod", "Zip": "/go/pkg/mod/cache/download/github.com/aws/aws-lambda-go/@v/v1.2.0.zip", "Dir": "/go/pkg/mod/github.com/aws/[email protected]", "Sum": "h1:2f0pbAKMNNhvOkjI9BCrwoeIiduSTlYpD0iKEN1neuQ=", "GoModSum": "h1:zUsUQhAUjYzR8AuduJPCfhBuKWUaDbQiPOG+ouzmE1A=" } { "Path": "github.com/aws/aws-sdk-go", "Version": "v1.14.5", "Info": "/go/pkg/mod/cache/download/github.com/aws/aws-sdk-go/@v/v1.14.5.info", "GoMod": "/go/pkg/mod/cache/download/github.com/aws/aws-sdk-go/@v/v1.14.5.mod", "Zip": "/go/pkg/mod/cache/download/github.com/aws/aws-sdk-go/@v/v1.14.5.zip", "Dir": "/go/pkg/mod/github.com/aws/[email protected]", "Sum": "h1:+l1m6QH6LypE2kL0p/G0Oh7ceCv+IVQ1h5UEBt2xjjU=", "GoModSum": "h1:ZRmQr0FajVIyZ4ZzBYKG5P3ZqPz9IHG41ZoMu1ADI3k=" } That output is from a build on CircleCI 2.0, using their official circleci/golang:1.11 image. This is a contrived example to show how you would include the restore_cache and save_cache steps for the new golang module cache location: steps: - checkout - restore_cache: keys: - gomod-cache-{{ checksum "go.sum" }} - run: go vet ./... - save_cache: key: gomod-cache-{{ checksum "go.sum" }} paths: - /go/pkg/mod
CircleCI
52,082,783
22
When executing a build for git repository giantswarm/docs-content in CircleCI, I'd like to push a commit to another repository giantswarm/docs. I have this in the deployment section of circle.yml: git config credential.helper cache git config user.email "<some verified email>" git config user.name "Github Bot" git clone --depth 1 https://${GITHUB_PERSONAL_TOKEN}:[email protected]/giantswarm/docs.git cd docs/ git commit --allow-empty -m "Trigger build and publishing via docs-content" git push -u origin master This fails in the very last command with this error message: ERROR: The key you are authenticating with has been marked as read only. fatal: Could not read from remote repository. The GITHUB_PERSONAL_TOKEN environment variable is set to a user's personal access token, which has been created with repo scope to access the private repo giantswarm/docs. In addition, I added the user to a team that has admin permissions for that repo. That series of commands works just fine when I execute it in a fresh Ubuntu VM. Any idea why it doesn't on CircleCI?
I've used git push -q https://${GITHUB_PERSONAL_TOKEN}@github.com/<user>/<repo>.git master and it worked. Update it to be: # Push changes git config credential.helper 'cache --timeout=120' git config user.email "<email>" git config user.name "<user-name>" git add . git commit -m "Update via CircleCI" # Push quietly to prevent showing the token in log git push -q https://${GITHUB_PERSONAL_TOKEN}@github.com/giantswarm/docs.git master
CircleCI
44,773,415
20
In my Django application I have a circle.yml file that runs 'pip install -r requirements/base.txt'. When I push up code and check the CircleCI logs when there is an error, it's hard to get to because there are so many dependencies, and as of pip 6 they started showing progress bars for the installations. Because of that it gets busy pretty quickly. I read on pip's GitHub page that a few people were requesting a flag for the install command to remove the progress bars, but continue to show everything else like exceptions, something like pip install --no-progress-bar foo (https://github.com/pypa/pip/pull/4194). It doesn't look like this has been released yet though. Is there any way to currently do this without using --no-cache-dir?
That PR was merged and is available on the latest stable build (pip 10.0.1 at the time of writing). Just do: pip install foo --progress-bar off Other args are available. See the pip install docs.
CircleCI
48,429,265
19
So the background is this: I have an Xcode project that depends on a swift package that's in a private repository on github. Of course, this requires a key to access. So far, I've managed to configure CI such that I can ssh into the instance and git clone the required repository for the swift package. Unfortunately when running it with xcbuild as CI does, it doesn't work and I get this message: static:ios distiller$ xcodebuild -showBuildSettings -workspace ./Project.xcworkspace \ -scheme App\ Prod Resolve Package Graph Fetching [email protected]:company-uk/ProjectDependency.git xcodebuild: error: Could not resolve package dependencies: Authentication failed because the credentials were rejected In contrast, git clone will happily fetch this repo as seen here: static:ios distiller$ git clone [email protected]:company-uk/ProjectDependency.git Cloning into 'ProjectDependency'... Warning: Permanently added the RSA host key for IP address '11.22.33.44' to the list of known hosts. remote: Enumerating objects: 263, done. remote: Counting objects: 100% (263/263), done. remote: Compressing objects: 100% (171/171), done. remote: Total 1335 (delta 165), reused 174 (delta 86), pack-reused 1072 Receiving objects: 100% (1335/1335), 1.11 MiB | 5.67 MiB/s, done. Resolving deltas: 100% (681/681), done. For a bit more context, this is running on CircleCI, set up with a Deploy key on GitHub, which has been added to the Job on CI. Any suggestions about what might be different between the way Xcode tries to fetch dependencies and the way vanilla git does it would be great. Thanks.
For CI pipelines where you cannot sign into GitHub or other repository hosts this is the solution I found that bypasses the restrictions/bugs of Xcode around private Swift packages. Use https urls for the private dependencies because the ssh config is currently ignored by xcodebuild even though the documentation says otherwise. Once you can build locally with https go to your repository host and create a personal access token (PAT). For GitHub instructions are found here. With your CI system add this PAT as a secret environment variable. In the script below it is referred to as GITHUB_PAT. Then in your CI pipeline before you run xcodebuild make sure you run an appropriately modified version of this bash script: for FILE in $(grep -Ril "https://github.com/[org_name]" .); do sed -i '' "s/https:\/\/github.com\/[org_name]/https:\/\/${GITHUB_PAT}@github.com\/[org_name]/g" ${FILE} done This script will find all https references and inject the PAT into it so it can be used without a password. Don't forget: Replace [org_name] with your organization name. Replace ${GITHUB_PAT} with the name of your CI Secret if you named it differently. Configure the grep command to ignore any paths you don't want modified by the script.
CircleCI
59,035,529
19
Currently, rake db:schema:load is run to setup the database on CircleCI. In migrating from using schema.rb to structure.sql, the command has been updated to: rake db:structure:load. Unfortunately, it appears to hang and does not return: $ bin/rake db:structure:load --trace ** Invoke db:structure:load (first_time) ** Invoke db:load_config (first_time) ** Execute db:load_config ** Execute db:structure:load WARNING: terminal is not fully functional set_config ------------ (1 row) (END)rake aborted! Interrupt: <STACK TRACE> bin/rake:9:in `<main>' Tasks: TOP => db:structure:load Too long with no output (exceeded 10m0s) Found someone else with the same issue on CircleCI, no answers though.
This seems to have something to do with the psql client's output to the terminal expecting user input: set_config ------------ (1 row) (END) <--- like from a terminal pager Not exactly a proper solution, but a workaround in .circleci/config.yml: jobs: build: docker: - image: MY_APP_IMAGE environment: PAGER: cat # prevent psql commands using less
CircleCI
53,055,044
18
I am trying to integrate CircleCi with gcloud Kubernetes engine. I created a service account with Kubernetes Engine Developer and Storage Admin roles. Created CircleCi yaml file and configured CI. Part of my yaml file includes: docker: - image: google/cloud-sdk environment: - PROJECT_NAME: 'my-project' - GOOGLE_PROJECT_ID: 'my-project-112233' - GOOGLE_COMPUTE_ZONE: 'us-central1-a' - GOOGLE_CLUSTER_NAME: 'my-project-bed' steps: - checkout - run: name: Setup Google Cloud SDK command: | apt-get install -qq -y gettext echo $GCLOUD_SERVICE_KEY > ${HOME}/gcloud-service-key.json gcloud auth activate-service-account --key-file=${HOME}/gcloud-service-key.json gcloud --quiet config set project ${GOOGLE_PROJECT_ID} gcloud --quiet config set compute/zone ${GOOGLE_COMPUTE_ZONE} gcloud --quiet container clusters get-credentials ${GOOGLE_CLUSTER_NAME} Everything runs perfectly except that the last command: gcloud --quiet container clusters get-credentials ${GOOGLE_CLUSTER_NAME} It keeps failing with the error: ERROR: (gcloud.container.clusters.get-credentials) ResponseError: code=403, message=Required "container.clusters.get" permission(s) for "projects/my-project-112233/zones/us-central1-a/clusters/my-project-bed". See https://cloud.google.com/kubernetes-engine/docs/troubleshooting#gke_service_account_deleted for more info. I tried to give the ci account the role of project owner but I still got that error. I tried to disable and re-enable the Kubernetes Service but it didn't help. Any idea how to solve this? I am trying to solve it for 4 days...
This is an old thread; this is how the issue is handled today in case you are using Cloud Build: Granting Cloud Build access to GKE. To deploy the application in your Kubernetes cluster, Cloud Build needs the Kubernetes Engine Developer Identity and Access Management Role. Get the Project Number: PROJECT_NUMBER="$(gcloud projects describe ${PROJECT_ID} --format='get(projectNumber)')" Add the IAM Policy binding: gcloud projects add-iam-policy-binding ${PROJECT_NUMBER} \ --member=serviceAccount:${PROJECT_NUMBER}@cloudbuild.gserviceaccount.com \ --role=roles/container.developer More info can be found here.
CircleCI
53,420,870
17
We keep getting the following exception on CircleCI while building out project. Everything runs well when running the job from the CircleCI CLI. Has anyone found a fix / resolution for this? Compilation with Kotlin compile daemon was not successful java.rmi.UnmarshalException: Error unmarshaling return header; nested exception is: java.io.EOFException at sun.rmi.transport.StreamRemoteCall.executeCall(StreamRemoteCall.java:236) at sun.rmi.server.UnicastRef.invoke(UnicastRef.java:161) at java.rmi.server.RemoteObjectInvocationHandler.invokeRemoteMethod(RemoteObjectInvocationHandler.java:227) at java.rmi.server.RemoteObjectInvocationHandler.invoke(RemoteObjectInvocationHandler.java:179) at com.sun.proxy.$Proxy104.compile(Unknown Source) at org.jetbrains.kotlin.compilerRunner.GradleKotlinCompilerWork.incrementalCompilationWithDaemon(GradleKotlinCompilerWork.kt:284) at org.jetbrains.kotlin.compilerRunner.GradleKotlinCompilerWork.compileWithDaemon(GradleKotlinCompilerWork.kt:198) at org.jetbrains.kotlin.compilerRunner.GradleKotlinCompilerWork.compileWithDaemonOrFallbackImpl(GradleKotlinCompilerWork.kt:141) at org.jetbrains.kotlin.compilerRunner.GradleKotlinCompilerWork.run(GradleKotlinCompilerWork.kt:118) at org.jetbrains.kotlin.compilerRunner.GradleCompilerRunner.runCompilerAsync(GradleKotlinCompilerRunner.kt:158) at org.jetbrains.kotlin.compilerRunner.GradleCompilerRunner.runCompilerAsync(GradleKotlinCompilerRunner.kt:153) at org.jetbrains.kotlin.compilerRunner.GradleCompilerRunner.runJvmCompilerAsync(GradleKotlinCompilerRunner.kt:92) at org.jetbrains.kotlin.gradle.tasks.KotlinCompile.callCompilerAsync$kotlin_gradle_plugin(Tasks.kt:447) at org.jetbrains.kotlin.gradle.tasks.KotlinCompile.callCompilerAsync$kotlin_gradle_plugin(Tasks.kt:355) at org.jetbrains.kotlin.gradle.tasks.AbstractKotlinCompile.executeImpl(Tasks.kt:312) at org.jetbrains.kotlin.gradle.tasks.AbstractKotlinCompile.execute(Tasks.kt:284) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.gradle.internal.reflect.JavaMethod.invoke(JavaMethod.java:103) at org.gradle.api.internal.project.taskfactory.IncrementalTaskInputsTaskAction.doExecute(IncrementalTaskInputsTaskAction.java:46) at org.gradle.api.internal.project.taskfactory.StandardTaskAction.execute(StandardTaskAction.java:41) at org.gradle.api.internal.project.taskfactory.AbstractIncrementalTaskAction.execute(AbstractIncrementalTaskAction.java:25) at org.gradle.api.internal.project.taskfactory.StandardTaskAction.execute(StandardTaskAction.java:28) at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter$5.run(ExecuteActionsTaskExecuter.java:404) at org.gradle.internal.operations.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:402) at org.gradle.internal.operations.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:394) at org.gradle.internal.operations.DefaultBuildOperationExecutor$1.execute(DefaultBuildOperationExecutor.java:165) at org.gradle.internal.operations.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:250) at org.gradle.internal.operations.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:158) at 
org.gradle.internal.operations.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:92) at org.gradle.internal.operations.DelegatingBuildOperationExecutor.run(DelegatingBuildOperationExecutor.java:31) at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeAction(ExecuteActionsTaskExecuter.java:393) at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeActions(ExecuteActionsTaskExecuter.java:376) at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.access$200(ExecuteActionsTaskExecuter.java:80) at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter$TaskExecution.execute(ExecuteActionsTaskExecuter.java:213) at org.gradle.internal.execution.steps.ExecuteStep.lambda$execute$0(ExecuteStep.java:32) at java.util.Optional.map(Optional.java:215) at org.gradle.internal.execution.steps.ExecuteStep.execute(ExecuteStep.java:32) at org.gradle.internal.execution.steps.ExecuteStep.execute(ExecuteStep.java:26) at org.gradle.internal.execution.steps.CleanupOutputsStep.execute(CleanupOutputsStep.java:58) at org.gradle.internal.execution.steps.CleanupOutputsStep.execute(CleanupOutputsStep.java:35) at org.gradle.internal.execution.steps.ResolveInputChangesStep.execute(ResolveInputChangesStep.java:48) at org.gradle.internal.execution.steps.ResolveInputChangesStep.execute(ResolveInputChangesStep.java:33) at org.gradle.internal.execution.steps.CancelExecutionStep.execute(CancelExecutionStep.java:39) at org.gradle.internal.execution.steps.TimeoutStep.executeWithoutTimeout(TimeoutStep.java:73) at org.gradle.internal.execution.steps.TimeoutStep.execute(TimeoutStep.java:54) at org.gradle.internal.execution.steps.CatchExceptionStep.execute(CatchExceptionStep.java:35) at org.gradle.internal.execution.steps.CreateOutputsStep.execute(CreateOutputsStep.java:51) at org.gradle.internal.execution.steps.SnapshotOutputsStep.execute(SnapshotOutputsStep.java:45) at org.gradle.internal.execution.steps.SnapshotOutputsStep.execute(SnapshotOutputsStep.java:31) at org.gradle.internal.execution.steps.CacheStep.executeWithoutCache(CacheStep.java:201) at org.gradle.internal.execution.steps.CacheStep.execute(CacheStep.java:70) at org.gradle.internal.execution.steps.CacheStep.execute(CacheStep.java:45) at org.gradle.internal.execution.steps.BroadcastChangingOutputsStep.execute(BroadcastChangingOutputsStep.java:49) at org.gradle.internal.execution.steps.StoreSnapshotsStep.execute(StoreSnapshotsStep.java:43) at org.gradle.internal.execution.steps.StoreSnapshotsStep.execute(StoreSnapshotsStep.java:32) at org.gradle.internal.execution.steps.RecordOutputsStep.execute(RecordOutputsStep.java:38) at org.gradle.internal.execution.steps.RecordOutputsStep.execute(RecordOutputsStep.java:24) at org.gradle.internal.execution.steps.SkipUpToDateStep.executeBecause(SkipUpToDateStep.java:96) at org.gradle.internal.execution.steps.SkipUpToDateStep.lambda$execute$0(SkipUpToDateStep.java:89) at java.util.Optional.map(Optional.java:215) at org.gradle.internal.execution.steps.SkipUpToDateStep.execute(SkipUpToDateStep.java:54) at org.gradle.internal.execution.steps.SkipUpToDateStep.execute(SkipUpToDateStep.java:38) at org.gradle.internal.execution.steps.ResolveChangesStep.execute(ResolveChangesStep.java:77) at org.gradle.internal.execution.steps.ResolveChangesStep.execute(ResolveChangesStep.java:37) at org.gradle.internal.execution.steps.legacy.MarkSnapshottingInputsFinishedStep.execute(MarkSnapshottingInputsFinishedStep.java:36) at 
org.gradle.internal.execution.steps.legacy.MarkSnapshottingInputsFinishedStep.execute(MarkSnapshottingInputsFinishedStep.java:26) at org.gradle.internal.execution.steps.ResolveCachingStateStep.execute(ResolveCachingStateStep.java:90) at org.gradle.internal.execution.steps.ResolveCachingStateStep.execute(ResolveCachingStateStep.java:48) at org.gradle.internal.execution.impl.DefaultWorkExecutor.execute(DefaultWorkExecutor.java:33) at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.execute(ExecuteActionsTaskExecuter.java:120) at org.gradle.api.internal.tasks.execution.ResolveBeforeExecutionStateTaskExecuter.execute(ResolveBeforeExecutionStateTaskExecuter.java:75) at org.gradle.api.internal.tasks.execution.ValidatingTaskExecuter.execute(ValidatingTaskExecuter.java:62) at org.gradle.api.internal.tasks.execution.SkipEmptySourceFilesTaskExecuter.execute(SkipEmptySourceFilesTaskExecuter.java:108) at org.gradle.api.internal.tasks.execution.ResolveBeforeExecutionOutputsTaskExecuter.execute(ResolveBeforeExecutionOutputsTaskExecuter.java:67) at org.gradle.api.internal.tasks.execution.ResolveAfterPreviousExecutionStateTaskExecuter.execute(ResolveAfterPreviousExecutionStateTaskExecuter.java:46) at org.gradle.api.internal.tasks.execution.CleanupStaleOutputsExecuter.execute(CleanupStaleOutputsExecuter.java:94) at org.gradle.api.internal.tasks.execution.FinalizePropertiesTaskExecuter.execute(FinalizePropertiesTaskExecuter.java:46) at org.gradle.api.internal.tasks.execution.ResolveTaskExecutionModeExecuter.execute(ResolveTaskExecutionModeExecuter.java:95) at org.gradle.api.internal.tasks.execution.SkipTaskWithNoActionsExecuter.execute(SkipTaskWithNoActionsExecuter.java:57) at org.gradle.api.internal.tasks.execution.SkipOnlyIfTaskExecuter.execute(SkipOnlyIfTaskExecuter.java:56) at org.gradle.api.internal.tasks.execution.CatchExceptionTaskExecuter.execute(CatchExceptionTaskExecuter.java:36) at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.executeTask(EventFiringTaskExecuter.java:73) at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.call(EventFiringTaskExecuter.java:52) at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.call(EventFiringTaskExecuter.java:49) at org.gradle.internal.operations.DefaultBuildOperationExecutor$CallableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:416) at org.gradle.internal.operations.DefaultBuildOperationExecutor$CallableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:406) at org.gradle.internal.operations.DefaultBuildOperationExecutor$1.execute(DefaultBuildOperationExecutor.java:165) at org.gradle.internal.operations.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:250) at org.gradle.internal.operations.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:158) at org.gradle.internal.operations.DefaultBuildOperationExecutor.call(DefaultBuildOperationExecutor.java:102) at org.gradle.internal.operations.DelegatingBuildOperationExecutor.call(DelegatingBuildOperationExecutor.java:36) at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter.execute(EventFiringTaskExecuter.java:49) at org.gradle.execution.plan.LocalTaskNodeExecutor.execute(LocalTaskNodeExecutor.java:43) at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$InvokeNodeExecutorsAction.execute(DefaultTaskExecutionGraph.java:355) at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$InvokeNodeExecutorsAction.execute(DefaultTaskExecutionGraph.java:343) at 
org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$BuildOperationAwareExecutionAction.execute(DefaultTaskExecutionGraph.java:336) at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$BuildOperationAwareExecutionAction.execute(DefaultTaskExecutionGraph.java:322) at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker$1.execute(DefaultPlanExecutor.java:134) at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker$1.execute(DefaultPlanExecutor.java:129) at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.execute(DefaultPlanExecutor.java:202) at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.executeNextNode(DefaultPlanExecutor.java:193) at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.run(DefaultPlanExecutor.java:129) at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:63) at org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:46) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at org.gradle.internal.concurrent.ThreadFactoryImpl$ManagedThreadRunnable.run(ThreadFactoryImpl.java:55) at java.lang.Thread.run(Thread.java:748) Caused by: java.io.EOFException at java.io.DataInputStream.readByte(DataInputStream.java:267) at sun.rmi.transport.StreamRemoteCall.executeCall(StreamRemoteCall.java:222) ... 110 more Unable to clear jar cache after compilation, maybe daemon is already down: java.rmi.ConnectException: Connection refused to host: 127.0.0.1; nested exception is: java.net.ConnectException: Connection refused (Connection refused) Could not connect to kotlin daemon. Using fallback strategy.
This configuration got rid of the issue. Changing from JVM_OPTS: -Xmx3200m to JVM_OPTS: -Xmx2048m was important:
version: 2
executors:
  my-executor:
    docker:
      - image: circleci/android:api-28-node
    working_directory: ~/code
    environment:
      JVM_OPTS: -Xmx2048m
      GRADLE_OPTS: -Xmx1536m -XX:+HeapDumpOnOutOfMemoryError -Dorg.gradle.caching=true -Dorg.gradle.configureondemand=true -Dkotlin.compiler.execution.strategy=in-process -Dkotlin.incremental=false
CircleCI
55,939,204
17
I am looking for a way, via GitHub (or CircleCI) settings, to prevent the person who opens or commits to a pull request from being able to merge or approve that pull request. So far I have branch protection that requires approvals, but post-approval I, as the PR creator and committer, am still able to merge.
You need to prevent a person who is involved in a PR (created it or committed to it) from being able to merge the PR (or even approve it). A contributor who has created a PR cannot approve it or request changes by default in GitHub, so that part is already taken care of. Since a pull request is a GitHub feature, a PR merge can currently only be blocked in two ways:
Using GitHub's settings
Using pre-receive hooks (only for GitHub Enterprise)
Using GitHub's settings, you can only block merging by requiring either pull request reviews, status checks to pass, signed commits, or linear history, as shown under the branch protection settings, or by allowing merge commits, squash merging, or rebase merging, as shown in the Merge button section under repo settings.
If you are on GitHub Enterprise, you can use a pre-receive hook (documentation) like the one below to ensure that self-merging PRs are blocked (this example is here):
if [[ "$GITHUB_VIA" = *"merge"* ]] && [[ "$GITHUB_PULL_REQUEST_AUTHOR_LOGIN" = "$GITHUB_USER_LOGIN" ]]; then
  echo "Blocking merging of your own pull request."
  exit 1
fi
exit 0
Apart from the above, there is currently no other way to block self-merging PRs on GitHub. Using CircleCI or any other CI workflow can only block merging for everybody (if you opt for required status checks on GitHub) or nobody, as it cannot control the PR merge button.
CircleCI
62,601,595
17
Are there any cloud CI services that allow Vagrant VMs to run using VirtualBox as a provider? Early investigation shows this seems not to be possible with Travis CI or Circle CI, although the vagrant-aws plugin allows for the use of AWS servers as a Vagrant provider. Is this correct?
Update January 2021: GitHub Actions also supports Vagrant - and Vagrant/VirtualBox are both installed out-of-the-box in the macOS environment (not on Linux or Windows currently!). See the possible environments here. Therefore I created a fully comprehensible example project at: https://github.com/jonashackt/vagrant-github-actions
1. Create a Vagrantfile (you're not limited to libvirt as with Travis; you have a full VirtualBox environment with nested virtualization working on GitHub Actions!) like this:
Vagrant.configure("2") do |config|
  config.vm.box = "generic/ubuntu1804"
  config.vm.define 'ubuntu'
  # Prevent SharedFoldersEnableSymlinksCreate errors
  config.vm.synced_folder ".", "/vagrant", disabled: true
end
2. Create a GitHub Actions workflow like this vagrant-up.yml inside the .github/workflows directory in your repository:
name: vagrant-up
on: [push]
jobs:
  vagrant-up:
    runs-on: macos-10.15
    steps:
      - uses: actions/checkout@v2
      - name: Run vagrant up
        run: vagrant up
      - name: ssh into box after boot
        run: vagrant ssh -c "echo 'hello world!'"
You can even add caching for the Vagrant boxes; this will save you some seconds :)
Early 2020: TravisCI is now finally able to run Vagrant! Thanks to this GitHub issue I learned about libvirt and KVM, which can be used together with the vagrant-libvirt plugin to run Vagrant boxes on TravisCI. An example TravisCI .travis.yml should look something like this:
---
dist: bionic
language: python
install:
  # Install libvirt & KVM
  - sudo apt-get update && sudo apt-get install -y bridge-utils dnsmasq-base ebtables libvirt-bin libvirt-dev qemu-kvm qemu-utils ruby-dev
  # Download Vagrant & install the Vagrant package
  - sudo wget -nv https://releases.hashicorp.com/vagrant/2.2.7/vagrant_2.2.7_x86_64.deb
  - sudo dpkg -i vagrant_2.2.7_x86_64.deb
  # Vagrant correctly installed?
  - vagrant --version
  # Install the vagrant-libvirt Vagrant plugin
  - sudo vagrant plugin install vagrant-libvirt
script:
  - sudo vagrant up --provider=libvirt
  - sudo vagrant ssh -c "echo 'hello world!'"
With the help of the generic Vagrant box images from Vagrant Cloud you can also establish a workflow of using Vagrant + libvirt + KVM on Travis and Vagrant + VirtualBox on your local machine, if you like. I created a fully working and 100% comprehensible example project here: https://github.com/jonashackt/vagrant-travisci-libvrt
CircleCI
31,828,555
16
Config: CircleCI 2.0, Bitbucket private repo. After I click on "Rebuild with SSH", the "Enable SSH" section outputs:
Failed to enable SSH
No SSH key is found. Please make sure you've added at least one SSH key in your VCS account.
What does this mean? How do I fix this?
You can use your personal id_rsa / id_rsa.pub key pair (which you may have already generated for SSH access to other machines): just add your public key ~/.ssh/id_rsa.pub to Bitbucket -> Settings -> SSH keys -> Add SSH key, then go to CircleCI and rebuild the project. There may be confusion because CircleCI uses another SSH key, called the checkout SSH key pair, for:
checking out the main project
checking out any Bitbucket-hosted submodules
checking out any Bitbucket-hosted private dependencies
automatic git merging/tagging/etc.
The private checkout SSH key is saved on CircleCI's servers and the public key is automatically uploaded to Bitbucket.
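If you have not generated a personal key pair yet, a minimal sketch of doing so (the email address is only a label and the file path is the default location, both assumptions on my part):
# Generate an RSA key pair under ~/.ssh (accept the default path when prompted)
ssh-keygen -t rsa -b 4096 -C "you@example.com"
# Print the public key so you can paste it into Bitbucket -> Settings -> SSH keys
cat ~/.ssh/id_rsa.pub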
CircleCI
46,047,337
16
I have the following code (see the comments for what's happening): // Clone repository from GitHub into a local directory. Git git = Git.cloneRepository() .setBranch("gh-pages") .setURI("https://github.com/RAnders00/KayonDoc.git") .setDirectory(new File("/home/ubuntu/KayonDoc")) .call(); // Print out remotes in config from JGit Config config = git.getRepository().getConfig(); config.getSubsections("remote").forEach(it -> { System.out.println(config.getString("remote", it, "url")); }); // Prints https://github.com/RAnders00/KayonDoc.git // Everything seems OK // You could perform some changes to the repository here... // Push changes to origin git.push() .setCredentialsManager(new UsernamePasswordCredentialsProvider("RAnders00", "hunter2")) .call(); // Throws exception (look below) Caught: org.eclipse.jgit.api.errors.TransportException: [email protected]:RAnders00/KayonDoc.git: push not permitted org.eclipse.jgit.api.errors.TransportException: [email protected]:RAnders00/KayonDoc.git: push not permitted at org.eclipse.jgit.api.PushCommand.call(PushCommand.java:164) at org.eclipse.jgit.api.PushCommand.call(PushCommand.java:80) at <your class> (YourClass.java:?) Caused by: org.eclipse.jgit.errors.TransportException: [email protected]:RAnders00/KayonDoc.git: push not permitted at org.eclipse.jgit.transport.BasePackPushConnection.noRepository(BasePackPushConnection.java:176) at org.eclipse.jgit.transport.BasePackConnection.readAdvertisedRefsImpl(BasePackConnection.java:200) at org.eclipse.jgit.transport.BasePackConnection.readAdvertisedRefs(BasePackConnection.java:178) at org.eclipse.jgit.transport.TransportGitSsh$SshPushConnection.<init>(TransportGitSsh.java:341) at org.eclipse.jgit.transport.TransportGitSsh.openPush(TransportGitSsh.java:166) at org.eclipse.jgit.transport.PushProcess.execute(PushProcess.java:154) at org.eclipse.jgit.transport.Transport.push(Transport.java:1200) at org.eclipse.jgit.api.PushCommand.call(PushCommand.java:157) ... 3 more JGit is saving the git: url into the .git/FETCH_HEAD file, which is then being used for pushing. Since the git: protocol does not support authentication, I am unable to push to the remote and the process fails. The .git/config file contains the correct https: URI for the remote (that's why the code is printing the https: URI). My question is: What can I do to make JGit set the https: URI correctly (which would then allow me to push again)? This issue only arises in a very special environment (on CircleCI, a Ubuntu 12.04.2 LTS virtual box) - it's not reproducable on 15.10, 14.04 LTS and 12.04.2 LTS fresh ubuntu distributions and not reproducable on Windows. The easiest way to reproduce the issue is to create a dummy GitHub repository, then start building your dummy project on CircleCI, and then to rerun your first build with SSH. You then have 30 minutes of SSH time to upload any groovy/java files to the box. After the 30 minutes the box will be forcefully shut down. If I use git remote -v in the directory this was cloned into, I get: (which points me to the fact that the git: URIs are indeed used) origin [email protected]:RAnders00/KayonDoc.git (fetch) origin [email protected]:RAnders00/KayonDoc.git (push)
Looks like you have defined URL rewriting. Git provides a way to rewrite URLs with the following config:
git config --global url."git://".insteadOf https://
To verify whether you have set it, check the configuration of your repository:
git config --list
You'll see the following line in the output:
url.git://.insteadof=https://
You can also check your .gitconfig files to verify that you don't have this entry in your config files:
[url "git://"]
  insteadOf = https://
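As a follow-up not stated in the answer above, if the rewrite rule turns out to be the culprit, one plausible fix is simply to remove it again (a sketch; adjust --global to --system or to the repo-local config depending on where the entry actually lives):
# Remove the url.<base>.insteadOf rewrite rule from the global config
git config --global --unset url."git://".insteadOf
# Confirm it is gone
git config --list | grep -i insteadof || echo "no insteadOf rules left"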
CircleCI
33,835,669
15
I'm running some tests in CircleCI and some of them are taking longer than 10 minutes, because they are UI tests that run on a headless browser that I'm installing in my circle.yml. How can I extend the timeout? Thanks.
You can add the timeout modifier to your command to increase the timeout beyond the default 600 seconds (10 min). For example, if you ran a test called my-test.sh, you could do the following:
test:
  override:
    - ./my-test.sh:
        timeout: 900
Note that the command ends with a colon (:), with the modifier on the next line, double-indented (4 spaces instead of 2). Reference: https://circleci.com/docs/configuration#modifiers
CircleCI
36,173,553
15
I am stuck in this problem. I am running cypress tests. When I run locally, it runs smoothly. when I run in circleCI, it throws error after some execution. Here is what i am getting: [334:1020/170552.614728:ERROR:bus.cc(392)] Failed to connect to the bus: Failed to connect to socket /var/run/dbus/system_bus_socket: No such file or directory [334:1020/170552.616006:ERROR:bus.cc(392)] Failed to connect to the bus: Could not parse server address: Unknown address type (examples of valid types are "tcp" and on UNIX "unix") [334:1020/170552.616185:ERROR:bus.cc(392)] Failed to connect to the bus: Could not parse server address: Unknown address type (examples of valid types are "tcp" and on UNIX "unix") [521:1020/170552.652819:ERROR:gpu_init.cc(441)] Passthrough is not supported, GL is swiftshader Current behavior: When I run my specs headless on the circleCI, Cypress closed unexpectedly with a socket error. Error message: The Test Runner unexpectedly exited via a exit event with signal SIGSEGV Please search Cypress documentation for possible solutions: https://on.cypress.io Platform: linux (Debian - 10.5) Cypress Version: 8.6.0
The issue was resolved by reverting the Cypress version back to 7.6.0.
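For completeness (my addition, not from the original answer), pinning the dependency back to that version would look roughly like this, assuming Cypress is managed as an npm dev dependency:
# Install the older Cypress release and record the exact version in package.json
npm install --save-dev --save-exact cypress@7.6.0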
CircleCI
69,658,152
15
I have a spec for a method that returns a timestamp of an ActiveRecord object. The spec passes locally, but whenever it is run on CircleCI, there is a slight mismatch between the expected and the actual. The spec looks something like this: describe '#my_method' do it 'returns created_at' do object = FactoryGirl.create(:something) expect(foo.bar(object)).to eq object.created_at end end While it passes locally, on CircleCI, I continually get similar error messages. Here are examples: (1) expected: 2015-05-09 10:42:59.752192641 +0000 got: 2015-05-09 10:42:59.752192000 +0000 (2) expected: 2015-05-08 10:16:36.777541226 +0000 got: 2015-05-08 10:16:36.777541000 +0000 From the error, I suspect that CircleCI is rounding up the timestamp value, but I do not have enough information. Any suggestions?
I am encountering the same issue and currently have an open ticket with CircleCI to get more information. I'll update this answer when I know more. In the meantime, a workaround to get these tests passing is just to ensure that the timestamp you're working with in a test like this is rounded, using a library that mocks time (like timecop).
describe '#my_method' do
  it 'returns created_at' do
    # CircleCI seems to round milliseconds, which can result in
    # slight differences when serializing times.
    # To work around this, ensure the milliseconds end in 000.
    Timecop.freeze(Time.local(2015)) do
      object = FactoryGirl.create(:something)
      expect(foo.bar(object)).to eq object.created_at
    end
  end
end
UPDATE: Based on the initial response from CircleCI, the above approach is actually their recommended approach. They haven't been able to give me an explanation yet as to why the rounding is actually happening, though.
UPDATE 2: It looks like this has something to do with the precision difference between different systems. I'm personally seeing this issue on OS X. Here's the response from Circle:
From what I know, Time.now actually has different precision on OS X and Linux machines. I would suppose that you will get the exact same result on other Linux hosts, but all OS X hosts will give you the result without rounding. I might be wrong, but I recall talking about that with another customer. Mind checking that on a VM or an EC2 instance running Linux? In the Time reference you can search for precision on the page - the round method can actually adjust the precision for you. Would it be an option for you to round the time in the assertion within the test?
I have not tried their suggestions yet to confirm, but this does seem to provide an explanation as well as an additional workaround (rounding the assertion within the test) that doesn't require timecop.
CircleCI
30,139,038
14
I have some questions and issues with my CI and CD solution. Rails: 4.2 Capistrano: 3.4.0 The application is hosted on a private server. Right now I have the workflow working with deploying development, staging and production via the terminal. I also hooked up Circle CI working good on these branches. I cannot find how to setup Circle CI to use Capistrano to deploy. Everything is configured with the server user in the Capistrano config. How do I give Circle CI SSH access to my deploy user? Because now I have to provide a password for the user.
Use SSH keys for authentication. You might as well use it for your own SSH sessions too, because it's more convenient and secure (a rare occasion!) than password authentication. Check out this tutorial on how to set it up. Then, paste your private key to CircleCI in Project Settings -> SSH Permissions, as described here. You'd need to copy the private key from your local machine from the key pair whose public key you added to the deploy user on the server. CircleCI then will have SSH access to your server. You can set the hostname to the domain that points to your server or your server's IP, or leave it blank so this key would be used in all hosts.
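A concrete sketch of that setup (my addition; the user name deploy and the host your-server.example.com are placeholders, not values from the question):
# 1. Generate a key pair on your machine (or reuse an existing one)
ssh-keygen -t rsa -b 4096 -C "circleci-deploy"
# 2. Authorize the public key for the deploy user on the server
ssh-copy-id deploy@your-server.example.com
# 3. Paste the contents of the matching private key into
#    CircleCI -> Project Settings -> SSH Permissions
cat ~/.ssh/id_rsa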
CircleCI
32,467,128
14
I have created spec/lint/rubocop_spec.rb which runs Rubocop style checker on the files changed between current branch and master. This works when I test locally but not when the test run on the build server Circle.ci. I suspect it is because only the branch in question is downloaded, so it does not find any differences between master. Is there a better way than git co master && git pull origin master? Can I query the Github API perhaps to get the files changed listed? require 'spec_helper' describe 'Check that the files we have changed have correct syntax' do before do current_sha = `git rev-parse --verify HEAD`.strip! files = `git diff master #{current_sha} --name-only | grep .rb` files.tr!("\n", ' ') @report = 'nada' if files.present? puts "Changed files: #{files}" @report = `rubocop #{files}` puts "Report: #{@report}" end end it { @report.match('Offenses').should_not be true } end
You don't have to use the GitHub API, or even Ruby (unless you want to wrap the responses); you can just run:
git fetch && git diff-tree -r --no-commit-id --name-only master@\{u\} head | xargs ls -1 2>/dev/null | xargs rubocop --force-exclusion
See http://www.red56.uk/2017/03/26/running-rubocop-on-changed-files/ for a longer write-up of this.
CircleCI
32,553,877
14
When building the app on CircleCI for v0.59.x it gives me the following error (It used to work fine till v0.57.8): [12:45:19]: ▸ Note: Some input files use or override a deprecated API. [12:45:19]: ▸ Note: Recompile with -Xlint:deprecation for details. [12:45:19]: ▸ > Task :react-native-svg:processReleaseJavaRes NO-SOURCE [12:45:19]: ▸ > Task :react-native-svg:transformClassesAndResourcesWithPrepareIntermediateJarsForRelease [12:45:19]: ▸ > Task :app:javaPreCompileQa [12:45:44]: ▸ > Task :app:bundleQaJsAndAssets [12:45:44]: ▸ warning: the transform cache was reset. [12:46:00]: ▸ Loading dependency graph, done. [12:46:19]: ▸ > Task :app:bundleQaJsAndAssets FAILED [12:46:19]: ▸ FAILURE: Build failed with an exception. [12:46:19]: ▸ * What went wrong: [12:46:19]: ▸ Execution failed for task ':app:bundleQaJsAndAssets'. [12:46:19]: ▸ > Process 'command 'node'' finished with non-zero exit value 137 I figure this has something to do with memory or Gradle/Java options because the build works fine on my local machine (./gradlew assembleRelease) Useful snippets from circle config: jobs: make-android: ... docker: - image: circleci/android:api-28-node8-alpha environment: TERM: dumb # JAVA_OPTS... # GRADLE_OPTS... steps: - checkout: path: *root_dir - attach_workspace: at: *root_dir - run: name: Build the app no_output_timeout: 30m command: bundle exec fastlane make And fastlane make is gradle(task: "clean") gradle(task: "assembleRelease") I tried multiple JAVA_OPTS and GRADE_OPTS, including removing them (it used to work fine with no _OPTS with v0.57.8) JAVA_OPTS: "-Xms512m -Xmx4096m" GRADLE_OPTS: -Xmx4096m -Dorg.gradle.daemon=false -Dorg.gradle.jvmargs="-Xms512m -Xmx4096m -XX:+HeapDumpOnOutOfMemoryError" JAVA_OPTS: "-Xms512m -Xmx2048m" GRADLE_OPTS: -Xmx2048m -Dorg.gradle.daemon=false -Dorg.gradle.jvmargs="-Xmx2048m -XX:+HeapDumpOnOutOfMemoryError" I also have this in android/app/build.gradle dexOptions { javaMaxHeapSize "2g" preDexLibraries false }
One of the reasons could be the number of workers the Metro bundler is using. Setting maxWorkers: <# workers> in metro.config.js fixed it for me:
module.exports = {
  transformer: {
    getTransformOptions: async () => ({
      transform: {
        experimentalImportSupport: false,
        inlineRequires: false,
      },
    }),
  },
  maxWorkers: 2,
};
The other thing I changed was setting JAVA_OPTS and GRADLE_OPTS in .circleci/config.yml:
JAVA_OPTS: '-Xms512m -Xmx2g'
GRADLE_OPTS: '-Xmx3g -Dorg.gradle.daemon=false -Dorg.gradle.jvmargs="-Xmx2g -XX:+HeapDumpOnOutOfMemoryError"'
CircleCI
56,002,938
14
I would like to programmatically determine if a particular Python script is run in a testing environment such as GitHub Actions, Travis CI, Circle CI, etc. I realize that this will require some heuristics, but that's good enough for me. Are certain environment variables always set? Is the user name always the same? Etc.
An environment variable is generally set by each CI/CD pipeline tool. The ones I know about:
os.getenv("GITHUB_ACTIONS")
os.getenv("TRAVIS")
os.getenv("CIRCLECI")
os.getenv("GITLAB_CI")
Each of these returns "true" in a Python script when executed in the respective tool's environment, e.g.:
os.getenv("GITHUB_ACTIONS") == "true" in a GitHub Actions workflow.
os.getenv("CIRCLECI") == "true" in a CircleCI pipeline.
...
PS: If I'm not mistaken, detecting that the Python script is being executed on a Jenkins or Kubernetes service host doesn't work the same way.
CircleCI
73,973,332
14
I currently have a few services, such as db and web, in a Django application, and docker-compose is used to string them together. The web service has config like this:
web:
  restart: always
  build: ./web
  expose:
    - "8000"
The Dockerfile in web has python2.7-onbuild, so it uses the requirements.txt file to install all the necessary dependencies. I am now using CircleCI for integration and have a circle.yml file like this:
....
dependencies:
  pre:
    - pip install -r web/requirements.txt
....
Is there any way I could avoid the dependency clause in the circle.yml file? Instead I would like CircleCI to use docker-compose.yml, if that makes sense.
Yes, using docker-compose in the circle.yml file can be a nice way to run tests because it can mirror one's dev environment very closely. This is an extract from our working tests on an AngularJS project:
---
machine:
  services:
    - docker
dependencies:
  override:
    - docker login -e $DOCKER_EMAIL -u $DOCKER_USER -p $DOCKER_PASS
    - sudo pip install --upgrade docker-compose==1.3.0
test:
  pre:
    - docker-compose pull
    - docker-compose up -d
    - docker-compose run npm install
    - docker-compose run bower install --allow-root --config.interactive=false
  override:
    # grunt runs our karma tests
    - docker-compose run grunt deploy-build compile
Notes:
The docker login is only needed if you have private images in Docker Hub.
When we wrote our circle.yml file only docker-compose 1.3 was available. This has probably been updated by now.
CircleCI
31,787,426
13
Excerpt from a CircleCI config file:
deploy:
  machine:
    enabled: true
  steps:
    - run:
        name: AWS EC2 deploy
        command: |
          ssh -o "StrictHostKeyChecking no" [email protected] "cd ~/circleci-aws; git pull; npm i; npm run build; pm2 restart build/server
How can I break the command into multiple lines? I tried the syntax below, but it only runs the first command:
deploy:
  machine:
    enabled: true
  steps:
    - run:
        name: Deploy
        command: |
          ssh -o StrictHostKeyChecking=no [email protected]
          cd ~/circleci-aws
          git pull
          npm i
          npm run build
          pm2 restart build/server
This is an old one, but it's had a lot of views, so what I've found seems worth sharing. In the CircleCI docs (https://circleci.com/docs/2.0/configuration-reference/#shorthand-syntax) they indicate that when using the run shorthand syntax you can also do multi-line. That would look like the following:
- run: |
    git add --all
    git commit -am "a commit message"
    git push
The difference between the question's example and this is that the commands are under "run", not "command".
CircleCI
51,672,067
13
I'm getting a deprecation warning from my pipelines at CircleCI.
Message:
/home/circleci/evobench/env/lib/python3.7/site-packages/_pytest/junitxml.py:436: PytestDeprecationWarning: The 'junit_family' default value will change to 'xunit2' in pytest 6.0.
Command:
- run:
    name: Tests
    command: |
      . env/bin/activate
      mkdir test-reports
      python -m pytest --junitxml=test-reports/junit.xml
How should I modify the command to use xunit? Is it possible to use a default tool, as mentioned in the message? I mean without specifying xunit or junit. Here's the full pipeline.
Run your command in one of these ways.
With xunit2:
python -m pytest -o junit_family=xunit2 --junitxml=test-reports/junit.xml
With xunit1:
python -m pytest -o junit_family=xunit1 --junitxml=test-reports/junit.xml
or
python -m pytest -o junit_family=legacy --junitxml=test-reports/junit.xml
This describes the change in detail: the default value of the junit_family option will change to xunit2 in pytest 6.0, given that this is the version supported by default in modern tools that manipulate this type of file. In order to smooth the transition, pytest will issue a warning in case the --junitxml option is given on the command line but junit_family is not explicitly configured in pytest.ini:
PytestDeprecationWarning: The `junit_family` default value will change to 'xunit2' in pytest 6.0. Add `junit_family=legacy` to your pytest.ini file to silence this warning and make your suite compatible.
In order to silence this warning, users just need to configure the junit_family option explicitly:
[pytest]
junit_family=legacy
CircleCI
60,212,552
13
Is there a way to restrict CircleCI deployment to check-ins that have a specific git tag? Currently I am using this:
...
deployment:
  dockerhub:
    branch: master
    commands:
      - docker login -e $DOCKER_EMAIL -u $DOCKER_USER -p $DOCKER_PASS
      - docker push abcdef
Instead of branch: master I would like to write something like tag: /release_.*/
Background: I would like to set Docker tags depending on git tags. So, for example, whenever something is committed to master, a new Docker image with the latest tag will be created and pushed. Whenever a special git tag is set (e.g. release_1.0_2015-06-13), a new Docker image with the tag 1.0 will be created and pushed. An alternative is to only use different branches according to the different tags, but I would like to use tags to mark a specific release.
It looks like this was added since Kim answered. Normally, pushing a tag will not run a build. If there is a deployment configuration with a tag property that matches the name of the tag you created, we will run the build and the deployment section that matches. In the below example, pushing a tag named release-v1.05 would trigger a build & deployment. Pushing a tag qa-9502 would not trigger a build.
deployment:
  release:
    tag: /release-.*/
    owner: circleci
    commands:
      - ./deploy_master.sh
CircleCI
30,817,760
12
I maintain an open-source framework that uses CircleCI for continuous integration. I've recently hit a wall where the project suddenly refused to build in rather strange circumstances. Build 27 was the last one that succeeded. After that, I made some minor changes to dependencies and noticed that the build fails. I've tried to fix it without success, so I reverted back to last working configuration and it still failed. The reason for failure are two dependencies, both being bindings to native C libraries: OpenGL (OpenGLRaw) and GLFW (bindings-glfw). They error out in link stage with numerous lines of: /tmp/ghc18975_0/ghc18975_6.o:(.data+0x0): multiple definition of `__stginit_bindizu0Qm7f8FzzUN32WFlos7AKUm_BindingsziGLFW' /tmp/ghc18975_0/ghc18975_6.o:(.data+0x0): first defined here I am totally stumped as for why that might happen. The exact same versions of those libraries were built back when the original build passed, and being on CI it uses a fresh container each time (I've tried cleaning the cache obviously). The build involves both apt-get update and cabal update though, so there's a possibility that some external resource was changed. If anyone has ever encountered such or similar problem, it might vastly help in diagnosing and removing the issue. Google search for this specific multiple definition problem of that scale yields nothing. I tried to update cabal version (since some hints over the internet pointed at it), but with: cabal-install version 1.22.6.0 using version 1.22.4.0 of the Cabal library The problem persists. One important thing I forgot to mention is that this doesn't look strictly like some simple package mixup. I connected over SSH to that box, created an empty folder and a sandbox there, and even simple cabal install OpenGLRaw failed with the same problem (so it's unlikely that that itself would pull in two versions of the same module that could cause those conflicts). I've also extracted a verbose cabal installation log. Did SSH again, cloned raw sources of OpenGLRaw, still the same. Tried 7.6.3, still the same.
It seems to be an issue with gcc-4.9.2. I forked your project, started a build with high verbosity level, connected to the circleci container and run the exact linking command. It fails the same way: ubuntu@box1305:~$ /usr/bin/gcc -fno-stack-protector -DTABLES_NEXT_TO_CODE '-Wl,--hash-size=31' -Wl,--reduce-memory-overheads -Wl,--no-as-needed -nostdlib -Wl,-r -nodefaultlibs '-Wl,--build-id=none' -o Types.o /tmp/ghc17998_0/ghc_15.ldscript /tmp/ghc17998_0/ghc_14.o: In function `r2vy_closure': (.data+0x0): multiple definition of `__stginit_OpenGzu8rT20eO9AxEKIONeYf57cS_GraphicsziRenderingziOpenGLziRawziTypes' /tmp/ghc17998_0/ghc_14.o:(.data+0x0): first defined here /tmp/ghc17998_0/ghc_14.o: In function `r2vy_closure': (.data+0x8): multiple definition of `OpenGzu8rT20eO9AxEKIONeYf57cS_GraphicsziRenderingziOpenGLziRawziTypes_makeGLDEBUGPROC_closure' /tmp/ghc17998_0/ghc_14.o:(.data+0x8): first defined here /tmp/ghc17998_0/ghc_14.o: In function `c2y7_info': (.text+0xc0): multiple definition of `OpenGzu8rT20eO9AxEKIONeYf57cS_GraphicsziRenderingziOpenGLziRawziTypes_makeGLDEBUGPROC_info' /tmp/ghc17998_0/ghc_14.o:(.text+0xc0): first defined here But with gcc-4.8 it works: ubuntu@box1305:~$ /usr/bin/gcc-4.8 -fno-stack-protector -DTABLES_NEXT_TO_CODE '-Wl,--hash-size=31' -Wl,--reduce-memory-overheads -Wl,--no-as-needed -nostdlib -Wl,-r -nodefaultlibs '-Wl,--build-id=none' -o Types.o /tmp/ghc17998_0/ghc_15.ldscript ubuntu@box1305:~$ So you should switch to older gcc and probably report a bug to gcc devs. ADDED: Here is an example how to switch gcc version. And here is a successful build.
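The answer refers to an example of switching the gcc version, but the link is lost here; a plausible sketch of doing that on the Ubuntu-based CircleCI box (assuming gcc-4.8 is installable from the distribution repositories, as the session above suggests) is:
# Install gcc-4.8 if needed and make it the default gcc via update-alternatives
sudo apt-get install -y gcc-4.8
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-4.8 50
gcc --version   # should now report 4.8.x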
CircleCI
34,654,262
12
I'm trying to build a react app on CircleCI which until recently I've had no issues with. I'm now getting the following error whenever I attempt an npm run build from my circle.yml: #!/bin/bash -eo pipefail npm run build > [email protected] build /home/circleci/repo > react-scripts build /home/circleci/repo/node_modules/dotenv-expand/lib/main.js:8 var key = match.replace(/\$|{|}/g, '') ^ RangeError: Maximum call stack size exceeded at String.replace (<anonymous>) at /home/circleci/repo/node_modules/dotenv-expand/lib/main.js:8:23 at Array.forEach (<anonymous>) at interpolate (/home/circleci/repo/node_modules/dotenv-expand/lib/main.js:7:13) at /home/circleci/repo/node_modules/dotenv-expand/lib/main.js:14:18 at Array.forEach (<anonymous>) at interpolate (/home/circleci/repo/node_modules/dotenv-expand/lib/main.js:7:13) at /home/circleci/repo/node_modules/dotenv-expand/lib/main.js:14:18 at Array.forEach (<anonymous>) at interpolate (/home/circleci/repo/node_modules/dotenv-expand/lib/main.js:7:13) npm ERR! code ELIFECYCLE npm ERR! errno 1 npm ERR! [email protected] build: `react-scripts build` npm ERR! Exit status 1 npm ERR! npm ERR! Failed at the [email protected] build script. npm ERR! This is probably not a problem with npm. There is likely additional logging output above. npm ERR! A complete log of this run can be found in: npm ERR! /home/circleci/.npm/_logs/2018-03-14T20_57_45_719Z-debug.log Exited with code 1 I've tried adding/re-added dotenv-expand dependencies as well as any environmental variables I'm using on CircleCI but with no luck. Any suggestions? Thanks.
Turns out I'd been importing environment variables using the same name, e.g. REACT_APP_API_KEY_GOOGLE_MAPS=${REACT_APP_API_KEY_GOOGLE_MAPS}, which presumably made dotenv-expand recurse on itself. Once I changed the name, e.g. REACT_APP_API_KEY_GOOGLE_MAPS=${REACT_APP_API_KEY_GOOGLE_MAPS_EXT}, the issue was resolved!
CircleCI
49,287,598
12
I have to migrate from CircleCI 1.0 to 2.0. After I have changed the old configuration to the new one, build failed because of eslint-plugin-prettier reported prettier spacing violations. MyProject - is my GitHub repo and it contains a folder client which has all front-end code which I want to build on CI. In client folder there are .eslintrc.json ... "extends": ["airbnb", "prettier"], "plugins": ["prettier"], ... .prettierrc { "tabWidth": 4, "trailingComma": "all", "singleQuote": true } .gitattributes (I work on Windows 10) with the following code: *.js text eol=crlf *.jsx text eol=crlf and of course package.json New CircleCI configuration: version: 2 jobs: build: working_directory: ~/MyProject docker: - image: circleci/node:6.14.3-jessie-browsers steps: - checkout - run: name: Install Packages command: npm install working_directory: client - run: name: Test command: npm run validate working_directory: client Old CircleCI configuration: ## Customize dependencies machine: node: version: 6.1.0 # Remove cache before new build. It takes much time # but fixing a build which is broken long time ago (and not detected because of cache) # is even more time consuming dependencies: post: - rm -r ~/.npm ## Customize test commands test: override: - npm run validate general: build_dir: client The build fails due to linting problems (all are about the number of spaces): So, what could cause these errors? I am out of ideas here. I first thought it might be because .prettierrc was not found. However, when I deleted it for an experiment and run locally I got errors in all files in total more than 1000. While on CI with .prettierrc in place there are were only 188 in few files.
I have finally figured it out. My package.json file contained the following dependency on Prettier: "prettier": "^1.11.1". I had to learn the hard way the meaning of this little symbol ^. It allows installing any version of Prettier which is compatible with 1.11.1. In my case, on CircleCI 2.0 it installed 1.14.2, which adds new features to Prettier. I believe it did not break on CircleCI version 1.0 and locally because of cached node_modules, which contained earlier Prettier versions compatible with 1.11.1. Here is a nice video about semantic versioning.
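A small sketch of the obvious remedy (my addition, assuming the project uses npm and that pinning to 1.11.1 is the desired behavior): install the exact version so the caret never appears in package.json:
# Pin Prettier to an exact version; --save-exact writes "1.11.1" instead of "^1.11.1"
npm install --save-dev --save-exact prettier@1.11.1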
CircleCI
51,121,469
12
I'm trying to migrate from Crashlytics Beta to Firebase App Distribution, with CircleCI in the middle. The build fails in CircleCI with the following error:
What went wrong: Execution failed for task ':FiverrApp:appDistributionUploadRelease'. Service credentials file does not exist. Please check the service credentials path and try again
Here is how I'm configuring the serviceCredentialsFile variable in my build.gradle:
release {
    buildConfigField "boolean", "FORCE_LOGS", "true"
    firebaseAppDistribution {
        releaseNotes="Notes\n" + getCommitMessages()
        groups="android-testers"
        serviceCredentialsFile="/api-project-xxx-yyy.json"
    }
}
The file api-project-xxx-yyy.json is in the same folder as the build.gradle file. I've also tried:
serviceCredentialsFile="api-project-xxx-yyy.json"
serviceCredentialsFile='api-project-xxx-yyy.json'
And still no luck... Would appreciate it if someone can help me.
Try using $rootDir to build the path. For example, if you put your credentials file api-project-xxx-yyy.json in the root directory, then you can reference it like this:
firebaseAppDistribution {
    ...
    serviceCredentialsFile="$rootDir/api-project-xxx-yyy.json"
}
CircleCI
58,743,588
12
I am trying to cache a command line tool needed for my build process. The tool is made out of NodeJS. The build succeeds, but I need it to run faster. The relevant parts of my circle.yml look like this : dependencies: post: - npm -g list - if [ $(npm -g list | grep -c starrynight) -lt 1 ]; then npm install -g starrynight; else echo "StarryNight seems to be cached"; fi test: override: - npm -g list - starrynight run-tests --framework nightwatch The second npm -g list shows starrynight available for use, but the first one shows that it is not being cached. echo $(npm prefix -g) . . . gets me . . . /home/ubuntu/nvm/v0.10.33 . . . so I am assuming CircleCI doesn't cache anything installed globally into nvm. Nothing I have tried gets me my message, "StarryNight seems to be cached". How can I cache starrynight?
Ok, I figured this out. Thanks to Hirokuni Kim of CircleCI for pointing me in the right direction. The relevant bits of the new circle.yml look like this:
machine:
  node:
    version: 0.10.33
dependencies:
  cache_directories:
    - ~/nvm/v0.10.33/lib/node_modules/starrynight
    - ~/nvm/v0.10.33/bin/starrynight
  pre:
    - if [ ! -e ~/nvm/v0.10.33/bin/starrynight ]; then npm install -g starrynight; else echo "Starrynight seems to be cached"; fi;
Hirokuni suggested caching ~/nvm, but cache retrieval took as long as the build, since it restores every available version of nodejs. I had tried previously to cache just ~/nvm/v0.10.33/lib/node_modules/starrynight on its own, without realizing that the sister 'directory' bin/starrynight is actually an essential symlink to the entry point of the module.
My working assumption is that NodeJS modules run from the command line through a series of symbolic references, probably as follows:
npm install -g starrynight creates two new artifacts:
an environment alias for npm named starrynight
a symlink in the ${prefix}/bin directory, which points to the entry point file, starrynight.js, specified with the bin key in package.json.
When the user types starrynight as a CLI command, the shell interprets it as an alias for npm and executes it. npm examines $0, gets starrynight, and starts up nodejs with the symlink ${prefix}/bin/starrynight as the module to execute. That symlink refers to ~/nvm/v0.10.33/lib/node_modules/starrynight, where the real action takes place.
In short, it is necessary to cache both ${prefix}/lib/node_modules/xxx and ${prefix}/bin/xxx
CircleCI
31,766,930
11
This seems very basic but I can't find it anywhere in the docs. I'm working on a project where we run some tests through a shell script wrapper like:
./foo.sh a
./foo.sh b
./foo.sh c
foo.sh does not output XUnit format, so we need a different way to signal failure to CircleCI. Is exit 1 (or any nonzero exit code) recognized as a failure? What conditions cause CircleCI to report a step as having failed?
Yes, CircleCI fails the build if any command, whether it runs tests or not, exits with a non-zero exit code. Documented in the configuration reference. These snippets pulled from the above link go into detail on why that's the case: For jobs that run on Linux, the default value of the shell option is /bin/bash -eo pipefail Descriptions of the -eo pipefail options are provided below. -e Exit immediately if a pipeline (which may consist of a single simple command), a subshell command enclosed in parentheses, or one of the commands executed as part of a command list enclosed by braces exits with a non-zero status. -o pipefail If pipefail is enabled, the pipeline’s return status is the value of the last (rightmost) command to exit with a non-zero status, or zero if all commands exit successfully. The shell waits for all commands in the pipeline to terminate before returning a value.
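So for the wrapper in the question, a minimal sketch (assuming each foo.sh invocation itself exits non-zero on failure) is simply to run the commands with set -e, so the first failure stops the script and fails the CircleCI step:
#!/bin/bash
set -e          # abort on the first command that exits non-zero
./foo.sh a
./foo.sh b
./foo.sh c      # if any of these fail, the step (and the build) is marked failed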
CircleCI
35,161,269
11
I am setting up a Circle CI build for an Android project, and am wondering how to add a gradle.properties file to my project build. I use a local gradle.properties to store my API keys and sensitive data. Other CI tools (ie, Jenkins) allow you to upload a gradle.properties file to use across all builds, but I am not able to find a way to do this in Circle CI. It seems that environment variables are the only way Circle CI allows you to add secret credentials to your project. Is there a way to use credentials from gradle.properties on Circle CI builds?
Add all properties in the gradle.properties to CircleCI "Environment Variables", but prepend them with: ORG_GRADLE_PROJECT_
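In other words (my illustration; apiKey is a made-up property name), a gradle.properties entry such as apiKey=... becomes a CircleCI environment variable like this:
# CircleCI project environment variable (or exported in a run step)
export ORG_GRADLE_PROJECT_apiKey="secret-value"
# Gradle then exposes it to the build exactly as if gradle.properties contained:
#   apiKey=secret-value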
CircleCI
35,440,907
11
I've been spending a day for the CircleCI in Android Project and I keep getting java.lang.UnsupportedClassVersionError: com/android/build/gradle/AppPlugin : Unsupported major.minor version 52.0 when CircleCI runs gradle dependencies command. Here is an stacktrace that it shows: * Where: Build file '/home/ubuntu/MyProject/app/build.gradle' line: 1 * What went wrong: A problem occurred evaluating project ':app'. > java.lang.UnsupportedClassVersionError: com/android/build/gradle/AppPlugin : Unsupported major.minor version 52.0 * Try: Run with --info or --debug option to get more log output. * Exception is: org.gradle.api.GradleScriptException: A problem occurred evaluating project ':app'. at org.gradle.groovy.scripts.internal.DefaultScriptRunnerFactory$ScriptRunnerImpl.run(DefaultScriptRunnerFactory.java:93) at org.gradle.configuration.DefaultScriptPluginFactory$ScriptPluginImpl$1.run(DefaultScriptPluginFactory.java:144) at org.gradle.configuration.ProjectScriptTarget.addConfiguration(ProjectScriptTarget.java:72) at org.gradle.configuration.DefaultScriptPluginFactory$ScriptPluginImpl.apply(DefaultScriptPluginFactory.java:149) at org.gradle.configuration.project.BuildScriptProcessor.execute(BuildScriptProcessor.java:38) at org.gradle.configuration.project.BuildScriptProcessor.execute(BuildScriptProcessor.java:25) at org.gradle.configuration.project.ConfigureActionsProjectEvaluator.evaluate(ConfigureActionsProjectEvaluator.java:34) at org.gradle.configuration.project.LifecycleProjectEvaluator.evaluate(LifecycleProjectEvaluator.java:55) at org.gradle.api.internal.project.AbstractProject.evaluate(AbstractProject.java:510) at org.gradle.api.internal.project.AbstractProject.evaluate(AbstractProject.java:90) at org.gradle.execution.TaskPathProjectEvaluator.configureHierarchy(TaskPathProjectEvaluator.java:47) at org.gradle.configuration.DefaultBuildConfigurer.configure(DefaultBuildConfigurer.java:35) at org.gradle.initialization.DefaultGradleLauncher$2.run(DefaultGradleLauncher.java:125) at org.gradle.internal.Factories$1.create(Factories.java:22) at org.gradle.internal.progress.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:90) at org.gradle.internal.progress.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:52) at org.gradle.initialization.DefaultGradleLauncher.doBuildStages(DefaultGradleLauncher.java:122) at org.gradle.initialization.DefaultGradleLauncher.access$200(DefaultGradleLauncher.java:32) at org.gradle.initialization.DefaultGradleLauncher$1.create(DefaultGradleLauncher.java:99) at org.gradle.initialization.DefaultGradleLauncher$1.create(DefaultGradleLauncher.java:93) at org.gradle.internal.progress.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:90) at org.gradle.internal.progress.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:62) at org.gradle.initialization.DefaultGradleLauncher.doBuild(DefaultGradleLauncher.java:93) at org.gradle.initialization.DefaultGradleLauncher.run(DefaultGradleLauncher.java:82) at org.gradle.launcher.exec.InProcessBuildActionExecuter$DefaultBuildController.run(InProcessBuildActionExecuter.java:94) at org.gradle.tooling.internal.provider.ExecuteBuildActionRunner.run(ExecuteBuildActionRunner.java:28) at org.gradle.launcher.exec.ChainingBuildActionRunner.run(ChainingBuildActionRunner.java:35) at org.gradle.launcher.exec.InProcessBuildActionExecuter.execute(InProcessBuildActionExecuter.java:43) at 
org.gradle.launcher.exec.InProcessBuildActionExecuter.execute(InProcessBuildActionExecuter.java:28) at org.gradle.launcher.exec.ContinuousBuildActionExecuter.execute(ContinuousBuildActionExecuter.java:78) at org.gradle.launcher.exec.ContinuousBuildActionExecuter.execute(ContinuousBuildActionExecuter.java:48) at org.gradle.launcher.daemon.server.exec.ExecuteBuild.doBuild(ExecuteBuild.java:52) at org.gradle.launcher.daemon.server.exec.BuildCommandOnly.execute(BuildCommandOnly.java:36) at org.gradle.launcher.daemon.server.api.DaemonCommandExecution.proceed(DaemonCommandExecution.java:120) at org.gradle.launcher.daemon.server.exec.WatchForDisconnection.execute(WatchForDisconnection.java:37) at org.gradle.launcher.daemon.server.api.DaemonCommandExecution.proceed(DaemonCommandExecution.java:120) at org.gradle.launcher.daemon.server.exec.ResetDeprecationLogger.execute(ResetDeprecationLogger.java:26) at org.gradle.launcher.daemon.server.api.DaemonCommandExecution.proceed(DaemonCommandExecution.java:120) at org.gradle.launcher.daemon.server.exec.RequestStopIfSingleUsedDaemon.execute(RequestStopIfSingleUsedDaemon.java:34) at org.gradle.launcher.daemon.server.api.DaemonCommandExecution.proceed(DaemonCommandExecution.java:120) at org.gradle.launcher.daemon.server.exec.ForwardClientInput$2.call(ForwardClientInput.java:74) at org.gradle.launcher.daemon.server.exec.ForwardClientInput$2.call(ForwardClientInput.java:72) at org.gradle.util.Swapper.swap(Swapper.java:38) at org.gradle.launcher.daemon.server.exec.ForwardClientInput.execute(ForwardClientInput.java:72) at org.gradle.launcher.daemon.server.api.DaemonCommandExecution.proceed(DaemonCommandExecution.java:120) at org.gradle.launcher.daemon.server.health.DaemonHealthTracker.execute(DaemonHealthTracker.java:47) at org.gradle.launcher.daemon.server.api.DaemonCommandExecution.proceed(DaemonCommandExecution.java:120) at org.gradle.launcher.daemon.server.exec.LogToClient.doBuild(LogToClient.java:66) at org.gradle.launcher.daemon.server.exec.BuildCommandOnly.execute(BuildCommandOnly.java:36) at org.gradle.launcher.daemon.server.api.DaemonCommandExecution.proceed(DaemonCommandExecution.java:120) at org.gradle.launcher.daemon.server.exec.EstablishBuildEnvironment.doBuild(EstablishBuildEnvironment.java:72) at org.gradle.launcher.daemon.server.exec.BuildCommandOnly.execute(BuildCommandOnly.java:36) at org.gradle.launcher.daemon.server.api.DaemonCommandExecution.proceed(DaemonCommandExecution.java:120) at org.gradle.launcher.daemon.server.health.HintGCAfterBuild.execute(HintGCAfterBuild.java:41) at org.gradle.launcher.daemon.server.api.DaemonCommandExecution.proceed(DaemonCommandExecution.java:120) at org.gradle.launcher.daemon.server.exec.StartBuildOrRespondWithBusy$1.run(StartBuildOrRespondWithBusy.java:50) at org.gradle.launcher.daemon.server.DaemonStateCoordinator$1.run(DaemonStateCoordinator.java:246) at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:54) at org.gradle.internal.concurrent.StoppableExecutorImpl$1.run(StoppableExecutorImpl.java:40) Caused by: com.google.common.util.concurrent.ExecutionError: java.lang.UnsupportedClassVersionError: com/android/build/gradle/AppPlugin : Unsupported major.minor version 52.0 at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2199) at com.google.common.cache.LocalCache.get(LocalCache.java:3934) at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3938) at com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4821) at 
org.gradle.api.internal.plugins.DefaultPluginRegistry.uncheckedGet(DefaultPluginRegistry.java:149) at org.gradle.api.internal.plugins.DefaultPluginRegistry.lookup(DefaultPluginRegistry.java:144) at org.gradle.api.internal.plugins.DefaultPluginRegistry.lookup(DefaultPluginRegistry.java:127) at org.gradle.api.internal.plugins.DefaultPluginManager.apply(DefaultPluginManager.java:108) at org.gradle.api.internal.plugins.DefaultObjectConfigurationAction.applyType(DefaultObjectConfigurationAction.java:112) at org.gradle.api.internal.plugins.DefaultObjectConfigurationAction.access$200(DefaultObjectConfigurationAction.java:35) at org.gradle.api.internal.plugins.DefaultObjectConfigurationAction$3.run(DefaultObjectConfigurationAction.java:79) at org.gradle.api.internal.plugins.DefaultObjectConfigurationAction.execute(DefaultObjectConfigurationAction.java:135) at org.gradle.api.internal.project.AbstractPluginAware.apply(AbstractPluginAware.java:46) at org.gradle.api.plugins.PluginAware$apply.call(Unknown Source) at org.gradle.api.internal.project.ProjectScript.apply(ProjectScript.groovy:35) at org.gradle.api.Script$apply$0.callCurrent(Unknown Source) at build_3t8kcqhef15uw367iarbj60nz.run(/home/ubuntu/MyProject/app/build.gradle:1) at org.gradle.groovy.scripts.internal.DefaultScriptRunnerFactory$ScriptRunnerImpl.run(DefaultScriptRunnerFactory.java:91) ... 58 more Caused by: java.lang.UnsupportedClassVersionError: com/android/build/gradle/AppPlugin : Unsupported major.minor version 52.0 at org.gradle.api.internal.plugins.DefaultPluginRegistry$1.load(DefaultPluginRegistry.java:71) at org.gradle.api.internal.plugins.DefaultPluginRegistry$1.load(DefaultPluginRegistry.java:51) at com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3524) at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2317) at com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2280) at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2195) ... 75 more BUILD FAILED Total time: 33.396 secs Here is my configuration in .yml file: machine: java: version: openjdk7 environment: ANDROID_HOME: /usr/local/android-sdk-linux dependencies: pre: - echo y | android update sdk --no-ui --all --filter "tools" - echo y | android update sdk --no-ui --all --filter "platform-tools" - echo y | android update sdk --no-ui --all --filter "build-tools" - echo y | android update sdk --no-ui --all --filter "android-24" - echo y | android update sdk --no-ui --all --filter "extra-google-m2repository" - echo y | android update sdk --no-ui --all --filter "extra-google-google_play_services" - echo y | android update sdk --no-ui --all --filter "extra-android-support" - echo y | android update sdk --no-ui --all --filter "extra-android-m2repository" - (./gradlew -version): timeout: 360 #override: #- ANDROID_HOME=/usr/local/android-sdk-linux ./gradlew dependencies checkout: post: - git submodule init - git submodule update test: override: - (./gradlew assemble -PdisablePreDex): timeout: 360 - cp -r ${HOME}/${CIRCLE_PROJECT_REPONAME}/app/build/outputs/apk/ $CIRCLE_ARTIFACTS - emulator -avd circleci-android22 -no-audio -no-window: background: true parallel: true # wait for it to have booted - circle-android wait-for-boot # run tests against the emulator. 
- ./gradlew connectedAndroidTest deployment: staging: branch: staging commands: - (./gradlew clean assembleStaging crashlyticsUploadDistributionStaging -PdisablePreDex): timeout: 720

I set java compileOptions in build.gradle to version 1.7 and enabled data binding:

android {
    ...
    compileOptions {
        sourceCompatibility JavaVersion.VERSION_1_7
        targetCompatibility JavaVersion.VERSION_1_7
    }
}

Has anyone faced this problem before? Please give me some advice. Thanks.
You get this error because a Java 7 VM tries to load a class compiled for Java 8. Java 8 classes have the class file version 52.0, but a Java 7 VM can only load class files up to version 51.0. In your case the Java 7 VM is your Gradle build, and the class is com.android.build.gradle.AppPlugin.

As for "Please give me some advices": try updating your configuration .yml to use a Java 8 VM:

machine:
  java:
    version: openjdk8 # This line is what you need.
  environment:
    ANDROID_HOME: /usr/local/android-sdk-linux
CircleCI
38,209,522
11
When I run my docker-compose, it creates a web container and postgres container. I want to manually trigger my Django tests to run, via something like docker-compose run web python manage.py test the problem with this is it creates a new container (requiring new migrations to be applied, housekeeping work, etc.) The option I'm leaning towards it doing something like docker exec -i -t <containerid> python manage.py test This introduces a new issue which is that I must run docker ps first to grab the container name. The whole point of this is to automatically run the tests for each build so it has to be automated, manually running docker ps is not a solution. So is there a way to dynamically grab the container id or is there a better way to do this? This would not be an issue if you could assign container names in docker-compose
While an accepted answer was provided, the answer itself is not really related to the title of this question: Dynamically get a running container name created by docker-compose To dynamically retrieve the name of a container run by docker-compose you can execute the following command: $(docker inspect -f '{{.Name}}' $(docker-compose ps -q web) | cut -c2-)
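If the goal is to run the Django tests inside the already-running container without looking up the name by hand, the lookup can be combined with docker exec. This is a rough sketch that assumes the docker-compose service is called web, as in the question:

# Resolve the container behind the "web" service and run the tests in it
CONTAINER_NAME=$(docker inspect -f '{{.Name}}' $(docker-compose ps -q web) | cut -c2-)
docker exec -i -t "$CONTAINER_NAME" python manage.py test

# docker exec also accepts the raw container id, so the shorter form works as well:
# docker exec -i -t $(docker-compose ps -q web) python manage.py test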
CircleCI
41,623,477
11
I'm deploying to CircleCI and but my code is timing out. The command in particular that CircleCI is calling that's causing the time-out is during the checkout stage: git reset --hard SHA Where SHA is the hash of the build, but upon ssh'ing in I noted that HEAD and others that I tried also run forever. At that point the code has been checked out with: git clone --quiet [email protected]:Organization/Repo.git . --config core.compression=9 --depth 10 --no-single-branch Why would git reset --hard run (seemingly) forever on the CircleCI environment, and what fixes are reasonably available? More details (we've got some git-lfs files here, too): lsb-release Ubuntu 14.04.4 LTS git version 2.11.0 git-lfs/1.5.4 (GitHub; linux amd64; go 1.7.4) EDIT This appears related to: github.com/git-lfs/git-lfs/pull/1932 (per @torek's comment) https://discuss.circleci.com/t/is-there-any-way-to-disable-git-lfs-in-ubuntu-14-04-trusty-image/10208/12 I would also note that adding GIT_LFS_SKIP_SMUDGE=1 (in the CircleCI Project config) has not had any useful effect. It does help to go back to Ubuntu 12, but obviously that's less than ideal. EDIT Here's a recent bug report I filed: https://discuss.circleci.com/t/cannot-pull-with-git-lfs/14346 (Just to ensure visibility, because my other reports were buried in comments)
The issue was a typo, which meant CircleCI was running version 1.0 when it should have been using 2.0. In particular, I had created a .circleci/config.yaml with the appropriate config... however, it should have been called .circleci/config.yml.
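For anyone checking whether their file is being picked up at all: a minimal CircleCI 2.0 config lives at .circleci/config.yml (not .yaml) and starts with an explicit version key. This is only an illustrative skeleton, not the original poster's configuration:

version: 2
jobs:
  build:
    docker:
      - image: circleci/node:8.4.0
    steps:
      - checkout
      - run: echo "2.0 config was picked up"

If CircleCI still reports running 1.0, it is falling back to the legacy circle.yml behaviour because it cannot find a valid .circleci/config.yml.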
CircleCI
44,986,734
11
I have a very simple config.yml: version: 2 jobs: build: working_directory: ~/app docker: - image: circleci/node:8.4.0 steps: - checkout - run: node -e "console.log('Hello from NodeJS ' + process.version + '\!')" - run: yarn - setup_remote_docker - run: docker build . All it does: boot a node image, test if node is running, do a yarn install and a docker build. My dockerfile is nothing special; it has a COPY and ENTRYPOINT. When I run circleci build on my MacBook Air using Docker Native, I get the following error: Got permission denied while trying to connect to the Docker daemon socket at unix://[...] If I change the docker build . command to: sudo docker build ., everything works as planned, locally, with circleci build. However, pushing this change to CircleCI will result in an error: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running? So, to summarize: using sudo works, locally, but not on CircleCI itself. Not using sudo works on CircleCI, but not locally. Is this something the CircleCI staff has to fix, or is there something I can do? For reference, I have posted this question on the CircleCI forums as well.
I've created a workaround for myself. In the very first step of the config.yml, I run this command:

if [[ $CIRCLE_SHELL_ENV == *"localbuild"* ]]; then
  echo "This is a local build. Enabling sudo for docker"
  echo sudo > ~/sudo
else
  echo "This is not a local build. Disabling sudo for docker"
  touch ~/sudo
fi

Afterwards, you can do this:

eval `cat ~/sudo` docker build .

Explanation: The first snippet checks whether the CircleCI-provided environment variable CIRCLE_SHELL_ENV contains localbuild. This is only true when running circleci build on your local machine. If true, it creates a file called sudo in the home directory with the contents "sudo". If false, it creates a file called sudo in the home directory with no contents. The second snippet reads the ~/sudo file and prepends its contents to the command you give afterwards. If the ~/sudo file contains "sudo", the command in this example becomes sudo docker build .; if it is empty, it becomes docker build . (with a leading space, which is ignored).
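For reference, here is roughly how the two snippets could sit in a 2.0 config as run steps; the step names and layout are illustrative, not part of the original answer:

      - run:
          name: Decide whether docker needs sudo (local circleci build only)
          command: |
            if [[ $CIRCLE_SHELL_ENV == *"localbuild"* ]]; then
              echo "This is a local build. Enabling sudo for docker"
              echo sudo > ~/sudo
            else
              echo "This is not a local build. Disabling sudo for docker"
              touch ~/sudo
            fi
      - setup_remote_docker
      - run:
          name: Build image
          command: eval `cat ~/sudo` docker build .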
CircleCI
45,796,661
11
I am in the process of a setting up a CircleCI 2.0 configuration and I am needing to include the ubuntu package 'pdf2htmlex', but I am being given the following error: apt-get update && apt-get install -y pdf2htmlex E: Could not open lock file /var/lib/apt/lists/lock - open (13: Permission denied) E: Unable to lock directory /var/lib/apt/lists/ E: Could not open lock file /var/lib/dpkg/lock - open (13: Permission denied) E: Unable to lock the administration directory (/var/lib/dpkg/), are you root? Exited with code 100 Here is the relevant portions of the .circle/config.yml: version: 2 jobs: build: docker: # specify the version you desire here - image: circleci/node:7.10 - image: circleci/postgres:9.6.2 # Specify service dependencies here if necessary # CircleCI maintains a library of pre-built images # documented at https://circleci.com/docs/2.0/circleci-images/ # - image: circleci/mongo:3.4.4 working_directory: ~/repo steps: - checkout - run: name: Install System Dependencies command: apt-get update && apt-get install -y pdf2htmlex - run: name: save SHA to a file command: echo $CIRCLE_SHA1 > .circle-sha - restore_cache: key: dependency-cache-{{ checksum "package.json" }} - run: name: Install dependencies via npm command: npm install - run: name: Run tests command: npm run test - run: scripts/build.sh - save_cache: key: dependency-cache-{{ checksum "package.json" }} paths: - node_modules - save_cache: key: v1-repo-{{ checksum ".circle-sha" }} paths: - ~/repo Can anyone suggest a way around this, since this is causing some of our integration tests to fail?
You should be able to add sudo to theapt-get install line: version: 2 jobs: build: docker: # specify the version you desire here - image: circleci/node:7.10 - image: circleci/postgres:9.6.2 # Specify service dependencies here if necessary # CircleCI maintains a library of pre-built images # documented at https://circleci.com/docs/2.0/circleci-images/ # - image: circleci/mongo:3.4.4 working_directory: ~/repo steps: - checkout - run: name: Install System Dependencies command: sudo apt-get update && sudo apt-get install -y pdf2htmlex - run: name: save SHA to a file command: echo $CIRCLE_SHA1 > .circle-sha - restore_cache: key: dependency-cache-{{ checksum "package.json" }} - run: name: Install dependencies via npm command: npm install - run: name: Run tests command: npm run test - run: scripts/build.sh - save_cache: key: dependency-cache-{{ checksum "package.json" }} paths: - node_modules - save_cache: key: v1-repo-{{ checksum ".circle-sha" }} paths: - ~/repo
CircleCI
46,781,452
11
I tried continuous integration tools Travis CI, CircleCI and codeship, but found none of them provide support document for phabricator. Does anyone have ideas about how to do continuous integration (CI) with Phabricator?
I have done an integration with Travis-CI by adding post-diff and land hooks to Phabricator that push diffs as branches to GitHub, where Travis looks for branch updates. As far as I know, Travis-CI integrates only with GitHub, so if your main repo is there and Phabricator is pointing to it, it can be done. If you want to take this approach, the place to start is creating your own ArcanistConfiguration and overriding didRunWorkflow. See also how to create a new library. The API documentation is pretty good, but I had to go through some trial and error to get what I wanted. The Phabricator people are probably happy to answer questions.

You can also look into the Phabricator conduit method differential.createcomment to script messages to diffs, like so:

arc call-conduit --conduit="https://my.phabricator.com/" --arcrc-file="robot.arcrc" \
  differential.createcomment <<EOF
{"revision_id":"1234","message":"Yer build done failed"}
EOF

Here robot.arcrc is an arcrc file with the credentials to push messages, and 1234 is the revision number. You would have to use the conduit API to get the revision number. So, I think the answer is that you may have to build your own custom solution depending on which CI tool you want to integrate with. And here's a discussion of Travis support for Phabricator.

Edit: Here's traphic, an example of extending arcanist to push diffs to branches on GitHub on arc diff and remove them on arc land. As Travis-CI looks for updates from GitHub, it will build your diffs.

Side note: This is mostly a brain dump. I know good answers have more code examples and link-only pointers are frowned on, but the question was pretty open-ended and was looking for pointers, so I'm trying to be helpful.
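As a rough illustration of the reporting side, a CI job could wrap that same conduit call in a small script and run it after the build finishes. The host, the credentials file, and the way the revision id is obtained are all placeholders you would need to fill in for your own setup:

#!/bin/bash
# report_build.sh <revision_id> <pass|fail>
# Posts a comment on the given Differential revision with the build result.
REVISION_ID="$1"
STATUS="$2"

if [ "$STATUS" = "pass" ]; then
  MESSAGE="Build passed."
else
  MESSAGE="Build failed."
fi

arc call-conduit --conduit="https://my.phabricator.com/" --arcrc-file="robot.arcrc" \
  differential.createcomment <<EOF
{"revision_id":"${REVISION_ID}","message":"${MESSAGE}"}
EOF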
CircleCI
27,517,657
10
I am now using the CircleCI for my project. Also I am implementing the new constraintLayout in my project. Now I am stuck with the CircleCI building. It shows me this when gradle -dependencies run: File /home/ubuntu/.android/repositories.cfg could not be loaded. FAILURE: Build failed with an exception. * What went wrong: A problem occurred configuring project ':app'. > You have not accepted the license agreements of the following SDK components: [com.android.support.constraint:constraint-layout:1.0.0-alpha3, com.android.support.constraint:constraint-layout-solver:1.0.0-alpha3]. Before building your project, you need to accept the license agreements and complete the installation of the missing components using the Android Studio SDK Manager. Alternatively, to learn how to transfer the license agreements from one workstation to another, go to http://d.android.com/r/studio-ui/export-licenses.html Here is my configuration in .yml file: #Install android build tools, platforms #Supported versions here https://circleci.com/docs/android machine: java: version: openjdk8 environment: ANDROID_HOME: /usr/local/android-sdk-linux dependencies: pre: - echo y | android list sdk - echo y | android update sdk --no-ui --all --filter "tools" - echo y | android update sdk --no-ui --all --filter "platform-tools" - echo y | android update sdk --no-ui --all --filter "build-tools-24.0.0" - echo y | android update sdk --no-ui --all --filter "android-24" - echo y | android update sdk --no-ui --all --filter "extra-google-m2repository" - echo y | android update sdk --no-ui --all --filter "extra-google-google_play_services" - echo y | android update sdk --no-ui --all --filter "extra-android-support" - echo y | android update sdk --no-ui --all --filter "extra-android-m2repository" - (./gradlew -version): timeout: 360 override: #- ANDROID_HOME=/usr/local/android-sdk-linux ./gradlew dependencies - export TERM="dumb"; if [ -e ./gradlew ]; then ./gradlew clean dependencies -stacktrace;else gradle clean dependencies -stacktrace;fi #Pull any submodules checkout: post: - git submodule init - git submodule update #-PdisablePreDex is a must else gradle just dies due to memory limit #Replace test: override: - (./gradlew assemble -PdisablePreDex): timeout: 360 - cp -r ${HOME}/${CIRCLE_PROJECT_REPONAME}/app/build/outputs/apk/ $CIRCLE_ARTIFACTS - emulator -avd circleci-android22 -no-audio -no-window: background: true parallel: true # wait for it to have booted - circle-android wait-for-boot # run tests against the emulator. - ./gradlew connectedAndroidTest #Deploy when tests pass deployment: #production: # branch: master # commands: # - (./gradlew clean assembleRelease crashlyticsUploadDistributionRelease -PdisablePreFex): # timeout: 720 staging: branch: staging commands: - (./gradlew clean assembleStaging crashlyticsUploadDistributionStaging -PdisablePreFex): timeout: 720 I checked in the build log when echo y | android update sdk --no-ui --all --filter "extra-android-m2repository" command run and here is the result: November 20, 2015 Do you accept the license 'android-sdk-license-c81a61d9' [y/n]: Installing Archives: Preparing to install archives Downloading Android Support Repository, revision 33 Installing Android Support Repository, revision 33 Installed Android Support Repository, revision 33 Done. 1 package installed. And my classpath is: classpath 'com.android.tools.build:gradle:2.2.0-alpha4' I am not sure what I've done incorrectly or is there anything I need to add more. Please suggest. Thanks.
Alex Fu's answer explains nicely where the problem lies and how to deal with it, but there is a simpler solution. Since the license files are really just simple files with a bunch of hex characters in them, you can create them without any copying. An example would be putting the following code into the pre: section:

- ANDROID_HOME=/usr/local/android-sdk-linux
- mkdir "$ANDROID_HOME/licenses" || true
- echo "8933bad161af4178b1185d1a37fbf41ea5269c55" > "$ANDROID_HOME/licenses/android-sdk-license"
- echo "84831b9409646a918e30573bab4c9c91346d8abd" > "$ANDROID_HOME/licenses/android-sdk-preview-license"
- echo "d975f751698a77b662f1254ddbeed3901e976f5a" > "$ANDROID_HOME/licenses/intel-android-extra-license"
CircleCI
38,210,675
10
I'm trying to deploy to Elastic Beanstalk, specifically using CircleCI, and I ran into this error: ERROR: UndefinedModelAttributeError - "serviceId" not defined in the metadata of the model: <botocore.model.ServiceModel object at 0x7fdc908efc10> From my Google search, I see that it's a Python error which makes sense because that's what Elastic Beanstalk uses. But there is no information out there for this specific case. Does anyone know why this happens?
Update: EBCLI 3.14.6 is compatible with the current latest AWS CLI (> 1.16.10). The advice below originally said 3.14.5; upgrade to 3.14.6 instead.

To solve this issue, upgrade awsebcli to 3.14.6:

pip install awsebcli --upgrade

OR, if you must continue using awsebcli < 3.14.5, perform:

pip install 'botocore<1.12'

The core of the problem is the open dependency range on botocore that awsebcli < 3.14.5 allowed, so that users could always have access to the latest AWS CLI commands/AWS APIs (botocore manages AWS service models). When botocore released version 1.12, it created an incompatibility in the EBCLI. EBCLI 3.14.5 restricts the dependency on botocore to < 1.12.

EDIT: As an aside, note that EBCLI 3.14.5 is incompatible with AWS CLI 1.16.10. Instead, use AWS CLI 1.16.9.
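In a CI pipeline it can help to pin these tools explicitly rather than installing whatever is latest. The version bounds below simply restate the constraints from the answer, and the file name and step layout are illustrative:

# requirements-ci.txt
awsebcli>=3.14.6
awscli>=1.16.11

# or, if stuck on an older EB CLI:
# awsebcli<3.14.5
# botocore<1.12

Install with pip install -r requirements-ci.txt as part of the deploy job so every build resolves the same, known-compatible combination.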
CircleCI
52,237,638
10
I've read this answer, reducing boilerplate, looked at few GitHub examples and even tried redux a little bit (todo apps). As I understand, official redux doc motivations provide pros comparing to traditional MVC architectures. BUT it doesn't provide an answer to the question: Why you should use Redux over Facebook Flux? Is that only a question of programming styles: functional vs non-functional? Or the question is in abilities/dev-tools that follow from redux approach? Maybe scaling? Or testing? Am I right if I say that redux is a flux for people who come from functional languages? To answer this question you may compare the complexity of implementation redux's motivation points on flux vs redux. Here are motivation points from official redux doc motivations: Handling optimistic updates (as I understand, it hardly depends on 5th point. Is it hard to implement it in facebook flux?) Rendering on the server (facebook flux also can do this. Any benefits comparing to redux?) Fetching data before performing route transitions (Why it can't be achieved in facebook flux? What's the benefits?) Hot reload (It's possible with React Hot Reload. Why do we need redux?) Undo/Redo functionality Any other points? Like persisting state...
Redux author here! Redux is not that different from Flux. Overall it has same architecture, but Redux is able to cut some complexity corners by using functional composition where Flux uses callback registration. There is not a fundamental difference in Redux, but I find it makes certain abstractions easier, or at least possible to implement, that would be hard or impossible to implement in Flux. Reducer Composition Take, for example, pagination. My Flux + React Router example handles pagination, but the code for that is awful. One of the reasons it's awful is that Flux makes it unnatural to reuse functionality across stores. If two stores need to handle pagination in response to different actions, they either need to inherit from a common base store (bad! you're locking yourself into a particular design when you use inheritance), or call an externally defined function from within the event handler, which will need to somehow operate on the Flux store's private state. The whole thing is messy (although definitely in the realm of possible). On the other hand, with Redux pagination is natural thanks to reducer composition. It's reducers all the way down, so you can write a reducer factory that generates pagination reducers and then use it in your reducer tree. The key to why it's so easy is because in Flux, stores are flat, but in Redux, reducers can be nested via functional composition, just like React components can be nested. This pattern also enables wonderful features like no-user-code undo/redo. Can you imagine plugging Undo/Redo into a Flux app being two lines of code? Hardly. With Redux, it is—again, thanks to reducer composition pattern. I need to highlight there's nothing new about it—this is the pattern pioneered and described in detail in Elm Architecture which was itself influenced by Flux. Server Rendering People have been rendering on the server fine with Flux, but seeing that we have 20 Flux libraries each attempting to make server rendering “easier”, perhaps Flux has some rough edges on the server. The truth is Facebook doesn't do much server rendering, so they haven't been very concerned about it, and rely on the ecosystem to make it easier. In traditional Flux, stores are singletons. This means it's hard to separate the data for different requests on the server. Not impossible, but hard. This is why most Flux libraries (as well as the new Flux Utils) now suggest you use classes instead of singletons, so you can instantiate stores per request. There are still the following problems that you need to solve in Flux (either yourself or with the help of your favorite Flux library such as Flummox or Alt): If stores are classes, how do I create and destroy them with dispatcher per request? When do I register stores? How do I hydrate the data from the stores and later rehydrate it on the client? Do I need to implement special methods for this? Admittedly Flux frameworks (not vanilla Flux) have solutions to these problems, but I find them overcomplicated. For example, Flummox asks you to implement serialize() and deserialize() in your stores. Alt solves this nicer by providing takeSnapshot() that automatically serializes your state in a JSON tree. Redux just goes further: since there is just a single store (managed by many reducers), you don't need any special API to manage the (re)hydration. You don't need to “flush” or “hydrate” stores—there's just a single store, and you can read its current state, or create a new store with a new state. 
Each request gets a separate store instance. Read more about server rendering with Redux. Again, this is a case of something possible both in Flux and Redux, but Flux libraries solve this problem by introducing a ton of API and conventions, and Redux doesn't even have to solve it because it doesn't have that problem in the first place thanks to conceptual simplicity. Developer Experience I didn't actually intend Redux to become a popular Flux library—I wrote it as I was working on my ReactEurope talk on hot reloading with time travel. I had one main objective: make it possible to change reducer code on the fly or even “change the past” by crossing out actions, and see the state being recalculated. I haven't seen a single Flux library that is able to do this. React Hot Loader also doesn't let you do this—in fact it breaks if you edit Flux stores because it doesn't know what to do with them. When Redux needs to reload the reducer code, it calls replaceReducer(), and the app runs with the new code. In Flux, data and functions are entangled in Flux stores, so you can't “just replace the functions”. Moreover, you'd have to somehow re-register the new versions with the Dispatcher—something Redux doesn't even have. Ecosystem Redux has a rich and fast-growing ecosystem. This is because it provides a few extension points such as middleware. It was designed with use cases such as logging, support for Promises, Observables, routing, immutability dev checks, persistence, etc, in mind. Not all of these will turn out to be useful, but it's nice to have access to a set of tools that can be easily combined to work together. Simplicity Redux preserves all the benefits of Flux (recording and replaying of actions, unidirectional data flow, dependent mutations) and adds new benefits (easy undo-redo, hot reloading) without introducing Dispatcher and store registration. Keeping it simple is important because it keeps you sane while you implement higher-level abstractions. Unlike most Flux libraries, Redux API surface is tiny. If you remove the developer warnings, comments, and sanity checks, it's 99 lines. There is no tricky async code to debug. You can actually read it and understand all of Redux. See also my answer on downsides of using Redux compared to Flux.
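Going back to the reducer composition section above, here is a minimal sketch of what a pagination reducer factory could look like; the action types and state shape are made up for illustration and are not taken from the real-world example linked earlier:

import { combineReducers } from 'redux';

// createPaginator(prefix) returns a reducer that only reacts to actions
// whose type starts with the given prefix, so several independent
// paginated lists can reuse the same logic.
function createPaginator(prefix) {
  const initialState = { ids: [], page: 0, isFetching: false };

  return function paginator(state = initialState, action) {
    switch (action.type) {
      case prefix + 'REQUEST_PAGE':
        return { ...state, isFetching: true };
      case prefix + 'RECEIVE_PAGE':
        return {
          ...state,
          isFetching: false,
          page: action.page,
          ids: [...state.ids, ...action.ids]
        };
      default:
        return state;
    }
  };
}

const rootReducer = combineReducers({
  starredByUser: createPaginator('STARRED_'),
  stargazersByRepo: createPaginator('STARGAZERS_')
});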
Flux
32,461,229
1,124
I just recently discovered Redux. It all looks good. Are there any downsides, gotcha or compromises of using Redux over Flux? Thanks
Redux author here! I'd like to say you're going to make the following compromises using it: You'll need to learn to avoid mutations. Flux is unopinionated about mutating data, but Redux doesn't like mutations and many packages complementary to Redux assume you never mutate the state. You can enforce this with dev-only packages like redux-immutable-state-invariant, use Immutable.js, or trust yourself and your team to write non-mutative code, but it's something you need to be aware of, and this needs to be a conscious decision accepted by your team. You're going to have to carefully pick your packages. While Flux explicitly doesn't try to solve “nearby” problems such as undo/redo, persistence, or forms, Redux has extension points such as middleware and store enhancers, and it has spawned a young but rich ecosystem. This means most packages are new ideas and haven't received the critical mass of usage yet. You might depend on something that will be clearly a bad idea a few months later on, but it's hard to tell just yet. You won't have a nice Flow integration yet. Flux currently lets you do very impressive static type checks which Redux doesn't support yet. We'll get there, but it will take some time. I think the first is the biggest hurdle for the beginners, the second can be a problem for over-enthusiastic early adopters, and the third is my personal pet peeve. Other than that, I don't think using Redux brings any particular downsides that Flux avoids, and some people say it even has some upsides compared to Flux. See also my answer on upsides of using Redux.
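To illustrate the first trade-off, this is the kind of non-mutating update Redux expects, next to the mutation it replaces. A minimal sketch, not taken from any particular codebase:

// Mutating (common in classic Flux stores, a bug waiting to happen in Redux):
// state.todos.push(newTodo);

// Non-mutating reducer: always return new objects/arrays for the changed parts
function todos(state = [], action) {
  switch (action.type) {
    case 'ADD_TODO':
      return [...state, { text: action.text, completed: false }];
    case 'TOGGLE_TODO':
      return state.map((todo, i) =>
        i === action.index ? { ...todo, completed: !todo.completed } : todo
      );
    default:
      return state;
  }
}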
Flux
32,021,763
249
I'm going migrate to Redux. My application consists of a lot of parts (pages, components) so I want to create many reducers. Redux examples show that I should use combineReducers() to generate one reducer. Also as I understand Redux application should have one store and it is created once the application starts. When the store is being created I should pass my combined reducer. This makes sense if the application is not too big. But what if I build more than one JavaScript bundle? For example, each page of application has own bundle. I think in this case the one combined reducer is not good. I looked through the sources of Redux and I have found replaceReducer() function. It seems to be what I want. I could create combined reducer for each part my application and use replaceReducer() when I move between parts of application. Is this a good approach?
Update: see also how Twitter does it. This is not a full answer but should help you get started. Note that I'm not throwing away old reducers—I'm just adding new ones to the combination list. I see no reason to throw away the old reducers—even in the largest app you're unlikely to have thousands of dynamic modules, which is the point where you might want to disconnect some reducers in your application. reducers.js import { combineReducers } from 'redux'; import users from './reducers/users'; import posts from './reducers/posts'; export default function createReducer(asyncReducers) { return combineReducers({ users, posts, ...asyncReducers }); } store.js import { createStore } from 'redux'; import createReducer from './reducers'; export default function configureStore(initialState) { const store = createStore(createReducer(), initialState); store.asyncReducers = {}; return store; } export function injectAsyncReducer(store, name, asyncReducer) { store.asyncReducers[name] = asyncReducer; store.replaceReducer(createReducer(store.asyncReducers)); } routes.js import { injectAsyncReducer } from './store'; // Assuming React Router here but the principle is the same // regardless of the library: make sure store is available // when you want to require.ensure() your reducer so you can call // injectAsyncReducer(store, name, reducer). function createRoutes(store) { // ... const CommentsRoute = { // ... getComponents(location, callback) { require.ensure([ './pages/Comments', './reducers/comments' ], function (require) { const Comments = require('./pages/Comments').default; const commentsReducer = require('./reducers/comments').default; injectAsyncReducer(store, 'comments', commentsReducer); callback(null, Comments); }) } }; // ... } There may be neater way of expressing this—I'm just showing the idea.
Flux
32,968,016
210
The first principle of Redux documentation is: The state of your whole application is stored in an object tree within a single store. And I actually thought that I understand all of the principles well. But I'm now confused, what does application mean. If application means just one of little complicated part in a website and works in just one page, I understand. But what if application means the whole website? Should I use LocalStorage or cookie or something for keeping the state tree? But what if the browser doesn't support LocalStorage? I want to know how developers keep their state tree! :)
If you would like to persist your redux state across a browser refresh, it's best to do this using redux middleware. Check out the redux-persist and redux-storage middleware. They both try to accomplish the same task of storing your redux state so that it may be saved and loaded at will. -- Edit It's been some time since I've revisited this question, but seeing that the other (albeit more upvoted answer) encourages rolling your own solution, I figured I'd answer this again. As of this edit, both libraries have been updated within the last six months. My team has been using redux-persist in production for a few years now and have had no issues. While it might seem like a simple problem, you'll quickly find that rolling your own solution will not only cause a maintenance burden, but result in bugs and performance issues. The first examples that come to mind are: JSON.stringify and JSON.parse can not only hurt performance when not needed but throw errors that when unhandled in a critical piece of code like your redux store can crash your application. (Partially mentioned in the answer below): Figuring out when and how to save and restore your app state is not a simple problem. Do it too often and you'll hurt performance. Not enough, or if the wrong parts of state are persisted, you may find yourself with more bugs. The libraries mentioned above are battle-tested in their approach and provide some pretty fool-proof ways of customizing their behavior. Part of the beauty of redux (especially in the React ecosystem) is its ability to be placed in multiple environments. As of this edit, redux-persist has 15 different storage implementations, including the awesome localForage library for web, as well as support for React Native, Electron, and Node. To sum it up, for 3kB minified + gzipped (at the time of this edit) this is not a problem I would ask my team to solve itself.
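For completeness, a minimal redux-persist setup looks roughly like this (API shape as of redux-persist v5/v6; check the README for the version you install, and treat rootReducer as a placeholder for your own reducer):

import { createStore } from 'redux';
import { persistStore, persistReducer } from 'redux-persist';
import storage from 'redux-persist/lib/storage'; // defaults to localStorage on the web

import rootReducer from './reducers';

const persistConfig = {
  key: 'root',
  storage
};

const persistedReducer = persistReducer(persistConfig, rootReducer);

export const store = createStore(persistedReducer);
export const persistor = persistStore(store); // rehydrates persisted state on startup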
Flux
37,195,590
144
I use axios for ajax requests and reactJS + flux for render UI. In my app there is third side timeline (reactJS component). Timeline can be managed by mouse's scroll. App sends ajax request for the actual data after any scroll event. Problem that processing of request at server can be more slow than next scroll event. In this case app can have several (2-3 usually) requests that already is deprecated because user scrolls further. it is a problem because every time at receiving of new data timeline begins redraw. (Because it's reactJS + flux) Because of this, the user sees the movement of the timeline back and forth several times. The easiest way to solve this problem, it just abort previous ajax request as in jQuery. For example: $(document).ready( var xhr; var fn = function(){ if(xhr && xhr.readyState != 4){ xhr.abort(); } xhr = $.ajax({ url: 'ajax/progress.ftl', success: function(data) { //do something } }); }; var interval = setInterval(fn, 500); ); How to cancel/abort requests in axios?
Axios does not support canceling requests at the moment. Please see this issue for details.

UPDATE: Cancellation support was added in axios v0.15.

EDIT: The axios cancel token API is based on the withdrawn cancelable promises proposal.

UPDATE 2022: Starting from v0.22.0 Axios supports AbortController to cancel requests in fetch API way:

Example:

const controller = new AbortController();

axios.get('/foo/bar', {
  signal: controller.signal
}).then(function(response) {
  //...
});

// cancel the request
controller.abort()
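For axios versions between v0.15 and v0.22 (before AbortController support), the documented CancelToken API covers the same use case as the jQuery xhr.abort() pattern in the question. A rough sketch of cancelling the stale request before issuing a new one; the URL is just the one from the question:

const CancelToken = axios.CancelToken;
let source = null;

function fetchTimeline() {
  // Abort the previous, now stale request before issuing a new one
  if (source) {
    source.cancel('Superseded by a newer request.');
  }
  source = CancelToken.source();

  axios.get('ajax/progress.ftl', { cancelToken: source.token })
    .then(function (response) {
      // do something with response.data
    })
    .catch(function (thrown) {
      if (axios.isCancel(thrown)) {
        console.log('Request canceled:', thrown.message);
      } else {
        // handle a real error
      }
    });
}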
Flux
38,329,209
136
I am new on react.js I have implemented one component in which I am fetching the data from server and use it like, CallEnterprise:function(TenantId){ fetchData('http://xxx.xxx.xx.xx:8090/Enterprises?TenantId='+TenantId+' &format=json').then(function(enterprises) { EnterprisePerspectiveActions.getEnterprise(enterprises); }).catch(function() { alert("There was some issue in API Call please contact Admin"); //ComponentAppDispatcher.handleViewAction({ // actionType: MetaItemConstants.RECEIVE_ERROR, // error: 'There was a problem getting the enterprises' //}); }); }, I want to store Url in configuration file so when I deployed this on Testing server or on Production I have to just change the url on config file not in js file but I don't know how to use configuration file in react.js Can anyone please guide me how can I achieve this ?
With webpack you can put env-specific config into the externals field in webpack.config.js externals: { 'Config': JSON.stringify(process.env.NODE_ENV === 'production' ? { serverUrl: "https://myserver.com" } : { serverUrl: "http://localhost:8090" }) } If you want to store the configs in a separate JSON file, that's possible too, you can require that file and assign to Config: externals: { 'Config': JSON.stringify(process.env.NODE_ENV === 'production' ? require('./config.prod.json') : require('./config.dev.json')) } Then in your modules, you can use the config: var Config = require('Config') fetchData(Config.serverUrl + '/Enterprises/...') For React: import Config from 'Config'; axios.get(this.app_url, { 'headers': Config.headers }).then(...); Not sure if it covers your use case but it's been working pretty well for us.
Flux
30,568,796
132
Should you ever use this.setState() when using redux? Or should you always be dispatching actions and relying on props?
Clear uses of setState would be for UI components that have local display state, but aren't relevant for the global application. For example a boolean that represents whether a specific dropdown menu is actively displayed doesn't need to be in global state, so it's more conveniently controlled by the menu component's state. Other examples might include the collapse/expand state of lines in an accordion display of a hierarchy. Or possibly the currently selected tab in tab navigation. However in both of these examples you might still choose to handle UI state globally. For example this would be necessary if you wanted to persist the expand/collapse state in browser storage so that it would be preserved by page refresh. In practice it's usually easiest to implement such UI elements with local state, and refactor them into global state as needed.
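A minimal sketch of that dropdown case, with the open/closed flag kept as component state and only the meaningful data coming from Redux via props (the component and prop names are made up):

import React from 'react';

class DropdownMenu extends React.Component {
  constructor(props) {
    super(props);
    this.state = { isOpen: false }; // purely presentational, not in the Redux store
    this.toggle = this.toggle.bind(this);
  }

  toggle() {
    this.setState(prev => ({ isOpen: !prev.isOpen }));
  }

  render() {
    const { items, onSelect } = this.props; // application data still flows through Redux
    return (
      <div>
        <button onClick={this.toggle}>Menu</button>
        {this.state.isOpen && (
          <ul>
            {items.map(item => (
              <li key={item.id} onClick={() => onSelect(item.id)}>{item.label}</li>
            ))}
          </ul>
        )}
      </div>
    );
  }
}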
Flux
34,711,477
118
Here is the code in actions.js export function exportRecordToExcel(record) { return ({fetch}) => ({ type: EXPORT_RECORD_TO_EXCEL, payload: { promise: fetch('/records/export', { credentials: 'same-origin', method: 'post', headers: {'Content-Type': 'application/json'}, body: JSON.stringify(data) }).then(function(response) { return response; }) } }); } The returned response is an .xlsx file. I want the user to be able to save it as a file, but nothing happens. I assume the server is returning the right type of response because in the console it says Content-Disposition:attachment; filename="report.xlsx" What am I missing? What should I do in the reducer?
Browser technology currently doesn't support downloading a file directly from an Ajax request. The work around is to add a hidden form and submit it behind the scenes to get the browser to trigger the Save dialog. I'm running a standard Flux implementation so I'm not sure what the exact Redux (Reducer) code should be, but the workflow I just created for a file download goes like this... I have a React component called FileDownload. All this component does is render a hidden form and then, inside componentDidMount, immediately submit the form and call it's onDownloadComplete prop. I have another React component, we'll call it Widget, with a download button/icon (many actually... one for each item in a table). Widget has corresponding action and store files. Widget imports FileDownload. Widget has two methods related to the download: handleDownload and handleDownloadComplete. Widget store has a property called downloadPath. It's set to null by default. When it's value is set to null, there is no file download in progress and the Widget component does not render the FileDownload component. Clicking the button/icon in Widget calls the handleDownload method which triggers a downloadFile action. The downloadFile action does NOT make an Ajax request. It dispatches a DOWNLOAD_FILE event to the store sending along with it the downloadPath for the file to download. The store saves the downloadPath and emits a change event. Since there is now a downloadPath, Widget will render FileDownload passing in the necessary props including downloadPath as well as the handleDownloadComplete method as the value for onDownloadComplete. When FileDownload is rendered and the form is submitted with method="GET" (POST should work too) and action={downloadPath}, the server response will now trigger the browser's Save dialog for the target download file (tested in IE 9/10, latest Firefox and Chrome). Immediately following the form submit, onDownloadComplete/handleDownloadComplete is called. This triggers another action that dispatches a DOWNLOAD_FILE event. However, this time downloadPath is set to null. The store saves the downloadPath as null and emits a change event. Since there is no longer a downloadPath the FileDownload component is not rendered in Widget and the world is a happy place. 
Widget.js - partial code only import FileDownload from './FileDownload'; export default class Widget extends Component { constructor(props) { super(props); this.state = widgetStore.getState().toJS(); } handleDownload(data) { widgetActions.downloadFile(data); } handleDownloadComplete() { widgetActions.downloadFile(); } render() { const downloadPath = this.state.downloadPath; return ( // button/icon with click bound to this.handleDownload goes here {downloadPath && <FileDownload actionPath={downloadPath} onDownloadComplete={this.handleDownloadComplete} /> } ); } widgetActions.js - partial code only export function downloadFile(data) { let downloadPath = null; if (data) { downloadPath = `${apiResource}/${data.fileName}`; } appDispatcher.dispatch({ actionType: actionTypes.DOWNLOAD_FILE, downloadPath }); } widgetStore.js - partial code only let store = Map({ downloadPath: null, isLoading: false, // other store properties }); class WidgetStore extends Store { constructor() { super(); this.dispatchToken = appDispatcher.register(action => { switch (action.actionType) { case actionTypes.DOWNLOAD_FILE: store = store.merge({ downloadPath: action.downloadPath, isLoading: !!action.downloadPath }); this.emitChange(); break; FileDownload.js - complete, fully functional code ready for copy and paste - React 0.14.7 with Babel 6.x ["es2015", "react", "stage-0"] - form needs to be display: none which is what the "hidden" className is for import React, {Component, PropTypes} from 'react'; import ReactDOM from 'react-dom'; function getFormInputs() { const {queryParams} = this.props; if (queryParams === undefined) { return null; } return Object.keys(queryParams).map((name, index) => { return ( <input key={index} name={name} type="hidden" value={queryParams[name]} /> ); }); } export default class FileDownload extends Component { static propTypes = { actionPath: PropTypes.string.isRequired, method: PropTypes.string, onDownloadComplete: PropTypes.func.isRequired, queryParams: PropTypes.object }; static defaultProps = { method: 'GET' }; componentDidMount() { ReactDOM.findDOMNode(this).submit(); this.props.onDownloadComplete(); } render() { const {actionPath, method} = this.props; return ( <form action={actionPath} className="hidden" method={method} > {getFormInputs.call(this)} </form> ); } }
Flux
35,206,589
76
According to docs state of react app has to be something serializable. What about classes then? Let's say I have a ToDo app. Each of Todo items has properties like name, date etc. so far so good. Now I want to have methods on objects which are non serializable. I.e. Todo.rename() which would rename todo and do a lot of other stuff. As far as I understand I can have function declared somewhere and do rename(Todo) or perhaps pass that function via props this.props.rename(Todo) to the component. I have 2 problems with declaring .rename() somewhere: 1) Where? In reducer? It would be hard to find all would be instance methods somewhere in the reducers around the app. 2) Passing this function around. Really? should I manually pass it from all the higher level components via And each time I have more methods add a ton of boilerplate to just pass it down? Or always do and hope that I only ever have one rename method for one type of object. Not Todo.rename() Task.rename() and Event.rename() That seems silly to me. Object should know what can be done to it and in which way. Is not it? What I'm missing here?
In Redux, you don't really have custom models. Your state should be plain objects (or Immutable records). They are not expected to have any custom methods. Instead of putting methods onto the models (e.g. TodoItem.rename) you are expected to write reducers that handle actions. That's the whole point of Redux.

// Manages a single todo item
function todoItem(state, action) {
  switch (action.type) {
    case 'ADD':
      return { name: action.name, complete: false };
    case 'RENAME':
      return { ...state, name: action.name };
    case 'TOGGLE_COMPLETE':
      return { ...state, complete: !state.complete };
    default:
      return state;
  }
}

// Manages a list of todo items
function todoItems(state = [], action) {
  switch (action.type) {
    case 'ADD':
      return [...state, todoItem(undefined, action)];
    case 'REMOVE':
      return [
        ...state.slice(0, action.index),
        ...state.slice(action.index + 1)
      ];
    case 'RENAME':
    case 'TOGGLE_COMPLETE':
      return [
        ...state.slice(0, action.index),
        todoItem(state[action.index], action),
        ...state.slice(action.index + 1)
      ];
    default:
      // Unknown actions (including Redux's init action) must return the current state
      return state;
  }
}

If this still doesn't make sense, please read through the Redux basics tutorial, because you seem to have a wrong idea about how Redux applications are structured.
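To tie this back to the original question: "renaming a todo" is then just dispatching an action, and the reducers above decide how the plain state changes. A small usage sketch built only on the reducers shown above (the todo names are made up):

import { createStore } from 'redux';

const store = createStore(todoItems);

store.dispatch({ type: 'ADD', name: 'Write report' });
store.dispatch({ type: 'ADD', name: 'Book flights' });

// Instead of todo.rename('...'), describe what happened:
store.dispatch({ type: 'RENAME', index: 1, name: 'Book train tickets' });
store.dispatch({ type: 'TOGGLE_COMPLETE', index: 0 });

console.log(store.getState());
// [ { name: 'Write report', complete: true },
//   { name: 'Book train tickets', complete: false } ]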
Flux
32,352,982
53
Looking at the following diagram (which explains MVC), I see unidirectional data flow. So why do we consider MVC to have bidirectional data flow while justifying Flux ?
Real and Pure MVC is unidirectional. It is clear from the the wikipedia diagram pasted in the question. More than a decade ago, when server side frameworks like Apache Struts implemented a variant of MVC called Model View Presenter (MVP) pattern, they made every request go through controller and every response come back through controller. Everyone continued calling it MVC. Due to inherent nature of the web, any changes in the model cannot be propagated to the view without view sending a request or update. So Pure MVC is not implemented. Rather MVP is implemented. Few years back, when frameworks like Angular, Ember, Knockout implemented MVC on front end, they implemented another variant of MVC called Model View ViewModel (MVVM) pattern, few folks continued called it MVC. (and few realized that terminology is not important and called it MVW (W stands for Whatever)), none of them implemented pure MVC. When React was born, they took the opportunity to implement pure MVC (not MVP or MVVM), and renamed it as Flux with few changes. I feel Flux is one more variant of MVC. Although, Flux/React team says it is not MVC, I see lot of parity between both the architectures - Flux and MVC.
Flux
33,447,710
51
I've been struggling for hours to finding a solution to this problem... I am developing a game with an online scoreboard. The player can log in and log out at any time. After finishing a game, the player will see the scoreboard, and see their own rank, and the score will be submitted automatically. The scoreboard shows the player’s ranking, and the leaderboard. The scoreboard is used both when the user finishes playing (to submit a score), and when the user just wants to check out their ranking. This is where the logic becomes very complicated: If the user is logged in, then the score will be submitted first. After the new record is saved then the scoreboard will be loaded. Otherwise, the scoreboard will be loaded immediately. The player will be given an option to log in or register. After that, the score will be submitted, and then the scoreboard will be refreshed again. However, if there is no score to submit (just viewing the high score table). In this case, the player’s existing record is simply downloaded. But since this action does not affect the scoreboard, both the scoreboard and the player’s record should be downloaded simultaneously. There is an unlimited number of levels. Each level has a different scoreboard. When the user views a scoreboard, then the user is ‘observing’ that scoreboard. When it is closed, the user stops observing it. The user can log in and log out at any time. If the user logs out, the user’s ranking should disappear, and if the user logs in as another account, then the ranking information for that account should be fetched and displayed. ...but this fetching this information should only take place for the scoreboard whose user is currently observing. For viewing operations, the results should be cached in-memory, so that if user re-subscribes to the same scoreboard, there will be no fetching. However, if there is a score being submitted, the cache should not be used. Any of these network operations may fail, and the player must be able to retry them. These operations should be atomic. All the states should be updated in one go (no intermediate states). Currently, I am able to solve this using Bacon.js (a functional reactive programming library), as it comes with atomic update support. The code is quite concise, but right now it is a messy unpredictable spaghetti code. I started looking at Redux. So I tried to structure the store, and came up with something like this (in YAMLish syntax): user: (user information) record: level1: status: (loading / completed / error) data: (record data) error: (error / null) scoreboard: level1: status: (loading / completed / error) data: - (record data) - (record data) - (record data) error: (error / null) The problem becomes: where do I put the side-effects. For side-effect-free actions, this becomes very easy. For instance, on LOGOUT action, the record reducer could simply blast all the records off. However, some actions do have side effect. For example, if I am not logged in before submitting the score, then I log in successfully, the SET_USER action saves the user into the store. But because I have a score to submit, this SET_USER action must also cause an AJAX request to be fired off, and at the same time, set the record.levelN.status to loading. The question is: how do I signify that a side-effects (score submission) should take place when I log in in an atomic way? 
In Elm architecture, an updater can also emit side-effects when using the form of Action -> Model -> (Model, Effects Action), but in Redux, it’s just (State, Action) -> State. From the Async Actions docs, the way they recommend is to put them in an action creator. Does this means that the logic of submitting the score will have to be put in the action creator for a successful login action as well? function login (options) { return (dispatch) => { service.login(options).then(user => dispatch(setUser(user))) } } function setUser (user) { return (dispatch, getState) => { dispatch({ type: 'SET_USER', user }) let scoreboards = getObservedScoreboards(getState()) for (let scoreboard of scoreboards) { service.loadUserRanking(scoreboard.level) } } } I find this a bit odd, because the code responsible for this chain reaction now exists in 2 places: In the reducer. When SET_USER action is dispatched, the record reducer must also set the status of the records belonging to the observed scoreboards to loading. In the action creator, which performs the actual side-effect of fetching/submitting score. It also seems that I have to manually keep track of all the active observers. Whereas in Bacon.js version, I did something like this: Bacon.once() // When first observing the scoreboard .merge(resubmit口) // When resubmitting because of network error .merge(user川.changes().filter(user => !!user).first()) // When user logs in (but only once) .flatMapLatest(submitOrGetRanking(data)) The actual Bacon code is a lot longer, because of the all the complex rules above, that made the Bacon version barely-readable. But Bacon kept track of all active subscriptions automatically. This led me to start questioning that it might not be worth the switch, because rewriting this to Redux would require a lot of manual handling. Can anyone suggest some pointer?
When you want complex async dependencies, just use Bacon, Rx, channels, sagas, or another asynchronous abstraction. You can use them with or without Redux. Example with Redux:

observeSomething()
  .flatMap(someTransformation)
  .filter(someFilter)
  .map(createActionSomehow)
  .subscribe(store.dispatch);

You can compose your asynchronous actions any way you like; the only important part is that eventually they turn into store.dispatch(action) calls. Redux Thunk is enough for simple apps, but as your async needs get more sophisticated, you need to use a real asynchronous composition abstraction, and Redux doesn't care which one you use.

Update: Some time has passed, and a few new solutions have emerged. I suggest you check out Redux Saga, which has become a fairly popular solution for async control flow in Redux.
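Since Redux Saga is mentioned as the now-popular option, here is a minimal sketch of the same "turn an observed event into dispatches" idea expressed as a saga. The action types are made up around the scoreboard use case, and rootReducer and api.submitScore are placeholders for your own reducer and service call:

import { applyMiddleware, createStore } from 'redux';
import createSagaMiddleware from 'redux-saga';
import { call, put, takeLatest } from 'redux-saga/effects';

function* submitScore(action) {
  try {
    const ranking = yield call(api.submitScore, action.payload); // placeholder API call
    yield put({ type: 'SCORE_SUBMIT_SUCCESS', ranking });
  } catch (err) {
    yield put({ type: 'SCORE_SUBMIT_FAILURE', error: err });
  }
}

function* rootSaga() {
  // takeLatest keeps only the most recent submission for a scoreboard in flight
  yield takeLatest('SCORE_SUBMIT_REQUEST', submitScore);
}

const sagaMiddleware = createSagaMiddleware();
const store = createStore(rootReducer, applyMiddleware(sagaMiddleware));
sagaMiddleware.run(rootSaga);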
Flux
32,925,837
47
What is the general practice of setting the initial state of the app with isomorphic applications? Without Flux I would simple use something like: var props = { }; // initial state var html = React.renderToString(MyComponent(props); Then render that markup via express-handlebars and display via {{{reactMarkup}}. On the client-side to set the initial state I would do something like this: if (typeof window !== 'undefined') { var props = JSON.parse(document.getElementById('props').innerHTML); React.render(MyComponent(props), document.getElementById('reactMarkup')); } So yes essentially you are setting the state twice, on server and client, however React will compare the differences and in most cases so it won't impact the performance by re-rendering. How would this principle work when you have actions and stores in the Flux architecture? Inside my component I could do: getInitialState: function() { return AppStore.getAppState(); } But now how do I set the initial state in the AppStore from the server? If I use React.renderToString with no passed properties it will call AppStore.getAppState() which won't have anything in it because I still don't understand how would I set the state in my store on the server? Update Feb. 5, 2015 I am still looking for a clean solution that does not involve using third-party Flux implementations like Fluxible, Fluxxor, Reflux. Update Aug. 19, 2016 Use Redux.
Take a look at dispatchr and yahoo's related libraries. Most flux implementations don't work in node.js because they use singleton stored, dispatchers, and actions, and have no concept of "we're done" which is required to know when to render to html and respond to the request. Yahoo's libraries like fetchr and routr get around this limitation of node by using a very pure form of dependency injection (no parsing functions for argument names or anything like that). Instead you define api functions like this in services/todo.js: create: function (req, resource, params, body, config, callback) { And actions like this in actions/createTodo.js: module.exports = function (context, payload, done) { var todoStore = context.getStore(TodoStore); ... context.dispatch('CREATE_TODO_START', newTodo); ... context.service.create('todo', newTodo, {}, function (err, todo) { The last line indirectly calls the create function in services/todo.js. In this case indirectly can either mean: on the server: fetchr fills in the extra arguments when you're on the server it then calls your callback on the client side: the fetchr client makes a http request fetchr on the server intercepts it it calls the service function with the correct arguments it sends the response back to the client fetchr the client side fetchr handles calling your callback This is just the tip of the iceberg. This is a very sophisticated group of modules that work together to solve a tough problem and provide a useable api. Isomorphism is inherently complicated in real world use cases. This is why many flux implementations don't support server side rendering. You may also want to look into not using flux. It doesn't make sense for all applications, and often just gets in the way. Most often you only need it for a few parts of the application if any. There are no silver bullets in programming!
Flux
27,336,882
43
Recently I conducted a preliminary study on developing an E-commerce site and discovered that redux and reflux both come from flux architecture in Facebook and that both are popular. I am confused about the difference between the two. When should I use redux vs reflux, and which is most flexible during the development phase of an e-commerce web application?
Flux, Reflux and Redux (and many other similar libraries) are all different ways to handle transversal data management. Basic React components work fine with parent-children relationships, but when you have to provide and update data from different parts of the app which are not directly connected it can become quickly messy. Those libraries provide stores and actions (and other mechanisms) to maintain and update such data. Flux is the original solution developed by Facebook (just like React), it is powerful but probably not the easiest or readable. Reflux was developed partly to make it easier and clearer. The main difference is that in Reflux every piece of data has its own store and actions, which make it very readable and easy to write. Unfortunately Reflux is not so much actively developed anymore, the author is looking for maintainers. But all in all I would say Reflux is a more elegant alternative to Flux. Redux is another solution, which has become the most popular so far. Its advantage is that it provide nested stores with immutable content so that you can easily implement previous/next feature and have transversal actions that have impact on many parts of the store. The disadvantages of redux are that it is quite verbose and has many more concepts than Flux or Reflux. For the same basic actions it will need much more code, and the async implementation is not the cleanest. It is definitively powerful and scalable. Here is a link that talks about it more extensively: http://jamesknelson.com/which-flux-implementation-should-i-use-with-react/
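To give a feel for the Reflux style described above, a store/actions pair typically looks something like this; a rough sketch based on Reflux's documented createActions/createStore API, with illustrative names:

var Reflux = require('reflux');

var CartActions = Reflux.createActions(['addItem', 'removeItem']);

var CartStore = Reflux.createStore({
  listenables: [CartActions], // wires onAddItem/onRemoveItem to the actions above

  init: function () {
    this.items = [];
  },
  onAddItem: function (item) {
    this.items.push(item);
    this.trigger(this.items); // notify subscribed components
  },
  onRemoveItem: function (index) {
    this.items.splice(index, 1);
    this.trigger(this.items);
  }
});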
Flux
36,326,210
39