question | answer | tag | question_id | score
---|---|---|---|---|
I have a problem running a gradle build on Jenkins:
Gradle version is https://services.gradle.org/distributions/gradle-2.14.1-bin.zip
FAILURE: Build failed with an exception.
* What went wrong:
A problem occurred configuring root project 'myApp'.
> Could not resolve all dependencies for configuration ':classpath'.
> Could not resolve org.springframework.boot:spring-boot-gradle-plugin:1.4.2.RELEASE.
Required by:
:myApp:unspecified
> Could not resolve org.springframework.boot:spring-boot-gradle-plugin:1.4.2.RELEASE.
> Could not get resource 'https://repo1.maven.org/maven2/org/springframework/boot/spring-boot-gradle-plugin/1.4.2.RELEASE/spring-boot-gradle-plugin-1.4.2.RELEASE.pom'.
> Could not HEAD 'https://repo1.maven.org/maven2/org/springframework/boot/spring-boot-gradle-plugin/1.4.2.RELEASE/spring-boot-gradle-plugin-1.4.2.RELEASE.pom'.
> repo1.maven.org: Nome o servizio sconosciuto
* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output.
This is my build.gradle file:
buildscript {
ext {
springBootVersion = '1.4.2.RELEASE'
}
repositories {
mavenCentral()
}
dependencies {
classpath("org.springframework.boot:spring-boot-gradle-plugin:${springBootVersion}")
}
}
apply plugin: 'java'
apply plugin: 'eclipse-wtp'
apply plugin: 'org.springframework.boot'
apply plugin: 'war'
war {
baseName = 'myApp'
version = '1.0.5'
}
sourceCompatibility = 1.8
targetCompatibility = 1.8
repositories {
mavenCentral()
}
configurations {
providedRuntime
}
dependencies {
compile('org.springframework.boot:spring-boot-starter-thymeleaf')
providedRuntime('org.springframework.boot:spring-boot-starter-tomcat')
compile('org.springframework.boot:spring-boot-starter-security')
testCompile('org.springframework.boot:spring-boot-starter-test')
compile('org.springframework.boot:spring-boot-starter-web')
compile('com.fasterxml.jackson.core:jackson-core:2.7.3')
compile("org.springframework:spring-jdbc")
compile('com.fasterxml.jackson.core:jackson-databind:2.7.3')
compile('com.fasterxml.jackson.core:jackson-annotations:2.7.3')
compile files('src/main/resources/static/lib/ojdbc7.jar')
// https://mvnrepository.com/artifact/org.json/json
compile group: 'org.json', name: 'json', version: '20080701'
}
| As the error tells you — Nome o servizio sconosciuto, Italian for "Name or service not known" — repo1.maven.org cannot be resolved via DNS. So you either have a networking problem or you need to use a proxy server that you have not configured for Gradle. Ask your IT support why you cannot resolve the hostname.
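If the root cause turns out to be a missing proxy rather than broken DNS, Gradle reads proxy settings from gradle.properties. A minimal sketch, with placeholder proxy host and port, placed in the Jenkins user's ~/.gradle/gradle.properties:
systemProp.http.proxyHost=proxy.example.com
systemProp.http.proxyPort=3128
systemProp.https.proxyHost=proxy.example.com
systemProp.https.proxyPort=3128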
| Jenkins | 41,979,802 | 45 |
Here is my current Jenkins setup for a project:
one job runs all development branches
one job runs all pull requests
one job runs only the master branch
one job makes the automated release only when master passes
This setup allows me to have continuous automated delivery as well as constant feedback during development. The first 3 jobs also run all tests and coverage reports.
The problem is that I could not find a way to exclude the master branch from the "all development branches" job. It unnecessarily builds master twice every time I merge a pull-request.
Does anybody know how to exclude one branch from the job in Jenkins ?
ps: I am using the Git and the Github plugins. My project is stored on Github.
| You can choose "Inverse" strategy for targeting branches to build.
In the Jenkins job configuration:
"Source Code Management" section (choose "Git")
Additional Behaviours
click the "Add" button
choose "Strategy for choosing what to build"
select the "Inverse" strategy in the combo box.
(Don't forget to fill in the "Branches to build" text field with "master".)
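For pipeline jobs, the same "Inverse" strategy can be expressed in a checkout step. This is only a sketch — the repository URL is a placeholder, and the exact form for your plugin versions is best confirmed with the Pipeline Snippet Generator:
checkout([$class: 'GitSCM',
    branches: [[name: 'master']],
    extensions: [[$class: 'BuildChooserSetting',
                  buildChooser: [$class: 'InverseBuildChooser']]],
    userRemoteConfigs: [[url: 'https://github.com/your-org/your-repo.git']]])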
| Jenkins | 21,314,632 | 45 |
Where do the environment variables under Jenkins ( manage jenkins -> system information ) come from?
I checked /etc/init.d/tomcat5, /usr/bin/dtomcat5, /usr/bin/tomcat5, /etc/sysconfig/tomcat5 and /etc/profile but do not see any such variables there specially the ones related to Oracle (Base, Home, Ld_lib, path, etc.).
Tomcat's bashrc has some oracle related variables which I commented out but I still see the same in the jenkins system info page. Any directions?
| The environment variables displayed in Jenkins (Manage Jenkins -> System information) are inherited from the system (i.e. inherited environment variables)
If you run env command in a shell you should see the same environment variables as Jenkins shows.
These variables are either set by the shell/system or by you in ~/.bashrc, ~/.bash_profile.
There are also environment variables set by Jenkins when a job executes, but these are not displayed in the System Information.
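As a quick check that goes beyond the original answer, you can also dump the environment that the Jenkins process itself inherited from the Script Console (Manage Jenkins -> Script Console):
// Prints every environment variable visible to the Jenkins (master) process
System.getenv().sort().each { name, value -> println "${name}=${value}" }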
| Jenkins | 21,130,931 | 45 |
I'm using Jenkins to execute daily tasks for my projects, but on every execution Jenkins stores a ~20 MB directory in PROJECT_HOME/builds, so after many executions the disk space used by each project is huge (10 GB for some Jenkins tasks).
Storing the results of previous executions isn't important to me, so what I want to know is whether there is a way to tell Jenkins not to store that information.
Does anybody know how to stop Jenkins from storing the results of old executions?
| If you go into the project's configuration page, you will find a checkbox labeled "Discard Old Builds". Enabling this allows you to specify both the number of days to retain builds for and the maximum number of builds to keep.
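For Pipeline jobs the same setting can be written in the Jenkinsfile. A minimal sketch (the retention numbers are placeholders):
pipeline {
    agent any
    options {
        // keep at most 20 builds, and none older than 7 days
        buildDiscarder(logRotator(numToKeepStr: '20', daysToKeepStr: '7'))
    }
    stages {
        stage('Build') {
            steps { sh 'make' }   // placeholder build step
        }
    }
}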
| Jenkins | 7,994,379 | 45 |
I have already added 2 secret files to Jenkins credentials with names PRIVATE-KEY and PUBLIC-KEY.
How can I copy those 2 files to /src/resources directory inside a job?
I have the following snippet
withCredentials([file(credentialsId: 'PRIVATE_KEY', variable: 'my-private-key'),
file(credentialsId: 'PUBLIC_KEY', variable: 'my-public-key')]) {
//how to copy, where are those files to copy from?
}
| OK, I think I managed to do it. The my-private-key variable holds the path to the secret file, so I had to copy that secret to the destination I needed.
withCredentials([file(credentialsId: 'PRIVATE_KEY', variable: 'my-private-key'),
file(credentialsId: 'PUBLIC_KEY', variable: 'my-public-key')]) {
sh "cp \$my-public-key /src/main/resources/my-public-key.der"
sh "cp \$my-private-key /src/main/resources/my-private-key.der"
}
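One caveat worth adding (not part of the original answer): POSIX shell variable names cannot contain hyphens, so $my-public-key is expanded as $my followed by the literal text -public-key. A sketch using underscore-named variables and single-quoted sh steps (so Groovy does not interpolate the secret paths into the command), assuming the destination directory is relative to the workspace:
withCredentials([file(credentialsId: 'PRIVATE_KEY', variable: 'MY_PRIVATE_KEY'),
                 file(credentialsId: 'PUBLIC_KEY', variable: 'MY_PUBLIC_KEY')]) {
    sh 'cp "$MY_PUBLIC_KEY" src/main/resources/my-public-key.der'
    sh 'cp "$MY_PRIVATE_KEY" src/main/resources/my-private-key.der'
}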
| Jenkins | 49,460,520 | 44 |
I seem unable to create a Jenkins Pipeline job that builds a specific branch, where that branch is a build parameter.
Here's some configuration screenshots:
(I've tried with a Git Parameter and a String Parameter, same outcome)
(I've tried $BRANCH_NAME_PARAM, ${BRANCH_NAME_PARAM} and ${env.BRANCH_NAME_PARAM}, same outcome for all variations)
And the build log:
hudson.plugins.git.GitException: Command "git fetch --tags --progress origin +refs/heads/${BRANCH_NAME_PARAM}:refs/remotes/origin/${BRANCH_NAME_PARAM} --prune" returned status code 128:
stdout:
stderr: fatal: Couldn't find remote ref refs/heads/${BRANCH_NAME_PARAM}
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1970)
I'm obviously doing something wrong - any ideas on what?
| https://issues.jenkins-ci.org/plugins/servlet/mobile#issue/JENKINS-28447
It appears to be something to do with the lightweight checkout. If I deselect this option in my config, my parameter variables are resolved.
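For reference, an explicit (non-lightweight) scripted checkout that resolves the branch parameter might look like the sketch below — the repository URL and credentials id are placeholders, and BRANCH_NAME_PARAM is the string parameter from the question:
checkout([$class: 'GitSCM',
    branches: [[name: "*/${params.BRANCH_NAME_PARAM}"]],
    userRemoteConfigs: [[url: 'https://github.com/your-org/your-repo.git',
                         credentialsId: 'your-credentials-id']]])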
| Jenkins | 47,565,933 | 44 |
I'm trying to get a declarative pipeline that looks like this:
pipeline {
environment {
ENV1 = 'default'
ENV2 = 'default also'
}
}
The catch is, I'd like to be able to override the values of ENV1 or ENV2 based on an arbitrary condition. My current need is just to base it off the branch but I could imagine more complicated conditions.
Is there any sane way to implement this? I've seen some examples online that do something like:
stages {
stage('Set environment') {
steps {
script {
ENV1 = 'new1'
}
}
}
}
But I believe this isn't setting the actually environment variable, so much as it is setting a local variable which is overriding later calls to ENV1. The problem is, I need these environment variables read by a nodejs script, and those need to be real machine environment variables.
Is there any way to set environment variables to be dynamic in a jenkinsfile?
| Maybe you can try Groovy's ternary-operator:
pipeline {
agent any
environment {
ENV_NAME = "${env.BRANCH_NAME == "develop" ? "staging" : "production"}"
}
}
or extract the conditional to a function:
pipeline {
agent any
environment {
ENV_NAME = getEnvName(env.BRANCH_NAME)
}
}
// ...
def getEnvName(branchName) {
if("int".equals(branchName)) {
return "int";
} else if ("production".equals(branchName)) {
return "prod";
} else {
return "dev";
}
}
But, actually, you can do whatever you want using the Groovy syntax (features that are supported by Jenkins at least)
So the most flexible option would be to play with regex and branch names...So you can fully support Git Flow if that's the way you do it at VCS level.
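If a value can only be computed in the middle of the build, another option (not from the original answer, just a sketch) is to wrap the steps that need it in withEnv; child processes such as a Node.js script then see the variable as a real environment variable:
stage('Build') {
    steps {
        script {
            def env1 = env.BRANCH_NAME == 'develop' ? 'staging' : 'production'
            withEnv(["ENV1=${env1}"]) {
                sh 'node build.js'   // build.js is a placeholder for your script
            }
        }
    }
}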
| Jenkins | 44,007,034 | 44 |
I have a pipeline groovy script in Jenkins v2.19. Also I have a
"Slack Notification Plugin" v2.0.1 and "Groovy Postbuild Plugin" installed.
I can successfully send "build started" and "build finished" messages.
When a build fails, how can I send the "Build failed" message to a Slack channel?
| You could do something like this and use a try catch block.
Here is some example Code:
node {
try {
notifyBuild('STARTED')
stage('Prepare code') {
echo 'do checkout stuff'
}
stage('Testing') {
echo 'Testing'
echo 'Testing - publish coverage results'
}
stage('Staging') {
echo 'Deploy Stage'
}
stage('Deploy') {
echo 'Deploy - Backend'
echo 'Deploy - Frontend'
}
} catch (e) {
// If there was an exception thrown, the build failed
currentBuild.result = "FAILED"
throw e
} finally {
// Success or failure, always send notifications
notifyBuild(currentBuild.result)
}
}
def notifyBuild(String buildStatus = 'STARTED') {
// build status of null means successful
buildStatus = buildStatus ?: 'SUCCESSFUL'
// Default values
def colorName = 'RED'
def colorCode = '#FF0000'
def subject = "${buildStatus}: Job '${env.JOB_NAME} [${env.BUILD_NUMBER}]'"
def summary = "${subject} (${env.BUILD_URL})"
// Override default values based on build status
if (buildStatus == 'STARTED') {
color = 'YELLOW'
colorCode = '#FFFF00'
} else if (buildStatus == 'SUCCESSFUL') {
color = 'GREEN'
colorCode = '#00FF00'
} else {
color = 'RED'
colorCode = '#FF0000'
}
// Send notifications
slackSend (color: colorCode, message: summary)
}
Complete snippet can be found here Jenkinsfile Template
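If you are using a declarative pipeline instead, a shorter variant is a post section; a minimal sketch, assuming the Slack plugin is already configured globally:
pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh 'make' }   // placeholder build step
        }
    }
    post {
        failure {
            slackSend color: '#FF0000', message: "FAILED: ${env.JOB_NAME} [${env.BUILD_NUMBER}] (${env.BUILD_URL})"
        }
        success {
            slackSend color: '#00FF00', message: "SUCCESSFUL: ${env.JOB_NAME} [${env.BUILD_NUMBER}]"
        }
    }
}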
| Jenkins | 39,140,191 | 44 |
After going through the pipeline and Jenkinsfile documentation, I am a bit confused on how to create a Stage -> Production pipeline.
One way is to use the input step like
node() {
stage 'Build to Stage'
sh '# ...'
input 'Deploy to Production'
stage 'Build to Production'
sh '# ...'
}
This seems a bit clunky, as this will block an executor all the time until you want to deploy to production. Is there any alternative way of being able to deploy to production from Jenkins?
| EDIT (Oct 2016): Please see my other answer "Use milestone and lock" below, which includes recently introduced features.
Use timeout Step
As first option, you can wrap your sh step into a timeout step.
node() {
    stage('Build to Stage') {
        sh '# ...'
    }
    stage('Promotion') {
        timeout(time: 1, unit: 'HOURS') {
            input 'Deploy to Production?'
        }
    }
    stage('Deploy to Production') {
        sh '# ...'
    }
}
This stops the build after the timeout.
Move input Step to Flyweight Executor
Another option is to not allocate a heavyweight executor for the input step. You can do this by using the input step outside of the node block, like this:
stage('Build to Stage') {
    node {
        sh "echo building"
        stash 'complete-workspace'
    }
}
stage('Promotion') {
    input 'Deploy to Production?'
}
stage('Deploy to Production') {
    node {
        unstash 'complete-workspace'
        sh "echo deploying"
    }
}
This is probably the more elegant way, but it can still be combined with the timeout step.
EDIT: As pointed out by @amuniz, you have to stash/unstash the contents of the workspace, as different nodes respectively workspace directories might be allocated for the two node steps.
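For completeness, here is a rough sketch of the milestone/lock approach mentioned in the EDIT at the top of this answer; it assumes the Lockable Resources plugin is installed and a lockable resource named 'production' exists:
stage('Build to Stage') {
    node {
        sh 'echo building'
    }
}
milestone 1
stage('Promotion') {
    input 'Deploy to Production?'
    milestone 2
}
stage('Deploy to Production') {
    lock(resource: 'production') {
        node {
            sh 'echo deploying'
        }
    }
}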
| Jenkins | 37,831,386 | 44 |
A strange thing happens sometimes: Jenkins starts displaying "Jenkins is going to shut down" even when nobody turned this message on or restarted Jenkins.
| I have a plugin, "ThinBackup", which was configured to shut Jenkins down after a backup. Changing this setting fixed it. It is a bit tricky to find because this plugin is not configured under "Configure System"; it is under "Manage Jenkins", so you can easily miss it.
As mentioned by Florian below, the ThinBackup setting in question is named "Wait until Jenkins/Hudson is idle to perform a backup", which comes with a "Force Jenkins to quiet mode after specified minutes" option.
| Jenkins | 26,218,018 | 44 |
How can I delete a build from the Jenkins GUI? I know that I can delete the directory from the 'jobs' folder, but I want to do it from the GUI. Is it also possible to delete multiple builds?
| If you go into the build you want to delete and if you have the permissions to delete, then you will see on the upper right corner a button "Delete this build".
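The question also asks about deleting multiple builds; the GUI button works one build at a time, so a hedged alternative is the Script Console (Manage Jenkins -> Script Console). The job name and the build-number cutoff below are placeholders, and you need admin permissions:
// Deletes every build of 'my-job' with a build number lower than 100
def job = Jenkins.instance.getItemByFullName('my-job')
job.builds.findAll { it.number < 100 }.each { it.delete() }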
| Jenkins | 7,995,079 | 44 |
Is there any way to import the changelog that is generated by Jenkins to the subject of an email (either through the default email, or the email-ext plugin)?
I am new to Jenkins configuration, so I apologize if this is a simple issue, but I was not able to find anything on the email-ext documentation.
| I configured my Email-ext plug-in to use the CHANGES Token (official documentation here):
Changes:
${CHANGES, showPaths=true, format="%a: %r %p \n--\"%m\"", pathFormat="\n\t- %p"}
That prints the following in my build notifications:
Changes:
Username: 123
- Project/Filename1.m
- Project/Filename2.m
-- "My log message"
For HTML messages, I placed the same code within a div and added formatting:
<div style="padding-left: 30px; padding-bottom: 15px;">
${CHANGES, showPaths=true, format="<div><b>%a</b>: %r %p </div><div style=\"padding-left:30px;\"> — “<em>%m</em>”</div>", pathFormat="</div><div style=\"padding-left:30px;\">%p"}
</div>
Here's a sample screenshot of how it looks in the e-mails sent out by Jenkins now (this particular commit came from Subversion, but it works exactly the same for Git and other version control systems):
| Jenkins | 7,773,010 | 44 |
I'm trying to deploy my working Windows 10 Spring-Boot/React app on Ubuntu 18.04 but keep getting "react-scripts: Permission denied" error despite numerous attempts to fix. Hopefully one of you react experts can spot what I'm doing wrong.
My package.json looks like this
{
"name": "medaverter-front",
"version": "0.1.0",
"private": true,
"dependencies": {
"@testing-library/jest-dom": "^4.2.4",
"@testing-library/react": "^9.3.2",
"@testing-library/user-event": "^7.1.2",
"axios": "^0.19.2",
"bootstrap": "^4.4.1",
"react": "^16.13.0",
"react-dom": "^16.13.0",
"react-router-dom": "^5.1.2",
"react-scripts": "3.4.0",
"react-table-6": "^6.11.0",
"react-validation": "^3.0.7",
"reactstrap": "^6.5.0",
"validator": "^12.2.0"
},
"scripts": {
"start": "react-scripts start",
"build": "react-scripts build",
"test": "react-scripts test",
"eject": "react-scripts eject"
},
"eslintConfig": {
"extends": "react-app"
},
"browserslist": {
"production": [
">0.2%",
"not dead",
"not op_mini all"
],
"development": [
"last 1 chrome version",
"last 1 firefox version",
"last 1 safari version"
]
}
}
I'm logged in as root and used nvm to install node and lts. I installed nvm like this:
curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.33.0/install.sh | bash
then did this:
nvm install node
nvm use node
nvm install --lts
nvm use --lts
Then I cd to /var/lib/jenkins/workspace/MedAverter/medaverter-front and install node_modules like this:
npm install -g
and then change the permissions to 777 recursively, like this:
chmod -R 777 node_modules
I also changed all the /root/.nvm permissions to 777 recursively, like this:
chmod -R 777 /root/.nvm
I can get it to build once using
npm run build
but then I run a "Build Now" from Jenkins and it fails with the same
[INFO] Running 'npm run build' in /var/lib/jenkins/workspace/MedAverter/medaverter-front
[INFO] [INFO] > medaverter-front@0.1.0 build /var/lib/jenkins/workspace/MedAverter/medaverter-front
[INFO] > react-scripts build [INFO]
[ERROR] sh: 1: react-scripts: Permission denied
[ERROR] npm ERR! code ELIFECYCLE
[ERROR] npm ERR! errno 126
[ERROR] npm ERR! medaverter-front@0.1.0 build: `react-scripts build`
[ERROR] npm ERR! Exit status 126
Then I cd to /var/lib/jenkins/workspace/MedAverter/medaverter-front and run
npm run build
And also get the same error again:
> root@ubuntu-s-1vcpu-1gb-nyc1-01:/var/lib/jenkins/workspace/MedAverter/medaverter-front#
> npm run build
>
> > medaverter-front@0.1.0 build /var/lib/jenkins/workspace/MedAverter/medaverter-front
> > react-scripts build
>
> sh: 1: react-scripts: Permission denied npm ERR! code ELIFECYCLE
> npm ERR! errno 126 npm ERR! medaverter-front@0.1.0 build:
> `react-scripts build` npm ERR! Exit status 126
I've literally spent days trying to figure this out. Suggestions?
| Solution 1:
I think you have react-scripts installed globally, so try this command:
npm install react-scripts --save
and then run the application again.
Solution 2:
try this command
sudo chmod +x node_modules/.bin/react-scripts
and then run the application again.
Solution 3:
Your npm may not have the necessary permissions. You can try running the build with sudo:
sudo npm run build
You can also fix the underlying permission problem like this:
Step 1:
Check the path of npm (if you are using npm) with:
which npm
You will get a path such as "/usr/local/bin/npm".
OR
Check the path of yarn (if you are using yarn) with:
which yarn
You will get a path such as "/usr/local/bin/yarn".
Step 2:
Give permission 777 to this path and try to run the project again:
sudo chmod -R 777 /usr/local/bin/npm
| Jenkins | 62,140,265 | 43 |
I have a declarative pipeline script for my multibranch project in which I would like to read a text file and store the result as a string variable to be accessed by a later step in the pipeline. Using the snippet generator I tried to do something like this:
filename = readFile 'output.txt'
For which filename would be my string.
I get an error in the Jenkins console output:
org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
WorkflowScript: 30: Expected a step @ line 30, column 5.
filename = readFile 'output.txt'
Do I need to use a withEnv step to set the output of readFile to a Jenkins environment variable? If so, how?
Thanks
| The error is due to that you're only allowed to use pipeline steps inside the steps directive. One workaround that I know is to use the script step and wrap arbitrary pipeline script inside of it and save the result in the environment variable so that it can be used later.
So in your case:
pipeline {
agent any
stages {
stage("foo") {
steps {
script {
env.FILENAME = readFile 'output.txt'
}
echo "${env.FILENAME}"
}
}
}
}
| Jenkins | 42,540,148 | 43 |
Absolute Jenkins pipeline/groovy noob here, I have a stage
stage('Building and Deploying'){
def build = new Build()
build.deploy()
}
which is using the shared lib, the source of the Build.groovy is here:
def deploy(branch='master', repo='xxx'){
if (env.BRANCH_NAME.trim() == branch) {
def script = libraryResource 'build/package_indexes/python/build_push.sh'
// TODO: Test out http://stackoverflow.com/questions/40965725/jenkins-pipeline-cps-global-lib-resource-file-for-shell-script-purpose/40994132#40994132
env.PYPI_REPO = repo
sh script
}else {
echo "Not pushing to repo because branch is: "+env.BRANCH_NAME.trim()+" and not "+branch
}
}
The problem is that when the push to the remote repo fails (see below), the stage still ends up showing as successful.
running upload
Submitting dist/xxx-0.0.7.tar.gz to https://xxx.jfrog.io/xxx/api/pypi/grabone-pypi-local
Upload failed (403): Forbidden
...
Finished: SUCCESS
How do I bubble up the exit code of the shell script and fail the stage?
| The sh step returns the same status code that your actual sh command (your script in this case) returns. From sh documentation :
Normally, a script which exits with a nonzero status code will cause the step to fail with an exception.
You have to make sure that your script returns a nonzero status code when it fails. If you're not sure what your script returns, you can check the return value using the returnStatus param of the sh step, which will not fail the build but will return the status code. E.g:
def statusCode = sh script:script, returnStatus:true
You can then use this status code to set the result of your current build.
You can use :
currentBuild.result = 'FAILURE' or currentBuild.result = 'UNSTABLE' to mark the step as red/yellow respectively. In this case the build will still process the next steps.
error "Your error message" if you want the build to fail and exit immediately.
| Jenkins | 42,428,871 | 43 |
I want to update submodule on git clone.
Is there a way to do this with Jenkins pipeline Git command?
Currently I'm doing this...
git branch: 'master',
credentialsId: 'bitbucket',
url: 'ssh://bitbucket.org/hello.git'
However, it doesn't update the submodules once the repository is cloned.
| The git command as a pipeline step is rather limited as it provides a default implementation of the more complex checkout command. For more advanced configuration, you should use checkout command, for which you can pass a whole lot of parameters, including the desired submodules configuration.
What you want to use is probably something like this :
checkout([$class: 'GitSCM',
branches: [[name: '*/master']],
doGenerateSubmoduleConfigurations: false,
extensions: [[$class: 'SubmoduleOption',
disableSubmodules: false,
parentCredentials: false,
recursiveSubmodules: true,
reference: '',
trackingSubmodules: false]],
submoduleCfg: [],
userRemoteConfigs: [[url: 'your-git-server/your-git-repository']]])
Since it is often cumbersome to write these kinds of lines by hand, I recommend you instead use Jenkins' very good Snippet Generator (YourJenkins > yourProject > Pipeline Syntax) to automatically generate the checkout line!
| Jenkins | 42,290,133 | 43 |
I can't seem to figure out how to use the basic archive artifacts statement. What I want is to archive an entire subtree but naming it doesn't seem to work. Nor does directory/** nor directory/**/
I've read the ant doc but it doesn't make much sense to me.
How do I specify a subtree? Or... where can I find a meaningful description of whatever goes in that field?
| Directory/**/*.* -> All the files recursively under Directory
**/*.* -> all the files in the workspace
**/*.xml -> all xml files in your workspace.
Directory/**/*.xml -> All the xml files recursively under Directory
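Note that a bare Directory/** Ant pattern also works and, unlike Directory/**/*.*, it matches files without an extension. In a pipeline the equivalent step would be something like the following sketch (the path is a placeholder):
archiveArtifacts artifacts: 'directory/**', fingerprint: true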
| Jenkins | 40,597,655 | 43 |
I am using @NonCPS in front of my Jenkinsfile function which performs a regex match and i'm still getting java.io.NotSerializableException java.util.regex.Matcher error even with the @NonCPS annotation.
Note, it calls the function many times and the exception only occurs once a match is actually made.
Here is my code:
@NonCPS
def extractEndTime(logLine) {
def MY_REGEX = /.*(20[0-9]{2}-[0-9]{2}-[0-9]{2}).([0-9]{2}:[0-9]{2}:[0-9]{2}).*\"\w+\"\sthe text\s(\w+)\./
m = (logLine =~ TEST_LOGLINE_END_REGEX)
if (m.count) {
return [m[1],m[2],m[3]]
} else {
return null
}
}
The output when doing a jenkins build:
GitHub has been notified of this commit’s build result
java.io.NotSerializableException: java.util.regex.Matcher
at org.jboss.marshalling.river.RiverMarshaller.doWriteObject(RiverMarshaller.java:860)
at org.jboss.marshalling.river.BlockMarshaller.doWriteObject(BlockMarshaller.java:65)
at org.jboss.marshalling.river.BlockMarshaller.writeObject(BlockMarshaller.java:56)
at org.jboss.marshalling.MarshallerObjectOutputStream.writeObjectOverride(MarshallerObjectOutputStream.java:50)
at org.jboss.marshalling.river.RiverObjectOutputStream.writeObjectOverride(RiverObjectOutputStream.java:179)
at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:344)
at java.util.LinkedHashMap.internalWriteEntries(LinkedHashMap.java:333)
at java.util.HashMap.writeObject(HashMap.java:1354)
at sun.reflect.GeneratedMethodAccessor116.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.jboss.marshalling.reflect.SerializableClass.callWriteObject(SerializableClass.java:271)
at org.jboss.marshalling.river.RiverMarshaller.doWriteSerializableObject(RiverMarshaller.java:976)
at org.jboss.marshalling.river.RiverMarshaller.doWriteSerializableObject(RiverMarshaller.java:967)
at org.jboss.marshalling.river.RiverMarshaller.doWriteObject(RiverMarshaller.java:854)
at org.jboss.marshalling.river.BlockMarshaller.doWriteObject(BlockMarshaller.java:65)
at org.jboss.marshalling.river.BlockMarshaller.writeObject(BlockMarshaller.java:56)
at org.jboss.marshalling.MarshallerObjectOutputStream.writeObjectOverride(MarshallerObjectOutputStream.java:50)
at org.jboss.marshalling.river.RiverObjectOutputStream.writeObjectOverride(RiverObjectOutputStream.java:179)
at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:344)
at com.cloudbees.groovy.cps.SerializableScript.writeObject(SerializableScript.java:26)
at sun.reflect.GeneratedMethodAccessor145.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.jboss.marshalling.reflect.SerializableClass.callWriteObject(SerializableClass.java:271)
at org.jboss.marshalling.river.RiverMarshaller.doWriteSerializableObject(RiverMarshaller.java:976)
at org.jboss.marshalling.river.RiverMarshaller.doWriteSerializableObject(RiverMarshaller.java:967)
at org.jboss.marshalling.river.RiverMarshaller.doWriteSerializableObject(RiverMarshaller.java:967)
at org.jboss.marshalling.river.RiverMarshaller.doWriteObject(RiverMarshaller.java:854)
at org.jboss.marshalling.river.RiverMarshaller.doWriteFields(RiverMarshaller.java:1032)
at org.jboss.marshalling.river.RiverMarshaller.doWriteSerializableObject(RiverMarshaller.java:988)
at org.jboss.marshalling.river.RiverMarshaller.doWriteSerializableObject(RiverMarshaller.java:967)
at org.jboss.marshalling.river.RiverMarshaller.doWriteSerializableObject(RiverMarshaller.java:967)
at org.jboss.marshalling.river.RiverMarshaller.doWriteObject(RiverMarshaller.java:854)
at org.jboss.marshalling.river.BlockMarshaller.doWriteObject(BlockMarshaller.java:65)
at org.jboss.marshalling.river.BlockMarshaller.writeObject(BlockMarshaller.java:56)
at org.jboss.marshalling.MarshallerObjectOutputStream.writeObjectOverride(MarshallerObjectOutputStream.java:50)
at org.jboss.marshalling.river.RiverObjectOutputStream.writeObjectOverride(RiverObjectOutputStream.java:179)
at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:344)
at java.util.HashMap.internalWriteEntries(HashMap.java:1777)
at java.util.HashMap.writeObject(HashMap.java:1354)
at sun.reflect.GeneratedMethodAccessor116.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.jboss.marshalling.reflect.SerializableClass.callWriteObject(SerializableClass.java:271)
at org.jboss.marshalling.river.RiverMarshaller.doWriteSerializableObject(RiverMarshaller.java:976)
at org.jboss.marshalling.river.RiverMarshaller.doWriteObject(RiverMarshaller.java:854)
at org.jboss.marshalling.river.RiverMarshaller.doWriteFields(RiverMarshaller.java:1032)
at org.jboss.marshalling.river.RiverMarshaller.doWriteSerializableObject(RiverMarshaller.java:988)
at org.jboss.marshalling.river.RiverMarshaller.doWriteObject(RiverMarshaller.java:854)
at org.jboss.marshalling.AbstractObjectOutput.writeObject(AbstractObjectOutput.java:58)
at org.jboss.marshalling.AbstractMarshaller.writeObject(AbstractMarshaller.java:111)
at org.jenkinsci.plugins.workflow.support.pickles.serialization.RiverWriter.writeObject(RiverWriter.java:132)
at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.saveProgram(CpsThreadGroup.java:433)
at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.saveProgram(CpsThreadGroup.java:412)
at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.run(CpsThreadGroup.java:357)
at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.access$100(CpsThreadGroup.java:78)
at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:236)
at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:224)
at org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService$2.call(CpsVmExecutorService.java:63)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at hudson.remoting.SingleLaneExecutorService$1.run(SingleLaneExecutorService.java:112)
at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: an exception which occurred:
in field delegate
in field closures
in object org.jenkinsci.plugins.workflow.cps.CpsThreadGroup@42b37962
Finished: FAILURE
| Jenkins require all variables to be serializable because the state of the pipeline is periodically saved to disk in case of interrupts like a server restarts. This feature allows pipelines to maintain their state and continue even after the server is restarted. Variables of type Matcher are not serializable and require some additional work by the developer.
From jenkinsci/pipeline-plugin Serializing Local Variables:
However the safest approach is to isolate use of nonserializable state
inside a method marked with the annotation @NonCPS. Such a method will
be treated as “native” by the Pipeline engine, and its local variables
never saved.
The code example provided:
@NonCPS
def version(text) {
def matcher = text =~ '<version>(.+)</version>'
matcher ? matcher[0][1] : null
}
Additional material backing this can be found on Pipeline Groovy Plugin Technical Design, here they discuss more technical details and behavior of methods marked with @NonCPS.
Pipeline scripts may mark designated methods with the annotation
@NonCPS. These are then compiled normally (except for sandbox security
checks), and so behave much like “binary” methods from the Java
Platform, Groovy runtime, or Jenkins core or plugin code. @NonCPS
methods may safely use non-Serializable objects as local variables,
though they should not accept nonserializable parameters or return or
store nonserializable values. You may not call regular
(CPS-transformed) methods, or Pipeline steps, from a @NonCPS method,
so they are best used for performing some calculations before passing
a summary back to the main script. Note in particular that @Overrides
of methods defined in binary classes, such as Object.toString(),
should in general be marked @NonCPS since it will commonly be binary
code calling them.
See: Serializing Local Variables
and Pipeline Groovy Plugin Technical Design
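Applied to the function in the question, a hedged rewrite might look like the sketch below. Two details matter: m was originally assigned without def, which makes the Matcher a script-level (binding) variable that is still serialized even inside a @NonCPS method — likely why the exception persisted — and the regex variable names (MY_REGEX vs. TEST_LOGLINE_END_REGEX) were mismatched:
@NonCPS
def extractEndTime(logLine) {
    def MY_REGEX = /.*(20[0-9]{2}-[0-9]{2}-[0-9]{2}).([0-9]{2}:[0-9]{2}:[0-9]{2}).*\"\w+\"\sthe text\s(\w+)\./
    def m = (logLine =~ MY_REGEX)   // 'def' keeps the Matcher local, so it is never saved
    if (m.count) {
        return [m[0][1], m[0][2], m[0][3]]   // date, time and word captured from the first match
    }
    return null
}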
| Jenkins | 40,454,558 | 43 |
When building a Jenkins pipeline job (Jenkins ver. 2.7.4), I get this warning:
Using the ‘stage’ step without a block argument is deprecated
How do I fix it?
Pipeline script snippet:
stage 'Workspace Cleanup'
deleteDir()
| From Jenkins pipeline stage step doc:
An older, deprecated mode of this step did not take a block
argument...
In order to remove the warning just add a block argument:
stage('Stage Name') {
// some block
}
You can also generate a stage step using Snippet Generator.
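Applied to the snippet in the question, that becomes:
stage('Workspace Cleanup') {
    deleteDir()
}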
| Jenkins | 39,445,488 | 43 |
I am trying to create a job where I have to select multiple values for one parameter.
env: dev1, dev2, qa1, qa2 etc
I want to be able to select dev1 & dev2 to update certain values.
Is there a way/plugin for Jenkins to handle it?
| The Extended Choice Parameter plugin is the way to go for such a requirement. You need to select Extended Choice Parameter from the Add Parameter drop-down list.
In the Name text-box, assign a name, for example Environment. This is the name with which you will access all the values (dev1, dev2, ...) that you select while triggering the build. Now, in the Simple Parameter Types section, you will see another drop-down with the name Parameter Type; select Multi Select from it. Then, in the Choose Source for Value section, enter the values (dev1,dev2,qa1,qa2,...) in the Value box. Comma (,) is the default delimiter.
Once you are done with the above settings, you will then have to access the selected values in your script (using the variable assigned to Name as described above) and decide the course of action.
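As an illustration of that last step, in a shell build step the selections arrive as a single comma-separated string in the variable named above (Environment, assuming the default delimiter), so a sketch like this iterates over them:
# $Environment arrives as e.g. "dev1,dev2"
for e in $(echo "$Environment" | tr ',' ' '); do
    echo "updating $e"
    # placeholder for the real update command
done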
| Jenkins | 26,006,265 | 43 |
Are Jenkins parameters case-sensitive? I have a parametrized build which needs an ant parameter named "build_parameter" to be set before the build. When I try to access the ${BUILD_NUMBER} set by Jenkins, I get the value set for the ant parameter. If the build parameters are not case-sensitive, can anyone suggest a workaround for this issue? I cannot change the build parameter name as I would have to change my build scripts (which is not an option). Thanks!
| To answer your first question: Jenkins variables are case-sensitive. However, if you are writing a Windows batch script, they are case-insensitive, because Windows doesn't care about case.
Since you are not very clear about your setup, let's make the assumption that you are using an ant build step to fire up your ant task. Have a look at the Jenkins documentation (same page that Adarsh gave you, but different chapter) for an example on how to make Jenkins variables available to your ant task.
EDIT:
Hence, I will need to access the environmental variable ${BUILD_NUMBER} to construct the URL.
Why don't you use $BUILD_URL then? Isn't it available in the extended email plugin?
| Jenkins | 19,179,447 | 43 |
How to change Jenkins default folder on Windows where Jenkins runs as Windows service.
I want to change C:\Users\Coola\.jenkins folder to d:\Jenkins due to lack of space on C: partition (Every build takes ~10MB of free space). I don't want to reinstall Jenkins as Windows service. I just want to change folder of existing Jenkins instance. In case of lack of global solution I could focus only on relocating jobs folder.
Thanks in advance for your help.
|
Stop Jenkins service
Move C:\Users\Coola\.jenkins folder to d:\Jenkins
Using regedit, change HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Jenkins\ImagePath to "d:\Jenkins\jenkins.exe"
Start service
| Jenkins | 12,689,139 | 43 |
I am setting up a new server to run Jenkins. I have an existing Jenkins server with jobs in place. Now, I want to copy the jobs over from the old instance to the new instance.
On the new instance I am at the New Job screen. I notice that there is a "copy existing job" option. When I put in the path to the job on the old instance, I keep getting an error saying "no such job at http://old-instance/job/jobName".
How can I copy a job from one instance to another?
| According to the manual, https://wiki.jenkins-ci.org/display/JENKINS/Administering+Jenkins, it's simply to move the corresponding job directory to the new Jenkins instance.
The "Copy existing Job" option requires the job to exist on the current Jenkins instance. It's an option to use the existing job as a template. It can't be used to move jobs between instances.
| Jenkins | 9,038,748 | 43 |
I've stored username and password as credentials in jenkins. Now I would like to use them in my Jenkinsfile.
I am using withCredentials DSL, however, I'm not sure how to get the username password as separate variables so I can use them in my command.
This is what I'm doing:
withCredentials([usernameColonPassword(credentialsId: 'mycreds', variable: 'MYCREDS')]) {
sh 'cf login some.awesome.url -u <user> -p password'
}
How can I get the username and password separately? I tried doing ${MYCREDS.split(":")[0]} but that doesn't seem to work.
| Here is a tiny bit simpler version of StephenKing's answer
withCredentials([usernamePassword(credentialsId: 'mycreds', usernameVariable: 'USERNAME', passwordVariable: 'PASSWORD')]) {
sh 'cf login some.awesome.url -u $USERNAME -p $PASSWORD'
}
| Jenkins | 43,026,637 | 42 |
I'd like to leverage the existing Mailer plugin from Jenkins within a Jenkinsfile that defines a pipeline build job. Given the following simple failure script I would expect an email on every build.
stage 'Test'
node {
try {
sh 'exit 1'
} finally {
step([$class: 'Mailer', notifyEveryUnstableBuild: true, recipients: '[email protected]', sendToIndividuals: true])
}
}
The output from the build is:
Started by user xxxxx
[Pipeline] stage (Test)
Entering stage Test
Proceeding
[Pipeline] node
Running on master in /var/lib/jenkins/jobs/rpk-test/workspace
[Pipeline] {
[Pipeline] sh
[workspace] Running shell script
+ exit 1
[Pipeline] step
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 1
Finished: FAILURE
As you can see, it does record that it performs the pipeline step immediately after the failure, but no emails get generated.
Emails in other free-style jobs that leverage the mailer work fine, its just invoking via pipeline jobs.
This is running with Jenkins 2.2 and mailer 1.17.
Is there a different mechanism by which I should be invoking failed build emails? I don't need all the overhead of the mail step, just need notifications on failures and recoveries.
| In Pipeline, a failed sh step doesn't immediately set currentBuild.result to FAILURE; its initial value is null. Hence, build steps that rely on the build status, like Mailer, might behave seemingly incorrectly.
You can check it by adding a debug print:
stage 'Test'
node {
try {
sh 'exit 1'
} finally {
println currentBuild.result // this prints null
step([$class: 'Mailer', notifyEveryUnstableBuild: true, recipients: '[email protected]', sendToIndividuals: true])
}
}
This whole pipeline is wrapped with exception handler provided by Jenkins that's why Jenkins marks the build as failed in the the end.
So if you want to utilize Mailer you need to maintain the build status properly. For instance:
stage 'Test'
node {
try {
sh 'exit 1'
currentBuild.result = 'SUCCESS'
} catch (any) {
currentBuild.result = 'FAILURE'
throw any //rethrow exception to prevent the build from proceeding
} finally {
step([$class: 'Mailer', notifyEveryUnstableBuild: true, recipients: '[email protected]', sendToIndividuals: true])
}
}
If you don't need to re-throw the exception, you can use catchError. It is a Pipeline built-in which catches any exception within its scope, prints it into console and sets the build status. For instance:
stage 'Test'
node {
catchError {
sh 'exit 1'
}
step([$class: 'Mailer', notifyEveryUnstableBuild: true, recipients: '[email protected]', sendToIndividuals: true])
}
| Jenkins | 37,169,100 | 42 |
I am trying to do continuous integration with Hudson and MSTest.
When I try to run this job I get the following error:
1 Warnung(en)
0 Fehler
Verstrichene Zeit 00:00:00.13
[workspace] $ sh -xe C:\Windows\TEMP\hudson4419897732634199534.sh
The system cannot find the file specified
FATAL: Befehlsausführung fehlgeschlagen
java.io.IOException: Cannot run program "sh" (in directory "C:\Users\Markus\.hudson\jobs\Test1 Unit TEst\workspace"): CreateProcess error=2, Das System kann die angegebene Datei nicht finden
at java.lang.ProcessBuilder.start(Unknown Source)
at hudson.Proc$LocalProc.<init>(Proc.java:187)
at hudson.Proc$LocalProc.<init>(Proc.java:157)
at hudson.Launcher$LocalLauncher.launch(Launcher.java:649)
at hudson.Launcher$ProcStarter.start(Launcher.java:266)
at hudson.Launcher$ProcStarter.join(Launcher.java:273)
at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:79)
at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:54)
at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:34)
at hudson.model.AbstractBuild$AbstractRunner.perform(AbstractBuild.java:646)
at hudson.model.Build$RunnerImpl.build(Build.java:181)
at hudson.model.Build$RunnerImpl.doRun(Build.java:136)
at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:434)
at hudson.model.Run.run(Run.java:1390)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:40)
at hudson.model.ResourceController.execute(ResourceController.java:81)
at hudson.model.Executor.run(Executor.java:137)
Caused by: java.io.IOException: CreateProcess error=2, Das System kann die angegebene Datei nicht finden
at java.lang.ProcessImpl.create(Native Method)
at java.lang.ProcessImpl.<init>(Unknown Source)
at java.lang.ProcessImpl.start(Unknown Source)
... 17 more
Processing tests results in file results.trx
FATAL: No MSTest TRX test report files were found. Configuration error?
[DEBUG] Skipping watched dependency update for build: Test1 Unit TEst #5 due to result: FAILURE
Finished: FAILURE
My Configuration looks like this:
Buildverfahren
Build a Visual Studio project or solution using MSBuild
MSBuild Version MS Build .NET 4
MSBuild Build File trunk\UnitTestWithNHibernate\UnitTestWithNHibernate.sln
Command Line Arguments /p:Configuration=Release
My Command Line looks like this:
"C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\MSTest.exe"
/runconfig: trunk\UnitTestWithNHibernate\UnitTest\LocalTestRun.testrunconfig /testcontainer: trunk\UnitTestWithNHibernate\UnitTest\bin\Debug\UnitTest.dll /resultsfile:results.trx
| This happens if you have specified your Windows command as "Execute shell" rather than "Execute Windows batch command".
| Jenkins | 15,135,771 | 42 |
This is my composer.json file:
"require": {
"php": ">=5.4",
"zendframework/zendframework": "2.*",
"doctrine/doctrine-module": "dev-master",
"doctrine/doctrine-orm-module": "0.*",
"gedmo/doctrine-extensions": "dev-master"
},
"require-dev": {
"phpunit/phpunit": "3.7.*"
},
"scripts": {
"post-update-cmd": [
"rm -rf vendor/Behat",
"git clone git://github.com/Behat/Behat.git",
"cp composer.phar Behat/composer.phar",
"cd Behat && git submodule update --init",
"cd Behat && php composer.phar install",
"cd Behat && php composer.phar require guzzle/guzzle:3.0.*",
"mv Behat vendor/Behat",
"ln -sf ../Behat/bin/behat vendor/bin/"
]
}
How can I make it so the scripts are only run in the dev environment?
Basically I want the scripts to run only when I call:
php composer.phar update --dev
| To do the non-development environment update without triggering any scripts, use the --no-scripts command line switch for the update command:
php composer.phar update --no-scripts
^^^^^^^^^^^^
By default, Composer scripts are only executed[1] in the base package[2]. So you could have one package for development and, in the live environment, make it a dependency of the live system.
Apart from that, I do not see any way to differentiate scripts automatically.
This answer is back from 2012, the additional options in order of appearance can now safely be listed. As the list is composer-only and the original question smells to fall for x/y, the general answer is to use a build manager.
But turns out this could be done already at the day of asking, as in this composer-json example:
{
"scripts": {
"post-update-cmd": "composer::dev-post-update-cmd",
"dev-post-update-cmd": [
"rm -rf vendor/Behat",
": ... ",
"ln -sf ../Behat/bin/behat vendor/bin/"
]
}
}
1.0.0-alpha6 (2012-10-23): dispatch dev-only scripts with PHP script class and __callStatic, e.g. composer::dev-post-update-cmd. if in_array('--dev', $_SERVER['argv']) then $eventDispatcher->dispatchCommandEvent($name).
1.0.0-alpha7 (2013-05-04): --dev is now default and henceforth optional, --no-dev required for not --dev script dispatching. this effectively changes the command-line in question from: composer update --dev running only to: not running on composer update --no-dev. $event->isDevMode() now available, also required for new second parameter on dispatching the script on the event dispatcher. Compare with answer by Christian Koch.
1.0.0-alpha9 (2014-12-07): autoload-dev now allows to not have the script class in --no-dev automatically by removing it from autoload. this works b/c non-existing classes fall-through w/o error.
1.3.0-RC (2016-12-11): --dev is deprecated. COMPOSER_DEV_MODE environment parameter now available in update/install/dumpautoload scripts. no inherit need any longer for the PHP script class, unless enviroment parameters do not work in your setup, then use a PHP class script for a workaround. Compare with answer by Veda.
Composer 3 (future): passing --dev is announced to become a fatal error. This closes the circle back to pre-1.0.0-alpha7, which failed on the opposite flag, --no-dev. The command-line library in use, then and in the future, cannot cope with such -[-not]-arg pairs symmetrically, and if the pair is divided and one of the two must go, it can only throw.
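As a concrete illustration of the isDevMode() hook mentioned in the 1.0.0-alpha7 entry above, a minimal hypothetical script class (class and method names are placeholders; register it through the project's autoload and reference it from post-update-cmd as "DevScripts::postUpdate") could look like this:
<?php
use Composer\Script\Event;

class DevScripts
{
    public static function postUpdate(Event $event)
    {
        if (!$event->isDevMode()) {
            return; // do nothing on --no-dev installs
        }
        // dev-only commands go here, e.g. the Behat setup from the question
        passthru('rm -rf vendor/Behat && git clone git://github.com/Behat/Behat.git');
    }
}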
References
See "Note:" in What is a script? - Scripts - Composer Docs
Base package is commonly referred to as root package in the Composer documentation.
| Jenkins | 13,087,088 | 42 |
I have a parameterized job that uses the Perforce plugin and would like to retrieve the build parameters/properties as well as the p4.change property that's set by the Perforce plugin.
How do I retrieve these properties with the Jenkins Groovy API?
| Update: Jenkins 2.x solution:
With Jenkins 2 pipeline dsl, you can directly access any parameter with the trivial syntax based on the params (Map) built-in:
echo " FOOBAR value: ${params.'FOOBAR'}"
The returned value will be a String or a boolean depending on the Parameter type itself. The syntax is the same for scripted or declarative syntax. More info at: https://jenkins.io/doc/book/pipeline/jenkinsfile/#handling-parameters
If your parameter name is itself in a variable:
def paramName = "FOOBAR"
def paramValue = params.get(paramName) // or: params."${paramName}"
echo """ FOOBAR value: ${paramValue}"
Original Answer for Jenkins 1.x:
For Jenkins 1.x, the syntax is based on the build.buildVariableResolver built-ins:
// ... or if you want the parameter by name ...
def hardcoded_param = "FOOBAR"
def resolver = build.buildVariableResolver
def hardcoded_param_value = resolver.resolve(hardcoded_param)
Please note the official Jenkins Wiki page covers this in more details as well, especially how to iterate upon the build parameters:
https://wiki.jenkins-ci.org/display/JENKINS/Parameterized+System+Groovy+script
The salient part is reproduced below:
// get parameters
def parameters = build?.actions.find{ it instanceof ParametersAction }?.parameters
parameters.each {
println "parameter ${it.name}:"
println it.dump()
}
| Jenkins | 10,882,515 | 42 |
Jenkins requires a certificate to use the ssh publication and ssh commands. It can be configured under "manage jenkins" -> "Configure System"-> "publish over ssh".
The question is: How does one create the certificates?
I have two ubuntu servers, one running Jenkins, and one for running the app.
Do I set up a Jenkins cert and put part of it on the deployment box, or set up a cert on the deployment box, and put part of it on Jenkins? Does the cert need to be in the name of a user called Jenkins, or can it be for any user? We don't have a Jenkins user on the development box.
I know there are a number of incompatible SSH key types; which does Jenkins require?
Has anyone found a guide on how to set this all up (how to generate keys, where to put them etc.)?
| You will need to create a public/private key as the Jenkins user on your Jenkins server, then copy the public key to the user you want to do the deployment with on your target server.
Step 1, generate public and private key on build server as user jenkins
build1:~ jenkins$ whoami
jenkins
build1:~ jenkins$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/var/lib/jenkins/.ssh/id_rsa):
Created directory '/var/lib/jenkins/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /var/lib/jenkins/.ssh/id_rsa.
Your public key has been saved in /var/lib/jenkins/.ssh/id_rsa.pub.
The key fingerprint is:
[...]
The key's randomart image is:
[...]
build1:~ jenkins$ ls -l .ssh
total 2
-rw------- 1 jenkins jenkins 1679 Feb 28 11:55 id_rsa
-rw-r--r-- 1 jenkins jenkins 411 Feb 28 11:55 id_rsa.pub
build1:~ jenkins$ cat .ssh/id_rsa.pub
ssh-rsa AAAlskdjfalskdfjaslkdjf... [email protected]
Step 2, paste the pub file contents onto the target server.
target:~ bob$ cd .ssh
target:~ bob$ vi authorized_keys (paste in the stuff which was output above.)
Make sure your .ssh dir has permissions 700 and your authorized_keys file has permissions 644
Step 3, configure Jenkins
In the jenkins web control panel, nagivate to "Manage Jenkins" -> "Configure System" -> "Publish over SSH"
Either enter the path of the file e.g. "var/lib/jenkins/.ssh/id_rsa", or paste in the same content as on the target server.
Enter your passphrase, server and user details, and you are good to go!
| Jenkins | 37,331,571 | 41 |
In this integration pipeline in Jenkins, I am triggering different builds in parallel using the build step, as follows:
stage('trigger all builds')
{
parallel
{
stage('componentA')
{
steps
{
script
{
def myjob=build job: 'componentA', propagate: true, wait: true
}
}
}
stage('componentB')
{
steps
{
script
{
def myjob=build job: 'componentB', propagate: true, wait: true
}
}
}
}
}
I would like to access the return value of the build step, so that I can know in my Groovy scripts what job name, number was triggered.
I have found in the examples that the object returned has getters like getProjectName() or getNumber() that I can use for this.
But how do I know the exact class of the returned object and the list of methods I can call on it? This seems to be missing from the Pipeline documentation. I am asking for this case in particular, but generally speaking, how can I know the class of the returned object and its documentation?
| The step documentation is generated based on some files that are bundled with the plugin, which sometimes isn't enough. One easy way would be to just print out the class of the result object by calling getClass:
def myjob=build job: 'componentB', propagate: true, wait: true
echo "${myjob.getClass()}"
This output would tell you that the result (in this case) is a org.jenkinsci.plugins.workflow.support.steps.build.RunWrapper which has published Javadoc.
For other cases, I usually have to dive into the Jenkins source code. Here is my general strategy:
Figure out which plugin the step comes from either by the step documentation, jenkins.io steps reference, or just searching the internet
From the plugin site, go to the source code repository
Search for the String literal of the step name, and find the step type that returns it. In this case, it looks to be coming from the BuildTriggerStep class, which extends AbstractStepImpl
@Override
public String getFunctionName() {
return "build";
}
Look at the nested DescriptorImpl to see what execution class is returned
public DescriptorImpl() {
super(BuildTriggerStepExecution.class);
}
Go to BuildTriggerStepExecution and look at the execution body in the start() method
Reading over the workflow step README shows that something should call context.onSuccess(value) to return a result. There is one place in that file, but that is only on the "no-wait" case, which always returns immediately and is null (source).
if (step.getWait()) {
return false;
} else {
getContext().onSuccess(null);
return true;
}
Ok, so it isn't completing in the step execution, so it must be somwhere else. We can also search the repository for onSuccess and see what else might trigger it from this plugin. We find that a RunListener implementation handles setting the result asynchronously for the step execution if it has been configured that way:
for (BuildTriggerAction.Trigger trigger : BuildTriggerAction.triggersFor(run)) {
LOGGER.log(Level.FINE, "completing {0} for {1}", new Object[] {run, trigger.context});
if (!trigger.propagate || run.getResult() == Result.SUCCESS) {
if (trigger.interruption == null) {
trigger.context.onSuccess(new RunWrapper(run, false));
} else {
trigger.context.onFailure(trigger.interruption);
}
} else {
trigger.context.onFailure(new AbortException(run.getFullDisplayName() + " completed with status " + run.getResult() + " (propagate: false to ignore)"));
}
}
run.getActions().removeAll(run.getActions(BuildTriggerAction.class));
The trigger.context.onSuccess(new RunWrapper(run, false)); is where the RunWrapper result comes from
| Jenkins | 51,103,359 | 41 |
How can I teach my Jenkisfile to login via basic auth in this setup?
I'm using a custom docker image for my Jenkins build.
As described in the documentation here I defined a docker agent like so:
pipeline {
agent {
docker {
image 'registry.az1:5043/maven-proto'
registryUrl 'https://registry.az1'
args '-v /var/jenkins_home/.m2:/root/.m2'
}
}
options {
timeout(time: 1, unit: 'HOURS')
buildDiscarder(logRotator(numToKeepStr:'10'))
}
stages {
stage ('Build') {
steps{
sh ...
}
}
stage ('Test') {
steps {
sh ...
}
}
stage ('Deploy') {
steps {
sh ...
}
}
}
post {
always {
echo 'Clean up workspace'
deleteDir()
}
}
}
If I use the following agent setup:
pipeline {
agent {
docker.withRegistry('https://registry.az1', 'registry_login'){
image 'registry.az1:5043/maven-proto'
registryUrl 'https://registry.az1'
args '-v /var/jenkins_home/.m2:/root/.m2'
}
}
The execution of the pipeline fails with the following exception:
WorkflowScript: 3: Too many arguments for map key "withRegistry" @ line 3, column 16.
docker.withRegistry('https://registry.az1', 'registry_login'){
^
WorkflowScript: 3: Invalid agent type "withRegistry" specified. Must be one of [docker, dockerfile, label, any, none] @ line 3, column 16.
docker.withRegistry('https://registry.az1', 'registry_login'){
^
The problem is that the used registry requires a basic auth login. The registry runs behind a nginx reverse proxy using this configuration.
| As specified in Using a custom registry, you can specify the credentials and registry URL to use as such:
Scripted pipelines syntax:
docker.withRegistry('https://registry.az1', 'credentials-id') {
...
}
Declarative pipelines syntax:
agent {
docker {
image 'registry.az1:5043/maven-proto'
registryUrl 'https://registry.az1'
registryCredentialsId 'credentials-id'
args '-v /var/jenkins_home/.m2:/root/.m2'
}
}
You need to create a Jenkins credentials object which will contain the credentials for the repository and give it a name to replace credentials-id above. Per the documentation, your credentials object in Jenkins needs to have a "Kind" of "Username with password".
| Jenkins | 49,029,379 | 41 |
Jenkins is running on localhost.
I have my repository in GitHub. I have the option to 'Build when a change is pushed to GitHub' checked.
When I click 'Build Now', the build completes successfully, no issues there. But when I commit code to my repository, the automatic build does not happen. I can access GitHub from my system as the repository is public, and I believe Jenkins should be able to detect it too. I know there is a polling option, but I want Jenkins to build when a change is detected in the repository (as this is what we have been trying to achieve).
Configuration:
Jenkins 1.615
Git Plugin 2.3.5
Git Client Plugin 1.17.1
————————————————————————————————————————————
EDIT: "Build when a change is pushed to GitHub" option has been renamed to "GitHub hook trigger for GITScm polling" in most recent version of GitHub plugin.
(thanks to @smrubin's feedback.)
| I suspect you missed the webhook url.
Besides checking the Build when a change is pushed to GitHub option, you should also add the webhook url into your Github repository to get the Auto trigger mechanism to work and here is how:
Go to your Github repository:
Settings--> Webhooks&Services-->Service--> Add Services--> Choose "Jenkins (GitHub plugin)"
Then fill in the Jenkins hook url with your jenkins url like this:
http://your_jenkins_url/github-webhook/
And, VERY IMPORTANT: since you are installing your Jenkins server on localhost, please be aware that you shouldn't fill in the above Jenkins hook URL as http://localhost:8080/github-webhook/, because GitHub is not able to recognize localhost or 127.0.0.1 or 192.168.*.*.
You should use either an externally accessible DNS name or an IP address that GitHub can reach.
| Jenkins | 30,576,881 | 41 |
I have a Jenkins job that builds from a github.com repository's master branch with Maven (mvn clean install), then checks for license headers in Java files and missing NOTICE files, and adds them if necessary (mvn license:format notice:generate). Sometimes this will result in changed or added files, sometimes not.
Whenever any changes have been made (by the license plugin), I want to push the changes to the GitHub repo.
Now I'm having trouble figuring out how best to achieve that. I've added a shell build step after the Maven license plugin where I execute git commands:
git add . # Just in case any NOTICE files have been added
git commit -m "Added license headers"
git add . alone works, i.e., it doesn't break the build, even if no files have been added. However, git commit breaks the build if there aren't any changes at all.
I'm not concerned about pushing back to github, as I believe the Git Publisher post-build action can do this for me. Can someone please point me in the right direction for the git commit?
| git diff --quiet && git diff --staged --quiet || git commit -am 'Added license headers'
This command does exactly what is required ('git commit only if there are changes'), while the commands in the other answers do not: they only ignore any error from git commit.
| Jenkins | 22,040,113 | 41 |
I am able to use the Jenkins API to get information about my build via the url
http://localhost:8080/job/myjob/149/api/json
I want to be able to query the changeSet node using the tree query string parameter. I can successfully query non-indexed nodes like "duration" via
http://localhost:8080/job/myjob/149/api/json?tree=duration
How do I query indexed nodes like changeSet? I can't seem to find any doc anywhere.
{
"actions": [
{
"causes": [
{
"shortDescription": "Started by an SCM change"
}
]
},
{},
{},
{}
],
"artifacts": [],
"building": false,
"description": null,
"duration": 80326,
"estimatedDuration": 68013,
"executor": null,
"fullDisplayName": "my project #149",
"id": "2013-06-14_14-31-06",
"keepLog": false,
"number": 149,
"result": "SUCCESS",
"timestamp": 1371234666000,
"url": "http://localhost:8080/job/my project/149/",
"builtOn": "",
"changeSet": {
"items": [
{
"affectedPaths": [
"SearchViewController.m",
"Sample.strings"
],
"author": {
"absoluteUrl": "http://localhost:8080/user/my user",
"fullName": "My User"
},
"commitId": "9032",
"timestamp": 1371234304048,
"date": "2013-06-14T18:25:04.048031Z",
"msg": "Author:my_author Description: changes Id: B-186199 Reviewer:reviewer_name",
"paths": [
{
"editType": "edit",
"file": "/branches/project_name/iOS/_MainLine/project_name/SearchViewController.m"
},
],
"revision": 9032,
"user": "user_name"
}
],
"kind": "svn",
"revisions": [
{
"module": "repo_url",
"revision": 8953
},
{
"module": "repo_url",
"revision": 9032
}
]
},
"culprits": [
{
"absoluteUrl": "http://localhost:8080/user/username",
"fullName": "username"
}
]
}
| The API documentation has a hint:
A newer alternative is the tree query parameter. [snip] you need only know what elements you are looking for, rather than what you are not looking for (which is anyway an open-ended list when plugins can contribute API elements). The value should be a list of property names to include, with subproperties inside square braces.
For a simple list, get the whole subtree with:
http://jenkins/job/myjob/../api/json?tree=artifacts[*]
or list specific properties within the braces.
For changeSet, use
http://jenkins/job/myjob/../api/json?tree=changeSet[*[*]]
to retrieve everything.
Use nested square braces for specific sub-subproperties, e.g.:
http://jenkins/job/myjob/../api/json?tree=changeSet[items[revision]]
The tree documentation says that it's intended for cases where the caller doesn't know what properties to retrieve.
| Jenkins | 17,236,710 | 41 |
I've set up Jenkins, and it's working well. It uses the Perforce plugin as the SCM, and builds automatically upon a check-in. My issue is that when a user makes a commit to the tree, it auto-creates a user account on the system, but no password is set, and the user cannot log in.
The system is secured on an intranet, and I have set Jenkins to use "Jenkins own user database" and "Logged in users can do anything". The problem is I can't find any way for someone to log in once they have made a commit; their username is shown in the list of auto-created accounts, but no password is ever sent. Is there a default password, or a way to reset it?
The system is running on Ubuntu 12 with Tomcat7 serving the Jenkins front end.
| Users created by SCM are not "full" users. They are created for purposes of showing SCM changes and receiving e-mails. Therefore they need to sign up (using 'Sign Up' icon that appears to the left of of 'log in' icon in the upper right corner) and provide their password. It is advisable for the username to match the SCM name.
Alternatively, a user with a "full" account can go to http://<jenkins-server>/people/ -> click on username -> click on Configure link to the left, and configure the user (I'm not 100% sure if this will work, though, try it).
| Jenkins | 10,805,946 | 41 |
I am trying to create a file called groovy1.txt with the content "Working with files the Groovy way is easy."
Note: I don't want to use the shell to create this file, instead I want to use Groovy to achieve this.
I have following script in my Jenkins pipeline.
node {
def file1 = new File('groovy1.txt')
file1.write 'Working with files the Groovy way is easy.\n'
sh 'ls -l'
// Expecting the file groovy1.txt should present with the content mentioned above
}
But it is throwing FileNotFound (permission denied) error as below
java.io.FileNotFoundException: groovy1.txt (Permission denied)
at java.io.FileOutputStream.open0(Native Method)
at java.io.FileOutputStream.open(FileOutputStream.java:270)
at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
at java.io.FileOutputStream.<init>(FileOutputStream.java:162)
at java.io.FileWriter.<init>(FileWriter.java:90)
at org.codehaus.groovy.runtime.ResourceGroovyMethods.write(ResourceGroovyMethods.java:740)
at org.codehaus.groovy.runtime.dgm$1035.doMethodInvoke(Unknown Source)
at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1213)
at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1022)
at org.codehaus.groovy.runtime.callsite.PojoMetaClassSite.call(PojoMetaClassSite.java:47)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113)
at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:157)
at org.kohsuke.groovy.sandbox.GroovyInterceptor.onMethodCall(GroovyInterceptor.java:23)
at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onMethodCall(SandboxInterceptor.java:104)
at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:155)
at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:159)
at com.cloudbees.groovy.cps.sandbox.SandboxInvoker.methodCall(SandboxInvoker.java:17)
at WorkflowScript.run(WorkflowScript:3)
at ___cps.transform___(Native Method)
at com.cloudbees.groovy.cps.impl.ContinuationGroup.methodCall(ContinuationGroup.java:57)
at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.dispatchOrArg(FunctionCallBlock.java:109)
at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.fixArg(FunctionCallBlock.java:82)
at sun.reflect.GeneratedMethodAccessor257.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.cloudbees.groovy.cps.impl.ContinuationPtr$ContinuationImpl.receive(ContinuationPtr.java:72)
at com.cloudbees.groovy.cps.impl.ConstantBlock.eval(ConstantBlock.java:21)
at com.cloudbees.groovy.cps.Next.step(Next.java:83)
at com.cloudbees.groovy.cps.Continuable$1.call(Continuable.java:174)
at com.cloudbees.groovy.cps.Continuable$1.call(Continuable.java:163)
at org.codehaus.groovy.runtime.GroovyCategorySupport$ThreadCategoryInfo.use(GroovyCategorySupport.java:122)
at org.codehaus.groovy.runtime.GroovyCategorySupport.use(GroovyCategorySupport.java:261)
at com.cloudbees.groovy.cps.Continuable.run0(Continuable.java:163)
at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.access$101(SandboxContinuable.java:34)
at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.lambda$run0$0(SandboxContinuable.java:59)
at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.GroovySandbox.runInSandbox(GroovySandbox.java:108)
at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.run0(SandboxContinuable.java:58)
at org.jenkinsci.plugins.workflow.cps.CpsThread.runNextChunk(CpsThread.java:174)
at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.run(CpsThreadGroup.java:332)
at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.access$200(CpsThreadGroup.java:83)
at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:244)
at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:232)
at org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService$2.call(CpsVmExecutorService.java:64)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at hudson.remoting.SingleLaneExecutorService$1.run(SingleLaneExecutorService.java:131)
at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
at jenkins.security.ImpersonatingExecutorService$1.run(ImpersonatingExecutorService.java:59)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Finished: FAILURE
| Jenkins Pipeline provides writeFile step that can be used to write a file inside job's workspace.
Take a look at following example:
node {
writeFile file: 'groovy1.txt', text: 'Working with files the Groovy way is easy.'
sh 'ls -l groovy1.txt'
sh 'cat groovy1.txt'
}
Running this pipeline scripts generates following output:
[Pipeline] node
Running on Jenkins in /var/jenkins_home/workspace/test-pipeline
[Pipeline] {
[Pipeline] writeFile
[Pipeline] sh
[test-pipeline] Running shell script
+ ls -l groovy1.txt
-rw-r--r-- 1 jenkins jenkins 42 Jul 8 16:38 groovy1.txt
[Pipeline] sh
[test-pipeline] Running shell script
+ cat groovy1.txt
Working with files the Groovy way is easy.[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS
Using Java File
As Jon S mentioned in the comment, Java new File("${env.WORKSPACE}/groovy1.txt") will work only if your node step is executed on master node - if it gets executed on slave node then your pipeline code will fail. You can check following Stack Overflow thread for more information:
In jenkins job, create file using system groovy in current workspace
| Jenkins | 51,233,919 | 40 |
[Symptoms]
Installing Jenkins using the official steps fails with the error message Failed to start LSB: Start Jenkins at boot time.
Reproduce Steps
wget -q -O - https://pkg.jenkins.io/debian-stable/jenkins.io.key | sudo apt-key add -
sudo apt-add-repository "deb https://pkg.jenkins.io/debian-stable binary/"
sudo apt install jenkins
Console log
gaspar@jenkins:~$ sudo apt install jenkins
...
Setting up default-jre-headless (2:1.9-62ubuntu2) ...
Setting up jenkins (2.107.2) ...
Job for jenkins.service failed because the control process exited with error code.
See "systemctl status jenkins.service" and "journalctl -xe" for details.
invoke-rc.d: initscript jenkins, action "start" failed.
● jenkins.service - LSB: Start Jenkins at boot time
Loaded: loaded (/etc/init.d/jenkins; generated)
Active: failed (Result: exit-code) since Thu 2018-04-19 10:03:05 UTC; 9ms ago
Docs: man:systemd-sysv-generator(8)
Process: 27282 ExecStart=/etc/init.d/jenkins start (code=exited, status=7)
Apr 19 10:03:03 evt-jenkins systemd[1]: Starting LSB: Start Jenkins at boot time...
Apr 19 10:03:03 evt-jenkins jenkins[27282]: * Starting Jenkins Automation Server jenkins
Apr 19 10:03:03 evt-jenkins su[27313]: Successful su for jenkins by root
Apr 19 10:03:03 evt-jenkins su[27313]: + ??? root:jenkins
Apr 19 10:03:03 evt-jenkins su[27313]: pam_unix(su:session): session opened for user jenkins by (uid=0)
Apr 19 10:03:03 evt-jenkins su[27313]: pam_unix(su:session): session closed for user jenkins
Apr 19 10:03:05 evt-jenkins jenkins[27282]: ...fail!
Apr 19 10:03:05 evt-jenkins systemd[1]: jenkins.service: Control process exited, code=exited status=7
Apr 19 10:03:05 evt-jenkins systemd[1]: jenkins.service: Failed with result 'exit-code'.
Apr 19 10:03:05 evt-jenkins systemd[1]: Failed to start LSB: Start Jenkins at boot time.
dpkg: error processing package jenkins (--configure):
installed jenkins package post-installation script subprocess returned error exit status 1
...
[Environment]
Ubuntu 18.04 LTS Beta2
Jenkins 2.107.2
| [Root cause]
Ubuntu 18.04 LTS uses Java 9 as the default Java
Jenkins 2.107.2 still requires Java 8
[Solution]
Install Java 8 before installing Jenkins
sudo add-apt-repository ppa:webupd8team/java
sudo apt install oracle-java8-installer
wget -q -O - https://pkg.jenkins.io/debian-stable/jenkins.io.key | sudo apt-key add -
sudo apt-add-repository "deb https://pkg.jenkins.io/debian-stable binary/"
sudo apt-get update
sudo apt install jenkins
| Jenkins | 49,937,743 | 40 |
Is there any environment variable available for getting the Jenkins Pipeline Title?
I know we can use $JOB_NAME to get the title for a freestyle job,
but is there anything that can be used to get the Pipeline name?
| You can access the same environment variables from groovy using the same names (e.g. JOB_NAME or env.JOB_NAME).
From the documentation:
Environment variables are accessible from Groovy code as env.VARNAME or simply as VARNAME. You can write to such properties as well (only using the env. prefix):
env.MYTOOL_VERSION = '1.33'
node {
sh '/usr/local/mytool-$MYTOOL_VERSION/bin/start'
}
These definitions will also be available via the REST API during the build or after its completion, and from upstream Pipeline builds using the build step.
For the rest of the documentation, click the "Pipeline Syntax" link from any Pipeline job
| Jenkins | 41,604,854 | 40 |
I have just started with Jenkins
My freestyle project used to report JUnit test results in Slack like this:
MyJenkinsFreestyle - #79 Unstable after 4 min 59 sec (Open)
Test Status:
Passed: 2482, Failed: 13, Skipped: 62
Now I have moved the same to a pipeline project, and all is good except that the Slack notifications do not have the Test Status:
done MyPipelineProject #68 UNSTABLE
I understand I have to construct the message to send to Slack, and I have done that above for now.
The only issue is how do I read the test status - the passed count, failed count etc.
This is called "test summary" in Jenkins slack-plugin commit, and here is the screenshot
So how do I access the JUnit test counts/details in a Jenkins Pipeline project, so that these are reported in notifications?
UPDATE:
In the Freestyle project, the Slack notification itself has the "test summary", and there is no option to opt (or not) for the test summary.
In the Pipeline project, my "junit" command to "Publish JUnit test results" runs before the Slack notification is sent.
So in code those lines look like this (these are the last lines of the last stage):
bat runtests.bat
junit 'junitreport/xml/TEST*.xml'
slackSend channel: '#testschannel', color: 'normal', message: "done ${env.JOB_NAME} ${env.BUILD_NUMBER} (<${env.BUILD_URL}|Open>)";
| For anyone coming here in 2020, there appears to be a simpler way now. The call to 'junit testResults' returns a TestResultSummary object, which can be assigned to a variable and used later.
As an example to send the summary via slack:
def summary = junit testResults: '/somefolder/*-reports/TEST-*.xml'
slackSend (
channel: "#mychannel",
color: '#007D00',
message: "\n *Test Summary* - ${summary.totalCount}, Failures: ${summary.failCount}, Skipped: ${summary.skipCount}, Passed: ${summary.passCount}"
)
| Jenkins | 39,920,437 | 40 |
I'm doing a build on my Ubuntu 14.04 LTS but I'm getting the following:
Started by user anonymous
Building in workspace /var/lib/jenkins/workspace/videovixx
> /usr/bin/git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
> /usr/bin/git config remote.origin.url https://bitbucket.org/mdennis10/videovixx.git # timeout=10
Fetching upstream changes from https://bitbucket.org/mdennis10/videovixx.git
> /usr/bin/git --version # timeout=10
using .gitcredentials to set credentials
> /usr/bin/git config --local credential.helper store -- file=/tmp/git6236060328558794078.credentials # timeout=10
> /usr/bin/git fetch --tags --progress https://bitbucket.org/mdennis10/videovixx.git +refs/heads/*:refs/remotes/origin/*
> /usr/bin/git config --local --remove-section credential # timeout=10
> /usr/bin/git rev-parse refs/remotes/origin/master^{commit} # timeout=10
> /usr/bin/git rev-parse refs/remotes/origin/origin/master^{commit} # timeout=10
Checking out Revision f5c53e95d33c1e15abd7519346c18ec6bc0c81d7 (refs/remotes/origin/master)
> /usr/bin/git config core.sparsecheckout # timeout=10
> /usr/bin/git checkout -f f5c53e95d33c1e15abd7519346c18ec6bc0c81d7
> /usr/bin/git rev-list f5c53e95d33c1e15abd7519346c18ec6bc0c81d7 # timeout=10
[videovixx] $ mvn install package
FATAL: command execution failed
java.io.IOException: Cannot run program "mvn" (in directory "/var/lib/jenkins/workspace/videovixx"): error=2, No such file or directory
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1047)
at hudson.Proc$LocalProc.<init>(Proc.java:244)
at hudson.Proc$LocalProc.<init>(Proc.java:216)
at hudson.Launcher$LocalLauncher.launch(Launcher.java:802)
at hudson.Launcher$ProcStarter.start(Launcher.java:380)
at hudson.Launcher$ProcStarter.join(Launcher.java:387)
at hudson.tasks.Maven.perform(Maven.java:328)
at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:770)
at hudson.model.Build$BuildExecution.build(Build.java:199)
at hudson.model.Build$BuildExecution.doRun(Build.java:160)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:533)
at hudson.model.Run.execute(Run.java:1745)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:89)
at hudson.model.Executor.run(Executor.java:240)
Caused by: java.io.IOException: error=2, No such file or directory
at java.lang.UNIXProcess.forkAndExec(Native Method)
at java.lang.UNIXProcess.<init>(UNIXProcess.java:186)
at java.lang.ProcessImpl.start(ProcessImpl.java:130)
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1028)
... 15 more
Build step 'Invoke top-level Maven targets' marked build as failure
Archiving artifacts
Recording test results
Finished: FAILURE
I'm assuming that this is caused by some Linux security feature that stops the /var/lib/jenkins/workspace/videovixx directory from being created without the correct permissions, which I might not have. Is this the problem, and how do I solve it?
| There are multiple things here.
You either didn't select Maven version in Job configuration.
Or you didn't configure Jenkins to install a Maven version.
Or you expected to use locally installed Maven on the Slave, but it's not configured for jenkins user.
Since I don't know what you've configured (or didn't configure) and what you expected to use, I can't answer directly, but I can explain how it works.
If you want to use locally installed Maven on master/slave
You must have Maven locally installed
You must be able to launch it with jenkins user
Execute sudo jenkins, and then execute mvn on your Slave to verify that the jenkins user can run mvn (see the sketch after this list)
If that fails, you need to properly install/configure Maven
In Job configuration, for Maven Version, you must select Default. This is the setting that uses the version that's installed locally on the node
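As a quick illustration of the check mentioned in the list above (a sketch only; the exact shell invocation and the assumption of a Linux node are mine, not the original answer's):
# Run Maven as the jenkins user to confirm it is installed and reachable for that account
sudo -u jenkins -H bash -c 'mvn -version'
If this prints a Maven version, selecting Default in the job configuration should work; if it fails, fix the local Maven installation for the jenkins user first.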
If you want to have Jenkins install Maven for you
You must go to Jenkins Global Tool Configuration, and configure a Maven version with automatic installer (from the web).
In Job configuration, for Maven Version, you must select that particular version that you've just configured.
| Jenkins | 26,906,972 | 40 |
I'm trying to ssh from Jenkins to a local server but the following error is thrown:
[SSH] Exception:Algorithm negotiation fail
com.jcraft.jsch.JSchException: Algorithm negotiation fail
at com.jcraft.jsch.Session.receive_kexinit(Session.java:520)
at com.jcraft.jsch.Session.connect(Session.java:286)
at com.jcraft.jsch.Session.connect(Session.java:150)
at org.jvnet.hudson.plugins.SSHSite.createSession(SSHSite.java:141)
at org.jvnet.hudson.plugins.SSHSite.executeCommand(SSHSite.java:151)
at org.jvnet.hudson.plugins.SSHBuildWrapper.executePreBuildScript(SSHBuildWrapper.java:75)
at org.jvnet.hudson.plugins.SSHBuildWrapper.setUp(SSHBuildWrapper.java:59)
at hudson.model.Build$BuildExecution.doRun(Build.java:154)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:533)
at hudson.model.Run.execute(Run.java:1754)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:89)
at hudson.model.Executor.run(Executor.java:240)
Finished: FAILURE
Installed version of Java on SSH server:
java version "1.8.0_25"
Java(TM) SE Runtime Environment (build 1.8.0_25-b18)
Java HotSpot(TM) 64-Bit Server VM (build 25.25-b02, mixed mode)
Installed version of java on client:
java version "1.8.0_25"
Java(TM) SE Runtime Environment (build 1.8.0_25-b18)
Java HotSpot(TM) 64-Bit Server VM (build 25.25-b02, mixed mode)
Also tried this solution:
JSchException: Algorithm negotiation fail
but it's not working. From PuTTY everything seems to be OK. The connection is established, but when I trigger the Jenkins job the error is thrown. Should I try another version of the SSH server? Right now I'm using copssh.
| TL;DR edit your sshd_config and enable support for diffie-hellman-group-exchange-sha1 and diffie-hellman-group1-sha1 in KexAlgorithms:
KexAlgorithms [email protected],ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1,diffie-hellman-group-exchange-sha1,diffie-hellman-group1-sha1
I suspect that the problem appeared after the following change in OpenSSH 6.7: "The default set of ciphers and MACs has been altered to remove unsafe algorithms.". (see changelog). This version was released on Oct, 6, and made it on Oct, 21 to Debian testing (see Debian changelog).
OpenSSH enables only the following key exchange algorithms by default:
[email protected]
ecdh-sha2-nistp256
ecdh-sha2-nistp384
ecdh-sha2-nistp521
diffie-hellman-group-exchange-sha256
diffie-hellman-group14-sha1
Whereas JSch claims to support these algorithms (see under "features") for key exchange:
diffie-hellman-group-exchange-sha1
diffie-hellman-group1-sha1
So indeed, they cannot agree on a common key exchange algorithm. Updating sshd_config (and restarting the SSH server) does the trick. Apparently JSch is supposed to support the "diffie-hellman-group-exchange-sha256" method since version 0.1.50 (see changelog).
| Jenkins | 26,424,621 | 40 |
I am keeping a shell script file named urltest.sh in /var/lib/jenkins and executing the file from jenkins build.
When I execute the build, It fails.
The Environment Variables are -
HOME - /var/lib/jenkins ;
JENKINS_HOME - /var/lib/jenkins
The console output comes as:
Started by user anonymous
Building in workspace /var/lib/jenkins/workspace/AutoScript
[AutoScript] $ /bin/sh -xe /tmp/hudson2777728063740604479.sh
+ sh urltest.sh
sh: 0: Can't open urltest.sh
Build step 'Execute shell' marked build as failure
Finished: FAILURE
Where should I keep the shell script file so that it is executed?
| Based on the number of views this question has, it looks like a lot of people are visiting this to see how to set up a job that executes a shell script.
These are the steps to execute a shell script in Jenkins:
In the main page of Jenkins select New Item.
Enter an item name like "my shell script job" and chose Freestyle project. Press OK.
On the configuration page, in the Build block click in the Add build step dropdown and select Execute shell.
In the textarea you can either paste a script or indicate how to run an existing script. So you can either say:
#!/bin/bash
echo "hello, today is $(date)" > /tmp/jenkins_test
or just
/path/to/your/script.sh
Click Save.
Now the newly created job should appear in the main page of Jenkins, together with the other ones. Open it and select Build now to see if it works. Once it has finished pick that specific build from the build history and read the Console output to see if everything happened as desired.
You can get more details in the document Create a Jenkins shell script job in GitHub.
| Jenkins | 21,276,351 | 40 |
Below is my build script (not using xcodebuild plugin).
Build step works
I have created a separate keychain with the required certs and private keys, and they are visible in Keychain Access
keychain commands don't fail in the script
security list-keychains shows these as valid keychains
It's acting like unlock command doesn't truly succeed.
When I try to run codesign from the command line via
codesign -f -s "iPhone Developer: mycert" -v sample.app/ --keychain /Users/Shared/Jenkins/Library/Keychains/JenkinsCI.keychain
I get
CSSM_SignData returned: 000186AD
sample.app/: unknown error -2070=fffffffffffff7ea
although I'm not sure I'm emulating the build environment from the command line properly, since the closest you can get is
sudo -u jenkins bash
xcodebuild ONLY_ACTIVE_ARCH="NO" CODE_SIGN_IDENTITY="" CODE_SIGNING_REQUIRED="NO" -scheme "MySchemeName" CONFIGURATION_BUILD_DIR="`pwd`"
security list-keychains -s /Users/Shared/Jenkins/Library/Keychains/JenkinsCI.keychain
+ security default-keychain -d user -s /Users/Shared/Jenkins/Library/Keychains/JenkinsCI.keychain
+ security unlock-keychain -p jenkins /Users/Shared/Jenkins/Library/Keychains/JenkinsCI.keychain
+ security list-keychains
"/Users/Shared/Jenkins/Library/Keychains/JenkinsCI.keychain"
"/Library/Keychains/System.keychain"
+ security default-keychain
"/Users/Shared/Jenkins/Library/Keychains/JenkinsCI.keychain"
+ codesign -f -s '$IDENTITY_GOES_HERE.' -v sample.app/
sample.app/: User interaction is not allowed.
Any help is greatly appreciated.
| We don't use Jenkins but I've seen this in our build automation before. Here's how we solved it:
1) Create your build Keychain. This will contain the private key/certificate used for codesigning:
security create-keychain -p [keychain_password] MyKeychain.keychain
The keychain_password is up to you. You'll use this later to unlock the keychain during the build.
2) Import the private key (*.p12) for your CodeSign identity:
security import MyPrivateKey.p12 -t agg -k MyKeychain.keychain -P [p12_Password] -A
The key here is the "-A" flag. This will allow access to the keychain without warning. This is why you're seeing the "User interaction is not allowed" error. If you were attempting this build via the Xcode UI, this is the point where it would prompt you to "Allow access" to your keychain.
3) However you're saving the Keychain (e.g.: checking it in to source control), make sure it's writeable and executable by your build user.
When you're ready to build, add the following prior to running xcodebuild:
# Switch keychain
security list-keychains -s "/path/to/MyKeyhain.keychain"
security default-keychain -s "/path/to/MyKeychain.keychain"
security unlock-keychain -p "[keychain_password]" "/path/to/MyKeychain.keychain"
If you're running locally, you may want to add something at the end of your build script that switches back to the login keychain (~/Library/Keychains/login.keychain), e.g.:
# Switch back to login keychain
security list-keychains -s "~/Library/Keychains/login.keychain"
security default-keychain -s "~/Library/Keychains/login.keychain"
Give that a try. We create a separate Keychain for each identity we use (our own plus builds on behalf of customers). In our company's case, we have both an AppStore and Enterprise account. This can result in naming conflicts while codesigning (e.g.: both accounts resolve to "iPhone Distribution: ACME Corporation"). By keeping these identities in separate keychains we avoid this conflict.
| Jenkins | 16,550,594 | 40 |
Just installed Jenkins in Ubuntu 12.04 and I wanted to create a simple build that just clones a project and builds it.
It fails because it cannot tag. It cannot tag because it errors out saying "tell me who you are" apparently because I didn't set git settings UserName and UserEmail.
But I should not need to set those: Jenkins is just going to clone the repository. Why does it need the credentials if it's not going to push changes, and why does it need to create a tag at all?
Full error log is:
Started by user anonymous
Checkout:workspace / /var/lib/jenkins/jobs/Foo.Bar.Baz/workspace - hudson.remoting.LocalChannel@38e609c9
Using strategy: Default
Cloning the remote Git repository
Cloning repository origin
Fetching upstream changes from [email protected]:foo-bar-baz/foo-bar-baz.git
Seen branch in repository origin/1.0
Seen branch in repository origin/1.5.4
Seen branch in repository origin/HEAD
Seen branch in repository origin/master
Commencing build of Revision 479d37776b46283a946dd395c1ea78f18c0b97c7 (origin/1.0)
Checking out Revision 479d37776b46283a946dd395c1ea78f18c0b97c7 (origin/1.0)
FATAL: Could not apply tag jenkins-Foo.Bar.Baz-2
hudson.plugins.git.GitException: Could not apply tag jenkins-Foo.Bar.Baz-2
at hudson.plugins.git.GitAPI.tag(GitAPI.java:737)
at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1320)
at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1268)
at hudson.FilePath.act(FilePath.java:758)
at hudson.FilePath.act(FilePath.java:740)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1268)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1193)
at hudson.model.AbstractBuild$AbstractRunner.checkout(AbstractBuild.java:565)
at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:453)
at hudson.model.Run.run(Run.java:1376)
at hudson.matrix.MatrixBuild.run(MatrixBuild.java:220)
at hudson.model.ResourceController.execute(ResourceController.java:88)
at hudson.model.Executor.run(Executor.java:175)
at hudson.model.OneOffExecutor.run(OneOffExecutor.java:66)
Caused by: hudson.plugins.git.GitException: Command "git tag -a -f -m Jenkins Build #2 jenkins-Foo.Bar.Baz-2" returned status code 128:
stdout:
stderr:
*** Please tell me who you are.
Run
git config --global user.email "[email protected]"
git config --global user.name "Your Name"
to set your account's default identity.
Omit --global to set the identity only in this repository.
fatal: empty ident <jenkins@somehostname.(none)> not allowed
at hudson.plugins.git.GitAPI.launchCommandIn(GitAPI.java:786)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:748)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:758)
at hudson.plugins.git.GitAPI.tag(GitAPI.java:735)
... 13 more
| The idea of tagging when pulling/cloning a repo is common to most build schedulers out there:
Hudson/Jenkins, but also CruiseControl (where the build label is determined by the labelincrementer), or RTC Jazz Build Engine (where they are called "snapshots").
The idea is to set a persistent record of the input to a build.
That way, the code you are pulling, even if it wasn't tagged, is tagged automatically for you by the build scheduler, in order to be able to get back to that specific build later.
If that policy (always tagging before a build) is set, then Jenkins will need to know who you are in order to make a git tag (it is a git object with an author attached to it: user.name and user.email).
However, as mentioned in "Why hudson/jenkins tries to make commit?":
Check the "Skip internal tag" config under "Advanced..." in the "Source code management" section.
That should avoid that extra tagging step you appear to not need.
| Jenkins | 11,122,913 | 40 |
I want to build a project using two Git repositories. One of them contains the source code, while the other has the build and deployment scripts.
My problem is that I need to have a repository for building and deployment of different parts of the project (big project, multiple repositories, same build and deployment scripts), but Jenkins does not seem to be able to handle this (or I don't know/didn't find how).
| UPDATE
Multiple SCMs Plugin is now deprecated so users should migrate to Pipeline plugin.
Old answer
Yes, Jenkins can handle this. Just use Multiple SCMs under Source Code Management, add your repositories and then go to the Advanced section of each repository. Here you need to set Local subdirectory for repo (optional) and Unique SCM name (optional).
Each repository will be pulled into the local subdirectory you have set, so you can then build them in any order you want.
Updating per harishs answer - you need to install Multiple SCMs Plugin in order to achieve this functionality.
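Since the plugin is deprecated in favour of Pipeline (see the update above), the same idea can be sketched in a scripted pipeline; the repository URLs and folder names below are placeholders, not part of the original answer:
node {
    // Check out each repository into its own subdirectory...
    dir('source') {
        git url: 'https://example.com/your/source-repo.git'
    }
    dir('build-scripts') {
        git url: 'https://example.com/your/build-scripts-repo.git'
    }
    // ...then run the build/deployment scripts in whatever order you need,
    // e.g. sh 'build-scripts/build.sh'
}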
| Jenkins | 16,538,198 | 39 |
We are thinking of moving our CI from Jenkins to GitLab. We have several projects that have the same build workflow. Right now we use a shared library where the pipelines are defined, and the Jenkinsfile inside each project only calls a method defined in the shared library that defines the actual pipeline. So changes only have to be made at a single point, affecting several projects.
I am wondering if the same is possible with GitLab CI. As far as I have found out, it is not possible to define the gitlab-ci.yml outside the repository. Is there another way to define a pipeline and share this config with several projects to simplify maintenance?
| GitLab 11.7 introduces new include methods, such as include:file:
https://docs.gitlab.com/ee/ci/yaml/#includefile
include:
- project: 'my-group/my-project'
ref: master
file: '/templates/.gitlab-ci-template.yml'
This will allow you to create a new project on the same GitLab instance which contains a shared .gitlab-ci.yml.
| Jenkins | 47,790,403 | 39 |
I have Jenkins running as a Docker container. Now I want to build a Docker image using a pipeline, but the Jenkins container always reports that Docker is not found.
[simple-tdd-pipeline] Running shell script
+ docker build -t simple-tdd .
/var/jenkins_home/workspace/simple-tdd-pipeline@tmp/durable-
ebc35179/script.sh: 2: /var/jenkins_home/workspace/simple-tdd-
pipeline@tmp/durable-ebc35179/script.sh: docker: not found
Here is how I run my Jenkins image:
docker run --name myjenkins -p 8080:8080 -p 50000:50000 -v
/var/jenkins_home -v /var/run/docker.sock:/var/run/docker.sock
jenkins
And the DockerFile of Jenkins image is:
https://github.com/jenkinsci/docker/blob/9f29488b77c2005bbbc5c936d47e697689f8ef6e/Dockerfile
| You're missing the Docker client. Install it like this in the Dockerfile:
RUN curl -fsSLO https://get.docker.com/builds/Linux/x86_64/docker-17.04.0-ce.tgz \
&& tar xzvf docker-17.04.0-ce.tgz \
&& mv docker/docker /usr/local/bin \
&& rm -r docker docker-17.04.0-ce.tgz
Source
| Jenkins | 44,850,565 | 39 |
When using the Jenkins pipeline where each stage runs on a different agent, it is good practice to use agent none at the beginning:
pipeline {
agent none
stages {
stage('Checkout') {
agent { label 'master' }
steps { script { currentBuild.result = 'SUCCESS' } }
}
stage('Build') {
agent { label 'someagent' }
steps { bat "exit 1" }
}
}
post {
always {
step([$class: 'Mailer', notifyEveryUnstableBuild: true, recipients: "[email protected]", sendToIndividuals: true])
}
}
}
But doing this leads to a "Required context class hudson.FilePath is missing" error message when the email should go out:
[Pipeline] { (Declarative: Post Actions)
[Pipeline] step
Required context class hudson.FilePath is missing
Perhaps you forgot to surround the code with a step that provides this, such as: node
[Pipeline] error
[Pipeline] }
When I change from agent none to agent any, it works fine.
How can I get the post step to work without using agent any?
| Wrap the step that does the mailing in a node step:
post {
always {
node('awesome_node_label') {
step([$class: 'Mailer', notifyEveryUnstableBuild: true, recipients: "[email protected]", sendToIndividuals: true])
}
}
}
| Jenkins | 44,531,003 | 39 |
I just upgraded my project to Asp.Net 4, from 3.5. When the build kicks off from TeamCity, I get the following error:
[Project "Website.metaproj" (Rebuild target(s)):] C:\Windows\Microsoft.NET\Framework\v4.0.30319\aspnet_compiler.exe -v /Website -p Website\ -u -f PrecompiledWeb\Website\
[12:11:50]: [Project "Website.metaproj" (Rebuild target(s)):] ASPNETCOMPILER error ASPCONFIG: Could not load file or assembly 'Microsoft.VisualBasic.Activities.Compiler' or one of its dependencies. An attempt was made to load a program with an incorrect format.
[12:11:50]: MSBuild output:
[12:11:50]: Copying file from "C:\Program Files\TeamCity\buildAgent\work\8bbb8fc03bd91944\Dependencies\wnvxls.dll" to "Website\\Bin\wnvxls.dll".
[12:11:50]: Copying file from "C:\Program Files\TeamCity\buildAgent\work\8bbb8fc03bd91944\Dependencies\wnvxls.xml" to "Website\\Bin\wnvxls.xml".
[12:11:50]: C:\Windows\Microsoft.NET\Framework\v4.0.30319\aspnet_compiler.exe -v /Website -p Website\ -u -f PrecompiledWeb\Website\
[12:11:50]: ASPNETCOMPILER : error ASPCONFIG: Could not load file or assembly 'Microsoft.VisualBasic.Activities.Compiler' or one of its dependencies. An attempt was made to load a program with an incorrect format. [C:\Program Files\TeamCity\buildAgent\work\8bbb8fc03bd91944\Website.metaproj]
[12:11:50]: Done Building Project "C:\Program Files\TeamCity\buildAgent\work\8bbb8fc03bd91944\Website.metaproj" (Rebuild target(s)) -- FAILED.
[12:11:50]: Done Building Project "C:\Program Files\TeamCity\buildAgent\work\8bbb8fc03bd91944\MyProject.sln" (Rebuild target(s)) -- FAILED.
[12:11:50]: Done Building Project "C:\Program Files\TeamCity\buildAgent\work\8bbb8fc03bd91944\MyProject.sln.teamcity.patch.tcprojx" (TeamCity_Generated_Build;TeamCity_Generated_NUnitTests target(s)) -- FAILED.
[12:11:50]: Build FAILED.
[12:11:50]: "C:\Program Files\TeamCity\buildAgent\work\8bbb8fc03bd91944\MyProject.sln.teamcity.patch.tcprojx" (TeamCity_Generated_Build;TeamCity_Generated_NUnitTests target) (1) ->
[12:11:50]: "C:\Program Files\TeamCity\buildAgent\work\8bbb8fc03bd91944\MyProject.sln" (Rebuild target) (2) ->
[12:11:50]: "C:\Program Files\TeamCity\buildAgent\work\8bbb8fc03bd91944\Website.metaproj" (Rebuild target) (3) ->
[12:11:50]: "C:\Program Files\TeamCity\buildAgent\work\8bbb8fc03bd91944\MyProject.Other\MyProject.Other.csproj" (Rebuild target) (5) ->
[12:11:50]: (CoreCompile target) ->
[12:11:50]: Helpers\ProgramHelper.cs(40,21): warning CS0168: The variable 'ex' is declared but never used [C:\Program Files\TeamCity\buildAgent\work\8bbb8fc03bd91944\MyProject.Other\MyProject.Other.csproj]
[12:11:50]: "C:\Program Files\TeamCity\buildAgent\work\8bbb8fc03bd91944\MyProject.sln.teamcity.patch.tcprojx" (TeamCity_Generated_Build;TeamCity_Generated_NUnitTests target) (1) ->
[12:11:50]: "C:\Program Files\TeamCity\buildAgent\work\8bbb8fc03bd91944\MyProject.sln" (Rebuild target) (2) ->
[12:11:50]: "C:\Program Files\TeamCity\buildAgent\work\8bbb8fc03bd91944\Website.metaproj" (Rebuild target) (3) ->
[12:11:50]: (Rebuild target) ->
[12:11:50]: ASPNETCOMPILER : error ASPCONFIG: Could not load file or assembly 'Microsoft.VisualBasic.Activities.Compiler' or one of its dependencies. An attempt was made to load a program with an incorrect format. [C:\Program Files\TeamCity\buildAgent\work\8bbb8fc03bd91944\Website.metaproj]
[12:11:50]: 1 Warning(s)
[12:11:50]: 1 Error(s)
[12:11:50]: Time Elapsed 00:00:31.48
I tried copying the .net framework reference assemblies to C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework.NETFramework\v4.0 thinking that would fix things, but still no luck. Any thoughts?
| For me it was indeed an x86/x64 mismatch. I solved it by specifying the path to the x64 MSBuild through the MSBuild environment variable:
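As a rough illustration only (the concrete value was not included above): on a default .NET 4.0 installation the 64-bit MSBuild lives under the Framework64 directory, so the environment variable would point at something like
MSBuild=C:\Windows\Microsoft.NET\Framework64\v4.0.30319\MSBuild.exe
so that the build (and the ASP.NET precompilation it launches) runs as a 64-bit process.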
| TeamCity | 3,055,633 | 20 |
I need to run some code only if I'm running from within the TeamCity test launcher. What's the easiest way to detect this?
| Check if TEAMCITY_VERSION environment variable is defined.
Another approach is to use NUnit categories.
Based on the comment below this code should be able to check if the test is being run by teamcity:
private static bool IsOnTeamCity()
{
string environmentVariableValue = Environment.GetEnvironmentVariable("TEAMCITY_VERSION");
if (!string.IsNullOrEmpty(environmentVariableValue))
{
return true;
}
return false;
}
| TeamCity | 1,907,479 | 20 |
I'm trying to use assembly info patcher to create a version number something like:
1.2.3.1a3c19e
where the last bit is the git short hash.
I've tried using a PowerShell script build step to create the short hash (as I can't find a variable that has it) and adding this to a system variable, but this build step appears to run after the AssemblyInfo patcher, so it isn't much use.
| If you want to write this to the Assembly Info field it can be done, but it requires a separate build configuration to generate the build number. The sole purpose of this step is to create the build number that has the hash appended to it.
1. Create a build configuration to generate the short hash
2. Add a step to generate the hash (see the sketch after this list)
3. Add a parameter to store the hash
4. Add a second build configuration and add a dependency to the first one
5. You can now consume the parameter in the dependent step
6. At this point you can use it in the assembly info patcher
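A sketch of steps 2-3 (the PowerShell runner and the parameter name GitShortHash are assumptions, not part of the original answer):
# PowerShell build step in the hash-generating configuration
$hash = (git rev-parse --short HEAD).Trim()
# TeamCity service message that stores the value in the configuration parameter from step 3
Write-Output "##teamcity[setParameter name='GitShortHash' value='$hash']"
The dependent configuration from step 4 can then reference it (for example as %dep.<hash_config_id>.GitShortHash%) and feed it into the AssemblyInfo patcher's version format.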
The alternative to this is to write your build number back to Git using the VCS labeling build feature.
Hope this helps.
| TeamCity | 30,416,789 | 20 |
I need to exclude some files from TC's artifacts during my ASP MVC project's build. These files include web.debug.config files, but there are others as well.
At the moment the Artifact path setting in TC looks like this:
src/Project.Web/*.config => arch.zip
I need somehow to tell it to skip the web.debug.config file.
I tried this and it doesn't work:
src/Project.Web/*.config => arch.zip
-src/Project.Web/*.debug.config
So, ideally, I don't want these files in the arch.zip that is created during the build.
| Starting from TC10 it's possible. In your case it would be:
+:src/Project.Web/*.config => arch.zip
-:src/Project.Web/*.debug.config => arch.zip
| TeamCity | 16,040,016 | 20 |
How can I copy the artifacts from TeamCity to another server?
Thanks
| The way I have done this makes things a lot easier. Set up another configuration that pulls in, via artifact dependencies, all the files you need, then run a cmd script to xcopy/copy the files to another drive on the network. You can do this using a cmd script, VBS, Python, shell, etc.
Remember, you only need to refer to directories as if they were local, since your script runs in the same working directory,
i.e. cmd script: xcopy .\"my build artifact(s)" \path\to\drive\on\my\network\"my build artifacts"
It doesn't get easier than that.
Naturally, if your artifacts are huge, you may want to consider a more complicated option. However, TeamCity currently has a ticket pending, which you can vote on, that allows you to run multiple runners in one configuration - so you could just add your cmd script to the same configuration to save the copy time; please vote if you can spare a minute:
http://youtrack.jetbrains.net/issue/TW-3660
| TeamCity | 2,545,677 | 20 |
The default path for teamcity artifacts is
C:\#User#\.BuildServer\system\artifacts
How can I change it to
d:\TeamCity\Artifacts
Thanks
| For me the default is D:\BuildServer\system\artifacts
Yes you can, set the TEAMCITY_DATA_PATH environment variable.
See here: http://www.jetbrains.net/confluence/display/TCD4/TeamCity+Data+Directory
By default, the TeamCity data directory is placed in the user's home directory (e.g. it is $HOME/.BuildServer under Linux and C:\Documents and Settings\<user>\.BuildServer under Windows). Alternatively, you can define this directory in one of the following ways:
As a Tomcat system property teamcity.data.path (see System Properties for Running the Server)
In the TEAMCITY_DATA_PATH environment variable (this will be used if the teamcity.data.path JVM system property is not found)
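For example, on a Windows server this could be set as a machine-wide environment variable (a sketch; the path is only an illustration, and the TeamCity server needs to be restarted to pick it up):
setx TEAMCITY_DATA_PATH "D:\TeamCity\Data" /M
The artifacts then end up under D:\TeamCity\Data\system\artifacts, since the whole data directory moves rather than just the artifacts folder.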
| TeamCity | 2,092,604 | 20 |
I am setting up TeamCity and I am wondering what should be used as the VCS Root.
My svn repository is located at http://obfuscatedserver/svn/main/MyProject1/
Should I set the VCS Root at http://obfuscatedserver/svn/main/MyProject1/ or use the trunk folder at http://obfuscatedserver/svn/main/MyProject1/trunk/ ?
Right now I am not using the trunk folder and I had to set the Build Runner "Build file path" setting to "trunk/MyProject1.proj" (using msbuild).
Which location is the appropriate one?
| I would recommend using http://obfuscatedserver/svn/main/ as the VCS Root, and then restricting which folders are checked out using checkout rules.
Add the following checkout rules (section 2 of the build config):
+:/MyProject1/trunk
You will probably also need to update the location of your msbuild file to
MyProject1/trunk/MyProject1.proj
and set the working directory to
MyProject1/trunk
This does seem like a lot of work, but next time you want to add a new build, you don't have to create a new VCSroot.
However, the real benefit comes when TeamCity polls your SVN repo. Polling your repo once will discover all the changes for all your builds. This is especially important if your repository is hosted somewhere like sourceforge or googlecode. You certainly don't want to be polling their servers for every build you have configured.
Also, if your repo is hosted by a third party, you might want to set the vcsRoot's Checking interval to once an hour or similar. You can always ask teamcity to check for pending changes from the actions menu on any of the build overview pages if you can't be bothered waiting for the hour to elapse.
| TeamCity | 1,560,969 | 20 |
I have a Git setup with the typical master --> develop --> feature structure. I have 5 TeamCity (v8.1) build agents. Is it possible to configure TeamCity so that if multiple people commit to develop at the same time, the develop branch won't run concurrent builds? Part of our CI process is deploy-on-success, so I don't want two builds to be deploying to the same endpoint at the same time.
(I would want this setup for all branches, not just develop)
| On the General Settings configuration page you can set the number of simultaneous builds to 1 instead of 0 (unlimited). This means that it can queue up, say, 5 builds, but only 1 will run at a time.
| TeamCity | 21,761,138 | 19 |
How can I create a git tag after successful build in Team City?
| You can use VCS Labeling build feature to tag successful builds in TeamCity.
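As a small illustration (the pattern itself is an assumption; %system.build.number% is a built-in TeamCity parameter), the feature's labeling pattern could be something like
build-%system.build.number%
so that builds end up tagged in Git with their build number; the remaining details (which VCS root to label, which builds) are configured in the feature's settings.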
| TeamCity | 28,836,775 | 19 |
I upgraded to TeamCity 10.0 this morning, and since the upgrade, TC cannot connect to my Subversion server. The error I see is:
Test connection failed in MyProject
Error connecting the specified URL:
svn: E200015: Server SSL certificate for 'https://svnserver:8443' rejected
There was no issue with the cert prior to upgrading to v10. Is there something I need to do now to allow TC to get to SVN over SSL?
| TeamCity 10.0 seems to have added an option to 'VCS Root' under 'Subversion Connection Settings' to 'Enable non-trusted SSL certificate'. Checking that option fixed those errors for me.
| TeamCity | 38,534,050 | 19 |
We're using TeamCity 7 and wondered if it's possible to have a step run only if a previous one has failed? Our options in the build step configuration give you the choice to execute only if all steps were successful, even if a step failed, or always run it.
Is there a means to execute a step only if a previous one failed?
| There's no way to set up a step to execute only if a previous one failed.
The closest I've seen to this is to set up a build that has a "Finish Build" trigger that would always execute after your first build finishes (regardless of success or failure).
Then in that second build, you could use the TeamCity REST API to determine whether the last execution of the first build was successful or not. If it wasn't successful, then you could do whatever it is you want to do.
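A sketch of that REST call (the build configuration ID bt3 and the credentials are placeholders; the builds endpoint with a locator is the same one quoted elsewhere in this document):
# Fetch the most recent build of the upstream configuration and inspect its status attribute (SUCCESS/FAILURE)
curl -s -u user:password "http://teamcity:8111/httpAuth/app/rest/builds/?locator=buildType:bt3,count:1"
The second build's script can parse the returned XML and run its failure-handling logic only when the status is not SUCCESS.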
| TeamCity | 19,689,093 | 19 |
I'm trying to run my karma (version v0.10.2) unit tests on teamcity (version 7.1).
When I run karma start --reporters teamcity --single-run I get the following error:
Can not load "teamcity", it is not registered! Perhaps you are missing some plugin?
I have installed the karma-teamcity-reporter module, but that hasn't helped.
The following are installed in my local node_modules folder:
karma
karma-chrome-launcher
karma-coffee-preprocessor
karma-coverage
karma-firefox-launcher
karma-html2js-preprocessor
karma-jasmine
karma-phantomjs-launcher
karma-requirejs
karma-script-launcher
karma-teamcity-reporter
Here is my karma.conf.js:
module.exports = function(karma) {
karma.set({
// base path, that will be used to resolve files and exclude
basePath: '../../myapplication.web',
frameworks: ['jasmine'],
plugins: [
'karma-jasmine',
'karma-coverage',
'karma-chrome-launcher',
'karma-phantomjs-launcher'
],
// list of files / patterns to load in the browser
files: [
'Scripts/jquery/jquery-2.0.2.min.js',
'Scripts/jquery-ui/jquery-ui-1.10.3.min.js',
'Scripts/daterangepicker/daterangepicker.js',
'Scripts/angular/angular.js',
'Scripts/angular/restangular/underscore-min.js',
'Scripts/angular/restangular/restangular-min.js',
'Scripts/angular/angular-*.js',
'Scripts/angular/angular-test/angular-*.js',
'Scripts/angular/angular-ui/*.js',
'Scripts/angular/angular-strap/*.js',
'Scripts/angular/angular-http-auth/*.js',
'Scripts/sinon/*.js',
'Scripts/moment/moment.min.js',
'uifw/scripts/ui-framework-angular.js',
'app/app.js',
'app/**/*.js',
'Tests/unit/**/*.js'
],
// list of files to exclude
exclude: [
'Scripts/angular/angular-test/angular-scenario.js'
],
// test results reporter to use
// possible values: 'dots', 'progress', 'junit'
reporters: ['progress', 'coverage', 'teamcity'],
preprocessors : {
'app/**/*.js': ['coverage']
},
coverageReporter : {
type: 'html',
dir: 'Tests/coverage/'
},
// web server port
port : 9876,
// cli runner port
runnerPort : 9100,
// enable / disable colors in the output (reporters and logs)
colors : true,
// level of logging
// possible values: LOG_DISABLE || LOG_ERROR || LOG_WARN || LOG_INFO || LOG_DEBUG
logLevel : karma.LOG_INFO,
// enable / disable watching file and executing tests whenever any file changes
autoWatch : true,
// Start these browsers, currently available:
// - Chrome
// - ChromeCanary
// - Firefox
// - Opera
// - Safari (only Mac)
// - PhantomJS
// - IE (only Windows)
browsers: ['PhantomJS'],
// If browser does not capture in given timeout [ms], kill it
captureTimeout : 60000,
// Continuous Integration mode
// if true, it capture browsers, run tests and exit
singleRun : true
});
};
If I run karma start karma.conf.js it runs correctly. What am I doing wrong?
| Turned out I needed to add karma-teamcity-reporter to the plugins section to get this to work:
...
plugins: [
'karma-teamcity-reporter',
'karma-jasmine',
'karma-coverage',
'karma-chrome-launcher',
'karma-phantomjs-launcher'
],
...
| TeamCity | 19,514,395 | 19 |
I added a self-signed certificate to my Teamcity BuildServer to introduce https support so that it can now be accessed at
https://ServerUrl:8443
(More details about how here )
The result was that I was able to access the server via https, but my build agent was now disconnected. How do I fix this?
| The build agent works as a client to the build server and communicates with it using http/https, and it turns out that when you add a self-signed certificate the build agent does not accept it.
I needed to
Let the build agent know the new path for communicating with the server
Let the build agent know that it could trust the self-signed certificate
To change the path I did the following (see this post for more details )
Locate the file:
$TEAMCITY_HOME/buildAgent/conf/buildAgent.properties
Change the property
serverUrl=http:\://localhost\:8080 to your new url
To let the build agent know that it could trust the new certificate, I had to import it into the build agent's key store. This was done using keytool:
keytool -importcert -file <cert file>
-keystore <agent installation path>/jre/lib/security/cacerts
( unless you've changed it, the keystore is protected by password: changeit)
The TeamCity team describes this process in slightly more details here
NOTE
If you need to retrieve your certificate from the TeamCity buildserver keystore, you can also use keytool to do this :
keytool -export -alias <alias name>
-file <certificate file name>
-keystore <Teamcity keystore path>
| TeamCity | 14,980,207 | 19 |
How do you pass the artifact paths to a script in TeamCity?
The scenario is this
Build Project
Deploy Project (with an artifact dependency to #1)
Step 2 consists of a script which
Stops a service (to unlock files)
Copies the build artifacts to the server
Restarts the service
I'm struggling with step 2, I figure I need to pass the path of the build artifacts into the script but I can't see how you do it?
| We do something like this. It is not 100% clear but it looks like you want to do the build and deployment as two separate builds in TeamCity with an artifact dependency from the deployment build on the main build which is exactly what we do. Here is how we do it.
Setup your artifacts from the main build which it sounds like you have already done.
Example: **\bin\release\*.* => bin
Set up the artifact dependency (we also do a snapshot dependency, but you don't have to) to pull your artifacts from the main build and put them into a local folder in your deployment build.
Example: Artifacts paths: bin\**\*.* Destination path: bin\
We use a mixture of MSBuild and PowerShell for doing the actual deployment work. In each case you can reference the artifacts using a relative path.
If the build work folder looks like this:
root
|- bin (Artifacts pulled in from main build)
|- src
|- build (Where your build and deployment scripts live)
You would access the bin files from your deployment script located in the build folder like:
..\bin\[your files]
You can then pass the path to your build artifacts like this
%teamcity.build.checkoutDir%\bin\
| TeamCity | 10,354,187 | 19 |
I have created a new application using Entity Framework 4.3 database migrations. The migrations work great from the package manager console using the "update-database" command.
Now I want to run the database migrations every time the application is built using TeamCity; it looks like I need to create a PowerShell script that will do this.
Can anyone point me to some instructions on how to get the package manager commands to run from the command line, or PowerShell? All I can find are instructions on how to do this via the package manager console, which I don't know how to run from a TeamCity build step.
| migrate.exe is what I was looking for, it is found in "packages\EntityFramework.4.3.1\tools".
Add a new build step in Team City using:
Runner type: command line
Command executable: packages\EntityFramework.4.3.1\tools\migrate.exe
Command parameters: MyApplicationName /StartupDirectory:MyApplicationName\bin
| TeamCity | 9,868,252 | 19 |
Has anyone had any success with running StyleCop from TeamCity?
I know StyleCop supports a command-line mode; however, I am not sure how this will integrate into the report output by TeamCity.
I've checked out this plugin found here: https://bitbucket.org/metaman/teamcitydotnetcontrib/src/753712db5df7/stylecop/
However, I could not get it running.
I am using TeamCity 6.5.1 (latest).
| I don't know how familiar you are with MSBuild, but you should be able to add a new Build Step in TC 6 and above, and set MSBuild as the build runner, and point it to a .proj file which does something similar to the following:
<Target Name="StyleCop">
<!-- Create a collection of files to scan -->
<CreateItem Include="$(SourceFolder)\**\*.cs">
<Output TaskParameter="Include" ItemName="StyleCopFiles" />
</CreateItem>
<StyleCopTask
ProjectFullPath="$(MSBuildProjectFile)"
SourceFiles="@(StyleCopFiles)"
ForceFullAnalysis="true"
TreatErrorsAsWarnings="true"
OutputFile="StyleCopReport.xml"
CacheResults="true" />
<Xslt Inputs="StyleCopReport.xml"
RootTag="StyleCopViolations"
Xsl="tools\StyleCop\StyleCopReport.xsl"
Output="StyleCopReport.html" />
<XmlRead XPath="count(//Violation)" XmlFileName="StyleCopReport.xml">
<Output TaskParameter="Value" PropertyName="StyleCopViolations" />
</XmlRead>
<Error Condition="$(StyleCopViolations) > 0" Text="StyleCop found $(StyleCopViolations) broken rules!" />
</Target>
If you don't want to fail the build on a StyleCop error, then set the Error task to be Warning instead.
You'll also need to add the following to your .proj file:
<UsingTask TaskName="StyleCopTask" AssemblyFile="$(StyleCopTasksPath)\Microsoft.StyleCop.dll" />
Microsoft.StyleCop.dll is included in the StyleCop installation, and you'll need to set your paths appropriately.
To see the outputted StyleCop results in TeamCity, you will need to transform the .xml StyleCop report to HTML using an appropriate .xsl file (called StyleCopReport.xsl in the script above).
To display the HTML file in TeamCity, you'll need to create an artifact from this .html output, and then include that artifact in the build results.
The Continuous Integration in .NET book is a great resource.
| TeamCity | 6,370,278 | 19 |
Does anyone know how to use the TeamCity REST API to find out which builds are currently running, and how far through they are (elapsed time vs estimated time)?
| The URL returns what you are asking for, including percentage complete.
http://teamcityserver/httpAuth/app/rest/builds?locator=running:true
<builds count="1">
<build id="10" number="8" running="true" percentageComplete="24" status="SUCCESS" buildTypeId="bt3" startDate="20110714T210916+1200" href="/httpAuth/app/rest/builds/id:10" webUrl="http://phillipn02:29000/viewLog.html?buildId=10&buildTypeId=bt3"/>
</builds>
Source: http://devnet.jetbrains.net/message/5291132#5291132.
The relevant line on the REST API documentation is the one that reads "http://teamcity:8111/httpAuth/app/rest/builds/?locator= - to get builds by "build locator"." in the "Usage" section.
This works with TeamCity version 6.5; I haven't tried it on earlier versions, but I suspect it will work back to version 5.
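To poll this from a script or another tool, a plain HTTP request with basic auth is enough; for example with curl (credentials and host are placeholders):
curl -u buildUser:secret "http://teamcityserver/httpAuth/app/rest/builds?locator=running:true"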
| TeamCity | 4,750,963 | 19 |
I'm working on a C#/VB.Net project that uses SVN and TeamCity build server. A dozen or so assemblies are produced by the build. I want to control the assembly versions so that they all match up and also match the TeamCity build label.
I've configured TeamCity to use a build label of
Major.Minor.{Build}.{Revision}
Where Major and Minor are constants that I set manually, {Revision} is determined by the SVN repository version at checkout and {Build} is a TeamCity auto-incrementing build counter. So an example build label would be
2.5.437.4423
What techniques would you suggest to ensure that all of the assembly versions match the TeamCity build label?
| I'd suggest using TeamCity's AssemblyInfo patcher build feature:
http://confluence.jetbrains.net/display/TCD65/AssemblyInfo+Patcher
Just create your projects from VisualStudio, configure the build feature in the BuildSteps page (see http://confluence.jetbrains.net/display/TCD65/Adding+Build+Features), and as long as you keep the default AssemblyInfo.cs file, it will work.
This approach is working great for me.
Advantages:
Developers can build the solution in their machines.
You don't need to touch the .sln or .csproj files. It just works.
By using TeamCity variables you can easily make the version number match some other project's version, etc.
Disadvantages:
You can't easily switch to another CI server from TeamCity because you don't have a build script (but switching CI servers is like switching ORM or database: it is very unlikely and will require a lot of work anyway).
| TeamCity | 1,223,245 | 19 |
I deploy a website through TeamCity using the WebDeploy method:
web.csproj /P:Configuration=%env.Configuraton% /P:DeployOnBuild=True
/P:DeployTarget=MSDeployPublish
/P:MsDeployServiceUrl=%env.DeployServiceUrl%
/P:AllowUntrustedCertificate=True /P:MSDeployPublishMethod=WMSvc
/P:CreatePackageOnPublish=True /P:UserName=%env.DeployUserName%
/P:Password=%env.DeployPassword%
The error I receive constantly:
[MSDeployPublish] VSMSDeploy (35s)
[VSMSDeploy] C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v10.5\Web\Microsoft.Web.Publishing.targets(4196,5): error ERROR_EXCEEDED_MAX_SITE_CONNECTIONS: Web deployment task failed. (The maximum number of connections for this site has been exceeded. Learn more at: http://go.microsoft.com/fwlink/?LinkId=221672#ERROR_EXCEEDED_MAX_SITE_CONNECTIONS.)
The TeamCity agent has Visual Studio 2010 Express installed; .NET Framework version: 4.0.
| I have fixed this problem by restarting the Web Management Service in Services.
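If this keeps recurring, the restart can also be scripted as a build step or run from an elevated prompt; WMSVC is the default service name of the Web Management Service (verify the name in services.msc first):
net stop WMSVC
net start WMSVC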
| TeamCity | 18,248,488 | 18 |
I have inherited a TeamCity server and I am reviewing the configuration. To make it a little easier, I would like to rename a build agent from a long random name to something a little more easily identifiable.
However, I cannot find any options to change the name in the Agent Summary page.
Has anyone got a way to change the build agent name?
| You need to edit the name field in the buildAgent.properties file on the agent itself:
name=change-this-name
Depending on where you installed the TeamCity Agent, on Windows the file may live at C:\TeamCity\buildAgent\conf\buildAgent.properties or on Linux at /home/teamcity/buildagent/conf/buildAgent.properties.
| TeamCity | 36,158,402 | 18 |
I have a build chain with two projects: A is the root project, B depends on it. B has two dependencies configured: an artifact and a snapshot dependency. One build configuration for B has an environment variable (parameter) set. However, I also need this parameter set for the root project A.
Is there any way in TeamCity 9 to pass a build configuration parameter from a project to its dependency (in the same build chain)?
| Since TeamCity 9.0 it is possible to override the dependencies parameters by redefining them in the dependent build:
reverse.dep.<btID>.<property name>
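For example, to push a value into a dependency whose build configuration ID is Project_BuildA (both the ID and the parameter name below are made up), define this parameter on the dependent build:
reverse.dep.Project_BuildA.env.MY_SETTING=someValue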
| TeamCity | 28,822,099 | 18 |
I have done a number of changes to a build configuration in TeamCity 8. I know I can see an audit trail of the changes that I have done to the build configuration and I can check the details of each individual change, but I wonder if I can select one of those previous versions of the build configuration and restore it; there doesn't seem to be any obvious option in TeamCity for this.
For the avoidance of doubt, I'm not after reverting changes in the source code, but in the build configuration of TeamCity. I changed a few parameters, build steps, triggers, etc., and I want to revert those changes.
| You are right, there is no obvious option in TeamCity to roll back to a previous version.
However, all TeamCity build configurations are maintained in an XML file on the local disk of the build server. The files are created in a rolling format (the latest config is called config.xml, the one previous to it is config-1.xml). If you can figure out from the audit page which exact XML you want to roll back to, you can copy the backed-up config.xml over the recent one, or you can make the changes manually.
I would recommend playing with this on a test target first and then doing it on the original target.
| TeamCity | 25,085,047 | 18 |
I'm having trouble with my NuGet Installer build step.
We're using both official NuGet.org packages and our own packages hosted on the TeamCity NuGet server. If I leave Packages Sources blank, then packages from nuget.org are found, but as soon as I specify %teamcity.nuget.feed.server% as the package source, then packages from nuget.org are not found.
I tried setting Packages Sources to include both, but it still isn't working for official nuget.org packages.
https://nuget.org/api/v2/
%teamcity.nuget.feed.server%
Is that not the right URL for the nuget.org package source? How do I tell it to use both sources?
I asked this on the JetBrains Developer discussion board, but haven't gotten any responses.
| Had same problem, funny enough my Nuget sources were specified as
https://www.nuget.org/api/v2/
http://nugetserver/nuget
Adding a forward slash to the second URL to make it http://nugetserver/nuget/ fixed the problem.
Took me a while to figure out. Now my Nuget-installer build step is running fine.
| TeamCity | 12,897,747 | 18 |
I have been studying MSBuild as I have the need to automate my development shop's builds. I was able to easily write a .BAT file that invokes the VS command prompt and passes my MSBuild commands to it. This works rather well and is kinda nifty.
Here is the contents of my .BAT build file:
call "C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\bin\amd64\vcvars64.bat"
cd C:\Sandbox\Solution
msbuild MyTopSecretApplication.sln /p:OutputPath=c:\TESTMSBUILDOUTPUT /p:Configuration=Release,Platform=x86
pause
^ This works well but I now have the need to use the MSBuild task for TeamCity CI. I have tried to write a few MSBuild scripts but I cannot get them to work the same. What is the equivalent build script to the command I am using in my .BAT file? Any ideas?
I have tried using something like this, but no success (I know this is wrong):
<?xml version="1.0"?>
<project name="Hello Build World" default="run" basedir=".">
<target name="build">
<mkdir dir="mybin" />
<echo>Made mybin directory!</echo>
<csc target="exe" output="c:\TESTMSBUILDOUTPUT">
<sources>
<include name="MyTopSecretApplication.sln"/>
</sources>
</csc>
<echo>MyTopSecretApplication.exe was built!</echo>
</target>
<target name="clean">
<delete dir="mybin" failonerror="false"/>
</target>
<target name="run" depends="build">
<exec program="mybin\MyTopSecretApplication.exe"/>
</target>
What I simply need is an MSBuild XML build script that compiles a single solution for Release mode to a specified output directory. Any help?
| Use the MSBuild task to build the solution passing the properties you need.
<?xml version="1.0" encoding="utf-8"?>
<Project
xmlns="http://schemas.microsoft.com/developer/msbuild/2003"
ToolsVersion="4.0"
DefaultTargets="Build">
<PropertyGroup>
<OutputDir>c:\TESTMSBUILDOUTPUT</OutputDir>
</PropertyGroup>
<ItemGroup>
<ProjectToBuild Include="MySecretApplication.sln">
<Properties>OutputPath=$(OutputDir);Configuration=Release</Properties>
</ProjectToBuild>
</ItemGroup>
<Target Name="Build">
<MSBuild Projects="@(ProjectToBuild)"/>
</Target>
</Project>
| TeamCity | 5,119,913 | 18 |
I have many build configurations in TeamCity, each servicing a large project. In the past if a build is kicked off the Build Agent could be busy for up to 20min!
In order to improve throughput I installed a second Build Agent on the same machine such that if a build run is kicked off by say Build Agent 1 and it is busy for 20min and someone from another project makes a change then Build Agent 2 can do the build for the other project without needing to wait on the current build run to finish.
All was well until two successive check-ins resulted in both Build Agents running a build for a single build configuration in parallel. Since some resources are shared, IIS directories & databases, I don't want a single build configuration to run on both Build Agents in parallel.
How can I ensure a build isn't triggered if a build is currently running for that build configuration on a different build agent?
One way seems to involve environmental variables and ensuring a 50/50 split by Build Agent in terms of build configuration compatibility, but that seems a little clunky.
| You can "Limit the number of simultaneously running builds" for a build configuration (general settings page).
Set it to 1, to fulfil your task.
| TeamCity | 5,060,790 | 18 |
I have set up multiple targets in a single xml file. I expect all targets to run, but only the first target gets executed.
Here is a simplified version of what I am trying to do:
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
<Target Name="T1">
<Copy SourceFiles="c:\temp\a.txt" DestinationFolder="C:\temp2\" />
</Target>
<Target Name="T2">
<Copy SourceFiles="c:\temp\b.txt" DestinationFolder="C:\temp2\" />
</Target>
</Project>
I'm running the build from the TeamCity CI Server and the log reports Process exit code: 0.
Anyone got any ideas why it does not run T2?
| You need to tell MSBuild about your multiple targets
Try
<Target Name="Build" DependsOnTargets="T1; T2">
</Target>
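Alternatively, if you'd rather not add a wrapper target, you can (as far as I know) list both targets explicitly when invoking MSBuild, either on the command line (the file name below is a placeholder) or in the Targets field of TeamCity's MSBuild runner:
msbuild mytargets.proj /t:T1;T2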
| TeamCity | 1,112,913 | 18 |
Are there real tangible differences or is it just a matter of taste?
| Getting CruiseControl set up and maintained takes more time than TeamCity (where you can set up an automated project (sln) build in a matter of minutes). TeamCity also has a couple of very nice features, such as reporting build failure (via email, jabber, web site) immediately, so you don't have to wait for x minutes.
Version 4 (currently EAP) also has a feature that runs failed tests first, so you know if you fixed the build quickly.
So... my vote goes for teamcity, unless your team is so big you have to pay for it... In that case, I don't know.
| TeamCity | 242,339 | 18 |
I am using Team City 7.1.1 (build 24074), and I would like to exclude some namespaces in code coverage.
I am using dotcover as code coverage tool.
I am using MSPec, Machine.Fakes and Rhino Mocks in my tests.
Thanks!
| Finally I have found the way to exclude NAMESPACES:
-:assemblyName;type=nameSpace.*
| TeamCity | 12,729,952 | 17 |
I have gone through the documentation for TeamCity on build artifact outputs
(https://confluence.jetbrains.com/display/TCD8/Configuring+General+Settings#ConfiguringGeneralSettings-ArtifactPaths)
However, it doesn't seem clear to me as to how I can output a standard file from the build checkout directory, AND rename it when placing it into the build's artifacts.
I can do this pretty easily using archive file designations. For example:
%system.teamcity.build.checkoutDir%\TestProject.Installer\DiskImages\*.exe => setup-1.0.%build.counter%.zip
However, this would just simply zip up the executable installer file as a zip file with my renamed specification, where I actually just want it to stay as an .exe file. The problem I can see is that this rename convention only works on archive file types according to the above TeamCity linked documentation.
So is it possible to rename an executable file that is fetched from the build checkout directory and place it into the build artifacts?
|
Add command line step which will rename the artifact
ren Release\oldname.exe newname_%build.number%.exe
Define artifact as path to the renamed file.
newname_%build.number%.exe
| TeamCity | 26,280,244 | 17 |
I've recently started seeing the above error with ever-increasing frequency on our build server. Nothing has changed in our TeamCity configuration during this period, so I'm guessing it might be changes at GitHub that are causing the error.
I've tried changing our VCS polling interval from 60s to 600s in case GitHub was doing some kind of connection throttling, but there has been no effect.
Is it possible to make TeamCity less sensitive to connection timeouts?
| I've figured out the answer.
TeamCity has no issues - it's actually AZURE that has a problem.
For proof, try doing this in your server, where TC is installed.
(command line, of course)
C:\git\bin\git.exe clone https://github.com/libgit2/libgit2.git
and this will fail most of the time.
So AZURE has a networking bug and they know about it and are trying to resolve the issue.
This info was provided via GitHub after they worked with Azure to figure out what was going on.
Conclusion
You have to use SSH KEYS as a current workaround.
| TeamCity | 21,400,320 | 17 |
I don't want Build Config A and Build Config B to run at the same time. This is because they share the same resource which cannot be accessed simultaneously. However each build config is run by a separate agent so it is possible for them to run simultaneously.
Instead I would like one build config, when triggered, to wait for the other to finish if it is running. For example if Build Config B begins to run but Build Config A is already running, then B would wait until A finishes and then B would run.
I don't think a snapshot dependency will work because that assumes one config has a dependency on the other which is not true in my case.
| Keith, there are two plugins that can help you:
The first one is the Groovy plugin. It can create named locks across all projects.
The second one is TeamCity.SharedResources. It can define shared resources and lock them with read and write locks. However, resources defined in this plugin are defined per-project. We are actively developing this plugin, so you are welcome to watch its page in our tracker
| TeamCity | 14,468,161 | 17 |
I am using msdeploy to deploy an ASP.NET MVC web application via TeamCity.
I am using a parameters.xml file to manipulate my application's web.config, specifically the application settings section.
I have some Settings where it is only valid to have a value for a specific environment and the rest of the time the value should be blank (ie, Property should only have a value on Production). However, MSDeploy gives me this Exception when I do not specify a value:
Microsoft.Web.Deployment.DeploymentException:
The 'facebookUserToken' argument cannot be null or empty.
at Microsoft.Web.Deployment.DeploymentSyncParameterValidation.Validate(String parameterName, String parameterValue)
at Microsoft.Web.Deployment.DeploymentSyncParameter.set_Value(String value)
at Microsoft.Web.Deployment.DeploymentSyncParameterCollection.LoadFromFile(XPathNavigator nav, String fileName, Boolean ignoreExtraSetParameters)
at Microsoft.Web.Deployment.DeploymentSyncParameterCollection.Load(Stream stream, String fileName, Boolean ignoreExtraSetParameters)
at Microsoft.Web.Deployment.DeploymentSyncParameterCollection.Load(String fileName, Boolean ignoreExtraSetParameters)
at MSDeploy.MSDeploy.HandleSetParameters(DeploymentObject sourceObject, Random random)
at MSDeploy.MSDeploy.ExecuteWorker()
at MSDeploy.MSDeploy.Execute()
at MSDeploy.MSDeploy.Main(String[] unusedArgs)
How can I configure MSDeploy to allow a parameter to have an empty value?
web.config:
<applicationSettings>
<SO.Example>
<setting name="FacebookUserToken" serializeAs="String">
<value></value>
</setting>
</SO.Example>
</applicationSettings>
parameters.config:
<parameter name="facebookUserToken" description="" defaultValue="">
<parameterEntry kind="XmlFile" scope="Web.config"
match="XPath removed for readability">
</parameterEntry>
</parameter>
| I ran across this issue a while back and found the solution at Richard Szalay's blog. You need to add the parameterValidation to your parameter declaration:
<parameters>
<parameter name="ReplaceVariable"
description="Sample variable that allows empty values" defaultValue="">
<parameterValidation kind="AllowEmpty" />
<parameterEntry type="TextFile" scope="Web\.config$" match="TextToReplace" />
</parameter>
</parameters>
So for your specific case:
<parameter name="facebookUserToken" description="" defaultValue="">
<parameterValidation kind="AllowEmpty"/>
<parameterEntry kind="XmlFile" scope="Web.config"
match="XPath removed for readability">
</parameterEntry>
</parameter>
| TeamCity | 25,663,912 | 17 |
We have been writing specifications for our JavaScript business logic using Jasmine. We're able to run our test suite within a browser, but how would we integrate this within TeamCity? Preferably we do not want to use NodeJS, rather something as simple as possible.
| I have created a modified version of run-jasmine.js that is found in the PhantomJS sources (the original version is here). This version can be used within TeamCity (it will automatically detect that it is running in TeamCity). This updated version uses TeamCity service messages, which allows for a nice integration.
You will need PhantomJS. You'll also need one of the following:
run-jasmine.js (for Jasmine 1.x).
run-jasmine.js (for Jasmine 2.x).
Add a build step in your TeamCity build configuration that can run this step:
phantomjs.exe run-jasmine.js index.html
index.html is your Jasmine runner page. If the build agents do not include PhantomJS, you can commit it to your repository along with your sources (this is what we do).
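For reference, the integration works by the runner writing TeamCity service messages to standard output, roughly along these lines (the spec name is a placeholder):
##teamcity[testStarted name='MyWidget shows a greeting']
##teamcity[testFailed name='MyWidget shows a greeting' message='Expected true to be false']
##teamcity[testFinished name='MyWidget shows a greeting']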
The result shows up as a normal TeamCity test report, with an overall pass/fail summary and per-test details (screenshots omitted here).
The above is from a Tasks sample ASP.NET MVC project with this setup. It can be run in TeamCity using a Visual Studio (sln) build step. It will also run the tests within Visual Studio, as a pre-build step.
| TeamCity | 21,185,246 | 17 |
I use Teamcity to build different packages and want to save those Packages as Artifacts. My Artifact Path in TeamCity is the following:
%system.teamcity.build.workingDir%\**\Release**/*.wsp => Solution
Now TeamCity collects all WSP files in any Release directory after building correctly, but they are saved including all the subdirectories of their paths.
I only want the .wsp-File directly under "solution" without the directory tree.
| From TeamCity docs:
wildcard — to publish files matching Ant-like wildcard pattern ("*" and "**" wildcards are only supported). The wildcard should represent a
path relative to the build checkout directory. The files will be
published preserving the structure of the directories matched by the
wildcard (directories matched by "static" text will not be created).
That is, TeamCity will create directories starting from the first
occurrence of the wildcard in the pattern.
http://confluence.jetbrains.net/display/TCD65/Configuring+General+Settings#ConfiguringGeneralSettings-artifactPaths
In your build script (or an additional final build step) you will have to copy the necessary files to a single folder and publish that folder as artifacts.
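For example, a small flatten.cmd committed next to the solution and invoked as the last build step could do it (folder names are assumptions):
rem Collect every *.wsp found under the working directory into one flat folder
if not exist _artifacts mkdir _artifacts
for /r %%f in (*.wsp) do copy /Y "%%f" _artifacts\
The artifact path then becomes simply _artifacts\*.wsp => Solution.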
| TeamCity | 7,902,893 | 17 |
I am new to TeamCity. I have my projects in different repositories. I want to check out my projects into different subfolders, e.g.
Let's suppose that I have the following 3 .NET projects in three different repositories:
Framework
XYZ
MyProject
Each project is stored in its own repository. MyProject contains a solution file, which expects that the Framework and XYZ project folders are in the main folder, so that the folder structure looks like this:
+FrameWork
-ProjectFile
-.........
+XYZ
-ProjectFile
+MyProject
-SolutionFile(has references of both Projects.)
Now my problem is that I want to check out my projects from different repositories into their own folders. How do I configure this in TeamCity?
Thanks
| You would need to configure each VCS Root in Version Control Settings. For each root, you can
specify what folders are of interest to you with the Checkout Rules. When creating the checkout rules, you have the option to leave the folder structure the same as it is in your VCS, or you can remap the structure to suit your needs.
http://confluence.jetbrains.net/display/TCD5/2.Version+Control+Settings
http://confluence.jetbrains.net/display/TCD5/VCS+Checkout+Rules
In order to solve the given problem, the following checkout rules need to be applied on the corresponding version control roots:
+:.=>FrameWork
+:.=>XYZ
+:.=>MyProject
| TeamCity | 4,737,114 | 17 |
I'm trying to run a simple WatiN test through TeamCity, but the Internet Explorer window is never shown as it usually is via CruiseControl.
I get an error that it can't find a text field, so something is running. But I can't see what without the window.
Is there a specific change to the setup of TeamCity server that I need to do?
| Found this on another forum
All credits go to Matt Baker
For future reference to anyone who attempts to run WatiN tests automatically using TeamCity. You must start your build agent using \bin\agent.bat start and NOT as a service. WatiN requires a full UI to execute properly and it doesn't get this environment as a service. I hope this makes it easier for other people!
| TeamCity | 488,443 | 17 |
I'm compiling a NAnt project on Linux with the TeamCity Continuous Integration server. I have been able to generate a test report by running NAnt on Mono through a Command Line Runner, but I don't get the option of using the report the way a NAnt Runner would give me. I'm also using MBUnit for the testing framework.
How can I merge in the test report and display "Tests failed: 1 (1 new), passed: 3049" for the build?
Update: take a look at MBUnitTask; it's a NAnt task that sends the messages that TeamCity expects from NUnit, so it lets you use all of TeamCity's features for tests.
MBUnitTask
Update: Gallio has better support, so you just have to reference the Gallio MBUnit 3.5 dlls instead of the MBUnit 3.5 dlls and switch to the Gallio runner to make it work.
| Gallio now has an extension to output TeamCity service messages.
Just use the included Gallio.NAntTasks.dll and enable the TeamCity extension. (this won't be necessary in the next release)
| TeamCity | 3,143 | 17 |
On the builds server I have set up TeamCity (8.1.1) so that it executes the build process if there are changes in either the master, one of the feature branches or one of the pull request branches using the branch specifier:
+:refs/heads/*
+:refs/pull/(*/merge)
I have turned on the build agent option:
teamcity.git.use.local.mirrors=true
which clones the repository in a directory outside the build directory and then pulls from that local repository.
The build process needs access to the git repository and the master branch, even for builds of one of the feature branches or pull request branches. However TeamCity only has the branch that contains the changes in the local repository thereby making my builds fail, e.g. when the change was on the issue/mycoolissue branch then that is the only branch that exists in the git repository in the TeamCity working space.
I have tried performing a local git fetch to get the master branch but because the local repository does not have the master branch this fails. While I could add a remote pointing to the origin (a github private repository) that would mean that I'd have to handle credentials as well and I'd rather have TeamCity take care of all of that for me.
My question is whether there is a way to tell TeamCity to just pull all the branches into both the local repository and the working repository?
| Starting from TeamCity 10.0.4, you can do that by adding a configuration parameter teamcity.git.fetchAllHeads=true. See here
| TeamCity | 23,733,970 | 16 |
Is there a simple way to have TeamCity include a text or html change-log as one of its output artifacts?
Perhaps I need to go down the route of having msbuild or some other process create the change log but as TeamCity generates one for every build, I'm wondering if there is already a simple way to access it as an artifact and include it in the artifact paths directives so that it can be part of a release package.
| Yes, the change-log is accessible as a file, path to this file is in the TeamCity build parameter:
%system.teamcity.build.changedFiles.file%
So you could do this:
Add a command-line build step to your build.
Use type Custom Script.
Enter this script:
copy "%system.teamcity.build.changedFiles.file%" changelog.txt
Finally edit the artifact rules for your build to include the changelog.txt in your artifacts (General settings -> Artifact paths -> Add "changelog.txt").
| TeamCity | 4,317,409 | 16 |
I recently updated my TeamCity to the newest version (10.0, build 42002).
Since then the build agent can't build any of my projects.
The agent tells me the following:
Unmet requirements: DotNetFramework4.0_x86 exists
To solve this problem I already did what was suggested in this stackoverflow question:
TeamCity Agent Missing DotNetFramework4.0_x86, but not?
Sadly it doesn't work. So I looked at the log files but didn't find anything weird.
Then I looked at the agent configuration parameters. I found this:
DotNetFramework4.6.01055_x64_Path C:\Windows\Microsoft.NET\Framework64\v4.0.30319
DotNetFramework4.6.01055_x86_Path C:\Windows\Microsoft.NET\Framework\v4.0.30319
DotNetFramework4.6_x64 4.6.01055
DotNetFramework4.6_x64_Path C:\Windows\Microsoft.NET\Framework64\v4.0.30319
DotNetFramework4.6_x86 4.6.01055
DotNetFramework4.6_x86_Path C:\Windows\Microsoft.NET\Framework\v4.0.30319
As you can see the .NET 4.0 Framework is mapped to DotNetFramework4.6. For me this seems to be the problem.
Does someone have an idea what I can do to fix this?
| I used the work around from Greg B found here to solve the problem.
To get the agent back running you need to insert the following lines into the agent's config (for example located at C:\TeamCity\buildAgent\conf\buildAgent.properties):
DotNetFramework4.0_x86_Path=C\:\\Windows\\Microsoft.NET\\Framework\\v4.0.30319
DotNetFramework4.0_x86=4.0.30319
DotNetFramework4.0_x64_Path=C\:\\Windows\\Microsoft.NET\\Framework64\\v4.0.30319
DotNetFramework4.0_x64=4.0.30319
I stopped the agent in the windows services
I pasted the parameters in the buildAgent.properties
I started the agent in the windows services
As far as I understand JetBrains fixed a bug in TeamCity and because of this the .NET Frameworks will not be found anymore.
Quote from Evgeniy Koshkin
...in case your tool targeting .net 4.0 as its required runtime you
actually should avoid installing .net 4.5(6) on your build agents. in
that case TeamCity will report that .net 4.0 runtime is available. But
i don't think this limitation of installed .net version makes sence in
most of the cases. Before this bug was fixed TeamCity reports the fact
'.net 4.0 was previously a runtime on this agent' as '.net 4.0 is a
runtime on this agent'. It's a buggy behaviour in my point of view.
| TeamCity | 38,695,121 | 16 |
I've recently updated to TeamCity 9.1.6 to run my new unit tests based on NUnit 3.2.1. But now I'm having trouble running the tests:
I've selected the NUnit3 executor in the build steps and configured it accordingly.
When building, I get an error: "Could not load file or assembly 'nunit.framework' or one of its dependencies. The system cannot find the file specified.".
Everything should be fine, the paths are fine, the assembly is in the path of the Test assembly, everything is built in AnyCPU configuration.
There's also the error stating that the NUnit version is not a release version, which I think is bullshit; it's a release on the NUnit website. And the error doesn't seem to break anything (it was present even when I had an error before the 'nunit.framework' error, and when I fixed that one, the build got further).
Any leads appreciated!
UPDATE:
Running tests using a Command Line runner and running that same nunit3-console.exe works fine. So I guess this is a NUnit runner specific problem. Still, suggestions are welcome on how to fix this.
UPDATE 2:
I tried downgrading both the solution package and the NUnit-Console used by TeamCity to 3.0.0 - still, same result.
UPDATE 3:
As I've suspected, TeamCity support confirmed that the message about "NUnit version not being supported" is a faulty one, and shouldn't affect anything.
| I had the same problem with TeamCity 10.0.1 (build 42078) and NUnit 3.4.1.
And it turned out to be completely my fault. I'm posting it here as someone else can stumble into the same problem and this can save them some time.
It turned out that the problem was in the "Run tests from: " setting in my build configuration.
I had **\*.Test.dll. That was accidentally picking up dlls for \obj\**\ directories (where there is no nunit.framework.dll present). Once I changed the setting to **\bin\%BuildConfiguration%\*.test.dll it all works fine.
Note: %BuildConfiguration% is a parameter which specifies your preferred build configuration on the TC (like Debug / Release / CIBuild etc.)
| TeamCity | 36,996,564 | 16 |
I'm a bit of a n00b when it comes to nodejs npm, but since implementing it in our build environment using steps recommended in several articles, it's tripled our build times.
We use it for the standard stuff (minify/concat/etc js/css/etc)
We use TeamCity and have added a Node.js NPM step then a gulp step to run the tasks (RE: https://github.com/jonnyzzz/TeamCity.Node)
The task to set up NPM takes the most time, 2 min 10 seconds, which is over 65% of the total build time. It calls the command "npm install", which appears to re-download all the packages on each build.
Step 3/7: NPM Setup (Node.js NPM) (2m:10s)
[npm install] Starting: cmd /c npm install
Our total build times before were around 1min 30sec, including unit tests.
Is there any way to cache these locally and prevent re-downloading on each build? In the user profile or something, maybe, as opposed to the build folder?
More detail..
This probably best explains the setup http://www.dotnetcurry.com/visualstudio/1096/using-grunt-gulp-bower-visual-studio-2013-2015
We have C# projects that are using the new Task Runner Explorer. Dependencies are saved into a package.json by this; you pre-run "npm install" once in your local workspace (you need to use a .tfignore to prevent the packages from being checked in to source control) and then not again, unless you start a new local workspace.
When the build runs it needs to run "npm install" from the command line; it picks up dependencies from the package.json file and installs them into a subfolder inside the working directory of the build every time, even if the files are already there from a previous build (i.e. the TC agent hasn't cleaned them up). AFAIK you can't install them outside the working folder.
I could be wrong... or I should say I hope I'm wrong, and I'm looking for a way for gulp to support this, but whatever way we make it work will need to work with Task Runner Explorer so the F5 experience for the dev is still the same on their local machine.
We do have multiple agents yes.
| I don't know about Node.js, but here are a couple TeamCity-specific suggestions:
Does NPM perhaps download the files into %TEMP%? If so, they won't be reusable between subsequent TeamCity builds because a TeamCity agent hijacks the %TEMP% directory (redirects it to <TeamCity Home>/buildAgent/temp/buildTmp) and always completely wipes this directory before every new build. (See buildTmp here.)
In that sense, it would be preferable if you could instruct NPM to store the downloaded files in the workspace (the directory where you check out your build) instead; a minimal sketch of this is included at the end of this answer.
If NPM is downloading into the workspace (the checkout dir), have you perhaps requested to do a clean checkout on every run? (See Edit Configuration Settings | Version Control Settings | Show advanced options | Clean all files in the checkout directory before the build checkbox.)
In that case, uncheck the checkbox.
Is perhaps TeamCity cleaning up the checkout directory due to low disk space? This clean-up kicks in automatically when TeamCity notices it's running out of space. (The clean-up can be made even more aggressive with the Free disk space build feature.)
In that case, stop using the build feature. If it's not used and the automatic clean-up is to blame, it's hard to control. It's best if you simply clean-up that part of your file-system which is not managed by TeamCity (your own %TEMP% and other places) and thus give some leeway to TeamCity.
Is your build running on a different agent every time? (Consult the build history.) If so, it cannot reuse the downloaded artifacts (even if they are downloaded into the checkout dir), since they are downloaded to a different machine's filesystem every time. I doubt this is the case though, since TeamCity gravitates towards agent-workspace reuse (sticking to the same agent).
In that case, you can force agent reuse by setting an agent requirement, specifying that you want your builds to run on one specific agent all the time. You can also single that agent out into its own pool, so that no other builds can run on it.
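As for the sketch mentioned in the first point (assuming a reasonably recent npm): pointing npm's cache at a folder inside the checkout directory keeps it out of the wiped temp directory:
npm install --cache .npm-cache
The .npm-cache folder then lives in the checkout directory and survives between builds for as long as the checkout directory itself is not cleaned.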
| TeamCity | 32,834,881 | 16 |
I have a custom .targets file which I import into my C# MVC web application's project file. I've added custom targets to this like so:
<Target Name="CopyFiles" BeforeTargets="Build"></Target>
This works fine when building under Visual Studio, but when I use TeamCity to build it, the target never gets run, and I can't work out why.
If I change my target to use BeforeTargets="Compile" then it runs. Alternatively, if I add an additional target with the name Build to the .targets file
<Target Name="Build" />
then it will run, but doing so overrides the existing Build target and thus my application doesn't build. I can't quite make out the logic to this - it doesn't make sense. I'm using the Compile target for now, but if someone could explain why trying to execute it before the Build task doesn't work I'd really appreciate it.
| 'Build' is a special built-in target, so doesn't really work the same way as most other targets. It definitely can't be safely overridden.
The most relevant documentation is here: https://msdn.microsoft.com/en-us/library/ms366724.aspx
If you want something to run before build, the standard approach (as recommended by the comments in a newly-created .csproj file) is to override the BeforeBuild target (as documented above).
However, this isn't the most robust solution. As noted in the documentation above:
Overriding predefined targets is an easy way to extend the build process, but, because MSBuild evaluates the definition of targets sequentially, there is no way to prevent another project that imports your project from overriding the targets you already have overridden.
It's better (and only slightly more complex) to override the BuildDependsOn property and extend the default value of this property to include the target you want to run (this is also documented in the link above).
Another approach would be to leave BeforeBuild empty and use BeforeTargets="BeforeBuild", which feels a bit odd but is quite simple and will still work even if the BeforeBuild target gets overridden.
As to why BeforeTargets="Build" doesn't work, I can't find a reference for this in the documentation, but I think it's to do with its special nature. It doesn't work the same as ordinary targets and it's probably better not to think of it as a target at all.
| TeamCity | 27,986,147 | 16 |
I've recently added some custom Portable Class Library projects to an application that is built on a build server. The build was working fine, but after that it stopped working and shows me the following messages:
C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Microsoft.Common.targets(983,5): warning MSB3644: The reference assemblies for framework ".NETPortable,Version=v4.0,Profile=Profile136" were not found.
C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Microsoft.Common.targets(1578,5): warning MSB3270: There was a mismatch between the processor architecture of the project being built "MSIL" and the processor architecture of the reference "C:\Windows\Microsoft.NET\Framework64\v4.0.30319\mscorlib.dll", "AMD64".
error CS0234: The type or namespace name 'Linq' does not exist in the namespace 'System' (are you missing an assembly reference?)
The build server specs:
Windows Server 2008 R2 Standard
TeamCity 8.0.4
.NET 4.5
Portable Class Library Tools (as advised here)
Silverlight 5 SDK
The solution is a .NET 4.0 application and the portable projects target .NET 4.0 and Silverlight 5 only.
I have checked my development machine (Windows 8, Visual Studio 2012). There is indeed a folder "C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework.NETPortable\v4.0\Profile\Profile136" (in fact, the profiles for .NET 4.0 go up to 158).
In the build machine, however, there are only folders for profiles up to 131.
Are the Portable Class Library Tools up to date? It seems they miss profiles for the most recent platforms.
UPDATE
I copied the ".NETPortable\v4.0\Profile\Profile136" of my development machine to the build server, and now the application builds successfully. I still would like to know why installing the Portable Class Library Tools does not work out of the box.
| A more general and elegant solution is to install the latest Microsoft .NET Portable Library Reference Assemblies. This will install profile138 among many others.
The standalone installer(s) can be found at:
4.6 (June 2014):
| TeamCity | 20,518,424 | 16 |
I want to have these versions in a format like this: {Major}.{Minor}.{Build}.{Patch}.
How do I set this up in the AssemblyInfo patcher in TeamCity so that it automatically increments the version each time it builds?
I would appreciate some guidance and help with this.
| TeamCity can version assemblies for you with the AssemblyInfo Patcher build feature. To take advantage of this:
Create a build parameter called %Major.Minor%. Set this manually to some value, e.g. 1.0.
On the General Settings tab, set the Build number format to %Major.Minor%.%build.vcs.number%.%build.counter%.
On the Build Steps tab, scroll to the Additional Build Features at the bottom of the page. Add an Assembly Info Patcher build step. It will default to using the %system.build.number%, which you've defined in step 2.
This will result in all of your assemblies being versioned with the %system.build.number%, which includes the Major and Minor version, the VCS revision, and TeamCity's incremental build number.
| TeamCity | 15,252,282 | 16 |
Can TeamCity push successful builds to a git repository?
I cannot see a specific build step in TeamCity to do this.
I use version 7.1.1 of TeamCity.
Thanks, Henrik
UPDATE:
Ok thanks for your answer,
I find it a bit complicated.
I found out that I can simply push back tags on successful builds to my global repository from which TeamCity fetches data for the build. I can pull changes from it and see whether the last commits were successful.
I would be happy if TeamCity provided a simple option for this kind of workflow!
It would be awesome if every developer could just pull from a repo that is only updated when the build is successful, or am I wrong here?
| You can have TeamCity execute a shell script that subsequently calls git push (with appropriate arguments, e.g. git push <repository> to push to a different repository). Do make sure that git doesn't need interactive authentication for the push operation.
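For instance, a command-line build step along these lines would tag and push only when the preceding steps succeeded (the remote and tag naming are just an illustration, and it assumes the agent's git can authenticate non-interactively, e.g. via an SSH key):
git tag build-%build.number%
git push origin build-%build.number%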
A related example (deploy to Heroku using a git push) can be found here: http://blog.carbonfive.com/2010/08/06/deploying-to-heroku-from-teamcity/.
| TeamCity | 13,326,487 | 16 |
I recently set up a CI server in TeamCity and now want to take it to the next step, continuous deployment. Basically, we host a suite of RESTful services and about 3 web applications for each one of our customers. All customers get 3 environments: QA, UAT and Prod. We want to be able to automatically deploy our builds once our tests pass. I'm not looking for custom scripting options to do this; I've seen plenty of those on SO. What we're looking for is a solution like UDeploy but at a lower price point. Is anyone aware of alternatives to UDeploy? Or other Continuous Deployment plugins that work with TeamCity?
Thanks,
| I agree with @Niklas Ringdahl -- I think you're thinking about it wrong.
You can deploy directly from TeamCity using MS WebDeploy.
See Troy Hunt's excellent blog series about this:
Part 1: Config transforms
Part 2: MS Build and deployable packages
Part 3: Publishing with WebDeploy
Part 4: Continuous builds with TeamCity
Part 5: WebDeploy with TeamCity
| TeamCity | 10,192,776 | 16 |
I get "cannot stop" status once in a while after trying to stop builds on TeamCity. I would expect that killing my build process on build agent would do the trick, but it doesn't work. Stopping TeamCity agent process on the build machine doesn't help either. Restarting build agent (i.e. computer) does the trick, but it takes plus 2-3min after the machine has started. It looks like TeamCity server itself thinks that my build is still running.
Is there a better way to stop those builds? Or maybe there is some info somewhere that could explain this logic?
| Just restart the TeamCity WebServer windows service.
You shouldn't need to restart your whole machine.
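The exact service name depends on the TeamCity version and how it was installed, so check services.msc first; with the default name it would be something like:
net stop TeamCity
net start TeamCity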
| TeamCity | 3,227,314 | 16 |
I have an Asp.Net MVC Web Application that I am developing. I have TeamCity installed on my development workstation, and have been running CI builds on. All has been working fine. I'd like to move TeamCity off of my machine, and onto the new dev/build server that was just delivered. I do not want to install Visual Studio onto the build server. But it seams that msbuild cannot build the Web Application project.
E:\TeamCity\buildAgent\work\48e528785fe346fa\src\Web\Web.csproj(489,11): error MSB4019: The imported project "C:\Program Files\MSBuild\Microsoft\VisualStudio\v9.0\WebApplications\Microsoft.WebApplication.targets" was not found. Confirm that the path in the <Import> declaration is correct, and that the file exists on disk.
I've found a few hits on google, but nothing acceptable. Suggestions were to either install Visual Studio, or copy certain directories from Visual Studio over to the server, etc.
What can I do to enable TeamCity to build my project on the dev/build server.
| Looks like copying the file over will definitely work. Have you tried it? Think of the .targets file as a series of definitions for how MSBuild will do its work.
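For what it's worth, a sketch of that copy (run on the build server; the source is a share on a dev machine that has VS 2008 installed, and the exact Program Files paths may differ between x86 and x64 machines):
xcopy /E /I "\\devmachine\c$\Program Files\MSBuild\Microsoft\VisualStudio\v9.0\WebApplications" "C:\Program Files\MSBuild\Microsoft\VisualStudio\v9.0\WebApplications"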
| TeamCity | 811,417 | 16 |
I am VERY new with teamcity so please bear with me
I set up an email notifier to let me know when a build has failed, but TeamCity is reporting the following error:
Failed to send email notification via SMTP server mail, due to error: Unknown SMTP host: mail; nested exception is: java.net.UnknownHostException: mail
For the life of me, I cannot find where to configure the mail server settings. I don't even want it to use an SMTP server, but I don't see any options for this anywhere.
| The available settings are:
SMTP host: Specify the SMTP host name.
SMTP port: Specify the SMTP port number.
Send messages from: Specify the email address from which notification messages will be sent to the user.
SMTP login: Specify the SMTP login name, if any.
SMTP password: Specify the SMTP password.
Use TLS (SSL): Select this option to secure your SMTP connection with TLS. (This feature is only available in TeamCity 3.1+.)
Test connection: Click this button to establish a connection with the specified SMTP host.
Save: Click this button to save changes and close the page.
source: http://www.jetbrains.net/confluence/display/TCD3/Email+and+Jabber+Notifier+Settings
| TeamCity | 749,014 | 16 |
I'd like to have 3 distinct builds within a TeamCity project (Development, QA, Production). With the dependencies linked (Production can't build without a successful QA, and QA can't build without a successful Development), I'd like to propagate the version numbers through the builds.
Development Build => v 1.0.1.0
QA Build => on successful build set version to v1.0.1.0
Is there a way to set a build configuration version to a different build's version?
I'm using TeamCity 4.0.2, runner is Rake, building VS2008 solutions.
| If you have snapshot dependencies for the Dev -> QA -> Production builds, you can reference the build number from the Dev build in the QA and Production builds.
Please read http://www.jetbrains.net/devnet/message/5231290 for details on how to do it.
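Concretely, once the snapshot dependency is in place, the dependency's values are exposed as dep.* parameters; for example (bt3 is a placeholder for the Development configuration's ID), the QA build's number format could be set to:
%dep.bt3.build.number%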
Update:
The recent information on how to achieve this is available in this TeamCity How-To question.
| TeamCity | 580,138 | 16 |
How do I setup TeamCity 4.0 so that I can access it over port 443 on the internet? e.g. https://teamcity.mydomain.com
I am running IIS 7 on the same server that TeamCity is installed. I see two options:
Setup TeamCity to use port 8443 and create a reverse proxy in IIS that routes requests to the TeamCity public IP address to the Tomcat port on the internal IP address.
Setup Tomcat to run on a different IP address than IIS 7, and configure TeamCity to run on port 443.
I'm not sure on the details of either of these steps.
| It requires configuring the bundled Tomcat server for https. See here:
http://confluence.jetbrains.net/display/TCD65/Using+HTTPS+to+access+TeamCity+server
and here:
http://tomcat.apache.org/tomcat-6.0-doc/ssl-howto.html
I also setup Tomcat to listen on just one IP Address. All of this turned out to be a real pain, and I still am not able to run TeamCity as a service. I can only run it at the command line. If I were going to do this over, I would install TeamCity to run on the default port, and reverse proxy to it using IIS7 Application Request Routing or Apache Virtual Directories.
[Edit]
I have done this over, and I used IIS Application Request Routing to set up a reverse proxy. It works perfectly, and Team City upgrades are painless as well.
| TeamCity | 331,755 | 16 |