question | answer | tag | question_id | score
---|---|---|---|---|
This is a slightly.. vain question, but BuildBot's output isn't particularly nice to look at..
For example, compared to..
phpUnderControl
Jenkins
Hudson
CruiseControl.rb
..and others, BuildBot looks rather.. archaic
I'm currently playing with Hudson, but it is very Java-centric (although with this guide, I found it easier to set up than BuildBot, and it produced more info)
Basically: are there any Continuous Integration systems aimed at Python, that produce lots of shiny graphs and the like?
Update: Since this was written, the Jenkins project has replaced Hudson as the community version of the package. The original authors have moved to this project as well. Jenkins is now a standard package on Ubuntu/Debian, RedHat/Fedora/CentOS, and others. The following update is still essentially correct; only the starting point to do this with Jenkins is different.
Update: After trying a few alternatives, I think I'll stick with Hudson. Integrity was nice and simple, but quite limited. I think Buildbot is better suited to having numerous build-slaves, rather than everything running on a single machine like I was using it.
Setting Hudson up for a Python project was pretty simple:
Download Hudson from http://hudson-ci.org/
Run it with java -jar hudson.war
Open the web interface on the default address of http://localhost:8080
Go to Manage Hudson, Plugins, click "Update" or similar
Install the Git plugin (I had to set the git path in the Hudson global preferences)
Create a new project, enter the repository, SCM polling intervals and so on
Install nosetests via easy_install if it's not already installed
In a build step, add nosetests --with-xunit --verbose
Check "Publish JUnit test result report" and set "Test report XMLs" to **/nosetests.xml
That's all that's required. You can set up email notifications, and the plugins are worth a look. A few I'm currently using for Python projects:
SLOCCount plugin to count lines of code (and graph it!) - you need to install sloccount separately
Violations to parse the PyLint output (you can setup warning thresholds, graph the number of violations over each build)
Cobertura can parse the coverage.py output. Nosetests can gather coverage while running your tests, using nosetests --with-coverage (this writes the output to **/coverage.xml)
| You might want to check out Nose and the Xunit output plugin. You can have it run your unit tests, and coverage checks with this command:
nosetests --with-xunit --with-coverage
That'll be helpful if you want to go the Jenkins route, or if you want to use another CI server that has support for JUnit test reporting.
Similarly, you can capture the output of pylint using the Violations plugin for Jenkins.
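For reference, a pylint invocation that produces output the Violations plugin can parse might look like this (the package name is a placeholder; the trailing || true keeps pylint's nonzero exit code from failing the build outright):
# Write parseable pylint output for the Violations plugin to pick up
pylint --output-format=parseable mypackage > pylint.out || true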
| Jenkins | 225,598 | 112 |
Building my Jenkins/MSBuild solution gives me this error
c:\WINDOWS\Microsoft.NET\Framework\v4.0.30319\Microsoft.Common.targets(483,9): error :
The OutputPath property is not set for project '<projectname>.csproj'. Please check to
make sure that you have specified a valid combination of Configuration and Platform
for this project. Configuration='Latest' Platform='AnyCPU'. You may be seeing this
message because you are trying to build a project without a solution file, and have
specified a non-default Configuration or Platform that doesn't exist for this project.
[C:\<path>\<projectname>.csproj]
Any ideas?
EDIT
I have this in my .csproj file
<PropertyGroup Condition="'$(Configuration)|$(Platform)' == 'Latest|AnyCPU'">
<OutputPath>bin\Latest\</OutputPath>
</PropertyGroup>
| I have figured out how it works (without changing sln/csproj properties in VS2013/2015).
If you want to build a .sln file:
/p:ConfigurationPlatforms=Release /p:Platform="Any CPU"
If you want to build a .csproj file:
/p:Configuration=Release /p:Platform=AnyCPU
Notice the "Any CPU" (with a space) vs. AnyCPU.
Also check that the code analysis, FxCop, and test coverage (NCover) targets, as well as MSBuild itself, are located properly. In my case it's:
C:\Windows\Microsoft.NET\Framework64\v4.0.30319
but it can be different: Microsoft ships several command-prompt variants for building the code base (AMD64 and Windows, each with cross-platform, x86, and x64 options), and the JIT used during development also matters (it can be the default JIT, PreJIT via ngen.exe, or EconoJIT).
I think troubleshooting beyond this can be handled using PowerShell + MSBuild. It may be helpful for someone...
| Jenkins | 15,134,384 | 110 |
I have a machine with Ubuntu 12.04 and have installed Jenkins ver. 1.424.6 using apt-get based on *this guide*, but there is a new version:
New version of Jenkins (1.447.2) is available for download (changelog).
If I press download, I get a jenkins.war file... but how do I use that for upgrading my current installation? Or is that not possible before the apt repositories get updated?
| You can overwrite the existing jenkins.war file with the new one and then restart Jenkins.
This file is usually located in /usr/share/jenkins.
If this is not the case for your system, in Manage Jenkins -> System Information, it will display the path to the .war file under executable-war.
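For example, on a Debian/Ubuntu package installation, the upgrade might look roughly like this (the service name is an assumption; check your system):
# Stop Jenkins, swap in the new war, and start it again
sudo service jenkins stop
sudo cp jenkins.war /usr/share/jenkins/jenkins.war
sudo service jenkins start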
| Jenkins | 11,062,335 | 110 |
How do I tell Jenkins/Hudson to trigger a build only for changes on a particular project in my Git tree?
| If you are using the declarative syntax of Jenkinsfile to describe your build pipeline, you can use the changeset condition to limit stage execution to the case when specific files are changed. This is now a standard feature of Jenkins and does not require any additional configuration/software.
stages {
stage('Nginx') {
when { changeset "nginx/*"}
steps {
sh "make build-nginx"
sh "make start-nginx"
}
}
}
You can combine multiple conditions using anyOf or allOf keywords for OR or AND behaviour accordingly:
when {
anyOf {
changeset "nginx/**"
changeset "fluent-bit/**"
}
}
steps {
sh "make build-nginx"
sh "make start-nginx"
}
| Jenkins | 5,243,593 | 110 |
I've got an existing Hudson project that is configured and working.
I need to duplicate the project so that I can have the original and then change the new one so that it points to a different source control.
I don't want to manually recreate the build. How can I "copy & paste" or otherwise duplicate the existing build configuration, so I can get the new build configuration up and running faster?
| Click on "new job" and then select "Copy existing job" at the bottom. Then enter the name of the job you want to copy into the text field.
| Jenkins | 3,133,537 | 110 |
I am running Jenkins from user jenkins, which has $PATH set to something, and when I go into the Jenkins web interface, in the System Properties window (http://$host/systemInfo) I see a different $PATH.
I have installed Jenkins on CentOS with the native rpm from the Jenkins website. I start it with the startup script provided with the installation: sudo /etc/init.d/jenkins start
Can anyone please explain to me why that happens?
| Michael,
Two things:
When Jenkins connects to a computer, it uses the sh shell, not the bash shell (at least this is what I have noticed - I may be wrong). So any changes you make to $PATH in your bashrc file are not considered.
Also, any changes you make to $PATH in your local shell (one that you personally ssh into) will not show up in Jenkins.
To change the path that Jenkins uses, you have two options (AFAIK):
1) Edit your /etc/profile file and add the paths that you want there
2) Go to the configuration page of your slave, and add environment variable PATH, with value: $PATH:/followed-by/paths/you/want/to/add
If you use the second option, your System Information will still not show it, but your builds will see the added paths.
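For option 1, the change can be as small as appending one line to /etc/profile (the extra path here is just an example):
# Make /opt/tools/bin visible to login shells, including the one Jenkins spawns
export PATH=$PATH:/opt/tools/bin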
| Jenkins | 5,818,403 | 108 |
I configured Jenkins in Spinnaker as follows and setup the Spinnaker pipeline.
jenkins:
# If you are integrating Jenkins, set its location here using the baseUrl
# field and provide the username/password credentials.
# You must also enable the "igor" service listed separately.
#
# If you have multiple Jenkins servers, you will need to list
# them in an igor-local.yml. See jenkins.masters in config/igor.yml.
#
# Note that Jenkins is not installed with Spinnaker so you must obtain this
# on your own if you are interested.
enabled: ${services.igor.enabled:false}
defaultMaster:
name: default
baseUrl: http://server:8080
username: spinnaker
password: password
But I am seeing the following error when trying to run the Spinnaker pipeline.
Exception ( Start Jenkins Job )
403 No valid crumb was included in the request
| Finally, this post helped me to do away with the crumb problem, but still securing Jenkins from a CSRF attack.
Solution for no-valid crumb included in the request issue
Basically, we need to first request a crumb with authentication, and then issue the POST API call with the crumb as a header, along with authentication again.
This is how I did it,
curl -v -X GET http://jenkins-url:8080/crumbIssuer/api/json --user <username>:<password>
The response was,
{
"_class":"hudson.security.csrf.DefaultCrumbIssuer",
"crumb":"0db38413bd7ec9e98974f5213f7ead8b",
"crumbRequestField":"Jenkins-Crumb"
}
Then the POST API call with the above crumb information in it.
curl -X POST http://jenkins-url:8080/job/<job-name>/build --user <username>:<password> -H 'Jenkins-Crumb: 0db38413bd7ec9e98974f5213f7ead8b'
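If you are scripting this, you can also fetch the crumb and use it in one go (a sketch; the placeholders match the commands above):
# Grab the crumb header text, e.g. Jenkins-Crumb:0db38413bd7ec9e98974f5213f7ead8b
CRUMB=$(curl -s -u <username>:<password> 'http://jenkins-url:8080/crumbIssuer/api/xml?xpath=concat(//crumbRequestField,":",//crumb)')
# Pass it as a header when triggering the job
curl -X POST -u <username>:<password> -H "$CRUMB" http://jenkins-url:8080/job/<job-name>/build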
| Jenkins | 44,711,696 | 108 |
I've installed jenkins and I'm trying to get into a shell as Jenkins to add an ssh key. I can't seem to su into the jenkins user:
[root@pacmandev /]# sudo su jenkins
[root@pacmandev /]# whoami
root
[root@pacmandev /]# echo $USER
root
[root@pacmandev /]#
The jenkins user exists in my /etc/passwd file. Running su jenkins asks for a password, but rejects my normal password. sudo su jenkins doesn't seem to do anything; same for sudo su - jenkins. I'm on CentOS.
| jenkins is a service account; it doesn't have a shell by design. It is generally accepted that service accounts shouldn't be able to log in interactively.
I didn't answer this one initially as it's a duplicate of a question that has been moved to server fault. I should have answered rather than linked to the answer in a comment.
If for some reason you want to log in as jenkins, you can do so with:
sudo su -s /bin/bash jenkins
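Once you have the shell, you can add the SSH key the question was after, e.g.:
# Generate a key pair for the jenkins user and print the public half
ssh-keygen -t rsa -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub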
| Jenkins | 18,068,358 | 107 |
Summary:
Setting up Jenkins on OS X has been made significantly easier with the most recent installer (as of 1.449 - March 9, 2012), however managing the process of code signing is still very difficult with no straightforward answer.
Motivation:
Run a headless CI server that follows common best practices for running services on OS X (Some of which is explained here in plain language).
Background:
October 12, 2009 - How to automate your iPhone app builds with Hudson
June 15, 2011 - Jenkins on Mac OS X; git w/ ssh public key
June 23, 2011 - Continuous Deployment of iOS Apps with Jenkins and TestFlight
July 26, 2011 - Missing certificates and keys in the keychain while using Jenkins/Hudson as Continuous Integration for iOS and Mac development
August 30, 2011 - Xcode Provisioning File not found with Jenkins
September 20, 2011 - How to set up Jenkins CI on a Mac
September 14, 2011 - Getting Jenkins Running on a Mac
November 12, 2011 - Howto: Install Jenkins on OS X and make it build Mac stuff
January 23, 2012 - Upcoming Jenkins OSX installer changes
March 7, 2012 - Thanks for using OSX Installer
Process:
Install Jenkins CI via OS X installer package. For the "Installation Type" step, click the Customize button, and choose "Start at boot as 'jenkins.'"
Discussion:
The naive expectation at this point was that a free-style project with the build script xcodebuild -target MyTarget -sdk iphoneos should work. As indicated by the title of this post, it does not and fails with:
Code Sign error: The identity 'iPhone Developer' doesn't match any valid certificate/private key pair in the default keychain
It is obvious enough what needs to happen - you need to add a valid code signing certificate and a private key into the default keychain. In researching how to accomplish this, I have not found a solution that doesn't open up the system to some level of vulnerability.
Problem 1: No default keychain for jenkins daemon
sudo -u jenkins security default-keychain
...yields "A default keychain could not be found"
As pointed out below by Ivo Dancet, the UserShell is set to /usr/bin/false for the jenkins daemon by default (I think this is a feature, not a bug); follow his answer to change the UserShell to bash. You can then use sudo su jenkins to get logged in as the jenkins user and get a bash prompt.
sudo su jenkins
cd ~/Library
mkdir Keychains
cd Keychains
security create-keychain <keychain-name>.keychain
security default-keychain -s <keychain-name>.keychain
Okay, great. We've got a default keychain now; let's move on right? But, first why did we even bother making a default keychain?
Almost all answers, suggestions, or conversation I read throughout researching suggest that one should just chuck their code signing certs and keys into the system keychain. If you run security list-keychains as a free-style project in Jenkins, you see that the only keychain available is the system keychain; I think that's where most people came up with the idea to put their certificate and key in there. But, this just seems like a very bad idea - especially given that you'll need to create a plain text script with the password to open the keychain.
Problem 2: Adding code signing certs and private key
This is where I really start to get squeamish. I have a gut feeling that I should create a new public / private key unique for use with Jenkins. My thought process is if the jenkins daemon is compromised, then I can easily revoke the certificate in Apple's Provisioning Portal and generate another public / private key. If I use the same key and certificate for my user account and Jenkins, then it means more hassle (damage?) if the jenkins service is attacked.
Pointing to Simon Urbanek's answer you'll be unlocking the keychain from a script with a plain text password. It seems irresponsible to keep anything but "disposable" certificates and keys in the jenkins daemon's keychain.
I am very interested in any discussion to the contrary. Am I being overly cautious?
To make a new CSR as the jenkins daemon in Terminal I did the following...
sudo su jenkins
certtool r CertificateSigningRequest.certSigningRequest
You'll be prompted for the following (most of these I made educated guesses at; do you have better insight? Please share.)...
Enter key and certificate label:
Select algorithm: r (for RSA)
Enter key size in bits: 2048
Select signature algorithm: 5 (for MD5)
Enter challenge string:
Then a bunch of questions for RDN
Submit the generated CSR file (CertificateSigningRequest.certSigningRequest) to Apple's Provisioning Portal under a new Apple ID
Approve the request and download the .cer file
security unlock-keychain
security add-certificate ios_development.cer
This takes us one step closer...
Problem 3: Provisioning profile and Keychain unlocking
I made a special provisioning profile in the Provisioning Portal just for use with CI in hopes that if something bad happens I've made the impact a little smaller. Best practice or overly cautious?
sudo su jenkins
mkdir ~/Library/MobileDevice
mkdir ~/Library/MobileDevice/Provisioning\ Profiles
Move the provisioning profile that you set up in the Provisioning Portal into this new folder. We're now two short steps away from being able to run xcodebuild from the command line as jenkins, which means we're also close to being able to get Jenkins CI running builds.
security unlock-keychain -p <keychain password>
xcodebuild -target MyTarget -sdk iphoneos
Now we get a successful build from a command line when logged in as the jenkins daemon, so if we create a free-style project and add those final two steps (#5 and #6 above) we will be able to automate the building of our iOS project!
It might not be necessary, but I felt better setting the jenkins UserShell back to /usr/bin/false after I'd successfully gotten all this set up. Am I being paranoid?
Problem 4: Default keychain still not available!
(EDIT: I posted the edits to my question, rebooted to make sure my solution was 100%, and of course, I'd left out a step)
Even after all the steps above, you'll need to modify the Launch Daemon plist at /Library/LaunchDaemons/org.jenkins-ci.plist as stated in this answer. Please note this is also an openradar bug.
It should look like this:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>EnvironmentVariables</key>
<dict>
<key>JENKINS_HOME</key>
<string>/Users/Shared/Jenkins/Home</string>
</dict>
<key>GroupName</key>
<string>daemon</string>
<key>KeepAlive</key>
<true/>
<key>Label</key>
<string>org.jenkins-ci</string>
<key>ProgramArguments</key>
<array>
<string>/bin/bash</string>
<string>/Library/Application Support/Jenkins/jenkins-runner.sh</string>
</array>
<key>RunAtLoad</key>
<true/>
<key>UserName</key>
<string>jenkins</string>
<!-- **NEW STUFF** -->
<key>SessionCreate</key>
<true />
</dict>
</plist>
With this setup, I would also recommend the Xcode plugin for Jenkins, which makes setting up the xcodebuild script a little bit easier. At this point, I'd also recommend reading the man pages for xcodebuild - hell you made it this far in Terminal, right?
This setup is not perfect, and any advice or insight is greatly appreciated.
I have had a hard time selecting a "correct" answer since what I've come to use to solve my problem was a collection of just about everyone's input. I've tried to give everyone at least an up vote, but award the answer to Simon because he mostly answered the original question. Furthermore, Sami Tikka deserves a lot of credit for his efforts getting Jenkins to work through AppleScript as a plain ol' OS X app. If you're only interested in getting Jenkins up and going quickly within your user session (i.e. not as a headless server) his solution is much more Mac-like.
I hope that my efforts spark further discussion, and help the next poor soul who comes along thinking they can get Jenkins CI setup for their iOS projects in a weekend because of all the wonderful things they've heard about it.
Update: August 9, 2013
With so many upvotes and favorites, I thought I would come back to this 18 months later with some brief lessons learned.
Lesson 1: Don't expose Jenkins to the public internet
At the 2012 WWDC I took this question to the Xcode and OS X Server engineers. I received a cacophony of "don't do that!" from anyone I asked. They all agreed that an automated build process was great, but that the server should only be accessible on the local network. The OS X Server engineers suggested allowing remote access via VPN.
Lesson 2: There are new install options now
I recently gave a CocoaHeads talk about my Jenkins experience, and much to my surprise I found some new install methods - Homebrew and even a Bitnami Mac App Store version. These are definitely worth checking out. Jonathan Wright has a gist detailing getting Homebrew Jenkins working.
Lesson 3: No, seriously, don't expose your build box to the internet
It's pretty clear from the original post that I'm neither a system administrator nor security expert. Common sense about private-y stuff (keychains, credentials, certificates, etc) left me feeling pretty uneasy about putting my Jenkins box on the internet. Nick Arnott at Neglected Potential was able to confirm my heebie-jeebies pretty easily in this article.
TL;DR
My recommendation to others looking to automate their build process has changed over the past year and a half. Make sure your Jenkins machine is behind your firewall. Install and set Jenkins up as a dedicated Jenkins user either using the installer, Bitnami Mac App Store version, Sami Tikka's AppleScript, etc; this resolves most of the headache I detail above. If you need remote access, setting up VPN services in OS X Server takes ten minutes tops. I've been using this setup for over a year and am very happy with it. Good luck!
| Keychains need to be unlocked before they can be used. You can use security unlock-keychain to unlock. You can do that interactively (safer) or by specifying the password on the command line (unsafe), e.g.:
security unlock-keychain -p mySecretPassword...
Obviously, putting this into a script compromises the security of that keychain, so often people setup an individual keychain with only the signing credentials to minimize such damage.
Typically in Terminal the keychain is already unlocked by your session, since the default keychain is unlocked on login, so you don't need to do that. However, any process not run in your session won't have an unlocked keychain even if it runs as your user (most commonly this affects ssh, but also any other process).
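A minimal sketch of setting up such a dedicated signing keychain (passwords and file names are placeholders):
# Create a build-only keychain, import the signing identity, and unlock it
security create-keychain -p buildpass build.keychain
security import ios_development.p12 -k build.keychain -P p12password -T /usr/bin/codesign
security unlock-keychain -p buildpass build.keychain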
| Jenkins | 9,245,149 | 107 |
In a project I'm working on, we are using shell scripts to execute different tasks. Some are sh/bash scripts that run rsync, and some are PHP scripts. One of the PHP scripts is running some integration tests that output to JUnit XML, code coverage reports, and similar.
Jenkins is able to mark the jobs as successful / failed based on exit status. In PHP, the script exits with 1 if it has detected that the tests failed during the run. The other shell scripts run commands and use the exit codes from those to mark a build as failed.
// :: End of PHP script:
// If any tests have failed, fail the build
if ($build_error) exit(1);
In Jenkins Terminology, an unstable build is defined as:
A build is unstable if it was built successfully and one or more publishers report it unstable. For example if the JUnit publisher is configured and a test fails then the build will be marked unstable.
How can I get Jenkins to mark a build as unstable instead of only success / failed when running shell scripts?
| Modern Jenkins versions (since 2.26, October 2016) solved this: it's just an advanced option for the Execute shell build step!
You can just choose and set an arbitrary exit value; if it matches, the build will be unstable. Just pick a value which is unlikely to be launched by a real process in your build.
| Jenkins | 8,148,122 | 103 |
I'm using the Jenkins Multiple SCM plugin to check out three git repositories into three sub directories in my Jenkins job. I then execute one set of commands to build a single set of artifacts with information and code drawn from all three repositories.
Multiple SCM is now deprecated, and the text recommends moving to pipelines. I tried, but I can't figure out how to make it work.
Here is the directory structure I'm interested in seeing from the top level of my Jenkins job directory:
$ ls
Combination
CombinationBuilder
CombinationResults
Each of those three sub-directories has a single git repo checked out. With the Multiple SCM, I used git, and then added the "checkout to a subdirectory" behavior. Here was my attempt with a pipeline script:
node('ATLAS && Linux') {
sh('[ -e CalibrationResults ] || mkdir CalibrationResults')
sh('cd CalibrationResults')
git url: 'https://github.com/AtlasBID/CalibrationResults.git'
sh('cd ..')
sh('[ -e Combination ] || mkdir Combination')
sh('cd Combination')
git url: 'https://github.com/AtlasBID/Combination.git'
sh('cd ..')
sh('[ -e CombinationBuilder ] || mkdir CombinationBuilder')
sh('cd CombinationBuilder')
git url: 'https://github.com/AtlasBID/CombinationBuilder.git'
sh 'cd ..'
sh('ls')
sh('. CombinationBuilder/build.sh')
}
However, the git command seems to execute at the top level directory of the workspace (which makes some sense), and according to the syntax too, there doesn't seem to be the checkout-to-sub-directory behavior.
| You can use the dir command to execute a pipeline step in a subdirectory:
node('ATLAS && Linux') {
dir('CalibrationResults') {
git url: 'https://github.com/AtlasBID/CalibrationResults.git'
}
dir('Combination') {
git url: 'https://github.com/AtlasBID/Combination.git'
}
dir('CombinationBuilder') {
git url: 'https://github.com/AtlasBID/CombinationBuilder.git'
}
sh('ls')
sh('. CombinationBuilder/build.sh')
}
| Jenkins | 40,224,272 | 102 |
I am invoking a Jenkins job remotely using:
wget http://<ServerIP>:8080/job/Test-Jenkins/build?token=DOIT
Here Test-Jenkins job is invoked and DOIT is the security token that I have used.
Now I need to pass some parameters to the build.xml file of this job, i.e. Test-Jenkins.
I have not yet figured out how to pass the variables.
| See Jenkins documentation: Parameterized Build
Below is the line you are interested in:
http://server/job/myjob/buildWithParameters?token=TOKEN&PARAMETER=Value
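The same trigger works from curl, e.g. (credentials and values are placeholders):
# Trigger a parameterized build remotely
curl -X POST "http://server/job/myjob/buildWithParameters?token=TOKEN&PARAMETER=Value" --user <username>:<api-token>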
| Jenkins | 20,359,810 | 102 |
I have two jobs in jenkins, both of which need the same parameter.
How can I run the first job with a parameter so that when it triggers the second job, the same parameter is used?
| You can use the Parameterized Trigger Plugin, which will let you pass parameters from one task to another.
You will also need to define, in the downstream job, the parameter that you pass from upstream.
| Jenkins | 9,704,677 | 102 |
I am trying to integrate an external system with Jenkins via its REST API.
Although I have done lots of Google searching on its API reference, I still cannot find a full list of the Jenkins REST API.
Does anybody know about this?
| Jenkins has a link to their REST API in the bottom right of each page.
This link appears on every page of Jenkins and points you to the API output for the exact page you are browsing. That should provide some understanding of how to build the API URLs.
You can additionally use a wrapper, as I do in Python, using http://jenkinsapi.readthedocs.io/en/latest/
Here is their website: https://wiki.jenkins-ci.org/display/JENKINS/Remote+access+API
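For example, appending /api/json to almost any Jenkins URL returns that page's data (the host is a placeholder):
# Inspect the top-level API output of a Jenkins instance
curl -s 'http://localhost:8080/api/json?pretty=true'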
| Jenkins | 25,661,362 | 101 |
With GitHub command I have:
ssh -T [email protected]
Hi (MyName)! You've successfully authenticated, but GitHub does not provide shell access.
My connection with GitHub is ok (no problem), but with Jenkins I have this error:
ERROR: Error cloning remote repo 'origin' : Could not clone [email protected]:Name-MysRepo/MyRepo.git
hudson.plugins.git.GitException: Could not clone [email protected]:Name-MysRepo/MyRepo.git
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.clone(CliGitAPIImpl.java:219)
at hudson.plugins.git.GitSCM$2.invoke(GitSCM.java:1001)
at hudson.plugins.git.GitSCM$2.invoke(GitSCM.java:942)
at hudson.FilePath.act(FilePath.java:904)
at hudson.FilePath.act(FilePath.java:877)
at hudson.plugins.git.GitSCM.determineRevisionToBuild(GitSCM.java:942)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1101)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1369)
at hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:676)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:88)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:581)
at hudson.model.Run.execute(Run.java:1575)
at hudson.maven.MavenModuleSetBuild.run(MavenModuleSetBuild.java:477)
at hudson.model.ResourceController.execute(ResourceController.java:88)
at hudson.model.Executor.run(Executor.java:241)
Caused by: hudson.plugins.git.GitException: Command "git clone --progress -o origin [email protected]:Name-MysRepo/MyRepo.git /root/.jenkins/jobs/TestKRGDAOV01/workspace" returned status code 128:
stdout: Cloning into '/root/.jenkins/jobs/TestKRGDAOV01/workspace'...
stderr: Permission denied (publickey).
fatal: The remote end hung up unexpectedly
Is this problem with public key?
I use Jenkins under Tomcat 7 / Ubuntu 12.
| This error:
stderr: Permission denied (publickey). fatal: The remote end hung up unexpectedly
indicates that Jenkins is trying to connect to github with the wrong ssh key.
You should:
Determine the user that jenkins runs as, e.g. 'build' or 'jenkins'
Login on the jenkins host that is trying to do the clone - that is, do not login to the master if a node is actually doing the build.
Try your ssh to github - if it fails, then you need to add the proper key to <jenkins user home>/.ssh
| Jenkins | 16,721,629 | 101 |
Jenkins won't execute any jobs. Having viewed this question, I have disabled all slave nodes but a simple job won't even run on the Master node.
What is wrong?
| The Jenkins admin console can run, even with the Master node offline. This can happen when Jenkins runs out of disk space.
To confirm, do the following (with thanks to geekride - jenkins-pending-waiting-for-next-available-executor):
go to Jenkins -> Manage Jenkins -> Manage Nodes
examine the "master" node to see if it is offline. It may be reporting that the master node is out of disk space.
| Jenkins | 15,112,890 | 101 |
How do you access parameters set in the "This build is parameterized" section of a "Workflow" Jenkins job?
TEST CASE
Create a WORKFLOW job.
Enable "This build is parameterized".
Add a STRING PARAMETER foo with default value bar text.
Add the code below to Workflow Script:
node()
{
print "DEBUG: parameter foo = ${env.foo}"
}
Run job.
RESULT
DEBUG: parameter foo = null
| I think the variable is available directly, rather than through env, when using the Workflow plugin.
Try:
node()
{
print "DEBUG: parameter foo = ${foo}"
}
| Jenkins | 28,572,080 | 100 |
I'm currently doing some evaluation on the Jenkins Pipeline plugin (formerly known as the Workflow plugin).
Reading the documentation I found out that I currently cannot retrieve the workspace path using
env.WORKSPACE:
The following variables are currently unavailable inside a workflow script:
NODE_LABELS
WORKSPACE
SCM-specific variables such as SVN_REVISION
Is there any other way how to get the absolute path to the current workspace? I need this running some test which in turn gets some parameter (absolute path to some executable file).
I already tried using new File("").absolutePath() inside a @NonCPS section, but it looks like the non-CPS stuff always gets executed on the master.
Does anybody have a clue how to get this path without running some batch script which stores the path into some file which later on can be read in again?
| Since version 2.5 of the Pipeline Nodes and Processes Plugin (a component of the Pipeline plugin, installed by default), the WORKSPACE environment variable is available again. This version was released on 2016-09-23, so it should be available on all up-to-date Jenkins instances.
Example
node('label'){
// now you are on slave labeled with 'label'
def workspace = WORKSPACE
// ${workspace} will now contain an absolute path to job workspace on slave
workspace = env.WORKSPACE
// ${workspace} will still contain an absolute path to job workspace on slave
// When using a GString at least later Jenkins versions could only handle the env.WORKSPACE variant:
echo "Current workspace is ${env.WORKSPACE}"
// the current Jenkins instances will support the short syntax, too:
echo "Current workspace is $WORKSPACE"
}
| Jenkins | 36,934,028 | 99 |
Does anyone know how to increase the timeout window before Jenkins logs out a user? I'm looking to raise it to 1 day or so.
I work in and out of Jenkins all day and we keep getting logged out between running jobs. Adding to this frustration, the 'stay logged in' checkbox doesn't seem to work either.
| Jenkins uses Jetty, and Jetty's default timeout is 30 minutes. This is independent of authentication settings -- I use Active Directory but it's still this setting that affects timeouts.
You can override the timeout by passing an argument --sessionTimeout=<minutes> to the Jenkins init script, or -DsessionTimeout=<minutes> to the .war file. For example:
# Set the session timeout to 1 week
$ java -jar jenkins.war --sessionTimeout=10080
Alternatively, you can edit Jenkins' <jenkinsHome>/.jenkins/war/WEB-INF/web.xml and explicitly set it:
<session-config>
<!-- one hour -->
<session-timeout>60</session-timeout>
</session-config>
According to Oracle's docs you can set this to 0 to disable timeouts altogether.
To find out the current value for timeouts, you can use the Groovy console provided in Jenkins:
import org.kohsuke.stapler.Stapler;
Stapler.getCurrentRequest().getSession().getMaxInactiveInterval() / 60
On my instance, this shows Result: 30.
| Jenkins | 26,407,541 | 98 |
Inside a groovy script (for a jenkins pipeline): How can I run a bash command instead of a sh command?
I have tried the following:
Call "#!/bin/bash" inside the sh call:
stage('Setting the variables values') {
steps {
sh '''
#!/bin/bash
echo "hello world"
'''
}
}
Replace the sh call with a bash call:
stage('Setting the variables values') {
steps {
bash '''
#!/bin/bash
echo "hello world"
'''
}
}
Additional Info:
My command is more complex than echo hello world.
| The Groovy script you provided is formatting the first line as a blank line in the resultant script. The shebang, telling the script to run with /bin/bash instead of /bin/sh, needs to be on the first line of the file or it will be ignored.
So instead, you should format your Groovy like this:
stage('Setting the variables values') {
steps {
sh '''#!/bin/bash
echo "hello world"
'''
}
}
And it will execute with /bin/bash.
| Jenkins | 44,330,148 | 97 |
I am a user of AWS Elastic Beanstalk, and I have a little problem. I want to build my CSS files with less+node, but I don't know how to install node in my Dockerfile when building with Jenkins.
Here are the installation packages I am using in my Dockerfile. I will be glad for any suggestions.
FROM php:5.6-apache
# Install PHP5 and modules along with composer binary
RUN apt-get update
RUN apt-get -y install \
curl \
default-jdk \
git \
libcurl4-openssl-dev \
libpq-dev \
libmcrypt-dev \
libpq5 \
npm \
node \
zlib1g-dev \
libfreetype6-dev \
libjpeg62-turbo-dev \
libpng12-dev
RUN docker-php-ext-configure gd --with-freetype-dir=/usr/include/ --with-jpeg-dir=/usr/include/
RUN docker-php-ext-install curl json mbstring opcache pdo_mysql zip gd exif sockets mcrypt
# Install pecl
RUN pecl install -o -f memcache-beta \
&& rm -rf /tmp/pear \
&& echo 'extension=memcache.so' > /usr/local/etc/php/conf.d/memcache.ini
After this I am running my entrypoint.sh with this code:
#!/usr/bin/env sh
composer run-script post-install-cmd --no-interaction
chmod 0777 -R /var/app/app/cache
chmod 0777 -R /var/app/app/logs
exec apache2-foreground
But then I get this error:
Error Output: [2016-04-04 11:23:44] assetic.ERROR: The template ":tmp:module.html.twig" contains an error: A template that extends another one cannot have a body in ":tmp:module.html.twig" at line 7.
But when I install node inside the Docker container this way:
apt-get install git-core curl build-essential openssl libssl-dev
git clone https://github.com/nodejs/node.git
cd node
./configure
make
sudo make install
node -v
I can build my CSS. So the question is: how do I make this installation work inside my Dockerfile when I am building it with Jenkins?
| I think this works slightly better.
ENV NODE_VERSION=16.13.0
RUN apt install -y curl
RUN curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.0/install.sh | bash
ENV NVM_DIR=/root/.nvm
RUN . "$NVM_DIR/nvm.sh" && nvm install ${NODE_VERSION}
RUN . "$NVM_DIR/nvm.sh" && nvm use v${NODE_VERSION}
RUN . "$NVM_DIR/nvm.sh" && nvm alias default v${NODE_VERSION}
ENV PATH="/root/.nvm/versions/node/v${NODE_VERSION}/bin/:${PATH}"
RUN node --version
RUN npm --version
Note that nvm is a version manager for node.js, designed to be installed per-user, and invoked per-shell. nvm works on any POSIX-compliant shell (sh, dash, ksh, zsh, bash), in particular on these platforms: unix, macOS, and windows WSL.
| Jenkins | 36,399,848 | 96 |
I am running a Jenkins cluster where both the Master and Slave run as Docker containers.
The Host is the latest boot2docker VM running on MacOS.
To allow Jenkins to be able to perform deployment using Docker, I have mounted the docker.sock and docker client from the host to the Jenkins container like this :-
docker run -v /var/run/docker.sock:/var/run/docker.sock -v $(which docker):/usr/bin/docker -v $HOST_JENKINS_DATA_DIRECTORY/jenkins_data:/var/jenkins_home -v $HOST_SSH_KEYS_DIRECTORY/.ssh/:/var/jenkins_home/.ssh/ -p 8080:8080 jenkins
I am facing issues while mounting a volume to Docker containers that are run inside the Jenkins container. For example, if I need to run another Container inside the Jenkins container, I do the following :-
sudo docker run -v $JENKINS_CONTAINER/deploy.json:/root/deploy.json $CONTAINER_REPO/$CONTAINER_IMAGE
The above runs the container, but the file "deploy.json" is NOT mounted as a file, but instead as a "Directory". Even if I mount a Directory as a Volume, I am unable to view the files in the resulting container.
Is this a problem, because of file permissions due to Docker in Docker case?
| A Docker container in a Docker container uses the parent HOST's Docker daemon, and hence any volumes mounted in the "docker-in-docker" case are still referenced from the HOST, and not from the Container.
Therefore, the actual path mounted from the Jenkins container "does not exist" in the HOST. Due to this, a new, empty directory is created in the "docker-in-docker" container. The same thing applies when a directory is mounted to a new Docker container inside a Container.
Very basic and obvious thing which I missed, but realized as soon I typed the question.
| Jenkins | 31,381,322 | 95 |
I have a submodule in a project in Jenkins. I've enabled the advanced setting to recursively update submodules.
When I run the build, I see that the workspace has the files from the submodule. The problem is, it seems to be the first revision of the submodule. When I push changes (repository hosted on GitHub) Jenkins doesn't seem to update the submodule to get the right changes. Has anyone ever seen this?
Note that the Jenkins Git plugin 2.0 will have "advanced submodule behaviors", which should ensure proper updates of the submodules:
As commented by vikramvi:
Advanced sub-modules behavior > "Path of the reference repo to use during submodule update" - against this field, add the submodule git URL.
Owen B mentions in the comments:
For the authentication issue, there's now a "Use credentials from default remote of parent repository" option
Seen here in JENKINS-20941:
| Jenkins | 9,953,299 | 94 |
I have installed Jenkins executable on OSX, but now I want to stop it running. Whenever I kill it, no matter how, it just restarts immediately.
I've tried using the exit command on the jenkins url:
http://localhost:8080/exit
which asks me to post the command, which I do, and the server shuts down as requested. But then it restarts.
I've tried searching for the process id using ps, and force killing it (kill -9 pid), and the server shuts down immediately, as requested. But then it restarts.
I've tried shutting it down via the gui, but unfortunately there doesn't seem to be a way to do that.
There must be a daemon somewhere, making this a general OSX question.
| Just unload the plist using launchctl
sudo launchctl unload /Library/LaunchDaemons/org.jenkins-ci.plist
| Jenkins | 6,959,327 | 94 |
I've been following this guide on configuring GitLab continuous integration with Jenkins.
As part of the process, it is necessary to set the refspec as follows: +refs/heads/*:refs/remotes/origin/* +refs/merge-requests/*/head:refs/remotes/origin/merge-requests/*
Why this is necessary is not explained in the post, so I began looking online for an explanation and looked at the official documentation as well as some related StackOverflow questions like this one.
In spite of this, I'm still confused:
What exactly is a refspec? And why is the above refspec necessary - what does it do?
| A refspec tells git how to map references from a remote to the local repo.
The value you listed was +refs/heads/*:refs/remotes/origin/* +refs/merge-requests/*/head:refs/remotes/origin/merge-requests/*; so let's break that down.
You have two patterns with a space between them; this just means you're giving multiple rules. (The pro git book refers to this as two refspecs; which is probably technically more correct. However, you just about always have the ability to list multiple refspecs if you need to, so in day to day life it likely makes little difference.)
The first pattern, then, is +refs/heads/*:refs/remotes/origin/* which has three parts:
The + means to apply the rule without failure even if doing so would move a target ref in a non-fast-forward manner. I'll come back to that.
The part before the : (but after the + if there is one) is the "source" pattern. That's refs/heads/*, meaning this rule applies to any remote reference under refs/heads (meaning, branches).
The part after the : is the "destination" pattern. That's refs/remotes/origin/*.
So if the origin has a branch master, represented as refs/heads/master, this will create a remote branch reference origin/master represented as refs/remotes/origin/master. And so on for any branch name (*).
So back to that +... suppose the origin has
A --- B <--(master)
You fetch and, applying that refspec you get
A --- B <--(origin/master)
(If you applied typical tracking rules and did a pull you also have master pointed at B.)
A --- B <--(origin/master)(master)
Now some things happen on the remote. Someone maybe did a reset that erased B, then committed C, then forced a push. So the remote says
A --- C <--(master)
When you fetch, you get
A --- B
\
C
and git must decide whether to allow the move of origin/master from B to C. By default it wouldn't allow this because it's not a fast-forward (it would tell you it rejected the pull for that ref), but because the rule starts with + it will accept it.
A --- B <--(master)
\
C <--(origin/master)
(A pull will in this case result in a merge commit.)
The second pattern is similar, but for merge-requests refs (which I assume is related to your server's implementation of PR's; I'm not familiar with it).
More about refspecs: https://git-scm.com/book/en/v2/Git-Internals-The-Refspec
| Jenkins | 44,333,437 | 93 |
Sorry for the 'svn' style - we are in a process of migration from SVN to GIT (including our CI Jenkins environment).
What we need is to be able to make Jenkins to checkout (or should I say clone?) the GIT project (repository?) into a specific directory. We've tried some refspecs magic but it wasn't too obvious to understand and use successfully.
Furthermore, if in the same Jenkins project we need to check out several private GitHub repositories into several separate dirs under a project root, how can we do that?
We have the GitHub plugin installed. Hope we've phrased things right.
| In the new Jenkins 2.0 pipeline (previously named the Workflow Plugin), this is done differently for:
The main repository
Other additional repositories
Here I am specifically referring to the Multibranch Pipeline version 2.9.
Main repository
This is the repository that contains your Jenkinsfile.
In the Configure screen for your pipeline project, enter your repository name, etc.
Do not use Additional Behaviors > Check out to a sub-directory. This will put your Jenkinsfile in the sub-directory where Jenkins cannot find it.
In Jenkinsfile, check out the main repository in the subdirectory using dir():
dir('subDir') {
checkout scm
}
Additional repositories
If you want to check out more repositories, use the Pipeline Syntax generator to automatically generate a Groovy code snippet.
In the Configure screen for your pipeline project:
Select Pipeline Syntax. In the Sample Step drop-down menu, choose checkout: General SCM.
Select your SCM system, such as Git. Fill in the usual information about your repository or depot.
Note that in the Multibranch Pipeline, the environment variable env.BRANCH_NAME contains the branch name of the main repository.
In the Additional Behaviors drop-down menu, select Check out to a sub-directory.
Click Generate Groovy. Jenkins will display the Groovy code snippet corresponding to the SCM checkout that you specified.
Copy this code into your pipeline script or Jenkinsfile.
I need to know which branch is being built in my Jenkins multibranch pipeline in order for it to run steps correctly.
We are using a gitflow pattern with dev, release, and master branches that all are used to create artifacts. The dev branch auto deploys, the other two do not. Also there are feature, bugfix and hotfix branches. These branches should be built, but not produce an artifact. They should just be used to inform the developer if there is a problem with their code.
In a standard build, I have access to the $GIT_BRANCH variable to know which branch is being built, but that variable isn't set in my multibranch pipeline. I have tried env.GIT_BRANCH too, and I tried to pass $GIT_BRANCH as a parameter to the build. Nothing seems to work. I assumed that since the build knows about the branch being built (I can see the branch name at the top of the console output) that there is something that I can use - I just can't find any reference to it.
| The env.BRANCH_NAME variable contains the branch name.
As of Pipeline Groovy Plugin 2.18, you can also just use BRANCH_NAME
(env isn't required but still accepted.)
| Jenkins | 32,789,619 | 92 |
I recently updated the configuration of one of my hudson builds. The build history is out of sync. Is there a way to clear my build history?
Please and thank you
| Use the script console (Manage Jenkins > Script Console) and something like this script to bulk delete a job's build history https://github.com/jenkinsci/jenkins-scripts/blob/master/scriptler/bulkDeleteBuilds.groovy
That script assumes you want to only delete a range of builds. To delete all builds for a given job, use this (tested):
// change this variable to match the name of the job whose builds you want to delete
def jobName = "Your Job Name"
def job = Jenkins.instance.getItem(jobName)
job.getBuilds().each { it.delete() }
// uncomment these lines to reset the build number to 1:
//job.nextBuildNumber = 1
//job.save()
| Jenkins | 3,410,141 | 92 |
I am creating a sample jenkins pipeline, here is the code.
pipeline {
agent any
stages {
stage('test') {
steps {
sh 'echo hello'
}
}
stage('test1') {
steps {
sh 'echo $TEST'
}
}
stage('test3') {
if (env.BRANCH_NAME == 'master') {
echo 'I only execute on the master branch'
} else {
echo 'I execute elsewhere'
}
}
}
}
this pipeline fails with following error logs
Started by user admin
org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
WorkflowScript: 15: Not a valid stage section definition: "if (env.BRANCH_NAME == 'master') {
echo 'I only execute on the master branch'
} else {
echo 'I execute elsewhere'
}". Some extra configuration is required. @ line 15, column 9.
stage('test3') {
^
WorkflowScript: 15: Nothing to execute within stage "test3" @ line 15, column 9.
stage('test3') {
^
But when I execute the following example from this URL, it executes successfully and prints the else part.
node {
stage('Example') {
if (env.BRANCH_NAME == 'master') {
echo 'I only execute on the master branch'
} else {
echo 'I execute elsewhere'
}
}
}
The only difference I can see is that the working example has no stages block, but mine does.
What is wrong here? Can anyone please suggest?
| Your first try is using declarative pipelines, and the second working one is using scripted pipelines. You need to enclose steps in a steps declaration, and you can't use if as a top-level step in declarative, so you need to wrap it in a script step. Here's a working declarative version:
pipeline {
agent any
stages {
stage('test') {
steps {
sh 'echo hello'
}
}
stage('test1') {
steps {
sh 'echo $TEST'
}
}
stage('test3') {
steps {
script {
if (env.BRANCH_NAME == 'master') {
echo 'I only execute on the master branch'
} else {
echo 'I execute elsewhere'
}
}
}
}
}
}
You can simplify this and potentially avoid the if statement (as long as you don't need the else) by using "when". See "when directive" at https://jenkins.io/doc/book/pipeline/syntax/. You can also validate Jenkinsfiles using the Jenkins REST API; it's super sweet. Have fun with declarative pipelines in Jenkins!
| Jenkins | 43,587,964 | 91 |
I have two Jenkins pipelines, let's say pipeline-A and pipeline-B. I want to invoke pipeline-A in pipeline-B. How can I do this?
(pipeline-A is a subset of pipeline-B. Pipeline-A is responsible for doing some routine stuff which can be reused in pipeline-B)
I have installed Jenkins 2.41 on my machine.
| Following solution works for me:
pipeline {
agent
{
node {
label 'master'
customWorkspace "${env.JobPath}"
}
}
stages
{
stage('Start') {
steps {
sh 'ls'
}
}
stage ('Invoke_pipeline') {
steps {
build job: 'pipeline1', parameters: [
string(name: 'param1', value: "value1")
]
}
}
stage('End') {
steps {
sh 'ls'
}
}
}
}
Adding link of the official documentation of "Pipeline: Build Step" here:
https://jenkins.io/doc/pipeline/steps/pipeline-build-step/
| Jenkins | 43,337,070 | 91 |
I've created a Jenkins pipeline and it is pulling the pipeline script from SCM.
I set the branch specifier to 'all', so it builds on any change to any branch.
How do I access the branch name causing this build from the Jenkinsfile?
Everything I have tried echoes out null except
sh(returnStdout: true, script: 'git rev-parse --abbrev-ref HEAD').trim()
which is always master.
| Use the multibranch pipeline job type, not the plain pipeline job type. The multibranch pipeline jobs possess the environment variable env.BRANCH_NAME which describes the branch.
In my script..
stage('Build') {
node {
echo 'Pulling...' + env.BRANCH_NAME
checkout scm
}
}
Yields...
Pulling...master
| Jenkins | 42,383,273 | 91 |
I have a report file I'm generating, and I would like to be able to add the current build number to that file within a Jenkins job. Is there an environment variable or plugin I can use to get at the current build number?
| BUILD_NUMBER is the current build number. You can use it in the command you execute for the job, or just use it in the script your job executes.
See the Jenkins documentation for the full list of available environment variables. The list is also available from within your Jenkins instance at http://hostname/jenkins/env-vars.html.
| Jenkins | 7,167,650 | 91 |
I run Jenkins in its own container. I use the command "nohup java -jar jenkins.war --httpsPort=8443".
How do I shut it down safely? Right now, I use the kill command to kill the process.
| Use http://[jenkins-server]/exit
This page shows how to use URL commands.
| Jenkins | 10,238,604 | 89 |
I have the following code within a Jenkins pipeline:
stage ('Question') {
try {
timeout(time: 1, unit: 'MINUTES') {
userInput = input message: 'Choose server to publish to:', ok: '', parameters: [
[$class: 'hudson.model.ChoiceParameterDefinition', choices: 'pc-ensureint\nother-server', description: 'Choose server to publish to:', name: 'server']
]
}
} catch (err) {
userInput = [server: 'pc-ensureint'] // if an error is caught set this value
}
}
node () {
println ${server}
}
I'm trying to troubleshoot a problem with the server variable which is set in the ChoiceParameterDefinition.
When I run the build, I get the following error:
java.lang.NoSuchMethodError: No such DSL method '$' found among steps [AddInteractivePromotion, ArtifactoryGradleBuild, ArtifactoryMavenBuild, ConanAddRemote, ConanAddUser, InitConanClient, MavenDescriptorStep, RunConanCommand, ansiblePlaybook, archive, artifactoryDownload, artifactoryPromoteBuild, artifactoryUpload, bat, build, catchError, checkout, collectEnv, deleteDir, dir, dockerFingerprintFrom, dockerFingerprintRun, dockerPullStep, dockerPushStep, echo, emailext, emailextrecipients, envVarsForTool, error, fileExists, getArtifactoryServer, getContext, getDatabaseConnection, git, input, isUnix, library, libraryResource, load, mail, milestone, newArtifactoryServer, newBuildInfo, newGradleBuild, newMavenBuild, node, parallel, properties, publishBuildInfo, pwd, readFile, readTrusted, resolveScm, retry, script, sh, sleep, sql, stage, stash, step, svn, timeout, timestamps, tool, unarchive, unstash, validateDeclarativePipeline, waitForQualityGate, waitUntil, withContext, withCredentials, withDockerContainer, withDockerRegistry, withDockerServer, withEnv, wrap, writeFile, ws, xrayScanBuild] or symbols [all, allOf, always, ant, antFromApache, antOutcome, antTarget, any, anyOf, apiToken, architecture, archiveArtifacts, artifactManager, batchFile, booleanParam, branch, buildButton, buildDiscarder, caseInsensitive, caseSensitive, choice, choiceParam, cleanWs, clock, cloud, command, configFile, configFileProvider, cron, crumb, defaultView, demand, disableConcurrentBuilds, docker, dockerfile, downloadSettings, downstream, dumb, envVars, environment, expression, file, fileParam, filePath, fingerprint, frameOptions, freeStyle, freeStyleJob, git, github, githubPush, gradle, hyperlink, hyperlinkToModels, installSource, jdk, jdkInstaller, jgit, jgitapache, jnlp, jobName, junit, label, lastDuration, lastFailure, lastGrantedAuthorities, lastStable, lastSuccess, legacy, legacySCM, list, local, location, logRotator, loggedInUsersCanDoAnything, masterBuild, maven, maven3Mojos, mavenErrors, mavenMojos, mavenWarnings, modernSCM, msbuild, msbuildError, msbuildWarning, myView, node, nodeProperties, nonStoredPasswordParam, none, not, overrideIndexTriggers, paneStatus, parameters, password, pattern, pipeline-model, pipelineTriggers, plainText, plugin, pollSCM, projectNamingStrategy, proxy, queueItemAuthenticator, quietPeriod, remotingCLI, run, runParam, schedule, scmRetryCount, search, security, shell, skipDefaultCheckout, skipStagesAfterUnstable, slave, stackTrace, standard, status, string, stringParam, swapSpace, text, textParam, tmpSpace, toolLocation, unsecured, upstream, usernameColonPassword, usernamePassword, viewsTabBar, weather, withSonarQubeEnv, zfs, zip] or globals [Artifactory, currentBuild, docker, env, params, pipeline, scm]
at org.jenkinsci.plugins.workflow.cps.DSL.invokeMethod(DSL.java:149)
at org.jenkinsci.plugins.workflow.cps.CpsScript.invokeMethod(CpsScript.java:108)
at groovy.lang.GroovyObject$invokeMethod.call(Unknown Source)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113)
at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:151)
at org.kohsuke.groovy.sandbox.GroovyInterceptor.onMethodCall(GroovyInterceptor.java:21)
at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onMethodCall(SandboxInterceptor.java:115)
at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:149)
at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:146)
at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:123)
at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:123)
at com.cloudbees.groovy.cps.sandbox.SandboxInvoker.methodCall(SandboxInvoker.java:16)
at WorkflowScript.run(WorkflowScript:16)
at ___cps.transform___(Native Method)
at com.cloudbees.groovy.cps.impl.ContinuationGroup.methodCall(ContinuationGroup.java:57)
at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.dispatchOrArg(FunctionCallBlock.java:109)
at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.fixArg(FunctionCallBlock.java:82)
at sun.reflect.GeneratedMethodAccessor637.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at com.cloudbees.groovy.cps.impl.ContinuationPtr$ContinuationImpl.receive(ContinuationPtr.java:72)
at com.cloudbees.groovy.cps.impl.ClosureBlock.eval(ClosureBlock.java:46)
at com.cloudbees.groovy.cps.Next.step(Next.java:74)
at com.cloudbees.groovy.cps.Continuable.run0(Continuable.java:154)
at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.access$001(SandboxContinuable.java:18)
at org.jenkinsci.plugins.workflow.cps.SandboxContinuable$1.call(SandboxContinuable.java:33)
at org.jenkinsci.plugins.workflow.cps.SandboxContinuable$1.call(SandboxContinuable.java:30)
at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.GroovySandbox.runInSandbox(GroovySandbox.java:108)
at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.run0(SandboxContinuable.java:30)
at org.jenkinsci.plugins.workflow.cps.CpsThread.runNextChunk(CpsThread.java:165)
at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.run(CpsThreadGroup.java:330)
at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.access$100(CpsThreadGroup.java:82)
at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:242)
at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:230)
at org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService$2.call(CpsVmExecutorService.java:64)
at java.util.concurrent.FutureTask.run(Unknown Source)
at hudson.remoting.SingleLaneExecutorService$1.run(SingleLaneExecutorService.java:112)
at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
at java.util.concurrent.FutureTask.run(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Finished: FAILURE
As far as I know, server is a groovy variable and thus I'm supposed to be able to access it using ${ }.
So I've tried:
echo ${server}
print ${server}
println ${server}
println "${server}"
But no matter what I try I keep getting this error.
Any idea what I'm doing wrong?
| The following code worked for me:
echo userInput
The key point is that echo takes the Groovy expression directly, so echo server prints the variable, and ${server} only works as interpolation inside a double-quoted string, e.g. echo "server: ${server}". A bare ${server} outside a string is parsed by Groovy as a call to a method named $, which is exactly the "No such DSL method '$'" error at WorkflowScript:16 in the stack trace above.
| Jenkins | 43,866,369 | 88 |
I get this when running a lot of liquibase scripts against an Oracle server. SomeComputer is me.
Waiting for changelog lock....
Waiting for changelog lock....
Waiting for changelog lock....
Waiting for changelog lock....
Waiting for changelog lock....
Waiting for changelog lock....
Waiting for changelog lock....
Liquibase Update Failed: Could not acquire change log lock. Currently locked by SomeComputer (192.168.15.X) since 2013-03-20 13:39
SEVERE 2013-03-20 16:59:liquibase: Could not acquire change log lock. Currently locked by SomeComputer (192.168.15.X) since 2013-03-20 13:39
liquibase.exception.LockException: Could not acquire change log lock. Currently locked by SomeComputer (192.168.15.X) since 2013-03-20 13:39
at liquibase.lockservice.LockService.waitForLock(LockService.java:81)
at liquibase.Liquibase.tag(Liquibase.java:507)
at liquibase.integration.commandline.Main.doMigration(Main.java:643)
at liquibase.integration.commandline.Main.main(Main.java:116)
Could it be that the maximum number of simultaneous sessions/transactions has been reached? Does anyone have any ideas?
| Sometimes if the update application is abruptly stopped, then the lock remains stuck.
Then running
UPDATE DATABASECHANGELOGLOCK SET LOCKED=0, LOCKGRANTED=null, LOCKEDBY=null where ID=1;
against the database helps.
You may also need to replace LOCKED=0 with LOCKED=FALSE.
Or you can simply drop the DATABASECHANGELOGLOCK table, it will be recreated.
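If you want to script the unlock rather than paste the SQL by hand, here is a minimal JDBC sketch; the JDBC URL and credentials are placeholders, not values from the question. Recent Liquibase versions also provide a releaseLocks command that does the same thing.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class ReleaseLiquibaseLock {
    public static void main(String[] args) throws Exception {
        // placeholder connection details; point this at the locked database
        try (Connection c = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCL", "user", "pass");
             Statement st = c.createStatement()) {
            int rows = st.executeUpdate(
                "UPDATE DATABASECHANGELOGLOCK SET LOCKED=0, LOCKGRANTED=null, LOCKEDBY=null WHERE ID=1");
            System.out.println(rows + " lock row(s) released");
        }
    }
}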
| Liquibase | 15,528,795 | 412 |
I'm trying to find documentation on the supported types that can be used in change log files,
but I cannot find it.
Is there any document, site or something similar where I can find all type-specific issues?
For example, the clob type maps to a different native type in each database, so I have to use something like:
<property name="clob.type" value="clob" dbms="oracle,h2,hsqldb"/>
<property name="clob.type" value="longtext" dbms="mysql"/>
<column name="clob1" type="${clob.type}">
<constraints nullable="true"/>
</column>
I hope there is a resource where all liquibase types are described.
| This is a comprehensive list of all liquibase datatypes and how they are converted for different databases:
boolean
MySQLDatabase: BIT(1)
SQLiteDatabase: BOOLEAN
H2Database: BOOLEAN
PostgresDatabase: BOOLEAN
UnsupportedDatabase: BOOLEAN
DB2Database: SMALLINT
MSSQLDatabase: [bit]
OracleDatabase: NUMBER(1)
HsqlDatabase: BOOLEAN
FirebirdDatabase: SMALLINT
DerbyDatabase: SMALLINT
InformixDatabase: BOOLEAN
SybaseDatabase: BIT
SybaseASADatabase: BIT
tinyint
MySQLDatabase: TINYINT
SQLiteDatabase: TINYINT
H2Database: TINYINT
PostgresDatabase: SMALLINT
UnsupportedDatabase: TINYINT
DB2Database: SMALLINT
MSSQLDatabase: [tinyint]
OracleDatabase: NUMBER(3)
HsqlDatabase: TINYINT
FirebirdDatabase: SMALLINT
DerbyDatabase: SMALLINT
InformixDatabase: TINYINT
SybaseDatabase: TINYINT
SybaseASADatabase: TINYINT
int
MySQLDatabase: INT
SQLiteDatabase: INTEGER
H2Database: INT
PostgresDatabase: INT
UnsupportedDatabase: INT
DB2Database: INTEGER
MSSQLDatabase: [int]
OracleDatabase: INTEGER
HsqlDatabase: INT
FirebirdDatabase: INT
DerbyDatabase: INTEGER
InformixDatabase: INT
SybaseDatabase: INT
SybaseASADatabase: INT
mediumint
MySQLDatabase: MEDIUMINT
SQLiteDatabase: MEDIUMINT
H2Database: MEDIUMINT
PostgresDatabase: MEDIUMINT
UnsupportedDatabase: MEDIUMINT
DB2Database: MEDIUMINT
MSSQLDatabase: [int]
OracleDatabase: MEDIUMINT
HsqlDatabase: MEDIUMINT
FirebirdDatabase: MEDIUMINT
DerbyDatabase: MEDIUMINT
InformixDatabase: MEDIUMINT
SybaseDatabase: MEDIUMINT
SybaseASADatabase: MEDIUMINT
bigint
MySQLDatabase: BIGINT
SQLiteDatabase: BIGINT
H2Database: BIGINT
PostgresDatabase: BIGINT
UnsupportedDatabase: BIGINT
DB2Database: BIGINT
MSSQLDatabase: [bigint]
OracleDatabase: NUMBER(38, 0)
HsqlDatabase: BIGINT
FirebirdDatabase: BIGINT
DerbyDatabase: BIGINT
InformixDatabase: INT8
SybaseDatabase: BIGINT
SybaseASADatabase: BIGINT
float
MySQLDatabase: FLOAT
SQLiteDatabase: FLOAT
H2Database: FLOAT
PostgresDatabase: FLOAT
UnsupportedDatabase: FLOAT
DB2Database: FLOAT
MSSQLDatabase: [float](53)
OracleDatabase: FLOAT
HsqlDatabase: FLOAT
FirebirdDatabase: FLOAT
DerbyDatabase: FLOAT
InformixDatabase: FLOAT
SybaseDatabase: FLOAT
SybaseASADatabase: FLOAT
double
MySQLDatabase: DOUBLE
SQLiteDatabase: DOUBLE
H2Database: DOUBLE
PostgresDatabase: DOUBLE PRECISION
UnsupportedDatabase: DOUBLE
DB2Database: DOUBLE
MSSQLDatabase: [float](53)
OracleDatabase: FLOAT(24)
HsqlDatabase: DOUBLE
FirebirdDatabase: DOUBLE PRECISION
DerbyDatabase: DOUBLE
InformixDatabase: DOUBLE PRECISION
SybaseDatabase: DOUBLE
SybaseASADatabase: DOUBLE
decimal
MySQLDatabase: DECIMAL
SQLiteDatabase: DECIMAL
H2Database: DECIMAL
PostgresDatabase: DECIMAL
UnsupportedDatabase: DECIMAL
DB2Database: DECIMAL
MSSQLDatabase: [decimal](18, 0)
OracleDatabase: DECIMAL
HsqlDatabase: DECIMAL
FirebirdDatabase: DECIMAL
DerbyDatabase: DECIMAL
InformixDatabase: DECIMAL
SybaseDatabase: DECIMAL
SybaseASADatabase: DECIMAL
number
MySQLDatabase: numeric
SQLiteDatabase: NUMBER
H2Database: NUMBER
PostgresDatabase: numeric
UnsupportedDatabase: NUMBER
DB2Database: numeric
MSSQLDatabase: [numeric](18, 0)
OracleDatabase: NUMBER
HsqlDatabase: numeric
FirebirdDatabase: numeric
DerbyDatabase: numeric
InformixDatabase: numeric
SybaseDatabase: numeric
SybaseASADatabase: numeric
blob
MySQLDatabase: LONGBLOB
SQLiteDatabase: BLOB
H2Database: BLOB
PostgresDatabase: BYTEA
UnsupportedDatabase: BLOB
DB2Database: BLOB
MSSQLDatabase: [varbinary](MAX)
OracleDatabase: BLOB
HsqlDatabase: BLOB
FirebirdDatabase: BLOB
DerbyDatabase: BLOB
InformixDatabase: BLOB
SybaseDatabase: IMAGE
SybaseASADatabase: LONG BINARY
function
MySQLDatabase: FUNCTION
SQLiteDatabase: FUNCTION
H2Database: FUNCTION
PostgresDatabase: FUNCTION
UnsupportedDatabase: FUNCTION
DB2Database: FUNCTION
MSSQLDatabase: [function]
OracleDatabase: FUNCTION
HsqlDatabase: FUNCTION
FirebirdDatabase: FUNCTION
DerbyDatabase: FUNCTION
InformixDatabase: FUNCTION
SybaseDatabase: FUNCTION
SybaseASADatabase: FUNCTION
UNKNOWN
MySQLDatabase: UNKNOWN
SQLiteDatabase: UNKNOWN
H2Database: UNKNOWN
PostgresDatabase: UNKNOWN
UnsupportedDatabase: UNKNOWN
DB2Database: UNKNOWN
MSSQLDatabase: [UNKNOWN]
OracleDatabase: UNKNOWN
HsqlDatabase: UNKNOWN
FirebirdDatabase: UNKNOWN
DerbyDatabase: UNKNOWN
InformixDatabase: UNKNOWN
SybaseDatabase: UNKNOWN
SybaseASADatabase: UNKNOWN
datetime
MySQLDatabase: datetime
SQLiteDatabase: TEXT
H2Database: TIMESTAMP
PostgresDatabase: TIMESTAMP WITHOUT TIME ZONE
UnsupportedDatabase: datetime
DB2Database: TIMESTAMP
MSSQLDatabase: [datetime]
OracleDatabase: TIMESTAMP
HsqlDatabase: TIMESTAMP
FirebirdDatabase: TIMESTAMP
DerbyDatabase: TIMESTAMP
InformixDatabase: DATETIME YEAR TO FRACTION(5)
SybaseDatabase: datetime
SybaseASADatabase: datetime
time
MySQLDatabase: time
SQLiteDatabase: time
H2Database: time
PostgresDatabase: TIME WITHOUT TIME ZONE
UnsupportedDatabase: time
DB2Database: time
MSSQLDatabase: [time](7)
OracleDatabase: DATE
HsqlDatabase: time
FirebirdDatabase: time
DerbyDatabase: time
InformixDatabase: INTERVAL HOUR TO FRACTION(5)
SybaseDatabase: time
SybaseASADatabase: time
timestamp
MySQLDatabase: timestamp
SQLiteDatabase: TEXT
H2Database: TIMESTAMP
PostgresDatabase: TIMESTAMP WITHOUT TIME ZONE
UnsupportedDatabase: timestamp
DB2Database: timestamp
MSSQLDatabase: [datetime]
OracleDatabase: TIMESTAMP
HsqlDatabase: TIMESTAMP
FirebirdDatabase: TIMESTAMP
DerbyDatabase: TIMESTAMP
InformixDatabase: DATETIME YEAR TO FRACTION(5)
SybaseDatabase: datetime
SybaseASADatabase: timestamp
date
MySQLDatabase: date
SQLiteDatabase: date
H2Database: date
PostgresDatabase: date
UnsupportedDatabase: date
DB2Database: date
MSSQLDatabase: [date]
OracleDatabase: date
HsqlDatabase: date
FirebirdDatabase: date
DerbyDatabase: date
InformixDatabase: date
SybaseDatabase: date
SybaseASADatabase: date
char
MySQLDatabase: CHAR
SQLiteDatabase: CHAR
H2Database: CHAR
PostgresDatabase: CHAR
UnsupportedDatabase: CHAR
DB2Database: CHAR
MSSQLDatabase: [char](1)
OracleDatabase: CHAR
HsqlDatabase: CHAR
FirebirdDatabase: CHAR
DerbyDatabase: CHAR
InformixDatabase: CHAR
SybaseDatabase: CHAR
SybaseASADatabase: CHAR
varchar
MySQLDatabase: VARCHAR
SQLiteDatabase: VARCHAR
H2Database: VARCHAR
PostgresDatabase: VARCHAR
UnsupportedDatabase: VARCHAR
DB2Database: VARCHAR
MSSQLDatabase: [varchar](1)
OracleDatabase: VARCHAR2
HsqlDatabase: VARCHAR
FirebirdDatabase: VARCHAR
DerbyDatabase: VARCHAR
InformixDatabase: VARCHAR
SybaseDatabase: VARCHAR
SybaseASADatabase: VARCHAR
nchar
MySQLDatabase: NCHAR
SQLiteDatabase: NCHAR
H2Database: NCHAR
PostgresDatabase: NCHAR
UnsupportedDatabase: NCHAR
DB2Database: NCHAR
MSSQLDatabase: [nchar](1)
OracleDatabase: NCHAR
HsqlDatabase: CHAR
FirebirdDatabase: NCHAR
DerbyDatabase: NCHAR
InformixDatabase: NCHAR
SybaseDatabase: NCHAR
SybaseASADatabase: NCHAR
nvarchar
MySQLDatabase: NVARCHAR
SQLiteDatabase: NVARCHAR
H2Database: NVARCHAR
PostgresDatabase: VARCHAR
UnsupportedDatabase: NVARCHAR
DB2Database: NVARCHAR
MSSQLDatabase: [nvarchar](1)
OracleDatabase: NVARCHAR2
HsqlDatabase: VARCHAR
FirebirdDatabase: NVARCHAR
DerbyDatabase: VARCHAR
InformixDatabase: NVARCHAR
SybaseDatabase: NVARCHAR
SybaseASADatabase: NVARCHAR
clob
MySQLDatabase: LONGTEXT
SQLiteDatabase: TEXT
H2Database: CLOB
PostgresDatabase: TEXT
UnsupportedDatabase: CLOB
DB2Database: CLOB
MSSQLDatabase: [varchar](MAX)
OracleDatabase: CLOB
HsqlDatabase: CLOB
FirebirdDatabase: BLOB SUB_TYPE TEXT
DerbyDatabase: CLOB
InformixDatabase: CLOB
SybaseDatabase: TEXT
SybaseASADatabase: LONG VARCHAR
currency
MySQLDatabase: DECIMAL
SQLiteDatabase: REAL
H2Database: DECIMAL
PostgresDatabase: DECIMAL
UnsupportedDatabase: DECIMAL
DB2Database: DECIMAL(19, 4)
MSSQLDatabase: [money]
OracleDatabase: NUMBER(15, 2)
HsqlDatabase: DECIMAL
FirebirdDatabase: DECIMAL(18, 4)
DerbyDatabase: DECIMAL
InformixDatabase: MONEY
SybaseDatabase: MONEY
SybaseASADatabase: MONEY
uuid
MySQLDatabase: char(36)
SQLiteDatabase: TEXT
H2Database: UUID
PostgresDatabase: UUID
UnsupportedDatabase: char(36)
DB2Database: char(36)
MSSQLDatabase: [uniqueidentifier]
OracleDatabase: RAW(16)
HsqlDatabase: char(36)
FirebirdDatabase: char(36)
DerbyDatabase: char(36)
InformixDatabase: char(36)
SybaseDatabase: UNIQUEIDENTIFIER
SybaseASADatabase: UNIQUEIDENTIFIER
For reference, this is the groovy script I've used to generate this output:
@Grab('org.liquibase:liquibase-core:3.5.1')
import liquibase.database.core.*
import liquibase.datatype.core.*
def datatypes = [BooleanType,TinyIntType,IntType,MediumIntType,BigIntType,FloatType,DoubleType,DecimalType,NumberType,BlobType,DatabaseFunctionType,UnknownType,DateTimeType,TimeType,TimestampType,DateType,CharType,VarcharType,NCharType,NVarcharType,ClobType,CurrencyType,UUIDType]
def databases = [MySQLDatabase, SQLiteDatabase, H2Database, PostgresDatabase, UnsupportedDatabase, DB2Database, MSSQLDatabase, OracleDatabase, HsqlDatabase, FirebirdDatabase, DerbyDatabase, InformixDatabase, SybaseDatabase, SybaseASADatabase]
datatypes.each {
def datatype = it.newInstance()
datatype.finishInitialization("")
println datatype.name
databases.each { println "$it.simpleName: ${datatype.toDatabaseDataType(it.newInstance())}"}
println ''
}
| Liquibase | 16,890,723 | 122 |
I have searched for this answer on stack overflow, but I couldn't find any questions on this.
I am new to Liquibase and want to learn
Why Liquibase?
When exactly one should use Liquibase in the project?
I know that this is to keep all database changes in one place, but something similar can be done by keeping simple SQL files in some repository system and updating them over time.
| The key differentiator between a self-managed schema create file and Liquibase (or other schema migration tools) is that the latter provides a schema changelog. This is a record of the schema changes over time. It allows the database designer to specify changes in schema & enables programmatic upgrade or downgrade of the schema on demand.
There are other benefits, such as:
Database vendor independence (this is questionable, but they try)
automated documentation
database schema diffs
One alternative tool is flyway.
You would choose to use a schema migration tool when you want or need to automatically manage schema updates without losing data. That is, you expect the schema to change after your system has been deployed to a long-lived environment such as a customer site or stable test environment.
| Liquibase | 29,760,629 | 118 |
Maven fires a liquibase validation failure even though no changes were made in the changeset.
My database is oracle.
Situation:
The DB changelog table contained a record for the changeset <changeSet id="1" author="me" dbms="oracle">;
Then by mistake I added another changeset <changeSet id="1" author="me" dbms="hsqldb">
Re-running the liquibase scripts made Maven fire a checksum validation error.
Then I changed the hsqldb changeSet to <changeSet id="2" author="me" dbms="hsqldb">
Maven still fired the checksum validation error.
Then I manually changed the first changeSet's checksum in the DB to the current checksum, and the scripts ran successfully.
Everything looks fine, but when I redeploy the whole application and run the liquibase scripts, the checksum of the first changeSet is still the same as before step 6.
| If you're confident that your scripts correctly reflect what should be in the database, run the liquibase:clearCheckSums maven goal (the plain command-line equivalent is liquibase clearCheckSums), which will clean it all up.
| Liquibase | 9,995,747 | 72 |
We're using Liquibase 3.2 with Java 6. Is there a way I can force Liquibase to recalculate checksums without re-running the same statements from our Liquibase files? In our database, I run this ...
update DATABASECHANGELOG set md5sum = null where 1;
However, when I run my Liquibase change scripts, certain executions still fail with the following errors ...
invoking liquibase change script with file /tmp/deploywork/db.changelog-master.xml
running /usr/java/liquibase/liquibase --logLevel=info --driver=com.mysql.jdbc.Driver --classpath=/usr/java/jboss/modules/com/mysql/main/mysql-connector-java-5.1.22-bin.jar --changeLogFile=/tmp/deploywork/db.changelog-master.xml --url="jdbc:mysql://myservername:3306/my_db" --username=username --password=password update
INFO 5/13/15 2:15 PM: liquibase: Successfully acquired change log lock
INFO 5/13/15 2:15 PM: liquibase: Reading from my_db.DATABASECHANGELOG
INFO 5/13/15 2:15 PM: liquibase: Successfully released change log lock
Unexpected error running Liquibase: Validation Failed:
3 change sets check sum
db.changelog-1.0.xml::1357593229391-25::rob (generated) is now: 7:5cfe9ecd779a71b6287ef2360a6979bf
db.changelog-7.0.xml::create-address-email-index::davea is now: 7:da0132e30ebd6a1bc52d9a39bb8c56d7
db.changelog-7.0.xml::add-myproject-event-object-id-col::davea is now: 7:2eab5d784647ce33ef3488aa8c383443
SEVERE 5/13/15 2:15 PM: liquibase: Validation Failed:
3 change sets check sum
db.changelog-1.0.xml::1357593229391-25::rob (generated) is now: 7:5cfe9ecd779a71b6287ef2360a6979bf
db.changelog-7.0.xml::create-address-email-index::davea is now: 7:da0132e30ebd6a1bc52d9a39bb8c56d7
db.changelog-7.0.xml::add-myproject-event-object-id-col::davea is now: 7:2eab5d784647ce33ef3488aa8c383443
liquibase.exception.ValidationFailedException: Validation Failed:
3 change sets check sum
db.changelog-1.0.xml::1357593229391-25::rob (generated) is now: 7:5cfe9ecd779a71b6287ef2360a6979bf
db.changelog-7.0.xml::create-address-email-index::davea is now: 7:da0132e30ebd6a1bc52d9a39bb8c56d7
db.changelog-7.0.xml::add-myproject-event-object-id-col::davea is now: 7:2eab5d784647ce33ef3488aa8c383443
at liquibase.changelog.DatabaseChangeLog.validate(DatabaseChangeLog.java:181)
at liquibase.Liquibase.update(Liquibase.java:191)
at liquibase.Liquibase.update(Liquibase.java:174)
at liquibase.integration.commandline.Main.doMigration(Main.java:997)
at liquibase.integration.commandline.Main.run(Main.java:170)
at liquibase.integration.commandline.Main.main(Main.java:89)
Here is one of the change sets that the script is complaining about …
<changeSet author="davea" id="add-myproject-event-object-id-col">
<addColumn tableName="sb_myproject_event">
<column name="OBJECT_ID" type="VARCHAR(32)"/>
</addColumn>
<createIndex indexName="SB_myproject_EVENT_IDX"
tableName="sb_myproject_event"
unique="false">
<column name="OBJECT_ID" type="varchar(32)" />
</createIndex>
<sql>update sb_myproject_event set object_id=LEFT(SUBSTRING_INDEX(event_data, '"id":"', -2), 24) where object_id is null and event_data is not null;</sql>
<!-- Delete older events that no longer need to be processed -->
<sql>delete from sb_myproject_event where id not in (select q.* from (select e.id FROM sb_myproject_event e, (select object_id, max(date_processed) d from sb_myproject_event group by object_id) o where e.object_id = o.object_id and e.date_processed = o.d) q);</sql>
</changeSet>
As I said, I only want to recalculate checksums (have to do this because we're changing Liquibase versions).
| Rather than clearing the checksums yourself using SQL, it will probably be better to let Liquibase do that by using the clearCheckSums command:
https://docs.liquibase.com/commands/community/clearchecksums.html
Removes current checksums from database. On next run checksums will be recomputed.
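If you drive Liquibase from Java rather than the command line, the same operation is exposed on the Liquibase API. A minimal sketch follows; the changelog path is an assumption, and the Database object is built as shown in the embedding-Liquibase answer further down in this document.
import liquibase.Liquibase;
import liquibase.database.Database;
import liquibase.resource.ClassLoaderResourceAccessor;

public class ChecksumReset {
    static void clearCheckSums(Database database) throws Exception {
        // nulls the stored checksums; they are recomputed on the next update run
        new Liquibase("db.changelog-master.xml", new ClassLoaderResourceAccessor(), database)
                .clearCheckSums();
    }
}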
| Liquibase | 30,219,947 | 64 |
Is there a way in liquibase to create java code change set (i.e. provide a java class, which will receive a JDBC connection and will perform some changes in the database) ?
(I know that flyway has such feature)
| Yes, there is such feature. You can create a customChange:
<customChange class="my.java.Class">
<param name="id" value="2" />
</customChange>
The class must implement the liquibase.change.custom.CustomTaskChange interface.
@Override
public void execute(final Database arg0) throws CustomChangeException {
JdbcConnection dbConn = (JdbcConnection) arg0.getConnection();
try {
... do funny stuff ...
} catch (Exception e) {
throw new CustomChangeException(e); // re-throw instead of swallowing, so Liquibase marks the changeSet as failed
}
}
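For completeness, here is a hedged sketch of a full implementation. The package/class name and the SQL inside execute are illustrative assumptions (the class attribute in the changeSet XML above must point at whatever class you actually write); the method set comes from the liquibase.change.custom.CustomTaskChange interface, and the id parameter from the XML is injected through its setter.
package my.java;

import java.sql.Statement;
import liquibase.change.custom.CustomTaskChange;
import liquibase.database.Database;
import liquibase.database.jvm.JdbcConnection;
import liquibase.exception.CustomChangeException;
import liquibase.exception.SetupException;
import liquibase.exception.ValidationErrors;
import liquibase.resource.ResourceAccessor;

public class MyTaskChange implements CustomTaskChange {

    private String id; // filled in from <param name="id" value="2"/>

    public void setId(String id) { this.id = id; }

    @Override
    public void execute(Database database) throws CustomChangeException {
        JdbcConnection conn = (JdbcConnection) database.getConnection();
        try (Statement st = conn.createStatement()) {
            st.executeUpdate("DELETE FROM some_table WHERE id = " + id); // illustrative work only
        } catch (Exception e) {
            throw new CustomChangeException(e); // surfaces the failure to Liquibase
        }
    }

    @Override
    public String getConfirmationMessage() { return "Custom task ran for id " + id; }

    @Override
    public void setUp() throws SetupException { /* no-op */ }

    @Override
    public void setFileOpener(ResourceAccessor resourceAccessor) { /* no-op */ }

    @Override
    public ValidationErrors validate(Database database) { return new ValidationErrors(); }
}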
| Liquibase | 11,987,460 | 59 |
I'm creating a link table which has 3 columns; id, product_id, tournament_id.
Adding a uniqueConstraint to the "id" column is trivial, but I want to ensure that any pair of (product_id, tournament_id) is unique.
The example at Liquibase.org shows
<changeSet author="liquibase-docs" id="addUniqueConstraint-example">
<addUniqueConstraint catalogName="cat"
columnNames="id, name"
constraintName="const_name"
deferrable="true"
disabled="true"
initiallyDeferred="true"
schemaName="public"
tableName="person"
tablespace="A String"/>
</changeSet>
but is it possible to accomplish this within a <createTable> block?
Also, just to confirm; does this create a composite unique constraint on the two columns, or does it create two separate unique constraints?
| You can read the liquibase manual; you can also find a similar problem here
In your case it should be:
<changeSet author="liquibase-docs" id="addUniqueConstraint-example">
<addUniqueConstraint
columnNames="product_id, tournament_id"
constraintName="your_constraint_name"
tableName="person"
/>
</changeSet>
This creates a single composite unique constraint over the (product_id, tournament_id) pair, not two separate single-column constraints. As far as I know you cannot declare a multi-column unique constraint inside the createTable block itself, so a separate addUniqueConstraint change (it can live in the same changeSet as the createTable) is the usual approach.
| Liquibase | 28,192,652 | 46 |
I was hoping if someone could verify if this is the correct syntax and correct way of populating the DB using liquibase? All, I want is to change value of a row in a table and I'm doing it like this:
<changeSet author="name" id="1231">
<update tableName="SomeTable">
<column name="Properties" value="1" />
<where>PROPERTYNAME = 'someNameOfThePropery"</where>
</update>
<changeSet>
All I want is to change one value in a row in some table. The above doesn't work: although the application compiled and didn't complain, the value wasn't changed.
Thank you
| The above answers are overly complicated; for most cases this is enough:
<changeSet author="name" id="123">
<update tableName="SomeTable">
<column name="PropertyToSet" value="1" />
<where>otherProperty = 'otherPropertyValue'</where>
</update>
</changeSet>
It's important to use single quotes ' and not double quotes " in the WHERE clause.
| Liquibase | 16,627,627 | 42 |
For some reason there's no documentation on running liquibase inside Java code. I want to generate tables for Unit tests.
How would I run it directly in Java?
e.g.
Liquibase liquibase = new Liquibase()
liquibase.runUpdates() ?
| It should be something like (taken from liquibase.integration.spring.SpringLiquibase source):
// imports needed by this snippet
import java.sql.SQLException;
import liquibase.Liquibase;
import liquibase.database.Database;
import liquibase.database.DatabaseFactory;
import liquibase.database.jvm.JdbcConnection;
import liquibase.exception.DatabaseException;
import liquibase.resource.FileSystemResourceAccessor;

java.sql.Connection c = YOUR_CONNECTION;
Liquibase liquibase = null;
try {
    Database database = DatabaseFactory.getInstance().findCorrectDatabaseImplementation(new JdbcConnection(c));
    liquibase = new Liquibase(YOUR_CHANGELOG, new FileSystemResourceAccessor(), database);
    liquibase.update(""); // the String argument is the context expression; empty means "run everything"
} catch (SQLException e) {
    throw new DatabaseException(e);
} finally {
    if (c != null) {
        try {
            c.rollback();
            c.close();
        } catch (SQLException e) {
            // nothing to do
        }
    }
}
There are multiple implementations of ResourceAccessor, depending on how your changelog files should be found.
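As a usage sketch, here is the same flow wired up end to end against an in-memory H2 database. The JDBC URL and changelog path are assumptions, the H2 driver must be on the classpath, and ClassLoaderResourceAccessor loads the changelog from the classpath instead of the file system.
import java.sql.Connection;
import java.sql.DriverManager;
import liquibase.Liquibase;
import liquibase.database.Database;
import liquibase.database.DatabaseFactory;
import liquibase.database.jvm.JdbcConnection;
import liquibase.resource.ClassLoaderResourceAccessor;

public class TestSchemaBootstrap {
    public static void main(String[] args) throws Exception {
        try (Connection c = DriverManager.getConnection("jdbc:h2:mem:testdb", "sa", "")) {
            Database database = DatabaseFactory.getInstance()
                    .findCorrectDatabaseImplementation(new JdbcConnection(c));
            Liquibase liquibase = new Liquibase("changelog.xml",
                    new ClassLoaderResourceAccessor(), database);
            liquibase.update(""); // empty context expression applies every changeSet
        }
    }
}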
| Liquibase | 10,620,131 | 41 |
I am pretty new to ES. I have been trying to search for a db migration tool for long and I could not find one. I am wondering if anyone could help to point me to the right direction.
I would be using Elasticsearch as a primary datastore in my project. I would like to version all mapping and configuration changes / data import / data upgrades scripts which I run as I develop new modules in my project.
In the past I used database versioning tools like Flyway or Liquibase.
Are there any frameworks / scripts or methods I could use with ES to achieve something similar ?
Does anyone have any experience doing this by hand using scripts and run migration scripts at least upgrade scripts.
Thanks in advance!
| From this point of view/need, ES has some huge limitations:
despite having dynamic mapping, ES is not schemaless but schema-intensive. Mappings can't be changed when the change conflicts with existing documents (practically, if any document has a non-null value in a field the new mapping affects, the change will result in an exception)
documents in ES are immutable: once you've indexed one, you can only retrieve or delete it. The syntactic sugar around this is partial update, which performs a thread-safe delete + index (with the same id) on the ES side
What does that mean in the context of your question? You basically can't have classic migration tools for ES. And here's what can make your work with ES easier:
use strict mapping ("dynamic": "strict" and/or index.mapper.dynamic: false, take a look at the mapping docs). This will protect your indexes/types from being accidentally dynamically mapped with a wrong type
you get an explicit error whenever your data doesn't match the mapping
you can fetch the actual ES mapping and compare it with your data models. If your programming language has a high-enough-level library for ES, this should be pretty easy
you can leverage index aliases for migrations
So, a little bit of experience. For me, a currently reasonable flow is this:
All data structures are described as models in code. These models actually provide the ORM abstraction too.
The index/mapping creation call is a simple method on the model.
Every index has an alias (i.e. news) which points to the actual index (i.e. news_index_{revision}_{date_created}).
Every time code is deployed, you
Try to put the model's (type's) mapping. If it succeeds without error, this means that either
you've put the same mapping, or
you've put a mapping that is a pure superset of the old one (only new fields were added, old ones stay untouched), or
no documents have values in the fields affected by the new mapping
All of this actually means that you're good to go with the mapping/data you have; just work with the data as always.
If ES throws an exception about the new mapping, you
create a new index/type with the new mapping (named like name_{revision}_{date})
redirect your alias to new index
fire up migration code that makes bulk requests for fast reindexing
During this reindexing you can safely index new documents normally through the alias. The drawback is that historical data is partially available during reindexing.
This is a production-tested solution. Caveats of this approach:
you cannot do this if your read requests require consistent historical data
you're required to reindex the whole index. If you have 1 type per index (a viable solution) then it's fine, but sometimes you need multi-type indexes
data makes a network round trip, which can be painful sometimes
To sum up:
try to have good abstraction in your models, this always helps
try to keep historical data/fields stale. Just build your code with this idea in mind; that's easier than it sounds at first
I strongly recommend avoiding migration tools that rely on experimental ES features. Those can change at any time, like the river-* tools did.
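As a sketch of the alias redirection step described above (host, index and alias names are assumptions), the plain _aliases REST endpoint applies both actions in a single atomic swap, so readers never observe a missing alias:
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class AliasSwap {
    public static void main(String[] args) throws Exception {
        // repoint the "news" alias from the old index to the freshly reindexed one
        String body = "{\"actions\":["
                + "{\"remove\":{\"index\":\"news_index_1\",\"alias\":\"news\"}},"
                + "{\"add\":{\"index\":\"news_index_2\",\"alias\":\"news\"}}]}";
        HttpURLConnection conn =
                (HttpURLConnection) new URL("http://localhost:9200/_aliases").openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        try (OutputStream os = conn.getOutputStream()) {
            os.write(body.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("HTTP " + conn.getResponseCode()); // 200 means both actions applied
    }
}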
| Liquibase | 23,977,688 | 41 |
liquibase is a perfect alternative to hibernate's hbm2ddl_auto property if you are using xml mapping. But I'm using JPA annotations (hibernate annotations). Is it possible to use liquibase then?
| Yes, Liquibase uses hibernate's metadata classes, which are the same whether you use xml mappings or annotations. You do need a hibernate config file to point liquibase to, but your mappings can be xml or jpa annotations. More information can be found at https://github.com/liquibase/liquibase-hibernate/wiki but you can use "database urls" such as
hibernate:classic:com/example/hibernate.cfg.xml
if you have a hibernate xml conf file or
hibernate:ejb3:myPersistenceUnit
if you have a META-INF/persistence.xml, or
hibernate:spring:com.example?dialect=org.hibernate.dialect.MySQL5Dialect
if you would like auto-generate a JPA configuration based on a java package containing annotated Entities.
| Liquibase | 776,787 | 37 |
A lot of people are unsure how to fix logging for liquibase, either to the console or file.
Is it possible to make liquibase log to slf4j?
| There is, but it is a little bit obscure. Quoting Fixing liquibase logging with SLF4J and Log4J:
There's The Easy Way, by dropping in a dependency:
<!-- your own standard logging dependencies -->
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-api</artifactId>
<version>1.7.5</version>
</dependency>
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId><!-- or log4j2 or logback or whatever-->
<version>1.7.5</version>
</dependency>
<!-- special dependency to fix liquibase's logging fetish -->
<dependency>
<groupId>com.mattbertolini</groupId>
<artifactId>liquibase-slf4j</artifactId>
<version>1.2.1</version>
</dependency>
Now the first two are your everyday logging frameworks (slf4j api and log4j implementation). These are in addition to your standard log4j dependency, as all they do is route to the physical logging framework. Without log4j/logback/etc. itself, they still can't route anything.
The last one however, is an interesting one, as it provides a single class in a specific package that liquibase will scan for Logger implementations. It's open source, by Matt Bertolini, so you can find it on GitHub.
If you wish to do this yourself, there's also The Hard Way:
package liquibase.ext.logging; // this is *very* important
import liquibase.changelog.ChangeSet;
import liquibase.changelog.DatabaseChangeLog;
import liquibase.logging.core.AbstractLogger;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
/**
* Liquibase finds this class by itself by doing a custom component scan (sl4fj wasn't generic enough).
*/
public class LiquibaseLogger extends AbstractLogger {
private static final Logger LOGGER = LoggerFactory.getLogger(LiquibaseLogger.class);
private String name = "";
@Override
public void setName(String name) {
this.name = name;
}
@Override
public void severe(String message) {
LOGGER.error("{} {}", name, message);
}
@Override
public void severe(String message, Throwable e) {
LOGGER.error("{} {}", name, message, e);
}
@Override
public void warning(String message) {
LOGGER.warn("{} {}", name, message);
}
@Override
public void warning(String message, Throwable e) {
LOGGER.warn("{} {}", name, message, e);
}
@Override
public void info(String message) {
LOGGER.info("{} {}", name, message);
}
@Override
public void info(String message, Throwable e) {
LOGGER.info("{} {}", name, message, e);
}
@Override
public void debug(String message) {
LOGGER.debug("{} {}", name, message);
}
@Override
public void debug(String message, Throwable e) {
LOGGER.debug("{} {}", message, e);
}
@Override
public void setLogLevel(String logLevel, String logFile) {
}
@Override
public void setChangeLog(DatabaseChangeLog databaseChangeLog) {
}
@Override
public void setChangeSet(ChangeSet changeSet) {
}
@Override
public int getPriority() {
return Integer.MAX_VALUE;
}
}
This implementation works, but should only be used as an example. For example, I'm not using Liquibase's names to require a logging, but use this Logger class itself instead. Matt's versions does some null-checks as well, so that's probably a more mature implementation to use, plus it's open source.
| Liquibase | 20,880,783 | 36 |
Following the quickstart on liquibase i've created a changeset (very dumb :) )
Code:
<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog
xmlns="http://www.liquibase.org/xml/ns/dbchangelog/1.6"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog/1.6
http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-1.6.xsd">
<changeSet id="1" author="me">
<createTable tableName="first_table">
<column name="id" type="int">
<constraints primaryKey="true" nullable="false"/>
</column>
<column name="name" type="varchar(50)">
<constraints nullable="false"/>
</column>
</createTable>
<createTable tableName="new_table">
<column name="id" type="int">
<constraints primaryKey="true" nullable="false"/>
</column>
</createTable>
</changeSet>
</databaseChangeLog>
I've created a clean schema and I've launched the migrate command.
Liquibase created the database, with the support tables databasechangelog and ..lock.
Now how can I track the changes? I've modified the changeset, adding a new createTable element, but when I try the command "update" liquibase tells me this:
Migration Failed: Validation Failed:
1 change sets check sum
so I don't think I've understood the way to work with liquibase.
Can someone point me in the right direction?
Thanks
| You should never modify a <changeSet> that was already executed. Liquibase calculates checksums for all executed changeSets and stores them in the log. It will then recalculate that checksum, compare it to the stored ones and fail the next time you run it if the checksums differ.
What you need to do instead is to add another <changeSet> and put your new createTable element in it.
QuickStart is good readin' but it is indeed quick :-) Check out the full manual, particularly its ChangeSet section.
| Liquibase | 1,148,663 | 36 |
We have an existing database in production. We have decided to use liquibase for all further updates and to create any new database (like development or integration).
We have created liquibase scripts based on the existing production schema (to create any new database like development, integration, etc). On top of that script we have also added two more updates. Going forward all further updates to production DB will be done by liquibase.
If we execute liquibase on production, it will try to apply all the changes, even those that already exist, which should not happen since production already has everything except the two new updates. Now we want to use liquibase to apply only those two changes to production.
How can we do this?
| The process to put an existing database under liquibase control is the following:
Create the initial changelog (that's what you did)
Run liquibase using the changelogSync command (liquibase changelogSync from the CLI, or mvn liquibase:changelogSync with the Maven plugin). This will create the Liquibase tables and mark all change sets as being applied (this is what you missed)
Add your change sets
Run liquibase using the command update to apply the change sets.
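If you embed Liquibase in Java instead of using the CLI or Maven, steps 2 and 4 look roughly like the sketch below; the changelog path is an assumption and the connection/Database setup is omitted (see the embedding-Liquibase answer earlier in this document for how to build the Database object):
import liquibase.Liquibase;
import liquibase.database.Database;
import liquibase.resource.ClassLoaderResourceAccessor;

public class AdoptExistingDatabase {

    // Step 2: run once against production, before adding the two new change sets
    static void markExistingChangeSetsAsApplied(Database database) throws Exception {
        new Liquibase("changelog.xml", new ClassLoaderResourceAccessor(), database)
                .changeLogSync(""); // records every change set as executed without running any SQL
    }

    // Step 4: run after the new change sets have been added to the changelog
    static void applyNewChangeSets(Database database) throws Exception {
        new Liquibase("changelog.xml", new ClassLoaderResourceAccessor(), database)
                .update(""); // only the unapplied change sets will run
    }
}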
| Liquibase | 16,455,624 | 35 |
I've looked at both Liquibase and Flyway individually and on an individual comparison alone, Liquibase seems like the better tool for our needs. Some sources mention using both Liquibase and Flyway together. Liquibase seems to have everything Flyway has and more flexibility when it comes to rollbacks. The main advantage of just Flyway seems to be not having to use XML, but Liquibase allows you to specify an SQL file in their XML.
Basically, I'm still not clear on what the benefits of using Flyway & Liquibase together would be over just Liquibase, if there are any. Maybe there's a way to do it I'm not seeing as even if Liquibase was referring to valid Flyway SQL files, both tools would have to be run independently and still have the same pitfalls even though you could technically use either tool.
| A small correction, before I answer question. The assumption
Liquibase seems to have everything Flyway has
isn't correct. Flyway shines when it comes to parsing SQL. You can use unmodified SQL files generated by your native tools containing all kinds of complexity like PL/SQL packages and procedures, MySQL delimiter changes, T-SQL, PostgreSQL procedures, ... With Liquibase you would have to split these in individual statements, add extra comments to the SQL files, ...
The beauty of being able to use your SQL files as-is is that you avoid lock-in. You can take your existing SQL files, start using Flyway with minimal investment and moved away later if it doesn't suit your needs anymore. Not so with Liquibase.
Also the issue of down migrations (think of them as compensating transactions, not rollbacks) is really something that sounds great in theory, but that is almost never needed in practice. See this old documentation page¹.
However when it comes down to using one or both, I certainly agree with SteveDonie (Liquibase team member) that using just one instead of the both together is almost always the better choice.
Disclaimer: I am Flyway's creator
¹ Though Flyway does support undo migrations nowadays, by reading the old documentation you'll understand the point Axel Fontaine is trying to make.
| Liquibase | 39,044,851 | 34 |
How can I disable checksum validation in Liquibase?
It looks like Liquibase does not provide such a feature.
| Try adding validCheckSum with the literal ANY to the top of your changeSet, like this:
<changeSet>
<validCheckSum>ANY</validCheckSum>
<!-- the rest of your changeSet here -->
</changeSet>
| Liquibase | 30,579,550 | 33 |
I have two tables declared as follows:
<changeSet author="istvan" id="country-table-changelog">
<createTable tableName="country">
<column name="id" type="uuid">
<constraints nullable="false" unique="true" />
</column>
<column name="name" type="varchar">
<constraints nullable="false" unique="true" />
</column>
</createTable>
</changeSet>
<changeSet author="istvan" id="region-table-changelog">
<createTable tableName="region">
<column name="id" type="uuid" >
<constraints nullable="false" unique="true" />
</column>
<column name="country_id" type="uuid">
<constraints nullable="false" />
</column>
<column name="name" type="varchar">
<constraints nullable="false" unique="true" />
</column>
</createTable>
</changeSet>
<changeSet author="istvan" id="region-country-foreign-key-constraint">
<addForeignKeyConstraint
baseTableName="region"
baseColumnNames="country_id"
referencedTableName="country"
referencedColumnNames="id"
constraintName="fk_region_country"
onDelete="CASCADE"
onUpdate="RESTRICT"/>
</changeSet>
I want to fill both tables from liquibase changelog file with some values like:
INSERT INTO country VALUES('aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa', 'HUNGARY');
INSERT INTO region VALUES('bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb', 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa', 'Baranya');
In the example I used aaaa's and bbbb's just for simplicity. I want those UUIDs to be generated by the DBMS.
What is the best way to do it? Do I have to use SQL in my changelog files or is it possible with XML? I prefer a DBMS-independent solution like XML or JSON.
My second question: how can I declare a UUID column so that the UUID is created on insert? Something like:
<column name="id" type="uuid" value="??? GENERATE UUID ???">
<constraints nullable="false" unique="true" />
</column>
Thank you for your time!
| You can do this by using properties that are defined depending on the current DBMS.
<property name="uuid_type" value="uuid" dbms="postgresql"/>
<property name="uuid_type" value="uniqueidentifier" dbms="mssql"/>
<property name="uuid_type" value="RAW(16)" dbms="oracle"/>
<property name="uuid_function" value="uid.uuid_generate_v4()" dbms="postgresql"/>
<property name="uuid_function" value="NEWID()" dbms="mssql"/>
<property name="uuid_function" value="sys_guid()" dbms="oracle"/>
Then use those properties when defining the table:
<column name="id" type="${uuid_type}" defaultValueComputed="${uuid_function}">
<constraints nullable="false" unique="true" />
</column>
Note that you need to use defaultValueComputed, not value
If the column is defined with a default value, just leave it out in your insert statements and the database will then generate the UUID when inserting.
| Liquibase | 42,361,350 | 30 |
I have configured the maven plugin for liquibase as specified in the maven configuration.
Now created a changeset like :-
<changeSet id="changeRollback" author="nvoxland">
<createTable tableName="changeRollback1">
<column name="id" type="int"/>
</createTable>
<rollback>
<dropTable tableName="changeRollback1"/>
</rollback>
</changeSet>
I created the SQL to update the DB using the command line:
mvn liquibase:updateSQL
But I just want to know how to rollback using the "rollbackTag" parameter.
i.e. if I run the command "mvn liquibase:rollbackSQL", what should the value of the "rollbackTag" parameter be?
And is it possible to rollback using the changeset id?
| Rollback tags are designed to checkpoint your database's configuration.
The following commands will roll the database configuration back by 3 changesets and create a tag called "checkpoint":
mvn liquibase:rollback -Dliquibase.rollbackCount=3
mvn liquibase:tag -Dliquibase.tag=checkpoint
You can now update the database, and at any stage rollback to that point using the rollback tag:
mvn liquibase:rollback -Dliquibase.rollbackTag=checkpoint
or alternatively generate the rollback SQL:
mvn liquibase:rollbackSQL -Dliquibase.rollbackTag=checkpoint
(As for rolling back by changeset id: the plugin drives rollbacks by tag, count, or date (liquibase.rollbackTag, liquibase.rollbackCount, liquibase.rollbackDate); as far as I know there is no parameter that targets a single changeset by its id.)
Revised example
I initially found it difficult to figure out how to configure the liquibase Maven plugin. Just in case it helps here's the example I've used.
The liquibase update is configured to run automatically, followed by tagging the database at the current Maven revision number.
<project>
<modelVersion>4.0.0</modelVersion>
<groupId>com.myspotontheweb.db</groupId>
<artifactId>liquibase-demo</artifactId>
<version>1.0-SNAPSHOT</version>
<properties>
<!-- Liquibase settings -->
<liquibase.url>jdbc:h2:target/db1/liquibaseTest;AUTO_SERVER=TRUE</liquibase.url>
<liquibase.driver>org.h2.Driver</liquibase.driver>
<liquibase.username>user</liquibase.username>
<liquibase.password>pass</liquibase.password>
<liquibase.changeLogFile>com/myspotontheweb/db/changelog/db-changelog-master.xml</liquibase.changeLogFile>
<liquibase.promptOnNonLocalDatabase>false</liquibase.promptOnNonLocalDatabase>
</properties>
<dependencies>
<dependency>
<groupId>com.h2database</groupId>
<artifactId>h2</artifactId>
<version>1.3.162</version>
</dependency>
</dependencies>
<profiles>
<profile>
<id>dbupdate</id>
<activation>
<activeByDefault>true</activeByDefault>
</activation>
<build>
<plugins>
<plugin>
<groupId>org.liquibase</groupId>
<artifactId>liquibase-maven-plugin</artifactId>
<version>2.0.2</version>
<executions>
<execution>
<phase>process-resources</phase>
<configuration>
<tag>${project.version}</tag>
</configuration>
<goals>
<goal>update</goal>
<goal>tag</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
</build>
</profile>
</profiles>
</project>
Liquibase is now configured as part of the standard life-cycle so can be run as follows:
mvn clean compile
| Liquibase | 11,131,978 | 28 |
I'm trying to set up the database schema and some test data with liquibase for some tests. Each test has a separate changelog which sets up the schema and some specific data for the test.
In order to make my tests work, I need to drop the schema before each test and fill it with new test data. However, this doesn't seem to be working: some tests fail because the old test data is still available. I think something in my configuration is not correct. How can I force liquibase to drop the schema before each test?
My tests look as following:
@RunWith(SpringJUnit4ClassRunner.class)
@SpringApplicationConfiguration(classes = MyTestConfig.class)
@TestPropertySource(properties = "liquibase.change-log=classpath:changelog/schema-with-testdata.xml")
public class MyRepositoryTest {
The config for the tests looks as follows:
@SpringApplicationConfiguration
@Configuration
@EnableAutoConfiguration
@ComponentScan("com.mypackage")
@EntityScan(basePackages = { "com.mypackage.domain" })
@EnableJpaRepositories(basePackages = { "com.mypackage.domain", "com.mypackage.infra.persistence" })
public class MyTestConfig {
And the application.properties under src/main/test/resources is
liquibase.drop-first=true
spring.jpa.hibernate.ddl-auto=none
| There is a spring.liquibase.dropFirst config property. Maybe this is what you're looking for?
| Liquibase | 35,997,898 | 27 |
I have two tables, as follows:
CREATE TABLE StudentMaster (
sId SERIAL,
StudentName VARCHAR(50)
);
CREATE TABLE StudentClassMap (
studnetId BIGINT UNSIGNED NOT NULL,
studentClass VARCHAR(10),
FOREIGN KEY (studnetId) REFERENCES StudentMaster (sId)
);
This is my insert query.
INSERT INTO StudentMaster (studentName) values ('Jay Parikh');
INSERT INTO StudentClassMap (studnetId, studentClass)
values ((SELECT sId from StudentMaster where studentName='Jay Parikh'),
'M.Sc. 1st Year');
I want to define ChangeSets for these queries in liquibase.
For the first query the ChangeSet will be:
<changeSet author="unknown" id="insert-example">
<insert tableName="StudentMaster ">
<column name="studentName" value="Jay Parikh"/>
</insert>
</changeSet>
But I don't know how to define the ChangeSet for the other query.
Any help? Thanks in advance.
| Use the valueComputed attribute:
<changeSet author="unknown" id="insert-example-2">
<insert tableName="StudentClassMap">
<!-- the column name matches the (misspelled) studnetId column in the table DDL above -->
<column name="studnetId" valueComputed="(SELECT sId from StudentMaster where studentName='Jay Parikh')"/>
<column name="studentClass" value="M.Sc. 1st Year"/>
</insert>
</changeSet>
| Liquibase | 22,356,313 | 26 |
I'm trying to add a lot of records (currently located in an Excel file) into my DB using Liquibase (so that I know how to do it for future DB changes)
My idea was to read the excel file using Java, and then fill the ChangeLogParameters from my Spring initialization class like this:
SpringLiquibase liqui = new SpringLiquibase();
liqui.setBeanName("liquibaseBean");
liqui.setDataSource(dataSource());
liqui.setChangeLog("classpath:changelog.xml");
HashMap<String, String> values = new HashMap<String, String>();
values.put("line1col1", ExcelValue1);
values.put("line1col2", ExcelValue2);
values.put("line1col3", ExcelValue3);
values.put("line2col1", ExcelValue4);
values.put("line2col2", ExcelValue5);
values.put("line2col3", ExcelValue6);
...
liqui.setChangeLogParameters(values);
The problem with this approach is that my changelog.xml would be very strange (and non productive)
<changeSet author="gcardoso" id="2012082707">
<insert tableName="t_user">
<column name="login" value="${ExcelValue1}"/>
<column name="name" value="${ExcelValue2}}"/>
<column name="password" value="${ExcelValue3}"/>
</insert>
<insert tableName="t_user">
<column name="login" value="${ExcelValue4}"/>
<column name="name" value="${ExcelValue5}}"/>
<column name="password" value="${ExcelValue6}"/>
</insert>
...
</changeSet>
Is there any way that I could do something like this:
HashMap<String, ArrayList<String>> values = new HashMap<String, ArrayList<String>>();
values.put("col1", Column1);
values.put("col2", Column2);
values.put("col3", Column3);
liqui.setChangeLogParameters(values);
<changeSet author="gcardoso" id="2012082707">
<insert tableName="t_user">
<column name="login" value="${Column1}"/>
<column name="name" value="${Column2}}"/>
<column name="password" value="${Column3}"/>
</insert>
</changeSet>
Or is there any other way?
EDIT :
My current option is to convert the Excel into a CSV file and import the data using
<changeSet author="gcardoso" id="InitialImport2" runOnChange="true">
<loadData tableName="T_ENTITY" file="com/exictos/dbUpdate/entity.csv">
<column header="SHORTNAME" name="SHORTNAME" />
<column header="DESCRIPTION" name="DESCRIPTION" />
</loadData>
<loadData tableName="T_CLIENT" file="com/exictos/dbUpdate/client.csv">
<column header="fdbhdf" name="ENTITYID" defaultValueComputed="(SELECT ID FROM T_ENTITY WHERE SHORTNAME = ENTITY_REFERENCE"/>
<column header="DESCRIPTION" name="DESCRIPTION" />
</loadData>
</changeSet>
with these CSV files:
entity.csv
SHORTNAME,DESCRIPTION
nome1,descricao1
nome2,descricao2
client.csv
DESCRIPTION,ENTITY_REFERENCE
descricaoCliente1,nome1
descricaoCliente2,nome2
But I get this error:
liquibase.exception.DatabaseException: Error executing SQL INSERT INTO `T_CLIENT` (`DESCRIPTION`, `ENTITY_REFERENCE`) VALUES ('descricaoCliente1', 'nome1'): Unknown column 'ENTITY_REFERENCE' in 'field list'
If I change the header of my client.csv to DESCRIPTION,ENTITYID I get this error:
liquibase.exception.DatabaseException: Error executing SQL INSERT INTO `T_CLIENT` (`DESCRIPTION`, `ENTITYID`) VALUES ('descricaoCliente1', 'nome1'): Incorrect integer value: 'nome1' for column 'entityid' at row 1
I any of these cases, it looks like defaultValueComputed doesn't work in the same way as valueComputed in the following example
<changeSet author="gcardoso" id="InitialImport1">
<insert tableName="T_ENTITY">
<column name="SHORTNAME">nome1</column>
<column name="DESCRIPTION">descricao1</column>
</insert>
<insert tableName="T_CLIENT">
<column name="ENTITYID" valueComputed="(SELECT ID FROM T_ENTITY WHERE SHORTNAME = 'nome1')"/>
<column name="DESCRIPTION">descricaoCliente</column>
</insert>
</changeSet>
Is this the expected behavior? Bug of LiquiBase? Or just me doing something wrong (the most likely) ?
Or is there any other way to import massive amount of data? But always using LiquiBase and/or Spring.
EDIT2 : My problem is that I'm unable to insert the data into the second table with the correct foreign key
| I would say that Liquibase is not the ideal tool for what you want to achieve. Liquibase is well-suited to managing the database structure, not the database's data.
If you still want to use Liquibase to manage the data, you have a couple of options (see here) -
Record your insert statements as SQL, and refer to them from changelog.xml like this:
<sqlFile path="/path/to/file.sql"/>
Use a Custom Refactoring Class which you refer to from the changelog.xml like this:
<customChange class="com.example.YourJavaClass"
csvFile="/path/to/file.csv"/>
YourJavaClass would read the records from the CSV file, and apply them to the database, implementing this method:
void execute(Database database) throws CustomChangeException;
Bear in mind, that once you have loaded this data via Liquibase, you shouldn't modify the data in the file, because those changes won't be re-applied. If you want to make changes to it, you would have to do it in subsequent changesets. So after a while you might end up with a lot of different CSV files/liquibase changesets, all operating on the same/similar data (this depends on how you are going to use this data - will it ever change once inserted?).
I would recommend looking at using DBUnit for managing your reference data. Its a tool primarily used in unit testing, but it is very mature, suitable for use in production I would say. You can store information in CSV or XML. I would suggest using a Spring 'InitializingBean' to load the dataset from the classpath and perform a DBUnit 'refresh' operation, which will, from the docs:
This operation literally refreshes dataset contents into the database. This
means that data of existing rows is updated and non-existing row get
inserted. Any rows which exist in the database but not in dataset stay
unaffected.
This way, you can keep your reference data in one place, and add to it over time so that there is only one source of the information, and it isn't split across multiple Liquibase changesets. Keeping your DBUnit datasets in version control would provide trace-ability, and as a bonus, DBUnit datasets are portable across databases, and can manage things like insert order to prevent foreign key violations for you.
| Liquibase | 12,143,994 | 26 |
I am having problems changing a column length in my postgres db with liquibase.
I have a table account with a field description varchar(300). I want to change it to varchar(2000).
I have dropped and recreated the primary key in the same file so I don't have permissions issues or schema / db names or anything like this. For the sake of testing I have cleared the table of data.
I am running
<changeSet author="liquibase" id="sample">
<modifyDataType
columnName="description"
newDataType="varchar(2000)"
schemaName="accountschema"
tableName="account"/>
</changeSet>
I'm getting this error text but I can't understand the issue. The only constraint the column had was a not null constraint and I successfully added a separate changelog to remove this constraint (ignoring the fact I don't see why this would affect extending the length of the field).
Can anyone point to what I am doing wrong?
FAILURE: Build failed with an exception.
What went wrong:
Execution failed for task ':db-management:update'.
liquibase.exception.LiquibaseException: Unexpected error running Liquibase: Error parsing line 37 column 38 of src/main/changelog/db.changelog-accountdb-1.1.xml: cvc-complex-type.2.4.a: Invalid content was found starting with element
'modifyDataType'. One of '{"http://www.liquibase.org/xml/ns/dbchangelog/1.9":validCheckSum, "http://www.liquibase.org/xml/ns/dbchangelog/1.9":preConditions, "http://www.liquibase.org/xml/ns/dbchangelog/1.9":tagDatabase, "http://www.liquibase.org/xml/ns/dbchangelog/1.9":comment, "http://www.liquibase.org/xml/ns/dbchangelog/1.9":createTable, "http://www.liquibase.org/xml/ns/dbchangelog/1.9":dropTable, "http://www.liquibase.org/xml/ns/dbchangelog/1.9":createView, "http://www.liquibase.org/xml/ns/dbchangelog/1.9":renameView, "http://www.liquibase.org/xml/ns/dbchangelog/1.9":dropView, "http://www.liquibase.org/xml/ns/dbchangelog/1.9":insert, "http://www.liquibase.org/xml/ns/dbchangelog/1.9":addColumn, "http://www.liquibase.org/xml/ns/dbchangelog/1.9":sql, "http://www.liquibase.org/xml/ns/dbchangelog/1.9":createProcedure, "http://www.liquibase.org/xml/ns/dbchangelog/1.9":sqlFile, "http://www.liquibase.org/xml/ns/dbchangelog/1.9":renameTable, "http://www.liquibase.org/xml/ns/dbchangelog/1.9":renameColumn, "http://www.liquibase.org/xml/ns/dbchangelog/1.9":dropColumn, "http://www.liquibase.org/xml/ns/dbchangelog/1.9":modifyColumn, "http://www.liquibase.org/xml/ns/dbchangelog/1.9":mergeColumns, "http://www.liquibase.org/xml/ns/dbchangelog/1.9":createSequence, "http://www.liquibase.org/xml/ns/dbchangelog/1.9":alterSequence, "http://www.liquibase.org/xml/ns/dbchangelog/1.9":dropSequence, "http://www.liquibase.org/xml/ns/dbchangelog/1.9":createIndex, "http://www.liquibase.org/xml/ns/dbchangelog/1.9":dropIndex, "http://www.liquibase.org/xml/ns/dbchangelog/1.9":addNotNullConstraint, "http://www.liquibase.org/xml/ns/dbchangelog/1.9":dropNotNullConstraint, "http://www.liquibase.org/xml/ns/dbchangelog/1.9":addForeignKeyConstraint, "http://www.liquibase.org/xml/ns/dbchangelog/1.9":dropForeignKeyConstraint, "http://www.liquibase.org/xml/ns/dbchangelog/1.9":dropAllForeignKeyConstraints, "http://www.liquibase.org/xml/ns/dbchangelog/1.9":addPrimaryKey, "http://www.liquibase.org/xml/ns/dbchangelog/1.9":dropPrimaryKey, "http://www.liquibase.org/xml/ns/dbchangelog/1.9":addLookupTable, "http://www.liquibase.org/xml/ns/dbchangelog/1.9":addAutoIncrement, "http://www.liquibase.org/xml/ns/dbchangelog/1.9":addDefaultValue, "http://www.liquibase.org/xml/ns/dbchangelog/1.9":dropDefaultValue, "http://www.liquibase.org/xml/ns/dbchangelog/1.9":addUniqueConstraint, "http://www.liquibase.org/xml/ns/dbchangelog/1.9":dropUniqueConstraint, "http://www.liquibase.org/xml/ns/dbchangelog/1.9":customChange, "http://www.liquibase.org/xml/ns/dbchangelog/1.9":update, "http://www.liquibase.org/xml/ns/dbchangelog/1.9":delete, "http://www.liquibase.org/xml/ns/dbchangelog/1.9":loadData, "http://www.liquibase.org/xml/ns/dbchangelog/1.9":executeCommand, "http://www.liquibase.org/xml/ns/dbchangelog/1.9":stop, "http://www.liquibase.org/xml/ns/dbchangelog/1.9":rollback, "http://www.liquibase.org/xml/ns/dbchangelog/1.9":modifySql}' is expected.
| You can increase the size of your column like this:
<changeSet author="liquibase" id="sample">
<modifyDataType
columnName="description"
newDataType="varchar(2000)"
tableName="account"/>
</changeSet>
| Liquibase | 37,319,193 | 25 |
I've used in-mem databases in Spring JPA tests many times, and never had a problem. This time, I have a bit more complex schema to initialize, and that schema must have a custom name (some of the entities in our domain model are tied to a specific catalog name.) So, for that reason, as well as to ensure that the tests are fully in sync and consistent with the way we initialize and maintain our schemas, I am trying to initialize an in-memory H2 database using Liquibase before my Spring Data JPA repository unit tests are executed.
(Note: we use Spring Boot 2.1.3.RELEASE and MySql as our main database, and H2 is only used for tests.)
I have been following the Spring Reference guide for setting up Liquibase executions on startup. I have the following entries in my Maven POM:
<dependency>
<groupId>com.h2database</groupId>
<artifactId>h2</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.liquibase</groupId>
<artifactId>liquibase-core</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-test-autoconfigure</artifactId>
<scope>test</scope>
</dependency>
My test files look like this:
@RunWith(SpringRunner.class)
@ContextConfiguration(classes = PersistenceTestConfig.class)
@DataJpaTest
public class MyRepositoryTest {
@Autowired
private MyRepository myRepository;
@Test
public void someDataAccessTest() {
// myRepository method invocation and asserts here...
// ...
}
}
The app context class:
@EnableJpaRepositories({"com.mycompany.myproject"})
@EntityScan({"com.mycompany.myproject"})
public class PersistenceTestConfig {
public static void main(String... args) {
SpringApplication.run(PersistenceTestConfig.class, args);
}
}
According to the reference guide,
By default, Liquibase autowires the (@Primary) DataSource in your context and uses that for migrations. If you need to use a different DataSource, you can create one and mark its @Bean as @LiquibaseDataSource. If you do so and you want two data sources, remember to create another one and mark it as @Primary. Alternatively, you can use Liquibase's native DataSource by setting spring.liquibase.[url,user,password] in external properties. Setting either spring.liquibase.url or spring.liquibase.user is sufficient to cause Liquibase to use its own DataSource. If any of the three properties has not been set, the value of its equivalent spring.datasource property will be used.
Obviously, I want my tests to use the same datasource instance as the one Liquibase uses to initialize the database. So, at first, I've tried to specify the spring.datasource properties without providing the spring.liquibase.[url, user, password] properties - assuming that Liquibase would then use the default primary Spring datasource:
spring.datasource.url=jdbc:h2:mem:testdb;DB_CLOSE_DELAY=-1;INIT=CREATE SCHEMA IF NOT EXISTS corp
spring.datasource.username=sa
spring.datasource.password=
spring.jpa.hibernate.ddl-auto=validate
# LIQUIBASE (LiquibaseProperties)
spring.liquibase.change-log=classpath:db.changelog.xml
#spring.liquibase.url=jdbc:h2:mem:testdb;DB_CLOSE_DELAY=-1;INIT=CREATE SCHEMA IF NOT EXISTS corp
#spring.liquibase.user=sa
#spring.liquibase.password=
spring.liquibase.default-schema=CORP
spring.liquibase.drop-first=true
That didn't work because Liquibase did not find the CORP schema where I must have my tables created:
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'liquibase' defined in class path resource [org/springframework/boot/autoconfigure/liquibase/LiquibaseAutoConfiguratio n$LiquibaseConfiguration.class]: Invocation of init method failed; nested exception is liquibase.exception.DatabaseException: liquibase.command.CommandExecutionException: liquibase.exception.DatabaseException: liquibase.exception.LockException: liquibase.exception.DatabaseException: Schema "CORP" not found; SQL statement:
CREATE TABLE CORP.DATABASECHANGELOGLOCK (ID INT NOT NULL, LOCKED BOOLEAN NOT NULL, LOCKGRANTED TIMESTAMP, LOCKEDBY VARCHAR(255), CONSTRAINT PK_DATABASECHANGELOGLOCK PRIMARY KEY (ID)) [90079-197] [Failed SQL: CREATE TABLE CORP.DATABASECHANGELOGLOCK (ID INT NOT NULL, LOCKED BOOLEAN NOT NULL, LOCKGRANTED TIMESTAMP, LOCKEDBY VARCHAR(255), CONSTRAINT PK_DATABASECHANGELOGLOCK PRIMARY KEY (ID))]
So, I took out the explicit spring.datasource property definitions and provided only the following Liquibase properties:
spring.jpa.hibernate.ddl-auto=validate
# LIQUIBASE (LiquibaseProperties)
spring.liquibase.change-log=classpath:db.changelog.xml
spring.liquibase.url=jdbc:h2:mem:testdb;DB_CLOSE_DELAY=-1;INIT=CREATE SCHEMA IF NOT EXISTS corp
spring.liquibase.user=sa
spring.liquibase.password=
spring.liquibase.default-schema=CORP
spring.liquibase.drop-first=true
That resulted in the Liquibase task executing successfully and seemingly loading all the necessary tables and data into its native datasource at startup - using the provided changelog files. I understand that this happens because I have explicitly set the Liquibase DS properties, and, per Spring documentation, that would cause Liquibase to use its own native datasource. I suppose, for that reason, while the Liquibase job now runs successfully, the tests are still attempting to use a different [Spring default?] datasource, and the database schema fails the pre-test validation. (No "corp" schema found, no tables.) So, it is obvious that the tests use a different datasource instance from the one that I am trying to generate using Liquibase.
How do I make the tests use what Liquibase generates?
Nothing I try seems to work. I suspect that there is some kind of conflict between the auto- and explicit configurations that I am using. Is @DataJpaTest a good approach in this case? I do want to limit my app context configuration to strictly JPA testing; I don't need anything else for these tests.
It should be simple... However I have not been able to find the correct way, and I can't find any documentation that would clearly explain how to solve this.
Any help is much appreciated!
| The problem lies in the @DataJpaTest annotation you are using.
See the Documentation of @DataJpaTest
By default, tests annotated with @DataJpaTest will use an embedded in-memory database (replacing any explicit or usually auto-configured DataSource). The @AutoConfigureTestDatabase annotation can be used to override these settings.
That means that your auto-configured data source is overridden, and the URL spring.datasource.url=jdbc:h2:mem:testdb;DB_CLOSE_DELAY=-1;INIT=CREATE SCHEMA IF NOT EXISTS corp is not taken into account
You will find something similar in the log
EmbeddedDataSourceBeanFactoryPostProcessor : Replacing 'dataSource' DataSource bean with embedded version
To fix, use:
spring.test.database.replace=none
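Alternatively, the same effect can be expressed directly on the test class. A minimal sketch (the annotation and its Replace enum come from spring-boot-test-autoconfigure; the class name is just the one from the question):
@RunWith(SpringRunner.class)
@ContextConfiguration(classes = PersistenceTestConfig.class)
@DataJpaTest
// keep the application-defined DataSource instead of an embedded replacement,
// so the tests see the schema that Liquibase created
@AutoConfigureTestDatabase(replace = AutoConfigureTestDatabase.Replace.NONE)
public class MyRepositoryTest {
    // ... repository tests as before ...
}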
| Liquibase | 57,153,091 | 25 |
I want to update the type of a column named "password". At the moment it has type NVARCHAR(40) and I want it to be of type NVARCHAR(64). This is what I did:
<changeSet id="1 - change password length" author="chris311">
<update tableName="tablename">
<column name="password" type="NVARCHAR(64)"/>
</update>
</changeSet>
What else is there to do? Because this obviously does not change anything in the DB.
| You're using the wrong refactoring operation. Try modifyDataType
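For the column in the question, a minimal sketch of that changeset could look like this (the id and author values are placeholders):
<changeSet id="2 - change password length" author="chris311">
    <modifyDataType
        tableName="tablename"
        columnName="password"
        newDataType="NVARCHAR(64)"/>
</changeSet>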
| Liquibase | 18,765,121 | 25 |
I read liquibase's best practices, specifically for managing stored procedures:
Managing Stored Procedures: Try to maintain separate changelog for Stored Procedures and use runOnChange="true". This flag forces LiquiBase to check if the changeset was modified. If so, liquibase executes the change again.
What do they mean by "maintain separate changelog for stored procedures"?
I typically have a directory of changelogs that are linked to releases. Each changelog file is included in the master.xml.
What would the directory structure be when following their advice?
| What we do is something like this:
\---liquibase
| changelog.xml
| procedures.xml
|
+---procedures
procedure_one.sql
procedure_two.sql
changelog.xml simply includes procedures.xml. Inside procedures.xml we then have something like this:
<changeSet author="arthur" id="1" runOnChange="true" runInTransaction="true">
<sqlFile path="procedures/procedure_one.sql"
encoding="UTF-8"
relativeToChangelogFile="true"
endDelimiter=";"
splitStatements="true"/>
</changeSet>
<changeSet author="arthur" id="2" runOnChange="true" runInTransaction="true">
<sqlFile path="procedures/procedure_two.sql"
encoding="UTF-8"
relativeToChangelogFile="true"
endDelimiter=";"
splitStatements="true"/>
</changeSet>
Of course runInTransaction="true" only makes sense if your DBMS supports transactional DDL.
Each SQL script for the procedures is self-contained and re-creates the procedure using create or replace. For DBMSs that do not support create or replace we usually do a (conditional) drop procedure; create procedure ... in there.
By explicitly including the files (instead of using includeAll) we have control over the order in which the procedures and functions are created (important if one uses another).
If you add a new procedure, you add a new SQL script and a new changeSet to the procedures.xml
| Liquibase | 39,989,749 | 24 |
I need to set up Liquibase for two datasources in Spring. At the moment it seems that only one Liquibase setup is possible, and you can choose which data source it applies to.
| If you are using spring boot, here is the setup which can help you:
Configuration class:
@Configuration
public class DatasourceConfig {
@Primary
@Bean
@ConfigurationProperties(prefix = "datasource.primary")
public DataSource primaryDataSource() {
return DataSourceBuilder.create().build();
}
@Bean
@ConfigurationProperties(prefix = "datasource.primary.liquibase")
public LiquibaseProperties primaryLiquibaseProperties() {
return new LiquibaseProperties();
}
@Bean
public SpringLiquibase primaryLiquibase() {
return springLiquibase(primaryDataSource(), primaryLiquibaseProperties());
}
@Bean
@ConfigurationProperties(prefix = "datasource.secondary")
public DataSource secondaryDataSource() {
return DataSourceBuilder.create().build();
}
@Bean
@ConfigurationProperties(prefix = "datasource.secondary.liquibase")
public LiquibaseProperties secondaryLiquibaseProperties() {
return new LiquibaseProperties();
}
@Bean
public SpringLiquibase secondaryLiquibase() {
return springLiquibase(secondaryDataSource(), secondaryLiquibaseProperties());
}
private static SpringLiquibase springLiquibase(DataSource dataSource, LiquibaseProperties properties) {
SpringLiquibase liquibase = new SpringLiquibase();
liquibase.setDataSource(dataSource);
liquibase.setChangeLog(properties.getChangeLog());
liquibase.setContexts(properties.getContexts());
liquibase.setDefaultSchema(properties.getDefaultSchema());
liquibase.setDropFirst(properties.isDropFirst());
liquibase.setShouldRun(properties.isEnabled());
liquibase.setLabels(properties.getLabels());
liquibase.setChangeLogParameters(properties.getParameters());
liquibase.setRollbackFile(properties.getRollbackFile());
return liquibase;
}
...
}
properties.yml
datasource:
primary:
url: jdbc:mysql://localhost/primary
username: username
password: password
liquibase:
change-log: classpath:/db/changelog/db.primary.changelog-master.xml
secondary:
url: jdbc:mysql://localhost/secondary
username: username
password: password
liquibase:
change-log: classpath:/db/changelog/db.secondary.changelog-master.xml
| Liquibase | 43,523,971 | 24 |
In Liquibase, I define a table with a column of type BIT(1)
<changeSet author="foobar" id="create-configuration-table">
<createTable tableName="configuration">
<column autoIncrement="true" name="id" type="BIGINT(19)">
<constraints primaryKey="true" />
</column>
<column name="active" type="BIT(1)" />
<column name="version" type="INT(10)" />
</createTable>
</changeSet>
In the subsequent changeset, I want to insert data into this table; however, when inserting data into the 'active' column of type BIT(1), MySQL complains 'Data truncation: Data too long for column'.
I have tried:
<insert>
<column name="active" value="1" type="BIT(1)" />
</insert>
and
<insert>
<column name="active" value="1"/>
</insert>
and
<insert>
<column name="active" value="TRUE" type="BOOLEAN"/>
</insert>
What is the correct way to insert into a BIT(1) column?
| Answering my own question as I figured this out right after I posted it. To insert into a BIT(1) column, you need to define the value as valueBoolean
<insert>
<column name="active" valueBoolean="true"/>
</insert>
| Liquibase | 31,252,711 | 24 |
Background: we have a Grails 1.3.7 app and are using Liquibase to manage our database migrations.
I am trying to add a new column to an existing table which is not empty.
My changeset looks like this:
changeSet(author: "someCoolGuy (generated)", id: "1326842592275-1") {
addColumn(tableName: "layer") {
column(name: "abstract_trimmed", type: "VARCHAR(455)", value: "No text") {
constraints(nullable: "false")
}
}
}
Which should have inserted the value 'No text' into every existing row, and therefore satisfied the not null constraint. Liquibase "Add Column" docs.
But when the migration changesets are being applied I get the following exception:
liquibase.exception.DatabaseException: Error executing SQL ALTER TABLE layer ADD abstract_trimmed VARCHAR(455) NOT NULL: ERROR: column "abstract_trimmed" contains null values
Which looks to me like it is not using the 'value' attribute.
If I change my changeset to look like the following I can achieve the same thing. But I don't want to (and shouldn't have to) do this.
changeSet(author: "someCoolGuy (generated)", id: "1326842592275-1") {
addColumn(tableName: "layer") {
column(name: "abstract_trimmed", type: "VARCHAR(455)")
}
addNotNullConstraint(tableName: "layer", columnName:"abstract_trimmed", defaultNullValue: "No text")
}
Is Liquibase really ignoring my value attribute, or is there something else going on here that I can't see?
I am using Grails 1.3.7, Database-migration plugin 1.0, Postgres 9.0
| Short answer
The "value" attribute will not work if you are adding a not-null constraint at the time of the column creation (this is not mentioned in the documentation). The SQL generated will not be able to execute.
Workaround
The workaround described in the question is the way to go. The resulting SQL will be:
Add the column
ALTER TABLE layer ADD COLUMN abstract_trimmed varchar(455);
Set it to a non-null value for every row
UPDATE layer SET abstract_trimmed = 'No text';
Add the NOT NULL constraint
ALTER TABLE layer ALTER COLUMN abstract_trimmed SET NOT NULL;
Why?
A column default is only inserted into the column with an INSERT. The "value" tag will do that for you, but after the column is added. Liquibase tries to add the column in one step, with the NOT NULL constraint in place:
ALTER TABLE layer ADD abstract_trimmed VARCHAR(455) NOT NULL;
... which is not possible when the table already contains rows. It just isn't smart enough.
Alternative solution
Since PostgreSQL 8.0 (so almost forever by now) an alternative would be to add the new column with a non-null DEFAULT:
ALTER TABLE layer
ADD COLUMN abstract_trimmed varchar(455) NOT NULL DEFAULT 'No text';
The manual:
When a column is added with ADD COLUMN and a non-volatile DEFAULT is
specified, the default is evaluated at the time of the statement and
the result stored in the table's metadata. That value will be used for
the column for all existing rows. If no DEFAULT is specified, NULL is
used. In neither case is a rewrite of the table required.
Adding a column with a volatile DEFAULT or changing the type of an
existing column will require the entire table and its indexes to be
rewritten. As an exception, when changing the type of an existing
column, if the USING clause does not change the column contents and
the old type is either binary coercible to the new type or an
unconstrained domain over the new type, a table rewrite is not needed;
but any indexes on the affected columns must still be rebuilt. Table
and/or index rebuilds may take a significant amount of time for a
large table; and will temporarily require as much as double the disk space.
| Liquibase | 8,904,316 | 24 |
Can I automatically convert Liquibase changelog files in the XML format to the YAML format?
| There is nothing built in, but you can easily do it with a little scripting.
Some starting points:
liquibase.parser.ChangeLogParserFactory.getInstance().getParser(".xml", resourceAccessor).parse(...) will return a DatabaseChangeLog object representing the changelog file.
liquibase.serializer.ChangeLogSerializerFactory.getInstance().getSerializer(".yaml").write(...) will write the changeSets in the DatabaseChangeLog object out to a file in YAML format
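Putting those two starting points together, a rough sketch of such a conversion script might look like the following (the file names are placeholders, and exact signatures, e.g. the FileSystemResourceAccessor constructor, vary between Liquibase versions):
import java.io.FileOutputStream;
import liquibase.changelog.ChangeLogParameters;
import liquibase.changelog.DatabaseChangeLog;
import liquibase.parser.ChangeLogParserFactory;
import liquibase.resource.FileSystemResourceAccessor;
import liquibase.serializer.ChangeLogSerializerFactory;

public class XmlToYamlChangelog {
    public static void main(String[] args) throws Exception {
        FileSystemResourceAccessor accessor = new FileSystemResourceAccessor(".");
        // parse the existing XML changelog into a DatabaseChangeLog object
        DatabaseChangeLog changeLog = ChangeLogParserFactory.getInstance()
                .getParser(".xml", accessor)
                .parse("changelog.xml", new ChangeLogParameters(), accessor);
        // serialize its changeSets back out in YAML format
        try (FileOutputStream out = new FileOutputStream("changelog.yaml")) {
            ChangeLogSerializerFactory.getInstance()
                    .getSerializer(".yaml")
                    .write(changeLog.getChangeSets(), out);
        }
    }
}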
| Liquibase | 22,968,572 | 23 |
I have a standalone application. It's built on Java, Spring Boot, Postgres, and it has Liquibase.
I need to deploy my app and Liquibase should create all tables, etc. But it should do it in a custom schema, not in public. All service tables of Liquibase (databasechangelog and databasechangeloglock) should be in the custom schema too. How can I create my schema in the DB before Liquibase starts to work? I must do it inside my app when it's deploying, in config or something similar, without any manual intervention in the DB.
application.properties:
spring.datasource.jndi-name=java:/PostgresDS
spring.jpa.properties.hibernate.default_schema=my_schema
spring.jpa.show-sql = false
spring.jpa.properties.hibernate.dialect = org.hibernate.dialect.PostgreSQLDialect
spring.datasource.continue-on-error=true
spring.datasource.sql-script-encoding=UTF-8
liquibase.change-log = classpath:liquibase/changelog-master.yaml
liquibase.default-schema = my_schema
UPD:
When Liquibase starts, it creates its two service tables (databasechangelog and one more table). After that, Liquibase starts working. But I want Liquibase to use liquibase.default-schema = my_schema, which does not exist when Liquibase starts to work, and I get an error: exception is liquibase.exception.LockException: liquibase.exception.DatabaseException: ERROR: schema "my_schema" does not exist
I want Liquibase to work in the custom schema, not in public:
liquibase.default-schema = my_schema
but before Liquibase can do that, the schema must be created. Liquibase can't do this because it has not started yet, and to start it needs the schema.
Vicious circle.
To solve this, we need to run a SQL statement that creates the schema during Spring Boot initialization, at the point when the DataSource bean has already been initialized (so DB connections can easily be obtained) but before Liquibase runs.
By default, Spring Boot runs Liquibase by creating an InitializingBean named SpringLiquibase. This happens in LiquibaseAutoConfiguration.
Knowing this, we can use AbstractDependsOnBeanFactoryPostProcessor to configure SpringLiquibase to depend on our custom schema creating bean (SchemaInitBean in the example below) which depends on DataSource. This arranges the correct execution order.
My application.properties:
db.schema=my_schema
spring.jpa.hibernate.ddl-auto=validate
spring.jpa.open-in-view=false
spring.jpa.properties.hibernate.jdbc.lob.non_contextual_creation=true
spring.jpa.properties.hibernate.default_schema=${db.schema}
spring.datasource.url=jdbc:postgresql://localhost:5432/postgres
spring.datasource.username=postgres
spring.datasource.password=postgres
spring.liquibase.enabled=true
spring.liquibase.change-log=classpath:db/changelog/db.changelog-master.xml
spring.liquibase.defaultSchema=${db.schema}
Add the @Configuration class below to the project, for example put it in a package processed by component scan.
@Slf4j
@Configuration
@ConditionalOnClass({ SpringLiquibase.class, DatabaseChange.class })
@ConditionalOnProperty(prefix = "spring.liquibase", name = "enabled", matchIfMissing = true)
@AutoConfigureAfter({ DataSourceAutoConfiguration.class, HibernateJpaAutoConfiguration.class })
@Import({SchemaInit.SpringLiquibaseDependsOnPostProcessor.class})
public class SchemaInit {
@Component
@ConditionalOnProperty(prefix = "spring.liquibase", name = "enabled", matchIfMissing = true)
public static class SchemaInitBean implements InitializingBean {
private final DataSource dataSource;
private final String schemaName;
@Autowired
public SchemaInitBean(DataSource dataSource, @Value("${db.schema}") String schemaName) {
this.dataSource = dataSource;
this.schemaName = schemaName;
}
@Override
public void afterPropertiesSet() {
try (Connection conn = dataSource.getConnection();
Statement statement = conn.createStatement()) {
log.info("Going to create DB schema '{}' if not exists.", schemaName);
statement.execute("create schema if not exists " + schemaName);
} catch (SQLException e) {
throw new RuntimeException("Failed to create schema '" + schemaName + "'", e);
}
}
}
@ConditionalOnBean(SchemaInitBean.class)
static class SpringLiquibaseDependsOnPostProcessor extends AbstractDependsOnBeanFactoryPostProcessor {
SpringLiquibaseDependsOnPostProcessor() {
// Configure the 3rd party SpringLiquibase bean to depend on our SchemaInitBean
super(SpringLiquibase.class, SchemaInitBean.class);
}
}
}
This solution does not require external libraries like Spring Boot Pre-Liquibase and is not affected by limitations on data.sql / schema.sql support. My main motivation for finding this solution was a requirement I had that the schema name must be a configurable property.
Putting everything in one class and using plain JDBC is for brevity.
| Liquibase | 52,517,529 | 23 |
I'm using Liquibase 3.3.5 to update my database. Having contexts is a nice way to only execute specific parts of the changelog. But I don't understand why ALL changesets are executed when no context is provided on update. Consider the following example:
changeset A: context=test
changeset B: no context
changeset C: context=prod
So
executing update with context=test will execute changesets A and B.
executing update with context=prod will execute changesets B and C.
executing update with no context will execute changesets A, B, and C.
For me, this doesn't make sense at all :).
I would expect that only changeset B will be executed, since it doesn't define a specific context.
In the Liquibase contexts example: http://www.liquibase.org/documentation/contexts.html ("Using Contexts for Test Data") they say that one should mark the changesets for testing with "test" and execute them with the context "test" to apply test data. Fine - makes sense. But
"When it comes time to migrate your production database, don't include the "test" context, and your test data will not be included."
So, if I didn't specify the "test" context when executing a production update, it would execute the "test" changesets as well, since I didn't specify a context at all.
Again, I would expect that leaving out test on update execution would only perform the regular changesets, without the test changesets.
Or am I missing something here? :)
| This is just how Liquibase works - if you do an update and don't specify a context, then all changesets are considered as applicable to that update operation.
There were a couple of ways that this could have been implemented, and the development team had to pick one.
if you don't specify a context during an update operation, then no changesets are considered.
if you don't specify a context, then all changesets are considered.
if you don't specify a context, then only changesets that have no context are considered.
if you don't specify a context and none of the changesets have contexts on them, then all changesets are considered, but if some of the changesets do have contexts, go to option 1, 2, or 3 above.
The team could have gone with option 3 (which matches your expectation) but decided long ago to go with option 2, as that seemed like the 'best' way at the time. I wasn't on the team at that time, so I don't know any more than that.
| Liquibase | 30,783,353 | 23 |
We are supporting several microservices written in Java using Spring Boot and deployed in OpenShift. Some microservices communicate with databases. We often run a single microservice in multiple pods in a single deployment. When each microservice starts, it starts liquibase, which tries to update the database. The problem is that sometimes one pod fails while waiting for the changelog lock.
When this happens in our production OpenShift cluster, we expect other pods to fail while restarting because of the same changelog lock issue. So, in the worst-case scenario, all pods will wait for the lock to be lifted.
We want Liquibase to automatically prepare our database schemas when each pod starts.
Is it good to store this logic in every microservice? How can we automatically solve the problem when the Liquibase changelog lock problem appears? Do we need to put the database preparation logic in a separate deployment?
So maybe I should paraphrase my question. What is the best way to run DB migrations in terms of microservice architecture? Maybe we should not run DB migrations in each pod? Maybe it is better to do it with a separate deployment, or with some extra Jenkins job outside of OpenShift altogether?
| We're running liquibase migrations as an init-container in Kubernetes. The problem with running Liquibase in micro-services is that Kubernetes will terminate the pod if the readiness probe is not successful before the configured timeout. In our case this happened sometimes during large DB migrations, which could take a few minutes to complete. Kubernetes will terminate the pod, leaving DATABASECHANGELOGLOCK in a locked state. With init-containers you will not have this problem. See https://www.liquibase.org/blog/using-liquibase-in-kubernetes for a detailed explanation.
UPDATE
Please take a look at this Liquibase extension, which replaces the StandardLockService, by using database locks: https://github.com/blagerweij/liquibase-sessionlock
This extension uses MySQL or Postgres user lock statements, which are automatically released when the database connection is closed (e.g. when the container is stopped unexpectedly). The only thing required to use the extension is to add a dependency to the library. Liquibase will automatically detect the improved LockService.
I'm not the author of the library, but I stumbled upon the library when I was searching for a solution. I helped the author by releasing the library to Maven central. Currently supports MySQL and PostgreSQL, but should be fairly easy to support other RDBMS.
| Liquibase | 61,387,510 | 22 |
What is the correct syntax to alter the table and add multiple columns at a time using Liquibase XML? The official documentation gives an example for adding only one column:
<changeSet author="liquibase-docs" id="addColumn-example">
<addColumn catalogName="cat"
schemaName="public"
tableName="person">
<column name="address" type="varchar(255)"/>
</addColumn>
</changeSet>
Now if I want to add multiple columns at a time, what is the correct syntax:
<changeSet author="liquibase-docs" id="addColumn-example">
<addColumn catalogName="cat"
schemaName="public"
tableName="person">
<column name="job" type="varchar(255)"/>
</addColumn>
<addColumn catalogName="cat"
schemaName="public"
tableName="person">
<column name="designation" type="varchar(255)"/>
</addColumn>
</changeSet>
Is it correct or
<changeSet author="liquibase-docs" id="addColumn-example">
<addColumn catalogName="cat"
schemaName="public"
tableName="person">
<column name="job" type="varchar(255)"/>
<column name="designation" type="varchar(255)"/>
</addColumn>
</changeSet>
Which of the two above is correct? Or is it something different altogether?
| Both of those examples will work.
| Liquibase | 33,022,737 | 22 |
I want to use Liquibase, but when I run it from the command line this happens:
PS C:\Users\Ferid\Downloads\liquibase-3.6.0-bin> .\liquibase
Error: A JNI error has occurred, please check your installation and try again
Exception in thread "main" java.lang.NoClassDefFoundError: ch/qos/logback/core/filter/Filter
at java.lang.Class.getDeclaredMethods0(Native Method)
at java.lang.Class.privateGetDeclaredMethods(Unknown Source)
at java.lang.Class.privateGetMethodRecursive(Unknown Source)
at java.lang.Class.getMethod0(Unknown Source)
at java.lang.Class.getMethod(Unknown Source)
at sun.launcher.LauncherHelper.validateMainClass(Unknown Source)
at sun.launcher.LauncherHelper.checkAndLoadMain(Unknown Source)
Caused by: java.lang.ClassNotFoundException: ch.qos.logback.core.filter.Filter
at java.net.URLClassLoader.findClass(Unknown Source)
at java.lang.ClassLoader.loadClass(Unknown Source)
at sun.misc.Launcher$AppClassLoader.loadClass(Unknown Source)
at java.lang.ClassLoader.loadClass(Unknown Source)
... 7 more
I have tried liquibase-3.6.1 and now liquibase-3.6.0
| One of the required libraries is missing from the library folder.
See the bug report link below where another user had the same issue.
It appears 3.6.1 is still missing slf4j-api-1.7.25 in the lib folder
and I still receive an error invoking liquibase via cli.
You have three options:
Get the library yourself [here].
Wait for the patched version (maybe submit a fix yourself).
Revert to an older version (3.5.5 should work).
See here for the bug report:
https://liquibase.jira.com/browse/CORE-3201
| Liquibase | 50,487,054 | 21 |
I want mock data for integration tests via a Liquibase changeset; how do I make it not affect the real database? I found a partial idea here, but I am using Spring Boot and I hope there is a simpler solution.
| You can use Liquibase's context parameter. For example, create a changeset which has its inserts loaded from an SQL file and specify the context for it. Something like this:
<changeSet id="test_data_inserts" author="me" context="test">
<sqlFile path="test_data.sql" relativeToChangelogFile="true" />
</changeSet>
and in Spring Boot's application.properties for tests, specify the property liquibase.contexts=test.
| Liquibase | 47,036,222 | 21 |
I'm using Postgres DB and for migration I'm using Liquibase.
I have an ORDERS table with the following columns:
ID | DATE | NAME | CREATOR | ...
I need to add a new column which will hold the user who last modified the order - this column should be not-nullable and should have a default value, which is the CREATOR.
For new orders I can solve the default value part in the business logic, but the thing is I already have existing orders and I need to set the default value when I create the new column.
Now, I know I can set a hard-coded default value in Liquibase - but is there a way I could add the default value based on some other column of that table (for each entity)?
| Since no one answered here I'm posting the way I handled it:
<changeSet id="Add MODIFY_USER_ID to ORDERS" author="Noam">
<addColumn tableName="ORDERS">
<column name="MODIFY_USER_ID" type="BIGINT">
<constraints foreignKeyName="ORDERS_MODIFY_FK" referencedTableName="USERS" referencedColumnNames="ID" />
</column>
</addColumn>
</changeSet>
<changeSet id="update the new MODIFY_USER_ID column to get the CREATOR" author="Noam">
<sql>update ORDERS set MODIFY_USER_ID = CREATOR</sql>
</changeSet>
<changeSet id="Add not nullable constraint on MODIFY_USER_ID column" author="Noam">
<addNotNullConstraint tableName="ORDERS" columnName="MODIFY_USER_ID" columnDataType="BIGINT" />
</changeSet>
I've done this in three different change-sets as the documentation recommends
| Liquibase | 35,172,172 | 21 |
I am using Liquibase for my database updates and testing it against H2.
I am using Spring to configure the properties. I use
dataSource.setUrl("jdbc:h2:mem:test_common");
to connect to the test_common database, but it did not work out.
I realized that in H2 a database != a schema, so I tried to set a default schema to test_common as
dataSource.setUrl("jdbc:h2:mem:test_common;INIT=CREATE SCHEMA test_common\\; SET SCHEMA test_common");
but this didn't work out either; I see logs like:
INFO 5/26/14 2:24 PM:liquibase: Dropping Database Objects in schema: TEST_COMMON.PUBLIC
INFO 5/26/14 2:24 PM:liquibase: Creating database history table with name: PUBLIC.DATABASECHANGELOG
INFO 5/26/14 2:24 PM:liquibase: Creating database history table with name: PUBLIC.DATABASECHANGELOG
INFO 5/26/14 2:24 PM:liquibase: Successfully released change log lock
INFO 5/26/14 2:24 PM:liquibase: Successfully acquired change log lock
INFO 5/26/14 2:24 PM:liquibase: Reading from PUBLIC.DATABASECHANGELOG
INFO 5/26/14 2:24 PM:liquibase: Reading from PUBLIC.DATABASECHANGELOG
INFO 5/26/14 2:24 PM:liquibase: Reading from PUBLIC.DATABASECHANGELOG
INFO 5/26/14 2:24 PM:liquibase: liquibase/changelog.xml: liquibase/2014/1-1.xml::05192014.1525::h2: Reading from PUBLIC.DATABASECHANGELOG
INFO 5/26/14 2:24 PM:liquibase: liquibase/changelog.xml: liquibase/2014/1-1.xml::05192014.1525::h2: Table network created
INFO 5/26/14 2:24 PM:liquibase: liquibase/changelog.xml: liquibase/2014/1-1.xml::05192014.1525::h2: ChangeSet liquibase/2014/1-1.xml::05192014.1525::h2 ran successfully in 5ms
INFO 5/26/14 2:24 PM:liquibase: liquibase/changelog.xml: liquibase/2014/1-1.xml::05192014.1525::h2: Reading from PUBLIC.DATABASECHANGELOG
INFO 5/26/14 2:24 PM:liquibase: liquibase/changelog.xml: liquibase/2014/1-2.xml::05192014.1525::h2: Reading from PUBLIC.DATABASECHANGELOG
INFO 5/26/14 2:24 PM:liquibase: liquibase/changelog.xml: liquibase/2014/1-2.xml::05192014.1525::h2: New row inserted into network
INFO 5/26/14 2:24 PM:liquibase: liquibase/changelog.xml: liquibase/2014/1-2.xml::05192014.1525::h2: New row inserted into network
INFO 5/26/14 2:24 PM:liquibase: liquibase/changelog.xml: liquibase/2014/1-2.xml::05192014.1525::h2: New row inserted into network
INFO 5/26/14 2:24 PM:liquibase: liquibase/changelog.xml: liquibase/2014/1-2.xml::05192014.1525::h2: New row inserted into network
INFO 5/26/14 2:24 PM:liquibase: liquibase/changelog.xml: liquibase/2014/1-2.xml::05192014.1525::h2: New row inserted into network
INFO 5/26/14 2:24 PM:liquibase: liquibase/changelog.xml: liquibase/2014/1-2.xml::05192014.1525::h2: New row inserted into network
INFO 5/26/14 2:24 PM:liquibase: liquibase/changelog.xml: liquibase/2014/1-2.xml::05192014.1525::h2: ChangeSet liquibase/2014/1-2.xml::05192014.1525::h2 ran successfully in 5ms
INFO 5/26/14 2:24 PM:liquibase: liquibase/changelog.xml: liquibase/2014/1-2.xml::05192014.1525::h2: Reading from PUBLIC.DATABASECHANGELOG
How do I set the default schema and database name in H2?
| Default schema is PUBLIC
For the record, the Commands page of the H2 Database site for the SET SCHEMA command says:
The default schema for new connections is PUBLIC.
That documentation also notes that you can specify the default schema when connecting:
This setting can be appended to the database URL: jdbc:h2:test;SCHEMA=ABC
Only one database
As for accessing various databases, H2 does not support the SQL Standard concepts of CLUSTER or CATALOG. You connect to one specific database (catalog) as part of your JDBC URL. Connections to that database are limited to that one single database. See the Question, Can you create multiple catalogs in H2? with an Answer by Thomas Mueller.
You could open another connection to another database but it would be entirely separate.
So talking about a βdefault databaseβ has no meaning with H2 Database.
| Liquibase | 23,877,972 | 20 |
I need to do some data migration which is too complex to do in a Liquibase changeset. We use Spring.
That's why I wrote a class implementing the liquibase.change.custom.CustomTaskChange interface. I then reference it from within a changeset.
All is fine up to this point.
My question is:
Is it possible to get access to the other Spring beans from within such a class?
When I try to use an autowired bean in this class, it's null, which makes me think that the autowiring is simply not done at this point?
I've also read in some other thread that the Liquibase bean must be initialized before all other beans; is that correct?
Here is a snippet of the class I wrote:
@Component
public class UpdateJob2 implements CustomTaskChange {
private String param1;
@Autowired
private SomeBean someBean;
@Override
public void execute(Database database) throws CustomChangeException {
try {
List<SomeObject> titleTypes = someBean.getSomeObjects(
param1
);
} catch (Exception e) {
throw new CustomChangeException();
}
...
I get an exception and when debugging I can see that someBean is null.
Here is the config for the SpringLiquibase:
@Configuration
@EnableTransactionManagement(proxyTargetClass = true)
@ComponentScan({
"xxx.xxx.."})
public class DatabaseConfiguration {
@Bean
public SpringLiquibase springLiquibase() {
SpringLiquibase liquibase = new SpringLiquibase();
liquibase.setDataSource(dataSource());
liquibase.setChangeLog("classpath:liquibase-changelog.xml");
return liquibase;
}
...
Some more config:
<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.1.xsd">
<includeAll path="dbschema"/>
</databaseChangeLog>
And here the call from the changeset:
<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.1.xsd">
<changeSet id="201509281536" author="sr">
<customChange class="xxx.xxx.xxx.UpdateJob2">
<param name="param1" value="2" />
</customChange>
</changeSet>
| I'm currently running through this problem as well... After hours of digging, I found 2 solutions; no AOP is needed.
Liquibase version: 4.1.1
Solution A
In the official example of customChange
https://docs.liquibase.com/change-types/community/custom-change.html
In CustomChange.setFileOpener, ResourceAccessor actually is an inner class SpringLiquibase$SpringResourceOpener, and it has a member 'resourceLoader', which is indeed an ApplicationContext. Unfortunately, it's private and no getter is available.
So here comes an ugly solution: USE REFLECTION TO GET IT AND INVOKE getBean
Solution B (More elegant)
Before we get started, let's see some basic facts about Liquibase. The official way of integrating Liquibase with Spring Boot is by using:
org.springframework.boot.autoconfigure.liquibase.LiquibaseAutoConfiguration$LiquibaseConfiguration
This is a conditional inner config bean for creating SpringLiquibase ONLY WHEN SpringLiquibase.class IS MISSING
@Configuration
@ConditionalOnMissingBean(SpringLiquibase.class)
@EnableConfigurationProperties({ DataSourceProperties.class,
LiquibaseProperties.class })
@Import(LiquibaseJpaDependencyConfiguration.class)
public static class LiquibaseConfiguration {...}
So we can create our own SpringLiquibase by adding a liquibase config bean
@Getter
@Configuration
@EnableConfigurationProperties(LiquibaseProperties.class)
public class LiquibaseConfig {
private DataSource dataSource;
private LiquibaseProperties properties;
public LiquibaseConfig(DataSource dataSource, LiquibaseProperties properties) {
this.dataSource = dataSource;
this.properties = properties;
}
@Bean
public SpringLiquibase liquibase() {
SpringLiquibase liquibase = new BeanAwareSpringLiquibase();
liquibase.setDataSource(dataSource);
liquibase.setChangeLog(this.properties.getChangeLog());
liquibase.setContexts(this.properties.getContexts());
liquibase.setDefaultSchema(this.properties.getDefaultSchema());
liquibase.setDropFirst(this.properties.isDropFirst());
liquibase.setShouldRun(this.properties.isEnabled());
liquibase.setLabels(this.properties.getLabels());
liquibase.setChangeLogParameters(this.properties.getParameters());
liquibase.setRollbackFile(this.properties.getRollbackFile());
return liquibase;
}
}
inside which we new an extended class of SpringLiquibase: BeanAwareSpringLiquibase
public class BeanAwareSpringLiquibase extends SpringLiquibase {
private static ResourceLoader applicationContext;
public BeanAwareSpringLiquibase() {
}
public static final <T> T getBean(Class<T> beanClass) throws Exception {
if (ApplicationContext.class.isInstance(applicationContext)) {
return ((ApplicationContext)applicationContext).getBean(beanClass);
} else {
throw new Exception("Resource loader is not an instance of ApplicationContext");
}
}
public static final <T> T getBean(String beanName) throws Exception {
if (ApplicationContext.class.isInstance(applicationContext)) {
return ((ApplicationContext)applicationContext).getBean(beanName);
} else {
throw new Exception("Resource loader is not an instance of ApplicationContext");
}
}
@Override
public void setResourceLoader(ResourceLoader resourceLoader) {
super.setResourceLoader(resourceLoader);
applicationContext = resourceLoader;
}}
BeanAwareSpringLiquibase has a static reference to the aforementioned ResourceLoader. On Spring Boot startup, 'setResourceLoader' defined by the ResourceLoaderAware interface will be invoked automatically before 'afterPropertiesSet' defined by the InitializingBean interface, thus the code execution will be like this:
Spring Boot invokes setResourceLoader, injecting the resourceLoader (applicationContext) into BeanAwareSpringLiquibase.
Spring Boot invokes afterPropertiesSet, performing the Liquibase update including customChange; by now you already have full access to the applicationContext.
PS:
Remember to add your Liquibase config bean package path to @ComponentScan, or it will still use LiquibaseAutoConfiguration instead of our own LiquibaseConfig.
Preparing all beans you need in 'setUp' before 'execute' would be a better convention.
| Liquibase | 32,826,600 | 20 |
I'm trying to use Liquibase for generating the changeLog, starting by snapshotting the current state of my database.
Environment details:
OS: Windows 7 32 x86,
Java JDK 1.7,
mysql jdbc driver from MySQL
liquibase 2.0.5.
I run the following from command line:
liquibase --driver=com.mysql.jdbc.Driver --changeLogFile=./structure.xml --url="jdbc:mysql://mysql.mysite.com" --username=<myuser> --password=<mypass> generateChangeLog
It runs fine and generates the output file. But the output file just contains:
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<databaseChangeLog xmlns="http://www.liquibase.org/xml/ns/dbchangelog" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-2.0.xsd"/>
And no tables are created on my database (I was expecting the two tables used for tracking).
What am I missing?
EDITS
Yes, I'm referring to the liquibasechangelog and liquibasechangelock tables. I know they should automatically appear in the database. My question is why they aren't there. And yes, the provided user has the rights granted for doing such a task.
And it is not an empty database. It has nearly 20 tables, 10 views, data...
| Just specify the database name with the --url flag like ZNK said:
--url="jdbc:mysql://mysql.mysite.com/database_name_here"
| Liquibase | 12,449,824 | 20 |
I'm currently working on a liquibase.xml file to create table table_a. One of my fields is <column name="state" type="ENUM('yes','no')">
I'm using PostgreSQL as my DBMS. Is there anything like an enum data type?
I've read in this link http://wiki.postgresql.org/wiki/Enum
that PostgreSQL doesn't have such a data type. The CREATE TYPE function is used to create this data type. I still don't know how to make it in Liquibase though.
Any suggestions?
| Well of course PostgreSQL has an enum type (which is clearly documented in the link you have shown and the manual).
I don't think Liquibase "natively" supports enums for PostgreSQL, but you should be able to achieve it with a custom SQL:
<changeSet id="1" author="Arthur">
<sql>CREATE TYPE my_state AS ENUM ('yes','no')</sql>
<table name="foo">
<column name="state" type="my_state"/>
</table>
</changeSet>
For a simple yes/no column, I'd actually use the boolean type instead of an enum
| Liquibase | 5,133,423 | 20 |
I need to map two columns of an entity class as JSON in Postgres using Spring Data JPA. After reading multiple Stack Overflow posts and a Baeldung post,
How to map a map JSON column to Java Object with JPA
https://www.baeldung.com/hibernate-persist-json-object
I did the configuration as below. However, I am facing the error "ERROR: column "headers" is of type json but expression is of type character varying".
Please provide some pointers to resolve this issue.
I have an entity class as below
@Entity
@Data
@SuperBuilder
@NoArgsConstructor
@AllArgsConstructor
public class Task {
@Id
@GeneratedValue(strategy = IDENTITY)
private Integer id;
private String url;
private String httpMethod;
@Convert(converter = HashMapConverter.class)
@Column(columnDefinition = "json")
private Map<String, String> headers;
@Convert(converter = HashMapConverter.class)
@Column(columnDefinition = "json")
private Map<String, String> urlVariables;
}
I have created a test class to check whether the entity is persisted. On running this JUnit test, the test below fails with the error quoted above.
@SpringBootTest
class TaskRepositoryTest {
private static Task randomTask = randomTask();
@Autowired
private TaskRepository taskRepository;
@BeforeEach
void setUp() {
taskRepository.deleteAll();
taskRepository.save(randomTask);
}
public static Task randomTask() {
return randomTaskBuilder().build();
}
public static TaskBuilder randomTaskBuilder() {
Map<String,String> headers = new HashMap<>();
headers.put(randomAlphanumericString(10),randomAlphanumericString(10));
Map<String,String> urlVariables = new HashMap<>();
urlVariables.put(randomAlphanumericString(10),randomAlphanumericString(10));
return builder()
.id(randomPositiveInteger())
.httpMethod(randomAlphanumericString(10))
.headers(headers)
.urlVariables(urlVariables)
.url(randomAlphanumericString(10));
}
}
Using Liquibase, I have created the table in the Postgres DB, and I can see the column datatype is json.
databaseChangeLog:
- changeSet:
id: 1
author: abc
changes:
- createTable:
tableName: task
columns:
- column:
name: id
type: int
autoIncrement: true
constraints:
primaryKey: true
- column:
name: url
type: varchar(250)
constraints:
nullable: false
unique: true
- column:
name: http_method
type: varchar(50)
constraints:
nullable: false
- column:
name: headers
type: json
- column:
name: url_variables
type: json
rollback:
- dropTable:
tableName: task
| For anyone who landed here because they're using JdbcTemplate and getting this error, the solution is very simple: In your SQL statement, cast the JSON argument using ::jsonb or CAST.
E.g. String INSERT_SQL = "INSERT INTO xxx (id, json_column) VALUES(?, ?)";
becomes String INSERT_SQL = "INSERT INTO xxx (id, json_column) VALUES(?, ?::jsonb)";
or with named params String INSERT_SQL = "INSERT INTO xxx (id, json_column) VALUES(:id, CAST(:json_column AS JSONB))";
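For illustration, a minimal JdbcTemplate sketch using that cast (the table and column names follow the question; everything else is hypothetical):
import org.springframework.jdbc.core.JdbcTemplate;

public class TaskJdbcDao {
    private final JdbcTemplate jdbcTemplate;

    public TaskJdbcDao(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    public void insert(int id, String headersJson) {
        // the ?::jsonb cast makes Postgres accept the String parameter as JSONB
        jdbcTemplate.update("INSERT INTO task (id, headers) VALUES (?, ?::jsonb)",
                id, headersJson);
    }
}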
| Liquibase | 65,478,350 | 19 |
I am using YAML, but I guess it is almost the same as XML or JSON. I found that you can use addForeignKeyConstraint, but I want to add the constraint at table creation, not by altering an existing table. How should I do that? Can I do something like this?
- changeSet:
id: create_questions
author: Author
changes:
- createTable:
tableName: questions
columns:
- column:
name: id
type: int
autoIncrement: true
constraints:
primaryKey: true
nullable: false
- column:
name: user_id
type: int
constraints:
foreignKey:
referencedColumnNames: id
referencedTableName: users
nullable: false
- column:
name: question
type: varchar(255)
constraints:
nullable: false
| I never used the YAML format, but in an XML changelog you can do this:
<column name="user_id" type="int">
<constraints nullable="false"
foreignKeyName="fk_questions_author"
references="users(id)"/>
</column>
The equivalent YAML should be something like this:
- column:
name: user_id
type: int
constraints:
nullable: false
foreignKeyName: fk_questions_author
references: users(id)
| Liquibase | 39,793,397 | 19 |
I'm trying to create a new table via a Liquibase changeset that looks like this:
<createTable tableName="mytable">
<column name="id" type="number" autoIncrement="true">
<constraints primaryKey="true" nullable="false"/>
</column>
<column name="name" type="varchar(50)"/>
<column name="description" type="varchar(255)"/>
<column name="image_path" type="varchar(255)"/>
</createTable>
This fails with the following error:
liquibase.exception.DatabaseException:
Error executing SQL CREATE TABLE
kkm.mytable (id numeric AUTO_INCREMENT NOT NULL, name VARCHAR(50) NULL, description
VARCHAR(255) NULL, image_path VARCHAR(255) NULL,
CONSTRAINT PK_BOUFFE PRIMARY KEY (id)):
Incorrect column specifier for column 'id'
If I set autoIncrement="false", this works perfectly.
Is this a known issue?
EDIT:
This works:
<createTable tableName="mytable">
<column name="id" type="number" autoIncrement="false">
<constraints primaryKey="true" nullable="false"/>
</column>
<column name="name" type="varchar(50)"/>
<column name="description" type="varchar(255)"/>
<column name="image_path" type="varchar(255)"/>
</createTable>
<addAutoIncrement
columnDataType="int"
columnName="id"
incrementBy="1"
startWith="1"
tableName="mytable"/>
| Change type="number" to type="BIGINT".
i.e.
<createTable tableName="mytable">
<column name="id" type="BIGINT" autoIncrement="true">
<constraints primaryKey="true" nullable="false"/>
</column>
<column name="name" type="varchar(50)"/>
<column name="description" type="varchar(255)"/>
<column name="image_path" type="varchar(255)"/>
</createTable>
Hope it works..!!!!
| Liquibase | 20,473,575 | 19 |
Can anyone tell me the difference between specifying a defaultValue="0" vs a defaultValueNumeric="0" in a changeset? It's for a bigint column.
http://www.liquibase.org/manual/add_default_value doesn't really go into detail here.
| The difference is that defaultValue puts quotes around the value in the resulting SQL. Many databases will interpret inserting '42' into a numeric field as the number 42, but some fail. defaultValueNumeric tells Liquibase it is a number, so it will not be quoted and will work on all database types.
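To illustrate with a hypothetical column definition, the rough shape of the SQL each variant produces:
<!-- defaultValue: the value is quoted in the generated SQL -->
<column name="amount" type="bigint" defaultValue="0"/>
<!-- roughly: amount BIGINT DEFAULT '0' -->

<!-- defaultValueNumeric: the value stays an unquoted numeric literal -->
<column name="amount" type="bigint" defaultValueNumeric="0"/>
<!-- roughly: amount BIGINT DEFAULT 0 -->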
| Liquibase | 7,267,925 | 19 |
I need to update my data that has an HTML tag inside, so I wrote this in Liquibase:
<sql> update table_something set table_content = " something <br/> in the next line " </sql>
It apparently doesn't work in Liquibase (I got loooong and meaningless errors). I tried to remove <br/> and it works.
My question is: is it possible to insert/update something that contains an XML tag in Liquibase?
I am using liquibase 1.9.3 with Grails 1.1.1
edited: forgot to set code sample tag in my examples.
| As the Liquibase author mentions here, you'll need to add a CDATA section inside <sql>.
In your particular example that would become:
<sql><![CDATA[ update table_something set table_content = " something <br/> in the next line " ]]></sql>
| Liquibase | 1,082,371 | 19 |
First, a little background. I have a set of Java applications, some based on JPA, some not. To create my databases I am currently using Hibernate's schema export to generate create scripts for those using JPA. For those not using JPA I generate the scripts by hand. These are then run during application installation using ANT. For updates the application installer simply applies update scripts to the database.
To improve the management of database updates I have been looking at Flyway and Liquibase. Both seem to almost do what I want (aside: I prefer Flyway at the moment because of all the pre-existing SQL/DDL scripts we have). The issue I can see is that they both update the database directly. This is fine for a lot of installations, but not all.
What I would like to do is run Flyway/Liquibase against the database and generate an update script that incorporates all the updates needed to bring the database up to date - including any changes Flyway/Liquibase needs to make to its own tables. This would allow me (or more importantly a database admin) to run the update script outside of the application(s) to update the database. I could then use Flyway/Liquibase within my application purely to verify that the database is up to date.
Is it possible to do this with Flyway or Liquibase or any other tool for that matter?
| Liquibase handles it quite fine. It looks at your database in its current state, finds unapplied changesets, and generates an SQL script via the update command in SQL output mode.
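For example, the CLI's updateSQL command (the same goal the Maven plugin exposes as liquibase:updateSQL) writes the pending changes to a script instead of applying them; the connection details here are placeholders:
liquibase --changeLogFile=db.changelog-master.xml \
          --url="jdbc:mysql://localhost/mydb" \
          --username=dba --password=secret \
          updateSQL > update.sql
A DBA can then review and run update.sql by hand, while the application only verifies the changelog state.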
Using a proper database migration tool instead of the Hibernate generator is the way to go in any case; sooner or later you'll end up with a situation that Hibernate does not support. For us it was dropping a unique index and replacing it with another. You can also enable hibernate.hbm2ddl.auto=validate to feel safe about the compatibility between the database structure and the entity beans.
| Liquibase | 14,482,644 | 18 |
I update the schema and initial data in a Spring context using the following bean:
<bean id="liquibase" class="liquibase.integration.spring.SpringLiquibase">
<property name="dataSource" ref="dataSource" />
<property name="changeLog" value="classpath:db/changelog/db.changelog-master.xml" />
<property name="dropFirst" value="true" />
</bean>
I also use the Maven Liquibase plugin to generate SQL scripts in order to see what tables are created, etc.
<plugin>
<groupId>org.liquibase</groupId>
<artifactId>liquibase-maven-plugin</artifactId>
<version>2.0.5</version>
<configuration>
<!--mvn initialize liquibase:updateSQL-->
<propertyFile>src/main/resources/db/config/liquibase-gensql-data-access.properties</propertyFile>
<changeLogFile>src/main/resources/db/changelog/db.changelog-master.xml</changeLogFile>
</configuration>
</plugin>
The db.changelog-master.xml file includes child Liquibase changelog files. The problem is how to refer to them from the master. When I use Spring I have to use the following path via classpath:
<include file="classpath:/db/changelog/db.changelog-1.0.xml"/>
When Maven is used, the path is:
<include file="src/main/resources/db/changelog/db.changelog-1.0.xml"/>
I'd like to have the same configuration for both cases. How can I achieve it?
| I think if you change your Maven path from
<changeLogFile>src/main/resources/db/changelog/db.changelog-master.xml</changeLogFile>
to
<changeLogFile>db/changelog/db.changelog-master.xml</changeLogFile>
and update the db.changelog-master.xml file for all included files to use paths relative to the src/main/resources directory, it will fix the problem.
I solved this problem by using the same path to changelog files in Spring, Maven and the integration tests which call Liquibase. All my changelog files are located under the /src/main/resources/db directory in one of the Maven modules within a project.
Maven profile which runs Liquibase, notice path: db/masterChangeLog.xml
<plugin>
<groupId>org.liquibase</groupId>
<artifactId>liquibase-maven-plugin</artifactId>
<version>3.0.2</version>
<executions>
<execution>
<id>*** Install a last major release version of db ***</id>
<phase>process-resources</phase>
<goals>
<goal>update</goal>
</goals>
<configuration>
<changeLogFile>db/masterChangeLog.xml</changeLogFile>
<contexts>dbBuildContext, dmlDevContext</contexts>
<propertyFile>db/liquibase-${user.name}.properties</propertyFile>
<promptOnNonLocalDatabase>false</promptOnNonLocalDatabase>
<logging>debug</logging>
</configuration>
</execution>
db/masterChangeLog.xml file includes these files:
<include file="db/install.xml"/>
<include file="db/update.xml"/>
db/install.xml file includes other changelog files (so does update.xml):
<includeAll path="db/install/seq"/>
<includeAll path="db/install/tab"/>
<includeAll path="db/install/cst"/>
<includeAll path="db/latest/vw" />
Spring context executes the same set of db scripts upon app startup as follows:
<bean id="liquibase" class="liquibase.integration.spring.SpringLiquibase">
<property name="dataSource" ref="baseCostManagementDataSource" />
<property name="changeLog" value="classpath:db/masterChangelog.xml" />
<property name="contexts" value="dbBuildContext, dmlDevContext" />
</bean>
| Liquibase | 16,605,099 | 17 |
I'm trying to execute the following changeSet in liquibase which should create an index. If the index doesn't exist, it should silently fail:
<changeSet failOnError="false" author="sys" id="1">
<createIndex unique="true" indexName="key1" tableName="Table1">
<column name="name" />
</createIndex>
</changeSet>
So far, so good. The problem is that this changeSet doesn't get logged into the DATABASECHANGELOG table and is therefore executed every time Liquibase runs. According to the Liquibase documentation and e.g. this answer from Nathen Voxland, I thought that the changeset should be marked as ran in the DATABASECHANGELOG table. Instead it isn't logged at all and, as I said before, executed every time Liquibase runs (and fails every time again).
Am I missing something?
(I'm using MySQL as DBMS)
| In the answer given by Nathen Voxland, he recommended the more correct approach of using a precondition to check the state of the database, before running the changeset.
It seems to me that ignoring a failure is a bad idea... It means you don't fully control the database configuration... The "failOnError" parameter allows Liquibase to continue. Wouldn't it be a bad idea for a build to record a changeset as executed, if in fact it didn't execute because an error occurred?
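For the index from the question, a sketch of that precondition approach might look like this (indexExists requires a reasonably recent Liquibase version; onFail="MARK_RAN" records the changeset as ran without executing it when the index already exists):
<changeSet author="sys" id="1">
    <preConditions onFail="MARK_RAN">
        <not>
            <indexExists indexName="key1" tableName="Table1"/>
        </not>
    </preConditions>
    <createIndex unique="true" indexName="key1" tableName="Table1">
        <column name="name"/>
    </createIndex>
</changeSet>
This way the changeset is recorded in DATABASECHANGELOG either way, so it is not re-attempted on every run.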
| Liquibase | 10,913,133 | 16 |
I need a list of the generic data types available in Liquibase. Where can I find these in the documentation.
I need them when adding columns to my table:
<changeSet author="liquibase-docs" id="addColumn-example">
<addColumn catalogName="cat"
schemaName="public"
tableName="person">
<column name="address" type="varchar(255)"/>
</addColumn>
</changeSet>
| Liquibase uses the standard JDBC datatypes - here is one reference, from http://db.apache.org/ojb/docu/guides/jdbc-types.html
JDBC Type Java Type
CHAR String
VARCHAR String
LONGVARCHAR String
NUMERIC java.math.BigDecimal
DECIMAL java.math.BigDecimal
BIT boolean
BOOLEAN boolean
TINYINT byte
SMALLINT short
INTEGER int
BIGINT long
REAL float
FLOAT double
DOUBLE double
BINARY byte[]
VARBINARY byte[]
LONGVARBINARY byte[]
DATE java.sql.Date
TIME java.sql.Time
TIMESTAMP java.sql.Timestamp
CLOB Clob
BLOB Blob
ARRAY Array
DISTINCT mapping of underlying type
STRUCT Struct
REF Ref
DATALINK java.net.URL
JAVA_OBJECT underlying Java class
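In a changelog these names (plus a length or precision where applicable) go straight into the type attribute; for example, a sketch extending the addColumn from the question:
<addColumn tableName="person">
    <column name="address" type="varchar(255)"/>
    <column name="active" type="boolean"/>
    <column name="created_at" type="timestamp"/>
</addColumn>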
| Liquibase | 23,742,361 | 16 |
I tried to remote debug a Maven plugin for a Liquibase project with IntelliJ IDEA, which is highlighting the wrong source code line.
I manually built and installed the plugin in my local maven repository from sources in my Intellij project. Intellij version is 11.1.3 and maven version is 3.0.4 running on Ubuntu 12.04.
For debugging the Maven plugin I used the mvnDebug command.
If someone has any ideas please give me some advice. I'm not too used to remote debugging (in fact this is the second time I've done this).
| For me, whenever IntelliJ is highlighting the wrong line, it was always because the version of the JAR/classes being used to run the application differs from my source files - i.e. a different version of the sources was used to build the JAR and/or classes.
You are going to have to be sure that you are working from the exact source that was used to build the classes you are debugging.
You can verify this by looking at the classpath being used to launch the application, locating the JAR file or classes directory that contains the classes you are debugging, and verifying that they were built from the sources you are inspecting.
Note that when you are debugging third-party libraries, you often can download the "sources" jar (see IntelliJ2-IDEA get Maven-2 to download source and documentation).
| Liquibase | 12,824,532 | 16 |
I'm creating a new table, like this:
<createTable tableName="myTable">
<column name="key" type="int" autoIncrement="true">
<constraints primaryKey="true" primaryKeyName="PK_myTable" nullable="false"/>
</column>
<column name="name" type="nvarchar(40)">
<constraints nullable="false"/>
</column>
<column name="description" type="nvarchar(100)">
<constraints nullable="true"/>
</column>
</createTable>
As for the nullable constraint: if I omit that attribute, what is the default setting?
E.g.,
If I only did this:
<column name="description" type="nvarchar(100)"/>
...would the column be nullable?
More importantly, where is the documentation that specifies this (as I have other questions like this)?
I looked here: Liquibase Column Tag, but it only says ambiguously:
nullable - Is column nullable?
| It isn't documented, but I looked at the source code and it appears that if you do not specify, there is no constraint added to the column. One way you can check this yourself is to use the liquibase updateSql command to look at the SQL generated.
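For example, assuming your connection settings are in a liquibase.properties file, a sketch would be:
liquibase updateSQL > update.sql
You can then check whether the generated CREATE TABLE statement contains a NULL or NOT NULL clause for the column in question.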
| Liquibase | 32,889,080 | 16 |
I'm currently working on a Spring project which uses Hibernate and Liquibase. What I am trying to achieve is to update Liquibase's changelog automatically every time I build the project. It's supposed to generate a diff based on my current production database and my updated Hibernate entities.
But the problem I have is that every time I clean and rebuild my project I get the following error:
liquibase-plugin: Running the 'main' activity...
INFO 6/22/15 11:12 AM: liquibase-hibernate: Reading hibernate configuration hibernate:spring:com.example.name.domain?dialect=org.hibernate.dialect.MySQL5Dialect
INFO 6/22/15 11:12 AM: liquibase-hibernate: Found package com.example.app.domain
INFO 6/22/15 11:12 AM: liquibase-hibernate: Found dialect org.hibernate.dialect.MySQL5Dialect
Unexpected error running Liquibase: Unable to resolve persistence unit root URL
SEVERE 6/22/15 11:12 AM: liquibase: Unable to resolve persistence unit root URL
liquibase.exception.DatabaseException: javax.persistence.PersistenceException: Unable to resolve persistence unit root URL
at liquibase.integration.commandline.CommandLineUtils.createDatabaseObject(CommandLineUtils.java:69)
at liquibase.integration.commandline.Main.createReferenceDatabaseFromCommandParams(Main.java:1169)
at liquibase.integration.commandline.Main.doMigration(Main.java:936)
at liquibase.integration.commandline.Main.run(Main.java:175)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:90)
at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:324)
at org.codehaus.groovy.runtime.callsite.StaticMetaMethodSite.invoke(StaticMetaMethodSite.java:43)
at org.codehaus.groovy.runtime.callsite.StaticMetaMethodSite.call(StaticMetaMethodSite.java:88)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:45)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:108)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:116)
at org.liquibase.gradle.LiquibaseTask.runLiquibase(LiquibaseTask.groovy:97)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:90)
at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:324)
at org.codehaus.groovy.runtime.metaclass.ClosureMetaClass.invokeMethod(ClosureMetaClass.java:382)
at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1015)
at org.codehaus.groovy.runtime.callsite.PogoMetaClassSite.callCurrent(PogoMetaClassSite.java:66)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCallCurrent(CallSiteArray.java:49)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callCurrent(AbstractCallSite.java:133)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callCurrent(AbstractCallSite.java:141)
at org.liquibase.gradle.LiquibaseTask$_liquibaseAction_closure1.doCall(LiquibaseTask.groovy:52)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:90)
at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:324)
at org.codehaus.groovy.runtime.metaclass.ClosureMetaClass.invokeMethod(ClosureMetaClass.java:292)
at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1015)
at groovy.lang.Closure.call(Closure.java:423)
at groovy.lang.Closure.call(Closure.java:439)
at org.codehaus.groovy.runtime.DefaultGroovyMethods.each(DefaultGroovyMethods.java:1379)
at org.codehaus.groovy.runtime.DefaultGroovyMethods.each(DefaultGroovyMethods.java:1310)
at org.codehaus.groovy.runtime.dgm$150.invoke(Unknown Source)
at org.codehaus.groovy.runtime.callsite.PojoMetaMethodSite$PojoMetaMethodSiteNoUnwrapNoCoerce.invoke(PojoMetaMethodSite.java:271)
at org.codehaus.groovy.runtime.callsite.PojoMetaMethodSite.call(PojoMetaMethodSite.java:53)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:45)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:108)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:116)
at org.liquibase.gradle.LiquibaseTask.liquibaseAction(LiquibaseTask.groovy:46)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.gradle.internal.reflect.JavaMethod.invoke(JavaMethod.java:75)
at org.gradle.api.internal.project.taskfactory.AnnotationProcessingTaskFactory$StandardTaskAction.doExecute(AnnotationProcessingTaskFactory.java:226)
at org.gradle.api.internal.project.taskfactory.AnnotationProcessingTaskFactory$StandardTaskAction.execute(AnnotationProcessingTaskFactory.java:219)
at org.gradle.api.internal.project.taskfactory.AnnotationProcessingTaskFactory$StandardTaskAction.execute(AnnotationProcessingTaskFactory.java:208)
at org.gradle.api.internal.AbstractTask$TaskActionWrapper.execute(AbstractTask.java:589)
at org.gradle.api.internal.AbstractTask$TaskActionWrapper.execute(AbstractTask.java:572)
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeAction(ExecuteActionsTaskExecuter.java:80)
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeActions(ExecuteActionsTaskExecuter.java:61)
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.execute(ExecuteActionsTaskExecuter.java:46)
at org.gradle.api.internal.tasks.execution.PostExecutionAnalysisTaskExecuter.execute(PostExecutionAnalysisTaskExecuter.java:35)
at org.gradle.api.internal.tasks.execution.SkipUpToDateTaskExecuter.execute(SkipUpToDateTaskExecuter.java:64)
at org.gradle.api.internal.tasks.execution.ValidatingTaskExecuter.execute(ValidatingTaskExecuter.java:58)
at org.gradle.api.internal.tasks.execution.SkipEmptySourceFilesTaskExecuter.execute(SkipEmptySourceFilesTaskExecuter.java:42)
at org.gradle.api.internal.tasks.execution.SkipTaskWithNoActionsExecuter.execute(SkipTaskWithNoActionsExecuter.java:52)
at org.gradle.api.internal.tasks.execution.SkipOnlyIfTaskExecuter.execute(SkipOnlyIfTaskExecuter.java:53)
at org.gradle.api.internal.tasks.execution.ExecuteAtMostOnceTaskExecuter.execute(ExecuteAtMostOnceTaskExecuter.java:43)
at org.gradle.api.internal.AbstractTask.executeWithoutThrowingTaskFailure(AbstractTask.java:310)
at org.gradle.api.internal.AbstractTask.execute(AbstractTask.java:305)
at org.gradle.api.internal.TaskInternal$execute$0.call(Unknown Source)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:45)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:108)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:112)
at build_au43cnsvyw2rwctsxb5mg4cmw$_run_closure4.doCall(/Users/mihajlovic/Documents/repos/Backend/dbmigration/build.gradle:52)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:90)
at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:324)
at org.codehaus.groovy.runtime.metaclass.ClosureMetaClass.invokeMethod(ClosureMetaClass.java:292)
at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1015)
at groovy.lang.Closure.call(Closure.java:423)
at groovy.lang.Closure.call(Closure.java:439)
at org.gradle.api.internal.AbstractTask$ClosureTaskAction.execute(AbstractTask.java:558)
at org.gradle.api.internal.AbstractTask$ClosureTaskAction.execute(AbstractTask.java:539)
at org.gradle.api.internal.tasks.TaskMutator$1.execute(TaskMutator.java:77)
at org.gradle.api.internal.tasks.TaskMutator$1.execute(TaskMutator.java:73)
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeAction(ExecuteActionsTaskExecuter.java:80)
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeActions(ExecuteActionsTaskExecuter.java:61)
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.execute(ExecuteActionsTaskExecuter.java:46)
at org.gradle.api.internal.tasks.execution.PostExecutionAnalysisTaskExecuter.execute(PostExecutionAnalysisTaskExecuter.java:35)
at org.gradle.api.internal.tasks.execution.SkipUpToDateTaskExecuter.execute(SkipUpToDateTaskExecuter.java:64)
at org.gradle.api.internal.tasks.execution.ValidatingTaskExecuter.execute(ValidatingTaskExecuter.java:58)
at org.gradle.api.internal.tasks.execution.SkipEmptySourceFilesTaskExecuter.execute(SkipEmptySourceFilesTaskExecuter.java:42)
at org.gradle.api.internal.tasks.execution.SkipTaskWithNoActionsExecuter.execute(SkipTaskWithNoActionsExecuter.java:52)
at org.gradle.api.internal.tasks.execution.SkipOnlyIfTaskExecuter.execute(SkipOnlyIfTaskExecuter.java:53)
at org.gradle.api.internal.tasks.execution.ExecuteAtMostOnceTaskExecuter.execute(ExecuteAtMostOnceTaskExecuter.java:43)
at org.gradle.api.internal.AbstractTask.executeWithoutThrowingTaskFailure(AbstractTask.java:310)
at org.gradle.execution.taskgraph.AbstractTaskPlanExecutor$TaskExecutorWorker.executeTask(AbstractTaskPlanExecutor.java:79)
at org.gradle.execution.taskgraph.AbstractTaskPlanExecutor$TaskExecutorWorker.processTask(AbstractTaskPlanExecutor.java:63)
at org.gradle.execution.taskgraph.AbstractTaskPlanExecutor$TaskExecutorWorker.run(AbstractTaskPlanExecutor.java:51)
at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor.process(DefaultTaskPlanExecutor.java:23)
at org.gradle.execution.taskgraph.DefaultTaskGraphExecuter.execute(DefaultTaskGraphExecuter.java:88)
at org.gradle.execution.SelectedTaskExecutionAction.execute(SelectedTaskExecutionAction.java:37)
at org.gradle.execution.DefaultBuildExecuter.execute(DefaultBuildExecuter.java:62)
at org.gradle.execution.DefaultBuildExecuter.access$200(DefaultBuildExecuter.java:23)
at org.gradle.execution.DefaultBuildExecuter$2.proceed(DefaultBuildExecuter.java:68)
at org.gradle.execution.DryRunBuildExecutionAction.execute(DryRunBuildExecutionAction.java:32)
at org.gradle.execution.DefaultBuildExecuter.execute(DefaultBuildExecuter.java:62)
at org.gradle.execution.DefaultBuildExecuter.execute(DefaultBuildExecuter.java:55)
at org.gradle.initialization.DefaultGradleLauncher.doBuildStages(DefaultGradleLauncher.java:149)
at org.gradle.initialization.DefaultGradleLauncher.doBuild(DefaultGradleLauncher.java:106)
at org.gradle.initialization.DefaultGradleLauncher.run(DefaultGradleLauncher.java:86)
at org.gradle.launcher.exec.InProcessBuildActionExecuter$DefaultBuildController.run(InProcessBuildActionExecuter.java:90)
at org.gradle.tooling.internal.provider.ExecuteBuildActionRunner.run(ExecuteBuildActionRunner.java:28)
at org.gradle.launcher.exec.ChainingBuildActionRunner.run(ChainingBuildActionRunner.java:35)
at org.gradle.launcher.exec.InProcessBuildActionExecuter.execute(InProcessBuildActionExecuter.java:41)
at org.gradle.launcher.exec.InProcessBuildActionExecuter.execute(InProcessBuildActionExecuter.java:28)
at org.gradle.launcher.exec.DaemonUsageSuggestingBuildActionExecuter.execute(DaemonUsageSuggestingBuildActionExecuter.java:50)
at org.gradle.launcher.exec.DaemonUsageSuggestingBuildActionExecuter.execute(DaemonUsageSuggestingBuildActionExecuter.java:27)
at org.gradle.launcher.cli.RunBuildAction.run(RunBuildAction.java:40)
at org.gradle.internal.Actions$RunnableActionAdapter.execute(Actions.java:169)
at org.gradle.launcher.cli.CommandLineActionFactory$ParseAndBuildAction.execute(CommandLineActionFactory.java:237)
at org.gradle.launcher.cli.CommandLineActionFactory$ParseAndBuildAction.execute(CommandLineActionFactory.java:210)
at org.gradle.launcher.cli.JavaRuntimeValidationAction.execute(JavaRuntimeValidationAction.java:35)
at org.gradle.launcher.cli.JavaRuntimeValidationAction.execute(JavaRuntimeValidationAction.java:24)
at org.gradle.launcher.cli.CommandLineActionFactory$WithLogging.execute(CommandLineActionFactory.java:206)
at org.gradle.launcher.cli.CommandLineActionFactory$WithLogging.execute(CommandLineActionFactory.java:169)
at org.gradle.launcher.cli.ExceptionReportingAction.execute(ExceptionReportingAction.java:33)
at org.gradle.launcher.cli.ExceptionReportingAction.execute(ExceptionReportingAction.java:22)
at org.gradle.launcher.Main.doAction(Main.java:33)
at org.gradle.launcher.bootstrap.EntryPoint.run(EntryPoint.java:45)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.gradle.launcher.bootstrap.ProcessBootstrap.runNoExit(ProcessBootstrap.java:54)
at org.gradle.launcher.bootstrap.ProcessBootstrap.run(ProcessBootstrap.java:35)
at org.gradle.launcher.GradleMain.main(GradleMain.java:23)
Caused by: javax.persistence.PersistenceException: Unable to resolve persistence unit root URL
at org.springframework.orm.jpa.persistenceunit.DefaultPersistenceUnitManager.determineDefaultPersistenceUnitRootUrl(DefaultPersistenceUnitManager.java:580)
at org.springframework.orm.jpa.persistenceunit.DefaultPersistenceUnitManager.preparePersistenceUnitInfos(DefaultPersistenceUnitManager.java:436)
at liquibase.ext.hibernate.database.HibernateSpringDatabase.buildConfigurationFromScanning(HibernateSpringDatabase.java:227)
at liquibase.ext.hibernate.database.HibernateSpringDatabase.buildConfiguration(HibernateSpringDatabase.java:55)
at liquibase.ext.hibernate.database.HibernateDatabase.setConnection(HibernateDatabase.java:45)
at liquibase.database.DatabaseFactory.findCorrectDatabaseImplementation(DatabaseFactory.java:123)
at liquibase.database.DatabaseFactory.openDatabase(DatabaseFactory.java:143)
at liquibase.integration.commandline.CommandLineUtils.createDatabaseObject(CommandLineUtils.java:50)
... 140 more
Caused by: java.io.FileNotFoundException: class path resource [] cannot be resolved to URL because it does not exist
at org.springframework.core.io.ClassPathResource.getURL(ClassPathResource.java:187)
at org.springframework.orm.jpa.persistenceunit.DefaultPersistenceUnitManager.determineDefaultPersistenceUnitRootUrl(DefaultPersistenceUnitManager.java:577)
... 147 more
As far as I understand it, the problem is that Liquibase requires all the entity classes to be loaded into Gradle's classpath, but since the Java classes are only just being compiled (or have not been compiled yet), that is not the case, hence the error message.
I can fix this error by building the project in two steps:
First I execute the task compileJava to build the entity classes.
Once that is done I can perform a normal build - now without any problems.
However building the project in two steps is quite tedious. Is there a way I can work around this issue to build it in just one step?
My build.gradle currently looks like this:
buildscript {
ext {
springBootVersion = '1.2.3.RELEASE'
}
repositories {
mavenCentral()
files 'build/classes/main'
}
dependencies {
classpath("org.springframework.boot:spring-boot-gradle-plugin:${springBootVersion}")
classpath("io.spring.gradle:dependency-management-plugin:0.5.0.RELEASE")
classpath("org.liquibase:liquibase-gradle-plugin:1.1.0")
classpath("mysql:mysql-connector-java:5.1.35")
classpath('org.hibernate.javax.persistence:hibernate-jpa-2.1-api:1.0.0.Final')
classpath('org.liquibase.ext:liquibase-hibernate4:3.5')
classpath('org.springframework.data:spring-data-jpa:1.8.0.RELEASE')
classpath files('build/classes/main/')
}
}
apply plugin: 'org.liquibase.gradle'
apply plugin: 'java'
apply plugin: 'eclipse-wtp'
apply plugin: 'idea'
apply plugin: 'spring-boot'
apply plugin: 'io.spring.dependency-management'
apply plugin: 'war'
war {
baseName = 'example'
version = '0.0.1-SNAPSHOT'
}
sourceCompatibility = 1.8
targetCompatibility = 1.8
repositories {
mavenCentral()
flatDir {
dirs 'libs'
}
}
configurations {
providedRuntime
}
dependencies {
compile("org.springframework.boot:spring-boot-starter-data-jpa")
compile("org.springframework.boot:spring-boot-starter-web")
compile("org.apache.commons:commons-lang3:3.0")
compile("mysql:mysql-connector-java:5.1.35")
compile('org.liquibase:liquibase-core:3.3.5')
providedRuntime("org.springframework.boot:spring-boot-starter-tomcat")
testCompile("org.springframework.boot:spring-boot-starter-test")
}
liquibase {
activities {
main {
changeLogFile 'src/main/resources/db/changelog/db.changelog-master.xml'
url 'jdbc:mysql://localhost:3306/Example'
username 'root'
password ''
referenceDriver 'liquibase.ext.hibernate.database.connection.HibernateDriver'
referenceUrl 'hibernate:spring:com.example.app.domain?dialect=org.hibernate.dialect.MySQL5Dialect'
}
}
runList = 'main'
}
eclipse {
classpath {
containers.remove('org.eclipse.jdt.launching.JRE_CONTAINER')
containers 'org.eclipse.jdt.launching.JRE_CONTAINER/org.eclipse.jdt.internal.debug.ui.launcher.StandardVMType/JavaSE-1.7'
}
}
task wrapper(type: Wrapper) {
gradleVersion = '2.4'
}
| I finally figured it out! You can use the Exec task type to execute commands on the command line, which starts a new, separate process. You can find more information in the documentation.
This is the solution I ended up with:
task updateChangeLog(type: Exec) {
commandLine 'gradle', 'diffChangeLog'
}
updateChangeLog.dependsOn compileJava
This starts a new separate gradle process with my diffChangeLog task.
| Liquibase | 30,972,713 | 16 |
I'm using the dropwizard-migrations module for liquibase db refactoring. See the guide here: http://dropwizard.codahale.com/manual/migrations/
When I run
java -jar my_project.jar db migrate my_project.yml
I get the following error:
ERROR [2013-09-11 20:53:43,089] liquibase: Change Set migrations.xml::11::me failed. Error: Error executing SQL CREATE OR REPLACE TRIGGER add_current_date_to_my_table BEFORE UPDATE ON my_table FOR EACH ROW EXECUTE PROCEDURE change_update_time();: ERROR: syntax error at or near "TRIGGER"
Position: 19
Here are some relevant changesets from my migrations.xml file:
<changeSet id="1" author="me">
<createProcedure>
CREATE OR REPLACE FUNCTION change_update_time() RETURNS trigger
LANGUAGE plpgsql
AS $$
BEGIN
NEW.updated_at := CURRENT_TIMESTAMP;
RETURN NEW;
END;
$$;
</createProcedure>
<rollback>
DROP FUNCTION change_update_time();
</rollback>
</changeSet>
<changeSet id="2" author="me">
<preConditions>
<not>
<tableExists tableName="my_table"/>
</not>
</preConditions>
<createTable tableName="my_table">
<column name="_id" type="integer" defaultValue="0">
<constraints nullable="false"/>
</column>
<column name="updated_at" type="timestamp without time zone" defaultValue="now()">
<constraints nullable="false"/>
</column>
</createTable>
</changeSet>
<changeSet id="3" author="me">
<sql splitStatements="false">
CREATE OR REPLACE TRIGGER add_current_date_to_my_table BEFORE UPDATE ON my_table FOR EACH ROW EXECUTE PROCEDURE change_update_time();
</sql>
<rollback>
DROP TRIGGER add_current_date_to_my_table ON my_table;
</rollback>
</changeSet>
Is there any way I can create the trigger add_current_date_to_my_table? Is this redundant with the "RETURNS trigger" from creating the function?
| The solution is:
<changeSet id="3" author="me">
<sql>
DROP TRIGGER IF EXISTS add_current_date_to_my_table ON my_table;
CREATE TRIGGER add_current_date_to_my_table BEFORE UPDATE ON my_table FOR EACH ROW EXECUTE PROCEDURE change_update_time();
</sql>
<rollback>
DROP TRIGGER add_current_date_to_my_table ON my_table;
</rollback>
</changeSet>
H/T Jens.
| Liquibase | 18,751,174 | 16 |
I am integrating Liquibase (www.liquibase.org) into our MVC3 SQL Server 2008 project to manage database migrations/changes. However, I'm stumbling at the first hurdle: connecting to a Microsoft SQL Server instance.
I am looking at the quick start tutorial on the Liquibase site, but substituting SQL Server for the MySQL database.
I run this command:
liquibase --driver=sqljdbc.jar --changeLogFile="C:\Temp\ChangeLog.xml" --url="jdbc:sqlserver://localhost;databaseName=test" --username=user --password=pass migrate
And receive this error:
Liquibase Update Failed: Cannot find database driver: sqljdbc.jar
I have tried adding --classpath pointing to the sqljdbc driver with no luck.
How can I create or update an MS-SQL Server database with liquibase?
| Create a properties file called liquibase.properties containing the following:
classpath=C:\\Program Files\\Microsoft SQL Server 2005 JDBC Driver\\sqljdbc_1.2\\enu\\sqljdbc.jar
driver=com.microsoft.sqlserver.jdbc.SQLServerDriver
url=jdbc:sqlserver://localhost:1433;databaseName=test
username=myuser
password=mypass
changeLogFile=C:\\Temp\\ChangeLog.xml
liquibase will use this file when located in the same directory. Useful to simplify the command-line.
Database is updated as follows:
liquibase update
Notes:
I'm not a SQL Server user; I picked up the JDBC driver and URL details from the Microsoft documentation
The "migrate" command has been deprecated.
| Liquibase | 8,990,467 | 16 |
We have a couple of database schemas and we are investigating a migration to Liquibase. (One of the schemas has already been migrated to Liquibase.)
An important question for us is whether Liquibase supports a dry run:
We need to run database changes on all schemas without commit to ensure we do not have problems.
In case of success all database changes run once again with commit.
(The question similar to this SQL Server query dry run but related to Liquibase)
Added after the answer
I read the documentation related to updateSQL and it does not answer the requirements of a "dry run".
It just generates the SQL (in command line, in Ant task and in Maven plugin).
I will clarify my question:
Does Liquibase support control over transactions?
I want to open a transaction before executing the Liquibase changelog, and to roll back the transaction after the changelog execution.
Of course, I need to verify the result of the execution.
Is it possible?
Added
Without control over transactions (or a dry run) we cannot migrate all our schemas to Liquibase.
Please help.
| You can try "updateSQL" mode. It will connect to the database (checking your access rights), acquire the database lock, generate and print the SQL statements to be applied (based on the database state and your current Liquibase changesets), print the changeset IDs missing from the current state of the database, and release the database lock.
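A sketch of that flow on the command line (assuming connection settings are in liquibase.properties):
# generate the SQL without applying anything
liquibase updateSQL > pending-changes.sql
# after reviewing the file, apply the changes for real
liquibase update
Note that this only previews the SQL; it does not execute the changelog inside a transaction that is later rolled back.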
| Liquibase | 21,847,482 | 15 |
Recently we started using Liquibase. It hasn't occurred yet, but we imagined what would happen if two developers commit changes to the changelog file in the shared Git repository.
How do we solve or avoid a merge conflict? To broaden this question somewhat:
What is the recommended workflow using Liquibase in combination with Git?
Example scenario:
- Michael changes a column in table 'customer'.
- Jacob changes a column in table 'account'.
So both developers added a <changeSet> to the same changelog file changelog.xml.
EDIT:
As commented, the scenario isn't that exciting. Assume Jacob was the last one to push his code. He has to pull first and gets a warning that there are merge conflicts to solve. He solves the conflict by keeping both parts of the code, Michael's and his. Updating the database with Liquibase gives no problems.
Advanced example scenario:
- Michael changes the name of column 'name' of table 'customer' to 'first_name', commits and pushes.
- Jacob changes the name of column 'name' of table 'customer' to 'last_name' and commits.
- Jacob gets a merge conflict when pulling Michael's code.
- Jacob and Michael discuss the conflict and agree it has to be 'last_name', which Jacob commits and pushes.
- Michael pulls the solved conflict and runs a Liquibase update. He gets an error: column "name" does not exist.
| At my company, the way we use liquibase prevents these situations from occurring. Basically, you create a separate liquibase file for each change. We name the files after the JIRA ticket that originated the change with a little descriptive text. Each of these files, we put in a folder for the version of the system they are for; if the next release is 1.22 then that folder is created when we start making database changes and we put each liquibase file in there along with an update.xml script that just includes them. That update.xml file winds up being the only place where conflicts can really happen, and they're trivial to resolve.
To illustrate, this is the src/main/liquibase folder:
├── install
│   ├── projectauthor.xml
│   ├── project_obspriorities.xml
│   ├── project_priorities.xml
│   ├── project_udv.xml
│   ├── project.xml
│   ├── roles.xml
│   ├── scan.xml
│   └── (the other table definitions in the system go here)
│
├── install.xml        <-- this reads all the files in ./install
│
├── local.properties   <--
├── prod.properties    <-- these are database credentials (boo, hiss)
├── staging.properties <--
├── test.properties    <--
│
├── update.xml         <-- reads each version/master.xml file
│
├── v1.16
│   ├── 2013-06-06_EVL-2240.xml
│   ├── 2013-07-01_EVL-2286-remove-invalid-name-characters.xml
│   ├── 2013-07-02_defer-coauthor-projectauthor-unique-constraint.xml
│   └── master.xml
├── v1.17
│   ├── 2013-07-19_EVL-2295.xml
│   ├── 2013-09-11_EVL-2370_otf-mosaicking.xml
│   └── master.xml
└── v1.18
    ├── 2014-05-05_EVL-2326-remove-prerequisite-construct.xml
    ├── 2014-06-03_EVL-2750_fix-p-band-polarizations.xml
    └── master.xml
The install.xml file is just a bunch of file inclusions:
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<databaseChangeLog xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-2.0.xsd">
<include file="src/main/liquibase/project/install/projectauthor.xml"/>
<include file="src/main/liquibase/project/install/project_obspriorities.xml"/>
...
</databaseChangeLog>
The update.xml file is the same story:
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<databaseChangeLog xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-2.0.xsd">
<include file="src/main/liquibase/project/v1.18/master.xml"/>
</databaseChangeLog>
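Each version's master.xml (not shown above) follows the same inclusion pattern; a sketch of what v1.18/master.xml could look like:
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<databaseChangeLog xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-2.0.xsd">
    <include file="src/main/liquibase/project/v1.18/2014-05-05_EVL-2326-remove-prerequisite-construct.xml"/>
    <include file="src/main/liquibase/project/v1.18/2014-06-03_EVL-2750_fix-p-band-polarizations.xml"/>
</databaseChangeLog>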
The one aspect of the workflow I am not in love with is that the install/*.xml are supposed to create the database as it is right before the current version, but we usually don't remember to do that.
Anyway, this approach will save you from a lot of grief with merging. We're using Subversion and not having any merge difficulties with this approach.
| Liquibase | 24,449,879 | 15 |
In my project I just tried upgrading liquibase from 3.2.2 to 3.4.2 (both the jars and the maven plugin). EDIT: same for upgrade to 3.3.x.
As a consequence, starting the application now gives the following error:
Caused by: liquibase.exception.ValidationFailedException: Validation Failed:
4 change sets check sum
src/main/resources/changelogs/xxx_add_indices_to_event_tables.xml::xxx-add_indices_to_event_tables::xxx is now: 7:0fc8f1faf484a59a96125f3e63431128
This for 4 changesets out of 50, all of which add indexes, such as:
<createIndex indexName="idx_eventtype" tableName="events">
<column name="eventtype" type="varchar(64)"/>
</createIndex>
While I can fix this locally, this would be a huge pain to manually fix on all running environments. Is this a bug, or is there some workaround?
| You could also use the <validCheckSum> sub-tag of the <changeSet> to add the new checksums as valid checksums.
Also, check out the comments on the bug CORE-1950. You could set the log level to "debug" on both of your Liquibase versions and see if you can find differences in the log output of the checksum creation.
Use the subtag something like this:
<changeSet id="00000000000009" author="system">
<validCheckSum>7:19f99d93fcb9909c7749b7fc2dce1417</validCheckSum>
<preConditions onFail="MARK_RAN">
<sqlCheck expectedResult="0">SELECT COUNT(*) FROM users</sqlCheck>
</preConditions>
<loadData encoding="UTF-8" file="users.csv" separator=";" tableName="users">
<column name="active" type="boolean" />
<column name="deleted" type="boolean" />
</loadData>
</changeSet>
You should remember that the value of the validCheckSum tag is the new checksum for the changeset.
| Liquibase | 34,655,157 | 15 |
The problem is this:
When I run the Maven command, I hit what seems to be the problem described in https://liquibase.jira.com/browse/CORE-465, but that issue is from 2009 and is marked "Cannot Reproduce". I'm using a single Liquibase XML file with one changeSet containing many createTable, addPrimaryKey, rollback and addForeignKeyConstraint elements. The file always creates the tables and their respective constraints, but when I run a rollback it fails. I've searched the Internet and can't find a solution to the problem. Can you solve it? Please share with the community!
The Maven plugin and the command used are as follows:
liquibase:rollback -Dliquibase.rollbackTag=payScript -PproductionPostgreSql
and the plugin is:
<plugin>
<groupId>org.liquibase</groupId>
<artifactId>liquibase-maven-plugin</artifactId>
<version>3.4.1</version>
<configuration>
<changeLogFile>${basedir}/src/main/resources/changelogs/db.changelog-master.xml</changeLogFile>
<driver>${driver}</driver>
<url> ${host.db}</url>
<username>${user.db}</username>
<password>${password.db}</password>
</configuration>
<dependencies>
<dependency>
<groupId>org.liquibase</groupId>
<artifactId>liquibase-core</artifactId>
<version>3.4.1</version>
</dependency>
</dependencies>
</plugin>
This produced the stack trace below:
[ERROR] Failed to execute goal org.liquibase:liquibase-maven-plugin:3.4.1:rollback (default-cli) on project generic: Error setting up or running Liquibase: liquibase.exception.RollbackImpossibleException: No inverse to liquibase.change.core.RawSQLChange created -> [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal org.liquibase:liquibase-maven-plugin:3.4.1:rollback (default-cli) on project generic: Error setting up or running Liquibase: liquibase.exception.RollbackImpossibleException: No inverse to liquibase.change.core.RawSQLChange created
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:216)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:116)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:80)
at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51)
at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:128)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:307)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:193)
at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:106)
at org.apache.maven.cli.MavenCli.execute(MavenCli.java:862)
at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:286)
at org.apache.maven.cli.MavenCli.main(MavenCli.java:197)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229)
at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415)
at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356)
Caused by: org.apache.maven.plugin.MojoExecutionException: Error setting up or running Liquibase: liquibase.exception.RollbackImpossibleException: No inverse to liquibase.change.core.RawSQLChange created
at org.liquibase.maven.plugins.AbstractLiquibaseMojo.execute(AbstractLiquibaseMojo.java:398)
at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:134)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208)
... 20 more
Caused by: liquibase.exception.RollbackFailedException: liquibase.exception.RollbackImpossibleException: No inverse to liquibase.change.core.RawSQLChange created
at liquibase.changelog.ChangeSet.rollback(ChangeSet.java:648)
at liquibase.changelog.visitor.RollbackVisitor.visit(RollbackVisitor.java:39)
at liquibase.changelog.ChangeLogIterator.run(ChangeLogIterator.java:73)
at liquibase.Liquibase.rollback(Liquibase.java:656)
at org.liquibase.maven.plugins.LiquibaseRollback.performLiquibaseTask(LiquibaseRollback.java:121)
at org.liquibase.maven.plugins.AbstractLiquibaseMojo.execute(AbstractLiquibaseMojo.java:394)
... 22 more
Caused by: liquibase.exception.RollbackImpossibleException: No inverse to liquibase.change.core.RawSQLChange created
at liquibase.change.AbstractChange.generateRollbackStatementsFromInverse(AbstractChange.java:424)
at liquibase.change.AbstractChange.generateRollbackStatements(AbstractChange.java:397)
at liquibase.database.AbstractJdbcDatabase.executeRollbackStatements(AbstractJdbcDatabase.java:1269)
at liquibase.changelog.ChangeSet.rollback(ChangeSet.java:634)
... 27 more
[ERROR]
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
| This is expected behavior. Somewhere in your changelog, you have a changeset that uses raw SQL. You didn't include it here, but the actual contents don't matter - as long as it is raw SQL, Liquibase cannot determine how to 'undo' or rollback that change. The way to fix this is to look at that changeset and add a rollback tag to that changeset that describes how to rollback the change made.
The docs here http://www.liquibase.org/documentation/changes/sql.html are for the SQL tag. Rollback in general is described here: http://www.liquibase.org/documentation/rollback.html
In particular, note this paragraph:
Other refactorings such as βdrop tableβ and βinsert dataβ have no
corresponding rollback commands that can be automatically generated.
In these cases, and cases where you want to override the default
generated rollback commands, you can specify the rollback commands via
the tag within the changeSet tag. If you do not want anything done to
undo a change in rollback mode, use an empty tag.
Here is an example that shows a raw SQL changeset and a corresponding rollback tag.
<changeSet author="liquibase-docs" id="sql-example">
<sql dbms="h2, oracle"
endDelimiter="\nGO"
splitStatements="true"
stripComments="true">insert into person (name) values ('Bob')
<comment>What about Bob?</comment>
</sql>
<rollback>
delete from person where name='Bob';
</rollback>
</changeSet>
Note that this is a VERY naive example - you probably wouldn't want to use this in a real scenario, because it is possible that after you had run liquibase update to deploy this change that whatever programs using the database might insert rows into the person table with the name 'Bob', and this rollback statement would remove ALL the rows with name 'Bob'.
| Liquibase | 32,166,653 | 15 |
I'm using Spring Boot and liquibase for database migrations and refactorings.
I'm having some exceptions (mostly database exceptions) in my changesets every now and then, but Liquibase shares too little information at the default log level. For example, it doesn't tell me the exact SQL statement it executed or the name of the CSV file it was processing when it failed.
Do you know any way to configure liquibase's logLevel to DEBUG either through Spring Boot's application.properties or any other non-painful way?
I tried the logging.level.* setting in various combinations but it didn't work.
| It's a limitation of Spring Boot's code that adapts Liquibase's own logging framework to use Commons Logging. I've opened an issue so that we can improve the adapter.
Now that the issue has been fixed, you can use logging.level.liquibase to control the level of Liquibase logging that will be output.
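For example, in application.properties:
logging.level.liquibase=DEBUG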
| Liquibase | 30,047,389 | 15 |
I can run Liquibase changelog through maven build (liquibase:update goal) without any problems. Now I'd like Liquibase to use database credentials and URL loaded from a properties file (db.properties) depending on the selected Maven profile:
|-- pom.xml
`-- src
`-- main
`-- resources
|-- local
| `-- db.properties
|-- dev
| `-- db.properties
|-- prod
| `-- db.properties
`-- db-changelog-master.xml
`-- db-changelog-1.0.xml
Each of the 3 properties files would look like the following:
database.driver = oracle.jdbc.driver.OracleDriver
database.url = jdbc:oracle:thin:@<host_name>:<port_number>/instance
database.username = user
database.password = password123
Now, instead of these properties being defined in the POM file itself (as explained in the accepted answer of this question liquibase using maven with two databases does not work), I'd like them to be loaded from the external properties file. I have tried different approaches to no avail:
1. I used Maven's resource element in the POM file:
<build>
<pluginManagement>
<plugins>
<plugin>
<groupId>org.liquibase</groupId>
<artifactId>liquibase-maven-plugin</artifactId>
<version>3.1.0</version>
<configuration>
<changeLogFile>db.changelog-master.xml</changeLogFile>
<verbose>true</verbose>
</configuration>
</plugin>
</plugins>
</pluginManagement>
</build>
<profiles>
<profile>
<id>local</id>
<activation>
<activeByDefault>true</activeByDefault>
</activation>
<build>
<resources>
<resource>
<directory>src/main/resources/local</directory>
</resource>
</resources>
</build>
<properties>
<liquibase.url>${database.url}</liquibase.url>
<liquibase.driver>${database.driver}</liquibase.driver>
<liquibase.username>${database.username}</liquibase.username>
<liquibase.password>${database.password}</liquibase.password>
</properties>
</profile>
<profile>
<id>dev</id>
<build>
<resources>
<resource>
<directory>src/main/resources/dev</directory>
</resource>
</resources>
</build>
<properties>
<liquibase.url>${database.url}</liquibase.url>
<liquibase.driver>${database.driver}</liquibase.driver>
<liquibase.username>${database.username}</liquibase.username>
<liquibase.password>${database.password}</liquibase.password>
</properties>
</profile>
</profiles>
2. I tried using the Properties Maven plugin:
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>properties-maven-plugin</artifactId>
<version>1.0-alpha-2</version>
<executions>
<execution>
<phase>initialize</phase>
<goals>
<goal>read-project-properties</goal>
</goals>
<configuration>
<files>
<file>src/main/resources/local/db.properties</file>
</files>
</configuration>
</execution>
</executions>
</plugin>
When I run the liquibase:update Maven goal, I get this error:
[ERROR] Failed to execute goal org.liquibase:liquibase-maven-plugin:3.1.0:update (default-cli) on project my-project: The driver has not been specified either as a parameter or in a properties file
It seems that the database properties referred to in the POM file couldn't be resolved.
Any idea how this can be achieved?
| I managed to get this working. The key was to use the maven filter element in conjunction with the resource element as explained in Liquibase Documentation.
Also it's important to include the resources goal in the maven command:
mvn resources:resources liquibase:update -Plocal
This is the file hierarchy I used:
|-- pom.xml
`-- src
`-- main
|-- resources
| `-- liquibase.properties
| |-- changelog
| `-- db-changelog-master.xml
| `-- db-changelog-1.0.xml
|-- filters
|-- local
| `-- db.properties
|-- dev
| `-- db.properties
The db.properties file would look like the following:
database.driver = oracle.jdbc.driver.OracleDriver
database.url = jdbc:oracle:thin:@<host_name>:<port_number>/instance
database.username = user
database.password = password123
The liquibase.properties file would look like the following:
changeLogFile: changelog/db.changelog-master.xml
driver: ${database.driver}
url: ${database.url}
username: ${database.username}
password: ${database.password}
verbose: true
The POM file would look like the following:
<build>
<pluginManagement>
<plugins>
<plugin>
<groupId>org.liquibase</groupId>
<artifactId>liquibase-maven-plugin</artifactId>
<version>3.1.0</version>
<configuration>
<propertyFile>target/classes/liquibase.properties</propertyFile>
</configuration>
</plugin>
</plugins>
</pluginManagement>
</build>
<profiles>
<profile>
<id>local</id>
<activation>
<activeByDefault>true</activeByDefault>
</activation>
<build>
<filters>
<filter>src/main/filters/local/db.properties</filter>
</filters>
<resources>
<resource>
<directory>src/main/resources</directory>
<filtering>true</filtering>
</resource>
</resources>
</build>
</profile>
<profile>
<id>dev</id>
<build>
<filters>
<filter>src/main/filters/dev/db.properties</filter>
</filters>
<resources>
<resource>
<directory>src/main/resources</directory>
<filtering>true</filtering>
</resource>
</resources>
</build>
</profile>
</profiles>
| Liquibase | 22,355,725 | 15 |
Is there a way to write a Liquibase addColumn changeset so it generates SQL like
ALTER TABLE xxx ADD COLUMN yyy AFTER zzz;
I mean, is there a way to add an equivalent of "after column zzz" in liquibase jargon?
| With Liquibase 3.1 there are new "afterColumn", "beforeColumn" and "position" attributes on the column tag.
The documentation at http://www.liquibase.org/documentation/column.html was just updated to include them.
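A sketch using the names from the question (the column type here is an assumption):
<addColumn tableName="xxx">
    <column name="yyy" type="varchar(64)" afterColumn="zzz"/>
</addColumn>
On databases that support column ordering (such as MySQL), this generates the ALTER TABLE xxx ADD COLUMN yyy ... AFTER zzz statement.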
| Liquibase | 21,179,943 | 15 |
I want to use a custom TestExecutionListener in combination with SpringJUnit4ClassRunner to run a Liquibase schema setup on my test database. My TestExecutionListener works fine but when I use the annotation on my class the injection of the DAO under test no longer works, at least the instance is null.
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = { "file:src/main/webapp/WEB-INF/applicationContext-test.xml" })
@TestExecutionListeners({ LiquibaseTestExecutionListener.class })
@LiquibaseChangeSet(changeSetLocations={"liquibase/v001/createTables.xml"})
public class DeviceDAOTest {
...
@Inject
DeviceDAO deviceDAO;
@Test
public void findByCategory_categoryHasSubCategories_returnsAllDescendantsDevices() {
List<Device> devices = deviceDAO.findByCategory(1); // deviceDAO null -> NPE
...
}
}
The listener is fairly simple:
public class LiquibaseTestExecutionListener extends AbstractTestExecutionListener {
@Override
public void beforeTestClass(TestContext testContext) throws Exception {
final LiquibaseChangeSet annotation = AnnotationUtils.findAnnotation(testContext.getTestClass(),
LiquibaseChangeSet.class);
if (annotation != null) {
executeChangesets(testContext, annotation.changeSetLocations());
}
}
private void executeChangesets(TestContext testContext, String[] changeSetLocation) throws SQLException,
LiquibaseException {
for (String location : changeSetLocation) {
DataSource datasource = testContext.getApplicationContext().getBean(DataSource.class);
DatabaseConnection database = new JdbcConnection(datasource.getConnection());
Liquibase liquibase = new Liquibase(location, new FileSystemResourceAccessor(), database);
liquibase.update(null);
}
}
}
There are no errors in the log, just a NullPointerException in my test. I don't see how the use of my TestExecutionListener affects the autowiring or injection.
| I had a look at the spring DEBUG logs and found that when I omit my own TestExecutionListener spring sets a DependencyInjectionTestExecutionListener in place. When annotating the test with @TestExecutionListeners that listener gets overwritten.
So I just added the DependencyInjectionTestExecutionListener explicitly with my custom one and everything works fine:
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = { "file:src/main/webapp/WEB-INF/applicationContext-test.xml" })
@TestExecutionListeners(listeners = { LiquibaseTestExecutionListener.class,
DependencyInjectionTestExecutionListener.class })
@LiquibaseChangeSet(changeSetLocations = { "liquibase/v001/createTables.xml" })
public class DeviceDAOTest {
...
UPDATE:
The behavior is documented here.
... Alternatively, you can disable dependency injection altogether by explicitly configuring your class with @TestExecutionListeners and omitting DependencyInjectionTestExecutionListener.class from the list of listeners.
| Liquibase | 15,704,091 | 15 |
I'm trying to change a project a bit by upgrading it with Liquibase. It's a Java EE project, so I'm using the liquibase-maven-plugin.
So far I have this in my pom.xml:
<plugin>
<groupId>org.liquibase</groupId>
<artifactId>liquibase-maven-plugin</artifactId>
<version>2.0.5</version>
<configuration>
<propertyFileWillOverride>true</propertyFileWillOverride>
<propertyFile>src/main/resources/liquibase.properties</propertyFile>
<changeLogFile>src/main/resources/changelogs/changelog.xml</changeLogFile>
</configuration>
<executions>
<execution>
<!-- Another Error: plugin execution not covered by lifecycle configuration..-->
<!-- <phase>process-resources</phase> <goals> <goal>update</goal> </goals> -->
</execution>
</executions>
</plugin>
which already includes a Driver:
<dependency>
<groupId>postgresql</groupId>
<artifactId>postgresql</artifactId>
<version>9.1-901-1.jdbc4</version>
</dependency>
the liquibase.properties file has the url, username, password, the changeLogFile-Path and the driver:
#liquibase.properties
driver: org.postgresql.Driver
But it does not have a classpath for the driver. Do I need the classpath as well?
The changelog.xml has a simple changeset which creates a table, just to test liquibase for the beginning.
But I don't get that far, because when I run the project with
mvn liquibase:update
I'm getting this error:
[ERROR] Failed to execute goal org.liquibase:liquibase-maven-plugin:2.0.5:update (default-cli) on project PROJECT: Error setting up or running Liquibase: java.lang.RuntimeException: Cannot find database driver: org.postgresql.Driver
I can't see the problem. The driver has already been used with the project before. So why can't Liquibase find it?
EDIT
When I edit my configuration in the pom.xml by adding the driver tag, it works:
<configuration>
<propertyFileWillOverride>true</propertyFileWillOverride>
<propertyFile>src/main/resources/liquibase.properties</propertyFile>
<changeLogFile>src/main/resources/changelogs/changelog.xml</changeLogFile>
<driver>org.postgresql.Driver</driver>
</configuration>
Before that my driver was specified in liquibase.properties, which should actually work as well.
Maybe someone can tell me what the liquibase.properties file should look like if I'd like to keep the driver in the properties file.
| Edit:
The problem was resolved by replacing
driver: org.postgresql.Driver with driver=org.postgresql.Driver in the liquibase.properties file.
Original Answer:
You have added the postgresql driver as a dependency of your webapp. But when maven plugins run, they have their own classpath, which is different to your webapp. So you need to include a dependency on the JDBC driver for the plugin itself (same applies to other plugins, e.g. jetty-maven-plugin):
<plugin>
<groupId>org.liquibase</groupId>
<artifactId>liquibase-maven-plugin</artifactId>
<version>2.0.5</version>
<configuration>
<propertyFileWillOverride>true</propertyFileWillOverride>
<propertyFile>src/main/resources/liquibase.properties</propertyFile>
<changeLogFile>src/main/resources/changelogs/changelog.xml</changeLogFile>
</configuration>
<executions>
<execution>
<!-- Another Error: plugin execution not covered by lifecycle configuration..-->
<!-- <phase>process-resources</phase> <goals> <goal>update</goal> </goals> -->
</execution>
</executions>
<dependencies>
<dependency>
<groupId>postgresql</groupId>
<artifactId>postgresql</artifactId>
<version>9.1-901-1.jdbc4</version>
</dependency>
</dependencies>
</plugin>
| Liquibase | 14,501,332 | 15 |
I'm trying to implement Liquibase in an existing Spring Boot project with a MySQL database. I want to be able to generate changesets which specify the differences when an entity is changed.
What I've done:
I've added liquibase dependencies and the gradle liquibase plugin in my build.gradle file. After making a domain change, I've run gradle generateChangeLog. The command executes successfully but nothing happens.
I read somewhere that this Gradle plugin works only for the in-memory H2 database? Is that true? If yes, what alternative should I use to generate changelogs automatically?
I could not find a working SpringBoot gradle based example which uses MYSQL and has liquibase implemented WITH automatic change generation ability. It would be great if someone could provide that.
References:
https://github.com/stevesaliman/liquibase-workshop
https://github.com/liquibase/liquibase-gradle-plugin
| The solution is to write a Gradle task that invokes liquibase diffChangeLog.
Create a liquibase.gradle file in the project root directory, add the liquibase-hibernate extension and write a Gradle task that invokes the liquibase diffChangeLog command.
configurations {
liquibase
}
dependencies {
liquibase group: 'org.liquibase.ext', name: 'liquibase-hibernate4', version: 3.5
}
//loading properties file.
Properties liquibaseProps = new Properties()
liquibaseProps.load(new FileInputStream("src/main/resources/liquibase-task.properties"))
Properties applicationProps = new Properties()
applicationProps.load(new FileInputStream("src/main/resources/application.properties"))
task liquibaseDiffChangelog(type: JavaExec) {
group = "liquibase"
classpath sourceSets.main.runtimeClasspath
classpath configurations.liquibase
main = "liquibase.integration.commandline.Main"
args "--changeLogFile=" + liquibaseProps.getProperty('liquibase.changelog.path')+ buildTimestamp() +"_changelog.xml"
args "--referenceUrl=hibernate:spring:" + liquibaseProps.getProperty('liquibase.domain.package') + "?dialect=" + applicationProps.getProperty('spring.jpa.properties.hibernate.dialect')
args "--username=" + applicationProps.getProperty('spring.datasource.username')
args "--password=" + applicationProps.getProperty('spring.datasource.password')
args "--url=" + applicationProps.getProperty('spring.datasource.url')
args "--driver=com.mysql.jdbc.Driver"
args "diffChangeLog"
}
def buildTimestamp() {
def date = new Date()
def formattedDate = date.format('yyyyMMddHHmmss')
return formattedDate
}
NOTE: I have used properties files to pass arguments to the liquibase command, you could add the values directly, but that would not be a good practice.
Next, you would need to apply the liquibase.gradle file from within the project's build.gradle file. and add the liquibase dependency
apply from: 'liquibase.gradle'
//code omitted
dependencies {
compile (group: 'org.liquibase', name: 'liquibase-core', version: "3.4.2")
}
After this step liquibase would be setup completely.
You can now use gradle liquibaseDiffChangeLog to generate
changelogs.
| Liquibase | 35,716,378 | 14 |
I wonder if it is possible to get the maximum column value from a certain table and set it as the sequence start value, without using raw SQL. The following code doesn't work:
<property name="maxId" value="(select max(id)+1 from some_table)" dbms="h2,mysql,postgres"/>
<changeSet author="author (generated)" id="1447943899053-1">
<createSequence sequenceName="id_seq" startValue="${maxId}" incrementBy="1"/>
</changeSet>
Got an error:
Caused by: liquibase.parser.core.ParsedNodeException: java.lang.NumberFormatException: For input string: "${m"
I've tried it with no parentheses around select ... etc. with the same result.
So is it not possible to use a computed value as the sequence start value?
| So, such a solution worked for me:
<changeSet author="dfche" id="1448634241199-1">
<createSequence sequenceName="user_id_seq" startValue="1" incrementBy="1"/>
</changeSet>
<changeSet author="dfche" id="1448634241199-2">
<sql dbms="postgresql">select setval('user_id_seq', max(id)+1) from jhi_user</sql>
<sql dbms="h2">alter sequence user_id_seq restart with (select max(id)+1 from jhi_user)</sql>
</changeSet>
| Liquibase | 33,888,587 | 14 |
I'm comparing two databases using Liquibase integrated with Ant, but the output it generates is in a generic format; it does not give SQL statements. Can anyone tell me how to compare two databases using Liquibase integrated with Ant, or with the command-line utility?
| Obtaining the SQL statements, representing the diff between two databases, is a two step operation:
Generate the XML "diff" changelog
Generate SQL statements
Example
This example requires a liquibase.properties file (simplifies the command-line parameters):
classpath=/path/to/jdbc/jdbc.jar
driver=org.Driver
url=jdbc:db_url1
username=user1
password=pass1
referenceUrl=jdbc:db_url2
referenceUsername=user2
referencePassword=pass2
changeLogFile=diff.xml
Now run the following commands to create the SQL statements:
liquibase diffChangeLog
liquibase updateSQL > update.sql
A nice feature of liquibase is that it can also generate the rollback SQL:
liquibase futureRollbackSQL > rollback.sql
Update
Liquibase does not generate a data diff between databases, only the schema. However, it is possible to dump database data as a series of changesets:
liquibase --changeLogFile=data.xml --diffTypes=data generateChangeLog
One can use the data.xml file to migrate data contained in new tables.
Update 2:
It is also possible to generate Liquibase changesets using Groovy:
import groovy.sql.Sql
import groovy.xml.MarkupBuilder
//
// DB connection
//
this.class.classLoader.rootLoader.addURL(new URL("file:///home/path/to/h2-1.3.162.jar"))
def sql = Sql.newInstance("jdbc:h2:db/db1","user","pass","org.h2.Driver")
//
// Generate liquibase changeset
//
def author = "generated"
def id = 1
new File("extract.xml").withWriter { writer ->
def xml = new MarkupBuilder(writer);
xml.databaseChangeLog(
"xmlns":"http://www.liquibase.org/xml/ns/dbchangelog",
"xmlns:xsi":"http://www.w3.org/2001/XMLSchema-instance",
"xsi:schemaLocation":"http://www.liquibase.org/xml/ns/dbchangelog http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-2.0.xsd"
) {
changeSet(author:author, id:id++) {
sql.eachRow("select * from employee") { row ->
insert(tableName:"exmployee") {
column(name:"empno", valueNumeric:row.empno)
column(name:"name", value:row.name)
column(name:"job", value:row.job)
column(name:"hiredate", value:row.hiredate)
column(name:"salary", valueNumeric:row.salary)
}
}
}
}
}
| Liquibase | 8,397,488 | 14 |