The administrator could install a modified sshd that records everything from all ssh sessions, interactive or not. The question is: Do you trust the administrator of the remote system?
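Beyond a modified sshd, an administrator with root can also log command execution through the kernel audit subsystem, with no changes to SSH at all. A hedged sketch using the standard Linux auditd tooling (the key name is arbitrary):

# log every execve() system call, tagged for later searching
auditctl -a always,exit -F arch=b64 -S execve -k remote-cmds

# later, review what was run
ausearch -k remote-cmds -i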
Can administrators know what commands I execute non-interactively through SSH? For example: does echo hello get logged somewhere on the remote host if I run this?

$ ssh me@remote "echo hello"

Can remote commands be otherwise monitored?
Are commands executed through SSH transparent to a remote host's administrator?
The simplest way I can think of is simply creating a new user, partyuser, and assigning it read permission to a 'public' music directory. To make the music directory (with its subdirectories and files) readable by others, you run:

# chmod -R o+rX /path/to/music_dir

(The capital X grants search permission on directories without making regular files executable.) That way this user cannot list or access files in your own user's home directory by default. The only easy way to change this is by becoming root. If you wish to have the ability to become root as the partyuser for this reason, simply add the user to the /etc/sudoers file and use sudo. Also remember to add the user to the appropriate groups that enable use of the audio and/or graphics etc. on the system.
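A minimal end-to-end sketch of that setup (command names as on Ubuntu; the music path is a placeholder):

# create the restricted account and let it use the sound hardware
sudo adduser partyuser
sudo usermod -aG audio partyuser

# expose the music, read-only, to everyone else
sudo chmod -R o+rX /path/to/music_dir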
I am running Ubuntu 10.04 LTS. I want to use my laptop to play music at a party. My screensaver does not need a password to deactivate. I would like to allow people to use my computer to play the music they like, but I would like to prevent them from having access to certain directories, in the same or a similar manner that Linux prevents unauthorized people from installing programs through the Synaptic package manager. I would like this to work at the level of both the command line and the file browser, but with the root password it should still be possible to get access. Is this done by changing the permissions of the directory? If so, how, and which is the command from the terminal? Will that also prevent people from executing the files in the directory? Can I also block their searching of the directory and its contents?
How to make a folder require the root password to view or execute?
So, since you seem OK with the idea, for any searchers: eCryptfs and its associated PAM facilities do more or less what you want. The filesystem stores an encrypted key which the PAM module locks and unlocks as appropriate. This key is used to read and write files on a stacked filesystem that is mounted on top of the real filesystem over the user's home directory. Anyone else just sees the encrypted key and a bunch of encrypted files with obfuscated names (i.e., even if you name your file "super secret stuff", without the user's password somebody else only sees "x18vb45" or something like that). There's a bit of memory and processor overhead, and someone who can see arbitrary memory locations can get more when the user is logged in, but that's true for any file encryption.
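For a concrete starting point, the ecryptfs-utils package ships helper scripts that wire this up; a hedged sketch (tool names as packaged on Debian/Ubuntu):

# set up an encrypted ~/Private directory, unlocked at login via PAM
ecryptfs-setup-private

# or migrate an entire existing home directory (run as root while the user is logged out)
ecryptfs-migrate-home -u username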
I got a VPS with a user dedicated to storing my files. As I am aware of the current situation with the NSA and many other governments, and my VPS is hosted in a country I find doubtful, I would like to ensure my privacy (not that I have any top-secret file, or anything similar). I am thinking of making the home folder of my user a TrueCrypt file, which would be mounted after a successful user login. Is it possible? And can the SSH login also prompt for the password to mount the file after login and before shell access? Something like: "insert user@host password:" and then "insert home folder password:" before shell access. If wrong, it would start with the working directory somewhere else, like the filesystem root. Can it be done?
Mount home partition on user login
TrueCrypt won't, but I don't know about Nautilus. If you want to make sure, check all the files that have been modified during your session:

find /tmp /var/tmp ~/ -type f -mmin -42

where 42 is the number of minutes you've been logged in (the last(1) command might help if you didn't check the time). Note the leading minus in -mmin -42: it matches files modified less than 42 minutes ago, rather than exactly 42 minutes ago. You can search for images specifically:

find /tmp /var/tmp ~/ -type f \( -name '*.jpg' -o -name '*.png' \) -mmin -42

Of course, if you don't trust the administrators of the computer, you'll never know if they secretly keep a copy of every file that's ever been on the machine.
I have some scans of sensitive documents in image formats such as JPG and PNG. When I open up the folder containing them, which is in a TrueCrypt file, will Nautilus create a temporary version of the scan, or even a thumbnail of the scan, on the computer on which the file was opened?
Does truecrypt leave behind temporary files
Launch your process through strace:

strace -fe open skype

You will see the list of each open() syscall, that is, every file (or connection) the process opens during its life. Looking at currently opened file descriptors will not provide a log, but only a "snapshot" of what the process accesses right now.
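One caveat worth adding, hedged for modern systems: current C libraries usually open files through the openat() syscall rather than open(), so a filter on open alone can miss accesses. A broader invocation, which can also attach to an already-running process:

# trace both open() and openat(), following child processes
strace -f -e trace=open,openat skype

# or attach to a running instance by PID
strace -f -e trace=open,openat -p "$(pidof skype)"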
There has recently been a (publicly unconfirmed) report that Skype is accessing files that it shouldn't be, without user intervention. I have no idea if this is the case with Skype on linux, but it would be good to be able to find out. Is there a way to keep track of all files accessed by a specific process?
Keep track of all files accessed by a specific process?
Try this for IPv4:

sed 's/[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}/HELLO!/g' file

Replace HELLO! with what you need. Example:

echo "Oct 3 19:30:39 hostname pure-ftpd: (username@0.0.0.0) [INFO] New connection from 0.0.0.0" | sed 's/[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}/HELLO!/g'

The output will be:

Oct 3 19:30:39 hostname pure-ftpd: (username@HELLO!) [INFO] New connection from HELLO!

With IPv6 everything is more complicated than it may seem. I would need more examples of your log files to build a correct regex pattern, but the simplest pattern for replacing IPv6 addresses in your logs is:

sed 's/\([A-Za-z0-9]*:\)\{1,7\}[A-Za-z0-9]\{1,4\}/HELLO!/2g' file

This pattern will replace most occurrences of IPv6 addresses, but not all! Replacing all occurrences needs a more complex solution. Example:

echo "Oct 3 19:30:39 hostname pure-ftpd: (username@2001:db8:85a3:8d3:1319:8a2e:370:7348) [INFO] New connection from 2001:db8::1" | sed 's/\([A-Za-z0-9]*:\)\{1,7\}[A-Za-z0-9]\{1,4\}/HELLO!/2g'

The output will be:

Oct 3 19:30:39 hostname pure-ftpd: (username@HELLO!) [INFO] New connection from HELLO!

See similar topics about IPv6 regex: Regular expression that matches valid IPv6 addresses
I have a pure-ftpd log file I would like to anonymize periodically. How would I go about doing this? I would like to strip IPv4 and IPv6 addresses. I'm afraid I do not know sed / awk. The log file looks like this:

Oct 3 19:30:39 hostname pure-ftpd: (username@0.0.0.0) [INFO] New connection from 0.0.0.0

I would like to remove the 0.0.0.0 IP and replace it with something else. I know I will put the script into cron to run periodically. Thanks!
How to redact all IP addresses from a log file in Debian?
Private browsing is a complicated thing. An SSH tunnel can protect you from some kinds of sniffing and "man-in-the-middle" attacks inside your network, or other LAN-focused attacks. It basically connects you to a host, encrypts the connection between you and this host using SSH, and adds this host to the "path" before you access the site. If the site is plain HTTP, the information will still be unencrypted between your SSH tunnel "jumpbox" and the site. There is no alteration to the HTTP headers, but JavaScript could act differently. For example: you are using a Windows workstation, and connect through an OpenBSD virtual box somewhere on the Internet before accessing a site. A JavaScript probe that geolocates you by IP will see the SSH tunnel box's address, while a probe that reads your browser's platform information will still see the Windows workstation. In the same way, using an SSH tunnel on a VPS that you have paid for will not bring "that privacy", since the VPS has a valid IP on the Internet, and records of what you have accessed may be available to the site owner. If you do something "wrong" on the Internet, you are still an identifiable person who made a contract with the VPS provider, and they can get to you. Having a reverse proxy somewhere could help your privacy, but it has the same "legal" caveats as the self-hosted SSH tunnel, and information accessed through the reverse proxy will not be encrypted where the protocol isn't (i.e. HTTP) between you and the proxy.
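For reference, the tunnel being discussed is typically a dynamic SOCKS proxy; a minimal hedged sketch (the host name is a placeholder):

# open a SOCKS5 proxy on localhost:1080, forwarding traffic through the jumpbox
ssh -D 1080 -C -N -q user@jumpbox.example.com

# then point the browser's SOCKS proxy setting at localhost:1080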
What are the key differences between "SSH tunnels" and "Squid reverse IPs" in regards to web page scraping and private browsing? I am interested to know if there are any differences between using "SSH tunnels" and "Squid reverse IPs" and how they connect to a website. For example: do they leave the same HTTP headers and JavaScript system info in a target website's logs and in JavaScript-based analytics software?
What are the key differences between "SSH tunnels" and "Squid reverse IPs" in regards to web page scraping and private browsing?
"how does the Whonix workstation ensure nothing is logged, or stored, on the OS"

It doesn't. Whonix is not amnesic. There is no Whonix Live version yet, and there is no substitute for Whonix's lack of an amnesic feature.

"and no fingerprinting can be done on the user who was using the workstation session?"

See also the documentation about Fingerprint, as well as the technical information on Whonix's Protocol-Leak-Protection and Fingerprinting-Protection. Full disclosure: I am a maintainer of Whonix.
How does Whonix Linux keep a user secure? I am aware of how it works in terms of the Gateway which uses Tor, but how does the Whonix workstation ensure nothing is logged, or stored, on the OS and no fingerprinting can be done on the user who was using the workstation session?
How Does Whonix Keep The User Anonymous?
The problem description in the GitHub issue was:

"The following is how i noticed. It gave me the following error when i tried to start my virtual system: The VirtualBox Linux kernel driver is either not loaded or not set up correctly. Please try setting it up again by executing '/sbin/vboxconfig' as root. Which i did, however it didn't work due to some permission problems. It failed and told me to use dmesg to find out why. When i used dmesg i saw what it did in the background. I picked two messages out of many:"

And the log messages:

audit: type=1400 audit(1651914430.711:1128): apparmor="DENIED" operation="open" profile="torbrowser_firefox" name="/home/amnesia/.cache/thumbnails/large/3678dc849747c84908498dd948db8f71.png" pid=10995 comm="pool-firefox" requested_mask="r" denied_mask="r" fsuid=1000 ouid=1000

This message was generated by AppArmor blocking read access by a process 10995 named pool-firefox, running as user ID 1000, to the local file manager's thumbnail image cache. I don't see anything here indicating this message would be in any way connected to executing /sbin/vboxconfig as root. It's more likely that the user simply had a Firefox file dialog open (for whatever reason) and the dialog library was looking for image thumbnails; but because the library was executed as part of a web browser, the AppArmor rules of the privacy-focused distribution blocked the access (to prevent the possibility of exfiltration of the local thumbnail cache via the browser).

Dropped outbound packet: IN= OUT=wlan0 SRC=i removed the adress DST=i removed the adress LEN=48 TC=0 HOPLIMIT=255 FLOWLBL=762031 PROTO=ICMPv6 TYPE=133 CODE=0 UID=0 GID=0

ICMPv6 type 133 is a Router Solicitation packet, i.e. this system is sending out a packet on wlan0, requesting the IPv6 router(s) on the WiFi network to announce themselves. It is part of the normal functionality of IPv6 auto-configuration, and UID=0 indicates the packet was generated by a root-level process. Normally such packets are sent out as multicasts to the link-local "all-routers" IPv6 multicast address, so if the DST= address was anything other than ff02::2, it would indicate abnormal use, possibly suggesting that the ICMPv6 messages may be being used as a covert data-exfiltration channel. But an attack of such complexity (one that already requires privileged access to craft custom ICMPv6 messages) that did not also remove the iptables filter blocking and revealing it would be inconsistent, strongly suggesting that this may be a misinterpretation by the user.

I don't see any evidence of a causal connection between the original action, the first log message and/or this second one. This looks more like a common logical fallacy: "Since event Y followed event X, event Y must have been caused by event X" ("post hoc ergo propter hoc"). In this case, the user simply assumes that the log messages result exclusively from them running the /sbin/vboxconfig command, when in actuality there are many other processes running on the system at the same time, and dmesg reports on all of them.
An issue was recently opened on the GitHub page for the privacy-focused VirtualBox wrapper HiddenVM. The opener posts what he claims to be an indication of files from his local cache being sent to an external IP:

"When i used dmesg i saw what it did in the background. I picked two messages out of many:

audit: type=1400 audit(1651914430.711:1128): apparmor="DENIED" operation="open" profile="torbrowser_firefox" name="/home/amnesia/.cache/thumbnails/large/3678dc849747c84908498dd948db8f71.png" pid=10995 comm="pool-firefox" requested_mask="r" denied_mask="r" fsuid=1000 ouid=1000

Dropped outbound packet: IN= OUT=wlan0 SRC=i removed the adress DST=i removed the adress LEN=48 TC=0 HOPLIMIT=255 FLOWLBL=762031 PROTO=ICMPv6 TYPE=133 CODE=0 UID=0 GID=0

So it looks like it sent files from my cache to some address. Like why does a script that is supposed to change settings open cache files and sends them somewhere?"

The opener doesn't say exactly what commands they used or give any further details. Do these two messages indicate files being sent from the local machine to an external IP?
Does this dmesg log show files being transferred?
In the general case, this could be difficult to do. But you are looking at a video. The simplest solution here would be that they placed it into a tag. The first thing I'd do is examine the tags. If you have it, run mediainfo on the file. Do you see your username in the output? If so, it's just in a tag. You could grab a tag editor and manipulate the data. EasyTAG may be an option, but I have no specific editor that I would recommend.
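If the marker does turn out to live in container metadata, one hedged way to strip all of it without re-encoding (assuming ffmpeg is available; file names are illustrative):

# copy the audio/video streams untouched, dropping every metadata tag
ffmpeg -i video.mp4 -map_metadata -1 -c copy clean.mp4

# then verify nothing identifying remains
strings clean.mp4 | grep "my user name"

Note this won't help if the identifier is watermarked into the frames themselves rather than stored as a tag.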
I've downloaded a video via a website that forces a login. Thus, it knows my login. If I run the following on it, I get results that I do not want to have: strings video.mp4 | grep "my user name"It always shows up in the video, regardless of the username used. That is, if I download video1 with user1 and video2 with user2 (where video1 and video2 are the same, except for the account that accessed the video) the following commands produce output: strings video1 | grep user1 strings video2 | grep user2However, mixing the two a la strings video2 | grep user1Produces no information. In short, the website is embedding information about me in the video and I'd like to clear it (I realize there may be other information about me embedded in the video). Is there any way to remove this information that gets output by strings? Or, better yet, is there any way to compare strings video1 to strings video2 and remove anything that differs?
How to remove data that shows up from 'strings' on a file?
dmidecode output contains private information, such as:

- serial numbers
- IPMI support details

but this information does not make your system vulnerable.
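If you would still rather err on the side of privacy before pasting the output publicly, a simple hedged filter (GNU grep; the pattern list is illustrative, not exhaustive):

sudo dmidecode | grep -vi -e serial -e uuid -e 'asset tag'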
I remember it being mentioned somewhere that posting dmidecode output publicly is risky (security/privacy).
What are the cons of posting dmidecode output publicly?
The recommendations of https://privacytools.io are Debian, Qubes, and Fedora. I should note you can change the UX/design of Debian (and Fedora, since they both default to the same GUI) by changing from GNOME to a different Desktop, of which there are many. For example, Qubes defaults to the KDE Desktop. Stallman recommends 'ethical' distros, that is to say, those which do not contain non-'free' software. e.g., Dragora, Dyne:bolic, gNewSense, Guix, Hyperbola, Parabola, PureOS, Trisquel, and Ututo S. Neither recommends ParrotOS. Anything based on Ubuntu (such as Elementary) will have the flaws of Ubuntu in Stallman's eyes, unless they go to great lengths to remove non-free apps and capabilities (such as playing MP3 or AAC files).
I am an Ubuntu user. I plan to stop using Ubuntu, following Richard Stallman's and privacytools.io's recommendations. I am not very enthusiastic about using Debian because of its UX/design, which I don't like. I am curious about how my data are processed. For my privacy: Is it safe to use ParrotOS (not ParrotSec), which is based on Debian? Can I use elementaryOS, which is based on Ubuntu but claims to be Free software that will not spy on me? To be precise, I am not taking into consideration for this question the software I will install in the future.
Privacy: are elementaryOS and ParrotOS equal to Debian?
Root has all privileges on the machine, so there's no way you can protect stdout from root. If the data had only to transit through the machine (e.g. via a network interface) a solution would be to encrypt it at the source, but since the data is generated on the same machine the root user can easily modify the script to fetch the unencrypted data. As a rule, if you don't trust a machine, don't run anything on it. Also, there's a contradiction in the sentence "This script outputs very sensitive data to stdout" -- if it's very sensitive data, you shouldn't dump it to stdout, unless you are the only user on the machine.
I need to run a script as a regular user on a remote machine whose root I do not trust. This script outputs very sensitive data to stdout. What steps can I take to ensure only I can read the script’s output? What steps can I take to ensure remnant files on the disk (if there are any) are permanently destroyed?
Privacy as regular user against root?
If bash encounters $(pwd) it will execute the command pwd and replace $(pwd) with this command's output. $PWD is a variable that is almost always set, and pwd has been a builtin shell command for a long time. So $PWD will fail if this variable is not set, and $(pwd) will fail if you are using a shell that does not support the $() construct, which in my experience is pretty often the case. So I would use $PWD. Like every nerd, I have my own shell scripting tutorial.
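A quick hedged demonstration that the two normally agree (any POSIX-style shell):

cd /tmp
printf '%s\n' "$PWD" "$(pwd)"   # both lines print /tmp

BASEDIR=$PWD                    # plain parameter expansion, no command substitution needed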
I encountered BASEDIR=$(pwd) in a script. Are there any advantages or disadvantages over using BASEDIR="$PWD", other than maybe, that $PWD could be overwritten?
Is it better to use $(pwd) or $PWD?
If you're using bash, then the dirs builtin has the desired behavior:

dirs +0
~/some/random/folder

(Note +0, not -0.) With zsh:

dirs
~/some/random/folder

To be exact, we would first need to clear the directory stack, else dirs would print all of its contents:

dirs -c; dirs

Or with zsh's print builtin:

print -rD $PWD

or

print -P %~

(that one turns prompt expansion on; %~ in $PS1 expands to the current directory with $HOME replaced by ~, but also handles other named directories, like the home directories of other users or named directories that you define yourself).
pwd gives me

/data/users/me/some/random/folder

Is there an easy way of obtaining

~/some/random/folder

from pwd?
Make pwd result in terms of "~"?
Depending on how your pwd command is configured, it may default to showing the logical working directory (output by pwd -L), which shows the symlink location, or the physical working directory (output by pwd -P), which ignores the symlink and shows the "real" directory. For complete information you can do:

file "$(pwd -L)"

Inside a symlinked directory, this will return:

/path/of/symlink: symbolic link to /path/of/real/directory
Suppose I have a folder:

cd /home/cpm135/public_html

and make a symbolic link:

ln -s /var/lib/class .

Later, I'm in that directory:

cd /home/cpm135/public_html/class

pwd is going to tell me I'm in /home/cpm135/public_html/class. Is there any way to know that I'm "really" in /var/lib/class? Thanks
How to tell if I'm actually in a symlink location from command line?
The other answers are oversimplifications, each presenting only parts of the story, and are wrong on a couple of points. There are two ways in which the working directory is tracked.

For every process, in the kernel-space data structure that represents that process, the kernel stores two vnode references, to the vnodes of the working directory and the root directory for that process. The former reference is set by the chdir() and fchdir() system calls, the latter by chroot(). One can see them indirectly in /proc on Linux operating systems, or via the fstat command on FreeBSD and the like:

% fstat -p $$ | head -n 5
USER  CMD   PID   FD MOUNT            INUM  MODE       SZ|DV  R/W
JdeBP zsh 92648 text /               24958 -r-xr-xr-x 702360  r
JdeBP zsh 92648 ctty /dev              148 crw--w----  pts/4  rw
JdeBP zsh 92648   wd /usr/home/JdeBP     4 drwxr-xr-x    124  r
JdeBP zsh 92648 root /                   4 drwxr-xr-x     35  r
%

When pathname resolution operates, it begins at one or the other of those referenced vnodes, according to whether the path is relative or absolute. (There is a family of …at() system calls that allow pathname resolution to begin at the vnode referenced by an open (directory) file descriptor, as a third option.) In microkernel Unices the data structure is in application space, but the principle of holding open references to these directories remains the same.

Internally, within shells such as the Z, Korn, Bourne Again, C, and Almquist shells, the shell additionally keeps track of the working directory using string manipulation of an internal string variable. It does this whenever it has cause to call chdir(). If one changes to a relative pathname, it manipulates the string to append that name. If one changes to an absolute pathname, it replaces the string with the new name. In both cases, it adjusts the string to remove . and .. components and to chase down symbolic links, replacing them with their linked-to names. (Here is the Z shell's code for that, for example.) The name in the internal string variable is tracked by a shell variable named PWD (or cwd in the C shells). This is conventionally exported as an environment variable (named PWD) to programs spawned by the shell.

These two methods of tracking things are revealed by the -P and -L options to the cd and pwd shell built-in commands, and by the differences between the shells' built-in pwd commands and both the /bin/pwd command and the built-in pwd commands of things like (amongst others) VIM and NeoVIM.

% mkdir a ; ln -s a b
% (cd b; pwd; /bin/pwd; printenv PWD)
/usr/home/JdeBP/b
/usr/home/JdeBP/a
/usr/home/JdeBP/b
% (cd b; pwd -P; /bin/pwd -P)
/usr/home/JdeBP/a
/usr/home/JdeBP/a
% (cd b; pwd -L; /bin/pwd -L)
/usr/home/JdeBP/b
/usr/home/JdeBP/b
% (cd -P b; pwd; /bin/pwd; printenv PWD)
/usr/home/JdeBP/a
/usr/home/JdeBP/a
/usr/home/JdeBP/a
% (cd b; PWD=/hello/there /bin/pwd -L)
/usr/home/JdeBP/a
%

As you can see: obtaining the "logical" working directory is a matter of looking at the PWD shell variable (or environment variable, if one is not the shell program); whereas obtaining the "physical" working directory is a matter of calling the getcwd() library function. The operation of the /bin/pwd program when the -L option is used is somewhat subtle. It cannot trust the value of the PWD environment variable that it has inherited. After all, it need not have been invoked by a shell, and intervening programs may not have implemented the shell's mechanism of making the PWD environment variable always track the name of the working directory. Or someone may do what I did just there.
So what it does is (as the POSIX standard says) check that the name given in PWD yields the same thing as the name ., as can be seen with a system call trace:

% ln -s a c
% (cd b; truss /bin/pwd -L 3>&1 1>&2 2>&3 | grep -E '^stat|__getcwd')
stat("/usr/home/JdeBP/b",{ mode=drwxr-xr-x ,inode=120932,size=2,blksize=131072 }) = 0 (0x0)
stat(".",{ mode=drwxr-xr-x ,inode=120932,size=2,blksize=131072 }) = 0 (0x0)
/usr/home/JdeBP/b
% (cd b; PWD=/usr/local/etc truss /bin/pwd -L 3>&1 1>&2 2>&3 | grep -E '^stat|__getcwd')
stat("/usr/local/etc",{ mode=drwxr-xr-x ,inode=14835,size=158,blksize=10240 }) = 0 (0x0)
stat(".",{ mode=drwxr-xr-x ,inode=120932,size=2,blksize=131072 }) = 0 (0x0)
__getcwd("/usr/home/JdeBP/a",1024) = 0 (0x0)
/usr/home/JdeBP/a
% (cd b; PWD=/hello/there truss /bin/pwd -L 3>&1 1>&2 2>&3 | grep -E '^stat|__getcwd')
stat("/hello/there",0x7fffffffe730) ERR#2 'No such file or directory'
__getcwd("/usr/home/JdeBP/a",1024) = 0 (0x0)
/usr/home/JdeBP/a
% (cd b; PWD=/usr/home/JdeBP/c truss /bin/pwd -L 3>&1 1>&2 2>&3 | grep -E '^stat|__getcwd')
stat("/usr/home/JdeBP/c",{ mode=drwxr-xr-x ,inode=120932,size=2,blksize=131072 }) = 0 (0x0)
stat(".",{ mode=drwxr-xr-x ,inode=120932,size=2,blksize=131072 }) = 0 (0x0)
/usr/home/JdeBP/c
%

As you can see: it only calls getcwd() if it detects a mismatch, and it can be fooled by setting PWD to a string that does indeed name the same directory, but by a different route.

The getcwd() library function is a subject in its own right, but to précis: originally it was purely a library function, which built up a pathname from the working directory back up to the root by repeatedly trying to look up the working directory in the .. directory. It stopped when it reached a loop where .. was the same as its working directory, or when there was an error trying to open the next .. up. This would be a lot of system calls under the covers. Nowadays the situation is slightly more complex. On FreeBSD, for example (this being true for other operating systems as well), it is a true system call, as you can see in the system call trace given earlier. All of the traversal from the working-directory vnode up to the root is done in a single system call, which takes advantage of things like kernel-mode code's direct access to the directory entry cache to do the pathname component lookups much more efficiently. However, note that even on FreeBSD and those other operating systems the kernel does not keep track of the working directory with a string.

Navigating to .. is again a subject in its own right. Another précis: although directories conventionally (albeit, as already alluded to, this is not required) contain an actual .. in the directory data structure on disc, the kernel tracks the parent directory of each directory vnode itself and can thus navigate to the .. vnode of any working directory. This is somewhat complicated by the mountpoint and changed-root mechanisms, which are beyond the scope of this answer.

Aside: Windows NT in fact does a similar thing. There is a single working directory per process, set by the SetCurrentDirectory() API call and tracked per process by the kernel via an (internal) open file handle to that directory; and there is a set of environment variables that Win32 programs (not just the command interpreters, but all Win32 programs) use to track the names of multiple working directories (one per drive), appending to or overwriting them whenever they change directory.
Conventionally, unlike the case with Unix and Linux operating systems, Win32 programs do not display these environment variables to users. One can sometimes see them in Unix-like subsystems running on Windows NT, though, as well as by using the command interpreters' SET commands in a particular way.

Further reading:

- "pwd". The Open Group Base Specifications Issue 7. IEEE 1003.1:2008. The Open Group. 2016.
- "Pathname Resolution". The Open Group Base Specifications Issue 7. IEEE 1003.1:2008. The Open Group. 2016.
- https://askubuntu.com/a/636001/43344
- How are files opened in unix?
- what is inode for, in FreeBSD or Solaris
- Strange environment variable !::=::\ in Cygwin
- Why does CDPATH not work as documented in the manuals?
- How can I set zsh to use physical paths?
- Going into a directory linked by a link
Say I log into a shell on a unix system and begin tapping away commands. I initially begin in my user's home directory ~. I might from there cd down to the directory Documents. The command to change working directory here is very simple intuitively to understand: the parent node has a list of child nodes that it can access, and presumably it uses an (optimised) variant of a search to locate the existence of a child node with the name the user entered, and the working directory is then "altered" to match this — correct me if I'm wrong there. It may even be simpler that the shell simply "naively" tries to attempt to access the directory exactly as per the user's wishes and when the file system returns some type of error, the shell displays a response accordingly. What I am interested in however, is how the same process works when I navigate up a directory, i.e. to a parent, or a parent's parent. Given my unknown, presumably "blind" location of Documents, one of possibly many directories in the entire file system tree with that name, how does Unix determine where I should be placed next? Does it make a reference to pwd and examine that? If yes, how does pwd track the current navigational state?
How does Unix keep track of a user's working directory when navigating the file system?
First of all, you might simply want to replace the \w with \W. That way, only the name of the current directory is printed and not its entire path:

terdon@oregano:/home/mydirectory1/second_directory_with_a_too_long_name/my_actual_directory_with_another_long_name $ PS1="\u@\h:\W \$ "
terdon@oregano:my_actual_directory_with_another_long_name $

That might still not be enough if the directory name itself is too long. In that case, you can use the PROMPT_COMMAND variable for this. This is a special bash variable whose value is executed as a command before each prompt is shown. So, if you set it to a function that sets your desired prompt based upon the length of your current directory's path, you can get the effect you're after. For example, add these lines to your ~/.bashrc:

get_PS1(){
    limit=${1:-20}
    if [[ "${#PWD}" -gt "$limit" ]]; then
        ## Take the first 5 characters of the path
        left="${PWD:0:5}"
        ## ${#PWD} is the length of $PWD. Get the last $limit
        ## characters of $PWD.
        right="${PWD:$((${#PWD}-$limit)):${#PWD}}"
        PS1="\[\033[01;33m\]\u@\h\[\033[01;34m\] ${left}...${right} \$\[\033[00m\] "
    else
        PS1="\[\033[01;33m\]\u@\h\[\033[01;34m\] \w \$\[\033[00m\] "
    fi
}
PROMPT_COMMAND=get_PS1

The effect looks like this:

terdon@oregano ~ $ cd /home/mydirectory1/second_directory_with_a_too_long_name/my_actual_directory_with_another_long_name
terdon@oregano /home...th_another_long_name $
In a system with Ubuntu 14.04 and bash, I have the PS1 variable ending with the following contents:

\u@\h:\w\$

so that the prompt appears as:

user@machinename:/home/mydirectory$

Sometimes, however, the current directory has a long name, or it is inside directories with long names, so that the prompt looks like:

user@machinename:/home/mydirectory1/second_directory_with_a_too_long_name/my_actual_directory_with_another_long_name$

This will fill the line in the terminal and the cursor will go to another line, which is annoying. I would like instead to obtain something like:

user@machinename:/home/mydirectory1/...another_long_name$

Is there a way to define the PS1 variable to "wrap" and "compact" the directory name, to never exceed a certain number of characters, obtaining a shorter prompt?
Compact bash prompt when using a directory tree / filename
In most shells, including bash, pwd is a shell builtin:

$ type -a pwd
pwd is a shell builtin
pwd is /bin/pwd

If you use /bin/pwd, you must use the -L option to get the same result as the builtin pwd:

$ ln -s . test
$ cd test && pwd
/home/cuonglm/test
$ /bin/pwd
/home/cuonglm
$ /bin/pwd -L
/home/cuonglm/test

By default, /bin/pwd ignores symlinks and prints the actual directory. From info pwd:

`-L'
`--logical'
     If the contents of the environment variable `PWD' provide an
     absolute name of the current directory with no `.' or `..'
     components, but possibly with symbolic links, then output those
     contents.  Otherwise, fall back to default `-P' handling.

`-P'
`--physical'
     Print a fully resolved name for the current directory.  That is,
     all components of the printed name will be actual directory
     names--none will be symbolic links.

The builtin pwd includes symlinks by default, unless the -P option is used or the set builtin's -o physical option is enabled. From man bash:

pwd [-LP]
       Print the absolute pathname of the current working directory. The pathname printed contains no symbolic links if the -P option is supplied or the -o physical option to the set builtin command is enabled. If the -L option is used, the pathname printed may contain symbolic links. The return status is 0 unless an error occurs while reading the name of the current directory or an invalid option is supplied.
I added a symlink to the current directory with ln -s . aa. If I execute cd aa, and after that I executed pwd, the response is /home/sim/aa. But if I execute /bin/pwd it prints /home/sim (the current directory hasn't changed). Where does this difference come from?
Strange difference between pwd and /bin/pwd
It's not the . that goes up a level, but the fact that in the shells' pattern match syntax, * means any number of any characters, so it matches the .. entry that exists in every directory and points to the next upper directory. You can get the same with something like ls .* in Bash and many other shells. Having .* match the special entries . and .. is a pain that's never useful. You might see people use ugly workarounds like .??* .[!.] to explicitly navigate around the .. entry. (*) Luckily some smarter shells remove those two from glob results, zsh probably being the most prominent, but not the only one. (At least the development versions of Bash also have that as the globskipdots option.) In this particular case, there's the added complication that it's the SSH server that expands the glob but I expect the root cause is the same. (* .??* matches a dot and at least two other characters, .[!.] matches a dot and one character that's not a dot. In regular expressions, * and ? have slightly different meanings.)
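A quick hedged illustration in bash (the directory name is made up):

$ mkdir -p /tmp/globdemo/.hidden && cd /tmp/globdemo
$ echo .*
. .. .hidden

The . and .. entries are matched by .* just like any other dot-file, which is exactly how a glob like this can wander up into the parent directory.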
I'm doing my first-ever move of user files from an old system to a new system. My goal is to use scp for its simple syntax of scp -r source destination. I tried the following command to copy the files first:

scp -r someuser@10.0.0.11:/home/someuser/* .

In retrospect, and from past experience, this copied all files without a leading .. In my attempt to fix this, I did this:

scp -r someuser@10.0.0.11:/home/someuser/.* .

meaning a wildcard for anything starting with a .. Obviously (why I'm asking the question) it didn't do what I wanted. The observed result was that it interpreted the . as moving up a level in the path, and it started copying /home/* instead, also (I think) placing the files one level up from my working directory, rather than in the working directory itself. Is my interpretation of the execution of the second command correct? I think it was easy to fix since I was in ~/backup, so one level up was ~. I just ran rm -rf ~/someuser on each username that had copied before I interrupted the command. Those someuser directories were supposed to be in ~/backup. I have since learned how to copy the files I wanted by specifying the directory only, not the files contained in the directory.
Why did 'scp 10.0.0.11:/home/someuser/.*' start copying from /home as well?
A directory (like any file) is not defined by its name. Think of the name as the directory's address. When you move the directory, it's still the same directory, just like if you move to a different house, you're still the same person. If you remove a directory and create a new one by the same name, it's a new directory, just like someone who moves into the house where you used to live isn't you. Each process has a working directory. The cd command in the shell changes the shell's current working directory. The pwd command prints the¹ path to the current working directory. When you removed the directory A, what this did was to remove the entry for A in its parent directory. The directory A itself remained in the filesystem, but in a detached state, with no name. It was not deleted yet because it was in use by a process, namely the first shell. When you changed the directory in the first shell, the directory was finally deleted. The same thing happens when a file is deleted while a process still has it open: the file's directory entry is removed immediately, and the file itself is removed when it stops being in use. Similarly, observe what happens when you move directories around.

mkdir one two
touch one/1 two/2
cd one
ls

In another shell:

mv one tmp
mv two one
mv tmp two

In the first shell:

ls

The file 1 is in the directory that was originally called one and is now called two. The file 2 is in the directory that was originally called two and is now called one.

¹ More precisely, a path, which may not be unique if symbolic links or other subtleties are involved.
I have two shells open. The first is in directory A. In the second, I remove directory A, and then recreate it. When I go back to the first shell, and type ls, the output is: ls: cannot open directory .: Stale file handleWhy? I thought the first shell (the one that remained open inside a non-existent directory) would "freeze" while waiting for the next command, and wouldn't have "realized" that the directory was deleted and recreated. Does the shell hold a "deeper" reference to its current working directory other than the string $PWD?
`ls` error when directory is deleted
bash has a built-in command pwd, which is what you are using when you simply type pwd into your shell. To get the pwd described by the manpage, you need to force use of the external command. You can do this by specifying the full path to the executable (/bin/pwd in your case) or by prepending env: env pwd. The env command can be used to add settings to the environment (not done here) before starting the command specified; as env has no builtin pwd, the "real" /bin/pwd is executed. The advantage of the builtin pwd in bash is that bash keeps track of the current directory, so getting the value costs nothing, whereas the external command needs to search up through the filesystem to determine the path, which is much more IO-intensive.
When I display the manual for the pwd command, it says that long options like --physical are supported:

$ man pwd

PWD(1)                 User Commands                PWD(1)

NAME
       pwd - print name of current/working directory

SYNOPSIS
       pwd [OPTION]...

DESCRIPTION
       Print the full filename of the current working directory.

       -L, --logical
              use PWD from environment, even if it contains symlinks

       -P, --physical
              avoid all symlinks

However, it fails when I type the following:

$ pwd --physical
-bash: pwd: --: invalid option
pwd: usage: pwd [-LP]

Why are long options not working for me? I'm using RHEL 6.4. No alias for pwd is configured. Looks like it's the standard pwd:

$ which pwd
/bin/pwd
Why pwd does not accept long options like --physical?
IMPORTANT: As pointed out by Gilles 'SO- stop being evil', the approach below can expose your shell to attacks and leak information unintendedly. Proceed with care.

Building off the answer provided by Groggle, you can create an alternative cd (ch) in your ~/.bash_profile like this:

function ch () {
    cd "$@"
    export HISTFILE="$(pwd)/.bash_history"
}

automatically exporting the new HISTFILE value each time ch is called. The default behavior in bash only updates your history when you end a terminal session, so this alone will simply save the history to a .bash_history file in whichever folder you happen to end your session from. A solution is mentioned in this post and detailed on this page, allowing you to update your HISTFILE in real time. A complete solution consists of adding two more lines to your ~/.bash_profile:

shopt -s histappend
PROMPT_COMMAND="history -a;$PROMPT_COMMAND"

changing the history mode to append with the first line, and then configuring the history command to run at each prompt.
I was wondering if it is possible to keep a file containing history per current working directory. So for example if I was working in /user/temp/1/ and typed a few commands, these commands would be saved in /user/temp/1/.his or something. I am using bash.
Create history log per working directory in bash
Not a single command as far as I know, but this does what you need: echo "$(whoami)@$(hostname):$PWD"You could make that into an alias by adding this line to your shell's rc file (~/.bashrc, or ~/.zshrc or whatever you use): alias foo='echo "$(whoami)@$(hostname):$PWD"'
I know that pwd gives the current working directory, hostname gives the current host and whoami gives the current user. Is there a single unix command that will give me the output of whoami@hostname:pwdso that I can quickly paste the output into an scp command?
A command that gives username@hostname:pwd
"How come ls doesn't see the problem?"

There is no "problem" in the first place.

"something is not right in terminal A"

There is nothing not right. There are defined semantics for processes having unlinked directories open, just as there are defined semantics for processes having unlinked files open. Both are normal things. There are defined semantics for unlinking a directory entry that referenced something (whilst having that something open somewhere) and then creating a directory entry by the original name linking to something else: you now have two of those somethings, and referencing the open description for the first does not access the second, or vice versa. This is as true of directories as it is of files.

A process can have an open file description for a directory by dint of:

- it being the process's working directory;
- it being the process's root directory;
- it being open by the process having called the opendir() library function; or
- it being open by the process having called the open() library function.

rmdir() is allowed to fail to remove links to a still-open directory (which was the behaviour of some old Unices and is the behaviour of some non-Unix-non-Linux POSIX-conformant systems), and is required to fail if the still-open directory is unlinked via a name that ends in a pathname component .; but if it succeeds and removes the final link to the directory, the defined semantics are that a still-open but unlinked directory:

- has no directory entries at all;
- cannot have any directory entries created thereafter, even if the attempting process has write access or privileged access.

Your operating system is one of the ones that does not return EBUSY from rmdir() in these circumstances, and your shell in the first terminal session has an unlinked but still open directory as its current directory. Everything that you saw was the defined behaviour in that circumstance. ls, for example, showed the empty still-open first directory, of the two directories that you had at that point. Even the output of pwd was defined behaviour. When run as a built-in command in that shell, it was that shell internally keeping track of the name of the current directory in a shell/environment variable. When run as a built-in command in another shell, it was the other shell failing to match the device and i-node number of its working directory to the second directory now named by the contents of the PWD environment variable that it inherited, thus deciding not to trust the contents of PWD, and then failing in the getcwd() library function because the working directory does not have any names any longer, it having been unlinked.

Further reading:

- rmdir(). "System Interfaces". The Open Group Base Specifications. IEEE 1003.1:2017.
- https://unix.stackexchange.com/a/413225/5132
- Why can't I remove the '.' directory?
- Does 'rm .*' ever delete the parent directory?
In the first terminal, A, I create a directory, enter the directory, and create a file:

$ mkdir test
$ cd test
$ touch file1.txt
$ ls
file1.txt

Then in another terminal, B, I delete the directory and recreate it:

$ rm -r test
$ mkdir test
$ cd test
$ touch file2.txt

And back again in terminal A (without doing any cd), I try to list the files:

$ ls

ls doesn't see anything, and it doesn't complain either. What happens in the background? How come ls doesn't see the problem? And is there a standard, portable, and/or recommended way to find out that something is not right in terminal A? pwd just prints the seemingly correct directory name. touch file3.txt says no such file or directory, which is not helpful. Only bash -c "pwd" gives two long error lines which somehow give away that something is wrong, but they are not really descriptive, and I'm not sure how portable that is between different systems (I'm on Ubuntu 16.04). cd .. && cd test fixes the problem, but does not really explain what happened.
What happens when the current directory is deleted?
cwd: current working directory (a concept, state, or value)
pwd: print working directory (a command)

Part of the confusion may be that in some shells $PWD is actually the current working directory name, and pwd is a command to display it (similar to echo "$PWD" where available). At the library level, pwd can be implemented by a call to getcwd(3).
What is the difference between cwd and pwd? I've tried googling it, and one of the answers mentioned that depending on some factor (which I sadly do not remember), the implementation (the code I'm assuming) is not the same? I don't suppose this is like the difference between print('x') vs return str(x) (to use a Python analogy)?
What is the difference between cwd and pwd?
echo does not do anything with standard input; it only parses its parameters. So you are effectively running echo which, by itself, outputs a single empty line, and the standard input is discarded. If you want to see the behavior you are trying to implement, use a tool designed to parse standard input, such as cat: $ pwd | cat /home/usernameIf you really want to use echo to display the current working directory (or the output of another command), you can use Command Substitution to do this for you: $ echo "Your shell is currently working in '$(pwd)'." Your shell is currently working in '/home/username'.
I'm new to Unix. I typed this command in an Ubuntu terminal:

pwd | echo

I expected to see the output of pwd in the terminal (/home/fatemeh/Documents/Code/test), but the output was just a single empty line. Why does this happen?
why piping pwd and echo does not work? [duplicate]
// is a special case, covered in the POSIX definition of the word "Pathname":

"Multiple successive <slash> characters are considered to be the same as one <slash>, except for the case of exactly two leading <slash> characters."

On most systems // is the same as /, but it is allowed to be different according to POSIX.

Further reading:

- On what systems is //foo/bar different from /foo/bar?
- How does Linux handle multiple consecutive path separators (/home////username///file)?
- unix, difference between path starting with '/' and '//'

(I think the first of these links is the best.)
When I change directory to //, it seems to put me in a special directory that is very similar to, but slightly different from, /. However, trying to add any further slashes (///) simply drops me in /.

$ cd / ; pwd
/
$ cd // ; pwd
//
$ cd /// ; pwd
/
$ cd //// ; pwd
/

It seems that // is somehow special; even though it has the same directories and everything, it's still a different string returned by pwd. Why is this? Why can my working directory be // but not ///?
Why can I cd to // but not /// or //// or ///// or … [duplicate]
The easiest is probably to just make a new window; by default it will start in the directory where screen was started. Alternatives include looking at the process's cwd (e.g. /proc/<pid>/cwd, but this requires root, as screen is setuid). Note that you can change that directory later with C-a :chdir <path>.
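A hedged one-liner for the /proc route on Linux (replace $PID with the PID of your screen server process, which often appears as SCREEN in ps output):

sudo readlink /proc/$PID/cwd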
Question: How do I find out the default directory of a window in GNU screen? NB: I’m not looking for the current directory of the process running in the window.

Background: I have created a hardcopy of my scrollback buffer without giving an absolute path. Now I don’t know where to find the created file. I don’t remember from which directory I originally invoked screen, and I haven’t used any chdir command. I’m now wondering which directory I have polluted with my hardcopy … :-\
How to Find out a GNU Screen Window’s Default Directory?
Batch jobs submitted by qsub are executed in your home directory by default. Some versions of qsub support the -d option to specify a different directory. To execute the script in the same directory where you ran qsub, use:

qsub -d "$PWD" -q hpc-pool ./myScript.sh

If the -d option is not available, you can access the directory where you ran qsub in your script, in the PBS_O_WORKDIR variable. So add this line near the beginning of your script:

cd "$PBS_O_WORKDIR" || exit $?
It seems that I cannot use ./ in qsub, as in:

qsub -q hpc-pool ./myScript.sh

where myScript.sh contains several ./. After checking, ./ somehow gets translated to ~/. Why is this the case?
Current directory ./ in qsub?
Shells keep track of symbolic links as a convenience for users. This has the nice effect that cd foo && cd .. always goes back to the original directory, even when foo is a symbolic link to a directory. It has two kinds of downsides: the main one is that other programs don't behave this way; additionally, symbolic directory tracking introduces problems of its own (what happens when the symbolic link changes? what happens if the process doesn't have the permission to read the symlink? etc.). Shells can only do this because they keep track of all directory changes, so they remember how you got there. When a new process starts, it doesn't get this historical information. Under the hood, it finds where it is by moving upwards from the current directory, following .. links until it hits the root¹. If you reached a directory through symbolic links, you can print out a symlink-less path with the pwd builtin, by calling pwd -P. If you call another program from the shell, don't pass it a path that contains .. after symlink components, as the program would interpret it differently. Instead, eliminate the .. components by calling pwd -P: cp "$(cd .. && pwd -P && echo /)test-parent.txt" .If you want to forget about symlinks used in past cd commands in a shell session, you can run cd "$(pwd -P && echo /.)"This changes to what is already the current directory, so it doesn't effectively change the shell process's current directory, but it changes the path that the shell has tracked for the current directory, making it symlink-less. ¹ This is how the getcwd call traditionally operates. Some kernels do keep track of the current directory, but don't track symbolic links, for backward compatibility if for no other reason (but also because of the subtle edge cases with symbolic links).
Suppose I have created two folders in /tmp called parent and child. child contains a file called test-child.txt and parent contains a file called test-parent.txt. Now let's go inside parent and create a symbolic link to child. Next, go inside child and try to copy test-parent.txt from the parent. The bash completion works but the actual file copy fails:

cd /tmp
pwd
/tmp
mkdir parent
mkdir child
touch child/test-child.txt
ls child/
test-child.txt
cd parent
ln -sf ../child .
touch test-parent.txt
cd child
cp ../test-parent.txt .
cp: cannot stat ‘../test-parent.txt’: No such file or directory

Why? Moreover, when I am inside child, pwd says:

pwd
/tmp/parent/child
copy files from the parent location when I am inside a symbolically linked folder
Calling cd from within awk is possible (using system(); there is no awk command called cd), but wouldn't do much. In particular, it would not change the current working directory of the shell that started awk. The working directory is local to a process, and awk and any other process or subshell run in their own environments, inherited from their parent processes (and a parent process's environment can't be changed from a child process). If you just want to see whether the current directory is the root directory, and cd to /tmp if it is, then you may do so in the shell directly:

[ "$PWD" = "/" ] && cd /tmp
I am trying to change directory from path A to path B, like below:

pwd | awk '{if($1=="/") cd /tmp/}'
awk: syntax error near line 1
awk: illegal statement near line 1

Please suggest.
how to run shell command inside awk
pgrep -x program_name_pattern | xargs pwdx

Explanation:

- pgrep pattern - looks through the currently running processes and lists the process IDs which match the pattern.
- -x, --exact - only match processes whose names exactly match the pattern.
- pwdx - report the current working directory of a process.

Testing:

pgrep -x my_program | xargs pwdx

### Output ###
15880: /home/minimax/Desktop/sandbox
15907: /home/minimax/Desktop/sandbox/yet_one_folder
How can I get the PID number and the folder where it works? If I run 2 instances of the same program in different folders:

/var/www/public_html/first_folder/test.jar (it runs all the time)
/var/www/public_html/second_folder/test.jar (it runs all the time)

If I run this command:

ps aux | grep test.jar

Result:

www-data 3766 0.5 3.8 2959916 75616 ? Sl 15:01 0:13 java -jar test.jar
www-data 4239 3.4 4.1 2959916 82432 ? Sl 15:31 0:18 java -jar test.jar

I don't know which PID belongs to which: the first folder or the second.
How to get pid number and folder where it works
To answer the question as it is stated: this is a simple string concatenation.

somedirpath='/some/path'   # for example $PWD or $(pwd)
somefilepath='/the/path/to/file.txt'

newfilepath="$somedirpath"/"$( basename "$somefilepath" )"

You most likely would want to include a / between the two path elements when concatenating the strings, and basename takes an argument which is a path (this was missing in the question).

Reading your other answer, it looks like you are looking for the bash script's path and name. This is available in BASH_SOURCE, which is an array. Its only element (unless you are in a function) will be what you want. In the general case, it's the last element in the array that you want to look at. In bash 4.4, this is ${BASH_SOURCE[-1]}.
I want to assign the path and file name to a variable:

/path/to/myfile/file.txt

For example:

MYFILE=$(pwd)$(basename)

How can I do it?
Concatenate pwd and basename [closed]
pwd -P

(in any POSIX shell) is the command you're looking for. -P is for "physical", as opposed to "logical" (-L, the default), where pwd mostly dumps the content of $PWD (which the shell maintains based on the arguments you give to cd or pushd).

$ ln -s . /tmp/here
$ cd /tmp/here/here
$ cd ../here/here
$ pwd
/tmp/here/here/here
$ pwd -P
/tmp
Obviously I know about pwd and readlink, but is there a command to find out the real absolute path to the current directory (ie, resolving links and dots)?
Real current directory [duplicate]
Using "exec" to replace the current shell is a slight improvement, i.e.:

bash -c 'cd somedirectory && exec bash -i'

You could put something like

if [ -n "$STARTDIR" ] ; then
    cd "$STARTDIR"
    unset STARTDIR
fi

in your ~/.bashrc, and then use:

STARTDIR="/some/directory" bash

You could put this in a different file and use bash --rcfile ~/changedir instead.

Programs which open a folder in a terminal will either be asking the "terminal" to do the cd, or will be running a sequence of system calls like fork, chdir /some/directory, exec terminal, rather than asking the shell to do something.
The short question is: I need to launch an interactive bash in a certain directory; I can craft the command that launches bash, but I can't modify the system. At the moment, the command that best fits my needs is:

bash -c 'cd some-directory; bash'

That is, I am calling bash, to use cd within bash, and then creating a child instance of bash within that to handle my interactive bashing. I'm aware this isn't the most egregious bash usage in the world, but it feels weird to me. Is there an easier way to do what I'm looking for?

Just for unnecessary context: this is part of my tooling that deals with launching a shell within a Docker instance (using docker exec) via docker-compose. I could find myself inside any of about 15 Docker containers that represent projects - some of them have a "default" directory that I would want to be in, and others don't. The tooling is meant to bridge the gap and give me a smoother experience switching from one project to another.

Some things that I have tried that I thought might work but didn't:

- PWD='~/some/directory' bash - this just gets ignored.
- bash -i -c "cd ~/some/directory" - it creates a shell marked as interactive, but doesn't wait for any input like a normal interactive shell.
- echo "cd ~/some/directory" | bash --rcfile - was worth a try.

Some things that I know would work but don't in my case:

- Change the working directory then launch my command: I'm in Docker, so my host's working directory doesn't matter. I'm intrigued as to how bash knows what the current working directory of a parent shell is. docker-compose does have a -w that will take an absolute path, but it doesn't know how to handle tildes.
- cd ~/some/directory ; bash - docker-compose requires an actual executable.
- Put it in an rcfile in the container, or create a "launcher" script: kinda works, but I want to be able to configure it from the outside, and don't want to edit each of the containers.

So is there a way to set the initial directory that I've overlooked, or is this just the way it's done? I know that shell integrations such as GNOME's "Open folder in terminal" must be doing something to tell bash what folder to start in; I can't work out what magic it is using, though.
Interactive bash shell: Set working directory via commandline options
Yes, pwd has a -P (--physical) option to avoid all symlinks. So do:

pwd -P

or you can use the canonical way, readlink:

readlink -f /path

Check man pwd and man readlink.
On one of our servers, we have a directory with the following path: "/daten/i/scripts".
When you go to /daten/i, one can see that scripts is a soft link to "/batch". When I type cd /daten/i/scripts and then pwd, I see /daten/i/scripts. Is there a way, a command, that I can type in at /daten/i/scripts that shows me that I'm in a "soft link", that I am really in /batch?
How can I make pwd resolve a soft link?
If I understand correctly, you want your test directory to be located in the same place as the script. You can get the location of the script (irrespective of your current directory when you ran it) like this:

MYPATH=`dirname \`readlink -e "$0"\``

Then you can do, for example

cd "$MYPATH/test"

Explanation: $0 is the name (including path) of the script. readlink -e /foo/bar gives you the absolute location of /foo/bar (resolving out any symlinks too). dirname trims off the file part and leaves you just the path.
I have a shell script that cd's to a specific directory to run a set of python files. Now that I have this committed to source code management (the script and the python files), I am cloning this into a Jenkins workspace and want to run the files from there. The script is currently written to still cd into the local repository and not the Jenkins workspace, so cloning the repository is made redundant.
How do I write a cd command so that it recognizes the file existing in the Jenkins workspace and runs it from there instead of the local files existing on the same machine?
Running commands from root directory of new workspace
Assuming none of the file names contain newline characters:

find "$PWD" -name __openerp__.py | awk -F/ -vOFS=/ 'NF-=2' | sort -u
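To see what the awk stage does: -F/ splits each path on slashes, and decrementing NF by two drops the last two components, the file name and its immediate parent, so the grandparent path is printed. For example (path hypothetical):

$ printf '%s\n' /srv/code/addons/module/__openerp__.py | awk -F/ -vOFS=/ 'NF-=2'
/srv/code/addons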
What I want to do is to find all files based on some search query and get each file's parent's parent directory (../..) as a full path. For example, find . -name "__openerp__.py" and then for each file execute something along the lines of (cd ../.. ; pwd). Then pipe everything to uniq.
print parent directories full path of find output
According to the manual, - and -l are the same option.

-l  Simulate a full login. The environment is discarded except for HOME, SHELL, PATH, TERM, and USER. HOME and SHELL are modified as above. USER is set to the target login. PATH is set to ``/bin:/usr/bin''. TERM is imported from your current environment. The invoked shell is the target login's, and su will change directory to the target login's home directory.
-   (no letter) The same as -l.

By not specifying -l or -, the directory is not changed.
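So to keep your location, simply drop the dash. A minimal illustration, reusing the host and path from the question:

user1@m:~/loc1/loc2$ su
Password:
root@m:/home/user1/loc1/loc2#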
I wonder how in one command I can switch to another user (in my case it is usually root) and still remain in the same location where I am before the change. I usually do it this way, unfortunately taking many steps:

user1@m:~/loc1/loc2$ pwd
/home/user1/loc1/loc2
user1@m:~/loc1/loc2$ su -
Password:
root@m:~# cd /home/user1/loc1/loc2
root@m:/home/user1/loc1/loc2#

I am looking for something like:

user1@m:~/loc1/loc2$ su - && ...

or similar, which will give me this result:

root@m:/home/user1/loc1/loc2#
How to change user and keep current location?
A shell can store environment variables in whatever way it wants. It is not really relevant. What is relevant is that the shell should be able to pass the environment to a child process (including printenv) via the execve system call.
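A quick way to see this inheritance from the shell: each child process receives its own copy of the environment through execve()'s envp argument, not through any shared storage.

$ export GREETING=hello           # added to this shell's environment
$ printenv GREETING               # printenv, a child process, got a copy
hello
$ GREETING=bye printenv GREETING  # one-off override, also passed via execve
bye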
So there's $PWD, $PATH, $USERNAME and all that. I've been working on my own shell and I've just today introduced environment variables. The way I'm doing it is by creating strings called pwd, path and all so when there's a command to echo, say, $PWD I tell it to print pwd. Is this the same thing bash does? I don't yet have the provision to set environment variables but I'll work on that, I guess. My main question would be where and how actual shells do it. Another somewhat related question, how is printenv related to all this? Because printenv is a binary and it always prints the bash environment variables, not of the shell I'm currently using to run it in the first place (obviously, how would it detect the strings in my program I've set to be my path and pwd) so where does it get these from?
How/where does the shell store environment variables?
From zshoptions(1)

CHASE_LINKS (-w)
    Resolve symbolic links to their true values when changing directory. This also has the effect of CHASE_DOTS, i.e. a `..' path segment will be treated as referring to the physical parent, even if the preceding path segment is a symbolic link.

So you would setopt CHASE_LINKS somewhere in your .zshrc. There are also flags to cd that will vary how cd behaves.

If the -s option is specified, cd refuses to change the current directory if the given pathname contains symlinks. If the -P option is given or the CHASE_LINKS option is set, symbolic links are resolved to their true values. If the -L option is given symbolic links are retained in the directory (and not resolved) regardless of the state of the CHASE_LINKS option.
In bash, I can put set -P in my .bashrc to make it use absolute paths. That is, if I change to a directory through a symbolic link, and then use cd .., it takes me to that directory's canonical parent, not the directory containing the symbolic link. How can I configure zsh to always use absolute paths?
How can I set zsh to use physical paths?
With bash 4.4+, you could redefine pwd as:

pwd() {
  local -
  set -o pipefail
  builtin pwd "$@" | sed '${/\/$/!s|$|/|;}'
}

That is, add a / to the last line of the current working directory if it was not there already (like after cd /) and return the original exit status thanks to the pipefail option (turned on for that function only with local -). (With zsh, replace local -; set -o pipefail with set -o localoptions -o pipefail.)
POSIXly, you could do:

pwd() (
  pwd=$(command pwd "$@" && echo .) || exit
  pwd=${pwd%??}
  pwd=${pwd%/}/
  printf '%s\n' "$pwd"
)
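With either definition loaded, usage looks like this (directories arbitrary):

$ cd /usr/share && pwd
/usr/share/
$ cd / && pwd
/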
I want pwd to return /path/to/dir in the format /path/to/dir/ with the "/" at the end. Any ideas as to how I can accomplish this?
pwd to return current path with "/" at the end
Actually, Dir2 does exist, but the name Dir2 does not. Confused? :)
The shell's current directory is still the directory referred to by the name Dir2, and this keeps the directory still around. This is analogous to anonymous files. Normally, when a file's link count goes to zero, the file is deleted and the inode freed. However, if a process still has the file open, the kernel does not delete the file until the process closes the file, either explicitly or implicitly by exiting. In Dir2's case, the shell still holds the directory "open" as long as it doesn't change its current directory.
What is gone are the name Dir1 in the Desktop catalog and the whole hierarchy of names below it, including the . and .. entries. The directory formerly known as Dir1 is also gone (assuming no other process has it as current directory). Files and directories at the inode level do not form a hierarchy, i.e. there are no links from inodes to parent, child or sibling entries. The hierarchy is built up separately by directory entries, which are essentially (name, inode) pairs, pointing to files and other directories.
After this lengthy introduction we can rephrase your original question so that it reads: "why does the shell not change its current directory to something else, when the directory entry Dir2 is removed from Dir1?"
Well, one reason is that the shell doesn't even know this. Some other process has run the rm program and removed the directories, but there is no mechanism by which the shell would be told about this.
Second, which directory would the shell choose as its new current directory? The directory is changed using the chdir system call, which takes a string containing the new directory as argument. The shell could try a chdir(".."), but as we saw above, we already destroyed the .. entry!
Third, why should the shell change the current directory? It has no reason to do so, it is comfortable where it sits, and it is not in the habit of magically changing directories without being explicitly told to do so. Granted, the situation is kind of pathological, but it is up to the user to avoid it.
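The situation is easy to reproduce; exact error messages vary by shell and system:

$ mkdir -p Dir1/Dir2
$ cd Dir1/Dir2
$ rm -r ../../Dir1
$ pwd
/home/username/Desktop/Dir1/Dir2
$ cd ..
cd: ..: No such file or directory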
I tried a little experiment where I created 2 folders Dir1 and Dir2 inside my Desktop directory, such that Dir1 is the parent of Dir2: /home/username/Desktop/Dir1/Dir2
Then, I used cd to set my pwd to /home/username/Desktop/Dir1/Dir2. Next I used rm -r /home/username/Desktop/Dir1 to remove Dir1. Now if I use pwd it still shows /home/username/Desktop/Dir1/Dir2, which now doesn't exist. Also at this time if I use ls or cd .. it generates an error saying 'Cannot access /home/username/Desktop/Dir1/Dir2: No such file or directory', which is absolutely true, but I was thinking this issue arose because of pwd not getting updated after folder deletion. The solution to this is also simple as far as I can think: you can go to the parent directory and then delete the requested directory. I want to know if there is some specific reason for pwd not getting updated, whether my solution is correct, and/or whether I just found a bug.
Why doesn't pwd update after directory removal?
Just do:

feh --recursive --auto-zoom --action 'printf "%%s\n" %F' "$PWD"

That is:

pass the full path of the current working directory to feh (instead of nothing, which feh treats the same as .) so it will give you the full path of files within.
use %F, not %f so the quoting is done correctly (your '%f' would choke on filenames containing ' characters; that would even make it an arbitrary command execution vulnerability (imagine a file named ';reboot #.jpg or worse, for instance)).
don't use echo which in general can't be used to display arbitrary data.
the literal % that we need to pass to printf must be escaped as %% (%s alone would be expanded by feh to the size of the file).
we use single quotes (the strongest quotes) for the action argument to pass to feh. The argument will be literally: printf "%%s\n" %F. That tells feh to invoke a shell (/bin/sh) to interpret that code with 3 arguments: sh, -c and that code with %% changed to % and %F changed to the path of the file properly quoted in sh syntax, and sh, in turn, will invoke printf (which is builtin in most sh implementations) with printf, %s\n and the full path of the file as arguments.
The feh command allows you to view images within a folder recursively:

feh --recursive --auto-zoom

While viewing images, it also allows you to associate custom commands with keys 0-9 on your keyboard. For example, if I wanted the terminal to output the filename of the image I was viewing (to the terminal), I could make it do that by pressing the zero key while the image is being displayed by running feh with an --action argument like this:

feh --recursive --auto-zoom --action "echo '%f'"

--action binds the command echo '%f' to the zero key. %f is the relative path and looks like this when outputted: ./filename.jpg. However, I need feh to give me the absolute path instead of a relative path. So, I need to cut off that dot and then append what's left onto the pwd. This is my attempt to do that:

feh --recursive --auto-zoom --cache-size 2048 --action "echo $(pwd)$(echo '%f' | cut -c 2-)}"

but the output looks like this:

/absolute/pathf

(notice the 'f' on the end of the pwd) How can I instead achieve an output like this? :

/absolute/path/filename.jpg
Get the Absolute Path from feh
The problem is that you're using relative paths in the script: ./Linux/lib, ./foo. These paths are relative to the current directory. The current directory of the process running the script is the current directory of whatever process launched it; it has nothing to do with the location of the script. When you run the script by clicking a desktop icon, the current directory is your home directory.
One solution is to add a cd command in the script, to change to the directory where the application is installed.

#!/bin/sh
cd /home/xyz/Software/Test/
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:./Linux/lib"
exec ./foo --gc=sgen

But it would be more useful to not change the current directory, and instead use absolute paths. This way you can use the script to open files in the current directory, for example. While I'm at it, I added "$@" to the invocation of foo, which passes the arguments on the script's command line on to the application.

#!/bin/sh
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/home/xyz/Software/Test/Linux/lib"
exec /home/xyz/Software/Test/foo --gc=sgen "$@"

If the script is located in the application directory, you can make it detect its own location. $0 is the path to the script. ${0%/*} is the path to the script with everything after the last slash stripped off, i.e. the path to the directory containing the script.

#!/bin/sh
foo_directory="${0%/*}"
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$foo_directory/lib"
exec "$foo_directory/foo" --gc=sgen "$@"

Beware that if LD_LIBRARY_PATH is initially empty, you're adding the current directory, which may not be a good idea. You should test it.

#!/bin/sh
foo_directory="${0%/*}"
if [ -n "$LD_LIBRARY_PATH" ]; then
  export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$foo_directory/lib"
else
  export LD_LIBRARY_PATH="$foo_directory/lib"
fi
exec "$foo_directory/foo" --gc=sgen "$@"

or (assuming you don't use empty entries in LD_LIBRARY_PATH, which is a sane choice)

#!/bin/sh
foo_directory="${0%/*}"
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$foo_directory/lib"
LD_LIBRARY_PATH="${LD_LIBRARY_PATH#:}"
exec "$foo_directory/foo" --gc=sgen "$@"
I have this desktop entry:

[Desktop Entry]
Name=dummy
Type=Application
Terminal=false
Icon=/home/xyz/Software/Test/ico.png
Exec= /home/xyz/Software/Test/start

which is supposed to execute a file containing:

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:./Linux/lib
exec ./foo --gc=sgen

I have tried creating a symlink, but the result is the same - it does nothing. When I double click the file in its folder, it gives me a prompt like this:
After I click Run it runs fine, but when executing from desktop... I've tried exporting this path to PATH, but when running foo, it can't find some library even though it should... Also the path is 100% correct, because the icon is appearing as it should.
What I'm trying to do is create a working desktop shortcut of the start file, or the foo file (which won't execute without error for some reason; I have added its path to PATH, maybe it's missing the argument '--gc=sgen' when executing?) Any help will be greatly appreciated!
Neither *.desktop nor symlink works (just for this one file) - Linux Mint 17.2 Cinnamon
find will output the found names with the path that you give it, so you can start building the command with

find /Users/username/Desktop/WebsiteFiles

or, if that's where you're located currently,

find "$PWD"

Next, we restrict the found names to only names matching *.html:

find "$PWD" -type f -name '*.html'

If you have both *.html and *.HTML (or *.hTmL) files, and if you want to include these, then change -name to -iname (which does case-insensitive name matching). I also added -type f on the off chance that you have any directories with names matching *.html (we don't want to see these in the result). -type f restricts the names to those of regular files only.
Then you wanted to remove particular filenames from the result: names containing the strings txt or text (up or down case). This can be done through negating the -iname test with !:

find "$PWD" -type f -name '*.html' ! -iname "*txt*" ! -iname "*text*"

And there you have it. Each "predicate" (-type f etc.) acts like a test against the names in the given directory, and there's an implicit logical AND between the tests. If all tests pass, the name is printed.
Running in a temporary directory on my machine, with the files that you have in your directory (just empty files for testing):

$ ls -l
total 24
-rw-r--r-- 1 kk wheel     0 Sep 26 17:47 about.html
-rw-r--r-- 1 kk wheel     0 Sep 26 17:47 about_TXT.html
-rw-r--r-- 1 kk wheel     0 Sep 26 17:47 answers.html
-rw-r--r-- 1 kk wheel     0 Sep 26 17:47 answers_txt.html
-rw-r--r-- 1 kk wheel     0 Sep 26 17:47 contact.html
-rw-r--r-- 1 kk wheel     0 Sep 26 17:47 contact_text.html
-rw-r--r-- 1 kk wheel   596 Sep 26 17:46 files
-rw-r--r-- 1 kk wheel     0 Sep 26 17:47 image.jpg
-rw-r--r-- 1 kk wheel     0 Sep 26 17:47 image2.jpg
-rw-r--r-- 1 kk wheel     0 Sep 26 17:47 images
-rw-r--r-- 1 kk wheel     0 Sep 26 17:47 index-TEXT.html
-rw-r--r-- 1 kk wheel     0 Sep 26 17:47 index.html
-rw-r--r-- 1 kk wheel     0 Sep 26 17:47 notes.txt
-rw-r--r-- 1 kk wheel 10240 Sep 26 19:11 test.tar

$ find "$PWD" -type f -name '*.html' ! -iname "*txt*" ! -iname "*text*"
/tmp/shell-ksh.p56GA7BA/index.html
/tmp/shell-ksh.p56GA7BA/answers.html
/tmp/shell-ksh.p56GA7BA/about.html
/tmp/shell-ksh.p56GA7BA/contact.html
Goal: Output a .txt file with the full directory path, including name, of all .html files except for those with "txt" or "text" in the .html file name.
I found the following line gives me the desired .txt file with the file's full directory path. The only problem is that it gives me ALL of the folder's contents:

ls -d "$PWD"/* > fileList.txt

Example Results:

/Users/username/Desktop/WebsiteFiles/notes.txt
/Users/username/Desktop/WebsiteFiles/index.html
/Users/username/Desktop/WebsiteFiles/index-TEXT.html
/Users/username/Desktop/WebsiteFiles/answers.html
/Users/username/Desktop/WebsiteFiles/answers_txt.html
/Users/username/Desktop/WebsiteFiles/image.jpg
/Users/username/Desktop/WebsiteFiles/image2.jpg
/Users/username/Desktop/WebsiteFiles/about.html
/Users/username/Desktop/WebsiteFiles/about_TXT.html
/Users/username/Desktop/WebsiteFiles/contact.html
/Users/username/Desktop/WebsiteFiles/contact_text.html
/Users/username/Desktop/WebsiteFiles/images

Desired Results:

/Users/username/Desktop/WebsiteFiles/index.html
/Users/username/Desktop/WebsiteFiles/answers.html
/Users/username/Desktop/WebsiteFiles/about.html
/Users/username/Desktop/WebsiteFiles/contact.html

Experimenting: I'm fairly new to using the command line. I've been experimenting trying to figure this stuff out. I found that the following find helps find all .html files: find . -iname '*.html'
When used on the parent directory it will give me all .html files but not the full directory path, example result:

./index.html
./index-TEXT.html
./answers.html
./answers_txt.html
./about.html
./about_TXT.html
./contact.html
./contact_text.html

I'm not familiar enough with the parameters or assembling these commands and haven't been successful in getting a print of just the .html files without the ones with any variation of "text" in the name. I have a ton of files to find with this and need that .txt file with the full paths. I want to understand this stuff so please provide detailed responses!
How to find and print specific file paths with exclusions?
There is no way to do this as part of the posix_spawn() set of functions. There is an ongoing discussion initiated by Red Hat on whether such a feature should be added. If this gets accepted, it could become part of POSIX in the next version; this may be in 2-3 years.
BTW: posix_spawn() is implemented on top of vfork()/exec(), and unless you are trying to implement a POSIX shell with vfork() support, vfork()/exec() is really easy to use.
In Linux (CentOS 7.5, kernel 3.10, gcc 7.3), is it possible to change the working directory of a child process created by posix_spawn before it runs a given process image (an executable)? If yes, how? If no, what is the best practice to do it?
How to change working directory of a child process by posix_spawn? [closed]
Since the mc command is aliased to . /some/script the script is executed in the current environment (. is equivalent to source in some shells). A script that is executed in this way may well change the environment of the calling shell, since it's executing in the same environment.
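The difference is easy to demonstrate with a throwaway script (file name hypothetical):

$ echo 'cd /tmp' > /tmp/setdir.sh
$ sh /tmp/setdir.sh; pwd    # runs in a child process, caller unaffected
/home/pedro
$ . /tmp/setdir.sh; pwd     # sourced: runs in the calling shell itself
/tmp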
As far as I know, a process cannot modify its parent's environment. However, when I run mc (GNU's Midnight Commander, a curses-based file manager) and then quit it, I end up with another $PWD:

[localhost ~]$ echo $PWD
/home/pedro
[localhost ~]$ mc
removed `/tmp/mc-pedro/mc.pwd.5616'
[localhost pedro]$ echo $PWD
/nfs/home/pedro

I know that it hasn't really "changed", because /home is a symlink to /nfs/home, but anyway, $PWD was a string, and now it's a different string. What's happening here?
If processes can't modify their parent's environment, what is MC doing?
There are various reasons why a cd could fail. The target might not exist, the target might not be a directory, you might not have permission to access the target directory, cd might not be found (although this is extremely unlikely since it's a shell builtin), the chdir() operation might fail due to a broken file system etc.
In this particular case, however, it looks like a bug in the script. The script you link to has two calls to cd:

cd "$WORK_DIR"/nanorc || { echo "cd failed"; exit 127; }

and

cd $HOME || { echo "cd $HOME failed"; exit 155; }

I am assuming the one that failed is the first one, since cd $HOME should normally work. This is the relevant (slightly simplified) section of your script:

if [ ! -d "$WORK_DIR"/nanorc ]
then
    echo "Setting up Nanorc file for all users....please, wait!"
    git clone https://$OAUTH_TOKEN:[emailprotected]/gnihtemoSgnihtemos/nanorc || { echo "git failed"; exit 127; }
    chmod 755 "$WORK_DIR"/nanorc || { echo "chmod nanorc failed"; exit 127; }
    cd "$WORK_DIR"/nanorc || { echo "cd failed"; exit 127; }
fi

So, if "$WORK_DIR"/nanorc is not a directory, you run a git command which creates the nanorc directory. The first possible issue is that the nanorc directory will be created in the current directory, which might not be $WORK_DIR. At this point in your script, you haven't actually moved to $WORK_DIR, so it should only work if you run the script from within $WORK_DIR. So the simple solution is to add a cd "$WORK_DIR" before the git command:

if [ ! -d "$WORK_DIR"/nanorc ]
then
    cd "$WORK_DIR" || { echo "cd $WORK_DIR failed"; exit 127; }
    echo "Setting up Nanorc file for all users....please, wait!"
    git clone https://$OAUTH_TOKEN:[emailprotected]/gnihtemoSgnihtemos/nanorc || { echo "git failed"; exit 127; }
    chmod 755 "$WORK_DIR"/nanorc || { echo "chmod nanorc failed"; exit 127; }
    cd "$WORK_DIR"/nanorc || { echo "cd failed"; exit 127; }
fi
I have a shell script that failed to finish last week; it was a failed "cd" command and it exits if it fails. The script is a bash shell script for configuring new Debian installs. Here is the full script: debianConfigAswome.sh. The script is run as root so it has full access to the file-system. Can you please list all the reasons a script would not be able to successfully execute a cd command and what to do to avoid the error?
When can a "cd" command fail in a shell script and what can I do to remedy it?
grep -v "^$PWD$" FILE-LIST-v inverses the search, so only non-matching lines are printed ^...$ ensures that the pattern only matches the whole line (otherwise all subdirectories of $PWD would got filtered as well)
I have a file that contains paths - looks like this:

/Users/a/Desktop
/Users/a/Documents
/Users/a/Documents/Work

What would be the easiest way to remove all lines that contain the current directory ($PWD)?
Remove all lines that contain $PWD
$PWD is the current directory, not the directory containing the script. There's no reason why inner.sh would be located in the current directory. The path to the script is stored in $0. You can extract its directory part to find the directory containing the script.

script_directory=$(dirname -- "$0")
"$script_directory/inner.sh"
Having some trouble getting $PWD to work inside a bash script... I have two scripts in the same directory: ~/outer.sh, ~/inner.sh. I use outer.sh to call inner.sh as follows (outer.sh contents shown below):

#!/bin/bash

$PWD/inner.sh

But this seems not to work. Further investigation shows that $PWD appears inaccessible as I've used it here (nothing appears with printf $PWD >> logfile.txt), and I suspect it has something to do with calling a script from a script... can anyone clarify what's going on here?
calling $PWD from another script
You could do something like:

(
  while [ "$PWD" != / ] && cd -P ..
  do
    pwd
  done
)

There exists however at least one pathological case where that code could run in an infinite loop: when the current directory has been deleted. In that case, I find that on GNU/Linux and with bash-5.0 at least, when bash is not interactive, cd -P .. outputs an error but does not return a failure exit status. Then, $PWD becomes . and subsequent cd -P ..'s do nothing. Changing the loop exit condition to [ "$PWD" != / ] && [ "$PWD" != . ] && cd -P .. works around the problem there.
Some comments about your approach:

$(pwd) expands to the output of pwd minus the trailing newline characters, so it doesn't work properly if the current directory ends in newline characters.
In POSIX shells $PWD holds a path to the current working directory, and that's what pwd prints. However note that unless you used cd with the -P option to get there, the path that's stored in $PWD and that pwd outputs may contain symlink components. If $PWD is /foo/bar/baz and that file is a symlink, then its parent directory may not be /foo/bar.
In bash, leaving $(pwd) (or $PWD) unquoted in list context, such as in the in part of that for loop statement, invokes split+glob. You did configure the split part by setting $IFS to /, but forgot to disable the glob part (with set -o noglob). If $PWD was a directory called /tmp/*, that would have been split+globbed into "", "tmp" and all the non-hidden files in the current directory for instance.
echo can't be used to output arbitrary data. The way bash is commonly built and configured, in a directory called /tmp/-Ene the echo -Ene would output nothing. To output something followed by a newline, use printf '%s\n' "$var" instead.
Even in the straight case of $PWD being /path/to/dir with none of its components being symlinks, your code outputs an empty line, path, to and dir, none of which are ancestor directories of the current directory, which are /, /path and /path/to instead.
Display recursively all parent directories relative to the current one.

#!/bin/bash
IFS=/
for var in $(pwd)
do
    echo "$var"
done
Display recursively all parent directories relative to the current one
For bash and any other shell supporting readline you might be able to use this function:

icd() { local a; read -ei "${1:-$PWD}" -p "$FUNCNAME> " a && cd "$a"; }

Usage

icd        # Starts editing with $PWD
icd /root  # Starts editing with /root
I'm looking for a command that invokes readline or similar, primed with the current $PWD, to let the user edit the current directory, then cd to the edited value. E.g.

> cd ~/a/b/c/d
> pwd
> /home/alice/a/b/c/d

Then run the proposed icd command (for "interactive cd", inspired by imv in renameutils). It prompts the user as follows:

> icd
icd> /home/alice/a/b/c/d

Then the user can, e.g. press Alt-b, Alt-b, Alt-t, resulting in:

icd> /home/alice/a/c/b/d

(Alt-t transposing b and c) Upon pressing Enter, the icd command changes the current directory to /home/alice/a/c/b/d.
Ideally icd would have some autocompletion. Maybe even visual indication of whether the current value is an existing/valid directory.
This can nearly be done in zsh by typing

> cd `pwd`

then pressing Tab. But a command like icd would save keystrokes.
Related: Interactive cd (directory browser)
sh: is there a command to interactively edit the PWD?
Solved by replacing \w and \W in the PS1 with $PWD:

if [ "$color_prompt" = yes ]; then
    if [[ ${EUID} == 0 ]] ; then
        PS1='${debian_chroot:+($debian_chroot)}\[\033[01;31m\]\h\[\033[01;34m\] $PWD \$\[\033[00m\] '
    else
        PS1='${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u@\h\[\033[00m\] \[\033[01;34m\]$PWD \$\[\033[00m\] '
    fi
else
    PS1='${debian_chroot:+($debian_chroot)}\u@\h $PWD \$ '
fi
unset color_prompt force_color_prompt

# If this is an xterm set the title to user@host:dir
case "$TERM" in
xterm*|rxvt*)
    PS1="\[\e]0;${debian_chroot:+($debian_chroot)}\u@\h \$PWD\a\]$PS1"
    ;;
*)
    ;;
esac
Relevant .bashrc section:

if [ "$color_prompt" = yes ]; then
    if [[ ${EUID} == 0 ]] ; then
        PS1='${debian_chroot:+($debian_chroot)}\[\033[01;31m\]\h\[\033[01;34m\] \W \$\[\033[00m\] '
    else
        PS1='${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u@\h\[\033[00m\] \[\033[01;34m\]\w \$\[\033[00m\] '
    fi
else
    PS1='${debian_chroot:+($debian_chroot)}\u@\h \w \$ '
fi
unset color_prompt force_color_prompt

# If this is an xterm set the title to user@host:dir
case "$TERM" in
xterm*|rxvt*)
    PS1="\[\e]0;${debian_chroot:+($debian_chroot)}\u@\h \w\a\]$PS1"
    ;;
*)
    ;;
esac

Question is, how do I make it not shorten the pwd when in the home directory, and thus show the whole pwd at all times in PS1?
How to make my PS1 bash session show the whole pwd at all times?
A single find command will output all the files with absolute paths:

find "$(pwd)" -type f

(Quoting the command substitution keeps it working when the current directory's path contains spaces.)
Actually, I have a folder which contains several subfolders with lots of images in each. I am trying to gather all the names of the files in each subfolder into a text file (filesNames.txt) in that subfolder, in the form of:

Absolute/path/to/each/file/filename

So, I wrote a script in the parent folder:

#!/bin/sh

for dir in "$PWD"/*/; do
    arr=( "$dir"* )
    cd "$dir"
    printf "%s 1\n" "$PWD/${arr[@]##*/}" > "$dir"filesNames.txt
    cd ..
done

My problem is that I get the absolute address just for the first file in each subfolder. For the rest there are only filenames without the absolute address:

/run/media/parent_folder/subfolder/filename1.png
filename2.png
filename3.png
...

I think it is related to the $PWD variable, which I iterate over just once for each subfolder. How can I change the script into a suitable form? Thanks in advance.
find my misunderstanding about this little piece of shell script
The logical value of the current working directory (logical cwd, what you call "unresolved pwd") is an internal concept of the shell, not a concept of the kernel. The kernel only remembers the fully resolved path (physical cwd). So you won't get the information you want through generic system interfaces. You have to get the shell's cooperation.
The PWD environment variable is how shells transmit the logical cwd to their subprocesses. When a shell runs another shell, the parent sets PWD to the logical cwd (it does this whenever it runs a program), and the child checks that the value of PWD is sensible and if so uses that as its logical cwd, falling back to the physical cwd if $PWD is missing or wrong. Observe:

#!/bin/sh
mkdir /tmp/dir
ln -sf dir /tmp/link
cd /tmp/link
sh -c 'echo Default behavior: "$PWD"'
env -u PWD sh -c 'echo Unset PWD: "$PWD"'
PWD=/something/fishy sh -c 'echo Wrong PWD: "$PWD"'
rm /tmp/link
rmdir /tmp/dir

Output:

Default behavior: /tmp/link
Unset PWD: /tmp/dir
Wrong PWD: /tmp/dir

Reading the process's environment doesn't have any particular security implications, but what it tells you is what the value of PWD was when the process started. If the process changed to another directory, the value in the environment is no longer relevant. What you need is the value that would be in the environment if the shell ran another process now. But the only way for this to appear is to actually make the shell run another process.
The typical way for GUIs to find the cwd of a shell that they run is to make the shell print it out. If you need the information occasionally and want to leave maximum control over the shell configuration to the user, ensure that the shell is displaying a prompt and issue the pwd command. This is simple, works even in "exotic" shells like csh and fish, but is ambiguous in corner cases (e.g. a directory name containing newlines). If it's ok to tweak the shell configuration, you can make the shell print an escape sequence each time it displays a prompt (PS1 for many shells, but the way to make it include the current directory varies), or when it changes directories (chpwd_functions in zsh, more invasive ways in other shells).
Note that if a component of the logical cwd has been moved or removed, or if the symbolic link has been changed to point elsewhere, the value may be wrong. On the other hand the physical cwd will always be correct. (If the directory has been removed, Linux's /proc/PID/cwd, which is where all programs such as ps and lsof get their information, will appear as a broken symlink whose target ends in (deleted). I don't know what lsof reports on macOS for a deleted current directory.) So if you do find out the logical cwd, you should probably ensure that it matches the physical cwd and fall back to the physical cwd if it doesn't.
I'm hitting an issue where I need to get the unresolved symlink of a shell process. For example, given a symlink ~/link -> ~/actual, if bash is launched with a $PWD of ~/link, I need to fetch that from outside the bash process.
Getting the resolved cwd is possible using lsof or /proc as called out in https://unix.stackexchange.com/a/94359/115410 but I'm beginning to think it's not possible to get the unresolved path. I have tried to use lsof -b to not use readlink but the logging says that path never tries to use readlink anyway. It does appear possible to read the environment via /proc/.../environ and parse out PWD, but this is slow, /proc may not exist on the system, and I believe there are some security implications to trying to read a process's environment.
Here is the code in question I'm trying to fix:
lsof on macOS:

exec('lsof -OPln -p ' + this._ptyProcess.pid + ' | grep cwd', (error, stdout, stderr) => {
    ...
});

/proc on Linux:

Promises.readlink(`/proc/${this._ptyProcess.pid}/cwd`);
Get the unresolved pwd of a shell from another process
$ scp -pr "$(pwd)" [emailprotected]:"$(basename $(pwd) )/"
Trying to do something like (in pseudo-unix): scp -r <pwd> [emailprotected]:~/<dirname of toplevel> In other words, I'm trying to copy the current directory I'm in locally (and the contents) over to remote while sticking the very last path segment from "pwd" commands output onto /home/user/<here> in the remote. I'm shaky in my unix commands so I figured I'd ask vs. experiment this time to avoid damage
Use SCP from local machine to recursively copy current working directory to remote?
You could try editing ~/.bashrc for it. (Probably C:\Users\[myname]\.bashrc) Append the following line: cd /mnt/c/Users/[myname]/Desktop/cours/rootMeMore details on executing commands on shell startup - here.
I installed bash on my Windows machine, and now when I run the shell, the pwd is: /mnt/c/Windows/System32
How may I change this? I open this shell every day to work, and the directory where I work is: /mnt/c/Users/[myname]/Desktop/cours/rootMe
So, it would be great if my pwd were already this directory when the shell starts!
Change path directory bash.exe
Doesn't look like Tcl gives you any control over cd and pwd. An alternative: resolve the symbolic links in the WORK env var, and compare that to pwd:

format {cd "%s"} [expr {[pwd] eq [file normalize $::env(WORK)] ? {$WORK} : [pwd]}]
When I work in a Tcl environment, once I cd to a directory, even if the path I specify is its symbolic link, then no matter whether I run pwd -L or pwd -P, they all return the absolute path. This is troublesome for me because I try to replace a user-specific workspace path with a variable name so that when different users execute the script, they will switch to their own workspace. However, the system $::env(WORK) returns the symbolic link of the path while the pwd command returns the absolute path, so I cannot do a sed command. For example,

stcl> cd $::env(WORK)
stcl> puts [format "cd %s" [exec echo [pwd] | sed "s,$::env(WORK),\$WORK,g"]]

What I want the code to do is to print "cd $WORK", but because pwd returns the absolute path, even if I use pwd -L, I cannot get a match with the sed command and therefore cannot replace the string.
tcl cd/pwd command with regard to real/symbolic path
You didn't specify whether you want to change the actual home directory of the user or the starting directory of sftp. The first is not a good idea to change, but you can certainly do that in your user settings.
The start directory of your user can be set up in sshd_config, where you define the sftp subsystem, like this (the path on a Mac will probably be different):

Subsystem sftp /usr/libexec/openssh/sftp-server

By adding the -d start_directory option to this line you are able to change the starting directory as described in the manual page:

-d start_directory
Specifies an alternate starting directory for users. The pathname may contain the following tokens that are expanded at runtime: %% is replaced by a literal '%', %d is replaced by the home directory of the user being authenticated, and %u is replaced by the username of that user. The default is to use the user's home directory. [...]
I'm trying to set up SFTP's home directory on Mac OS X Mavericks for user my_user. Now, it looks like: /Users/my_user (I got it with the sftp> pwd command) but I want it to be: /Users/my_user/Documents/new/dirs
How can I do this?
How can I change the home directory of the SFTP server on Mac OS X?
In my .bashrc, my PS1 is configured to display the last component of my current working directory. This can be done using \W (PS1 allows some special sequences along with variables). In your case, that would be:

$ export PS1='\W$'

More of these sequences may be found in man bash, at Prompting, see here. You may also use \w to use complete paths, but with $HOME abbreviated to ~ (which would shorten most of your paths).
Now, if you really want to put a character limit, you may use...

$ export PS1='$(pwd | tail -c30)$'    # Limited to 30 characters
$ export PS1='...$(pwd | tail -c30)$'

... or with a little bashism:

$ export PS1='${PWD: -30}$'
$ export PS1='${PWD:(-30)}$'
$ export PS1='...${PWD: -30}$'
$ export PS1='...${PWD:(-30)}$'
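On bash 4 and later there is also a built-in way: PROMPT_DIRTRIM keeps only the given number of trailing directory components when \w (or \W) is expanded, replacing the removed part with an ellipsis, so no post-processing of $PWD is needed:

$ export PROMPT_DIRTRIM=3   # keep the last 3 path components
$ export PS1='\w$ '
.../beta.sitename.co.uk/modules/bundles$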
To know what my CWD is in the console, I added the following to ~/.bash_profile:

export PS1='$(pwd)$ '

So, the pwd result is shown before the $ symbol. It was very nice. But when I go to a directory with a long name: /var/www/vhosts/beta.sitename.co.uk/httpdocs/sites/beta.sitename.co.uk/modules/bundles
In the command prompt I see:

.co.uk/modules/bundles$ sitename.co.uk/httpdocs/sites/beta.sitename.

This makes the shell almost impossible to work with. Is it possible to trim it, keeping only the last part of the dir name? Say, 30 characters. Or maybe the better idea would be to print the pwd after every cd command, so I could know where I am? This can be done using an alias.
How to trim the CWD shown in the command prompt
Self-answer ... Let's say the file bundled_file.txt is in the folder called test ... and let's further say that my pwd is in the directory above, meaning the parent directory of test ... I found this, which works well even though it's changing the pwd, but it immediately returns to the parent directory. In bash, something like:

(cd ./test && csh bundled_file.txt)

Doing this, files a.txt, b.txt, and c.txt all get created in the folder test, and the pwd stays unchanged outside the subshell. Feel free to respond to this if you have a better answer. 😊
All, I have a file (called, say, bundled_file.txt) of here docs stored in a directory, and the file looks like this:

cat > a.txt << 'eof'
...
...
...
'eof'
cat > b.txt << 'eof'
...
...
...
'eof'
cat > c.txt << 'eof'
...
...
...
'eof'

I want to un-bundle this file so that the files a.txt, b.txt, and c.txt get created in the same directory as the original bundled file above. Normally, I would just cd to this directory and run something like csh bundled_file.txt, but I want to execute the csh command while in my pwd (print working directory). However, when doing a remote execution from my pwd, the files a.txt, b.txt, and c.txt get created there. I do not want this. And, in case you're already thinking it, I do not want to change the files to say something like:

cat > /full/file/path/a.txt << 'eof'
...
...
...
'eof'

Anyone who can help a novice out? Thanks!
unbundling a file of here docs when file's directory is different from pwd
You can simply

tmux attach -c directory -t session 2> /dev/null &

The attaching will exit immediately as the forked background job is not a terminal. But it will succeed in changing the working directory of the tmux session.
Edit: I corrected -s with -t but the trick does not seem to work anymore in tmux 3.1b.
I'm looking for a way to do something like this without attaching to the session.

tmux attach-session -c <directory> -t <session>
                    ^^^^^^^^^^^^^^

Per tmux(1), there isn't a way to change the default working directory (new windows and new panes) of an entire session without attaching to it. I cannot attach to the session because I'm doing this in some automated scripts where attaching would break the automation.
tmux change default working directory of a session without attaching
To be honest, I don't see a reason for the two calls to cd at all. You don't seem to use the directory that you cd into for anything. You give an absolute path for the location of the database dump. If any custom MySQL configuration file is needed, that would be picked up from the user's home directory in any case.
You could therefore, quite likely, just use

mysqldump -uroot -p"craft" --add-drop-table craft \
    > ~/../docker-entrypoint-initdb.d/base.sql

regardless of what directory you run that from.
I use this shell pipeline to get a SQL dump using the terminal:

$ cd var/lib/mysql && mysqldump -uroot -p"craft" --add-drop-table craft > ~/../docker-entrypoint-initdb.d/base.sql && cd ~/..

As can be seen, I enter the var/lib/mysql directory, create the dump to a file and come back to where I was initially. The command is correct, but I guess it can be written more concisely, without entering the var/lib/mysql directory directly. Can anyone suggest how?
How to run a command with a different working directory?
mv previousVersions/foo.txt .. will move the file foo.txt to the directory above your working directory. To have the file be moved to your working directory, replace .. with .:

mv previousVersions/foo.txt .

With the file presently in the parent directory of your current working directory, you can move it to your current working directory with this command:

mv ../foo.txt .
I have been using *nix systems for a while now, and I was surprised to see this situation in which mv misplaces or deletes my file. For example, I had a file foo.txt in the directory called previousVersions, and when I was in that directory's parent directory, I issued the command

mv previousVersions/foo.txt ..

expecting it to move foo.txt up to my working directory. Instead, foo.txt is in neither the original directory nor my working directory. Why did this happen and where did my file go?
mv .. with path: Where does my file go?
The trailing / in /home/myuser/ is confusing bash. I think you'll see normal behavior if you remove it. That slash isn't part of the directory name; it's a path separator. It shouldn't be in /etc/passwd, and it shouldn't be in $HOME.
You can test that theory without touching a file using just:

HOME=/home/myuser

after which the tilde should appear in your prompt.
Between the various parameters that can be included in the bash PS1 variable, \w expands to

the current working directory, with $HOME abbreviated with a tilde (uses the value of the PROMPT_DIRTRIM variable)

as stated in the Bash manual. My $HOME is set to /home/myuser/ (the same value specified in /etc/passwd), but the expansion of \w in PS1 gives /home/myuser when I am in the $HOME directory. So, it is not «abbreviated with a tilde». I am using Ubuntu 16.04 with GNU bash, version 4.3.48(1)-release (x86_64-pc-linux-gnu). What's wrong? What is the source from which \w actually copies the path of the current directory?
Parameters in bash $PS1 variable
Beware that $0 can be a relative path, so calling cd twice might not work. Apart from that, extracting the directory part of $0 works in most cases. Most is not the same as all. Some cases where it can fail include:

The script is not invoked by running its path but by calling a shell on it, e.g. bash myscript (which does a PATH lookup).
The script is moved during its execution.

As long as you document that your script must be called without shenanigans, taking the directory part of $0 is ok. You need to be a little careful; it's possible for $0 to not contain a slash at all, if it was found via an empty PATH entry or, with some shells, a PATH entry for .. This case is worth supporting, and it means you need to take precautions with ${0%/*}. The dirname approach isn't completely straightforward either. The command substitution eats up newlines at the end. And with both approaches you need to take care that the string could begin with - and be interpreted as an option; pass -- to terminate a command's option list.

case "$0" in
  */*) cd -- "${0%/*}";;
  *) cd -- "$0";;
esac

or

cd -- "$(dirname -- "$0"; echo /)"

Regarding the double quotes, you're asking the wrong question. "$0" is the value of the parameter 0. $0, unquoted, is the value of the parameter 0 split at characters in IFS with each element then interpreted as a glob pattern and replaced by matching files if there are any (that latter part only if globbing is not turned off). You don't mean the value of the parameter 0 split at characters in IFS with each element then interpreted as a glob pattern and replaced by matching files if there are any (that latter part only if globbing is not turned off), do you? You mean the value of the parameter 0. So write what you mean: "$0". The only reason your tests didn't choke is that you didn't try it with problematic names and you tested with a specific shell that happens to repair your mistake in this specific scenario.
With a directory name like foo bar, you end up passing two arguments foo and bar to the cd command; bash's cd command interprets this as "change to the directory obtained by concatenating foo, a space and bar" so it happens to build back the right name. A different shell might interpret this as "complain about a spurious argument", "change to the directory foo", or "change to the directory obtained by replacing foo by bar in the current working directory". With bash, your script would fail if the name contained two consecutive spaces, for example.
I wondered what might be a safe or Unix-compatible way to change to the directory from which a script is called. These two methods even work in a directory with spaces and special characters in it, without using quotes. But there might be some situations where this might not work. Or is there?

#!/bin/bash

cd $(dirname $0)
pwd
cd ${0%/*}
pwd

~/test 1"\ ü`\($
$ ./cd.sh
/home/syss/test 1"\ ü`\($
/home/syss/test 1"\ ü`\($
What is the safest (or most elegant or shortest) way to change to the directory from where a script is called?
Your issue is that the $PWD part of your alias definition is unquoted (you close the double quoted string just before $PWD and then start a new one just afterwards). This means that it is evaluated when the alias is defined.
Instead, use single quotes around the whole alias text, or even better*, use a shell function to override the pwd command. Like so (or using echo if you must):

alias pwd='printf "%s/\n" "$PWD"'

Or, with a shell function:

pwd () {
    printf '%s/\n' "$PWD"
}

*Better = avoids complicated quoting, but still works the same way as you'd expect.
When I use the pwd command, it prints e.g. /opt instead of /opt/. I would like it to print the trailing slash. However, I tried adding the following line to my ~/.bash_aliases file:

alias pwd=" echo "${PWD}/" "

But it doesn't work properly. Instead of printing my present working directory with a trailing slash, it prints the working directory I was in at the start of the terminal session, or at the last time I ran the source ~/.bashrc command. Even when I name the command something different, like cwd, it still behaves this way.
So - my question is, can I have some way to have an alias that prints my present working directory, followed by a trailing slash? Thanks.
Can I reprogram the pwd command to add a trailing slash?
If you want to understand why this is, you need to understand the difference between files and inodes. rm, rmdir and mv all take action on the inodes describing the file/directory, not the actual file. If you have a file/dir open (e.g. by being in the directory), the inode information is removed, but the actual data file associated with the file/dir is not removed until all file handles pointing to it are closed. So, when you "cd .." the filesystem can swoop in and remove the directory and all its contents. https://en.wikipedia.org/wiki/Inode http://www.grymoire.com/Unix/Inodes.html
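The same effect is easy to see with a plain file (paths hypothetical):

$ echo hello > /tmp/f
$ exec 3< /tmp/f   # keep a file descriptor open on the file
$ rm /tmp/f        # removes the name; the inode lives on
$ cat <&3          # data still readable through the open descriptor
hello
$ exec 3<&-        # close it; only now does the kernel free the inode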
I created a directory dir at Desktop and then I keyed in cd dir so as to make dir my current directory, and then I typed rmdir /home/user_name/Desktop/dir in the terminal from the dir directory itself. Surprisingly this removed the dir directory, but when I checked my current working directory using pwd it was still showing that I am in the dir directory. So my question is: how is it possible that I am working in a directory that has already been deleted? I am currently working on Ubuntu.
Trying to remove current directory using rmdir
You can try these commands instead:

sed -ri.bak "s#software/bwa/bwa-0.7.12/bwa#`which bwa`#g" mapping_arima.sh
sed -ri.bak "s#software/samtools/samtools-1.3.1/samtools#`which samtools`#g" mapping_arima.sh
I have a file which has the following content:

BWA='/software/bwa/bwa-0.7.12/bwa'
SAMTOOLS='/software/samtools/samtools-1.3.1/samtools'

The above tools are on my computer:

which bwa => /work/waterhouse_team/miniconda2/envs/arima/bin/bwa

and

pwd/hic-fq => /scratch/waterhouse_team/benth/dbg2olc-40x/hic-fq

Next, I used those two sed commands:

sed -i.bak 's|/software/bwa/bwa-0.7.12/bwa|$(which bwa)|g' mapping_arima.sh
sed -i.bak 's|/software/samtools/samtools-1.3.1/samtools|$(which samtools)|g' mapping_arima.sh

Unfortunately, as output I received:

BWA='$(which bwa)'
IN_DIR='$(`pwd`)/hic-fq'

How do I have to change the sed commands to get:

BWA='/work/waterhouse_team/miniconda2/envs/arima/bin/bwa'

and

IN_DIR=/scratch/waterhouse_team/benth/dbg2olc-40x/hic-fq

Thank you in advance
content from pwd and which failed with sed to be replaced [duplicate]
Why don't you just add test/ to the path of the file you're trying to create, i.e.

#!/bin/bash

> `pwd`/test/process_ids.txt
while true; do
    echo "The Process: `ps`" >> `pwd`/test/process_ids.txt
    #some code to parse the process, etc. and echo it
done
startScript.sh is in /root/script/startScript.sh
script1.sh is in /root/script/test/script1.sh
script2.sh is in /root/script/test/script2.sh

startScript.sh looks like below

#!/bin/bash
#some code!
sh `pwd`/test/script1.sh 2>&1 &
sh `pwd`/test/script2.sh 2>&1 &
#some code

script2.sh and script1.sh look like below

#!/bin/bash
> `pwd`/process_ids.txt
while true;do
    echo "The Process: `ps`" >> `pwd`/process_ids.txt
    #some code to parse the process, etc. and echo it
done

Here is the thing: the process_ids.txt file is created in /root/script. But according to scriptx.sh the pwd should be /root/script/test/. I displayed the pwd in scriptx.sh; it shows /root/script/. How can I get the pwd of scriptx.sh?
How to get `pwd` in the shell script that was started by another shell script [duplicate]
pwd will output the pathname of the current directory. When you are located in your home folder, pwd will return the pathname of it. In your case, the pathname of your home folder is /home/proteeti. This is your current working directory. Typing ls whilst located in that folder will show you its contents. You can not expect to find your home folder inside your home folder. What you could do to see the folder itself is to go up one level in the directory tree, with cd .. (or cd /home) and do ls there. Your home folder is the one with your username (proteeti). Use cd proteeti (or just cd) to get back to your home folder. The directory /home is the location where all users' home folders are located. On a multi-user system, you can expect /home to contain all home folders of all users, not only yours. In Unix parlance, the "root directory" is /. This directory is the top-most directory in the directory tree. It holds, apart from the /home directory, other directories that contain programs and libraries etc., installed by a system administrator. The / directory is not to be confused with /root, which is the special home directory of the root (administrator) user. Related:The Filesystem Hierarchy Standard Is the slash (/) part of the name of the Linux root directory?
I am writing pwd in the terminal, and it is showing /home/<my_username>. But I cannot physically find any directory with this path. To clear my confusion, I typed ls, but it shows the folders in my Home directory that I can see from the file manager. But I cannot find a folder named <my_username> inside Home. What am I missing here?

proteeti@proteeti-X556UQK:~$ pwd
/home/proteeti
proteeti@proteeti-X556UQK:~$ ls
Courses  Dev        Downloads         Music     Public     usr
Desktop  Documents  examples.desktop  Pictures  Templates  Videos

No folder named "Proteeti"
Difficulty in understanding pwd command
You can specify several commands, separated by ; or &&, as your cron job, for example:

* * * * * cd /some/path && foo

(This will only run foo if the cd was successful.)
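Applied to the script from the question, a crontab entry could look like this (path, schedule and script name are hypothetical):

*/5 * * * * cd /home/user/dir/subdir/script && ./myscript.sh >> /tmp/myscript.log 2>&1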
I have a script which, before being launched, checks via pwd whether the path upon launch is something specific (say dir/subdir/script):

current_folder=$(pwd | grep dir/subdir/script)
if [ "$current_folder" == "" ]; then
{
    echo "something bad"
    exit
}
fi

How can this script be launched via crontab? I cannot remove the check with pwd or amend the script content under any circumstance, since the script is continuously updated and any local change would be overwritten.
Thanks everyone
launch a script which requires to be launched from a specific path via crontab
I'd put something similar to the following code in .bashrc, without requiring other files to be opened and written each time you press the enter key:

SEP=("/" "/")
SEP_COLOR=("\e[0;34m" "\e[0;32m") #colors for: (FIXED - DEFAULT) SEPARATOR STRING
DIR_COLOR=("\e[0;32m" "\e[0;31m") #colors for: (FIXED - DEFAULT) DIR NAMES
CLOSE_COLOR="\e[0m"

FIXED_DIR="/var/www/html"
FIXED_DIR=$(realpath ${FIXED_DIR})
FIXED_DIR_ARRAY=()

DIR=${FIXED_DIR}
while [[ "$DIR" != "/" ]]; do
    B=$(basename -z $DIR)
    DIR=$(dirname -z $DIR)
    FIXED_DIR_ARRAY+=($B)
done

set_PS1 (){
    local DIR=$PWD
    local CUR_DIR_ARRAY=()

    while : ; do
        local B=$(basename -z $DIR)
        local DIR=$(dirname -z $DIR)
        CUR_DIR_ARRAY+=($B)
        [[ "$DIR" == "/" ]] && break
    done

    local SELECTOR=0
    local STR=""

    local i=1
    while [[ "$i" -le "${#CUR_DIR_ARRAY[@]}" ]] ; do
        if [ -n $SELECTOR ] && [ $i -gt ${#FIXED_DIR_ARRAY[@]} ] || [ "${CUR_DIR_ARRAY[-$i]}" != "${FIXED_DIR_ARRAY[-$i]}" ]; then
            SELECTOR=1
        fi
        local x=$(($SELECTOR%2));
        STR+="${SEP_COLOR[$x]}${SEP[$x]}"
        [[ "${CUR_DIR_ARRAY[-$i]}" != "${SEP[$x]}" ]] && STR+="${DIR_COLOR[$x]}${CUR_DIR_ARRAY[-$i]}"
        STR+="${CLOSE_COLOR}"
        ((i++))
    done
    printf "${STR}"
}

PS1="[ \u@\h:/ -> \[\$(set_PS1)\] ] "
I'm trying to display the path in colors but I have a problem with this. I have some constant path - in variable MyPath. I want to create a colorized path in PS1, but if pwd shows a different path than mine (but including MyPath) then I want to print the rest in a different color (with the slash in a different color). I wrote some code but I don't know how to apply this to PS1. It should be something like this:

[ [emailprotected]:/ -> /media/user/folder/ ]
# : cd /var/www/html
[ [emailprotected]:/ -> /var/www/html/ ]   (blue slash and green dir names)
# : cd applications
[ [emailprotected]:/ -> /var/www/html/applications/ ]   (blue slash and green dir names but last 2 slashes in green color and last dir "application" in red color)
# : cd tmp
[ [emailprotected]:/ -> /var/www/html/applications/tmp/ ]   (blue slash and green dir names but last 3 slashes in green color and 2 last dirs "application" and "tmp" in red color)

And I got stuck - I don't know how to do this. My code:

#!/bin/bash

MyPath="/var/www/html"
MyPathLength=$( echo ${MyPath} | wc -m)
CurrentPath="/var/www/html/functions/design"
slashColor="\[$(tput setaf 6)\]/\[$(tput sgr0)\]"
dirColor="\[$(tput setaf 2)\]"
path="";
FinalPath="";

for w in $(echo ${CurrentPath} | tr "/" " "); do
    path="${path}/${w}";
    pathLength=$( echo ${path} | wc -m)
    if [ "${pathLength}" == "${MyPathLength}" ]; then
        FinalPath="${FinalPath}\[$(tput setaf 6)\]/\[$(tput sgr0)\]\[$(tput setaf 2)\]$w\[$(tput sgr0)\]\[$(tput setaf 6)\]/\[$(tput sgr0)\]";
    elif [ "${pathLength}" -lt "${MyPathLength}" ]; then
        FinalPath="${FinalPath}\[$(tput setaf 6)\]/\[$(tput sgr0)\]\[$(tput setaf 2)\]$w\[$(tput sgr0)\]";
    elif [ "${pathLength}" -gt "${MyPathLength}" ]; then
        FinalPath="${FinalPath}\[$(tput setaf 1)\]$w\[$(tput sgr0)\]\[$(tput setaf 2)\]/\[$(tput sgr0)\]";
    fi;
done

echo "PS1=\"${FinalPath}\"" > /home/bashrc_split
cd "/";
(
    bash --rcfile /home/bashrc_split # I want to open new shell with new PS1
)

Any help?
colorized path in PS1
The way I worked this out was to have a start-up script that changes to the correct directory and then starts the game.
startup.sh:
#!/bin/bash
cd /path/to/game
game
cd "$OLDPWD"
And then in the .desktop file use:
Exec=/bin/bash /path/to/startup.sh
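As a variation on the same idea, the directory change can be done inline by a shell, skipping the wrapper file; a sketch with an illustrative path (per the Desktop Entry spec the Exec value uses double quotes, and a path containing spaces would need extra escaping):
# hypothetical one-liner: the shell changes directory, then launches the binary
Exec=/bin/sh -c "cd /path/to/game && exec ./game"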
I'm making some shortcuts to games I would normally run via terminal. For instance, UT2004:
cd "$HOME/Unreal Tournament 2004/System/"
./ut2004-bin-linux-amd64
My work so far:
[Desktop Entry]
Encoding=UTF-8
Version=1.0
Type=Application
Terminal=true
Path=/home/nick/Unreal Tournament 2004/System/
Exec="/home/nick/Unreal Tournament 2004/System/ut2004-bin-linux-amd64"
Name=UT2004
Icon=/home/nick/Unreal Tournament 2004/Help/UT2004Logo.png
Unlike Unreal, EDuke32 actually runs, however I can tell it does so in $HOME, and it starts littering it with log files. UT2004 doesn't start with the .desktop file at all. I figure both of these problems could be solved if there were a way to specify the starting path for each application. Unfortunately, I cannot cd ... && ./... in the .desktop files. How can I specify the "working directory" for each of these shortcuts?
GNOME ".desktop" shortcut: Specify Start-In path
Thanks to Paulo Tomé, I needed to change it to the following:
for file in "$(pwd)"/*
do
    echo file:"$file"
done
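The quoting is what matters here: an unquoted $(pwd) is word-split on spaces before the glob is expanded, while the quoted form keeps the path intact. Two equivalent sketches:
# same effect using the $PWD variable
for file in "$PWD"/*
do
    echo file:"$file"
done

# or avoid the absolute path entirely and glob the current directory
for file in ./*
do
    echo file:"$file"
done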
I have the following bit of code in a bash script I've created. When I run it from a directory with no spaces in the path it works as expected, however if I run it from a directory with spaces it fails. I'm pretty sure I need to escape it somehow, but nothing I've tried seems to work.
for file in `pwd`/*
do
    echo file:$file
done
By 'fails' I mean that if I run the command from the path "~/dir 1/ test" it would echo the following:
file: ~/dir
file: 1/
file: test
I'd expect it to list the files in the directory (which it does if there are no spaces in the current directory):
file: file 1
file: file 2
file: file 3
Note this is only an issue if the current directory has spaces in it; files with spaces in their names are processed fine. I've read several similar examples on here such as Why does my shell script choke on whitespace or other special characters? and Looping through files with spaces in the names? but I can't see how (if they are relevant) they apply to my specific example.
Can't loop files using pwd if current directory has spaces in the path [duplicate]
Using mdadm 3.3+
Since mdadm 3.3 (released 2013, Sep 3), if you have a 3.2+ kernel, you can proceed as follows:
# mdadm /dev/md0 --add /dev/sdc1
# mdadm /dev/md0 --replace /dev/sdd1 --with /dev/sdc1
sdd1 is the device you want to replace, sdc1 is the preferred device to do so and must be declared as a spare on your array. The --with option is optional; if not specified, any available spare will be used.
Older mdadm version
Note: You still need a 3.2+ kernel.
First, add a new drive as a spare (replace md0 and sdc1 with your RAID and disk device, respectively):
# mdadm /dev/md0 --add /dev/sdc1
Then, initiate a copy-replace operation like this (sdd1 being the failing device):
# echo want_replacement > /sys/block/md0/md/dev-sdd1/state
Result
The system will copy all readable blocks from sdd1 to sdc1. If it comes to an unreadable block, it will reconstruct it from parity. Once the operation is complete, the former spare (here: sdc1) will become active, and the failing drive will be marked as failed (F) so you can remove it.
Note: credit goes to frostschutz and Ansgar Esztermann who found the original solution (see the duplicate question).
Older kernels
Other answers suggest:
Johnny's approach: convert the array to RAID6, "replace" the disk, then convert back to RAID5,
Hauke Laging's approach: briefly remove the disk from the RAID5 array, make it part of a RAID1 (mirror) with the new disk and add that mirror drive back to the RAID5 array (theoretical)...
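Either way, the copy runs in the background; to follow its progress (same array name as above):
# watch the rebuild/replacement advance
watch -n 5 cat /proc/mdstat

# or query the array state directly
mdadm --detail /dev/md0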
I have a software RAID5 array (Linux md) on 4 disks. I would like to replace one of the disks with a new one, without putting the array in a degraded state, and if possible, online. How would that be possible? It's important because I don't want to:
take the risk of stressing the other disks so one may crash during rebuild,
take the risk of being in a "no-parity state" so I don't have a safety net for some time.
I suppose doing so online is asking too much and I should just raw-copy (dd) the data of the old disk to the new one offline and then replace it, but I think it is theoretically possible...
Some context: Those disks have all been spinning almost continuously for more than 5.5 years. They still work perfectly for the moment and they all pass the (long) SMART self-test. However, I have reasons to think that one of those 4 disks will not last much longer (supposed predictive failure).
How to safely replace a not-yet-failed disk in a Linux RAID5 array?
First check the disks; try running a SMART self-test:
for i in a b c d; do
    smartctl -s on -t long /dev/sd$i
done
It might take a few hours to finish, but check each drive's test status every few minutes, i.e.
smartctl -l selftest /dev/sda
If the status of a disk reports "not completed" because of read errors, then this disk should be considered unsafe for md1 reassembly. After the self-test finishes, you can start trying to reassemble your array. Optionally, if you want to be extra cautious, move the disks to another machine before continuing (just in case of bad RAM/controller/etc).
Recently, I had a case exactly like this one. One drive failed, I re-added it to the array, but during the rebuild 3 of 4 drives failed altogether. The contents of /proc/mdstat were the same as yours (maybe not in the same order):
md1 : inactive sdc2[2](S) sdd2[4](S) sdb2[1](S) sda2[0](S)
But I was lucky and reassembled the array with this:
mdadm --assemble /dev/md1 --scan --force
By looking at the --examine output you provided, I can tell the following scenario happened: sdd2 failed, you removed it and re-added it, so it became a spare drive trying to rebuild. But while rebuilding, sda2 failed and then sdb2 failed. So the events counter is bigger in sdc2 and sdd2, which are the last active drives in the array (although sdd didn't have the chance to rebuild and so it is the most outdated of all). Because of the differences in the event counters, --force will be necessary. So you could also try this:
mdadm --assemble /dev/md1 /dev/sd[abc]2 --force
To conclude, I think that if the above command fails, you should try to recreate the array like this:
mdadm --create /dev/md1 --assume-clean -l5 -n4 -c64 /dev/sd[abc]2 missing
If you do the --create, the missing part is important; don't try to add a fourth drive to the array, because then reconstruction will begin and you will lose your data. Creating the array with a missing drive will not change its contents and you'll have the chance to get a copy elsewhere (raid5 doesn't work the same way as raid1).
If that fails to bring the array up, try this solution (perl script) here: Recreating an array
If you finally manage to bring the array up, the filesystem will be unclean and probably corrupted. If one disk fails during rebuild, it is expected that the array will stop and freeze, not doing any writes to the other disks. In this case two disks failed; maybe the system was performing write requests that it wasn't able to complete, so there is some small chance you lost some data, but also a chance that you will never notice it :-)
edit: some clarification added.
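Before deciding on --force, it can help to compare the event counters side by side; a small convenience one-liner using the same devices:
# print each member device followed by its event counter
mdadm --examine /dev/sd[abcd]2 | grep -E '^/dev/|Events'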
Some time ago I had a RAID5 system at home. One of the 4 disks failed, but after removing it and putting it back it seemed to be OK, so I started a resync. When it finished I realized, to my horror, that 3 out of 4 disks had failed. However, I don't believe that's possible. There are multiple partitions on the disks, each part of a different RAID array.
md0 is a RAID1 array comprised of sda1, sdb1, sdc1 and sdd1.
md1 is a RAID5 array comprised of sda2, sdb2, sdc2 and sdd2.
md2 is a RAID0 array comprised of sda3, sdb3, sdc3 and sdd3.
md0 and md2 report all disks up, while md1 reports 3 failed (sdb2, sdc2, sdd2). It's my understanding that when hard drives fail all the partitions should be lost, not just the middle ones. At that point I turned the computer off and unplugged the drives. Since then I was using that computer with a smaller new disk.
Is there any hope of recovering the data? Can I somehow convince mdadm that my disks are in fact working? The only disk that may really have a problem is sdc, but that one too is reported up by the other arrays.
Update
I finally got a chance to connect the old disks and boot this machine from SystemRescueCd. Everything above was written from memory. Now I have some hard data. Here is the output of mdadm --examine /dev/sd*2
/dev/sda2:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 53eb7711:5b290125:db4a62ac:7770c5ea
  Creation Time : Sun May 30 21:48:55 2010
     Raid Level : raid5
  Used Dev Size : 625064960 (596.11 GiB 640.07 GB)
     Array Size : 1875194880 (1788.33 GiB 1920.20 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 1
    Update Time : Mon Aug 23 11:40:48 2010
          State : clean
 Active Devices : 3
Working Devices : 4
 Failed Devices : 1
  Spare Devices : 1
       Checksum : 68b48835 - correct
         Events : 53204
         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     0       8        2        0      active sync   /dev/sda2
   0     0       8        2        0      active sync   /dev/sda2
   1     1       8       18        1      active sync   /dev/sdb2
   2     2       8       34        2      active sync   /dev/sdc2
   3     3       0        0        3      faulty removed
   4     4       8       50        4      spare   /dev/sdd2

/dev/sdb2:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 53eb7711:5b290125:db4a62ac:7770c5ea
  Creation Time : Sun May 30 21:48:55 2010
     Raid Level : raid5
  Used Dev Size : 625064960 (596.11 GiB 640.07 GB)
     Array Size : 1875194880 (1788.33 GiB 1920.20 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 1
    Update Time : Mon Aug 23 11:44:54 2010
          State : clean
 Active Devices : 2
Working Devices : 3
 Failed Devices : 1
  Spare Devices : 1
       Checksum : 68b4894a - correct
         Events : 53205
         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     1       8       18        1      active sync   /dev/sdb2
   0     0       0        0        0      removed
   1     1       8       18        1      active sync   /dev/sdb2
   2     2       8       34        2      active sync   /dev/sdc2
   3     3       0        0        3      faulty removed
   4     4       8       50        4      spare   /dev/sdd2

/dev/sdc2:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 53eb7711:5b290125:db4a62ac:7770c5ea
  Creation Time : Sun May 30 21:48:55 2010
     Raid Level : raid5
  Used Dev Size : 625064960 (596.11 GiB 640.07 GB)
     Array Size : 1875194880 (1788.33 GiB 1920.20 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 1
    Update Time : Mon Aug 23 11:44:54 2010
          State : clean
 Active Devices : 1
Working Devices : 2
 Failed Devices : 2
  Spare Devices : 1
       Checksum : 68b48975 - correct
         Events : 53210
         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     2       8       34        2      active sync   /dev/sdc2
   0     0       0        0        0      removed
   1     1       0        0        1      faulty removed
   2     2       8       34        2      active sync   /dev/sdc2
   3     3       0        0        3      faulty removed
   4     4       8       50        4      spare   /dev/sdd2

/dev/sdd2:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 53eb7711:5b290125:db4a62ac:7770c5ea
  Creation Time : Sun May 30 21:48:55 2010
     Raid Level : raid5
  Used Dev Size : 625064960 (596.11 GiB 640.07 GB)
     Array Size : 1875194880 (1788.33 GiB 1920.20 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 1
    Update Time : Mon Aug 23 11:44:54 2010
          State : clean
 Active Devices : 1
Working Devices : 2
 Failed Devices : 2
  Spare Devices : 1
       Checksum : 68b48983 - correct
         Events : 53210
         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     4       8       50        4      spare   /dev/sdd2
   0     0       0        0        0      removed
   1     1       0        0        1      faulty removed
   2     2       8       34        2      active sync   /dev/sdc2
   3     3       0        0        3      faulty removed
   4     4       8       50        4      spare   /dev/sdd2
It appears that things have changed since the last boot. If I'm reading this correctly, sda2, sdb2 and sdc2 are working and contain synchronized data, and sdd2 is a spare. I distinctly remember seeing 3 failed disks, but this is good news. Yet the array still isn't working:
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md125 : inactive sda2[0](S) sdb2[1](S) sdc2[2](S)
      1875194880 blocks

md126 : inactive sdd2[4](S)
      625064960 blocks

md127 : active raid1 sda1[0] sdd1[3] sdc1[2] sdb1[1]
      64128 blocks [4/4] [UUUU]

unused devices: <none>
md0 appears to have been renamed to md127. md125 and md126 are very strange. They should be one array, not two. That used to be called md1. md2 is completely gone, but that was my swap so I don't care.
I can understand the different names and it doesn't really matter. But why is an array with 3 "active sync" disks unreadable? And what's up with sdd2 being in a separate array?
Update
I tried the following after backing up the superblocks:
root@sysresccd /root % mdadm --stop /dev/md125
mdadm: stopped /dev/md125
root@sysresccd /root % mdadm --stop /dev/md126
mdadm: stopped /dev/md126
So far so good. Since sdd2 is a spare, I don't want to add it yet.
root@sysresccd /root % mdadm --assemble /dev/md1 /dev/sd{a,b,c}2 missing
mdadm: cannot open device missing: No such file or directory
mdadm: missing has no superblock - assembly aborted
Apparently I can't do that.
root@sysresccd /root % mdadm --assemble /dev/md1 /dev/sd{a,b,c}2
mdadm: /dev/md1 assembled from 1 drive - not enough to start the array.
root@sysresccd /root % cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : inactive sdc2[2](S) sdb2[1](S) sda2[0](S)
      1875194880 blocks

md127 : active raid1 sda1[0] sdd1[3] sdc1[2] sdb1[1]
      64128 blocks [4/4] [UUUU]

unused devices: <none>
That didn't work either. Let's try with all the disks.
mdadm --stop /dev/md1
mdadm: stopped /dev/md1
root@sysresccd /root % mdadm --assemble /dev/md1 /dev/sd{a,b,c,d}2
mdadm: /dev/md1 assembled from 1 drive and 1 spare - not enough to start the array.
root@sysresccd /root % cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : inactive sdc2[2](S) sdd2[4](S) sdb2[1](S) sda2[0](S)
      2500259840 blocks

md127 : active raid1 sda1[0] sdd1[3] sdc1[2] sdb1[1]
      64128 blocks [4/4] [UUUU]

unused devices: <none>
No luck. Based on this answer I'm planning to try:
mdadm --create /dev/md1 --assume-clean --metadata=0.90 --bitmap=/root/bitmapfile --level=5 --raid-devices=4 /dev/sd{a,b,c}2 missing
mdadm --add /dev/md1 /dev/sdd2
Is it safe?
Update
I published the superblock parser script I used to make that table in my comment. Maybe someone will find it useful. Thanks for all your help.
How to recover a crashed Linux md RAID5 array?
Unmounting (filesystems) is not sufficient. You'd have to stop the array, then re-assemble it afterwards:
mdadm --stop /dev/md0
# re-arrange / hotplug drives
mdadm --stop /dev/md0     # (*)
mdadm --assemble /dev/md0
It makes sense to check journalctl / dmesg, and/or cat /proc/partitions / lsblk, to make sure the drives got re-detected fine before attempting to assemble it.
(*) On many modern Linux systems, there is some md auto-assembly magic going on in udev (/usr/lib/udev/rules.d/*md-raid*.rules), so you might end up with a stale /dev/md0 if you only hotplug a single drive. In that case you actually have to stop it again before assembling (or re-trigger udev rules for drives that didn't get hotplugged, or use mdadm's incremental assembly commands to complete it), but stopping it a 2nd time is simpler; that's why mdadm --stop is used twice, both before and after hotplugging a drive.
In some cases mdadm.conf is too verbose and restricts the devices or lists individual drives for each array. This can prevent successful assembly, so if there are still problems, it would be the next place to check. Keep your mdadm.conf as simple as possible (it really only needs to know the UUID for each array).
If you have extra drives available, and don't mind resyncing the array, you can do the whole process online, without losing redundancy, using the mdadm --replace mechanism. This way you could swap slots without unmounting or stopping anything.
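A sketch of that --replace route with placeholder device names (sdX1 is the new drive, sdY1 the one you want to move out; this needs a reasonably recent mdadm/kernel):
# add the new drive as a spare, then migrate the data onto it online
mdadm /dev/md0 --add /dev/sdX1
mdadm /dev/md0 --replace /dev/sdY1 --with /dev/sdX1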
I have a healthy RAID5 array with 5 disks:
# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid5 sdb1[6] sdd1[0] sdh1[5] sdf1[2] sde1[1]
      31255166976 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/5] [UUUUU]
      bitmap: 0/59 pages [0KB], 65536KB chunk

unused devices: <none>
I would like to move one disk to a different physical slot on the server, without shutting down the server (the slots do support hot-swapping). Can I safely unmount the array, move the disk and re-mount the array, without it going into degraded mode?
Can a RAID5 disk be moved to a different slot?
It kind of should work like this:
# mdadm --manage /dev/md42 --readonly --add-journal /dev/loop3
mdadm: Journal added successfully, making /dev/md42 read-write
mdadm: added /dev/loop3
However, currently (using kernel 4.18, mdadm 4.1-rc) that only seems to be possible for arrays that were created with a journal in the first place. The above output was produced after:
# mdadm --create /dev/md42 --level=5 --raid-devices=3 /dev/loop[012] --write-journal /dev/loop3
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md42 started.
# mdadm --manage /dev/md42 --fail /dev/loop3 --remove /dev/loop3
mdadm: set /dev/loop3 faulty in /dev/md42
mdadm: hot removed /dev/loop3 from /dev/md42
Creating an array without a journal, all attempts to add a journal fail:
# mdadm --create /dev/md42 --level=5 --raid-devices=3 /dev/loop[012]
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md42 started.
# mdadm --manage /dev/md42 --readonly --add-journal /dev/loop3
mdadm: /dev/md42 does not support journal device.
# mdadm --manage /dev/md42 --readwrite --add /dev/loop3
# echo journal > /sys/block/md42/md/dev-loop3/state
bash: echo: write error: Invalid argument
So it just doesn't seem to be possible yet. I have found a discussion on the linux-raid mailing list that this is a planned feature. If it has been implemented since, I don't see how. Perhaps contact the mailing list yourself to remind mdadm devs there are people who want this to work!
You might have to resort to mdadm --create to re-create the raid, or edit the metadata of the array. Either option is a bit dangerous.
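For illustration only, a sketch of that --create fallback using the array and journal device names from the question below; the geometry values here are placeholders that must be read from mdadm --detail / --examine of the existing array first, and any mismatch destroys the data:
# DANGEROUS: re-create the array in place with identical geometry, this time with a journal
mdadm --create /dev/md1 --assume-clean --level=5 --raid-devices=4 \
      --chunk=512 /dev/sd[abcd]1 --write-journal /dev/ssd/md1-journal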
I have a raid5 array with quite large disks, so reconstruction is really slow in case of a power outage. Thankfully, there is the --write-journal option for linux md raid. The man page lists the --write-journal option in the "For create, build, or grow:" section, so I supposed it should work in grow mode, and tried to add a write journal on the fly:
# mdadm --grow /dev/md1 --write-journal /dev/ssd/md1-journal
mdadm: :option --write-journal not valid in grow mode
Does anyone know whether I can add a write journal to an existing array? And if so, how?
Add linux md raid write journal to an existing array
OK, it looks like we now have access to the raid. At least the first files we checked looked good. So here is what we have done:
The raid recovery article on the kernel.org wiki suggests two possible solutions for our problem:
using --assemble --force (also mentioned by derobert)
The article says:
[...] If the event count differs by less than 50, then the information on the drive is probably still ok. [...] If the event count closely matches but not exactly, use "mdadm --assemble --force /dev/mdX " to force mdadm to assemble the array [...]. If the event count of a drive is way off [...] that drive [...] shouldn't be included in the assembly.
In our case the drive sde had an event difference of 9. So there was a good chance that --force would work. However, after we executed the --add command the event count dropped to 0 and the drive was marked as spare. So we refrained from using --force.
recreate the array
This solution is explicitly marked as dangerous because you can lose data if you do something wrong. However, this seemed to be the only option we had. The idea is to create a new raid on the existing raid devices (that is, overwriting the devices' superblocks) with the same configuration as the old raid, and to explicitly tell mdadm that the raid already existed and should be assumed clean. Since the event count difference was just 9 and the only problem was that we lost the superblock of sde, there were good chances that writing new superblocks would get us access to our data... and it worked :-)
Our solution
Note: This solution was specially geared to our problem and may not work on your setup. You should take these notes to get an idea of how things can be done. But you need to research what's best in your case.
Backup
We had already lost a superblock. So this time we saved the first and last gigabyte of each raid device (sd[acdefghij]) using dd before working on the raid. We did this for each raid device:
# save the first gigabyte of sda
dd if=/dev/sda of=bak_sda_start bs=4096 count=262144

# determine the size of the device
fdisk -l /dev/sda
# In this case the size was 4000787030016 byte.

# To get the last gigabyte we need to skip everything except the last gigabyte.
# So we need to skip: 4000787030016 byte - 1073741824 byte = 3999713288000 byte
# Since we read blocks of 4096 byte we need to skip 3999713288000/4096=976492502 blocks.
dd if=/dev/sda of=bak_sda_end bs=4096 skip=976492502
Gather information
When recreating the raid it is important to use the same configuration as the old raid. This is especially important if you want to recreate the array on another machine using a different mdadm version. In this case mdadm's default values may be different and could create superblocks that do not fit the existing raid (see the wiki article). In our case we used the same machine (and thus the same mdadm version) to recreate the array. However, the array was created by a 3rd-party tool in the first place. So we didn't want to rely on default values here and had to gather some information about the existing raid.
From the output of mdadm --examine /dev/sd[acdefghij] we get the following information about the raid (Note: sdb was the ssd containing the OS and was not part of the raid):
     Raid Level : raid5
   Raid Devices : 9
  Used Dev Size : 7814034432 (3726.02 GiB 4000.79 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
         Layout : left-symmetric
     Chunk Size : 512K
    Device Role : Active device 0
The Used Dev Size is denominated in blocks of 512 bytes. You can check this: 7814034432*512/1000000000 ~= 4000.79
But mdadm requires the size in Kibibytes: 7814034432*512/1024 = 3907017216
Important is the Device Role. In the new raid each device must have the same role as before. In our case:
device  role
------  ----
sda     0
sdc     1
sdd     2
sde     3
sdf     4
sdg     5
sdh     6
sdi     spare
sdj     8
Note: Drive letters (and thus the order) can change after reboot!
We also need the layout and the chunk size in the next step.
Recreate raid
We can now use the information from the last step to recreate the array:
mdadm --create --assume-clean --level=5 --raid-devices=9 --size=3907017216 \
    --chunk=512 --layout=left-symmetric /dev/md127 /dev/sda /dev/sdc /dev/sdd \
    /dev/sde /dev/sdf /dev/sdg /dev/sdh missing /dev/sdj
It is important to pass the devices in the correct order!
Moreover, we did not add sdi as its event count was too low. So we set the 7th raid slot to missing. Thus the raid5 contains 8 of 9 devices and is assembled in degraded mode. And because it lacks a spare device, no rebuild will automatically start.
Then we used --examine to check if the new superblocks fit our old superblocks. And they did :-) We were able to mount the filesystem and read the data. The next step is to back up the data, then add back sdi and start the rebuild.
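A sketch of that closing step with the same device names, to be run only once the backup is safely elsewhere:
# return the outdated disk to the array; the rebuild onto it starts automatically
mdadm --manage /dev/md127 --add /dev/sdi

# and watch it proceed
cat /proc/mdstat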
A friend of mine has an mdadm raid5 with 9 disks which does not reassemble anymore. After having a look at the syslog I found that the disk sdi was kicked from the array:
Jul 6 08:43:25 nasty kernel: [ 12.952194] md: bind<sdc>
Jul 6 08:43:25 nasty kernel: [ 12.952577] md: bind<sdd>
Jul 6 08:43:25 nasty kernel: [ 12.952683] md: bind<sde>
Jul 6 08:43:25 nasty kernel: [ 12.952784] md: bind<sdf>
Jul 6 08:43:25 nasty kernel: [ 12.952885] md: bind<sdg>
Jul 6 08:43:25 nasty kernel: [ 12.952981] md: bind<sdh>
Jul 6 08:43:25 nasty kernel: [ 12.953078] md: bind<sdi>
Jul 6 08:43:25 nasty kernel: [ 12.953169] md: bind<sdj>
Jul 6 08:43:25 nasty kernel: [ 12.953288] md: bind<sda>
Jul 6 08:43:25 nasty kernel: [ 12.953308] md: kicking non-fresh sdi from array!
Jul 6 08:43:25 nasty kernel: [ 12.953314] md: unbind<sdi>
Jul 6 08:43:25 nasty kernel: [ 12.960603] md: export_rdev(sdi)
Jul 6 08:43:25 nasty kernel: [ 12.969675] raid5: device sda operational as raid disk 0
Jul 6 08:43:25 nasty kernel: [ 12.969679] raid5: device sdj operational as raid disk 8
Jul 6 08:43:25 nasty kernel: [ 12.969682] raid5: device sdh operational as raid disk 6
Jul 6 08:43:25 nasty kernel: [ 12.969684] raid5: device sdg operational as raid disk 5
Jul 6 08:43:25 nasty kernel: [ 12.969687] raid5: device sdf operational as raid disk 4
Jul 6 08:43:25 nasty kernel: [ 12.969689] raid5: device sde operational as raid disk 3
Jul 6 08:43:25 nasty kernel: [ 12.969692] raid5: device sdd operational as raid disk 2
Jul 6 08:43:25 nasty kernel: [ 12.969694] raid5: device sdc operational as raid disk 1
Jul 6 08:43:25 nasty kernel: [ 12.970536] raid5: allocated 9542kB for md127
Jul 6 08:43:25 nasty kernel: [ 12.973975] 0: w=1 pa=0 pr=9 m=1 a=2 r=9 op1=0 op2=0
Jul 6 08:43:25 nasty kernel: [ 12.973980] 8: w=2 pa=0 pr=9 m=1 a=2 r=9 op1=0 op2=0
Jul 6 08:43:25 nasty kernel: [ 12.973983] 6: w=3 pa=0 pr=9 m=1 a=2 r=9 op1=0 op2=0
Jul 6 08:43:25 nasty kernel: [ 12.973986] 5: w=4 pa=0 pr=9 m=1 a=2 r=9 op1=0 op2=0
Jul 6 08:43:25 nasty kernel: [ 12.973989] 4: w=5 pa=0 pr=9 m=1 a=2 r=9 op1=0 op2=0
Jul 6 08:43:25 nasty kernel: [ 12.973992] 3: w=6 pa=0 pr=9 m=1 a=2 r=9 op1=0 op2=0
Jul 6 08:43:25 nasty kernel: [ 12.973996] 2: w=7 pa=0 pr=9 m=1 a=2 r=9 op1=0 op2=0
Jul 6 08:43:25 nasty kernel: [ 12.973999] 1: w=8 pa=0 pr=9 m=1 a=2 r=9 op1=0 op2=0
Jul 6 08:43:25 nasty kernel: [ 12.974002] raid5: raid level 5 set md127 active with 8 out of 9 devices, algorithm 2
Unfortunately this wasn't recognized, and now another drive was kicked (sde):
Jul 14 08:02:45 nasty kernel: [ 12.918556] md: bind<sdc>
Jul 14 08:02:45 nasty kernel: [ 12.919043] md: bind<sdd>
Jul 14 08:02:45 nasty kernel: [ 12.919158] md: bind<sde>
Jul 14 08:02:45 nasty kernel: [ 12.919260] md: bind<sdf>
Jul 14 08:02:45 nasty kernel: [ 12.919361] md: bind<sdg>
Jul 14 08:02:45 nasty kernel: [ 12.919461] md: bind<sdh>
Jul 14 08:02:45 nasty kernel: [ 12.919556] md: bind<sdi>
Jul 14 08:02:45 nasty kernel: [ 12.919641] md: bind<sdj>
Jul 14 08:02:45 nasty kernel: [ 12.919756] md: bind<sda>
Jul 14 08:02:45 nasty kernel: [ 12.919775] md: kicking non-fresh sdi from array!
Jul 14 08:02:45 nasty kernel: [ 12.919781] md: unbind<sdi>
Jul 14 08:02:45 nasty kernel: [ 12.928177] md: export_rdev(sdi)
Jul 14 08:02:45 nasty kernel: [ 12.928187] md: kicking non-fresh sde from array!
Jul 14 08:02:45 nasty kernel: [ 12.928198] md: unbind<sde>
Jul 14 08:02:45 nasty kernel: [ 12.936064] md: export_rdev(sde)
Jul 14 08:02:45 nasty kernel: [ 12.943900] raid5: device sda operational as raid disk 0
Jul 14 08:02:45 nasty kernel: [ 12.943904] raid5: device sdj operational as raid disk 8
Jul 14 08:02:45 nasty kernel: [ 12.943907] raid5: device sdh operational as raid disk 6
Jul 14 08:02:45 nasty kernel: [ 12.943909] raid5: device sdg operational as raid disk 5
Jul 14 08:02:45 nasty kernel: [ 12.943911] raid5: device sdf operational as raid disk 4
Jul 14 08:02:45 nasty kernel: [ 12.943914] raid5: device sdd operational as raid disk 2
Jul 14 08:02:45 nasty kernel: [ 12.943916] raid5: device sdc operational as raid disk 1
Jul 14 08:02:45 nasty kernel: [ 12.944776] raid5: allocated 9542kB for md127
Jul 14 08:02:45 nasty kernel: [ 12.944861] 0: w=1 pa=0 pr=9 m=1 a=2 r=9 op1=0 op2=0
Jul 14 08:02:45 nasty kernel: [ 12.944864] 8: w=2 pa=0 pr=9 m=1 a=2 r=9 op1=0 op2=0
Jul 14 08:02:45 nasty kernel: [ 12.944867] 6: w=3 pa=0 pr=9 m=1 a=2 r=9 op1=0 op2=0
Jul 14 08:02:45 nasty kernel: [ 12.944871] 5: w=4 pa=0 pr=9 m=1 a=2 r=9 op1=0 op2=0
Jul 14 08:02:45 nasty kernel: [ 12.944874] 4: w=5 pa=0 pr=9 m=1 a=2 r=9 op1=0 op2=0
Jul 14 08:02:45 nasty kernel: [ 12.944877] 2: w=6 pa=0 pr=9 m=1 a=2 r=9 op1=0 op2=0
Jul 14 08:02:45 nasty kernel: [ 12.944879] 1: w=7 pa=0 pr=9 m=1 a=2 r=9 op1=0 op2=0
Jul 14 08:02:45 nasty kernel: [ 12.944882] raid5: not enough operational devices for md127 (2/9 failed)
And now the array does not start anymore. However, it seems that every disk contains the raid metadata:
/dev/sda:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : b8a04dbb:0b5dffda:601eb40d:d2dc37c9
           Name : nasty:stuff (local to host nasty)
  Creation Time : Sun Mar 16 02:37:47 2014
     Raid Level : raid5
   Raid Devices : 9
 Avail Dev Size : 7814035120 (3726.02 GiB 4000.79 GB)
     Array Size : 62512275456 (29808.18 GiB 32006.29 GB)
  Used Dev Size : 7814034432 (3726.02 GiB 4000.79 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 8600bda9:18845be8:02187ecc:1bfad83a
    Update Time : Mon Jul 14 00:45:35 2014
       Checksum : e38d46e8 - correct
         Events : 123132
         Layout : left-symmetric
     Chunk Size : 512K
    Device Role : Active device 0
    Array State : AAA.AAA.A ('A' == active, '.' == missing)

/dev/sdc:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : b8a04dbb:0b5dffda:601eb40d:d2dc37c9
           Name : nasty:stuff (local to host nasty)
  Creation Time : Sun Mar 16 02:37:47 2014
     Raid Level : raid5
   Raid Devices : 9
 Avail Dev Size : 7814035120 (3726.02 GiB 4000.79 GB)
     Array Size : 62512275456 (29808.18 GiB 32006.29 GB)
  Used Dev Size : 7814034432 (3726.02 GiB 4000.79 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : fe612c05:f7a45b0a:e28feafe:891b2bda
    Update Time : Mon Jul 14 00:45:35 2014
       Checksum : 32bb628e - correct
         Events : 123132
         Layout : left-symmetric
     Chunk Size : 512K
    Device Role : Active device 1
    Array State : AAA.AAA.A ('A' == active, '.' == missing)

/dev/sdd:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : b8a04dbb:0b5dffda:601eb40d:d2dc37c9
           Name : nasty:stuff (local to host nasty)
  Creation Time : Sun Mar 16 02:37:47 2014
     Raid Level : raid5
   Raid Devices : 9
 Avail Dev Size : 7814035120 (3726.02 GiB 4000.79 GB)
     Array Size : 62512275456 (29808.18 GiB 32006.29 GB)
  Used Dev Size : 7814034432 (3726.02 GiB 4000.79 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 1d14616c:d30cadc7:6d042bb3:0d7f6631
    Update Time : Mon Jul 14 00:45:35 2014
       Checksum : 62bd5499 - correct
         Events : 123132
         Layout : left-symmetric
     Chunk Size : 512K
    Device Role : Active device 2
    Array State : AAA.AAA.A ('A' == active, '.' == missing)

/dev/sde:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : b8a04dbb:0b5dffda:601eb40d:d2dc37c9
           Name : nasty:stuff (local to host nasty)
  Creation Time : Sun Mar 16 02:37:47 2014
     Raid Level : raid5
   Raid Devices : 9
 Avail Dev Size : 7814035120 (3726.02 GiB 4000.79 GB)
     Array Size : 62512275456 (29808.18 GiB 32006.29 GB)
  Used Dev Size : 7814034432 (3726.02 GiB 4000.79 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : active
    Device UUID : a2babca3:1283654a:ef8075b5:aaf5d209
    Update Time : Mon Jul 14 00:45:07 2014
       Checksum : f78d6456 - correct
         Events : 123123
         Layout : left-symmetric
     Chunk Size : 512K
    Device Role : Active device 3
    Array State : AAAAAAA.A ('A' == active, '.' == missing)

/dev/sdf:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : b8a04dbb:0b5dffda:601eb40d:d2dc37c9
           Name : nasty:stuff (local to host nasty)
  Creation Time : Sun Mar 16 02:37:47 2014
     Raid Level : raid5
   Raid Devices : 9
 Avail Dev Size : 7814035120 (3726.02 GiB 4000.79 GB)
     Array Size : 62512275456 (29808.18 GiB 32006.29 GB)
  Used Dev Size : 7814034432 (3726.02 GiB 4000.79 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : e67d566d:92aaafb4:24f5f16e:5ceb0db7
    Update Time : Mon Jul 14 00:45:35 2014
       Checksum : 9223b929 - correct
         Events : 123132
         Layout : left-symmetric
     Chunk Size : 512K
    Device Role : Active device 4
    Array State : AAA.AAA.A ('A' == active, '.' == missing)

/dev/sdg:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : b8a04dbb:0b5dffda:601eb40d:d2dc37c9
           Name : nasty:stuff (local to host nasty)
  Creation Time : Sun Mar 16 02:37:47 2014
     Raid Level : raid5
   Raid Devices : 9
 Avail Dev Size : 7814035120 (3726.02 GiB 4000.79 GB)
     Array Size : 62512275456 (29808.18 GiB 32006.29 GB)
  Used Dev Size : 7814034432 (3726.02 GiB 4000.79 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 2cee1d71:16c27acc:43e80d02:1da74eeb
    Update Time : Mon Jul 14 00:45:35 2014
       Checksum : 7512efd4 - correct
         Events : 123132
         Layout : left-symmetric
     Chunk Size : 512K
    Device Role : Active device 5
    Array State : AAA.AAA.A ('A' == active, '.' == missing)

/dev/sdh:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : b8a04dbb:0b5dffda:601eb40d:d2dc37c9
           Name : nasty:stuff (local to host nasty)
  Creation Time : Sun Mar 16 02:37:47 2014
     Raid Level : raid5
   Raid Devices : 9
 Avail Dev Size : 7814035120 (3726.02 GiB 4000.79 GB)
     Array Size : 62512275456 (29808.18 GiB 32006.29 GB)
  Used Dev Size : 7814034432 (3726.02 GiB 4000.79 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : c239f0ad:336cdb88:62c5ff46:c36ea5f8
    Update Time : Mon Jul 14 00:45:35 2014
       Checksum : c08e8a4d - correct
         Events : 123132
         Layout : left-symmetric
     Chunk Size : 512K
    Device Role : Active device 6
    Array State : AAA.AAA.A ('A' == active, '.' == missing)

/dev/sdi:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : b8a04dbb:0b5dffda:601eb40d:d2dc37c9
           Name : nasty:stuff (local to host nasty)
  Creation Time : Sun Mar 16 02:37:47 2014
     Raid Level : raid5
   Raid Devices : 9
 Avail Dev Size : 7814035120 (3726.02 GiB 4000.79 GB)
     Array Size : 62512275456 (29808.18 GiB 32006.29 GB)
  Used Dev Size : 7814034432 (3726.02 GiB 4000.79 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : active
    Device UUID : d06c58f8:370a0535:b7e51073:f121f58c
    Update Time : Mon Jul 14 00:45:07 2014
       Checksum : 77844dcc - correct
         Events : 0
         Layout : left-symmetric
     Chunk Size : 512K
    Device Role : spare
    Array State : AAAAAAA.A ('A' == active, '.' == missing)

/dev/sdj:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : b8a04dbb:0b5dffda:601eb40d:d2dc37c9
           Name : nasty:stuff (local to host nasty)
  Creation Time : Sun Mar 16 02:37:47 2014
     Raid Level : raid5
   Raid Devices : 9
 Avail Dev Size : 7814035120 (3726.02 GiB 4000.79 GB)
     Array Size : 62512275456 (29808.18 GiB 32006.29 GB)
  Used Dev Size : 7814034432 (3726.02 GiB 4000.79 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : f2de262f:49d17fea:b9a475c1:b0cad0b7
    Update Time : Mon Jul 14 00:45:35 2014
       Checksum : dd0acfd9 - correct
         Events : 123132
         Layout : left-symmetric
     Chunk Size : 512K
    Device Role : Active device 8
    Array State : AAA.AAA.A ('A' == active, '.' == missing)
But as you can see, the two drives (sde, sdi) are in active state (but the raid is stopped) and sdi is a spare. While sde has a slightly lower Events count than most of the other drives (123123 instead of 123132), sdi has an Events count of 0. So I think sde is almost up to date. But sdi is not...
Now we read online that a hard power-off could cause these "kicking non-fresh" messages. And indeed my friend caused a hard power-off one or two times. So we followed the instructions we found online and tried to re-add sde to the array:
$ mdadm /dev/md127 --add /dev/sde
mdadm: add new device failed for /dev/sde as 9: Invalid argument
But that failed, and now mdadm --examine /dev/sde shows an Events count of 0 for sde too (+ it's a spare now like sdi):
/dev/sde:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : b8a04dbb:0b5dffda:601eb40d:d2dc37c9
           Name : nasty:stuff (local to host nasty)
  Creation Time : Sun Mar 16 02:37:47 2014
     Raid Level : raid5
   Raid Devices : 9
 Avail Dev Size : 7814035120 (3726.02 GiB 4000.79 GB)
     Array Size : 62512275456 (29808.18 GiB 32006.29 GB)
  Used Dev Size : 7814034432 (3726.02 GiB 4000.79 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 689e0030:142122ae:7ab37935:c80ab400
    Update Time : Mon Jul 14 00:45:35 2014
       Checksum : 5e6c4cf7 - correct
         Events : 0
         Layout : left-symmetric
     Chunk Size : 512K
    Device Role : spare
    Array State : AAA.AAA.A ('A' == active, '.' == missing)
We know that 2 failed drives usually means the death of a raid5. However, is there a way to add at least sde back to the raid so that data can be saved?
Reassemble mdadm-raid5
You have two 3TB disks storing 6TB of data. You want to install one new 3TB disk. This will allow you to convert the three 3TB disks into a RAID5 array storing 6TB of data. However, the process is rather fiddly and the opportunities for losing data somewhere along the route are fairly large.
Steps to achieve the requirement
Let's declare the disks as sda (contains data), sdb (contains data), sdc (new).
If at all possible take a backup of all your data, even if you have to borrow a disk from a friend for a few days
Create a RAID1 array on the new disk sdc. It should have two members, of which one is missing
Create a filesystem on this RAID1 array
Copy the data from sdb to the new RAID1 array
Verify that you have correctly copied the data
Add sdb to the RAID1 array
Wait for the synchronisation to complete
Reboot
Grow the RAID1 array to RAID5. It should have three members, of which one is missing. To do this you will need 128K of temporary space on an additional disk. You might want to use a USB thumbstick for this. Do not use a RAM disk.
Copy the data from sda to the new RAID5 array
Verify that you have correctly copied the data
Add sda to the RAID5 array
Reboot
Worked example
Here is a worked example using three files as disk images.
# Prepare the demonstration
#
dd if=/dev/zero bs=1M count=100 of=sda.img
dd if=/dev/zero bs=1M count=100 of=sdb.img
( echo n; echo p; echo 1; echo; echo; echo w ) | fdisk sda.img    # One primary partition
( echo n; echo p; echo 1; echo; echo; echo w ) | fdisk sdb.img    # One primary partition
losetup --show --find --partscan sda.img
losetup --show --find --partscan sdb.img

# At this point we have /dev/loop0 representing the first disk sda, with /dev/loop0p1
# equivalent to a disk partition sda1. Also /dev/loop1 representing the second disk.

mkfs -t ext4 -L sda /dev/loop0p1
mkfs -t ext4 -L sdb /dev/loop1p1
mkdir -p /mnt/sda1 /mnt/sdb1
mount /dev/loop0p1 /mnt/sda1
mount /dev/loop1p1 /mnt/sdb1
cp -a /usr/local/man/. /mnt/sda1/u.l.man/
mkdir /mnt/sdb1/u.l.etc
cp -a /usr/local/bin/. /mnt/sdb1/u.l.bin/
df -h | grep mnt
umount /mnt/sda1
umount /mnt/sdb1

# Create the third disk
#
dd if=/dev/zero bs=1M count=100 of=sdc.img
( echo n; echo p; echo 1; echo; echo; echo w ) | fdisk sdc.img    # One primary partition
losetup --show --find --partscan sdc.img

# Create the RAID1 array and its filesystem
#
mdadm --create /dev/md1 --level=1 --raid-devices=2 --metadata=default /dev/loop2p1 missing
mkfs -t ext4 -L md1 /dev/md1
mkdir -p /mnt/md1

# Now /dev/loop2 is equivalent to third disk sdc, and /dev/loop2p1 represents sdc1

# Copy the data from sdb to md1
#
mount /dev/loop1p1 /mnt/sdb1
mount /dev/md1 /mnt/md1
cp -a /mnt/sdb1/. /mnt/md1/
umount /mnt/sdb1
umount /mnt/md1

# Complete the RAID1 array
#
mdadm --manage /dev/md1 --add /dev/loop1p1

# Grow the RAID1 array to RAID5
#
mdadm --grow /dev/md1 --level=5 --raid-devices=3 --backup-file=/root/workarea.dat --force
e2fsck -f /dev/md1
resize2fs /dev/md1

# Copy the data from sda to md1
#
mount /dev/loop0p1 /mnt/sda1
mount /dev/md1 /mnt/md1
cp -a /mnt/sda1/. /mnt/md1/
umount /mnt/sda1
umount /mnt/md1

# Add the remaining disk to the RAID5 array
#
mdadm --manage /dev/md1 --add /dev/loop0p1

# All done
#
mdadm --stop /dev/md1
losetup -d /dev/loop0
losetup -d /dev/loop1
losetup -d /dev/loop2
rm sda.img sdb.img sdc.img
You really should ensure you understand the worked example BEFORE touching the live data on your disks. Needless to say, it's your responsibility and I really would recommend a backup before you change your live system.
I currently have 2 3TB HDDs, one that is almost always full, and another with ~200GB free. I would like to purchase an extra 3TB drive and set up a RAID 5 array, but I am concerned about losing the existing data. I have found that mdadm will be used to create the array, with a command similar to:
mdadm --create --verbose /dev/md0 --level=5 --raid-devices=2 /dev/sdb2 /dev/sdc2 --spare-devices=1 /dev/sdd2
where /dev/sdb2 and /dev/sdc2 are my existing drives (that have data) and /dev/sdd2 is a new 3TB drive with no data on it. Will this cause me to lose the data on /dev/sdb2 and /dev/sdc2?
My other idea was to somehow create a 2x3TB RAID 5 array without a spare device, where one of the drives in the array is empty and the other has data. Then I could copy my files over from the existing drive to the new (6TB) array, wipe the now redundant drive, and then add it as the spare drive for the array. Although I doubt this would work?
If neither of the above options will work, is there another way to create a RAID 5 array with 2 drives that already have data and 1 that is empty? What about if I were to add 2 new empty drives at once, would that open up new options?
I am using Ubuntu Server 16.04.2 with mdadm version 3.3.
How to create a 3x3TB RAID 5 array without losing data from 2 of the drives?
Caught between a rock and a hard place. Pick your poison. You could:
leave it running as is and hope for the best,
--fail /dev/sdc1 to kick the offending drive directly,
add another drive and --replace /dev/sdc1 (perhaps this is what you meant to do instead of --grow?)
Or --stop the array and then:
--assemble --update=revert-reshape to undo --grow,
--assemble --freeze-reshape while you back up your files,
ddrescue to clone the failing drive externally.
The problem is that in theory many of these options should work just fine, but in practice there are bugs and so… it's somewhat impossible to predict how things are going to play out.
Do check smartctl -a for pending/uncorrectable/reallocated sectors or other log entries. If these values are rising it might mean that RAID is correcting read errors as it goes, and is not just stuck on a bug. smartctl can set TLER / SCTERC for some drives, which can reduce the time the drive spends on read errors.
Also check mdadm --examine, --examine-badblocks, since md might be recording bad blocks in metadata instead of kicking the drive. If there are bad blocks on multiple drives, you have already lost data. This data is not replicated in subsequent reshape or replace operations.
Reverting the reshape is undocumented and rarely tested (more so with a failing drive in the mix), but since your grow has barely progressed so far, it might be the fastest way to get you out of this situation.
ddrescue could clone your failing drive, but any read errors would not be corrected by the raid layer then. If you use a bad clone in a raid array, it will result in corrupt data.
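A sketch of those checks using the device names from the question (--examine-badblocks requires a reasonably recent mdadm):
# SMART counters that hint at a dying drive
smartctl -a /dev/sdc | grep -iE 'realloc|pending|uncorrect'

# bad blocks that md has already recorded in the metadata
mdadm --examine-badblocks /dev/sdc1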
Context
I have a software RAID5 array (mdadm) on 3 disks. Last week, one disk started to get read errors:
# dmesg
ata3.00: exception Emask 0x0 SAct 0x30000001 SErr 0x0 action 0x0
ata3.00: irq_stat 0x40000008
ata3.00: failed command: READ FPDMA QUEUED
ata3.00: cmd 60/08:e0:40:0b:c6/00:00:a1:00:00/40 tag 28 ncq dma 4096 in
         res 41/40:00:40:0b:c6/00:00:a1:00:00/40 Emask 0x409 (media error) <F>
ata3.00: status: { DRDY ERR }
ata3.00: error: { UNC }
ata3.00: configured for UDMA/133
sd 2:0:0:0: [sdc] tag#28 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=5s
sd 2:0:0:0: [sdc] tag#28 Sense Key : Medium Error [current]
sd 2:0:0:0: [sdc] tag#28 Add. Sense: Unrecovered read error - auto reallocate failed
sd 2:0:0:0: [sdc] tag#28 CDB: Read(16) 88 00 00 00 00 00 a1 c6 0b 40 00 00 00 08 00 00
I/O error, dev sdc, sector 2714110784 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 2
ata3: EH complete
So I formatted and added a new device to the array, and then grew the array:
# mdadm --add /dev/md0 /dev/sdd1
# mdadm --grow --raid-devices=4 /dev/md0
It seems that it wasn't the best idea. Due to the read errors on the faulty disk, the reshape operation's estimated duration is more or less 6 months (below, the progress after 12 hours):
$ cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
md0 : active raid5 sdd1[4] sdc1[0] sde1[1] sdb1[3]
      5860269184 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
      [>....................]  reshape =  0.2% (6232960/2930134592) finish=265471.5min speed=183K/sec
      bitmap: 4/22 pages [16KB], 65536KB chunk

unused devices: <none>
So many events can occur meanwhile, like a power issue or a second disk failure, for example. I would love to tell mdadm to stop reading the faulty disk, but it seems that stopping the reshape operation may lead to data loss.
Questions
Should I mark the disk with read errors as faulty while the reshape operation is running?
Is there a clever way to speed up the reshape?
Any other advice?
Thanks a lot for your ideas and help.
RAID5 - Mark a disk faulty during reshape
You've got a rather scrambled-looking system there. The key elements from your mdadm --examine output:
/dev/sdc1:
    Update Time : Sat Oct 11 09:20:36 2014
         Events : 15084
    Device Role : Active device 2

/dev/sdd1:
    Update Time : Wed Oct 15 08:09:37 2014
         Events : 15196
    Device Role : Active device 1

/dev/sde1:
    Update Time : Wed Oct 15 08:09:37 2014
  Bad Block Log : 512 entries available at offset 72 sectors - bad blocks present.
         Events : 15196
    Device Role : spare
You were unable to re-assemble the array because /dev/sdc1 has a lower event count than the other two (the data on it is out of date), while /dev/sde1 is marked as being a spare (the data on it bears no relation to the state of the array). This gives you only one data-containing drive, while a three-disk RAID 5 needs a minimum of two to start running. I've got no idea how you got here, since this doesn't look like a typical two-drive failure.
Since the event counts for /dev/sdc1 and /dev/sdd1 don't differ by much, you may be able to recover most or all of the data by forcing mdadm to re-assemble the array from those two volumes. You'll probably want to follow the procedure from the Linux RAID Wiki, but if you don't mind the possibility of losing everything, the key step is mdadm --assemble --force --run /dev/sdc1 /dev/sdd1, followed by a fsck -- this will either work, or destroy the array entirely, and the point of the extended procedure is to figure out which it will be without actually harming the data.
Alternatively, since /dev/sdd1 and /dev/sde1 have identical event counts, you may be able to recover everything by changing the metadata on /dev/sde to mark it as having a device role of "Active device 0", but this is the sort of thing that requires expert knowledge and direct hex-editing of the disk contents.
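Should the forced assembly succeed, a cautious first move is a read-only check before the real fsck; assuming the array appears as /dev/md0:
# -n opens the filesystem read-only and reports problems without fixing anything
fsck -n /dev/md0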
I had a RAID5 array of three disks with no spares. There was a power outage, and on reboot, the array failed to come back up. In fact, the /dev/md127 device disappeared entirely and was replaced by an incorrect /dev/md0. It was the only array on the machine. I've tried to reassemble it from the three component devices, but the assembly keeps creating a raid0 array instead of a raid5. The details of the three disks are:
root@bragi ~ # mdadm -E /dev/sdc1
/dev/sdc1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 002fa352:9968adbd:b0efdfea:c60ce290
           Name : bragi:0 (local to host bragi)
  Creation Time : Sun Oct 30 00:10:47 2011
     Raid Level : raid5
   Raid Devices : 3
 Avail Dev Size : 2930269954 (1397.26 GiB 1500.30 GB)
     Array Size : 2930269184 (2794.52 GiB 3000.60 GB)
  Used Dev Size : 2930269184 (1397.26 GiB 1500.30 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=770 sectors
          State : clean
    Device UUID : a8a1b48a:ec28a09c:7aec4559:b839365e
    Update Time : Sat Oct 11 09:20:36 2014
       Checksum : 7b1ad793 - correct
         Events : 15084
         Layout : left-symmetric
     Chunk Size : 512K
    Device Role : Active device 2
    Array State : AAA ('A' == active, '.' == missing, 'R' == replacing)

root@bragi ~ # mdadm -E /dev/sdd1
/dev/sdd1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 002fa352:9968adbd:b0efdfea:c60ce290
           Name : bragi:0 (local to host bragi)
  Creation Time : Sun Oct 30 00:10:47 2011
     Raid Level : raid5
   Raid Devices : 3
 Avail Dev Size : 2930269954 (1397.26 GiB 1500.30 GB)
     Array Size : 2930269184 (2794.52 GiB 3000.60 GB)
  Used Dev Size : 2930269184 (1397.26 GiB 1500.30 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=770 sectors
          State : clean
    Device UUID : 36c08006:d5442799:b028db7c:4d4d33c5
    Update Time : Wed Oct 15 08:09:37 2014
       Checksum : 7e05979e - correct
         Events : 15196
         Layout : left-symmetric
     Chunk Size : 512K
    Device Role : Active device 1
    Array State : .A. ('A' == active, '.' == missing, 'R' == replacing)

root@bragi ~ # mdadm -E /dev/sde1
/dev/sde1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x8
     Array UUID : 002fa352:9968adbd:b0efdfea:c60ce290
           Name : bragi:0 (local to host bragi)
  Creation Time : Sun Oct 30 00:10:47 2011
     Raid Level : raid5
   Raid Devices : 3
 Avail Dev Size : 2930275057 (1397.26 GiB 1500.30 GB)
     Array Size : 2930269184 (2794.52 GiB 3000.60 GB)
  Used Dev Size : 2930269184 (1397.26 GiB 1500.30 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1960 sectors, after=5873 sectors
          State : clean
    Device UUID : b048994d:ffbbd710:8eb365d2:b0868ef0
    Update Time : Wed Oct 15 08:09:37 2014
  Bad Block Log : 512 entries available at offset 72 sectors - bad blocks present.
       Checksum : bdbc6fc4 - correct
         Events : 15196
         Layout : left-symmetric
     Chunk Size : 512K
    Device Role : spare
    Array State : .A. ('A' == active, '.' == missing, 'R' == replacing)
I stopped the old array, then reassembled it as follows (blank lines inserted for clarity):
root@bragi ~ # mdadm -S /dev/md0
mdadm: stopped /dev/md0

root@bragi ~ # mdadm -A /dev/md0 /dev/sdd1 /dev/sdc1 /dev/sde1
mdadm: /dev/md0 assembled from 1 drive and 1 spare - not enough to start the array.

root@bragi ~ # cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : inactive sdd1[1](S) sde1[3](S) sdc1[2](S)
      4395407482 blocks super 1.2

unused devices: <none>

root@bragi ~ # mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
     Raid Level : raid0
  Total Devices : 3
    Persistence : Superblock is persistent
          State : inactive
           Name : bragi:0 (local to host bragi)
           UUID : 002fa352:9968adbd:b0efdfea:c60ce290
         Events : 15084

    Number   Major   Minor   RaidDevice
       -       8       33        -        /dev/sdc1
       -       8       49        -        /dev/sdd1
       -       8       65        -        /dev/sde1

root@bragi ~ # mdadm -Q /dev/md0
/dev/md0: is an md device which is not active
Why is this assembling as a raid0 device and not a raid5 device, as the superblocks of the components indicate it should? Is it because /dev/sde1 is marked as spare?
EDIT: I tried the following (according to @wurtel's suggestion), with the following results:
# mdadm --create -o --assume-clean --level=5 --layout=ls --chunk=512 --raid-devices=3 /dev/md0 missing /dev/sdd1 /dev/sde1
mdadm: /dev/sdd1 appears to contain an ext2fs file system
       size=1465135936K  mtime=Sun Oct 23 13:06:11 2011
mdadm: /dev/sdd1 appears to be part of a raid array:
       level=raid5 devices=3 ctime=Sun Oct 30 00:10:47 2011
mdadm: /dev/sde1 appears to be part of a raid array:
       level=raid5 devices=3 ctime=Sun Oct 30 00:10:47 2011
mdadm: partition table exists on /dev/sde1 but will be lost or meaningless after creating array
Continue creating array? no
mdadm: create aborted.
#
So it looks like /dev/sde1 is causing the problem again. I suspect this is because it has been marked as spare. Is there any way I can force its role back to active? In that case I suspect assembling the array might even work.
Missing mdadm raid5 array reassembles as raid0 after power outage
This appears to be a bug in Ubuntu 20.04's mdadm package. The mdcheck script is missing altogether, so the timer/service fails to execute it.
When you install mdadm, it also activates the mdcheck_start timer and service.
# apt-get install mdadm
[...]
Setting up mdadm (4.1-5ubuntu1) ...
Generating mdadm.conf... done.
update-initramfs: deferring update (trigger activated)
Created symlink /etc/systemd/system/mdmonitor.service.wants/mdcheck_start.timer → /lib/systemd/system/mdcheck_start.timer.
Created symlink /etc/systemd/system/mdmonitor.service.wants/mdmonitor-oneshot.timer → /lib/systemd/system/mdmonitor-oneshot.timer.
mdcheck_continue.timer is a disabled or a static unit, not starting it.
[...]
The mdcheck_start service is then supposed to run the mdcheck script:
[Service]
Type=oneshot
Environment=MDADM_CHECK_DURATION='"6 hours"'
ExecStart=/usr/share/mdadm/mdcheck --duration $MDADM_CHECK_DURATION
However... /usr/share/mdadm/mdcheck does not actually exist at all, so it can't work.
# ls -l /usr/share/mdadm/
total 12
-rwxr-xr-x 1 root root 6475 Jan 23 19:41 checkarray
-rwxr-xr-x 1 root root 2637 Jan 23 19:41 mkconf
Searching packages.ubuntu.com for this file also yields nothing. So, either Ubuntu forgot to include the mdcheck script, or they intended to remove it and forgot to also remove the systemd timer / service reference to it.
If interested, I guess you could grab the file here: https://git.kernel.org/pub/scm/utils/mdadm/mdadm.git/tree/misc/mdcheck
I found a bug report from January 2020: https://bugs.launchpad.net/ubuntu/+source/mdadm/+bug/1858342 however this bug doesn't seem to be assigned to anyone yet.
Shouldn't mdadm be scrubbing my array periodically to ensure it is working properly?
If there is anything that does that in Ubuntu 20.04 then I couldn't find it. There's a checkarray script installed, but no timer or cron job to actually call it. So I don't think it would run any automated checks for now.
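Until the packaging bug is fixed, a scrub can be started by hand through the same kernel interface both scripts rely on (substitute your array name for md0):
# begin a full consistency check; progress is visible in /proc/mdstat
echo check > /sys/block/md0/md/sync_action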
I've got a brand new Ubuntu 20.04 machine built out which is using an mdadm RAID5 configuration (3x 10TB). The system is throwing an error every time I log in. I can see from systemctl that the mdcheck_start service has failed. I can also see from checking the service status that the daemon is trying to launch a script that does not exist. This script was not installed with Ubuntu or any of the mdadm packages.
systemctl status mdcheck_start.service
● mdcheck_start.service - MD array scrubbing
     Loaded: loaded (/lib/systemd/system/mdcheck_start.service; static; vendor preset: enabled)
     Active: failed (Result: exit-code) since Sun 2020-05-03 09:18:05 EDT; 5min ago
TriggeredBy: ● mdcheck_start.timer
    Process: 196602 ExecStart=/usr/share/mdadm/mdcheck --duration $MDADM_CHECK_DURATION (code=exited, status=203/EXEC)
   Main PID: 196602 (code=exited, status=203/EXEC)

May 03 09:18:05 BAILEYFS02 systemd[1]: Starting MD array scrubbing...
May 03 09:18:05 BAILEYFS02 systemd[196602]: mdcheck_start.service: Failed to execute command: No such file or directory
May 03 09:18:05 BAILEYFS02 systemd[196602]: mdcheck_start.service: Failed at step EXEC spawning /usr/share/mdadm/mdcheck: No such file or directory
May 03 09:18:05 BAILEYFS02 systemd[1]: mdcheck_start.service: Main process exited, code=exited, status=203/EXEC
May 03 09:18:05 BAILEYFS02 systemd[1]: mdcheck_start.service: Failed with result 'exit-code'.
May 03 09:18:05 BAILEYFS02 systemd[1]: Failed to start MD array scrubbing.
Is this a legitimate error? Can I safely disable this service so I stop getting these annoying errors every time I log in? Shouldn't mdadm be scrubbing my array periodically to ensure it is working properly?
mdcheck_start Service Fails to Start
If you actually have a RAID configured through hardware (i.e., the operating system sees fewer physical disks than you actually have) there's no hardware to software conversion method. You have to back up the data to an alternate location, convert the RAID manually, and restore.
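A rough sketch of that cycle with placeholder paths and device names (the details depend entirely on your controller and disk layout):
# 1. copy everything off the hardware array (mounted at /mnt/raid) to spare storage
rsync -aHAX /mnt/raid/ /mnt/backup/

# 2. delete the array in the controller BIOS, then build a software RAID5 from the bare disks
mdadm --create /dev/md0 --level=5 --raid-devices=8 /dev/sd[b-i]1

# 3. create a filesystem and restore
mkfs -t ext4 /dev/md0
mount /dev/md0 /mnt/raid
rsync -aHAX /mnt/backup/ /mnt/raid/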
I have an old PCI-X controller running 8 drives in RAID 5. I'd like to dump the controller and go to software RAID under Ubuntu. Is there a way to do this and retain the data from the current array?
EDIT: (and a slight tangent) The answers below are certainly fine, but here's a bit of added detail in my specific situation. The hardware raid was being done by an old Promise raid card (don't remember the model number). My whole system went down (dead mobo, most likely) and the old controller was a PCI-X card (not to be confused with PCI-e). I asked the question hoping to salvage my data. What I did was buy another Promise (HighPoint) card, plug all the drives in and install Ubuntu. I was expecting to have to rebuild the array, but surprisingly enough, the HighPoint card saw the old array and brought it up clean. Moral of the story - it looks like at least Promise controllers store their metadata on the arrays themselves, and appear to have some amount of forward compatibility.
Migrating from hardware to software RAID
Answered by a combination of the posts linked by Tink and Roaima in this comment:
How about this one? At least it directly addresses LVMs as well ... Migrate an entire volume group LVM2 to RAID5
Does this answer your question? How to create a 3x3TB RAID 5 array without losing data from 2 of the drives?
I have a home server running Centos 7 that I'm using to host various different servers - a webserver, two Minecraft servers, and a PLEX server. Currently I have a 4TB HDD which serves as the primary storage, housing the OS and all files, and a 240GB SATA SSD which I'm using as a cache drive with Lvmcache. I've begun to run out of space on the 4TB disk, so I'm looking for a way to migrate this single disk to RAID 5 without needing to reinstall and reconfigure the OS and all software. I've chosen RAID 5 as it appears to be the best balance of price, performance, and redundancy for me. If I were to purchase 3 additional drives and build a RAID 5 array with them, is there any possible way to clone the data and OS from the existing drive onto the array, verify that this worked correctly, then erase the single drive and join it into the array? Alternatively, should I purchase an additional, smaller drive to house the OS and clone it there, then simply use the RAID array for mass storage? I have never previously had the occasion to use a RAID array, so I have no prior experience with them. I did find a piece of software called "Raider" that looks like it might possibly be up to the task, but, again, I have no experience with it; better safe than sorry! Edit: output of "lvs -a -o +devices && df":
Migrate Single Disk to RAID [duplicate]
Yes it's perfectly possible, and can be done even on a live system. IMPORTANT NOTE: your data won't survive a disk failure during the conversion process, so make sure you have a backup. Here's a demonstration using some files. # Two "disks", probably called /dev/loop0 ($a) and /dev/loop1 ($b) dd bs=1M count=100 </dev/zero >/tmp/img.a a=$(losetup --show --find /tmp/img.a) dd bs=1M count=100 </dev/zero >/tmp/img.b b=$(losetup --show --find /tmp/img.b) # Create RAID 1 mdadm --create /dev/md0 --metadata=1.2 --level=raid1 --raid-devices=2 $a $b # See what is going on cat /proc/mdstat # Add a filesystem and mount it mkfs -t ext4 -L md /dev/md0 mkdir -p /mnt/dsk mount /dev/md0 /mnt/dsk Now we'll grow the disk array # Another disk, probably /dev/loop2 ($d) dd bs=1M count=100 </dev/zero >/tmp/img.d d=$(losetup --show --find /tmp/img.d) # Add it as a spare mdadm --add /dev/md0 $d # Convert from RAID 1 to RAID 5 mdadm --grow /dev/md0 --level=raid5 --raid-devices=3 # See what is going on cat /proc/mdstat When you have confirmed to yourself that the process is indeed safe, you can repeat the process with your real disks. Do you have a backup? a=/dev/sda b=/dev/sdb d=/dev/sdd
I have RAID 1 with /dev/sda and /dev/sdb as md0, and now I want to expand to RAID 5. My idea is: sudo mdadm --add /dev/md0 /dev/sdd sudo mdadm --grow /dev/md0 --level=raid5 --raid-devices=3 During this process, will the contents of the HDDs be deleted or not?
Is it possible to keep data while growing RAID 1 to RAID 5?
In case of a software raid setup on Windows this is probably a fake-raid. You should install the package dmraid, which will handle access to such RAID-5 systems. Do make a backup of your data before you start. You can try out dmraid by booting from CD and installing it, without any need to change the Windows setup. dmraid probably only works on the hardware the Windows FTP server was running on (or something similar) as it relies on the raid-support features of the hardware. Do not remove/overwrite the Windows setup until you have confirmed access to the drives from Linux. The hardware support for fake-raid seems to bring very little performance-wise and ties you to the hardware. Since you will be making a backup anyway, you might as well consider setting up a new Linux-based software RAID-5 using mdadm on those disks and restoring the backup on that. An mdadm setup would allow you to move the disks to different hardware for sure. Whether that is possible for you depends on how the disks are connected and if you keep them connected to the same motherboard. In order to use all 6 of the motherboard's SATA connections on my server at home, I had to switch off the hardware support for raid, for those connections that supported it, in the BIOS.
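A hedged sketch of trying dmraid from a live environment; the set name under /dev/mapper is whatever dmraid reports (vendor-prefixed, e.g. pdc_* for Promise or isw_* for Intel metadata), shown below as a placeholder:

dmraid -s                                        # list the RAID sets described by the on-disk metadata
dmraid -ay                                       # activate all sets; devices appear under /dev/mapper
ls /dev/mapper/                                  # find the set (and its partition mappings)
mount -o ro /dev/mapper/<set-name><partition> /mnt   # read-only first, until a backup exists

Mounting read-only keeps the Windows setup untouched while you confirm access.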
At our university we have a Windows FTP server that was implemented with software RAID-5 technology, but we decided to migrate to Linux. How can we mount and modify it under Linux?
How to mount software RAID-5 created by Windows under Linux?
How did you partition the disks the first time? If you used fdisk, you may have limited yourself to just the first 2 TB of each disk, as that's the maximum partition size you can create with fdisk. As such, your raid device probably looks more like a RAID5 of 3 * 2TB disks. Use parted to create your larger than 2TB partition. Example: [root@evil home]# parted /dev/sda -- mklabel GPT yes unit TB mkpart primary ext2 0 -1 Warning: The existing disk label on /dev/sda will be destroyed and all data on this disk will be lost. Do you want to continue? Information: You may need to update /etc/fstab. [root@evil home]#Do this for each of your drives, then recreate your RAID5 device and see if that allows you to use the rest of your drives. You can use parted /dev/sda -- print to view the partition table once you've reset it per the above command line.
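If that diagnosis fits, a hedged sketch of the rebuild that follows the repartitioning. Note this recreates the array from scratch and destroys whatever is on it, so it is only appropriate before real data lands on the disks or after a full backup:

mdadm --stop /dev/md0
# repeat the parted command above for /dev/sda, /dev/sdb and /dev/sdc, then:
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1
mdadm --detail /dev/md0 | grep 'Array Size'   # should now report roughly 6TB (two 3TB data members plus parity)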
I have 3 3TB drives, and have raid5ed them together. I would expect to get a resulting device around 6TB. The command I used: mdadm --create md0 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1Also of note: # fdisk -l | grep 'Disk /dev/sd' Disk /dev/md0 doesn't contain a valid partition table Disk /dev/mapper/root doesn't contain a valid partition table Disk /dev/mapper/swap_1 doesn't contain a valid partition table Disk /dev/sda: 3000.6 GB, 3000592982016 bytes Disk /dev/sdb: 3000.6 GB, 3000592982016 bytes Disk /dev/sdc: 3000.6 GB, 3000592982016 bytes Disk /dev/sdd: 320.1 GB, 320072933376 bytes # mdadm --detail /dev/md0 md0: Version : 1.2 Creation Time : Wed Jul 10 17:11:04 2013 Raid Level : raid5 Array Size : 4294702080 (4095.75 GiB 4397.77 GB) Used Dev Size : 2147351040 (2047.87 GiB 2198.89 GB) Raid Devices : 3 Total Devices : 3 Persistence : Superblock is persistent Update Time : Thu Jul 11 14:51:17 2013 State : clean Active Devices : 3 Working Devices : 3 Failed Devices : 0 Spare Devices : 0 Layout : left-symmetric Chunk Size : 512K Name : ... UUID : 6331582a:92950387:4e4e7314:8bccf9cb Events : 66195 Number Major Minor RaidDevice State 0 8 1 0 active sync /dev/sda1 1 8 17 1 active sync /dev/sdb1 3 8 33 2 active sync /dev/sdc1 #Let me know if more information would be helpful.
Raid5 Device Has Less Space Than Expected
(Once the current rebuild is finished,) you can run a check: mdadm --wait /dev/mdX # wait for rebuild to finish mdadm --action=check /dev/mdX # or if mdadm is too old: echo check > /sys/block/mdX/md/sync_actionand then watch the mismatch_cnt: watch cat /sys/block/mdX/md/mismatch_cntas long as it stays 0, the parity is fine. See also man md, SCRUBBING AND MISMATCHES. A count of mismatches is recorded in the sysfs file md/mismatch_cnt. This is set to zero when a scrub starts and is incremented whenever a sector is found that is a mismatch. md normally works in units much larger than a single sector and when it finds a mismatch, it does not determine exactly how many actual sectors were affected but simply adds the number of sectors in the IO unit that was used. So a value of 128 could simply mean that a single 64KB check found an error (128 x 512bytes = 64KB).This process will take as long as the rebuild itself... as it's basically doing the same thing as a rebuild. For progress, refer to /proc/mdstat. It's also possible to test a specific region only — if you only want to check around the 75% mark — but it's more complicated as (I think) there's no command option in mdadm for it. You can set md/sync_min, md/sync_max to determine a range (default range 0-max covers the entire device). If you want parity to be fixed, instead of the purely informative check, use repair which fixes parity. However you have to be sure that data is correct, and parity incorrect. Otherwise if you can identify a single disk that has incorrect data (regardless whether that's data or parity), you have to remove the disk and add it as new disk and rebuild again. Determining the correct course of action for mismatch handling can unfortunately be quite complicated...
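For the region-limited check mentioned at the end of the answer above, a hedged sketch using those sysfs knobs; the sector values are placeholders (they count 512-byte sectors from the start of each member device), and mdX stands in for the real array name:

echo 2900000000 > /sys/block/mdX/md/sync_min   # start of the window to check
echo 3000000000 > /sys/block/mdX/md/sync_max   # end of the window
echo check > /sys/block/mdX/md/sync_action
cat /sys/block/mdX/md/mismatch_cnt             # read once sync_action has returned to idle
echo max > /sys/block/mdX/md/sync_max          # restore the default full range afterwards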
So I'm currently building an mdadm RAID5 array attached to my home server. The hardware is an Odroid N2 SBC with a Mediasonic Probox 4 bay enclosure attached. The array is currently rebuilding and has been for days but moving steadily. I'm using armbian stretch with the legacy 4.9.180 kernel. Last night, I was using the system (but not the drives) and was running a checksum on a file on a different USB drive. There is currently an unsolved bug in the USB drivers for the N2 that is exacerbated by high I/O activity. The N2 subsequently died around 11:40pm last night. The N2 came back almost immediately and I didn't even notice until morning. However, the mdadm array rebuild was paused at 75%. I resumed the rebuild and it's progressing happily, but I want to be sure that I didn't do lasting harm to the new array. Is there any mdadm utility that I can use to confirm there are no errors in the parity data? There is no filesystem on the array so I don't think I can use fsck in this case
How to check mdadm RAID5 integrity after power failure/random reboot
First of all, mount it read-only and make a backup of your data. Then run e2fsck on /dev/md0, which should be able to fix the filesystem, but the most recent changes to it are probably lost.
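A hedged sketch of that sequence, with the backup destination as a placeholder:

mount -o ro /dev/md0 /mnt/md0        # read-only, which the question confirms still works
rsync -aHAX /mnt/md0/ /mnt/backup/   # /mnt/backup is an assumption; any location with enough space will do
umount /mnt/md0
e2fsck -f /dev/md0                   # -f forces a full check; answer the prompts, or use -p for safe automatic fixes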
I have a 3 drive raid 5 array using the following disks:/dev/sda /dev/sdc /dev/sdbI'm no longer able to mount the drive with the following error: sudo mount /dev/md0 /mnt/md0/ mount: wrong fs type, bad option, bad superblock on /dev/md0, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so.In dmesg I see it looks like there are invalid checksums [ 1062.187295] JBD2: Invalid checksum recovering block 6 in log [ 1062.199499] JBD2: Invalid checksum recovering block 8 in log [ 1062.231367] JBD2: Invalid checksum recovering block 12 in log [ 1062.239753] JBD2: Invalid checksum recovering block 13 in log [ 1062.272628] JBD2: Invalid checksum recovering block 18 in log [ 1062.279700] JBD2: Invalid checksum recovering block 19 in log [ 1062.313132] JBD2: Invalid checksum recovering block 24 in log [ 1062.344345] JBD2: Invalid checksum recovering block 29 in log [ 1062.606357] JBD2: Invalid checksum recovering block 50 in log [ 1062.656566] JBD2: Invalid checksum recovering block 55 in log [ 1062.831316] JBD2: Invalid checksum recovering block 64 in log [ 1062.882451] JBD2: Invalid checksum recovering block 68 in log [ 1062.913100] JBD2: Invalid checksum recovering block 70 in log [ 1066.094511] JBD2: recovery failed [ 1066.094516] EXT4-fs (md0): error loading journalI have checked all of the drives with smartctl with no errors and have tried stopping and scanning the array with mdadm --stop /dev/md0 and mdadm --assemble --scan and am still unable to mount the raid. However, I'm able to mount my raid in readonly mode. Here is my mdadm config: # mdadm.conf # # Please refer to mdadm.conf(5) for information about this file. ## by default (built-in), scan all partitions (/proc/partitions) and all # containers for MD superblocks. alternatively, specify devices to scan, using # wildcards if desired. #DEVICE partitions containers# automatically tag new arrays as belonging to the local system HOMEHOST <system># instruct the monitoring daemon where to send mail alerts MAILADDR root# definitions of existing MD arrays# This configuration was auto-generated on Wed, 22 Mar 2017 21:34:32 +0000 by mkconf ARRAY /dev/md0 metadata=1.2 name=media:0 UUID=7787481f:9713f064:1a49afb2:cc380a8dand fstab # /etc/fstab: static file system information. # # Use 'blkid' to print the universally unique identifier for a # device; this may be used with UUID= as a more robust way to name devices # that works even if disks are added and removed. See fstab(5). # # <file system> <mount point> <type> <options> <dump> <pass> /dev/mapper/media--vg-root / ext4 errors=remount-ro 0 1 # /boot was on /dev/sdb2 during installation UUID=51b74a1f-ed14-4fe7-b12c-a0efd2f706e4 /boot ext2 defaults 0 2 # /boot/efi was on /dev/sdb1 during installation UUID=A3B3-B410 /boot/efi vfat umask=0077 0 1 /dev/mapper/media--vg-swap_1 none swap sw 0 0 /dev/md0 /mnt/md0 ext4 defaults,nofail,discard 0 0cat /proc/mdstat Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10] md0 : active raid5 sda[0] sdd[3] sdc[1] 3906766848 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU] bitmap: 0/15 pages [0KB], 65536KB chunkunused devices: <none>How to proceed?
Unable to mount raid 5 array (mdadm), with no drive errors
All the operations you did change the RAID array structure, but none of them actually resize the filesystem written on it. This means the filesystem is not aware of the changes you made, and if you ask it about its size, it will report the size it was created with. To correct that, run resize2fs /dev/md3.
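A minimal sketch of that step; ext4 can be grown while mounted, so this should work in place:

resize2fs /dev/md3   # grows the filesystem to fill the ~18GB array
df -h /dev/md3       # should now show the new capacity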
I had a 2 disk RAID 1 partition. I added a device and converted it to RAID 5: mdadm --add /dev/md3 /dev/sdd2 mdadm --grow /dev/md3 --level=5 --raid-devices=3Then added another device and converted to RAID 6: mdadm --add /dev/md3 /dev/sde2 mdadm --grow /dev/md3 --level=6 --raid-devices=4When completed, the capacity is exactly the same as before I started. What do I need to do to get the additional space? The original RAID1 device capacity was 9.1GB on each of the two mirror devices. Converting to RAID5 or 6 with 4 devices of 9.1GB, I expected the capacity to go to 18GB, but it still only shows 9.1GB. I am not running LVM. %> df -h /dev/md3 Filesystem Size Used Avail Use% Mounted on /dev/md3 9.1G 24M 8.6G 1% /tmp%> cat /proc/mdstat md3 : active raid6 sdd2[2] sde2[3] sdb3[0] sdc3[1] 19528704 blocks super 1.2 level 6, 4k chunk, algorithm 2 [4/4] [UUUU] bitmap: 1/1 pages [4KB], 65536KB chunk%> mdadm --version mdadm - v3.3 - 3rd September 2013%> sudo fdisk -l /dev/sdb3 /dev/sdc3 /dev/sdd2 /dev/sde2 Disk /dev/sdb3: 9.3 GiB, 9999220736 bytes, 19529728 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disk /dev/sdc3: 9.3 GiB, 9999220736 bytes, 19529728 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disk /dev/sdd2: 9.3 GiB, 9999221248 bytes, 19529729 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disk /dev/sde2: 9.3 GiB, 9999221248 bytes, 19529729 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes
ubuntu - raid partition capacity unchanged after RAID1 to RAID5 to RAID6
The spare should be put into use automatically; it's strange that it wasn't. You may try to remove and re-add the spare: mdadm -f /dev/md127 /dev/sdc1 mdadm -r /dev/md127 /dev/sdc1 mdadm --zero-superblock /dev/sdc1 mdadm -a /dev/md127 /dev/sdc1 If it doesn't work, there should be error messages in dmesg explaining what's wrong.
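If the re-add takes, recovery should start immediately. A hedged way to watch it:

watch cat /proc/mdstat      # look for a "recovery" line with a progress percentage
mdadm --detail /dev/md127   # /dev/sdc1 should change from "spare" to "spare rebuilding"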
A bit of history to start with. I had a 4 disk RAID5 and one disk failed. I removed it from the array and had it in a degraded state for a while: mdadm --manage /dev/md127 --fail /dev/sde1 --remove /dev/sde1My data requirement suddenly dropped so I decided to permanently reduce the array to 3 disks. I shrank the file system to much less than the new array size then: mdadm --grow /dev/md127 --array-size 35156183040 # reduces array size mdadm --grow --raid-devices=3 /dev/md127 --backup-file /store/4TB_WD/md127.backup # reshape array removing 1 disk.This has now completed: cat /proc/mdstat Personalities : [raid6] [raid5] [raid4] md127 : active raid5 sdd1[1] sdc1[3](S) sdb1[2] 35156183040 blocks super 1.2 level 5, 64k chunk, algorithm 2 [3/2] [_UU] bitmap: 103/131 pages [412KB], 65536KB chunkunused devices: <none>but has left me with a 3 disk degraded RAID5 with 2 active disks and one spare: mdadm -D /dev/md127 /dev/md127: Version : 1.2 Creation Time : Fri Sep 9 22:39:53 2022 Raid Level : raid5 Array Size : 35156183040 (32.74 TiB 36.00 TB) Used Dev Size : 17578091520 (16.37 TiB 18.00 TB) Raid Devices : 3 Total Devices : 3 Persistence : Superblock is persistent Intent Bitmap : Internal Update Time : Fri Jan 20 11:12:10 2023 State : active, degraded Active Devices : 2 Working Devices : 3 Failed Devices : 0 Spare Devices : 1 Layout : left-symmetric Chunk Size : 64KConsistency Policy : bitmap Name : oldserver-h.oldserver.lan:127 UUID : 589dd683:d9945b24:768d9b2b:28441f90 Events : 555962 Number Major Minor RaidDevice State - 0 0 0 removed 1 8 49 1 active sync /dev/sdd1 2 8 17 2 active sync /dev/sdb1 3 8 33 - spare /dev/sdc1How do I make this spare disk active so the array can rebuild to a healthy state? cat /sys/block/md127/md/sync_action shows idle and echoing repair into it does nothing. As a follow up, where did I go wrong in the first place? [edit] Adding output to lsblk as requested: lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 100G 0 disk ├─sda1 8:1 0 1G 0 part /boot └─sda2 8:2 0 99G 0 part ├─clearos-root 253:0 0 91.1G 0 lvm / └─clearos-swap 253:1 0 7.9G 0 lvm [SWAP] sdb 8:16 0 16.4T 0 disk └─sdb1 8:17 0 16.4T 0 part └─md127 9:127 0 32.8T 0 raid5 /store/RAID_A sdc 8:32 0 16.4T 0 disk └─sdc1 8:33 0 16.4T 0 part └─md127 9:127 0 32.8T 0 raid5 /store/RAID_A sdd 8:48 0 16.4T 0 disk └─sdd1 8:49 0 16.4T 0 part └─md127 9:127 0 32.8T 0 raid5 /store/RAID_A sde 8:64 0 3.7T 0 disk └─sde1 8:65 0 3.7T 0 part /store/4TB_WD sdf 8:80 0 931.5G 0 disk └─sdf1 8:81 0 931.5G 0 part /store/1TB1 sdg 8:96 0 931.5G 0 disk └─sdg1 8:97 0 931.5G 0 part /store/1TB2 sr0 11:0 1 1.2G 0 rom[/edit]
How do I make a spare device active in a degraded mdadm RAID5
I could not find the root cause of this problem, so I decided to ditch LVM in its entirety and replace it with mdadm - which worked like a charm on the first try. Creating mdadm RAID5 (initially with 3 disks) Creating with three disks (hence raid-devices=3): mdadm --create /dev/md/mediaraid --level=raid5 --raid-devices=3 /dev/sda /dev/sdb /dev/sde Optionally checking what encryption you can use at what speed (memory speed, not disk IO): cryptsetup benchmark /dev/md/mediaraid Optionally encrypting the entire RAID (a construct like this does not require decrypting every disk on its own - one password for the ENTIRE RAID): cryptsetup luksFormat --hash sha512 --cipher aes-xts-plain64 --key-size 512 /dev/md/mediaraid Opening the LUKS device (necessary for formatting it): cryptsetup luksOpen /dev/md/mediaraid data Format the RAID with btrfs: mkfs.btrfs /dev/mapper/data -f Growing/Expanding a btrfs filesystem by 1 disk and an underlying mdadm RAID5 Preconditions: Filesystem is not mounted and LUKS device is closed: umount /mnt/raid5 && cryptsetup close /dev/mapper/data Adding /dev/sdc (replace with your drive) to mdadm as a spare disk: mdadm --add /dev/md/mediaraid /dev/sdc Verify it shows up (will be at the bottom, saying it is a spare disk): mdadm --detail /dev/md/mediaraid Note: The following step triggers a RAID reshape, things are getting real; my 10TB hard drives took about 25-30 hours to reshape and sync from 3 to 4 disks. I am not sure if a reboot is safe during the reshape - I wouldn't recommend it, or at least try it in a virtual machine first. Grow the RAID to the number of disks (most of the time you want the total count of disks here, 3 + 1 = 4; now I have 4 drives available and I want to use ALL 4 of them): mdadm --grow --raid-devices=4 /dev/md/mediaraid Monitor progress of the reshape (the first one is better): cat /proc/mdstat or mdadm --detail /dev/md/mediaraid After it is done reshaping: Optionally, if you use LUKS: decrypt the RAID - else continue with the next step: cryptsetup luksOpen /dev/md/mediaraid data Mount the btrfs filesystem: mount /dev/mapper/data /mnt/raid5 Grow the btrfs filesystem to max or whatever you want: btrfs filesystem resize max /mnt/raid5 It might not be necessary, but I unmounted and remounted the entire thing after the btrfs filesystem resize: umount /mnt/raid5 && mount /dev/mapper/data /mnt/raid5 Done.
I had the following setup: 3 HDDs of 10TB each in an LVM RAID 5 configuration, with LUKS2 encryption on top and a BTRFS filesystem inside. Since my storage got low I added another 16TB HDD (it was cheaper than 10TB), added it as a physical volume in LVM, added it to the volume group, and ran a resync so that LVM could adjust the size of my RAID. I resized the btrfs partition to max. I noticed that errors began to appear in dmesg shortly after the btrfs resize when I write to it: [53034.840728] btrfs_dev_stat_print_on_error: 299 callbacks suppressed [53034.840731] BTRFS error (device dm-15): bdev /dev/mapper/data errs: wr 807, rd 0, flush 0, corrupt 0, gen 0 [53034.841289] BTRFS error (device dm-15): bdev /dev/mapper/data errs: wr 808, rd 0, flush 0, corrupt 0, gen 0 [53034.844993] BTRFS error (device dm-15): bdev /dev/mapper/data errs: wr 809, rd 0, flush 0, corrupt 0, gen 0 [53034.845893] BTRFS error (device dm-15): bdev /dev/mapper/data errs: wr 810, rd 0, flush 0, corrupt 0, gen 0 [53034.846154] BTRFS error (device dm-15): bdev /dev/mapper/data errs: wr 811, rd 0, flush 0, corrupt 0, gen 0 I can exclude hardware problems, since I tried that on another computer in a virtual machine. The problems in dmesg do appear when I write bigger files (400MB) to the filesystem, but not something like a text file - the checksum is also wrong after a copy from one file of the raid to another: gallifrey raid5 # dd if=/dev/urandom of=original.img bs=40M count=100 0+100 records in 0+100 records out 3355443100 bytes (3.4 GB, 3.1 GiB) copied, 54.0163 s, 62.1 MB/s gallifrey raid5 # cp original.img copy.img gallifrey raid5 # md5sum original.img copy.img 29867131c09cc5a6e8958b2eba5db4c9 original.img 59511b99494dd4f7cf1432b19f4548c4 copy.img gallifrey raid5 # btrfs device stats /mnt/raid5 [/dev/mapper/data].write_io_errs 811 [/dev/mapper/data].read_io_errs 0 [/dev/mapper/data].flush_io_errs 0 [/dev/mapper/data].corruption_errs 0 [/dev/mapper/data].generation_errs 0 I already resynced the entire LVM raid, did a smartctl checkup multiple times (shouldn't be a hw problem, but still) and did btrfs scrub start -B /mnt/raid5 and btrfs check -p --force /dev/mapper/data, and none of them returned any error whatsoever. This happened on kernels 5.15.11 and 5.10.27. lvm version: gallifrey raid5 # lvm version LVM version: 2.02.188(2) (2021-05-07) Library version: 1.02.172 (2021-05-07) Driver version: 4.45.0 My goal is that future writes to the drive are not corrupted; the already-corrupted files can be deleted, but I would like to save, or at least not delete, the good files. The btrfs man page says that write_io_errs means the block device beneath does not succeed in writing. In my case that means LVM and/or LUKS2 is the problem here. Any suggestions, or any more information needed? Cheers
LVM, LUKS2 & BTRFS Problem
The source tarball is available from: HERE
This was a long time in the making, and I've finally got it working, so I thought I'd share this with as many people as I can in case they're in a similar situation. Long story short - my HP SmartArray P410 failed, I got another one (which worked for a while), then it also failed. I also had a P200/ZM in there with another array (failed). I was sick of HP by that point, but I needed to recover my array's data - I was not even going to consider getting another SmartArray card to copy it off with. So. After much research, I found that HP employ some painful RAID5 algorithms (called Delayed Parity) that make normal RAID5 recovery methods very, very difficult. So I wrote my own block driver. This driver (much like md-raid) takes the disks and translates them into a logical drive (array), taking into account HP's bastard algorithms. It's not a proper RAID solution - no parity calculations are done - but it should allow you to mount the array and copy it off (as I have now done). Note: Some knowledge of C and compiling C is required; see the answers for the download.
HP SmartArray RAID5 recovery on linux
Basically, everything except for /boot & updating the initramfs is the same. I'll assume that your old boot is /dev/sda1. These steps should look familiar if you've ever used a live CD/USB as a rescue disk: # mount /dev/vg0/root /mnt # or whatever your vg/lv name is. # mount --bind /dev /mnt/dev # make these available inside of /mnt # mount --bind /proc /mnt/proc # mount --bind /sys /mnt/sys # mount /dev/sda1 /mnt/mnt # chroot /mnt # this switches into your installed system You should now have a shell inside of your installed system. (Note: if you have any other critical directories on separate partitions, such as /usr or /var, go ahead and mount them. Remember to unmount them later in the cleanup.) Your old boot is mounted in /mnt, so just copy everything over: # ( cd /mnt && tar c ) | ( cd /boot && tar vx ) # cp -a would work, too ⋮ lots of output ⋮ Go ahead and edit /etc/fstab to comment out the entry for /boot, since it will no longer be on a separate filesystem. Then continue: # update-initramfs -k all -u # this will take a while if you have a lot of kernels ⋮ # dpkg-reconfigure -plow grub-pc # I'm assuming you're running a Debian- or Ubuntu-like system For the most part, you get to hit enter through the prompts from dpkg-reconfigure. Pay attention to the last prompt, when it asks you which disks to install on: you probably want to install on each of your RAID5 disks. Finally: # exit # gets you back to the Live USB root # umount /mnt/mnt # umount /mnt/proc # umount /mnt/sys # umount /mnt/dev # umount /mnt I haven't actually tested this, but I've done it enough times. Please forgive any typos (or better yet, feel free to edit and correct.)
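For reference, a hedged manual equivalent of that last grub prompt, assuming three RAID5 member disks (the names are placeholders):

for d in /dev/sda /dev/sdb /dev/sdc; do grub-install "$d"; done
update-grub   # regenerate grub.cfg against the now-merged /boot

Installing to every member disk means the machine still boots if the BIOS picks a different drive after a failure.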
I stumbled upon this question, but I do not think this is a duplicate since I'm under the impression that the OP in this question is not running under a live session. I, on the other hand, have booted to an Ubuntu live USB drive for this operation. I have an Ubuntu installation where everything lives on an LVM2, save for a /boot partition. Thus far, I have done the following:Created and mounted RAID5 array as md0 Mounted the current LVM, such that its contents (along with /boot) can be accessed in /mediaI would like to do the following:Move my entire installation from the LVM2 to the RAID5 array (I think rsync -avx is the correct way of doing this) Make it so that I can boot from the RAID5 array (i.e. move /boot?)My questions are as follows:Is rsync -avx the correct approach for moving my installation? Do I need to copy /boot to the RAID5 array or to each of its constituent disks? If so, how can I achieve this given that I am in a live Ubuntu environment?
How can I migrate my Ubuntu installation from LVM2 to RAID5?
What you should do here is trigger a SMART self-test on the bad drive(s). That will pull a lot of the controller/motherboard pieces out of here and give you a better reading on the underlying disks having issues. It's possible for non-drive failure to give wrong results there--power supply failure is the most common cause--but it's a good start. It will take a few hours to run an extended test, but you may get useful enough info with the short one. Drives that are failing tend to complain pretty fast even on that one. The guide at Utilizing SMART to Monitor Drives in 3ware RAID should give you enough info to trigger a self-test and then view the logged results. Probably worthwhile to check the controller card logs for more info here before even running a new test. It would be interesting to know if errors were already increasing before the power outage or not. Sometimes RAID arrays can have hidden inconsistencies you just don't know about. Not impossible for the power cutting out to have damaged one sector via inconsistent writes, and then the problems you didn't know about cause the rebuild to go badly. If there were older sector repairs happening, you may find them in the controller logs, even if the controller fixed them quietly and didn't tell the Linux driver.
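A hedged sketch of driving smartctl through a 3ware controller; the FreeBSD device node (/dev/twa0) and the port number are assumptions, and the linked guide has the exact mapping for this card:

smartctl -d 3ware,0 -t short /dev/twa0      # start a short self-test on the disk at port 0
smartctl -d 3ware,0 -l selftest /dev/twa0   # read the self-test log a few minutes later
smartctl -d 3ware,0 -a /dev/twa0            # full attribute dump; watch reallocated and pending sector counts

Repeat with -d 3ware,1, -d 3ware,2, and so on for the other ports.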
Had a power outage two weeks ago on my BSD server, and my UPS decided that was a good time to fail. Just fired it up this evening only to have the following errors show during what I would assume is the rebuild process. The port for the ECC error has shown up as either 2 or 1 in the following errors, and while the rest of the data that you see on the image has not shown up since it stated the rebuild began, the Drive ECC error has popped up once or twice since that command was displayed. While I have another system to back up critical data to, my questions really are: what is this saying, and should I be shopping for hardware? The drives are WD 250GB on a hardware RAID5 with a 3Ware 9650SE-4LPML 4-Port 3Gb/s RAID card. The OS is FreeBSD 6.2. EDIT: Something different now. Came up with an error that retries were exhausted, and produced the following line: g_vfs_done():da0s1d[READ(offset=1155956736, length=16384)] error = 5 It then reset the controller and started a rebuild on unit -
Worrisome HD related messages after power outage
OK, just to conclude this - basically, what was said in that link helped; here are my exact commands: mdadm --assemble --force /dev/md0 /dev/sdb1 /dev/sdc1 mdadm --add /dev/md0 /dev/sdd1 (--re-add didn't work). After this it started resyncing - it took about 20 hours. Then, since there is LVM on top: lvchange -ay data/data fsck /dev/data/data Thanks for the help.
I found a similar problem here: Missing mdadm raid5 array reassembles as raid0 after powerout, but mine is a bit different. Here too my raid5 reassembles as raid0, but I don't see any of my devices marked as spare in mdadm -E /dev/sdX1 output: /dev/sdb1: Magic : a92b4efc Version : 1.2 Feature Map : 0x0 Array UUID : 9b244d41:0b94c8f7:0da323ac:f2a873ec Name : bekap:0 (local to host bekap) Creation Time : Wed Oct 9 16:03:25 2013 Raid Level : raid5 Raid Devices : 3 Avail Dev Size : 5860268032 (2794.39 GiB 3000.46 GB) Array Size : 5860267008 (5588.79 GiB 6000.91 GB) Used Dev Size : 5860267008 (2794.39 GiB 3000.46 GB) Data Offset : 262144 sectors Super Offset : 8 sectors Unused Space : before=262064 sectors, after=1024 sectors State : active Device UUID : f8405c86:85d8bade:8a74b0f5:fec08e3f Update Time : Sat Jan 16 04:41:05 2016 Checksum : da1a9cb2 - correct Events : 134111 Layout : left-symmetric Chunk Size : 512K Device Role : Active device 0 Array State : AA. ('A' == active, '.' == missing, 'R' == replacing) /dev/sdc1: Magic : a92b4efc Version : 1.2 Feature Map : 0x0 Array UUID : 9b244d41:0b94c8f7:0da323ac:f2a873ec Name : bekap:0 (local to host bekap) Creation Time : Wed Oct 9 16:03:25 2013 Raid Level : raid5 Raid Devices : 3 Avail Dev Size : 5860268032 (2794.39 GiB 3000.46 GB) Array Size : 5860267008 (5588.79 GiB 6000.91 GB) Used Dev Size : 5860267008 (2794.39 GiB 3000.46 GB) Data Offset : 262144 sectors Super Offset : 8 sectors Unused Space : before=262064 sectors, after=1024 sectors State : active Device UUID : d704efde:067523c1:a6de1be2:e752323f Update Time : Sat Jan 16 04:41:05 2016 Checksum : 124f919 - correct Events : 134111 Layout : left-symmetric Chunk Size : 512K Device Role : Active device 1 Array State : AA. ('A' == active, '.' == missing, 'R' == replacing) /dev/sdd1: Magic : a92b4efc Version : 1.2 Feature Map : 0x0 Array UUID : 9b244d41:0b94c8f7:0da323ac:f2a873ec Name : bekap:0 (local to host bekap) Creation Time : Wed Oct 9 16:03:25 2013 Raid Level : raid5 Raid Devices : 3 Avail Dev Size : 5860268032 (2794.39 GiB 3000.46 GB) Array Size : 5860267008 (5588.79 GiB 6000.91 GB) Used Dev Size : 5860267008 (2794.39 GiB 3000.46 GB) Data Offset : 262144 sectors Super Offset : 8 sectors Unused Space : before=262064 sectors, after=1024 sectors State : clean Device UUID : c52383f7:910118d3:e808a29f:b4edad2c Update Time : Mon Dec 28 10:46:40 2015 Checksum : d69974b5 - correct Events : 52676 Layout : left-symmetric Chunk Size : 512K Device Role : Active device 2 Array State : AAA ('A' == active, '.' == missing, 'R' == replacing) But they are marked as S (which as far as I know stands for spare) in /proc/mdstat (and there are no personalities for md0): Personalities : md0 : inactive sdb1[0](S) sdd1[3](S) sdc1[1](S) 8790402048 blocks super 1.2 unused devices: <none> Here is mdadm -D /dev/md0 output: /dev/md0: Version : 1.2 Raid Level : raid0 Total Devices : 3 Persistence : Superblock is persistent State : inactive Name : bekap:0 (local to host bekap) UUID : 9b244d41:0b94c8f7:0da323ac:f2a873ec Events : 134111 Number Major Minor RaidDevice - 8 17 - /dev/sdb1 - 8 33 - /dev/sdc1 - 8 49 - /dev/sdd1 So I'm a bit confused why it can't reassemble this array if it has two (I would say good) devices out of three.
I'm not sure if mdadm -D /dev/md0 showed it as raid0 since the failure or if I just messed it up while trying to reassemble the array (I tried mdadm --stop /dev/md0 and mdadm --assemble --scan --verbose and mdadm --assemble --scan --verbose /dev/md0 /dev/sdb1 /dev/sdc1 or something similar - I can try to get the exact commands if necessary). So, my question: can I remove sdd1 from the array, then assemble md0 without it and then add sdd1 again? Or should I use --assemble --force --run as mentioned in the linked question? Or something else? I'm quite inexperienced with Linux raid and mdadm. Thanks a lot.
raid5 array reassembles as raid0